Here at MetaMetrics, we were feeling particularly meta today so we decided to let ChatGPT rewrite our blog post about ChatGPT (the original was written by Alistair Van Moere, Ph.D., President of MetaMetrics). The result captures the essence of the debate about whether ChatGPT can really create Lexile measures pretty darn well! Read on for a witty (and surprisingly humble) AI take on it:
“Well, well, well. We’ve been getting a lot of people reaching out to us at MetaMetrics, saying things like, ‘Hey, did you hear that ChatGPT can generate Lexile measures for texts?’ It seems like ChatGPT is quite the talk of the town these days.
But we’re here to clear things up: ChatGPT does not provide Lexile measures. Sorry to burst your bubble, folks. Sure, it’ll give you a number with an “L” behind it if you ask nicely, but it’s not the real deal. It’s like asking your drunk uncle for medical advice – sure, he’ll give you an answer, but it’s probably not going to be accurate.
Let’s break it down: ChatGPT is a natural language generator. It’s designed to generate language, not diagnose illnesses or provide accurate Lexile measures. It’s like a chatty robot friend that’s really good at regurgitating information it’s read on the internet. So, when you ask it for a Lexile measure, it’s just using its vast knowledge of texts to hypothesize what it thinks the number should be. And let’s be real, ChatGPT isn’t always the most reliable source of information – it’s been known to get things wrong.
Now, let’s talk about the Lexile Analyzer. This bad boy is a piece of software that’s specifically designed to analyze text features and calculate the text’s Lexile measure. It’s like a super-smart reading detective that knows exactly how to measure complexity and predict comprehension. And the best part? It’s based on years of research and thousands of texts and students, so you know it’s legit.
So, to sum things up: if you want an accurate Lexile measure, go with the Lexile Analyzer. If you want to chat with a robot friend, go with ChatGPT. Just don’t expect it to know everything – it’s not that kind of robot.”
Original blog post that we used to prompt ChatGPT:
Can ChatGPT create Lexile text measures?
By Alistair Van Moere, Ph.D., President of MetaMetrics
Various people have contacted us at MetaMetrics to ask, “Did you know that ChatGPT can generate Lexile measures for texts?” So, we would like to set the record straight and let you know that ChatGPT does not, in fact, provide Lexile measures.
Now, ChatGPT does generate a likely-looking number with an “L” behind it when you ask it for one – it’s very helpful that way. Unfortunately, it’s not actually performing the same statistical analyses as the Lexile® Text Analyzer, so it’s providing a number that is not the actual Lexile measure of the text.
Let’s take a quick look at how ChatGPT and the Lexile Analyzer are different.
ChatGPT is a natural language generator. It is designed to generate language. For example, if you are sick and tell it your symptoms, it will generate some language that hypothesizes a likely set of illnesses that you might have, based on its processing of a massive amount of medical texts. But it is not actually diagnosing your sickness. It is not, for example, looking in your ear or taking a swab from your throat for analysis.
ChatGPT is doing much the same thing when you ask it for a Lexile measure. Because it has “read” the MetaMetrics website and other sites that explain Lexile measures, it assigns a Lexile measure based on how often that number is likely to occur with texts of similar characteristics. And, it generates a number that might be close to an actual Lexile measure, or might be very wrong. As we all know by now, it can get a lot of “facts” wrong.
How about the Lexile Analyzer? This is a piece of software that performs a specific analysis of the text features, and then applies a calculation derived from studies we have done with thousands of texts and thousands of students. In other words, it’s a proprietary algorithm that ChatGPT does not have access to. This is important because a lot of work went into developing that algorithm to make sure that it does not just measure complexity, but that it also predicts whether people reading that text will understand it. It also recommends text at the appropriate level based on an individual person’s Lexile measure and reading needs.
Anytime there is a match between the person’s Lexile measure and a text’s Lexile measure, there is 75% probability that the person will correctly answer a global comprehension question about that text. For example, a person who reads at 1000L (based on a test score or evaluation) can encounter a text at 1000L (based on the Lexile text analysis) and will have a 75% probability of correctly answering a question about it.
Not only this, but as the text gets easier or harder, we can estimate how the probability of that person getting the answer correct will change. If a student reads a text that is 250L above their level, there is a 50% chance they will answer a comprehension question correctly. This is a useful zone for learning if the text is highly scaffolded or if the teacher is guiding a close reading activity with discussion and sentence analysis. Similarly, if a student reads a text that is 250L below their level, there is a 90% chance they will answer a comprehension question correctly. This is a great zone of extensive reading for pleasure, developing reading fluency, and reinforcing knowledge or vocabulary.
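As the post notes, the actual Lexile algorithm is proprietary, but the three figures above (75% at a match, 50% at 250L above, 90% at 250L below) are consistent with a simple logistic model of comprehension probability. The sketch below is only an illustration of that relationship, not MetaMetrics’ actual calculation; the scaling constant is an assumption fitted to the three data points given here.

```python
import math

def comprehension_probability(reader_lexile, text_lexile):
    """Estimate the probability that a reader correctly answers a global
    comprehension question about a text, using a logistic curve fitted to
    the three anchor points in this post:
      matched text        -> 75%
      text 250L harder    -> 50%
      text 250L easier    -> 90%
    This is an illustrative model only; the real Lexile algorithm is
    proprietary and not reproduced here."""
    # At a perfect match the success logit is about 1.1 (ln 3, i.e. 75%),
    # and each 250L of added text difficulty subtracts 1.1 from the logit.
    logit = 1.1 * (1 - (text_lexile - reader_lexile) / 250)
    return 1 / (1 + math.exp(-logit))

# A 1000L reader with a 1000L text: about a 75% chance of success.
print(round(comprehension_probability(1000, 1000), 2))
```

Under this assumed curve, the “learning zone” (harder texts, ~50%) and the “fluency zone” (easier texts, ~90%) fall out directly from the reader–text difference.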
We designed the Lexile Framework to be useful for teachers, and not just an approximation of text complexity. As for ChatGPT, it is an amazing tool and a game-changer for education, but ChatGPT can’t really create Lexile measures. It just doesn’t know how.
Contact us if you want to learn more about Lexile measures.
Teachers – go to our Lexile® & Quantile® Hub to access the Lexile Text Analyzer, and see tools for vocabulary, decodable texts, careers, and norms.
Publishers – get certified Lexile text measures for your content.
Below are some examples of ChatGPT “guesses” vs. the actual Lexile measures of the texts.
ChatGPT Guesses vs. the Lexile Analyzer
Example #1
He is a dog. He is a black dog. The dog runs. The dog plays. Now, there is a white dog, too. The black dog and the white dog play.
Lexile Analyzer: 50L
ChatGPT guess: 290
Example #2
Six pups run and run. Six pups have some fun. Six pups play at tag. Six pups zig and zag. Six pups yip and yap. Ruff, ruff, wuff, wuff. Six pups get in stuff. Six pups dig in the mud. Six pups hop in the tub. Six pups get all fed. Six pups: all in bed.
Lexile Analyzer: 100L (Note: if you like decodable texts for short vowels, find 100+ texts on our Hub!)
ChatGPT guess: 380