Recently I began writing a series of posts on the topic of comprehensive assessment of dyslexia.
In part I of my post (HERE), I discussed common dyslexia myths as well as general language testing as a starting point in the dyslexia testing battery.
In part II I detailed the next two steps in dyslexia assessment: phonological awareness and word fluency testing (HERE).
Today I would like to discuss part III of comprehensive dyslexia assessment, which covers reading fluency and reading comprehension testing.
Let’s begin with reading fluency testing, which assesses the student’s ability to read word lists or short paragraphs with appropriate speed and accuracy. Here we are looking at how many words the student can accurately read per minute, orally and/or silently (see several examples of fluency rates below).
Research indicates that oral reading fluency (ORF) on passages is more strongly related to reading comprehension than ORF on word lists. This is an important factor which needs to be considered when it comes to oral fluency test selection.
Oral reading fluency tests are significant for a number of reasons. Firstly, they allow us to identify students with impaired reading accuracy. Secondly, they allow us to identify students who can decode words with relative accuracy but who cannot comprehend what they read due to significantly decreased reading speed. When you ask such children: “What did you read about?” They will frequently respond: “I don’t remember because I was so focused on reading the words correctly.”
One example of a popular oral reading fluency test (employing reading passages) is the Gray Oral Reading Tests-5 (GORT-5). It yields scores on the student’s:
- Oral Reading Index (a composite score based on Fluency and Comprehension scaled scores)
Another type of reading fluency test is the test of silent reading fluency. Assessments of silent reading fluency can be selectively useful for identifying older students with reading difficulties and for monitoring their progress. One obvious advantage of silent reading tests is that they can be administered in a group setting to multiple students at once and generally take just a few minutes to administer, which is significantly less time than oral reading measures require when administered to individual students.
Below are several examples of silent reading tests/subtests.
The Test of Silent Word Reading Fluency (TOSWRF-2) presents students with rows of words, ordered by reading difficulty without spaces (e.g., dimhowfigblue). Students are given 3 minutes to draw a line between the boundaries of as many words as possible (e.g., dim/how/fig/blue).
The Test of Silent Contextual Reading Fluency (TOSCRF-2) presents students with text passages printed entirely in uppercase letters, with no spaces between words and no punctuation between sentences, and asks them to use dashes to separate the words within a 3-minute period.
Similar to the TOSCRF-2, the Contextual Fluency subtest of the Test of Reading Comprehension – Fourth Edition (TORC-4) measures the student’s ability to recognize individual words in a series of passages (taken from the TORC-4’s Text Comprehension subtest) in a period of 3 minutes. Each passage, printed in uppercase letters without punctuation or spaces between words, becomes progressively more difficult in content, vocabulary, and grammar. As students read the segments, they draw a line between as many words as they can in the time allotted (e.g., THE|LITTLE|DOG|JUMPED|HIGH).
However, it is important to note that oral reading fluency is a better predictor of reading comprehension than silent reading fluency for younger students (early elementary age). In contrast, silent reading measures are more strongly related to reading comprehension in middle school (e.g., grades 6-8), but only for skilled rather than average readers, which is why oral reading fluency measures are probably much better predictors of deficits in this area for children with suspected reading disabilities.
Now let’s move on to reading comprehension testing, which is an integral component of any dyslexia testing battery. Unfortunately, it is also the trickiest. Here’s why.
Many children with reading difficulties will be able to read and comprehend short paragraphs containing factual information of decreased complexity. However, this changes dramatically when it comes to the comprehension of longer, more complex, and increasingly abstract age-level text. While a number of tests do assess reading comprehension, none of them truly adequately assess the student’s ability to comprehend abstract information.
For example, on the Reading Comprehension subtest of the CELF-5, students are allowed to keep the text and refer to it when answering questions. Such an option will inflate students’ scores and will not provide an accurate picture of their comprehension abilities.
To continue, the GORT-5 contains reading comprehension passages, about which the students must answer questions after the stimulus booklet has been removed. However, the passages are far simpler than the academic texts the students need to comprehend on a daily basis, so students may do well on this test yet still present with significant comprehension deficits.
The same could be said of the text comprehension components of major educational testing batteries, such as the Woodcock Johnson IV Passage Comprehension subtest, which gives the student sentences with a missing word and asks the student to orally provide that word. However, filling in a missing word does not text comprehension make.
Likewise, the Reading Comprehension subtest of the Wechsler Individual Achievement Test®-Third Edition (WIAT-III) is very similar to the CELF-5’s. The student is asked to read a passage and answer questions by referring back to the text. However, just because a student can look up the answers in the text does not mean that they actually understand it.
So what can be done to accurately assess the student’s ability to comprehend abstract grade-level text? My recommendation is to go informal. Select grade-level passages from the student’s curriculum pertaining to science, social studies, geography, etc. (vs. language arts, which tends to be simpler), and ask the student to read them and answer factual questions regarding supporting details as well as inferential questions relevant to main ideas and implied messages.