
If It’s NOT CAPD Then Where do SLPs Go From There?

In July 2015 I wrote a blog post entitled “Why (C)APD Diagnosis is NOT Valid!”, citing the latest research literature to explain that the controversial diagnosis of (C)APD tends to:

a) detract from the understanding that the child presents with legitimate language-based deficits in the areas of comprehension, expression, social communication, and literacy development, and

b) result in the above deficits not being adequately addressed due to the provision of controversial APD treatments.

To CLARIFY, I was NOT claiming that the processing deficits exhibited by children diagnosed with “(C)APD” are not REAL. Rather, I was trying to point out that these processing deficits are of neurolinguistic origin and as such need to be addressed from a linguistic rather than an ‘auditory’ standpoint.

In other words, if one carefully analyzes the child’s so-called processing issues, one will quickly realize that those issues are not related to the processing of auditory input (auditory domain), since the child is not struggling to process tones, hoots, clicks, etc., but rather has difficulty processing speech and language (linguistic domain).

Let us review two major APD Models: The Buffalo Model (Katz) and the Bellis/Ferre Model, to support the above stance.


The Buffalo Model by Jack Katz, PhD contains 4 major categories:

1. The Decoding Category – refers to the ability to quickly and accurately process speech, most importantly at the phonemic level. (Since this involves speech sounds, it has nothing to do with the processing of non-speech auditory stimuli. In other words, deficits in this area are of a linguistic nature and are highly correlated with reading deficits characterized by weak/deficient phonemic awareness abilities and poor emergent reading abilities.)

Here are a few examples of so-called “decoding” deficits:

  • Difficulty with processing what is heard accurately and quickly; tends to respond more slowly (indicative of weak language abilities)
  • Problems keeping up with the flow of communication and running discourse (indicative of weak language abilities)
  • Problems processing at a phonemic level (e.g., can’t blend ‘t,’ ‘u,’ and ‘b’ together to make the word ‘tub’) (indicative of phonemic awareness deficits)
  • Trouble reading and spelling (reading and writing deficits rather than APD)
  • Receptive language problems and impairments in discrimination, closure abilities and temporal resolution (this one just explains itself)

2. The Tolerance-Fading Memory (TFM) Category – refers to two skills that are often found together: “tolerance” – understanding speech in noise (processing of language) and “fading memory” – auditory short-term or working memory (memory = a higher-level cognitive skill rather than a purely auditory entity).

Here are a few examples of tolerance-fading memory deficits:

  1. Difficulty blocking out background noise, so the child’s performance suffers in a noisy classroom environment; may be labeled as distractible (clearly describes a child with poor language comprehension)
  2. Linked to poor reading comprehension, oral and written expression, and poor short-term memory (in other words, describes a learning disability)

3. The Integration Category

  • difficulty integrating information from different modalities, such as receiving auditory and visual information at the same time; these children are often labeled as learning disabled or even dyslexic (this one just explains itself)
  • they may be poor readers, have trouble with spelling, and exhibit difficulty with multimodal tasks (clearly indicative of reading and writing deficits; these students will often be classified in the schools with a specific learning disability)

4. The Organization Category – disorganized thinking; sequencing errors (this appears to be indicative of social communication/executive function deficits, as well as word-retrieval deficits)

Another major APD model is the Bellis/Ferre Model, which describes the following subtypes:

  • Primary subtypes
  1. Auditory decoding deficit – listening difficulties in noisy environments
  2. Integration deficit – problems with tasks requiring both cerebral hemispheres to cooperate
  3. Prosodic deficit – difficulty understanding the intent of verbal messages
  • Secondary subtypes
  1. Associative deficit – receptive language disorder
  2. Output-organization deficit – attention and/or executive function disorder; might also be caused by an auditory efferent dysfunction

Similar to the Buffalo Model, the Bellis/Ferre Model describes deficits of a linguistic rather than auditory nature, many of which are characteristic of a learning disability.


Consequently, if an SLP is referred a student with confirmed or suspected (C)APD, the first thing they should do is administer a comprehensive battery of tests to determine the scope of the student’s linguistic deficits. To test general language abilities, consider using the Test of Integrated Language & Literacy Skills (TILLS) (Review HERE). But SLPs shouldn’t just stop there! They need to dig deeper to make sure that the following major areas of language are assessed:

The above list doesn’t even reference assessment of Reading, Writing, and Spelling, all areas which play a crucial role in academic success, as any deficits displayed in those areas may also present as (C)APD symptoms. If literacy testing is not performed, it is still important for SLPs to review and seriously consider the results of learning evaluations in order to see the whole child and not just their limited functioning in select areas of oral language comprehension, expression, and use.

It is very important for SLPs to understand that without a comprehensive language and literacy assessment of deficit areas, it is very difficult to adequately address the student’s linguistically based deficits! Thus, if testing shortcuts are taken, the referral of students diagnosed with (C)APD will not cease, and SLPs will continue to be in the dark regarding which goals should be addressed with these students in therapy.



Why Are My Child’s Test Scores Dropping?

“I just don’t understand,” says a bewildered parent, “she’s receiving so many different therapies and tutoring every week, but her scores on educational, speech-language, and psychological testing just keep dropping!”

I hear a variation of this comment far too frequently, both in my private practice and in my outpatient hospital-based school setting, from parents looking for an explanation for the decline in their children’s standardized test scores in both cognitive (IQ) and linguistic domains. That is why today I wanted to take a moment to write this blog post explaining a few reasons behind this phenomenon.

Children with language impairments represent a highly diverse group, which exists along a continuum. Some children’s deficits may be mild, while others’ are far more severe. Some children may receive very few intervention services and thrive academically, while others can receive an inordinate amount of intervention and still benefit from it only to a limited degree. To put it in very simplistic terms, the above is due to the interaction between two significant influences: the child’s (1) genetic makeup and (2) environmental factors.

There is a reason why language disorders are considered developmental.   Firstly, these difficulties are apparent from a young age when the child’s language just begins to develop.  Secondly, the trajectory of the child’s language deficits also develops along with the child and can progress/lag based on the child’s genetic predisposition, resiliency, parental input, as well as schooling and academically based interventions.

Let us discuss some of the reasons why standardized testing results may decline for select students who are receiving a variety of support services and interventions.

Ineffective Interventions due to Misdiagnosis 

Sometimes, a lack of appropriate/relevant intervention is responsible for the decline. Let’s take the example of alcohol-related deficits misdiagnosed as Autism, which I have frequently encountered in my private practice when performing second-opinion testing and consultations. Unfortunately, the above is not uncommon. Many children with alcohol-related impairments may present with significant social-emotional dysregulation coupled with significant externalizing behavior manifestations. As a result, without a thorough differential diagnosis, they may be diagnosed with ASD and then provided with ABA therapy services for years with little to no benefit.

Ineffective Interventions due to Lack of Comprehensive Testing 

Let us examine another example: a student with average intelligence but poor reading performance. The student may do well in school up to a certain grade but then may begin to flounder academically. Because only the student’s reading abilities ‘seem’ to be adversely impacted, no comprehensive language and literacy evaluations are performed. The student may receive undifferentiated extra reading support in school while his scores continue to drop.

Once the situation ‘gets bad enough’, the student’s language and literacy abilities may be comprehensively assessed. In the vast majority of situations, these types of assessments yield the following results:

  1. The student’s oral language expression as well as higher order language abilities are adversely affected and require targeted language intervention
  2. The undifferentiated reading intervention provided to the student was NOT targeting the actual areas of weakness

As can be seen from the above examples, targeted intervention is hugely important, and its absence may, in a number of cases, be responsible for a student’s declining performance. However, that is not always the case.

What if it were definitively confirmed that the student was indeed diagnosed appropriately and was receiving quality services, but still continued to decline academically? What then?

Well, we know that many children with genetic disorders (Down syndrome, Fragile X, etc.) as well as intellectual disabilities (ID) can make incredibly impressive gains in a variety of developmental areas (e.g., gross/fine motor skills, speech/language, socio-emotional, ADL, etc.), but their gains will not be on par with those of peers without these diagnoses.

The situation becomes much more complicated when children without ID (or with mild intellectual deficits) and varying degrees of language impairment, receive effective therapies, work very hard in therapy, yet continue  to be perpetually behind their peers when it comes to making academic gains.  This occurs because of a phenomenon known as Cumulative Cognitive Deficit (CCD).

The Effect of Cumulative Cognitive Deficit (CCD) on Academic Performance 

According to Gindis (2005), CCD “refers to a downward trend in the measured intelligence and/or scholastic achievement of culturally/socially disadvantaged children relative to age-appropriate societal norms and expectations” (p. 304). Gindis further elucidates by quoting Sattler (1992): “The theory behind cumulative deficit is that children who are deprived of enriching cognitive experiences during their early years are less able to profit from environmental situations because of a mismatch between their cognitive schemata and the requirements of the new (or advanced) learning situation” (pp. 575-576).

So who are the children potentially at risk for CCD?

One such group is internationally (and domestically) adopted children, as well as children in foster care. A number of studies show that due to the early-life hardships associated with prenatal trauma (e.g., maternal substance abuse, lack of adequate prenatal care, etc.) as well as postnatal stress (e.g., the adverse effects of institutionalization), many of these children have much poorer social and academic outcomes despite being adopted by well-to-do, educated parents who continue to provide them with exceptional care in all aspects of their academic and social development.

Another group is children with diagnosed/suspected psychiatric impairments and concomitant overt/hidden language deficits. Depending on the degree and persistence of the psychiatric impairment, in addition to having only intermittent access to classroom academics and therapy interventions, the quality of their therapy may be affected by the course of their illness. Combined with the sporadic nature of interventions, this may result in them falling further and further behind their peers with respect to social and academic outcomes.

A third group (as mentioned previously) is children with genetic syndromes, neurodevelopmental disorders (e.g., Autism), and intellectual disabilities. Here, it is very important to explicitly state that children with diagnosed or suspected alcohol-related deficits (FASD) are particularly at risk due to the lack of consensus/training regarding FASD detection/diagnosis. Consequently, these children may evidence a steady ‘decline’ on standardized testing despite exhibiting steady functional gains in therapy.

Brief Standardized Testing Score Tutorial:

When we look at norm-referenced testing results, score interpretation can be quite daunting. For the sake of simplicity,  I’d like to restrict this discussion to two types of scores: raw scores and standard scores.

The raw score is the number of items the child answered correctly on a test or a subtest. However, raw scores need to be interpreted to be meaningful. For example, a 9-year-old student may attain a raw score of 12 on a subtest of a particular test (e.g., the Listening Comprehension Test-2, or LCT-2). Without more information, that raw score has no meaning. If the subtest consisted of 15 questions, a raw score of 12 might be an average score. Alternatively, if the subtest had 36 questions (e.g., the Test of Problem Solving-3, or TOPS-3), a raw score of 12 might be significantly below average.

Consequently, the raw score needs to be converted to a standard score. Standard scores compare the student’s performance on a test to the performance of other students his/her age.  Many standardized language assessments have a mean of 100 and a standard deviation of 15. Thus, scores between 85 and 115 are considered to be in the average range of functioning.
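
For readers who want to see the arithmetic behind that conversion, here is a minimal sketch in Python. The normative mean and standard deviation used for the raw scores are hypothetical illustration values; real tests look raw scores up in age-banded normative tables rather than applying a single formula.

```python
def standard_score(raw_score, norm_mean, norm_sd, mean=100, sd=15):
    """Convert a raw score to a standard score on a mean-100, SD-15 scale."""
    z = (raw_score - norm_mean) / norm_sd  # distance from the age-based norm, in SD units
    return round(mean + z * sd)

# The same raw score of 12 means very different things against different age norms
# (both sets of norms below are invented purely for illustration).
print(standard_score(12, norm_mean=20, norm_sd=4))  # 70 -> well below the average range
print(standard_score(12, norm_mean=13, norm_sd=3))  # 95 -> within the average range
```

In other words, it is the comparison to same-age peers, not the raw count itself, that carries the meaning.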

Now let’s discuss how testing performance varies across time. Let’s say an 8.6-year-old student took the above-mentioned LCT-2 and attained poor standard scores on all subtests. That student qualifies for services and receives them for a period of one year. At that point the LCT-2 is re-administered, and much to the parents’ surprise, the student’s standard scores appear to be even lower than when he had taken the test as an eight-year-old (illustration below).

Results of the Listening Comprehension Test-2 (LCT-2): Age 8:4

Subtests | Raw Score | Standard Score | Percentile Rank | Description
Main Idea | 5 | 67 | 2 | Severely Impaired
Details | 2 | 63 | 1 | Severely Impaired
Reasoning | 2 | 69 | 2 | Severely Impaired
Vocabulary | 0 | Below Norms | Below Norms | Profoundly Impaired
Understanding Messages | 0 | <61 | <1 | Profoundly Impaired
Total Test Score | 9 | <63 | 1 | Profoundly Impaired

(Mean = 100, Standard Deviation = +/-15)

Results of the Listening Comprehension Test-2 (LCT-2): Age 9:6

Subtests | Raw Score | Standard Score | Percentile Rank | Description
Main Idea | 6 | 60 | 0 | Severely Impaired
Details | 5 | 66 | 1 | Severely Impaired
Reasoning | 3 | 62 | 1 | Severely Impaired
Vocabulary | 4 | 74 | 4 | Moderately Impaired
Understanding Messages | 2 | 54 | 0 | Profoundly Impaired
Total Test Score | 20 | <64 | 1 | Profoundly Impaired

(Mean = 100, Standard Deviation = +/-15)

However, if one compares the raw score columns in the two tables, one can see that as a 9-year-old the student actually answered more questions correctly than he did as an 8-year-old; his total raw test score went up by 11 points.

The above is a perfect illustration of CCD in action. The student was able to answer more questions on the test, but because academic, linguistic, and cognitive demands continue to steadily increase with age, this quantitative improvement in performance (an increase in the total number of questions answered) did not result in a qualitative improvement in performance (an increase in standard scores).
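
The same hypothetical conversion sketched earlier makes the CCD pattern easy to see in numbers: if the normative expectation rises faster with age than the child’s raw score does, the standard score stays flat or even drops. The age norms below are invented for illustration and are not the actual LCT-2 norms.

```python
def standard_score(raw_score, norm_mean, norm_sd):
    """Raw score -> standard score (mean 100, SD 15), using hypothetical age norms."""
    return round(100 + 15 * (raw_score - norm_mean) / norm_sd)

# Hypothetical norms: a typical total raw score of 20 (SD 4) at age 8, rising to 32 (SD 5) at age 9.
at_age_8 = standard_score(9, norm_mean=20, norm_sd=4)   # raw 9  -> standard score 59
at_age_9 = standard_score(20, norm_mean=32, norm_sd=5)  # raw 20 -> standard score 64
print(at_age_8, at_age_9)  # the raw score more than doubled, yet both remain far below 85
```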

In the first part of this series I have introduced the concept of Cumulative Cognitive Deficit and its effect on academic performance. Stay tuned for part II of this series which describes what parents and professionals can do to improve functional performance of students with Cumulative Cognitive Deficit.

References:

  • Bowers, L., Huisingh, R., & LoGiudice, C. (2006). The Listening Comprehension Test-2 (LCT-2). East Moline, IL: LinguiSystems, Inc.
  • Bowers, L., Huisingh, R., & LoGiudice, C. (2005). The Test of Problem Solving 3-Elementary (TOPS-3). East Moline, IL: LinguiSystems.
  • Gindis, B. (2005). Cognitive, language, and educational issues of children adopted from overseas orphanages. Journal of Cognitive Education and Psychology, 4 (3): 290-315.
  • Sattler, J. M. (1992). Assessment of Children. Revised and updated 3rd edition. San Diego: Jerome M. Sattler.

What do Auditory Memory Deficits Indicate in the Presence of Average General Language Scores?

I frequently see a variation of the following question on a variety of speech language forums: “My student scored within the average range on all the tested subtests with the exception of working memory and sentence recall. What other testing do you recommend to determine whether these difficulties are impacting their academics?”

First, let’s provide a definition of working memory (WM). WM is the memory used for temporarily storing and manipulating information so we can perform a particular task. It is one of the executive functions (EFs) and contains two important subcomponents: a phonological loop, which stores verbal information, and a visuo-spatial ‘sketchpad’, which stores visual and spatial information (Baddeley & Hitch, 2007). Together they are responsible for the acquisition of sound-letter correspondence, phonemic awareness, and ultimately reading comprehension, since WM influences how long information stays in memory as well as its eventual transfer (or lack thereof) to long-term memory.

In other words, students with adequate working memory will have enough capacity to appropriately decode, fluently read and adequately comprehend text while students with poor working memory will expend all their capacity on basic tasks such as decoding, which leaves them with very little capacity to devote to comprehension of read material.

Outside of testing, WM deficits typically become glaringly apparent as students move up grade levels and are given challenging subject-specific abstract texts, requiring in-depth analysis.  This is when parents and professionals start to see that in addition to experiencing difficulty comprehending the read texts, students with poor WM also tire easily when presented with lengthy texts, and tend to evidence increased frustration and decreased self-efficacy during reading tasks.

Now let’s get back to our original question: “What other testing do you recommend to determine whether these [memory] difficulties are impacting their academics?”

Typically, when asked that question, I recommend that the therapist (an SLP trained in reading disorders) or a related special education professional (e.g., a learning specialist) perform a series of tests aimed at determining whether the student presents with reading deficits.

In my clinical experience (which is, of course, substantiated by research), in 99% of cases reading disabilities are the hidden culprit behind this profile of seemingly average oral language skills coupled with working memory deficits. For more information on what testing is recommended to tease out the presence of reading disorders, see my series of posts on Comprehensive Dyslexia Testing (HERE) as well as on the validity of the (C)APD diagnosis (HERE).


So the next time you encounter this perplexing pattern of strengths and weaknesses, don’t dismiss it as inconsequential, decline to recommend services, or discharge the student from language services. Delve into it further! You will often find that it is representative of reading difficulties, the cumulative impact of which may significantly affect the student’s academic performance and ultimately school outcomes, unless appropriate therapeutic interventions are provided.

References:

  • Baddeley, A. D., & Hitch, G. J. (2007). Working memory: Past, present…and future? In N.Osaka, R. Logie & M. D’Esposito (Eds), Working Memory – Behavioural & Neural Correlates. Oxford University Press.

Why (C) APD Diagnosis is NOT Valid!

Today’s post will make a number of people quite angry and is intended to be controversial! Why? Because controversy promotes critical thinking, broadens perspectives, allows us to acquire better knowledge of the construct in question, and ultimately guides better decision making on the part of the parties involved. So why the lengthy disclaimer? Because today, with the use of the latest research publications, I would like to discuss the fact that the diagnosis of Auditory Processing Disorder (APD), or what some may know as Central Auditory Processing Disorder (CAPD), is NOT valid!

Here are just a few reasons why:

  1. There is a strong desire for the (C)APD label on the part of those encountering processing difficulties, yet once the label is given no direct/specific auditory interventions are provided by the audiologist. Subsequent to the diagnosis, confusion ensues regarding the type, frequency, and duration of service provision (typically performed by the SLP) as well as what those services should actually constitute 
  2. Recommendations for training deficit-specific areas such as working memory, auditory discrimination, auditory sequencing, etc., do not functionally transfer into practice and fail to create a generalization effect
  3. Recommendations for specific costly auditory training programs such as Auditory Integration Training (AIT), The Listening Program (TLP), or Fast ForWord® (FFW), at the exclusion of all others and without the provision of a detailed breakdown of the child’s deficit areas, often result in unnecessary expenses for parents and professionals; such programs have been found to be INEFFECTIVE or of limited effectiveness in the long run
  4. General audiological recommendations for accommodations (e.g., FM systems, etc.) are frequently unnecessary, and may actually exacerbate the isolation effect while in no way alleviating the student’s deficits, which require direct and targeted intervention
  5. Auditory deficits don’t cause speech, language, and academic learning difficulties
  6. Numerous non-linguistic based disorders can be misdiagnosed as (C)APD without differential diagnosis
  7. (C)APD testing is hugely influenced by non-auditory factors grounded in higher order cognitive and linguistic processes
  8. Presently there are no clear performance criteria for making the (C)APD diagnosis
  9. The diagnosis of (C)APD is appealing because it presents a more attractive explanation than the diagnoses of language and learning disabilities for children with processing deficits
  10. The diagnosis of (C)APD may often detract from identifying legitimate language based deficits in the areas of comprehension, expression, social communication and literacy development, as the result of which these areas will not get adequate therapeutic attention by relevant professionals

A few words on (C)APD popularity, well sort of:

(C)APD is currently rampantly diagnosed in the United States, Australia, and New Zealand, and is even beginning to be diagnosed in the United Kingdom (Dawes & Bishop, 2009). However, (C)APD is presently not a recognized diagnostic classification in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5), nor is it part of an actual educational classification in the United States. Already many of you can see the beginnings of the controversy. If this diagnosis is so popular and so prevalent, why is it that major psychological and educational governing bodies such as the American Psychiatric Association and the US Department of Education still do not officially recognize it?

(C)APD symptomology:

A. Student presents with difficulty processing information efficiently

  • Requires increased processing time to respond to questions
  • Presents as if s/he is ignoring the speaker
  • May request frequent repetition of presented information from speakers
  • Difficulty following long sentences
  • Difficulty keeping up with class discussions in group settings
  • Poor listening abilities under noisy conditions may be interpreted as “distractibility”

B. Student has difficulty maintaining attention on presented tasks

  • Frequent loss of focus
  • Difficulty completing assignments on their own

C. Student has poor short-term memory – difficulty remembering instructions and directions or verbally presented information

D. Student has difficulty with phonemic awareness, reading, and spelling

  • Poor ability to recognize and produce rhyming words
  • Poor segmentation abilities (separation of sentences, syllables and sounds)
  • Poor sound manipulation abilities (isolation, deletion, substitution, blending, etc.)
  • Poor sound letter identification abilities
  • Poor vowel recognition abilities
  • Poor decoding
  • Poor comprehension
  • Spelling errors
  • Limited/disorganized writing

E. The combination of above factors may result in generalized deficits across the board, affecting the child’s social and academic performance:

  • Poor reading comprehension
  • Poor oral and written expression
  • Disorganized thinking (e.g., disjointed narrative production)
  • Sequencing errors (recalling/retelling information in order, following recipes, etc.)
  • Poor message interpretation
  • Difficulty making inferences
  • Misinterpreting the meaning of abstract information

I do not know what you see when you read the above description, but to me those are the classic signs of a language impairment which has turned into a learning disability masquerading under the ambiguous label of (C)APD.

That is exactly what Dawes and Bishop stated in 2009, when they asserted that “a child who is regarded as having a specific learning disability by one group of experts may be given an APD diagnosis by another.” They concluded that “APD, as currently diagnosed, is not a coherent category, but that rather than abandoning the construct, we need to develop improved methods for assessment and diagnosis, with a focus on interdisciplinary evaluation.”

Let us now deconstruct each of the above statements with the assistance of direct quotes from current research.

1. (C)APD – what is it good for? A child goes to an audiologist and receives an ambiguous battery of (C)APD testing with unclear qualification criteria (more on that below). There are some abnormal findings, so the audiologist states that the child has (C)APD, recommends accommodations and modifications, as well as services in the form of speech-language therapy with a focus on auditory training (more below) and/or some form of program similar to Fast ForWord®, and doesn’t see the child again for some time (maybe even years). Since the child is now being seen by an SLP, who frequently has no idea what to do with that child based on the ambiguous audiological findings, what exactly did the diagnosis of (C)APD just accomplish?

2. Processing Skills Training – Say What? In 2011, Fey and colleagues (many notable audiologists and speech-language pathologists) conducted a systematic review of 25 journal articles on the efficacy of interventions for school-age children with auditory processing disorder ((C)APD). Their review found no compelling evidence that auditory interventions provided any unique benefit to auditory, language, or academic outcomes for children with diagnoses of (C)APD or language disorder.

Presently there is no valid evidence that targeting specific processing skills such as auditory discrimination, auditory sequencing, phonological memory, working memory, or rapid serial naming actually improves children’s ‘auditory processing’, language or reading abilities (Fey et al., 2011).

To illustrate further, Melby-Lervåg and Hulme (2013) performed a meta-analysis of 23 working memory training studies. They found no evidence that memory training was an effective intervention for children with ADHD or dyslexia, as it did not lead to better performance outside of the tasks presented within the memory tests. They concluded: “In the light of such evidence, it seems very difficult to justify the use of working memory training programs in relation to the treatment of reading and language disorders,” further adding: “Our findings also cast strong doubt on claims that working memory training is effective in improving cognitive ability and scholastic attainment” (Melby-Lervåg & Hulme, 2013, p. 282).

3. The trouble with prescriptive programs. (C)APD assessments often yield recommendations for a number of specific, costly prescriptive programs such as AIT, FFW, etc. As humans, we are “attracted to interventions that promise relatively rapid improvements in language and academic skills. Interventions that target processing abilities are appealing because they promise significant improvements in language and reading without having to directly target the specific knowledge and skills required to be a proficient speaker, listener, reader, and writer” (Kamhi & Wallach, 2012).

These programs claim to improve the child’s processing abilities through music, phonics, hearing distortions, etc. When such recommendations are made, parents and professionals are urged to carefully review the evidence-based research regarding these prescribed programs in order to determine their effectiveness. Presently, there is no research to support the use of any of these programs with children presenting with processing difficulties.

Let’s take a look at Fast ForWord®, a highly costly program frequently recommended for children with auditory processing deficits. It is designed to help children’s reading and spoken language by training their memory, attention, processing, and sequencing 3 to 5 days per week, for 8 to 12 weeks. However, systematic reviews found no sign of a reliable effect of Fast ForWord® on reading or on expressive or receptive spoken language.

Now some of you may legitimately tell me: “How dare you? I’ve tried it with my child and seen great gains.” And that is terrific! However, it is important to note that ANY intervention is better than NO intervention, and there is currently no scientific proof that this program works better than other programs aimed directly at improving children’s reading abilities and listening skills. Furthermore, if the child needs assistance with reading, then rather than spending the money on Fast ForWord®, it would be far more effective to select a systematic Orton-Gillingham (OG) (or similar) reading-based program to teach her/him to read!

4. The dreaded FM system! FM systems have become an almost automatic recommendation for children diagnosed with (C)APD but are they actually effective?

Here is what one notable audiologist had to say on the subject: an FM system brings the speaker’s voice via the microphone to the listener through an amplifier via loudspeakers or earphones, and “[o]nly personal systems [are] appropriate for children with TRUE APD-based auditory distractibility problems (understanding speech in the presence of background noise).” However, when he did his own testing, he found that only ~25% of children with (C)APD had issues with hearing speech in noise; the other ~75% did not.

Guess what… a recent systematic review showed? Lemos et al. (2009) conducted a systematic literature review of articles recommending the use of FM systems for APD. They concluded that “[s]trong scientific evidence supporting the use of personal FM systems for APD intervention was not found. Since such device is frequently recommended for the treatment of APD, it becomes essential to carry out studies with high scientific evidence that could safely guide clinical decision making on this subject.”

5. (C)APD Diagnosis Does NOT a Language Disorder Make. “There is little evidence that auditory perceptual impairments (not referring to hearing deficits) are a significant risk factor for language and academic performance (e.g., Hazan, Messaoud-Galusi, Rosen, Nouwens, & Shakespeare, 2009; Watson & Kidd, 2009)” (Kamhi, 2011, p. 265).

  • Watson et al. (2003) found that measures of auditory processing (NOT hearing) had no impact on children’s reading or language abilities in Grades 1 through 4.
  • Sharma, Purdy, and Kelly (2009) found that having auditory processing difficulties did not increase the likelihood that a child would have a language or reading disorder.
  • Hazan et al. (2009) and Ramus et al. (2006) found that despite poor phonological processing abilities, individuals with dyslexia perform within normal limits on measures of speech perception.

(From Kamhi, 2011, p. 268)

6. Are you sure it’s (C)APD?

Without a careful differential diagnosis, numerous non-linguistic medical, psychiatric, neurological, psychological, and cognitive conditions can be misdiagnosed as (C)APD, including (but not limited to):

  • Respiratory Disorders
    • Adenoid hypertrophy, asthma, allergic rhinitis
  • Metabolic/Endocrine Disorders
    • Diabetes, hypo-/hyperthyroidism
  • Hematological Disorders
    • Anemia
  • Immunological Disorders
    • Acquired and congenital immune problems
  • Cardiac Disorders
    • Congenital and acquired heart disease, syncope
  • Digestive Disorders
    • Irritable bowel syndrome, GERD
  • Neurological Disorders
    • Traumatic brain injuries, tumors, encephalopathy
  • Genetic Disorders
    • Fragile X Syndrome
  • Toxin Exposure
    • Lead, mercury, drug exposure
  • Infections and Infestations
    • Yeast overgrowth, intestinal worms/parasites
  • Sleep Disorders
    • Sleep apnea
  • Mental Health Disorders
    • Trauma, anxiety, mood disorders, adjustment disorders
  • Sensory Processing Disorders
    • Vision, hearing, auditory, tactile
  • Acquired Disorders
    • FASD

7. (C)APD testing is NOT so PURE 

(C)APD testing does not simply consist of pure-tone audiometry; it is heavily comprised of higher-order linguistic and cognitive tasks. Testing requires that listeners attend to given directions, remember and label the presented auditory sequences, etc. In other words, they participate in tasks that tax their linguistic system and executive functions (DeBonis, 2015).

So what does the research show?

  • Wallach (2011) indicated that (C)APD ‘symptomology’ “reflects broader underlying problems in language comprehension and metalinguistic awareness.”
  • Dawes and Bishop (2009)  compared children with a CAPD to children diagnosed with dyslexia and found similar attention, reading, and language deficits in both groups.
  • Kelly et al. (2009) found that 76% of a sample of 68 children with suspected auditory processing disorder also had language impairment, with 53% demonstrating decreased auditory attention and 59% demonstrating decreased auditory memory.
  • Ferguson et al. (2011)  concluded that “the current labels of CAPD and SLI [specific language impairment] may, for all practical purposes, be indistinguishable” (p. 225).

(From DeBonis, 2015, pp. 126-127)

8. What to Test and How to do it – That IS the Question? 

“Despite lofty claims to the contrary, there is no clear consensus concerning the battery of tests that lead to a diagnosis of CAPD” (Burkard, 2009, p. vii). Presently, neither the American Academy of Audiology nor the American Speech-Language-Hearing Association has clear criteria on what testing to administer, how many standard deviations below the mean a client has to score in order to qualify, or even who is a good candidate for (C)APD testing (DeBonis, 2015, p. 125).

As such, children are presently diagnosed with (C)APD in an essentially arbitrary fashion rather than on the basis of a specific, widely accepted standard. To illustrate, W. J. Wilson and Arnott (2013) found that “in a sample of records of 150 school-aged children who had completed at least four CAPD tests, rates of diagnosis ranged from 7.3% to 96% depending on the criteria used” (DeBonis, 2015, p. 125). Are you “processing” what I am saying?
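
To make the criterion-dependence point concrete, here is a toy simulation (not a re-analysis of Wilson and Arnott’s data; the battery, cutoffs, and population are invented for illustration). It shows how the proportion of children “flagged” on a four-test battery swings dramatically depending on which arbitrary cutoff rule is applied, even when every score is simply drawn from the normal distribution.

```python
import random

random.seed(1)

# Simulate standard scores (mean 100, SD 15) on a four-test battery for 150 children
# drawn from the general population (i.e., no true disorder is built into the data).
children = [[random.gauss(100, 15) for _ in range(4)] for _ in range(150)]

def flagged(scores, cutoff_sd, tests_needed):
    """Flag a child if at least `tests_needed` scores fall `cutoff_sd` SDs below the mean."""
    return sum(score < 100 - cutoff_sd * 15 for score in scores) >= tests_needed

lenient = sum(flagged(c, cutoff_sd=1, tests_needed=1) for c in children)  # one score below -1 SD
strict = sum(flagged(c, cutoff_sd=2, tests_needed=2) for c in children)   # two scores below -2 SD
print(f"Lenient criterion flags {lenient}/150 children; strict criterion flags {strict}/150.")
```

Roughly half the children get flagged under the lenient rule and almost none under the strict rule, purely by chance and with no change in the children themselves, which is the essence of the criterion problem described above.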

9. Looking for the “Right” Label 

As an SLP, I frequently hear the following statement from parents: “We were searching for what was wrong with our child for such a long time; we are so happy that we were finally able to identify that it’s (C)APD.”

The above comment is certainly understandable. After all, (C)APD sounds manageable! Its appeal is that, presumably, if the child undergoes specific auditory interventions to improve the deficit areas, s/he will get better and all the problems will go away. In contrast, finding out that the child’s processing difficulties are the result of linguistic deficits in the areas of listening, speaking, reading, and writing can be incredibly overwhelming, especially given what we know about the nature of language impairments: more often than not, they turn into lifelong learning disabilities.

Some parents and professionals may disagree. They might point out that many children with (C)APD test just fine on general language testing and only present with isolated deficits in the areas of attention, memory, and phonological processing. Yet here is the problem: general language testing in the form of tests such as the CELF-5 or the CASL does not a complete language assessment make!

The same children who test ‘just fine’ on these assessments often test quite poorly on measures of social communication, executive function, and reading. In other words, if professionals dig deep enough, they often find that something which outwardly presents as (C)APD is part of a much broader set of language-related issues, which require relevant intervention services. This leads me to my final point below.

10. Missing the Big Picture

“The primacy given to auditory processing abilities has resulted at times in neglect of other cognitive factors” (Cowan et al., 2009, p. 192). Focusing on the diagnosis of (C)APD obscures the REAL, language-based deficits of the children in question. It forces SLPs to address erroneous therapeutic targets based on AuD recommendations. It makes us ignore the BIG picture instead of “consider[ing] non-auditory reasons for listening and comprehension difficulties, such as limitations in working memory, language knowledge, conceptual abilities, attention, and motivation and consequently targeting language, literacy, and knowledge-based goals in therapy” (Kamhi & Wallach, 2012).

Conclusion:

So what will happen next? Well, I can tell you with certainty that the controversy will not end here! Presently, not only is there a fierce academic debate between speech-language pathologists and audiologists, but there is also a raging debate among audiologists themselves! This controversy will continue for many years among some highly educated people. And SLPs? Well, we will continue seeing numerous children diagnosed with (C)APD. Except, I do hope something will change, and that is our collective outlook on how we view ambiguously defined and assessed disorders such as (C)APD.

I sincerely hope that we do not blindly defer to other professions and reject current valid research regarding this controversial diagnosis without first spending some time reflecting on and critically reviewing these findings, so that we can make informed and educated decisions regarding our clients’ plan of care.

Click HERE to read the second part of this post, which describes how SLPs SHOULD assess and treat children diagnosed by audiologists with (C)APD

References:

  • Burkard, R. (2009). Foreword. In A. Cacace & D. McFarland (Eds.), Controversies in central auditory processing disorder (pp. vii-viii). San Diego, CA: Plural.
  • Cowan, J., Rosen, S., & Moore, D. (2009). Putting the auditory back into auditory processing disorder in children. In Cacace, A., & McFarland, D. (Eds.),Controversies in central auditory processing disorder(pp. 187–197). San Diego, CA: Plural Publishing.
  • Dawes, P., & Bishop, D. (2009). Auditory processing disorder in relation to developmental disorders of language, communication and attention: A review and critique. International Journal of Language and Communication Disorders, 44, 440–465.
  • DeBonis, D. A. (2015) It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children. American Journal of Audiology. v. 24, 124-136.
  • Ferguson, M. A., Hall, R. L., Moore, D. R., & Riley, A. (2011). Communication, listening, cognitive and speech perception skills in children with auditory processing disorder (APD) or specific language impairment (SLI). Journal of Speech, Language, and Hearing Research, 54, 211–227.
  • Fey, M. E., Richard, G. J., Geffner, D., Kamhi, A. G., Medwetsky, L., Paul, D., Schooling, T. (2011). Auditory processing disorder and auditory/language interventions: An evidence-based systematic review. Language, Speech and Hearing Services in Schools, 42, 246–264.
  • Hazan, V., Messaoud-Galusi, S., Rosen, S., Nouwens, S., & Shakespeare, B. (2009). Speech perception abilities of adults with dyslexia: Is there any evidence for a true deficit? Journal of Speech, Language, and Hearing Research, 52, 1510–1529.
  • Kamhi, A. G. (2011). What speech-language pathologists need to know about auditory processing disorder. Language, Speech, and Hearing Services in Schools, 42, 265–272.
  • Kamhi, A & Wallach, G (2012) What Speech-Language Pathologists Need to Know about Auditory Processing Disorders. ASHA Convention Presentation. Atlanta, GA.
  • Kelly, A. S., Purdy, S. C., & Sharma, M. (2009). Comorbidity of auditory processing, language, and reading disorders. Journal of Speech, Language, and Hearing Research, 53, 706–722.
  • Lemos, I. C., Jacob, R. T., Gejao, M. G., et al. (2009). Frequency modulation (FM) system in auditory processing disorder: An evidence-based practice? Pró-Fono Produtos Especializados para Fonoaudiologia Ltda, 21(3), 243–248.
  • Melby-Lervåg, M., & Hulme, C. (2013). Is working memory training effective? A meta-analytic review. Developmental Psychology, 49, 270–291.
  • Ramus, F., White, S., & Frith, U. (2006). Weighing the evidence between competing theories of dyslexia. Developmental Science, 9, 265–269.
  • Sharma, M., Purdy, S. C., & Kelly, A. S. (2009). Comorbidity of auditory processing, language, and reading disorders. Journal of Speech, Language, and Hearing Research, 52, 706–722.
  • Wallach, G. P. (2011). Peeling the onion of auditory processing disorder: A language/curricular-based perspective. Language, Speech, and Hearing Services in Schools, 42, 273–285.
  • Watson, C., & Kidd, G. (2009). Associations between auditory abilities, reading, and other language skills in children and adults. In A. Cacace & D. McFarland (Eds.), Controversies in central auditory processing disorder (pp. 218–242). San Diego, CA: Plural.
  • Wilson, W. J., & Arnott, W. (2013). Using different criteria to diagnose (central) auditory processing disorder: How big a difference does it make? Journal of Speech, Language, and Hearing Research, 56, 63–70.

Special Education Disputes and Comprehensive Language Testing: What Parents, Attorneys, and Advocates Need to Know

Several years after I started my private speech pathology practice, I began performing comprehensive independent speech and language evaluations (IEEs).

For those of you who may be hearing the term IEE for the first time, an Independent Educational Evaluation is “an evaluation conducted by a qualified examiner who is not employed by the public agency responsible for the education of the child in question” (34 C.F.R. 300.503). IEEs can evaluate a broad range of functioning outside of cognitive or academic performance and may include neurological, occupational, speech-language, or any other type of evaluation, as long as it bears a direct impact on the child’s educational performance.

Independent evaluations can be performed for a wide variety of reasons, including but not limited to:

  • To determine the student’s present level of functioning
  • To determine whether the student presents with hidden, previously undiscovered deficits (e.g., executive function, social communication, etc.)
  • To determine whether the student’s educational classification requires a change
  • To determine if the student requires additional, previously not provided, related services (e.g., language therapy, etc.) or an increase in related services
  • To determine whether a student might benefit from the application of a particular therapy technique or program (e.g., Orton-Gillingham)
  • To determine whether a student with a severe impairment (e.g., severe emotional and behavioral disturbances, genetic syndrome, significant intellectual disability, etc.) is a good candidate for an out of district specialized school

Why can’t similar assessments be performed in school settings?

There are several reasons for that.

Why are IEEs Needed?

The answer to that is simple:  “To strengthen the role of parents in the educational decision-making process.” According to one Disability Rights site: “Many disagreements between parents and school staff concerning IEP services and placement involve, at some stage, the interpretation of evaluation findings and recommendations. When disagreements occur, the Independent Educational Evaluation (IEE) is one option lawmakers make available to parents, to help answer questions about appropriate special education services and placement“.

Indeed, many of the clients who retain my services also retain the services of educational advocates as well as special education lawyers. Many of them work on determining an appropriate level of services as well as an out-of-district placement for children with a variety of special education needs. However, one interesting recurring phenomenon I’ve noted over the years is that only a small percentage of special education lawyers, educational advocates, and even parents believed that children with autism spectrum disorders, genetic syndromes, social pragmatic deficits, emotional disturbances, or reading disabilities required a comprehensive language evaluation/reevaluation prior to determining an appropriate out-of-district placement or an in-district change of service provision.

So today I would like to make a case in favor of comprehensive independent language evaluations being a routine component of every special education dispute involving a child with impaired academic performance. I will do so by illustrating past case scenarios that clearly show that comprehensive independent language evaluations do matter, even when it doesn’t look like they are needed.

Case A: “He is just a weak student”.

Several years ago I was contacted by the parent of a 12-year-old boy who was concerned about his son’s continuously failing academic performance. The child had not qualified for an IEP but was receiving a 504 plan in the school setting and was reported to struggle significantly due to the continuous increase in academic demands with each passing school year. An in-district language evaluation had been performed several years prior. It showed that the student’s general language abilities were in the low average range of functioning, due to which he did not qualify for speech-language services in the school setting. However, based on a review of the available records, it very quickly became apparent that many of the academic areas in which the student struggled (e.g., reading comprehension, social pragmatic ability, critical thinking skills, etc.) were simply not assessed by the general language testing. I suggested to the parent a comprehensive language evaluation and explained on what grounds I was recommending this course of action. That comprehensive 4-hour assessment, broken into several testing sessions, revealed that the student presented with severe receptive, expressive, problem solving, and social pragmatic language deficits, as well as moderate executive function deficits, all of which required therapeutic intervention.

Prior to that assessment, the parent, reinforced by feedback from his child’s educational staff, believed his son to be an unmotivated student who failed to apply himself in the school setting. However, after the completion of that assessment, the parent clearly understood that it wasn’t his child’s lack of motivation which was impeding his academic performance; rather, a true learning disability was making it very difficult for his son to learn without the necessary related services and support. Several months after the appropriate related services were made available to the child in the school setting on the basis of the performed IEE, the parent reported significant progress in his child’s academic performance.

Case B: “She’s just not learning because of her behavior, so there’s nothing we can do”.  

This case involved a six-year-old girl who presented with a severe speech-language disorder and behavioral deficits in the school setting secondary to an intellectual disability of unspecified origin.

In contrast to Case A, this child had received a variety of assessments and therapies from a very early age; however, her parents were becoming significantly concerned regarding the regression of her academic functioning in the school setting and felt that a more specialized out-of-district program with a focus on multiple disabilities would be better suited to her needs. Unfortunately, the school disagreed with them and believed that she could be successfully educated in an in-district setting (despite evidence to the contrary). Interestingly, an in-depth comprehensive speech-language assessment had never been performed on this child because her functioning was considered to be “too low” for such an assessment.

A comprehensive assessment of this little girl’s abilities revealed that, via the application of a variety of behavioral management techniques (of non-ABA origin) and highly structured language input, she was indeed capable of significantly better performance than she had exhibited in the school setting. It stood to reason that if she were placed in a specialized school setting composed of educational professionals trained in dealing with her complex behavioral and communication needs, her performance would continue to steadily improve. Indeed, six months following the school transfer, her parents reported a “drastic” change pertaining to a significant reduction in challenging behavioral manifestations as well as a significant increase in her linguistic output.

Case C: “Your child can only learn so much because of his genetic syndrome”.  

This case scenario does not technically involve just one child but rather three different male students between 9 and 11 years of age with several ‘common’ genetic syndromes: Down syndrome, Fragile X, and Klinefelter syndrome. All three were different ages, came from completely different school districts, and were seen by me in different calendar years.

However, all three boys had one thing in common: because of their genetic syndromes, which were marked by varying degrees of intellectual disability as well as speech-language weaknesses, their parents were collectively told that very little could be done for them with regard to expanding their expressive language and literacy development.

Similar to the above scenarios, none of the children had undergone comprehensive language testing to determine their strengths, weaknesses, and learning styles. A comprehensive assessment of each student revealed that each had the potential to improve his expressive abilities and speak in compound and complex sentences. Dynamic assessment of literacy also revealed that it was possible to teach each of them how to read.

Following the respective assessments, some of these students became my private clients, while other students’ parents have periodically written to me, detailing their children’s successes over the years. Each parent has conveyed to me how “life-changing” a comprehensive IEE was for their child.

Case D: “Their behavior is just out of control”

The final case scenario I would like to discuss today involves several students with an educational classification of “Emotionally Disturbed” (pg 71). Those of you who are familiar with my blog and my work know that my main area of specialty is working with school-age students with psychiatric impairments and emotional behavioral disturbances. There are a number of reasons why I work with this challenging pediatric population. One very important reason is that these students continue to be grossly underserved in the school setting. Over the years I have written a variety of articles and blog posts citing a number of research studies which found that a significant number of students with psychiatric impairments and emotional behavioral disturbances present with undiagnosed linguistic impairments (especially in the area of social communication), which adversely impact their school-based performance.

Here, we are not talking about two or three students; rather, we are talking about double-digit numbers of students with psychiatric impairments and emotional disturbances who did not receive appropriate therapies in their respective school settings.

The majority of these students fell into two distinct categories. In the first category, students began to manifest moderate-to-severe speech-language deficits from a very early age. They were classified in preschool and began receiving speech-language therapy. However, by early elementary age, their general language abilities were found to be within the average range of functioning and their language therapies were discontinued. Unfortunately, since general language testing does not assess all categories of linguistic functioning, such as critical thinking, executive functions, social communication, etc., these students continued to present with hidden linguistic impairments, which continued to adversely impact their behavior.

Students in the second category also began displaying emotional and behavioral challenges from a very early age. However, in contrast to the students in the first category, the initial language testing found their general language abilities to be within the average range of functioning. As a result, these students never received any language-based therapies, and, similar to the students in the first category, their hidden linguistic impairments continued to adversely impact their behavior.

Students in both categories ended up following a very similar pattern of behavior. Their behavioral challenges in school continued to escalate. These were followed by a series of suspensions, out-of-district placements, and a myriad of psychiatric and neuropsychological evaluations, until many were placed on home instruction. The one vital element missing from all of these students’ case records was a comprehensive language evaluation with an emphasis on assessing their critical thinking, executive functions, and social communication abilities. Their worsening patterns of functioning were viewed as “severe misbehaving” without anyone suspecting that their hidden language deficits were a huge contributing factor to their maladaptive behaviors in the school setting.

Conclusion:

So there you have it!  As promised, I’ve used four vastly different scenarios that show you the importance of comprehensive language evaluations in situations where it was not so readily apparent that they were needed.  I hope that parents and professionals alike will find this post helpful in reconsidering the need for comprehensive independent evaluations for students presenting with impaired academic performance.


Part IV: Components of Comprehensive Dyslexia Testing – Writing and Spelling

Recently I began writing a series of posts on the topic of comprehensive assessment of dyslexia.

In part I of my post (HERE), I discussed common dyslexia myths as well as general language testing as a starting point in the dyslexia testing battery.

In part II (HERE) I detailed the next two steps in dyslexia assessment: phonological awareness and word fluency testing.

In part III (HERE) I discussed reading fluency and reading comprehension testing.

Today I would like to discuss part IV of comprehensive dyslexia assessment, which involves spelling and writing testing.

Spelling errors can tell us a lot about a child’s difficulties, which is why they are an integral component of the dyslexia assessment battery. A significant number of linguistic skills are involved in spelling. Good spellers have well-developed abilities in the following areas (Apel, 2006; Masterson, 2014; Wasowicz, 2015):

  1. Phonological Awareness – segmenting, sequencing, identifying and discriminating sounds in words.
  2. Orthographic Knowledge – knowledge of alphabetic principle, sound-letter relationships; letter patterns and conventional spelling rules
  3. Vocabulary Knowledge – knowledge of word meanings and how they can affect spelling
  4. Morphological Knowledge – knowledge of “word parts”: suffixes, prefixes, base words, word roots, etc.; understanding the semantic relationships between base word and related words; knowing how to make appropriate modifications when adding prefixes and suffixes
  5. Mental Orthographic Images of Words – clear and complete mental representations of words or word parts

By administering and analyzing spelling test results or spelling samples and quizzes, we can determine where students’ deficits lie and design appropriate interventions to improve knowledge and skills in the affected areas.

While there are a number of spelling assessments currently available on the market, I personally prefer the Test of Written Spelling – 5 (TWS-5) (Larsen, Hammill & Moats, 2013). The TWS-5 can be administered to students 6-18 years of age in about 20 minutes, in either individual or group settings. It has two forms, each containing 50 spelling words drawn from eight basal spelling series and graded word lists. You can use the results in several ways: to identify students with significant spelling deficits or to determine progress in spelling as a result of RTI interventions.

Now, let’s move on to assessments of writing. Here, we’re looking to assess a number of abilities, which include:

  • Mechanics – is there appropriate use of punctuation, capitalization, abbreviations, etc.?
  • Grammatical and syntactic complexity – are there word/sentence level errors/omissions? How is the student’s sentence structure?
  • Semantic sophistication – use of appropriate vs. immature vocabulary
  • Productivity – can the student generate enough sentences, paragraphs, etc.?
  • Cohesion and coherence – Is the writing sample organized? Does it flow smoothly? Does it make sense? Are the topic shifts marked by appropriate transitional words?
  • Analysis – can the student edit and revise his/her writing appropriately?

Again it’s important to note that much like the assessments of reading comprehension  there are no specific tests which can assess this area adequately and comprehensively.  Here, a combination of standardized tests, informal assessment tasks as well as analysis of the students’ written classroom output is recommended.


For standardized assessment purposes, clinicians can select the Test of Early Written Language – Third Edition (TEWL-3) or the Test of Written Language – Fourth Edition (TOWL-4).

The TEWL-3, for children 4-12 years of age, takes on average 40 minutes to administer (between 30 and 50 minutes) and examines the following skill areas:

Basic Writing. This subtest consists of 70 items ordered by difficulty, which are scored as 0, 1, or 2. It measures a child’s understanding of language including their metalinguistic knowledge, directionality, organizational structure, awareness of letter features, spelling, capitalization, punctuation, proofing, sentence combining, and logical sentences. It can be administered independently or in conjunction with the Contextual Writing subtest.

Contextual Writing. This subtest consists of 20 items that are scored 0 to 3. Two sets of pictures are provided, one for younger children (ages 5-0 through 6-11) and one for older children (ages 7-0 through 11-11). This subtest measures a child’s ability to construct a story given a picture prompt. It measures story format, cohesion, thematic maturity, ideation, and story structure. It can be administered independently or in conjunction with the Basic Writing subtest.

Overall Writing. This index combines the scores from the Basic Writing and Contextual Writing subtests. It is a measure of the child’s overall writing ability; students who score high on this quotient demonstrate strengths in composition, syntax, mechanics, fluency, cohesion, and the text structure of written language. This score can only be computed if the child completes both subtests and is at least 5 years of age.

The TOWL-4, for students 9-18 years of age, takes between 60 and 90 minutes to administer (often longer) and examines the following skill areas:

  1. Vocabulary – The student writes a sentence that incorporates a stimulus word. E.g.: For ran, a student writes, “I ran up the hill.”
  2. Spelling – The student writes sentences from dictation, making proper use of spelling rules.
  3. Punctuation – The student writes sentences from dictation, making proper use of punctuation and capitalization rules.
  4. Logical Sentences – The student edits an illogical sentence so that it makes better sense. E.g.:  “John blinked his nose” is changed to “John blinked his eye.”
  5. Sentence Combining – The student integrates the meaning of several short sentences into one grammatically correct written sentence. E.g.:  “John drives fast” is combined with “John has a red car,” making “John drives his red car fast.”
  6. Contextual Conventions – The student writes a story in response to a stimulus picture. Points are earned for satisfying specific arbitrary requirements relative to orthographic (E.g.: punctuation, spelling) and grammatic conventions (E.g.: sentence construction, noun-verb agreement).
  7. Story Composition – The student’s story is evaluated relative to the quality of its composition (E.g.: vocabulary, plot, prose, development of characters, and interest to the reader).

It has 3 composites:

  1. Overall Writing – results of all seven subtests
  2. Contrived Writing – results of the 5 contrived subtests
  3. Spontaneous Writing – results of the 2 spontaneous writing subtests

However, for the purposes of a comprehensive assessment, only select portions of the above tests may need to be administered, since other overlapping areas (e.g., spelling, punctuation, etc.) may have already been assessed by other tests, analyzed via a review of the student’s written classroom assignments, or encompassed by educational testing.

Posted on 8 Comments

Part III: Components of Comprehensive Dyslexia Testing – Reading Fluency and Reading Comprehension

Recently I began writing a series of posts on the topic of comprehensive assessment of dyslexia.

In part I of my post (HERE), I discussed common dyslexia myths as well as general language testing as a starting point in the dyslexia testing battery.

In part II I detailed the next two steps in dyslexia assessment: phonological awareness and word fluency testing (HERE).

Today I would like to discuss part III of comprehensive dyslexia assessment, which covers reading fluency and reading comprehension testing.

Let’s begin with reading fluency testing, which assesses the students’ ability to read word lists or short paragraphs with appropriate speed and accuracy. Here we are looking for how many words the student can accurately read per minute orally and/or silently (see several examples  of fluency rates below).

Average End-of-Year Reading Rate Ranges (Grades 1-8)
Grade Oral rates (wpm) Silent rates (wpm)
1 45-85 50-90
2 80-120 95-145
3 95-135 120-170
4 110-150 135-185
5 125-155 150-200
6 135-160 160-210
7 145-160 170-220
8 145-160 180-230

Source: Morris, D. (2008). Diagnosis and correction of reading problems. New York: Guilford.
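
For readers who like to see the arithmetic spelled out, below is a minimal sketch (in Python, with invented numbers) showing how a words-correct-per-minute rate can be calculated from a timed passage reading and checked against the oral ranges in the table above. The function names and the sample student data are purely illustrative; they are not part of any published test’s scoring procedure.

```python
# Hypothetical sketch: computing an oral reading rate (words correct per minute)
# and checking it against the end-of-year oral ranges from the table above.
# The student data below is invented for illustration purposes only.

ORAL_WPM_RANGES = {  # grade: (low, high), taken from the Morris (2008) table above
    1: (45, 85), 2: (80, 120), 3: (95, 135), 4: (110, 150),
    5: (125, 155), 6: (135, 160), 7: (145, 160), 8: (145, 160),
}

def words_correct_per_minute(words_attempted: int, errors: int, seconds: float) -> float:
    """Words read correctly, scaled to a one-minute rate."""
    return (words_attempted - errors) / (seconds / 60.0)

def flag_oral_rate(grade: int, wcpm: float) -> str:
    """Compare a student's rate to the expected end-of-year range for the grade."""
    low, high = ORAL_WPM_RANGES[grade]
    if wcpm < low:
        return f"below the expected {low}-{high} wpm range for grade {grade}"
    if wcpm > high:
        return f"above the expected {low}-{high} wpm range for grade {grade}"
    return f"within the expected {low}-{high} wpm range for grade {grade}"

# Example: a 4th grader reads a 180-word passage in 2 minutes with 12 errors.
rate = words_correct_per_minute(180, 12, 120)   # 84.0 words correct per minute
print(round(rate, 1), "-", flag_oral_rate(4, rate))
```

In practice, of course, the clinician counts the errors by hand while the student reads aloud; the point here is simply to show how the raw numbers relate to the grade-level ranges.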

Research indicates that oral reading fluency (ORF) on passages is more strongly related to reading comprehension than ORF on word lists. This is an important factor to consider when selecting an oral fluency test.

Oral reading fluency tests are significant for a number of reasons. Firstly, they allow us to identify students with impaired reading accuracy. Secondly, they allow us to identify students who can decode words with relative accuracy but who cannot comprehend what they read due to significantly decreased reading speed. When you ask such children, “What did you read about?” they will frequently respond, “I don’t remember because I was so focused on reading the words correctly.”

One example of a popular oral reading fluency test (employing reading passages) is the Gray Oral Reading Tests-5 (GORT-5). It yields scores for the student’s:

  • Rate
  • Accuracy
  • Fluency
  • Comprehension
  • Oral Reading Index (a composite score based on Fluency and Comprehension scaled scores)

Another type of reading fluency measure is the test of silent reading fluency. Assessments of silent reading fluency can be selectively useful for identifying older students with reading difficulties and for monitoring their progress. One obvious advantage of silent reading tests is that they can be administered in a group setting to multiple students at once and generally take just a few minutes to administer, which is significantly less time than oral reading measures take when administered to individual students.

Below are several examples of silent reading tests/subtests.

The Test of Silent Word Reading Fluency (TOSWRF-2) presents students with rows of words printed without spaces and ordered by reading difficulty (e.g., dimhowfigblue). Students are given 3 minutes to draw lines between the boundaries of as many words as possible (e.g., dim/how/fig/blue).

The Test of Silent Contextual Reading Fluency (TOSCRF-2) presents students with text passages printed entirely in uppercase letters, with no punctuation and no spaces between words or sentences, and asks them to use dashes to separate as many words as possible within a 3-minute period.

Similar to the TOSCRF-2, the Contextual Fluency subtest of the Test of Reading Comprehension – Fourth Edition (TORC-4) measures the student’s ability to recognize individual words in a series of passages (taken from the TORC-4 Text Comprehension subtest) within a period of 3 minutes. Each passage, printed in uppercase letters without punctuation or spaces between words, becomes progressively more difficult in content, vocabulary, and grammar. As students read the segments, they draw a line between as many words as they can in the time allotted (e.g., THE|LITTLE|DOG|JUMPED|HIGH).
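
To make the format of these timed word-boundary tasks a bit more concrete, here is a small, purely illustrative sketch that checks a student’s marked boundaries against an answer key. The item, the student response, and the scoring logic below are hypothetical and are not the publishers’ actual scoring procedures.

```python
# Hypothetical illustration of a word-boundary item in the style of the
# TOSWRF-2/TOSCRF-2 format described above. The student sees an unspaced string
# and marks boundaries; we count how many marked boundaries match the key.
# This is NOT the publishers' scoring procedure, just a way to visualize the task.

def boundary_positions(segmented: str, sep: str = "/") -> set[int]:
    """Character positions (in the unspaced string) where a boundary was marked."""
    positions, offset = set(), 0
    for chunk in segmented.split(sep)[:-1]:
        offset += len(chunk)
        positions.add(offset)
    return positions

answer_key = "dim/how/fig/blue"   # item borrowed from the example above
student    = "dim/howfig/blue"    # the student missed one boundary

correct = boundary_positions(student) & boundary_positions(answer_key)
print(f"{len(correct)} of {len(boundary_positions(answer_key))} boundaries marked correctly")
```

In an actual administration the responses are, of course, marked and tallied by hand from the student’s booklet within the 3-minute limit; the sketch only shows the kind of comparison involved.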

However, it is important to note that oral reading fluency is a better predictor of reading comprehension than silent reading fluency for younger students (early elementary age). In contrast, silent reading measures are more strongly related to reading comprehension in middle school (e.g., grades 6-8), but only for skilled vs. average readers, which is why oral reading fluency measures are probably much better predictors of deficits in this area in children with suspected reading disabilities.

Now let’s move on to the reading comprehension testing, which is an integral component for any dyslexia testing battery. Unfortunately it is also the most trickiest. Here’s why.

Many children with reading difficulties will be able to read and comprehend short paragraphs containing factual information of reduced complexity. However, this changes dramatically when it comes to comprehension of longer, more complex, and increasingly abstract age-level text. While a number of tests do assess reading comprehension, none of them truly adequately assess the student’s ability to comprehend abstract information.

For example, on the Reading Comprehension subtest of the CELF-5, students are allowed to keep the text and refer to it when answering questions. Such an option will inflate students’ scores and will not provide an accurate picture of their comprehension abilities.

To continue, the GORT-5 contains reading comprehension passages with questions that students must answer after the stimulus booklet has been removed from them. However, the passages are far more simplistic than the academic texts the students need to comprehend on a daily basis, so students may do well on this test yet still present with significant comprehension deficits.

The same could be said for the text comprehension components of major educational testing batteries, such as the Woodcock-Johnson IV Passage Comprehension subtest, which gives the student sentences with a missing word and asks the student to orally provide that word. However, filling in a missing word does not text comprehension make.

Likewise, the Reading Comprehension subtest of the Wechsler Individual Achievement Test – Third Edition (WIAT-III) is very similar to the CELF-5’s: the student is asked to read a passage and answer questions by referring back to the text. However, just because a student can look up the answers in the text does not mean that they actually understand it.

So what can be done to accurately assess the student’s ability to comprehend abstract grade-level text? My recommendation is to go informal. Select grade-level passages from the student’s curriculum pertaining to science, social studies, geography, etc. (vs. language arts, which tends to be more simplistic) and ask the student to read them and answer factual questions regarding supporting details as well as non-factual questions relevant to main ideas and implied messages.

Posted on 10 Comments

Part II: Components of Comprehensive Dyslexia Testing – Phonological Awareness and Word Fluency Assessment

A few days ago I posted my first installment in the comprehensive assessment of dyslexia series, discussing common dyslexia myths as well as general language testing as a starting point in the dyslexia testing battery. (You can find this post HERE.)

Today I would like to discuss the next two steps in dyslexia assessment, which are phonological awareness and word fluency testing.

Let’s begin with phonological awareness (PA). Phonological awareness is a precursor to emergent reading. It allows children to understand and manipulate sounds in order to form or breakdown words. It’s one of those interesting types of knowledge, which is a prerequisite to everything and is definitive of nothing. I like to compare it to taking a statistics course in college. You need it as a prerequisite to entering a graduate speech pathology program but just because you successfully complete it does not mean that you will graduate the program.  Similarly, the children need to have phonological awareness mastery in order to move on and build upon existing skills to become emergent readers, however, simply having this mastery does not a good reader make (hence this is only one of the tests in dyslexia battery).

Poor phonological awareness for a child’s age is a red flag for reading disabilities. Thus, it is very important to assess the child’s ability to successfully manipulate sounds (e.g., by isolating, segmenting, blending, etc.) in order to produce real or nonsense words.

Why are nonsense words important?

According to Shaywitz (2003), “The ability to read nonsense words is the best measure of phonological decoding skill in children” (pp. 133-134). Being able to decode and manipulate (blend, segment, etc.) nonsense words is a good indication that the child is acquiring comprehension of the alphabetic principle (i.e., understands sound-letter correspondence, or which common sounds are made by specific letters). It is a very important part of a dyslexia battery, since nonsense words cannot be memorized or guessed but need to be “truly decoded.”

While a number of standardized tests assess phonological awareness skills, my personal preference is the Comprehensive Test of Phonological Processing-2 (CTOPP-2), which assesses the following areas:

  • Phonological Segmentation
  • Blending Words
  • Sound Matching
  • Initial, Medial and Final Phoneme Isolation
  • Blending Nonwords 
  • Segmenting Nonwords 
  • Memory for Digits
  • Nonword Repetition 
  • Rapid Digit Naming 
  • Rapid Letter Naming 
  • Rapid Color Naming 
  • Rapid Object Naming 

As you can see from the above description, it not only assesses children’s ability to manipulate real words but also their ability to manipulate nonsense words. It also assesses word fluency skills via a host of rapid naming tasks, so it is a very convenient tool to have as part of your dyslexia testing battery.

This brings us to another integral part of the dyslexia testing battery: word fluency (WF) testing. During word fluency tasks, a child is asked to rapidly generate words on a particular topic under timed constraints (e.g., name as many animals as you can in 1 minute). We test this rapid naming ability because we want to see how quickly and accurately the child can process information, an ability that is very much needed to become a fluent reader.

Poor readers may be able to name a number of items, but they may not be able to efficiently categorize these words, and they will produce the items with significantly decreased processing speed as compared to good readers. Decreased word fluency is a significant indicator of reading deficits and is frequently observable in children with reading disabilities when they encounter a text with which they lack familiarity. That is why this ability is very important to test.
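
As a generic illustration of what scoring a one-minute category fluency task involves (and not the scoring rules of the CTOPP-2, RAN/RAS, or any other published measure), one might count only unique, on-category responses, as in the sketch below; the category list and the student responses are invented.

```python
# Generic sketch of scoring a 1-minute semantic fluency task ("name as many
# animals as you can"): count unique responses that belong to the category,
# ignoring repetitions and off-category words. The data below is invented and
# this is not the scoring procedure of any published test.

ANIMALS = {"dog", "cat", "horse", "cow", "lion", "tiger", "bear", "fish", "bird"}

def fluency_score(responses: list[str]) -> int:
    """Count unique, on-category responses."""
    seen, score = set(), 0
    for word in (r.strip().lower() for r in responses):
        if word in ANIMALS and word not in seen:   # on-category and not a repeat
            seen.add(word)
            score += 1
    return score

# Example: the repeated "dog" and the off-category "tree" are not counted.
print(fluency_score(["dog", "cat", "dog", "tree", "lion"]))  # -> 3
```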

Several tests can be used for this purpose, including the CTOPP-2 and the Rapid Automatized Naming and Rapid Alternating Stimulus Tests (RAN/RAS), just to name a few. However, since the CTOPP-2 already has a number of subtests which target this skill, I prefer to use it to assess both phonological awareness and word fluency.

Read part III of this series which discusses components of Reading Fluency and Reading Comprehension testing HERE.


Posted on 12 Comments

Components of Comprehensive Dyslexia Testing: Part I- Introduction and Language Testing

With the passing of dyslexia laws in the state of New Jersey in 2014, there has been an increased focus on reading disabilities and dyslexia, particularly in the areas of effective assessment and remediation. More and more parents and health-related professionals are looking to understand the components of effective dyslexia testing and who is qualified to perform it. So I decided to write a multi-part series on the components of comprehensive dyslexia testing in order to help parents and professionals better understand the steps of the testing process.

In this particular post I would like to accomplish two things: dispel several common myths regarding dyslexia testing and discuss the first step of SLP-based testing, which is a language assessment.

Myth 1: Dyslexia can be diagnosed based on a single test!

DYSLEXIA CANNOT BE CONFIRMED BY THE ADMINISTRATION OF ONE SPECIFIC TEST. A comprehensive battery of tests from multiple professionals, including neuropsychologists, psychologists, learning specialists, speech-language pathologists, and even occupational therapists, needs to be administered in order to confirm the presence of reading-based disabilities.

Myth 2: A doctor can diagnose dyslexia!

A doctor does not have adequate training to diagnose learning disabilities, just as a doctor cannot diagnose speech and language problems. Both lie squarely outside their scope of practice! A doctor can listen to parental concerns and suggest an appropriate plan of action (recommend relevant assessments), but they cannot diagnose dyslexia, a diagnosis which is made on the basis of team assessments.

Myth 3: Speech Pathologists cannot perform dyslexia testing!

SPEECH-LANGUAGE PATHOLOGISTS TRAINED IN THE IDENTIFICATION OF READING AND WRITING DISORDERS ARE FULLY QUALIFIED TO PERFORM SIGNIFICANT PORTIONS OF THE DYSLEXIA BATTERY.

So what are the dyslexia battery components?

Prior to initiating an actual face-to-face assessment with the child, we need to take a thorough case history (example HERE) in order to determine any pre-existing risk factors. Dyslexia risk factors may include (but are not limited to):

  • A family history of language and learning difficulties
  • A history of language delay (impaired memory, attention, grammar, syntax, sentence repetition ability, etc.)
  • A history of impaired phonological awareness skills (difficulty remembering children’s songs, recognizing and making rhymes, confusing words that sound alike, etc.)

After that, we need to perform language testing to determine whether the child presents with any deficits in that area. Please note that while children with language impairments are at significant risk for dyslexia, not all children with dyslexia present with language impairments. In other words, the child may be cleared by language testing but still present with a significant reading disability, which is why comprehensive language testing is only the first step in the dyslexia assessment battery.

LANGUAGE TESTING

Here we are looking to assess the child’s listening comprehension, processing skills, and verbal expression in the form of conversational and narrative competencies. Oral language is the prerequisite to reading and writing. So a single vocabulary test, a grammar completion task, or even a sentence formulation activity is simply not going to count as a comprehensive assessment.

In children without obvious linguistic deficits such as limited vocabulary, difficulty following directions, or grammatical/syntactic errors (which, of course, you’ll still need to test), I like to use the following tasks, which are sensitive to language impairment:

Listening Comprehension (with a verbal response component)

  • Here it is important to assess the student’s ability to listen to short passages and answer a variety of story related questions vs. passively point at 1 of 4 pictures depicting a particular sentence structure (e.g., Point to the picture which shows: “The duck was following the girl”). I personally like to use the Listening Comprehension Tests for this task but any number of subtests from other tests have similar components.

Semantic Flexibility

  • Here it is important to assess the student’s vocabulary ability via manipulation of words to create synonyms, antonyms, multiple meaning words, definitions, etc. For this task I like to use the WORD Tests (3-Elementary and 2-Adolescent).

Narrative Production:

  • A hugely important part of a language assessment is an informal, spontaneously produced narrative sample summarizing a book or a movie.  Just one few-minute narrative sample can yield information on the following:
  • Sequencing Ability
  • Working Memory
  • Grammar
  • Vocabulary
  • Pragmatics and perspective taking
  • Story grammar (Stein & Glenn, 1979)

Usually I don’t like to use any standardized testing for assessment of this skill but use the parameters from the materials I created myself based on existing narrative research (click HERE).

Social Pragmatic Language

  • Given my line of work (a school in an outpatient psychiatric setting), no testing is complete without some form of social pragmatic language assessment in order to determine whether the student presents with hidden social skill deficits. I’ve seen, time and time again, students acing the general language testing only to bomb the social pragmatic tasks, which is why this should be a mandatory part of every language evaluation in my eyes. Here, a variety of choices exists. For quick results I typically tend to use the Social Language Development Tests as well as portions of the Social Thinking Dynamic Assessment Protocol®.

Not sure what type of linguistic deficits your student is displaying? Grab a relevant checklist and ask the student’s teacher and parent to fill it out (click HERE to see the types of available checklists).

So there you have it! The first installment on comprehensive dyslexia testing is complete.

READ part II which discusses components of Phonological Awareness and Word Fluency testing HERE

Read part III of this series which discusses components of Reading Fluency and Reading Comprehension testing HERE.


Posted on Leave a comment

Language Processing Deficits (LPD) Checklist for School Aged Children

Need a Language Processing Deficits Checklist for School Aged Children?

You can find it in my online store HERE

This checklist was created to assist speech-language pathologists (SLPs) in figuring out whether a student presents with language processing deficits that require further follow-up (e.g., screening, comprehensive assessment). The SLP should provide this form to both the teacher and the caregiver(s) to fill out, to ensure that the deficit areas are consistent across settings and people.

Checklist Categories:

  • Listening Skills and Short Term Memory
  • Verbal Expression
  • Emergent Reading/Phonological Awareness
  • General Organizational Abilities
  • Social-Emotional Functioning
  • Behavior
  • Supplemental* Caregiver/Teacher Data Collection Form
  • Select assessments sensitive to Auditory Processing Deficits