The Limitations of Using Total/Core Scores When Determining Speech-Language Eligibility

In both of the settings where I work, a psychiatric outpatient school and a private practice, I spend a fair amount of time reviewing speech-language evaluation reports. As I look through these reports, I see that many examiners base their decisions regarding speech-language services eligibility on the students' core, index, or total scores, which are composite scores. For those unfamiliar with the term, composite scores are standard scores based on the sum of various subtest scaled scores.

When the student displays average abilities on all of the administered subtests, the composite score clearly indicates that the child does not present with deficits and is therefore not eligible for therapy services.

The same goes for the reverse: when the child displays a pattern of deficits that places the total score well below the average range of functioning, the composite score again correctly indicates that the child is performing poorly and requires therapy services.

However, there is also a third scenario, which presents a cause for concern: when a student displays a pattern of strengths and weaknesses across a variety of subtests but ends up with an average/low-average total score, making him or her ineligible for services.

Results of the Test of Problem Solving-3 Elementary (TOPS-3)

| Subtest | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Making Inferences | 19 | 83 | 12 | Below Average |
| Sequencing | 22 | 86 | 17 | Low Average |
| Negative Questions | 21 | 95 | 38 | Average |
| Problem Solving | 21 | 90 | 26 | Average |
| Predicting | 18 | 92 | 29 | Average |
| Determining Causes | 13 | 82 | 11 | Below Average |
| Total Test | 114 | 86 | 18 | Low Average |

Results of the Test of Reading Comprehension-Fourth Edition (TORC-4)

| Subtest | Raw Score | Scaled/Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Relational Vocabulary | 24 | 9 | 37 | Average |
| Sentence Completion | 25 | 9 | 37 | Average |
| Paragraph Construction | 41 | 12 | 75 | Average |
| Text Comprehension | 21 | 7 | 16 | Below Average |
| Contextual Fluency | 86 | 6 | 9 | Below Average |
| Reading Comprehension Index | – | 90 | – | Average |

The above tables, taken from two different evaluations, perfectly illustrate this scenario. While both total/index scores are within the average range, the first student displayed a pattern of strengths and weaknesses across various subtests of the TOPS-3, and the second displayed a similar performance pattern on the TORC-4.

Typically in such cases clinical judgment dictates a number of options:

  1. Administration of another standardized test further probing into related areas of difficulty (e.g., in such situations, the administration of a standardized social pragmatic test may reveal a significant pattern of weaknesses, which would confirm the student's eligibility for language therapy services).
  2. Administration of informal/dynamic assessments/procedures further probing into the student’s critical thinking/verbal reasoning skills.

Here is the problem, though: I only see the above follow-up steps taken in a small percentage of cases. In the vast majority of cases in which score discrepancies occur, the examiners ignore the weaknesses without follow-up. This, of course, results in the child not qualifying for services.

So why do such practices take place so frequently? Is it because SLPs want to deny children services? Not at all! The vast majority of SLPs I have had the pleasure of interacting with are deeply caring and concerned individuals who only want what's best for the student in question. Oftentimes, I believe, the problem lies with misinterpretation of, or rigid adherence to, the state educational code.

For example, most NJ SLPs know that the New Jersey State Education Code dictates that initial eligibility must be determined via two standardized tests on which the student performs 1.5 standard deviations below the mean (or below the 10th percentile). Based on such phrasing, it is reasonable to assume that any child whose total scores on two standardized tests fall above the 10th percentile will not qualify for services. Yet this is completely incorrect!

Let’s take a closer look at the clarification memo issued on October 6, 2015, by the New Jersey Department of Education in response to misinterpretation of the NJ education code. Here is what it actually states:

“In accordance with this regulation, when assessing for a language disorder for purposes of determining whether a student meets the criteria for communication impaired, the problem must be demonstrated through functional assessment of language in other than a testing situation and performance below 1.5 standard deviations, or the 10th percentile on at least two standardized language tests, where such tests are appropriate, one of which shall be a comprehensive test of both receptive and expressive language.”

“When implementing the requirement with respect to “standardized language tests,” test selection for evaluation or reevaluation of an individual student is based on various factors, including the student’s ability to participate in the tests, the areas of suspected language difficulties/deficits (e.g., morphology, syntax, semantics, pragmatics/social language) and weaknesses identified during the assessment process which require further testing, etc. With respect to test interpretation and decision-making regarding eligibility for special education and related services and eligibility for speech-language services, the criteria in the above provision do not limit the types of scores that can be considered (e.g., index, subtest, standard score, etc.).”

Firstly, it emphasizes functional assessment. This does not mean that assessment should be exclusively standardized; rather, it calls for the most appropriate procedures for the student in question, be they standardized or nonstandardized.

Secondly, it does not limit standardized assessment to two tests only. Rather, it uses the phrase “at least” to emphasize the minimum number of tests needed.

Thirdly, it explicitly references following up on any weaknesses displayed by the student during standardized testing in order to get to the root of the problem.

Fourthly, it specifies that SLPs must assess all displayed areas of difficulty (e.g., social communication) rather than assessing general language abilities only.

Finally, it explicitly points out that SLPs cannot limit their test interpretation to total scores but must look at the testing results holistically, taking into consideration the student's entire assessment performance.

The problem is that if SLPs look only at total/core scores, numerous children with linguistically based deficits will fall through the cracks. We are talking about children with social communication deficits, children with reading disabilities, children with general language weaknesses, etc. These students may display average total scores, yet they may also display significant subtest weaknesses. Unless these weaknesses are accounted for and remediated, they are not going to magically disappear or resolve on their own. In fact, both research and clinical judgment dictate that these weaknesses will worsen over time and will continue to adversely impact both social communication and academics.

So the next time you see a pattern of strengths and weaknesses in testing, even if it amounts to an average total score, I urge you to dig deeper. I urge you to investigate why this pattern is displayed in the first place. The same goes for you, parents! If you are looking at average total scores but seeing unexplained weaknesses in select testing areas, start asking questions! Ask the professional to explain why those deficits are occurring, and tell them to dig deeper if you are not satisfied with what you are hearing. All students deserve access to FAPE (a Free and Appropriate Public Education). This includes access to the appropriate therapies they may need in order to function optimally in the classroom.

I urge my fellow SLPs to carefully study their respective state codes, as well as to know who their state educational representatives are. These are the professionals SLPs can contact with questions regarding educational code clarification. For example, the SEACDC Consultant for the state of New Jersey is currently Fran Liebner (phone: 609-984-4955; fax: 609-292-5558; e-mail: [email protected]).

However, the Department of Education is not the only place SLPs can contact in their state. Numerous state associations have worked diligently on behalf of SLPs, liaising with their departments of education to provide access to up-to-date information pertinent to school services. In the state of New Jersey, the School Affairs Committee (SAC) of the New Jersey Speech-Language-Hearing Association (NJSHA) has developed a number of documents of interest to school-based SLPs, which can be found HERE.

For those SLPs located in states other than New Jersey, ASHA helpfully provides contact information by state HERE.

When it comes to score interpretation, a variety of options are available to SLPs in addition to a detailed reading of the test manual. We can use them to ensure that the students we serve experience optimal success in both social and academic settings.

Why Are My Child’s Test Scores Dropping?

“I just don’t understand,” says a bewildered parent. “She’s receiving so many different therapies and tutoring every week, but her scores on educational, speech-language, and psychological testing just keep dropping!”

I hear a variation of this comment far too frequently, in both my private practice and the hospital-based outpatient school, from parents looking for an explanation for the decline in their children's standardized test scores in both cognitive (IQ) and linguistic domains. That is why today I wanted to take a moment to write this blog post explaining a few reasons behind this phenomenon.

Children with language impairments represent a highly diverse group, which exists along a continuum. Some children's deficits may be mild, while others' are far more severe. Some children may receive very few intervention services and thrive academically, while others can receive an inordinate amount of intervention and still benefit from it only minimally. In very simplistic terms, the above is due to the interaction between two significant influences: the child's (1) genetic makeup and (2) environmental factors.

There is a reason why language disorders are considered developmental. Firstly, these difficulties are apparent from a young age, when the child's language is just beginning to develop. Secondly, the trajectory of the child's language deficits develops along with the child and can progress or lag based on the child's genetic predisposition, resiliency, parental input, and schooling and academically based interventions.

Let us discuss some of the reasons why standardized testing results may decline for select students who are receiving a variety of support services and interventions.

Ineffective Interventions due to Misdiagnosis 

Sometimes, lack of appropriate/relevant intervention may be responsible for the decline. Take the misdiagnosis of alcohol-related deficits as autism, which I have frequently encountered in my private practice when performing second-opinion testing and consultations. Unfortunately, the above is not uncommon. Many children with alcohol-related impairments present with significant social-emotional dysregulation coupled with significant externalizing behavior manifestations. As a result, without a thorough differential diagnosis, they may be diagnosed with ASD and then provided with ABA therapy services for years with little to no benefit.

Ineffective Interventions due to Lack of Comprehensive Testing 

Let us examine another example: a student with average intelligence but poor reading performance. The student may do well in school up to a certain grade but then begin to flounder academically. Because only the student's reading abilities 'seem' to be adversely impacted, no comprehensive language and literacy evaluations are performed. The student may receive undifferentiated extra reading support in school while his scores continue to drop.

Once the situation 'gets bad enough', the student's language and literacy abilities may be comprehensively assessed. In the vast majority of situations, these types of assessments yield the following results:

  1. The student’s oral language expression as well as higher order language abilities are adversely affected and require targeted language intervention
  2. The undifferentiated reading intervention provided to the student was NOT targeting actual areas of weaknesses

As can be seen from the above examples, targeted intervention is hugely important, and its absence may, in a number of cases, be responsible for the student's declining performance. However, that is not always the case.

What if it were definitively confirmed that the student was diagnosed appropriately and was receiving quality services, but still continued to decline academically? What then?

Well, we know that many children with genetic disorders (Down syndrome, Fragile X, etc.) as well as intellectual disabilities (ID) can make incredibly impressive gains in a variety of developmental areas (e.g., gross/fine motor skills, speech/language, socio-emotional, ADL, etc.), but their gains will not be on par with those of peers without these diagnoses.

The situation becomes much more complicated when children without ID (or with mild intellectual deficits) and varying degrees of language impairment receive effective therapies and work very hard in therapy, yet continue to fall perpetually behind their peers when it comes to making academic gains. This occurs because of a phenomenon known as Cumulative Cognitive Deficit (CCD).

The Effect of Cumulative Cognitive Deficit (CCD) on Academic Performance 

According to Gindis (2005), CCD “refers to a downward trend in the measured intelligence and/or scholastic achievement of culturally/socially disadvantaged children relative to age-appropriate societal norms and expectations” (p. 304). Gindis further elucidates by quoting Sattler (1992): “The theory behind cumulative deficit is that children who are deprived of enriching cognitive experiences during their early years are less able to profit from environmental situations because of a mismatch between their cognitive schemata and the requirements of the new (or advanced) learning situation” (pp. 575-576).

So who are the children potentially at risk for CCD?

One such group is internationally (and domestically) adopted children, as well as children in foster care. A number of studies show that due to early-life hardships associated with prenatal trauma (e.g., maternal substance abuse, lack of adequate prenatal care, etc.) as well as postnatal stress (e.g., the adverse effects of institutionalization), many of these children have much poorer social and academic outcomes, despite being adopted by well-to-do, educated parents who provide them with exceptional care in all aspects of their academic and social development.

Another group is children with diagnosed/suspected psychiatric impairments and concomitant overt/hidden language deficits. Depending on the degree and persistence of the psychiatric impairment, in addition to having only intermittent access to classroom academics and therapy interventions, the quality of their therapy may be affected by the course of their illness. Combined with the sporadic nature of the interventions, this may result in their falling further and further behind their peers with respect to social and academic outcomes.

A third group (as mentioned previously) is children with genetic syndromes, neurodevelopmental disorders (e.g., autism), and intellectual disabilities. Here, it is very important to state explicitly that children with diagnosed or suspected alcohol-related deficits (FASD) are particularly at risk due to the lack of consensus/training regarding FAS detection/diagnosis. Consequently, these children may evidence a steady 'decline' on standardized testing despite exhibiting steady functional gains in therapy.

Brief Standardized Testing Score Tutorial:

When we look at norm-referenced testing results, score interpretation can be quite daunting. For the sake of simplicity, I'd like to restrict this discussion to two types of scores: raw scores and standard scores.

The raw score is the number of items the child answered correctly on a test or subtest. However, raw scores need to be interpreted to be meaningful. For example, a 9-year-old student may attain a raw score of 12 on a subtest of a particular test (e.g., the Listening Comprehension Test-2, or LCT-2). Without more information, this raw score has no meaning. If the subtest consisted of 15 questions, a raw score of 12 might be average. Alternatively, if the subtest had 36 questions (e.g., on the Test of Problem Solving-3, or TOPS-3), a raw score of 12 might be significantly below average.

Consequently, the raw score needs to be converted to a standard score. Standard scores compare the student’s performance on a test to the performance of other students his/her age.  Many standardized language assessments have a mean of 100 and a standard deviation of 15. Thus, scores between 85 and 115 are considered to be in the average range of functioning.
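For readers who like to see the arithmetic, here is a minimal sketch of how standard scores relate to percentile ranks. It assumes a normal distribution with a mean of 100 and a standard deviation of 15; actual test manuals use their own norm tables, so treat this as an approximation, not a replacement for the manual.

```python
from math import erf, sqrt

def percentile_rank(standard_score, mean=100.0, sd=15.0):
    """Approximate percentile rank of a standard score, assuming
    scores follow a normal distribution with the given mean and SD."""
    z = (standard_score - mean) / sd          # distance from the mean in SD units
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF, expressed as a percentage

# A standard score of 100 sits at the 50th percentile,
# and 85 (one SD below the mean) near the 16th:
print(round(percentile_rank(100)))  # → 50
print(round(percentile_rank(85)))   # → 16
```

Note that a standard score of 86 lands at roughly the 18th percentile under this assumption, which matches the TOPS-3 total score row in the table earlier in this post.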

Now let's discuss testing performance variation across time. Let's say an 8-year-old student took the above-mentioned LCT-2 and attained poor standard scores on all subtests. That student qualifies for services and receives them for a period of one year. The LCT-2 is then re-administered and, much to the parents' surprise, the student's standard scores appear to be even lower than when he took the test as an 8-year-old (illustration below).

Results of The Listening Comprehension Test-2 (LCT-2): Age 8:4

| Subtest | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Main Idea | 5 | 67 | 2 | Severely Impaired |
| Details | 2 | 63 | 1 | Severely Impaired |
| Reasoning | 2 | 69 | 2 | Severely Impaired |
| Vocabulary | 0 | Below Norms | Below Norms | Profoundly Impaired |
| Understanding Messages | 0 | <61 | <1 | Profoundly Impaired |
| Total Test Score | 9 | <63 | 1 | Profoundly Impaired |

(Mean = 100, Standard Deviation = ±15)

Results of The Listening Comprehension Test-2 (LCT-2): Age 9:6

| Subtest | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Main Idea | 6 | 60 | 0 | Severely Impaired |
| Details | 5 | 66 | 1 | Severely Impaired |
| Reasoning | 3 | 62 | 1 | Severely Impaired |
| Vocabulary | 4 | 74 | 4 | Moderately Impaired |
| Understanding Messages | 2 | 54 | 0 | Profoundly Impaired |
| Total Test Score | 20 | <64 | 1 | Profoundly Impaired |

(Mean = 100, Standard Deviation = ±15)

However, if one looks at the raw score column, one can see that as a 9-year-old the student actually answered more questions than he did as an 8-year-old; his total raw score went up by 11 points.

The above is a perfect illustration of CCD in action. The student was able to answer more questions on the test, but because academic, linguistic, and cognitive demands steadily increase with age, this quantitative improvement in performance (an increase in the total number of questions answered) did not result in a qualitative improvement in performance (an increase in standard scores).
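The mechanics behind this pattern can be sketched in a few lines. The norm-group means and standard deviations below are invented purely for illustration (real values come from the test's norm tables); the point is that when the norm group's average raw score rises faster than the student's, a genuine raw-score gain still produces a lower standard score.

```python
def standard_score(raw, norm_mean, norm_sd, mean=100.0, sd=15.0):
    """Convert a raw score to a standard score against a
    (hypothetical) age norm with the given mean and SD."""
    z = (raw - norm_mean) / norm_sd
    return mean + sd * z

# Hypothetical norms: by age 9 the norm group's average raw score
# has grown from 20 to 35, faster than this student's raw score did.
ss_age_8 = standard_score(raw=9,  norm_mean=20, norm_sd=5)   # age-8 norms
ss_age_9 = standard_score(raw=20, norm_mean=35, norm_sd=6)   # age-9 norms

print(round(ss_age_8))  # → 67
print(ss_age_9 < ss_age_8)  # → True: raw score rose by 11, standard score fell
```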

In this first part of the series, I have introduced the concept of Cumulative Cognitive Deficit and its effect on academic performance. Stay tuned for Part II, which will describe what parents and professionals can do to improve the functional performance of students with Cumulative Cognitive Deficit.


  • Bowers, L., Huisingh, R., & LoGiudice, C. (2006). The Listening Comprehension Test-2 (LCT-2). East Moline, IL: LinguiSystems, Inc.
  • Bowers, L., Huisingh, R., & LoGiudice, C. (2005). The Test of Problem Solving-3 Elementary (TOPS-3). East Moline, IL: LinguiSystems.
  • Gindis, B. (2005). Cognitive, language, and educational issues of children adopted from overseas orphanages. Journal of Cognitive Education and Psychology, 4 (3): 290-315.
  • Sattler, J. M. (1992). Assessment of Children. Revised and updated 3rd edition. San Diego: Jerome M. Sattler.

Simplifying Testing Results to Understand the Student’s Difficulties


Oftentimes, explaining testing results in the form of standard scores, percentiles, and charts is labor-intensive for the SLP and confusing for parents and ancillary professionals. Furthermore, simply showing testing results does not ensure that their ramifications are fully understood, especially when it comes to the performance of high-functioning students with deficits in isolated areas that may significantly impact the student's functioning in social and academic settings.

Finding an effective method of sharing testing results was fraught with difficulties until recently. In early January, I attended a Sarah Ward executive function conference, where Sarah shared one of her tricks for presenting testing results: she used a picture of a bell curve and inserted the testing results into it, so that it looked a little like the picture below:


As you can see, the student's listening comprehension and expressive language performance fell in the average range, as denoted at the bottom of the picture. In contrast, the student's problem solving and social pragmatic abilities fell in the below-average range, as denoted by both the red bar and the caption underneath the picture.

It is a visually simple way to see, in one snapshot, which areas need to be worked on.
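For those who prefer a quick text version of the same snapshot, here is a small sketch. The area names and scores are hypothetical, and the "average" band is assumed to be the mean plus or minus one standard deviation (85-115); adjust the cutoffs to match the descriptive ranges of the tests you actually use.

```python
def snapshot(scores, mean=100, sd=15):
    """Print a one-line-per-area snapshot showing where each
    standard score falls relative to the average range (mean +/- 1 SD)."""
    for area, score in scores.items():
        if score < mean - sd:
            band = "BELOW AVERAGE"
        elif score > mean + sd:
            band = "above average"
        else:
            band = "average"
        print(f"{area:<25} {score:>3}  {band}")

# Hypothetical scores echoing the profile described above:
snapshot({
    "Listening Comprehension": 102,
    "Expressive Language": 98,
    "Problem Solving": 82,
    "Social Pragmatics": 78,
})
```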

Charts in Action: Students with Social Skills Deficits 

This system is even more effective for displaying testing results of higher functioning students with select deficit areas. To illustrate, I recently performed a comprehensive language assessment on a 12-year-old adolescent with suspected ASD. The student had a superior IQ, excellent vocabulary, and phenomenal memory.

When tested in a school setting, she did not qualify for language intervention. However, her comprehensive language testing with me showed a number of disparities. While the majority of her testing fell in the above-average and superior ranges, in a number of testing areas she performed within the average and below-average ranges (combined SLP, ED, and psychological testing results below).


When one looked at the student's overall testing results, they indicated cumulative performance in the average range of functioning. However, once I plotted all of her results on the bell curve, her deficit areas became clearly apparent, and the testing discrepancy indicated that intervention in select areas of functioning was needed.

So even though select scores were clearly in the average range on the bell curve, they were actually BELOW AVERAGE for this student when compared to her significant strengths in all other areas.

Many would argue with me, pointing out that scores in the average range mean exactly that: the student doesn't qualify – end of story! So let me explain the above scores in REAL-LIFE terms.

Why Students with Average Scores May Still Require Services 

This particular student was referred for a social pragmatic evaluation due to behavioral difficulties in the classroom, which included verbal outbursts, difficulty engaging in cooperative group work, and verbal confrontations with classmates.

Interactions with the student revealed an engaging, likable adolescent who preferred the company of adults. However, throughout testing she made comments indicating cognizance that she was not accepted by typically developing peers. In frustration, she stated that she “doesn't get” peers, is not interested in their “typical” experiences, and has “nothing in common” with peers her age because she “misses the point” of their verbal interchanges.

Due to her exceptional performance on standardized testing, many school-based professionals believed that because she did so well she did not have any “true” social learning deficits. In contrast, the student's peer group was able to see her social differences with very little effort. In school, the student did not qualify for social pragmatic language therapy because her challenges were perceived as too “mild” to merit services; however, her social deficits were NOT mild as judged by her peers. They were only mild compared to those of individuals with severe social learning challenges. Without appropriate intervention, these difficulties would continue to pervasively impact her academic and social performance, as well as affect her future employment and relationships.

So this is why I now love plotting scores on the bell curve for parents and professionals. A simple picture clearly shows the significance of the score distribution and the deficit areas in need of intervention – it is literally worth a thousand words!

Helpful Resources Related to Social Pragmatic Language Overview, Assessment, and Remediation: