
The Science of Reading Literacy Certificate for SLPs: FAQs

In August 2021, the CEU Smart Hub (Powered by the Lavi Institute) launched a new certificate program: The Science of Reading (SOR) Literacy Certificate for SLPs. Because of the multitude of questions we have received in advance of the certificate rollout (Financial Disclosure: I am a 50% partner in the CEU Smart Hub/Power Up Conferences), I am writing this post today to answer some of the most commonly asked questions regarding this certification.

Who is the certificate for? The certificate is open to SLPs who are interested in gaining in-depth knowledge in the areas of assessment and treatment of children with language and literacy disorders. This certification offers not just continuing education hours in advanced practices pertaining to the assessment and treatment of literacy but also a final examination and 2 lengthy, in-depth projects requiring professionals to appropriately and comprehensively design assessment plans and treatment goals for work with literacy-impaired clients.


Comprehensive Assessment of Elementary Aged Children with Subtle Language and Literacy Deficits

Lately, I’ve been seeing more and more posts on social media asking for testing suggestions for students who exhibit subtle language-based difficulties. Many of these children are typically referred for initial assessments or reassessments as part of advocate/attorney-involved cases, while others are being assessed due to parental insistence that something “is not quite right” with their language and literacy abilities, even in the presence of “good grades.”


But is this the Best Practice Recommendation?

Those of you familiar with my blog know that a number of my posts take the form of extended responses to posts and comments on social media which deal with certain questionable speech pathology trends and ongoing issues (e.g., controversial diagnostic labels, questionable recommendations, non-evidence-based practices, etc.). So, today, I’d like to talk about sweeping general recommendations as they pertain to literacy interventions.


Help, My Child is Receiving All These Therapies But It’s NOT Helping

On a daily basis I receive emails and messages from concerned parents and professionals, which read along these lines: “My child/student has been diagnosed with dyslexia, ADHD, APD, etc.; s/he has been receiving speech, OT, vision, biofeedback, music therapies, etc., but nothing seems to be working.”

Up until now, I have been providing individualized responses to such queries; however, given the unnerving similarity of all the received messages, today I decided to write this post so other individuals with similar concerns can see my response.


Why “good grades” do not automatically rule out “adverse educational impact”

As a speech-language pathologist (SLP) working with school-age children, I frequently assess students whose language and literacy deficits adversely impact their academic functioning. For the parents of school-aged children with suspected language and literacy deficits, as well as for the SLPs tasked with screening and evaluating them, the concept of ‘academic impact’ comes up on a daily basis. In fact, not a day goes by when I do not see a variation of the following question being discussed in a variety of Facebook groups dedicated to speech pathology issues: “Is there evidence of academic impact?”


Editable Report Template and Tutorial for the Test of Integrated Language and Literacy

Today I am introducing my newest report template for the Test of Integrated Language and Literacy.

This 16-page, fully editable report template discusses the testing results and includes numerous components.


What Makes an Independent Speech-Language-Literacy Evaluation a GOOD Evaluation?

Three years ago I wrote a blog post entitled “Special Education Disputes and Comprehensive Language Testing: What Parents, Attorneys, and Advocates Need to Know.” In it, I used 4 very different scenarios to illustrate the importance of comprehensive language evaluations for children with subtle language and learning needs. Today I would like to expand on that post in order to explain what actually constitutes a good independent comprehensive assessment.


Review of the Test of Integrated Language and Literacy (TILLS)

The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-age children. As I have been using this test since the time it was published, I wanted to take this opportunity to share a few of my impressions of this assessment.

First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals – 5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students’ scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out either low average or slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to hear about the TILLS’s development, almost simultaneously through ASHA as well as the SPELL-Links ListServ. I was particularly happy because I knew that some of this test’s developers (e.g., Dr. Elena Plante, Dr. Nickola Nelson) had published solid research in the areas of psychometrics and literacy, respectively.

According to its developers, the TILLS has been standardized for 3 purposes:

  • to identify language and literacy disorders
  • to document patterns of relative strengths and weaknesses
  • to track changes in language and literacy skills over time

The subtests can be administered in isolation (with the exception of a few) or in their entirety. The administration of all 15 subtests may take approximately an hour and a half, while the administration of the core subtests typically takes ~45 minutes.

Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.

  • Subtest 5 – Nonword Spelling
  • Subtest 7 – Reading Comprehension
  • Subtest 10 – Nonword Reading
  • Subtest 11 – Reading Fluency
  • Subtest 12 – Written Expression

However, if needed, there are several tests of early reading and writing abilities available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., the Test of Early Reading Ability – Third Edition (TERA-3), the Test of Early Written Language – Third Edition (TEWL-3), etc.).

Let’s move on to take a deeper look at its subtests. Please note that for the purposes of this review all images came directly from and are the property of Brookes Publishing Co (clicking on each of the below images will take you directly to their source).

[Image: TILLS Subtest 1 – Vocabulary Awareness]

1. Vocabulary Awareness (VA) (description above) requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works great in teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student’s word reading abilities by asking them to read all the words before reading all the word choices to them. This way you can informally document any word misreadings made by the student, even in the presence of an average subtest score.

[Image: TILLS Subtest 2 – Phonemic Awareness]

2. The Phonemic Awareness (PA) subtest (description above) requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, similar to tests such as the Comprehensive Test of Phonological Processing – Second Edition (CTOPP-2) or the Phonological Awareness Test 2 (PAT 2), it is still a highly useful and reliable measure of phonemic awareness (as one of many precursors to reading fluency success). This is especially true because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the students’ executive function abilities in addition to their phonemic awareness skills.

[Image: TILLS Subtest 3 – Story Retelling]

3. The Story Retelling (SR) subtest (description above) requires students to do just that: retell a story. Be mindful of the fact that the presented stories have reduced complexity. Thus, unless the students possess significant retelling deficits, the above subtest may not capture their true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade) I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician’s narrative model (e.g., HERE). For early elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie seen recently (or liked significantly) by them without the benefit of visual support altogether (e.g., HERE).

[Image: TILLS Subtest 4 – Nonword Repetition]

4. The Nonword Repetition (NR) subtest (description above) requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task’s heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, & Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments will be observed to present with patterns of segment substitutions (subtle substitutions of sounds and syllables in presented nonsense words) as well as segment deletions in nonword sequences more than 2-3 or 3-4 syllables in length (depending on the child’s age).

[Image: TILLS Subtest 5 – Nonword Spelling]

5. The Nonword Spelling (NS) subtest (description above) requires the students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to this subtest in the same assessment session. In contrast to real-word spelling tasks, students cannot memorize the spelling of the presented words, which are still bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze subtest results in order to determine whether dialectal differences, rather than the presence of an actual disorder, are responsible for the error patterns.

[Image: TILLS Subtest 6 – Listening Comprehension]

6. The Listening Comprehension (LC) subtest (description above) requires the students to listen to short stories and then definitively answer story questions via the available answer choices: “Yes,” “No,” and “Maybe.” This subtest also indirectly measures the students’ metalinguistic awareness skills, as these are needed to detect when the text does not provide sufficient information to answer a particular question definitively (i.e., when a “Maybe” response is called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement subtest administration with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be ‘good guessers’ but who are also reported to present with word-finding difficulties at sentence and discourse levels.

[Image: TILLS Subtest 7 – Reading Comprehension]

7. The Reading Comprehension (RC) subtest (description above) requires the students to read short stories and answer story questions in “Yes,” “No,” and “Maybe” format. This subtest is not a stand-alone measure and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story out loud in order to determine whether s/he can proceed with taking this subtest or must discontinue due to being an emergent reader; the criterion for discontinuation is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.

While I find it terrific for students with severe-profound reading deficits and/or below-average IQ, a number of my students with average IQ and moderately impaired reading skills have managed to pass it via a combination of guessing and luck, despite being observed to misread aloud between 40 and 60% of the presented words. Be mindful of the fact that such students may typically make up to 5-6 errors during the reading of the first story. Thus, according to administration guidelines, these students will be allowed to proceed and take this subtest. They will then continue to make text misreadings during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is in definitive (“Yes,” “No,” and “Maybe”) vs. open-ended question format, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing the administration of this subtest with grade-level (or below-grade-level) texts (see HERE and/or HERE) to assess the student’s reading comprehension informally.

I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student’s reading for further analysis (see the Reading Fluency section below). After the completion of the story I ask the student questions with a focus on main idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student’s age, I may ask them abstract/factual text questions with and without text access. Overall, I find that informal administration of grade-level (or even below-grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student’s reading comprehension abilities than the administration of standardized reading tests alone.

[Image: TILLS Subtest 8 – Following Directions]

8. The Following Directions (FD) subtest (description above) measures the student’s ability to execute directions of increasing length and complexity. It measures the student’s short-term, immediate, and working memory, as well as their language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters and numbers, etc.) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the students are expected to move the card covering the stimuli and then to execute the visual-spatial, directional, sequential, and logical if-then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student’s memory and comprehension. The subtest was created to simulate a teacher’s use of procedural language (giving directions) in the classroom setting (as per the developers).

[Image: TILLS Subtest 9 – Delayed Story Retelling]

9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered to the students during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest administration. Despite the relatively short passage of time between both subtests, it is considered to be a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student’s performance across the two measures to determine whether the student did better or worse on either of them (e.g., recalled more information after a period of time passed vs. immediately after being read the story). However, as mentioned previously, some students may recall this previously presented story fairly accurately and, as a result, may obtain an average score despite a history of teacher/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement the administration of this subtest with a recall of a movie/book recently seen/read by the student (a few days prior) in order to compare both performances and note any weaknesses/limitations.

[Image: TILLS Subtest 10 – Nonword Reading]

10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, the presentation of this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have only memorized sight words and are now having difficulty decoding unfamiliar words as a result.

[Image: TILLS Subtest 11 – Reading Fluency]

11. The Reading Fluency (RF) subtest (description above) requires students to read facts, which make up simple stories, fluently and correctly. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.

It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per the authors, “a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly.”

Thus, “the mean is to the left of the mode” (see the publisher’s image below). This is why a student could earn an average standard score (near the mean) and a low percentile rank when true percentiles are used rather than NCE (Normal Curve Equivalent) percentiles.

[Image: TILLS Q&A – Negative Skew]

Consequently, under certain conditions (see HERE), the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student’s ability on this subtest.
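To make the skew effect concrete, here is a minimal Python simulation sketch. The sample below is synthetic and purely illustrative, not actual TILLS norms; the NCE formula (50 + 21.06 × z) is the standard one:

```python
import numpy as np
from scipy import stats

# Simulate a negatively skewed score distribution (long left tail):
# most students do extremely well, a much smaller number do quite
# poorly -- the pattern the TILLS authors describe for Reading Fluency.
rng = np.random.default_rng(0)
scores = 100 - rng.gamma(shape=2.0, scale=8.0, size=10_000)

mean = scores.mean()

# A student scoring exactly at the mean earns an "average" standard
# score (z = 0), and the Normal Curve Equivalent at z = 0 is 50.
nce = 50 + 21.06 * 0.0

# The *true* percentile rank is the share of the sample scoring below
# the student -- with a negative skew, the mean sits below the median.
true_pr = stats.percentileofscore(scores, mean)

print(f"NCE: {nce:.0f}")                  # 50
print(f"True percentile: {true_pr:.0f}")  # ~40: noticeably below the NCE
```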

Indeed, due to the reduced complexity of the presented words, some students (especially younger elementary-aged ones) may obtain average scores and still present with serious reading fluency deficits.

I frequently see this in students with average IQ and good long-term memory who, by second and third grade, have managed to memorize an admirable number of sight words, which makes their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking them to read several paragraphs from it (see HERE and/or HERE).

As the students are reading, I calculate their reading fluency by counting the number of words they read per minute. I find it very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student’s reading fluency on both standardized and informal assessment measures in order to document the student’s strengths and limitations.
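For those who want the arithmetic spelled out, here is a minimal sketch of that words-correct-per-minute computation (the passage length, error count, and timing below are hypothetical):

```python
def words_correct_per_minute(words_read: int, errors: int, seconds: float) -> float:
    """Standard informal fluency metric: (words read - errors) per minute."""
    return (words_read - errors) / (seconds / 60.0)

# Hypothetical reading: 182 words with 9 misreadings in 2 minutes 10 seconds.
wcpm = words_correct_per_minute(182, 9, 130)
print(f"{wcpm:.0f} words correct per minute")  # ~80 WCPM
```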

[Image: TILLS Subtest 12 – Written Expression]

12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.

The examiner needs to show the student a different story which integrates simple facts into a coherent narrative. After the examiner reads that simple story to the student, s/he is expected to tell the student that the story is okay, but “sounds kind of choppy.” The examiner then needs to show the student an example of how the facts could be put together in a way that sounds more interesting and less choppy by combining sentences (see below). Finally, the examiner will ask the student to rewrite the story presented to them in a similar manner (e.g., “less choppy and more interesting”).

[Image: TILLS Written Expression sentence-combining example]

After the student finishes his/her story, the examiner analyzes it and generates the following scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as the Examiner’s Practice Workbook are provided to assist with scoring, as it takes a bit of training as well as trial and error to complete, especially if examiners are not familiar with certain procedures (e.g., calculating T-units).

Full disclosure: Because the above subtest is still essentially sentence combining, I have only used it a handful of times with my students. When I’ve used it in the past, most of my students fell into two categories: those who failed it completely (by copying text word for word, failing to generate any written output, etc.) and those who passed it with flying colors but still presented with notable written output deficits. Consequently, I’ve replaced Written Expression subtest administration with the administration of standardized written tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.

Having said that, many clinicians may not have access to other standardized written assessments or may lack the time to administer entire standardized written measures (which may frequently take between 60 and 90 minutes of administration time). Consequently, in the absence of other standardized writing assessments, this subtest can be effectively used to gauge the student’s basic writing abilities and, if needed, be effectively supplemented by the informal writing measures mentioned above.

[Image: TILLS Subtest 13 – Social Communication]

13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially students become actors who need to act out particular scenes while viewing select words presented to them.

Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students.  Here is why.

I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students’ significant deficits in this area. Yet past administration of this subtest showed me that a number of my students can pass it quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer the administration of comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).

Again, as I’ve previously mentioned, many clinicians may not have access to other standardized social communication assessments or may lack the time to administer them in their entirety. Consequently, in the absence of other social communication assessments, this subtest can be used to get a baseline of the student’s basic social communication abilities and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.

[Image: TILLS Subtest 14 – Digit Span Forward]

14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language, such as syntax or vocabulary).

[Image: TILLS Subtest 15 – Digit Span Backward]

15. The Digit Span Backward (DSB) subtest (description above) assesses the student’s working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students use to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before this subtest.

SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.

To continue, in addition to subtests which assess the students’ literacy abilities, the TILLS also possesses a number of interesting features.

For starters, there is the TILLS Easy-Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores; the system does the rest. After the raw scores are plugged in, the system will generate a PDF document with all the data, which includes (but is not limited to) standard scores, percentile ranks, as well as a variety of composite and core scores. The examiner can then save the PDF on their device (laptop, PC, tablet, etc.) for further analysis.

Then there is the quadrant model. According to the TILLS sampler (HERE), “it allows the examiners to assess and compare students’ language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing” and then create “meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students” (pg. 21).

[Image: TILLS quadrant model]

Then there is the Student Language Scale (SLS), which is a one-page checklist that parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student’s performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests, or even in isolation (as per the developers).

Furthermore, according to the developers, because the normative sample included several special-needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), as well as students with intellectual disabilities (as long as they are functioning developmentally at age 6 or above).

According to the developers, the TILLS is aligned with the Common Core Standards and can be administered as frequently as two times a year for progress monitoring (a minimum of 6 months after the first administration).

With respect to bilingualism, examiners can use the TILLS with caution with simultaneous English learners but not with sequential English learners (see further explanation HERE). Translations of the TILLS are definitely not allowed, as they would undermine the test’s validity and reliability.

So there you have it: these are just some of my impressions regarding this test. Some of you may notice that I spent a significant amount of time pointing out some of the test’s limitations. However, it is very important to note that research indicates there is no such thing as a “perfect standardized test” (see HERE for more information). All standardized tests have their limitations.

Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.

That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS.  For starters, take a look at Brookes Publishing TILLS resources.  These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars.   There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).

But that’s not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc.). Take a look at them, as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.

To access the fully editable TILLS report template, click HERE.

Disclaimer:  I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing  or any of the TILLS developers to write it.  All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.

References: 

Leclercq, A., Maillart, C., & Majerus, S. (2013). Nonword repetition problems in children with SLI: A deficit in accessing long-term linguistic representations? Topics in Language Disorders, 33(3), 238-254.



Intervention at the Last Moment or Why We Need Better Preschool Evaluations

“Well, the school did their evaluations and he doesn’t qualify for services,” a parent of a newly admitted private practice client, a 3.5-year-old boy, tells me. “I just don’t get it,” she says bemusedly. “It is so obvious to anyone who spends even 10 minutes with him that his language is nowhere near that of other kids his age! How can this happen?” she asks in frustration.

This parent is not alone in her sentiment. In my private practice I frequently see preschool children with speech-language impairments who for all intents and purposes should have qualified for preschool-based speech-language services but do not, due to questionable testing practices.

To illustrate: several years ago in my private practice, I started seeing a young preschool girl, 3.2 years of age. Just prior to turning 3, she had undergone a collaborative school-based social, psychological, educational, and speech-language evaluation. The 4 combined evaluators used only one standardized assessment instrument, the Battelle Developmental Inventory – Second Edition (BDI-2), along with a limited ‘structured observation,’ without performing any functional or dynamic assessments, and found the child to be ineligible for services on account of a low-average total score on the BDI-2.

However, during the first session working 1:1 with this client at the age of 3.2, a number of things became very apparent. The child had very limited, highly echolalic verbal output, primarily composed of one-word utterances and select two-word phrases. She had a highly limited receptive vocabulary and could not consistently point to basic pictures denoting common household objects and items (e.g., chair, socks, clock, sun, etc.). Similarly, expressively she exhibited a number of inconsistencies when labeling simple nouns (e.g., called a tree a flower, a monkey a dog, and a sofa a chair). Clearly this child’s abilities were nowhere near age level, so how could she possibly not qualify for preschool-based services?

Further work with the child over the next several years yielded slow, labored, and inconsistent gains in the areas of listening, speaking, and social communication. I also had a number of concerns regarding her intellectual abilities, which I shared with the parents. Finally, two years after preschool eligibility was denied to this child, she underwent a second round of re-evaluations with the school district at the age of 5.2.

This time around she qualified with bells on! The same speech-language pathologist and psychologist who had assessed her the first time around, two years earlier, now readily documented significant communication deficits (Preschool Language Scale-5 (PLS-5) scores in the 1st percentile of functioning) and cognitive deficits (Full Scale Intelligence Quotient (FSIQ) in the low 50s).

Here is the problem, though. This is not a child who had suddenly regressed in her abilities. This is a child who had actually improved her abilities in all language domains due to private language therapy services. Her deficits very clearly existed at the time of her first school-based assessment and had continued to persist over time. For the duration of two years this child could have significantly benefited from a free and appropriate education in a school setting, which was denied to her due to highly limited preschool assessment practices.

Today, I am writing this post to shed light on this issue, which I’m pretty certain is not confined to the state of New Jersey. I am writing not simply to complain but to inform parents and educators alike about what actually constitutes an appropriate preschool speech-language assessment.

As per N.J.A.C. 6A:14-2.5, Protection in evaluation procedures (pp. 29-30):

(a) In conducting an evaluation, each district board of education shall:

  1. Use a variety of assessment tools and strategies to gather relevant functional and developmental information, including information:
     • Provided by the parent that may assist in determining whether a child is a student with a disability and in determining the content of the student’s IEP; and
     • Related to enabling the student to be involved in and progress in the general education curriculum or, for preschool children with disabilities, to participate in appropriate activities;
  2. Not use any single procedure as the sole criterion for determining whether a student is a student with a disability or determining an appropriate educational program for the student; and
  3. Use technically sound instruments that may assess the relative contribution of cognitive and behavioral factors, in addition to physical or developmental factors.

Furthermore, according to the New Jersey Special Education Code, N.J.A.C. 6A:14-3.5(c)10 (please refer to your state’s eligibility criteria to find similar guidelines), eligibility as a “preschool child with a disability” applies to any student between 3 and 5 years of age with an identified disabling condition adversely affecting learning/development (e.g., a genetic syndrome), a 33% delay in one developmental area, or a 25% delay in two or more of the developmental areas below:

  1. Physical, including gross/fine motor and sensory (vision and hearing)
  2. Intellectual
  3. Communication
  4. Social/emotional
  5. Adaptive

These delays can be receptive (listening) or expressive (speaking) and need not be based on a total test score but rather on all testing findings, with a minimum of at least two assessments being performed. A determination of adverse impact in academic and non-academic areas (e.g., social functioning) needs to take place in order for special education and related services to be provided. Additionally, a delay in articulation can serve as a basis for consideration of eligibility as well.
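To see the arithmetic behind such criteria, here is a minimal sketch of one common way a percent delay is computed from age-equivalent scores (the formula and ages below are illustrative assumptions on my part, not language from the NJ code):

```python
def percent_delay(chronological_age_months: float,
                  developmental_age_months: float) -> float:
    """Delay expressed as a percentage of chronological age."""
    return 100 * (chronological_age_months - developmental_age_months) \
        / chronological_age_months

# Hypothetical 40-month-old functioning at a 26-month level in communication:
print(f"{percent_delay(40, 26):.0f}% delay")  # 35% -> exceeds a 33% criterion
```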

Moreover, according to the State Education Agencies Communication Disabilities Council (SEACDC) Consultant for NJ, Fran Liebner, the BDI-2 is not the only test which can be used to determine eligibility, since the nature and scope of the evaluation must be determined based on parent, teacher, and IEP team feedback.

In fact, New Jersey’s Special Education Code, N.J.A.C. 6A:14, prescribes no specific test in its eligibility requirements. While it is true that for NJ districts participating in Indicator 7 (Preschool Outcomes) the BDI-2 is a required collection tool, it does NOT preclude the team from deciding what other diagnostic tools are needed to assess all areas of suspected disability to determine eligibility.

Speech pathologists have many tests available to them when assessing young preschool children 2 to 6 years of age.

SELECT SPEECH PATHOLOGY TESTS FOR PRESCHOOL CHILDREN (2-6 years of age)

 Articulation:

  • Sunny Articulation Test (SAPT)** Ages: All (nonstandardized)
  • Clinical Assessment of Articulation and Phonology-2 (CAAP-2) Ages: 2.6+
  • Linguisystems Articulation Test (LAT) Ages: 3+
  • Goldman Fristoe Test of Articulation-3 (GFTA-3)    Ages: 2+

 Fluency:

  • Stuttering Severity Instrument -4 (SSI-4) Ages: 2+
  • Test of Childhood Stuttering (TOCS) Ages 4+

General Language: 

  • Preschool Language Assessment Instrument-2 (PLAI-2)  Ages: 3+
  • Clinical Evaluation of Language Fundamentals -Preschool 2 (CELF-P2) Ages: 3+
  • Test of Early Language Development, Third Edition (TELD-3) Ages: 2+
  • Test of Auditory Comprehension of Language – Fourth Edition (TACL-4) Ages: 3+
  • Preschool Language Scale-5 (PLS-5)* (use with extreme caution) Ages: Birth-7:11

Vocabulary

  • Receptive One-Word Picture Vocabulary Test-4 (ROWPVT-4)  Ages 2+
  • Expressive One-Word Picture Vocabulary Test-4 (EOWPVT-4) Ages 2+
  • Montgomery Assessment of Vocabulary Acquisition (MAVA) Ages: 3+
  • Test of Word Finding-3 (TWF-3) Ages 4.6+

Auditory Processing and Phonological Awareness

  • Auditory Skills Assessment (ASA)    Ages 3:6+
  • Test of Auditory Processing Skills-3 (TAPS-3) Ages 4+
  • Comprehensive Test of Phonological Processing-2 (CTOPP-2) Ages 4+

Pragmatics/Social Communication

  • Language Use Inventory (LUI) (O’Neill, 2009) Ages 18-47 months
  • Children’s Communication Checklist-2 (CCC-2) (Bishop, 2006) Ages 4+

In addition to administering standardized testing, SLPs should also use play scales (e.g., Westby Play Scale, 1980) to assess the given child’s play abilities. This is especially important given that “play—both functional and symbolic—has been associated with language and social communication ability” (Toth et al., 2006, p. 3).

Finally, by showing children simple wordless picture books, SLPs can also obtain a wealth of information regarding the child’s utterance length as well as narrative abilities (a narrative assessment can be performed on a verbal child as young as two years of age).

Comprehensive school-based speech-language assessments should be the norm and not the exception when determining preschoolers’ eligibility for speech-language services and special education classification.

Consequently, let us ensure that our students receive fair and adequate assessments so they have access to the best classroom placements, appropriate accommodations and modifications, as well as targeted and relevant therapeutic services. Anything less will lead to the denial of the Free Appropriate Public Education (FAPE) to which all students are entitled!



Why Are My Child’s Test Scores Dropping?

“I just don’t understand,” says a bewildered parent. “She’s receiving so many different therapies and tutoring every week, but her scores on educational, speech-language, and psychological testing just keep dropping!”

I hear a variation of this comment far too frequently, in both my private practice and my hospital-based outpatient school setting, from parents looking for an explanation for the decline of their children’s standardized test scores in both cognitive (IQ) and linguistic domains. That is why today I wanted to take a moment to write this blog post explaining a few reasons behind this phenomenon.

Children with language impairments represent a highly diverse group, which exists along a continuum. Some children’s deficits may be mild, while others’ are far more severe. Some children may receive very few intervention services and thrive academically, while others can receive an inordinate amount of intervention and still benefit from it only to a limited degree. To put it in very simplistic terms, the above is due to the interaction between two significant influences: the child’s (1) genetic makeup and (2) environmental factors.

There is a reason why language disorders are considered developmental.   Firstly, these difficulties are apparent from a young age when the child’s language just begins to develop.  Secondly, the trajectory of the child’s language deficits also develops along with the child and can progress/lag based on the child’s genetic predisposition, resiliency, parental input, as well as schooling and academically based interventions.

Let us discuss some of the reasons why standardized testing results may decline for select students who are receiving a variety of support services and interventions.

Ineffective Interventions due to Misdiagnosis 

Sometimes, a lack of appropriate/relevant intervention provision may be responsible. Let’s take the example of alcohol-related deficits misdiagnosed as autism, which I have frequently encountered in my private practice when performing second-opinion testing and consultations. Unfortunately, the above is not uncommon. Many children with alcohol-related impairments may present with significant social-emotional dysregulation coupled with significant externalizing behavior manifestations. As a result, without a thorough differential diagnosis, they may be diagnosed with ASD and then provided with ABA therapy services for years with little to no benefit.

Ineffective Interventions due to Lack of Comprehensive Testing 

Let us examine another example: a student with average intelligence but poor reading performance. The student may do well in school up to a certain grade but then may begin to flounder academically. Because only the student’s reading abilities ‘seem’ to be adversely impacted, no comprehensive language and literacy evaluations are performed. The student may receive undifferentiated extra reading support in school while his scores continue to drop.

Once the situation ‘gets bad enough,’ the student’s language and literacy abilities may be comprehensively assessed. In a vast majority of situations, these types of assessments yield the following results:

  1. The student’s oral language expression as well as higher-order language abilities are adversely affected and require targeted language intervention
  2. The undifferentiated reading intervention provided to the student was NOT targeting the actual areas of weakness

As can be seen from the above examples, targeted intervention is hugely important, and its absence may, in a number of cases, be responsible for a student’s declining performance. However, that is not always the case.

But what if it was definitively confirmed that the student was diagnosed appropriately and was receiving quality services, yet still continued to decline academically? What then?

Well, we know that many children with genetic disorders (Down Syndrome, Fragile X, etc.) as well as intellectual disabilities (ID) can make incredibly impressive gains in a variety of developmental areas (e.g., gross/fine motor skills, speech/language, socio-emotional, ADL, etc.)  but their gains will not be on par with peers without these diagnoses.

The situation becomes much more complicated when children without ID (or with mild intellectual deficits) and varying degrees of language impairment receive effective therapies, work very hard in therapy, yet continue to be perpetually behind their peers when it comes to making academic gains. This occurs because of a phenomenon known as Cumulative Cognitive Deficit (CCD).

The Effect of Cumulative Cognitive Deficit (CCD) on Academic Performance 

According to Gindis (2005), CCD “refers to a downward trend in the measured intelligence and/or scholastic achievement of culturally/socially disadvantaged children relative to age-appropriate societal norms and expectations” (p. 304). Gindis further elucidates by quoting Sattler (1992): “The theory behind cumulative deficit is that children who are deprived of enriching cognitive experiences during their early years are less able to profit from environmental situations because of a mismatch between their cognitive schemata and the requirements of the new (or advanced) learning situation” (pp. 575-576).

So who are the children potentially at risk for CCD?

One such group is internationally (and domestically) adopted children, as well as children in foster care. A number of studies show that due to the early-life hardships associated with prenatal trauma (e.g., maternal substance abuse, lack of adequate prenatal care, etc.) as well as postnatal stress (e.g., the adverse effects of institutionalization), many of these children have much poorer social and academic outcomes, despite being adopted by well-to-do, educated parents who continue to provide them with exceptional care in all aspects of their academic and social development.

Another group is children with diagnosed/suspected psychiatric impairments and concomitant overt/hidden language deficits. Depending on the degree and persistence of the psychiatric impairment, in addition to having only intermittent access to classroom academics and therapy interventions, the quality of their therapy may be affected by the course of their illness. Combined with the sporadic nature of interventions, this may result in their falling further and further behind their peers with respect to social and academic outcomes.

A third group (as mentioned previously) is children with genetic syndromes, neurodevelopmental disorders (e.g., autism), and intellectual disabilities. Here, it is very important to explicitly state that children with diagnosed or suspected alcohol-related deficits (FASD) are particularly at risk due to the lack of consensus/training regarding FAS detection/diagnosis. Consequently, these children may evidence a steady ‘decline’ on standardized testing despite exhibiting steady functional gains in therapy.

Brief Standardized Testing Score Tutorial:

When we look at norm-referenced testing results, score interpretation can be quite daunting. For the sake of simplicity,  I’d like to restrict this discussion to two types of scores: raw scores and standard scores.

The raw score is the number of items the child answered correctly on a test or subtest. However, raw scores need to be interpreted to be meaningful. For example, a 9-year-old student can attain a raw score of 12 on a subtest of a particular test (e.g., the Listening Comprehension Test-2, or LCT-2). Without more information, that raw score has no meaning. If the subtest consisted of 15 questions, a raw score of 12 would be an average score. Alternatively, if the subtest had 36 questions, a raw score of 12 would be significantly below average (e.g., the Test of Problem Solving-3, or TOPS-3).

Consequently, the raw score needs to be converted to a standard score. Standard scores compare the student’s performance on a test to the performance of other students his/her age.  Many standardized language assessments have a mean of 100 and a standard deviation of 15. Thus, scores between 85 and 115 are considered to be in the average range of functioning.
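Here is a minimal sketch of that conversion (the norm mean and SD are hypothetical; real tests use age-banded norm tables rather than this simple linear mapping):

```python
from statistics import NormalDist

def to_standard_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Map a raw score onto the mean-100/SD-15 scale via a z-score."""
    z = (raw - norm_mean) / norm_sd
    return 100 + 15 * z

# Hypothetical age norms: peers average 24 correct (SD 6) on the subtest.
ss = to_standard_score(12, norm_mean=24, norm_sd=6)
pct = 100 * NormalDist(100, 15).cdf(ss)  # percentile under the normal curve
print(f"Standard score {ss:.0f}, ~{pct:.0f}th percentile")  # 70, ~2nd percentile
```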

Now let’s discuss testing performance variation across time. Let’s say an 8.6-year-old student took the above-mentioned LCT-2 and attained poor standard scores on all subtests. That student qualifies for services and receives them for a period of one year. At that time the LCT-2 is re-administered, and much to the parents’ surprise, the student’s standard scores appear to be even lower than when he had taken the test as an eight-year-old (illustration below).

Results of the Listening Comprehension Test-2 (LCT-2), Age 8:4:

| Subtest | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Main Idea | 5 | 67 | 2 | Severely Impaired |
| Details | 2 | 63 | 1 | Severely Impaired |
| Reasoning | 2 | 69 | 2 | Severely Impaired |
| Vocabulary | 0 | Below Norms | Below Norms | Profoundly Impaired |
| Understanding Messages | 0 | <61 | <1 | Profoundly Impaired |
| Total Test Score | 9 | <63 | 1 | Profoundly Impaired |

(Mean = 100, Standard Deviation = +/-15)

Results of the Listening Comprehension Test-2 (LCT-2), Age 9:6:

| Subtest | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Main Idea | 6 | 60 | 0 | Severely Impaired |
| Details | 5 | 66 | 1 | Severely Impaired |
| Reasoning | 3 | 62 | 1 | Severely Impaired |
| Vocabulary | 4 | 74 | 4 | Moderately Impaired |
| Understanding Messages | 2 | 54 | 0 | Profoundly Impaired |
| Total Test Score | 20 | <64 | 1 | Profoundly Impaired |

(Mean = 100, Standard Deviation = +/-15)

However, if one looks at the raw score column, one can see that as a 9-year-old the student actually answered more questions correctly than as an 8-year-old: his total raw test score went up by 11 points.

The above is a perfect illustration of CCD in action. The student was able to answer more questions on the test but because academic, linguistic, and cognitive demands continue to steadily increase with age, this quantitative improvement in performance (increase in total number of questions answered) did not result in qualitative  improvement in performance (increase in standard scores).
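A quick back-of-the-envelope sketch of that mechanism (the norm means and SDs below are invented for illustration and are not LCT-2 values): because the normative mean rises with age, a higher raw score can still map onto a lower standard score.

```python
def standard_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Mean-100/SD-15 standard score relative to hypothetical age norms."""
    return 100 + 15 * (raw - norm_mean) / norm_sd

# Invented norms: age peers' average raw score grows faster than this
# student's raw score does between the two administrations.
print(round(standard_score(raw=9, norm_mean=25, norm_sd=8)))   # age 8: SS 70
print(round(standard_score(raw=20, norm_mean=40, norm_sd=9)))  # age 9: SS 67
# The raw score rose by 11 points, yet the standard score dropped.
```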

In this first part of the series I have introduced the concept of Cumulative Cognitive Deficit and its effect on academic performance. Stay tuned for part II of this series, which will describe what parents and professionals can do to improve the functional performance of students with Cumulative Cognitive Deficit.

References:

  • Bowers, L., Huisingh, R., & LoGiudice, C. (2006). The Listening Comprehension Test-2 (LCT-2). East Moline, IL: LinguiSystems, Inc.
  • Bowers, L., Huisingh, R., & LoGiudice, C. (2005). The Test of Problem Solving 3-Elementary (TOPS-3). East Moline, IL: LinguiSystems.
  • Gindis, B. (2005). Cognitive, language, and educational issues of children adopted from overseas orphanages. Journal of Cognitive Education and Psychology, 4 (3): 290-315.
  • Sattler, J. M. (1992). Assessment of Children. Revised and updated 3rd edition. San Diego: Jerome M. Sattler.