
Review of the Test of Integrated Language and Literacy (TILLS)

The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-age children. As I have been using this test since the time it was published, I wanted to take the opportunity today to share a few of my impressions of this assessment.


First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals – 5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students' scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out either low average or slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to hear about the TILLS's development, almost simultaneously through ASHA and the SPELL-Links ListServe. I was particularly happy because I knew that some of this test's developers (e.g., Dr. Elena Plante and Dr. Nickola Nelson) had published solid research in the areas of psychometrics and literacy, respectively.

According to its developers, the TILLS has been standardized for three purposes:

  • to identify language and literacy disorders
  • to document patterns of relative strengths and weaknesses
  • to track changes in language and literacy skills over time

The subtests can be administered in isolation (with a few exceptions) or in their entirety. Administering all 15 subtests may take approximately an hour and a half, while administering the core subtests typically takes ~45 minutes.

Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.

  • Subtest 5 – Nonword Spelling
  • Subtest 7 – Reading Comprehension
  • Subtest 10 – Nonword Reading
  • Subtest 11 – Reading Fluency
  • Subtest 12 – Written Expression

However, if needed, several tests of early reading and writing abilities are available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., the Test of Early Reading Ability–Third Edition (TERA-3), the Test of Early Written Language–Third Edition (TEWL-3), etc.).

Let’s move on to take a deeper look at its subtests. Please note that for the purposes of this review all images came directly from and are the property of Brookes Publishing Co (clicking on each of the below images will take you directly to their source).

TILLS-subtest-1-vocabulary-awareness

1. The Vocabulary Awareness (VA) subtest (description above) requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works well for teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student's word-reading abilities by asking them to read all the words before reading all the word choices to them. This way you can informally document any word misreadings made by the student, even in the presence of an average subtest score.

TIILLS-subtest-2-phonemic-awareness

2. The Phonemic Awareness (PA) subtest (description above) requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, similar to tests such as the CTOPP-2: Comprehensive Test of Phonological Processing–Second Edition or the Phonological Awareness Test 2 (PAT-2), it is still a highly useful and reliable measure of phonemic awareness (one of many precursors to reading fluency success). This is especially true because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the students' executive function abilities in addition to their phonemic awareness skills.

TILLS-subtest-3-story-retelling

3. The Story Retelling (SR) subtest (description above) requires students to do just that: retell a story. Be mindful of the fact that the presented stories have reduced complexity. Thus, unless a student has significant retelling deficits, this subtest may not capture his/her true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade), I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician's narrative model (e.g., HERE). For early-elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper-elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie seen recently (or liked significantly) by them without the benefit of visual support altogether (e.g., HERE).

TILLS-subtest-4-nonword-repetition

4. The Nonword Repetition (NR) subtest (description above) requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task's heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, & Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments may present with patterns of segment substitutions (subtle substitutions of sounds and syllables in the presented nonsense words) as well as segment deletions in nonword sequences more than 2–3 or 3–4 syllables in length (depending on the child's age).

TILLS-subtest-5-nonword-spelling

5. The Nonword Spelling (NS) subtest (description above) requires the students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to this subtest, in the same assessment session. In contrast to real-word spelling tasks, students cannot memorize the spelling of the presented words, which are nonetheless bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze subtest results in order to determine whether dialectal differences, rather than an actual disorder, are responsible for the error patterns.

TILLS-subtest-6-listening-comprehension

6. The Listening Comprehension (LC) subtest (description above) requires the students to listen to short stories and then definitively answer story questions via the available answer choices: "Yes", "No", and "Maybe". This subtest also indirectly measures the students' metalinguistic awareness skills, as these are needed to detect when the text does not provide sufficient information to answer a particular question definitively (i.e., when a "Maybe" response is called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement this subtest with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be 'good guessers' but who are also reported to present with word-finding difficulties at sentence and discourse levels.

TILLS-subtest-7-reading-comprehension

7. The Reading Comprehension (RC) subtest (description above) requires the students to read short stories and answer story questions in "Yes", "No", and "Maybe" format. This subtest is not stand-alone and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story aloud in order to determine whether s/he can proceed with this subtest or should discontinue due to being an emergent reader. The cutoff criterion is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.

While I find it terrific for students with severe-profound reading deficits and/or below-average IQ, a number of my students with average IQ and moderately impaired reading skills managed to pass it through a combination of guessing and luck, despite being observed to misread 40–60% of the presented words aloud. Be mindful of the fact that such students may typically make only 5–6 errors during the reading of the first story. Thus, according to the administration guidelines, they will be allowed to proceed with this subtest. They will then continue to misread the text during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is a definitive format ("Yes", "No", and "Maybe") rather than open-ended questions, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing this subtest with grade-level (or below-grade-level) texts (see HERE and/or HERE) to assess the student's reading comprehension informally.

I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student's reading for further analysis (see the Reading Fluency section below). After the completion of the story, I ask the student questions with a focus on main-idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student's age, I may ask abstract/factual text questions with and without text access. Overall, I find that the informal administration of grade-level (or even below-grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student's reading comprehension abilities than the administration of standardized reading tests alone.

TILLS-subtest-8-following-directions

8. The Following Directions (FD) subtest (description above) measures the student's ability to execute directions of increasing length and complexity. It measures the student's short-term, immediate, and working memory, as well as his/her language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters, numbers, etc.) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the students are expected to move the card covering the stimuli and then execute the visual-spatial, directional, sequential, and logical if–then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student's memory and comprehension. According to the developers, the subtest was created to simulate a teacher's use of procedural language (giving directions) in a classroom setting.

TILLS-subtest-9-delayed-story-retelling

9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest's administration. Despite the relatively short passage of time between the two subtests, it is considered a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student's performance across the two measures to determine whether the student did better or worse on either (e.g., recalled more information after a period of time had passed vs. immediately after being read the story). However, as mentioned previously, some students may recall the previously presented story fairly accurately and, as a result, may obtain an average score despite a history of teacher-/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement this subtest with a recall of a movie/book recently seen/read by the student (a few days prior) in order to compare both performances and note any weaknesses/limitations.

TILLS-subtest-10-nonword-reading

10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have only memorized sight words and are now having difficulty decoding unfamiliar words as a result.

TILLS-subtest-11-reading-fluency

11. The Reading Fluency (RF) subtest (description above) requires students to read facts that make up simple stories fluently and accurately. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.

It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per authors, “a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly.”

Thus, "the mean is to the left of the mode" (see the publisher's image below). This is why a student could earn an average standard score (near the mean) yet a low percentile rank when true percentiles are used rather than NCE (Normal Curve Equivalent) percentiles (see the publisher's "TILLS Q&A – Negative Skew" document).

Consequently under certain conditions (See HERE) the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student’s ability on this subtest.
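To make the skew point concrete, here is a toy numeric sketch. The score values and proportions are hypothetical placeholders, not TILLS norms; the point is simply that in a negatively skewed distribution, a raw score equal to the mean can sit at a low true percentile:

```python
# Hypothetical negatively skewed distribution: most students score at
# ceiling, a small group scores low (numbers are illustrative only).
scores = [10] * 80 + [2] * 20

mean = sum(scores) / len(scores)             # 8.4
mode = max(set(scores), key=scores.count)    # 10 -> the mean lies left of the mode

# True percentile rank of a score equal to the mean:
# the percentage of students scoring strictly below it.
below = sum(1 for s in scores if s < mean)
percentile_rank = 100 * below / len(scores)  # 20.0

print(f"mean={mean}, mode={mode}, true percentile of a mean-level score={percentile_rank}")
```

In this toy sample, a student scoring right at the mean (an "average-looking" standard score) would still rank at only the 20th true percentile, which is exactly the interpretive trap the publisher warns about on this subtest.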

Indeed, due to the reduced complexity of the presented words some students (especially younger elementary aged) may obtain average scores and still present with serious reading fluency deficits.  

I frequently see this in students with average IQ and good long-term memory who, by second and third grade, have managed to memorize an admirable number of sight words, which makes their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking him/her to read several paragraphs from it (see HERE and/or HERE).

As the students are reading, I calculate their reading fluency by counting the number of words they read per minute. I find this very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student's reading fluency on both standardized and informal assessment measures in order to document the student's strengths and limitations.
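The informal rate-and-accuracy tally described above can be sketched in a few lines of code. This is an illustrative sketch only: the function name and the cutoffs (100 words correct per minute, 95% accuracy) are hypothetical placeholders I chose for the example, not published fluency norms, so substitute whatever benchmarks your district or preferred norms table uses.

```python
def reading_profile(words_read: int, errors: int, seconds: float,
                    rate_cutoff: float = 100.0,
                    accuracy_cutoff: float = 0.95) -> str:
    """Classify a reader as fast/slow x accurate/inaccurate.

    Cutoff values are illustrative placeholders, not published norms.
    """
    wcpm = (words_read - errors) / (seconds / 60)   # words correct per minute
    accuracy = (words_read - errors) / words_read   # proportion read correctly
    speed = "fast" if wcpm >= rate_cutoff else "slow"
    acc = "accurate" if accuracy >= accuracy_cutoff else "inaccurate"
    return f"{speed}/{acc} ({wcpm:.0f} WCPM, {accuracy:.0%} accuracy)"

# Example: a student reads 180 words in 90 seconds with 4 misreadings.
print(reading_profile(words_read=180, errors=4, seconds=90))
```

Logging each reading this way makes it easy to compare the informal sample against the standardized subtest score in the written summary.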

TILLS-subtest-12-written-expression

12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.

The examiner needs to show the student a different story, which integrates simple facts into a coherent narrative. After the examiner reads that simple story to the student, s/he is expected to tell the student that the story is okay but "sounds kind of choppy." The examiner then needs to show the student an example of how the facts could be put together in a way that sounds more interesting and less choppy by combining sentences (see below). Finally, the examiner asks the student to rewrite the story presented to him/her in a similar manner (i.e., "less choppy and more interesting").

tills

After the student finishes his/her story, the examiner analyzes it and generates three scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as an Examiner's Practice Workbook are provided to assist with scoring, as it takes a bit of training and some trial and error to complete, especially if the examiner is not familiar with certain procedures (e.g., calculating T-units).

Full disclosure: Because the above subtest is still essentially sentence combining, I have only used it a handful of times with my students. Typically, when I've used it in the past, most of my students fell into two categories: those who failed it completely (by copying the text word for word, failing to generate any written output, etc.) and those who passed it with flying colors but still presented with notable written-output deficits. Consequently, I've replaced the Written Expression subtest's administration with the administration of standardized written tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.

Having said that, many clinicians may not have access to other standardized written assessments, or may lack the time to administer entire standardized written measures (which may frequently take between 60 and 90 minutes of administration time). Consequently, in the absence of other standardized writing assessments, this subtest can be effectively used to gauge the student's basic writing abilities and, if needed, effectively supplemented with the informal writing measures mentioned above.

TILLS-subtest-13-social-communication

13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially students become actors who need to act out particular scenes while viewing select words presented to them.

Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students.  Here is why.

I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students' significant deficits in this area. Yet past administrations of this subtest showed me that a number of my students could pass it quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer to administer comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).

Again, as I've previously mentioned, many clinicians may not have access to other standardized social communication assessments, or may lack the time to administer them in their entirety. Consequently, in the absence of other social communication assessments, this subtest can be used to obtain a baseline of the student's basic social communication abilities and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.

TILLS-subtest-14-digit-span-forward

14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language, such as syntax or vocabulary).

TILLS-subtest-15-digit-span-backward

15. The Digit Span Backward (DSB) subtest (description above) assesses the student's working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students use to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before this subtest.

SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.

To continue, in addition to subtests which assess the students' literacy abilities, the TILLS also possesses a number of interesting features.

For starters, there is the TILLS Easy Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores; the system does the rest. After the raw scores are entered, the system generates a PDF document with all the data, which includes (but is not limited to) standard scores, percentile ranks, and a variety of composite and core scores. The examiner can then save the PDF on his/her device (laptop, PC, tablet, etc.) for further analysis.

Then there is the quadrant model. According to the TILLS sampler (HERE), "it allows the examiners to assess and compare students' language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing" and then create "meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students" (pg. 21).

tills quadrant model

Then there is the Student Language Scale (SLS), a one-page checklist that parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student's performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests, or even in isolation (as per the developers).

Furthermore, according to the developers, because the normative sample included several special-needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), and students with intellectual disabilities (as long as they are functioning at a developmental age of 6 or above).

According to the developers, the TILLS is aligned with the Common Core Standards and can be administered as frequently as twice a year for progress monitoring (with a minimum of 6 months after the first administration).

With respect to bilingualism, examiners can use it with caution with simultaneous English learners but not with sequential English learners (see further explanations HERE). Translations of the TILLS are definitely not allowed, as they would undermine the test's validity and reliability.

So there you have it: these are just some of my impressions regarding this test. Some of you may notice that I spent a significant amount of time pointing out the test's limitations. However, it is very important to note that research indicates there is no such thing as a "perfect standardized test" (see HERE for more information). All standardized tests have their limitations.

Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.

That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS.  For starters, take a look at Brookes Publishing TILLS resources.  These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars.   There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).

But that's not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc.). Take a look at them, as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.

To access a fully editable TILLS template, click HERE.

Disclaimer:  I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing  or any of the TILLS developers to write it.  All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.

References: 

Leclercq, A., Maillart, C., & Majerus, S. (2013). Nonword repetition problems in children with SLI: A deficit in accessing long-term linguistic representations? Topics in Language Disorders, 33(3), 238–254.

Related Posts:


What Research Shows About the Functional Relevance of Standardized Language Tests

As an SLP who routinely conducts speech and language assessments in several settings (e.g., school and private practice), I understand the utility of and the need for standardized speech, language, and literacy tests. However, as an SLP who works with children with dramatically varying degrees of cognition, abilities, and skill sets, I also highly value supplementing these standardized tests with functional and dynamic assessments, interactions, and observations.

Since a significant value is placed on standardized testing by both schools and insurance companies for the purposes of service provision and reimbursement, I wanted to summarize in today’s post the findings of recent articles on this topic.  Since my primary interest lies in assessing and treating school-age children, for the purposes of today’s post all of the reviewed articles came directly from the Language Speech and Hearing Services in Schools  (LSHSS) journal.

We've all been there. We've all had situations in which students scored on the low end of normal, or had a few subtest scores in the below-average range that nonetheless yielded an average total score. We've all pored over eligibility requirements trying to figure out whether a student should receive therapy services given the stringent standardized-testing criteria in some states/districts.

Of course, as it turns out, the answer is never simple.  In 2006, Spaulding, Plante & Farinella set out to examine the assumption: “that children with language impairment will receive low scores on standardized tests, and therefore [those] low scores will accurately identify these children” (61).   So they analyzed the data from 43 commercially available child language tests to identify whether evidence exists to support their use in identifying language impairment in children.

It turns out it did not! Due to variation in the psychometric properties of the various tests (see the article for specific details), many children with language impairment are overlooked by standardized tests, receiving scores within the average range or scores not low enough to qualify for services. Thus, "the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given." Furthermore, "even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given" (69).

Consequently, they created a decision tree (see below) with recommendations for clinicians using standardized testing. They recommend using alternate sources of data (sensitivity and specificity rates) to support accurate identification (available for a small subset of select tests).

The idea behind it is: “if sensitivity and specificity data are strong, and these data were derived from subjects who are comparable to the child tested, then the clinician can be relatively confident in relying on the test score data to aid his or her diagnostic decision. However, if the data are weak, then more caution is warranted and other sources of information on the child’s status might have primacy in making a diagnosis (70).”

Fast forward 6 years, and a number of newly revised tests later,  in 2012, Spaulding and colleagues set out to “identify various U.S. state education departments’ criteria for determining the severity of language impairment in children, with particular focus on the use of norm-referenced tests” as well as to “determine if norm-referenced tests of child language were developed for the purpose of identifying the severity of children’s language impairment”  (176).

They obtained published procedures for severity determinations from available U.S. state education departments, which specified the use of norm-referenced tests, and reviewed the manuals for 45 norm-referenced tests of child language to determine if each test was designed to identify the degree of a child’s language impairment.

What they found out was “the degree of use and cutoff-point criteria for severity determination varied across states. No cutoff-point criteria aligned with the severity cutoff points described within the test manuals. Furthermore, tests that included severity information lacked empirical data on how the severity categories were derived (176).”

Thus, they urged SLPs to exercise caution in determining the severity of children’s language impairment via norm-referenced test performance, “given the inconsistency in guidelines and lack of empirical data within test manuals to support this use (176).”

Following the publication of this article, Ireland, Hall-Mills & Millikin issued a response to the Spaulding and colleagues article. They pointed out that the “severity of language impairment is only one piece of information considered by a team for the determination of eligibility for special education and related services.” They noted that Spaulding and colleagues left out a host of federal and state guideline requirements and “did not provide an analysis of the regulations governing special education evaluation and criteria for determining eligibility (320).” They pointed out that “IDEA prohibits the use of ‘any single measure or assessment as the sole criterion’” for determination of disability and requires that IEP teams “draw upon information from a variety of sources.”

They listed a variety of examples from several different state departments of education (FL, NC, VA, etc.), which mandate the use of functional assessments, dynamic assessments, criterion-referenced assessments, etc., for the determination of language therapy eligibility.

But are SLPs across the country appropriately using the federal and state guidelines to determine eligibility? While one should certainly hope so, that does not always seem to be the case.  To illustrate, in 2013, Betz and colleagues asked 364 SLPs to complete a survey “regarding how frequently they used specific standardized tests when diagnosing suspected specific language impairment (SLI)” (133).

Their purpose was to determine “whether the quality of standardized tests, as measured by the test’s psychometric properties, is related to how frequently the tests are used in clinical practice” (133).

They found that the most frequently used tests were comprehensive assessments, including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale, as well as one-word vocabulary tests such as the Peabody Picture Vocabulary Test. Furthermore, the date of publication seemed to be the only factor that affected the frequency of test selection.

They also found that SLPs frequently did not follow up comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used vocabulary testing as a second measure. They were understandably puzzled by that finding: “The emphasis placed on vocabulary measures is intriguing because although vocabulary is often a weakness in children with SLI (e.g., Stothard et al., 1998), the research to date does not show vocabulary to be more impaired than other language domains in children with SLI” (140).

According to the authors, “perhaps the most discouraging finding of this study was the lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores (141).”

If SLPs have not significantly changed their practices in the years since, the above is certainly disheartening, as it implies that rather than acting as true diagnosticians, SLPs are using whatever is at hand, purchased by their department, to indiscriminately assess students with suspected speech-language disorders. If that is truly the case, it calls into question Ireland, Hall-Mills & Millikin’s response to Spaulding and colleagues.  In other words, though SLPs are aware that they need to comply with state and federal regulations when it comes to unbiased and targeted assessments of children with suspected language disorders, they may not actually be using appropriate standardized testing, much less supplementary informal assessments (e.g., dynamic assessment, narrative assessment, language sampling), to administer well-rounded evaluations.

So where do we go from here? Well, it’s quite simple really!   We already know what the problem is. Based on the above articles we know that:

  1. Standardized tests possess significant limitations
  2. They are not used with optimal effectiveness by many SLPs
  3. They are often not supplemented by relevant and targeted informal assessment measures that would improve the accuracy of disorder determination and subsequent therapy-eligibility decisions

Now that we have identified the problem, we need to develop and consistently implement effective practices to ameliorate it.  These include researching the psychometric properties of tests (sample size, sensitivity, specificity, etc.), using domain-specific assessments to supplement the administration of comprehensive testing, and supplementing standardized testing with a range of functional assessments.

SLPs can review testing manuals and consult with colleagues when they feel that the standardized testing is underidentifying students with language impairments (e.g., HERE and HERE).  They can utilize referral checklists (e.g., HERE) in order to pinpoint the students’ most significant difficulties. Finally, they can develop and consistently implement informal assessment practices (e.g., HERE and HERE) during testing in order to gain a better grasp on their students’ TRUE linguistic functioning.

Stay tuned for the second portion of this post entitled: “What Research Shows About the Functional Relevance of Standardized Speech Tests?” to find out the best practices in the assessment of speech sound disorders in children.

References:

  1. Spaulding, Plante & Farinella (2006) Eligibility Criteria for Language Impairment: Is the Low End of Normal Always Appropriate?
  2. Spaulding, Szulga, & Figueroa (2012) Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers
  3. Ireland, Hall-Mills & Millikin (2012) Appropriate Implementation of Severity Ratings, Regulations, and State Guidance: A Response to “Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers” by Spaulding, Szulga, & Figueroa (2012)
  4. Betz et al. (2013) Factors Influencing the Selection of Standardized Tests for the Diagnosis of Specific Language Impairment

 


Teaching Punctuation for Writing Success

Last week I wrote a blog post entitled “Teaching Metalinguistic Vocabulary for Reading Success,” in which I described the importance of explicitly teaching students basic metalinguistic vocabulary terms as elementary building blocks needed for reading success (HERE).  This week I wanted to write a brief blog post regarding terminology related to one particular, often ignored aspect of writing: punctuation.

Punctuation brings written words to life. As we have seen from countless grammar memes, an error in punctuation can convey a completely different meaning.

In my experience administering the Test of Written Language – 4 (TOWL-4) as well as analyzing informal writing samples, I frequently see an almost complete absence of punctuation marks in the presented writing samples.  These are not the samples of 2nd, 3rd, or even 4th graders that I am referring to. Sadly, I’m referring to written samples of students in middle school and even high school, which frequently lack basic punctuation and capitalization.

Explicit instruction of punctuation terminology significantly improves my students’ understanding of sentence formation. Even my students with mild to moderate intellectual disabilities benefit significantly from understanding how to use periods, commas, and question marks in sentences.

I even created a basic handout to facilitate my students’ comprehension of punctuation mark usage in sentences (FREE HERE).

As with my metalinguistic vocabulary handout, I ask my older elementary-aged students with average IQ to look up online and write down the rules of usage for each of the provided terms (e.g., colon, hyphen, etc.), under therapist supervision.

This in turn becomes a critical thinking and executive functions activity. Students need to sift through quite a bit of information to find a website that provides the clearest answers regarding the usage of specific punctuation marks. Here, it’s important for students to locate kid-friendly websites that provide simple but accurate descriptions of punctuation mark usage.  One example of such a website is Enchanted Learning, which also provides free worksheets for practicing punctuation usage.

In contrast to the above, I use structured worksheets and punctuation-related workbooks for younger elementary-age students (e.g., 1st–5th grades) as well as older students with intellectual impairments (click on each grade number above to see the workbooks).

I find that after only several sessions of explicitly teaching punctuation usage to my students, their written sentences improve significantly in clarity and cohesion.

One of the best parts of this seemingly simple activity is that, due to the sheer volume of provided punctuation mark vocabulary (20 items in total), a creative clinician/parent can stretch it into multiple therapy sessions. This is because careful rule identification for each punctuation mark will in turn involve a number of related vocabulary definition tasks.  Furthermore, correct usage of each punctuation mark in a sentence for internalization purposes (rather than mere memorization) will also take up a significant period of time.

How about you? Do you explicitly work on teaching punctuation?


Teaching Metalinguistic Vocabulary for Reading Success

In my therapy sessions I spend a significant amount of time improving the literacy skills (reading, spelling, and writing) of language impaired students.  In my work with these students I emphasize goals with a focus on phonics, phonological awareness, encoding (spelling), etc. However, what I have frequently observed in my sessions are significant gaps in the students’ foundational knowledge pertaining to the basics of sound production and letter recognition.  Basic examples of these foundational deficiencies involve students not being able to fluently name the letters of the alphabet, understand the difference between vowels and consonants, or fluently engage in sound/letter correspondence tasks (e.g., name a letter and then quickly and accurately identify which sound it makes).  Consequently, a significant portion of my sessions involves explicit instruction of the above concepts.

This got me thinking about my students’ vocabulary knowledge in general.  We, SLPs, spend a significant amount of time on explicit and systematic vocabulary instruction with our students because, as compared to typically developing peers, they have immature and limited vocabulary knowledge. But do we teach our students the abstract vocabulary necessary for reading success? Do we explicitly teach them the definitions of a letter, a word, a sentence, etc.?

A number of my colleagues are skeptical. “Our students already have poor comprehension”, they tell me, “Why should we tax their memory with abstract words of little meaning to them?”  And I agree with them of course, but up to a point.

I agree that our students have working memory and processing speed deficits as a result of which they have a much harder time learning and recalling new words.

However, I believe that not teaching them meanings of select words pertaining to language is a huge disservice to them. Here is why. To be a successful communicator, speaker, reader, and writer, individuals need to possess adequate metalinguistic skills.

In simple terms, “metalinguistics” refers to the individual’s ability to actively think about, talk about, and manipulate language. Reading, writing, and spelling require active awareness and thought about language. Students with poor metalinguistic skills have difficulty learning to read, write, and spell.  They lack awareness that spoken words are made up of individual units of sound, which can be manipulated. They lack awareness that letters form words, words form phrases and sentences, and sentences form paragraphs. They may not understand that letters make sounds or that a word may consist of more letters than sounds (e.g., “ship” has four letters but three sounds). The bottom line is that students with decreased metalinguistic skills cannot effectively use language to talk about concepts like sounds, letters, or words unless they are explicitly taught those abilities.

So I do! Furthermore, I can tell you that explicit instruction of metalinguistic vocabulary does significantly improve my students’ understanding of the tasks involved in attaining literacy competence. Even my students with mild to moderate intellectual disabilities significantly benefit from understanding the meanings of letters, words, sentences, etc.

I even created a basic abstract vocabulary handout to facilitate my students’ comprehension of these words (FREE HERE). While by no means exhaustive, it is a decent starting point for teaching my students the vocabulary needed to improve their metalinguistic skills.

For older elementary-aged students with average IQ, I only provide the words I want them to define, and then ask them to look up their meanings online using a PC or an iPad. This turns the vocabulary activity into a critical thinking and executive functions task.

Students need to figure out the appropriate search string needed to locate the answer, as well as which definition comes closest to clearly and effectively defining the presented word. One of the things I really like about the Google online dictionary is that it provides multiple definitions of the same word along with word origins. As a result, it teaches students to carefully review and reflect upon their selected definition in order to determine its appropriateness.

A word of caution, though, regarding Kiddle, the Google-powered search engine for children. While it’s great for locating child-friendly images, it is not appropriate for locating abstract definitions of words. To illustrate, when you type the search string “what is the definition of a letter?” into Google, you will get several responses which appropriately match some meanings of your query.  However, the same search string in Kiddle will merely yield helpful tips on writing a letter as well as images of envelopes with stamps affixed to them.

In contrast to the above, I use more structured vocabulary-defining activities for younger elementary-age students as well as students with intellectual impairments. I provide simple definitions of abstract words, attach images and examples to each definition, and create cloze activities with several answer choices in order to ensure my students’ comprehension of these words.

I find that this and other metalinguistic activities significantly improve my students’ comprehension of abstract words such as ‘communication’, ‘language’, and ‘literacy’. They cease being mere buzzwords, frequently heard yet consistently not understood.  To my students these words begin to come to life, brim with meaning, and inspire numerous ‘aha’ moments.

Now that you’ve had a glimpse of my therapy sessions I’d love to have a glimpse of yours. What metalinguistic goals related to literacy are you targeting with your students? Comment below to let me know.

 


The Limitations of Using Total/Core Scores When Determining Speech-Language Eligibility

In both of the settings where I work, a psychiatric outpatient school as well as private practice, I spend a fair amount of time reviewing speech-language evaluation reports.  As I look at these reports, I see that many examiners choose to base their decision making with respect to speech-language service eligibility on the students’ core, index, or total scores, which are composite scores. For those who are not familiar with this term, composite scores are standard scores based on the sum of various test scaled scores.

When the student displays average abilities on all of the presented subtests, use of composite scores clearly indicates that the child does not present with deficits and thereby is not eligible for therapy services.

The same goes for the reverse: when the child displays a pattern of deficits that places their total score well below the average range of functioning, it indicates that the child is performing poorly and requires therapy services.

However, there’s also a third scenario, which presents a cause for concern: namely, when students display a pattern of strengths and weaknesses on a variety of subtests but end up with average/low-average total scores, making them ineligible for services.

Results of the Test of Problem Solving-3: Elementary (TOPS-3)

| Subtest | Raw Score | Standard Score | Percentile Rank | Description |
| --- | --- | --- | --- | --- |
| Making Inferences | 19 | 83 | 12 | Below Average |
| Sequencing | 22 | 86 | 17 | Low Average |
| Negative Questions | 21 | 95 | 38 | Average |
| Problem Solving | 21 | 90 | 26 | Average |
| Predicting | 18 | 92 | 29 | Average |
| Determining Causes | 13 | 82 | 11 | Below Average |
| Total Test | 114 | 86 | 18 | Low Average |

Results of the Test of Reading Comprehension-Fourth Edition (TORC-4)

| Subtest | Raw Score | Scaled Score | Percentile Rank | Description |
| --- | --- | --- | --- | --- |
| Relational Vocabulary | 24 | 9 | 37 | Average |
| Sentence Completion | 25 | 9 | 37 | Average |
| Paragraph Construction | 41 | 12 | 75 | Average |
| Text Comprehension | 21 | 7 | 16 | Below Average |
| Contextual Fluency | 86 | 6 | 9 | Below Average |
| Reading Comprehension Index | | 90 (standard score) | | Average |

The above tables, taken from different evaluations, perfectly illustrate such a scenario. While we see that their total/index scores are within average range, the first student has displayed a pattern of strengths and weaknesses across various subtests of the TOPS-3, while the second one displayed a similar performance pattern on the TORC-4.
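The masking effect is easy to demonstrate with the TOPS-3 subtest scores above. A minimal sketch (note: actual composites are normed on the sum of scaled scores, not computed as a simple mean of standard scores, so this only approximates the mechanism):

```python
# TOPS-3 subtest standard scores from the table above
subtests = {
    "Making Inferences": 83,
    "Sequencing": 86,
    "Negative Questions": 95,
    "Problem Solving": 90,
    "Predicting": 92,
    "Determining Causes": 82,
}

mean_score = sum(subtests.values()) / len(subtests)
weak = [name for name, s in subtests.items() if s < 85]  # clearly weak subtests

print(f"Approximate composite: {mean_score:.0f}")  # Approximate composite: 88
print(f"Subtests below 85: {weak}")
```

Two clearly below-average subtests disappear into a "low average" overall figure, which is exactly why the total score alone cannot be the basis for an eligibility decision.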

Typically in such cases, clinical judgment dictates a number of options:

  1. Administration of another standardized test further probing into related areas of difficulty (e.g., in such situations the administration of a social pragmatic standardized test may reveal a significant pattern of weaknesses which would confirm the student’s eligibility for language therapy services).
  2. Administration of informal/dynamic assessments/procedures further probing into the student’s critical thinking/verbal reasoning skills.

Here is the problem though: I only see the above follow-up steps in a small percentage of cases. In the vast majority of cases in which score discrepancies occur, I see the examiners ignoring the weaknesses without follow-up. This of course results in the child not qualifying for services.

So why do such practices frequently take place? Is it because SLPs want to deny children services?  Not at all! The vast majority of SLPs I have had the pleasure of interacting with are deeply caring and concerned individuals who only want what’s best for the student in question. Oftentimes, I believe the problem lies with misinterpretation of, or rigid adherence to, the state educational code.

For example, most NJ SLPs know that the New Jersey State Education Code dictates that initial eligibility must be determined via the use of two standardized tests, on which the student must perform 1.5 standard deviations below the mean (or below the 10th percentile).  Based on such phrasing, it is reasonable to assume that any child who receives total scores above the 10th percentile on two standardized tests will not qualify for services. Yet this is completely incorrect!
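Incidentally, on the scale most language tests use (mean 100, standard deviation 15), those two cutoffs are not even the same number, assuming normally distributed scores. A quick check with Python's standard library illustrates:

```python
from statistics import NormalDist

scores = NormalDist(mu=100, sigma=15)  # typical standard-score scale

# Cutoff 1: 1.5 standard deviations below the mean
cutoff_1_5_sd = 100 - 1.5 * 15  # 77.5
percentile_at_cutoff = scores.cdf(cutoff_1_5_sd) * 100
print(f"1.5 SD below the mean = {cutoff_1_5_sd} "
      f"(~{percentile_at_cutoff:.1f}th percentile)")

# Cutoff 2: the 10th percentile corresponds to a higher standard score
score_at_10th = scores.inv_cdf(0.10)
print(f"10th percentile ≈ standard score {score_at_10th:.1f}")
```

In other words, “below 1.5 SD” (score under 77.5, roughly the 7th percentile) is a stricter criterion than “below the 10th percentile” (score under about 81), which is one more reason to read eligibility language closely rather than mechanically.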

Let’s take a closer look at the clarification memo issued on October 6, 2015, by the New Jersey Department of Education in response to misinterpretation of the NJ education code. Here is what it actually states:

“In accordance with this regulation, when assessing for a language disorder for purposes of determining whether a student meets the criteria for communication impaired, the problem must be demonstrated through functional assessment of language in other than a testing situation and performance below 1.5 standard deviations, or the 10th percentile on at least two standardized language tests, where such tests are appropriate, one of which shall be a comprehensive test of both receptive and expressive language.”

“When implementing the requirement with respect to “standardized language tests,” test selection for evaluation or reevaluation of an individual student is based on various factors, including the student’s ability to participate in the tests, the areas of suspected language difficulties/deficits (e.g., morphology, syntax, semantics, pragmatics/social language) and weaknesses identified during the assessment process which require further testing, etc. With respect to test interpretation and decision-making regarding eligibility for special education and related services and eligibility for speech-language services, the criteria in the above provision do not limit the types of scores that can be considered (e.g., index, subtest, standard score, etc.).”

Firstly, it emphasizes functional assessments. This doesn’t mean that assessments should be exclusively standardized; rather, it calls for the most appropriate procedures for the student in question, be they standardized or nonstandardized.

Secondly, it does not limit standardized assessment to two tests only. Rather, it uses the phrase “at least” to specify the minimum number of tests needed.

Thirdly, it explicitly references following up on any weaknesses displayed by the student during standardized testing in order to get to the root of the problem.

Fourthly, it specifies that SLPs must assess all displayed areas of difficulty (e.g., social communication) rather than assessing general language abilities only.

Finally, it explicitly points out that SLPs cannot limit their test interpretation to total scores but must look at the testing results holistically, taking into consideration the student’s entire assessment performance.

The problem is that if SLPs only look at total/core scores, then numerous children with linguistically based deficits will fall through the cracks.  We are talking about children with social communication deficits, children with reading disabilities, children with general language weaknesses, etc.  These students may be displaying average total scores, but they may also be displaying significant subtest weaknesses. Unless these weaknesses are accounted for and remediated, they are not going to magically disappear or resolve on their own. In fact, both research and clinical judgment dictate that these weaknesses will worsen over time and will continue to adversely impact both social communication and academics.

So the next time you see a pattern of strengths and weaknesses in testing, even if it amounts to an average total score, I urge you to dig deeper. I urge you to investigate why this pattern is displayed in the first place. The same goes for you, parents! If you are looking at average total scores but seeing unexplained weaknesses in select testing areas, start asking questions! Ask the professional to explain why those deficits are occurring, and tell them to dig deeper if you are not satisfied with what you are hearing. All students deserve access to FAPE (Free and Appropriate Public Education). This includes access to the appropriate therapies they may need in order to function optimally in the classroom.

I urge my fellow SLPs to carefully study their respective state codes and to know who their state educational representatives are. These are the professionals SLPs can contact with questions regarding educational code clarification.  For example, the SEACDC Consultant for the state of New Jersey is currently Fran Liebner (phone: 609-984-4955; fax: 609-292-5558; e-mail: [email protected]).

However, the Department of Education is not the only place SLPs can contact in their state.  Numerous state associations worked diligently on behalf of SLPs by liaising with the departments of education in order to have access to up to date information pertaining to school services. In the state of New Jersey, the School Affairs Committee (SAC) of the New Jersey Speech-Language-Hearing Association (NJSHA), has developed a number of documents of interest for the school-based SLPs which can be found HERE.

For those SLPs located in states other than New Jersey, ASHA helpfully provides contact information by state HERE.

When it comes to score interpretation, there are a variety of options available to SLPs in addition to the detailed reading of the test manual. We can use them to ensure that the students we serve experience optimal success in both social and academic settings.



Intervention at the Last Moment or Why We Need Better Preschool Evaluations

“Well, the school did their evaluations and he doesn’t qualify for services,” a parent of a newly admitted private practice client, a 3.5-year-old boy, tells me.  “I just don’t get it,” she says bemusedly. “It is so obvious to anyone who spends even 10 minutes with him that his language is nowhere near other kids his age! How can this happen?” she asks, frustrated.

This parent is not alone in her sentiment. In my private practice I frequently see preschool children with speech-language impairments who for all intents and purposes should have qualified for preschool-based speech-language services but do not, due to questionable testing practices.

To illustrate, several years ago in my private practice, I started seeing a young preschool girl, 3.2 years of age. Just prior to turning 3, she underwent a collaborative school-based social, psychological, educational, and speech-language evaluation.  The four evaluators collectively used only one standardized assessment instrument, the Battelle Developmental Inventory – Second Edition (BDI-2), along with a limited ‘structured observation.’ Without performing any functional or dynamic assessments, they found the child ineligible for services on account of a low average total score on the BDI-2.

However, during the first session working 1:1 with this client at the age of 3.2, a number of things became very apparent.  The child had very limited, highly echolalic verbal output composed primarily of one-word utterances and select two-word phrases.  She had a highly limited receptive vocabulary and could not consistently point to basic pictures denoting common household objects and items (e.g., chair, socks, clock, sun, etc.).  Similarly, she exhibited a number of expressive inconsistencies when labeling simple nouns (e.g., she called a tree a flower, a monkey a dog, and a sofa a chair).  Clearly this child’s abilities were nowhere near age level, so how could she possibly not qualify for preschool-based services?

Further work with the child over the next several years yielded slow, labored, and inconsistent gains in the areas of listening, speaking, and social communication.  I also had a number of concerns regarding her intellectual abilities, which I shared with the parents.  Finally, two years after preschool eligibility was denied to this child, she underwent a second round of evaluations with the school district, at the age of 5.2.

This time around she qualified with bells on! The same speech-language pathologist and psychologist who had assessed her the first time around, two years prior, now readily documented significant communication deficits (Preschool Language Scale-5 (PLS-5) scores at the 1st percentile of functioning) and cognitive deficits (Full Scale Intelligence Quotient (FSIQ) in the low 50s).

Here is the problem though. This is not a child who had suddenly regressed in her abilities.  This is a child who had actually improved her abilities in all language domains due to private language therapy services.  Her deficits very clearly existed at the time of her first school-based assessment and had continued to persist over time. For two years this child could have significantly benefited from a free and appropriate education in a school setting, which was denied to her due to highly limited preschool assessment practices.

Today, I am writing this post to shed light on this issue, which I’m pretty certain is not just confined to the state of New Jersey.  I am writing this post not simply to complain but to inform parents and educators alike on what actually constitutes an appropriate preschool speech-language assessment.

As per NJAC 6A:14-2.5  Protection in evaluation procedures (pgs. 29-30)

(a) In conducting an evaluation, each district board of education shall:

  1. Use a variety of assessment tools and strategies to gather relevant functional and developmental information, including information:
     • Provided by the parent that may assist in determining whether a child is a student with a disability and in determining the content of the student’s IEP; and
     • Related to enabling the student to be involved in and progress in the general education curriculum or, for preschool children with disabilities, to participate in appropriate activities;
  2. Not use any single procedure as the sole criterion for determining whether a student is a student with a disability or determining an appropriate educational program for the student; and
  3. Use technically sound instruments that may assess the relative contribution of cognitive and behavioral factors, in addition to physical or developmental factors.

Furthermore, according to the New Special Education Code, N.J.A.C. 6A:14-3.5(c)10 (please refer to your state’s eligibility criteria to find similar guidelines), the eligibility of a “preschool child with a disability” applies to any student between 3–5 years of age with an identified disabling condition adversely affecting learning/development (e.g., a genetic syndrome), a 33% delay in one developmental area, or a 25% delay in two or more of the developmental areas listed below:

  1. Physical, including gross/fine motor and sensory (vision and hearing)
  2. Intellectual
  3. Communication
  4. Social/emotional
  5. Adaptive

These delays can be receptive (listening) or expressive (speaking) and need not be based on a total test score, but rather on all testing findings, with a minimum of at least two assessments being performed.  A determination of adverse impact in academic and non-academic areas (e.g., social functioning) needs to take place in order for special education and related services to be provided.  Additionally, a delay in articulation can serve as a basis for consideration of eligibility as well.
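The delay percentages above translate into age equivalents with simple arithmetic. A hedged sketch of that computation (age-equivalent scores have well-known limitations, so this illustrates the criterion only, not a recommended diagnostic procedure):

```python
def delay_percent(chronological_months: float, age_equivalent_months: float) -> float:
    """Percent developmental delay relative to chronological age."""
    return (chronological_months - age_equivalent_months) / chronological_months * 100

# A 36-month-old functioning like a 24-month-old in one area:
print(f"{delay_percent(36, 24):.0f}% delay")  # 33% delay -> meets the one-area criterion

# The same child functioning like a 27-month-old in two or more areas:
print(f"{delay_percent(36, 27):.0f}% delay")  # 25% delay -> meets the two-area criterion
```

Put differently, a three-year-old qualifies under the one-area criterion when functioning at or below a two-year-old level in that area.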

Moreover, according to the State Education Agencies Communication Disabilities Council (SEACDC) Consultant for NJ, Fran Liebner, the BDI-2 is not the only test which can be used to determine eligibility, since the nature and scope of the evaluation must be determined based on parent, teacher, and IEP team feedback.

In fact, New Jersey’s Special Education Code, N.J.A.C. 6A:14, prescribes no specific test in its eligibility requirements.  While it is true that for NJ districts participating in Indicator 7 (Preschool Outcomes) the BDI-2 is a required collection tool, it does NOT preclude the team from deciding what other diagnostic tools are needed to assess all areas of suspected disability to determine eligibility.

Speech pathologists have many tests available to them when assessing young preschool children 2 to 6 years of age.

SELECT SPEECH PATHOLOGY TESTS FOR PRESCHOOL CHILDREN (2-6 years of age)

 Articulation:

  • Sunny Articulation Test (SAPT) Ages: All (nonstandardized)
  • Clinical Assessment of Articulation and Phonology-2 (CAAP-2) Ages: 2.6+
  • Linguisystems Articulation Test (LAT) Ages: 3+
  • Goldman Fristoe Test of Articulation-3 (GFTA-3)    Ages: 2+

 Fluency:

  • Stuttering Severity Instrument -4 (SSI-4) Ages: 2+
  • Test of Childhood Stuttering (TOCS) Ages 4+

General Language: 

  • Preschool Language Assessment Instrument-2 (PLAI-2)  Ages: 3+
  • Clinical Evaluation of Language Fundamentals -Preschool 2 (CELF-P2) Ages: 3+
  • Test of Early Language Development, Third Edition (TELD-3) Ages: 2+
  • Test for Auditory Comprehension of Language, Fourth Edition (TACL-4) Ages: 3+
  • Preschool Language Scale-5 (PLS-5)* (use with extreme caution) Ages: Birth-7:11

Vocabulary

  • Receptive One-Word Picture Vocabulary Test-4 (ROWPVT-4)  Ages 2+
  • Expressive One-Word Picture Vocabulary Test-4 (EOWPVT-4) Ages 2+
  • Montgomery Assessment of Vocabulary Acquisition (MAVA) Ages 3+
  • Test of Word Finding-3 (TWF-3) Ages 4.6+

Auditory Processing and Phonological Awareness

  • Auditory Skills Assessment (ASA)    Ages 3:6+
  • Test of Auditory Processing Skills-3 (TAPS-3) Ages 4+
  • Comprehensive Test of Phonological Processing-2 (CTOPP-2) Ages 4+

Pragmatics/Social Communication

  • Language Use Inventory (LUI) (O’Neill, 2009) Ages 18-47 months
  • Children’s Communication Checklist-2 (CCC-2) (Bishop, 2006) Ages 4+

In addition to administering standardized testing, SLPs should also use play scales (e.g., Westby Play Scale, 1980) to assess the given child’s play abilities. This is especially important given that “play, both functional and symbolic, has been associated with language and social communication ability” (Toth et al., 2006, p. 3).

Finally, by showing children simple wordless picture books, SLPs can also obtain a wealth of information regarding the child’s utterance length as well as narrative abilities (a narrative assessment can be performed on a verbal child as young as two years of age).

Comprehensive school-based speech-language assessments should be the norm and not the exception when determining preschoolers’ eligibility for speech-language services and special education classification.

Consequently, let us ensure that our students receive fair and adequate assessments so they have access to the best classroom placements, appropriate accommodations and modifications, as well as targeted and relevant therapeutic services. Anything less will lead to the denial of the Free Appropriate Public Education (FAPE) to which all students are entitled!



What’s Memes Got To Do With It?

Today, after a long hiatus, I am continuing my series of blog posts on “Scholars Who Do Not Receive Enough Mainstream Exposure” by summarizing select key points from Dr. Alan G. Kamhi’s 2004 article, “A Meme’s Eye View of Speech-Language Pathology”.


Some of you may be wondering: “Why is she reviewing an article that is more than a decade old?” The answer is simple.  It is just as relevant today, if not more so, as it was 12 years ago when it first came out.

In this article, Dr. Kamhi asks a provocative question: “Why do some terms, labels, ideas, and constructs [in the field of speech pathology] prevail whereas others fail to gain acceptance?”

He attempts to answer this question by explaining the vital role that memes play in the evolution and spread of ideas.

A meme (shortened from the Greek mimeme, “to imitate”) is “an idea, behavior, or style that spreads from person to person within a culture”. The term was originally coined by British evolutionary biologist Richard Dawkins in The Selfish Gene (1976) to explain the spread of cultural phenomena such as tunes, ideas, catchphrases, customs, etc.

‘Selfish’ in this case means that memes “care only about their own self-replication”.  Consequently, “successful memes are those that get copied accurately (fidelity), have many copies (fecundity), and last a long time (longevity).” Therefore, “memes that are easy to understand, remember, and communicate to others” have the highest chance of survival and replication (pp. 105-106).

So what were some of the more successful memes Dr. Kamhi identified in his article, which still persist more than a decade later?

  • Learning Disability
  • Auditory Processing Disorder
  • Sensory Integration Disorder
  • Dyslexia
  • Articulation disorder
  • Speech Therapist/ Pathologist

Interestingly, the losers of the “contest” were memes that contained the word ‘language’:

  • Language disorder
  • Language learning disability
  • Speech-language pathologist (albeit this term has gained far more acceptance in the past decade)

Dr. Kamhi further asserts that ‘language-based disorders have failed to become a recognizable learning problem in the community at large‘ (p.106).

So why are labels with the word ‘language’ NOT successful memes?

According to Dr. Kamhi, that is because “language-based disorders must be difficult to understand, remember, and communicate to others”. Professional (SLP) explanations of what constitutes language are lengthy and complex (e.g., ASHA’s comprehensive definition) and, as a result, are not frequently applied in clinical practice, even when their components are familiar to SLPs.

Some scholars have suggested that the common practice of evaluating language with standardized tools restricts full understanding of the interactions of all of its domains (“within larger sociocultural contexts”) because such tools only examine isolated aspects of language (Apel, 1999).

Dr. Kamhi, in turn explains this within the construct of the memetic theory: namely “simple constructs are more likely to replicate than complex ones.” In other words: “even professionals who understand language may have difficulty communicating its meaning to others and applying this meaning to clinical practice” (p. 107).

Let’s talk about parents who are interested in learning the root cause of their child’s difficulty learning and using language.  Based on the specific child’s genetic and developmental background as well as presenting difficulties, an educated clinician can explain to the parent the multifactorial nature of their child’s deficits.

However, these informed explanations are frequently complex and in no way simplistic. As a result, many parents will still attempt to seek out other professionals who can readily provide them with a “straightforward explanation” of their child’s difficulty.  Since parents are “ultimately interested in finding the most effective and efficient treatment for their children”, it makes sense to believe/hope that “the professional who knows the cause of the problem will also know the most effective way to treat it” (p. 107).

This brings us back to the concept of successful memes such as Auditory Processing Disorder (C/APD) as well as Sensory Processing Disorder (SPD) as isolated diagnoses.

Here are just some of the reasons behind their success:

  • They provide a simple solution (which is not necessarily a correct one): “the learning problem is the result of difficulty processing auditory information or difficulty integrating sensory information”
  • The assumption is that “improving auditory processing and sensory integration abilities” will improve learning difficulties
  • “APD and SID each have only one cause”, so “finding an appropriate treatment …seems more feasible because there is only one problem to eliminate”
  • They give parents “a sense of relief” that they finally have an “understandable explanation for what is wrong with their child”
  • They give parents hope that the “diagnosis will lead to successful remediation of the learning problem”

For more information on why APD and SPD are not valid stand-alone diagnoses please see HERE and HERE respectively.

A note on the lack of success of “phonological” memes:

  • They are difficult to understand and explain (especially due to a lack of consensus on what constitutes a phonological disorder)
  • Lack of familiarity with the term ‘phonological’ results in poor comprehension of the “phonological bases of reading problems”, since it is “much easier to associate reading with visual processing abilities, good instruction, and a literacy rich environment” (p. 108).

Let’s talk about MEMEPLEXES (Blackmore, 1999), or what occurs when “nonprofessionals think they know how children learn language and the factors that affect language learning” (Kamhi, 2004, p. 108).

A memeplex is a group of memes which becomes much more memorable to individuals (i.e., can replicate more efficiently) as a team than in isolation.

Why is APD Memeplex So Appealing? 

According to Dr. Kamhi, if one believes that ‘(a) sounds are the building blocks of speech and language and (b) children learn to talk by stringing together sounds and constructing meanings out of strings of sounds’ (both wrong assumptions), then it is quite a simple leap to the following fallacies:

  • Auditory processing abilities are not influenced by language knowledge
  • You can reliably discriminate between APD and language deficits
  • You can validly and reliably assess “uncontaminated” auditory processing abilities and thus diagnose stand-alone APD
  • You can target auditory abilities in isolation without targeting language
  • Improvements in discrimination and identification of speech sounds will lead to improvements in speech and language abilities

For more detailed information on why the above is incorrect, click HERE.

On the success of the Dyslexia Meme:

  • Most nonprofessionals view dyslexia as a visually based “reading problem characterized by letter reversals and word transpositions that affects bright children and adults”
  • It is highly appealing due to the simple nature of its diagnosis (high intelligence and poor reading skills)
  • The ‘diagnosis of dyslexia has historically been made by physicians and psychologists rather than educators’, which makes memetic replication highly successful
  • The ‘dyslexic’ label is far more appealing and desirable than calling oneself ‘reading disabled’

For more detailed information on why the above is far too simplistic an explanation, click HERE and HERE.

Final Thoughts:

As humans, we engage in the transmission of ideas (good and bad) on a constant basis. The popularity of powerful social media tools such as Facebook and Twitter ensures their instantaneous and far-reaching delivery and impact.  However, “our processing limitations, cultural biases, personal preferences, and human nature make us more susceptible to certain ideas than to others” (p. 110).

As professionals, it is important that we use evidence-based practices and the latest research to evaluate all claims pertaining to the assessment and treatment of language-based disorders. However, as Dr. Kamhi points out (p. 110):

  • “Competing theories may be supported by different bodies of evidence, and the same evidence may be used to support competing theories.”
  • “Reaching a scientific consensus also takes time.”

While these delays may play a negligible role when it comes to scientific research, they pose a significant problem for parents, teachers, and health professionals who are seeking to effectively assist these youngsters on a daily basis. Furthermore, even when select memes such as APD are beneficial because they allow for the delivery of services to a student who may otherwise be ineligible to receive them, erroneous intervention recommendations (e.g., working on isolated auditory discrimination skills) may further delay the delivery of appropriate and targeted intervention services.

So what are SLPs to do in the presence of persistent erroneous memes?

“Spread our language-based memes to all who will listen” (Kamhi, 2004, p. 110), of course! After all, we are the professionals whose job it is to treat any difficulties involving words; our scope of practice certainly includes the assessment, diagnosis, and treatment of children and adults with speaking, listening, reading, writing, and spelling difficulties.

As for myself, I intend to start that task right now by hitting the ‘publish’ button on this post!


References:

Kamhi, A. (2004). A meme’s eye view of speech-language pathology. Language, Speech, and Hearing Services in Schools, 35, 105-112.


Language Processing Deficits (LPD) Checklist for School Aged Children

Need a Language Processing Deficits Checklist for School Aged Children?

You can find it in my online store HERE

This checklist was created to assist speech-language pathologists (SLPs) in figuring out whether a student presents with language processing deficits that require further follow-up (e.g., screening, comprehensive assessment). The SLP should provide this form to both the teacher and the caregiver(s) to fill out, to ensure that the deficit areas are consistent across all settings and people.

Checklist Categories:

  • Listening Skills and Short Term Memory
  • Verbal Expression
  • Emergent Reading/Phonological Awareness
  • General Organizational Abilities
  • Social-Emotional Functioning
  • Behavior
  • Supplemental* Caregiver/Teacher Data Collection Form
  • Select assessments sensitive to Auditory Processing Deficits