
Have I Got This Right? Developing Self-Questioning to Improve Metacognitive and Metalinguistic Skills

Many of my students with Developmental Language Disorder (DLD) lack insight and have poorly developed metalinguistic (the ability to think about and discuss language) and metacognitive (the ability to think about and reflect upon one's own thinking) skills. This, of course, creates a significant challenge for them in both social and academic settings. Not only do they have a poorly developed inner dialogue for critical thinking purposes, but they also present with significant self-monitoring and self-correcting challenges during speaking and reading tasks.

There are numerous therapeutic goals suitable for improving metalinguistic and metacognitive abilities for social and academic purposes. These include repairing communicative breakdowns, adjusting tone of voice to different audiences, repairing syntactically, pragmatically, and semantically incorrect sentences, producing definitions of various figurative language expressions, and much, much more. However, there is one goal which both my students and I find particularly useful and fun for this purpose: the interpretation of ambiguously worded sentences.

Syntactic ambiguity, or amphibology, occurs when a sentence could be interpreted by the listener in a variety of ways due to its ambiguous structure.  Typically, this occurs not due to the range of meanings of single words in a sentence (lexical ambiguity), but rather due to the relationship between the words and clauses in the presented sentence.

This deceptively simple-looking task is actually far more complex than the students realize.  It requires a solid vocabulary base as well as good manipulation of language in order for the students to formulate coherent and cohesive explanations that do not reuse too many parts of the original ambiguously worded sentence.

Very generally speaking, sentence ambiguities can be local or global.  If a sentence is locally ambiguous (aka a "garden path" sentence), the listener's confusion will be cleared up once they hear the entire sentence.   However, if a sentence is globally ambiguous, it will remain ambiguous even after it is heard in its entirety.

Let's take a look at an example of a globally ambiguous phrase, which I read while walking on the beach during my vacation: 'Octopus Boarding'.  Seems innocuous enough, right?  Well, as adults we can immediately come up with a myriad of explanations.  Perhaps that particular spot was a place where people boarded up their octopuses into boxes.  Perhaps the sign indicated that this was a boarding house for octopuses, where they could obtain room and board. Still another explanation is that this is where octopuses went to boarding school, and so on and so forth.  By now you are probably mildly intrigued and would like to find out what the sign actually meant.  In this particular case, it indicated that this was the boarding location for a catamaran named, you guessed it, Octopus!

Of course, when I presented the written text (without the picture) to my 13-year-old students, they had an incredibly difficult time generating even one, much less several, explanations of what this ambiguously phrased statement actually meant. This, of course, gave me the idea not only to have them work on this goal but also to (a) create a list of globally ambiguous sentences and (b) locate websites containing many more ambiguously worded sentences, so I could share them with my fellow SLPs.  A word of caution, though! Make sure to screen the below sentences and website links very carefully in order to determine their suitability for your students in terms of complexity as well as subject matter (use of profanities, adult subject matter, etc.).

Below are 20 ambiguously worded sentences and newspaper or advertisement headlines, compiled from a variety of online sources, for your use.

  1. The professor said on Monday he would give an exam.
  2. The chicken is ready to eat.
  3. The burglar threatened the student with the knife.
  4. Visiting relatives can be boring.
  5. I saw the man with the binoculars.
  6. Look at that bird with one eye.
  7. I watched her duck.
  8. The peasants are revolting.
  9. I saw a man on a hill with a telescope.
  10. He fed her cat food.
  11. Police help dog bite victim.
  12. Enraged cow injures farmer with ax.
  13. Court to try shooting defendant.
  14. Stolen painting found by tree.
  15. Two sisters reunited after 18 years in checkout counter.
  16. Killer sentenced to die for second time in 10 years.
  17. Most parents and doctors trust Tylenol.
  18. Come meet our new French pastry chef.
  19. Robert went to the bank.
  20. I shot an elephant in my pajamas.

You can find hundreds more ambiguously worded sentences in the links below.

  1. Ambiguous newspaper headlines - Catanduanes Tribune (32 sentences)
  2. Ambiguous Headlines - Fun with Words website (33 sentences)
  3. Actual Newspaper Headlines - davidvanalstyne.com website (~100 sentences; *contains adult subject matter)
  4. Linguistic Humor Headlines - Univ. of Penn. Dept. of Linguistics (~120 sentences)
  5. Bonus: Ambiguous words - Dillfrog Muse rhyming dictionary, which happens to be a really cool site that you should absolutely check out.

Interested in creating your own ambiguous sentences? Here is some quick advice: use a telegraphic style and omit copulas, which will, in turn, create syntactic ambiguity.

So now that you have this plethora of sentences to choose from, here's a quick example of how I approach ambiguous sentence interpretation in my sessions. First, I provide the students with a definition and explain that these sentences could mean different things depending on their context. Then, I provide a few examples of ambiguously worded sentences and generate clear, coherent, and cohesive explanations regarding their different meanings.

For example, let's use sentence #19 on my list: 'Robert went to the bank'.  Here I may explain that 'Robert went to visit his financial institution where he keeps his money', or 'Robert went to the bank of a river, perhaps to do some fishing'. Of course, the language that I use with my students varies with their age and level of cognitive and linguistic abilities. I may use the term 'financial institution' with a 14-year-old, but use the explanation 'a bank where Robert keeps his money' with a 10-year-old.

Then I provide my students with select sentences (I try to arrange them in a hierarchy from simple to more complex) and ask them to generate their own explanations of what the sentences could potentially mean.  I also make sure to provide them with plenty of prompts, cues, as well as scaffolding to ensure that they experience success in their explanations.

However, I don't just stop with the oral portion of this goal. Its literacy-based extensions include having the students read the sentences rather than have me present them orally. Furthermore, once the students have provided me with two satisfactory explanations of the presented ambiguous sentence, I ask them to select at least one explanation and clarify it in writing, so the meaning of the sentence becomes clear.

I find that this goal goes a long way in promoting my students' metalinguistic and metacognitive abilities, deepens their insight into their own strengths and weaknesses, and facilitates critical thinking in the form of constant self-questioning as well as the evaluation of self-produced information.  Even students as young as 8-9 years of age can benefit significantly from this goal if it is adapted correctly to meet their linguistic needs.

So give it a try, and let me know what you think!



It’s All Due to …Language: How Subtle Symptoms Can Cause Serious Academic Deficits

Scenario: Len is a 7-year, 2-month-old, 2nd-grade student who struggles with reading and writing in the classroom. He is very bright and has a high average IQ, yet when he is speaking he frequently can't get his point across to others due to excessive linguistic reformulations and word-finding difficulties. The problem is that Len passed all the typical educational and language testing with flying colors, receiving average scores across the board on various tests including the Woodcock-Johnson Fourth Edition (WJ-IV) and the Clinical Evaluation of Language Fundamentals-5 (CELF-5). Stranger still is the fact that he also aced the Comprehensive Test of Phonological Processing, Second Edition (CTOPP-2), so he is not even eligible for a "dyslexia" diagnosis. Len is clearly struggling in the classroom with expressing himself coherently, telling stories, understanding what he is reading, as well as putting his thoughts on paper. His parents have compiled impressively huge folders containing examples of his struggles. Yet because of his performance on the basic standardized assessment batteries, Len does not qualify for any functional assistance in the school setting, despite being virtually functionally illiterate in second grade.

The truth is that Len is quite a familiar figure to many SLPs, who at one time or another have encountered such a student and asked for guidance regarding the appropriate accommodations and services for him on various SLP-geared social media forums. But what makes Len such an enigma, one may inquire? Surely if the child had tangible deficits, wouldn’t standardized testing at least partially reveal them?

Well, it all depends, really, on what type of testing was administered to Len in the first place. A few years ago I wrote a post entitled: "What Research Shows About the Functional Relevance of Standardized Language Tests".  What researchers found is that there is a "lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores" (Betz et al., 2012, p. 141). Furthermore, they also found that the most frequently used tests were comprehensive assessments, including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale, as well as one-word vocabulary tests such as the Peabody Picture Vocabulary Test. The most damaging finding was that "frequently SLPs did not follow up the comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used the vocabulary testing as a second measure" (Betz et al., 2012, p. 140).

In other words, many SLPs only use the tests at hand rather than the RIGHT tests aimed at identifying the student’s specific deficits. But the problem doesn’t actually stop there. Due to the variation in psychometric properties of various tests, many children with language impairment are overlooked by standardized tests by receiving scores within the average range or not receiving low enough scores to qualify for services.

Thus, “the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given.” Furthermore, “even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given (Spaulding, Plante & Farinella, 2006, 69).”

There is, of course, yet another factor affecting our hypothetical client, and that is his relatively young age. This is especially evident with many educational and language tests for children in the 5-7 age group. Because the bar is set so low, concept-wise, for these age groups, many children with moderate language and literacy deficits can pass these tests with flying colors, only to be flagged by them literally two years later and identified with deficits far too late in the game.  Coupled with the fact that many SLPs do not utilize non-standardized measures to supplement their assessments, Len is in a pretty serious predicament.

But what if there was a do-over? What could we do differently for Len to rectify this situation? For starters, we need to pay careful attention to his deficit profile in order to choose appropriate tests to evaluate his areas of need. The above can be accomplished in a number of ways. The SLP can interview Len's teacher and his caregiver/s in order to obtain a summary of his pressing deficits. Depending on the extent of the reported deficits, the SLP can also provide them with a referral checklist to mark off the most significant areas of need.

In Len’s case, we already have a pretty good idea regarding what’s going on. We know that he passed basic language and educational testing, so in the words of Dr. Geraldine Wallach, we need to keep “peeling the onion” via the administration of more sensitive tests to tap into Len’s reported areas of deficits which include: word-retrieval, narrative production, as well as reading and writing.

For that purpose, Len is a good candidate for the administration of the Test of Integrated Language and Literacy Skills (TILLS), which was developed to identify language and literacy disorders, has good psychometric properties, and contains subtests for assessment of relevant skills such as reading fluency, reading comprehension, phonological awareness, spelling, as well as writing in school-age children.

Given Len's reported history of narrative production deficits, he is also a good candidate for the administration of the Social Language Development Test Elementary (SLDTE). Here's why. Research indicates that narrative weaknesses significantly correlate with social communication deficits (Norbury, Gemmell & Paul, 2014). As such, it's not just children with Autism Spectrum Disorders who present with impaired narrative abilities. Many children with developmental language disorder (DLD) (#devlangdis) can present with significant narrative deficits affecting their social and academic functioning, which means that their social communication abilities need to be tested to confirm/rule out the presence of these difficulties.

However, standardized tests are not enough, since even the best-standardized tests have significant limitations. As such, several non-standardized assessments in the areas of narrative production, reading, and writing, may be recommended for Len to meaningfully supplement his testing.

Let's begin with an informal narrative assessment, which provides detailed information regarding microstructural and macrostructural aspects of storytelling as well as the child's thought processes and socio-emotional functioning. My nonstandardized narrative assessments are based on the book elicitation recommendations from the SALT website. For 2nd graders, I use the book by Helen Lester entitled Pookins Gets Her Way. I first read the story to the child, then cover up the words and ask the child to retell the story based on the pictures. I read the story first because "the model narrative presents the events, plot structure, and words that the narrator is to retell, which allows more reliable scoring than a generated story that can go in many directions" (Allen et al., 2012, p. 207).

As the child is retelling his story I digitally record him using the Voice Memos application on my iPhone, for a later transcription and thorough analysis.  During storytelling, I only use the prompts: ‘What else can you tell me?’ and ‘Can you tell me more?’ to elicit additional information. I try not to prompt the child excessively since I am interested in cataloging all of his narrative-based deficits. After I transcribe the sample, I analyze it and make sure that I include the transcription and a detailed write-up in the body of my report, so parents and professionals can see and understand the nature of the child’s errors/weaknesses.

Now we are ready to move on to a brief nonstandardized reading assessment. For this purpose, I often use the books from the Continental Press series entitled Reading for Comprehension, which contains books for grades 1-8.  After I confirm with either the parent or the child's teacher that the selected passage is reflective of the complexity of work presented in the classroom for his grade level, I ask the child to read the text.  As the child is reading, I calculate the number of words he reads correctly per minute as well as note what types of errors he exhibits during reading.  Then I ask the child to state the main idea of the text, summarize its key points, define select text-embedded vocabulary words, and answer a few verbally presented reading comprehension questions. After that, I provide the child with the accompanying five-question multiple-choice worksheet and ask him to complete it. I analyze my results in order to determine whether I have accurately captured the child's reading profile.
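As a quick illustration of the reading rate calculation described above (the numbers here are purely hypothetical and are not drawn from an actual student), the words-correct-per-minute figure can be computed as follows:

WCPM = (total words read - errors) / reading time in minutes = (148 - 8) / 2 = 70 words correct per minute

That rate, together with the qualitative error analysis, can then be informally weighed against grade-level expectations for oral reading fluency.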

Finally, if any additional information is needed, I administer a nonstandardized writing assessment, which I base on the Common Core State Standards for 2nd grade. For this task, I provide the student with a writing prompt typical of second grade and give him a period of 15-20 minutes to generate a writing sample. I then analyze the writing sample with respect to contextual conventions (punctuation, capitalization, grammar, and syntax) as well as story composition (overall coherence and cohesion of the written sample).

The above relatively short assessment battery (2 standardized tests and 3 informal assessment tasks) which takes approximately 2-2.5 hours to administer, allows me to create a comprehensive profile of the child’s language and literacy strengths and needs. It also allows me to generate targeted goals in order to begin effective and meaningful remediation of the child’s deficits.

Children like Len will, unfortunately, remain unidentified unless they are administered more sensitive tasks to better understand their subtle pattern of deficits. Consequently, to ensure that they do not fall through the cracks of our educational system due to misguided overreliance on a limited number of standardized assessments, it is very important that professionals select the right assessments, rather than the assessments at hand, in order to accurately determine the child’s areas of needs.

References:


Making Our Interventions Count or What’s Research Got To Do With It?

Two years ago I wrote a blog post entitled: “What’s Memes Got To Do With It?” which summarized key points of Dr. Alan G. Kamhi’s 2004 article: “A Meme’s Eye View of Speech-Language Pathology“. It delved into answering the following question: “Why do some terms, labels, ideas, and constructs [in our field] prevail whereas others fail to gain acceptance?”.

Today I would like to reference another article by Dr. Kamhi written in 2014, entitled “Improving Clinical Practices for Children With Language and Learning Disorders“.

This article was written to address the gaps between research and clinical practice with respect to the implementation of EBP for intervention purposes.

Dr. Kamhi begins the article by posing 10 True or False questions for his readers:

  1. Learning is easier than generalization.
  2. Instruction that is constant and predictable is more effective than instruction that varies the conditions of learning and practice.
  3. Focused stimulation (massed practice) is a more effective teaching strategy than varied stimulation (distributed practice).
  4. The more feedback, the better.
  5. Repeated reading of passages is the best way to learn text information.
  6. More therapy is always better.
  7. The most effective language and literacy interventions target processing limitations rather than knowledge deficits.
  8. Telegraphic utterances (e.g., push ball, mommy sock) should not be provided as input for children with limited language.
  9. Appropriate language goals include increasing levels of mean length of utterance (MLU) and targeting Brown’s (1973) 14 grammatical morphemes.
  10. Sequencing is an important skill for narrative competence.

Guess what? Only statement 8 of the above quiz is True! Every other statement from the above is FALSE!

Now, let’s talk about why that is!

First up is the concept of learning vs. generalization. Here Dr. Kamhi notes that some clinicians still possess an "outdated behavioral view of learning" in our field, which is not theoretically or clinically useful. He explains that when we talk about generalization, what children truly have difficulty with is "transferring narrow limited rules to new situations". "Children with language and learning problems will have difficulty acquiring broad-based rules and modifying these rules once acquired, and they also will be more vulnerable to performance demands on speech production and comprehension (Kamhi, 1988)" (93). After all, it is not "reasonable to expect children to use language targets consistently after a brief period of intervention", and while we hope that "language intervention [is] designed to lead children with language disorders to acquire broad-based language rules", it is a hugely difficult task to undertake and execute.

Next, Dr. Kamhi addresses the issue of instructional factors, specifically the importance of "varying conditions of instruction and practice".  Here, he points out that while contextualized instruction is highly beneficial to learners, unless we inject variability and modify various aspects of instruction (including context, composition, duration, etc.), we run the risk of limiting our students' long-term outcomes.

After that, Dr. Kamhi addresses the concept of distributed practice (spacing of intervention) and how important it is for teaching children with language disorders. He points out that a number of recent studies have found that “spacing and distribution of teaching episodes have more of an impact on treatment outcomes than treatment intensity” (94).

He also advocates reducing evaluative feedback to learners to "enhance long-term retention and generalization of motor skills". While he cites research from studies pertaining to speech production, he adds that language learning could also benefit from this practice, as it would reduce conversational disruptions and tuning out on the part of the student.

From there he addresses the limitations of repetition for specific tasks (e.g., text rereading). He emphasizes how important it is for students to recall and retrieve text rather than repeatedly reread it (even without correction), as the latter results in a lack of comprehension/retention of read information.

After that, he discusses treatment intensity. Here he emphasizes that a higher dose of instruction will not necessarily result in better therapy outcomes, due to the research on the effects of "learning plateaus and threshold effects in language and literacy" (95). We have seen research on this with respect to joint book reading, vocabulary word exposure, etc. As such, at a certain point in time, increased intensity may actually result in decreased treatment benefits.

His next point against processing interventions is very near and dear to my heart. Those of you familiar with my blog know that I have devoted a substantial number of posts to the lack of validity of the CAPD diagnosis (as a standalone entity) and urged clinicians to provide language-based interventions rather than specific auditory interventions, which lack treatment utility. Here, Dr. Kamhi makes a great point that: "Interventions that target processing skills are particularly appealing because they offer the promise of improving language and learning deficits without having to directly target the specific knowledge and skills required to be a proficient speaker, listener, reader, and writer." (95) The problem is that we have numerous studies on the topic of improvement of isolated skills (e.g., auditory skills, working memory, slow processing, etc.) which clearly indicate lack of effectiveness of these interventions.  As such, "practitioners should be highly skeptical of interventions that promise quick fixes for language and learning disabilities" (96).

Now let us move on to language, and particularly the models we provide to our clients to encourage greater verbal output. Research indicates that when clinicians are attempting to expand children's utterances, they need to provide well-formed language models. Studies show that children select strong input when it is surrounded by weaker input (the surrounding weaker syllables make stronger syllables stand out).  As such, clinicians should expand upon/comment on what clients are saying with grammatically complete models vs. telegraphic productions.

From there, let us take a look at Dr. Kamhi's recommendations for grammar and syntax. Grammatical development goes much further than addressing Brown's morphemes in therapy and calling it a day. As such, it is important to understand that children with developmental language disorders (DLD) (#DevLang) do not have difficulty acquiring all morphemes. Rather, studies have shown that they have difficulty learning grammatical morphemes that reflect tense and agreement (e.g., third-person singular, past tense, auxiliaries, copulas, etc.). As such, the use of measures such as the Tense Marker Total and Productivity Score can yield helpful information regarding which grammatical structures to target in therapy.

With respect to syntax, Dr. Kamhi notes that many clinicians erroneously believe that complex syntax should be targeted when children are much older. The Common Core State Standards do not help this cause, since according to the CCSS complex syntax should be targeted in grades 2-3, which is far too late. Typically developing children begin developing complex syntax around 2 years of age and begin readily producing it around 3 years of age. As such, clinicians should begin targeting complex syntax in the preschool years and not wait until the children have mastered all morphemes and clauses (97).

Finally, Dr. Kamhi wraps up his article by offering suggestions regarding prioritizing intervention goals. Here, he explains that goal prioritization is affected by

  • clinician experience and competencies
  • the degree of collaboration with other professionals
  • type of service delivery model
  • client/student factors

He provides a hypothetical case scenario in which the teaching responsibilities are divvied up between three professionals, with the SLP in charge of targeting narrative discourse. Here, he explains that targeting narratives does not involve targeting sequencing abilities. "The ability to understand and recall events in a story or script depends on conceptual understanding of the topic and attentional/memory abilities, not sequencing ability."  He emphasizes that sequencing is not a distinct cognitive process that requires isolated treatment. Yet many SLPs "continue to believe that sequencing is a distinct processing skill that needs to be assessed and treated" (99).

Dr. Kamhi supports the above point by providing an example of two passages: one which describes a random order of events, and another which follows a logical order of events. He then points out that the randomly ordered story relies exclusively on attention and memory in terms of "sequencing", while the second story reduces demands on memory due to its logical flow of events. As such, he points out that retelling deficits seemingly related to sequencing tend to actually be due to "limitations in attention, working memory, and/or conceptual knowledge". Hence, instead of targeting sequencing abilities in therapy, SLPs should instead use contextualized language intervention to target aspects of narrative development (macro- and microstructural elements).

Furthermore, here it is also important to note that the “sequencing fallacy” affects more than just narratives. It is very prevalent in the intervention process in the form of the ubiquitous “following directions” goal/s. Many clinicians readily create this goal for their clients due to their belief that it will result in functional therapeutic language gains. However, when one really begins to deconstruct this goal, one will realize that it involves a number of discrete abilities including: memory, attention, concept knowledge, inferencing, etc.  Consequently, targeting the above goal will not result in any functional gains for the students (their memory abilities will not magically improve as a result of it). Instead, targeting specific language and conceptual goals  (e.g., answering questions, producing complex sentences, etc.) and increasing the students’ overall listening comprehension and verbal expression will result in improvements in the areas of attention, memory, and processing, including their ability to follow complex directions.

There you have it! Ten practical suggestions from Dr. Kamhi ready for immediate implementation! And for more information, I highly recommend reading the other articles in the same clinical forum, all of which possess highly practical and relevant ideas for therapeutic implementation. They include:

References:

Kamhi, A. (2014). Improving clinical practices for children with language and learning disorders. Language, Speech, and Hearing Services in Schools, 45(2), 92-103.

Helpful Social Media Resources:

SLPs for Evidence-Based Practice



New Products for the 2017 Academic School Year for SLPs

September is quickly approaching, and school-based speech-language pathologists (SLPs) are preparing to go back to work. Many of them are looking to update their arsenal of speech and language materials for the upcoming academic school year.

With that in mind, I wanted to update my readers regarding all the new products I have recently created with a focus on assessment and treatment in speech language pathology.

My most recent product, Assessment of Adolescents with Language and Literacy Impairments in Speech Language Pathology, is a 130-slide pdf download which discusses how to effectively select assessment materials in order to conduct comprehensive evaluations of adolescents with suspected language and literacy disorders. It contains embedded links to ALL the books and research articles used in the development of this product.

Effective Reading Instruction Strategies for Intellectually Impaired Students is a 50-slide downloadable presentation in pdf format which describes how speech-language pathologists (SLPs) trained in assessment and intervention of literacy disorders (reading, spelling, and writing) can teach phonological awareness, phonics, as well as reading fluency skills to children with mild-moderate intellectual disabilities. It reviews the research on reading interventions conducted with children with intellectual disabilities, lists components of effective reading instruction as well as explains how to incorporate components of reading instruction into language therapy sessions.

Dysgraphia Checklist for School-Aged Children helps to identify students with specific written language deficits who may require further assessment and treatment services to improve their writing abilities.

Processing Disorders: Controversial Aspects of Diagnosis and Treatment is a 28-slide downloadable pdf presentation which provides an introduction to processing disorders.  It describes the diversity of ‘APD’ symptoms as well as explains the current controversies pertaining to the validity of the ‘APD’ diagnosis.  It also discusses how the label “processing difficulties” often masks true language and learning deficits in students which require appropriate language and literacy assessment and targeted intervention services.

Checklist for Identification of Speech Language Disorders in Bilingual and Multicultural Children was created to assist Speech Language Pathologists (SLPs) and Teachers in the decision-making process of how to appropriately identify bilingual and multicultural children who present with speech-language delay/deficits (vs. a language difference), for the purpose of initiating a formal speech-language-literacy evaluation.  The goal is to ensure that educational professionals are appropriately identifying bilingual children for assessment and service provision due to legitimate speech language deficits/concerns, and are not over-identifying students because they speak multiple languages or because they come from low socioeconomic backgrounds.

Comprehensive Assessment and Treatment of Literacy Disorders in Speech-Language Pathology is a 125-slide presentation which describes how speech-language pathologists can effectively assess and treat children with literacy disorders (reading, spelling, and writing deficits, including dyslexia) from preschool through adolescence.  It explains the impact of language disorders on literacy development, lists formal and informal assessment instruments and procedures, as well as describes the importance of assessing higher-order language skills for literacy purposes. It reviews components of effective reading instruction including phonological awareness, orthographic knowledge, vocabulary awareness, morphological awareness, as well as reading fluency and comprehension. Finally, it provides recommendations on how components of effective reading instruction can be cohesively integrated into speech-language therapy sessions in order to improve the literacy abilities of children with language disorders and learning disabilities.

Improving Critical Thinking Skills via Picture Books in Children with Language Disorders is a partial 30-slide presentation which discusses effective instructional strategies for teaching children with language disorders critical thinking skills via the use of picture books, utilizing both the Original (1956) and Revised (2001) Bloom's Taxonomy: Cognitive Domain, which encompasses the revised categories of remembering, understanding, applying, analyzing, evaluating, and creating.

From Wordless Picture Books to Reading Instruction: Effective Strategies for SLPs Working with Intellectually Impaired Students is a full 92-slide presentation which discusses how to address the development of critical thinking skills in children with intellectual impairments through a variety of picture books, utilizing the framework outlined in Bloom's Taxonomy: Cognitive Domain, which encompasses the categories of knowledge, comprehension, application, analysis, synthesis, and evaluation. It shares a number of similarities with the above product, as it also reviews components of effective reading instruction for children with language and intellectual disabilities as well as provides recommendations on how to integrate reading instruction effectively into speech-language therapy sessions.

Best Practices in Bilingual Literacy Assessments and Interventions is a 105-slide presentation which focuses on how bilingual speech-language pathologists (SLPs) can effectively assess and intervene with simultaneously bilingual and multicultural children (with stronger academic English language skills) diagnosed with linguistically-based literacy impairments. Topics include components of effective literacy assessments for simultaneously bilingual children (with stronger English abilities), best instructional literacy practices, translanguaging support strategies, critical questions relevant to the provision of effective interventions, as well as use of accommodations, modifications, and compensatory strategies for improvement of bilingual students' performance in social and academic settings.

Comprehensive Literacy Checklist For School-Aged Children was created to assist Speech Language Pathologists (SLPs) in the decision-making process of how to identify deficit areas and select assessment instruments to prioritize a literacy assessment for school-aged children. The goal is to eliminate administration of unnecessary or irrelevant tests and focus on the administration of instruments directly targeting the specific areas of difficulty that the student presents with.

You can find these and other products in my online store (HERE). Wishing all of you a highly successful and rewarding school year!



Treatment of Children with “APD”: What SLPs Need to Know

In recent years there has been an increase in research on the subject of diagnosis and treatment of Auditory Processing Disorders (APD), formerly known as Central Auditory Processing Disorders or CAPD.

More and more studies in the fields of audiology and speech-language pathology began confirming the lack of validity of APD as a standalone (or useful) diagnosis. To illustrate, in June 2015, the American Journal of Audiology published an article by David DeBonis entitled: "It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children." In this article, DeBonis pointed out numerous inconsistencies involved in APD testing and concluded that "routine use of APD test protocols cannot be supported" and that [APD] "intervention needs to be contextualized and functional" (DeBonis, 2015, p. 124).

Furthermore, in April 2017, an article entitled: "AAA (2010) CAPD clinical practice guidelines: need for an update" (also written by DeBonis) concluded that the "AAA CAPD guidance document will need to be updated and re-conceptualised in order to provide meaningful guidance for clinicians" due to the fact that the "AAA document … does not reflect the current literature, fails to help clinicians understand for whom auditory processing testing and intervention would be most useful, includes contradictory suggestions which reduce clarity and appears to avoid conclusions that might cast the CAPD construct in a negative light. It also does not include input from diverse affected groups. All of these reduce the document's credibility."

In April 2016, de Wit and colleagues published a systematic review in the Journal of Speech, Language, and Hearing Research. They reviewed research studies which described the characteristics of APD in children to determine whether these characteristics merited a label of a distinct clinical disorder vs. being representative of other disorders.  After a search of 6 databases, they chose 48 studies which satisfied appropriate inclusion criteria. Unfortunately, they unearthed only one study with strong methodological quality. Even more disappointing was that the children in these studies presented with incredibly diverse symptomology. The authors concluded that "The listening difficulties of children with APD may be a consequence of cognitive, language, and attention issues rather than bottom-up auditory processing" (de Wit et al., 2016, p. 384).  In other words, none of the reviewed studies had conclusively proven that APD was a distinct clinical disorder.  Instead, these studies showed that the children diagnosed with APD exhibited language-based deficits. Simply put, the diagnosis of APD did not reveal any new information regarding the child beyond the fact that s/he is in great need of a comprehensive language assessment in order to determine which language-based interventions s/he would optimally benefit from.

Now, it is important to reiterate that students diagnosed with “APD” present with legitimate symptomology (e.g., difficulty processing language, difficulty organizing narratives, difficulty decoding text, etc.). However, all the research to date indicates that these symptoms are indicative of broader language-based deficits, which require targeted language/literacy-based interventions rather than recommendations for specific prescriptive programs (e.g., CAPDOTS, Fast ForWord, etc.) or mere in-school accommodations.

Unfortunately, on numerous occasions when students do receive the diagnosis of APD, the testing does not "dig further," which leads to many of them not receiving appropriate comprehensive language-literacy assessments.  Furthermore, APD then becomes the "primary" diagnosis for the student, which places SLPs in situations in which they must address inappropriate therapeutic targets based on an audiologist's recommendations.  Even worse, in many of these situations, the diagnosis of APD limits the provision of appropriate language-based services to the student.

Since the APD controversy has been going on for years with no end in sight despite the mounting evidence pointing to the lack of its validity, we know that SLPs will continue to have students on their caseloads diagnosed with APD. Thus, the aim of today’s post is to offer some constructive suggestions for SLPs who are asked to assess and treat students with “confirmed” or suspected APD.

The first suggestion comes directly from Dr. Alan Kamhi, who states: “Do not assume that a child who has been diagnosed with APD needs to be treated any differently than children who have been diagnosed with language and learning disabilities” (Kamhi, 2011, p. 270).  In other words, if one carefully analyzes the child’s so-called processing issues, one will quickly realize that those issues are not related to the processing of auditory input  (auditory domain) since the child is not processing tones, hoots, or clicks, etc. but rather has difficulty processing speech and language (language domain).

If a student with confirmed or suspected APD is referred to an SLP, it is important to begin with formal and informal assessments of language and literacy knowledge and skills (details HERE).   SLPs need to "consider non-auditory reasons for listening and comprehension difficulties, such as limitations in working memory, language knowledge, conceptual abilities, attention, and motivation" (Kamhi & Wallach, 2012).

After performing a comprehensive assessment, SLPs need to formulate language goals based on the determined areas of weakness. Please note that a systematic review by Fey and colleagues (2011) found no compelling evidence that auditory interventions provided any unique benefit to auditory, language, or academic outcomes for children with diagnoses of (C)APD or language disorder. As such, it's important to avoid formulating goals focused on targeting isolated processing abilities like auditory discrimination, auditory sequencing, recognizing speech in noise, etc., because these processing abilities have not been shown to improve language and literacy skills (Fey et al., 2011; Kamhi, 2011).

Instead, SLPs need to focus on the language underpinnings of the above skills and turn them into language and literacy goals. For example, if the child has difficulty recognizing speech in noise, improve the child's knowledge of and access to specific vocabulary words.  This will help the child detect the word when the auditory information is degraded.  Child presents with phonemic awareness deficits? Figure out where in the hierarchy of phonemic awareness their strengths and weaknesses lie and formulate goals based on the remaining areas in need of mastery.  Received a description of the child's deficits from the audiologist in an accompanying report? Turn them into language goals as well!  Turn "prosodic deficits" or difficulty understanding the intent of verbal messages into "listening for details and main ideas in stories" goals.   In other words, figure out the language correlate to the 'auditory processing' deficit and replace it.

It is easy to understand the appeal of using dubious practices which promise a quick fix for our students' "APD deficits" instead of labor-intensive language therapy sessions. But one must keep something else in mind as well: acquiring higher-order language abilities takes a significant period of time, especially for those students whose skills and abilities are significantly below those of age-matched peers.

APD Summary 

  1. There is still no compelling evidence that APD is a stand-alone diagnosis with clear diagnostic criteria.
  2. There is still no compelling evidence that auditory deficits are a “significant risk factor for  language or academic performance.”
  3. There is still no compelling evidence that “auditory interventions provide any unique benefit to auditory, language, or academic outcomes” (Hazan, Messaoud-Galusi, Rosan, Nouwens, & Shakespeare, 2009; Watson & Kidd, 2009).
  4. APD deficits are language based deficits which accompany a host of developmental conditions ranging from developmental language disorders to learning disabilities, etc.
  5. SLPs should perform comprehensive language and literacy assessments of children diagnosed with APD.
  6. SLPs should target language and literacy goals.
  7. SLPs should be wary of any goals or recommendations which focus on remediation of isolated skills such as "auditory discrimination, auditory sequencing, phonological memory, working memory, or rapid serial naming" since studies have definitively confirmed their lack of effectiveness (Fey et al., 2011).
  8. SLPs should be wary of any prescriptive programs offering APD "interventions" and instead focus on improving children's abilities for functional communication including listening, speaking, reading, and writing (see Wallach, 2014: Improving Clinical Practice: A School-Age and School-Based Perspective).  This article "presents a conceptual framework for intervention at school-age levels" and discusses "advanced levels of language that move beyond preschool and early elementary grade goals and objectives with a focus on comprehension and meta-abilities."

There you have it!  Students diagnosed with APD are best served by targeting the language and literacy problems that are affecting their performance in school. 

Related Posts:


C/APD Update: New Developments on an Old Controversy

In the past two years, I wrote a series of research-based posts (HERE and HERE) regarding the validity of (Central) Auditory Processing Disorder (C/APD) as a standalone diagnosis as well as questioned the utility of it for classification purposes in the school setting.

Once again I want to reiterate that I was in no way disputing the legitimate symptoms (e.g., difficulty processing language, difficulty organizing narratives, difficulty decoding text, etc.), which the students diagnosed with “CAPD” were presenting with.

Rather, I was citing research to indicate that these symptoms were indicative of broader linguistic-based deficits, which required targeted linguistic/literacy-based interventions rather than recommendations for specific prescriptive programs (e.g., CAPDOTS, Fast ForWord, etc.),  or mere accommodations.

I was also significantly concerned that overfocus on the diagnosis of (C)APD tended to obscure REAL, language-based deficits in children and forced SLPs to address erroneous therapeutic targets based on AuD recommendations or restricted them to a receipt of mere accommodations rather than rightful therapeutic remediation.

Today I wanted to update you regarding new developments, which took place since my last blog post was written 1.5 years ago, regarding the validity of “C/APD” diagnosis.

In April 2016, de Wit and colleagues published a systematic review in the Journal of Speech, Language, and Hearing Research. Their purpose was to review research studies describing the characteristics of APD in children and determine whether these characteristics merited a label of a distinct clinical disorder vs. being representative of other disorders.  After they searched 6 databases, they chose 48 studies which satisfied appropriate inclusion criteria. Unfortunately, only one study had strong methodological quality, and, even more disappointing, the children in these studies were very dissimilar and presented with incredibly diverse symptomology. The authors concluded that: "the listening difficulties of children with APD may be a consequence of cognitive, language, and attention issues rather than bottom-up auditory processing."

In other words, because APD is not a distinct clinical disorder, a diagnosis of APD would not contribute anything to the child’s functioning beyond showing that the child is experiencing linguistically based deficits, which bear further investigation.

To continue, you may remember that in my first CAPD post I extensively cited a tutorial written by Dr. David DeBonis, who is an AuD. In his article, he pointed out numerous inconsistencies involved in CAPD testing and concluded that “routine use of CAPD test protocols cannot be supported” and that [CAPD] “intervention needs to be contextualized and functional.”

In July 2016, Iliadou, Sirimanna, & Bamiou published an article: “CAPD Is Classified in ICD-10 as H93.25 and Hearing Evaluation—Not Screening—Should Be Implemented in Children With Verified Communication and/or Listening Deficits” protesting DeBonis’s claim that CAPD is not a unique clinical entity and as such should not be included in any disease classification system.  They stated that DeBonis omitted the fact that “CAPD is included in the U.S. version of the International Statistical Classification of Diseases and Related Health Problems–10th Revision (ICD-10) under the code H93.25” (p. 368). They also listed what they believed to be a number of article omissions, which they claimed biased DeBonis’s tutorial’s conclusions.

The authors claimed that DeBonis provided a limited definition of CAPD based only on ASHA’s Technical report vs. other sources such as American Academy of Audiology (2010), British Society of Audiology Position Statement (2011), and Canadian Guidelines on Auditory Processing Disorder in Children and Adults: Assessment Intervention (2012).  (p. 368)

The authors also claimed that DeBonis did not adequately define the term "traditional testing" and failed to provide several key references for select claims.  They disagreed with DeBonis's linkage of certain digit tests, as well as his "lumping" of studies which included children with suspected and diagnosed APD into the same category (pp. 368-9).  They also objected to the fact that he "oversimplified" the results of positive gains of select computer-based interventions for APD, and that in his summary section he listed only selected studies pertinent to the topic of intelligence and auditory processing skills (p. 369).

Their main objection, however, had to do with the section of DeBonis’s article that contained “recommended assessment and intervention process for children with listening and communication difficulties in the classroom”.  They expressed concerns with his recommendations on the grounds that he failed to provide published research to support that this was the optimal way to provide intervention. The authors concluded their article by stating that due to the above-mentioned omissions they felt that DeBonis’s tutorial “show(ed) unacceptable bias” (p. 370).

In response to the Iliadou, Sirimanna, & Bamiou, 2016 concerns, DeBonis issued his own response article shortly thereafter (DeBonis, 2016). Firstly, he pointed out that when his tutorial was released in June 2015 the ICD-10 was not yet in effect (it was enacted Oct 1, 2015). As such his statement was factually accurate.

Secondly, he also made a very important point regarding the C/APD construct validity, namely that it fails to satisfy the Sydenham–Guttentag criteria as a distinct clinical entity (Vermiglio, 2014). Namely, despite attempts at diagnostic uniformity, CAPD remains ambiguously defined, with testing failing to “represent a homogenous patient group.” (p. 906).

For those who are unfamiliar with this terminology (as per direct quote from Dr. Vermiglio’s presentation): “The Sydenham-Guttentag Criteria for the Clinical Entity Proposed by Vermiglio (accepted 2014, JAAA) is as follows:

  1. The clinical entity must possess an unambiguous definition (Sydenham, 1676; FDA, 2000)
  2. It must represent a homogeneous patient group (Sydenham, 1676; Guttentag, 1949, 1950; FDA, 2000)
  3. It must represent a perceived limitation (Guttentag, 1949)
  4. It must facilitate diagnosis and intervention (Sydenham, 1676; Guttentag, 1949; FDA, 2000)

Thirdly, DeBonis addressed Iliadou, Sirimanna, & Bamiou, 2016 concerns that he did not use the most recent definition of APD by pointing out that he was most qualified to discuss the US system and its definitions of CAPD, as well as that “the U.S. guidelines, despite their limitations and age, continue to have a major impact on the approach to auditory processing disorders worldwide” (p.372). He also elucidated that: the AAA’s (2010) definition of CAPD is “not so much built on previous definitions but rather has continued to rely on them” and as such does not constitute a “more recent” source of CAPD definitions. (p.372)

DeBonis next addressed the claim that he did not adequately define the term “traditional testing”. He stated that he defined it on pg. 125 of his tutorial and that information on it was taken directly from the AAA (2010) document. He then explained how it is “aligned with bottom-up aspects of the auditory system” by citing numerous references (see p. 372 for further details).  After that, he addressed Iliadou, Sirimanna, & Bamiou, 2016 claim that he failed to provide references by pointing out the relevant citation in his article, which they failed to see.

Next, he proceeded to address their concerns “regarding the interaction between cognition and auditory processing” by reiterating that auditory processing testing is “not so pure” and is affected by constructs such as memory, executive function skills, etc. He also referenced the findings of  Beck, Clarke and Moore (2016)  that “most currently used tests of APD are tests of language and attention…lack sensitivity and specificity” (p. 27).

The next point addressed by DeBonis was the use of studies which included children with suspected vs. confirmed APD. He agreed that “one cannot make inferences about one population from another” but added that the data from the article in question “provided insight into the important role of attention and memory in children who are poor listeners” and that “such listeners represent the population [which] should be [AuD’s] focus.” (p.373)

From there on, DeBonis moved on to address Iliadou, Sirimanna, & Bamiou, 2016 claims that he “oversimplified” the results of one CBAT study dealing with effects of computer-based interventions for APD. He responded that the authors of that review themselves stated that: “the evidence for improving phonological awareness is “initial”.

Consequently, “improvements in auditory processing—without subsequent changes in the very critical tasks of reading and language—certainly do not represent an endorsement for the auditory training techniques that were studied.” (p.373)

Here, DeBonis also raised concerns regarding the overall concept of treatment effectiveness, stating that it should not be based on "improved performance on behavioral tests of auditory processing or electrophysiological measures" but rather "on improvements on complex listening and academic tasks" (p. 373). As such,

  1. “This limited definition of effectiveness leads to statements about the impact of certain interventions that can be misinterpreted at best and possibly misleading.”
  2. “Such a definition of effectiveness is unlikely to be satisfying to working clinicians or parents of children with communication difficulties who hope to see changes in day-to-day communication and academic abilities.” (p.373)

Then, DeBonis addressed Iliadou, Sirimanna, & Bamiou, 2016 concerns regarding the omission of an article supporting CAPD and intelligence as separate entities. He reiterated that the aim of his tutorial was to note that “performance on commonly used tests of auditory processing is highly influenced by a number of cognitive and linguistic factors” rather than to “do an overview of research in support of and in opposition to the construct”. (p.373)

Subsequently, DeBonis addressed the Iliadou, Sirimanna, & Bamiou, 2016 claim that he did not provide research to support his proposed testing protocol, as well as that he made a figure error. He conceded that the authors were correct with respect to the figure error (the information provided in the figure was not sufficient). However, he pointed out that the purpose of his tutorial was "to review the literature related to ongoing concerns about the use of the CAPD construct in school-aged children and to propose an alternative assessment/intervention procedure that moves away from testing "auditory processing" and moves toward identifying and supporting students who have listening challenges". As such, while the effectiveness of his model is being tested, it makes sense to use "questionnaires and speech-in-noise tests with very strong psychometric characteristics" and thoroughly assess these children's "language and cognitive skills to reduce the chance of misdiagnosis" in order to provide functional interventions (p. 373).

Finally, DeBonis addressed the Iliadou, Sirimanna, and Bamiou (2016) accusation that his tutorial contained "unacceptable bias". He pointed out that "the reviewers of this [his 2015] article did not agree" and that since the time of that article's publication "readers and other colleagues have viewed it as a vehicle for important thought about how best to help children who have listening difficulties" (p. 374).

Having read the above information, many of you by now must be wondering: "Why does research on APD as a valid stand-alone diagnosis continue to be published at regular intervals?"

To explain the above phenomenon, I will use several excerpts from an excellent presentation by Kamhi, Vermiglio, and Wallach (2016), which I attended during the 2016 ASHA Convention in Philadelphia, PA.

It has been suggested that the above has to do with: “The bias of the CAPD Convention Committee that reviews submissions.” Namely, “The committee only accepts submissions consistent with the traditional view of (C)APD espoused by Bellis, Chermak and others who wrote the ASHA (2005) position statement on CAPD.”

Kamhi, Vermiglio, and Wallach (2016) supported this claim by pointing out that when Dr. Vermiglio attempted to submit his findings on the nature of "C/APD" for the 2015 ASHA Convention, "the committee did not accept Vermiglio's submission" but instead accepted the following seminar: "APD – It Exists! Differential Diagnosis & Remediation" and allocated for it "a prominent location in the program planner."

Indeed, during the 2016 ASHA Convention alone, there was a host of 1- and 2-hour pro-APD sessions such as "Yes, You CANS! Adding Therapy for Specific CAPDs to an IEP", "Perspectives on the Assessment & Treatment of Individuals With Central Auditory Processing Disorder (CAPD)", as well as "The Buffalo Model for CAPD: Looking Back & Forward", in addition to a host of posters and technical reports attempting to validate this diagnosis despite mounting evidence refuting its validity. Yet only one session, "Never-Ending Controversies With CAPD: What Thinking SLPs & Audiologists Know", presented by Kamhi, Vermiglio, and Wallach (two SLPs and one AuD) and accepted by a non-AuD committee, discussed the current controversies raging in the fields of speech pathology and audiology pertaining to "C/APD".

In 2016, Diane Paul, the Director of Clinical Issues in Speech-Language Pathology at ASHA, asked Kamhi, Vermiglio, and Wallach "to offer comments on the outline of audiology and SLP roles in assessing and treating CAPD". According to Kamhi et al. (2016), the outline did not mention any of the controversies in assessment and diagnosis documented by numerous authors dating as far back as 2009. It also did not "mention the lack of evidence on the efficacy of auditory interventions documented in the systematic review by Fey et al. (2011) and DeBonis (2015)."

At this juncture, it is important to start thinking about the possible incentives a professional might have to continue performing APD testing and making prescriptive program recommendations despite all the existing evidence refuting the validity and utility of the APD diagnosis for children presenting with listening difficulties.

Conclusions:

  • There is still no compelling evidence that APD is a stand-alone diagnosis with clear diagnostic criteria
  • There is still no compelling evidence that auditory deficits are a “significant risk factor for  language or academic performance”
  • There is still no compelling evidence that “auditory interventions provide any unique benefit to auditory, language, or academic outcomes” (Hazan, Messaoud-Galusi, Rosan, Nouwens, & Shakespeare, 2009; Watson & Kidd, 2009)
  • APD deficits are linguistically based deficits which accompany a host of developmental conditions ranging from developmental language disorders to learning disabilities, etc.
  • SLPs should continue comprehensively assessing children diagnosed with “C/APD” to determine the scope of their linguistic deficits
  • SLPs should continue formulating language goals that target the identified linguistic areas of weakness
  • SLPs should be wary of any goals or recommendations which focus on remediation of isolated skills such as "auditory discrimination, auditory sequencing, phonological memory, working memory, or rapid serial naming", since studies have definitively confirmed their lack of effectiveness (Fey et al., 2011)
  • SLPs should be wary of any prescriptive programs offering C/APD “interventions”
  • SLPs should focus on improving children’s abilities for functional communication including listening, speaking, reading, and writing
    • Please see the excellent article written by Dr. Wallach in 2014 entitled Improving Clinical Practice: A School-Age and School-Based Perspective. It "presents a conceptual framework for intervention at school-age levels" and discusses "advanced levels of language that move beyond preschool and early elementary grade goals and objectives with a focus on comprehension and meta-abilities."

So there you have it: sadly, despite research and logic, the controversy is very much alive! However, I am seeing some new developments.

I see SLPs, newly minted and seasoned alike, steadily voicing doubts that the symptomology they are documenting in children diagnosed with so-called "CAPD" is purely auditory in nature.

I see more and more SLPs supporting research evidence and science by voicing their concerns regarding the numerous diagnostic markers of "CAPD" which do not make sense to them, stating, "Wait a second – that can't be right!"

I see more and more SLPs documenting the lack of progress children make after being prescribed isolated FM systems or computer programs which claim to treat "APD symptomology" (without the provision of therapy services). I see more and more SLPs beginning to understand the lack of usefulness of this diagnosis and switching to language-based interventions to teach children to listen, speak, read, and write, and to generalize these abilities to both social and academic settings.

So I definitely do see hope on the horizon!

Posted on

New Product Giveaway: Comprehensive Literacy Checklist For School-Aged Children

I wanted to start the new year right by giving away a few copies of a new checklist I recently created entitled: “Comprehensive Literacy Checklist For School-Aged Children“.

It was created to assist Speech Language Pathologists (SLPs) in deciding how to identify deficit areas and select assessment instruments in order to prioritize literacy assessment for school-aged children.

The goal is to eliminate administration of unnecessary or irrelevant tests and focus on the administration of instruments directly targeting the specific areas of difficulty that the student presents with.

*For the purpose of this product, the term "literacy checklist" rather than "dyslexia checklist" is used throughout this document to refer to any deficits the child may present with in the areas of reading, writing, and spelling, in order to identify possible difficulties in the areas of literacy as well as language.

This checklist can be used for multiple purposes.

1. To identify areas of deficits the child presents with for targeted assessment purposes

2. To highlight areas of strength (rather than deficits only) the child presents with pre- or post-intervention

3. To highlight residual deficits for intervention purposes in children already receiving therapy services, without further reassessment

Checklist Contents:

  • Page 1 Title
  • Page 2 Directions
  • Pages 3-9 Checklist
  • Page 10 Select Tests of Reading, Spelling, and Writing for School-Aged Children
  • Pages 11-12 Helpful Smart Speech Therapy Materials

Checklist Areas:

  1. AT RISK FAMILY HISTORY
  2. AT RISK DEVELOPMENTAL HISTORY
  3. BEHAVIORAL MANIFESTATIONS 
  4. LEARNING DEFICITS   
    1. Memory for Sequences
    2. Vocabulary Knowledge
    3. Narrative Production
    4. Phonological Awareness
    5. Phonics
    6. Morphological Awareness
    7. Reading Fluency
    8. Reading Comprehension
    9. Spelling
    10. Writing Conventions
    11. Writing Composition 
    12. Handwriting

You can find this product in my online store HERE.

Would you like to check it out in action? I’ll be giving away two copies of the checklist in a Rafflecopter Giveaway to two winners.  So enter today to win your own copy!

Posted on

Review of the Test of Integrated Language and Literacy (TILLS)

The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-age children. As I have been using this test since the time it was published, I wanted to take an opportunity today to share just a few of my impressions of this assessment.

First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals – 5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students' scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out low average or only slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to hear about the TILLS's development, almost simultaneously through ASHA as well as the SPELL-Links ListServe. I was particularly happy because I knew that some of this test's developers (e.g., Dr. Elena Plante, Dr. Nickola Nelson) had published solid research in the areas of psychometrics and literacy, respectively.

According to the TILLS developers, it has been standardized for three purposes:

  • to identify language and literacy disorders
  • to document patterns of relative strengths and weaknesses
  • to track changes in language and literacy skills over time

The subtests can be administered in isolation (with the exception of a few) or in their entirety. The administration of all 15 subtests may take approximately an hour and a half, while the administration of the core subtests typically takes ~45 minutes.

Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.

  • Subtest 5 – Nonword Spelling
  • Subtest 7 – Reading Comprehension
  • Subtest 10 – Nonword Reading
  • Subtest 11 – Reading Fluency
  • Subtest 12 – Written Expression

However, if needed, there are several tests of early reading and writing abilities available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., TERA-3: Test of Early Reading Ability–Third Edition; Test of Early Written Language, Third Edition [TEWL-3], etc.).

Let’s move on to take a deeper look at its subtests. Please note that for the purposes of this review all images came directly from and are the property of Brookes Publishing Co (clicking on each of the below images will take you directly to their source).

TILLS-subtest-1-vocabulary-awareness

1. Vocabulary Awareness (VA) (description above) requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works great in teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student's word-reading abilities by asking them to read all of the words before reading all the word choices to them. This way you can informally document any word misreadings made by the student even in the presence of an average subtest score.

TILLS-subtest-2-phonemic-awareness

2. The Phonemic Awareness (PA) subtest (description above) requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, similar to tests such as the CTOPP-2: Comprehensive Test of Phonological Processing–Second Edition or the Phonological Awareness Test 2 (PAT 2), it is still a highly useful and reliable measure of phonemic awareness (as one of many precursors to reading fluency success). This is especially true because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the students' executive function abilities in addition to their phonemic awareness skills.

TILLS-subtest-3-story-retelling

3. The Story Retelling (SR) subtest (description above) requires students to do just that: retell a story. Be mindful of the fact that the presented stories are of reduced complexity. Thus, unless the students possess significant retelling deficits, the above subtest may not capture their true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade) I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician's narrative model (e.g., HERE). For early elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie seen recently (or liked significantly) by them without the benefit of visual support altogether (e.g., HERE).

TILLS-subtest-4-nonword-repetition

4. The Nonword Repetition (NR) subtest (description above) requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task’s heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments will be observed to present with patterns of segment substitutions (subtle substitutions of sounds and syllables in presented nonsense words) as well as segment deletions of nonword sequences more than 2-3 or 3-4 syllables in length (depending on the child’s age).

TILLS-subtest-5-nonword-spelling

5. The Nonword Spelling (NS) subtest (description above) requires the students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to the administration of this subtest in the same assessment session. In contrast to real-word spelling tasks, students cannot memorize the spelling of the presented words, which are still bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze subtest results in order to determine whether dialectal differences rather than the presence of an actual disorder are responsible for the error patterns.

TILLS-subtest-6-listening-comprehension

6. The Listening Comprehension (LC) subtest (description above) requires the students to listen to short stories and then definitively answer story questions via the available answer choices, which include "Yes", "No", and "Maybe". This subtest also indirectly measures the students' metalinguistic awareness skills, as these are needed to detect when the text does not provide sufficient information to answer a particular question definitively (e.g., when a "Maybe" response is called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement subtest administration with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be 'good guessers' but who are also reported to present with word-finding difficulties at sentence and discourse levels.

TILLS-subtest-7-reading-comprehension

7. The Reading Comprehension (RC) subtest (description above) requires the students to read a short story and answer story questions in "Yes", "No", and "Maybe" format. This subtest is not stand-alone and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story out loud in order to determine whether s/he can proceed with taking this subtest or discontinue due to being an emergent reader. The discontinue criterion is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.

While I find it terrific for students with severe-profound reading deficits and/or below-average IQ, a number of my students with average IQ and moderately impaired reading skills managed to pass it via a combination of guessing and luck, despite being observed to misread aloud between 40–60% of the presented words. Be mindful of the fact that typically such students may make up to 5–6 errors during the reading of the first story. Thus, according to the administration guidelines, these students will be allowed to proceed and take this subtest. They will then continue to make text misreadings during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is a definitive ("Yes", "No", "Maybe") rather than open-ended question format, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing the administration of this subtest with grade-level (or below grade-level) texts (see HERE and/or HERE) to assess the student's reading comprehension informally.

I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student's reading for further analysis (see the Reading Fluency section below). After the completion of the story, I ask the student questions with a focus on main-idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student's age, I may ask them abstract/factual text questions with and without text access. Overall, I find that informal administration of grade-level (or even below grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student's reading comprehension abilities than the administration of standardized reading tests alone.

TILLS-subtest-8-following-directions

8. The Following Directions (FD) subtest (description above) measures the student's ability to execute directions of increasing length and complexity. It measures the student's short-term, immediate, and working memory, as well as their language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters and numbers, etc.) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the students are expected to move the card covering the stimuli and then to execute the visual-spatial, directional, sequential, and logical if–then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student's memory and comprehension. The subtest was created to simulate a teacher's use of procedural language (giving directions) in the classroom setting (as per the developers).

TILLS-subtest-9-delayed-story-retelling

9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered to the students during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest administration. Despite the relatively short passage of time between the two subtests, it is considered to be a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student's performance on the two subtests to determine whether the student did better or worse on either measure (e.g., recalled more information after a period of time had passed vs. immediately after being read the story). However, as mentioned previously, some students may recall this previously presented story fairly accurately and as a result may obtain an average score despite a history of teacher/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement the administration of this subtest with a recall of a movie/book recently seen/read by the student (a few days ago) in order to compare both performances and note any weaknesses/limitations.

TILLS-subtest-10-nonword-reading

10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, the presentation of this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have only memorized sight words and are now having difficulty decoding unfamiliar words as a result.

TILLS-subtest-11-reading-fluency

11. The Reading Fluency (RF) subtest (description above) requires students to efficiently read facts which make up simple stories fluently and correctly. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.

It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per authors, “a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly.”

Thus, "the mean is to the left of the mode" (see the publisher's image in the TILLS Q&A – Negative Skew). This is why a student could earn an average standard score (near the mean) and a low percentile rank when true percentiles are used rather than NCE (Normal Curve Equivalent) percentiles.

Consequently, under certain conditions (see HERE), the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student's ability on this subtest.
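
To make the skew point more concrete, here is a minimal, hypothetical sketch in Python (simulated data only; the score range, sample size, and distribution are illustrative assumptions of mine, not TILLS norms) showing why, in a negatively skewed distribution, the mean falls to the left of the mode and a score sitting near the mean can correspond to a below-average true percentile rank:

```python
# Illustrative simulation only -- NOT TILLS data or norms.
# A negatively skewed score distribution: most students score high,
# while a smaller group scores quite low, which drags the mean below the mode.
import numpy as np

rng = np.random.default_rng(42)

# Beta(8, 2) has a long left tail; scale it to a hypothetical 0-40 raw-score range.
scores = rng.beta(8, 2, size=10_000) * 40

mean_score = scores.mean()

# Rough mode estimate: the midpoint of the most heavily populated histogram bin.
counts, edges = np.histogram(scores, bins=40)
mode_score = (edges[counts.argmax()] + edges[counts.argmax() + 1]) / 2

# True percentile rank of a student whose raw score equals the sample mean.
percentile_at_mean = (scores < mean_score).mean() * 100

print(f"mean ≈ {mean_score:.1f}")   # pulled down (left) by the low-scoring tail
print(f"mode ≈ {mode_score:.1f}")   # where most typically developing students cluster
print(f"true percentile of a score at the mean ≈ {percentile_at_mean:.0f}")
```

In a distribution shaped like this, a score "near the mean" is not near the middle of the pack in rank terms, which is exactly why the true percentile rank can tell a more accurate story under those conditions.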

Indeed, due to the reduced complexity of the presented words, some students (especially younger elementary-aged students) may obtain average scores and still present with serious reading fluency deficits.

I frequently see this in students with average IQ and good long-term memory, who by second and third grade have managed to memorize an admirable number of sight words, which makes their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking them to read several paragraphs from it (see HERE and/or HERE).

As the students are reading, I calculate their reading fluency by counting the number of words they read per minute. I find it very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student's reading fluency on both standardized and informal assessment measures in order to document the student's strengths and limitations.
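
For illustration, here is a minimal sketch of the kind of informal rate-and-accuracy arithmetic described above; the function name and the cutoff values are hypothetical placeholders I chose for the example, not TILLS criteria or published fluency norms:

```python
# A minimal sketch of informal oral-reading fluency arithmetic.
# The 90-WPM and 95%-accuracy cutoffs below are illustrative assumptions only.
def reading_profile(words_attempted: int, errors: int, minutes: float,
                    rate_cutoff_wpm: float = 90.0,
                    accuracy_cutoff: float = 0.95) -> dict:
    """Summarize an informal oral-reading sample."""
    wpm = words_attempted / minutes                      # overall reading rate
    wcpm = (words_attempted - errors) / minutes          # words correct per minute
    accuracy = (words_attempted - errors) / words_attempted

    speed = "fast" if wpm >= rate_cutoff_wpm else "slow"
    precision = "accurate" if accuracy >= accuracy_cutoff else "inaccurate"

    return {"wpm": round(wpm, 1), "wcpm": round(wcpm, 1),
            "accuracy_pct": round(accuracy * 100, 1),
            "profile": f"{speed}/{precision} reader"}

# Example: 248 words attempted in 2.5 minutes with 19 misreadings.
print(reading_profile(words_attempted=248, errors=19, minutes=2.5))
# -> {'wpm': 99.2, 'wcpm': 91.6, 'accuracy_pct': 92.3, 'profile': 'fast/inaccurate reader'}
```

Naturally, the numbers are only a starting point; I still interpret them against grade-level expectations and the qualitative observations (pauses, word-attack strategies, etc.) noted during the reading.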

TILLS-subtest-12-written-expression

12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.

The examiner needs to show the student a different story which integrates simple facts into a coherent narrative. After the examiner reads that simple story to the students, s/he is expected to tell the students that the story is okay but "sounds kind of choppy." They then need to show the student an example of how they could put the facts together in a way that sounds more interesting and less choppy by combining sentences (see below). Finally, the examiner will ask the students to rewrite the story presented to them in a similar manner (e.g., "less choppy and more interesting").

tills

After the student finishes his/her story, the examiner will analyze it and generate the following scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as the Examiner’s Practice Workbook are provided to assist with scoring as it takes a bit of training as well as trial and error to complete it, especially if the examiners are not familiar with certain procedures (e.g., calculating T-units).

Full disclosure: Because the above subtest is still essentially sentence combining, I have only used it a handful of times with my students. Typically, when I've used it in the past, most of my students fell into two categories: those who failed it completely, by either copying the text word for word or failing to generate any written output, and those who passed it with flying colors but still presented with notable written output deficits. Consequently, I've replaced Written Expression subtest administration with the administration of standardized written language tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.

Having said that, many clinicians may not have access to other standardized written assessments, or may lack the time to administer entire standardized written measures (which may frequently take between 60 and 90 minutes of administration time). Consequently, in the absence of other standardized writing assessments, this subtest can be used to gauge the student's basic writing abilities and, if needed, can be effectively supplemented by the informal writing measures mentioned above.

TILLS-subtest-13-social-communication

13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially students become actors who need to act out particular scenes while viewing select words presented to them.

Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students.  Here is why.

I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students' significant deficits in this area. Yet past administration of this subtest showed me that a number of my students can pass it quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer the administration of comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).

Again, as I've previously mentioned, many clinicians may not have access to other standardized social communication assessments, or may lack the time to administer entire standardized measures. Consequently, in the absence of other social communication assessments, this subtest can be used to get a baseline of the student's basic social communication abilities and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.

TILLS-subtest-14-digit-span-forward

14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language such as syntax or vocabulary).

TILLS-subtest-15-digit-span-backward

15. The Digit Span Backward (DSB) subtest (description above) assesses the student's working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students use to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before the administration of this subtest.

SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.

To continue, in addition to the subtests which assess the students' literacy abilities, the TILLS also possesses a number of interesting features.

For starters, there is the TILLS Easy Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores, and the system does the rest. After the raw scores are plugged in, the system will generate a PDF document with all the data, which includes (but is not limited to) standard scores, percentile ranks, as well as a variety of composite and core scores. The examiner can then save the PDF on their device (laptop, PC, tablet, etc.) for further analysis.

Then there is the quadrant model. According to the TILLS sampler (HERE), "it allows the examiners to assess and compare students' language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing" and then create "meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students" (p. 21).

tills quadrant model

Then there is the Student Language Scale (SLS), a one-page checklist that parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student's performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests, or even in isolation (as per the developers).

Furthermore, according to the developers, because the normative sample included several special-needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), as well as students with intellectual disabilities (as long as they are functioning at age 6 or above developmentally).

According to the developers, the TILLS is aligned with the Common Core Standards and can be administered as frequently as two times a year for progress monitoring (a minimum of 6 months after the first administration).

With respect to bilingualism, examiners can use it with caution with simultaneous English learners but not with sequential English learners (see further explanation HERE). Translations of the TILLS are definitely not allowed, as they would undermine the test's validity and reliability.

So there you have it: these are just a few of my impressions regarding this test. Some of you may notice that I spent a significant amount of time pointing out some of the test's limitations. However, it is very important to note that research indicates there is no such thing as a "perfect standardized test" (see HERE for more information). All standardized tests have their limitations.

Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.

That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS.  For starters, take a look at Brookes Publishing TILLS resources.  These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars.   There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).

But that’s not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc). Take a look at them as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.

Disclaimer:  I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing  or any of the TILLS developers to write it.  All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.

References: 

Leclercq, A., Maillart, C., & Majerus, S. (2013). Nonword repetition problems in children with SLI: A deficit in accessing long-term linguistic representations? Topics in Language Disorders, 33(3), 238–254.

Posted on

If It’s NOT CAPD Then Where do SLPs Go From There?

In July 2015 I wrote a blog post entitled "Why (C)APD Diagnosis is NOT Valid!", citing the latest research literature to explain that the controversial diagnosis of (C)APD tends to:

a) detract from understanding that the child presents with legitimate language-based deficits in the areas of comprehension, expression, social communication, and literacy development

b) result in the above deficits not being adequately addressed due to the provision of controversial APD treatments

To CLARIFY, I was NOT trying to argue that the processing deficits exhibited by children diagnosed with "(C)APD" are not REAL. Rather, I was trying to point out that these processing deficits are of neurolinguistic origin and as such need to be addressed from a linguistic rather than an 'auditory' standpoint.

In other words, if one carefully analyzes the child's so-called processing issues, one will quickly realize that those issues are not related to the processing of auditory input (auditory domain), since the child does not struggle with processing tones, hoots, clicks, etc., but rather has difficulty processing speech and language (linguistic domain).

Let us review two major APD Models: The Buffalo Model (Katz) and the Bellis/Ferre Model, to support the above stance.

The Buffalo Model by Jack Katz, PhD contains 4 major categories:

1. The Decoding Category – refers to the ability to quickly and accurately process speech, most importantly at the phonemic level. (Since this involves speech sounds, it has nothing to do with the processing of non-speech auditory stimuli. In other words, deficits in this area are linguistic in nature and are highly correlated with reading deficits characterized by weak/deficient phonemic awareness abilities and poor emergent reading abilities.)

Here are a few examples of so-called “decoding” deficits:

  • Difficulty with processing what is heard accurately and quickly; tends to respond more slowly (indicative of weak language abilities)
  • Problems keeping up with the flow of communication and running discourse (indicative of weak language abilities)
  • Problems processing at a phonemic level (e.g, can’t blend ‘t,’ ‘u’ and ‘b’ together to make the word ‘tub’) (indicative of phonemic awareness deficits)
  • Trouble reading and spelling (reading and writing deficits rather than APD)
  • Receptive language problems and impairments in discrimination, closure abilities and temporal resolution (this one just explains itself)

2. The Tolerance-Fading Memory (TFM) Category – refers to two skills that are often found together: "tolerance" – understanding speech in noise (processing of language) and "fading memory" – auditory short-term or working memory (memory being a higher-level cognitive skill rather than a purely auditory entity).

Here are a few examples of tolerance-fading memory deficits:

  1. Difficulty blocking out background noise, so the child's performance suffers in a noisy classroom environment; may be labeled as distractible (clearly describes the child with poor language comprehension)
  2. Linked to poor reading comprehension, oral and written expression, poor short-term memory (in other words describes a learning disability)

3. Integration category 

  • difficulty bringing in information from different modalities, such as receiving auditory and visual information at the same time; these children are often labeled as learning disabled or even dyslexic (this one just explains itself)
  • They may be poor readers, have trouble with spelling, and exhibit difficulty with multimodal tasks (clearly indicative of reading and writing deficits, or of students who will often be classified in the schools as having a specific learning disability)

4. Organization – disorganized thinking; sequencing errors (this appears to be indicative of social communication/executive function deficits, as well as word-retrieval deficits)

Another major APD model is the Bellis/Ferre Model, which divides the above four categories into the following subtypes:

  • Primary subtype
  1. Auditory decoding – listening difficulties in noisy environments
  2. Integration deficit – problems with tasks requiring both cerebral hemispheres to cooperate
  3. Prosodic deficits – difficulty understanding the intent of verbal messages
  • Secondary
  1. Associative deficits – receptive language disorder
  2. Output organization deficits – attention and/or executive function disorder; might also be caused by an auditory efferent dysfunction

Similar to the Buffalo Model, the Bellis/Ferre Model describes deficits of a linguistic rather than auditory nature, many of which are characteristic of a learning disability.

Consequently, if an SLP is referred a student with confirmed or suspected (C)APD, the first thing they should do is to administer a comprehensive battery of testing to determine the scope of the student’s linguistic deficits. To test general language abilities, consider using the Test of Integrated Language & Literacy Skills (TILLS) (Review HERE). But SLPs shouldn’t just stop there! They need to dig deeper to make sure that the following major areas of language are assessed:

The above list doesn't even reference the assessment of Reading, Writing, and Spelling, all areas which play a crucial role academically, as any deficits displayed in those areas may also present as CAPD symptoms. If literacy testing is not performed, it is still important for SLPs to review and seriously consider the results of learning evaluations in order to see the whole child and not just their limited functioning in select areas of oral language comprehension, expression, and use.

It is very important for SLPs to understand that without a comprehensive language and literacy assessment of deficit areas, it is very difficult to adequately address the student's linguistically based deficits! Thus, if testing shortcuts are taken, the referrals of students diagnosed with (C)APD will not cease, and SLPs will continue to be in the dark regarding which goals should be addressed with these students in therapy.
