
Test Review: Clinical Assessment of Pragmatics (CAPs)

Today, due to popular demand, I am reviewing the Clinical Assessment of Pragmatics (CAPs) for children and young adults ages 7–18, developed by the Lavi Institute. Readers of this blog know that I specialize in working with children diagnosed with psychiatric impairments and behavioral and emotional difficulties. They also know that I am constantly on the lookout for good quality social communication assessments, given the notorious dearth of good quality instruments in this area of language.

I must admit that when I first learned about the existence of the CAPs in May 2018, I was definitely interested but quite cautious. Many standardized tests assessing pragmatics and social language contain notable psychometric limitations due to the inclusion of children with social and pragmatic difficulties in the normative sample. This, in turn, tends to overinflate test scores and produce false negatives (a belief that the child does not possess a social communication impairment because s/he received average scores on the test). Furthermore, tests of pragmatics such as the Test of Pragmatic Language-2 (TOPL-2) tend to primarily assess the child's knowledge of the rules of politeness and of the right thing to say under a particular set of circumstances, and as such are of limited value when it comes to gauging the child's ability to truly assume perspectives and adequately demonstrate social cognitive abilities.

The CAPs is unique compared to other tests with a similar purpose because its administration (which can take between 45 and 60 minutes) is conducted exclusively via videos. The CAPs consists of 6 subtests and 3 indices.

Subtests (you can read more on the comparison of the CAPs subtests HERE):

Instrumental Performance Appraisal (IPA) subtest (Awareness of Basic Social Routines) is a relatively straightforward subtest which examines the student's ability to be polite in basic social contexts. The student is first asked to identify whether anything went wrong in the presented scenario. After that, the student is asked to explain what went wrong and how s/he knows. Targeted structures include greeting and closure, making requests, responding to gratitude, requesting help, answering phone calls, asking for directions, asking permission, etc. Goal: Can the student discern between appropriate and inappropriate language and then provide a verbal rationale in a coherent and cohesive manner?

Score types: (2) correct identification of the problem or lack thereof + correct justification; (1) correct identification but incorrect rationale; (0) incorrect identification.
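For those who like scoring logic laid out explicitly, below is a minimal sketch of the IPA rubric as described above, written in Python. This is purely illustrative and is not the publisher's scoring procedure; the function name and inputs are my own.

    # Hedged sketch of the IPA scoring rubric described above (0-2 points).
    def score_ipa_item(identified_correctly: bool, justified_correctly: bool) -> int:
        if not identified_correctly:
            return 0  # (0) incorrect identification
        if justified_correctly:
            return 2  # (2) correct identification + correct justification
        return 1      # (1) correct identification but incorrect rationale

    print(score_ipa_item(True, False))  # -> 1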

Social Context Appraisal (SCA) subtest (Reading Context Cues) requires the student to engage in effective perspective taking (assuming mutual vs. individual perspectives) by identifying sarcasm, irony, and figurative language in the presented video scenarios. The student is then asked to provide a coherent and cohesive verbal explanation and effectively justify his/her own response.

Score types: (3) correct identification of the problem or lack thereof + identification of idiom or sarcasm + reference to both characters' actions; (2) correct identification of the problem or lack thereof + identification of idiom or sarcasm + reference to one character's actions; (1) correct identification of the problem or lack thereof but an inability to verbalize the problem in the situation; (0) incorrect identification.

Paralinguistic Decoding (PD) subtest (Reading Nonverbal Cues) assesses the students' ability to notice and interpret micro-expressions and nonverbal language. The aim of this subtest is to have the students grasp what went wrong vs. well in the presented videos, assume mutual perspectives, and verbally justify their responses by providing adequate and relevant details.

Score types: (3) correct identification of the problem or lack thereof + explanation of the situation + reference to both characters' facial expressions and tones of voice; (2) correct identification of the problem or lack thereof + explanation of the situation + reference to one character's facial expression and tone of voice; (1) correct identification of the problem or lack thereof but an inability to explain actions and/or nonverbal body language; (0) incorrect identification.

Instrumental Performance (IP) subtest (Use of Social Routine Language) assesses the student's ability to use rules of politeness (e.g., making requests, responding to gratitude, answering phone calls, etc.) by providing adequately supportive responses using first-person perspectives relevant to various social situations.

Score types: (2) appropriate introduction + use of supportive statements; (1) appropriate introduction without the use of supportive statements; (0) inappropriate intent of message or use of impolite language.

Affective Expression (AE) subtest (Expressing Emotions) assesses the student's ability to effectively display empathy, gratitude, praise, apology, etc., towards affected peers in the video scenario. It requires the use of relevant facial expressions and tone of voice, as well as stating appropriately supportive comments.

Score types: (2) expresses empathy, praise, apology, gratitude, etc., along with supportive statements + appropriate facial and prosodic affect; (1) expresses empathy, praise, apology, gratitude, etc., + appropriate facial and prosodic affect without relevant supportive statements; (0) provides an appropriate response but lacks adequate prosody and affect, or message contains inappropriate intent.

Paralinguistic Signals (PS) subtest (Using Nonverbal Cues) assesses the student's ability to appropriately use facial expressions, gestures, and prosody (acting out vs. recognizing and interpreting facial expressions and gestures). This includes showing appropriate expression of empathy, frustration, alarm, excitement, gratitude, etc., exhibiting relevant inflection in prosody, and showing facial expressions appropriate to the situation (vs. having inappropriate message intent, being monotone, having flat affect, etc.).

Score types: (2) appropriately expresses urgency, empathy, apology, etc. + exhibits inflections in prosody and shows relevant facial expressions; (1) appropriately expresses urgency, empathy, apology, etc. + exhibits inflections in prosody without showing relevant facial expressions; (0) inappropriate intent of message or monotone prosody.

Indices (information regarding the student's pragmatic proficiency; a worked example of how they are computed follows the list):

  1. Pragmatic Judgement (Sum of IPA, SCA & PD scaled scores)
  2. Pragmatic Performance (Sum of IP, AE & PS scaled scores)
  3. Paralinguistic (Sum of PD, AE & PS scaled scores)
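To make the index computation concrete, here is a minimal sketch in Python of how the three index scores are assembled from the six subtest scaled scores. The scaled-score values are hypothetical placeholders; actual raw-to-scaled conversions require the CAPs norm tables.

    # Hypothetical scaled scores for the six CAPs subtests (illustration only).
    scaled = {"IPA": 9, "SCA": 7, "PD": 6, "IP": 8, "AE": 7, "PS": 5}

    indices = {
        "Pragmatic Judgement": scaled["IPA"] + scaled["SCA"] + scaled["PD"],
        "Pragmatic Performance": scaled["IP"] + scaled["AE"] + scaled["PS"],
        "Paralinguistic": scaled["PD"] + scaled["AE"] + scaled["PS"],
    }

    for name, total in indices.items():
        print(f"{name}: sum of scaled scores = {total}")

Note that the PD, AE, and PS subtests each contribute to two of the three indices, which is visible directly in the sums above.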

Based on the administration of this test, the following goals can be formulated for remediation purposes:

Long-Term Goal: The student will improve pragmatic abilities for social and academic purposes.

Short-Term Objectives: 

  1. The student will verbally identify instances of politeness or impoliteness in presented social routines
  2. The student will provide relevant justifications explaining which aspects of the presented scenarios were appropriate vs. inappropriate
  3. The student will verbally identify sarcasm, irony, and figurative language in presented social scenarios
  4. The student will effectively explain sarcasm, irony, and figurative language in presented social scenarios
  5. The student will verbally interpret micro-expressions and nonverbal body language (e.g., the person feels disgusted; the girl is smirking; the man's hands are crossed, etc.)
  6. The student will effectively use rules of politeness and provide adequately supportive responses using first-person perspectives pertaining to various aspects of social scenarios
  7. The student will display a range of emotional expressions via the use of relevant facial expressions and tone of voice when providing supportive responses
  8. The student will state appropriately supportive comments regarding relevant social scenarios
  9. The student will use a range of facial expressions, gestures, and relevant prosody pertinent to the provided social scenarios

Furthermore, this test comes with a Contextualized Assessment of Pragmatics Checklist as well as a downloadable Free Report Template.

Multiple videos posted by the Lavi Institute showcasing individual subtest administration can be accessed by clicking on the above-highlighted links as well as on YouTube.

Psychometrics: The normative sample consisted of 914 individuals, of whom 137 (or 15%) were individuals with atypical language development: ASD (n = 18), SLI (n = 27), and other (learning disabilities) (n = 92).

The manual also reports excellent sensitivity and specificity cut scores (at 1, 1.5, and 2 SDs) for clients with ASD only.

Impressions: To date, I have used this assessment with only 3 students. As such, expect multiple updates of this post as I continue to document how well it is suited to identifying children with social communication difficulties. Below are my preliminary impressions of how well this test is suited for children with varying pragmatic profiles.

A. Initial Assessment: 8-year, 3-month-old male diagnosed with autism

The CAPs captured this student's pragmatic deficits extremely well. It was able to highlight the student's relative strengths as well as his pervasive pragmatic needs. Based on the results of the CAPs, I was able to generate relevant pragmatic goals to target with this student in therapy.

B. Yearly Reassessment: 8-year, 11-month-old diagnosed with anxiety

I definitely had some trepidation about how well the CAPs would be able to capture this student's pragmatic difficulties. This student was initially assessed via the Social Language Development Test-Elementary (SLDTE), which did show deficits in the areas of making inferences, interpersonal negotiation, and multiple interpretations of social situations. However, subsequent to his assessment, that student did exceptionally well in treatment and improved significantly. While I knew that the student was not quite done with treatment yet, I wasn't certain whether the CAPs was capable of picking up his subtle social pragmatic difficulties. Much to my surprise, the CAPs was effective in highlighting my student's difficulties on a number of subtests, including those pertaining to the effective reading and use of context and nonverbal cues, comprehension and interpretation of irony and sarcasm, effective support of peers via a variety of statements relevant to social situations (coherent and cohesive sentence formulation given relevant details), as well as use of relevant prosody, facial expressions, tone of voice, and nonverbal cues.

C. Initial Assessment: 11-year-old student with suspected language and literacy deficits 

This was definitely the trickiest assessment subject from my small sample. Based on the collected data I suspected the student had social communication deficits; however, given his relative strengths in a variety of areas and the fact that no one had previously brought it up, I truly did not anticipate that the CAPs would effectively and accurately identify his pragmatic needs. As expected, the student did quite well on the "easier" subtests of the CAPs (IPA, IP, and AE). However, I was very pleasantly surprised that the CAPs accurately picked up on the fact that the student presented with difficulty reading both context and nonverbal cues, as well as using nonverbal cues to effectively answer the presented questions.

Summary: While my sample of subjects has been quite small to date, I fully intend to continue using the CAPs with students of varying ages and diagnoses in order to continue refining the profile of students who will significantly benefit from CAPs administration for assessment and reassessment purposes.

MISC:

Current cost: $149

Where to purchase: Effective 1/7/19 on the WPS Publishing website

There you have it! These are my impressions of using the CAPs in my settings. How about you? Have you used this test with any of your students to date? If yes, what are some strengths and limitations you are noticing?



On the Limitations of Using Vocabulary Tests with School-Aged Students

Those of you who read my blog on a semi-regular basis know that I spend a considerable amount of time in both of my work settings (an outpatient school located in a psychiatric hospital as well as private practice) conducting language and literacy evaluations of preschool and school-aged children 3-18 years of age. During that process, I spend a significant amount of time reviewing outside speech and language evaluations. Interestingly, no matter what the child's age is (7 or 17), some form of receptive and/or expressive vocabulary testing is invariably mentioned in the language report.

Many of you may be wondering, “What’s wrong with having a vocabulary test as part of an assessment battery? Isn’t vocabulary hugely correlated with both language and literacy outcomes?”  The answer is, “It is more complicated than that.” Here’s why.

Children with robust lexicons formulate longer sentences and more interesting stories, better comprehend complex texts, and even compensate to some degree for reading deficits (Colozzo et al., 2011; Law and Edwards, 2015; Rvachew and Grawburg, 2006).

In contrast, studies have found that children with Developmental Language Disorder (DLD) (formerly known as Specific Language Impairment or SLI) have limited expressive vocabularies (Leonard, 2014), have trouble learning new words (Alt & Spaulding, 2011; Storkel et al., 2016), and have clinically significant word retrieval deficits (Dockrell, Messer, George, & Wilson, 1998).

Due to these deficits, one-word vocabulary tests are often used in the assessment process to qualify children for speech and language services (Betz, Eickhoff, & Sullivan, 2013). However, studies have found that single-word vocabulary tests have poor psychometric properties and/or are not representative of linguistic competence embedded in life activities (Gray et al., 1999; Ukrainetz & Blomquist, 2002; Bogue, DeThorne, & Schaefer, 2014).

Because of this, single-word vocabulary tests can overinflate testing scores and fail to represent the child's true expressive language competence. Finally, even when a student truly has solid or even superior vocabulary knowledge and naming skills, it doesn't mean that s/he can effectively utilize these abilities during narrative production or during reading and writing tasks.

Don't believe me? Consider reviewing language evaluations of current or former students who received outstanding scores on one-word vocabulary tests, yet who were unable to utilize these words to perform semantic flexibility tasks (e.g., name antonyms and synonyms, provide clear definitions, define multiple-meaning words), produce coherent and cohesive narratives, comprehend these words in the context of read texts, or utilize them during writing composition tasks.

The problem is that numerous SLPs overuse these tests and rely on them for qualification purposes when diagnosing language impairment (Betz, Eickhoff, & Sullivan, 2013). However, the practice of qualifying students based on single-word vocabulary testing in conjunction with psychometrically weak comprehensive testing (visit HERE for a compilation of psychometric data on major SLP testing), can often result in many language-impaired students not being qualified for language therapy services despite desperately needing them.

Now, it's important to understand that I am not recommending the elimination of vocabulary tests from SLP assessment batteries. I am merely suggesting that SLPs use these tests wisely during the assessment process and utilize them with children who truly benefit from their administration. Such populations include toddlers and preschoolers (under 5 years of age) as well as any children presenting with severe language deficits regardless of age, secondary to intellectual and neurodevelopmental impairments such as ASD, DS, FXS, FASD, etc. They are especially relevant for children with limited vocabularies who are unable to effectively participate in semantic flexibility tasks or produce narratives. For these children, we want to learn more about the types of words they know and use on a daily basis to express their wants/needs, so we can increase their lexicon for functional communication purposes and prepare them for effective engagement in both semantic flexibility and narrative tasks, in order to further improve their language abilities.

In contrast, for children ages 5-6 and older, it is far more practical for SLPs to functionally determine their linguistic flexibility skills. This can be accomplished via standardized as well as informal measures. Broadly speaking, linguistic flexibility tasks focus on the manipulation of language. Tasks such as generation of attributes, production of synonyms and antonyms, formulation of clear and precise definitions of words, and explanations of multiple-meaning, figurative, and ambiguous words and sentences are all examples of language manipulation tasks.

As such, these tasks are far more representative of the student’s language ability in an academic setting versus selecting a picture out of a visual field of four items (receptive identification) or naming a word in the presented picture (expressive generation).

Now, there are numerous tests which contain subtests relevant to this purpose. I personally often use select subtests from the tests below:

  • The WORD Tests (Elementary and Adolescent)
    • Associations
    • Antonyms
    • Synonyms
    • Definitions
    • Flexible Meanings
  • Language Processing Test – 3 (LPT-3)
    • Similarities and Differences
    • Multiple Meaning Words
    • Attributes
  • Expressive Language Test – 2 (ELT-2)
    • Metalinguistics
    • Defining Categories
  • Test of Integrated Language and Literacy 
    • Vocabulary Awareness
  • Clinical Evaluation of Language Fundamentals – 5 Metalinguistics (CELF-5M)
    • Multiple Meanings
    • Figurative Language

There are a number of other tests which contain subtests suitable for this purpose. SLPs can also easily create their own informal assessment procedures, similar to the above, for clinical assessment purposes.

However, even these tasks, though a huge improvement over one-word vocabulary tests, are not sufficient. In addition to these, research strongly recommends the inclusion of narrative assessment (which is highly correlated with social, reading, and academic outcomes) as part of an SLP assessment battery.

Narrative language skills have routinely been identified as one of the single best predictors of future academic success (Bishop & Edmundson, 1987; Feagans & Appelbaum, 1986; Dickinson and McCabe, 2001). Language produced during story retelling is positively related to monolingual and bilingual reading achievement (Reese et al., 2010; Miller et al., 2006). Narratives provide insights into a child's verbal expression by tapping into multiple language features and organizational abilities simultaneously (Hoffman, 2009; Ukrainetz, 2006; Bliss & McCabe, 2012). They encompass a number of higher-level language and cognitive skills (Paul et al., 1996) such as event sequencing, text cohesiveness, use of precise vocabulary to convey ideas without visual support, comprehension of cause-effect relationships, etc. Narratives bridge the gap between oral and written language and are needed for solid reading and writing development (Snow et al., 1998).

Conversely, poor discourse and narrative abilities place children at risk for learning and literacy-related difficulties, including reading problems (McCabe & Rollins, 1994), while narrative weaknesses significantly correlate with social communication deficits (Norbury, Gemmell & Paul, 2014). As a result, narrative analyses help SLPs distinguish children with DLD from their typically developing (TD) peers (Allen et al., 2012).

So the next time you are tasked with selecting appropriate language testing to determine whether a student presents with language and literacy deficits, don’t be so hasty in picking up that single-word vocabulary test.  Take a moment to carefully consider its utility for the student in question. After all, it may very well be a determining factor in deciding whether the student will qualify for language therapy services.

References: 

  1. Allen, M., Ukrainetz, T., & Carswell, A. (2012). The narrative language performance of three types of at-risk first-grade readers. Language, Speech, and Hearing Services in Schools, 43(2), 205–221.
  2. Alt, M., & Spaulding, T. (2011). The effect of time on word learning: An examination of decay of the memory trace and vocal rehearsal in children with and without specific language impairment. Journal of Communication Disorders, 44(6), 640–654.
  3. Betz, Eickhoff, & Sullivan (2013). Factors influencing the selection of standardized tests for the diagnosis of specific language impairment. Language, Speech, and Hearing Services in Schools, 44, 133–146.
  4. Bishop, D. V. M., & Edmundson, A. (1987). Language-impaired 4-year-olds: Distinguishing transient from persistent impairment. Journal of Speech and Hearing Disorders, 52, 156–173.
  5. Bliss, L., & McCabe, A. (2012). Personal narratives: Assessment and intervention. Perspectives on Language Learning and Education, 19, 130–138.
  6. Bogue, E. L., DeThorne, L. S., & Schaefer, B. A. (2014). A psychometric analysis of childhood vocabulary tests. Contemporary Issues in Communication Science and Disorders, 41, 55–69.
  7. Colozzo, P., Gillam, R. B., Wood, M., Schnell, R. D., & Johnston, J. R. (2011). Content and form in the narratives of children with specific language impairment. Journal of Speech, Language, and Hearing Research, 54(6), 1609–1627.
  8. Dickinson, D. K., & McCabe, A. (2001). Bringing it all together: The multiple origins, skills and environmental supports of early literacy. Learning Disabilities Research and Practice, 16, 186–202.
  9. Dockrell, J. E., Messer, D., George, R., & Wilson, G. (1998). Children with word-finding difficulties: Prevalence, presentation and naming problems. International Journal of Language & Communication Disorders, 33, 445–454.
  10. Feagans, L., & Appelbaum, M. (1986). Validation of language subtypes in learning disabled children. Journal of Educational Psychology, 78, 358–364.
  11. Gray, S., Plante, E., Vance, R., & Henrichsen, M. (1999). The diagnostic accuracy of four vocabulary tests administered to preschool-age children. Language, Speech, and Hearing Services in Schools, 30(2), 196–206.
  12. Hoffman, L. M. (2009). Narrative language intervention intensity and dosage: Telling the whole story. Topics in Language Disorders, 29, 329–343.
  13. Law, F., II, & Edwards, J. R. (2015). Effects of vocabulary size on online lexical processing by preschoolers. Language Learning and Development, 11, 331–355.
  14. Leonard, L. B. (2014). Children with specific language impairment. Cambridge, MA: MIT Press.
  15. McCabe, A., & Rollins, P. R. (1994). Assessment of preschool narrative skills. American Journal of Speech-Language Pathology, 3(1), 45–56.
  16. Miller, J., et al. (2006). Oral language and reading in bilingual children. Learning Disabilities Research and Practice, 21, 30–43.
  17. Norbury, C. F., Gemmell, T., & Paul, R. (2014). Pragmatics abilities in narrative production: A cross-disorder comparison. Journal of Child Language, 41(3), 485–510.
  18. Paul, R., Hernandez, R., Taylor, L., & Johnson, K. (1996). Narrative development in late talkers: Early school age. Journal of Speech and Hearing Research, 39(6), 1295–1303.
  19. Reese, E., Suggate, S., Long, J., & Schaughency, E. (2010). Children's oral narrative and reading skills in the first three years of reading instruction. Reading & Writing: An Interdisciplinary Journal, 23, 627–644.
  20. Rvachew, S., & Grawburg, M. (2006). Correlates of phonological awareness in preschoolers with speech sound disorders. Journal of Speech, Language, and Hearing Research, 49, 74–87.
  21. Snow, C. E., Burns, M. S., & Griffin, P. (Eds.) (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
  22. Ukrainetz, T. A. (2006). Teaching narrative structure: Coherence, cohesion, and captivation. In T. A. Ukrainetz (Ed.), Contextualized language intervention: Scaffolding PreK–12 literacy achievement (pp. 195–246). Austin, TX: Pro-Ed.
  23. Ukrainetz, T. A., & Blomquist, C. (2002). The criterion validity of four vocabulary tests compared with a language sample. Child Language Teaching and Therapy, 18, 59–78.


Help, My Student Has a Huge Score Discrepancy Between Tests and I Don't Know Why?

Here’s a  familiar scenario to many SLPs. You’ve administered several standardized language tests to your student (e.g., CELF-5 & TILLS). You expected to see roughly similar scores across tests. Much to your surprise, you find that while your student attained somewhat average scores on one assessment, s/he had completely bombed the second assessment, and you have no idea why that happened.

So you go on social media and start crowdsourcing for information from SLPs located in a variety of states and countries in order to figure out what happened and what you should do about it. Of course, the problem in such situations is that while some responses will be spot on, many will be utterly inappropriate. Luckily, the answer lies much closer than you think: in the actual technical manuals of the administered tests.

So what is responsible for such a drastic discrepancy? A few things, actually. For starters, unless both tests were co-normed (used the same sample of test takers), be prepared to see disparate scores due to the ability levels of the children in each test's normative group. Another important factor involved in the score discrepancy is how accurately the test differentiates disordered children from typically functioning ones.

Let’s compare two actual language tests to learn more. For the purpose of this exercise let us select The Clinical Evaluation of Language Fundamentals-5 (CELF-5) and the Test of Integrated Language and Literacy (TILLS).   The former is a very familiar entity to numerous SLPs, while the latter is just coming into its own, having been released in the market only several years ago.

Both tests share a number of similarities. Both were created to assess the language abilities of children and adolescents with suspected language disorders. Both assess aspects of language and literacy (albeit not to the same degree nor with the same level of thoroughness).  Both can be used for language disorder classification purposes, or can they?

Actually, my last statement is rather debatable. A careful perusal of the CELF-5 reveals that its normative sample of 3,000 children included a whopping 23% of children with language-related disabilities. In fact, the folks from the Leaders Project did such an excellent and thorough job reviewing its psychometric properties that, rather than repeating that information, readers can simply click here to review the limitations of the CELF-5 directly on the Leaders Project website. Furthermore, even the CELF-5 developers themselves have stated that: "Based on CELF-5 sensitivity and specificity values, the optimal cut score to achieve the best balance is -1.33 (standard score of 80). Using a standard score of 80 as a cut score yields sensitivity and specificity values of .97."

In other words, obtaining a standard score of 80 or below on the CELF-5 indicates that a child presents with a language disorder. Of course, as many SLPs already know, the eligibility criteria in the schools require language scores far below that in order for the student to qualify to receive language therapy services.
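As an aside, for readers wondering where that 80 comes from: standard scores are conventionally scaled to a mean of 100 and a standard deviation of 15, so the -1.33 SD cut converts as follows (a quick arithmetic sketch of my own, not taken from the manual):

    mean, sd = 100, 15
    z_cut = -1.33
    cut_score = mean + z_cut * sd  # 100 + (-1.33 * 15) = 80.05
    print(round(cut_score))        # -> 80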

In fact, the test’s authors are fully aware of that and acknowledge that in the same document. “Keep in mind that students who have language deficits may not obtain scores that qualify him or her for placement based on the program’s criteria for eligibility. You’ll need to plan how to address the student’s needs within the framework established by your program.”

But here is another issue: the CELF-5 sensitivity group included only a very small number of children, "67 children ranging from 5;0 to 15;11," whose only requirement was to score 1.5 SDs below the mean "on any standardized language test." As the Leaders Project reviewers point out: "This means that the 67 children in the sensitivity group could all have had severe disabilities. They might have multiple disabilities in addition to severe language disorders including severe intellectual disabilities or Autism Spectrum Disorder making it easy for a language disorder test to identify this group as having language disorders with extremely high accuracy." (pp. 7-8)

Of course, this raises the question: why would anyone continue to administer such a test to students if its administration (a) does not guarantee disorder identification and (b) will not make the student eligible for language therapy despite demonstrated need?

The problem is that even though SLPs are mandated to use a variety of quantitative clinical observations and procedures in order to reliably qualify students for services, standardized tests still carry more value than they should. Consequently, it is important for SLPs to select the right test to make their job easier.

The TILLS is a far less well-known assessment than the CELF-5, yet in the few years it has been on the market, it has made its presence felt as a solid assessment tool due to its valid and reliable psychometric properties. Again, the venerable Dr. Carol Westby has already done such an excellent job reviewing its psychometric properties that I will refer readers to her review here rather than repeating that information. The upshot of her review is as follows: "The TILLS does not include children and adolescents with language/literacy impairments (LLIs) in the norming sample. Since the 1990s, nearly all language assessments have included children with LLIs in the norming sample. Doing so lowers overall scores, making it more difficult to use the assessment to identify students with LLIs." (pg. 11)

Now, here many proponents of the inclusion of children with language disorders in the normative sample will make a variation of the following claim: "You CANNOT diagnose a language impairment if children with language impairment were not included in the normative sample of that assessment!" Here's the major problem with such an assertion. When a child is referred for a language assessment, we really have no way of knowing whether this child has a language impairment until we actually finish testing him or her. We are in fact attempting to confirm or refute this possibility, hopefully via the use of reliable and valid testing. However, if the normative sample includes many children with language and learning difficulties, this significantly affects the accuracy of our identification, since we are interested in comparing this child's results to typically developing children, not to disordered ones, in order to learn whether the child has a disorder in the first place. As per Peña, Spaulding and Plante (2006), "the inclusion of children with disabilities may be at odds with the goal of classification, typically the primary function of the speech pathologist's assessment. In fact, by including such children in the normative sample, we may be 'shooting ourselves in the foot' in terms of testing for the purpose of identifying disorders." (p. 248)

Then there’s a variation of this assertion, which I have seen in several Facebook groups: “Children with language disorders score at the low end of normal distribution“.  Once again such assertion is incorrect since Spaulding, Plante & Farinella (2006) have actually shown that on average, these kids will score at least 1.28 SDs below the mean, which is not the low average range of normal distribution by any means.  As per authors: “Specific data supporting the application of “low score” criteria for the identification of language impairment is not supported by the majority of current commercially available tests. However, alternate sources of data (sensitivity and specificity rates) that support accurate identification are available for a subset of the available tests.” (p. 61)

Now, let us get back to your child in question, who performed so differently on the two administered tests. Given his clinically observed difficulties, you fully expected your testing to confirm them. But you are now more confused than before. Don't be! Search the technical manual for information on each test's sensitivity and specificity and look up the numbers. Plante and Vance (1994) put forth the following criteria for accurate identification of a disorder (discriminant accuracy): "90% should be considered good discriminant accuracy; 80% to 89% should be considered fair. Below 80%, misidentifications occur at unacceptably high rates," leading to "serious social consequences" for misidentified children (p. 21).
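If it helps to see those two numbers computed, here is a minimal sketch of how sensitivity and specificity are derived from identification counts, with the Plante and Vance bands applied. The counts below are hypothetical and are not drawn from any actual test manual.

    # Hypothetical counts: 50 disordered children (43 correctly flagged),
    # 50 typically developing children (46 correctly cleared).
    def band(rate):
        if rate >= 0.90:
            return "good"
        if rate >= 0.80:
            return "fair"
        return "unacceptable"

    sensitivity = 43 / (43 + 7)   # disordered children correctly identified
    specificity = 46 / (46 + 4)   # typical children correctly cleared
    print(sensitivity, band(sensitivity))  # -> 0.86 fair
    print(specificity, band(specificity))  # -> 0.92 good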

Review the sensitivity and specificity of your test(s), take a look at the normative samples, and see if anything unusual jumps out at you which leads you to believe that the administered test may have some issues with assessing what it purports to assess. Then, after supplementing your standardized testing results with good quality clinical data (e.g., narrative samples, dynamic assessment tasks, etc.), consider creating a solidly referenced purchasing pitch to your administration to invest in more valid and reliable standardized tests.

Hope you find this information helpful in your quest to better serve the clients on your caseload. If you are interested in learning more regarding evidence-based assessment practices as well as the psychometric properties of various standardized speech-language tests, visit the SLPs for Evidence-Based Practice group on Facebook to learn more.

References:

  1. Peña, E. D., Spaulding, T. J., & Plante, E. (2006). The composition of normative groups and diagnostic decision making: Shooting ourselves in the foot. American Journal of Speech-Language Pathology, 15(3), 247–254.
  2. Plante, E., & Vance, R. (1994). Selection of preschool language tests: A data-based approach. Language, Speech, and Hearing Services in Schools, 25(1), 15–24.
  3. Spaulding, T. J., Plante, E., & Farinella, K. A. (2006). Eligibility criteria for language impairment: Is the low end of normal always appropriate? Language, Speech, and Hearing Services in Schools, 37(1), 61–72.


Review and Giveaway: Test of Semantic Reasoning (TOSR)

Today I am reviewing a new receptive vocabulary measure for students 7-17 years of age, entitled the Test of Semantic Reasoning (TOSR), created by Beth Lawrence, MA, CCC-SLP, and Deena Seifert, MS, CCC-SLP, and available via Academic Therapy Publications.

The TOSR assesses the student's semantic reasoning skills, or the ability to nonverbally identify vocabulary via image analysis and retrieve it from one's lexicon.

According to the authors, the TOSR assesses “breadth (the number of lexical entries one has) and depth (the extent of semantic representation for each known word) of vocabulary knowledge without taxing expressive language skills”.

The test was normed on 1,117 students ranging from 7 through 17 years of age, with the norming sample including such diagnoses as learning disabilities, language impairments, ADHD, and autism. This fact is important because the manual did not indicate how the above students were identified. According to Peña, Spaulding and Plante (2006), the inclusion of children with disabilities in the normative sample can negatively affect the test's discriminant accuracy (the ability to separate typically developing from disordered children) by lowering the mean score, which may limit the test's ability to diagnose children with mild disabilities.

TOSR administration takes approximately 20 minutes, although it can take a little longer or shorter depending on the child's level of knowledge. It is relatively straightforward. You start at the age-based entry point and then calculate a basal and a ceiling. For the basal rule, if the child misses any of the first 3 items, the examiner must go backward until the child attains 3 correct responses in a row. To attain a ceiling, test administration can be discontinued after the student makes 6 incorrect responses out of 8 items.
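For those who think in pseudocode, here is a rough sketch of that basal/ceiling logic. The response data and my sliding-window reading of the "6 out of 8 incorrect" rule are assumptions on my part; the TOSR manual's actual instructions take precedence.

    # responses: list of booleans, True = correct, in administration order.
    def has_basal(responses, run=3):
        # Basal: 3 correct responses in a row.
        return any(all(responses[i:i + run]) for i in range(len(responses) - run + 1))

    def reached_ceiling(responses, window=8, max_errors=6):
        # Ceiling: 6 incorrect responses within 8 consecutive items.
        for i in range(max(0, len(responses) - window + 1)):
            if sum(not r for r in responses[i:i + window]) >= max_errors:
                return True
        return False

    print(has_basal([False, True, True, True]))            # -> True
    print(reached_ceiling([True] + [False] * 6 + [True]))  # -> True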

Test administration is as follows. Students are presented with 4 images and told the 4 words which accompany the images. The examiner asks: "Which word goes with all four pictures? The words are…"

Students then must select, from the choice of four, the single word that best represents the multiple contexts depicted by all the images.

According to the authors, this assessment can provide "information on children and adolescents' basic receptive vocabulary knowledge, as well as their higher order thinking and reasoning in the semantic domain."

My impressions:

During the time I have had this test, I've administered it to 6 students on my caseload with documented histories of language disorders and learning disabilities. Interestingly, all students with the exception of one passed it with flying colors. Four out of 6 received standard scores solidly in the average range of functioning, including a student recently added to my caseload with significant word-finding deficits. Another student, with a moderate intellectual disability, scored in the low average range (18th percentile). Finally, my last student scored very poorly (1st percentile); however, in addition to being a multicultural speaker, he also had a significant language disorder. He was actually tested for the purpose of comparison with the others, to see, if you will, what it takes not to pass the test.

I was surprised to see several children with documented vocabulary knowledge deficits pass this test. Furthermore, when I informally extended the test and asked them to produce select vocabulary words expressively or use them in sentences, very few of the children could actually accomplish these tasks successfully. As such, it is important for clinicians to be aware of the above finding, since receptive knowledge given multiple response choices does not constitute spontaneous word retrieval.

Consequently, I caution SLPs against using the TOSR as an isolated vocabulary measure to qualify/disqualify children for services, and encourage them to add an informal expressive administration of this measure (naming words and using them in sentences) to gather further information regarding their students' expressive knowledge base.

I also caution against test administration to Culturally and Linguistically Diverse (CLD) students (who are being tested for the first time, vs. retesting of CLD students with confirmed language disorders) due to the increased potential for linguistic and cultural bias, which may result in test answers being marked incorrect due to a lack of relevant receptive vocabulary knowledge (in the absence of an actual disorder).

Final Thoughts:

I think that SLPs can effectively use this test as a replacement for the Receptive One-Word Picture Vocabulary Test-4 (ROWPVT-4), as it provides them with more information regarding the student's reasoning and receptive vocabulary abilities. I also think this test may be helpful with children with word-finding deficits in order to tease out a lack of knowledge vs. a retrieval issue.

You can find this assessment for purchase on the ATP website HERE. Finally, due to the generosity of one of its creators, Deena Seifert, MS, CCC-SLP, you can enter my Rafflecopter giveaway below for a chance to win your own copy!

Disclaimer:  I did receive a complimentary copy of this assessment for review from the publisher. Furthermore, the test creators will be mailing a copy of the test to one Rafflecopter winner. However, all the opinions expressed in this post are my own and are not influenced by the publisher or test developers.

References:

Peña, E. D., Spaulding, T. J., & Plante, E. (2006). The composition of normative groups and diagnostic decision making: Shooting ourselves in the foot. American Journal of Speech-Language Pathology, 15(3), 247–254.

  a Rafflecopter giveaway


Comprehensive Assessment of Adolescents with Suspected Language and Literacy Disorders

When many of us think of such labels as “language disorder” or “learning disability”, very infrequently do adolescents (students 13-18 years of age) come to mind. Even today, much of the research in the field of pediatric speech pathology involves preschool and school-aged children under 12 years of age.

The prevalence and incidence of language disorders in adolescents are very difficult to estimate, which is why some authors have even referred to them as a Neglected Group with Significant Problems with an "invisible disability."

Far fewer speech-language therapists work with middle-schoolers than with preschoolers and elementary-aged kids, while the number of SLPs working with high-school-aged students is frequently in the single digits in some districts and at zero in others. In fact, I am frequently told (and often see firsthand) that some administrators try to cut costs by attempting to dictate a discontinuation of speech-language services on the grounds that adolescents are "far too old for services" or can "no longer benefit from services."

But of course, the above is blatantly false. Undetected language deficits don't resolve with age! They simply worsen and turn into learning disabilities. Similarly, a lack of necessary and appropriate service provision to children with diagnosed language impairments at the middle-school and high-school levels will strongly affect their academic functioning and hinder their future vocational outcomes.

A cursory look at speech-pathology-related Facebook groups as well as ASHA forums reveals numerous SLPs in a continual search for the best methods of assessment and treatment of older students (~12-18 years of age).

Consequently, today I want to dedicate this post to a review of standardized assessment options for students 12-18 years of age with suspected language and literacy deficits.

Most comprehensive standardized assessments "typically focus on semantics, syntax, morphology, and phonology, as these are the performance areas in which specific skill development can be most objectively measured" (Hill & Coufal, 2005, p. 35). Very few of them actually incorporate aspects of literacy into their subtests in a meaningful way. Yet by the time students reach adolescence, literacy begins to play an incredibly critical role not just in all aspects of academics but also in social communication.

So when it comes to comprehensive general language testing, I highly recommend that SLPs select standardized measures with a focus on not just language but also literacy. Presently, of all the comprehensive assessment tools, I highly prefer the Test of Integrated Language and Literacy (TILLS) for students up to 18 years of age (see a comprehensive review HERE), which covers such literacy areas as phonological awareness, reading fluency, reading comprehension, writing, and spelling, in addition to such traditional language areas as vocabulary awareness, following directions, story recall, etc. However, while comprehensive tests have numerous uses, their administration alone will not constitute an adequate assessment.

So what areas should be assessed during language and literacy testing? Below are a few suggestions of standardized testing measures (and informal procedures) aimed at exploring the student's abilities in particular areas pertaining to language and literacy.

TESTS OF LANGUAGE

TESTS OF LITERACY

It is understandable how, given the sheer number of assessment choices, some clinicians may feel overwhelmed and be unsure where to begin an adolescent evaluation. Consequently, the use of a checklist prior to the initiation of assessment may be highly useful in order to identify the potential language weaknesses/deficits the student might experience. It will also allow clinicians to prioritize the hierarchy of testing instruments to use during the assessment.

While clinicians are encouraged to develop such checklists for their personal use, those who lack the time and opportunity can locate a number of already available checklists on the market.

For example, the comprehensive 6-page Speech Language Assessment Checklist for Adolescents (below) can be given to caregivers, classroom teachers, and even older students in order to check off the most pressing difficulties the student is experiencing in an academic setting. 


It is important for several individuals to fill out this checklist to ensure consistency of deficits, prior to determining whether an assessment is warranted in the first place and if so, which assessment areas need to be targeted.

Checklist Categories:

  1. Receptive Language
  2. Memory, Attention and Cognition
  3. Expressive Language
  4. Vocabulary
  5. Discourse
  6. Speech
  7. Voice
  8. Prosody
  9. Resonance
  10. Reading
  11. Writing
  12. Problem Solving
  13. Pragmatic Language Skills
  14. Social Emotional Development
  15. Executive Functioning


Based on the checklist administration, SLPs can reliably pinpoint the student's areas of deficit without needless administration of unrelated/unnecessary testing instruments. For example, if a student presents with deficits in the areas of problem solving and social pragmatic functioning, the administration of a general language test such as the Clinical Evaluation of Language Fundamentals-Fifth Edition (CELF-5) would NOT be functional (especially if the previous administration of educational testing did not reveal any red flags). In contrast, the administration of such tests as the Test of Problem Solving 2: Adolescent and the Social Language Development Test: Adolescent would better reflect the student's deficits in the above areas. (Checklist HERE; checklist sample HERE.)

It is very important to understand that students presenting with language and literacy deficits will not outgrow these deficits on their own. There may be "a time period when the students with early language disorders seem to catch up with their typically developing peers" (i.e., "illusory recovery") by undergoing a "spurt" in language learning (Sun & Wallach, 2014). However, these spurts are typically followed by a "post-spurt plateau." Due to ongoing challenges and an increase in academic demands, "many children with early language disorders fail to 'outgrow' these difficulties or catch up with their typically developing peers" (Sun & Wallach, 2014). As such, many adolescents "may not show academic or language-related learning difficulties until linguistic and cognitive demands of the task increase and exceed their limited abilities" (Sun & Wallach, 2014). Consequently, SLPs must consider the "underlying deficits that may be masked by early oral language development" and "evaluate a child's language abilities in all modalities, including pre-literacy, literacy, and metalinguistic skills" (Sun & Wallach, 2014).

References:

  1. Hill, J. W., & Coufal, K. L. (2005). Emotional/behavioral disorders: A retrospective examination of social skills, linguistics, and student outcomes. Communication Disorders Quarterly, 27(1), 33–46.
  2. Sun, L., & Wallach, G. (2014). Language disorders are learning disabilities: Challenges on the divergent and diverse paths to language learning disability. Topics in Language Disorders, 34(1), 25–38.

Helpful Smart Speech Therapy Resources 

  1. Assessment of Adolescents with Language and Literacy Impairments in Speech Language Pathology 
  2. Assessment and Treatment Bundles 
  3. Social Communication Materials
  4. Multicultural Materials 


Review of the Test of Integrated Language and Literacy (TILLS)

The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-aged children. As I have been using this test since it was published, I want to take the opportunity today to share a few of my impressions of this assessment.


First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals-5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students' scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out either low average or slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to hear about the TILLS's development, almost simultaneously through ASHA and the SPELL-Links ListServ. I was particularly happy because I knew that some of this test's developers (e.g., Dr. Elena Plante and Dr. Nickola Nelson) had published solid research in the areas of psychometrics and literacy, respectively.

According to the TILLS developers, it has been standardized for 3 purposes:

  • to identify language and literacy disorders
  • to document patterns of relative strengths and weaknesses
  • to track changes in language and literacy skills over time

The subtests can be administered in isolation (with the exception of a few) or in their entirety. The administration of all 15 subtests may take approximately an hour and a half, while the administration of the core subtests typically takes about 45 minutes.

Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.

  • Subtest 5 – Nonword Spelling
  • Subtest 7 – Reading Comprehension
  • Subtest 10 – Nonword Reading
  • Subtest 11 – Reading Fluency
  • Subtest 12 – Written Expression

However, if needed, there are several tests of early reading and writing abilities available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., the TERA-3: Test of Early Reading Ability–Third Edition; the Test of Early Written Language, Third Edition (TEWL-3); etc.).

Let’s move on to take a deeper look at its subtests. Please note that for the purposes of this review all images came directly from and are the property of Brookes Publishing Co (clicking on each of the below images will take you directly to their source).

1. The Vocabulary Awareness (VA) subtest requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works great in teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student's word reading abilities by asking them to read all the words before you read all the word choices to them. This way you can informally document any word misreadings made by the student even in the presence of an average subtest score.


2. The Phonemic Awareness (PA) subtest requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, similar to tests such as the CTOPP-2: Comprehensive Test of Phonological Processing–Second Edition or the Phonological Awareness Test 2 (PAT 2), it is still a highly useful and reliable measure of phonemic awareness (as one of many precursors to reading fluency success). This is especially true because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the students' executive function abilities in addition to their phonemic awareness skills.


3. The Story Retelling (SR) subtest requires students to do just that: retell a story. Be mindful of the fact that the presented stories have reduced complexity. Thus, unless the students possess significant retelling deficits, the above subtest may not capture their true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade), I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician's narrative model (e.g., HERE). For early elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie seen recently (or liked significantly) by them without the benefit of visual support altogether (e.g., HERE).


4. The Nonword Repetition (NR) subtest requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task's heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, & Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments will be observed to present with patterns of segment substitutions (subtle substitutions of sounds and syllables in presented nonsense words) as well as segment deletions in nonword sequences more than 2-3 or 3-4 syllables in length (depending on the child's age).


5. The Nonword Spelling (NS) subtest requires the students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to the administration of this subtest, in the same assessment session. In contrast to real-word spelling tasks, students cannot memorize the spelling of the presented words, which are still bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze subtest results in order to determine whether dialectal differences, rather than the presence of an actual disorder, are responsible for the error patterns.


6. The Listening Comprehension (LC) subtest requires the students to listen to short stories and then definitively answer story questions via the available answer choices, which include "Yes," "No," and "Maybe." This subtest also indirectly measures the students' metalinguistic awareness skills, as these are needed to detect when the text does not provide sufficient information to answer a particular question definitively (i.e., when a "Maybe" response may be called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement subtest administration with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be 'good guessers' but who are also reported to present with word-finding difficulties at sentence and discourse levels.


7. The Reading Comprehension (RC) subtest requires the students to read short stories and answer story questions in "Yes," "No," and "Maybe" format. This subtest is not stand-alone and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story out loud in order to determine whether s/he can proceed with taking this subtest or should discontinue due to being an emergent reader. The criterion for discontinuing the subtest is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.

While I find it terrific for students with severe-profound reading deficits and/or below-average IQ, a number of my students with average IQ and moderately impaired reading skills managed to pass it via a combination of guessing and luck, despite being observed to misread between 40-60% of the presented words aloud. Be mindful of the fact that such students may typically make up to 5-6 errors during the reading of the first story. Thus, according to the administration guidelines, these students will be allowed to proceed and take this subtest. They will then continue to misread text during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is a definitive (“Yes”, “No”, and “Maybe”) vs. open-ended question format, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing the administration of this subtest with grade-level (or below-grade-level) texts (see HERE and/or HERE) to assess the student’s reading comprehension informally.

I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student’s reading for further analysis (see the Reading Fluency section below). After the completion of the story, I ask the student questions with a focus on main-idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student’s age, I may ask abstract/factual text questions with and without text access. Overall, I find that informal administration of grade-level (or even below-grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student’s reading comprehension abilities than the administration of standardized reading tests alone.


8. The Following Directions (FD) subtest (description above) measures the student’s ability to execute directions of increasing length and complexity. It measures the student’s short-term, immediate, and working memory, as well as their language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters, numbers, etc.) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the students are expected to move the card covering the stimuli and then to execute the visual-spatial, directional, sequential, and logical if-then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student’s memory and comprehension. The subtest was created to simulate a teacher’s use of procedural language (giving directions) in the classroom setting (as per the developers).


9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest administration. Despite the relatively short passage of time between the two subtests, it is considered a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student’s performance on the two measures to determine whether the student did better or worse on either of them (e.g., recalled more information after a period of time passed vs. immediately after being read the story). However, as mentioned previously, some students may recall this previously presented story fairly accurately and as a result may obtain an average score despite a history of teacher/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement the administration of this subtest with a recall of a movie/book recently seen/read by the student (a few days prior) in order to compare both performances and note any weaknesses/limitations.


10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have merely memorized sight words and are now having difficulty decoding unfamiliar words as a result.

11. The Reading Fluency (RF) subtest (description above) requires students to read facts which make up simple stories fluently and correctly. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.

It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per the authors, “a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly.”

Thus, “the mean is to the left of the mode” (see the publisher’s Tills Q&A – Negative Skew). This is why a student could earn an average standard score (near the mean) yet a low percentile rank when true percentiles are used rather than NCE (Normal Curve Equivalent) percentiles.

Consequently, under certain conditions (see HERE), the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student’s ability on this subtest.
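To make the skew issue concrete, here is a minimal sketch (purely illustrative, using a synthetic distribution rather than actual TILLS norms) of how a score sitting exactly at the mean of a negatively skewed distribution converts to the 50th percentile under a normal-curve (NCE-style) assumption, while the true percentile rank lands noticeably lower:

```python
import numpy as np
from scipy import stats

# Hypothetical, negatively skewed subtest scores: most students do very
# well, while a much smaller number do quite poorly (long left tail).
rng = np.random.default_rng(42)
scores = 100 - rng.gamma(shape=2.0, scale=8.0, size=10_000)

student = scores.mean()  # a student scoring exactly at the mean (z = 0)

# Normal-curve (NCE-style) percentile: assumes a symmetric bell curve.
z = (student - scores.mean()) / scores.std()
normal_curve_percentile = stats.norm.cdf(z) * 100   # 50.0

# True (empirical) percentile rank: where the score actually falls.
true_percentile = stats.percentileofscore(scores, student)  # roughly 41

print(f"Normal-curve percentile: {normal_curve_percentile:.1f}")
print(f"True percentile rank:    {true_percentile:.1f}")
```

Because more than half of the synthetic students score above the mean, the same raw score tells two different percentile stories, which is exactly the discrepancy the publisher describes for this subtest.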

Indeed, due to the reduced complexity of the presented words, some students (especially younger, elementary-aged ones) may obtain average scores and still present with serious reading fluency deficits.

I frequently see this in students with average IQ and good long-term memory who, by second and third grade, have managed to memorize an admirable number of sight words, which makes their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking them to read several paragraphs from it (see HERE and/or HERE).

As the students are reading, I calculate their reading fluency by counting the number of words they read per minute. I find this very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student’s reading fluency on both standardized and informal assessment measures in order to document the student’s strengths and limitations.
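For those who like to automate the arithmetic, here is a minimal sketch of this informal fluency calculation (the function name and the passage numbers are hypothetical; I am also assuming the common convention of subtracting misread words to obtain words correct per minute):

```python
def words_correct_per_minute(words_read: int, misreadings: int, seconds: float) -> float:
    """Reading fluency as words correct per minute (WCPM): misread words are not credited."""
    return (words_read - misreadings) / (seconds / 60.0)

# e.g., a student reads a 180-word excerpt of a grade-level science text
# in 2.5 minutes (150 seconds) with 12 misreadings:
wcpm = words_correct_per_minute(words_read=180, misreadings=12, seconds=150)
print(f"{wcpm:.0f} words correct per minute")  # 67
```

Pairing the resulting rate with the qualitative notes (pauses, misreadings, word-attack skills) is what yields the fast/slow by accurate/inaccurate profile described above.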

TILLS-subtest-12-written-expression

12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.

The examiner needs to show the student a different story which integrates simple facts into a coherent narrative. After the examiner reads that simple story to the student, s/he is expected to tell the student that the story is okay but “sounds kind of choppy.” The examiner then needs to show the student an example of how the facts could be put together in a way that sounds more interesting and less choppy by combining sentences. Finally, the examiner will ask the student to rewrite the story presented to them in a similar manner (e.g., “less choppy and more interesting”).


After the student finishes his/her story, the examiner will analyze it and generate the following scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as the Examiner’s Practice Workbook are provided to assist with scoring as it takes a bit of training as well as trial and error to complete it, especially if the examiners are not familiar with certain procedures (e.g., calculating T-units).
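For examiners unfamiliar with T-units, here is a minimal sketch of one common T-unit metric, mean length of T-unit (MLTU). It assumes the writing sample has already been segmented into T-units by hand (the sample T-units below are hypothetical, not TILLS items), since segmentation itself is a judgment-based manual step:

```python
# A T-unit is one main clause plus any subordinate clauses attached to it.
# This sketch only automates the counting once the T-units are listed.
t_units = [
    "the dog barked at the mailman",
    "because he was startled the mailman dropped the letters and ran",
]

words_per_t_unit = [len(t.split()) for t in t_units]
mltu = sum(words_per_t_unit) / len(t_units)  # mean length of T-unit
print(f"{len(t_units)} T-units, MLTU = {mltu:.1f} words")  # 2 T-units, MLTU = 8.5 words
```

The actual TILLS word, sentence, and discourse scoring is considerably more involved than this; the sketch is only meant to demystify the T-unit bookkeeping.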

Full disclosure: Because the above subtest is still essentially sentence combining, I have only used it a handful of times with my students. Typically, when I’ve used it in the past, most of my students fell into two categories: those who failed it completely (by copying the text word for word, failing to generate any written output, etc.) and those who passed it with flying colors but still presented with notable written-output deficits. Consequently, I’ve replaced the Written Expression subtest with standardized written-language tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.

Having said that, many clinicians may not have access to other standardized written assessments, or may lack the time to administer entire standardized written measures (which may frequently take between 60 and 90 minutes). Consequently, in the absence of other standardized writing assessments, this subtest can be effectively used to gauge the student’s basic writing abilities and, if needed, supplemented by the informal writing measures mentioned above.


13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially, students become actors who need to act out particular scenes while viewing select words presented to them.

Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students.  Here is why.

I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students’ deficits in this area. Yet past administrations of this subtest showed me that a number of my students can pass it quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).

Again, as I’ve previously mentioned, many clinicians may not have access to other standardized social communication assessments, or may lack the time to administer such comprehensive measures in their entirety. Consequently, in the absence of other social communication assessments, this subtest can be used to get a baseline of the student’s basic social communication abilities and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.


14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language such as syntax or vocabulary).


15. The Digit Span Backward (DSB) subtest (description above) assesses the student’s working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students use to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before this subtest.

SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.

To continue, in addition to subtests which assess the students’ literacy abilities, the TILLS also possesses a number of interesting features.

For starters, there is the TILLS Easy Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores; the system does the rest. After the raw scores are entered, the system will generate a PDF document with all the data, which includes (but is not limited to) standard scores, percentile ranks, and a variety of composite and core scores. The examiner can then save the PDF on their device (laptop, PC, tablet, etc.) for further analysis.

Then there is the quadrant model. According to the TILLS sampler (HERE), “it allows the examiners to assess and compare students’ language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing” and then create “meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students” (pg. 21).


Then there is the Student Language Scale (SLS), a one-page checklist which parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student’s performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests, or even in isolation (as per the developers).

Furthermore, according to the developers, because the normative sample included several special-needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), as well as students with intellectual disabilities (as long as they are functioning at a developmental age of 6 or above).

According to the developers, the TILLS is aligned with the Common Core Standards and can be administered as frequently as two times a year for progress monitoring (a minimum of 6 months after the first administration).

With respect to bilingualism, examiners can use the TILLS with caution with simultaneous English learners but not with sequential English learners (see further explanations HERE). Translations of the TILLS are definitely not allowed, as they would undermine test validity and reliability.

So there you have it: these are some of my impressions of this test. Some of you may notice that I spent a significant amount of time pointing out the test’s limitations. However, it is very important to note that research indicates there is no such thing as a “perfect standardized test” (see HERE for more information). All standardized tests have their limitations.

Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.

That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS.  For starters, take a look at Brookes Publishing TILLS resources.  These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars.   There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).

But that’s not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc.). Take a look at them, as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.

To access a fully editable TILLS template, click HERE.

Disclaimer:  I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing  or any of the TILLS developers to write it.  All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.

References: 

Leclercq, A., Maillart, C., & Majerus, S. (2013). Nonword repetition problems in children with SLI: A deficit in accessing long-term linguistic representations? Topics in Language Disorders, 33(3), 238–254.

Related Posts:

Posted on

What Research Shows About the Functional Relevance of Standardized Language Tests

As an SLP who routinely conducts speech and language assessments in several settings (e.g., school and private practice), I understand the utility of and the need for standardized speech, language, and literacy tests. However, as an SLP who works with children with dramatically varying degrees of cognition, abilities, and skill-sets, I also highly value supplementing these standardized tests with functional and dynamic assessments, interactions, and observations.

Since a significant value is placed on standardized testing by both schools and insurance companies for the purposes of service provision and reimbursement, I wanted to summarize in today’s post the findings of recent articles on this topic.  Since my primary interest lies in assessing and treating school-age children, for the purposes of today’s post all of the reviewed articles came directly from the Language Speech and Hearing Services in Schools  (LSHSS) journal.

We’ve all been there. We’ve all had situations in which students scored on the low end of normal, or had a few subtest scores in the below-average range which added up to an average total score. We’ve all pored over eligibility requirements trying to figure out whether the student should receive therapy services given the stringent standardized testing criteria in some states/districts.

Of course, as it turns out, the answer is never simple.  In 2006, Spaulding, Plante & Farinella set out to examine the assumption: “that children with language impairment will receive low scores on standardized tests, and therefore [those] low scores will accurately identify these children” (61).   So they analyzed the data from 43 commercially available child language tests to identify whether evidence exists to support their use in identifying language impairment in children.

It turns out it did not! Due to the variation in psychometric properties of various tests (see the article for specific details), many children with language impairment are overlooked by standardized tests, either by receiving scores within the average range or by not receiving scores low enough to qualify for services. Thus, “the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given.” Furthermore, “even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given (69).”

Consequently, they created a decision tree (see below) with recommendations for clinicians using standardized testing. They recommend using alternate sources of data (sensitivity and specificity rates) to support accurate identification (available for a small subset of select tests).

The idea behind it is: “if sensitivity and specificity data are strong, and these data were derived from subjects who are comparable to the child tested, then the clinician can be relatively confident in relying on the test score data to aid his or her diagnostic decision. However, if the data are weak, then more caution is warranted and other sources of information on the child’s status might have primacy in making a diagnosis (70).”
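To make the two metrics in that decision tree concrete, here is a minimal sketch with invented counts for a fictional test (these numbers are not from Spaulding and colleagues):

```python
# Imagine a validation sample of 100 children: 50 with confirmed language
# impairment (LI) and 50 typically developing peers.
true_positives  = 43   # children with LI the test correctly flags
false_negatives = 7    # children with LI the test misses
true_negatives  = 45   # typical children the test correctly clears
false_positives = 5    # typical children the test wrongly flags

# Sensitivity: of the children who truly have LI, how many are caught?
sensitivity = true_positives / (true_positives + false_negatives)  # 0.86

# Specificity: of the typical children, how many are correctly cleared?
specificity = true_negatives / (true_negatives + false_positives)  # 0.90

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```

When a manual reports strong values of this kind for a sample comparable to your student, the score can carry more diagnostic weight; when they are weak or absent, other sources of information should take primacy, just as the quote above suggests.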

Fast forward 6 years, and a number of newly revised tests later,  in 2012, Spaulding and colleagues set out to “identify various U.S. state education departments’ criteria for determining the severity of language impairment in children, with particular focus on the use of norm-referenced tests” as well as to “determine if norm-referenced tests of child language were developed for the purpose of identifying the severity of children’s language impairment”  (176).

They obtained published procedures for severity determinations from available U.S. state education departments, which specified the use of norm-referenced tests, and reviewed the manuals for 45 norm-referenced tests of child language to determine if each test was designed to identify the degree of a child’s language impairment.

What they found out was “the degree of use and cutoff-point criteria for severity determination varied across states. No cutoff-point criteria aligned with the severity cutoff points described within the test manuals. Furthermore, tests that included severity information lacked empirical data on how the severity categories were derived (176).”

Thus they urged SLPs to exercise caution in determining the severity of children’s language impairment via norm-referenced test performance “given the inconsistency in guidelines and lack of empirical data within test manuals to support this use (176)”.

Following the publication of this article, Ireland, Hall-Mills & Millikin issued a response to the Spaulding and colleagues article. They pointed out that the “severity of language impairment is only one piece of information considered by a team for the determination of eligibility for special education and related services.” They noted that Spaulding and colleagues left out a host of federal and state guideline requirements and “did not provide an analysis of the regulations governing special education evaluation and criteria for determining eligibility (320).” They pointed out that “IDEA prohibits the use of ‘any single measure or assessment as the sole criterion’ for determination of disability” and requires that IEP teams “draw upon information from a variety of sources.”

They listed a variety of examples from several different state departments of education (FL, NC, VA, etc.), which mandate the use of functional assessments, dynamic assessments, criterion-referenced assessments, etc., for the determination of language therapy eligibility.

But are SLPs from across the country appropriately using the federal and state guidelines in order to determine eligibility? While one should certainly hope so, it does not always seem to be the case. To illustrate, in 2013, Betz and colleagues asked 364 SLPs to complete a survey “regarding how frequently they used specific standardized tests when diagnosing suspected specific language impairment (SLI) (133).”

Their purpose was to determine “whether the quality of standardized tests, as measured by the test’s psychometric properties, is related to how frequently the tests are used in clinical practice” (133).

They found that the most frequently used tests were comprehensive assessments, including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale, as well as one-word vocabulary tests such as the Peabody Picture Vocabulary Test. Furthermore, the date of publication seemed to be the only factor which affected the frequency of test selection.

They also found that SLPs frequently did not follow up comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used vocabulary testing as a second measure. They were understandably puzzled by that finding: “The emphasis placed on vocabulary measures is intriguing because although vocabulary is often a weakness in children with SLI (e.g., Stothard et al., 1998), the research to date does not show vocabulary to be more impaired than other language domains in children with SLI” (140).

According to the authors, “perhaps the most discouraging finding of this study was the lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores (141).”

If SLPs have not significantly changed their practices since then, the above is certainly disheartening, as it implies that rather than being true diagnosticians, SLPs are using whatever is at hand that has been purchased by their department to indiscriminately assess students with suspected speech-language disorders. If that is truly the case, it certainly calls into question Ireland, Hall-Mills & Millikin’s response to Spaulding and colleagues. In other words, though SLPs are aware that they need to comply with state and federal regulations when it comes to unbiased and targeted assessments of children with suspected language disorders, they may not actually be using appropriate standardized testing, much less supplementary informal assessments (e.g., dynamic, narrative, language sampling), in order to administer well-rounded assessments.

So where do we go from here? Well, it’s quite simple really!   We already know what the problem is. Based on the above articles we know that:

  1. Standardized tests possess significant limitations
  2. They are not used with optimal effectiveness by many SLPs
  3.  They may not be frequently supplemented by relevant and targeted informal assessment measures in order to improve the accuracy of disorder determination and subsequent therapy eligibility

Now that we have identified the problem, we need to develop and consistently implement effective practices to ameliorate it. These include researching the psychometric properties of tests (sample size, sensitivity, specificity, etc.), using domain-specific assessments to supplement the administration of comprehensive testing, and supplementing standardized testing with a variety of functional assessments.

SLPs can review testing manuals and consult with colleagues when they feel that the standardized testing is underidentifying students with language impairments (e.g., HERE and HERE).  They can utilize referral checklists (e.g., HERE) in order to pinpoint the students’ most significant difficulties. Finally, they can develop and consistently implement informal assessment practices (e.g., HERE and HERE) during testing in order to gain a better grasp on their students’ TRUE linguistic functioning.

Stay tuned for the second portion of this post entitled: “What Research Shows About the Functional Relevance of Standardized Speech Tests?” to find out the best practices in the assessment of speech sound disorders in children.

References:

  1. Spaulding, Plante & Farinella (2006) Eligibility Criteria for Language Impairment: Is the Low End of Normal Always Appropriate?
  2. Spaulding, Szulga, & Figueria (2012) Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers
  3. Ireland, Hall-Mills & Millikin (2012) Appropriate Implementation of Severity Ratings, Regulations, and State Guidance: A Response to “Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers” by Spaulding, Szulga, & Figueria (2012)
  4. Betz et al. (2013) Factors Influencing the Selection of Standardized Tests for the Diagnosis of Specific Language Impairment

 

Posted on

Test Review: Test of Written Language-4 (TOWL-4)

Today, due to popular demand, I am reviewing the Test of Written Language-4 (TOWL-4). The TOWL-4 assesses the basic writing readiness skills of students 9:00-17:11 years of age. The test consists of two forms, A and B, which contain different subtest content.

According to the manual, the entire test takes approximately 60-90 minutes to administer and examines 7 skill areas. Only the “Story Composition” subtest is officially timed (the student is given 15 minutes to write the story and 5 minutes prior to that to plan it). However, in my experience, each subtest administration, even with students presenting with mild-moderately impaired writing abilities, takes approximately 10 minutes to complete, with average results (can you see where I am going with this yet?)

For detailed information regarding the TOWL-4 development and standardization, validity and reliability, please see HERE.

Below are my impressions (to date) of using this assessment with students between 11-14 years of age with (known) mild-moderate writing impairments.

Subtests:

1. Vocabulary – The student is asked to write a sentence that incorporates a stimulus word. E.g.: for ‘ran’, a student may write, “I ran up the hill.” The student is not allowed to change the word in any way, such as writing ‘running’ instead of ‘ran’. If this occurs, an automatic loss of points takes place. Ceiling is reached when the student makes 3 errors in a row.

To continue, while some of the subtest vocabulary words are perfectly appropriate for younger children (~9), the majority are too simplistic to assess the written vocabulary of middle and high schoolers. For illustration, words included in the ‘Vocabulary’ subtest are:

  1. Form A (#1-20): eat, tree, house, circus, walk, bird, edge, laugh, donate, faithful, aboard, humble, though, confusion, lethal, deny, pulp, verge, revive, intact, etc.
  2. Form B (#1-20): see, help, prize, sky, stove, cry, enormous, chimney, avoid, nonsense, snout, wept, exotic, cycle, deb, specify, debatable, pastel, rugged, studious, etc.

These words may work well to test the knowledge of younger children, but they do not take into account the challenging academic standards set forth for older students. As a result, students 11+ years of age may pass this subtest with flying colors but still present with a fair amount of difficulty using sophisticated vocabulary words in written compositions.

2/3. Spelling and Punctuation (subtests 2 and 3). These two subtests are administered jointly but scored separately. Here, the student is asked to write sentences dictated by the examiner using appropriate rules for spelling, punctuation, and capitalization. The ceiling for each subtest is reached separately, when the student makes 3 errors in a row in that subtest. In other words, if a student uses correct punctuation but incorrect spelling, his/her ceiling on the ‘Spelling’ subtest will be reached sooner than on the ‘Punctuation’ subtest, and vice versa.

Similar to the ‘Vocabulary’ subtest, I feel that the sentences the students are asked to write are far too simplistic to showcase their “true” grade-level abilities. Below are some examples of sentences from both forms:

  1. Form A: (2) Run away.; (3) Birds fly.; (9) Who ate the food? (17) The electricity failed in Dallas, Texas.; (22) Because of the confusion, she sought legal help.
  2. Form B: (3) Am I going?; (18) Bring back three items: milk, crackers, and butter.; (23) After the door was closed, the sound was barely audible. 

As you can see from the above, the requirements of these subtests are also not too stringent. The spelling words are simple, and the punctuation requirements are very basic: a question mark here, an exclamation mark there, with a few commas in between. But I was particularly disappointed with the ‘Spelling’ subtest.

Here’s why. I have a 6th-grade client on my caseload with significant, well-documented spelling difficulties. When this subtest was administered to him, he scored within the average range (Scaled Score of 8 and Percentile Rank of 25). However, an administration of the Spelling Performance Evaluation for Language and Literacy (SPELL-2) yielded 3 assessment pages of spelling errors, as well as 7 pages of recommendations on how to remediate those errors. Had he received this assessment as part of an independent evaluation from a different examiner, nothing more would have been done regarding his spelling difficulties, since the TOWL-4 revealed an average spelling performance due to its focus on overly simplistic vocabulary.

4. Logical Sentences – The student is asked to edit an illogical sentence so that it makes better sense. E.g.: “John blinked his nose” is changed to “John blinked his eye.” Ceiling is reached when the student makes 3 errors in a row. Again, I’m not too thrilled with this subtest. Rather than truly attempting to ascertain the student’s grammatical and syntactic knowledge at the sentence level, a large portion of this subtest deals with easily recognizable semantic incongruities such as the one above.

5. Sentence Combining – The student integrates the meaning of several short sentences into one grammatically correct written sentence. E.g.: “John drives fast” is combined with “John has a red car,” making “John drives his red car fast.” Ceiling is reached when the student makes 3 errors in a row. The first few items contain only two sentences, which can be combined by adding the conjunction “and”.

The remaining items are a bit more difficult due to (a) the addition of more sentences and (b) the increase in the complexity of language needed to efficiently combine them. This is a nice subtest to administer to students who present with difficulty effectively and efficiently expressing their written thoughts on paper. It is particularly useful with students who write down a lot of extraneous information in their compositions/essays and frequently overuse run-on sentences.

6. Contextual Conventions – The student is asked to write a story in response to a stimulus picture. S/he earns points for satisfying specific requirements (identified below) relative to combined orthographic (E.g.: punctuation, spelling) and grammatical conventions (E.g.: sentence construction, noun-verb agreement). The student’s written composition needs to contain more than 40 words in order for effective analysis to take place.

The scoring criteria range from no credit, a score of 0 (3 or more mistakes), to partial credit, a score of 1 (1-2 mistakes), to full credit, a score of 3 (no mistakes).
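For clarity, here is that scoring rule expressed as a minimal sketch (the function name is mine; the thresholds are simply the ones stated above):

```python
def contextual_conventions_score(mistakes: int) -> int:
    """Map a mistake count to the 0/1/3 credit levels described above."""
    if mistakes >= 3:
        return 0   # no credit
    if mistakes >= 1:
        return 1   # partial credit
    return 3       # full credit

# e.g., a composition with 2 noun-verb agreement mistakes on that parameter:
print(contextual_conventions_score(2))  # 1
```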

Scoring Parameters:

  1. Sentences begin with a capital letter
  2. Paragraphs
  3. Use of quotations marks
  4. Use of comma to set off direct quotes
  5. Correct use of apostrophe
  6. Use of a question mark
  7. Use of exclamation point
  8. Capitalization of proper nouns (including story title)
  9. Number of non-duplicated misspelled words
  10. Other use of punctuation (hyphen, parentheses, etc.)
  11. Use of fragments
  12. Use of run-on/rambling sentences
  13. Use of compound sentences
  14. Use of specific coordinating conjunction
  15. Use of introductory phrases/clauses
  16. Noun-verb disagreement
  17. Sentences in paragraphs
  18. Sentence composition
  19. Number of correctly spelled words with 7 or more letters
  20. Number of correctly spelled words with 3 syllables or more
  21. Appropriate use of articles

While the above criteria are highly useful for younger elementary-aged students who may exhibit significant difficulties in the domain of writing, older middle-school and high-school-aged students, as well as elementary-aged students with moderate writing difficulties, may attain average scores on this subtest but still present with significant difficulties in this area as compared to typically developing grade-level peers. As a result, in addition to this assessment, it is recommended that a functional assessment of grade-level writing also be performed in order to accurately identify the student’s writing needs.

7. Story Composition – The student’s story is evaluated relative to the quality of its composition (E.g.: vocabulary, plot, prose, development of characters, and interest to the reader).

The examiner first provides the student with an example of a good story by reading one written by another student. Then, the examiner provides the student with an appropriate picture card and tells them that they need to take time to plan their story and make an outline on the (also provided) scratch paper. The student has 5 minutes to plan before writing the actual story. After the 5 minutes elapse, they have 15 minutes to write the story. It is important to note that Story Composition is the very first subtest administered to the student. Once they complete it, they are ready to move on to the Vocabulary subtest.

Scoring Parameters:

  1. Story beginning
  2. Reference to a specific event (occurring before or after the picture)
  3. Story sequence
  4. Plot
  5. Characters show emotions
  6. Story action
  7. Story ending
  8. Writing style
  9. Story (overall)
  10. Specific (listed) story vocabulary
  11. Overall vocabulary

With respect to this subtest, I found it significantly more useful with younger students and significantly impaired students than with older students or students with mild-moderate writing difficulties. Again, if your aim is to get an accurate picture of an older student’s writing abilities, I definitely recommend the use of informal writing assessment rubrics based on the student’s grade level.

OVERALL IMPRESSIONS:

Strengths:

  • Thorough assessment of basic writing areas
  • Flexible subtest administration (can be done on multiple occasions with students who fatigue easily)

Limitations:

  • Untimed test administration (with the exception of the Story Composition subtest) may not be very functional with students who present with significant processing difficulties. One 12-year-old student actually took ~40 minutes to complete each subtest
  • Primarily useful for students with severe deficits in the area of written expression
  • Lack of computer scoring
  • Lack of remediation suggestions based on subtest deficits

Overall, I do find the TOWL-4 a very useful testing measure to have in my toolbox, as it is terrific for ruling out weaknesses in the student’s basic writing abilities with respect to simple vocabulary, sentence construction, writing mechanics, punctuation, etc. If I identify previously undetected gaps in basic writing skills, I can then readily intervene where needed.

However, it is important to understand that the TOWL-4 is only a starting point for most of our students with complex literacy needs whose writing abilities are above the severe level of functioning. Most students with mild-moderate writing difficulties will pass this test with flying colors but still present with significant writing needs. As a result, I highly recommend a functional grade-level writing assessment as a supplement to the above standardized testing.

References: 

Hammill, D. D., & Larsen, S. C. (2009). Test of Written Language—Fourth Edition (TOWL-4). Austin, TX: PRO-ED.

Disclaimer: The views expressed in this post are the personal impressions of the author. This author is not affiliated with PRO-ED in any way and was NOT provided by them with any complimentary products or compensation for the review of this product. 

 

Posted on

Review of Social Language Development Test Adolescent: What SLPs Need to Know

A few weeks ago I reviewed the Social Language Development Test Elementary (SLDTE), and today I am reviewing the Social Language Development Test Adolescent (SLDTA), currently available from PRO-ED.

Basic overview

Release date: 2010
Age Range: 12-18
Authors: Linda Bowers, Rosemary Huisingh, Carolyn LoGiudice
Publisher: Linguisystems (PRO-ED as of 2014)

The Social Language Development Test Adolescent (SLDT-A) assesses adolescent students’ social language competence. The test addresses the student’s ability to take on someone else’s perspective, make correct inferences, interpret social language, state and justify logical solutions to social problems, engage in appropriate social interactions, as well as interpret ironic statements.

The Making Inferences subtest of the SLDT-A assesses students’ ability to infer what someone in the picture is thinking as well as state what visual cues aided him/her in the making of that inference.

The first question asks the student to pretend to be a person in the photo and then to tell what the person is thinking by responding with a direct quote. The quote must be relevant to the person’s situation and the emotional expression portrayed in the photo. The second question asks the student to identify the relevant visual clues that he used to make the inference.

Targeted Skills include:

  1. detection of nonverbal and context clues
  2. assuming the perspective of a specific person
  3. inferring what the person is thinking and expressing the person’s thought
  4. stating the visual cues that aided with response production

A score of 1 or 0 is assigned to each response, based on relevancy and quality. However, in contrast to the SLDTE, the student must give a correct response to both questions to achieve a score of 1.

Errors can result due to limited use of direct quotes (needed for correct responses to indicate empathy/attention to task), poor interpretation of provided visual clues (attended to irrelevant visuals) as well as vague, imprecise, and associated responses.

The Interpreting Social Language subtest of the SLDT-A assesses students’ ability to demonstrate actions (including gestures and postures), tell a reason or use for an action, think and talk about language, and interpret figurative language, including idioms.

A score of 1 or 0 is assigned to each response, based on relevancy and quality. The student must give a correct response to both questions to achieve a score of 1.

Targeted Skills:

  1. Ability to demonstrate actions such as gestures and postures
  2. Ability to explain appropriate reasons or use for actions
  3. Ability to think and talk about language
  4.  Ability to interpret figurative language (e.g., idioms)

Errors can result due to vague, imprecise (off-target), or associated responses, as well as lack of responses. Errors can also result due to lack of knowledge of the correct nonverbal gestures to convey the meaning of messages. Finally, errors can result due to literal interpretations of idiomatic expressions.

The Problem Solving subtest of the SLDT-A assesses students’ ability to offer a logical solution to a problem and explain why that would be a good way to solve the problem.

To receive a score of 1, the student has to provide an appropriate solution with relevant justification. A score of 0 is given if any of the responses to either question were incorrect or inappropriate.

Targeted Skills:

  1. Taking perspectives of other people in various social situations
  2. Attending to and correctly interpreting social cues
  3. Quickly and efficiently determining best outcomes
  4. Coming up with effective solutions to social problems
  5. Effective conflict negotiation

Errors can result due to illogical or irrelevant responses, restatement of the problem, rude solutions, or poor solution justifications.

The Social Interaction subtest of the SLDT-A assesses students’ ability to socially interact with others.

A score of 1 is given for an appropriate response that supports the situation. A score of 0 is given for negative, unsupportive, or passive responses as well as for ignoring the situation, or doing nothing.

Targeted Skills:

  1. Provision of appropriate, supportive responses
  2. Knowing when to ignore the situation

Errors can result due to inappropriate responses that were negative, unsupportive or illogical.

The Interpreting Ironic Statements subtest of the SLDT-A assesses students’ ability to recognize sarcasm and interpret ironic statements.

To get a score of 1, the student must give a response that shows s/he understands that the speaker is being sarcastic and is saying the opposite of what s/he means.  A score of 0 is given if the response is literal and ignores the irony of the situation.

Errors can result due to consistent provision of literal idiom meanings, indicating a lack of understanding of the speaker’s intentions as well as “missing” the context of the situation. Errors can also result due to the student identifying that the speaker is being sarcastic but being unable to explain the reason behind the speaker’s sarcasm (elaboration).

For example, one student was presented with a story of a brother and a sister who extensively labored over a complicated recipe. When their mother asked them how it came out, the sister responded to their mother’s query: “Oh, it was a piece of cake.” The student was then asked: “What did she mean?” Instead of responding that the girl was being sarcastic because the recipe was very difficult, the student responded: “easy.” When presented with a story of a boy who refused to help his sister fold laundry under the pretext that he was “digesting his food,” and was then told by her, “Yeah, I can see you have your hands full,” the student was asked: “What did she mean?” The student provided a literal response and stated: “he was busy.”


The following goals can be generated based on the performance on this test:

  • Long Term Goals: Student will improve social pragmatic language skills in order to effectively communicate with a variety of listeners/speakers in all social and academic contexts
  • Short Term Goals
  1. Student will improve his/her ability to  make inferences based on social scenarios
  2. Student will improve his/her interpretation of facial expressions, body language, and gestures
  3. Student will improve his/her ability to interpret social language (demonstrate appropriate gestures and postures, use appropriate reasons for actions, interpret figurative language)
  4. Student will improve his/her ability to provide multiple interpretations of presented social situations
  5. Student will improve his/her ability to engage in appropriate social interactions with peers and staff (provide appropriate supportive responses; ignore situations when doing nothing is the best option, etc.)
  6. Student will improve his/her ability to  interpret abstract language (e.g., understand common idioms, understand speaker’s beliefs, judge speaker’s attitude, recognize sarcasm, interpret irony, etc)

Caution

A word of caution regarding testing eligibility: 

I would also not administer this test to the following adolescent populations:

  • Students with social pragmatic impairments secondary to intellectual disabilities (IQ <70)
  • Students with severe forms of Autism Spectrum Disorders
  • Students with severe language impairment and limited vocabulary inventories
  • English Language Learners (ELL) with suspected social pragmatic deficits 
  • Students from low SES backgrounds with suspected pragmatic deficits 

I would not administer this test to Culturally and Linguistically Diverse (CLD) students due to significantly increased potential for linguistic and cultural bias, which may result in test answers being marked incorrect due to the following:

  • Lack of relevant vocabulary knowledge, which will affect performance
  • Lack of exposure to certain cultural and social experiences related to low SES status or lack of formal school instruction
    • How many such students would know the meaning of the word “sneer”?
    • How many can actually show it?
  • Life experiences that the child simply hasn’t encountered yet
    • An entire subtest is devoted to idioms
  • Select topics may be inappropriate for younger children
    • Dieting
    • Dating
  • Cultural bias when it comes to certain questions regarding friendship and personal values
    • Individual vs. cooperative culture differences

What I like about this test: 

  • I like the fact that, unlike the CELF-5:M, the test is composed of open-ended questions instead of an orally/visually based multiple-choice format, as this is far more authentic in its representation of real-world experiences
  • I really like how the select subtests (Making Inferences) require a response to both questions in order for the responder to achieve credit on the total subtest

Overall, when you carefully review what’s available in the area of assessment of social pragmatic abilities of adolescents, this is an important test to have in your assessment toolkit, as it provides very useful information for social pragmatic language treatment goal purposes.

Have YOU purchased the SLDTA yet? If so, how do you like using it? Post your comments, impressions, and questions below.

Helpful Resources Related to Social Pragmatic Language Overview, Assessment  and Remediation:

Disclaimer: The views expressed in this post are the personal opinion of the author. The author is not affiliated with PRO-ED or Linguisystems in any way and was not provided by them with any complimentary products or compensation for the review of this product. 

Posted on

Review of Social Language Development Test Elementary: What SLPs Need to Know

As the awareness of social pragmatic language disorders continues to grow, more and more speech-language pathologists are asking questions regarding various sources of social pragmatic language testing. Today I am reviewing one such test: the Social Language Development Test Elementary (SLDTE), currently available from PRO-ED.

Basic overview

Release date: 2008
Age Range: 6:00-11:11
Authors: Linda Bowers, Rosemary Huisingh, Carolyn LoGiudice
Publisher: Linguisystems (PRO-ED as of 2014)

This test assesses the students’ social language competence and addresses their ability to take on someone else’s perspective, make correct inferences, negotiate conflicts with peers, be flexible in interpreting situations, and support friends diplomatically.

The test is composed of 4 subtests, of which the first two subtests are subdivided into 2 and 3 tasks respectively.

The Making Inferences subtest (composed of 2 tasks) of the SLDT-E is administered to assess the student’s ability to infer what someone in the picture is thinking (task a) as well as state the visual cues that aided the student in making that inference (task b).

On task /a/, errors can result due to the student’s difficulty correctly assuming a first-person perspective (e.g., “Pretend you are this person. What are you thinking?”) and inferring (guessing) what someone in the picture is thinking. Errors can also result due to vague, associated, and unrelated responses which do not take into account the person’s context (surroundings) as well as the emotions expressed by their body language.

On task /b/, errors can result due to the student’s inability to coherently verbalize his/her responses, which may result in vague, associated, or unrelated answers to the presented questions, ones which do not take into account facial expressions and body language but instead focus on people’s feelings or on the items located in the vicinity of the person in the picture.


The Interpersonal Negotiation subtest (composed of 3 tasks) of the SLDT-E is administered to assess the student’s ability to resolve personal conflicts in the absence of visual stimuli. The student is asked to state the problem (task a) from a first-person perspective (e.g., pretend the problem is happening with you and a friend), propose an appropriate solution (task b), as well as explain why the proposed solution is a good one (task c).

On task /a/, errors can result due to the student’s difficulty recognizing that a problem exists in the presented scenarios. Errors can also result due to the student’s difficulty stating a problem from a first-person perspective, as a result of which they may initiate their responses with reference to other people vs. self (e.g., “They can’t watch both shows”; “The other one doesn’t want to walk”, etc.). Errors can also result due to the student’s attempt to provide a solution to the presented problem without acknowledging that a problem exists. Here’s an example of how one student responded on this subtest. When presented with: “You and your friend found a stray kitten in the woods. You each want to keep the kitten as a pet. What is the problem?”, A responded: “They can’t keep it.” When presented with: “You and your friend are at an afterschool center. You both want to play a computer game that is played by one person, but there’s only one computer. What is the problem?”, A responded: “You have to play something else.”

On task /b/, errors can result due to the provision of inappropriate, irrelevant, or ineffective solutions which do not arrive at a mutual decision based on dialogue.

On task /c/, errors can result due to vague or inappropriate explanations as to why the proposed solution is a good one.

The Multiple Interpretations subtest assesses the student’s flexible thinking ability via the provision of two unrelated but plausible interpretations of what is happening in a photo. Here errors can result due to an inability to provide two different ideas regarding what is happening in the pictures. As a result the student may provide vague, irrelevant, or odd interpretations, which do not truly reflect the depictions in the photos. 

The Supporting Peers subtest assesses the student’s ability to take the perspective of a person involved in a situation with a friend and state a supportive reaction to the friend’s situation (to provide a “white lie” rather than hurt the person’s feelings). Errors on this subtest may result due to the student’s difficulty appropriately complimenting, criticizing, or talking with peers. Thus, students who as a rule tend to be excessively blunt, tactless, or ‘thoughtless’ regarding the effect their words may have on others will do poorly on this subtest. However, there could be situations when a high score on this subtest may also be a cause for concern (see the details on why that is HERE). That is because simply repeating the phrase “I like your ____” over and over again, without putting much thought into the response, will earn the responder an average subtest score according to the SLDT-E scoring guidelines. However, such performance will not be reflective of true subtest competence and needs to be interpreted with significant caution.


The following goals can be generated based on the performance on this test:

Long Term Goals: Student will improve social pragmatic language competence in order to effectively communicate with a variety of listeners/speakers in all conversational and academic contexts

Short Term Goals

  • Student will improve ability to  make inferences based on social scenarios
  • Student will improve ability to interpret facial expressions, body language, and gestures
  • Student will improve ability to recognize conflicts from a variety of perspectives (e.g., first person, mutual, etc.)
  • Student will improve ability to  resolve personal conflicts using effective solutions relevant to presented scenarios
  • Student will improve ability  to effectively  justify solutions to presented situational conflicts
  • Student will improve ability to provide multiple interpretations of presented social situations
  • Student will provide effective responses to appropriately support peers in social situations
  • Student will improve ability to engage in perspective taking (e.g., the ability to infer mental states of others and interpret their knowledge, intentions, beliefs, desires, etc.)

Caution

A word of caution regarding testing eligibility: 

I would also not administer this test to the following populations:

  • Students with social pragmatic impairments secondary to intellectual disabilities (IQ <70)
  • Students with severe forms of Autism Spectrum Disorders
  • Students with severe language impairment and limited vocabulary inventories
  • English Language Learners (ELL) with suspected social pragmatic deficits 
  • Students from low SES backgrounds with suspected pragmatic deficits 

I would not administer this test to Culturally and Linguistically Diverse (CLD) students due to significantly increased potential for linguistic and cultural bias, which may result in test answers being marked incorrect due to the following:

  • Lack of relevant vocabulary knowledge
  • Lack of exposure to certain cultural and social experiences related to low SES status or lack of formal school instruction
  • Life experiences that the child simply hasn’t encountered yet
    • For example, the format of the Multiple Interpretations subtest may be confusing to students unfamiliar with being “tested” in this manner (asked to provide two completely different reasons for what is happening in a particular photo)

What I like about this test: 

  • I like the fact that the test begins at 6 years of age, so unlike some other related tests such as the CELF-5:M, which begins at 9 years of age or the informal  Social Thinking Dynamic Assessment Protocol® which can be used when the child is approximately 8 years of age, you can detect social pragmatic language deficits much earlier and initiate early intervention in order to optimize social language gains.
  • I like the fact that the test asks open-ended questions instead of offering orally/visually based multiple choice format as it is far more authentic in its representation of real-world experiences
  • I really like how the select subtests are further subdivided into tasks in order to better determine the students’ error breakdown

Overall, when you carefully review what’s available in the area of assessment of social pragmatic abilities, this is an important test to have in your assessment toolkit, as it provides very useful information for social pragmatic language treatment goal purposes.

Have YOU purchased the SLDTE yet? If so, how do you like using it? Post your comments, impressions, and questions below.

NEW: Need an SLDTE Template Report? Find it HERE

Helpful Resources Related to Social Pragmatic Language Overview, Assessment  and Remediation:

 Disclaimer: The views expressed in this post are the personal opinion of the author. The author is not affiliated with PRO-ED or Linguisystems in any way and was not provided by them with any complimentary products or compensation for the review of this product.