Phonological Awareness Screening App Review: ProPA

Summer is in full swing and for many SLPs that means a welcome break from work. However, for me it's business as usual, since my program runs year-round, and we have just started our extended school year (ESY) program.

Of course, even my program is a bit lighter on activities during the summer. There are lots of field trips and plenty of creative and imaginative play, with less focus on academics than during the school year. However, I'm also highly cognizant of summer learning loss: the phenomenon characterized by the loss of academic skills and knowledge over the course of the summer holidays.


According to Cooper et al. (1996), while typical students generally lose about one month of learning, there is actually a significant degree of variability in loss based on socioeconomic status (SES). According to Cooper's study, low-income students lose approximately two months of achievement. Furthermore, ethnic minorities, twice-exceptional (2xE) students, as well as students with language disorders tend to be disproportionately affected (Graham et al., 2011; Kim & Guryan, 2010; Kim, 2004). Finally, it is important to note that according to research, summer loss is particularly prominent in the area of literacy (Graham et al., 2011).

So this summer I have been busy screening the phonological awareness (PA) abilities of an influx of new students (our program enrolls quite a few students during the ESY), as well as rescreening the PA abilities of students already on my caseload who have been receiving services in this area for the past few months.

Why do I intensively focus on phonological awareness? Because PA is a precursor to emergent reading. It helps children to manipulate sounds in words (see Age of Acquisition of PA Skills). Children need to attain PA mastery (along with a host of other literacy-related skills) in order to become good readers.

When children exhibit poor PA skills for their age, it is a red flag for reading disabilities. Thus it is very important to assess the child's PA abilities in order to determine their proficiency in this area.

While there are a number of comprehensive tests available in this area, for the purposes of my screening I prefer to use the ProPA app by Smarty Ears.


The Profile of Phonological Awareness (Pro-PA) is an informal phonological awareness screening tool. According to the developers, on average it takes approximately 10 to 20 minutes to administer, depending on the child's age and skill level. In my particular setting (an outpatient school program based in a psychiatric hospital), it takes approximately 30 minutes to administer to students on an individual basis. It is by no means a comprehensive tool such as the CTOPP-2 or the PAT-2, as there are not enough trials, complexity, or PA categories to qualify as a full-blown informal assessment. However, it is a highly useful measure for quickly determining a student's strengths and weaknesses with respect to phonological awareness. Given its current retail price of $29.99 on iTunes, it is a relatively affordable phonological awareness screening option; the app allows its users to store data and generates a two-page report at the completion of the screening.

The Pro-PA assesses six different skill areas:

  • Rhyming
    • Identification
    • Production
  • Blending
    • Syllables
    • Sounds
  • Sound Isolation
    • Initial
    • Final
    • Medial
  • Segmentation
    • Words in sentences
    • Syllables in words
    • Sounds in words
    • Words with consonant clusters
  • Deletion
    • Syllables
    • Sounds
    • Words with consonant clusters
  • Substitution
    • Sounds in initial position of words
    • Sounds in final position of words

After the completion of the screening, the app generates a two-page report which describes the student's abilities as:

  • Achieved (80%+ accuracy)
  • Emerging (~50-79% accuracy)
  • Not achieved (0-50% accuracy)
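
For clinicians who like to script their own progress tracking, these bands are easy to reproduce. Below is a minimal sketch in Python (my own illustration, not the app's actual code; the handling of scores falling exactly on the 50% and 80% boundaries is an assumption based on the ranges above):

```python
# A minimal sketch of the Pro-PA accuracy bands described above.
# My own illustration, not Smarty Ears' code; the treatment of scores
# landing exactly on 50% or 80% is assumed from the published ranges.

def propa_band(percent_correct: float) -> str:
    if percent_correct >= 80:
        return "Achieved"
    elif percent_correct >= 50:
        return "Emerging"
    return "Not achieved"

print(propa_band(66))  # -> "Emerging"
```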

The above is perfect for quickly tracking progress or for generating goals to target a student's phonological awareness weaknesses. While the report can certainly be provided as an attachment to parents and teachers, I usually summarize its findings in my own reports for the purpose of brevity. Below is one example of what that looks like:

The Profile of Phonological Awareness (Pro-PA), an informal phonological awareness screening, was administered to "Justine" in May 2017 to further determine the extent of her phonological awareness strengths and weaknesses.

On the Pro-PA, “Justine” evidenced strengths (80-100% accuracy) in the areas of rhyme identification, initial and final sound isolation in words, syllable segmentation, as well as substitution of sounds in initial position in words.

She also evidenced emerging abilities (~60-66% accuracy) in the areas of syllable and sound blending in words, as well as sound segmentation in CVC words.

However, the Pro-PA also revealed weaknesses (inability to perform) in the areas of rhyme production, isolation of medial sounds in words, segmentation of words in sentences, segmentation of sounds in words with consonant blends, deletion of first sounds and of consonant clusters, as well as substitution of sounds in final position in words. Continuation of therapeutic intervention is recommended in order to improve "Justine's" abilities in these phonological awareness areas.

Now that you know how I quickly screen and rescreen my students' phonological awareness abilities, I'd love to hear from you! What screening instruments are you using (free or paid) to assess your students' phonological awareness abilities? Do you feel that they are more or less comprehensive/convenient than the Pro-PA?


Treatment of Children with “APD”: What SLPs Need to Know

In recent years, there has been an increase in research on the subject of diagnosis and treatment of Auditory Processing Disorders (APD), formerly known as Central Auditory Processing Disorders, or CAPD.

More and more studies in the fields of audiology and speech-language pathology have been confirming the lack of validity of APD as a standalone (or useful) diagnosis. To illustrate, in June 2015, the American Journal of Audiology published an article by David DeBonis entitled "It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children." In this article, DeBonis pointed out numerous inconsistencies involved in APD testing and concluded that "routine use of APD test protocols cannot be supported" and that [APD] "intervention needs to be contextualized and functional" (DeBonis, 2015, p. 124).

Furthermore, in April 2017, an article entitled "AAA (2010) CAPD clinical practice guidelines: need for an update" (also written by DeBonis) concluded that the "AAA CAPD guidance document will need to be updated and re-conceptualised in order to provide meaningful guidance for clinicians" due to the fact that the "AAA document … does not reflect the current literature, fails to help clinicians understand for whom auditory processing testing and intervention would be most useful, includes contradictory suggestions which reduce clarity and appears to avoid conclusions that might cast the CAPD construct in a negative light. It also does not include input from diverse affected groups. All of these reduce the document's credibility."

In April 2016, de Wit and colleagues published a systematic review in the Journal of Speech, Language, and Hearing Research. They reviewed research studies which described the characteristics of APD in children to determine whether these characteristics merited the label of a distinct clinical disorder vs. being representative of other disorders. After a search of 6 databases, they chose 48 studies which satisfied appropriate inclusion criteria. Unfortunately, they unearthed only one study with strong methodological quality. Even more disappointing was that the children in these studies presented with incredibly diverse symptomology. The authors concluded that "The listening difficulties of children with APD may be a consequence of cognitive, language, and attention issues rather than bottom-up auditory processing" (de Wit et al., 2016, p. 384). In other words, none of the reviewed studies conclusively proved that APD is a distinct clinical disorder. Instead, these studies showed that the children diagnosed with APD exhibited language-based deficits. Put simply, the diagnosis of APD did not reveal any new information regarding the child beyond the fact that s/he is in great need of a comprehensive language assessment in order to determine which language-based interventions s/he would optimally benefit from.

Now, it is important to reiterate that students diagnosed with “APD” present with legitimate symptomology (e.g., difficulty processing language, difficulty organizing narratives, difficulty decoding text, etc.). However, all the research to date indicates that these symptoms are indicative of broader language-based deficits, which require targeted language/literacy-based interventions rather than recommendations for specific prescriptive programs (e.g., CAPDOTS, Fast ForWord, etc.) or mere in-school accommodations.

Unfortunately, on numerous occasions when students do receive the diagnosis of APD, the testing does not "dig further," which leads to many of them not receiving appropriate comprehensive language-literacy assessments. Furthermore, APD then becomes the "primary" diagnosis for the student, which places SLPs in situations in which they must address inappropriate therapeutic targets based on an audiologist's recommendations. Even worse, in many of these situations, the diagnosis of APD limits the provision of appropriate language-based services to the student.

Since the APD controversy has been going on for years with no end in sight despite the mounting evidence pointing to the lack of its validity, we know that SLPs will continue to have students on their caseloads diagnosed with APD. Thus, the aim of today’s post is to offer some constructive suggestions for SLPs who are asked to assess and treat students with “confirmed” or suspected APD.

The first suggestion comes directly from Dr. Alan Kamhi, who states: "Do not assume that a child who has been diagnosed with APD needs to be treated any differently than children who have been diagnosed with language and learning disabilities" (Kamhi, 2011, p. 270). In other words, if one carefully analyzes the child's so-called processing issues, one will quickly realize that those issues are not related to the processing of auditory input (the auditory domain), since the child's difficulty is not with processing tones, hoots, or clicks, but rather with processing speech and language (the language domain).

If a student with confirmed or suspected APD is referred to an SLP, it is important to begin with formal and informal assessments of language and literacy knowledge and skills (details HERE). SLPs need to "consider non-auditory reasons for listening and comprehension difficulties, such as limitations in working memory, language knowledge, conceptual abilities, attention, and motivation" (Kamhi & Wallach, 2012).

After performing a comprehensive assessment, SLPs need to formulate language goals based on determined areas of weakness. Please note that a systematic review by Fey and colleagues (2011) found no compelling evidence that auditory interventions provided any unique benefit to auditory, language, or academic outcomes for children with diagnoses of (C)APD or language disorder. As such, it's important to avoid formulating goals focused on targeting isolated processing abilities like auditory discrimination, auditory sequencing, recognizing speech in noise, etc., because these processing abilities have not been shown to improve language and literacy skills (Fey et al., 2011; Kamhi, 2011).

Instead, SLPs need to focus on the language underpinnings of the above skills and turn them into language and literacy goals. For example, if the child has difficulty recognizing speech in noise, improve the child's knowledge of and access to specific vocabulary words. This will help the child detect the word when the auditory information is degraded. Child presents with phonemic awareness deficits? Figure out where in the hierarchy of phonemic awareness their strengths and weaknesses lie and formulate goals based on the remaining areas in need of mastery. Received a description of the child's deficits from the audiologist in an accompanying report? Turn them into language goals as well! Turn "prosodic deficits" or difficulty understanding the intent of verbal messages into "listening for details and main ideas in stories" goals. In other words, figure out the language correlate of the 'auditory processing' deficit and target that instead.

It is easy to understand the appeal of using dubious practices which promise a quick fix for our students' "APD deficits" instead of labor-intensive language therapy sessions. But one must keep something else in mind: acquiring higher-order language abilities takes a significant period of time, especially for those students whose skills and abilities are significantly below those of age-matched peers.

APD Summary 

  1. There is still no compelling evidence that APD is a stand-alone diagnosis with clear diagnostic criteria.
  2. There is still no compelling evidence that auditory deficits are a "significant risk factor for language or academic performance."
  3. There is still no compelling evidence that “auditory interventions provide any unique benefit to auditory, language, or academic outcomes” (Hazan, Messaoud-Galusi, Rosan, Nouwens, & Shakespeare, 2009; Watson & Kidd, 2009).
  4. APD deficits are language-based deficits which accompany a host of developmental conditions ranging from developmental language disorders to learning disabilities, etc.
  5. SLPs should perform comprehensive language and literacy assessments of children diagnosed with APD.
  6. SLPs should target language and literacy goals.
  7. SLPs should be wary of any goals or recommendations which focus on remediation of isolated skills such as "auditory discrimination, auditory sequencing, phonological memory, working memory, or rapid serial naming," since studies have definitively confirmed their lack of effectiveness (Fey et al., 2011).
  8. SLPs should be wary of any prescriptive programs offering APD "interventions" and should instead focus on improving children's abilities for functional communication, including listening, speaking, reading, and writing (see Wallach, 2014: Improving Clinical Practice: A School-Age and School-Based Perspective). This article "presents a conceptual framework for intervention at school-age levels" and discusses "advanced levels of language that move beyond preschool and early elementary grade goals and objectives with a focus on comprehension and meta-abilities."

There you have it! Students diagnosed with APD are best served by targeting the language and literacy problems that are affecting their performance in school.


Review and Giveaway: Test of Semantic Reasoning (TOSR)

Today I am reviewing a new receptive vocabulary measure for students 7-17 years of age entitled the Test of Semantic Reasoning (TOSR), created by Beth Lawrence, MA, CCC-SLP and Deena Seifert, MS, CCC-SLP, and available via Academic Therapy Publications.

The TOSR assesses the student’s semantic reasoning skills or the ability to nonverbally identify vocabulary via image analysis and retrieve it from one’s lexicon.

According to the authors, the TOSR assesses “breadth (the number of lexical entries one has) and depth (the extent of semantic representation for each known word) of vocabulary knowledge without taxing expressive language skills”.

The test was normed on 1,117 students ranging from 7 through 17 years of age, with the norming sample including such diagnoses as learning disabilities, language impairments, ADHD, and autism. This fact is important because the manual did indicate how the above students were identified. According to Peña, Spaulding, and Plante (2006), the inclusion of children with disabilities in the normative sample can negatively affect a test's discriminant accuracy (its ability to separate typically developing from disordered children) by lowering the mean score, which may limit the test's ability to diagnose children with mild disabilities.

TOSR administration takes approximately 20 minutes, although it can run a little longer or shorter depending on the child's level of knowledge. It is relatively straightforward: you start at the age-based starting point and then establish a basal and a ceiling. For the basal rule, if the child misses any of the first 3 items, the examiner must go backward until the child attains 3 correct responses in a row. For the ceiling, test administration can be discontinued after the student responds incorrectly to 6 out of 8 consecutive items (this logic is sketched below).
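
To make those rules concrete, here is a small sketch of the basal/ceiling logic as I read it (my own illustration, not ATP's scoring algorithm; treating the ceiling as a window over the last 8 consecutive responses is an assumption on my part):

```python
# Sketch of the TOSR basal/ceiling rules as I read them; my own
# illustration, not ATP's scoring algorithm.

def needs_backward_testing(first_three: list[bool]) -> bool:
    """Basal: if any of the first 3 items from the age-based start
    point is missed, test backward until 3 consecutive correct."""
    return not all(first_three)

def ceiling_reached(responses: list[bool]) -> bool:
    """Ceiling: discontinue once 6 of the last 8 consecutive
    responses are incorrect (assumed rolling window)."""
    window = responses[-8:]
    return len(window) == 8 and window.count(False) >= 6
```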

Test administration is as follows: students are presented with 4 images and told the 4 words which accompany them. The examiner asks: "Which word goes with all four pictures? The words are…"

Students then must select the single word, from a choice of four, that best represents the multiple contexts depicted across all the images.

According to the authors, this assessment can provide "information on children and adolescents' basic receptive vocabulary knowledge, as well as their higher order thinking and reasoning in the semantic domain."

My impressions:

During the time I had this test, I administered it to 6 students on my caseload with documented histories of language disorders and learning disabilities. Interestingly, all but one passed it with flying colors. Four out of 6 received standard scores solidly in the average range of functioning, including a student recently added to my caseload with significant word-finding deficits. Another student, with a moderate intellectual disability, scored in the low average range (18th percentile). Finally, my last student scored very poorly (1st percentile); however, in addition to being a multicultural speaker, he also had a significant language disorder. He was actually tested for purposes of comparison with the others, to see, if you will, what it takes not to pass the test.
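
As an aside, the percentile figures above can be sanity-checked against the standard-score scale most norm-referenced language tests use (mean 100, SD 15; my assumption here for illustration, not a claim about the TOSR manual):

```python
from statistics import NormalDist

# Assuming the common mean-100, SD-15 standard-score scale.
scale = NormalDist(mu=100, sigma=15)

print(round(scale.cdf(86) * 100))  # -> 18 (low average, ~18th percentile)
print(round(scale.cdf(67) * 100))  # -> 1  (significantly below average)
```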

I was surprised to see several children with documented vocabulary knowledge deficits pass this test. Furthermore, when I informally extended the test and asked them to define select vocabulary words or use them in sentences, very few of the children could actually accomplish these tasks successfully. As such, it is important for clinicians to be aware of this finding, since receptive recognition given multiple response choices does not constitute spontaneous word retrieval.

Consequently, I caution SLPs against using the TOSR as an isolated vocabulary measure to qualify/disqualify children for services, and encourage them to add an informal expressive administration of this measure (in words and sentences) to gather further information regarding their students' expressive knowledge base.

I also urge caution when administering this test to Culturally and Linguistically Diverse (CLD) students (those being tested for the first time, as opposed to retesting CLD students with confirmed language disorders) due to the increased potential for linguistic and cultural bias, which may result in test answers being marked incorrect due to a lack of relevant receptive vocabulary knowledge (in the absence of an actual disorder).

Final Thoughts:

I think that SLPs can effectively use this test as a replacement for the Receptive One-Word Picture Vocabulary Test-4 (ROWPVT-4), as it provides more information regarding the student's reasoning and receptive vocabulary abilities. I also think this test may be helpful with children with word-finding deficits in order to tease out a lack of knowledge vs. a retrieval issue.

You can find this assessment for purchase on the ATP website HERE. Finally, due to the generosity of one of its creators, Deena Seifert, MS, CCC-SLP, you can enter my Rafflecopter giveaway below for a chance to win your own copy!

Disclaimer:  I did receive a complimentary copy of this assessment for review from the publisher. Furthermore, the test creators will be mailing a copy of the test to one Rafflecopter winner. However, all the opinions expressed in this post are my own and are not influenced by the publisher or test developers.

References:

Peña, E. D., Spaulding, T. J., & Plante, E. (2006). The composition of normative groups and diagnostic decision making: Shooting ourselves in the foot. American Journal of Speech-Language Pathology, 15(3), 247–254.


What do Narratives and Pediatric Psychiatric Impairments Have in Common?

The high comorbidity between language and psychiatric disorders has been well documented (Beitchman, Cohen, Konstantareas, & Tannock, 1996; Cohen, Barwick, Horodezky, Vallance, & Im, 1998; Toppelberg & Shapiro, 2000). However, a lesser known fact is that there is also significant under-diagnosis of language impairments in children with psychiatric disorders.

In the late 1990s, a study by Cohen, Barwick, Horodezky, Vallance, and Im (1998) found that 40% of children between the ages of 7 and 14 referred solely for psychiatric problems had a language impairment that had not been previously suspected.

Nearly two decades later, not much has changed. Hollo, Wehby, and Oliver (2014) performed a meta-analysis of 22 studies which reported results of language assessments in children with emotional and behavioral disturbances (EBD) and no prior history of language impairment (LI). They found that more than 80% of these children displayed below-average language performance on standardized assessments (1–2 SD below the mean on a single measure) and that 46.5% met criteria for moderate-severe LI (>2 SD below the mean on a single measure).
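
To translate those cutoffs into more familiar numbers: on the mean-100, SD-15 standard-score scale used by most language tests (my assumption for illustration; Hollo and colleagues report the cutoffs only in SD units), the bands work out to

```latex
% Assuming the common mean-100, SD-15 standard-score scale:
\mu - 1\sigma = 100 - 15 = 85, \qquad \mu - 2\sigma = 100 - 30 = 70
```

In other words, "below average" here corresponds roughly to standard scores between 70 and 85, and moderate-severe LI to standard scores below 70.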

The above illustrates that children with psychiatric impairments often spend years "under the radar," without recognition from medical and educational professionals that they present with difficulty adequately comprehending and expressing language. This is particularly damaging because good language development is critically important for psychotherapy and cognitive-behavioral therapies to be effective for the child. Without relevant speech-language intervention services, psychotherapy referrals are rendered virtually useless, since children who lack adequate linguistic abilities will not make meaningful therapeutic gains even after spending years in psychotherapy.

Narrative abilities are "highly relevant for the child psychiatry population as means for both psychotherapeutic evaluation (Emde, Wolf, & Oppenheim, 2003) and intervention (Angus & McLeod, 2004; Chaika, 2000; Gardner, 1993)." That is why it is crucial that language impairments be "identified, taken into account, and remediated (Losh & Capps, 2003)" (Pearce et al., 2014, p. 245).

Over a two-year period, Pearce and colleagues (2014) assessed 48 children, 6–12 years old, who were admitted "for a four-week diagnostic period to the Child Psychiatry Inpatient Unit in a children's hospital." The children selected for the study had a minimum IQ of 85, had passed a hearing test, and did not present with any acute psychotic symptoms (e.g., delusions, hallucinations, etc.). The children were administered the core subtests of the Clinical Evaluation of Language Fundamentals–4 (CELF-4) as well as the Test of Narrative Language (TNL).

The study found that:

  1. "The mean scores for less complex core language production and comprehension were in the average range," whereas the mean narrative-production scores on the TNL were in the clinical range. In other words: "These children perhaps had acquired foundational language skills sufficient for functional communication and produced verbal output at a rate and complexity not noticeably different from their peers, particularly with the overlay of social or emotional disturbance, yet had impaired discourse skills difficult to detect in the typical psychiatric interview, psychotherapy session, or classroom setting" (Pearce et al., 2014, p. 253).
  2. The study also found a significant correlation between narratives and social skills (but not between core language and social skills). That is because, in contrast to general language tests, which assess basic constructs such as vocabulary and grammar and often require single word responses, storytelling involves a number of higher order skills such as sequencing, emotion processing, perspective taking, pragmatic presupposition, gauging the listener’s level of interest, etc., which children with psychiatric impairments understandably lack.
  3. Consequently, the authors concluded that "more than half the children in our complex population not previously diagnosed with language impairment were identified as having impaired language when higher-level discourse skills, measured by narrative ability, were tested in addition to core language abilities" (Pearce et al., 2014, p. 257).

Additionally, it is important to note that the above study utilized two fairly basic language measures and was still able to attain very significant results. It is reasonable to speculate that if the study were conducted today and utilized a broader measure such as the Test of Integrated Language & Literacy Skills (TILLS), the results would have been even more dramatic, with the impairment extending to core language abilities as well as narratives.

So the takeaway messages are as follows:

  1. Do not assume that children who present with challenging behaviors are merely "acting out" and have intact language abilities. Assess them in order to confirm/rule out a language disorder (and make a relevant psychiatric referral if needed).
  2. Do not assume that children with emotional and behavioral disturbances are ONLY behaviorally/psychiatrically impaired and have average language abilities. Perform the necessary testing in order to confirm/rule out the presence of a concomitant language disorder.
  3. General language tests do NOT directly test children's narrative abilities or social language skills. Many children can attain average scores on these tests yet still present with pervasive higher-order language deficits, so more sensitive testing IS NEEDED.
  4. Don't ascribe linguistic deficits to externalizing symptomology (e.g., impulsivity, anxiety, inattention, challenging behaviors, etc.) when the cause may in actuality be an undiagnosed language impairment. Perform a thorough assessment of higher-order linguistic abilities to ensure that the child receives the best possible care in order to optimally function in social and academic settings.

References:

  • Angus, L. E., & McLeod, J. (Eds.) (2004). The handbook of narrative and psychotherapy. London, UK: Sage Publications.
  • Beitchman, J., Cohen, N., Konstantareas, M., & Tannock, R. (Eds.) (1996). Language, learning and behaviour disorders: Developmental, biological and clinical perspectives. Cambridge, NY: Cambridge University Press.
  • Chaika, E. (2000). Linguistics, pragmatics and psychotherapy. London, UK: Whurr Publishers.
  • Cohen, N., Barwick, M., Horodezky, N., Vallance, D., & Im, N. (1998). Language, achievement, and cognitive processing in psychiatrically disturbed children with previously identified and unsuspected language impairments. Journal of Child Psychology and Psychiatry, 39, 865–877.
  • Cohen, N., & Horodezky, N. (1998). Prevalence of language impairments in psychiatrically referred children at different ages: Preschool to adolescence [Letter to the editor]. Journal of the American Academy of Child and Adolescent Psychiatry, 35, 461–462.
  • Emde, R., Wolf, D., & Oppenheim, D. (Eds.) (2003). Revealing the inner worlds of young children—The MacArthur story stem battery. New York, NY: Oxford University Press.
  • Gardner, R. (1993). Storytelling in psychotherapy with children. London, UK: Jason Aronson.
  • Hollo, A., Wehby, J. H., & Oliver, R. O. (2014). Unsuspected language deficits in children with emotional and behavioral disorders: A meta-analysis. Exceptional Children, 80(2), 169–186.
  • Losh, M., & Capps, L. (2003). Narrative ability in high-functioning children with autism or Asperger's syndrome. Journal of Autism and Developmental Disorders, 33, 239–251.
  • Pearce, P., et al. (2014). Use of narratives to assess language disorders in an inpatient pediatric psychiatric population. Clinical Child Psychology and Psychiatry, 19(2), 244–259.
  • Toppelberg, C., & Shapiro, T. (2000). Language disorders: A 10-year research update review. Journal of the American Academy of Child and Adolescent Psychiatry, 39, 143–152.

C/APD Update: New Developments on an Old Controversy

In the past two years, I wrote a series of research-based posts (HERE and HERE) regarding the validity of (Central) Auditory Processing Disorder (C/APD) as a standalone diagnosis, and questioned its utility for classification purposes in the school setting.

Once again, I want to reiterate that I was in no way disputing the legitimate symptoms (e.g., difficulty processing language, difficulty organizing narratives, difficulty decoding text, etc.) with which the students diagnosed with "CAPD" presented.

Rather, I was citing research indicating that these symptoms are representative of broader linguistically based deficits, which require targeted linguistic/literacy-based interventions rather than recommendations for specific prescriptive programs (e.g., CAPDOTS, Fast ForWord, etc.) or mere accommodations.

I was also significantly concerned that an overfocus on the diagnosis of (C)APD tended to obscure REAL, language-based deficits in children, and that it forced SLPs to address erroneous therapeutic targets based on audiologists' recommendations or restricted students to receiving mere accommodations rather than rightful therapeutic remediation.

Today I wanted to update you regarding new developments pertaining to the validity of the "C/APD" diagnosis, which have taken place since my last blog post on the topic 1.5 years ago.

In April 2016, de Wit and colleagues published a systematic review in the Journal of Speech, Language, and Hearing Research. Their purpose was to review research studies describing the characteristics of APD in children and determine whether these characteristics merited the label of a distinct clinical disorder vs. being representative of other disorders. After searching 6 databases, they chose 48 studies which satisfied appropriate inclusion criteria. Unfortunately, only 1 study had strong methodological quality, and, even more disappointing, the children in these studies were very dissimilar and presented with incredibly diverse symptomology. The authors concluded that "the listening difficulties of children with APD may be a consequence of cognitive, language, and attention issues rather than bottom-up auditory processing."

In other words, because APD is not a distinct clinical disorder, a diagnosis of APD would not contribute anything to our understanding of the child's functioning beyond showing that the child is experiencing linguistically based deficits, which bear further investigation.

To continue, you may remember that in my first CAPD post I extensively cited a tutorial written by Dr. David DeBonis, an audiologist. In his article, he pointed out numerous inconsistencies involved in CAPD testing and concluded that "routine use of CAPD test protocols cannot be supported" and that [CAPD] "intervention needs to be contextualized and functional."

In July 2016, Iliadou, Sirimanna, and Bamiou published an article, "CAPD Is Classified in ICD-10 as H93.25 and Hearing Evaluation—Not Screening—Should Be Implemented in Children With Verified Communication and/or Listening Deficits," protesting DeBonis's claim that CAPD is not a unique clinical entity and as such should not be included in any disease classification system. They stated that DeBonis omitted the fact that "CAPD is included in the U.S. version of the International Statistical Classification of Diseases and Related Health Problems–10th Revision (ICD-10) under the code H93.25" (p. 368). They also listed what they believed to be a number of article omissions, which they claimed biased DeBonis's tutorial's conclusions.

The authors claimed that DeBonis provided a limited definition of CAPD based only on ASHA's technical report vs. other sources such as the American Academy of Audiology (2010), the British Society of Audiology Position Statement (2011), and the Canadian Guidelines on Auditory Processing Disorder in Children and Adults: Assessment and Intervention (2012) (p. 368).

The authors also claimed that DeBonis did not adequately define the term "traditional testing" and failed to provide several key references for select claims. They disagreed with DeBonis's linkage of certain digit tests, as well as his "lumping" of studies which included children with suspected and diagnosed APD into the same category (pp. 368–369). They also objected to the fact that he "oversimplified" the results of positive gains of select computer-based interventions for APD, and that in his summary section he listed only selected studies pertinent to the topic of intelligence and auditory processing skills (p. 369).

Their main objection, however, had to do with the section of DeBonis's article that contained a "recommended assessment and intervention process for children with listening and communication difficulties in the classroom." They expressed concerns with his recommendations on the grounds that he failed to provide published research to support that this was the optimal way to provide intervention. The authors concluded their article by stating that due to the above-mentioned omissions, they felt that DeBonis's tutorial "show(ed) unacceptable bias" (p. 370).

In response to Iliadou, Sirimanna, and Bamiou's (2016) concerns, DeBonis issued his own response article shortly thereafter (DeBonis, 2016). Firstly, he pointed out that when his tutorial was released in June 2015, the ICD-10 was not yet in effect (it went into effect on October 1, 2015). As such, his statement was factually accurate.

Secondly, he made a very important point regarding the validity of the C/APD construct: namely, that it fails to satisfy the Sydenham–Guttentag criteria for a distinct clinical entity (Vermiglio, 2014). Despite attempts at diagnostic uniformity, CAPD remains ambiguously defined, with testing failing to "represent a homogenous patient group" (p. 906).

For those who are unfamiliar with this terminology (as per a direct quote from Dr. Vermiglio's presentation): "The Sydenham-Guttentag Criteria for the Clinical Entity Proposed by Vermiglio (accepted 2014, JAAA) is as follows:

  1. The clinical entity must possess an unambiguous definition (Sydenham, 1676; FDA, 2000)
  2. It must represent a homogeneous patient group (Sydenham, 1676; Guttentag, 1949, 1950; FDA, 2000)
  3. It must represent a perceived limitation (Guttentag, 1949)
  4. It must facilitate diagnosis and intervention (Sydenham, 1676; Guttentag, 1949; FDA, 2000)"

Thirdly, DeBonis addressed Iliadou, Sirimanna, and Bamiou's (2016) concern that he did not use the most recent definition of APD by pointing out that he was most qualified to discuss the US system and its definitions of CAPD, and that "the U.S. guidelines, despite their limitations and age, continue to have a major impact on the approach to auditory processing disorders worldwide" (p. 372). He also elucidated that the AAA's (2010) definition of CAPD is "not so much built on previous definitions but rather has continued to rely on them," and as such does not constitute a "more recent" source of CAPD definitions (p. 372).

DeBonis next addressed the claim that he did not adequately define the term "traditional testing." He stated that he defined it on p. 125 of his tutorial and that the information on it was taken directly from the AAA (2010) document. He then explained how it is "aligned with bottom-up aspects of the auditory system" by citing numerous references (see p. 372 for further details). After that, he addressed Iliadou, Sirimanna, and Bamiou's (2016) claim that he failed to provide references by pointing out the relevant citation in his article, which they had overlooked.

Next, he proceeded to address their concerns "regarding the interaction between cognition and auditory processing" by reiterating that auditory processing testing is "not so pure" and is affected by constructs such as memory, executive function skills, etc. He also referenced the findings of Beck, Clarke, and Moore (2016) that "most currently used tests of APD are tests of language and attention…lack sensitivity and specificity" (p. 27).

The next point addressed by DeBonis was the use of studies which included children with suspected vs. confirmed APD. He agreed that "one cannot make inferences about one population from another," but added that the data from the article in question "provided insight into the important role of attention and memory in children who are poor listeners," and that "such listeners represent the population [which] should be [AuD's] focus" (p. 373).

From there, DeBonis moved on to address Iliadou, Sirimanna, and Bamiou's (2016) claim that he "oversimplified" the results of one CBAT study dealing with the effects of computer-based interventions for APD. He responded that the authors of that review themselves stated that the evidence for improving phonological awareness is "initial."

Consequently, "improvements in auditory processing—without subsequent changes in the very critical tasks of reading and language—certainly do not represent an endorsement for the auditory training techniques that were studied" (p. 373).

Here, DeBonis also raised concerns regarding the overall concept of treatment effectiveness, stating that it should not be based on "improved performance on behavioral tests of auditory processing or electrophysiological measures" but rather on "improvements on complex listening and academic tasks" (p. 373). As such:

  1. “This limited definition of effectiveness leads to statements about the impact of certain interventions that can be misinterpreted at best and possibly misleading.”
  2. “Such a definition of effectiveness is unlikely to be satisfying to working clinicians or parents of children with communication difficulties who hope to see changes in day-to-day communication and academic abilities.” (p.373)

Then DeBonis addressed Iliadou, Sirimanna, and Bamiou's (2016) concerns regarding the omission of an article supporting CAPD and intelligence as separate entities. He reiterated that the aim of his tutorial was to note that "performance on commonly used tests of auditory processing is highly influenced by a number of cognitive and linguistic factors," rather than to "do an overview of research in support of and in opposition to the construct" (p. 373).

Subsequently, DeBonis addressed the Iliadou, Sirimanna, and Bamiou (2016) claim that he did not provide research to support his proposed testing protocol, as well as their observation that he made a figure error. He conceded that the authors were correct with respect to the figure error (the information provided in the figure was not sufficient). However, he pointed out that the purpose of his tutorial was "to review the literature related to ongoing concerns about the use of the CAPD construct in school-aged children and to propose an alternative assessment/intervention procedure that moves away from testing 'auditory processing' and moves toward identifying and supporting students who have listening challenges." As such, while the effectiveness of his model is being tested, it makes sense to use "questionnaires and speech-in-noise tests with very strong psychometric characteristics" and to thoroughly assess these children's "language and cognitive skills to reduce the chance of misdiagnosis" in order to provide functional interventions (p. 373).

Finally, DeBonis addressed the Iliadou, Sirimanna, and Bamiou (2016) accusation that his tutorial contained "unacceptable bias." He pointed out that "the reviewers of this [2015] article did not agree" and that since the time of that article's publication, "readers and other colleagues have viewed it as a vehicle for important thought about how best to help children who have listening difficulties" (p. 374).

Having read the above information, many of you by now must be wondering: why does research on APD as a valid stand-alone diagnosis continue to be published at regular intervals?

To explain the above phenomenon, I will use several excerpts from an excellent presentation by Kamhi, Vermiglio, and Wallach (2016), which I attended during the 2016 ASHA Convention in Philadelphia, PA.

It has been suggested that the above has to do with "the bias of the CAPD Convention Committee that reviews submissions." Namely, "the committee only accepts submissions consistent with the traditional view of (C)APD espoused by Bellis, Chermak and others who wrote the ASHA (2005) position statement on CAPD."

Kamhi, Vermiglio, and Wallach (2016) supported this claim by pointing out that when Dr. Vermiglio attempted to submit his findings on the nature of "C/APD" for the 2015 ASHA Convention, "the committee did not accept Vermiglio's submission" but instead accepted the following seminar: "APD – It Exists! Differential Diagnosis & Remediation," and allocated it "a prominent location in the program planner."

Indeed, during the 2016 ASHA Convention alone, there was a host of 1- and 2-hour pro-APD sessions, such as "Yes, You CANS! Adding Therapy for Specific CAPDs to an IEP," "Perspectives on the Assessment & Treatment of Individuals With Central Auditory Processing Disorder (CAPD)," and "The Buffalo Model for CAPD: Looking Back & Forward," in addition to a host of posters and technical reports attempting to validate this diagnosis despite mounting evidence refuting that very fact. Yet only one session, "Never-Ending Controversies With CAPD: What Thinking SLPs & Audiologists Know," presented by Kamhi, Vermiglio, and Wallach (two SLPs and one audiologist) and accepted by a non-AuD committee, discussed the current controversies raging in the fields of speech pathology and audiology pertaining to "C/APD."

In 2016, Diane Paul, the Director of Clinical Issues in Speech-Language Pathology at ASHA, asked Kamhi, Vermiglio, and Wallach "to offer comments on the outline of audiology and SLP roles in assessing and treating CAPD." According to Kamhi et al. (2016), the outline did not mention any of the controversies in assessment and diagnosis documented by numerous authors dating as far back as 2009. It also did not "mention the lack of evidence on the efficacy of auditory interventions documented in the systematic review by Fey et al. (2011) and DeBonis (2015)."

At this juncture, it's important to start thinking about the possible incentives a professional might have to continue performing APD testing and making prescriptive program recommendations despite all the existing evidence refuting the validity and utility of the APD diagnosis for children presenting with listening difficulties.

Conclusions:

  • There is still no compelling evidence that APD is a stand-alone diagnosis with clear diagnostic criteria
  • There is still no compelling evidence that auditory deficits are a "significant risk factor for language or academic performance"
  • There is still no compelling evidence that “auditory interventions provide any unique benefit to auditory, language, or academic outcomes” (Hazan, Messaoud-Galusi, Rosan, Nouwens, & Shakespeare, 2009; Watson & Kidd, 2009)
  • APD deficits are linguistically based deficits which accompany a host of developmental conditions ranging from developmental language disorders to learning disabilities, etc.
  • SLPs should continue comprehensively assessing children diagnosed with “C/APD” to determine the scope of their linguistic deficits
  • SLPs should continue formulating language goals based on determined linguistic areas of weakness
  • SLPs should be wary of any goals or recommendations which focus on remediation of isolated skills such as "auditory discrimination, auditory sequencing, phonological memory, working memory, or rapid serial naming," since studies have definitively confirmed their lack of effectiveness (Fey et al., 2011)
  • SLPs should be wary of any prescriptive programs offering C/APD “interventions”
  • SLPs should focus on improving children’s abilities for functional communication including listening, speaking, reading, and writing
    • Please see the excellent article written by Dr. Wallach in 2014, entitled Improving Clinical Practice: A School-Age and School-Based Perspective. It "presents a conceptual framework for intervention at school-age levels" and discusses "advanced levels of language that move beyond preschool and early elementary grade goals and objectives with a focus on comprehension and meta-abilities."

So there you have it: sadly, despite research and logic, the controversy is very much alive! But I am seeing some new developments.

I see SLPs, newly minted and seasoned alike, steadily voicing doubts that the symptomology they are documenting in children diagnosed with so-called "CAPD" is purely auditory in nature.

I see more and more SLPs supporting research evidence and science by voicing their concerns regarding the numerous diagnostic markers of 'CAPD' which do not make sense to them, and stating: "Wait a second – that can't be right!"

I see more and more SLPs documenting the lack of progress children make after being prescribed isolated FM systems or computer programs which claim to treat "APD symptomology" (without provision of therapy services). And I see more and more SLPs beginning to understand the lack of usefulness of this diagnosis and switching to language-based interventions to teach children to listen, speak, read, and write, and to generalize these abilities to both social and academic settings.

So I definitely do see hope on the horizon!


Review and Giveaway of Strategies by Numbers (by SPELL-Links)


Today I am reviewing a fairly recently released (2014) book from the Learning By Design, Inc. team entitled SPELL-Links Strategies by Numbers. This 57-page instructional guide was created to support the implementation of the SPELL-Links to Reading and Writing Word Study Curriculum as well as to help students "use the SPELL-Links strategies anytime in any setting" (p. iii). Its purpose is to enable students to strategize their way to writing and reading rather than overrely on memorization techniques.

SPELL-Links Strategies by Numbers contains in-depth explanations of SPELL-Links' 14 strategies for spelling and reading, detailed instructions on how to teach the strategies during writing and reading activities, as well as helpful ideas for supporting students as they further acquire literacy skills. It can be used by a wide array of professionals (including classroom teachers, speech-language pathologists, reading improvement teachers, learning disabilities teachers, aides, and tutors) as well as by parents, for teaching word study lessons or as carryover and practice during reading and writing tasks.

The author includes a list of key terms used in the book as well as a guide to the book's instructional icons.

The goal of the 14 strategies listed in the book is to build vocabulary; improve spelling, word decoding, reading fluency, and reading comprehension; and improve students' writing skills. While each strategy is presented in isolation under its own section, the end result is for students to fully integrate and apply multiple strategies when reading or writing.

Here’s the list of the 14 strategies in order of appearance as applied to spelling and reading:

  1. Sound It Out
  2. Check the Order
  3. Catch the Beat
  4. Listen Up
  5. A Little Stress Will Help This Mess
  6. No Fouls
  7. Play By the Rules
  8. Use Rhyme This Time
  9. Spell What You Mean and Mean What You Spell
  10. Be Smart About Word Parts
  11. Build on the Base
  12. Invite the Relatives
  13. Fix the Funny Stuff
  14. Look It Up

Each strategy includes highly detailed instructions for implementation with students, including pictorial support, as well as both instructor and student guidance for practice at various levels during writing and reading tasks. At the end of the book, all the strategies are succinctly summarized in a handy table, which is also provided separately as a double-sided, one-page insert printed on reinforced paper, to be used as a guide when the book is not handy.

There are a number of things I like about the book. Firstly, of course, it is based on the latest research in reading, writing, and spelling. Secondly, clinicians can use it in the absence of the SPELL-Links to Reading and Writing Word Study Curriculum, since the author's purpose was to have students "use the SPELL-Links strategies anytime in any setting" (p. iii). Thirdly, I love the fact that the book is based on the connectionist research model, which views spelling and reading as a "dynamic interplay of phonological, orthographic, and semantic knowledge" (p. iii). Consequently, the listed strategies focus on simultaneously developing and strengthening phonological, orthographic, semantic, and morphological knowledge during reading and writing tasks.

You can find this book for purchase in the Learning By Design, Inc. store HERE. Finally, due to the generosity of Jan Wasowicz, PhD, the book's author, you can enter my Rafflecopter giveaway below for a chance to win your own copy!

 

 


Review of the Test of Integrated Language and Literacy (TILLS)

The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-age children. As I have been using this test since the time it was published, I wanted to take an opportunity today to share a few of my impressions of this assessment.

               

First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals–5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students' scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out either low/slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to hear about TILLS's development, almost simultaneously through ASHA and the SPELL-Links listserv. I was particularly happy because I knew that some of this test's developers (e.g., Dr. Elena Plante and Dr. Nickola Nelson) have published solid research in the areas of psychometrics and literacy, respectively.

According to the TILLS developers, it has been standardized for 3 purposes:

  • to identify language and literacy disorders
  • to document patterns of relative strengths and weaknesses
  • to track changes in language and literacy skills over time

The subtests can be administered in isolation (with the exception of a few) or in their entirety. The administration of all 15 subtests may take approximately an hour and a half, while the administration of the core subtests typically takes ~45 minutes.

Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.

  • Subtest 5 – Nonword Spelling
  • Subtest 7 – Reading Comprehension
  • Subtest 10 – Nonword Reading
  • Subtest 11 – Reading Fluency
  • Subtest 12 – Written Expression

However, if needed, there are several tests of early reading and writing abilities available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., the Test of Early Reading Ability–Third Edition (TERA-3); the Test of Early Written Language–Third Edition (TEWL-3), etc.).

Let's move on to take a deeper look at its subtests. Please note that for the purposes of this review, all images came directly from and are the property of Brookes Publishing Co. (clicking on each of the images below will take you directly to their source).

[Image: TILLS Subtest 1 – Vocabulary Awareness]

1. Vocabulary Awareness (VA) (description above) requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works great for teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student's word-reading abilities by asking them to read all of the words before you read the word choices to them. This way you can informally document any word misreadings made by the student, even in the presence of an average subtest score.

[Image: TILLS Subtest 2 – Phonemic Awareness]

2. The Phonemic Awareness (PA) subtest (description above) requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, similar to tests such as the CTOPP-2: Comprehensive Test of Phonological Processing–Second Edition or the Phonological Awareness Test 2 (PAT 2), it is still a highly useful and reliable measure of phonemic awareness (as one of many precursors to reading fluency success). This is especially because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the students' executive function abilities in addition to their phonemic awareness skills.

[Image: TILLS Subtest 3 – Story Retelling]

3. The Story Retelling (SR) subtest (description above) requires students to do just that: retell a story. Be mindful of the fact that the presented stories have reduced complexity. Thus, unless the students possess significant retelling deficits, the above subtest may not capture their true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade), I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician's narrative model (e.g., HERE). For early elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie they saw recently (or liked significantly) without the benefit of visual support altogether (e.g., HERE).

[Image: TILLS Subtest 4 – Nonword Repetition]

4. The Nonword Repetition (NR) subtest (description above) requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task's heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, & Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments will be observed to present with patterns of segment substitutions (subtle substitutions of sounds and syllables in the presented nonsense words) as well as segment deletions in nonword sequences more than 2-3 or 3-4 syllables in length (depending on the child's age).


5. The Nonword Spelling (NS) subtest (description above) requires the students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to this subtest in the same assessment session. In contrast to real-word spelling tasks, students cannot rely on memorized spellings of the presented words, which are nonetheless bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze subtest results in order to determine whether dialectal differences, rather than the presence of an actual disorder, are responsible for the error patterns.


6. The Listening Comprehension (LC) subtest (description above) requires the students to listen to short stories and then definitively answer story questions via the available answer choices: "Yes", "No", and "Maybe". This subtest also indirectly measures the students' metalinguistic awareness skills, which are needed to detect when the text does not provide sufficient information to answer a particular question definitively (e.g., when a "Maybe" response is called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement this subtest with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be 'good guessers' but who are also reported to present with word-finding difficulties at sentence and discourse levels.


7. The Reading Comprehension (RC) subtest (description above) requires the students to read short stories and answer story questions in the same "Yes", "No", and "Maybe" format. This subtest is not stand-alone and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story out loud in order to determine whether s/he can proceed with taking this subtest or should discontinue due to being an emergent reader. The discontinuation criterion is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.

While I find it terrific for students with severe-profound reading deficits and/or below-average IQ, a number of my students with average IQ and moderately impaired reading skills managed to pass it via a combination of guessing and luck, despite being observed to misread aloud between 40-60% of the presented words. Be mindful of the fact that such students may typically make up to 5-6 errors during the reading of the first story; thus, according to the administration guidelines, they will be allowed to proceed and take this subtest. They will then continue to produce text misreadings during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is a definitive format ("Yes", "No", and "Maybe") vs. an open-ended question format, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing the administration of this subtest with grade-level (or below-grade-level) texts (see HERE and/or HERE) to assess the student's reading comprehension informally.

I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student's reading for further analysis (see the Reading Fluency section below). After the completion of the story, I ask the student questions with a focus on main-idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student's age, I may ask them abstract/factual text questions with and without text access. Overall, I find that informal administration of grade-level (or even below-grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student's reading comprehension abilities than the administration of standardized reading tests alone.


8. The Following Directions (FD) subtest (description above) measures the student's ability to execute directions of increasing length and complexity. It measures the student's short-term, immediate, and working memory, as well as their language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters, numbers, etc.) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the student is expected to move the card covering the stimuli and then to execute the visual-spatial, directional, sequential, and logical if-then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student's memory and comprehension. The subtest was created to simulate a teacher's use of procedural language (giving directions) in a classroom setting (as per the developers).


9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest administration. Despite the relatively short passage of time between the two subtests, it is considered a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student's performance on the two measures (e.g., did the student recall more information after a period of time had passed vs. immediately after being read the story?). However, as mentioned previously, some students may recall the previously presented story fairly accurately and, as a result, may obtain an average score despite a history of teacher/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement this subtest with a recall of a movie/book the student saw/read recently (a few days prior) in order to compare both performances and note any weaknesses/limitations.


10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, the presentation of this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have merely memorized sight words and are now having difficulty decoding unfamiliar words as a result.

11. The Reading Fluency (RF) subtest (description above) requires students to read facts which make up simple stories fluently and correctly. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.

It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per the authors, "a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly."

Thus, "the mean is to the left of the mode" (see the publisher's image below). This is why a student could earn an average standard score (near the mean) yet a low percentile rank when true percentiles are used rather than NCE (Normal Curve Equivalent) percentiles.

[Image: TILLS Q&A – Negative Skew]

Consequently, under certain conditions (see HERE), the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student's ability on this subtest.
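To make the skew effect concrete, below is a minimal sketch (in Python, using simulated scores of my own invention, not actual TILLS norms) of how, in a negatively skewed distribution, the mean lands to the left of the mode, and a student scoring exactly at the mean falls below the 50th true percentile even though a normal-curve (NCE-style) conversion would place that same score at the 50th:

```python
import numpy as np
from scipy import stats

# Simulated, negatively skewed subtest scores (NOT actual TILLS norms):
# most students score high; a long tail scores low.
rng = np.random.default_rng(0)
scores = 100 - rng.gamma(shape=2.0, scale=8.0, size=10_000)

mean = scores.mean()
vals, counts = np.unique(np.round(scores), return_counts=True)
mode = vals[counts.argmax()]

# True percentile rank of a student who scores exactly at the mean.
true_pct = stats.percentileofscore(scores, mean)

# An NCE-style conversion assumes a normal curve with the same mean/SD,
# which by definition places the mean at the 50th percentile.
nce_pct = stats.norm.cdf(mean, loc=mean, scale=scores.std()) * 100

print(f"mean = {mean:.1f}, mode = {mode:.1f}")         # mean sits left of the mode
print(f"true percentile at the mean: {true_pct:.1f}")   # well below 50
print(f"normal-curve percentile at the mean: {nce_pct:.1f}")  # exactly 50.0
```

In other words, the "average" standard score and the low true percentile are both correct; they simply answer different questions about a lopsided distribution.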

Indeed, due to the reduced complexity of the presented words, some students (especially younger elementary-aged ones) may obtain average scores and still present with serious reading fluency deficits.

I frequently see this in students with average IQ and good long-term memory, who by second and third grade have managed to memorize an admirable number of sight words, which makes their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking them to read several paragraphs from it (see HERE and/or HERE).

As the students are reading, I calculate their reading fluency by counting the number of words they read per minute. I find this very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student's reading fluency on both standardized and informal assessment measures in order to document the student's strengths and limitations.
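The arithmetic behind this is simple enough to automate; here is a minimal sketch (with illustrative rate and accuracy cutoffs I chose myself, not published fluency norms) of the words-correct-per-minute calculation and the reader-profile labels described above:

```python
# Reading-fluency sketch; the rate and accuracy cutoffs below are
# illustrative assumptions, not published norms.
def fluency_profile(words_read: int, errors: int, seconds: float,
                    rate_cutoff: float = 100.0, accuracy_cutoff: float = 0.95):
    minutes = seconds / 60
    wpm = words_read / minutes                 # overall reading rate
    wcpm = (words_read - errors) / minutes     # words correct per minute
    accuracy = (words_read - errors) / words_read
    speed = "fast" if wpm >= rate_cutoff else "slow"
    precision = "accurate" if accuracy >= accuracy_cutoff else "inaccurate"
    return wpm, wcpm, f"{speed}/{precision} reader"

# A student who reads 180 words in 2 minutes with 14 misreadings:
wpm, wcpm, profile = fluency_profile(words_read=180, errors=14, seconds=120)
print(f"{wpm:.0f} wpm, {wcpm:.0f} wcpm -> {profile}")
# 90 wpm, 83 wcpm -> slow/inaccurate reader
```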


12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.

The examiner needs to show the student a different story, which integrates simple facts into a coherent narrative. After reading that simple story to the student, the examiner is expected to tell the student that the story is okay but "sounds kind of choppy." The examiner then needs to show the student an example of how they could put the facts together in a way that sounds more interesting and less choppy by combining sentences (see below). Finally, the examiner will ask the student to rewrite the story presented to them in a similar manner (e.g., "less choppy and more interesting").

[Image: TILLS Written Expression sentence-combining example]

After the student finishes his/her story, the examiner analyzes it and generates the following scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as the Examiner's Practice Workbook are provided to assist with scoring, as it takes a bit of training as well as trial and error to complete, especially if the examiner is not familiar with certain procedures (e.g., calculating T-units).

Full disclosure: Because the above subtest is still essentially sentence combining, I have only used it a handful of times with my students. Typically, when I've used it in the past, most of my students fell into two categories: those who failed it completely (by copying the text word for word, failing to generate any written output, etc.) and those who passed it with flying colors but still presented with notable written output deficits. Consequently, I've replaced Written Expression subtest administration with the administration of standardized writing tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.

Having said that, many clinicians may not have access to other standardized written assessments, or may lack the time to administer entire standardized written measures (which may frequently take between 60 and 90 minutes of administration time). Consequently, in the absence of other standardized writing assessments, this subtest can be effectively used to gauge the student's basic writing abilities and, if needed, be effectively supplemented by informal writing measures (mentioned above).


13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially students become actors who need to act out particular scenes while viewing select words presented to them.

Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students.  Here is why.

I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students' significant deficits in this area. Yet past administration of this subtest showed me that a number of my students can pass it quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer the administration of comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).

Again, as I've previously mentioned, many clinicians may not have access to other standardized social communication assessments, or may lack the time to administer entire standardized measures. Consequently, in the absence of other social communication assessments, this subtest can be used to get a baseline of the student's basic social communication abilities and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.


14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language such as syntax or vocabulary).


15. The Digit Span Backward (DSB) subtest (description above) assesses the student's working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students are using to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before the administration of this subtest.

SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.

To continue, in addition to subtests which assess the students' literacy abilities, the TILLS also possesses a number of interesting features.

For starters, there is the TILLS Easy Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores, and the system does the rest. Once the raw scores are entered, the system generates a PDF document with all the data, which includes (but is not limited to) standard scores, percentile ranks, as well as a variety of composite and core scores. The examiner can then save the PDF on their device (laptop, PC, tablet, etc.) for further analysis.

Then there is the quadrant model. According to the TILLS sampler (HERE), "it allows the examiners to assess and compare students' language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing" and then create "meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students" (pg. 21).

[Image: TILLS quadrant model]

Then there is the Student Language Scale (SLS), a one-page checklist that parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student's performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests or even in isolation (as per the developers).

Furthermore, according to the developers, because the normative sample included several special-needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), as well as students with intellectual disabilities (as long as they are functioning at a developmental age of 6 or above).

According to the developers, the TILLS is aligned with the Common Core Standards and can be administered as frequently as two times a year for progress monitoring (a minimum of 6 months after the first administration).

With respect to bilingualism, examiners can use it with caution with simultaneous English learners but not with sequential English learners (see further explanation HERE). Translations of the TILLS are definitely not allowed, as they would undermine test validity and reliability.

So there you have it: these are just some of my impressions of this test. Some of you may notice that I spent a significant amount of time pointing out the test's limitations. However, it is very important to note that research indicates there is no such thing as a "perfect standardized test" (see HERE for more information). All standardized tests have their limitations.

Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.

That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS.  For starters, take a look at Brookes Publishing TILLS resources.  These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars.   There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).

But that’s not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc). Take a look at them as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.

Disclaimer:  I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing  or any of the TILLS developers to write it.  All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.

References: 

Leclercq, A., Maillart, C., & Majerus, S. (2013). Nonword repetition problems in children with SLI: A deficit in accessing long-term linguistic representations? Topics in Language Disorders, 33(3), 238–254.


Posted on

What Research Shows About the Functional Relevance of Standardized Language Tests

As an SLP who routinely conducts speech and language assessments in several settings (e.g., school and private practice), I understand the utility of and the need for standardized speech, language, and literacy tests. However, as an SLP who works with children with dramatically varying degrees of cognition, abilities, and skill sets, I also highly value supplementing these standardized tests with functional and dynamic assessments, interactions, and observations.

Since a significant value is placed on standardized testing by both schools and insurance companies for the purposes of service provision and reimbursement, I wanted to summarize in today’s post the findings of recent articles on this topic.  Since my primary interest lies in assessing and treating school-age children, for the purposes of today’s post all of the reviewed articles came directly from the Language Speech and Hearing Services in Schools  (LSHSS) journal.

We’ve all been there. We’ve all had situations in which students scored on the low end of normal, or had a few subtest scores in the below average range, which equaled  an average total score.  We’ve all poured over eligibility requirements trying to figure out whether the student should receive therapy services given the stringent standardized testing criteria in some states/districts.

Of course, as it turns out, the answer is never simple.  In 2006, Spaulding, Plante & Farinella set out to examine the assumption: “that children with language impairment will receive low scores on standardized tests, and therefore [those] low scores will accurately identify these children” (61).   So they analyzed the data from 43 commercially available child language tests to identify whether evidence exists to support their use in identifying language impairment in children.

It turns out it did not! Due to the variation in psychometric properties across tests (see the article for specific details), many children with language impairment are overlooked by standardized tests, either receiving scores within the average range or not receiving scores low enough to qualify for services. Thus, "the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given." Furthermore, "even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given (69)."

Consequently, they created a decision tree (see below) with recommendations for clinicians using standardized testing. They recommend using alternate sources of data (sensitivity and specificity rates) to support accurate identification (available for a small subset of select tests).

The idea behind it is: “if sensitivity and specificity data are strong, and these data were derived from subjects who are comparable to the child tested, then the clinician can be relatively confident in relying on the test score data to aid his or her diagnostic decision. However, if the data are weak, then more caution is warranted and other sources of information on the child’s status might have primacy in making a diagnosis (70).”
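To see why strong sensitivity and specificity matter so much, consider a minimal sketch (the sensitivity, specificity, and prevalence figures below are invented for illustration, not taken from any actual test manual) of how those two numbers translate into the probability that a low score actually reflects impairment:

```python
# How sensitivity/specificity translate into diagnostic confidence.
# All numbers below are invented for illustration only.
def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """P(truly impaired | low test score), via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

base_rate = 0.07  # assumed prevalence of language impairment

strong = positive_predictive_value(sensitivity=0.90, specificity=0.90,
                                   base_rate=base_rate)
weak = positive_predictive_value(sensitivity=0.60, specificity=0.70,
                                 base_rate=base_rate)

print(f"strong test: {strong:.0%} chance a low score reflects true impairment")  # ~40%
print(f"weak test:   {weak:.0%} chance a low score reflects true impairment")    # ~13%
```

Even a psychometrically strong test leaves considerable room for doubt at realistic prevalence rates, which is exactly why other sources of information on the child's status may need to carry weight alongside the score.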

Fast forward 6 years, and a number of newly revised tests later,  in 2012, Spaulding and colleagues set out to “identify various U.S. state education departments’ criteria for determining the severity of language impairment in children, with particular focus on the use of norm-referenced tests” as well as to “determine if norm-referenced tests of child language were developed for the purpose of identifying the severity of children’s language impairment”  (176).

They obtained published procedures for severity determinations from available U.S. state education departments, which specified the use of norm-referenced tests, and reviewed the manuals for 45 norm-referenced tests of child language to determine if each test was designed to identify the degree of a child’s language impairment.

What they found out was “the degree of use and cutoff-point criteria for severity determination varied across states. No cutoff-point criteria aligned with the severity cutoff points described within the test manuals. Furthermore, tests that included severity information lacked empirical data on how the severity categories were derived (176).”

Thus they urged SLPs to exercise caution in determining the severity of children’s language impairment via norm-referenced test performance “given the inconsistency in guidelines and lack of empirical data within test manuals to support this use (176)”.

Following the publication of this article, Ireland, Hall-Mills & Millikin issued a response to the Spaulding and colleagues article. They pointed out that the "severity of language impairment is only one piece of information considered by a team for the determination of eligibility for special education and related services." They noted that Spaulding and colleagues left out a host of federal and state guideline requirements and "did not provide an analysis of the regulations governing special education evaluation and criteria for determining eligibility (320)." They pointed out that "IDEA prohibits the use of 'any single measure or assessment as the sole criterion'" for determination of disability and requires that IEP teams "draw upon information from a variety of sources."

They listed a variety of examples from several different state departments of education (FL, NC, VA, etc.), which mandate the use of functional assessments, dynamic assessments, criterion-referenced assessments, etc., for the determination of language therapy eligibility.

But are SLPs from across the country appropriately using the federal and state guidelines in order to determine eligibility? While one should certainly hope so, that does not always seem to be the case. To illustrate, in 2012, Betz and colleagues asked 364 SLPs to complete a survey "regarding how frequently they used specific standardized tests when diagnosing suspected specific language impairment (SLI) (133)."

Their purpose was to determine “whether the quality of standardized tests, as measured by the test’s psychometric properties, is related to how frequently the tests are used in clinical practice” (133).

They found that the most frequently used tests were comprehensive assessments, including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale, as well as one-word vocabulary tests such as the Peabody Picture Vocabulary Test. Furthermore, the date of publication seemed to be the only factor which affected the frequency of test selection.

They also found that SLPs frequently did not follow up the comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used vocabulary testing as a second measure. They were understandably puzzled by that finding: "The emphasis placed on vocabulary measures is intriguing because although vocabulary is often a weakness in children with SLI (e.g., Stothard et al., 1998), the research to date does not show vocabulary to be more impaired than other language domains in children with SLI (140)."

According to the authors, “perhaps the most discouraging finding of this study was the lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores (141).”

If SLPs have not significantly changed their practices since that time (2012), the above is certainly disheartening, as it implies that rather than being true diagnosticians, SLPs are using whatever is at hand that has been purchased by their department to indiscriminately assess students with suspected speech-language disorders. If that is truly the case, it calls into question Ireland, Hall-Mills & Millikin's response to Spaulding and colleagues. In other words, though SLPs are aware that they need to comply with state and federal regulations when it comes to unbiased and targeted assessments of children with suspected language disorders, they may not actually be using appropriate standardized testing, much less supplementary informal assessments (e.g., dynamic, narrative, language sampling), in order to administer well-rounded assessments.

So where do we go from here? Well, it’s quite simple really!   We already know what the problem is. Based on the above articles we know that:

  1. Standardized tests possess significant limitations
  2. They are not used with optimal effectiveness by many SLPs
  3.  They may not be frequently supplemented by relevant and targeted informal assessment measures in order to improve the accuracy of disorder determination and subsequent therapy eligibility

Now that we have identified the problem, we need to develop and consistently implement effective practices to ameliorate it. These include researching the psychometric properties of tests (sample size, sensitivity, specificity, etc.), using domain-specific assessments to supplement the administration of comprehensive testing, and supplementing standardized testing with a plethora of functional assessments.

SLPs can review testing manuals and consult with colleagues when they feel that the standardized testing is underidentifying students with language impairments (e.g., HERE and HERE).  They can utilize referral checklists (e.g., HERE) in order to pinpoint the students’ most significant difficulties. Finally, they can develop and consistently implement informal assessment practices (e.g., HERE and HERE) during testing in order to gain a better grasp on their students’ TRUE linguistic functioning.

Stay tuned for the second portion of this post entitled: “What Research Shows About the Functional Relevance of Standardized Speech Tests?” to find out the best practices in the assessment of speech sound disorders in children.

References:

  1. Spaulding, Plante & Farinella (2006) Eligibility Criteria for Language Impairment: Is the Low End of Normal Always Appropriate?
  2. Spaulding, Szulga, & Figueria (2012) Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers
  3. Ireland, Hall-Mills & Millikin (2012) Appropriate Implementation of Severity Ratings, Regulations, and State Guidance: A Response to “Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers” by Spaulding, Szulga, & Figueria (2012)
  4. Betz et al. (2013) Factors Influencing the Selection of Standardized Tests for the Diagnosis of Specific Language Impairment

 

Posted on

Test Review: Test of Written Language-4 (TOWL-4)

Today, due to popular demand, I am reviewing the Test of Written Language-4, or TOWL-4. The TOWL-4 assesses the basic writing readiness skills of students 9:00-17:11 years of age. The test consists of two forms, A and B (which contain different subtest content).

According to the manual, the entire test takes approximately 60-90 minutes to administer and examines 7 skill areas. Only the "Story Composition" subtest is officially timed (the student is given 5 minutes to draft an outline and then 15 minutes to write the story). However, in my experience each subtest administration, even with students presenting with mild-moderately impaired writing abilities, takes approximately 10 minutes to complete with average results (can you see where I am going with this yet?)

For detailed information regarding the TOWL-4 development and standardization, validity and reliability, please see HERE.

Below are my impressions (to date) of using this assessment with students between 11 and 14 years of age with (known) mild-moderate writing impairments.

Subtests:

1. Vocabulary – The student is asked to write a sentence that incorporates a stimulus word. E.g.: For 'ran', a student may write, "I ran up the hill." The student is not allowed to change the word in any way, such as writing 'running' instead of 'ran'. If this occurs, an automatic loss of points takes place. Ceiling is reached when the student makes 3 errors in a row.

To continue, while some of the subtest vocabulary words are perfectly appropriate for younger children (~9), the majority are too simplistic to assess the written vocabulary of middle and high schoolers. Other words included in the 'Vocabulary' subtest are:

  1. Form A (#1-20): eat, tree, house, circus, walk, bird, edge, laugh, donate, faithful, aboard, humble, though, confusion, lethal, deny, pulp, verge, revive, intact, etc.
  2. Form B (#1-20): see, help, prize, sky, stove, cry, enormous, chimney, avoid, nonsense, snout, wept, exotic, cycle, deb, specify, debatable, pastel, rugged, studious, etc.

These words may work well to test the knowledge of younger children, but they do not take into account the challenging academic standards set forth for older students. As a result, students 11+ years of age may pass this subtest with flying colors but still present with a fair amount of difficulty using sophisticated vocabulary words in written compositions.

2/3. Spelling and Punctuation (subtests 2 and 3). These two subtests are administered jointly but scored separately. Here, the student is asked to write sentences dictated by the examiner using appropriate rules for spelling, punctuation, and capitalization. Ceiling for each subtest is reached separately, when the student makes 3 errors in a row on that subtest. In other words, if a student uses correct punctuation but incorrect spelling, his/her ceiling on the 'Spelling' subtest will be reached sooner than on the 'Punctuation' subtest, and vice versa.
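Because this dual ceiling can trip up new examiners, here is a minimal sketch (my own illustration of the rule as described above, not the official scoring procedure) of how the two ceilings are tracked independently across jointly dictated items:

```python
# Sketch of the joint Spelling/Punctuation ceiling rule described above:
# sentences are dictated together, but each skill tracks its own run of
# consecutive errors and stops earning credit after 3 errors in a row.
def score_dictation(item_results):
    """item_results: list of (spelling_ok, punctuation_ok) per dictated sentence."""
    totals = {"spelling": 0, "punctuation": 0}
    error_run = {"spelling": 0, "punctuation": 0}
    ceilinged = {"spelling": False, "punctuation": False}
    for spelling_ok, punctuation_ok in item_results:
        for skill, ok in (("spelling", spelling_ok),
                          ("punctuation", punctuation_ok)):
            if ceilinged[skill]:
                continue  # this skill's ceiling was already reached
            if ok:
                totals[skill] += 1
                error_run[skill] = 0
            else:
                error_run[skill] += 1
                if error_run[skill] == 3:
                    ceilinged[skill] = True
    return totals

# Correct punctuation but weak spelling: spelling ceilings out first.
print(score_dictation([(True, True), (False, True), (False, True),
                       (False, True), (True, True)]))
# -> {'spelling': 1, 'punctuation': 5}
```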

Similar to the 'Vocabulary' subtest, I feel that the sentences the students are asked to write are far too simplistic to showcase their "true" grade-level abilities. Below are some examples of sentences from both forms:

  1. Form A: (2) Run away.; (3) Birds fly.; (9) Who ate the food? (17) The electricity failed in Dallas, Texas.; (22) Because of the confusion, she sought legal help.
  2. Form B: (3) Am I going?; (18) Bring back three items: milk, crackers, and butter.; (23) After the door was closed, the sound was barely audible. 

As you can see from the above, the requirements of these subtests are not too stringent. The spelling words are simple, and the punctuation requirements are very basic: a question mark here, an exclamation mark there, with a few commas in between. But I was particularly disappointed with the 'Spelling' subtest.

Here’s why. I have a 6th grade client on my caseload with significant well-documented spelling difficulties. When this subtest was administered to him he scored within the average range (Scaled Score of 8 and Percentile Rank of 25).  However, an administration of Spelling Performance Evaluation for Language and Literacy – SPELL-2yielded 3 assessment pages of spelling errors, as well as 7 pages of recommendations on how to remediate those errors.  Had he received this assessment as part of an independent evaluation from a different examiner, nothing more would have been done regarding his spelling difficulties, since the TOWL-4 revealed an average spelling performance due to it’s focus on overly simplistic vocabulary.

4. Logical Sentences – The student is asked to edit an illogical sentence so that it makes better sense. E.g.: "John blinked his nose" is changed to "John blinked his eye." Ceiling is reached when the student makes 3 errors in a row. Again, I'm not too thrilled with this subtest. Rather than truly attempting to ascertain the student's grammatical and syntactic knowledge at the sentence level, a large portion of this subtest deals with easily recognizable semantic incongruities such as the one above.

5. Sentence Combining – The student integrates the meaning of several short sentences into one grammatically correct written sentence. E.g.: "John drives fast" is combined with "John has a red car," making "John drives his red car fast." Ceiling is reached when the student makes 3 errors in a row. The first few items contain only two sentences, which can be combined by adding the conjunction "and".

The remaining items are a bit more difficult due to (a) the addition of more sentences and (b) an increase in the complexity of language needed to efficiently combine them. This is a nice subtest to administer to students who present with difficulty effectively and efficiently expressing their written thoughts on paper. It is particularly useful with students who write down a lot of extraneous information in their compositions/essays and frequently overuse run-on sentences.

6. Contextual Conventions – The student is asked to write a story in response to a stimulus picture. S/he earns points for satisfying specific requirements (identified below) relative to combined orthographic (e.g., punctuation, spelling) and grammatical conventions (e.g., sentence construction, noun-verb agreement). The student's written composition needs to contain more than 40 words in order for effective analysis to take place.

The scoring criteria range from no credit, a score of 0 (3 or more mistakes), to partial credit, a score of 1 (1-2 mistakes), to full credit, a score of 3 (no mistakes); see the short scoring sketch after the parameter list below.

Scoring Parameters:

  1. Sentences begin with a capital letter
  2. Paragraphs
  3. Use of quotations marks
  4. Use of comma to set off direct quotes
  5. Correct use of apostrophe
  6. Use of a question mark
  7. Use of exclamation point
  8. Capitalization of proper nouns (including story title)
  9. Number of non-duplicated misspelled words
  10. Other use of punctuation (hyphen, parentheses, etc.)
  11. Use of fragments
  12. Use of run-on/rambling sentences
  13. Use of compound sentences
  14. Use of specific coordinating conjunction
  15. Use of introductory phrases/clauses
  16. Noun-verb disagreement
  17. Sentences in paragraphs
  18. Sentence composition
  19. Number of correctly spelled words with 7 or more letters
  20. Number of correctly spelled words with 3 syllables or more
  21. Appropriate use of articles

While the above criteria are highly useful for younger elementary-aged students who may exhibit significant difficulties in the domain of writing, older middle school and high school aged students, as well as elementary-aged students with moderate writing difficulties, may attain average scores on this subtest but still present with significant difficulties in this area as compared to typically developing grade-level peers. As a result, in addition to this assessment, it is recommended that a functional assessment of grade-level writing also be performed in order to accurately identify the student's writing needs.

7. Story Composition – The student’s story is evaluated relative to the quality of its composition (E.g.: vocabulary, plot, prose, development of characters, and interest to the reader).

The examiner first provides the student with an example of a good story by reading one written by another student. Then, the examiner provides the student with an appropriate picture card and tells them that they need to take time to plan their story and make an outline on the (also provided) scratch paper. The student has 5 minutes to plan before writing the actual story. After the 5 minutes elapse, they have 15 minutes to write the story. It is important to note that Story Composition is the very first subtest administered to the student. Once they complete it, they are ready to move on to the Vocabulary subtest.

Scoring Parameters:

  1. Story beginning
  2. Reference to a specific event (occurring before or after the picture)
  3. Story sequence
  4. Plot
  5. Characters show emotions
  6. Story action
  7. Story ending
  8. Writing style
  9. Story (overall)
  10. Specific (listed) story vocabulary
  11. Overall vocabulary

With respect to this subtest, I found it significantly more useful with younger students as well as significantly impaired students than with older students or students with mild-moderate writing difficulties. Again, if your aim is to get an accurate picture of an older student's writing abilities, I definitely recommend using informal writing assessment rubrics based on the student's grade level.

OVERALL IMPRESSIONS:

Strengths:

  • Thorough assessment of basic writing areas
  • Flexible subtest administration (can be done on multiple occasions with students who fatigue easily)

Limitations:

  • Untimed test administration (with the exception of the Story Composition subtest) may not be very functional with students who present with significant processing difficulties. One 12-year-old student actually took ~40 minutes to complete each subtest
  • Primarily  useful for students with severe deficits in the area of written expression
  • Lack of computer scoring
  • Lack of remediation suggestions based on subtest deficits

Overall, I do find the TOWL-4 a very useful testing measure to have in my toolbox, as it is terrific for ruling out weaknesses in the student's basic writing abilities with respect to simple vocabulary, sentence construction, writing mechanics, punctuation, etc. If I identify previously unidentified gaps in basic writing skills, I can then readily intervene where needed.

However, it is important to understand that the TOWL-4 is only a starting point for most of our students with complex literacy needs whose writing abilities are above the severe level of functioning. Most students with mild-moderate writing difficulties will pass this test with flying colors but still present with significant writing needs. As a result, I highly recommend a functional grade-level writing assessment as a supplement to the above standardized testing.

References: 

Hammill, D. D., & Larson, S. C. (2009). Test of Written Language—Fourth Edition. (TOWL-4). Austin, TX: PRO-ED.

Disclaimer: The views expressed in this post are the personal impressions of the author. This author is not affiliated with PRO-ED in any way and was NOT provided by them with any complimentary products or compensation for the review of this product. 

 

Posted on

App Review and Giveaway: Questions Hunt

Today I am reviewing an app from Virtual Speech Center called Questions Hunt.  The app targets answering 60 yes/no as well as 360 WH questions (what, where, who, when, why and how) in young children with language disorders.

This app is thematically based with questions in the following categories:

  • Beach
  • Park
  • Store
  • Campground
  • Airport and
  • School

This app is very easy to navigate and features the typical Virtual Speech Center setup.



Once you select your individual student or group of students, simply tap on the question types you want them to answer (when, why, etc.) and the location you want to target (beach, park, etc.), and you're good to go.

Next, you will be taken to the pages depicting the campground, school, etc. These depict people and objects with question marks hovering above them. Click on a question mark to begin answering questions.

If you are working on improving the student's receptive language abilities, the student is shown the question (which is also read aloud) along with 3 multiple-choice answers. S/he can answer by tapping on the correct answer choice.


In expressive tasks,  the students are only shown the questions to which they must provide their own oral responses.


In order to move between questions, the students need to swipe right on the screen until they get to the very end of all the questions in a particular category. Then, if multiple locations were selected (such as the park and the beach), the students need to press the "NEXT" button in order to move on to the next location.

 What I like about this app:

  • I like the bright and engaging thematically based illustrations
  • I really like the fact that I can use the app to target multiple goals including:
    • Auditory memory
    • Listening comprehension
    • Sentence formulation
    • Vocabulary knowledge and use
    • Critical thinking and verbal reasoning
  • I love the fact that this app is very useful for my preschool population as well as for children with developmental disabilities such as ASD and genetic syndromes (Down, Fragile X, etc.)

All in all, this is a nice, functional app for targeting WH questions in language-impaired children. You can find it on iTunes for $4.99, or, thanks to the Virtual Speech Center, you can enter my Rafflecopter giveaway to win a free code.
