Posted on

Components of Qualitative Writing Assessments: What Exactly are We Trying to Measure?

Writing! The one assessment area that challenges many SLPs on a daily basis! Poll 10 SLPs on the topic of writing and you will get 10 completely different responses, ranging from agreement to rejection, along with diverse opinions regarding what should actually be assessed and how exactly it should be accomplished.

Consequently, today I wanted to focus on the basics involved in the assessment of adolescent writing. Why adolescents you may ask? Well, frankly because many SLPs (myself included) are far more likely to assess the writing abilities of adolescents rather than elementary-aged children.

Often, when students are younger and their literacy abilities are weaker, SLPs may not get to the assessment of writing because the students present with so many other deficits that take precedence intervention-wise. However, as students get older and academic requirements increase exponentially, SLPs may be asked more frequently to assess students’ writing abilities, because difficulties in this area significantly affect performance across a variety of classes and subjects.

So what can we assess when it comes to writing? In the words of Helen Lester’s character ‘Pookins’: “Lots!”  There are various types of writing that can be assessed, the most common of which include: expository, persuasive, and fictional. Each of these can be used for assessment purposes in a variety of ways.

To illustrate, if we choose to analyze the student’s written production of fictional narratives, then we may broadly choose to analyze the following aspects of the student’s writing: contextual conventions and writing composition.

The former looks at such writing aspects as the use of correct spelling, punctuation, and capitalization, as well as paragraph formation.

The latter looks at the nitty-gritty elements involved in plot development. These include effective use of literate vocabulary, plotline twists, character development, use of dialogue, etc.

Perhaps we want to analyze the student’s persuasive writing abilities. After all, high school students are expected to utilize this type of writing frequently for essay writing purposes.  Actually, persuasive writing is a complex genre which is particularly difficult for students with language-learning difficulties who struggle to produce essays that are clear, logical, convincing, appropriately sequenced, and take into consideration opposing points of view. It is exactly for that reason that persuasive writing tasks are perfect for assessment purposes.

But what exactly are we looking for, analysis-wise? What should a typical 15-year-old’s persuasive essay contain?

With respect to syntax, a typical student of that age is expected to write complex sentences containing nominal, adverbial, and relative clauses.

With respect to semantics, effective persuasive essays require the use of low-frequency literate vocabulary, such as later-developing connectors (e.g., first of all, next, for this reason, on the other hand, consequently, finally, in conclusion) as well as metalinguistic and metacognitive verbs (“metaverbs”) that refer to acts of speaking (e.g., assert, concede, predict, argue, imply) and thinking (e.g., hypothesize, remember, doubt, assume, infer).

With respect to pragmatics, as students mature, their sensitivity to the perspectives of others improves; as a result, their persuasive essays increase in length (i.e., total number of words produced), and they are able to offer a greater number of different reasons to support their own opinions (Nippold, Ward-Lonergan, & Fanning, 2005).

Now let’s apply our knowledge by analyzing a writing sample of a 15-year-old with suspected literacy deficits. The 10th-grade student was provided with a written prompt first described in the Nippold, Ward-Lonergan, and Fanning (2005) study, entitled “The Circus Controversy”: “People have different views on animals performing in circuses. For example, some people think it is a great idea because it provides lots of entertainment for the public. Also, it gives parents and children something to do together, and the people who train the animals can make some money. However, other people think having animals in circuses is a bad idea because the animals are often locked in small cages and are not fed well. They also believe it is cruel to force a dog, tiger, or elephant to perform certain tricks that might be dangerous. I am interested in learning what you think about this controversy, and whether or not you think circuses with trained animals should be allowed to perform for the public. I would like you to spend the next 20 minutes writing an essay. Tell me exactly what you think about the controversy. Give me lots of good reasons for your opinion. Please use your best writing style, with correct grammar and spelling. If you aren’t sure how to spell a word, just take a guess.” (Nippold, Ward-Lonergan, & Fanning, 2005)

He produced the following written sample during the allotted 20 minutes.

Analysis: This student was able to generate a short, 3-paragraph composition containing an introduction and a body, but without a definitive conclusion. His persuasive essay was judged to be very immature for his grade level due to significant disorganization, a limited ability to support his point of view, and the presence of tangential information in the introduction. The composition was further compromised by numerous writing-mechanics errors (punctuation, capitalization, and spelling), which impacted the coherence and cohesiveness of his written output.

The student’s introduction began with an inventive dialogue, which was irrelevant to the body of his persuasive essay. He did have three important points relevant to the body of the essay: animal cruelty, danger to the animals, and the potential for the animals to harm humans. However, he was unable to adequately develop those points into full paragraphs. The notable absence of proofreading and editing further contributed to the composition’s lack of clarity. The above, coupled with the lack of a conclusion, was not commensurate with grade-level expectations.

Based on the above writing sample, the student’s persuasive composition content (thought formulation and elaboration) was judged to be significantly immature for his grade level and commensurate with the abilities of a much younger student. The student’s composition contained several emerging claims that suggested a vague position. Though the student attempted to back up his opinion and support his position (animals should not be performing in circuses), ultimately he was unable to do so in a coherent and cohesive manner.

Now that we know what the student’s written difficulties look like, the following goals will be applicable with respect to his writing remediation:

Long-Term Goal: Student will improve his written abilities for academic purposes.

  • Short-Term Goals
  1. Student will appropriately utilize parts of speech (e.g., adjectives, adverbs, prepositions, etc.)  in compound and complex sentences.
  2. Student will use a variety of sentence types for story composition purposes (e.g., declarative, interrogative, imperative, and exclamatory sentences).
  3. Student will correctly use past, present, and future verb tenses during writing tasks.
  4. Student will utilize appropriate punctuation at the sentence level (e.g., periods, commas, colons, quotation marks in dialogue, apostrophes in singular possessives, etc.).
  5. Student will utilize appropriate capitalization at the sentence level (e.g., capitalize proper nouns, holidays, product names, titles with names, initials, geographic locations, historical periods, special events, etc.).
  6. Student will use prewriting techniques to generate writing ideas (e.g., list keywords, state key ideas, etc.).
  7. Student will determine the purpose of his writing and his intended audience in order to establish the tone of his writing as well as outline the main idea of his writing.
  8. Student will generate a draft in which information is organized in chronological order via use of temporal markers (e.g., “meanwhile,” “immediately”) as well as cohesive ties (e.g., ‘but’, ‘yet’, ‘so’, ‘nor’) and cause/effect transitions (e.g., “therefore,” “as a result”).
  9. Student will improve coherence and logical organization of his written output via the use of revision strategies (e.g., modify supporting details, use sentence variety, employ literary devices).
  10. Student will edit his draft for appropriate grammar, spelling, punctuation, and capitalization.

There you have it: a quick and easy qualitative writing assessment which can assist SLPs in determining the extent of a student’s writing difficulties as well as in establishing writing remediation targets for intervention purposes.

Using a different type of writing assessment with your students? Please share the details below so we can all benefit from each other’s knowledge of assessment strategies.

References:

  • Nippold, M., Ward-Lonergan, J., & Fanning, J. (2005). Persuasive writing in children, adolescents, and adults: a study of syntactic, semantic, and pragmatic development. Language, Speech, and Hearing Services in Schools, 36, 125-138.

 


Making Our Interventions Count or What’s Research Got To Do With It?

Two years ago I wrote a blog post entitled: “What’s Memes Got To Do With It?” which summarized key points of Dr. Alan G. Kamhi’s 2004 article: “A Meme’s Eye View of Speech-Language Pathology“. It delved into answering the following question: “Why do some terms, labels, ideas, and constructs [in our field] prevail whereas others fail to gain acceptance?”.

Today I would like to reference another article by Dr. Kamhi written in 2014, entitled “Improving Clinical Practices for Children With Language and Learning Disorders“.

This article was written to address the gaps between research and clinical practice with respect to the implementation of EBP for intervention purposes.

Dr. Kamhi begins the article by posing 10 True or False questions for his readers:

  1. Learning is easier than generalization.
  2. Instruction that is constant and predictable is more effective than instruction that varies the conditions of learning and practice.
  3. Focused stimulation (massed practice) is a more effective teaching strategy than varied stimulation (distributed practice).
  4. The more feedback, the better.
  5. Repeated reading of passages is the best way to learn text information.
  6. More therapy is always better.
  7. The most effective language and literacy interventions target processing limitations rather than knowledge deficits.
  8. Telegraphic utterances (e.g., push ball, mommy sock) should not be provided as input for children with limited language.
  9. Appropriate language goals include increasing levels of mean length of utterance (MLU) and targeting Brown’s (1973) 14 grammatical morphemes.
  10. Sequencing is an important skill for narrative competence.

Guess what? Only statement 8 of the above quiz is True! Every other statement from the above is FALSE!

Now, let’s talk about why that is!

First up is the concept of learning vs. generalization. Here, Dr. Kamhi notes that some clinicians in our field still hold an “outdated behavioral view of learning,” which is neither theoretically nor clinically useful. He explains that when we talk about generalization, what children truly have difficulty with is “transferring narrow limited rules to new situations.” “Children with language and learning problems will have difficulty acquiring broad-based rules and modifying these rules once acquired, and they also will be more vulnerable to performance demands on speech production and comprehension (Kamhi, 1988)” (93). After all, it is not “reasonable to expect children to use language targets consistently after a brief period of intervention,” and while we hope that “language intervention [is] designed to lead children with language disorders to acquire broad-based language rules,” this is a hugely difficult task to undertake and execute.

Next, Dr. Kamhi addresses instructional factors, specifically the importance of “varying conditions of instruction and practice.” Here, he points out that while contextualized instruction is highly beneficial to learners, unless we inject variability and modify various aspects of instruction (including context, composition, duration, etc.), we run the risk of limiting our students’ long-term outcomes.

After that, Dr. Kamhi addresses the concept of distributed practice (spacing of intervention) and how important it is for teaching children with language disorders. He points out that a number of recent studies have found that “spacing and distribution of teaching episodes have more of an impact on treatment outcomes than treatment intensity” (94).

He also advocates reducing evaluative feedback to learners in order to “enhance long-term retention and generalization of motor skills.” While he cites research from studies pertaining to speech production, he adds that language learning could also benefit from this practice, as it would reduce conversational disruptions and tuning out on the part of the student.

From there he addresses the limitations of repetition for specific tasks (e.g., text rereading). He emphasizes how important it is for students to recall and retrieve text rather than repeatedly reread it (even without correction), as the latter results in a lack of comprehension/retention of read information.

After that, he discusses treatment intensity. Here, he emphasizes that a higher dose of instruction will not necessarily result in better therapy outcomes, due to research on the effects of “learning plateaus and threshold effects in language and literacy” (95). We have seen research on this with respect to joint book reading, vocabulary word exposure, etc. As such, at a certain point, increased intensity may actually result in decreased treatment benefits.

His next point, against processing interventions, is very near and dear to my heart. Those of you familiar with my blog know that I have devoted a substantial number of posts to the lack of validity of the CAPD diagnosis (as a standalone entity) and urged clinicians to provide language-based interventions vs. specific auditory interventions, which lack treatment utility. Here, Dr. Kamhi makes a great point: “Interventions that target processing skills are particularly appealing because they offer the promise of improving language and learning deficits without having to directly target the specific knowledge and skills required to be a proficient speaker, listener, reader, and writer” (95). The problem is that we have numerous studies on the improvement of isolated skills (e.g., auditory skills, working memory, slow processing, etc.) which clearly indicate the lack of effectiveness of these interventions. As such, “practitioners should be highly skeptical of interventions that promise quick fixes for language and learning disabilities” (96).

Now let us move on to language, and particularly the models we provide to our clients to encourage greater verbal output. Research indicates that when clinicians attempt to expand children’s utterances, they need to provide well-formed language models. Studies show that children select strong input when it’s surrounded by weaker input (the surrounding weaker syllables make the stronger syllables stand out). As such, clinicians should expand upon/comment on what clients are saying with grammatically complete models vs. telegraphic productions.

From there, let us take a look at Dr. Kamhi’s recommendations for grammar and syntax. Grammatical development goes much further than addressing Brown’s morphemes in therapy and calling it a day. As such, it is important to understand that children with developmental language disorders (DLD) (#DevLang) do not have difficulty acquiring all morphemes. Rather, studies have shown that they have difficulty learning grammatical morphemes that reflect tense and agreement (e.g., third-person singular, past tense, auxiliaries, copulas, etc.). As such, the use of measures such as the Tense Marker Total and Productivity Score can yield helpful information regarding which grammatical structures to target in therapy.

With respect to syntax, Dr. Kamhi notes that many clinicians erroneously believe that complex syntax should be targeted when children are much older. The Common Core State Standards do not help this cause, since according to the CCSS complex syntax should be targeted in grades 2-3, which is far too late. Typically developing children begin developing complex syntax around 2 years of age and begin readily producing it around 3 years of age. As such, clinicians should begin targeting complex syntax in the preschool years and not wait until children have mastered all morphemes and clauses (97).

Finally, Dr. Kamhi wraps up his article by offering suggestions regarding prioritizing intervention goals. Here, he explains that goal prioritization is affected by

  • clinician experience and competencies
  • the degree of collaboration with other professionals
  • type of service delivery model
  • client/student factors

He provides a hypothetical case scenario in which the teaching responsibilities are divvied up among three professionals, with the SLP in charge of targeting narrative discourse. Here, he explains that targeting narratives does not involve targeting sequencing abilities: “The ability to understand and recall events in a story or script depends on conceptual understanding of the topic and attentional/memory abilities, not sequencing ability.” He emphasizes that sequencing is not a distinct cognitive process that requires isolated treatment. Yet many SLPs “continue to believe that sequencing is a distinct processing skill that needs to be assessed and treated” (99).

Dr. Kamhi supports the above point by providing an example of two passages: one which describes a random order of events, and another which follows a logical order of events. He then points out that the randomly ordered story relies exclusively on attention and memory in terms of “sequencing,” while the second story reduces demands on memory due to its logical flow of events. As such, retelling deficits seemingly related to sequencing tend to actually be due to “limitations in attention, working memory, and/or conceptual knowledge.” Hence, instead of targeting sequencing abilities in therapy, SLPs should use contextualized language intervention to target aspects of narrative development (macro- and microstructural elements).

Furthermore, it is important to note that the “sequencing fallacy” affects more than just narratives. It is very prevalent in the intervention process in the form of the ubiquitous “following directions” goal(s). Many clinicians readily create this goal for their clients in the belief that it will result in functional therapeutic language gains. However, when one really begins to deconstruct this goal, one realizes that it involves a number of discrete abilities, including memory, attention, concept knowledge, inferencing, etc. Consequently, targeting the above goal will not result in any functional gains for students (their memory abilities will not magically improve as a result of it). Instead, targeting specific language and conceptual goals (e.g., answering questions, producing complex sentences, etc.) and increasing the students’ overall listening comprehension and verbal expression will result in improvements in the areas of attention, memory, and processing, including their ability to follow complex directions.

There you have it! Ten practical suggestions from Dr. Kamhi, ready for immediate implementation! For more information, I highly recommend reading the other articles in the same clinical forum, all of which possess highly practical and relevant ideas for therapeutic implementation.

References:

Kamhi, A. (2014). Improving clinical practices for children with language and learning disorders. Language, Speech, and Hearing Services in Schools, 45(2), 92-103.

Helpful Social Media Resources:

SLPs for Evidence-Based Practice


New Products for the 2017 Academic School Year for SLPs

September is quickly approaching, and school-based speech-language pathologists (SLPs) are preparing to go back to work. Many of them are looking to update their arsenal of speech and language materials for the upcoming academic school year.

With that in mind, I wanted to update my readers regarding all the new products I have recently created with a focus on assessment and treatment in speech language pathology.

My most recent product, Assessment of Adolescents with Language and Literacy Impairments in Speech Language Pathology, is a 130-slide pdf download which discusses how to effectively select assessment materials in order to conduct comprehensive evaluations of adolescents with suspected language and literacy disorders. It contains embedded links to ALL the books and research articles used in the development of this product.

Effective Reading Instruction Strategies for Intellectually Impaired Students is a 50-slide downloadable presentation in pdf format which describes how speech-language pathologists (SLPs) trained in assessment and intervention of literacy disorders (reading, spelling, and writing) can teach phonological awareness, phonics, as well as reading fluency skills to children with mild-moderate intellectual disabilities. It reviews the research on reading interventions conducted with children with intellectual disabilities, lists components of effective reading instruction as well as explains how to incorporate components of reading instruction into language therapy sessions.

Dysgraphia Checklist for School-Aged Children helps to identify students with specific written language deficits who may require further assessment and treatment services to improve their written abilities.

Processing Disorders: Controversial Aspects of Diagnosis and Treatment is a 28-slide downloadable pdf presentation which provides an introduction to processing disorders.  It describes the diversity of ‘APD’ symptoms as well as explains the current controversies pertaining to the validity of the ‘APD’ diagnosis.  It also discusses how the label “processing difficulties” often masks true language and learning deficits in students which require appropriate language and literacy assessment and targeted intervention services.

Checklist for Identification of Speech Language Disorders in Bilingual and Multicultural Children was created to assist Speech Language Pathologists (SLPs) and Teachers in the decision-making process of how to appropriately identify bilingual and multicultural children who present with speech-language delay/deficits (vs. a language difference), for the purpose of initiating a formal speech-language-literacy evaluation.  The goal is to ensure that educational professionals are appropriately identifying bilingual children for assessment and service provision due to legitimate speech language deficits/concerns, and are not over-identifying students because they speak multiple languages or because they come from low socioeconomic backgrounds.

Comprehensive Assessment and Treatment of Literacy Disorders in Speech-Language Pathology is a 125-slide presentation which describes how speech-language pathologists can effectively assess and treat children with literacy disorders (reading, spelling, and writing deficits, including dyslexia) from preschool through adolescence. It explains the impact of language disorders on literacy development, lists formal and informal assessment instruments and procedures, and describes the importance of assessing higher-order language skills for literacy purposes. It reviews components of effective reading instruction, including phonological awareness, orthographic knowledge, vocabulary awareness, morphological awareness, as well as reading fluency and comprehension. Finally, it provides recommendations on how components of effective reading instruction can be cohesively integrated into speech-language therapy sessions in order to improve the literacy abilities of children with language disorders and learning disabilities.

Improving Critical Thinking Skills via Picture Books in Children with Language Disorders is a partial 30-slide presentation which discusses effective instructional strategies for teaching critical thinking skills to children with language disorders via the use of picture books, utilizing both the Original (1956) and Revised (2001) Bloom’s Taxonomy: Cognitive Domain, which encompasses the revised categories of remembering, understanding, applying, analyzing, evaluating, and creating.

From Wordless Picture Books to Reading Instruction: Effective Strategies for SLPs Working with Intellectually Impaired Students is a full 92-slide presentation which discusses how to address the development of critical thinking skills in children with intellectual impairments through a variety of picture books, utilizing the framework outlined in Bloom’s Taxonomy: Cognitive Domain, which encompasses the categories of knowledge, comprehension, application, analysis, synthesis, and evaluation. It shares a number of similarities with the above product, as it also reviews components of effective reading instruction for children with language and intellectual disabilities and provides recommendations on how to integrate reading instruction effectively into speech-language therapy sessions.

Best Practices in Bilingual Literacy Assessments and Interventions is a 105-slide presentation which focuses on how bilingual speech-language pathologists (SLPs) can effectively assess and intervene with simultaneously bilingual and multicultural children (with stronger academic English language skills) diagnosed with linguistically-based literacy impairments. Topics include components of effective literacy assessments for simultaneously bilingual children (with stronger English abilities), best instructional literacy practices, translanguaging support strategies, critical questions relevant to the provision of effective interventions, as well as the use of accommodations, modifications, and compensatory strategies for improvement of bilingual students’ performance in social and academic settings.

Comprehensive Literacy Checklist For School-Aged Children was created to assist Speech Language Pathologists (SLPs) in the decision-making process of how to identify deficit areas and select assessment instruments to prioritize a literacy assessment for school-aged children. The goal is to eliminate the administration of unnecessary or irrelevant tests and focus on the administration of instruments directly targeting the specific areas of difficulty that the student presents with.

You can find these and other products in my online store (HERE). Wishing all of you a highly successful and rewarding school year!



The Importance of Narrative Assessments in Speech Language Pathology (Revised)

As SLPs, we routinely administer a variety of testing batteries in order to assess our students’ speech-language abilities. Grammar, syntax, vocabulary, and sentence formulation get frequent and thorough attention. But how about narrative production? Does it get its fair share of attention when clinicians are looking to determine the extent of a child’s language deficits? I was so curious about what clinicians across the country were doing that in 2013 I created a survey and posted a link to it in several SLP-related Facebook groups. I wanted to find out how many SLPs were performing narrative assessments, in which settings, and with which populations. From those who were performing these assessments, I wanted to know what types of assessments they were using and how they were recording and documenting their findings. Since the purpose of this survey was non-research based (I wasn’t planning on submitting a research manuscript with my findings), I only analyzed the first 100 responses that came my way (the rest were very similar in nature) in order to get the general flavor of current trends among clinicians when it came to narrative assessments. Here’s a brief overview of my [limited] findings.


Improving Executive Function Skills of Language Impaired Students with Hedbanz

Those of you who have previously read my blog know that I rarely use children’s games to address language goals. However, over the summer I have been working on improving the executive function abilities (EFs) of some of the language-impaired students on my caseload. As such, I found select children’s games to be highly beneficial for improving language-based executive function abilities.

For those of you who are only vaguely familiar with this concept, executive functions are higher-level cognitive processes involved in the inhibition of thought, action, and emotion, which are located in the prefrontal cortex of the frontal lobe of the brain. The development of executive functions begins in early infancy, but it can be easily disrupted by a number of adverse environmental and organic experiences (e.g., psychosocial deprivation, trauma). Furthermore, research in this area indicates that children with language impairments present with executive function weaknesses which require remediation.


EF components include working memory, inhibitory control, planning, and set-shifting.

  • Working memory
    • Ability to store and manipulate information in mind over brief periods of time
  • Inhibitory control
    • Suppressing responses that are not relevant to the task
  • Set-shifting
    • Ability to shift behavior in response to changes in tasks or environment

Simply put, EFs contribute to the child’s ability to sustain attention, ignore distractions, and succeed in academic settings. By now some of you must be wondering: “So what does Hedbanz have to do with any of it?”

Well, Hedbanz is a quick-paced multiplayer (2-6 players) game of “What Am I?” for children ages 7 and up. Players get 3 chips and wear a “picture card” in their headband. They need to ask questions in rapid succession to figure out what they are: “Am I a fruit?” “Am I a dessert?” “Am I sports equipment?” When they figure it out, they get rid of a chip. The first player to get rid of all three chips wins.

The game sounds deceptively simple. Yet any SLPs or parents who have ever played this game with their language-impaired students/children will be quick to note how extraordinarily difficult it is for the children to figure out what their card is. Interestingly, in my clinical experience, I’ve noticed that it’s not just moderately language-impaired children who present with difficulty playing this game. Even my bright, average-intelligence teens, who have passed vocabulary and semantic flexibility testing (such as the WORD Test 2-Adolescent or the Vocabulary Awareness subtest of the Test of Integrated Language and Literacy), significantly struggle with their language organization when playing this game.

So what makes Hedbanz so challenging for language-impaired students? Primarily, it’s the involvement and coordination of multiple executive functions during the game. In order to play Hedbanz effectively and effortlessly, the following EF involvement is needed:

  • Task Initiation
    • Students with executive function impairments will often “freeze up” and as a result may have difficulty initiating the asking of questions in the game because many will not know what kind of questions to ask, even after extensive explanations and elaborations by the therapist.
  • Organization
    • Students with executive function impairments will present with difficulty organizing their questions by meaningful categories and as a result will frequently lose their train of thought in the game.
  • Working Memory
    • This executive function requires the student to keep key information in mind as well as keep track of the questions they have already asked.
  • Flexible Thinking
    • This executive function requires the student to consider a situation from multiple angles in order to figure out the quickest and most effective way of arriving at a solution. During the game, students may present with difficulty flexibly generating enough organizational categories in order to be effective participants.
  • Impulse Control
    • Many students with difficulties in this area may blurt out an inappropriate category or an inappropriate question without thinking it through first.
      • They may also present with difficulty set-shifting. To illustrate, one of my 13-year-old students with ASD kept repeating the same question on his turn, despite having previously been given the answer by me as well as by the other players.
  • Emotional Control
    • This executive function helps students keep their emotions in check when the game becomes too frustrating. Many students with difficulties in this area will begin reacting behaviorally when things don’t go their way and they are unable to figure out what their card is quickly enough. As a result, they may have difficulty mentally regrouping and reorganizing their questions when something goes wrong in the game.
  • Self-Monitoring
    • This executive function allows the students to figure out how well or how poorly they are doing in the game. Students with poor insight into their own abilities may present with difficulty understanding that they are doing poorly and may require explicit instruction in order to change their question types.
  • Planning and Prioritizing
    • Students with poor abilities in this area will present with difficulty prioritizing their questions during the game.

Consequently, all of the above executive functions can be addressed via language-based goals. However, before I cover that, I’d like to review some of my session procedures first.

Typically, long before game initiation, I use the cards from the game to prep the students by teaching them how to categorize and classify the presented information so that they can play the game effectively and efficiently.

Rather than using the “tip cards”, I explain to the students how to categorize information effectively.

This, in turn, becomes a great opportunity for teaching students relevant vocabulary words, which can be extended far beyond playing the game.

I begin the session by explaining to the students that pretty much everything can be roughly divided into two categories: animate (living) or inanimate (nonliving) things. I explain that humans, animals, and plants belong to the category of living things, while everything else belongs to the category of inanimate objects. I further divide the category of inanimate things into naturally existing and man-made items. I explain to the students that the naturally existing category includes bodies of water, landmarks, as well as things in space (moon, stars, sky, sun, etc.). In contrast, things constructed in factories or made by people would be examples of man-made objects (e.g., buildings, aircraft, etc.).

When I’m confident that the students understand my general explanations, we move on to discuss further refinement of these broad categories. If a student determines that their card belongs to the category of living things, we discuss how from there the student can further determine whether they are an animal, a plant, or a human. If a student determines that their card belongs to the animal category, we discuss how we can narrow down the options by asking questions regarding habitat (“Am I a jungle animal?”) and classification (“Am I a reptile?”). From there, discussion of attributes prominently comes into play. We discuss shapes, sizes, colors, accessories, etc., until the student is able to confidently figure out which animal is depicted on their card.

In contrast, if the student’s card belongs to the inanimate category of man-made objects, we further subcategorize the information by the object’s location (“Am I found outside or inside?”; “Am I found in ___ room of the house?”, etc.), utility (“Can I be used for ___?”), as well as attributes (e.g., size, shape, color, etc.)

Thus, in addition to improving the students’ semantic flexibility skills (production of definitions, synonyms, attributes, etc.), the game teaches the students to organize and compartmentalize information in order to arrive at a conclusion in the most time-expedient fashion.

Now we are ready to discuss what types of EF language-based goals SLPs can target by simply playing this game.

1. Initiation: Student will initiate questioning during an activity in __ number of instances per 30-minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

2. Planning: Given a specific routine, student will verbally state the order of steps needed to complete it with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

3. Working Memory: Student will repeat clinician provided verbal instructions pertaining to the presented activity, prior to its initiation, with 80% accuracy  given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

4. Flexible Thinking: Following a training by the clinician, student will generate at least __ questions needed for task completion (e.g., winning the game) with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

5. Organization: Student will use predetermined written/visual cues during an activity to assist self with organization of information (e.g., questions to ask) with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

6. Impulse Control: During the presented activity the student will curb blurting out inappropriate responses (by silently counting to 3 prior to providing his response) in __ number of instances per 30 minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

7. Emotional Control: When upset, student will verbalize his/her frustration (vs. behaviorally acting out) in __ number of instances per 30-minute session given (maximal, moderate, minimal) type of ___ (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

8. Self-Monitoring:  Following the completion of an activity (e.g., game) student will provide insight into own strengths and weaknesses during the activity (recap) by verbally naming the instances in which s/he did well, and instances in which s/he struggled with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

There you have it. This one simple game doesn’t just target a plethora of typical expressive language goals. It can effectively target and improve language-based executive function goals as well. Considering that it sells for approximately $12 on Amazon.com, that’s a pretty useful therapy material to have in one’s clinical tool repertoire. For fancier versions, clinicians can use the “Jeepers Peepers” photo card sets sold by Super Duper Inc. Strapped for cash due to a limited budget? You can find plenty of free materials online by simply searching Google for “Hedbanz cards.” So have a little fun in therapy while your students learn something valuable in the process, and play Hedbanz today!

Related Smart Speech Therapy Resources:

 

Posted on

A Focus on Literacy

In recent months, I have been focusing more and more on speaking engagements as well as the development of products with an explicit focus on the assessment and intervention of literacy in speech-language pathology. Today I’d like to introduce 4 of my recently developed products pertinent to the assessment and treatment of literacy in speech-language pathology.

First up is the Comprehensive Assessment and Treatment of Literacy Disorders in Speech-Language Pathology, which describes how speech-language pathologists can effectively assess and treat children with literacy disorders (reading, spelling, and writing deficits, including dyslexia) from preschool through adolescence. It explains the impact of language disorders on literacy development, lists formal and informal assessment instruments and procedures, and describes the importance of assessing higher order language skills for literacy purposes. It reviews components of effective reading instruction including phonological awareness, orthographic knowledge, vocabulary awareness, morphological awareness, as well as reading fluency and comprehension. Finally, it provides recommendations on how components of effective reading instruction can be cohesively integrated into speech-language therapy sessions in order to improve the literacy abilities of children with language disorders and learning disabilities.

Next up is a product entitled From Wordless Picture Books to Reading Instruction: Effective Strategies for SLPs Working with Intellectually Impaired Students. This product discusses how to address the development of critical thinking skills in children with intellectual impairments through a variety of picture books, utilizing the framework outlined in Bloom’s Taxonomy: Cognitive Domain, which encompasses the categories of knowledge, comprehension, application, analysis, synthesis, and evaluation. It shares a number of similarities with the above product, as it also reviews components of effective reading instruction for children with language and intellectual disabilities and provides recommendations on how to integrate reading instruction effectively into speech-language therapy sessions.

The product Improving Critical Thinking Skills via Picture Books in Children with Language Disorders is also available for sale on its own, with a focus solely on teaching critical thinking skills via the use of picture books.

Finally, my last product, Best Practices in Bilingual Literacy Assessments and Interventions, focuses on how bilingual speech-language pathologists (SLPs) can effectively assess and intervene with simultaneously bilingual and multicultural children (with stronger academic English language skills) diagnosed with linguistically-based literacy impairments. Topics include components of effective literacy assessments for simultaneously bilingual children (with stronger English abilities), best instructional literacy practices, translanguaging support strategies, critical questions relevant to the provision of effective interventions, as well as the use of accommodations, modifications, and compensatory strategies for improving bilingual students’ performance in social and academic settings.

You can find these and other products in my online store (HERE).

Helpful Smart Speech Therapy Resources:

Posted on

New Product Giveaway: Comprehensive Literacy Checklist For School-Aged Children

I wanted to start the new year right by giving away a few copies of a new checklist I recently created entitled: “Comprehensive Literacy Checklist For School-Aged Children“.

It was created to assist Speech Language Pathologists (SLPs) in the decision-making process of identifying deficit areas and selecting assessment instruments in order to prioritize a literacy assessment for school-aged children.

The goal is to eliminate administration of unnecessary or irrelevant tests and focus on the administration of instruments directly targeting the specific areas of difficulty that the student presents with.

*For the purposes of this product, the term “literacy checklist” rather than “dyslexia checklist” is used throughout this document to refer to any deficits in the areas of reading, writing, and spelling that the child may present with, in order to identify possible difficulties in the areas of literacy as well as language.

This checklist can be used for multiple purposes.

1. To identify areas of deficits the child presents with for targeted assessment purposes

2. To highlight areas of strengths (rather than deficits only) the child presents with pre or post intervention

3. To highlight residual deficits for intervention purposes in children already receiving therapy services, without further reassessment

Checklist Contents:

  • Page 1 Title
  • Page 2 Directions
  • Pages 3-9 Checklist
  • Page 10 Select Tests of Reading, Spelling, and Writing for School-Aged Children
  • Pages 11-12 Helpful Smart Speech Therapy Materials

Checklist Areas:

  1. AT RISK FAMILY HISTORY
  2. AT RISK DEVELOPMENTAL HISTORY
  3. BEHAVIORAL MANIFESTATIONS 
  4. LEARNING DEFICITS   
    1. Memory for Sequences
    2. Vocabulary Knowledge
    3. Narrative Production
    4. Phonological Awareness
    5. Phonics
    6. Morphological Awareness
    7. Reading Fluency
    8. Reading Comprehension
    9. Spelling
    10. Writing Conventions
    11. Writing Composition 
    12. Handwriting

You can find this product in my online store HERE.

Would you like to check it out in action? I’ll be giving away two copies of the checklist in a Rafflecopter Giveaway to two winners.  So enter today to win your own copy!

a Rafflecopter giveaway

Posted on

Comprehensive Assessment of Adolescents with Suspected Language and Literacy Disorders

When many of us think of such labels as “language disorder” or “learning disability”, very infrequently do adolescents (students 13-18 years of age) come to mind. Even today, much of the research in the field of pediatric speech pathology involves preschool and school-aged children under 12 years of age.

The prevalence and incidence of language disorders in adolescents are very difficult to estimate, which has led some authors to refer to them as a Neglected Group with Significant Problems having an “invisible disability“.

Far fewer speech-language therapists work with middle-schoolers than with preschoolers and elementary-aged kids, while the number of SLPs working with high-school-aged students is frequently in the single digits in some districts and completely absent in others. In fact, I am frequently told (and often see firsthand) that some administrators try to cut costs by attempting to dictate a discontinuation of speech-language services on the grounds that adolescents “are far too old for services” or can “no longer benefit from services”.

But of course the above is blatantly false. Undetected language deficits don’t resolve with age! They simply worsen and turn into learning disabilities. Similarly, a lack of necessary and appropriate service provision to children with diagnosed language impairments at the middle-school and high-school levels will strongly affect their academic functioning and hinder their future vocational outcomes.

A cursory look at speech pathology-related Facebook groups as well as ASHA forums reveals numerous SLPs in a continual search for the best methods of assessment and treatment of older students (~12-18 years of age).

Consequently, today I wanted to dedicate this post to a review of standardized assessment options available for students 12-18 years of age with suspected language and literacy deficits.

Most comprehensive standardized assessments “typically focus on semantics, syntax, morphology, and phonology, as these are the performance areas in which specific skill development can be most objectively measured” (Hill & Coufal, 2005, p. 35). Very few of them actually incorporate aspects of literacy into their subtests in a meaningful way. Yet by the time students reach adolescence, literacy begins to play an incredibly critical role not just in all aspects of academics but also in social communication.

So when it comes to comprehensive general language testing, I highly recommend that SLPs select standardized measures with a focus on not only language but also literacy. Presently, of all the comprehensive assessment tools, I highly prefer the Test of Integrated Language and Literacy (TILLS) for students up to 18 years of age (see a comprehensive review HERE), which covers such literacy areas as phonological awareness, reading fluency, reading comprehension, writing, and spelling, in addition to such traditional language areas as vocabulary awareness, following directions, story recall, etc. However, while comprehensive tests have numerous uses, their sole administration will not constitute an adequate assessment.

So what areas should be assessed during language and literacy testing? Below are a few suggestions of standardized testing measures (and informal procedures) aimed at exploring students’ abilities in particular areas pertaining to language and literacy.

TESTS OF LANGUAGE

TESTS OF LITERACY

It is understandable how, given the sheer number of assessment choices, some clinicians may feel overwhelmed and be unsure about the starting point of an adolescent evaluation. Consequently, the use of a checklist prior to the initiation of assessment may be highly useful in order to identify potential language weaknesses/deficits the student might experience. It will also allow clinicians to prioritize the hierarchy of testing instruments to use during the assessment.

While clinicians are encouraged to develop such checklists for their personal use,  those who lack time and opportunity can locate a number of already available checklists on the market. 

For example, the comprehensive 6-page Speech Language Assessment Checklist for Adolescents (below) can be given to caregivers, classroom teachers, and even older students in order to check off the most pressing difficulties the student is experiencing in an academic setting. 

adolescent checklist

It is important for several individuals to fill out this checklist to ensure consistency of deficits, prior to determining whether an assessment is warranted in the first place and if so, which assessment areas need to be targeted.

Checklist Categories:

  1. Receptive Language
  2. Memory, Attention and Cognition
  3. Expressive Language
  4. Vocabulary
  5. Discourse
  6. Speech
  7. Voice
  8. Prosody
  9. Resonance
  10. Reading
  11. Writing
  12. Problem Solving
  13. Pragmatic Language Skills
  14. Social Emotional Development
  15. Executive Functioning

alolescent pages sample

Based on the checklist administration, SLPs can reliably pinpoint the student’s areas of deficit without needless administration of unrelated/unnecessary testing instruments. For example, if a student presents with deficits in the areas of problem solving and social pragmatic functioning, the administration of a general language test such as the Clinical Evaluation of Language Fundamentals® – Fifth Edition (CELF-5) would NOT be functional (especially if the previous administration of educational testing did not reveal any red flags). In contrast, the administration of such tests as the Test Of Problem Solving 2 Adolescent and the Social Language Development Test Adolescent would better reflect the student’s deficits in the above areas. (Checklist HERE; checklist sample HERE.)

It is very important to understand that students presenting with language and literacy deficits will not outgrow these deficits on their own. While there may be “a time period when the students with early language disorders seem to catch up with their typically developing peers” (e.g., illusory recovery) by undergoing a “spurt” in language learning (Sun & Wallach, 2014), these spurts are typically followed by a “post-spurt plateau”. This is because, due to ongoing challenges and an increase in academic demands, “many children with early language disorders fail to ‘outgrow’ these difficulties or catch up with their typically developing peers” (Sun & Wallach, 2014). As such, many adolescents “may not show academic or language-related learning difficulties until linguistic and cognitive demands of the task increase and exceed their limited abilities” (Sun & Wallach, 2014). Consequently, SLPs must consider the “underlying deficits that may be masked by early oral language development” and “evaluate a child’s language abilities in all modalities, including pre-literacy, literacy, and metalinguistic skills” (Sun & Wallach, 2014).

References:

  1. Hill, J. W., & Coufal, K. L. (2005). Emotional/behavioral disorders: A retrospective examination of social skills, linguistics, and student outcomes. Communication Disorders Quarterly, 27(1), 33–46.
  2. Sun, L., & Wallach, G. (2014). Language disorders are learning disabilities: Challenges on the divergent and diverse paths to language learning disability. Topics in Language Disorders, 34(1), 25–38.

Helpful Smart Speech Therapy Resources 

  1. Assessment of Adolescents with Language and Literacy Impairments in Speech Language Pathology 
  2. Assessment and Treatment Bundles 
  3. Social Communication Materials
  4. Multicultural Materials 

 

Posted on

Review of the Test of Integrated Language and Literacy (TILLS)

The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-age children. As I have been using this test since it was published, I wanted to take the opportunity today to share a few of my impressions of this assessment.

               

First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals – 5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students’ scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out either low or slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to hear about the development of TILLS, almost simultaneously through ASHA as well as the SPELL-Links ListServe. I was particularly happy because I knew that some of this test’s developers (e.g., Dr. Elena Plante, Dr. Nickola Nelson) had published solid research in the areas of psychometrics and literacy, respectively.

According to the TILLS developers it has been standardized for 3 purposes:

  • to identify language and literacy disorders
  • to document patterns of relative strengths and weaknesses
  • to track changes in language and literacy skills over time

The subtests can be administered in isolation (with the exception of a few) or in their entirety. The administration of all 15 subtests may take approximately an hour and a half, while the administration of the core subtests typically takes ~45 minutes.

Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.

  • Subtest 5 – Nonword Spelling
  • Subtest 7 – Reading Comprehension
  • Subtest 10 – Nonword Reading
  • Subtest 11 – Reading Fluency
  • Subtest 12 – Written Expression

However, if needed, there are several tests of early reading and writing abilities available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., the TERA-3: Test of Early Reading Ability–Third Edition; the Test of Early Written Language, Third Edition (TEWL-3), etc.).

Let’s move on to take a deeper look at its subtests. Please note that for the purposes of this review all images came directly from and are the property of Brookes Publishing Co (clicking on each of the below images will take you directly to their source).

1. The Vocabulary Awareness (VA) subtest (description above) requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works great for teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student’s word-reading abilities by asking them to read all of the words before reading all the word choices to them. This way you can informally document any word misreadings made by the student, even in the presence of an average subtest score.


2. The Phonemic Awareness (PA) subtest (description above) requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, similar to tests such as the CTOPP-2: Comprehensive Test of Phonological Processing–Second Edition or the Phonological Awareness Test 2 (PAT 2), it is still a highly useful and reliable measure of phonemic awareness (one of many precursors to reading fluency success). This is especially true because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the students’ executive function abilities in addition to their phonemic awareness skills.


3. The Story Retelling (SR) subtest (description above) requires students to do just that: retell a story. Be mindful of the fact that the presented stories have reduced complexity. Thus, unless the students possess significant retelling deficits, the above subtest may not capture their true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade), I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician’s narrative model (e.g., HERE). For early elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie seen recently (or liked significantly) by them without the benefit of visual support altogether (e.g., HERE).


4. The Nonword Repetition (NR) subtest (description above) requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task’s heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments will be observed to present with patterns of segment substitutions (subtle substitutions of sounds and syllables in presented nonsense words) as well as segment deletions of nonword sequences more than 2-3 or 3-4 syllables in length (depending on the child’s age).


5. The Nonword Spelling (NS) subtest (description above) requires students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to this subtest, in the same assessment session. In contrast to real-word spelling tasks, students cannot memorize the spelling of the presented words, which are still bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze the subtest results in order to determine whether dialectal differences, rather than the presence of an actual disorder, are responsible for the error patterns.


6. The Listening Comprehension (LC) subtest (description above) requires students to listen to short stories and then definitively answer story questions via the available answer choices: “Yes”, “No”, and “Maybe”. This subtest also indirectly measures the students’ metalinguistic awareness skills, as these are needed to detect when the text does not provide sufficient information to answer a particular question definitively (e.g., when a “Maybe” response may be called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement this subtest with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be ‘good guessers’ but who are also reported to present with word-finding difficulties at the sentence and discourse levels.


7. The Reading Comprehension (RC) subtest (description above) requires students to read short stories and answer story questions in “Yes”, “No”, and “Maybe” format. This subtest is not stand-alone and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story out loud in order to determine whether s/he can proceed with taking this subtest or should discontinue due to being an emergent reader; the criterion for discontinuation is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.

While I find it terrific for students with severe-profound reading deficits and/or below average IQ, a number of my students with average IQ and moderately impaired reading skills managed to pass it via a combination of guessing and luck, despite being observed to misread aloud between 40-60% of the presented words. Be mindful of the fact that such students may typically make up to 5-6 errors during the reading of the first story. Thus, according to administration guidelines, these students will be allowed to proceed and take this subtest. They will then continue to make text misreadings during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is a definitive (“Yes,” “No,” and “Maybe”) format vs. an open-ended question format, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing the administration of this subtest with grade-level (or below grade-level) texts (see HERE and/or HERE) to assess the student’s reading comprehension informally.

I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student’s reading for further analysis (see Reading Fluency section below). After the completion of the story, I ask the student questions with a focus on main idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student’s age, I may ask them abstract/factual text questions with and without text access. Overall, I find that informal administration of grade-level (or even below grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student’s reading comprehension abilities than administration of standardized reading tests alone.

TILLS-subtest-8-following-directions

8. The Following Directions (FD) subtest (description above) measures the student’s ability to execute directions of increasing length and complexity. It measures the student’s short-term, immediate, and working memory, as well as their language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters and numbers, etc.) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the students are expected to move the card covering the stimuli and then to execute the visual-spatial, directional, sequential, and logical if-then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student’s memory and comprehension. The subtest was created to simulate a teacher’s use of procedural language (giving directions) in the classroom setting (as per developers).

TILLS-subtest-9-delayed-story-retelling

9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered to the students during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest administration. Despite the relatively short passage of time between both subtests, it is considered to be a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student’s performance on the two measures to determine whether the student did better or worse on either of them (e.g., recalled more information after a period of time passed vs. immediately after being read the story). However, as mentioned previously, some students may recall this previously presented story fairly accurately and as a result may obtain an average score despite a history of teacher/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement the administration of this subtest with a recall of a movie/book recently seen/read by the student (a few days ago) in order to compare both performances and note any weaknesses/limitations.

TILLS-subtest-10-nonword-reading

10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, the presentation of this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have only memorized sight words and are now having difficulty decoding unfamiliar words as a result.

TILLS-subtest-11-reading-fluency

11. The Reading Fluency (RF) subtest (description above) requires students to read facts which make up simple stories fluently and correctly. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.

It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per authors, “a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly.”

Thus, “the mean is to the left of the mode” (see publisher’s image below). This is why a student could earn an average standard score (near the mean) and a low percentile rank when true percentiles are used rather than NCE percentiles (Normal Curve Equivalent). Tills Q&A – Negative Skew

Consequently, under certain conditions (see HERE) the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student’s ability on this subtest.
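The skew effect described above can be illustrated with a toy simulation. Note that the distribution below is simulated data chosen only to produce a negative skew; it is not the actual TILLS norming sample.

```python
# Toy illustration (not TILLS data) of why a negatively skewed distribution
# lets a score near the mean sit well below the 50th percentile.
import numpy as np

rng = np.random.default_rng(0)
# Simulate a ceiling effect: most students score near the maximum,
# while a small tail scores quite poorly (negative skew).
raw = 100 - rng.exponential(scale=8, size=10_000)

mean = raw.mean()
median = np.median(raw)

# True percentile rank of a score exactly at the mean:
true_pct = (raw < mean).mean() * 100

print(f"mean={mean:.1f}, median={median:.1f}, "
      f"percentile rank of a score at the mean={true_pct:.0f}")
```

With a negative skew the mean falls below the median, so a student scoring right at the mean ranks noticeably below the 50th percentile, whereas an NCE (normal-curve) conversion would place that same score at the 50th.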

Indeed, due to the reduced complexity of the presented words some students (especially younger elementary aged) may obtain average scores and still present with serious reading fluency deficits.  

I frequently see this in students with average IQ and good long-term memory, who by second and third grade have managed to memorize an admirable number of sight words, due to which their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking them to read several paragraphs from it (see HERE and/or HERE).

As the students are reading, I calculate their reading fluency by counting the number of words they read per minute. I find this very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student’s reading fluency on both standardized and informal assessment measures in order to document the student’s strengths and limitations.
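The informal fluency calculation described above can be sketched as a small script. The profile cut-offs below are hypothetical placeholders for illustration only; actual expectations should come from grade-level oral reading fluency norms.

```python
# Sketch of the informal reading-fluency calculation described above.
# The rate/accuracy cut-offs are ASSUMED placeholders, not published norms.

def words_correct_per_minute(total_words, errors, seconds):
    """Words read correctly per minute (WCPM) from a timed oral reading."""
    minutes = seconds / 60
    return (total_words - errors) / minutes

def accuracy(total_words, errors):
    """Proportion of words read correctly."""
    return (total_words - errors) / total_words

def reader_profile(wcpm, acc, rate_cutoff=100, accuracy_cutoff=0.95):
    """Label the profile (e.g., slow/inaccurate) using the assumed cut-offs."""
    rate = "fast" if wcpm >= rate_cutoff else "slow"
    precision = "accurate" if acc >= accuracy_cutoff else "inaccurate"
    return f"{rate}/{precision} reader"

# Example: a student reads a 220-word passage in 150 seconds with 18 misreadings.
wcpm = words_correct_per_minute(220, 18, 150)   # -> 80.8
acc = accuracy(220, 18)                         # -> ~0.92
print(reader_profile(wcpm, acc))                # -> slow/inaccurate reader
```

Comparing this informal number against the standardized RF score is what makes the summary described above concrete: the same student can look "average" on the subtest while the timed grade-level passage shows a slow, inaccurate profile.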

TILLS-subtest-12-written-expression

12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.

The examiner needs to show the student a different story which integrates simple facts into a coherent narrative. After the examiner reads that simple story to the student, s/he is expected to tell the student that the story is okay, but “sounds kind of choppy.” The examiner then needs to show the student an example of how the facts could be put together in a way that sounds more interesting and less choppy by combining sentences (see below). Finally, the examiner will ask the student to rewrite the story presented to them in a similar manner (e.g., “less choppy and more interesting”).

tills

After the student finishes his/her story, the examiner will analyze it and generate the following scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as the Examiner’s Practice Workbook are provided to assist with scoring as it takes a bit of training as well as trial and error to complete it, especially if the examiners are not familiar with certain procedures (e.g., calculating T-units).

Full disclosure: Because the above subtest is still essentially sentence combining, I have only used this subtest a handful of times with my students. Typically, when I’ve used it in the past, most of my students fell into two categories: those who failed it completely (by copying the text word for word, failing to generate any written output, etc.) and those who passed it with flying colors but still presented with notable written output deficits. Consequently, I’ve replaced Written Expression subtest administration with the administration of standardized written tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.

Having said that, many clinicians may not have access to other standardized written assessments, or may lack the time to administer entire standardized written measures (which may frequently take between 60 and 90 minutes of administration time). Consequently, in the absence of other standardized writing assessments, this subtest can be effectively used to gauge the student’s basic writing abilities, and, if needed, effectively supplemented by informal writing measures (mentioned above).

TILLS-subtest-13-social-communication

13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially students become actors who need to act out particular scenes while viewing select words presented to them.

Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students.  Here is why.

I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students’ significant deficits in this area. Yet past administration of this subtest showed me that a number of my students can pass this subtest quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer the administration of comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).

Again, as I’ve previously mentioned, many clinicians may not have access to other standardized social communication assessments, or may lack the time to administer such measures in their entirety. Consequently, in the absence of other social communication assessments, this subtest can be used to get a baseline of the student’s basic social communication abilities, and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.

TILLS-subtest-14-digit-span-forward

14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language such as syntax or vocabulary).

TILLS-subtest-15-digit-span-backward

15. The Digit Span Backward (DSB) subtest (description above) assesses the student’s working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students use to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before the administration of this subtest.

SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.

To continue, in addition to the presence of subtests which assess the students’ literacy abilities, the TILLS also possesses a number of interesting features.

For starters, there is the TILLS Easy-Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores; the system does the rest. After the raw scores are plugged in, the system will generate a PDF document with all the data, which includes (but is not limited to) standard scores and percentile ranks, as well as a variety of composite and core scores. The examiner can then save the PDF on their device (laptop, PC, tablet, etc.) for further analysis.

Then there is the quadrant model. According to the TILLS sampler (HERE), “it allows the examiners to assess and compare students’ language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing” and then create “meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students (pg. 21).”

tills quadrant model

Then there is the Student Language Scale (SLS), a one-page checklist that parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student’s performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests, or even in isolation (as per developers).

Furthermore, according to the developers, because the normative sample included several special needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), as well as students with intellectual disabilities (as long as they are functioning at a developmental age of 6 or above).

According to the developers, the TILLS is aligned with the Common Core Standards and can be administered as frequently as two times a year for progress monitoring (a minimum of 6 months after the first administration).

With respect to bilingualism, examiners can use it with caution with simultaneous English learners but not with sequential English learners (see further explanations HERE). Translations of the TILLS are definitely not allowed, as they will undermine test validity and reliability.

So there you have it: these are just some of my impressions regarding this test. Some of you may notice that I spent a significant amount of time pointing out some of the test’s limitations. However, it is very important to note that research indicates there is no such thing as a “perfect standardized test” (see HERE for more information). All standardized tests have their limitations.

Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.

That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS.  For starters, take a look at Brookes Publishing TILLS resources.  These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars.   There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).

But that’s not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc). Take a look at them as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.

Disclaimer:  I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing  or any of the TILLS developers to write it.  All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.



What Research Shows About the Functional Relevance of Standardized Language Tests

As an SLP who routinely conducts speech and language assessments in several settings (e.g., school and private practice), I understand the utility of and the need for standardized speech, language, and literacy tests. However, as an SLP who works with children with dramatically varying degrees of cognition, abilities, and skill-sets, I also highly value supplementing these standardized tests with functional and dynamic assessments, interactions, and observations.

Since a significant value is placed on standardized testing by both schools and insurance companies for the purposes of service provision and reimbursement, I wanted to summarize in today’s post the findings of recent articles on this topic.  Since my primary interest lies in assessing and treating school-age children, for the purposes of today’s post all of the reviewed articles came directly from the Language Speech and Hearing Services in Schools  (LSHSS) journal.

We’ve all been there. We’ve all had situations in which students scored on the low end of normal, or had a few subtest scores in the below-average range which nonetheless averaged out to a total score within normal limits. We’ve all pored over eligibility requirements trying to figure out whether the student should receive therapy services given the stringent standardized testing criteria in some states/districts.

Of course, as it turns out, the answer is never simple.  In 2006, Spaulding, Plante & Farinella set out to examine the assumption: “that children with language impairment will receive low scores on standardized tests, and therefore [those] low scores will accurately identify these children” (61).   So they analyzed the data from 43 commercially available child language tests to identify whether evidence exists to support their use in identifying language impairment in children.

It turns out it did not! Due to the variation in psychometric properties of various tests (see article for specific details), many children with language impairment are overlooked by standardized tests, receiving scores within the average range or not receiving low enough scores to qualify for services. Thus, “the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given.” Furthermore, “even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given (69).”

Consequently, they created a decision tree (see below) with recommendations for clinicians using standardized testing. They recommend using alternate sources of data (sensitivity and specificity rates) to support accurate identification (available for a small subset of select tests).

The idea behind it is: “if sensitivity and specificity data are strong, and these data were derived from subjects who are comparable to the child tested, then the clinician can be relatively confident in relying on the test score data to aid his or her diagnostic decision. However, if the data are weak, then more caution is warranted and other sources of information on the child’s status might have primacy in making a diagnosis (70).”
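To make the role of these psychometric properties concrete, here is a small sketch of how sensitivity, specificity, and prevalence combine (via Bayes' rule) into the confidence one can place in a below-cutoff score. The numbers are hypothetical and are not taken from the article.

```python
# Hypothetical numbers (not from Spaulding et al.) showing how sensitivity,
# specificity, and prevalence combine into diagnostic confidence.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(truly impaired | scored below the cutoff), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A "strong" test vs. a "weak" one, assuming a 10% prevalence of impairment:
strong = positive_predictive_value(0.90, 0.90, 0.10)  # -> 0.50
weak = positive_predictive_value(0.70, 0.70, 0.10)    # -> ~0.21
print(f"strong test PPV={strong:.2f}, weak test PPV={weak:.2f}")
```

Even with these fairly strong assumed values, a below-cutoff score is only a coin flip at low prevalence, which is why the decision tree pushes clinicians toward corroborating sources of information rather than the score alone.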

Fast forward 6 years, and a number of newly revised tests later,  in 2012, Spaulding and colleagues set out to “identify various U.S. state education departments’ criteria for determining the severity of language impairment in children, with particular focus on the use of norm-referenced tests” as well as to “determine if norm-referenced tests of child language were developed for the purpose of identifying the severity of children’s language impairment”  (176).

They obtained published procedures for severity determinations from available U.S. state education departments, which specified the use of norm-referenced tests, and reviewed the manuals for 45 norm-referenced tests of child language to determine if each test was designed to identify the degree of a child’s language impairment.

What they found out was “the degree of use and cutoff-point criteria for severity determination varied across states. No cutoff-point criteria aligned with the severity cutoff points described within the test manuals. Furthermore, tests that included severity information lacked empirical data on how the severity categories were derived (176).”

Thus they urged SLPs to exercise caution in determining the severity of children’s language impairment via norm-referenced test performance “given the inconsistency in guidelines and lack of empirical data within test manuals to support this use (176)”.

Following the publication of this article, Ireland, Hall-Mills & Millikin issued a response to the Spaulding and colleagues article. They pointed out that the “severity of language impairment is only one piece of information considered by a team for the determination of eligibility for special education and related services.” They noted that Spaulding and colleagues left out a host of federal and state guideline requirements and “did not provide an analysis of the regulations governing special education evaluation and criteria for determining eligibility (320).” They pointed out that IDEA prohibits the use of “any single measure or assessment as the sole criterion” for determination of disability and requires that IEP teams “draw upon information from a variety of sources.”

They listed a variety of examples from several different state departments of education (FL, NC, VA, etc.), which mandate the use of functional assessments, dynamic assessments, criterion-referenced assessments, etc., for the determination of language therapy eligibility.

But are SLPs from across the country appropriately using the federal and state guidelines in order to determine eligibility? While one should certainly hope so, it does not always seem to be the case. To illustrate, in 2012, Betz & colleagues asked 364 SLPs to complete a survey “regarding how frequently they used specific standardized tests when diagnosing suspected specific language impairment (SLI)” (133).

Their purpose was to determine “whether the quality of standardized tests, as measured by the test’s psychometric properties, is related to how frequently the tests are used in clinical practice” (133).

What they found out was that the most frequently used tests were the comprehensive assessments including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale as well as one word vocabulary tests such as the Peabody Picture Vocabulary Test. Furthermore, the date of publication seemed to be the only factor which affected the frequency of test selection.

They also found that SLPs frequently did not follow up the comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used vocabulary testing as a second measure. They were understandably puzzled by that finding: “The emphasis placed on vocabulary measures is intriguing because although vocabulary is often a weakness in children with SLI (e.g., Stothard et al., 1998), the research to date does not show vocabulary to be more impaired than other language domains in children with SLI (140).”

According to the authors, “perhaps the most discouraging finding of this study was the lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores (141).”

If SLPs have not significantly changed their practices since that time (2012), the above is certainly disheartening, as it implies that rather than being true diagnosticians, SLPs are using whatever is at hand that has been purchased by their department to indiscriminately assess students with suspected speech-language disorders. If that is truly the case, it certainly calls into question the Ireland, Hall-Mills & Millikin response to Spaulding and colleagues. In other words, though SLPs are aware that they need to comply with state and federal regulations when it comes to unbiased and targeted assessments of children with suspected language disorders, they may not actually be using appropriate standardized testing, much less supplementary informal assessments (e.g., dynamic, narrative, language sampling), in order to administer well-rounded assessments.

So where do we go from here? Well, it’s quite simple really!   We already know what the problem is. Based on the above articles we know that:

  1. Standardized tests possess significant limitations
  2. They are not used with optimal effectiveness by many SLPs
  3.  They may not be frequently supplemented by relevant and targeted informal assessment measures in order to improve the accuracy of disorder determination and subsequent therapy eligibility

Now that we have identified the problem, we need to develop and consistently implement effective practices to ameliorate it. These include researching the psychometric properties of tests (sample size, sensitivity, specificity, etc.), using domain-specific assessments to supplement the administration of comprehensive testing, and supplementing standardized testing with a variety of functional assessments.

SLPs can review testing manuals and consult with colleagues when they feel that the standardized testing is underidentifying students with language impairments (e.g., HERE and HERE).  They can utilize referral checklists (e.g., HERE) in order to pinpoint the students’ most significant difficulties. Finally, they can develop and consistently implement informal assessment practices (e.g., HERE and HERE) during testing in order to gain a better grasp on their students’ TRUE linguistic functioning.

Stay tuned for the second portion of this post entitled: “What Research Shows About the Functional Relevance of Standardized Speech Tests?” to find out the best practices in the assessment of speech sound disorders in children.

References:

  1. Spaulding, Plante & Farinella (2006) Eligibility Criteria for Language Impairment: Is the Low End of Normal Always Appropriate?
  2. Spaulding, Szulga, & Figueria (2012) Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers
  3. Ireland, Hall-Mills & Millikin (2012) Appropriate Implementation of Severity Ratings, Regulations, and State Guidance: A Response to “Using Norm-Referenced Tests to Determine Severity of Language Impairment in Children: Disconnect Between U.S. Policy Makers and Test Developers” by Spaulding, Szulga, & Figueria (2012)
  4. Betz et al. (2013) Factors Influencing the Selection of Standardized Tests for the Diagnosis of Specific Language Impairment