
Do Our Therapy Goals Make Sense or How to Create Functional Language Intervention Targets

In the past several years, I wrote a series of posts on the topic of improving clinical practices in speech-language pathology. Some of these posts were based on my clinical experience and backed by research, while others summarized key points from articles written by prominent colleagues in our field such as Dr. Alan Kamhi, Dr. David DeBonis, Dr. Andrew Vermiglio, etc.

In the past, I have highlighted several articles from the 2014 LSHSS clinical forum entitled Improving Clinical Practice. Today I would like to explicitly summarize another relevant article written by Dr. Wallach in 2014, entitled “Improving Clinical Practice: A School-Age and School-Based Perspective,” which discusses how to change the “persistence of traditional practices” in order to make our language interventions more functional and meaningful for students with language learning difficulties.


Effective Vocabulary Instruction: What SLPs Need to Know

Solid vocabulary knowledge is key to academic success. Vocabulary is the building block of language: it allows us to create complex sentences, tell elaborate stories, and write great essays. Limited vocabulary is a primary indicator of language learning disability, which in turn blocks students from obtaining the critical literacy skills necessary for reading, writing, and spelling. “Indeed, one of the most enduring findings in reading research is the extent to which students’ vocabulary knowledge relates to their reading comprehension” (Osborn & Hiebert, 2004).

Teachers and SLPs frequently inquire about effective vocabulary instruction methods for children with learning disabilities. However, what some researchers found when they set out to “examine how oral vocabulary instruction was enacted in kindergarten” was truly alarming.

In September 2014, Wright and Neuman analyzed approximately 660 hours of observations (4 days, or 12 hours, per classroom) across 55 classrooms in schools spanning a range of socioeconomic statuses.

They found that teachers explained word meanings during “teachable moments” in the context of other instruction.

They also found that teachers:

  • Gave one-time, brief word explanations
  • Engaged in unsystematic word selection
  • Spent minimal time on vocabulary in the subject areas (e.g., science and social studies) in which word explanations were most dense

They also found an economic status discrepancy, namely:

Teachers serving in economically advantaged schools explained words more often and were more likely to address sophisticated words than teachers serving in economically disadvantaged schools.

They concluded that these results suggest that “the current state of instruction may be CONTRIBUTING to rather than ameliorating vocabulary gaps by socioeconomic status.”

Similar findings were reported by other scholars in the field, who noted that “teachers with many struggling children often significantly reduce the quality of their own vocabulary unconsciously to ensure understanding.” So they “reduce the complexity of their vocabulary drastically.” “For many children the teacher is the highest vocabulary example in their life. It’s sort of like having a buffet table but removing everything except a bowl of peanuts: that’s all you get.” (Excerpts from Anita Archer’s interview with Advance for SLPs.)

It is important to note that vocabulary gains are affected by socioeconomic status as well as maternal education level. Thus, children whose family incomes are at or below the poverty level fare much more poorly in vocabulary acquisition than middle-class children. Furthermore, Becker (2011) found that children of more highly educated parents improve their vocabulary more than children whose parents have a lower educational level.

Limitations of Poor Readers:

Poor readers often lack adequate vocabulary to get meaning from what they read. To them, reading is difficult and tedious, and they are unable (and often unwilling) to do the large amount of reading they must do if they are to encounter unknown words often enough to learn them.

The Matthew Effect (“the rich get richer, the poor get poorer”) describes how interactions with the environment exaggerate individual differences over time. Good readers read more, become even better readers, and learn more words. Poor readers read less, become poorer readers, and learn fewer words. The vocabulary problems of students who enter school with limited vocabularies only worsen over time.

Further exacerbating the issue is that students from low-SES households have limited access to books. Sixty-one percent of low-income families have NO BOOKS at all in their homes for their children (Reading Literacy in the United States: Findings from the IEA Reading Literacy Study, 1996). In some under-resourced communities, there is ONLY 1 book for every 300 children (Neuman, S., & Dickinson, D. (Eds.). (2006). Handbook of Early Literacy Research, Vol. 2). In contrast, the average middle-class child has 13+ books in the home.

The above discrepancy can be effectively addressed by holding book drives to collect books for underprivileged students and their siblings. Instructions for successful book drives HERE.

So what are effective methods of vocabulary instruction for children with language impairments?

According to the National Reading Panel (NRP, 2000), a good way for students to learn vocabulary directly is to explicitly teach them individual words and word-learning strategies.

For children with low initial vocabularies, approaches that teach word meanings as part of a semantic field are found to be especially effective (Marmolejo, 1991).

Many vocabulary scholars (Archer, 2011; Biemiller, 2004; Gunning 2004, etc.) agree on a number of select instructional strategies which include:

  • Rich experiences/high classroom language related to the student experience/interests
  • Explicit vs. incidental instruction with frequent exposure to words
  • Instructional routine for vocabulary
    • Establishing word relationships
    • Word-learning strategies to impart depth of meaning
    • Morphological awareness instruction

Response to Intervention: Improving Vocabulary Outcomes

For students with low vocabularies to attain the same level of academic achievement as their peers in language arts, reading, and written composition, targeted Tier II intervention may be needed.

Tier II words are those for which children already understand the underlying concepts; they are useful across a variety of settings and can be used instructionally in a variety of ways.

According to Beck et al. (2002), Tier II words should be the primary focus of vocabulary instruction, as they make the most significant impact on a child’s spoken and written expressive capabilities.

Tier II vocabulary words

  • Are high-frequency words that occur across a variety of domains (conversations, text, etc.)
  • Contain multiple meanings
  • Are descriptive in nature
  • Are the most important words for direct instruction, as they facilitate academic success
    • Examples: hostile, illegible, tolerate, immigrate, tremble, despicable, elapse, etc.

According to Judy Montgomery, “You can never select the wrong words to teach.”

Vocabulary Selection Tips:

  • Make it thematic
  • Embed it in current events (e.g., holidays, elections, seasonal activities, etc.)
  • Relate it to classroom topics (e.g., the French Revolution, the Water Cycle, Penguin Survival in the Polar Regions, etc.)
  • Do not select more than 4-5 words to teach per unit, so as not to overload working memory (Robb, 2003)
  • Select difficult/unknown words that are critical to the passage meaning, which the students are likely to use in the future (Archer, 2015)
  • Select words used across many domains

Examples of Spring-Related Vocabulary

Adjectives: 

  • Flourishing
  • Lush
  • Verdant
  • Refreshing

Nouns: 

  • Allergies
  • Regeneration
  • Outdoors
  • Seedling
  • Sapling

Verbs:

  • Awaken
  • Teem
  • Romp
  • Rejuvenate

Idiomatic Expressions:

  • April Showers Bring May Flowers
  • Green Thumb
  • Spring Chicken
  • Spring Into Action

Creating Effective Vocabulary Intervention Packets and Materials

Sample Activity Suggestions:

  • Text Page (a story introducing the topic, containing context-embedded words)
  • Vocabulary Page (a list of story-embedded words, their definitions, and their parts of speech)
  • Multiple Choice Questions or Open Ended Questions Page
  • Crossword Puzzle Page
  • Fill in the Blank Page
  • True (one word meaning) Synonym/Antonym Matching Page
  • Explain the Multiple Meaning of Words Page
  • Create Complex Sentences Using Story Vocabulary Page

Intervention Technique Suggestions:

 1. Read vocabulary words in context, embedded in relevant short texts

 2. Teach individual vocabulary words directly (definitions) to support comprehension of classroom-specific texts

 3. Provide multiple exposures to vocabulary words in multiple contexts (synonyms, antonyms, multiple-meaning words, etc.)

 4. Use multisensory intervention (visual, auditory, tactile, etc.) to maximize vocabulary gains

 5. Use multiple instructional methods for a range of vocabulary learning tasks and outcomes (read it, spell it, write it in a sentence, practice with a friend, etc.)

 6. Use morphological awareness instruction (post to follow)

  • The ability to recognize, understand, and use word parts (prefixes, suffixes) that “carry significance” when speaking and in reading tasks

Conclusion:

Having the right tools for the job is just a small first step toward creating a vocabulary-rich environment, even for the most disadvantaged learners. So Happy Speeching!



Why Are My Child’s Test Scores Dropping?

“I just don’t understand,” says a bewildered parent. “She’s receiving so many different therapies and tutoring every week, but her scores on educational, speech-language, and psychological testing just keep dropping!”

I hear variations of this comment far too frequently, in both my private practice and the hospital-based outpatient school where I work, from parents looking for an explanation for the decline of their children’s standardized test scores in both cognitive (IQ) and linguistic domains. That is why today I wanted to take a moment to write this blog post to explain a few reasons behind this phenomenon.

Children with language impairments represent a highly diverse group that exists along a continuum. Some children’s deficits may be mild, while others’ are far more severe. Some children may receive very few intervention services and thrive academically, while others can receive an inordinate amount of intervention and still benefit from it only minimally. To put it in very simplistic terms, the above is due to the interaction between two significant influences: the child’s (1) genetic makeup and (2) environmental factors.

There is a reason why language disorders are considered developmental.   Firstly, these difficulties are apparent from a young age when the child’s language just begins to develop.  Secondly, the trajectory of the child’s language deficits also develops along with the child and can progress/lag based on the child’s genetic predisposition, resiliency, parental input, as well as schooling and academically based interventions.

Let us discuss some of the reasons why standardized testing results may decline for select students who are receiving a variety of support services and interventions.

Ineffective Interventions due to Misdiagnosis 

Sometimes, a lack of appropriate/relevant intervention may be responsible. Let’s take the example of alcohol-related deficits misdiagnosed as autism, which I have frequently encountered in my private practice when performing second-opinion testing and consultations. Unfortunately, the above is not uncommon. Many children with alcohol-related impairments present with significant social-emotional dysregulation coupled with significant externalizing behavior manifestations. As a result, without a thorough differential diagnosis, they may be diagnosed with ASD and then provided with ABA therapy services for years with little to no benefit.

Ineffective Interventions due to Lack of Comprehensive Testing 

Let us examine another example: a student with average intelligence but poor reading performance. The student may do well in school up to a certain grade but then begin to flounder academically. Because only the student’s reading abilities ‘seem’ to be adversely impacted, no comprehensive language and literacy evaluations are performed. The student may receive undifferentiated extra reading support in school while his scores continue to drop.

Once the situation ‘gets bad enough’, the student’s language and literacy abilities may be comprehensively assessed. In the vast majority of such situations, these types of assessments yield the following results:

  1. The student’s oral language expression as well as higher order language abilities are adversely affected and require targeted language intervention
  2. The undifferentiated reading intervention provided to the student was NOT targeting the actual areas of weakness

As can be seen from the above examples, targeted intervention is hugely important, and the lack of it may, in a number of cases, be responsible for a student’s declining performance. However, that is not always the case.

But what if it was definitively confirmed that the student was diagnosed appropriately and was receiving quality services, yet still continued to decline academically? What then?

Well, we know that many children with genetic disorders (Down Syndrome, Fragile X, etc.) as well as intellectual disabilities (ID) can make incredibly impressive gains in a variety of developmental areas (e.g., gross/fine motor skills, speech/language, socio-emotional, ADL, etc.)  but their gains will not be on par with peers without these diagnoses.

The situation becomes much more complicated when children without ID (or with mild intellectual deficits) and varying degrees of language impairment receive effective therapies, work very hard in therapy, yet continue to be perpetually behind their peers when it comes to making academic gains. This occurs because of a phenomenon known as Cumulative Cognitive Deficit (CCD).

The Effect of Cumulative Cognitive Deficit (CCD) on Academic Performance 

According to Gindis (2005), CCD “refers to a downward trend in the measured intelligence and/or scholastic achievement of culturally/socially disadvantaged children relative to age-appropriate societal norms and expectations” (p. 304). Gindis further elucidates by quoting Sattler (1992): “The theory behind cumulative deficit is that children who are deprived of enriching cognitive experiences during their early years are less able to profit from environmental situations because of a mismatch between their cognitive schemata and the requirements of the new (or advanced) learning situation” (pp. 575-576).

So who are the children potentially at risk for CCD?

One such group is internationally (and domestically) adopted children, as well as children in foster care. A number of studies show that, due to early life hardships associated with prenatal trauma (e.g., maternal substance abuse, lack of adequate prenatal care, etc.) as well as postnatal stress (e.g., the adverse effects of institutionalization), many of these children have much poorer social and academic outcomes despite being adopted by well-to-do, educated parents who provide them with exceptional care in all aspects of their academic and social development.

Another group is children with diagnosed/suspected psychiatric impairments and concomitant overt/hidden language deficits. Depending on the degree and persistence of the psychiatric impairment, in addition to these children having only intermittent access to classroom academics and therapy interventions, the quality of their therapy may be affected by the course of their illness. Combined with the sporadic nature of interventions, this may result in them falling further and further behind their peers with respect to social and academic outcomes.

A third group (as mentioned previously) is children with genetic syndromes, neurodevelopmental disorders (e.g., autism), and intellectual disabilities. Here, it is very important to state explicitly that children with diagnosed or suspected alcohol-related deficits (FASD) are particularly at risk due to the lack of consensus/training regarding FAS detection/diagnosis. Consequently, these children may evidence a steady ‘decline’ on standardized testing despite exhibiting steady functional gains in therapy.

Brief Standardized Testing Score Tutorial:

When we look at norm-referenced testing results, score interpretation can be quite daunting. For the sake of simplicity,  I’d like to restrict this discussion to two types of scores: raw scores and standard scores.

The raw score is the number of items the child answered correctly on a test or subtest. However, raw scores need to be interpreted to be meaningful. For example, a 9-year-old student may attain a raw score of 12 on a subtest of a particular test (e.g., the Listening Comprehension Test-2, or LCT-2). Without more information, that raw score has no meaning. If the subtest consisted of 15 questions, a raw score of 12 might be an average score. Alternatively, if the subtest had 36 questions, a raw score of 12 might be significantly below average (e.g., on the Test of Problem Solving-3, or TOPS-3).

Consequently, the raw score needs to be converted to a standard score. Standard scores compare the student’s performance on a test to the performance of other students his/her age.  Many standardized language assessments have a mean of 100 and a standard deviation of 15. Thus, scores between 85 and 115 are considered to be in the average range of functioning.
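The arithmetic behind this conversion can be sketched in a few lines of Python. The norming mean and SD used below (12 and 4) are invented purely for illustration; real tests derive standard scores and percentile ranks from published norm tables, and the percentile here assumes a normal distribution.

```python
from math import erf, sqrt

def standard_score(raw, norm_mean, norm_sd):
    """Convert a raw score to a standard score (mean 100, SD 15),
    given the raw-score mean and SD of the age-based norming sample."""
    z = (raw - norm_mean) / norm_sd  # distance from the norming mean, in SDs
    return 100 + 15 * z

def percentile_rank(standard):
    """Percentile rank of a standard score under a normal curve (mean 100, SD 15)."""
    z = (standard - 100) / 15
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF, expressed via erf

# A raw score at the norming sample's mean maps to 100 (50th percentile);
# one SD below the mean maps to 85 (~16th percentile).
print(round(standard_score(12, 12, 4)))  # 100
print(round(standard_score(8, 12, 4)))   # 85
print(round(percentile_rank(85)))        # 16
```

This is why the same raw score of 12 can be average on one subtest and far below average on another: everything depends on the norming sample's mean and spread for that subtest and age.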

Now let’s discuss how testing performance varies across time. Let’s say an 8.6-year-old student took the above-mentioned LCT-2 and attained poor standard scores on all subtests. That student qualifies for services and receives them for a period of one year. At that point, the LCT-2 is re-administered and, much to the parents’ surprise, the student’s standard scores appear even lower than when he took the test as an eight-year-old (illustration below).

Results of The Listening Comprehension Test-2 (LCT-2): Age 8:4

| Subtests | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Main Idea | 5 | 67 | 2 | Severely Impaired |
| Details | 2 | 63 | 1 | Severely Impaired |
| Reasoning | 2 | 69 | 2 | Severely Impaired |
| Vocabulary | 0 | Below Norms | Below Norms | Profoundly Impaired |
| Understanding Messages | 0 | <61 | <1 | Profoundly Impaired |
| Total Test Score | 9 | <63 | 1 | Profoundly Impaired |

(Mean = 100, Standard Deviation = +/-15)

Results of The Listening Comprehension Test-2 (LCT-2): Age 9.6

| Subtests | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Main Idea | 6 | 60 | 0 | Severely Impaired |
| Details | 5 | 66 | 1 | Severely Impaired |
| Reasoning | 3 | 62 | 1 | Severely Impaired |
| Vocabulary | 4 | 74 | 4 | Moderately Impaired |
| Understanding Messages | 2 | 54 | 0 | Profoundly Impaired |
| Total Test Score | 20 | <64 | 1 | Profoundly Impaired |

(Mean = 100, Standard Deviation = +/-15)

However, if one looks at the raw score column, one can see that as a 9-year-old the student actually answered more questions correctly than he did as an 8-year-old: his total raw score went up by 11 points.

The above is a perfect illustration of CCD in action. The student was able to answer more questions on the test, but because academic, linguistic, and cognitive demands steadily increase with age, this quantitative improvement in performance (an increase in the total number of questions answered) did not translate into qualitative improvement (an increase in standard scores).
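The mechanism can be sketched with a toy calculation. The age norms below are hypothetical, invented purely for illustration (they are not actual LCT-2 values): because the norming sample's expected raw score rises with age, a raw-score gain of 11 points can still convert to a lower standard score a year later.

```python
# Hypothetical age norms: age -> (mean raw score, SD) of the norming sample.
# Invented for illustration only; not actual LCT-2 values.
norms = {8: (30.0, 8.0), 9: (42.0, 8.0)}

def standard_score(raw, age):
    """Raw score -> standard score (mean 100, SD 15) against that age's norms."""
    mean, sd = norms[age]
    return round(100 + 15 * (raw - mean) / sd)

print(standard_score(9, 8))   # 61: total raw score of 9 at age 8
print(standard_score(20, 9))  # 59: 11 more raw points a year later, yet a LOWER standard score
```

The student genuinely improved in absolute terms, but the norming benchmark moved faster than he did, so the relative (standard) score fell.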

In the first part of this series I have introduced the concept of Cumulative Cognitive Deficit and its effect on academic performance. Stay tuned for part II of this series which describes what parents and professionals can do to improve functional performance of students with Cumulative Cognitive Deficit.

References:

  • Bowers, L., Huisingh, R., & LoGiudice, C. (2006). The Listening Comprehension Test-2 (LCT-2). East Moline, IL: LinguiSystems, Inc.
  • Bowers, L., Huisingh, R., & LoGiudice, C. (2005). The Test of Problem Solving 3-Elementary (TOPS-3). East Moline, IL: LinguiSystems.
  • Gindis, B. (2005). Cognitive, language, and educational issues of children adopted from overseas orphanages. Journal of Cognitive Education and Psychology, 4 (3): 290-315.
  • Sattler, J. M. (1992). Assessment of Children. Revised and updated 3rd edition. San Diego: Jerome M. Sattler.

Recognizing the Warning Signs of Social Emotional Difficulties in Language Impaired Toddlers and Preschoolers

Today I am excited to tell you about the new product I created in honor of Better Speech and Hearing Month.

It is a 45-slide presentation created for speech-language pathologists that explains the connection between late language development and the risk of social-emotional disturbances in young children 18 months to 6 years of age.

Learning Objectives:


The Limitations of Using Total/Core Scores When Determining Speech-Language Eligibility

In both of the settings where I work, a psychiatric outpatient school and a private practice, I spend a fair amount of time reviewing speech-language evaluation reports. In reviewing these reports, I see that many examiners choose to base their decision-making with respect to speech-language service eligibility on students’ core, index, or total scores, which are composite scores. For those not familiar with the term, composite scores are standard scores based on the sum of various test scaled scores.

When a student displays average abilities on all of the administered subtests, the composite score correctly indicates that the child does not present with deficits and is thereby not eligible for therapy services.

The same goes for the reverse: when a child displays a pattern of deficits that places their total score well below the average range of functioning, the composite score again correctly indicates that the child is performing poorly and requires therapy services.

However, there is also a third scenario, which presents cause for concern: when students display a pattern of strengths and weaknesses across a variety of subtests but end up with an average/low-average total score, making them ineligible for services.

Results of the Test of Problem Solving-3 Elementary (TOPS-3)

| Subtests | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Making Inferences | 19 | 83 | 12 | Below Average |
| Sequencing | 22 | 86 | 17 | Low Average |
| Negative Questions | 21 | 95 | 38 | Average |
| Problem Solving | 21 | 90 | 26 | Average |
| Predicting | 18 | 92 | 29 | Average |
| Determining Causes | 13 | 82 | 11 | Below Average |
| Total Test | 114 | 86 | 18 | Low Average |

Results of the Test of Reading Comprehension-Fourth Edition (TORC-4)

| Subtests | Raw Score | Standard Score | Percentile Rank | Description |
|---|---|---|---|---|
| Relational Vocabulary | 24 | 9 | 37 | Average |
| Sentence Completion | 25 | 9 | 37 | Average |
| Paragraph Construction | 41 | 12 | 75 | Average |
| Text Comprehension | 21 | 7 | 16 | Below Average |
| Contextual Fluency | 86 | 6 | 9 | Below Average |
| Reading Comprehension Index | | 90 | | Average |

The above tables, taken from two different evaluations, perfectly illustrate this scenario. While both total/index scores are within the average range, the first student displayed a pattern of strengths and weaknesses across the subtests of the TOPS-3, and the second student displayed a similar performance pattern on the TORC-4.
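The masking effect is easy to see with a toy calculation over the TOPS-3 standard scores from the table above. (The simple mean here is only a sketch: real composites are derived from norm tables of summed scaled scores, not by averaging standard scores.)

```python
# Subtest standard scores (mean 100, SD 15) from the TOPS-3 table above
subtests = {
    "Making Inferences": 83, "Sequencing": 86, "Negative Questions": 95,
    "Problem Solving": 90, "Predicting": 92, "Determining Causes": 82,
}

composite = sum(subtests.values()) / len(subtests)
weaknesses = [name for name, score in subtests.items() if score < 85]  # > 1 SD below the mean

print(round(composite))  # 88: a "low average" composite...
print(weaknesses)        # ...concealing the below-average subtests
```

A composite of 88 looks unremarkable on paper, yet it is built partly from subtests that sit more than one standard deviation below the mean; this is exactly the pattern that warrants follow-up rather than dismissal.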

Typically in such cases, clinical judgment dictates a number of options:

  1. Administration of another standardized test further probing the related areas of difficulty (e.g., in such situations, administration of a standardized social pragmatic test may reveal a significant pattern of weaknesses that would confirm the student’s eligibility for language therapy services).
  2. Administration of informal/dynamic assessments/procedures further probing into the student’s critical thinking/verbal reasoning skills.

Here is the problem, though: I only see the above follow-up steps in a small percentage of cases. In the vast majority of cases in which score discrepancies occur, I see examiners ignoring the weaknesses without follow-up. This, of course, results in the child not qualifying for services.

So why do such practices frequently take place? Is it because SLPs want to deny children services? Not at all! The vast majority of SLPs I have had the pleasure of interacting with are deeply caring and concerned individuals who only want what’s best for the student in question. Oftentimes, I believe, the problem lies in misinterpretation of, or rigid adherence to, the state educational code.

For example, most NJ SLPs know that the New Jersey State Education Code dictates that initial eligibility must be determined via the use of two standardized tests, on which the student must perform 1.5 standard deviations below the mean (or below the 10th percentile). Based on such phrasing, it is reasonable to assume that any child whose total scores on two standardized tests are above the 10th percentile will not qualify for services. Yet this is completely incorrect!

Let’s take a closer look at the clarification memo issued on October 6, 2015, by the New Jersey Department of Education, in response to NJ Edu Code misinterpretation. Here is what it actually states.

“In accordance with this regulation, when assessing for a language disorder for purposes of determining whether a student meets the criteria for communication impaired, the problem must be demonstrated through functional assessment of language in other than a testing situation and performance below 1.5 standard deviations, or the 10th percentile on at least two standardized language tests, where such tests are appropriate, one of which shall be a comprehensive test of both receptive and expressive language.”

“When implementing the requirement with respect to “standardized language tests,” test selection for evaluation or reevaluation of an individual student is based on various factors, including the student’s ability to participate in the tests, the areas of suspected language difficulties/deficits (e.g., morphology, syntax, semantics, pragmatics/social language) and weaknesses identified during the assessment process which require further testing, etc. With respect to test interpretation and decision-making regarding eligibility for special education and related services and eligibility for speech-language services, the criteria in the above provision do not limit the types of scores that can be considered (e.g., index, subtest, standard score, etc.).”

Firstly, it emphasizes functional assessments. This does not mean that assessments should be exclusively standardized; rather, it emphasizes using the most appropriate procedures for the student in question, be they standardized or nonstandardized.

Secondly, it does not limit standardized assessment to two tests only. Rather, it uses the phrase “at least” to denote the minimum number of tests needed.

Thirdly, it explicitly makes reference to following up on any weaknesses displayed by students during standardized testing in order to get to the root of the problem.

Fourthly, it specifies that SLPs must assess all displayed areas of difficulty (e.g., social communication) rather than assessing general language abilities only.

Finally, it explicitly points out that SLPs cannot limit their test interpretation to total scores but must look at the testing results holistically, taking into consideration the student’s entire assessment performance.

The problem is that if SLPs only look at total/core scores, numerous children with linguistically based deficits will fall through the cracks. We are talking about children with social communication deficits, children with reading disabilities, children with general language weaknesses, etc. These students may display average total scores while also displaying significant subtest weaknesses. Unless these weaknesses are accounted for and remediated, they are not going to magically disappear or resolve on their own. In fact, both research and clinical judgment dictate that these weaknesses will worsen over time and will continue to adversely impact both social communication and academics.

So the next time you see a pattern of strengths and weaknesses in testing, even if it amounts to an average total score, I urge you to dig deeper. I urge you to investigate why this pattern is displayed in the first place. The same goes for you, parents! If you are looking at average total scores but seeing unexplained weaknesses in select testing areas, start asking questions! Ask the professional to explain why those deficits are occurring, and tell them to dig deeper if you are not satisfied with what you are hearing. All students deserve access to FAPE (a Free and Appropriate Public Education). This includes access to the appropriate therapies they may need in order to function optimally in the classroom.

I urge my fellow SLPs to carefully study their respective state codes and to know who their state educational representatives are. These are the professionals SLPs can contact with questions regarding educational code clarification. For example, the SEACDC consultant for the state of New Jersey is currently Fran Liebner (phone: 609-984-4955; fax: 609-292-5558; e-mail: fran.leibner@doe.state.nj.us).

However, the Department of Education is not the only place SLPs can contact in their state. Numerous state associations have worked diligently on behalf of SLPs, liaising with departments of education in order to have access to up-to-date information pertaining to school services. ASHA also helpfully provides contact information by state HERE.

When it comes to score interpretation, a variety of options are available to SLPs beyond a detailed reading of the test manual. We can use them to ensure that the students we serve experience optimal success in both social and academic settings.



Dear SLPs, Don’t Base Your Language Intervention on Subtests Results

Tip: Click on the bolded words to read more.

For years, I have been seeing a variation of the following questions from SLPs on social media on a weekly if not daily basis:

  • “My student has slow processing/working memory and did poorly on the (insert standardized test here), what goals should I target?”
  • “Do you have sample language/literacy goals for students who have the following subtest scores on the (insert standardized test here)?”
  • “What goals should I create for my student who has the following subtest scores on the (insert standardized test here)?”

Let me be frank: these questions reveal a fundamental lack of understanding of the purpose of standardized tests, of developmental norms for students of various ages, and of how to effectively tailor and prioritize language intervention to students’ needs.

So today, I want to address this subject through an evidence-based lens in order to assist SLPs with effective intervention planning that takes testing results into consideration but is not actually based on subtest results. What do I mean by this seemingly confusing statement? Before I begin, let us briefly discuss several highly common standardized assessment subtests:

Continue reading Dear SLPs, Don’t Base Your Language Intervention on Subtests Results
Posted on Leave a comment

My new article was published in the January 2012 issue of Adoption Today Magazine

My article, entitled “Speech Language Strategies for Multisensory Stimulation of Internationally Adopted Children,” has been published in the January 2012 issue of Adoption Today Magazine.

Summary:  The article introduces the concept of multisensory stimulation and explains its benefits for internationally adopted children of all ages.  It also provides suggestions for parents and professionals on how to implement multisensory strategies in a variety of educational activities in order to stimulate interest, increase task participation as well as facilitate concept retention.

References:

Doman, G., & Wilkinson, R. (1993). The effects of intense multi-sensory stimulation on coma arousal and recovery. Neuropsychological Rehabilitation, 3(2), 203-212.

Johnson, D. E., et al. (1992). The health of children adopted from Romania. Journal of the American Medical Association, 268(24), 3446-3450.

Kim, T. I., Shin, Y. H., & White-Traut, R. C. (2003). Multisensory intervention improves physical growth and illness rates in Korean orphaned newborn infants. Research in Nursing & Health, 26(6), 424-433.

Milev, R., et al. (2008). Multisensory stimulation for elderly with dementia: A 24-week single-blind randomized controlled pilot study. American Journal of Alzheimer’s Disease and Other Dementias, 23(4), 372-376.

Tarullo, A., & Gunnar, M. (2006). Child maltreatment and the developing HPA axis. Hormones and Behavior, 50, 632-639.

White-Traut, R., et al. (1999). Developmental intervention for preterm infants diagnosed with periventricular leukomalacia. Research in Nursing & Health, 22, 131-143.

White-Traut, R., et al. (2009). Salivary cortisol and behavioral state responses of healthy newborn infants to tactile-only and multisensory interventions. Journal of Obstetric, Gynecologic, & Neonatal Nursing, 38(1), 22-34.

 Resources:

Posted on 4 Comments

Improving Executive Function Skills of Language Impaired Students with Hedbanz

Those of you who have previously read my blog know that I rarely use children’s games to address language goals. However, over the summer I worked on improving the executive function abilities (EFs) of some of the language-impaired students on my caseload, and in the process I found select children’s games to be highly beneficial for improving language-based executive function abilities.

For those of you who are only vaguely familiar with this concept, executive functions are higher-level cognitive processes involved in the inhibition of thought, action, and emotion, which are located in the prefrontal cortex of the frontal lobe of the brain. The development of executive functions begins in early infancy, but it can be easily disrupted by a number of adverse environmental and organic experiences (e.g., psychosocial deprivation, trauma). Furthermore, research in this area indicates that children with language impairments present with executive function weaknesses which require remediation.

EF components include working memory, inhibitory control, planning, and set-shifting.

  • Working memory
    • Ability to store and manipulate information in mind over brief periods of time
  • Inhibitory control
    • Ability to suppress responses that are not relevant to the task
  • Planning
    • Ability to formulate and sequence the steps needed to reach a goal
  • Set-shifting
    • Ability to shift behavior in response to changes in tasks or environment

Simply put, EFs contribute to the child’s ability to sustain attention, ignore distractions, and succeed in academic settings. By now some of you must be wondering: “So what does Hedbanz have to do with any of it?”

Well, Hedbanz is a quick-paced multiplayer  (2-6 people) game of “What Am I?” for children ages 7 and up.  Players get 3 chips and wear a “picture card” in their headband. They need to ask questions in rapid succession to figure out what they are. “Am I fruit?” “Am I a dessert?” “Am I sports equipment?” When they figure it out, they get rid of a chip. The first player to get rid of all three chips wins.

The game sounds deceptively simple. Yet any SLPs or parents who have ever played it with their language-impaired students/children will be quick to note how extraordinarily difficult it is for the children to figure out what their card is. Interestingly, in my clinical experience, I’ve noticed that it’s not just moderately language-impaired children who present with difficulty playing this game. Even my bright, average-intelligence teens, who have passed vocabulary and semantic flexibility testing (such as the WORD Test 2-Adolescent or the Vocabulary Awareness subtest of the Test of Integrated Language and Literacy Skills), significantly struggle with their language organization when playing this game.

So what makes Hedbanz so challenging for language impaired students? Primarily, it’s the involvement and coordination of the multiple executive functions during the game. In order to play Hedbanz effectively and effortlessly, the following EF involvement is needed:

  • Task Initiation
    • Students with executive function impairments will often “freeze up” and, as a result, may have difficulty initiating questions in the game, because many will not know what kinds of questions to ask, even after extensive explanations and elaborations by the therapist.
  • Organization
    • Students with executive function impairments will present with difficulty organizing their questions by meaningful categories and, as a result, will frequently lose their train of thought during the game.
  • Working Memory
    • This executive function requires the student to keep key information in mind as well as keep track of whatever questions they have already asked.
  • Flexible Thinking
    • This executive function requires the student to consider a situation from multiple angles in order to figure out the quickest and most effective way of arriving at a solution. During the game, students may present with difficulty flexibly generating enough organizational categories in order to be effective participants.
  • Impulse Control
    • Many students with difficulties in this area may blurt out an inappropriate category or an inappropriate question without thinking it through first.
      • They may also present with difficulty set-shifting. To illustrate, one of my 13-year-old students with ASD kept repeating the same question on his turn, despite having already been given the answer by me and the other players.
  • Emotional Control
    • This executive function helps students keep their emotions in check when the game becomes too frustrating. Many students with difficulties in this area will begin acting out behaviorally when things don’t go their way and they are unable to figure out what their card is quickly enough. As a result, they may have difficulty mentally regrouping and reorganizing their questions when something goes wrong in the game.
  • Self-Monitoring
    • This executive function allows students to gauge how well or how poorly they are doing in the game. Students with poor insight into their own abilities may present with difficulty understanding that they are doing poorly and may require explicit instruction in order to change their question types.
  • Planning and Prioritizing
    • Students with poor abilities in this area will present with difficulty prioritizing their questions during the game.

All of the above executive functions can be addressed via language-based goals. However, before I cover that, I’d like to review some of my session procedures first.

Typically, long before game initiation, I use the cards from the game to prep the students by teaching them how to categorize and classify the presented information so they can play the game effectively and efficiently.

Rather than using the “tip cards”, I explain to the students how to categorize information effectively.

This, in turn, becomes a great opportunity for teaching students relevant vocabulary words, which can be extended far beyond playing the game.

I begin the session by explaining to the students that pretty much everything can be roughly divided into two categories: animate (living) and inanimate (nonliving) things. I explain that humans, animals, and plants belong to the category of living things, while everything else belongs to the category of inanimate objects. I further divide the category of inanimate things into naturally existing and man-made items. I explain to the students that the naturally existing category includes bodies of water, landmarks, as well as things in space (moon, stars, sky, sun, etc.). In contrast, things constructed in factories or made by people would be examples of man-made objects (e.g., buildings, aircraft, etc.).

When I’m confident that the students understand my general explanations, we move on to discuss further refinement of these broad categories. If a student determines that their card belongs to the category of living things, we discuss how the student can further determine whether they are an animal, a plant, or a human. If a student determines that their card belongs to the animal category, we discuss how to narrow down which animal is depicted on their card by asking questions regarding its habitat (“Am I a jungle animal?”) and classification (“Am I a reptile?”). From there, discussion of attributes prominently comes into play. We discuss shapes, sizes, colors, accessories, etc., until the student is able to confidently figure out which animal is depicted on their card.

In contrast, if the student’s card belongs to the inanimate category of man-made objects, we further subcategorize the information by the object’s location (“Am I found outside or inside?”; “Am I found in ___ room of the house?”, etc.), utility (“Can I be used for ___?”), as well as attributes (e.g., size, shape, color, etc.)

Thus, in addition to improving the students’ semantic flexibility skills (production of definitions, synonyms, attributes, etc.) the game teaches the students to organize and compartmentalize information in order to effectively and efficiently arrive at a conclusion in the most time expedient fashion.
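For clinicians who like to prepare visual supports in advance, the category hierarchy described above can be sketched as a simple nested structure. The following Python sketch is purely illustrative; the specific category names and groupings are my own examples, not content from the game itself:

```python
# An illustrative sketch of the animate/inanimate question hierarchy
# described above. Category names are examples only, not game content.
CATEGORY_TREE = {
    "living": {
        "human": {},
        "plant": {},
        "animal": {"habitat": ["jungle", "farm", "ocean"],
                   "classification": ["mammal", "reptile", "bird"]},
    },
    "nonliving": {
        "naturally existing": ["bodies of water", "landmarks", "things in space"],
        "man-made": {"location": ["inside", "outside"],
                     "utility": ["used for cooking", "used for play"],
                     "attributes": ["size", "shape", "color"]},
    },
}

def subcategories(branch: str) -> list[str]:
    """List the next level of categories a student could ask about."""
    return list(CATEGORY_TREE[branch])

print(subcategories("living"))     # ['human', 'plant', 'animal']
print(subcategories("nonliving"))  # ['naturally existing', 'man-made']
```

A tree like this can double as a printed cue card: the student starts at the top and works down one branch per question.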

Now we are ready to discuss what types of language-based EF goals SLPs can target by simply playing this game.

1. Initiation: Student will initiate questioning during an activity in __ number of instances per 30-minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

2. Planning: Given a specific routine, student will verbally state the order of steps needed to complete it with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

3. Working Memory: Student will repeat clinician provided verbal instructions pertaining to the presented activity, prior to its initiation, with 80% accuracy  given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

4. Flexible Thinking: Following a training by the clinician, student will generate at least __ questions needed for task completion (e.g., winning the game) with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

5. Organization: Student will use predetermined written/visual cues during an activity to assist self with organization of information (e.g., questions to ask) with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

6. Impulse Control: During the presented activity the student will curb blurting out inappropriate responses (by silently counting to 3 prior to providing his response) in __ number of instances per 30 minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

7. Emotional Control: When upset, student will verbalize his/her frustration (vs. behaviorally acting out) in __ number of instances per 30-minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

8. Self-Monitoring:  Following the completion of an activity (e.g., game) student will provide insight into own strengths and weaknesses during the activity (recap) by verbally naming the instances in which s/he did well, and instances in which s/he struggled with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

There you have it. This one simple game doesn’t just target a plethora of typical expressive language goals. It can effectively target and improve language-based executive function goals as well. Considering that it sells for approximately $12 on Amazon.com, that’s a pretty useful therapy material to have in one’s clinical tool repertoire. For fancier versions, clinicians can use the “Jeepers Peepers” photo card sets sold by Super Duper Inc. Strapped for cash due to a limited budget? You can find plenty of free materials online by simply searching Google for “Hedbanz cards.” So have a little fun in therapy while your students learn something valuable in the process, and play Hedbanz today!

Related Smart Speech Therapy Resources:

Posted on 7 Comments

Help, My Student has a Huge Score Discrepancy Between Tests and I Don’t Know Why?

Here’s a familiar scenario for many SLPs. You’ve administered several standardized language tests to your student (e.g., the CELF-5 and the TILLS). You expected to see roughly similar scores across tests. Much to your surprise, you find that while your student attained somewhat average scores on one assessment, s/he completely bombed the second assessment, and you have no idea why that happened.

So you go on social media and start crowdsourcing for information from a variety of SLPs located in a variety of states and countries in order to figure out what has happened and what you should do about this. Of course, the problem in such situations is that while some responses will be spot on, many will be utterly inappropriate. Luckily, the answer lies much closer than you think, in the actual technical manual of the administered tests.

So what is responsible for such a drastic discrepancy? A few things, actually. For starters, unless both tests were co-normed (used the same sample of test takers), be prepared to see disparate scores due to the ability levels of children in the normative groups of each test. Another important factor involved in the score discrepancy is how accurately the test differentiates disordered children from typically functioning ones.

Let’s compare two actual language tests to learn more. For the purpose of this exercise, let us select the Clinical Evaluation of Language Fundamentals-5 (CELF-5) and the Test of Integrated Language and Literacy Skills (TILLS). The former is a very familiar entity to numerous SLPs, while the latter is just coming into its own, having been released only several years ago.

Both tests share a number of similarities. Both were created to assess the language abilities of children and adolescents with suspected language disorders. Both assess aspects of language and literacy (albeit not to the same degree nor with the same level of thoroughness).  Both can be used for language disorder classification purposes, or can they?

Actually, my last statement is rather debatable. A careful perusal of the CELF-5 reveals that its normative sample of 3,000 children included a whopping 23% of children with language-related disabilities. In fact, the folks at the Leaders Project did such an excellent and thorough job reviewing its psychometric properties that, rather than repeating that information here, readers can simply click here to review the limitations of the CELF-5 on the Leaders Project website. Furthermore, even the CELF-5 developers themselves have stated that: “Based on CELF-5 sensitivity and specificity values, the optimal cut score to achieve the best balance is -1.33 (standard score of 80). Using a standard score of 80 as a cut score yields sensitivity and specificity values of .97.”

In other words, a standard score of 80 or below on the CELF-5 indicates that a child presents with a language disorder. Of course, as many SLPs already know, the eligibility criteria in the schools often require language scores far below that in order for a student to qualify for language therapy services.
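For readers who want to see how an SD cut maps onto standard scores and percentile ranks, here is a minimal Python sketch of the conventional mean-100 / SD-15 conversion. This is a general psychometric formula, not anything taken from the CELF-5 manual itself:

```python
# Convert z-score (SD) cuts to standard scores and percentile ranks
# on the conventional mean-100 / SD-15 scale.
from statistics import NormalDist

MEAN, SD = 100.0, 15.0

def standard_score(z: float) -> float:
    """Standard score corresponding to a cut z SDs from the mean."""
    return MEAN + z * SD

def percentile_rank(z: float) -> float:
    """Percentage of a normal distribution scoring at or below z."""
    return NormalDist().cdf(z) * 100

print(round(standard_score(-1.33)))      # 80, the cut quoted above
print(round(percentile_rank(-1.33), 1))  # 9.2
print(round(standard_score(-1.28)))      # 81
print(round(percentile_rank(-1.28), 1))  # 10.0
```

Both cuts land near the 9th-10th percentile of the normative distribution, which makes it easy to see how far below them typical school eligibility cutoffs sit.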

In fact, the test’s authors are fully aware of this and acknowledge it in the same document: “Keep in mind that students who have language deficits may not obtain scores that qualify him or her for placement based on the program’s criteria for eligibility. You’ll need to plan how to address the student’s needs within the framework established by your program.”

But here is another issue: the CELF-5 sensitivity group included only a very small number of children, “67 children ranging from 5;0 to 15;11,” whose only inclusion requirement was scoring 1.5 SDs below the mean “on any standardized language test.” As the Leaders Project reviewers point out: “This means that the 67 children in the sensitivity group could all have had severe disabilities. They might have multiple disabilities in addition to severe language disorders including severe intellectual disabilities or Autism Spectrum Disorder making it easy for a language disorder test to identify this group as having language disorders with extremely high accuracy.” (pgs. 7-8)

Of course, this raises the question: why would anyone continue to administer such a test to students if its administration (a) does not guarantee disorder identification and (b) will not make the student eligible for language therapy despite demonstrated need?

The problem is that even though SLPs are mandated to use a variety of quantitative clinical observations and procedures in order to reliably qualify students for services, standardized tests still carry more weight than they should. Consequently, it is important for SLPs to select the right test to make their job easier.

The TILLS is a far less known assessment than the CELF-5, yet in the few years it has been on the market it has made its presence felt as a solid assessment tool due to its valid and reliable psychometric properties. Again, the venerable Dr. Carol Westby has already done such an excellent job reviewing its psychometric properties that I will refer readers to her review here rather than repeating the information. The upshot of her review is as follows: “The TILLS does not include children and adolescents with language/literacy impairments (LLIs) in the norming sample. Since the 1990s, nearly all language assessments have included children with LLIs in the norming sample. Doing so lowers overall scores, making it more difficult to use the assessment to identify students with LLIs.” (pg. 11)

Now, many proponents of including children with language disorders in the normative sample will make a variation of the following claim: “You CANNOT diagnose a language impairment if children with language impairment were not included in the normative sample of that assessment!” Here’s a major problem with such an assertion. When a child is referred for a language assessment, we really have no way of knowing whether this child has a language impairment until we actually finish testing them. We are in fact attempting to confirm or refute this possibility, hopefully via the use of reliable and valid testing. However, if the normative sample includes many children with language and learning difficulties, this significantly affects the accuracy of our identification, since we are interested in comparing this child’s results to typically developing children, not to disordered ones, in order to learn whether the child has a disorder in the first place. As Peña, Spaulding, and Plante (2006) note, “the inclusion of children with disabilities may be at odds with the goal of classification, typically the primary function of the speech pathologist’s assessment” (p. 248). In fact, by including such children in the normative sample, we may be “shooting ourselves in the foot” in terms of testing for the purpose of identifying disorders.

Then there’s a variation of this assertion, which I have seen in several Facebook groups: “Children with language disorders score at the low end of the normal distribution.” Once again, such an assertion is incorrect, since Spaulding, Plante, and Farinella (2006) have actually shown that, on average, these children will score at least 1.28 SDs below the mean, which is not the low average range of the normal distribution by any means. As per the authors: “Specific data supporting the application of ‘low score’ criteria for the identification of language impairment is not supported by the majority of current commercially available tests. However, alternate sources of data (sensitivity and specificity rates) that support accurate identification are available for a subset of the available tests.” (p. 61)

Now, let us get back to the student in question, who performed so differently on the two administered tests. Given his clinically observed difficulties, you fully expected your testing to confirm them. But you are now more confused than before. Don’t be! Look up the test’s sensitivity and specificity values in its technical manual. Vance and Plante (1994) put forth the following criteria for accurate identification of a disorder (discriminant accuracy): “90% should be considered good discriminant accuracy; 80% to 89% should be considered fair. Below 80%, misidentifications occur at unacceptably high rates,” leading to “serious social consequences” for misidentified children. (p. 21)
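To make those benchmarks concrete, here is a small Python sketch. The confusion-matrix counts below are hypothetical, invented purely for illustration; only the 90%/80% thresholds come from the criteria quoted above:

```python
# Sensitivity/specificity from hypothetical validation counts, graded
# against Vance and Plante's (1994) discriminant accuracy benchmarks.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of truly disordered children the test flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of typically developing children the test clears."""
    return true_neg / (true_neg + false_pos)

def discriminant_accuracy(rate: float) -> str:
    """90%+ = good, 80-89% = fair, below 80% = unacceptable."""
    if rate >= 0.90:
        return "good"
    if rate >= 0.80:
        return "fair"
    return "unacceptable"

# Hypothetical sample: 50 disordered children (46 flagged, 4 missed)
# and 50 typical children (44 cleared, 6 wrongly flagged).
sens = sensitivity(46, 4)   # 0.92
spec = specificity(44, 6)   # 0.88
print(f"sensitivity {sens:.2f}: {discriminant_accuracy(sens)}")  # good
print(f"specificity {spec:.2f}: {discriminant_accuracy(spec)}")  # fair
```

Note that a test can look impressive on one of the two numbers while falling into the “unacceptably high” misidentification zone on the other, which is exactly why the full table in the technical manual is worth reading.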

Review the sensitivity and specificity of your test/s, take a look at the normative samples, see if anything unusual jumps out at you, which leads you to believe that the administered test may have some issues with assessing what it purports to assess. Then, after supplementing your standardized testing results with good quality clinical data (e.g., narrative samples, dynamic assessment tasks, etc.), consider creating a solidly referenced purchasing pitch to your administration to invest in more valid and reliable standardized tests.

Hope you find this information helpful in your quest to better serve the clients on your caseload. If you are interested in learning more about evidence-based assessment practices as well as the psychometric properties of various standardized speech-language tests, visit the SLPs for Evidence-Based Practice group on Facebook to learn more.

References: