Privacy Policy

Privacy Notice
This privacy notice discloses the privacy practices for (www.smartspeechtherapy.com). This privacy notice applies solely to information collected by this website. It will notify you of the following:

  1. What personally identifiable information is collected from you through the website, how it is used and with whom it may be shared.
  2. What choices are available to you regarding the use of your data.
  3. The security procedures in place to protect against the misuse of your information.
  4. How you can correct any inaccuracies in the information.

Information Collection, Use, and Sharing
We are the sole owners of the information collected on this site. We only have access to/collect information that you voluntarily give us via email or another direct contact from you. We will not sell or rent this information to anyone.

We will use your information to respond to you, regarding the reason you contacted us. We will not share your information with any third party outside of our organization, other than as necessary to fulfill your request, e.g. to ship an order.

Unless you ask us not to, we may contact you via email in the future to tell you about specials, new products or services, or changes to this privacy policy.

Your Access to and Control Over Information
You may opt out of any future contacts from us at any time. You can do the following at any time by contacting us via the email address or phone number given on our website:

  • See what data we have about you, if any.
  • Change/correct any data we have about you.
  • Have us delete any data we have about you.
  • Express any concern you have about our use of your data.

Security
We take precautions to protect your information. When you submit sensitive information via the website, your information is protected both online and offline.

Wherever we collect sensitive information, that information is encrypted and transmitted to us in a secure way. You can verify this by looking for a lock icon in the address bar and looking for “https” at the beginning of the address of the Web page.

While we use encryption to protect sensitive information transmitted online, we also protect your information offline. Only employees who need the information to perform a specific job (for example, billing or customer service) are granted access to personally identifiable information. The computers/servers in which we store personally identifiable information are kept in a secure environment.

Cookies
We use “cookies” on this site. A cookie is a piece of data stored on a site visitor’s hard drive to help us improve your access to our site and identify repeat visitors to our site. For instance, when we use a cookie to identify you, you would not have to log in a password more than once, thereby saving time while on our site. Cookies can also enable us to track and target the interests of our users to enhance the experience on our site. Usage of a cookie is in no way linked to any personally identifiable information on our site.

If you feel that we are not abiding by this privacy policy, you should contact us immediately via our Contact Form.


Improving Accountability of ASHA Approved Continuing Education Providers

Several days ago I had a conversation with the Associate Director of Continuing Education at ASHA regarding my significant concerns about the content and quality of some ASHA approved continuing education courses. For many months before that, numerous discussions took place in a variety of major SLP related Facebook groups pertaining to the non-EBP content of some ASHA approved provider coursework, much of which was blatantly pseudoscientific in nature.

The fact is that while there is a rigorous process involved in becoming an ASHA approved continuing education provider, once that approval is granted, ASHA is not privy to course content. In other words, no staff member at ASHA is available to screen course documents (pdfs, pptx, handouts, etc.) to ensure that they are scientifically supported and free of pseudoscientific and questionable information.


If It’s NOT CAPD Then Where do SLPs Go From There?

In July 2015 I wrote a blog post entitled “Why (C) APD Diagnosis is NOT Valid!”, citing the latest research literature to explain that the controversial diagnosis of (C)APD tends to

a) detract from understanding that the child presents with legitimate language based deficits in the areas of comprehension, expression, social communication and literacy development

b) result in the above deficits not getting adequately addressed due to the provision of controversial APD treatments

To CLARIFY, I was NOT trying to suggest that the processing deficits exhibited by children diagnosed with “(C)APD” were not REAL. Rather, I was trying to point out that these processing deficits are of neurolinguistic origin and as such need to be addressed from a linguistic rather than an ‘auditory’ standpoint.

In other words, if one carefully analyzes the child’s so-called processing issues, one will quickly realize that those issues are not related to the processing of auditory input (auditory domain), since the child is not struggling to process tones, hoots, or clicks, but rather has difficulty processing speech and language (linguistic domain).


SLPs Blogging About Research: August Edition -FASD

This month I am joining the ranks of bloggers writing about research related to the field of speech pathology. Click here for more details.

Today I will be reviewing a recently published article in The Journal of Neuroscience  on the topic of brain development in children with Fetal Alcohol Spectrum Disorders (FASD), one of my areas of specialty in speech pathology.

Title: Longitudinal MRI Reveals Altered Trajectory of Brain Development during Childhood and Adolescence in Fetal Alcohol Spectrum Disorder

Purpose: Canadian researchers performed advanced MRI brain scans of 17 children with FASD between 5 and 15 years of age and compared them to the scans of 27 children without FASD. Each participant underwent 2-3 scans and each scan took place 2-4 years apart. The multiple scan component over a period of time is what made this research study so unique because no other FASD related study had done it before.

Aim of the study: To better understand how brain abnormalities evolve during key developmental periods of behavioral and cognitive progression via longitudinal examination of within-subject changes in brain white matter (via Diffusion Tensor Imaging – DTI) in FASD during childhood and adolescence.

Subjects: Experimental subjects had a variety of FASD diagnoses which included fetal alcohol syndrome (FAS), partial FAS (pFAS), static encephalopathy alcohol exposed (SE:AE), neurobehavioral disorder alcohol exposed (NBD:AE), as well as alcohol related neurobehavioral disorder (ARND). Given the small sample size, the researchers combined all sub-diagnoses into one FASD group for statistical analysis.

In addition to the imaging studies, FASD subjects underwent approximately 1.5 hours of cognitive testing at each scan, administered by a trained research assistant. The test battery included:

  • Woodcock Johnson Quantitative Concepts 18A&B (mathematics)
  • Woodcock Reading Mastery Test-Revised (WRMT-R) Word ID
  • Comprehensive Expressive and Receptive Vocabulary Test (CREVT)
  • Working Memory Test Battery for Children (WMTB-C)
  • Behavior Rating Inventory of Executive Function (BRIEF) parent form
  • NEPSYI/II (auditory attention and response set; memory for names, narrative memory; arrows).

9/17 participants in the FASD group were also administered the Wide Range Intelligence Test (WRIT) at scan 2.

Control subjects were screened for psychiatric and neurological impairments. Their caregivers were also contacted retrospectively and asked to estimate in utero alcohol exposure for their child. Of the 21 control subject caregivers who were reached, 14/21 reported no exposure, 2/21 unknown, and 5/21 reported minimal alcohol exposure (range: 1–3 drinks; average of two drinks total during pregnancy). Control subjects did not undergo a full battery of cognitive testing, but were administered WRMT-R Word ID at each scan.

Summary of results: The FASD group performed significantly below the controls on most of the academic, cognitive, and executive function measures  despite average IQ scores in 53% of the FASD sample. According to one of the coauthors, Sarah Treit,  “longitudinal increases in raw cognitive scores (albeit without changes in age-corrected standard scores) suggest that the FASD group made cognitive gains at a typical rate with age, while still performing below average”. For those of us who work with this population these findings are very typical.

Imaging studies revealed that over time subjects in the control group presented with marked increases in brain volume and white matter – growth which was lacking in subjects with FASD. Furthermore, children with FASD who demonstrated the greatest changes in white matter development (on scans) also made the greatest reading gains. Children with the most severe FASD showed the greatest diffusion changes in white matter brain wiring and less overall brain volume.

Implications: “This study suggests alcohol-induced injury with FASD isn’t static – those with FASD have altered brain development, they aren’t developing at the same rate as those without the disorder.” So not only does the brain-altering damage exist in children with FASD at birth, but it also continues to negatively affect brain development through childhood and at least through adolescence.

Given these findings, it is very important for SLPs to perform detailed and comprehensive language assessments and engage in targeted treatment planning for these children in order to provide them with specialized individualized services which are based on their rate of development.


Help, My Student has a Huge Score Discrepancy Between Tests and I Don’t Know Why?

Here’s a familiar scenario for many SLPs. You’ve administered several standardized language tests to your student (e.g., CELF-5 & TILLS). You expected to see roughly similar scores across tests. Much to your surprise, you find that while your student attained somewhat average scores on one assessment, s/he completely bombed the second assessment, and you have no idea why that happened.

So you go on social media and start crowdsourcing for information from SLPs located in a variety of states and countries in order to figure out what happened and what you should do about it. Of course, the problem in such situations is that while some responses will be spot on, many will be utterly inappropriate. Luckily, the answer lies much closer than you think: in the actual technical manuals of the administered tests.

So what is responsible for such a drastic discrepancy? A few things, actually. For starters, unless both tests were co-normed (used the same sample of test takers), be prepared to see disparate scores due to the ability levels of children in the normative groups of each test. Another important factor involved in the score discrepancy is how accurately the test differentiates children with disorders from typically functioning ones.

Let’s compare two actual language tests to learn more. For the purpose of this exercise, let us select the Clinical Evaluation of Language Fundamentals-5 (CELF-5) and the Test of Integrated Language and Literacy (TILLS). The former is a very familiar entity to numerous SLPs, while the latter is just coming into its own, having been released on the market only several years ago.

Both tests share a number of similarities. Both were created to assess the language abilities of children and adolescents with suspected language disorders. Both assess aspects of language and literacy (albeit not to the same degree nor with the same level of thoroughness).  Both can be used for language disorder classification purposes, or can they?

Actually, my last statement is rather debatable. A careful perusal of the CELF-5 reveals that its normative sample of 3,000 children included a whopping 23% of children with language-related disabilities. In fact, the folks from the Leaders Project did such an excellent and thorough job reviewing its psychometric properties that, rather than repeating that information, the readers can simply click here to review the limitations of the CELF-5 straight on the Leaders Project website. Furthermore, even the CELF-5 developers themselves have stated that: “Based on CELF-5 sensitivity and specificity values, the optimal cut score to achieve the best balance is -1.33 (standard score of 80). Using a standard score of 80 as a cut score yields sensitivity and specificity values of .97.”

In other words, obtaining a standard score of 80 on the CELF-5 indicates that a child presents with a language disorder. Of course, as many SLPs already know, the eligibility criteria in the schools require language scores far below that in order for the student to qualify for language therapy services.

In fact, the test’s authors are fully aware of that and acknowledge that in the same document. “Keep in mind that students who have language deficits may not obtain scores that qualify him or her for placement based on the program’s criteria for eligibility. You’ll need to plan how to address the student’s needs within the framework established by your program.”

But here is another issue – the CELF-5 sensitivity group included only a very small number of “67 children ranging from 5;0 to 15;11”, whose only requirement was to score 1.5 SDs below the mean “on any standardized language test”. As the Leaders Project reviewers point out: “This means that the 67 children in the sensitivity group could all have had severe disabilities. They might have multiple disabilities in addition to severe language disorders including severe intellectual disabilities or Autism Spectrum Disorder making it easy for a language disorder test to identify this group as having language disorders with extremely high accuracy.” (pgs. 7-8)

Of course, this raises the question: why would anyone continue to administer such a test to students if its administration (a) does not guarantee disorder identification and (b) will not make the student eligible for language therapy despite demonstrated need?

The problem is that even though SLPs are mandated to use a variety of quantitative clinical observations and procedures in order to reliably qualify students for services, standardized tests still carry more weight than they should. Consequently, it is important for SLPs to select the right test to make their job easier.

The TILLS is a far less known assessment than the CELF-5, yet in the few years it has been on the market it has made its presence felt as a solid assessment tool due to its valid and reliable psychometric properties. Again, the venerable Dr. Carol Westby had already done such an excellent job reviewing its psychometric properties that I will refer the readers to her review here rather than repeating this information. The upshot of her review is as follows: “The TILLS does not include children and adolescents with language/literacy impairments (LLIs) in the norming sample. Since the 1990s, nearly all language assessments have included children with LLIs in the norming sample. Doing so lowers overall scores, making it more difficult to use the assessment to identify students with LLIs.” (pg. 11)

Now, here many proponents of including children with language disorders in the normative sample will make a variation of the following claim: “You CANNOT diagnose a language impairment if children with language impairment were not included in the normative sample of that assessment!” Here’s a major problem with such an assertion. When a child is referred for a language assessment, we really have no way of knowing whether this child has a language impairment until we actually finish testing them. We are in fact attempting to confirm or refute this possibility, hopefully via the use of reliable and valid testing. However, if the normative sample includes many children with language and learning difficulties, this significantly affects the accuracy of our identification, since we are interested in comparing this child’s results to typically developing children and not to disordered ones, in order to learn whether the child has a disorder in the first place. As Peña, Spaulding and Plante (2006) note, “the inclusion of children with disabilities may be at odds with the goal of classification, typically the primary function of the speech pathologist’s assessment” (p. 248). In fact, by including such children in the normative sample, we may be “shooting ourselves in the foot” in terms of testing for the purpose of identifying disorders.

Then there’s a variation of this assertion, which I have seen in several Facebook groups: “Children with language disorders score at the low end of the normal distribution.” Once again, such an assertion is incorrect, since Spaulding, Plante & Farinella (2006) have actually shown that, on average, these kids will score at least 1.28 SDs below the mean, which is not the low average range of the normal distribution by any means. As per the authors: “Specific data supporting the application of ‘low score’ criteria for the identification of language impairment is not supported by the majority of current commercially available tests. However, alternate sources of data (sensitivity and specificity rates) that support accurate identification are available for a subset of the available tests.” (p. 61)
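To make that 1.28 SD figure concrete, here is a minimal sketch (using only Python's standard library) converting the z-score reported by Spaulding, Plante & Farinella to the standard-score scale (mean 100, SD 15) used by most language tests, and to an approximate percentile rank:

```python
from statistics import NormalDist

MEAN, SD = 100, 15  # standard-score scale used by most norm-referenced language tests

def standard_score(z):
    """Convert a z-score to the standard-score scale."""
    return MEAN + z * SD

def percentile(z):
    """Percent of the normative sample scoring at or below this z-score."""
    return NormalDist().cdf(z) * 100

z = -1.28  # average score of children with language impairment (Spaulding et al.)
print(round(standard_score(z), 1))  # 80.8 -- well below the "low average" band
print(round(percentile(z), 1))      # 10.0 -- roughly the 10th percentile
```

In other words, the average child with a language disorder scores around a standard score of 81, near the 10th percentile, not at the low end of the average range.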

Now, let us get back to the student in question, who performed so differently on the two administered tests. Given the clinically observed difficulties, you fully expected your testing to confirm them. But you are now more confused than before. Don’t be! Search the technical manual of each test for its sensitivity and specificity values. Vance and Plante (1994) put forth the following criteria for accurate identification of a disorder (discriminant accuracy): “90% should be considered good discriminant accuracy; 80% to 89% should be considered fair. Below 80%, misidentifications occur at unacceptably high rates”, leading to “serious social consequences” for misidentified children. (p. 21)
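Vance and Plante's bands can be applied directly once you find the identification counts in a technical manual. The sketch below uses hypothetical counts (100 children with disorders and 100 typical children, invented for illustration); the sensitivity and specificity formulas themselves are standard:

```python
def discriminant_accuracy(tp, fn, tn, fp):
    """Compute sensitivity/specificity from identification counts and label
    each rate with Vance & Plante's (1994) discriminant-accuracy bands."""
    sensitivity = tp / (tp + fn)  # children with disorders correctly flagged
    specificity = tn / (tn + fp)  # typical children correctly cleared

    def band(rate):
        if rate >= 0.90:
            return "good"
        if rate >= 0.80:
            return "fair"
        return "unacceptable"

    return sensitivity, specificity, band(sensitivity), band(specificity)

# Hypothetical counts: of 100 children with disorders, 85 were flagged;
# of 100 typical children, 94 were correctly cleared.
sens, spec, sens_band, spec_band = discriminant_accuracy(tp=85, fn=15, tn=94, fp=6)
print(sens, sens_band)  # 0.85 fair
print(spec, spec_band)  # 0.94 good
```

A test whose manual reports either rate below 0.80 falls in Vance and Plante's "unacceptable" range and is a poor choice for disorder identification.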

Review the sensitivity and specificity of your test/s, take a look at the normative samples, and see if anything unusual jumps out at you that leads you to believe the administered test may have issues assessing what it purports to assess. Then, after supplementing your standardized testing results with good quality clinical data (e.g., narrative samples, dynamic assessment tasks, etc.), consider creating a solidly referenced purchasing pitch to your administration to invest in more valid and reliable standardized tests.

Hope you find this information helpful in your quest to better serve the clients on your caseload. If you are interested in learning more about evidence-based assessment practices as well as the psychometric properties of various standardized speech-language tests, visit the SLPs for Evidence-Based Practice group on Facebook.



What if It’s More Than Just “Misbehaving”?

Frequently,  I see a variation of the following scenario on many speech and language forums.

The SLP is seeing a client with speech and/or language deficits through early intervention,  in the schools, or in private practice, who is having some kind of behavioral issues.

Some issues are described as mild such as calling out, hyperactivity, impulsivity, or inattention, while others are more severe and include refusal, noncompliance, or aggression such as kicking, biting,  or punching.

An array of advice from well-meaning professionals immediately follows.  Some behaviors may be labeled as “normal” due to the child’s age (toddler),  others may be “partially excused” due to a DSM-5  diagnosis (e.g., ASD).   Recommendations for reinforcement charts (not grounded in evidence) may be suggested. A call for other professionals to deal with the behaviors is frequently made (“in my setting the ______ (insert relevant professional here) deals with these behaviors and I don’t have to be involved”). Specific judgments on the child may be pronounced: “There is nothing wrong with him/her, they’re just acting out to get what they want.” Some drastic recommendations could be made: “Maybe you should stop therapy until the child’s behaviors are stabilized”.

However, several crucial factors often get overlooked. First, a system to figure out why a particular set of behaviors takes place, and second, whether these behaviors may be manifestations of non-behaviorally based difficulties such as medical issues or overt/subtle linguistically based deficits.

So what are some reasons kids may present with behavioral deficits? Obviously, there could be numerous reasons: some benign while others serious, ranging from lack of structure and understanding of expectations to manifestations of psychiatric illnesses and genetic syndromes. Oftentimes the underlying issues are incredibly difficult to recognize without a differential diagnosis. In other words, we cannot claim that the child’s difficulties are “just behavior” if we have not appropriately ruled out other causes which may be contributing to the “behavior”.

Here are some possible steps which can ensure appropriate identification of the source of the child’s behavioral difficulties in cases of hidden underlying language disorders (after of course relevant learning, genetic, medical, and psychiatric issues have been ruled out).

Let’s begin by answering a few simple questions. Has a thorough language evaluation with an emphasis on the child’s social pragmatic language abilities been completed? And by thorough, I am not referring to general language tests but to a variety of formal and informal social pragmatic language testing (read more HERE).

Please note that none of the general language tests such as the Preschool Language Scale-5 (PLS-5), Comprehensive Assessment of Spoken Language (CASL-2), the Test of Language Development-4 (TOLD-4) or even the Clinical Evaluation of Language Fundamentals Tests (CELF-P2)/ (CELF-5) tap into the child’s social language competence because they do NOT directly test the child’s social language skills (e.g., CELF-5 assesses them via a parental/teachers questionnaire).  Thus, many children can attain average scores on these tests yet still present with pervasive social language deficits. That is why it’s very important to thoroughly assess social pragmatic language abilities of all children  (no matter what their age is) presenting with behavioral deficits.

But let’s say that the social pragmatic language abilities have been assessed and the child was found/not found to be eligible for services, yet their behavioral deficits persist. What do we do now?

The first step in establishing a behavior management system is determining the function of challenging behaviors, since we need to understand why the behavior is occurring and what is triggering it (Chandler & Dahlquist, 2006).

We can begin by performing some basic data collection with a child of any age (even with toddlers) to determine behavior functions or reasons for specific behaviors. Here are just a few limited examples:

  • Seeking Attention/Reward
  • Seeking Sensory Stimulation
  • Seeking Control

Most behavior functions typically tend to be positively, negatively or automatically reinforced (Bobrow, 2002). For example, in cases of positive reinforcement, the child may exhibit challenging behaviors to obtain desirable items such as toys, games, attention, etc. If the parent/teacher inadvertently supplies the child with the desired item, they are reinforcing inappropriate behaviors positively and in a way strengthening the child’s desire to repeat the experience over and over again, since it had positively worked for them before.

In contrast, negative reinforcement takes place when the child exhibits challenging behaviors to escape a negative situation and gets his way. For example, the child is being disruptive in the classroom/therapy because the tasks are too challenging and is ‘rewarded’ when therapy is discontinued early or when the classroom teacher asks an aide to take the child for a walk.

Finally, automatic reinforcement occurs when certain behaviors such as repetitive movements or self-injury produce an enjoyable sensation for the child, which the child then repeats to recreate the sensation.

In order to determine what reinforces the child’s challenging behaviors, we must perform repeated observations and take data on the following:

  • Antecedent or what triggered the child’s behavior?
    • What was happening immediately before behavior occurred?
  • Behavior
    • What type of challenging behavior/s took place as a result?
  • Response/Consequence
    • How did you respond to behavior when it took place?

Here are just a few antecedent examples:

  • Therapist requested that child work on task
  • Child bored with task
  • Favorite task/activity taken away
  • Child could not obtain desired object/activity
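For clinicians who track observations in a spreadsheet or script, the ABC (Antecedent-Behavior-Consequence) data described above can be sketched as a simple log. The entries below are invented examples, not real observation data:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Observation:
    antecedent: str   # what happened immediately before the behavior
    behavior: str     # the challenging behavior observed
    consequence: str  # how the adult responded

# Invented sample entries, for illustration only.
log = [
    Observation("therapist requested work on task", "calling out", "task removed"),
    Observation("favorite activity taken away", "crying", "activity returned"),
    Observation("therapist requested work on task", "calling out", "task removed"),
]

# Tallying antecedents hints at the behavior's likely function; here the same
# demand precedes the behavior twice, suggesting escape from task demands.
print(Counter(obs.antecedent for obs in log).most_common(1))
```

Even a tally this simple makes patterns visible across sessions that are easy to miss in the moment.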

In order to figure out the functions of these behaviors, we need to collect data prior to appropriately addressing them. After the data is collected, the goals need to be prioritized based on urgency/seriousness. We can also use modification techniques aimed at managing interfering behaviors. These techniques include modifications of physical space, session structure, session materials, as well as the child’s behavior. As we implement these modifications we need to keep in mind the child’s maintaining factors, or factors which contribute to the maintenance of the problem (Klein & Moses, 1999). These include cognitive, sensorimotor, psychosocial and linguistic deficits.

We also need to choose our reward system wisely, since the most effective systems which facilitate positive change actually utilize intrinsic rewards (pride in self for one’s own accomplishments) (Kohn, 2001). We need to teach the child positive replacement behaviors in place of negative ones, with an emphasis on self-talk, critical thinking, as well as talking about the problem vs. acting out behaviorally.

Of course, it is very important that we utilize a team-based approach and include all the professionals involved in the child’s care, including the child’s parents, in order to ensure smooth and consistent carryover across all settings. Consistency is definitely a huge part of all behavior plans, as it optimizes intervention results and achieves the desired therapy outcomes.

So the next time the client on your caseload is acting out don’t be so hasty in judging their behavior, when you have no idea regarding the reasons for it. Troubleshoot using appropriate and relevant steps in order to figure out what is REALLY going on and then attempt to change the situation in a team-based, systematic way.

For more detailed information on the topic of social pragmatic language assessment and behavior management in speech pathology see if the following Smart Speech Therapy LLC products could be of use:

 

References: 

  1. Bobrow, A. (2002). Problem behaviors in the classroom: What they mean and how to help. Functional Behavioral Assessment, 7 (2), 1–6.
  2. Chandler, L.K., & Dahlquist, C.M. (2006). Functional assessment: Strategies to prevent and remediate challenging behavior in school settings (2nd ed.). Upper Saddle River, NJ: Pearson Education.
  3. Klein, H., & Moses, N. (1999). Intervention planning for children with communication disorders: A guide to the clinical practicum and professional practice (2nd ed.). Boston, MA: Allyn & Bacon.
  4. Kohn, A. (2001, Sept). Five reasons to stop saying “good job!”. Young Children. Retrieved from http://www.alfiekohn.org/parenting/gj.htm

It’s all about RtI!

Today I am excited to review one of the latest products from Busy Bee Speech: “Common Core Standards-Based RtI Packet for Language”.

So what is RtI or Response to Intervention?

Developed as an alternative to the ability–achievement “discrepancy model,” which requires children to show a discrepancy between their IQ and standardized tests/grades, RtI is a method of academic intervention that aims to prevent academic failure by providing early, systematic, school-based assistance to children who are having difficulty learning, along with frequent progress measurement and increasingly intensive research-based instructional interventions for children who continue to struggle.

In contrast to a number of schools in my state (New Jersey), RtI is currently not utilized in my unique setting (an outpatient specialized school in a psychiatric hospital).


Wintertime Wellness Product Swap and Giveaway

Today I am doing a product swap and giveaway with Rose Kesting of Speech Snacks. Rose runs a fun and unique blog. In her posts she combines her interest in nutrition and healthy cooking with her professional knowledge as a speech-language pathologist. I’ve collaborated with Rose in the past on a variety of projects and have always been impressed with the quality of her speech and language products, which are typically aimed at language remediation for older children (upper-elementary, middle school and high school ages).


Part III: Components of Comprehensive Dyslexia Testing – Reading Fluency and Reading Comprehension


Recently I began writing a series of posts on the topic of comprehensive assessment of dyslexia.

In part I of my post (HERE), I discussed common dyslexia myths as well as general language testing as a starting point in the dyslexia testing battery.

In part II I detailed the next two steps in dyslexia assessment: phonological awareness and word fluency testing (HERE).

Today I would like to discuss part III of comprehensive dyslexia assessment, which covers reading fluency and reading comprehension testing.

Let’s begin with reading fluency testing, which assesses the students’ ability to read word lists or short paragraphs with appropriate speed and accuracy. Here we are looking for how many words the student can accurately read per minute orally and/or silently (see several examples  of fluency rates below).

Research indicates that oral reading fluency (ORF) on passages is more strongly related to reading comprehension than ORF on word lists. This is an important factor which needs to be considered when it comes to oral fluency test selection.

Oral reading fluency tests are significant for a number of reasons. Firstly, they allow us to identify students with impaired reading accuracy. Secondly, they allow us to identify students who can decode words with relative accuracy but who cannot comprehend what they read due to significantly decreased reading speed. When you ask such children, “What did you read about?” they will frequently respond: “I don’t remember because I was so focused on reading the words correctly.”
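The metric behind these measures, words correct per minute (WCPM), is simple arithmetic; a quick sketch with hypothetical numbers:

```python
def wcpm(words_correct, seconds):
    """Words correct per minute: the core oral reading fluency metric."""
    return words_correct * 60 / seconds

# Hypothetical: a student reads 95 words correctly in a 76-second passage.
print(round(wcpm(95, 76)))  # 75 WCPM
```

The resulting rate is then compared against grade-level fluency norms to decide whether speed, rather than accuracy, is what is holding comprehension back.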

One example of a popular oral reading fluency test (employing reading passages) is the Gray Oral Reading Tests-5 (GORT-5). It yields scores on the student’s:

  • Rate
  • Accuracy
  • Fluency
  • Comprehension
  • Oral Reading Index (a composite score based on Fluency and Comprehension scaled scores)

Another type of reading fluency test is the test of silent reading fluency. Assessments of silent reading fluency can be selectively useful for identifying older students with reading difficulties and monitoring their progress. One obvious advantage of silent reading tests is that they can be administered in a group setting to multiple students at once and generally take just a few minutes to administer, which is significantly less time than oral reading measures take when administered to individual students.

Below are several examples of silent reading tests/subtests.

The Test of Silent Word Reading Fluency–Second Edition (TOSWRF-2) presents students with rows of words, ordered by reading difficulty, printed without spaces (e.g., dimhowfigblue). Students are given 3 minutes to draw a line between the boundaries of as many words as possible (e.g., dim/how/fig/blue).

The Test of Silent Contextual Reading Fluency–Second Edition (TOSCRF-2) presents students with text passages printed entirely in uppercase letters, with no spaces between words and no punctuation between sentences, and asks them to use dashes to separate as many words as possible within a 3-minute period.

Similar to the TOSCRF-2, the Contextual Fluency subtest of the Test of Reading Comprehension–Fourth Edition (TORC-4) measures the student’s ability to recognize individual words in a series of passages (taken from the TORC-4’s Text Comprehension subtest) in a period of 3 minutes. Each passage, printed in uppercase letters without punctuation or spaces between words, becomes progressively more difficult in content, vocabulary, and grammar. As students read the segments, they draw a line between as many words as they can in the time allotted (e.g., THE|LITTLE|DOG|JUMPED|HIGH).

However, it is important to note that oral reading fluency is a better predictor of reading comprehension than silent reading fluency for younger students (early elementary age). Silent reading measures are more strongly related to reading comprehension in middle school (e.g., grades 6-8), but only for skilled rather than average readers, which is why oral reading fluency measures are probably much better predictors of deficits in this area for children with suspected reading disabilities.

Now let’s move on to reading comprehension testing, which is an integral component of any dyslexia testing battery. Unfortunately, it is also the trickiest. Here’s why.

Many children with reading difficulties will be able to read and comprehend short paragraphs containing factual information of reduced complexity. However, this changes dramatically when it comes to the comprehension of longer, more complex, and increasingly abstract age-level text. While a number of tests do assess reading comprehension, none of them truly adequately assess the student’s ability to comprehend abstract information.

For example, on the Reading Comprehension subtest of the CELF-5, students are allowed to keep the text and refer to it when answering questions. This option will inflate students’ scores and fail to provide an accurate picture of their comprehension abilities.

To continue, the GORT-5 contains reading comprehension questions, which the students need to answer after the stimulus booklet has been removed from them. However, the passages are far more simplistic than the academic texts the students need to comprehend on a daily basis, so students may do well on this test yet still present with significant comprehension deficits.

The same could be said for the text comprehension components of major educational testing batteries, such as the Woodcock-Johnson IV Passage Comprehension subtest, which gives the student sentences with a missing word and asks the student to orally provide that word. However, filling in a missing word does not text comprehension make.

Likewise, the Reading Comprehension subtest of the Wechsler Individual Achievement Test–Fourth Edition (WIAT-IV) is very similar to the CELF-5: the student is asked to read a passage and answer questions by referring back to the text. However, just because a student can look up the answers in the text does not mean that they actually understand the text.

So what can be done to accurately assess the student’s ability to comprehend abstract grade-level text? My recommendation is to go informal. Select grade-level passages from the student’s curriculum pertaining to science, social studies, geography, etc. (rather than language arts, which tends to be more simplistic), and ask the student to read them and answer factual questions regarding supporting details as well as non-factual questions relevant to main ideas and implied messages.

Posted on Leave a comment

Spotlight on Syndromes: An SLPs Perspective on 22q Deletion Syndrome

Today’s guest post on genetic syndromes comes from Lauren Laur, who is contributing a post on the 22q11.2 Deletion Syndrome.

22q11.2 Deletion Syndrome is a syndrome of many names. Also known as Velocardiofacial Syndrome, Shprintzen Syndrome, and DiGeorge Syndrome, 22q11.2 Deletion Syndrome is caused by a microdeletion on the long arm of chromosome 22 (at location marker q11.2). This syndrome follows an autosomal dominant inheritance pattern (a child only needs to get the abnormal gene from one parent in order to inherit the disorder); however, only around 10% of cases are inherited; the majority of cases are due to a random mutation. Continue reading Spotlight on Syndromes: An SLPs Perspective on 22q Deletion Syndrome