
Test Review: Test of Written Language-4 (TOWL-4)

Today, due to popular demand, I am reviewing the Test of Written Language-4 (TOWL-4). The TOWL-4 assesses the basic writing readiness skills of students 9;0–17;11 years of age. The test consists of two forms, A and B, which contain different subtest content.

According to the manual, the entire test takes approximately 60-90 minutes to administer and examines 7 skill areas. Only the “Story Composition” subtest is officially timed: the student is given 5 minutes to plan the story and then 15 minutes to write it. However, in my experience, each subtest takes approximately 10 minutes to complete even for students with mild-moderately impaired writing abilities who go on to score within the average range (can you see where I am going with this yet?).

For detailed information regarding the TOWL-4 development and standardization, validity and reliability, please see HERE. However, please note that the psychometric properties of this test are weak.

Below are my impressions (to date) of using this assessment with students between 11 and 14 years of age with (known) mild-moderate writing impairments.

Subtests:

1. Vocabulary – The student is asked to write a sentence that incorporates a stimulus word. The student is not allowed to change the word in any way, such as writing ‘running’ instead of ‘run’; if this occurs, points are automatically lost. The ceiling is reached when the student makes 3 errors in a row. While some of the subtest vocabulary words are perfectly appropriate for younger children (~9), the majority are too simplistic to assess the written vocabulary of middle and high schoolers. These words may work well to test the knowledge of younger children, but they do not take into account the challenging academic standards set forth for older students. As a result, students 11+ years of age may pass this subtest with flying colors but still present with a fair amount of difficulty using sophisticated vocabulary in written compositions.

2/3. Spelling and Punctuation (subtests 2 and 3) – These two subtests are administered jointly but scored separately. Here, the student is asked to write sentences dictated by the examiner using appropriate rules of spelling, punctuation, and capitalization. The ceiling for each subtest is reached separately, when the student makes 3 errors in a row on that subtest. In other words, if a student uses correct punctuation but incorrect spelling, his/her ceiling on the ‘Spelling’ subtest will be reached sooner than on the ‘Punctuation’ subtest, and vice versa. As with the ‘Vocabulary’ subtest, I feel that the sentences the students are asked to write are far too simplistic to showcase their “true” grade-level abilities.

The requirements of these subtests are also not too stringent. The spelling words are simple and the punctuation requirements are very basic: a question mark here, an exclamation mark there, with a few commas in between. But I was particularly disappointed with the ‘Spelling’ subtest. Here’s why. I have a 6th-grade client on my caseload with significant, well-documented spelling difficulties. When this subtest was administered to him, he scored within the average range (scaled score of 8; percentile rank of 25). However, an administration of the Spelling Performance Evaluation for Language and Literacy (SPELL-2) yielded 3 assessment pages of spelling errors, as well as 7 pages of recommendations on how to remediate those errors. Had he received this assessment as part of an independent evaluation from a different examiner, nothing more would have been done regarding his spelling difficulties, since the TOWL-4 revealed average spelling performance due to its focus on overly simplistic vocabulary.

4. Logical Sentences – The student is asked to edit an illogical sentence so that it makes better sense. The ceiling is reached when the student makes 3 errors in a row. Again, I’m not too thrilled with this subtest. Rather than truly attempting to ascertain the student’s grammatical and syntactic knowledge at the sentence level, a large portion of this subtest deals with easily recognizable semantic incongruities.

5. Sentence Combining – The student integrates the meaning of several short sentences into one grammatically correct written sentence. The ceiling is reached when the student makes 3 errors in a row. The first few items contain only two sentences, which can be combined by adding the conjunction “and”. The remaining items are a bit more difficult due to (a) the addition of more sentences and (b) an increase in the complexity of the language needed to combine them efficiently. This is a nice subtest to administer to students who have difficulty expressing their written thoughts effectively and efficiently on paper. It is particularly useful with students who include a lot of extraneous information in their compositions/essays and frequently overuse run-on sentences.

6. Contextual Conventions – The student is asked to write a story in response to a stimulus picture. S/he earns points for satisfying specific requirements relative to combined orthographic (e.g., punctuation, spelling) and grammatical conventions (e.g., sentence construction, noun-verb agreement). The student’s written composition needs to contain more than 40 words in order for effective analysis to take place.

The scoring criteria range from no credit, a score of 0 (3 or more mistakes), to partial credit, a score of 1 (1-2 mistakes), to full credit, a score of 3 (no mistakes). There are 21 scoring parameters, which are highly useful for younger elementary-aged students who may exhibit significant difficulties in the domain of writing. However, older middle school and high school aged students, as well as elementary-aged students with moderate writing difficulties, may attain average scores on this subtest but still present with significant difficulties in this area as compared to typically developing grade-level peers. As a result, in addition to this assessment, it is recommended that a functional assessment of grade-level writing also be performed in order to accurately identify the student’s writing needs.
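For readers who like to see the rubric laid out explicitly, the 0/1/3 credit scheme described above can be sketched as a tiny scoring function (a hypothetical illustration only; the function name is mine, and this is not the publisher’s actual scoring procedure):

```python
def contextual_conventions_score(mistakes: int) -> int:
    """Score one of the 21 parameters per the rubric described above:
    3 or more mistakes -> 0 (no credit),
    1-2 mistakes      -> 1 (partial credit),
    0 mistakes        -> 3 (full credit)."""
    if mistakes >= 3:
        return 0
    if mistakes >= 1:
        return 1
    return 3
```

So a composition with 2 noun-verb agreement errors would earn partial credit (1 point) on that parameter, while an error-free one would earn the full 3 points.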

7. Story Composition – The student’s story is evaluated relative to the quality of its composition (e.g., vocabulary, plot, development of characters, etc.). The examiner first provides the student with an example of a good story by reading one written by another student. Then, the examiner provides the student with an appropriate picture card and tells them that they need to take time to plan their story and make an outline on the (also provided) scratch paper. The student has 5 minutes to plan before writing the actual story. After the 5 minutes elapse, they have 15 minutes to write the story. It is important to note that Story Composition is the very first subtest administered to the student; once they complete it, they move on to the Vocabulary subtest. There are 11 scoring parameters, which I find significantly more useful with younger or significantly impaired students than with older students or students with mild-moderate writing difficulties. Again, if your aim is to get an accurate picture of an older student’s writing abilities, I definitely recommend using clinical writing assessment rubrics based on the student’s grade level.

OVERALL IMPRESSIONS:

Strengths:

  • A thorough assessment of basic writing areas for students with severe writing deficits
  • Flexible subtest administration (can be done on multiple occasions with students who fatigue easily)

Limitations:

  • Untimed test administration (with the exception of the Story Composition subtest) is NOT functional for students who present with significant processing difficulties. One 12-year-old student actually took ~40 minutes to complete each subtest.
  • Primarily useful for students with severe deficits in the area of written expression
  • Not appropriate for students with mild-moderate needs (requires supplementation)
  • Lack of remediation suggestions based on subtest deficits
  • Weak psychometric properties

Overall, the TOWL-4 can be a useful measure for ruling out weaknesses in the student’s basic writing abilities with respect to simple vocabulary, sentence construction, writing mechanics, punctuation, etc. If I identify previously unrecognized gaps in basic writing skills, I can then readily intervene where needed. However, it is important to understand that the TOWL-4 is only a starting point for most of our students with complex literacy needs whose writing abilities are above the severe level of functioning. Most students with mild-moderate writing difficulties will pass this test with flying colors but still present with significant writing needs. As a result, I highly recommend a functional grade-level writing assessment as a supplement to the above standardized testing.

References: 

Hammill, D. D., & Larson, S. C. (2009). Test of Written Language—Fourth Edition. (TOWL-4). Austin, TX: PRO-ED.

Disclaimer: The views expressed in this post are the personal impressions of the author. This author is not affiliated with PRO-ED in any way and was NOT provided by them with any complimentary products or compensation for the review of this product. 


In Search of Evidence in the Era of Social Media Misinformation

Tip: Click on the highlighted words for further reading.

Social media forums have long been subject to a variety of criticism related to the trustworthiness, reliability, and commercialization of their content. However, in recent years, misinformation has been spreading at a rate far outpacing the objective consumption of evidence. Facebook, for example, has long been criticized for the ease with which its members can actively promote and rampantly encourage the spread of misinformation on its platform.

To illustrate, one study found that “from August 2020 to January 2021, misinformation got six times more clicks on Facebook than posts containing factual news. Misinformation also accounted for the vast majority of engagement with far-right posts — 68% — compared to 36% of posts coming from the far-left.” Facebook has even admitted in the past that its platform is actually hardwired for misinformation. Nowhere is it easier to spread misinformation than in Facebook groups: a dubious claim made even in a relatively small group reaches a far wider audience than a claim made from one’s personal account. In the words of Nina Jankowicz, disinformation fellow at the Wilson Center, “Facebook groups are ripe targets for bad actors, for people who want to spread misleading, wrong or dangerous information.”


Articulation Carnival-App Review and Giveaway

Today I am reviewing a brand new app by the Virtual Speech Center, Articulation Carnival (requires iOS 7 or later; compatible with iPad). The Virtual Speech Center did a great job creating a fun app, where the kids get to go to a “carnival” and practice their articulation at the word, phrase, and sentence levels.

Much like all their other apps, this one is super easy to use and very intuitive to navigate, with a variety of options to boot. Applicable to children of all ages beginning at 2+ years, its phoneme targets include 20 pictures per phoneme per word position, as well as phrases and sentences. All phonemes are editable, which is a very convenient option for therapists who need to customize their clients’ phoneme lists based on the child’s present level of ability and needs.


Part III: Components of Comprehensive Dyslexia Testing – Reading Fluency and Reading Comprehension


Recently I began writing a series of posts on the topic of comprehensive assessment of dyslexia.

In part I of my post (HERE), I discussed common dyslexia myths as well as general language testing as a starting point in the dyslexia testing battery.

In part II I detailed the next two steps in dyslexia assessment: phonological awareness and word fluency testing (HERE).

Today I would like to discuss part III of comprehensive dyslexia assessment: reading fluency and reading comprehension testing.

Let’s begin with reading fluency testing, which assesses the student’s ability to read word lists or short paragraphs with appropriate speed and accuracy. Here we are looking at how many words the student can accurately read per minute, orally and/or silently (see several examples of fluency rates below).
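This rate is commonly reported as words correct per minute (WCPM). The arithmetic is simple; a minimal sketch (the function name and parameters are my own illustration, not taken from any test manual):

```python
def words_correct_per_minute(words_read: int, errors: int, seconds: float) -> float:
    """WCPM = (total words read - reading errors) / minutes elapsed."""
    minutes = seconds / 60.0
    return (words_read - errors) / minutes
```

For example, a student who reads 120 words with 6 errors in 90 seconds is reading at (120 - 6) / 1.5 = 76 WCPM.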

Research indicates that oral reading fluency (ORF) on passages is more strongly related to reading comprehension than ORF on word lists. This is an important factor which needs to be considered when it comes to oral fluency test selection.

Oral reading fluency tests are significant for a number of reasons. Firstly, they allow us to identify students with impaired reading accuracy. Secondly, they allow us to identify students who can decode words with relative accuracy but who cannot comprehend what they read due to significantly decreased reading speed. When you ask such children: “What did you read about?” They will frequently respond: “I don’t remember because I was so focused on reading the words correctly.”

One example of a popular oral reading fluency test (employing reading passages) is the Gray Oral Reading Tests-5 (GORT-5). It yields scores for the student’s:

  • Rate
  • Accuracy
  • Fluency
  • Comprehension
  • Oral Reading Index (a composite score based on Fluency and Comprehension scaled scores)

Another type of reading fluency test is the test of silent reading fluency. Assessments of silent reading fluency can be selectively useful for identifying older students with reading difficulties and monitoring their progress. One obvious advantage of silent reading tests is that they can be administered in a group setting to multiple students at once and generally take just a few minutes, significantly less than oral reading measures take when administered to individual students.

Below are several examples of silent reading tests/subtests.

The Test of Silent Word Reading Fluency – Second Edition (TOSWRF-2) presents students with rows of words, ordered by reading difficulty and printed without spaces (e.g., dimhowfigblue). Students are given 3 minutes to draw a line between the boundaries of as many words as possible (e.g., dim/how/fig/blue).

The Test of Silent Contextual Reading Fluency (TOSCRF-2) presents students with text passages with all words printed in uppercase letters, with no punctuation and no spaces between words or sentences, and asks them to use dashes to separate the words within a 3-minute period.

Similar to the TOSCRF-2, the Contextual Fluency subtest of the Test of Reading Comprehension – Fourth Edition (TORC-4) measures the student’s ability to recognize individual words in a series of passages (taken from the TORC-4’s Text Comprehension subtest) in a period of 3 minutes. Each passage, printed in uppercase letters without punctuation or spaces between words, becomes progressively more difficult in content, vocabulary, and grammar. As students read the segments, they draw a line between as many words as they can in the time allotted (e.g., THE|LITTLE|DOG|JUMPED|HIGH).
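Scoring these segmentation tasks essentially amounts to counting how many word boundaries the student marked in the right places. A hypothetical sketch of that comparison (the helper and its logic are my own illustration, not the tests’ published scoring rules):

```python
def correct_boundaries(response: str, key: str) -> int:
    """Count marked word boundaries in the student's response that match
    the answer key. '/' marks a boundary, e.g. 'dim/how/fig/blue'."""
    def positions(marked: str) -> set:
        # Boundary positions are character offsets into the unmarked string.
        found, seen_slashes = set(), 0
        for i, ch in enumerate(marked):
            if ch == "/":
                found.add(i - seen_slashes)
                seen_slashes += 1
        return found
    return len(positions(response) & positions(key))
```

For instance, a student who marks “dim/howfig/blue” against the key “dim/how/fig/blue” has found 2 of the 3 boundaries.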

However, it is important to note that oral reading fluency is a better predictor of reading comprehension than silent reading fluency for younger students (early elementary age). In contrast, silent reading measures are more strongly related to reading comprehension in middle school (e.g., grades 6-8), but only for skilled vs. average readers, which is why oral reading fluency measures are probably much better predictors of deficits in this area in children with suspected reading disabilities.

Now let’s move on to reading comprehension testing, which is an integral component of any dyslexia testing battery. Unfortunately, it is also the trickiest. Here’s why.

Many children with reading difficulties will be able to read and comprehend short paragraphs containing factual information of decreased complexity. However, this changes dramatically when it comes to the comprehension of longer, more complex, and increasingly abstract age-level text. While a number of tests do assess reading comprehension, none of them truly adequately assesses the student’s ability to comprehend abstract information.

For example, on the Reading Comprehension subtest of the CELF-5, students are allowed to keep the text and refer to it when answering questions. This option inflates students’ scores and does not provide an accurate idea of their comprehension abilities.

To continue, the GORT-5 contains reading comprehension passages with questions the students need to answer after the stimulus booklet has been removed. However, the passages are far more simplistic than the academic texts the students need to comprehend on a daily basis, so students may do well on this test yet still continue to present with significant comprehension deficits.

The same could be said for the text comprehension components of major educational testing batteries, such as the Woodcock-Johnson IV Passage Comprehension subtest, which gives the student sentences with a missing word that the student is asked to provide orally. However, filling in a missing word does not text comprehension make.

Likewise, the Reading Comprehension subtest of the Wechsler Individual Achievement Test® – Fourth Edition (WIAT-IV) is very similar to the CELF-5’s: the student is asked to read a passage and answer questions by referring back to the text. However, just because a student can look up the answers in the text does not mean that they actually understand it.

So what can be done to accurately assess the student’s ability to comprehend abstract grade-level text? My recommendation is to go informal. Select grade-level passages from the student’s curriculum pertaining to science, social studies, geography, etc. (vs. language arts, which tends to be more simplistic) and ask the student to read them and answer factual questions regarding supporting details, as well as non-factual questions relevant to main ideas and implied messages.


The reviews are in: Improving Social Skills in Children with Psychiatric Disturbances


Today I did a webinar on

Improving Social Skills in Children with Psychiatric Disturbances

Click below for the initial reviews of my live webinar:

http://www.speechpathology.com/slp-ceus/course/autism-asd-social-emotional-improving-social-skills-in-children-5414

+++++
or read below:
Average Rating 5 stars
well organized, gave a few examples of what presenter actually has used herself and/or put into practice
Her power point was clear and she was easy to listen to. I appreciated the corresponding resources very much.
++++-
Good review of strategies.
++++-
Information was well organized and clearly presented
++++-
Good info…wish it would have been more specific to certain populations (ADHD)
Well organized and informative.
It was very organized and easy to follow. She was incredibly informative and provided abundant resources!
+++++
practical/functional information to use in therapy
+++++
Extremely clear handout and great resource recommendations for therapy.
I appreciated her thoroughness in taking us from theory to therapy to materials and resources.
+++++
The information was given in a logical sequential order with data and some materials to use during therapy.
+++++
good content and presentation
The course content and presentation were informative, concise, and well organized.

What is practice effect?

The practice effect is the change in performance resulting from repeated testing. In other words, if a test is given to a child too soon, his/her performance may improve due to the practice effect (remembering the test items).

Why do we need to know about the practice effect? It’s important because, following retesting, we need to know whether the child’s performance has actually improved or whether the improvement is a result of retesting provided too soon.

When can the same test be readministered to a child? Major testing companies such as Pearson and PRO-ED use a 2-week interval between readministrations of developmental tests for the purposes of test-retest reliability. Why do they do that? Well, if you wait too long, they can’t tell “if differences in scores are due to the test being unreliable or due to developmental factors.”

So how long should we wait to readminister the same test to a child?

Review the test-retest reliability data in the manual. Based on those results, you will see that waiting more than two weeks (e.g., 3-4 weeks) following the original test administration should eliminate the practice effect. For more information regarding the practice effect in speech-language testing, click here.
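The waiting-period guidance above reduces to simple date arithmetic; a minimal sketch, assuming a two-week minimum interval as discussed (the function name and threshold parameter are my own illustration):

```python
from datetime import date, timedelta

def retest_interval_ok(last_test: date, planned_retest: date,
                       min_days: int = 14) -> bool:
    """True if the planned readministration falls MORE than min_days
    (default: two weeks) after the original test administration."""
    return (planned_retest - last_test) > timedelta(days=min_days)
```

So a readministration planned 3 weeks after the original session would pass this check, while one planned 9 days later would not.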


Guest Post: The Importance of Hearing Testing in Children

Today, two of my guest bloggers, Drs. Stella Fulman and Zhanneta Shapiro, explain the importance of pediatric hearing tests beyond the newborn screenings.

The importance of hearing testing isn’t widely understood by many parents. Parents may schedule appointments with an ophthalmologist or a dentist for their children at regular intervals, but never think to similarly schedule a hearing test with an audiologist. We tend to assume that if a child responds to our voice from across the room, their hearing must be fine. Jokingly, we say that if they don’t respond to calls for dinner, they should have their hearing checked, but we rarely follow up on this.


In case you missed it: Therapy Fun with Ready Made Spring Related Bingo

Back in late February, I did a guest post for Teach Speech 365. In case you missed it, I am running it again on my blog, since spring is now in full bloom!

Spring is here, and there are many fun therapy activities you can do with your preschool and school-aged clients during this time of year. Now, while many of my colleagues are great at creating their own therapy materials, I am personally not that handy. If you are like me, that’s perfectly okay, since there are plenty of free materials that you can find online and adapt for your speech-language purposes.

Making Friends, an online craft store, and Boggles World, an online ESL teacher resource, are two such websites; both have a number of ready-made materials, crafts, flashcards, and worksheets that can be adapted for speech-language therapy purposes. One of my personal favorites from both sites is bingo. I actually find it to be a pretty versatile activity, which can be used in a number of different ways in the speech room.

Let’s start with “Spring” bingo from the Making Friends website, since it’s well suited for preschool-aged children. The game comes with call-out cards and 12 printable 4×4 boards that can be printed on card stock or laminated.