From reading social media posts to decoding emojis in texts, we learn and gather information in ways that didn't exist 15 years ago. Content comes at us in multiple forms, all requiring fluency in reading different kinds of texts. Reading is now a multilayered affair.

Yet testing a child's reading ability remains a linear experience. We ask them to read a paragraph, maybe two, and answer multiple-choice questions on a printed page. That's hardly the way people read, learn and process information to make decisions today.

A team of ETS researchers led by John Sabatini and Tenaha O'Reilly believes there is a better way to see whether a student can comprehend what they read, no matter the format. Their work is reshaping how we measure a student's reading proficiency while challenging the very idea of the role a test can play.

Goal-oriented testing:

“In the real world, when you go to buy a cell phone, maybe [choosing] between an Android® phone or an iPhone, you have a goal,” and that goal serves as motivation, O’Reilly says. You want to learn as much as you can before buying a cell phone so you don’t make a wrong choice, such as spending too much money or buying a phone that won’t link to your current devices.

“You go on the Internet to compare prices and features, read reviews, and then make a decision,” he says. “That is a different thing than going to a test and answering detailed questions.”

O’Reilly, Sabatini and their research team have spent the past five years studying how students across 28 states absorb and digest what they read, administering over 100,000 pilot tests during that time as part of their research. The ETS team is one of six teams funded by the federal Reading for Understanding (RfU) research initiative, meant to bring reading intervention and assessment into the 21st century.

Some teams are looking at how students learn to read; others are looking at ways to support struggling readers in the digital age. The ETS research team is studying how to measure how well students read and understand the information they encounter in their everyday lives, not just what appears on a test.

Their findings? That assessments can be transformed into an opportunity for learning and discovery, rather than a snapshot and a score.

A test, a lesson:

Consider this sample task from a middle school task set developed as part of RfU: students are asked to build a website about green schools. To do so, they need to understand what green schools actually are, how they differ from conventional schools, and the pros and cons of each. Rather than choosing among multiple-choice options, students write a summary, almost an extended outline, that includes an explanation of a green school, the materials used in a green building, and the overall benefits.

To find what they need for their answer, students search a simulated Internet within the testing environment. The software limits the material available, so students are doing real research, but within constraints. Rather than filling in a list of bubbles, students decide which information is relevant and how to structure their written answer. The summary still shows whether a student read and learned the material.

“We are trying to get assessments to be learning experiences,” O’Reilly says. “We want the test to be worthwhile.”

Stress reliever:

The best learning happens when students are allowed to make mistakes: rather than worrying about being penalized for errors, they learn from and build on them. The pilot tests the ETS research team has developed for RfU build in supports that model good habits of mind and allow students to fix mistakes during the test, O'Reilly says.

For example, when students choose an answer and then learn more information, they are asked whether, given these new details, they would like to revise their answer. Or students may be asked to write a summary, which can be difficult for some children. Those struggling with this task might encounter a virtual peer, someone who can act as a guide during the assessment. Students write their own summary, then look at the virtual student's work. The virtual peer thus models the process, showing students what needs to be done along with an example of how the completed work should look.

“In the real world you collaborate and work with people,” Sabatini says. “We’ve added simulated students who come in at different points to help you out. This reduces stress and makes the testing experience more social.”

Sabatini and O’Reilly believe these changes in the way assessments are constructed could change not only a final score, but the way educators approach testing in the first place.

“Everyone makes errors,” says O’Reilly. “But how we recover from those errors really matters.”

Time for an evolution: 

At a time when some parents, and in some cases teachers, are opting students out of standardized assessments because they aren't convinced of the tests' usefulness, many administrators are looking for ways to change the thinking around assessment.

The ETS team believes the next evolutionary step in reading testing is making tests more like learning experiences: instructionally relevant, and giving educators a better window into whether students understand what they read. "How do you make a test feel a little less like a test?" O'Reilly asks. "How do you make the experience less stressful and the test more worthwhile overall? These are our research team's higher-level aims."

This article was originally published on Educational Testing Service's Open Notes by Lauren Barack.
