This is the first in a two-part post about the new tests being administered through the Common Core. To find out more background on these tests, visit the Smarter Balanced Assessment Consortium.
It’s that season again, the one that used to involve lots of filling in of bubbles. This spring, Google is giggling all the way to the bank as our schools purchase carts full of Chromebooks to have their students take the new, Common Core-aligned computerized tests.
Reports have been filtering in from around the country, with tales of crying children, broken software and hardware, and lots of overworked IT guys. But I wondered how things were going locally, so I talked to a teacher from a shall-remain-unnamed local public school. (Not my daughter’s school; her class hasn’t taken the test yet because the district is worried about straining its network bandwidth, so it’s spreading out the pain.)
The tests are new, and this year they “don’t count,” which doesn’t mean that no data is being taken from the results. The data, in fact, will be very important. We the parents, however, will not get to see our children’s scores, nor will the scores be used to fire our beloved, hardworking teachers. Not yet, at least. The data being collected is supposedly going to improve the test itself, and from what my teacher-informant tells me, there’s room approximately the size of California for improvement.
Reliability of the test itself
This is the issue that, it seems, the state is most concerned about, but frankly, it’s the least of our worries. My informant tells me that there were questions that required an answer before the student could proceed, but the test offered no space in which to enter one, so the students were stuck. OK, that’s a simple software bug, but since teachers aren’t supposed to “help” the students in any way, kids like my very literal daughter would have just sat there, unable to move on.
There was no way for my informant to judge the quality of the content of the tests, but I’m sure we’ll find out that these tests have all the same problems as other standardized tests: multiple-choice questions for which there are two truly valid answers; deliberately misleading questions; fuzzily worded questions that don’t actually have a valid answer, etc. That’s par for the course in state-designed tests, and I really don’t know that there is a fix for it.
Appropriateness of the test for the age group
Frankly, I don’t think any standardized test should be administered to any child under the age of, say, 12, except in situations where you really need certain specific information. The very word “standardized” says it all: by creating a common standard, you end up judging seals by how well they climb trees.
That said, if we must test younger children, we can do two important things to make sure the test is appropriate:
1) Don’t make the test too long.
Let’s face it: even if an above-average 3rd-grader can sit through an 8-hour test spread over three days, most kids will suffer.
2) Don’t create a test that requires tools some kids might not have mastered.
For example, the old bubbles were a challenge for some kids, especially those who had trouble tracking their eyes from the booklet to the answer sheet.
This test fails miserably on both counts. This year’s test was shorter, and my informant said her 3rd-4th graders did OK, but she can’t imagine them hanging on for next year’s 8-hour test without some of them suffering terribly. Just because we adults have become office drones attached to our computers doesn’t mean our 8-year-olds need to be! If we really want to know their achievement level, why administer tests in a way that makes it impossible for them to do their best?
And then there’s the whole question of asking young children with varying degrees of familiarity with technology to use a computer with a trackpad, little tiny icons, and little tiny boxes they have to click in. Imagine the difference in speed between a well-off kid who owns her own iPad and a kid who has no computer in the home; this is clearly not fair and clearly not developmentally appropriate. By third grade, the number of hours of exposure in school is not enough to expect mastery of these physical skills from kids who don’t practice at home.
Digital educational design
I had a very bad feeling when it was announced that our tests would all be delivered by computer. Yes, there are some great aspects of this. No more tracking from booklet to answer sheet. No more one-test-fits-all since computers can adaptively offer questions at each student’s level. No more checking patterns of erasure after the teachers have had unmonitored access to the tests.
On the other hand, I started in digital educational design in the ’90s, creating the first online classroom materials for our local community college. The teacher I worked with on one project had learning disabilities and was a passionate advocate for his learning-disabled students. Instead of a paper textbook, he and I created a website with resizable text and audio versions of the text. (Since screen-reading software wasn’t advanced at the time, he recorded the whole thing!)
This experience made me keenly aware that online educational tools create very different challenges, and not everyone who is hired to design these tools is really qualified to do it. (I’ll save my rant about the quality of educational IT in general for another time!)
My teacher-informant reported a shocking first fact: her school had “chosen” not to let the students first take the tutorial that teaches them how to use the test environment. How is the state letting this be a choice? Obviously, any school administrator who looks at the enormous pile of curriculum they’re required to get through is going to try to “save” tutorial time for something else. But for the tests to be effective, each and every student should be required to work through the tutorial until s/he reaches a minimum standard of proficiency with the tools, and any student who can’t get up to speed on the tutorial should not be allowed to continue with the test.
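To make that concrete, here’s a minimal sketch, in Python, of the kind of gating I’m suggesting. Everything in it is my own invention for illustration: the 90% threshold, the task counts, and the function names are assumptions, not anything from the actual Smarter Balanced software.

```python
# A purely hypothetical sketch of tutorial gating: nothing here comes
# from the real test software; the threshold and names are made up.

PROFICIENCY_THRESHOLD = 0.9  # assumed: 90% of tool tasks done correctly

def tutorial_score(tasks_passed: int, tasks_total: int) -> float:
    """Fraction of tutorial tool tasks (clicking, typing, highlighting)
    the student completed successfully."""
    return tasks_passed / tasks_total

def may_start_test(tasks_passed: int, tasks_total: int) -> bool:
    """Let a student into the real test only after demonstrating
    minimum proficiency with the test environment's tools."""
    return tutorial_score(tasks_passed, tasks_total) >= PROFICIENCY_THRESHOLD

# A student who managed 7 of 10 tutorial tasks repeats the tutorial
# instead of sitting stuck at a test s/he can't physically operate.
print(may_start_test(7, 10))   # False -> back to the tutorial
print(may_start_test(10, 10))  # True  -> ready for the real test
```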
Such a requirement should be obvious to the people who designed the test, since (theoretically) we’re not designing these tests to prove that economically disadvantaged students are “stupid,” right? (Or are we?) You might think I’m exaggerating how much trouble these kids have with the technology. However, my informant’s students are largely not low-income, yet she reported a number of problems, most of which she was not allowed to help with:
- In the first part of the test, the students themselves are required to type their names in all caps (Chromebooks don’t have a Caps Lock key), an ID number that mixes numbers and letters, and a session passcode containing both 0’s and O’s in a typeface that makes them indistinguishable.
- And then there’s the use of icons with no text, one of my major pet peeves. Yes, there are those who think in pictures, and they all love Ikea’s instruction sheets. The rest of us, though, need language. I’ll let my informant describe what it was like to watch kids with varying levels of exposure to modern technology deal with this: “The kids don’t know the speaker icon is for hearing stuff. Some can’t read the directions. For example, they are given a paragraph and the directions are, ‘Highlight the sentence that is out of place.’ They don’t know that they are supposed to highlight a sentence. They are looking for the dot to click or the space to type something. AND I CAN’T TELL THEM they are supposed to highlight a sentence.” Cuz that would be helping, right? And God forbid we let teachers help… the kids might learn something.
Continued: Click here to read why the tests don’t test what we think they test, and why our expectations for this test really are unreasonable.