Standardized Testing

We must be careful not to discourage our twelve-year-olds by making them waste the best years of their lives preparing for examinations.

Freeman Dyson

During my primary and secondary schooling, I moved around a lot. I also had a lot of elective interests. Going into my sophomore year, I learned that my high school had changed its scheduling without warning (part of a larger change in administration): since I had taken several arts electives my first year, I would not be able to fit in all the classes I needed. So, I ended up taking some dual-credit classes at the local community college.

Before I could sign up for the classes I needed (mostly foreign-language classes), I had to take their standardized placement exam to determine whether I needed remediation in any subject areas. The exam covered the usual standardized testing areas: math, reading comprehension, writing, and so on. It was taken on a computer and graded electronically, except for the writing portion, which was scored by a human grader. I was required to take every section of it, even though passing any of them was not a prerequisite for the classes I wanted to take. I could literally have done nothing on the exam, gotten a flat 0 on everything, and taken my classes without issue.


Before I finish my story, I’d like to provide some context.

There is a growing body of research showing that standardized testing does not actually demonstrate college readiness. Often, the actual tasks we are required to do within a discipline (inside or outside of academia) bear little to no resemblance to a paper test, especially one in any of the standard formats. Even the companies that create and administer these tests have begun to admit that the correlation between success on their exams and success in an academic field is tenuous. The strongest predictor of collegiate success remains GPA.

One of the key considerations within student assessment (or any assessment) is simply: “does the test measure the thing that it is supposed to measure?” This is part of what researchers would call “construct validity.” However, it is often hard for people to understand why these things can be difficult to measure, and how a standardized test can fail so badly. With that context, I’ll finish my story.


At this point in time, I’ve written a lot of papers: two master’s theses, technical documents, research papers, and several proposed chapters of my dissertation. I’ve been a strong writer for as long as I can remember.

The test I took back then was the ACCUPLACER, which is provided by the same company that administers the SAT. The ACCUPLACER is intended to measure collegiate-level readiness in different subjects. As I mentioned before, it closely resembles the SAT in content.

I failed the writing section.

… and I didn’t just fail: I got a flat 0.

You might be wondering why I scored so badly. Perhaps it was a fluke, or a technical error. Maybe I goofed off and didn’t take it seriously.

The reality is that I knew I was going to fail it the moment I read the prompt. The grade came as a shock, to be sure, but I knew I was in trouble. And I felt helpless as I tried to compose my essay.

The problem? I didn’t understand the question. It wasn’t a reading comprehension issue; I simply didn’t have enough prior knowledge and context to address the prompt.

The essay I was given to write was an argumentative essay. It’s ironic to me that later that same year I would go on to take third place in an international oratory competition; argumentative writing is not a struggle for me. The prompt, however, was: should a person be allowed to work two full-time jobs?

Likely, as you read this, you have thoughts on the subject. However, cast yourself back to when you were 15 years old. At that point in time, did you even know what a full-time job was? What it fully entailed?

At that point in my life, I had no basis for understanding what a full-time job was. I was barely old enough to work any kind of job, and the jobs I held through high school were per-service engagements. Of course, you likely had a different experience.

And that’s the point! Everyone has different lived experiences. Standardized tests, especially for reading and writing, do not exist without context. Familiarity with different concepts, areas of vocabulary, or styles of writing changes from person to person, but none of that measures a person’s ability to learn or navigate the unfamiliar.

It also calls into question the evaluation of the assessment. I wrote a very, very strict five-paragraph argumentative essay in response to the prompt, making sure to follow the form as exactly as I could. My hope was that even if I knew nothing about the subject, I could at least demonstrate that I could write a cogent essay using an established structure. I took the argumentative position of “I don’t know,” developed three supporting reasons why, built an introduction, elaborated on my supporting reasons, then reiterated everything in a conclusion.

That didn’t help me. Despite making clear efforts to demonstrate my understanding of writing practices, the content was not acceptable, and so I failed. It raises the question: in grading me on my understanding (or lack thereof) of the subject matter, was I being tested on my ability to write at a collegiate level? Or was I being tested on something else?


I stand by the decisions of the many universities that have begun ignoring standardized test scores. In my experience within academia, the scores aren’t predictive of anything significant that we can’t determine better from other sources. They do nothing but add stress and cost to students’ college application process.