Validating Test Questions
How many learners should be able to answer a question correctly before you can say the question is valid? How do you know when to rephrase a question or rework your instruction?
In response to that reader query, vendor Dave Buck (dbuck@visumllc.com) offers this advice: Give your test to 20 or more real learners -- not "alpha testers" early in the development process, not subject-matter experts. Then analyze results on test items, including multiple-choice, fill-in-the-blank and matching. Items that 50% or more of your real learners get wrong "require further scrutiny," says Buck, chief learning officer with Visum LLC, a Knoxville, Tenn., e-learning firm.
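That kind of item analysis is easy to automate. Here is a minimal Python sketch of Buck's 50%-wrong screen; the function name flag_hard_items and the list-of-dicts data layout are illustrative assumptions, not part of Buck's tooling:

```python
# Flag test items that 50% or more of learners answered incorrectly.
# responses: one dict per learner, mapping item id -> True/False (correct?).

def flag_hard_items(responses, threshold=0.5):
    """Return item ids whose share of wrong answers meets the threshold."""
    items = responses[0].keys()
    flagged = []
    for item in items:
        wrong = sum(1 for learner in responses if not learner[item])
        if wrong / len(responses) >= threshold:
            flagged.append(item)
    return flagged

# Example with 4 learners and 2 items: "q2" is missed by 3 of 4 (75%).
learners = [
    {"q1": True,  "q2": False},
    {"q1": True,  "q2": False},
    {"q1": False, "q2": True},
    {"q1": True,  "q2": False},
]
print(flag_hard_items(learners))  # ['q2']
```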
On multiple-choice items, do you get a nice scatter? That's bad. If, say, 25% select answer A, 25% take B, 25% C and 25% D -- it "implies that the learners are guessing," says Buck. "Check the question for clarity AND check whether your content provides the learner with the ability to answer this question." "But," Buck adds, "it could also be that they didn't study."
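One rough way to spot that even scatter in code is to check whether every option's share of responses sits near pure chance (1/k for k options). The sketch below assumes a 10% tolerance and a simple list-of-letters layout; both are illustrative choices, not Buck's method:

```python
from collections import Counter

def looks_like_guessing(answers, options="ABCD", tolerance=0.10):
    """Flag an item if every option's share is within `tolerance` of
    pure chance (1/k) -- a near-even scatter suggests guessing."""
    counts = Counter(answers)
    chance = 1 / len(options)
    total = len(answers)
    return all(abs(counts[o] / total - chance) <= tolerance
               for o in options)

# 40 responses spread almost evenly across A-D: flagged as likely guessing.
answers = ["A"] * 10 + ["B"] * 11 + ["C"] * 9 + ["D"] * 10
print(looks_like_guessing(answers))  # True
```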
A RIGHT WRONG ANSWER?
If more learners pick a particular wrong answer than your right answer, "check everything," says Buck. The wrong answer -- a "distracter" -- "might actually be correct, depending on how you worded the question," Buck adds. Or your content may have misled learners. In any case, pay attention. "Incorrect responses can really help you understand how your learners are interpreting your content," says Buck.
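A short Python sketch of that check follows; the function name distracters_beating_key and the response format are assumptions for illustration:

```python
from collections import Counter

def distracters_beating_key(answers, key):
    """Return any wrong options chosen more often than the keyed answer."""
    counts = Counter(answers)
    return [opt for opt, n in counts.items()
            if opt != key and n > counts[key]]

# 20 responses where distracter "C" outdraws the keyed answer "B":
# a signal to recheck the question wording, the key and the content.
answers = ["B"] * 6 + ["C"] * 9 + ["A"] * 3 + ["D"] * 2
print(distracters_beating_key(answers, key="B"))  # ['C']
```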
To find out why certain questions are "bad," Buck adds, "talk to your learners." Buck conducted "item analyses" for all the tests he gave as a college instructor. "When I'd find a 'bad' question and wasn't sure why it was bad," says Buck, "I'd show the stats to my students and ask the group why they answered the way they did." Sometimes they couldn't say. "It was clear," says Buck, "they hadn't practiced that learning objective -- just didn't study. Other times, however, students would indicate specific quotes from the textbook or their notes that led them to answer as they did." This showed Buck how to clarify content and revise questions and distracters.
THE 100% SUSPECT
In tests you give after you've delivered your content, questions that 100% of learners answer correctly are suspect. It may be that the way the question is written, "anyone would select the correct answer, even without training," says Buck.
To check your test items for this problem, include the items in a pre-test -- "without providing feedback," warns Buck, "so you don't influence the post-test." Items that 75% of learners or more get right before training "might not be assessing your learning objectives at all," says Buck, "or it shows many of your learners don't need the training in the first place." On the other hand, he says, "If performance on an item is low on a pre-test and 100% on a post-test, then you can be pretty sure your training was a success."
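Buck's pre-test/post-test comparison can be sketched in a few lines of Python. The thresholds 75% and 100% come from the article; reading "low" pre-test performance as under 50%, and the function and bucket names, are my assumptions:

```python
def classify_items(pre_pct, post_pct):
    """Bucket items using Buck's thresholds: >=75% correct on the
    pre-test is suspect; low pre-test plus 100% post-test suggests
    the training caused the gain.
    pre_pct / post_pct map item id -> fraction answering correctly."""
    report = {}
    for item in pre_pct:
        pre, post = pre_pct[item], post_pct[item]
        if pre >= 0.75:
            report[item] = "suspect: most learners pass before training"
        elif post == 1.0 and pre < 0.5:  # "low" read as under 50%
            report[item] = "good: training likely caused the gain"
        else:
            report[item] = "ok: no strong signal either way"
    return report

pre  = {"q1": 0.80, "q2": 0.30, "q3": 0.55}
post = {"q1": 0.95, "q2": 1.00, "q3": 0.70}
for item, verdict in classify_items(pre, post).items():
    print(item, "->", verdict)
# q1 -> suspect: most learners pass before training
# q2 -> good: training likely caused the gain
# q3 -> ok: no strong signal either way
```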
Good online training software should have a feature that analyzes test questions, says Buck, whose firm's Online Training System helps users create e-learning. Prices start at $10.80 per user per year for up to 500 users, plus a $1,000 setup charge if the total annual user fee is less than $10,000.
Copyright 2001 VNU Business Media
vnuLearning
Information posted by: Steven Larson, 11/7/2001.