<h1>Steps to Creating More Valid (and Less Stressful) Exams</h1>
<p>Published February 25, 2021; last modified March 11, 2021.</p>

<h3>Test blueprints &nbsp;|&nbsp; Partial credit &nbsp;|&nbsp; Sequencing &nbsp;|&nbsp; Question writing</h3>

<h3><strong>Use test blueprints</strong></h3>

<p>The first and perhaps most important recommendation for constructing exams is to decide what should be on an exam before you start writing it. What content should the exam cover? What skills should students demonstrate? What fraction of your test should require higher-order thinking, and what fraction should measure basic knowledge?</p>

<p>To clarify the answers to these questions, it is good practice to construct a table of specifications (also known as a test blueprint) to guide your test writing (Brame, 2019; see Chapter 12). A test blueprint characterizes the distribution of points across the content areas and types of thinking you want the test to target.</p>

<p>It can be easiest to start with a distribution of points across content areas. The table shows one way to do that, with the number of points corresponding to the amount of time spent in class. The test blueprint can then be expanded to include the types of thinking you want the students to do within each content area. Then, as you think about questions, it's helpful to consider your learning objectives for each content area (example learning objectives for the content areas in the table can be seen here).</p>

<p>The biggest benefit of a test blueprint is that it can help you stay on track as you write the test, making sure that you test the content and ways of thinking that matter to you. It can also be a good way to communicate with your students. Sharing the test blueprint, or an abbreviated form of it, with them can help them prepare and reduce their uncertainty about what to expect.</p>

<h3><strong>Include opportunities for partial credit that are efficient to grade</strong></h3>

<p><strong>Question clusters.</strong> In a question cluster, students are given an initial prompt, such as a graph or a scenario, and are then asked multiple short questions about it. These questions can be multiple choice or short answer; the key is that each has an answer that students can provide independently of the other questions and that you can grade efficiently. Two examples are given below.</p>

<figure><figcaption>Example: question cluster from a physics course. Courtesy of Shane Hutson.</figcaption></figure>


<figure><figcaption>Example: question cluster from a chemistry class. Courtesy of Katie Clements.</figcaption></figure>

<p>When creating question clusters, it's important that mistakes students make on the first or second question don't cascade into incorrect answers on all of the later questions. Ideally, the questions in a cluster should be independent of each other; if that's not possible, it's wise to grade the later questions in light of the student's first false assumption, giving credit for reasoning that follows logically from it (while clearly identifying for students the problematic thinking in the earlier question).</p>

<p><strong>Multiple true/false.</strong></p>

<figure><figcaption>Example: multiple true/false from a biochemistry course. Courtesy of Cynthia Brame.</figcaption></figure>

<p>Another way to create opportunities for partial credit is to use &#8220;choose all correct&#8221; questions, which might more accurately be regarded as multiple true/false items. That is, the student can evaluate each alternative as true or false relative to the stem, and you can grade each judgment as such. Thus a student could earn from 0 to 5 points on a question with 5 alternatives. These items have additional benefits: they are often easier for the instructor to write, and Brian Couch and colleagues have shown that they more effectively reveal partial understanding. Two examples are shown, one from a biochemistry course and one from a physics course.</p>

<figure><figcaption>Example: multiple true/false from a physics course. Courtesy of Shane Hutson.</figcaption></figure>

<p><strong>Multiple choice questions with more and less correct distractors.</strong> Some faculty provide opportunities for partial credit in multiple choice questions by offering distractors (that is, incorrect answers) that are more and less correct. For example, if the one best answer for a multiple choice question is a particular dehydrogenase, then a distractor that named another dehydrogenase might be worth partial credit, whereas a distractor that named a different type of enzyme might be worth no credit. In another example, if the question required computation, then a distractor that reflected a sign error might be worth partial credit, whereas a distractor with the wrong digits might be worth no credit.</p>


<h3><strong>Sequence questions intentionally</strong></h3>

<p>Exam questions can be sequenced in the order in which topics were discussed in class, by difficulty, or randomly. While a recent review of the literature on psychology exams indicated that ordering did not impact overall student performance, it did find that some studies reported cognitive and/or affective benefits of intentional question sequencing (Hauck et al., 2017). Specifically, ordering questions by lecture sequence can improve student performance, and sequencing questions from easier to harder can improve student confidence during and after test-taking (Weinstein and Roediger, 2010; Jackson and Greene, 2014). To improve the student experience during test-taking, it therefore makes sense to present exam questions in an intentional order.</p>


<h3><strong>Use good practice in question writing</strong></h3>

<p>Of course, aligning the exam with what you have taught and intend to assess is only half the battle; the other half is writing questions that are valid measures of your students&#8217; knowledge. If you use <strong>multiple choice</strong> questions, the key is to write questions that 1) don&#8217;t give unintentional clues to the correct answer and 2) assess the desired skill and knowledge rather than something else, such as working memory or non-target vocabulary. This guide provides detailed guidance, but in essence, you can follow these rules:</p>

<p>The stem (that is, the question or prompt) should:</p>