Run Item Analysis on a Test
1. Before you begin
- The test is deployed in a content area.
- The deployed test is listed on the Tests page (Tests, Surveys and Pools).
- A Grade Centre column exists for the test.
2. Example scenarios
- A multiple choice question was flagged for review on the item analysis page. More students in the Top 25% chose answer B than the correct answer A. You realise the correct answer was mis-typed during question creation. Edit the question and it's automatically re-marked.
- In a multiple choice question, you find that nearly equal numbers of students chose A, B and C. Examine the answer choices to determine whether they were too ambiguous, the question was too difficult, or the material was not covered in teaching.
- A question is flagged for review because it falls into the hard difficulty category. You examine it and confirm that it is genuinely difficult, but you keep it in the test because it's necessary to assess your module objectives.
3. Run Item Analysis on a Test
4. Test summary
5. Question statistics table
- In general, good questions have Medium (30% to 80%) difficulty and Good or Fair (greater than 0.1) discrimination values.
- Questions flagged for review typically have Easy (greater than 80%) or Hard (less than 30%) difficulty, or Poor (less than 0.1) discrimination values.
- Discrimination indicates how well a question differentiates between students who know the subject matter and those who do not. A question is a good discriminator when students who answer the question correctly also do well on the test. Questions are flagged for review if their discrimination value is less than 0.1 or is negative. A worked sketch of these statistics follows this list.
- Difficulty shows the percentage of students who answered the question correctly. Difficulty values can range from 0% to 100%. A high percentage indicates an easy question. Easy (greater than 80%) or hard (less than 30%) questions are flagged for review.
- High difficulty values do not guarantee high levels of discrimination.
- Graded Attempts: number of question attempts for which marking is complete. Higher numbers mean more reliable statistics.
- Average Score denoted with an * indicates that some attempts are not marked, so the average score might change after all attempts are marked.
- Std Dev measures how far the scores deviate from the average score. If the scores are tightly grouped (most values close to the average) the standard deviation is small. If the scores are widely dispersed (values far from the average) the standard deviation is larger.
- Std Error is an estimate of variability in a student’s score due to chance. The smaller the standard error number, the more accurate the measurement provided by the test question.
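To make these statistics concrete, here is a minimal Python sketch over hypothetical scores for a single 1-point question. The page does not state the exact formulas the tool uses, so two assumptions are made here: discrimination is taken as the Pearson correlation between question score and total test score, and the standard error is the common std dev / sqrt(n) estimate.

```python
import statistics

# Hypothetical sample data, illustrative only: per-student scores on one
# 1-point question (1 = correct, 0 = incorrect) and total test scores,
# index-aligned by student.
question_scores = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
total_scores = [92, 88, 45, 76, 51, 83, 95, 40, 70, 66]

# Difficulty: percentage of students who answered the question correctly.
difficulty = 100 * sum(question_scores) / len(question_scores)

# Discrimination, assumed here to be the Pearson correlation between
# question score and total test score (statistics.correlation needs
# Python 3.10+): positive when students who get the question right also
# do well on the test overall.
discrimination = statistics.correlation(question_scores, total_scores)

# Std Dev: how far the question scores deviate from their average.
std_dev = statistics.pstdev(question_scores)

# Std Error, taken here as std dev / sqrt(n); the tool's measurement-error
# estimate may be computed differently.
std_error = std_dev / len(question_scores) ** 0.5

print(f"Difficulty: {difficulty:.0f}% "
      f"({'Easy' if difficulty > 80 else 'Hard' if difficulty < 30 else 'Medium'})")
print(f"Discrimination: {discrimination:.2f} "
      f"({'Poor' if discrimination < 0.1 else 'Good/Fair'})")
print(f"Std Dev: {std_dev:.2f}, Std Error: {std_error:.2f}")
```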
6. View single question details
- Discrimination, Difficulty, Graded Attempts, Average Score, Std Dev, Std Error (described in section 5 above).
- Skipped: Number of students who skipped the question.
| Type of information | Question type |
| --- | --- |
| Number of students who selected each answer choice and distribution of those answers among the class quartiles. | Multiple Choice, True / False, Either / Or, Opinion Scale / Likert |
| Number of students who selected each answer choice (also includes the answers students chose from). | Matching, Fill in Multiple Blanks |
| Number of students who got the question correct, incorrect, or skipped it. | Calculated Formula, Fill in the Blank |
| Question text only. | Essay |
7. Answer distribution
- Top 25%: Number of students with total test scores in the top quarter of the class who selected the answer option.
- 2nd 25%: Number of students with total test scores in the second quarter of the class who selected the answer option.
- 3rd 25%: Number of students with total test scores in the third quarter of the class who selected the answer option.
- Bottom 25%: Number of students with total test scores in the bottom quarter of the class who selected the answer option.
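As an illustration of how these quartile counts are formed, here is a minimal Python sketch. The student names, scores and answer choices are hypothetical; the tool builds this distribution table for you automatically.

```python
# Hypothetical data, illustrative only: each student's total test score
# and the answer choice they selected on one multiple choice question.
students = [
    ("amy", 95, "A"), ("ben", 88, "A"), ("cat", 81, "B"), ("dan", 74, "A"),
    ("eve", 66, "B"), ("fay", 60, "A"), ("gus", 52, "C"), ("hal", 45, "B"),
]

# Rank students by total score (highest first) and split into quarters.
ranked = sorted(students, key=lambda s: s[1], reverse=True)
size = max(len(ranked) // 4, 1)
quartiles = ["Top 25%", "2nd 25%", "3rd 25%", "Bottom 25%"]

# Count how many students in each quarter selected each answer option.
distribution = {q: {} for q in quartiles}
for i, (name, score, choice) in enumerate(ranked):
    q = quartiles[min(i // size, 3)]
    distribution[q][choice] = distribution[q].get(choice, 0) + 1

for q in quartiles:
    print(q, distribution[q])
```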
8. Symbol legend
- Question might have changed after deployment: A part of the question changed since the test was deployed, so the question data might not be reliable. Attempts submitted after the question changed may have benefited from the change.
- Not all submissions have been graded: Appears for questions that require manual marking, e.g. essay questions. If a test contains an essay question with 50 student attempts, this indicator shows until someone marks all 50 attempts. The item analysis tool only uses attempts that have been marked at the time you run the report.
- (QS) and (RB): The question came from a Question Set or Random Block. With random question delivery, it's possible some questions get more attempts than others.
9. Multiple attempts, overrides and question edits
- If students take a test multiple times, only the last submitted attempt is analysed (see the sketch after this list). For example, a test allows three attempts and Kelly has completed two, with a third attempt in progress. Her third attempt counts toward her In Progress Attempts, and none of her previous attempts are included in the current item analysis data. When she submits the third attempt, subsequent item analyses will include it.
- Grade Centre overrides have no impact on item analysis.
- Changes to manual marks, question text, the correct answer choice, partial credit or points don't automatically update the analysis. Run it again to see the changes reflected in the data.
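The attempt-selection rule above can be sketched in a few lines of Python. The records and field names here are hypothetical, purely to mirror the behaviour described, not the tool's internal data model.

```python
from datetime import date

# Hypothetical attempt records, illustrative only:
# (student, status, submitted_on).
attempts = [
    ("kelly", "submitted", date(2017, 3, 1)),
    ("kelly", "submitted", date(2017, 3, 8)),
    ("kelly", "in_progress", None),  # excludes ALL of Kelly's data for now
    ("sam", "submitted", date(2017, 3, 5)),
]

# A student with an attempt still in progress contributes no data yet.
in_progress = {s for s, status, _ in attempts if status == "in_progress"}

# For everyone else, only the last submitted attempt is analysed.
analysed = {}
for student, status, submitted_on in attempts:
    if student in in_progress or status != "submitted":
        continue
    if student not in analysed or submitted_on > analysed[student]:
        analysed[student] = submitted_on

print(analysed)  # {'sam': datetime.date(2017, 3, 5)}
```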