Testing terminology: a general quiz

Multiple-choice exercise

Choose the best answer for each question.

  1. Validity is a measure of:
    1.   how the test will parallel results of other tests.
    2.   how well a test measures what it is intended to measure.
    3.   how well we can describe the abilities we are testing.
    4.   how fair a test is.
  2. A multiple-choice test contains:
    1.   a rubric and some distractors.
    2.   distractors and a common core question.
    3.   a stem and a number of distractors.
    4.   a choice of true or false.
  3. Backwash is:
    1.   the effect of testing on learner performance.
    2.   the effect of testing on teacher competence.
    3.   the effect of teaching on test design.
    4.   the effect of a test on the learning / teaching process.
  4. Integrative testing is another description of:
    1.   analytic testing.
    2.   holistic testing.
    3.   discrete-point testing.
    4.   direct testing.
  5. Unique answer items have:
    1.   only true or false answers to select from.
    2.   no equivalents elsewhere in the test.
    3.   only one possible right answer.
    4.   only three correct answers in a set of four possible ones.
  6. Holistic scoring means:
    1.   judging on the basis of an overall impression.
    2.   assessing by direct testing.
    3.   marking items independently.
    4.   adding all the scores together.
  7. What is the guess ratio for a multiple-choice test with 5 possible answers to each question?
    1.   25%
    2.   20%
    3.   33%
    4.   30%
  8. The Cambridge First Certificate examination is a:
    1.   diagnostic test.
    2.   proficiency test.
    3.   performative test.
    4.   achievement test.
  9. What is the mean score of 18, 20, 22, 24 and 26?
    1.   22
    2.   21
    3.   25
    4.   23
  10. Face validity is a measure of:
    1.   how well a test is designed.
    2.   a subjective judgement of a test's fairness.
    3.   how well we can describe what we are testing.
    4.   how well a test actually targets the desired skills.
  11. Direct testing differs from discrete-point testing because:
    1.   the former attempts to test the underlying skills while the latter gets the learner to undertake the skill being tested.
    2.   the former gets the learner to undertake the skill being tested, while the latter attempts to test the underlying skills.
  12. Paraphrase test items require the learner to:
    1.   correct what they read or hear.
    2.   summarise what they read or hear.
    3.   re-express what they hear or read in a different form.
    4.   re-express what they hear or read in their own words.
  13. Criterion referencing is:
    1.   measuring performance against a benchmarked student.
    2.   measuring performance based on overall communicative success.
    3.   measuring performance against a range of predetermined criteria.
    4.   choosing the most useful criteria when standardising test markers.
  14. True score refers to:
    1.   the score measured as the difference from the mean score of all the test takers.
    2.   the learner's score minus an amount for guessing correctly.
    3.   a theoretical measurement of a learner's score excluding any problems of reliability.
    4.   the learner's total score without any subjective marking judgements.
  15. If a test is reliable, this means that:
    1.   the test will have a high facility ratio.
    2.   the results will be comparable regardless of where and when the test is taken.
    3.   the test will be objective.
    4.   the results will be a valid measure of a test-taker's ability in the skill we are testing.
  16. Achievement tests are:
    1.   tests of general ability to learn language.
    2.   tests to measure what learners know and don't know.
    3.   tests directly related to a particular language course.
    4.   tests designed to influence the teaching programme.
  17. Benchmarking is:
    1.   ranking students' performance against a set of criteria.
    2.   the use of one student to compare the performance of others.
    3.   establishing a set of usable marking criteria.
    4.   the use of a few test scripts to standardise marking.
  18. Analytic scoring involves:
    1.   scoring for an overall impression.
    2.   scoring a mark for each component of a task.
    3.   adding up the marks to get an overall picture.
    4.   breaking down the scores to produce a histogram.
  19. Aptitude testing is:
    1.   assessing intelligence.
    2.   assessing communicative success.
    3.   assessing how well learners will be able to acquire the targets.
    4.   assessing general cognitive ability.
  20. If 40 out of 100 students get an answer right, that item has a value of 0.4. This is a measure of:
    1.   easiness.
    2.   facility value.
    3.   standard deviation.
    4.   usefulness.