Testing terminology: a general quiz

Multiple-choice exercise

Choose the best answer for each question.

  1. Benchmarking is:
    1.   ranking students' performance against a set of criteria.
    2.   the use of one student to compare the performance of others.
    3.   establishing a set of usable marking criteria.
    4.   the use of a few test scripts to standardise marking.
  2. Achievement tests are:
    1.   tests of general ability to learn language.
    2.   tests directly related to the language course the learners have followed.
    3.   tests designed to influence the teaching programme.
    4.   tests to measure what learners know and don't know.
  3. The Cambridge First Certificate examination is:
    1.   an achievement test.
    2.   a proficiency test.
    3.   a performative test.
    4.   a diagnostic test.
  4. True score refers to:
    1.   the learner's score minus an amount for guessing correctly.
    2.   a theoretical measurement of a learner's score excluding any problems of reliability.
    3.   the learner's total score without any subjective marking judgments.
    4.   the score measured as the difference from the mean score of all the test takers.
  5. Backwash is:
    1.   the effect of a test on the learning / teaching process.
    2.   the effect of testing on teacher competence.
    3.   the effect of testing on learner performance.
    4.   the effect of teaching on test design.
  6. Holistic scoring means:
    1.   marking items independently.
    2.   judging on the basis of an overall impression.
    3.   assessing by direct testing.
    4.   adding all the scores together.
  7. Integrative testing is another description of:
    1.   holistic testing.
    2.   analytic testing.
    3.   discrete-point testing.
    4.   direct testing.
  8. What is the mean score of 18, 20, 22, 24 and 26?
    1.   25
    2.   21
    3.   23
    4.   22
  9. What is the guess ratio for a multiple-choice test with 5 possible answers to each question?
    1.   25%
    2.   33%
    3.   30%
    4.   20%
  10. A multiple-choice test contains:
    1.   a choice of true or false.
    2.   distractors and a common core question.
    3.   a stem and a number of distractors.
    4.   a rubric and some distractors.
  11. Direct testing differs from discrete-point testing because:
    1.   the former gets the learner to undertake the skill being tested, while the latter attempts to test the underlying skills.
    2.   the former attempts to test the underlying skills while the latter gets the learner to undertake the skill being tested.
  12. Face validity is a measure of:
    1.   a subjective judgement of a test's fairness.
    2.   how well we can describe what we are testing.
    3.   how well a test is designed.
    4.   how well a test actually targets the desired skills.
  13. If 40 out of 100 students get an answer right, that item has a value of 0.4. This is a measure of:
    1.   facility value.
    2.   easiness.
    3.   standard deviation.
    4.   usefulness.
  14. Aptitude testing is:
    1.   assessing general cognitive ability.
    2.   assessing communicative success.
    3.   assessing how well learners will be able to acquire the targets.
    4.   assessing intelligence.
  15. If a test is reliable, this means that:
    1.   the test will have a high facility ratio.
    2.   the results will be a valid measure of a test-taker's ability in the skill we are testing.
    3.   the test will be objective.
    4.   the results will be comparable regardless of where and when the test is taken.
  16. Validity is a measure of:
    1.   how well we can describe the abilities we are testing.
    2.   how the test will parallel results of other tests.
    3.   how fair a test is.
    4.   how well a test measures what it is intended to measure.
  17. Criterion referencing is:
    1.   measuring performance based on overall communicative success.
    2.   measuring performance against a benchmarked student.
    3.   measuring performance against a range of predetermined criteria.
    4.   choosing the most useful criteria when standardising test markers.
  18. Analytic scoring involves:
    1.   adding up the marks to get an overall picture.
    2.   scoring a mark for each component of a task.
    3.   breaking down the scores to produce a histogram.
    4.   scoring for an overall impression.
  19. Unique answer items have:
    1.   only one possible right answer.
    2.   only true or false answers to select from.
    3.   only three correct answers in a set of four possible ones.
    4.   no equivalents elsewhere in the test.
  20. Paraphrase test items require the learner to:
    1.   correct what they read or hear.
    2.   summarise what they read or hear.
    3.   re-express what they hear or read in a different form.
    4.   re-express what they hear or read in their own words.
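
If you want to check your working on the three arithmetic items (questions 8, 9 and 13) after attempting the quiz, the short Python sketch below shows one way to compute the mean score, the guess ratio and the facility value. It prints the answers, so try the items first; the function names are illustrative only and are not part of the original exercise.

```python
def mean_score(scores):
    """Arithmetic mean of a set of test scores (question 8)."""
    return sum(scores) / len(scores)


def guess_ratio(num_options):
    """Chance of guessing a multiple-choice item correctly (question 9)."""
    return 1 / num_options


def facility_value(num_correct, num_takers):
    """Proportion of test takers who answer an item correctly (question 13)."""
    return num_correct / num_takers


if __name__ == "__main__":
    print(mean_score([18, 20, 22, 24, 26]))   # 22.0
    print(f"{guess_ratio(5):.0%}")            # 20%
    print(facility_value(40, 100))            # 0.4
```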