Testing terminology: a general quiz

Multiple-choice exercise

Choose the best answer for each question.

  1. Direct testing differs from discrete-point testing because:
    1.   the former attempts to test the underlying skills while the latter gets the learner to undertake the skill being tested.
    2.   the former gets the learner to undertake the skill being tested, while the latter attempts to test the underlying skills.
  2. What is the mean score of 18, 20, 22, 24 and 26?
    1.   25
    2.   22
    3.   23
    4.   21
  3. Paraphrase test items require the learner to:
    1.   re-express what they hear or read in their own words.
    2.   re-express what they hear or read in a different form.
    3.   summarise what they read or hear.
    4.   correct what they read or hear.
  4. Benchmarking is:
    1.   the use of a few test scripts to standardise marking.
    2.   ranking students' performance against a set of criteria.
    3.   the use of one student to compare the performance of others.
    4.   establishing a set of usable marking criteria.
  5. Backwash is:
    1.   the effect of testing on teacher competence.
    2.   the effect of testing on learner performance.
    3.   the effect of teaching on test design.
    4.   the effect on the learning / teaching process of a test.
  6. If 40 out of 100 students get an answer right, that item has a value of 0.4. This is a measure of:
    1.   facility value.
    2.   easiness.
    3.   usefulness.
    4.   standard deviation.
  7. What is the guess ratio for a multiple-choice test with 5 possible answers to each question?
    1.   30%
    2.   20%
    3.   25%
    4.   33%
  8. True score refers to:
    1.   a theoretical measurement of a learner's score excluding any problems of reliability.
    2.   the learner's total score without any subjective marking judgments.
    3.   the learner's score minus an amount for guessing correctly.
    4.   the score measured as the difference from the mean score of all the test takers.
  9. Analytic scoring involves:
    1.   adding up the marks to get an overall picture.
    2.   breaking down the scores to produce a histogram.
    3.   scoring a mark for each component of a task.
    4.   scoring for an overall impression.
  10. Unique answer items have:
    1.   only one possible right answer.
    2.   only three correct answers in a set of four possible ones.
    3.   only true or false answers to select from.
    4.   no equivalents elsewhere in the test.
  11. A multiple-choice test contains:
    1.   a rubric and some distractors.
    2.   a stem and a number of distractors.
    3.   distractors and a common core question.
    4.   a choice of true or false.
  12. Validity is a measure of:
    1.   how the test will parallel results of other tests.
    2.   how fair a test is.
    3.   how well a test measures what it is intended to measure.
    4.   how well we can describe the abilities we are testing.
  13. Integrative testing is another description of:
    1.   discrete-point testing.
    2.   analytic testing.
    3.   direct testing.
    4.   holistic testing.
  14. Aptitude testing is:
    1.   assessing how well learners will be able to acquire the targets.
    2.   assessing general cognitive ability.
    3.   assessing intelligence.
    4.   assessing communicative success.
  15. If a test is reliable, this means that:
    1.   the results will be comparable regardless of where and when the test is taken.
    2.   the results will be a valid measure of a test-taker's ability in the skill we are testing.
    3.   the test will be objective.
    4.   the test will have a high facility ratio.
  16. Achievement tests are:
    1.   tests directly related to the content of a particular language course.
    2.   tests designed to influence the teaching programme.
    3.   tests of general ability to learn language.
    4.   tests to measure what learners know and don't know.
  17. Holistic scoring means:
    1.   adding all the scores together.
    2.   assessing by direct testing.
    3.   marking items independently.
    4.   judging on the basis of an overall impression.
  18. Criterion referencing is:
    1.   measuring performance against a benchmarked student.
    2.   choosing the most useful criteria when standardising test markers.
    3.   measuring performance based on overall communicative success.
    4.   measuring performance against a range of predetermined criteria.
  19. Face validity is a measure of:
    1.   a subjective judgement of a test's fairness.
    2.   how well we can describe what we are testing.
    3.   how well a test actually targets the desired skills.
    4.   how well a test is designed.
  20. The Cambridge First Certificate examination is a:
    1.   proficiency test.
    2.   performative test.
    3.   diagnostic test.
    4.   achievement test.
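
The arithmetic behind items 2, 6 and 7 can be verified with a short script (note that it therefore reveals the answers to those three items). The figures below are taken directly from the quiz questions; the variable names are purely illustrative.

```python
# Item 2: the mean is the sum of the scores divided by how many there are.
scores = [18, 20, 22, 24, 26]
mean = sum(scores) / len(scores)
print(mean)  # 22.0

# Item 6: the facility value of an item is the proportion of test takers
# who answer it correctly.
facility_value = 40 / 100
print(facility_value)  # 0.4

# Item 7: with 5 options per question, a blind guess succeeds 1 time in 5.
guess_ratio = 1 / 5
print(f"{guess_ratio:.0%}")  # 20%
```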