Delta Module One Revision Course
Syllabus area 6
Before you tackle this section, you should have completed the relevant section of the Module One course.
The tasks: think first and make a note of your answer to each question before reading the answer which follows it.
Why test learners?
It is not enough to be clear about what we want people to learn and to design a teaching programme to achieve the objectives. We must also have some way of knowing whether the objectives have been achieved.
Explain the differences between initial, formative and summative evaluation.
Initial testing is often a diagnostic test to help formulate a syllabus and course plan, or a placement test to put learners into the right class for their level.
Formative testing is used to enhance and adapt the learning programme.
Summative tests seek to measure how well a set of learning objectives has been achieved at the end of a period of instruction.
Explain the difference between informal and formal evaluation or assessment, with an example of each.
Informal evaluation may involve some kind of document, but there is unlikely to be a scoring system as such. It might include, for example, simply observing the learner(s), listening to them and responding, giving them checklists, peer- and self-evaluation, and a number of other procedures.
Formal evaluation usually implies some kind of written document (although it may be an oral test) and some kind of scoring system. It could be a written test, an interview, an online test, a piece of homework or a number of other things.
Explain the difference between objective and subjective assessment, with an example of each.
Objective assessment is characterised by tasks in which there is only one right answer. It may be a multiple-choice test, a True/False test or any other kind of test where the result can readily be seen and is not subject to the marker's judgement.
Subjective tests are those in which questions are open-ended and the marker's judgement is important: for example, marking an essay and judging task completion or the general impression it makes on the reader.
Explain the difference between criterion-referencing and norm-referencing, with an example of each.
Criterion-referenced tests are those in which the result is measured against a fixed performance scale (e.g., by grades from A to E or by a score out of 100), regardless of how other candidates perform.
Norm-referencing is a way of measuring students against each other to discover, for example, which learners in a group should be separated off into a higher-level class: e.g., the top six scorers are moved to another class.
Explain, with an example, what the following test types are: aptitude tests, achievement tests, diagnostic tests and proficiency tests.
Aptitude tests test a learner's general ability to learn a language rather than the ability to use a particular language. An example is the Modern Language Aptitude Test.
Achievement tests measure students' performance at the end of a period of study to evaluate the effectiveness of the programme. Examples are an end-of-course or end-of-week test, or even a mid-lesson test.
Diagnostic tests are designed to discover learners' strengths and weaknesses for planning purposes. An example is a test set early in a programme to plan the syllabus.
Proficiency tests test a learner's ability in the language regardless of any course they may have taken. Examples are public examinations such as FCE, but also placement tests.
Explain what is meant by these test item formats: alternate response, multiple-choice, structured response, free response, and hybrid structured and free response. Give an example of each.
Alternate response: a True/False test or any test with one wrong and one right response. E.g.: Mary came late: True/False; John came before/after Mary (delete the wrong word).
Multiple-choice: sometimes called a fixed-response test, in which the correct answer must be chosen from three, four or more alternatives. E.g.: Mary came to the party: A: with her brother B: with her mother C: alone D: with her father.
Structured response: the subject is given a structure in which to form the answer. E.g.: Expand this to make a correct sentence: Mary / party / late / left / early / tired.
Free response: no guidance is given other than the rubric, and the subjects are free to write or say what they like. E.g.: Write 200 words about a well-known actor in your country.
Hybrid structured and free response: the subject is given a list of things to include in the response. E.g.: Write 200 words about a well-known actor in your country. Include where and when he/she was born, the first successful role, where he/she now lives, and a photograph of the person saying where and when it was taken. If he/she is dead, say where and when he/she died.
Explain the difference between direct and indirect testing, with an example of testing writing.
Direct testing is testing a particular skill by getting the student to perform that skill. E.g.: testing whether someone can write a discursive essay by asking them to write one.
Indirect testing is testing the abilities which underlie the skills we are interested in. E.g.: testing whether someone can write a discursive essay by testing their ability to use contrastive cohesive devices, modality, hedging etc.
Explain the difference between discrete-point and integrative testing, with an example.
Discrete-point testing is a format with many items requiring short answers, each of which targets a defined area. E.g.: tests with a range of multiple-choice items focused on vocabulary, grammar, functional language etc.
Integrative testing combines many language elements in a single task. E.g.: written tasks in which marks are given for various elements: accuracy, range, communicative success etc.
Explain the difference between reliability and validity.
Reliability refers to whether candidates would get the same result wherever and whenever the test is taken. Validity refers to whether a test actually targets what we think it does.
Why does subjective marking lower reliability?
The more judgement markers have to use, the greater the possibility of disparity between markers.
What is face validity?
Test takers need to trust that a test is fair and will result in an accurate assessment of their ability. The test must look like a real test.
What is construct validity?
Test designers need to be able to state precisely what a test is assessing and how it is doing it.
If you had significant problems doing these tasks, you should go back to this section of the Module One course.
That's the end. Now you can go on to another revision section.