1. Mode of delivery
Tests can be paper-based (PB) or computer-based (CB). Most of us have experience of paper-based tests; some typical CB testing scenarios are summarised in the table below (a minimal sketch of automated scoring follows the table).

| CB testing scenario | What happens |
| --- | --- |
| Assessing reading/listening comprehension | Test-takers read or listen to a text on a computer and select the correct answers to comprehension questions accompanying the text. The computer registers the answers and calculates the test score. |
| Assessing writing ability | Test-takers read the prompt on the screen and type their text in the allocated space. The computer saves the text, which can then be marked either by human examiners or by a machine (automated scoring). |
| Assessing speaking ability | Computer-mediated testing of speaking: the test-taker and the examiner communicate live but can be geographically far apart; they use video-conferencing software (such as Zoom) to see and talk to each other. Computer-delivered testing of speaking: speaking tasks (prompts) are displayed on the screen, and the test-taker has a set amount of time to produce a response (e.g., one minute to describe a picture). The response is recorded and marked either by human examiners or by a machine. |
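To make the first scenario concrete, here is a minimal sketch of how a CB test might register selected responses and calculate a raw score. The item IDs, the answer key, and the scoring rule (one point per correct answer) are hypothetical, chosen purely for illustration; real test systems are considerably more elaborate.

```python
# Illustrative only: a toy example of registering selected responses and
# calculating a raw score. Item IDs and the answer key are made up.

ANSWER_KEY = {"q1": "B", "q2": "D", "q3": "A"}  # assumed answer key

def score_selected_responses(responses: dict[str, str]) -> int:
    """Return the raw score: one point for each response matching the key."""
    return sum(
        1
        for item_id, key in ANSWER_KEY.items()
        if responses.get(item_id, "").strip().upper() == key
    )

# Example: a test-taker answers q1 and q3 correctly and q2 incorrectly.
print(score_selected_responses({"q1": "b", "q2": "C", "q3": "A"}))  # -> 2
```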
2. Proficiency level(s)
Tests can differ in the level(s) of proficiency they target:
- multi-level tests target a range of proficiency levels
E.g., the IELTS test is scored across ten band levels, from 0 to 9, which broadly correspond to CEFR levels A1 to C2
- single-level tests target one proficiency level
E.g., the Cambridge English main suite tests are offered at A2 (KET), B1 (PET), B2 (FCE), C1 (CAE) and C2 (CPE) levels. Test-takers either pass or fail these tests (a small data-model sketch contrasting the two result types follows this list).
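The sketch below contrasts how results might be represented for the two test types described above: a multi-level test reports a position on a band scale (with a corresponding CEFR level), whereas a single-level test reports pass or fail at one target level. The class and field names are hypothetical and not taken from any provider's actual reporting system; the example values are made up.

```python
# Illustrative only: a toy data model contrasting multi-level and
# single-level test results. Names and values are assumptions.

from dataclasses import dataclass

@dataclass
class MultiLevelResult:
    """Result on a test scored across a range of levels (e.g., bands 0-9)."""
    band: float          # position on the test's own scale
    cefr_estimate: str   # CEFR level reported as corresponding to the band

@dataclass
class SingleLevelResult:
    """Result on a test targeting one proficiency level."""
    target_level: str    # e.g., "B2"
    passed: bool         # single-level tests report pass/fail

# Example records (values invented for illustration):
multi_level = MultiLevelResult(band=6.5, cefr_estimate="B2")
single_level = SingleLevelResult(target_level="B2", passed=True)
print(multi_level, single_level, sep="\n")
```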
3. Aspects of language ability
Tests can differ in what aspects of language ability they target. Most modern language tests are designed to assess the four language skills – listening, reading, speaking, and writing. One example is the IELTS test (https://www.ielts.org), which has four sections, each targeting one of the four skills. In IELTS, the skills are assessed separately, and an individual score is reported for each skill.

Some tests assess skills integratively. For example, Trinity’s ISE test (Integrated Skills in English, https://www.trinitycollege.com/qualifications/english-language/ISE) has two modules: 1) Reading & Writing, where test-takers use the information they have learnt from the reading texts to produce their own writing, and 2) Speaking & Listening, where test-takers are asked to discuss or comment on the audio recordings they have listened to. However, scores for ISE tests are still reported separately for each individual skill; for the Reading & Writing module, for example, two scores are reported – a reading score and a writing score. It is also possible to award a single score, for example on a reading-into-writing test. The latter approach treats reading ability and writing ability as elements of the same construct.
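The difference between reporting separate skill scores and awarding a single integrated score can be pictured as two ways of structuring a score report. The sketch below is illustrative only: the skill names, score values, and the simple averaging rule are assumptions for the example, not any provider's actual reporting method.

```python
# Illustrative only: two toy ways of structuring a score report.

# 1) Skills delivered together but reported separately, as in the
#    ISE Reading & Writing module described above (made-up scores):
per_skill_report = {"reading": 28, "writing": 24}

# 2) A single score for an integrated reading-into-writing task, which
#    treats reading and writing as elements of one construct:
def single_integrated_score(report: dict[str, int]) -> float:
    """Collapse per-skill scores into one value (simple mean, for illustration)."""
    return sum(report.values()) / len(report)

print(per_skill_report)
print(single_integrated_score(per_skill_report))  # -> 26.0
```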
Some tests include sections that target language elements such as grammar, vocabulary or lexicogrammar. In lexicogrammar, lexis and grammar/syntax are treated as a single system (see Sardinha, 2019, for a helpful introduction). For example, Cambridge Assessment’s FCE, CAE and CPE tests, besides testing the four skills, include ‘Use of English’ tasks within the reading section (https://www.cambridgeenglish.org/exams-and-tests/). The Michigan EPT (English Placement Test) has two sections – listening and reading – with grammar and vocabulary tasks included in the reading section (https://michiganassessment.org/michigan-tests/m-ept/michigan-ept-details/).
Sometimes, test-takers can choose what sections they want to be tested on. This applies particularly to CB tests. For example, test-takers of the British Council’s Aptis test can choose to be tested on one or more of the language skills, depending on their needs (https://www.britishcouncil.org/exam/aptis/why-choose-aptis).
Recommended reading
Banerjee, J., Lestari, S. B., & Rossi, O. (2021). Choosing test formats and task types. In P. Winke & T. Brunfaut (Eds.), Handbook of Second Language Acquisition and Language Testing (pp. 78-89). Routledge. https://doi.org/10.4324/9781351034784
Sardinha, B. (2019). Lexicogrammar. In C. A. Chapelle (Ed.), The Encyclopedia of Applied Linguistics (pp. 1-5). John Wiley & Sons. https://doi.org/10.1002/9781405198431.wbeal0698.pub2 (free access)
Suvorov, R., & Hegelheimer, V. (2013). Computer-assisted language testing. In A. J. Kunnan (Ed.), The Companion to Language Assessment (pp. 594-613). John Wiley & Sons. https://doi.org/10.1002/9781118411360.wbcla083 (free access)