Languages and Applied Linguistics current projects

Current projects:

Researching Academic Reading in two contrasting English-medium university contexts, and implications for the design of TOEFL iBT

Researchers:

Nathaniel Owen; Prithvi Shrestha; Kristina Hultgren; Stephen Bax

Project Summary

This project investigates the use of English in the education systems of two countries, Sweden and Nepal. Both countries are of interest because English is used widely in their education systems even though it is not a native language in either. The aim of the research is to find out how English is used in these contexts at university level, focusing specifically on reading materials, and whether the TOEFL test is a suitable basis for deciding whether students are ready to study in English.

The TOEFL test (Test of English as a Foreign Language) is widely used by universities in English-speaking countries as evidence that incoming international students are able to cope with the language demands of their academic programmes. More recently, the test has also been used in other contexts, such as Nepal and Sweden, to ensure that students in those countries can cope with their English-language curricula. This study will contribute to understanding whether the test is suitable for this purpose, since this new use represents an extension of its original mandate.

Further information about the TOEFL test can be found here:

https://www.ets.org/toefl

Outcomes

The project will offer important new insights into academic reading in English-medium instruction (EMI) contexts, and more specifically into the potential role of TOEFL iBT in such contexts, underpinning future development and expansion of the test.

_________________________________________________________________________________________________

Researching lexical thresholds and lexical profiles across the Common European Framework of Reference for Languages (CEFR) levels assessed in the Aptis test

Researchers:

Stephen Bax; Prithvi Shrestha; Nathaniel Owen

Project Summary

Major language exam boards carry out extensive research into the content of their tests to ensure that it is fit for purpose. For example, the texts used in reading tests are examined to determine whether their vocabulary is suitable. Exam boards are also concerned with the language produced by candidates in the writing and speaking components of their tests. This research project is concerned with the writing component of the Aptis test. The Aptis test is produced by the British Council and is used by a variety of companies and education bodies to make claims about the language abilities of students or employees. It is designed to differentiate between a wide range of language abilities, from beginner to advanced.

One way of investigating whether a test can differentiate between a range of abilities is through empirical investigation, comparing test-taker language against an existing framework such as the CEFR (Common European Framework of Reference for Languages). The CEFR is designed to be a “transparent, coherent and comprehensive basis for the elaboration of language syllabuses and curriculum guidelines, the design of teaching and learning materials, and the assessment of foreign language proficiency” (https://www.coe.int/en/web/common-european-framework-reference-languages/home). The CEFR defines a series of proficiency levels, each with descriptors of what learners at that level can do with the language. The score bands of the Aptis test were designed to align with the levels of the CEFR. Language produced under test conditions by large numbers of test-takers can be compared against these descriptors to see whether the test has elicited language from which similar proficiency claims can be made.

This study investigates whether these levels can be characterised through measurable features of student writing, such as vocabulary diversity, range and sophistication. The study uses the automated analysis tools in Text Inspector (www.textinspector.com). 6,000 transcripts of student writing were processed to obtain a range of language variables, benchmarked against the scores the writers received in the Aptis test. The resulting metrics are being analysed statistically to see whether there are significant differences across the score bands.
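
The kind of statistical comparison described above can be sketched in outline. The following is a minimal illustration only, not the project's actual analysis pipeline: it assumes a hypothetical CSV export (aptis_writing_metrics.csv) with score_band and text columns, computes a single simple lexical diversity measure (type-token ratio) locally, and applies a Kruskal-Wallis test across the score bands; the project itself draws on the much broader range of variables produced by Text Inspector.

# Minimal sketch (not the project's pipeline): test whether a lexical metric
# such as type-token ratio (TTR) differs significantly across Aptis score
# bands, using a hypothetical per-transcript CSV export.
import csv
from collections import defaultdict

from scipy import stats  # Kruskal-Wallis H-test for k independent samples


def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique word forms divided by total word forms."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0


def metric_by_band(path: str) -> dict[str, list[float]]:
    """Compute TTR per transcript and group the values by Aptis score band."""
    groups = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects 'score_band' and 'text' columns
            groups[row["score_band"]].append(type_token_ratio(row["text"]))
    return groups


if __name__ == "__main__":
    groups = metric_by_band("aptis_writing_metrics.csv")  # hypothetical file
    h_stat, p_value = stats.kruskal(*groups.values())
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
    # A small p-value would suggest the metric differs across score bands,
    # i.e. the test separates proficiency levels on this lexical dimension.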

Outcomes

The result will be a comprehensive and detailed picture of the language used by Aptis test takers across the Common European Framework of Reference for Languages (CEFR) levels assessed in Aptis. This will provide evidence on whether the Aptis test is sufficiently fine-grained to distinguish between different language abilities.

_________________________________________________________________________________________________

Exploring rater interaction with test-taker responses in Aptis Writing

Researcher:

Nathaniel Owen

Project Summary:

This project investigates expert judgement in assessing student writing in English language tests. The Aptis test is produced by the British Council and is used by a variety of companies and education bodies. Test takers are judged on their English language proficiency: raters compare test-taker writing to a scoring rubric and assign a mark accordingly. The study explores how Aptis raters engage with the writing scripts produced by test-takers and which elements of the marking rubrics influence their decision-making. The study uses eye-tracking technology to identify which parts of the rubric raters attend to, and how much of a script they read, before reaching a decision.
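
As a rough illustration of this kind of eye-tracking analysis (not the project's actual method), fixation data exported from an eye-tracker can be mapped onto areas of interest (AOIs), such as individual rubric criteria and the test-taker's script, and the fixation time spent in each region totalled. The AOI names, screen coordinates, file name and column names below are all hypothetical.

# Rough illustration only: total fixation time per area of interest (AOI),
# e.g. individual rubric criteria vs. the test-taker's script, from a
# hypothetical eye-tracker export with x, y and duration_ms columns.
import csv
from collections import defaultdict

# Hypothetical AOI rectangles in screen pixels: (left, top, right, bottom).
AOIS = {
    "rubric_task_fulfilment": (1300, 100, 1900, 300),
    "rubric_grammar": (1300, 320, 1900, 520),
    "rubric_vocabulary": (1300, 540, 1900, 740),
    "test_taker_script": (100, 100, 1200, 900),
}


def aoi_for(x: float, y: float) -> str | None:
    """Return the name of the AOI containing the fixation point, if any."""
    for name, (left, top, right, bottom) in AOIS.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None


def dwell_time_ms(path: str) -> dict[str, float]:
    """Sum fixation durations per AOI from a hypothetical CSV export."""
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects x, y, duration_ms columns
            name = aoi_for(float(row["x"]), float(row["y"]))
            if name is not None:
                totals[name] += float(row["duration_ms"])
    return totals


if __name__ == "__main__":
    for name, ms in sorted(dwell_time_ms("rater_fixations.csv").items()):
        print(f"{name}: {ms / 1000:.1f} s")  # dwell time per region, in seconds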

Information about the British Council Aptis test can be found here:

https://www.britishcouncil.org/exam/aptis/writing

Outcomes

Findings from the study will be fed back to the British Council, who will use them to inform future rater training and the descriptors used to evaluate test-taker writing. The findings will also be used to provide recommendations regarding the content of the scoring rubrics: to what extent are raters marking analytically rather than holistically? Are parts of the rubrics for some tasks being overlooked in favour of others? Outcomes will also include additional validation evidence that specific tasks target specific cognitive writing skills.

 

Other projects: