Comparative Overview of English Language Tests for Entry to Higher Education
The information below is derived solely from publicly available sources and reflects the data accessible as of early 2026.
The following tables constitute a consolidated summary of such materials. While all reasonable efforts have been made to ensure the accuracy of the information presented, publicly available data may be altered, updated, or withdrawn by the originating organisations without prior notice. Accordingly, no representation or warranty, express or implied, is given as to the continued accuracy of the content.
Should you become aware of any inaccuracies, discrepancies, or subsequent updates, you are requested to notify us so that appropriate amendments may be made.
1. Company Overview
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Parent Organisation | British Council, IDP Education, and Cambridge University Press & Assessment (joint ownership) | Educational Testing Service (ETS) | Pearson plc | englishcheck Ltd (part of the EQUALS group) | Duolingo, Inc. | PeopleCert (operating as LanguageCert) | Kaplan International (part of Kaplan, Inc.) | Oxford University Press (a department of the University of Oxford) |
| Headquarters | Partner organisations based in the UK (British Council and Cambridge) and Australia (IDP Education) | Princeton, New Jersey, USA | London, UK | London, UK | Pittsburgh, Pennsylvania, USA | Athens, Greece and London, UK | London, UK, with global offices | Oxford, UK |
| Organisation Type | Joint venture between a registered charity (British Council), a publicly listed company (IDP Education, ASX), and a university press (Cambridge) | Non-profit organisation | Publicly listed corporation (LSE: PSON) | Private limited company | Publicly listed corporation (NASDAQ: DUOL) | Private company (PeopleCert Group) | Subsidiary of Kaplan, Inc., itself part of Graham Holdings Company (NYSE: GHC) | Department of the University of Oxford; operates on a not-for-profit basis |
| Primary Market | Global: accepted in over 140 countries by more than 12,000 organisations | Global: accepted in over 160 countries by more than 12,000 institutions | Global: strong uptake in Australia, New Zealand, UK, and Canada | Primarily the UK higher education sector, with growing international recognition | Global: accepted by over 5,000 programmes at more than 4,500 institutions | Primarily Europe and the Middle East, with growing global recognition | Primarily the UK and Europe, with selected global markets | Primarily Europe and Latin America, with growing global recognition |
2. Test History
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Year Launched | 1989 in its current form; predecessor test (ELTS) launched in 1980 | Original TOEFL launched 1964; CBT 1998; iBT 2005. Redesigned TOEFL iBT launched January 2026 | 2009 as PTE Academic | 2008 | 2016 in beta; full public launch in 2019 | 2015 under the LanguageCert brand following PeopleCert’s acquisition of the City & Guilds language testing portfolio | 2019 | 2018, initially at B2 level |
| Major Revisions | Computer-delivered option introduced 2017–2018. Scoring mapped to CEFR. Ongoing item bank renewal | 2023: shortened format. January 2026: complete redesign with new band 1–6 score scale, MST for Reading and Listening, new task types, and at-home delivery option | November 2021: test shortened from ~3 hours to ~2 hours; task types modified; scoring algorithm updated. PTE Core launched 2024 | Periodic item bank updates; introduction of online proctored delivery | Adaptive algorithm updates; subscore reporting introduced 2020; ongoing security enhancements | Expansion of online proctored delivery; documentation of alignment to CEFR levels | Ongoing development and expansion of the adaptive item bank | B2 level launched 2018; A2, B1, and C1 levels added subsequently |
| Annual Test Volume (approx.) | Approximately 3.5 million tests per year (2023) | Roughly 2 million tests per year globally (pre-2026 figures) | Not publicly disclosed; Pearson reports significant and sustained growth | Not publicly disclosed | Over 3 million tests per year | Not publicly disclosed | Not publicly disclosed | Not publicly disclosed |
3. Test Construct
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Underlying Construct | Four-skills English language proficiency for academic contexts. Each skill assessed and reported separately | Hybrid construct combining foundational second-language competence and communicative language ability in academic and daily life contexts | Communicative English language proficiency across four skills plus six enabling skills (Grammar, Oral Fluency, Pronunciation, Spelling, Vocabulary, Written Discourse) | Core academic English proficiency across Reading, Listening, and Writing, with an optional separate Speaking module | General English proficiency via computer-adaptive format, measuring four sub-scores (Literacy, Conversation, Comprehension, Production) plus an overall score | Four-skills English language proficiency aligned to specific CEFR levels | Academic English proficiency across four skills, with integrated task types reflecting linguistic demands of English-medium higher education | Four-skills English language proficiency mapped to CEFR levels (A2, B1, B2, or C1) |
| CEFR Alignment | Band scores mapped to CEFR levels, covering B1 through C2 | Designed to cover CEFR A1 to C2. Mapping based on field test data and standard setting studies | Score ranges mapped to CEFR levels from A1 to C2 | Mapped to CEFR levels from A1 to C2 | Overall and sub-scores mapped to CEFR levels via published conversion tables | Tests available at specific CEFR levels: B1, B2, C1, and C2 | Scores mapped to CEFR levels | Tests targeted at specific CEFR levels: A2, B1, B2, and C1 |
| Score Scale | Band 0–9 in half-band increments per skill and overall | Band 1–6 per section in half-band increments; overall band is average of four sections. MyBest scores also reported | Overall score of 10–90; communicative and enabling skills each reported on the same 10–90 scale | CEFR level-based reporting (A1–C2) with numerical sub-scores per skill | Overall score of 10–160; sub-scores for Literacy, Conversation, Comprehension, and Production | Pass or Fail at the targeted CEFR level; percentage scores for each skill also reported | Scores reported and mapped to CEFR levels | Each skill scored on a scale of 51–140, mapped to CEFR levels |
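As the Score Scale and CEFR Alignment rows show, most of these tests convert a numeric scale to CEFR levels via published conversion tables. The sketch below illustrates how such a lookup works on a 10–160 scale of the kind Duolingo uses; the threshold values are entirely hypothetical and stand in for the provider's actual published table.

```python
# Hypothetical CEFR lookup on a 10-160 scale. The thresholds below are
# illustrative placeholders, NOT any provider's published conversion table.
CEFR_THRESHOLDS = [
    (160, "C2"),
    (130, "C1"),
    (100, "B2"),
    (70, "B1"),
    (40, "A2"),
    (10, "A1"),
]

def score_to_cefr(score: int) -> str:
    """Map a numeric score to a CEFR level via descending score floors."""
    if not 10 <= score <= 160:
        raise ValueError(f"score out of range: {score}")
    for floor, level in CEFR_THRESHOLDS:
        if score >= floor:
            return level
    return "below A1"  # unreachable given the range check above

level = score_to_cefr(115)  # "B2" under these illustrative thresholds
```

The same descending-floor pattern applies to any of the scales above (IELTS bands to CEFR, PTE 10–90 to CEFR); only the table of floors changes.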
4. Test Items
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Reading Tasks | Three long academic passages; 40 questions. Task types include multiple choice, True/False/Not Given, Yes/No/Not Given, matching, heading matching, sentence completion, and summary/table/diagram completion | Three task types: Complete the Words (C-test), Read in Daily Life, and Read an Academic Passage (~200 words). 35 scored items per test path via multistage adaptive testing | Five task types: Reading and Writing Fill in the Blanks, Multiple Choice Multiple Answer, Re-order Paragraphs, Reading Fill in the Blanks, and Multiple Choice Single Answer | Academic-style passages with multiple choice, gap fill, and text completion. Difficulty calibrated to targeted CEFR level | Integrated into Literacy and Comprehension sub-scores. Includes interactive read-and-respond items and passage completion. Adaptive item selection | Texts appropriate to the targeted CEFR level. Task types include multiple choice, gap fill, and multiple matching | Academic passages with multiple choice and gap fill | Part-based tasks: multiple choice, matching, gap fill, and multiple matching |
| Listening Tasks | Four recorded monologues and conversations; 40 questions. Task types include multiple choice, matching, diagram labelling, sentence/note/table completion, and short-answer questions. Each recording heard once only | Four task types: Listen and Choose a Response, Listen to a Conversation, Listen to an Announcement, Listen to an Academic Talk. 35 scored items per test path | Eight task types including Summarize Spoken Text, Multiple Choice Multiple Answer, Fill in the Blanks, Highlight Correct Summary, Select Missing Word, Highlight Incorrect Words, and Write from Dictation | Audio recordings at CEFR level with multiple choice and gap fill | Integrated into Conversation and Comprehension sub-scores. Listen-and-respond and interactive listening exercises. Adaptive item selection | Audio recordings at CEFR level with multiple choice, gap fill, and matching | Audio prompts with multiple choice and gap fill | Part-based tasks at CEFR level: multiple choice, matching, and gap fill |
| Writing Tasks | Two tasks: Task 1 (describe visual information, min. 150 words, 20 mins); Task 2 (essay, min. 250 words, 40 mins) | Three task types: Build a Sentence (10 items, key-scored), Write an Email (1 item, rubric-scored), Write for an Academic Discussion (1 item, rubric-scored) | Two task types: Summarize Written Text (one-sentence summary) and Write Essay (200–300 words) | Extended writing task at the targeted CEFR level; format varies by level (guided writing, essay, or report) | Integrated into Literacy and Production sub-scores. Includes a 5-minute writing sample in response to an adaptive prompt | Extended tasks by CEFR level: email, essay, or report. Assessed against CEFR-aligned criteria | Extended writing task(s) appropriate to CEFR level, reflecting academic writing demands | Extended tasks by CEFR level: email, essay, or article |
| Speaking Tasks | Three-part interview with a certified examiner, 11–14 minutes total. Part 1: interview on familiar topics; Part 2: long turn from cue card; Part 3: two-way discussion on abstract themes | Two task types: Listen and Repeat (7 items) and Take an Interview (4 questions, prerecorded interviewer). No live or AI interlocutor | Six task types: Personal Introduction (unscored), Read Aloud, Repeat Sentence, Describe Image, Re-tell Lecture, and Answer Short Question | Optional module: structured interview and monologue. Human-rated against CEFR-aligned criteria | Integrated into Conversation and Production sub-scores. Includes speak-and-respond tasks and a video interview segment. Adaptive task selection | Structured interview, sustained monologue, and discussion. May be face-to-face or computer-delivered | Monologue and discussion elements assessing academic spoken English | Part-based tasks: interview, long turn, and collaborative task with an examiner |
| Integrated / Innovative Items | Skills tested independently; no integrated tasks combining skills within a single item | Combines foundational skill tasks with communicative tasks. No traditional integrated read-then-write tasks | Several item types contribute to scores in more than one skill. Six enabling skills derived from performance across multiple task types | At higher levels, writing task may draw on a reading stimulus | Adaptive format blends modalities within the session, contributing to multiple sub-scores | At higher CEFR levels, some tasks require integrating skills (e.g., listen then respond in speech) | Integrated tasks include reading-into-writing and listening-into-speaking | Collaborative speaking task integrates listening and speaking |
5. Test Delivery
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Delivery Mode | Paper-based or computer-delivered at authorised test centres. Speaking is always face-to-face and may be scheduled on a separate day | Computer-delivered at authorised test centres or at home via TOEFL iBT Home Edition | Computer-delivered at authorised Pearson test centres only. No at-home testing option | Computer-delivered at authorised test centres or at home/office via online proctoring | Computer-delivered online at a location of the test taker’s choice using AI-based proctoring supplemented by human review | Computer-delivered at authorised test centres or at home via online proctoring with live or recorded invigilation | Computer-delivered at authorised test centres or at home via online proctoring | Computer-delivered at authorised test centres only. No at-home testing option |
| Adaptive Testing | Non-adaptive: all test takers receive a fixed test form | Multistage adaptive testing (MST) for Reading and Listening: two-stage design with a router module. Writing and Speaking are linear | Non-adaptive: all test takers receive a linear sequence of tasks | Non-adaptive: fixed test forms calibrated to the targeted CEFR level | Fully computer-adaptive: algorithm selects each item based on responses to previous items, adjusting difficulty in real time | Non-adaptive: fixed test forms calibrated to the targeted CEFR level | Item-bank-based delivery; test documentation reports some adaptive elements | Non-adaptive: fixed test forms calibrated to the targeted CEFR level |
| Test Frequency | Multiple test dates per month at most centres worldwide | Multiple test dates per week at test centres; Home Edition available near-continuously | Near-continuous availability, with sessions up to 365 days per year at some centres | Flexible scheduling via test centres or online proctoring platform | Available on demand, 24 hours a day, 7 days a week | Regular scheduled dates at centres; online proctored sessions available frequently | Regular scheduled dates at centres; online proctored sessions also available | Regular scheduled dates at authorised centres |
| Security Measures | Biometric identity verification (photograph, fingerprint scanning at some centres); multiple test versions; ongoing post-test statistical monitoring | At-home: secure lockdown browser, AI monitoring with live remote human proctors, mandatory dual camera, room scans, identity verification. Test centre: trained administrators and continuous monitoring. Post-test: statistical analyses by ETS Office of Testing Integrity | Biometric identity verification (palm-vein scanning) at test centres; AI monitoring; randomised test form assembly from item bank | Photo identity verification; online proctoring with continuous video recording; secure browser application | AI-based proctoring with continuous video and screen recording; human review of flagged sessions; adaptive item selection reduces item exposure | Photo identity verification; remote proctoring with live invigilation or continuous video recording | Photo identity verification; proctoring at test centres or remotely via secure browser and video monitoring | Secure test centre protocols; photo identity verification; controlled test environment |
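The Adaptive Testing row distinguishes linear forms, multistage designs (the redesigned TOEFL's MST), and fully item-adaptive delivery (Duolingo). The core idea of item-level adaptivity can be sketched as follows. The ability-update rule and item pool here are deliberately simplified stand-ins; operational tests use item response theory (IRT) estimation, not this halving heuristic.

```python
# Simplified item-adaptive loop: pick the unused item whose difficulty is
# closest to the current ability estimate, then step the estimate up on a
# correct answer and down on an incorrect one, halving the step each time.
# Real adaptive tests use IRT-based estimation; this is an illustration only.

def run_adaptive_test(item_difficulties, answer_fn, start=0.0, step=2.0):
    ability = start
    remaining = list(item_difficulties)
    for _ in range(len(item_difficulties)):
        # Select the item best matched to the current ability estimate.
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        correct = answer_fn(item)
        ability += step if correct else -step
        step /= 2  # smaller adjustments as evidence accumulates
    return ability

# Example: a test taker who answers correctly whenever difficulty <= 1.5.
pool = [-3, -2, -1, 0, 1, 2, 3]
estimate = run_adaptive_test(pool, lambda d: d <= 1.5)  # converges near 1.5
```

This also illustrates why adaptive delivery aids security, as noted for Duolingo: because item selection depends on each test taker's response path, no two sessions expose the same item sequence.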
6. Marking Format
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Reading & Listening | Automated scoring of objective items (correct/incorrect against an answer key) | Key-scored; scores equated across MST paths using item response theory (IRT) to ensure comparability | Automated scoring: objective items key-scored; constructed-response items scored by AI algorithms | Automated scoring of objective items | AI-scored using machine learning models | Automated scoring of objective items | Automated scoring of objective items | Automated scoring of objective items |
| Writing | Human rated by trained IELTS examiners using published band descriptors. Double marking applied as standard quality assurance | Build a Sentence key-scored. Write an Email and Write for an Academic Discussion scored by ETS proprietary AI engines combined with human rater scoring. Human–Machine correlation: 0.86 | Fully AI-scored by Pearson’s automated essay scoring engine. No human raters in the standard pipeline | Human rated by trained assessors using CEFR-aligned assessment criteria | AI-scored using machine learning models. Unusual or low-confidence responses referred for human review | Human rated by trained examiners using CEFR-aligned assessment criteria | Human rated by trained assessors | Human rated by trained examiners using CEFR-aligned assessment criteria |
| Speaking | Human rated by certified IELTS examiners during the face-to-face interview. Assessed on fluency and coherence, lexical resource, grammatical range and accuracy, and pronunciation | Listen and Repeat and Take an Interview scored by ETS proprietary AI engines combined with human rater scoring. Human–Machine correlation: 0.89 | Fully AI-scored by Pearson’s automated speech recognition and scoring engine. No human raters in the standard pipeline | Human rated by trained assessors when the optional speaking module is taken | AI-scored using speech recognition and machine learning models. Flagged responses referred for human review | Human rated by trained examiners, either live or from a recorded session | Human rated by trained assessors | Human rated by trained examiners |
| Quality Assurance | Examiner certification programme with regular retraining and monitoring; item pre-testing; statistical analysis of all test forms; moderation procedures across marking centres | Human raters hold relevant qualifications; web-based training and certification required; continuous monitoring by scoring leaders; calibration tests at start of each session; each response scored by multiple independent raters; AI models regularly refined for fairness | AI scoring engines trained and calibrated against expert human rater scores. Pearson conducts regular statistical monitoring and publishes technical reports | Assessor training and standardisation procedures; item pre-testing and review | Continuous training and refinement of AI scoring models; human review layer for flagged responses; ongoing security monitoring and item exposure management | Examiner training and standardisation procedures; item moderation and review processes | Assessor standardisation procedures; item banking and quality control processes | Examiner standardisation procedures; item pre-testing; statistical analysis of test performance |
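The human–machine agreement figures quoted above for the redesigned TOEFL iBT (0.86 for Writing, 0.89 for Speaking) describe how closely AI-engine scores track human rater scores; the table does not specify the statistic, but such figures are conventionally reported as Pearson correlations. As a reminder of what that statistic measures, here is a minimal Pearson correlation sketch with made-up rater scores.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up human vs. machine essay scores on a 1-6 band scale.
human   = [3.0, 4.5, 2.5, 5.0, 4.0, 3.5]
machine = [3.5, 4.5, 2.0, 5.5, 3.5, 4.0]
r = pearson_r(human, machine)  # close agreement yields r near 1
```

A correlation near 0.9, as reported for TOEFL Speaking, means the AI engine rank-orders candidates very similarly to human raters, which is why both providers that use AI scoring pair it with human review or calibration against human scores.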
7. Length of Test
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Total Test Time | ~2 hrs 45 mins (paper-based) or ~2 hrs (computer-delivered), plus Speaking (11–14 mins, may be scheduled separately) | ~1 hr 40 mins to 2 hrs. Fixed sequence: Reading, Listening, Writing, Speaking | ~2 hours | ~1 to 1.5 hours for core modules, plus additional time if optional Speaking module is taken | ~1 hour, though duration varies by test taker due to adaptive design | ~2.5 to 3 hours, varying by targeted CEFR level | ~2 to 2.5 hours | ~2 hours, varying by targeted CEFR level |
| Reading | 40 items; 60 minutes | 35 scored items per test path (20 in router module, 15 in second-stage module) | 15–20 items; ~29–30 minutes | ~25–35 minutes | Integrated into overall test time; not separately timed | ~40–65 minutes depending on level | ~30–40 minutes | ~30–40 minutes |
| Listening | 40 items; ~30 minutes of audio (plus 10 minutes transfer time for paper-based format) | 35 scored items per test path (20 in router module, 15 in second-stage module) | 15–23 items; ~30–43 minutes | ~25–30 minutes | Integrated into overall test time; not separately timed | ~25–40 minutes depending on level | ~25–35 minutes | ~25–35 minutes |
| Writing | 2 tasks; 60 minutes total | 12 scored items: 10 Build a Sentence, 1 Write an Email, 1 Write for an Academic Discussion | 2 tasks; ~40–50 minutes within combined Speaking and Writing section | ~25–40 minutes | Integrated; includes a 5-minute writing sample | ~40–70 minutes | ~30–45 minutes | ~30–45 minutes |
| Speaking | 3 parts; 11–14 minutes total | 11 scored items: 7 Listen and Repeat, 4 Take an Interview questions | 6 task types; ~30–35 minutes within combined Speaking and Writing section | Optional module; ~10–15 minutes | Integrated into overall test time; not separately timed | ~15–20 minutes | ~10–15 minutes | ~15 minutes |
8. Cost of Test
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Approximate Cost (USD) | $245–$255, varying by country and test centre | Varies by country; ETS has stated a design objective of competitive and affordable pricing | $200–$260, varying by country and test centre | ~$60–$120; generally lower price point than IELTS, TOEFL, and PTE | $59 standard test; $79 for version with enhanced institutional reporting | ~$80–$200, varying by targeted CEFR level and location | ~$100–$180 | ~$80–$150, varying by targeted CEFR level and location |
| Results Turnaround | 13 calendar days (paper-based); 3–5 business days (computer-delivered) | Official scores within 72 hours. Report includes MyBest scores | Typically within 48 hours; in many cases within 24 hours | Typically within 5 working days | Within 48 hours; in many cases within 24 hours | Typically within 5–10 working days | Typically within 5 working days | Within 14 days |
| Score Validity Period | 2 years from the test date | 2 years from the test date. MyBest scores draw from all valid administrations within the preceding 2 years | 2 years from the test date | 2 years from the test date (institution-dependent policies may vary) | 2 years from the test date | 2 years from the test date | 2 years from the test date | 2 years from the test date |
9. Validity Studies
| Feature | IELTS Academic | TOEFL iBT (Jan 2026) | PTE Academic | Password | Duolingo English Test | LanguageCert | Kaplan Test of English | Oxford Test of English |
|---|---|---|---|---|---|---|---|---|
| Published Validity Research | Extensive. Multi-decade research programme published in IELTS Research Reports series, Cambridge Assessment research reports, and peer-reviewed journals. Covers construct validation, scoring quality, test impact, and test preparation effects | TOEFL iBT Technical Manual (Manna et al., 2025) presents a formal validity argument across six hierarchical inferences (Chapelle, 2008 framework). Developed through iterative prototyping (N=570), pilot testing (N=700), and field testing (N≈5,000). Described as a living document. Broader ETS TOEFL research base spans decades | Pearson publishes technical reports and research papers through the Pearson Research and Innovation team. Topics include scoring validity, CEFR alignment, and AI scoring quality | Limited published research in the public domain. Technical documentation available on request | Growing body of research. Duolingo Research publishes technical reports and peer-reviewed papers on scoring validity, item design, and test security | Some published technical reports, including CEFR alignment documentation. Independently published research is limited | Limited publicly available validity research. Some technical documentation exists | Oxford University Press publishes research reports, including CEFR alignment studies. Some peer-reviewed publications are available |
| Predictive Validity | Studied extensively. Multiple studies document moderate positive correlations between IELTS scores and indicators of academic success (grades, progression, programme completion) | Predictive validity evidence for the January 2026 version planned for collection during the operational phase. The legacy TOEFL iBT has an extensive base of published predictive validity studies | Some studies report correlations between PTE Academic scores and academic outcomes. Evidence base is growing | Limited published data on the relationship between Password scores and academic success | Early studies indicate moderate correlations between Duolingo English Test scores and academic outcomes. Evidence base is developing | Limited published data on predictive validity for academic success | Limited published data on predictive validity for academic success | Some studies have been published. Evidence base is developing |
| Construct Validity | Strong. Published evidence includes factor analyses, content validity studies, and criterion-related validity studies accumulated over multiple decades | Initial evidence includes content-construct alignment analysis and rubric-score correspondence. Section-level reliability: Reading 0.86, Listening 0.88, Writing 0.87, Speaking 0.94, Overall 0.90. Factor analysis studies planned for the operational phase | Pearson’s technical reports include factor analyses, correlation studies, and documentation of construct representation across item types | Technical documentation describes alignment of the test construct to academic English demands. Independent verification is limited | Duolingo’s technical reports describe the IRT model underpinning the adaptive algorithm and construct coverage across item types | CEFR alignment documentation published. Some construct validity evidence reported in technical materials | Technical documentation available. Independent validation is limited | Construct aligned to CEFR. Oxford University Press research reports document alignment studies |
| Fairness & Bias Research | Published differential item functioning (DIF) analyses. Ongoing fairness monitoring and research into cultural and first-language bias as part of the IELTS research programme | DIF analyses conducted on 1,454 items by gender during development: 21 items flagged, none found to exhibit content bias on review. Field test data showed comparable performance across subgroups. AI scoring found not to disadvantage major first-language subgroups | Pearson reports fairness analyses in technical documentation. AI scoring engine analysed for differential performance across demographic groups | Limited published fairness research | Duolingo publishes DIF analyses and fairness research examining performance across demographic and linguistic subgroups | Limited published fairness research | Limited published fairness research | Some fairness documentation included in technical reports |