
Search in the Catalogues and Directories

Hits 1 – 13 of 13

1. Cognitive validity in the testing of speaking. Field, John. - 2020
2. Task parallelness: investigating the difficulty of two spoken narrative tasks. Inoue, Chihiro. - 2020
3. Opening the black box: exploring automated speaking evaluation. In: Issues in Language Testing Around the World: Insights for Language Test Users.
4. Re-engineering a speaking test used for university admissions purposes: considerations and constraints: the case of IELTS. Taylor, Lynda. - 2020
5. Analysing multi-person discourse in group speaking tests: how do test-taker characteristics, task types and group sizes affect co-constructed discourse in groups?
6. Investigating the use of language functions for validating speaking test specifications. Inoue, Chihiro. - 2020
7. The IELTS Speaking Test: what can we learn from examiner voices?
8. Academic speaking: does the construct exist, and if so, how do we test it?
9. Testing speaking skills: why and how?
10. Applying the socio-cognitive framework: gathering validity evidence during the development of a speaking test. In: Lessons and Legacy: A Tribute to Professor Cyril J Weir (1950–2018). Nakatsuhara, Fumiyo; Dunlea, Jamie. - UCLES/Cambridge University Press, 2020
11. A comparison of holistic, analytic, and part marking models in speaking assessment
Abstract: This mixed methods study examined holistic, analytic, and part marking models (MMs) in terms of their measurement properties and impact on candidate CEFR classifications in a semi-direct online speaking test. Speaking performances of 240 candidates were first marked holistically and by part (phase 1). On the basis of phase 1 findings – which suggested stronger measurement properties for the part MM – phase 2 focused on a comparison of part and analytic MMs. Speaking performances of 400 candidates were rated analytically and by part during that phase. Raters provided open comments on their marking experiences. Results suggested a significant impact of MM; approximately 30% and 50% of candidates in phases 1 and 2 respectively were awarded different (adjacent) CEFR levels depending on the choice of MM used to assign scores. There was a trend of higher CEFR levels with the holistic MM and lower CEFR levels with the part MM. While strong correlations were found between all pairings of MMs, further analyses revealed important differences. The part MM was shown to display superior measurement qualities, particularly in allowing raters to make finer distinctions between different speaking ability levels. These findings have implications for the scoring validity of speaking tests.
Keywords: English language assessment; English language testing; language assessment; Q110 Applied Linguistics; speaking
URL: https://doi.org/10.1177/0265532219898635
http://hdl.handle.net/10547/623759
12. Validating speaking test rating scales through microanalysis of fluency using PRAAT
13. Developing Quality Assessments of Oral Speech in English (original title in Ukrainian: ПЕРЕДУМОВИ ЯКІСНОГО ОЦІНЮВАННЯ УМІНЬ АНГЛІЙСЬКОГО УСНОГО МОВЛЕННЯ). In: Ars Linguodidacticae, No. 1 (2017), pp. 16–24; ISSN 2663-0303 (2020)

© 2013–2024 Lin|gu|is|tik