1. Validation of a large-scale task-based test: functional progression in dialogic speaking performance; Task-based language teaching and assessment: Contemporary reflections from across the world
2. The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations
3. Towards new avenues for the IELTS Speaking Test: insights from examiners’ voices
4. Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?
5. The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test
6. Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests
7. Task parallelness: investigating the difficulty of two spoken narrative tasks
8. Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances
9. Investigating the use of language functions for validating speaking test specifications
10. Exploring the use of video-conferencing technology to deliver the IELTS Speaking Test: Phase 3 technical trial
11. The IELTS Speaking Test: what can we learn from examiner voices?
12. Academic speaking: does the construct exist, and if so, how do we test it?
15. Exploring the use of video-conferencing technology in the assessment of spoken language: a mixed-methods study
16. Developing rubrics to assess the reading-into-writing skills: a case study
17. Exploring performance across two delivery modes for the same L2 speaking test: face-to-face and video-conferencing delivery: a preliminary comparison of test-taker and examiner behaviour
18. Exploring performance across two delivery modes for the IELTS Speaking Test: face-to-face and video-conferencing delivery (Phase 2)
19. Accuracy across proficiency levels: A learner corpus approach. Jennifer Thewissen. Presses Universitaires de Louvain, Louvain-la-Neuve, Belgium (2015). 342 pp.
20. A comparative study of the variables used to measure syntactic complexity and accuracy in task-based research
Abstract: The constructs of complexity, accuracy and fluency (CAF) have been used extensively to investigate learner performance on second language tasks. However, a serious concern is that the variables used to measure these constructs are sometimes used conventionally without any empirical justification. It is crucial for researchers to understand how results might be different depending on which measurements are used, and accordingly, choose the most appropriate variables for their research aims. The first strand of this article examines the variables conventionally used to measure syntactic complexity in order to identify which may be the best indicators of different proficiency levels, following suggestions by Norris and Ortega. The second strand compares the three variables used to measure accuracy in order to identify which one is most valid. The data analysed were spoken performances by 64 Japanese EFL students on two picture-based narrative tasks, which were rated at Common European Framework of Reference for Languages (CEFR) A2 to B2 according to Rasch-adjusted ratings by seven human judges. The tasks performed were very similar, but had different degrees of what Loschky and Bley-Vroman term ‘task-essentialness’ for subordinate clauses. It was found that the variables used to measure syntactic complexity yielded results that were not consistent with suggestions by Norris and Ortega. The variable found to be the most valid for measuring accuracy was errors per 100 words. Analysis of transcripts revealed that results were strongly influenced by the differing degrees of task-essentialness for subordination between the two tasks, as well as the spread of errors across different units of analysis. This implies that the characteristics of test tasks need to be carefully scrutinised, followed by careful piloting, in order to ensure greater validity and reliability in task-based research.

Keywords: accuracy; speaking; speech communication; syntactic complexity; task-based research

URL: http://hdl.handle.net/10547/621953 https://doi.org/10.1080/09571736.2015.1130079
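The accuracy measure the abstract above identifies as most valid is errors per 100 words, a simple error-density ratio. A minimal sketch of that computation (the function name and the example counts are illustrative, not taken from the study):

```python
def errors_per_100_words(error_count: int, word_count: int) -> float:
    """Accuracy as error density: number of errors per 100 words produced."""
    if word_count <= 0:
        raise ValueError("word_count must be positive")
    return 100 * error_count / word_count

# Illustrative example: 12 errors in a 240-word spoken narrative
print(errors_per_100_words(12, 240))  # → 5.0
```

Unlike clause- or T-unit-based ratios, this measure does not depend on how the transcript is segmented into syntactic units, which is one reason a per-word density can behave more consistently across tasks.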