
Search in the Catalogues and Directories

Hits 1 – 20 of 54

1
Editorial
Fulcher, Glenn; Harding, Luke. - : Routledge, 2022
BASE
2
Epilogue: Language testing: Where are we heading?
Harding, Luke; Fulcher, Glenn. - : Routledge, 2021
BASE
3
Developing a Rating Scale for Classroom Assessment of the Argumentative Writing of Chinese EFL College Students Majoring in English
Zhang, Keke. - : School of Education, University of Leicester, 2019
BASE
4
Strategy Use in the TOEFL iBT Speaking Test and Academic Classroom
Fulcher, Glenn; Yi, Jong-il. - : NuriMedia Co. Ltd, 2018
BASE
5
The Practice of Language Assessment
Fulcher, Glenn. - : Informa UK (Taylor and Francis/Routledge), 2018
BASE
6
Investigating the Construct Validity of a Concordance-based Cloze Test: A Mixed-methods Study
Kongsuwannakul, Kunlaphak. - : School of Education, University of Leicester, 2017
BASE
7
Raters’ accent-familiarity levels and their effects on pronunciation scores and intelligibility on high-stakes English tests
Browne, Kevin Cogswell. - : School of Education, University of Leicester, 2016
BASE
8
Re-examining language testing : a philosophical and social enquiry
Fulcher, Glenn. - New York : Routledge, 2015
BLLDB
UB Frankfurt Linguistik
9
Context and Inference in Language Testing
Fulcher, Glenn. - : Palgrave Macmillan, 2015
Abstract: The file associated with this record is embargoed for 36 months from publication in accordance with the Publisher's archiving policy available at http://www.palgrave.com/page/publishing-with-us-archiving-in-institutional-or-funding-body-repositories/. The full text may be available in the publisher links above.

It is arguably the case that "The purpose of language testing is always to render information to aid in making intelligent decisions about possible courses of action" (Carroll, 1961, p. 314). This holds true whether the decisions are primarily pedagogic, or affect the future education or employment of the test taker. If fair and useful decisions are to be made, three conditions must hold. Firstly, valid inferences must be made about the meaning of test scores. Secondly, score meaning must be relevant and generalizable to a real-world domain. Thirdly, score meaning should be (at least partially) predictive of post-decision performance. If any of these conditions is not met, the process of assessment and decision making may be questioned not only in theory, but in the courts (Fulcher, 2014a). It is therefore not surprising that, historically, testing practice has rested on the assumption that language competence, however defined, is a relatively stable cognitive trait. This is expressed clearly in classic statements of the role of measurement in the 'human sciences', such as this by the father of American psychology, James McKeen Cattell: One of the most important objects of measurement…is to obtain a general knowledge of the capacities of a man by sinking shafts, as it were, at a few critical points. In order to ascertain the best points for the purpose, the sets of measures should be compared with an independent estimate of the man's powers. We thus may learn which of the measures are the most instructive (Cattell, 1890, p. 380).

The purely cognitive conception of language proficiency (and of all human ability) is endemic to most branches of psychology and psychometrics. This strong brand of realism assumes that variation in test scores is a direct causal effect of the variation of the trait within an individual (see the extensive discussion of validity theory in Fulcher, 2014b). This view of the world entails that any contextual feature that causes variation is a contaminant that pollutes the score. This is referred to as 'construct-irrelevant variance' (Messick, 1989, pp. 38–9). The standardization of testing processes, from presentation to administration and scoring, is designed to minimize the impact of context on scores. In some ways, a good test is like an experiment, in the sense that it must eliminate or at least keep constant all extraneous sources of variation. We want our tests to reflect only the particular kind of variation in knowledge or skill that we are interested in at the moment (Carroll, 1961, p. 319). There are also ethical and legal imperatives that encourage this approach to language testing and assessment. If the outcomes of a test are high-stakes, it is incumbent upon the test provider to ensure that every test taker has an equal chance of achieving the same test score if they are of identical ability. Score variation due to construct-irrelevant factors is termed 'bias'. If any test taker is disadvantaged by variation in the context of testing, and particularly if this is true of an identifiable sub-group of the test-taking population, litigation is likely.

Language tests are therefore necessarily abstractions from real life. The degree of removal may be substantial, as in the case of a multiple-choice test, or smaller, as in the case of a performance-based simulation. However, tests never reproduce the variability that is present in the real world. One analogy that illustrates the problem of context is that of tests for life guards. Fulcher (2010, pp. 97–100) demonstrates the impossibility of reproducing in a test all the conditions under which a life guard may have to operate – weather conditions, swell, currents, tides, distance from shore, victim condition and physical build. The list is potentially endless. Furthermore, health and safety regulations would preclude replicating many of the extremes that could occur within each facet. The solution is to list constructs that are theoretically related to real-world performance, such as stamina, endurance, courage, and so on. The test of stamina (passive drowning victim rear rescue and extraction from a swimming pool, using an average weight/size model) is assumed to be generalizable to many different conditions, and to predict the ability of the test taker to successfully conduct rescues in non-pool domains. The strength of the relationship between the test and real-world performance is an empirical matter.

Recognizing the impact of context on test performance may initially look like a serious challenge to the testing enterprise, as score meaning must thereafter be constructed from much more than individual ability. McNamara (1995) referred to this as 'opening Pandora's box', allowing all the plagues of the real world to infect the purity of the link between a score and the mind of the person from whom it was derived. While this may be true in the more radical constructivist treatments of context in language testing, I believe that validity theory is capable of taking complex context into account while maintaining score generalizability for practical decision-making purposes. In the remainder of this chapter I first consider three stances towards context: atomism, neobehaviourism, and interactionism. This classification is familiar from other fields of applied linguistics, but in language testing each has distinctive implications. Each is described, and then discussed under the two sub-headings of generalizability and provlepsis. Generalizability is concerned with the breadth or scope of score meaning beyond the immediate context of the test. The latter term is taken from the Greek Προβλέψεις, which I use to refer to the precision with which a score user may use the outcome of a test to look into the future and make predictions about the likely performance of the test taker. Is the most appropriate analogy for the test a barometer, or a crystal ball? I conclude by considering how it is possible to take context seriously within a field that by necessity must decontextualize to remain ethical and legal.

Peer-reviewed; Post-print
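Note (illustrative aside, not part of the catalogue record or of Fulcher's chapter): the notion of construct-irrelevant variance discussed above can be sketched with the standard decomposition of observed-score variance, here with a hypothetical context facet added alongside the trait and error components and assuming the components are uncorrelated:

\sigma^2_X = \sigma^2_T + \sigma^2_C + \sigma^2_E, \qquad \rho_{\mathrm{gen}} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_C + \sigma^2_E}

Here X is the observed score, T the targeted trait, C construct-irrelevant contextual effects, and E residual error; a generalizability-style coefficient such as \rho_{\mathrm{gen}} then expresses how much of the score variance remains trait-relevant once context is allowed to vary.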
Keyword: applied linguistics; context in language learning; language testing
URL: http://hdl.handle.net/2381/33093
http://www.palgrave.com/page/detail/the-dynamic-interplay-between-context-and-the-language-learner-jim-king/?isb=9781137457127
BASE
10
Limited Aspects of Reality: Frames of reference in language assessment
Fulcher, Glenn; Svalberg, Agneta Marie-Louise. - : University of Murcia, 2013
BASE
11
EAP Teacher Assessment Literacy
Manning, Anthony. - : University of Leicester, 2013
BASE
12
Limited aspects of reality: Frames of reference in language assessment
In: International Journal of English Studies; Vol. 13 No. 2 (2013): Second Language Testing: Interfaces between Pedagogy and Assessment; 1-19; ISSN 1989-6131; 1578-7044 (2013)
BASE
13
Assessment literacy for the language classroom
In: Language assessment quarterly. - New York, NY [etc.] : Routledge, Taylor and Francis Group 9 (2012) 2, 113-132
BLLDB
OLC Linguistik
14
The Routledge handbook of language testing
Davidson, Fred (ed.); Fulcher, Glenn (ed.). - London [etc.] : Routledge, 2012
BLLDB
UB Frankfurt Linguistik
15
Book Notices
In: Studies in second language acquisition. - New York, NY [etc.] : Cambridge Univ. Press 33 (2011) 4, 639
OLC Linguistik
16
Effective rating scale development for speaking tests: performance decision trees
In: Language testing. - London : Sage 28 (2011) 1, 5-29
BLLDB
OLC Linguistik
17
Practical language testing
Fulcher, Glenn. - London : Hodder Education, 2010
BLLDB
UB Frankfurt Linguistik
18
Test use and political philosophy
In: Annual review of applied linguistics. - Cambridge, Mass. [etc.] : Univ. Press 29 (2009), 3-20
BLLDB
OLC Linguistik
19
Test architecture, test retrofit
In: Language testing. - London : Sage 26 (2009) 1, 123-144
BLLDB
20
Test architecture, test retrofit
In: Language testing. - London : Sage 26 (2009) 1, 123-144
OLC Linguistik

