Lessons Learned in Designing and Implementing a Computer-Adaptive Test for English

Jack Burston, Maro Neophytou

Abstract

This paper describes the lessons learned in designing and implementing a computer-adaptive test (CAT) for English. The early identification of students with weak L2 English proficiency is of critical importance in university settings that have compulsory English language course graduation requirements. The most efficient means of diagnosing the L2 English ability of incoming students is a computer-based test, since such an evaluation can be administered quickly, corrected automatically, and its outcome known as soon as the test is completed. While the option of using a commercial CAT is available to institutions with the ability to pay substantial annual fees, or the means of passing these expenses on to their students, language instructors without these resources can only avail themselves of the advantages of CAT evaluation by creating their own tests. As is demonstrated by the E-CAT project described in this paper, this is a viable alternative even for those lacking any computer programming expertise. However, language teaching experience and testing expertise are critical to such an undertaking, which requires considerable effort and, above all, collaborative teamwork to succeed. A number of practical skills are also required. Firstly, the operation of a CAT authoring programme must be learned. Once this is done, test makers must master the art of creating a question database and assigning difficulty levels to test items. Lastly, if multimedia resources are to be exploited in a CAT, test creators need to be able to locate suitable copyright-free resources and re-edit them as needed.
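Although the paper focuses on practical test construction rather than psychometrics, the adaptive logic that distinguishes a CAT from a fixed test can be illustrated with the one-parameter (Rasch) IRT model: each response updates an estimate of the test-taker's ability, and the next item administered is the unused one whose difficulty best matches that estimate. The sketch below is purely illustrative; the function names, the gradient-ascent ability update, and the closest-difficulty selection rule are simplifications chosen for clarity, not the E-CAT implementation itself.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a test-taker of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_ability(theta, responses, steps=20, lr=0.5):
    """Re-estimate ability by gradient ascent on the Rasch
    log-likelihood. responses: list of (difficulty, correct) pairs,
    where correct is 1 or 0."""
    for _ in range(steps):
        grad = sum(correct - p_correct(theta, b) for b, correct in responses)
        theta += lr * grad
    return theta

def run_cat(item_bank, answer_fn, n_items=10):
    """Minimal adaptive loop: administer the unused item whose
    difficulty is closest to the current ability estimate (the most
    informative item under the Rasch model), then re-estimate.
    item_bank: list of item difficulties; answer_fn(b) -> 1 or 0."""
    theta = 0.0
    responses = []
    remaining = list(item_bank)
    for _ in range(min(n_items, len(remaining))):
        b = min(remaining, key=lambda d: abs(d - theta))
        remaining.remove(b)
        responses.append((b, answer_fn(b)))
        theta = update_ability(theta, responses)
        theta = max(-4.0, min(4.0, theta))  # keep the estimate bounded
    return theta
```

For example, a simulated examinee who answers every item at or below difficulty 1.5 correctly will see the administered items converge toward that region of the bank, which is precisely the efficiency gain a CAT offers over a fixed-form placement test.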

Keywords

Computer-Assisted Testing; CAT; English; placement; test authoring





Cited-By (articles included in Crossref)

This journal is a Crossref Cited-by Linking member. The list below shows articles that cite this one, collected automatically by Crossref. For more information about the system, please visit the Crossref site.

1. Mizumoto, A., Sasao, Y., & Webb, S. A. (2019). Developing and evaluating a computerized adaptive testing version of the Word Part Levels Test. Language Testing, 36(1), 101. doi: 10.1177/0265532217725776



Creative Commons License

This journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.

Universitat Politècnica de València

e-ISSN: 1695-2618    http://dx.doi.org/10.4995/eurocall