Direct and Semi-Direct Validation: Test Takers’ Perceptions, Evaluations and Anxiety towards Speaking Module of an English Proficiency Test

Document Type: Original Article

Authors

1 Department of Foreign Languages and Literature, Science and Research Branch, Islamic Azad University, Tehran, Iran

2 Department of English Language Teaching, Islamic Azad University; Central Tehran Branch, Tehran, Iran

Abstract

This study employed a mixed-methods approach to investigate test takers’ perceptions of, and anxiety towards, an English language proficiency test, the Community English Program (CEP) test, and to evaluate the direct and semi-direct modes of its speaking module. To this end, 300 English as a Foreign Language (EFL) students were recruited as test takers. They took the CEP speaking test, comprising five tasks (Description, Narration, Summarization, Role-play and Exposition), in both the direct and the semi-direct test mode. Their perceptions and evaluations of both modes were examined through questionnaires, interviews and observations. The results of the factor analysis revealed that test takers’ evaluations of the direct and semi-direct speaking modes were quite similar, though not identical. Moreover, although test takers’ anxiety proved influential, the most decisive factor in their oral performance was their ability level, which was the main reason some test takers outperformed others. The findings also demonstrated that identifying test difficulty is a complex, demanding and multidimensional undertaking. The quantitative results indicated that raters scored the speaking performances differently, and the qualitative results accounted for these differences from the test takers’ perspective. Finally, the effect of test takers’ gender on their perceptions was nonsignificant.
 

Keywords


References
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford: Oxford University Press.

Bernstein, J., Van Moere, A., & Cheng, J. (2010). Validating automated speaking tests. Language Testing, 27(3), 355-377. doi: 10.1177/0265532207077205

Brown, A. (1993). The role of test-taker feedback in the test development process: Test-takers’ reactions to a tape-mediated test of proficiency in spoken Japanese. Language Testing, 10(3). doi: 10.1177/026553229301000305

Clark, J. L. D. (1978). Direct testing of speaking proficiency: Theory and application. Princeton: Educational Testing Service.

Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education. London: Routledge.

Elder, C., Iwashita, N., & McNamara, T. (2002). Estimating the difficulty of oral proficiency tasks: What does the test-taker have to offer? Language Testing, 19(4), 347-368.

Fulcher, G. (2003). Testing second language speaking. London: Longman.

Gan, Z. (2010). Interaction in group oral assessment: A case study of higher- and lower-scoring students. Language Testing, 27(4), 585-602. doi: 10.1177/0265532210364049

Kenyon, D. M., & Tschirner, E. (2000). The rating of direct and semi-direct oral proficiency interviews: Comparing performance at lower proficiency levels. The Modern Language Journal, 84(1), 85-101.

Luoma, S. (2004). Assessing speaking. Cambridge: Cambridge University Press.

May, L. A. (2006). An examination of rater orientations on a paired candidate discussion task through stimulated verbal recall. Melbourne Papers in Language Testing, 11(1), 29-51.

Nakatsuhara, F. (2011). Effect of test-taker characteristics and the number of participants in group oral tests. Language Testing, 28, 483-508. doi: 10.1177/0265532211398110

O’Loughlin, K. (2002). The impact of gender in oral proficiency testing. Language Testing, 19(2), 169-192. doi: 10.1191/0265532202lt226oa

Robinson, P. (2001). Task complexity, task difficulty and task production: Exploring interactions in a componential framework. Applied Linguistics, 22(1), 27-57.

Scott, M. L. (1986). Student affective reactions to oral language tests. Language Testing, 3(1), 99-118. doi: 10.1177/026553228600300105

Shohamy, E. (1994). The validity of direct versus semi- direct oral tests. Language Testing, 11(2), 99-123. doi: 10.1177/026553229401100202

Stansfield, C. W. (1991). A comparative analysis of simulated and direct oral proficiency interviews. In S. Anivan (Ed.), Current developments in language testing (pp. 199-209). Singapore: Regional English Language Center.

Stansfield, C. W., & Kenyon, D. M. (1992). Research on the comparability of the oral proficiency interview and the simulated oral proficiency interview. System, 20, 347-364.

Tarone, E. (1983). On the variability of interlanguage systems. Applied Linguistics, 4(2), 142-164.

Van Moere, A. (2012). A psycholinguistic approach to oral language assessment. Language Testing, 29(3), 325-344. doi: 10.1177/0265532211424478

Winke, P., & Gass, S. (2013). The influence of second language experience and accent familiarity on oral proficiency rating: A qualitative investigation. TESOL Quarterly, 47(4), 762-789. doi: 10.1002/tesq

Winke, P., Gass, S., & Myford, C. (2012). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing, 30(2), 231-252. doi: 10.1177/0265532212456968

Young, R., & Milanovic, M. (1992). Discourse variation in oral proficiency interviews. Studies in Second Language Acquisition, 14(4), 403-424.

Zeidner, M., & Bensoussan, M. (1988). College students’ attitudes towards written versus oral tests of English as a foreign language. Language Testing, 5(1), 100-114. doi: 10.1177/026553228800500107