Dr. Luke Plonsky is Associate Professor of Applied Linguistics at Northern Arizona University, where he teaches courses in second language acquisition and research methods.  Recent and forthcoming works in these and other areas can be found in Applied Linguistics, Annual Review of Applied Linguistics, Language Learning, and Modern Language Journal, as well as in a number of other journals and edited volumes.  He has also authored and edited several books, including Second language acquisition: An introductory course (Gass, Behney, & Plonsky, 5th edition, under contract) and Advancing quantitative methods in second language research (Plonsky, 2015).

Dr. Plonsky is Associate Editor of Studies in Second Language Acquisition and Managing Editor of Foreign Language Annals, and he serves on the editorial boards of Language Teaching and of Learning and Individual Differences.  He is also Co-Editor of de Gruyter Mouton’s Series on Language Acquisition and Co-Director of the IRIS repository for instruments in language learning and teaching (iris-database.org).  Dr. Plonsky previously held faculty appointments at Georgetown University and University College London, and he has also taught in Japan, the Netherlands, Spain, and Puerto Rico.  He received his PhD in Second Language Studies from Michigan State University.

Plenary title:  Mind your measures: Methodological reform at the SLA-assessment interface

Plenary abstract:  Multiple options exist for measuring virtually all constructs of interest in second-language (L2) research. In the context of L2 writing, for example, Polio and Shea (2014) identified 44 unique measures of accuracy across 35 studies. Such choices are neither arbitrary nor trivial. In fact, as demonstrated by a large and growing body of primary and meta-analytic evidence, the results obtained in a given study depend heavily on the tools used to derive them (e.g., Thompson et al., 2018; Plonsky et al., in press; Saito & Plonsky, in press). It is critical, therefore, that we make theoretically and empirically informed choices when deciding which measures to employ. The first part of this two-part talk describes a number of major concerns for measurement in L2 research and then outlines the growing evidence on the interaction between choice of measure and study outcome, drawing on examples from subdomains ranging from pronunciation to pragmatics and from learning strategies to psycholinguistics. Such variability in observed effects and relationships, which generally goes unaccounted for, is often compounded by other types of error (i.e., noise). In the second part of the talk, I identify a number of these major and frequent threats to the psychometric properties of our measures. Whether we recognize them or not, failure to address such threats greatly limits our ability to make inferences about L2 knowledge, use, and development. Validity evidence for measures in L2 research, for instance, is exceedingly scarce (Norris & Ortega, 2012). The talk concludes with a series of proposals aiming (a) to improve measurement practices in primary research and (b) to incite a more active agenda at the SLA-assessment interface. To that end, I describe fruitful and, I will argue, vital directions for research centered on producing the evidence of measurement validity that the field needs in order to advance.