Research

Research agenda

My research sits at the intersection of writing assessment and literacy studies, where questions of validity, ethics, justice, and policy meet the design of assessment systems for digital and AI-mediated contexts.

Four linked areas of inquiry

These themes are linked by a hopeful vision for the future of writing assessment, one that asks what assessment systems represent, whom they serve, and what kinds of futures they help produce.

Justice, fairness, ethics, and validity

I study what it means to treat justice as central to validation rather than as an afterthought. This includes work on ethical writing assessment, consequential validity, justice-oriented and antiracist validation, and the relationship between validity arguments and human dignity.

Writing expertise in digital and AI-mediated contexts

Writers now compose across platforms, media, and tools that change what counts as expertise. I examine how assessments can better represent that complexity without collapsing into narrow or reductive models of performance.

Assessment design and the consequences of use

My work on the Integrated Design and Appraisal Framework brings together construct representation, theory of action, validity, and consequences of use. The aim is to design assessments that are intellectually rigorous while remaining accountable for what they do in the world.

Policy, pedagogy, and public accountability

I work with teacher educators, professional organisations, and policy partners to rethink how assessment systems shape instruction, opportunity, and public trust. This includes research on large-scale testing, school-based assessment, and teacher learning.

Current program of research

Justice-Oriented Literacy Assessment for a Digital and AI-Mediated World is the umbrella program that currently organises much of my work. It builds on my earlier research on the social consequences of assessment and extends that work into questions of digital communication, AI-enabled feedback, and culturally responsive assessment design.

The work begins from a simple concern: too many literacy assessments still rest on narrow, standardised models of writing even as students and workers are asked to communicate across linguistic, cultural, and technological boundaries shaped by AI, misinformation, and widening social inequities.

When assessments fail to represent that complexity, they do more than mismeasure performance. They distort teaching, restrict opportunity, and reproduce injustice.

Current directions

The following project clusters translate this broader agenda into concrete lines of work.

Project cluster 1

Scenario-based digital formative assessment

I am extending work on a scenario-based digital formative assessment platform that offers feedback across multiple domains of writing expertise. This cluster includes emerging work on AI-enabled feedback and the validity arguments needed to support its responsible use.

Project cluster 2

International writing assessment and large language models

Working with collaborators, I am exploring how large language models can help analyse international writing assessment data in ways that reveal linguistic, cultural, and substantive patterns shaping performance and scoring.

Project cluster 3

Culturally responsive and justice-oriented assessment

I am advancing work on culturally sustaining and justice-oriented assessment frameworks that connect ethical design, sociocognitive models of writing, and public accountability across local and international contexts.

Design commitments

Across projects, I return to the same design commitments: assessments should represent meaningful forms of literacy; they should be valid, fair, ethical, and just; they should remain accountable for their consequences; and they should support learning rather than narrow it.