AI@ACC Panel 4: Assessment in the Age of AI
AI@ACC Panel 4 featured LaKisha Barrett, Sajjad Mohsin, Dania Dwyer, Sara Farr, Jaime Cantú, Marian Moore, and Ronald Johnson in a conversation on rethinking assessment in the age of AI, highlighting approaches that make student thinking visible and learning more meaningful.
Rethinking Assessment in the Age of AI
At the final AI@ACC panel, we kept coming back to one simple question: if AI can do the assignment, what are we actually assessing?
What stood out right away was that the conversation didn’t feel like a crisis. No one was arguing that everything is broken or that we need to start over. Instead, what came through was something more grounded and, honestly, more encouraging. Faculty are already adjusting. And in many cases, they’re moving toward approaches that research has been pointing to for decades, especially around authentic assessment and deeper learning.
When people talk about “authentic assessment,” it can sound abstract. But during this panel, it showed up in very concrete ways.

In biology, Jaime Cantú described having students explain complex concepts to different audiences, like athletes, patients, or children. That kind of task immediately raises the bar. Students can't rely on memorization because they have to understand the material well enough to translate it. In learning science, this kind of transfer, taking knowledge and applying it in a new context, is one of the strongest indicators of deep understanding (see How People Learn by the National Research Council). Jaime has also been experimenting with AI tools that surface student thinking and even reward students for asking good questions, not just giving correct answers. Read more about Jaime's research on assessment with the Blackboard AI Conversation tool. That shift toward valuing inquiry aligns closely with research on metacognition and self-regulated learning (see Barry Zimmerman's work on self-regulated learning).
In game design, Sara Farr described a different kind of assignment, but one that gets at the same core idea. Her students are creating original work, building games, visuals, and narratives, and documenting how those ideas evolve over time. The final product matters, but so does the process and the decisions behind it. Students are asked to show how their ideas developed, why they made certain choices, and how they refined their work. This kind of iterative, design-based learning reflects what Grant Wiggins describes as authentic assessment, where students are asked to produce work that mirrors real-world performance and requires judgment, not just correctness.
Dania Dwyer in composition is taking a more explicitly AI-integrated approach, but in a very intentional way. She allows students to use AI as part of their writing process, but they are still responsible for shaping the argument, making rhetorical choices, and explaining their decisions. She shared that she has been genuinely impressed with the quality of student work when AI is used thoughtfully. What she is really assessing is how students develop and refine ideas over time. That emphasis on writing as a process, not just a product, is well supported in research on learning, including work synthesized in How Learning Works by Susan Ambrose and colleagues.
In computer science, Dr. Sajjad Mohsin described a shift that feels especially relevant in the age of AI. Instead of grading only whether code works, he asks students to document their entire process through logbooks. Students explain how they approached a problem, how they used AI to troubleshoot, what prompts they tried, and how they worked through errors. They also have to explain exactly what their code is doing and why. This makes their thinking visible in a way that a finished program never could. It also aligns with research on cognitive apprenticeship and making thinking visible, such as the work of Allan Collins and colleagues.
When you put these together, the assignments look very different on the surface, but they're all getting at the same thing. They're asking students to apply what they know, explain their reasoning, make decisions, show their process, and create something original.

The Mentimeter responses from participants reinforced this. When asked what critical thinking looks like, people described things like evaluating AI outputs, reflecting on their learning, and applying knowledge in new contexts. That’s notable because it shows that AI is already being folded into how faculty understand thinking itself. At the same time, when asked what their assessments currently reward, creativity came in lowest. That gap is important. It suggests that while many faculty are already valuing explanation and reasoning, there is still room to expand how we assess originality and generative work, something that becomes even more important when AI can produce polished outputs so easily.
One comment we heard more than once was that "text homework done at home is basically useless now." That might feel a little blunt, but it points to something real: some kinds of assignments are becoming less reliable as evidence of learning. That doesn't mean everything is falling apart, though. It aligns with long-standing research on assessment validity, including work by Samuel Messick, which emphasizes that assessments must be continuously re-evaluated as contexts change.
The biggest takeaway from the panel is that we’re not starting from scratch. Faculty like Jaime, Sara, Dania, and Sajjad are already showing what this can look like in practice. They’re designing assignments that make thinking visible, even when AI is part of the process.
AI is definitely changing what students can produce. But it’s also pushing us to get clearer about what we actually care about. If we care about understanding, reasoning, and the ability to use knowledge in meaningful ways, then our assessments need to reflect that.
And in many cases, they already are.
View the session summary or watch the session recording to dive in deeper!
AI@ACC Panel Series is a four-part, cross-disciplinary conversation series developed through Austin Community College's (ACC) participation in the AAC&U Institute on AI, Pedagogy, and the Curriculum. Grounded in national research, the series explores how artificial intelligence is shaping teaching, learning, assessment, and the future of work across teaching, support, and workforce roles in higher education.
Designed as a low-pressure entry point, this series centers real questions, lived experience, and diverse perspectives rather than tools, mandates, or hype. Ethical concerns, including bias, labor, environmental impact, and academic integrity, are acknowledged and respected throughout. No prior AI experience is expected. Questions and uncertainty are welcomed.
AI@ACC is a space for inquiry, not compliance. The series is exploratory and reflective rather than directive. While AI raises serious concerns, disengagement does not ultimately protect students. These conversations focus on helping educators and staff thoughtfully support students as they navigate evolving academic and workplace norms.


