Categories
Feature

AI@ACC Panel 4: Assessment in the Age of AI

AI@ACC Panel 4 featured LaKisha Barrett, Sajjad Mohsin, Dania Dwyer, Sara Farr, Jaime Cantú, Marian Moore, and Ronald Johnson in a conversation on rethinking assessment in the age of AI, highlighting approaches that make student thinking visible and learning more meaningful.

Rethinking Assessment in the Age of AI

At the final AI@ACC panel, we kept coming back to one simple question: if AI can do the assignment, what are we actually assessing?

What stood out right away was that the conversation didn’t feel like a crisis. No one was arguing that everything is broken or that we need to start over. Instead, what came through was something more grounded and, honestly, more encouraging. Faculty are already adjusting. And in many cases, they’re moving toward approaches that research has been pointing to for decades, especially around authentic assessment and deeper learning.

When people talk about “authentic assessment,” it can sound abstract. But during this panel, it showed up in very concrete ways.

In biology, Jaime Cantú described having students explain complex concepts to different audiences, such as athletes, patients, or children. That kind of task immediately raises the bar: students can't rely on memorization because they have to understand the material well enough to translate it. In learning science, this kind of transfer, taking knowledge and applying it in a new context, is one of the strongest indicators of deep understanding (see How People Learn by the National Research Council). Jaime has also been experimenting with AI tools that surface student thinking and even reward students for asking good questions, not just giving correct answers; you can read more about his research on assessment with the Blackboard AI Conversation tool in his white paper. That shift toward valuing inquiry aligns closely with research on metacognition and self-regulated learning (see Barry Zimmerman's work on self-regulated learning).

In game design, Sara Farr described a different kind of assignment, but one that gets at the same core idea. Her students are creating original work, building games, visuals, and narratives, and documenting how those ideas evolve over time. The final product matters, but so does the process and the decisions behind it. Students are asked to show how their ideas developed, why they made certain choices, and how they refined their work. This kind of iterative, design-based learning reflects what Grant Wiggins describes as authentic assessment, where students are asked to produce work that mirrors real-world performance and requires judgment, not just correctness. 

Dania Dwyer in composition is taking a more explicitly AI-integrated approach, but in a very intentional way. She allows students to use AI as part of their writing process, but they are still responsible for shaping the argument, making rhetorical choices, and explaining their decisions. She shared that she has been genuinely impressed with the quality of student work when AI is used thoughtfully. What she is really assessing is how students develop and refine ideas over time. That emphasis on writing as a process, not just a product, is well supported in research on learning, including work synthesized in How Learning Works by Susan Ambrose and colleagues. 

In computer science, Dr. Sajjad Mohsin described a shift that feels especially relevant in the age of AI. Instead of grading only whether code works, he asks students to document their entire process through logbooks. Students explain how they approached a problem, how they used AI to troubleshoot, what prompts they tried, and how they worked through errors. They also have to explain exactly what their code is doing and why. This makes their thinking visible in a way that a finished program never could. It also aligns with research on cognitive apprenticeship and making thinking visible, such as the work of Allan Collins and colleagues. 

When you put these together, the assignments look very different on the surface, but they’re all getting at the same thing. They’re asking students to apply what they know, explain their reasoning, make decisions, show their process and create something original.

The Mentimeter responses from participants reinforced this. When asked what critical thinking looks like, people described things like evaluating AI outputs, reflecting on their learning, and applying knowledge in new contexts. That’s notable because it shows that AI is already being folded into how faculty understand thinking itself. At the same time, when asked what their assessments currently reward, creativity came in lowest. That gap is important. It suggests that while many faculty are already valuing explanation and reasoning, there is still room to expand how we assess originality and generative work, something that becomes even more important when AI can produce polished outputs so easily. 

One comment we heard repeatedly was that "text homework done at home is basically useless now." That might feel a little blunt, but it points to something real: some kinds of assignments are becoming less reliable as evidence of learning. That doesn't mean everything is falling apart, though. The observation aligns with long-standing research on assessment validity, including work by Samuel Messick, which emphasizes that assessments must be continuously re-evaluated as contexts change.

The biggest takeaway from the panel is that we’re not starting from scratch. Faculty like Jaime, Sara, Dania, and Sajjad are already showing what this can look like in practice. They’re designing assignments that make thinking visible, even when AI is part of the process.

AI is definitely changing what students can produce. But it’s also pushing us to get clearer about what we actually care about. If we care about understanding, reasoning, and the ability to use knowledge in meaningful ways, then our assessments need to reflect that.

And in many cases, they already are.

View the session summary or watch the session recording to dive in deeper!


AI@ACC Panel Series is a four-part, cross-disciplinary, dialog-based conversation series developed through Austin Community College’s (ACC) participation in the AAC&U Institute on AI, Pedagogy, and the Curriculum. Grounded in national research, the series explores how artificial intelligence is shaping teaching, learning, assessment, and the future of work in higher education across teaching, support, and workforce roles.

Designed as a low-pressure entry point, this series centers real questions, lived experience, and diverse perspectives rather than tools, mandates, or hype. Ethical concerns, including bias, labor, environmental impact, and academic integrity, are acknowledged and respected throughout. No prior AI experience is expected. Questions and uncertainty are welcomed.

AI@ACC is a space for inquiry, not compliance. The series is exploratory and reflective rather than directive. While AI raises serious concerns, disengagement does not ultimately protect students. These conversations focus on helping educators and staff thoughtfully support students as they navigate evolving academic and workplace norms.

Categories
Feature

AI@ACC Panel 3: AI@Work: Faculty and Industry Perspectives

AI@ACC Panel 3 featured Beth Vaughn, Gwen Holford, Lani Dame, and Jennifer Houlihan in a lively conversation about how AI is reshaping workforce expectations and the skills our students need next. Mentimeter responses highlighted critical thinking, adaptability, and communication as key strengths.

View the session summary or watch the session recording to dive in deeper!



Categories
Feature

Ask an ID: Updating Course Materials for Accessibility

Dear Instructional Designer,

I’ve been using the same Google Slides and scanned PDFs for years, but I’m realizing they probably aren’t accessible for all my students. Between my slide decks and these old documents, the task of updating everything feels overwhelming and I don’t even know where to begin. Do you have any advice or tools for a non-tech expert to help me get my existing course materials up to current standards?

– Accessibly Anxious

Dear Accessibly Anxious, 

It’s completely normal to feel overwhelmed by the technical side of accessibility, but you don’t have to become an expert overnight to make a big impact. Here is a curated roadmap of tools and workflows to help you systematically bring your slides and documents up to current accessibility standards.

1. Audit materials with Blackboard Ally

The Accessibility Report in Blackboard Ally is a great place to find out what is flagged in your existing documents. Here's a help document from the University of Arkansas that walks through the steps for working with Ally. We also ran a Blackboard workshop a couple of years ago that covers Ally and how to use AI to write alt text for images and help with captioning if you have videos.

2. Making Slide Decks Accessible

When you are ready to remediate your slides for screen readers, the process depends on the tool you used to create them. Here are the go-to guides for the most common platforms:

3. PDF Accessibility

When it comes to PDFs, it is almost always easier to return to the original source file. In practice, starting with an accessible MS Word or Google Doc produces far more reliable results than trying to "fix" a document inside Adobe Acrobat.

If you don’t have the original source file, you can still use the Adobe Acrobat Accessibility Checker and the Reading Order tool. (You can access your free ACC Adobe Creative Cloud subscription here). For step-by-step guidance, I recommend:

4. Looking Forward: AI and Design

Since we are now designing courses in the “AI era,” it’s helpful to use a framework like AI-Responsive Assignment Design (ARAD). This approach helps you create assignments that are both accessible and ethically aligned with current technology.

General Resources for Your Toolkit

I know it’s a lot, but try to take it one step at a time. The best part? When you start building with accessibility in mind, you won’t have to go back and “fix” things later—you’re just doing it right the first time.

Good luck! Let me know if I can be of further assistance.

Yours in inclusion,

Your Instructional Designer

Categories
Feature

AI@ACC Panel 2: Reimagining Curriculum for the AI Era

AI@ACC Panel 2 featured Herb Coleman, Janey Flanagan, Susan Meigs, and Tina Buck in a thoughtful, energetic redesign chat where we dug into the new Blackboard AI Conversations tool. Mentimeter showed strong support for AI disclosure and clearer goals, with “everyone” claiming AI literacy as shared work.

View the session summary or watch the session recording to dive in deeper!



Categories
Faculty White Papers

Mastering the Art of Rehearsal: Utilizing Digital Tools for Enhanced Speech Delivery and Self-Assessment


Learn how "LMC" Lisa Marie Coppoletta transformed her students' public speaking anxiety into professional confidence in her Speech Communication course. This white paper details how structured preparation and collaborative reflection foster confidence and produce polished presentations.

Categories
Faculty White Papers

Redesigning ACNT 2330 with AI-Enhanced Exam Preparation


Discover how Okera Bishop utilized an AI-powered tutor to support exam preparation in an asynchronous accounting course. This white paper details how this approach increased student confidence, eliminated failing exam scores, and reduced score variability, highlighting AI's value for targeted practice.

Categories
Feature

AI@ACC Panel 1: Talking to your Students about AI Ethics

AI@ACC Panel 1 brought together Alex Watkins, Toño Ramírez, Andy Kim, and Mavis Klemcke for a candid, laugh-a-little conversation about syllabus policies, student fears, and what transparency really means. Mentimeter results showed big love for openness and process over policing.

View the session summary or watch the session recording to dive in deeper!



Categories
Faculty White Papers

AI-Powered Chatbot Assessments in Online Anatomy & Physiology: A Mixed-Methods Study White Paper


Follow along as Jaime Cantú compares traditional tests with AI chatbot assessments in his online Anatomy & Physiology courses. This white paper reveals how chatbots can foster deeper understanding, stronger explanation skills, and lower test anxiety through patient-centered dialogue.

Categories
Feature

Ask an ID: Rethinking Curriculum for Accelerated Sessions

Dear Instructional Designer,

I am an Adjunct Professor who has been at ACC for 20 years. I have always taught a 16-week course but I was just assigned a 12-week session for the first time. I am unsure how to best approach and manage this new session given the difference in length and would appreciate any guidance you can provide.

– Course Compressor

Dear Course Compressor,

I can certainly understand how this shift in course length presents a new challenge. Not to worry – here are some tips and resources to help you make this adjustment a smooth one for both you and your students.

Backward Design is how we usually approach course structure: start with the learning outcomes, what we want students to be able to demonstrate, and then work out the materials, activities, and assessments that go with each objective. You can access an example ACC course map here, which I encourage you to fill out. Look at your 16-week course objectives and sort them into what is absolutely required, what is important, and what is just "nice to know," then cut some of that last category. Trying to fully condense a 16-week course into 12 weeks without cutting anything isn't generally recommended; cognitive load theory and the spacing effect both argue against cramming the same material into less time.

Also consider moving from weekly modules to "unit" modules. A typical pattern is to fold Weeks 1–2, 3–4, 5–6, and so on into combined modules with clearer themes.

Be clear with your students about what is going on. Explain that they are in an accelerated course, so things will move faster than they may be used to. Remind them that staying on top of their work will be imperative, and outline in every module what they need to Read or Watch, what they need to Do, and what the Assessment will be. Connect the learning outcomes to those activities to help them understand why they are doing what they are doing, and give them a ballpark figure for how much time you expect each chunk to take. I like to create a PDF course schedule with the dates of the term and all the due dates so students can print or save it and cross things off.

You also want to use frequent, smaller check‑ins (quick quizzes, minute papers, short reflections) to monitor learning and catch problems early when things move faster. This is especially important because students in shortened terms can experience more stress and less recovery time between tasks.

It's a lot to generate, so I recommend leaning on Google Gemini for help; it's in your Google Workspace tools. Make sure you use your ACC-linked account so that your course materials stay secure, and double-check every single thing it generates for students, because generative AI can always produce hallucinations and errors.

I hope this helps you! Please don’t hesitate to reach out with any further questions or to set up a 1:1 meeting.

Backwards by design,

Your Instructional Designer

Categories
Feature

Ask an ID: Packback for Better Discussions and Smarter Grading

Dear Instructional Designer,

My pilot this semester is using AI to help with grading discussion boards. Do you have any tips for making the most of these functions? I’d also love to hear from others who have tried AI for managing discussions and would be open to sharing their experiences.

– Discussion Dynamo

Dear Discussion Dynamo,

I haven’t personally used AI for grading discussions in Blackboard, but I have had a lot of experience with a tool called Packback—have you heard of it? At ACC, it started with just a few faculty members, but they loved it so much that we now have a license for the whole college.

Packback is designed to encourage deeper student engagement in discussion-based assignments. Instead of simply answering prompts, students are coached by the AI to ask curiosity-driven, well-supported questions. As they write, Packback provides real-time feedback, nudging them toward stronger critical thinking and clearer support for their ideas. It even flags issues like low-effort posts or potential academic integrity concerns, giving students a chance to revise before submitting. This means the quality of the posts you receive is much higher before you ever start grading.

For faculty, this support translates into less time moderating and a smoother grading process. The AI helps surface the most insightful contributions and ensures posts meet the required standards. Once you get comfortable with the workflow, you may find you spend very little time grading because Packback’s structure guides students through the steps before their post is complete.

It does take a shift in how you think about discussions since the focus moves from students answering questions to asking them. But this shift can really pay off, especially in large classes or courses where participation and critical thinking are core outcomes. If more classes in a department adopted it, students would quickly become comfortable with the approach and faculty would benefit from easier-to-grade, higher-quality discussions.

Packback also includes a tool for essays that works with students as they draft, helping them improve their writing without doing the work for them. It’s a strong option if you’re looking to integrate AI into your teaching in a way that supports learning outcomes while reducing your workload.

Here are a couple of video resources if you’d like to learn more:

I hope this gives you a sense of how Packback can transform discussion into deeper learning opportunities for students while also simplifying grading for you.

To more curious questions,

Your Instructional Designer