BIAS EXISTS IN BOTH INPUT AND OUTPUT

I remember the day OpenAI announced ChatGPT and described what it would be capable of. I was facilitating a teacher training program for Wisconsin computer science teachers in collaboration with the computer science department at Marquette University. On our lunch break, the department chair, a doctoral student, and I had a long conversation about how the future of education was about to fundamentally change. The three of us were simultaneously excited and very nervous about what the future with AI in the hands of EVERYONE would look like. Our musings that day turned out to be very close to today’s reality.

I’ve been called an “early adopter” of AI, failing my way into figuring out how to use it in a way that preserves my own humanity and creative thought process while still allowing for professional growth and protecting my mental capacity. It was not a pretty process, but I learned some very hard lessons along the way. The biggest one: garbage in, garbage out. The quality of the results rests heavily on the quality of the initial prompt and on the revisions required to make the result meaningful.

In practice, this meant that using AI effectively still required reading, writing, and critical thinking, the very literacy practices that educators aim to cultivate. This aligns with the ELATE position statement’s argument that meaningful engagement with generative AI still relies on human-centered skills such as creativity, critical analysis, and reflection.

"A wonderful piece of homespun wisdom is that 'you can’t polish a turd'. If something’s crap, you’ve little hope of making it better through cosmetics. Extending the metaphor, possibly too graphically (ha ha), getting loads of turds and counting them, and then putting them into a shiny and tastefully coloured box marked n=200 and p<0.0001 doesn’t stop them being turds. It is not possible to carry out meaningful statistical analysis of data that is fundamentally inaccurate." 
-Brooke Morriswood

Some tasks (such as my Educator Effectiveness evaluation’s Student Learning Objective and Personal Professional Goal) I automate based on very specific parameters. Others I spend time painstakingly crafting, asking for feedback to make sure I am looking at all the angles correctly, especially in areas where I am not an expert (for example, creating a video planning document that mirrors industry standards but is scaffolded for 11th and 12th grade students of varying skill levels). I never trusted the models to have my students’ best interests in mind, to understand the nuance of my classroom, to respect the cultural identities of individual students, to abide by my own teaching philosophies, or to advocate for real change and disruption. These are all very human considerations, and a machine cannot be expected to account for them.

Much of the current discourse around AI in education centers on control: what students should or should not be allowed to do. However, as Annette Vee notes, the traditional stance of “I know what’s good for you” no longer resonates with students navigating a world saturated with technology. Instead, educators are increasingly required to justify the purpose of assignments and to acknowledge students as agents making decisions about the tools they use (which I think educators should be doing regardless of student AI use, as in my view of Educator Effectiveness).

At the same time, my experiences also reflected the call in the position statement for teachers to “mess around” with generative AI and learn alongside students. My early attempts were far from polished. I experimented, failed, revised prompts, and gradually learned where AI could support my thinking and where it could introduce bias or shallow reasoning. Rather than replacing the writing process, AI became something more like a feedback partner, useful when approached critically, but unreliable when used uncritically.

This process of experimentation has shaped how I now think about teaching with AI. Instead of focusing on restricting its use, I find myself helping students understand the conditions under which AI can be useful and when it may distort or oversimplify complex ideas. In this sense, and in line with this week’s readings, teaching with AI becomes less about control and more about cultivating agency, critical awareness, and responsible decision-making.

I really connected with Aguilar’s position on training students to generate with AI through a social justice lens, because it closely follows my own journey of figuring out how to harness the power of GenAI without losing myself and my cognitive processes along the way, all while protecting the identities and stories of the people I serve. I dove into two real-world applications of Aguilar’s framework. The first is an early draft of what I might implement as a student-facing exploration of AI utilization and iterative prompting. The second is AI’s critique of my own AI use through the Aguilar framework, and how I stack up based on the notes I took while reading. I am choosing to keep this critique personal for now, as I still have questions about how it made the decisions about my own personal use. I can explain further if you’d like.

What might this look like in the classroom?

  1. Teaching to contextualize:

DID THE STUDENT ESTABLISH AUTHORITY?

Evidence:

  • Screenshot of initial AI prompt

  • Written reflection explaining why the student structured the prompt that way

  • Annotation of what context they provided (assignment, audience, constraints, parameters)

  2. Positioning to write ethically:

DID THE STUDENT PROACTIVELY SHAPE THE ETHICAL BOUNDARIES?

Evidence:

  • Explicit instructions in prompt about bias, harm reduction, inclusion

  • Use of reflexive questions (Who could be harmed? Who is excluded?)

  • Student written explanation of ethical constraints they added to the prompt

  3. Recognizing problematic output:

Evidence:

  • Student highlights problematic AI language

  • Written critique of AI language within a paragraph

  • Revision instructions to be given back to the AI model

  4. Dialogic revision process:

Evidence:

  • The full revision chain collected as evidence: prompt → output → revision → new output, repeated as needed (a minimal sketch of this loop follows the list)
  • Student commentary on what changed and why
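
For readers who want to see the loop itself, here is a minimal sketch in Python of what the dialogic revision chain could look like if a class scripted it. This is an illustration only, not the tool I actually use with students: query_model is a hypothetical placeholder for whatever chat interface or API a class has access to, and the evidence list simply records each prompt, output, and revision so the whole chain can be turned in.

```python
# Minimal sketch of the dialogic revision chain (illustration, not a classroom tool).
# Each round records the prompt, the AI output, and the student's revision
# instructions so the whole chain can be submitted as evidence.

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in whatever chat interface or API the
    # class actually uses. Here it simply echoes the prompt so the loop runs.
    return f"(model reply to: {prompt[:60]})"

def dialogic_revision(initial_prompt: str, max_rounds: int = 3) -> list[dict]:
    evidence = []                              # the prompt -> output -> revision chain
    prompt = initial_prompt
    for round_number in range(1, max_rounds + 1):
        output = query_model(prompt)
        print(f"Round {round_number} output:\n{output}\n")
        revision = input("Revision instructions (leave blank to stop): ").strip()
        evidence.append({
            "round": round_number,
            "prompt": prompt,
            "output": output,
            "revision": revision,
        })
        if not revision:                       # student is satisfied with the output
            break
        prompt = f"{prompt}\n\nRevise the previous answer: {revision}"
    return evidence

if __name__ == "__main__":
    chain = dialogic_revision("Draft a video planning outline for an 11th grade project.")
    print(f"Collected {len(chain)} rounds of evidence.")
```

The point is not the code itself but the record it keeps: the same prompt, output, revision, new output chain that students would otherwise document by hand with screenshots and commentary.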

Comments

  1. Hi Amanda! Thank you for sharing your thoughts on AI and how they have grown over time. A side note, but you were in such fortunate company to have deep conversations surrounding AI on the day ChatGPT was announced. I found your thoughts and the proposed process for how students may engage with AI in the classroom valuable, in ways that maintain their learning in the content area and their agency to choose whether they wish to use AI. I also appreciated you referencing your own engagements with AI, especially when you mentioned that a main reason for not accepting the initial output is that you never trust AI to have the best intentions for students. While AI does make things "easier" when it generates an entire output, that does not mean it should be the final output, and it is far from the best result. Your proposals for having students give revisions and include evidence of the initial prompt are very smart, and the labor of doing those things means students need to engage with the AI-generated work and learn why the initial prompt was or was not effective, which is an important aspect of the learning we want from our students.

  2. Amanda, you do a wonderful job connecting your "garbage in, garbage out" motto to the Aguilar reading. This speaks highly of the need to explicitly teach prompt writing and the training of individual chatbots. I look forward to more windows into your thinking and experience with this kind of computational thinking.

  3. Amanda! As always, I learn so much from your positioning and application of new technologies. I especially appreciated the explanation you gave at the end, featuring visual support to see Aguilar's social justice lens on AI. This particularly helps me make concrete the elusive nature of what exactly AI might assist us with, rather than the broad, generalized strokes of 'it'll make it better' without precisely discussing how (which just doesn't compute with how I process; you know I need to see it and manipulate it to really understand). Thanks for your thoughtfully worded response (and for all of the future eloquent answers to the obvious inquiries I throw at you)!

