Tuesday, March 5, 2024

An Interesting Take on AI Interviews

One of the most popular ideas for incorporating AI into teaching and learning is to have students "interview" a chatbot playing a person or even a concept. It's a way to make learning more engaging, less dry, and maybe even come alive. I shared this idea in my PD sessions for teachers because I had seen it shared in so many of the places I read about using AI in education.

And then recently, I saw this blog post.

I encourage you to take the time, when you can, to actually read it because it's interesting. But until then, here's some TL;DR.

The author of the blog post, Tom Mullaney, is a former teacher who still has his hand in education and ed tech. He's a pretty well respected guy in the ed tech circles I run in, so I listen to what he says because he offers lots of good stuff. Essentially, his post suggests that using AI as a guest speaker isn't such a sound pedagogical idea. He raises concerns like "interacting" with problematic historical figures (ex.: Thomas Jefferson), AI giving voice to deceased historical figures who represent oppressed/marginalized communities (ex.: Harriet Tubman), AI giving voice to the dead in general (since it's not really their words but rather words synthesized from what's available on the internet), and the risk of giving human qualities to inanimate objects (like interviewing the water cycle).

Young kids especially may be subject to the Eliza Effect, where they believe the AI they are talking to is actually a real person (he explains the Eliza Effect in his post, and there have been stories of people developing "relationships" with Siri, so it's not as far-fetched as one might think). It's also common knowledge that one of the main concerns with AI is that it can produce inaccuracies and biased information. Using a chatbot to interview someone -- real, fictional, dead, alive, or inanimate -- may produce inaccurate or biased information, and we would need to be ready on the spot to address that if it happens. But would the "damage" already have been done once the kids hear that information?


In all honesty, this blog post angered me the first time I read it. I was like, "Hooey!" And then I came back to it a couple more times and started to think about what he said, and at the very least, I believe he makes some interesting points that are surely worth serious consideration before anyone tries this idea in their classrooms. Maybe he's 100% spot on; maybe it's all a bunch of "woke" ideology; maybe it's a mixture. Regardless, Mullaney's blog post highlights one of the many issues we face in education related to using AI. If we are going to put this in the hands of our students, we need to seriously consider the implications and effects of it on all our students. Using AI needs to be purposeful and thoughtful, and we need to be ready in the moment to address any issues that arise. It's a pretty heavy responsibility we have, so please continue to be thoughtful about how AI might be used with your students, and always be ready and willing to learn new ideas!

Okay, now it's your turn. I would absolutely love to hear your thoughts! Please share your response in the comments. I truly love learning from all of you, so let's talk!


P.S. Here's a fun tidbit -- I wanted to create an image with Adobe Firefly for this post of Abraham Lincoln on a cell phone. It wouldn't let me create it! Adobe Firefly gave me a little popup saying I wasn't operating within their content generation guidelines!

2 comments:

  1. Fascinating post! An application I had not thought of. Your concerns are valid - it would be essential for the educator to be ready to intervene and fact-check inaccuracies, as AI is incredibly prone to error and/or bias. I am amazed, intrigued, excited, and a bit alarmed at the rapid development of AI and its implications. This is why it is necessary for us to get ahead of it - many kids are already using it, so it is necessary to teach them how to use it safely and responsibly.

    1. I agree wholeheartedly! AI isn't like so many other tools we have kids use. Because we really don't know what exactly it is going to generate, we as teachers need to use it ourselves first to get a feel for what the results may look like and be ready to act on anything presented to kids that may not be correct. There is a definite moral aspect to using it as well, something we just may not be accustomed to having to think about too much in education.
