Since I began my career (about the time I first heard the phrase "alternative facts"), my guiding question as a high school librarian has been "How do I teach students to think critically about the media and information they encounter?"
That's arguably more important than ever, but there's a new hurdle to clear before we get there:
How do I convince high school students not to rely on AI for challenging school work?
Every semester I get the opportunity to work with our Advanced Composition classes (a mix of juniors and seniors) on research for their argumentative essays. For the last couple of years I've used the same lesson (see previous CRIT post), inserting a little "ChatGPT is not a search engine" spiel that I'm sure went in one ear and out the other. This year, with this new hurdle in mind, I redesigned the lesson so that it integrates CRIT while asking students to weigh the benefits and pitfalls of using AI, Google, and databases.
Set-up:
- AI/Google/Databases scorecard on a whiteboard; dry erase marker nearby
- cards with numbers 1-5 on each table (I recommend different colors for each number so it's easy to see at a glance) -- see slide #1
- slides & screen/projector
Background: At various checkpoints throughout the lesson, students will collaborate with their table mates to assign a score to AI, Google, or Databases for each criterion. Ask each table to hold up its number, then roughly average the scores offered. If you're feeling confident, ask tables whose scores are outliers to justify their choices.
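For example, if four tables hold up 3, 4, 4, and 5, record a 4 -- the numbers here are just an illustration, and a rough consensus is all you need.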
<Slide 2> Introduce the lesson
<Slide 3> I start every lesson with this pseudo-existential question to make sure students understand the point of our time together. By reviewing the upcoming assignment, we can begin with the end in mind.
<Slide 4> I took some liberties here, assuming that students want to get their assignments done well… so we talk about this, and I mention that even if they don't have that goal, their teacher (and the other adults in their lives) has that goal for them. It's my job, then, to help them bridge the gap between fast and well.
<Slide 5> "research" vs. "look stuff up"... This is an opportunity to help students understand that research is an in-depth, iterative process, not a single quick search.
Checkpoint: have students score AI, then Google, then databases according to how fast and easy they are to use. Record the rough average (I used whole and half numbers) on the whiteboard.
Reflect as a group: if the only goal were to get the upcoming assignment done quickly, it's easy to see why students might be tempted to use AI. However, submitting AI-generated work as their own is plagiarism, which is both unethical and a violation of academic integrity policies (this is especially crucial for students in dual credit classes, who could face college-level consequences). It also doesn't help them learn anything new, so they lose out.
Before moving to slide 6, you can also initiate a discussion about why citations are important (to give credit where it's due, to prove you're not plagiarizing, to give your reader the opportunity to further their own research).
<Slide 6> Walk through the information on the slide regarding citations and AI. I include a personal anecdote about how ChatGPT has repeatedly given me lists of books attributed to the wrong authors when I ask for things to put in a themed library display, but you could also refer to the "Summer Reading List" that was published in a number of newspapers in May 2025 for a similar example.
Checkpoint: have students score AI with regard to citations.
<Slide 7> At our school, we encourage students to use Zbib.org to generate citations and edit them as needed. Our students are familiar with this already, so this doesn't require much additional explanation.
Checkpoint: have students score Google with regard to citations.
<Slide 8> Our students are already familiar with the fact that our databases have built-in citation tools, so we don't spend much time on this point either.
Checkpoint: have students score databases with regard to citations.
<Slide 9> Introduce CRIT -- an acronym to help students think CRIT-ically about the information and media they encounter (I usually pause for a pity laugh). (For more details, see the CRIT post.)
<Slide 10> Define Credibility, then walk through each piece of info on the slide. My usual examples/notes:
- On social media, how often do you encounter someone telling you how you should do something or what you should believe about a given topic? How often do you leave the app you're in to investigate whether that content creator has the education or experience to have expertise about what they're telling you? ... I've been a school librarian for 10 years, and I have a Master's in Library & Information Science. These are the experience and education that make me a credible source to talk to students about information literacy & research. I am not a credible source on other topics, like ice fishing. That's why it's important to know a little bit about the author/creator's background.
- Why would it matter in terms of credibility whether something has been reviewed by someone other than the creator/author?
- Citing sources or explaining methodology is how people justify the claims they're making, just like you'll need to do in your paper.
- Cross-checking claims, especially those that might have any political implications, is crucial. Try to look up the same information or event in sources that are known to have opposite biases. Typically the things that appear in both reports, like the middle of a Venn Diagram, are credible.
<Slide 11> Some of this was covered in the AI/citations slide, but it bears repeating. The most important thing, though, is point #2 (that's why it has 3 citations). Currently, ALL generative AI tools -- even the ones designed for research, even the pro/premium models -- have been found to hallucinate (provide false or made-up information) anywhere between 10% and 50% of the time. That means for some AI tools, it's a literal coin toss whether they're providing you with credible information or not. WHAT??
Checkpoint: have students score AI for credibility.
<Slide 12> Here, I like to begin the Google:YouTube/TikTok :: Databases:Streaming Services analogy (shout out to Dr. Kristen Mattson for the idea -- I still think this is one of the most relatable ways to help students understand what a database is). When students use Google (or any other search engine), it's imperative that they run through that Credibility checklist from slide 10.
Checkpoint: have students score Google for credibility.
<Slide 13> This is where the analogy continues. Database providers have quality control gatekeepers who check credibility so students don't have to.
Checkpoint: have students score databases for credibility (you can see where this is going).
<Slide 14> First, define relevance. Then I explain that the part in purple is pretty automatic: if I do an ineffective search for "train" when I want information on training a puppy, I'm going to scroll past results about bullet trains in Japan without even thinking about it, because they have nothing to do with my topic. Next, reading through a source and determining which piece of information is the most relevant evidence to support a claim is something they'll work on with their classroom teachers (and likely already have). My angle is the algorithms operating behind the scenes that serve up results in all of these tools.
<Slide 15> Background: In the spring of 2025, I created this AI lesson video; this slide is an evolution of that.
AI is programmed to make it feel like we're having a conversation. But just as I can have an in-person conversation with someone and have a misunderstanding, AI can also misunderstand and, by extension, give a response that isn't relevant to the request. I used the prompts shown in Canva's AI image generator.
Starting with the bottom series of images: I asked for a hamburger with no cheese -- there's no cheese on any of these, so that's accurate -- and pickles (two images clearly have pickles; two are a little harder to discern). And all of the images include things I didn't ask for, like lettuce and tomatoes, because the AI was making assumptions about what I needed.
So I tried another prompt, and that's where things went really sideways. There's no hamburger patty in any of those 4 images, but my intention when I used that prompt was to get a hamburger patty with just pickles -- no bun, no tomatoes, nothing else. Rather than giving me even a single fully relevant result, the AI generated images that were tangential at best. We can see that because we know what hamburgers and pickles look like, but imagine if we were using AI to write a report about a topic we're not very familiar with. Would we be able to tell, in a wall of text, that the hamburger patty is missing?
Checkpoint: have students score AI on relevance.
Prior to moving on to slide 16, I ask my students how long they've had their Google accounts. Our school uses Google for Education, so most of them have had their accounts for close to a decade now. I've had my work account for just as long.
<Slide 16> Every time we Google something while logged in to those accounts, Google collects data on every result we click and every result we scroll past, and uses it to give us increasingly personalized search results. Sometimes this is great -- the same kind of data collection is what populates our social media feeds, so it means we don't have to spend a ton of time searching for content we're interested in. But sometimes, like when you need information or perspectives outside your filter bubble, it becomes an issue.
The screenshot included is from my work computer -- both windows open on the same device at the same time. On the right, I was logged in to my work account in Chrome; on the left, I was in Firefox and not logged in to anything. On the Google News homepage, most of the information/sources are the same, with one exception: the first additional source. To get outside this bubble and find other sources/perspectives, I recommend doing the same search in various search engines, or logged in and logged out, or alongside a friend with a different worldview, and comparing the results.
Checkpoint: have students score Google on relevance.
<Slide 17> Basically just walk through the points on the slide, emphasizing that databases don't collect data for personalized results. Students will get out what they put in, so it's important to know how to use search strategies and limiters effectively.
Checkpoint: have students score databases on relevance.
<Slide 18> What are the possible intentions of media/information? Some of my students can come up with Persuade, Inform, and Entertain (PIE), and I expand on those based on a lesson from the News Literacy Project's Checkology virtual classroom. We also talk about Selling (which is a form of Persuasion), Provoking (an extreme form of Persuasion), and Documenting (the step before Informing; Informing provides context to Documentation).
Why does it matter? Well, if you have to write a history report about the sinking of the Titanic, are you going to use the 1997 blockbuster film starring Leonardo DiCaprio and Kate Winslet as a historical source? No? Why not? (It's made to entertain and sell tickets, not to inform.) Or if you're researching the health effects of sugary beverages and you find a study that says soda has health benefits... and then you see that the study was sponsored by Pepsi, how does that change your opinion? It makes the study less credible, because their intention is to sell you things, right?
So what do you think the main intention of the AI companies is?
<Slide 19> I think the AI companies' goal is profit. Most of them would like you to buy the pro or premium version, but even if you don't, they (like social media companies) want you to keep using their product, because every interaction with AI gives them data and helps train the tool.
Also mentioned in the OpenAI report cited on this slide: the desire to keep users interacting with the AI tool may be contributing to the frequency of hallucinations. As the report explains, "language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty" (Why Language Models Hallucinate).
Checkpoint: have students score AI for how useful the intention of AI is in performing quality research.
<Slide 20> Very similar to AI companies now, although Google's original mission was "to organise the world’s information and make it universally accessible and useful" (Google had outgrown its 14-year-old mission statement).
<Slide 21> Furthermore, when students select an open-web resource from their Google results, they'll have to investigate the Intention behind it.
Checkpoint: have students score Google for how useful its intention is in performing quality research.
<Slide 22> The Intention of the database providers is pretty self-explanatory.
Checkpoint: have students score databases for how useful their intention is in performing quality research.
<Slide 23> Calling this element "Timeliness" is a little clunky, but it's still important, and this was the best thing we could think of to make the acronym work. We need to make sure that the resources we're using are timely based on our information need. If we're researching history or literature, we'll want primary sources, which are usually a bit older. If we're researching something related to science, technology, politics, sports scores, etc., then we need the most updated information we can find.
<Slide 24> How do you determine the publication date of the answers you get from AI? How easy is it to get older results from Google if that's what you need? Databases have publication date limiters that make it easy to dial in to the exact time frame that is appropriate for your topic.
Checkpoint: have students score AI, then Google, then databases for ease of locating timely resources.
<Slide 25> I've had a student volunteer in each class come up and total the scores for me. In this example, AI scored significantly lower than in a few other classes, but so far, databases have always come out on top!
<Slide 26> We then talk about how to use databases efficiently:
<Slide 27> How important search strategies are, and how to find potential synonyms or related terms for their keywords (see the sample search below).
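As a quick illustration (this topic and these terms are just an example I'm making up here, and exact syntax varies by database), combining synonyms with OR and separate concepts with AND might look like:
("sugary drinks" OR soda OR "soft drinks") AND (health OR obesity OR "weight gain")
Quotation marks keep multi-word phrases together, OR broadens the search to catch synonyms, and AND narrows it to results that address both concepts.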
<Slide 28> Similarly, they need to know what kinds of tools are available in every database and have an idea of how to find them so they can become more proficient database users. On this slide I included screenshots of the tools from the databases their teachers and I recommended for this particular project.
<Slide 29> Finally, we use slide 29 to transition to our Canvas course, where we house our database access links, and discuss how they're organized by type, which one to use for what kind of information, etc.
PHEW! This one is a lot, but I'm pretty happy with it for now. We'll see how I have to change it again in the future as the tools continue to change....
