When we think about AI, most of us picture tools like ChatGPT or Gemini. However, the reality is that AI is already built into the tools we use every day, even something as familiar as a web search. And if AI is everywhere, then so are its mistakes.

A Surprising Answer from Google

Recently, I was talking with my colleague Paulina, a Senior Architect at Maplesoft who also manages the team that creates all the Maple Learn content. We were discussing Google’s AI Overview, and I said I liked it because it usually seemed accurate. She disagreed, saying she’d found plenty of errors. Naturally, I asked for an example.

Her suggestion was simple: search “is x + y a polynomial.”

So I did. Here’s what Google’s AI Overview told me:

“No, x + y is not a polynomial”

My reaction? HUH?!

The explanation correctly defined what a polynomial is but still failed to recognize that x and y each carry an implicit exponent of 1. The logic was there, but the conclusion was wrong.
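If you'd like to verify the correct answer yourself, here's a quick sketch using Python with the SymPy library (assuming SymPy is installed; the variable names are just for illustration):

```python
from sympy import symbols, total_degree

# Declare x and y as symbolic variables
x, y = symbols("x y")

expr = x + y

# SymPy treats x + y as a polynomial: each term is x**1 or y**1,
# with the implicit exponent of 1 that the AI Overview missed
print(expr.is_polynomial(x, y))  # True

# Its total degree is 1, so it's a degree-1 (linear) polynomial
print(total_degree(expr))  # 1
```

A computer algebra system applies the definition consistently, which is exactly the habit we want students to build when an AI summary sounds plausible but reaches the wrong conclusion.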

Using It in the Classroom

This makes a great classroom example because it’s quick and engaging. Ask your students first whether x + y is a polynomial, then show them the AI result. The surprise sparks discussion: why does the explanation sound right but end with the wrong conclusion?

In just a few minutes, you’ve not only reviewed a basic concept but also reinforced the habit of questioning answers even when they look authoritative.

Why This Matters

As I said in a previous post, the real issue isn’t the math slip; it’s the habit of accepting answers without questioning them. It’s our responsibility to teach students how to use these tools responsibly, especially as AI use continues to grow. Critical thinking has always mattered, and now it’s essential.

 
