Student Privacy in the Age of ChatGPT with Fonz Mendoza

đŸ”„ What does it take to become an education content creator?

đŸ”„ What do education leaders need to know about privacy issues in AI?

đŸ”„ What roles will AR and VR technologies play in the future of learning?

Join me for this conversation with Texas educator FONZ MENDOZA as he shares his insights.

About This Guest

Fonz is a Professional Learning Specialist with expertise in educational technology and AI in education. He hosts the My EdTech Life podcast, where he interviews edtech startup founders, practitioners, and thought leaders. His current focus is on an AI in education initiative aimed at making technology more accessible and supportive for educators and students.

You can connect with Fonz @MyEdTechLife on X, Instagram, and YouTube. You can also visit his website and connect with more of his work at https://www.myedtech.life/.

Tune in for my regular Teachers on Fire interviews, airing LIVE on YouTube every Saturday morning at 8:00am Pacific and 11:00am Eastern! Join the conversation and add your comments to the broadcast.

In This Conversation

0:00:00 – Welcome to Teachers on Fire!

0:28 – Who is Alfonso Mendoza?

1:53 – A story of adversity: the new demands of the pandemic

5:28 – An update on the doctoral writing process

11:55 – What is the mission of My EdTech Life?

18:55 – The origin story of My EdTech Life: beginning, lessons, and wins

28:00 – Should education content creators use separate social media accounts?

29:53 – What are the privacy and security issues related to students and generative AI?

36:48 – What are the worst-case scenarios for students using generative AI tools?

48:20 – A universal message to educators regarding AI tools in 2024

54:26 – What is the future of AR and VR tools in K-12 education?

59:46 – Other passions for Fonz: streaming, podcasting, and content creation

1:00:50 – A daily personal habit that keeps Fonz on fire: prayer

1:02:01 – An edtech tool pick: Kapwing

1:03:30 – Book shoutouts: Beyond the Bulletin Board, The Promises and Perils of AI in Education

1:05:21 – People to follow on X: Jorge Valenzuela and Zac Bauermaster

1:06:50 – Future guest recommendations from Fonz: Ken Shelton, Dee Lanier

1:08:27 – What Fonz is streaming: the Great British Bake-Off

1:09:27 – Where to connect with Fonz and My EdTech Life

Connect with Me

On X @TeachersOnFire (https://X.com/TeachersOnFire)

On Facebook @TeachersOnFire (https://www.facebook.com/TeachersOnFire/)

On YouTube @Teachers On Fire (https://www.youtube.com/@teachersonfire)

On LinkedIn https://www.linkedin.com/in/timwcavey/

Song Track Credits

Tropic Fuse by French Fuse

GO! by Neffex

All songs retrieved from the YouTube Audio Library at https://www.youtube.com/audiolibrary/.

AI Issues in K-12 Education Today: A Conversation with Tim Belmont

đŸ”„ What are the biggest wins for teachers that generative AI can provide?

đŸ”„ How can we help students avoid plagiarism while supporting the creative process?

đŸ”„ Is it possible for ChatGPT to know too much about us?

Join me in conversation with New Jersey educator Tim Belmont as we dig into these and other critical questions.

About This Guest

Tim Belmont is a high school technology specialist and Language Arts teacher who has presented at many of the largest education technology conferences. In the classroom, he elevates student voices through technology-integrated English activities and implements podcasting as a modern public speaking medium.

You can follow Tim on LinkedIn, on X @tbelmontedu, and at his website, https://www.timbelmont.com.

In This Conversation

1:44 – How the challenges of COVID pushed Tim into new professional growth

4:16 – What are the concerns around BIAS and MISINFORMATION in generative AI?

7:26 – How students can VERIFY information received from generative AI tools

13:00 – ChatGPT-checkers are NOT reliable

15:53 – What are the PRIVACY and SECURITY issues related to generative AI?

22:04 – What are the biggest WINS for teachers that AI tools offer?

25:03 – Our SWOT Analysis for generative AI tools in schools

36:55 – Tim’s learning outside of education: BAKING bread

38:00 – A PRODUCTIVITY HACK: simple stretching routine in the mornings

38:50 – Someone to follow on Education X: Katie Fielding

39:34 – An EDTECH tool pick: Kami

40:25 – A BOOK recommendation: Make Time for Creativity

41:15 – A future GUEST suggestion: Dee Lanier

42:09 – What Tim’s streaming: GAME CHANGER

43:14 – How to CONNECT with Tim Belmont

3 Ways to Build Demanding Writing Tasks for Students in the Age of AI

To produce reliable evidence of student learning, we need to evolve.

Artificial intelligence and ChatGPT have disrupted the state of K-12 education.

Perhaps disrupted is an understatement.

Let’s be real. These tools have dropped an atomic bomb on teaching and learning norms around the world.

Teachers of middle and high school students are suddenly asking how on earth they can create meaningful writing tasks that cannot be completed by AI tools.

“Summarize Three Important Moments in the Career of George Washington” is no longer suitable, although one could argue that it hasn’t been suitable for quite some time.

Even next-level thinking tasks like “Compare and contrast the political ideologies of Donald Trump and Justin Trudeau” are now well within the reach of AI capabilities.

The same goes for “Summarize the three most common ethical objections to stem cell research” or “Write a Shakespearean sonnet about a current political movement.”

All of these prompts are fairly easy for AI tools to tackle in seconds.

So how do we push student writing and thinking in ways that ensure the significance of the work produced?

How can we elicit writing that can actually be considered reliable evidence of learning against curricular standards?

3 Ways to Build Demanding Writing Tasks for Students in the Age of AI

1. Personalize

The first approach I suggest we take with our writing tasks is personalization. Here’s what I mean.

Require students to establish authentic connections and personal positions with the text or concepts being considered. Whether it’s a political figure, a set of ideologies, an ethical issue in biology, or a creative work in English, elicit more I-statements, opinions, and connections with personal experiences or beliefs.

So instead of

“Describe the evolution of Ponyboy in The Outsiders, connecting changes in his character to important moments in the plot,”

we can take that to the next level with

“Describe the evolution of Ponyboy in The Outsiders, comparing key moments from his journey with your own story of personal development.”

Or instead of

“Compare and contrast the political ideologies of Donald Trump and Justin Trudeau,”

we can personalize that with

“Compare and contrast the political ideologies of Donald Trump and Justin Trudeau with your own views. Which ideologies do you support, and which do you oppose? Justify each of your positions.”

Sure, perhaps limited aspects of the latter are ChatGPT-able, but this kind of persistent personalization pulls students away from trite copy-and-paste moves. It requires learners to use I-messages and stake their claims to personal viewpoints.

And that requires critical thinking.

I-messages are everything here. We need to invite students to write in the first person as often as possible.

2. Localize

The second angle I suggest taking is localization. This is more challenging than personalization, but I think it has the potential to help. What we want to do here is to challenge AI tools like ChatGPT by building writing tasks that relate to specific local, micro environments.

I work in a large city, so it may be well within ChatGPT’s reach to speak about my city with authority. But as smart as they are, the AI clones have a much tougher time with smaller municipalities, regions, and neighborhoods.

Let’s start with something general and Google-able, like this: How do suburban growth and development affect raccoon populations?

Source: ChatGPT 3.5

No problem. Like I said, this is Google-able.

But can ChatGPT speak to raccoon population trends occurring in one specific municipality?

Source: ChatGPT 3.5

No, it can’t. It can’t find (or hasn’t yet scraped) the data — something it subtly sidesteps before launching into a boilerplate listicle of factors that affect raccoon population trends in suburban areas, generally speaking.

What’s my point here?

Simply that the more we localize the demands of our writing tasks, the less useful AI tools become, and the more our students will need to rely on primary research, investigative journalism, and good old-fashioned critical thinking.

“What do we do when ChatGPT doesn’t know the answer?”

I’m so glad you asked, young learner.

Let’s think about this.

Image Source: Canva stock library

3. Vocalize

Vocalization is the icing on the cake. We take our writing tasks to yet another level of quality and evidence of mastery by asking our students to vocalize their texts.

Present their works to the class.

Share them in small groups.

Read single paragraphs aloud in sharing circles.

Require students to engage with their texts and the texts of others in dynamic ways (think, pair, share around ideas or passages, for example).

Record portions or whole pieces (in audio or video format) of texts presented aloud to be shared with the broader learning community as podcasts, online learning portfolios, or on YouTube.

Yes, part of what we’re doing here is building in accountability: students who rip off entire essays from ChatGPT risk being exposed when they stumble over words, expressions, and core concepts from the very texts that they pretend to have written themselves.

But this isn’t a game of entrapment. That’s a loser’s game, and if that’s all we’re doing, the message we’re effectively sending is “You’re going to have to try harder and work smarter in order to avoid being caught.”

What we’re actually more interested in is leaning into one of the great principles of learning.

We’re seizing the moment to invite our students into higher order thinking and knowing.

What our students can discuss with confidence is what they deeply understand.

Final thoughts

In the age of AI, it can be tempting to feel like we’re on the defensive as educators.

It’s us against the machines. Students against academic integrity. Suddenly, we’re battling plagiarism and fabrication on a whole new level, and it can feel like we’re losing.

There’s a different mindset to take here.

ChatGPT and its allies have disrupted the world of learning, yes. But look what’s happening.

It’s forcing us to ask more from our learners.

More imagination.

More authentic voice.

More critical thinking.

More investigation and inquiry.

As we require students to personalize, localize, and vocalize their learning, the evidence of learning that we’re after takes clearer shape.

And that’s no deep fake.

How to Respond to AI-Powered Cheating in the Middle Years

Let’s think this through before throwing the book at middle school students.

Most teachers remember the conversation around plagiarism and academic dishonesty in their undergraduate programs in college or university.

The vibe was intense.

Try it, get caught, and you could suffer serious academic penalties.

You could fail your course. Be removed from your degree program. Get kicked out of school entirely.

This was heavy, heavy stuff. Still is.

And it’s fresh on the minds of most teachers when they enter their K-12 classrooms.

Academic dishonesty in the 2000s

I’ve taught in the middle years for over 20 years. When I started teaching in 2001, wifi wasn’t a thing.

That gives you a sense of how things have evolved in the years since.

I remember when the internet finally arrived in our computer lab via LAN connections and we started to see the first clumsy attempts at academic dishonesty. Students were learning — like all of us — about the power of copy and paste.

Ctrl+C, Ctrl+V.

Magic. Could writing actually become this easy?

All the text jumped from some wonky website right into that 8th grade Social Studies essay with a few flourishes of the mouse and a couple of keystrokes.

So simple. Just hit that print command and let the noisy beast of a bubble jet printer do its work.

Of course, students in 2010 hadn’t quite figured out that their copying and pasting was leaving obvious tell-tale signs.

Unusual font styles and sizes were giving them away. Even funnier, source URLs were sometimes left directly in the text of essays or appeared elsewhere on the page, especially if they dared to print their “essay” straight from another website.

Thoughtful conversations followed such missteps.

Academic dishonesty in the age of ChatGPT

Fast forward to 2023 and the explosion in AI that we’ve all witnessed this year. ChatGPT and its clones have disrupted the technology landscape and redefined possibilities for composition.

Suddenly, it’s easier than ever to generate large bodies of text and claim authorship. For some students in grades five through nine, let’s say, the thought must be incredibly tantalizing.

These learners are digital natives, yes, but they’re also building new digital literacy skills.

  • They’re still new to email and appropriate email communication.
  • They’re new to task lists and calendars and cloud drive organization.
  • They’re new to academic research and appropriate citation.
  • And they’re still learning to formulate positions and justify arguments in clear, coherent, compelling ways.

They haven’t been at any of it for long, but they’re fearless. They’re ready to play and experiment.

They’re ready to be serious and fun and industrious and goofy and persuasive and inappropriate all in one day.

So we should expect them to try some moves with AI writing tools.

How to respond when middle years students turn in work created (maybe) by artificial intelligence

You’ll notice that I keep mentioning middle years. That’s intentional.

When it comes to seniors in 12th grade, for example, I recognize that the stakes are higher.

Those learners should also have a little more perspective, a little more awareness, a little more responsibility to own when it comes to academic honesty and originality of thought.

But when it comes to students in the middle years, I’m thinking of kids between the ages of 10 and 15 who in many cases have not had computers at their desks for long.

In my context, students don’t move to 1:1 Chromebooks until sixth grade. Their use of computers and iPads before that is rare and intermittent.

As I mentioned, they’re still in the thick of digital literacy skill acquisition.

With that in mind, I think it’s possible to over-respond when it comes to instances of AI-powered cheating. Frankly, “cheating” may not even be the right term in a lot of cases.

When a 12-year-old uses an AI tool to produce (or heavily supplement) an academic piece and then claims the work as entirely their own, my reaction is NOT “Oh my God, how could this happen?”

Not at all. I fully expect it to happen.

I mean, wouldn’t we be naive not to?

No, I’m not scheduling a serious meeting with this student and the principal. I’m not contacting the child’s parents with a heavy-sounding email (not in the first instance, at least).

I’m not pursuing a heavy consequence, suspension, failure, or a zero on the assignment.

I may make colleagues aware of what has happened in a casual, helpful sense, but I’m not putting out an all-caps distress call.

Instead, I’m going to approach the situation as an act of curiosity and experimentation.

Instead of horror, I’m going to enjoy the conversation that follows.

This is not advocating for plagiarism

I was thinking through some of this stuff out loud on X.com when Barbara shared this reply.

If what you’re hearing is me “advocating for plagiarism,” I think you’re missing my point here.

What I’m calling for here is a bit of a change in approach when it comes to students who are 10–15 years of age.

We know these kids.

We know their developmental traits.

We know they are experimental and risk-embracing.

We know that, in many cases, they are experimenting with alter egos and dabbling in unsavory online activities.

These students lack the maturity, perspective, judgment, and experience of their older peers.

So what I’m calling for is not about ‘going soft’ or ‘letting cheating go.’ Not at all. In fact, while we’re talking about punitive measures, I’d be the first to say that chronic offenders require very different responses.

But when it comes to our first-time offenders, our experimenters, our ill-advised ChatGPTers, I’d suggest proceeding with calm and thoughtful care.

Instead of throwing the book (or the computer?) at these students or initiating large-scale investigations, let’s engage in thoughtful conversations.

Conversations that might sound like this:
  • “Hey, I like what you wrote here. Can you tell me about your writing process?”
  • “This is good stuff, my friend. Can you tell me a little more about your argument here in the third paragraph?”
  • “Great work on your persuasive essay. It looks like you may need to cite your sources, though. Do you think you can do that and then re-submit?”

These are gentle, open-ended questions that nudge and prod around the edges of your suspicions. They’re curious. They sound like learning partnership, not lead attorney for the prosecution.

They strike an entirely different posture than “Did you or did you not use ChatGPT for this?”

Assessment means to sit beside

Hey, it’s possible that our middle schooler in question may not be entirely honest about the role of AI in their writing process. They may offer a few lies to cover their tracks.

In the short term, that’s not such a huge deal. Keep your relationship with this student strong and move on. There will be plenty of other learning opportunities to come.

I find it a little puzzling when I hear teachers express their hell-bent commitment to prevent a student from “getting away with this.”

I mean, take a deep breath, my friend. Mikey hasn’t stolen money from your safe deposit box. It’s simply possible that not all of this writing was actually his.

Again, I’m not diminishing the seriousness of cheating. What I’m saying is that this is not the time to call in the cavalry. The sky is not falling in here.

What it actually IS time for is to do more sitting with this student. And by that I mean literally sitting with him.

Support him, encourage him, coach him through his writing process.

After all, what’s our goal for this student?

It’s to help him meet learning targets or curricular standards.

It’s to help him become a better writer and communicator.

It’s to help him learn.

May I humbly suggest that jumping to angry accusations, threatening a zero, or conducting large-scale investigations into whether or not he cheated on this essay is a lose-lose proposition.

Nobody’s winning here.

Instead, focus on more partnership. More presence. More coaching. More real-time observation.

Do that, and I think we’ll all get the results that we want — teacher and student.

It’s a brave new artificial world out there.

Let’s learn together.

Why the NYC Department of Education is Wrong on ChatGPT

A system-wide ban feels like fear instead of curiosity, defense over offense, convention over adaptation.

ChatGPT was released to the public on November 30, 2022. It’s an artificial intelligence bot, trained on an enormous pool of information, that can engage in natural conversations with users.

Within a week, the AI bot had acquired over one million users. And as K-12 schools began winding down for the calendar year, ChatGPT was making headlines around the world.

You’ve likely heard the buzz already, but in case you have yet to try it, ChatGPT is to Google what Google is to a set of encyclopedias.

Google is a master curator and locator of information, but ChatGPT has the ability to quickly aggregate and mobilize that information on a level the world has never seen.

If you haven’t seen ChatGPT at work, watch it perform these school-related tasks [9:48]:

  1. Design a lesson plan for an 8th grade civics class
  2. Compare the evolution of protagonists from two different novels
  3. Describe how the water cycle affects Vancouver, BC
  4. Calculate triangle side lengths using the Pythagorean Theorem
  5. Write a campaign speech for middle school president
  6. Suggest solutions for anxiety and loneliness
  7. Write a love poem for a special friend (and then make it spicier)
  8. Write a short story with specific character names

ChatGPT is just the latest manifestation of the growth in AI we’ve seen in recent years. And we know it’s only going to get better.

Enter the NYC Department of Education

Schools across North America were only a few bright days into the new year when the news came down from the NYC Department of Education, the largest school system in the United States: ChatGPT would be banned in all of its schools.

I can understand the fears and concerns about how this technology will impact K-12 education. I think we all can.

Like I said to my wife this week, this technology has permanently changed the way that I read and think about student writing. How can it not?

But I think a blanket ban is the wrong response.

Here’s why.

4 Reasons Why a System-Wide Ban on ChatGPT is the Wrong Call

Let’s start at the most basic, practical level.

1. A ban on a particular website is practically unenforceable.

NYC can only blacklist websites on school wifi networks, so students will still be able to access ChatGPT when they’re at home, off-campus, or using any device with access to a data network. Since students can obviously still use ChatGPT for homework, a school wifi ban doesn’t mean too much.

One has to wonder if a ban is actually counter-productive to its own aims, simply raising the profile of the forbidden fruit in question.

2. Whack-a-mole isn’t sustainable.

ChatGPT has certainly grabbed the headlines, but there are plenty of other similar tools out there. And more are appearing all the time.

Quillbot.com is an AI paraphrasing tool that appears to render classic plagiarism checkers useless. TinyWow.com offers a whole suite of free AI writing tools.

Premium (paid) AI writing services such as Jasper.ai, Shakespeare.ai, and Rytr.me all claim to be able to deliver spectacular results to marketers.

The point: if the district strategy is to ban these tools as they appear, there will be another new tool to ban every month. That doesn’t feel like a strategy that will age well over the years to come.

3. Like wifi, Google, and YouTube before it, ChatGPT is just another step forward for learning tools.

It wasn’t long ago that schools were banning YouTube on their wifi networks rather than leveraging the world’s largest library of video resources to support learning. They opted for the safety of zero exposure rather than do the work of teaching best practices and applying skills of discrimination.

Even before the arrival of YouTube, many schools wrestled with the question of having a wifi network at all. As silly as these questions seem today, they were important conversations at the time.

Of course, Google itself has become a much smarter search engine over the years, prone to serving up large-font answers to closed questions (“How far is the sun from Earth?”) before listing any search results.

Because of this Google Effect, schools and educators have been moving away for some time now from a focus on strictly “Googleable” information to a more nuanced approach to critical thinking.

For example, instead of asking students to memorize the names of all 45 presidents (content which is very Googleable), we ask them to critique the legacies of particular presidents based on currently relevant policy issues.

Content is still important for students to learn. We know that a mass of knowledge forms a necessary foundation in order for students to learn more, make distinctions, draw conclusions, and establish new theories about their world.

But the power of Google has put downward pressure on the importance of content memorization — of that, there can be little doubt.

Like YouTube and Google before it, ChatGPT is just the latest application that will change the way we think about teaching, learning, and assessment.

These powerful technologies are here to stay. Let’s embrace them.

Photo by Eliott Reyna on Unsplash

4. The biggest reason: a ban sends all the wrong signals about learning and mindset.

In December of 2022, ChatGPT forced the world to reckon with an AI tool that could complete complex tasks in seconds. There’s no doubt that things will never be quite the same.

Who will be the most excited to play with this tool? Our young learners.

Students of all ages will share our child-like fascination with the possibilities. And well they should: this is clearly a technology that will only grow in significance throughout their lifetimes.

Sadly, I fear that a school ban sends all the wrong signals about technology and the nature of learning. It feels like fear instead of curiosity, defense over offense, convention over adaptation.

It looks like head-in-the-sand, I-hope-this-goes-away kind of thinking. And that’s not the approach of a lifelong learner.

I’m not suggesting that every teacher should give their students unfettered access to these tools. There will be times to close computers and show evidence of learning and critical thinking using pencils and paper, just as there are in classrooms today.

But there should be other times to play. To experiment. To learn together — teachers and students, sitting side by side, engaging, thinking, and talking about what it will look like to leverage ChatGPT and similar tools in constructive, powerful ways.

Closing thoughts

Whenever I come up against a difficult decision in our schools, I run it through this tried-and-true filter:

  1. What is best for our kids?
  2. What is best for learning?

Banning the latest technology from our schools just doesn’t feel like a great answer to either of those questions.

Listen, there’s no doubt that the path ahead will be challenging, and these tools will require new approaches.

But growth doesn’t happen in the comfort zone. Let’s lean into uncomfortable spaces and do what we do best: learn.

Together, let’s shape the nature of thinking and work in 2023.