What does it take to become an education content creator?
What do education leaders need to know about privacy issues in AI?
What roles will AR and VR technologies play in the future of learning?
Join me for this conversation with Texas educator FONZ MENDOZA as he shares his insights.
About This Guest
Fonz is a Professional Learning Specialist with expertise in educational technology and AI in education. He hosts the My EdTech Life podcast, where he interviews edtech startup founders, practitioners, and thought leaders. His current focus is on an AI in education initiative aimed at making technology more accessible and supportive for educators and students.
You can connect with Fonz @MyEdTechLife on X, Instagram, and YouTube. You can also visit his website and connect with more of his work at https://www.myedtech.life/.
Tune in for my regular Teachers on Fire interviews, airing LIVE on YouTube every Saturday morning at 8:00am Pacific and 11:00am Eastern! Join the conversation and add your comments to the broadcast.
What are the biggest wins for teachers that generative AI can provide?
How can we help students avoid plagiarism while supporting the creative process?
Is it possible for ChatGPT to know too much about us?
Join me in conversation with New Jersey educator Tim Belmont as we dig into these and other critical questions.
About This Guest
Tim Belmont is a high school technology specialist and Language Arts teacher who has presented at many of the largest education technology conferences. In the classroom, he elevates student voices through technology-integrated English activities and implements podcasting as a modern public speaking medium.
To produce reliable evidence of student learning, we need to evolve.
Artificial intelligence and ChatGPT have disrupted the state of K-12 education.
Perhaps disrupted is an understatement.
Let's be real. These tools have dropped an atomic bomb on teaching and learning norms around the world.
Teachers of middle and high school students are suddenly wondering how on earth they can create meaningful writing tasks that AI tools cannot simply complete.
"Summarize Three Important Moments in the Career of George Washington" is no longer suitable, although one could argue that it hasn't been suitable for quite some time.
Even next-level thinking tasks like "Compare and contrast the political ideologies of Donald Trump and Justin Trudeau" are now well within the reach of AI capabilities.
The same goes for "Summarize the three most common ethical objections to stem cell research" or "Write a Shakespearean sonnet about a current political movement."
All of these prompts are fairly easy for AI tools to tackle in seconds.
So how do we push student writing and thinking in ways that assure the significance of the produced work?
How can we elicit writing that can actually be considered reliable evidence of learning against curricular standards?
3 Ways to Build Demanding Writing Tasks for Students in the Age of AI
1. Personalize
The first approach I suggest we take with our writing tasks is personalization. Here's what I mean.
Require students to establish authentic connections and personal positions with the text or concepts being considered. Whether it's a political figure, a set of ideologies, an ethical issue in biology, or a creative work in English, elicit more I-statements, opinions, and connections with personal experiences or beliefs.
So instead of …
"Describe the evolution of Ponyboy in The Outsiders, connecting changes in his character to important moments in the plot,"
we can take that to the next level with …
"Describe the evolution of Ponyboy in The Outsiders, comparing key moments from his journey with your own story of personal development."
Or instead of …
"Compare and contrast the political ideologies of Donald Trump and Justin Trudeau,"
we can personalize that with …
"Compare and contrast the political ideologies of Donald Trump and Justin Trudeau with your own views. Which ideologies do you support, and which do you oppose? Justify each of your positions."
Sure, perhaps limited aspects of the latter are ChatGPT-able, but this kind of persistent personalization pulls students away from trite copy-and-paste moves. It requires learners to use I-messages and stake their claims to personal viewpoints.
And that requires critical thinking.
I-messages are everything here. We need to invite students to write in the first person as often as possible.
2. Localize
The second angle I suggest taking is localization. This is more challenging than personalization, but I think it has the potential to help. What we want to do here is to challenge AI tools like ChatGPT by building writing tasks that relate to specific local micro-environments.
I work in a large city, so it may be well within ChatGPT's reach to speak about my city with authority. But as smart as they are, the AI clones have a much tougher time with smaller municipalities, regions, and neighborhoods.
Let's start with something general and Google-able, like this: How do suburban growth and development affect raccoon populations?
Source: ChatGPT 3.5
No problem. Like I said, this is Google-able.
But can ChatGPT speak to raccoon population trends occurring in one specific municipality?
Source: ChatGPT 3.5
No, it can't. It can't find (or hasn't yet scraped) the data, something it subtly sidesteps before launching into a boilerplate listicle of factors that affect raccoon population trends in suburban areas, generally speaking.
What's my point here?
Simply that the more we localize the demands of our writing tasks, the less useful AI tools become, and the more our students will need to rely on primary research, investigative journalism, and good old-fashioned critical thinking.
"What do we do when ChatGPT doesn't know the answer?"
I'm so glad you asked, young learner.
Let's think about this.
3. Vocalize
Vocalization is the icing on the cake. We take our writing tasks to yet another level of quality and evidence of mastery by asking our students to vocalize their texts.
Present their works to the class.
Share them in small groups.
Read single paragraphs aloud in sharing circles.
Require students to engage with their texts and the texts of others in dynamic ways (think, pair, share around ideas or passages, for example).
Record portions or whole pieces of texts presented aloud (in audio or video format) to be shared with the broader learning community as podcasts, online learning portfolios, or on YouTube.
Yes, part of what we're doing here is building in accountability: students who rip off entire essays from ChatGPT risk being exposed when they stumble over words, expressions, and core concepts from the very texts that they pretend to have written themselves.
But this isn't a game of entrapment. That's a loser's game, and if that's all we're doing, the message we're effectively sending is "You're going to have to try harder and work smarter in order to avoid being caught."
What we're actually more interested in is leaning into one of the great principles of learning.
We're seizing the moment to invite our students into higher order thinking and knowing.
What our students can discuss with confidence is what they deeply understand.
Final thoughts
In the age of AI, it can be tempting to feel like we're on the defensive as educators.
It's us against the machines. Students against academic integrity. Suddenly, we're battling plagiarism and fabrication on a whole new level, and it can feel like we're losing.
There's a different mindset to take here.
ChatGPT and its allies have disrupted the world of learning, yes. But look what's happening.
It's forcing us to ask more from our learners.
More imagination.
More authentic voice.
More critical thinking.
More investigation and inquiry.
As we require students to personalize, localize, and vocalize their learning, the evidence of learning that we're after takes clearer shape.
Let's think this through before throwing the book at middle school students.
Most teachers remember the conversation around plagiarism and academic dishonesty in their undergraduate programs in college or university.
The vibe was intense.
Try it, get caught, and you could suffer serious academic penalties.
You could fail your course. Be removed from your degree program. Get kicked out of school entirely.
This was heavy, heavy stuff. Still is.
And it's fresh on the minds of most teachers when they enter their K-12 classrooms.
Academic dishonesty in the 2000s
I've taught in the middle years for over 20 years. When I started teaching in 2001, wifi wasn't a thing.
That gives you a sense of how things have evolved in the years since.
I remember when the internet finally arrived in our computer lab via LAN connections and we started to see the first clumsy attempts at academic dishonesty. Students were learning, like all of us, about the power of copy and paste.
Ctrl+C, Ctrl+V.
Magic. Could writing actually become this easy?
All the text jumped from some wonky website right into that 8th grade Social Studies essay with a few flourishes of the mouse and a couple of keystrokes.
So simple. Just hit that print command and let the noisy beast of a bubble jet printer do its work.
Of course, students in 2010 hadn't quite figured out that their copying and pasting was leaving obvious tell-tale signs.
Unusual font styles and sizes were giving them away. Even funnier, source URLs were sometimes left directly in the text of essays or appeared elsewhere on the page, especially if they dared to print their "essay" straight from another website.
Thoughtful conversations followed such missteps.
Academic dishonesty in the age of ChatGPT
Fast forward to 2023 and the explosion in AI that we've all witnessed this year. ChatGPT and its clones have disrupted the technology landscape and redefined possibilities for composition.
Suddenly, it's easier than ever to generate large bodies of text and claim authorship. For some students in grades five through nine, let's say, the thought must be incredibly tantalizing.
These learners are digital natives, yes, but they're also building new digital literacy skills.
They're still new to email and appropriate email communication.
They're new to task lists and calendars and cloud drive organization.
They're new to academic research and appropriate citation.
And they're still learning to formulate positions and justify arguments in clear, coherent, compelling ways.
They haven't been at any of it for long, but they're fearless. They're ready to play and experiment.
They're ready to be serious and fun and industrious and goofy and persuasive and inappropriate all in one day.
So we should expect them to try some moves with AI writing tools.
How to respond when middle years students turn in work created (maybe) by artificial intelligence
You'll notice that I keep mentioning middle years. That's intentional.
When it comes to seniors in 12th grade, for example, I recognize that the stakes are higher.
Those learners should also have a little more perspective, a little more awareness, and a little more ownership of academic honesty and originality of thought.
But when it comes to students in the middle years, I'm thinking of kids between the ages of 10 and 15 who in many cases have not had computers at their desks for long.
In my context, students donât move to 1:1 Chromebooks until sixth grade. Their use of computers and iPads before that is rare and intermittent.
As I mentioned, theyâre still in the thick of digital literacy skill acquisition.
With that in mind, I think it's possible to over-respond when it comes to instances of AI-powered cheating. Frankly, "cheating" may not even be the right term in a lot of cases.
When a 12-year-old uses an AI tool to produce (or heavily supplement) an academic piece and then claims the work as entirely their own, my reaction is NOT "Oh my God, how could this happen?"
Not at all. I fully expect it to happen.
I mean, wouldn't we be naive not to?
No, I'm not scheduling a serious meeting with this student and the principal. I'm not contacting the child's parents with a heavy-sounding email (not in the first instance, at least).
I'm not pursuing a heavy consequence, a suspension, a failure, or a zero on the assignment.
I may make colleagues aware of what has happened in a casual, helpful sense, but I'm not putting out an all-caps distress call.
Instead, I'm going to approach the situation as an act of curiosity and experimentation.
Instead of horror, I'm going to enjoy the conversation that follows.
This is not advocating for plagiarism
I was thinking through some of this stuff out loud on X.com when Barbara shared this reply.
If what you're hearing is me "advocating for plagiarism," I think you're missing my point here.
What I'm calling for here is a bit of a change in approach when it comes to students who are 10 to 15 years of age.
We know these kids.
We know their developmental traits.
We know they are experimental and risk-embracing.
We know they are playing with alter-egos and unsavory online activities, in many cases.
These students lack the maturity, perspective, judgment, and experience of their older peers.
So what I'm calling for is not about "going soft" or "letting cheating go." Not at all. In fact, while we're talking about punitive measures, I'd be the first to say that chronic offenders require very different responses.
But when it comes to our first-time offenders, our experimenters, our ill-advised ChatGPTers, I'd suggest proceeding with calm and thoughtful care.
Instead of throwing the book (or the computer?) at these students or initiating large-scale investigations, let's engage in thoughtful conversations.
Conversations that might sound like …
"Hey, I like what you wrote here. Can you tell me about your writing process?"
"This is good stuff, my friend. Can you tell me a little more about your argument here in the third paragraph?"
"Great work on your persuasive essay. It looks like you may need to cite your sources, though. Do you think you can do that and then re-submit?"
These are gentle, open-ended questions that nudge and prod around the edges of your suspicions. They're curious. They sound like a learning partnership, not the lead attorney for the prosecution.
They strike an entirely different posture than "Did you or did you not use ChatGPT for this?"
Assessment means to sit beside
Hey, it's possible that our middle schooler in question may not be entirely honest about the role of AI in their writing process. They may offer a few lies to cover their tracks.
In the short term, that's not such a huge deal. Keep your relationship with this student strong and move on. There will be plenty of other learning opportunities to come.
I find it a little puzzling when I hear teachers express their hell-bent commitment to prevent a student from "getting away with this."
I mean, take a deep breath, my friend. Mikey hasn't stolen money from your safe deposit box. It's simply possible that not all of this writing was actually his.
Again, I'm not diminishing the seriousness of cheating. What I'm saying is that this is not the time to call in the cavalry. The sky is not falling in here.
What it actually IS time for is to do more sitting with this student. And by that I mean literally sitting with him.
Support him, encourage him, coach him through his writing process.
After all, what's our goal for this student?
It's to help him meet learning targets or curricular standards.
It's to help him become a better writer and communicator.
It's to help him learn.
May I humbly suggest that jumping to angry accusations, threatening a zero, or conducting large-scale investigations into whether or not he cheated on this essay has the potential to be a lose-lose situation.
Nobody's winning here.
Instead, focus on more partnership. More presence. More coaching. More real-time observation.
Do that, and I think we'll all get the results that we want, teacher and student alike.
A system-wide ban feels like fear instead of curiosity, defense over offense, convention over adaptation.
ChatGPT was released on November 30, 2022. It is an artificial intelligence bot that was trained on an enormous pool of information to engage in simple conversations with users.
Within a week, the AI bot had acquired over one million users. And as K-12 schools began winding down for the calendar year, ChatGPT was making headlines around the world.
You've likely heard the buzz already, but in case you have yet to try it, ChatGPT is to Google what Google is to a set of encyclopedias.
Google is a master curator and locator of information, but ChatGPT has the ability to quickly aggregate and mobilize that information on a level the world has never seen.
If you haven't seen ChatGPT at work, watch it perform these school-related tasks [9:48]:
Design a lesson plan for an 8th grade civics class
Compare the evolution of protagonists from two different novels
Describe how the water cycle affects Vancouver, BC
Calculate triangle side lengths using the Pythagorean Theorem
Write a campaign speech for middle school president
Suggest solutions for anxiety and loneliness
Write a love poem for a special friend (and then make it spicier)
Write a short story with specific character names
ChatGPT is just the latest manifestation of the growth in AI weâve seen in recent years. And we know itâs only going to get better.
Enter the NYC Department of Education
Schools across North America were only a few bright days into the new year when the news came down from the NYC Department of Education, the largest school system in the United States: ChatGPT would be banned in all of their schools.
I can understand the fears and concerns about how this technology will impact K-12 education. I think we all can.
Like I said to my wife this week, this technology has permanently changed the way that I read and think about student writing. How can it not?
But I think a blanket ban is the wrong response.
Hereâs why.
4 Reasons Why a System-Wide Ban on ChatGPT is the Wrong Call
Letâs start at the most basic, practical level.
1. A ban on a particular website is practically impossible.
NYC can only blacklist websites on school wifi networks, so students will still be able to access ChatGPT when they're at home, off-campus, or using any device with access to a data network. Since students can obviously still use ChatGPT for homework, a school wifi ban doesn't mean too much.
One has to wonder if a ban is actually counter-productive to its own aims, simply raising the profile of the forbidden fruit in question.
2. Whack-a-mole isnât sustainable.
ChatGPT has certainly grabbed the headlines, but there are plenty of other similar tools out there. And more are appearing all the time.
Quillbot.com is an AI paraphrasing tool that appears to render classic plagiarism checkers useless. TinyWow.com offers a whole suite of free AI writing tools.
Premium (paid) AI writing services such as Jasper.ai, Shakespeare.ai, and Rytr.me all claim to be able to deliver spectacular results to marketers.
The point: if the district strategy is to ban these tools as they appear, there will be another new tool to ban every month. That doesn't feel like a strategy that will age well over the years to come.
3. Like wifi, Google, and YouTube before it, ChatGPT is just another step forward for learning tools.
It wasn't long ago that schools were banning YouTube on their wifi networks rather than leveraging the world's largest library of video resources to support learning. They opted for the safety of zero exposure rather than doing the work of teaching best practices and applying skills of discrimination.
Even before the arrival of YouTube, many schools wrestled with the question of having a wifi network at all. As silly as these questions seem today, they were important conversations at the time.
Of course, Google itself has become a much smarter search engine over the years, prone to serving up large-font answers to closed questions ("How far is the sun from Earth?") before listing any search results.
Because of this Google Effect, schools and educators have for some time now been moving away from a focus on strictly "Googleable" information toward a more nuanced approach to critical thinking.
For example, instead of asking students to memorize the names of all 45 presidents (content which is very Googleable), we ask them to critique the legacies of particular presidents based on currently relevant policy issues.
Content is still important for students to learn. We know that a mass of knowledge forms a necessary foundation in order for students to learn more, make distinctions, draw conclusions, and establish new theories about their world.
But the power of Google has put downward pressure on the importance of content memorization; of that, there can be little doubt.
Like YouTube and Google before it, ChatGPT is just the latest application that will change the way we think about teaching, learning, and assessment.
These powerful technologies are here to stay. Letâs embrace them.
4. The biggest reason: a ban sends all the wrong signals about learning and mindset.
In December of 2022, ChatGPT forced the world to reckon with an AI tool that could complete complex tasks in seconds. There's no doubt that things will never be quite the same.
Who will be the most excited to play with this tool? Our young learners.
Students of all ages will share our child-like fascination with the possibilities. And well they should: this is clearly a technology that will only grow in significance throughout their lifetimes.
Sadly, I fear that a school ban sends all the wrong signals about technology and the nature of learning. It feels like fear instead of curiosity, defense over offense, convention over adaptation.
It looks like head-in-the-sand, I-hope-this-goes-away kind of thinking. And that's not the approach of a lifelong learner.
I'm not suggesting that every teacher should give their students unfettered access to these tools. There will be times to close computers and show evidence of learning and critical thinking using pencils and paper, just as there are in classrooms today.
But there should be other times to play. To experiment. To learn together: teachers and students, sitting side by side, engaging, thinking, and talking about what it will look like to leverage ChatGPT and similar tools in constructive, powerful ways.
Closing thoughts
Whenever I come up against a difficult decision in our schools, I run it through this tried-and-true filter:
What is best for our kids?
What is best for learning?
Banning the latest technology from our schools just doesn't feel like a great answer to either of those questions.
Listen, there's no doubt that the path ahead will be challenging, and these tools will require new approaches.
But growth doesn't happen in the comfort zone. Let's lean into uncomfortable spaces and do what we do best: learn.
Together, let's shape the nature of thinking and work in 2023.