Fostering Cognitive Resilience in K12 Education for the AI Era

K. Melton giving closing keynote speech at NICE K12 Cybersecurity Edu Con in 2024

On December 10, 2024, I gave the closing keynote for the annual NICE K12 Cybersecurity Education Conference in San Antonio, Texas. I was so deeply moved by those who shared their own stories with me afterward that I wanted to post the speech in its entirety in a public place.


 

Our students are immersed in a digital landscape shaped by hidden algorithms, bombarded with overwhelming amounts of information, and surrounded by detached social interactions that often lack physical, meaningful connection. Worse, the technology driving this change is designed with Big Tech’s profit motives in mind, not the wellbeing of its users.

As tech addiction spikes and a national youth mental health crisis unfolds, AI’s role as a foundational force is ever-growing, effectively leading the charge toward these daunting realities. What can we do in the face of such profound challenges? How do we prepare for AI’s impact when it’s still evolving?

As a veteran at making complex cybersecurity concepts accessible to everyday audiences, I’d like to discuss a crucial part of the solution: fostering cognitive resilience in our students.

Understanding AI in Education

Adaptive technology has already transformed how we meet individual educational needs, allowing for personalization at unprecedented levels.

Tools like ChatGPT are not just aids for learning; they can be valuable gateways to creative play and understanding. They can allow for more seamless experiences, support independent research skills, and quicken our ability to process enormous amounts of data. For example, GPT-4 reportedly has some 1.8 trillion parameters and was trained on more than a petabyte of data. That’s enough data to fill 20 million four-drawer filing cabinets, or to watch over 13 years straight of HD video!

We humans cannot conceive of numbers this big, let alone process that much data! This unmatched power is what lets AI break new ground across all sorts of industries.
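If you’d like to sanity-check those analogies with your students, here’s a quick back-of-the-envelope sketch in Python. The assumptions are mine, not official figures: broadcast-quality HD video at roughly 19.4 Mbps, and about 50 MB of plain text per four-drawer filing cabinet.

```python
# Back-of-the-envelope check of the petabyte analogies above.
# Assumptions (mine, for illustration): broadcast HD at ~19.4 Mbps,
# and ~50 MB of plain text per four-drawer filing cabinet.

PETABYTE = 1e15                        # bytes
HD_BYTES_PER_SEC = 19.4e6 / 8          # ~19.4 Mbps broadcast HD, in bytes/sec
SECONDS_PER_YEAR = 365.25 * 24 * 3600
TEXT_PER_CABINET = 50e6                # bytes of text per filing cabinet

years_of_video = PETABYTE / HD_BYTES_PER_SEC / SECONDS_PER_YEAR
cabinets = PETABYTE / TEXT_PER_CABINET

print(f"~{years_of_video:.0f} years of nonstop HD video")        # ~13 years
print(f"~{cabinets / 1e6:.0f} million filing cabinets of text")  # ~20 million
```

It’s a fun five-minute exercise: change one assumption (say, 4K video instead of HD) and watch the analogy shift.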

But… these datasets were assembled, and these algorithms written, by humans, who are inherently biased.

We’ve seen how AI can perpetuate these systemic biases if not vigilantly managed. And every question we ask a chatbot can shape the model itself: many providers feed user interactions back into future training, so answers can change over time even if the original inputs have not.

The now-infamous example of how terribly wrong this can go is Microsoft’s Twitter chatbot, Tay. Released in March 2016, Tay had to be shut down within 16 hours of its launch because it so quickly devolved into inflammatory and offensive posts that rivaled the platform’s worst. (And if you’re at all familiar with Twitter’s general environment, you know how truly terrible this was!)

Challenges & Ethical Considerations

As fast as AI grows in influence, our strategies for managing it continue to lag far behind. AI evolves much faster than our ability to regulate it, and in many cases, even faster than our ability to effectively teach it!

To my mind, this is not at all dissimilar to the evolution of the internet.

We as educators are again left with the task of not just understanding AI but regulating its impact on privacy, combating ingrained biases, and addressing the ethical dilemmas it brings into the classroom. We must prepare our students not for a distant future but for a present where digital literacy is fundamental to success.

Although the psychological and developmental impacts on kids are still largely unknown, initial studies show children interacting with AI in ways that blur the lines between technology and humanity. They often attribute real, humanlike qualities to AI platforms, assigning them thoughts, feelings, and distinct personhood. They tend to overshare personal information and to trust these systems to answer fact-based questions even more than they trust human sources.

This artificial bond can greatly impact their social skills and decision-making, especially for young kids. And because they are naturally impressionable, the absence of natural etiquette and politeness in these exchanges has already been shown to erode their social acumen with IRL humans. After all, it’s uncomfortable to be human even in the best of circumstances; interacting with AI doesn’t usually bring the same messy, unpredictable, even painful responses that come from interacting with another person.

The biggest challenges and ethical considerations at the intersection of AI and students involve data privacy, bias, hateful stereotypes, bullying, and social engineering. Real, recent headlines exemplify every one of these points.

There’s a clear, crucial need for guidance if we expect them to successfully coexist with this technology and still be part of the world at large.

We must be proactive in recognizing red flags and supporting students who may be especially vulnerable to the negative impacts of AI: those struggling with isolation, anxiety, and depression, and those with conditions like autism.

Cognitive Resilience

We are not dealing with bare data, mere 1s and 0s, but rather with the need to adapt to, and even challenge, this new paradigm.

Because these technologies shift and morph so quickly, it’s more fruitful to emphasize critical thinking and emotional skepticism than to focus on how to utilize the latest release. Cognitive resilience has become ever more crucial as a set of skills that can enrich students’ lives regardless of what hot new thing comes out or where their trajectory takes them. Students must be taught not just how to use these tools, but how to understand and critically evaluate their outputs.

Let’s teach our students to pause, to breathe, and to think critically about any information presented. Instill in them a healthy dose of skepticism so that they don’t blindly consume but rather question and verify.

In security awareness training, we often use the axiom, “Think before you click.” The same concept can be applied here. A simple three-second pause gives higher-level thinking space to override knee-jerk instincts. If the content is especially provocative, taking three deep breaths can promote mindfulness and short-circuit emotional reactivity.

You can also emphasize the importance of engaging deeply with material rather than relying on AI to do all the thinking. For example, Khan Academy’s Khanmigo chatbot is designed to be Socratic: it challenges learners and helps them arrive at correct answers on their own.

“Algorithmic awareness” can be a key concept for teaching the mechanics of how AI tools arrive at responses and construct digital experiences. Questions you might challenge students with: Who benefits from my believing this? What motives lie behind certain responses or curated experiences?
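To make that concrete in class, you might walk students through a toy “engagement-first” feed ranker. This is a hypothetical sketch of my own, not any real platform’s algorithm, but it shows how an objective like time-on-site quietly decides what rises to the top:

```python
# A toy "engagement-first" feed ranker -- a hypothetical classroom demo,
# not any real platform's algorithm.
posts = [
    {"text": "Local library adds new study rooms", "predicted_engagement": 0.21},
    {"text": "You won't BELIEVE what this school just banned!", "predicted_engagement": 0.87},
    {"text": "Five tips for evaluating online sources", "predicted_engagement": 0.34},
]

# The platform's objective is time-on-site, so it sorts by predicted
# engagement -- not by accuracy, usefulness, or student wellbeing.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for rank, post in enumerate(feed, start=1):
    print(rank, post["text"])

# The provocative post lands on top. Ask students: who benefits from
# that ordering, and what would a different objective surface instead?
```

Even students who never write code can follow the sort line and see that the ordering has nothing to do with truth.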

You may even consider what it would look like to purposely engage AI to generate believable falsehoods. Think innovatively about the specific challenges AI introduces into learning and accountability. How might you solve the problem of safeguarding against plagiarism or homework completed entirely with AI? What new types of assignments could you add to your repertoire?

Regardless of how you approach it, it’s imperative we ensure these tools serve as extensions of our educational goals, not as their replacements.

Call to Action

We are standing at a pivotal moment in educational technology.

Like the introduction of the telephone or the television, AI brings with it widespread fear, uncertainty, and doubt about its long-term effects on society. It’s futile to think this problem will go away or solve itself. AI represents an inevitable evolution rather than a choice, and it requires immediate, proactive engagement from educators like you.

But remember—

Our own capacity for adaptation, growth, creativity, and resilience is boundless! One of our greatest gifts to our students can be equipping them with that awareness. This can help ensure they will have the ability to think critically and to navigate overwhelming information with clarity and confidence.

As we push forward, let’s not wait for legislation to catch up. Let’s take the lead as educators, mentors, and guides, as we have always done. Through small changes, a little ingenuity, and community-wide effort, we can ensure that our students are ready not just to face the future, but to actively shape it rather than letting it shape them.

Please don’t give up.

We’re in this together.

And y’know what? We’ve got this.

 

K. Melton

Growing up in Appalachia amid intergenerational poverty and genetic afflictions, I faced a world of extremes from the start. Through a relentless drive to better myself and overcome adversity, I carved a path to success.

Now, as a cybersecurity awareness expert and DEI advocate, I draw on those experiences to help others unlock their potential and thrive in inclusive, inspiring environments.

I bring over a decade’s experience herding high-performance creative tigers, fostering innovation, managing large-scale change, advocating for DEI and accessibility, and enhancing digital literacy for every demographic.

https://k-melt.com