The first time I used ChatGPT to help a student, we wrote a love story. It wasn't very good. Then I used it to answer an AP Physics question. The answer was wrong, and I quickly pointed out the error, thinking the model could take the feedback and produce the correct answer. It apologized and proceeded to give me another incorrect answer. The cycle repeated.

Despite giving several education companies early access to its model and adding a small disclaimer beneath the text box, OpenAI ultimately exposed students to a tool that could hallucinate, producing factually incorrect results. To many educators across the country, it's even less reliable than Wikipedia. Worse yet, students can now quickly produce paragraphs upon paragraphs of sub-par writing that somehow always ends with "In conclusion." First impressions matter, and for many educators the first impression was not good. The issue isn't the technology itself; it's the lack of a buffer between the incorrect responses and the end user. Education is meant to teach students the critical thinking skills needed to discern the bad from the good, and ChatGPT in its early iterations deprived students of that opportunity.

Is AI bad?

Fast forward to September 2023. A team of researchers from Harvard Business School, The Wharton School, Warwick Business School, and MIT Sloan published a paper demonstrating that the use of AI can improve a knowledge worker's performance by as much as 40% compared with workers who don't use it. I think it is safe to draw two conclusions from this study.

One: AI works.

Two: AI works for those who have it, and for those who know how to use it.

We now have a quantifiable measure that captures the essence of the underlying inequities of our system: 40%. Great for productivity, not so great for equity.

Any advancement in technology inevitably creates a population of students who not only fall behind but continue to fall behind. The gap is cumulative. It consumes the child's reality and, unfortunately for some, becomes an anchor on the child's identity. The technology doesn't need to be as advanced as AI. How about high-speed internet? Or access to a vehicle and a license? For students who lack them, these technologies and systems become physical impediments to both academic and economic mobility. AI will absolutely widen the gap even further.

Yet AI also presents a once-in-a-generation opportunity to help these kids catch up. We can now easily explain complex concepts using analogies and examples personalized to each learner's interests and passions. That's a level of differentiation never possible before. We can now explain how a medical device company works to an 8th-grade student with a 10th-grade reading level in Spanish and a 4th-grade reading level in English, using the student's own interests and activities as metaphors.
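
As a rough illustration only, here is what that kind of differentiation might look like in code. Both the `differentiated_explanation` helper and the `generate` callable are hypothetical stand-ins, not any company's actual implementation; assume `generate` wraps whatever language model a platform uses.

```python
# A minimal sketch of AI-driven differentiation. `generate` is a
# hypothetical stand-in for any LLM API call.

def differentiated_explanation(concept: str, interests: list[str],
                               language: str, reading_grade: int,
                               generate) -> str:
    """Build a prompt that tailors one explanation to one learner."""
    prompt = (
        f"Explain {concept} in {language} at a grade-{reading_grade} "
        f"reading level. Use analogies drawn from the student's "
        f"interests: {', '.join(interests)}."
    )
    return generate(prompt)

# Example: the 8th grader described above, explained in Spanish
# (the interests here are made up for illustration).
# differentiated_explanation("how a medical device company works",
#                            ["soccer", "cooking"], "Spanish", 10, generate)
```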

Is AI good?

This dichotomy is exactly why we need thoughtful discussions and implementations of programs that focus on the right problems. The conversation doesn't start with what AI can do. Instead, we should be figuring out what problems we want AI to solve. The tool is only as good as how well it fits the problem it is designed to address. What are our highest aspirations for what education can be, and what evidence-based instructional models do we wish to scale but can't?

Reliability

If we use ChatGPT to make up a sci-fi story based on mitochondria and the human body, we will stare at the final output in awe. If we ask it to perform complex arithmetic in the context of AP Physics, we may find more errors than correct answers. So how do we make AI more reliable? By adding more constraints.

1) We need to set clearer expectations about what data is being used, so we have a source of truth to verify the final outputs (see the sketch after this list).

2) We need to create more defined use cases that are tested and validated by educators.
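
As a rough sketch of the first constraint, the example below forces every answer to come from a vetted source so the output can be verified against a known text. The `grounded_answer` function and the `generate` callable are hypothetical illustrations, assuming any LLM API can be wrapped this way; in practice the source material would be reviewed and validated by educators, per the second constraint.

```python
# A minimal sketch of constraint (1): answer only from a vetted source.
# `generate` is a hypothetical stand-in for any LLM API call.

VETTED_SOURCE = """
Mitochondria are organelles that produce most of a cell's ATP
through cellular respiration.
"""  # educator-reviewed text; this serves as the source of truth

def grounded_answer(question: str, source: str, generate) -> str:
    """Ask the model to answer strictly from the vetted source,
    and to refuse when the source does not cover the question."""
    prompt = (
        "Answer the question using ONLY the source below. If the source "
        "does not contain the answer, reply exactly: "
        "'I can't answer that from the provided material.'\n\n"
        f"Source:\n{source}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Because every response is tied to a known source, an educator can check the output directly instead of relying on prompt-engineering expertise.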

Neither of these solutions can be implemented within ChatGPT itself, and it's important for SchoolJoy to establish the necessary measures to help educators navigate AI without having to spend hours upon hours of professional development on prompt engineering within ChatGPT.

Safety

Safety extends beyond mere compliance with regulations like COPPA and FERPA; it is a commitment to care and precaution. It's important for all AI-enabled companies to use a language model provided by an established cloud service provider with robust data security measures.

Equity

Equity in AI means creating systems that are not just universally accessible but deliberately designed to level the educational playing field. This involves taking proactive steps to include those who are traditionally excluded from advancements in technology. Unlike a one-size-fits-all curriculum, AI can tailor education to individual needs, but it must do so without amplifying existing inequalities.

Inclusion

The last cornerstone is inclusion. Our classrooms are tapestries of diverse learners with an array of needs and preferences. An AI system must be designed to be as universally accommodating as possible. This means accommodating not just linguistic and cultural diversity but also different learning styles and physical abilities. A truly inclusive AI system in education is like a versatile artist who can draw, paint, sculpt, and more, adapting its method to the medium and the audience.
