By The Editorial Board
With great power comes great responsibility, and that is especially true for artificial intelligence in the classroom.
AI grows more capable and sophisticated each day, making it all the more enticing and convenient for students to lean on as they pursue their education. But with such a high-powered tool at our fingertips, setting boundaries around its use is more important than ever.
AI technology comes in many forms. Editing software like Grammarly, which suggests improvements to essays, and generative models like ChatGPT are two of the most popular and obvious applications of AI in an educational context. They give students an extra set of eyes on their writing to improve their grades, but they also open new avenues for plagiarism.
But what if AI can spark students’ imagination, help them with their research or sharpen their time management skills?
According to EducationWeek, tools like Hello History allow students to hold text conversations with historical figures using readily available information. They have the potential to make learning history engaging, personal and, most of all, fun.
But AI tools aren’t just for students. Education assistants such as EdPuzzle, Education Copilot and Teacherbot can help teachers and professors generate lesson plans, grade assignments, create project outlines and more.
Artificial intelligence probably won’t replace the essential role of educators, but it can take on the tedious, time-consuming tasks that contribute to burnout and lead to teacher shortages.
The goal of using AI in an educational context is to supplement and augment teaching and learning, not to diminish them, and that requires setting boundaries.
At Baylor, the issue is playing out in real time. Last month, Provost Nancy Brickhouse approved the creation of a committee to address common questions and concerns from students and faculty about AI.
Headed by the dean of the School of Engineering and Computer Science, the committee will provide much-needed guidance on best practices for AI inside and outside the classroom.
In the meantime, Baylor’s Office of Academic Integrity suggests that professors keep students honest with some clever, AI-beating workarounds. Requiring assignments to be done by hand and writing course-specific homework prompts make it harder for AI to produce A-worthy output and easier for professors to spot robot plagiarism.
Professors, along with the university, have drawn a line in the sand. The question is no longer whether AI belongs in the classroom but to what extent; the challenge is to define its role as a tool rather than letting it become a replacement for human teachers.
As for students, the burden of boundaries falls on us, too. We aren’t at Baylor simply to pass our classes, and a homework or exam grade is only as valuable as what we learn from it.
If you skate through your classes by sneaking AI past your professors, you may be cleverly cheating the system, but you’re also cheating yourself out of a real education.
How clever will you feel when you cross that stage in four years and realize the only thing you taught to be better, sharper, more intelligent, more critical and more educated was an AI?