By Lukas Reyes | Reporter
The boom of artificial intelligence has been nothing short of incredible. ChatGPT introduced a form of AI that is easily accessible to everyone and provides information simply and efficiently. AI has transitioned from a tool used by companies, marketers and data collectors to a fixture of the mainstream media. As a result, educators at all levels, from elementary schools to universities, have found themselves scrambling to mitigate its potential effects on students. In an attempt to prevent plagiarism and cheating, many institutions have banned the use of AI outright.
I think it is important to consider that the advent of new technology such as ChatGPT or Bard does not amount to the introduction of cheating. Those who are willing to cheat will always find, and have always found, ways to do so. Likewise, students who do not cheat will not suddenly be inspired to start simply because generative AI exists.
Cheating has always been around, certainly for as long as I can remember. In fact, cheating has long been a source of discussion at Baylor. The Baylor Lariat ran an article on Jan. 12, 1901, about what steps the university was taking in its honor system. For over 120 years, students, faculty and the university have been discussing the issue of cheating.
To assume that a new technology will suddenly inspire people to cheat seems out of touch. Besides, misuse of technology has always been an issue among students. A comparison can be drawn between the introduction of generative AI and the arrival of the internet on Baylor’s campus in the spring semester of 1994. By Feb. 22, 1994 — just over a month after its campus introduction — student misuse was rampant enough that the university threatened to revoke internet access. Imagine if Baylor had banned the internet on campus entirely simply because students were misusing it and, from then on, refused to let students use it for their studies. That seems a little silly, doesn’t it?
To ban an emerging technology simply out of fear will leave students unfamiliar with a tool of the future. Generative AI programs will not disappear just because educational institutions dislike their potential for dishonest use. After all, dishonesty is not their function. The function of generative AI is to present information in a manner that is concise and easy to understand. It serves as a mediator between the incomprehensibly vast amount of information on the internet and the people it serves. AI is not a fad, and its persistence in our lives, now and in the future, is all but inevitable.
Therefore, it seems evident to me that Baylor and other educational institutions have a responsibility to integrate generative AI into their courses and teach students how to use such programs effectively, correctly and responsibly to further their education. Instead of resisting the changes that accompany the future, students should move with them and adapt. Plus, if students get into the habit of familiarizing themselves with emerging technologies, they will be better equipped for the jobs of the future.
This is a plea to Baylor to allow its students to use generative AI and any emerging technologies like it. I insist that it is the responsibility of the university to equip its students, to the best of its capabilities, with the skills and tools necessary for the workforce of the future, which will soon incorporate artificial intelligence.