As we tiptoe into the realm of artificial intelligence, we must tread cautiously, mindful of the potential pitfalls.
I took my first Introduction to Computers class during my freshman year of high school. My teacher introduced the class to a chatbot called Akinator, a bot that could guess the famous person each of us was thinking of at that moment. While the experience was startling at first, we soon learned that the concept of AI had been around since the 1950s and that the development of some early chatbots predated the internet.
Today, in my sophomore year of college, I have watched mainstream media introduce a new competitor known as ChatGPT, an AI whose primary objective is answering questions from online users. However, ChatGPT doesn't stop there: students have used it to do their schoolwork, from summarizing readings and making flashcards to outright writing their essays for them.
More nefarious individuals have used ChatGPT to write working exploit code for them, taking advantage of websites and the data associated with them. After all, ChatGPT wasn't programmed with a conscience in mind (pun intended), only to help online users with their general inquiries. Patches are rolling out to prevent this kind of misuse in the future, but people have found workarounds without much strain on their part. And because access to the model is open to the public, it has brought about a kind of renaissance in the artificial intelligence community, as more people build on it and explore its capabilities.
However, AI's potential to invade personal privacy is another alarming facet. As AI analyzes vast amounts of personal data, individuals risk becoming mere data points for corporations and governments to manipulate. This intrusion into our private lives compromises the very essence of human autonomy and freedom. A calculating, indiscriminate AI may not have our best interests at heart, even when its operators advertise that it does, and the interests of those operators should set off alarms if they don't already.
Moreover, the so-called "black box" problem of AI has serious implications. AI algorithms, complex and opaque, can make decisions without clear explanations. This lack of transparency raises concerns about accountability, as AI-driven systems could render biased judgments with serious consequences for society, simply because the algorithm calculated that a given decision would maximize app engagement or website visits.
Beyond these immediate concerns, a fundamental question looms: Is the relentless pursuit of AI advancement eroding our humanity? We need people to be capable thinkers, imaginative in their expression, able to produce their own reasoning and become their own unique selves, not what a bot told them to do or become. Our distinctive traits, such as creativity, empathy and emotional intelligence, are among the qualities that make us human. Surrendering them to AI may slowly erode our essence across current and future generations, especially if we do not set guidelines or precautions.
It is imperative that we engage in robust discussions about AI's impact on society, our values, and our shared future. Setting clear boundaries and ethical guidelines is not an option but a necessity. Let us step back from a blind embrace of AI and approach this technology with wisdom, preserving what makes us truly human, and write general safeguards into our laws and conventions, agreed upon only after thorough discussion and deliberation.