Opinion: AI sentience

Abhinav Kumar, Opinion editor

There are few ethical quandaries as perplexing as that of artificial intelligence. While AI will revolutionize the world as we know it, a rapidly growing number of people question how far we want it to go before we have to start calling it human.

I fully believe that it is nowhere near sentience and will remain completely within our control for the foreseeable future. This is because, at its core, AI is just a program that’s really good with information; it can process and present information faster and more accurately than almost any other current technology.

Our most recent example of prevalent AI is ChatGPT, every English teacher’s worst nightmare. ChatGPT is a large language model, a neural network trained on an enormous amount of formal, refined text. This means that its job is to sift through the patterns it has learned and string together information however it’s asked to do so.

Many believe that ChatGPT is “too human” and that it “poses a threat to society” because of its utility and proficiency. This, however, could not be further from the truth.

This is because ChatGPT, as mentioned above, is just a computer program that knows how to talk to people, and it is the most knowledgeable thing you will ever communicate with. While the model can, in some cases, be told that it is human and will accept that claim because it simply learns from whatever it is given, it is all still just a program.

That isn’t even to mention how many flaws ChatGPT, and the GPT models behind it, still have. AI in general is really bad at critical thinking, and GPT is not much different. While advancements will rapidly be made in this area, AI is nowhere near as proficient as the human mind. AI is also wrong, a lot. ChatGPT specifically will very confidently tell a user that 3 + 3 is 7 and will not accept that it is incorrect.

While AI will bloom in the coming years, the sci-fi-esque theory that it is developing human-like consciousness is not materializing. There are definitely ethical concerns associated with AI, but sentience should not be a central one in its current state.