I was recently told by a professor that AI is to intelligence as calculators are to math — it transforms the method, but the principles remain the same. The skeptic in me, however, isn’t entirely convinced. The world needs mathematicians, regardless of technological changes — but will the world need any of us if AI can do it all the same?
Strong supporters of AI argue that the goal is efficiency: at its most basic level, how much can be produced with as few resources as possible. AI helps with the boring tasks of corporate work, like writing emails, drafting flyers and generating to-do lists. In educational environments, researchers are looking to use AI to analyze data — what would normally be a strenuous and time-consuming task.
At the highest levels of the corporate ladder, firms are looking to use AI to automate production and better manage the work environment, all in the name of efficiency. Companies are concerned they won’t be able to keep up with innovation if they don’t embrace AI, with the largest AI companies themselves warning of the consequences.
The problem with efficiency, though, is that it doesn’t care about us. Data shows that U.S. companies have shrunk their white-collar workforces by 3.5% in the last three years, most notably at large companies such as Amazon and Microsoft, firms that are heavily incorporating AI into their environments.
This data comes just before the number of unemployed workers outpaced the number of jobs available in the U.S. in September, with long-term unemployment reaching levels last seen during the pandemic in 2021.
Companies are finding that they can strip out middle management, a group that has shrunk by 6.1% in the last three years, and replace those roles with AI — all while workers themselves report negative mental health effects from its implementation.
Supporters argue that AI will push us towards the humanities and arts, since AI will do the hard work for us. Peter Thiel, a notable tech entrepreneur, claims that AI will actually be worse for math enthusiasts than for writers. Current trends, however, aren’t bearing this out.
Arts and design majors currently face higher unemployment than graduates of other fields of study, topping out at an 8% unemployment rate for art history majors. In Los Angeles, the film industry saw over 40,000 jobs lost in the span of a year.
The threat of AI spans industries, undercutting the argument that displaced workers can simply be reallocated elsewhere. If AI takes over jobs across every industry, maintaining a livable wage will become extremely difficult for the average citizen. There will be no time for us to make the art we are allegedly meant to be making.
This is not to mention the environmental concerns that come with the excessive energy data centers use to support large language models, or LLMs. OpenAI CEO Sam Altman is looking to generate 250 gigawatts of electricity within the next 10 years, a figure inconceivable with present-day power plants. Simultaneously, the International Energy Agency, or IEA, reports that power plants running data centers could more than double climate pollution by 2035.

A common theme in all of this is our lack of agency. Companies stubbornly oppose criticism of rapid AI development, ignoring its potential harms.
Google downplayed the true rate of hallucinations — AI responses drawing on false information — when criticized for generating fake health information. Meta tried to undermine critics’ credibility when called out for explicit chatbot guidelines that allowed “sensual” conversations with minors. OpenAI claimed that ChatGPT had a safety feature that would refer users to a suicide hotline if needed; it never did for a child who eventually took his own life.
If we are placing our data — more importantly, our livelihood — in the hands of corporations who do not care about us, what could possibly be left for us in an AI world?
I do believe there are positive use cases for AI. LLMs can drive medical research capable of improving cancer diagnoses and treatment plans. If we can use the technology as a tool to help save lives, then I am more than okay with it. Unfortunately, it’s not being used as a tool but as a replacement — and this will only get worse as the AI gets better.
We’ve created a technology beyond anything we’ve ever imagined, and companies aren’t sure how to control it. OpenAI has admitted that its newest ChatGPT models hallucinate more than their predecessors, and it isn’t sure why. Let me stress that: the company does not understand how its own AI works.

The same company recently agreed to a $100 billion deal with Nvidia, adding fuel to the fire that is the AI tech bubble. Human labor is being undervalued because implementing AI models looks cheap, an appearance sustained only by large companies accruing debt to keep prices from exploding. Which will happen first: our complete replacement by AI, or the bubble popping and collapsing the economy?
This is all to say: do not lose hope. With enough conversation, I find that most people understand the dangers that come with using AI. Nobody really wants to be replaced.
I urge my peers to be conscious of their own use of AI chatbots, wary of the dangers of turning to them for therapy and of the strain generative images put on the environment. Pay attention politically — be ready to support and vote for restrictive AI bills that seek to make the technology safer to use, most notably for children. Support local artists and, importantly, create yourself. The human condition is irreplaceable — our thoughts and experiences are something AI will never be able to recreate.
