The Doomsayer’s Vision
It may be tempting to look upon AI as inevitable, as unstoppable, in the same way Google or the internet itself once appeared, and to complacently (or enthusiastically) accept this situation: the relentless advancement of AI, pushed by billionaires disconnected from reality and mostly free from accountability. The truth, however, as I see it, is that AI poses a unique and enormous challenge to our very humanity, threatening to replace (yes, replace, not supplement) the connections and relationships that make us, well, human.
To be exact, reliance on LLMs and AI technology in general will further diminish critical thinking and cognition, especially in an already standards-compromised education landscape where its “ethical use” is being embraced; undermine our romantic and family attachments, with chatbots attempting to fill the role of partner or nanny; and isolate us further, well beyond what we’ve seen from the effects of basic smartphone use. It will even result in violent tragedy for some. As we acknowledge the reality of these problems, some of us may wish to go beyond complacency. The question then becomes: what can any of us “doomsayers” do about it?
Of course, there is so much money involved, so many billions thrown around by the elite to develop AI and make it ubiquitous, that it’s no wonder its acceleration continues virtually unimpeded, with government contracts and university partnerships proliferating. This reflects the way in which AI has (as if it were sentient and somehow carrying out the process itself, using the developers and companies as unwitting instruments) attached itself to, wormed its way into, every part of our society; or, I should say, has begun to do so at an extremely fast pace.
What can we do? In the face of money being poured out, and the growing institutional force of corporate–government and corporate–university collaboration, what could the masses do to oppose it? The problem is compounded by the fact that many of us “average people,” those living in reality, may well be convinced AI is a cause for concern, yet remain passive, blindly using the technology for certain tasks and lending our indirect support to its growth. Some of us may even be manipulated into championing or tacitly supporting the actions of wealthy CEOs and developers if we mistakenly assume this class’s vision aligns with our own (that is, society’s or humanity’s) best interest as it promises, in tantalizing fashion, so much “progress.” Because AI’s current forms offer decadent convenience, or because we buy into a bogus narrative of progress, we basically accept it. Going further, we may believe (because they want us to believe) that AI development is an overall positive and not a dehumanizing venture; that its benefits outweigh whatever risks it brings. But how can anyone ignore or trivialize the red flags and sustain a tolerance, much less an appreciation, for such a thing? The dangers are already immense. To acknowledge them half-heartedly, only to shrug and move on, requires a strange coolness towards the direction of our civilization and our species itself. One must, in some way, say “yes” to ruin, though this may happen unconsciously in those who are duped.
In the face of such forces, and mindful of our own propensity for gullibility and our limitedness, the obvious course would be to shake ourselves from the sleep of both complacency and manipulation (the latter being more difficult, of course), then say, “No thanks; I’d rather not, given where it may lead.” If possible, we would have the self-control to avoid downloading the chatbots, boycott the products of this derangement, and use whatever political voice we have to call for regulation. That would mean prioritizing our humanity in the long run (our connections, relationships, creativity) above whatever real or imagined benefits come from using AI technology, and above the elite’s interests.
At least, we could have. Perhaps we still could. But perhaps we just want our lives to be as easy as possible, valuing AI to some degree for the convenience it provides, even as that ease destroys us spiritually and, ironically, makes us more miserable; even as certain billionaire transhumanists push for the eventual overcoming of (corporeal) humanity and, as one might conspiratorially imagine, open the door to the end of our species. (As a side note, whether or not humanity endures, they would be the ones, because of their vast wealth and resources, to benefit from the practical realization of transhumanism.) Perhaps we remain passive because we assume we’re powerless, and perhaps we are.
I don’t know. I could be wrong about this, I admit, and I hope I am. After all, I look upon AI with the alarmist and sensationalist conviction of a doomsayer, a true and thorough pessimist. So, instead of a new beginning, I see an end. Where others may see progress, I see potential collapse. I see civilization crumbling and humanity regressing until the losses cannot be reversed. In this scene, instead of evolving, we have all but disappeared. And, if we’re there at all, we watch from the shadows as AI builds itself into something unimaginable; the doomsayer looks on, unsurprised but nonetheless disturbed to find that they are right.