The Dystopian Horizon: How AI Challenges the Fabric of Society

Tiago V.F.
24 min read · Feb 19, 2024


Artificial Intelligence has been, for the most part, a largely failed field filled with arrogance. Ever since its beginning, it has constantly overpromised and underdelivered. In large part, this is because it has been an attempt to replicate us, human beings, and we don’t really understand what human beings are or how we work. This is partly a gap in scientific knowledge, particularly in psychology and cognitive science, and partly a product of our culture’s habit of framing everything, from the broader universe to conscious experience, in atomistic, materialistic, and mechanical terms.

Nevertheless, that pessimism applies most strongly to AI in its original sense, the concept for which the term was coined. There are two core ideas: 1) artificial systems that can behave intelligently in a variety of contexts without explicit instructions, and 2) systems able to mimic human actions or human behavior. A third, though rarely mentioned directly and only tangentially covered, is consciousness or sentience. But that is such a can of worms that, to simplify the present discussion, we will largely ignore it.

On these two objectives of AI, intelligent generality and human resemblance, it has indeed failed. In many others, it has succeeded: machine learning has unlocked a variety of new and impressive use cases over more than 50 years, with significant development in the last 20. There is no denying that, but almost all of these successes fall short of the ambitious goals of generality and human-like behavior. They are very often narrowly domain-specific, and in my opinion, calling them AI is a misnomer.

We can see the recognition of this failure in the dilution of the term AI over the years. What used to be a fairly technical term encompassing all our major ambitions for true intelligence has degenerated into a label for anything that incorporates the most basic of algorithms. There are thousands of products and services that claim to rely on AI when, in reality, they use basic machine learning at best, and sometimes very basic statistics, logic so simple it could be implemented in an Excel sheet.

There have been two major turning points, however. The first was the paradigm shift in LLMs. While nothing fundamentally new, and an extension of a longer project tracing back through decades of NLP research, ChatGPT (initially built on GPT-3.5) by OpenAI marked a paradigm shift in its ability to understand and create meaningful conversations. The terms “understand” and “create” are loaded with philosophical problems, endless difficulties, nuances, and controversies. Nevertheless, they are justifiably used here from a pragmatic standpoint: these systems seem to understand and create language for most practical purposes. That is why people use them so much and why they have become so entrenched in human life in so little time. Later versions were even more powerful, reaching a point where, for some basic tasks, they perform almost perfectly.

People haven’t fully grasped how impressive and powerful modern LLMs are. The fact that they understand language opens the door to a realm of previously unthinkable technology. One major problem with modern technology is that it is complex and technical, hidden away behind code that is unreachable for most people. Even developers often have a limited understanding, given the scope of the projects: they understand some aspect, but not the whole, because these systems are built by huge teams, each working on a small module. Part of what’s so revolutionary about LLMs compared to any other technology is that they dwell in the realm of language, and thus have entered the realm of the human.

Part of what makes technology often seem so separate from us is that there is no natural, seamless way of interacting with it. Of course, that interaction is made as easy as possible; technology has always aimed to be usable in the easiest manner possible. I open my toaster, I put the bread in, and I close it. I don’t have to think about any of its complex inner workings. But when technology gets incredibly complex, that user interface becomes trickier. In some ways, the interface can remain incredibly simple, to the point of clicking a single button, even when the output is extremely complex. However, all of that has to be created beforehand. Some engineer programmed that complex set of actions, from the high-level architecture linking several steps together, down to the most basic component of each action, line by line.

What makes LLMs so unique is that, for the first time in history, there is an artificial system that understands intent. There doesn’t need to be an existing structure in place for it to perform a particular action. The technology itself understands what I’m trying to ask or accomplish. It’s hard to overstate how powerful, and almost magical, this truly is. Even if we had achieved utopian technologies that for one reason or another still haven’t come to fruition, such as nanobots that can build cities on other planets, or super-computing based on yet-undiscovered superconductors, none would be on the same level as LLMs. No matter how complex, they would all be pre-specified sequences of steps that I cannot interact with.

Even among technologies specifically focused on interaction, none come close to LLMs, because without LLMs and their variants, meaning cannot be determined without explicit instructions. I always need to work at the level of the technology, at the level of code, in order to achieve the outcome I want. I may, for instance, use some product or service that allows me to perform a variety of functions, one of which may be to call the police. However, that function has to be pre-programmed, and I need to follow the conventions of the logic that was implemented. This is why chatbots and automated call services have always been so frustrating: they never understand the meaning behind the directions you’re attempting to give them. If you don’t use the precise instruction they were coded for, it will never work: the dreaded “try again” or “sorry, I didn’t understand that”.

One way to mitigate this, and the way we have long done it, is to anticipate variations of input in order to imitate understanding. The program doesn’t just trigger on an input of “police”, but also on “cop” and whatever other variations we can enumerate upfront. This works for basic tasks, but for complex ones it becomes a problem that seems impossible to solve: for any ordinary intent, there is an almost infinite number of ways to phrase it, making it impossible to code every possibility. Yet this is precisely what LLMs handle. No matter how you phrase it, they are incredibly good at understanding what you meant, as the sketch below illustrates. Granted, I’m not saying they do this perfectly or that there are no limitations, but for most purposes, it simply works.
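To make the contrast concrete, here is a minimal sketch of the pre-programmed approach described above (the intent name and keyword list are invented for the example). The router only succeeds when the input happens to contain a phrasing someone enumerated in advance:

```python
# A hand-built intent router: every accepted phrasing must be enumerated upfront.
INTENT_KEYWORDS = {
    "call_police": {"police", "cop", "cops", "911"},
}

def keyword_router(utterance: str) -> str:
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:  # any enumerated keyword present?
            return intent
    return "sorry, I didn't understand that"  # the dreaded fallback

print(keyword_router("call the police"))           # -> call_police
print(keyword_router("get law enforcement here"))  # -> fallback: this phrasing was never enumerated
```

An LLM-based system replaces the hand-built table entirely: the model itself maps an arbitrary phrasing to the intended action.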

This is why it’s such a paradigm shift that people haven’t truly grasped. We finally managed to embed “true” understanding into something artificial. Of course, LLMs don’t actually understand anything; they work with statistics and probabilities. This is something I am very well aware of, and every time I make this claim, the philosopher in me screams in outrage. Yet, as previously mentioned, such terminology is very hard to avoid, because in a pragmatic sense this is precisely what they are doing.
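For readers curious what “statistics and probabilities” means mechanically: at each step, an LLM assigns a probability to every possible next token given the context so far, and one token is sampled from that distribution. A toy sketch, with invented numbers standing in for a real model’s scores over tens of thousands of tokens:

```python
import random

# Invented next-token distribution for the context "I called the".
next_token_probs = {"police": 0.55, "cops": 0.25, "doctor": 0.15, "toaster": 0.05}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sample one next token, as a generation loop would at every step.
print(random.choices(tokens, weights=weights, k=1)[0])
```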

Bringing it back to the core argument of this article, and why this is part of my doomsday prediction: this blurs the line between human and non-human. Technology has long consumed human existence. We do very little without it. I wake up with technology, I eat with technology, I travel with technology, I work with technology, I communicate with others through technology, and I entertain myself with technology. This has certainly given us many benefits, and my intent is not to ignore or downplay them. But such dependence has consequences. Some are pragmatic, such as health problems or environmental concerns; some are more existential, such as the dependence itself. Despite all this, one element has always resisted this technological dominance: other humans. We are naturally social creatures and seek interactions with other humans. This is an area where technology has so far fallen significantly short. Despite all my technological immersion, these systems are all clearly non-human. They are things, or tools. Part of what makes them unable to fulfill that role is that they don’t understand me like a human does, and they don’t interact like a human does. Even technology made precisely to mimic both has failed miserably. Until, of course, LLMs.

Because of everything I’ve argued thus far, LLMs are a paradigm shift in the sense that they grasp actual meaning. They understand what I mean to say, no matter how I say it. They’re not just executing a pre-determined program that, upon recognizing a certain input, has been coded to produce a corresponding output. The range of inputs they can recognize is more or less infinite. I can ask anything, and they will respond with a reasonable answer, very likely understanding precisely what I meant. Because of this, they mimic human interaction extremely well: not only understanding what they receive but responding with a fairly reasonable answer, in some ways indistinguishable from a human. Not always, of course. I’m not blind to the errors LLMs often make, nor to the stylistic hints that sometimes make LLM-generated content obvious. Nevertheless, those are not the norm. It often works very well, and such errors and stylistic tells are likely to disappear or greatly diminish soon. That it works so well is undeniable if you’re not nitpicking examples, and its heavy adoption in chatbots is decent evidence of its effectiveness; companies use it because it works and because it’s reasonably realistic.

And hence my pessimism: LLMs will blur the line between what is human and what isn’t, and diminish the need for human interaction, sinking us further into a spiral of technological living in which we deal with nothing but objects, with real human interactions becoming increasingly sidelined. I’m not so delusional as to think LLMs will replace human interaction entirely. People will continue to have friends, family, and relationships. LLMs can’t easily make such a need disappear. Yet they will erode it, in the same way that technology in general has already eroded these social aspects, little by little. So not only is this yet another technological advance, which is often problematic by itself, but it is a technology that happens to cross into the one domain technology has generally been unable to enter: the feeling of human interaction. A technology of non-technology. The experience of being understood through natural language, and likewise receiving a response. Not a pre-determined response triggered by some manually pre-coded IF statement for a specific input, but a mechanism that works across any input. And lastly, such responses are far from random; they are properly contextualized and generally follow a logical flow.

This erosion of social life is the aspect I predict people will underemphasize the most. Even those who agree with my analysis of how impressive, special, and groundbreaking LLMs are may feel that they will be just another tool and that life will generally continue as normal. I wish that were the case, but I don’t believe it will be that innocent. The effect will not be immediate; in the same way, many existing technologies with negative consequences for how we live did not show their effects overnight. Even today, there are existing use cases that should make this future apparent. There are AI chatbots designed to act as therapists for depression or trauma, replacing the role of a human therapist. In homes, AI agents have been used with elderly dementia patients, providing emotional support and “someone” to talk to. I would hope it is clear how dystopian and dangerous such scenarios are, and they are just the beginning. Such applications always start at the edges. There are certainly arguments for all of these things. As sad as it is that some people cannot get the help they need, often for financial reasons, the ethics of AI therapists are not so clear-cut if the alternative is nothing; forcing that alternative would, on the statistics alone, cost lives. Likewise, caretakers may not have the resources to give dementia patients sufficient attention, and those patients may not even recognize the difference, simply wanting “someone” to have a basic conversation with and to feel listened to. But this is only the start; over time it will move from the edges and get closer and closer to the average day of the average person.

Despite all this pessimism about LLMs, they are not what prompted this article. What made me particularly fatalistic about our future was Stable Diffusion (SD), though, much like LLMs, not from the very beginning. Just as early results from LLMs were disappointing and far from realistic, the same can be said of SD. In some ways, I’d argue that while SD follows a progression similar to the one I’ve described for LLMs, its progression was even more dramatic. Initial results were clearly absurd: images that were not only far from realistic but often funny in how bad they were. Many memes were made from such examples, ironically claiming that this is the AI that will take over the world. But the images kept getting better and better, more and more realistic. And what’s particularly extraordinary about this technology is how customizable and flexible it is. You can ask it for pretty much anything, and it will produce a reasonable result. Once again not without flaws and hiccups, but on average very impressive.

Even with all my praise of LLMs, in some ways SD is even more mind-blowing. I remember three key events that deeply influenced how I think about AI. The first was the first realistic AI-generated portrait I encountered. This was not remotely recent by AI standards, and the technology has gotten much better since. Nevertheless, the fact that it produced an image I couldn’t possibly have guessed was artificially generated was dazzling. To this day, despite being so used to the concept, the technology becoming mainstream, and having seen so many of them, my brain still has a hard time comprehending that such images are completely fictional. Furthermore, the fact that they are produced from a simple text prompt seems closer to black magic than engineering. This is something that has always deeply impacted me about both LLMs and SD: not only their power but also their speed, flexibility, and accessibility. Even a young child could use them.

What makes this concerning is, again, the blending of the artificial and the real. The more powerful the artificial creations, the harder they are to distinguish from real life. As with LLMs, the line gets increasingly blurred. And the easier they are to use, the easier it is to lose sight of that line. Fake images have long existed, particularly since powerful photo-editing software emerged. However, they required a largely manual process that demanded both significant skill and significant time. SD inverts this completely, requiring neither skill nor time. Many people are rightfully worried about the political implications. Yet that’s not my main concern, though I recognize the danger. What worries me is the broader phenomenon of being increasingly able to generate something artificial yet indistinguishable from the real thing.

The second key event was the release of Photoshop’s Generative Fill. Standard SD was scary enough, but at least the challenge was distinguishing the artificial from the real. Generative Fill went even further: now the artificial blends directly with the real. The question is no longer only whether something is real, but to what degree it is real. I still can’t quite comprehend how the average person isn’t mind-blown by this; it’s nothing short of magic. I find the examples of expanding an image particularly striking. Editing an existing photo is impressive, such as adding a hat to someone, but to me that feels wildly different from expanding a square photo into a landscape one, filling in the outer edges: not merely adding something on top, but re-creating the image from scratch and blending it seamlessly with the real photo.

The third and final event was very recent: OpenAI announced Sora, their AI video generator, a few days ago. AI video has existed for quite a while, but as in the previous examples, it was horribly bad. This was the first time the output was both impressive and realistic, with instances that are completely impossible to distinguish from real life. Granted, they highlighted the best examples, and many are full of artifacts that make them obviously artificial. I’m sure that once it is released to the public, it will be underwhelming compared to the examples shown. Nevertheless, we should have learned our lesson by now: this will improve very quickly. It’s hard to overemphasize how much of an impact this had on me. I honestly did not expect video generation to become anywhere close to decent anytime soon. It is too complex a task, and too many things can go wrong; we can’t even get SD to generate consistent images across runs. Yet somehow, they were able to produce impressive results.

Video generation is particularly striking because no matter how realistic images get, they are images: static slices of (simulated) experience, not how we actually engage with the world. Video, by contrast, is precisely how we experience the world. Even though, technically, from a neuroscience perspective, the analogy of vision functioning like a video camera is completely wrong, from a phenomenological perspective it is fairly accurate. Video opens up a whole new dimension not seen with either LLMs or standard SD: the creation of worlds. With such technology, it will be easy to generate an artificial version of more or less anything, likely in seconds, and likely close to infinitely customizable.

This cut the last thread of hope I retained about not sinking into a complete dystopia. LLMs will slowly infect human relationships, but no matter how socially isolated we become, one could have hoped that we would retain our embodied experience in the world. AI video generation, however, once it becomes sufficiently powerful, will be able to generate virtual worlds of incomprehensible accuracy and immersion.

Granted, technology like Sora alone will not achieve this. For one, video is very close to how we experience reality, but not fully. Video is still a flat display, while our experience of reality has depth: it is 3D, and it comes with a sense of agency. Not only am I looking at a particular scene, but I can adjust my perspective with my own body. I can look to the edges, I can move forward, and so forth. None of this is inherently available with technology like Sora, but it won’t be a big leap. With technology such as Unreal Engine, we have already made 3D assets and 3D creation much easier and, in some ways, much more automated. Likewise, there are already generative AI tools for 3D assets, though only for fairly basic and simple ones. Not very far in the future, you will be able to go from a simple text prompt to a fully immersive, realistic 3D world. The VR of today allows you to engage with another world, but such worlds are clearly fake. The moment we have realistic VR where you’re not entering a “VR world” but simply another world, just like the one you were born into, will mark a significant shift in history.

It’s also true that visual realism is not sufficient. We have a variety of other sensory experiences that are part of everyday embodiment and that hint that what we are experiencing is real. Without them, it’s hard to make a VR experience truly realistic. Even if I’m transported to an entirely accurate and realistic 3D model of a jungle, vision alone isn’t enough to be completely convincing. There is the smell of the jungle, the feeling of my feet sinking into the ground and of walking through the vegetation. This will certainly delay the realism of VR, but not by very much.

First, it’s important to highlight that the brain is very plastic; in the words of Andy Clark, we are natural-born cyborgs. Our cognition quickly adapts to new tools and new environments, treating them not as external parts but as extensions of yourself. This extends to virtual worlds, where the environment is quickly understood by the brain not as a picture you’re looking at, but as a world you happen to inhabit. And that is precisely the correct understanding, because it isn’t a picture, or a series of pictures; it is indeed a world if you can change your perspective and engage with it. Even in fairly crude VR systems, many people report feeling that they were truly in that world, particularly after extended use. Clear signs that it’s not the real world, such as missing sensory input, are quickly forgotten and minimized, likely because they belong to another world, a world you’re not currently in. What’s dazzling is the transition. I’m not claiming this is easy to achieve; what I’m trying to highlight is that we’re much more malleable than we expect. If we can achieve feelings of embodiment in crude virtual environments that bear no comparison to realistic real-life footage, it’s naive to think this will be an insurmountable obstacle to complete realism.

Lastly, there are potential workarounds for mimicking those sensory experiences. A straightforward approach is to simulate the experience through some external device. The most basic example: if you’re moving, a fan can blow at you, with its speed adjusted in real time to whatever wind is supposed to be simulated. These setups can be much more elaborate, and variations of this approach have existed for a long time. They’re not perfect, but they are likewise impressive; they just aren’t used much because it’s resource-intensive to craft the experience just right. For most things, though, there is no technical blocker that we have no idea how to surpass. It will simply get better and better. The one possible exception is highly complex forms of touch. Replicating with an external device the precise feeling of running on a beach, for instance, would be quite difficult. Part of me wants to say impossible, due to the sheer complexity, but I’ve learned my lesson about what to consider impossible.
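As a sketch of how simple the basic version of this idea is, the loop below maps a simulated wind speed to a fan’s power level in real time. `read_sim_wind` and `set_fan_pwm` are hypothetical stand-ins for whatever a VR engine and fan controller would actually expose; this illustrates the concept, not any real device API:

```python
import time

MAX_WIND_MS = 20.0  # simulated wind speed (m/s) that maps to full fan power

def wind_to_pwm(wind_ms: float) -> int:
    """Clamp and scale a simulated wind speed to a 0-255 PWM duty cycle."""
    frac = max(0.0, min(wind_ms / MAX_WIND_MS, 1.0))
    return round(frac * 255)

def haptic_wind_loop(read_sim_wind, set_fan_pwm, hz: float = 30.0) -> None:
    # Poll the simulation and update the fan at a fixed rate.
    while True:
        set_fan_pwm(wind_to_pwm(read_sim_wind()))
        time.sleep(1.0 / hz)
```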

Nevertheless, there is another potential solution, one that bypasses external devices altogether. What you experience is not the world directly; rather, your experience is a brain representation, or simulation, of that world. This is why you can hallucinate or dream and feel like it’s real. One way to put it is that it feels real because it is real, in the sense that it is constructed just as everyday life is constructed by your brain. Hallucinations and dreams often feel very different from everyday life, but that’s because they come bundled with countless other differences in the overall state of conscious experience; it’s not an inherent limitation. You can certainly have dreams or hallucinations that, by chance, in certain moments, lack the features you’d typically associate with non-real worlds, such as randomness or incoherence, and are perfectly indistinguishable from everyday reality. Therefore, another path to creating such worlds is to skip simulating sensations through external apparatus and simulate them directly in the brain, through some kind of neural stimulation. This already exists in a very basic form, and it works precisely in the way described above. Doing this for very complex experiences is incredibly difficult, but it’s not outside the realm of possibility.

It’s also worth mentioning why I believe SD is a particularly significant milestone for the creation of virtual worlds. The reason is that any existing realistic virtual world has to be manually crafted, demanding an incredible amount of time and skill. Granted, 3D software has advanced immensely, CGI is much less manual than it was 20 years ago, and some aspects have been automated or made easily adjustable. Nevertheless, nothing compares to the near-immediate generation of SD. It’s much the same comparison as between SD images and Photoshop edits. It’s one thing to be able to construct a world; it’s another thing entirely to have a system that constructs the world for you, in minutes, from a simple text instruction. It opens the possibility of infinite worlds, limited by neither time nor imagination. In my view, this is a vastly different situation from a hyper-realistic game that a company spent thousands of hours and millions of dollars developing, only to craft one particular world.

This alarmism may seem misplaced. Such scenarios have long lived in the human imagination; we have been writing about them for a very long time. Not only in academic circles among those interested in such matters, but also in the population more broadly, even if never explicitly: consumed, digested, and reflected through works of art. The latter is a very broad category, but perhaps its most immediate and recognizable form is fiction about technological dystopias, where individuals and society struggle with all the aforementioned problems. In some sense, we knew this was coming. So why the alarmism now? Because before, these were abstract and distant concerns, not just in temporal terms but in technological ones. Just a few years ago, something like today’s LLMs would have been unthinkable. While the technology itself certainly existed, its results were very mediocre. It wasn’t until around GPT-3 that it made a giant leap. The same applies to image generation through stable-diffusion techniques: while the technology has existed for a long time, its current iterations would have been unthinkable a few years ago. So while we have fantasized about a technological dystopia of simulated humans and simulated worlds, it was always anchored in fantasy alone. No matter how concerned one may have been about such matters, it was always difficult to give them real existential weight, because the required technology did not exist to any meaningful degree.

It’s not just about the existence of the technology itself; it’s about its proximity to reality. No one could have predicted we would have this much power in such a short period of time. Those who did predict it did so on baseless speculation: there was no reason to expect such drastic results from these techniques, given how they were performing at the time. It’s natural that they would get better, but as with any other technology, they should have gotten marginally better year after year. That’s not what happened. They got massively better, making gigantic leaps in the space of a few months. The reason this is such a risk now is that, for the very first time, the path to this dystopia is known. It is achievable with existing technology; it’s just a matter of time. No one knows how much time, perhaps 6 months, perhaps 3 to 5 years. But it is coming, and very soon. It has stopped being an unknown unknown.

Before, we could imagine such dystopian worlds, but there was no intelligible path to them. You could have put the most brilliant minds on the planet together, working on nothing but these problems with infinite resources, and it would not have been achieved, in the same way that we couldn’t have developed the computer in the 15th century: we wouldn’t even have known what path to take. Similarly, today we can think about traveling through wormholes, but we are simply in the space of fantasy. Not only is the technology not there; we have no idea what path would even take us to it.

At least for me, 16th February 2024 was the day this dystopian future became real. Perhaps the realization came too late. Maybe the impressive results of stable diffusion on still images should have been a clear sign that the creation of worlds would soon be achievable. I may have been naive, but I believed there was still a very significant leap to realistic video. I vaguely “understood” how we got to the current generation of images, but I assumed video was worlds apart in complexity, that AI would run into all kinds of problems and limitations that would take forever to fine-tune to an acceptable standard. I was wrong. It’s here.

Not only are all these technological developments problematic in themselves, they are even more so given the cultural framework we inhabit. Even before LLMs or SD, I had long been worried about the direction of our technological obsession. Human life was once conceptualized as a moral and spiritual journey, but owing to a variety of scientific, religious, and philosophical changes in society, that conception has been significantly eroded over the past 300 years. It is not just that life has been secularized, which is problematic but at least better than nothing; we have been heading precisely for nothing. Even the original conceptions of human life as the pursuit of the good, the true, and the beautiful increasingly feel like a distant poetic vision, unrelated to modern life.

Instead, modernity is concerned with things: things to spend and things to create. Nowhere has this been clearer to me than in education. Traditionally, education meant becoming an educated person, in the sense of a classical liberal education: learning about art, philosophy, history, and literature. The objective was to become a better person, a cultured person with the capacity for deep thinking and for understanding the world. Not a world of facts, but the human world.

Such an ideal is increasingly disappearing. Many no longer feel the need to be educated in that sense. The goal is simply to become trained for a given purpose, and that purpose is often getting a job. Nowhere is this more emblematic than in our increasing focus on STEM. I’m not against STEM, and in many respects it is a passion of mine. Providing a fully nuanced perspective here would turn this into a fully-fledged book, so some oversimplifications and shortcuts are required.

Generally speaking, STEM is pragmatic. People go to university to learn a particular set of skills in order to become technicians. They want to become technicians because technicians are well compensated, and they are well compensated because they are in demand, often to create things that have nothing to do with human values. The goal is very often some technological advance that will produce revenue. The most illustrative example is software engineering. People want to learn to code because there is a good market for it, and there is a good market for it because some company needs code written in order to run its business more efficiently and generate more revenue.

At best, the outcome is some scientific discovery. But even that has become completely removed from basic human life. Even when hundreds of scientists collaborate, using equipment worth billions of dollars, to discover some previously unknown particle, it’s not very clear how that helps any of us, except in one very straightforward way: increased knowledge of the world, of the non-human world, is a prerequisite for more technology. Sometimes with a profit motive, and sometimes with nothing but a desire for further knowledge and more technology, for its own sake.

In all of this, there is no conception of life centered on actual humanity. Instead, there is a weird fetishization of technology and profit: a pre-made template of learning to do things, making those things, profiting from them, and coming up with better and faster ways to repeat the entire process. All the philosophical and spiritual questions that have defined mankind since the beginning of time are forgotten.

All of this was already underway before LLMs, but they make it even more problematic. Not only is a proper education less common and less desirable, it is much harder to achieve, because there is always a shortcut available. You no longer have to read the text; you can read a summary. You no longer have to learn to write, because AI can write for you. I’m aware these shortcuts have limits, and I’m not claiming this is universal or unavoidable. Yet the pull becomes stronger and stronger, with worse and worse consequences. It will be harder to learn how to think, not only because of the temptation of shortcuts but because education and culture themselves will move away from what thinking requires. Because, once again, everything becomes subservient to a pragmatic goal: a job or a career.

It’s bad enough that with the technological advances of LLMs and SD our human world will erode, increasingly blurred with the non-human and the artificial. It’s infinitely worse because of our existing cultural predicament. Not only will we move down that anti-human and anti-real path, but we will have a society that barely cares about the transition. Debates will be had about why it even matters whether I’m communicating with a human or an AI, or whether I’m spending my time in the real world or a virtual one. Why should we even think of this world as real? What’s the point of human connection in the first place? I’m convinced I know the answers, but I doubt I can convince you of them if we disagree on what those answers are. That’s not my goal, anyhow. My goal is to make clear the direction in which we’re heading.

I have a hard time quantifying my levels of pessimism and hope. On one hand, the pull of technology and profit is strong and only getting stronger, and society is trending in a direction completely antithetical to a good human life. Yet some part of me retains hope that the spirit of truth and goodness is a flame that can never be completely extinguished. No matter how dystopian our world gets, with infinite realistic virtual worlds full of AI, customized and maximized for my entertainment and pleasure, there are sacred questions that retain the power to break this prison: Who am I? What’s the point of life? What should I do? What’s best for me, and best for everyone else? How can I know such answers?

However, such questions will find it increasingly difficult to sprout, because we will be distracted, inhabiting a world in which they are hard to make sense of. The scariest thought of all is that these questions will still rise naturally from our souls, but our brains will be so infected that we will have no option but to conclude they are mind viruses, devoid of meaning, unintelligible gibberish. We are already trending in this direction: everything is viewed through the lens of science and facts, and any philosophical or spiritual question is met with skepticism. Not because such questions are hard, but because they are not considered true questions. Only something you can formalize, analyze, measure, and experiment with counts as a true question.

The path to preventing this dystopia, to whatever degree it is possible, is not a fight against technology. Technology is not inherently harmful, and that is a fight we would certainly lose. What we need is a cultural revival that cures us of our technological disease. The disease isn’t the technology itself, but the drive that creates it and the way we frame its use. We need to return to a human-centric worldview concerned with values, not with objects, automation, and revenue. Lastly, we should view technology with skepticism, understanding that while it is not impossible to integrate with human flourishing, it nevertheless exerts an unavoidable gravity of its own. We cannot eliminate that gravity, but if we are aware of it, we may be careful enough to move without getting sucked in.
