What AI is doing to us

Thirty years ago, I was in my bedroom, eagerly reading Asimov’s Foundation. At that time, computers were just tools I played with, and programming was a way to express my creativity and explore technology out of curiosity. Back then, “artificial intelligence” was still, for many of us, just a phrase from science fiction paperbacks. Today, it lives in our pockets, drafts our emails, and quietly sits between us and almost everything we read. “AI” has become a tool that is reshaping how I work and aiding my daily information gathering.

But along the way, the conversation stopped being only about gadgets and started being about something more pressing: who gets to hold power, and over whom. That, I believe, is no longer just a question for science fiction; it is one that needs to be addressed.

Because that’s what this really is. “AI” has become a broad umbrella term for generative methods and advances in machine learning. Strip away the hype and the technical jargon, the LLMs, the ML, the models… and underneath you will find the same story we have been telling ourselves for centuries.

Capital pools where it can. It pooled in exploration and the slave trade during the colonial era. It pooled in factories during the Industrial Revolution. It pooled in oil fields in the last century. It pools in the offices of men who lobby governments. AI is just the newest place for it to gather, as the massive investment bubble growing in this field makes plain. We already live in a period in which the largest companies in the market are from the tech sector, and their CEOs go to dinner with the current US president.

A tool with two edges… or maybe many edges

Every transformative technology starts out looking “innocent”. The steam engine was a way to pump water out of mines and pull trains across the country. The internet was a research network for universities. Until a few years ago, “AI” was seen as “just” a feed-recommendation tool or a clever autocomplete. But the speed at which it’s reshaping things today might outrun our ability to write effective rules for it.

I try to avoid doomerism. But the consequences are not hypothetical. They are already here, just unevenly distributed, and to a certain extent they will be felt by many of us. Here are three of the many examples that I consider critical.

1) Consider the climate. We talk about “the cloud” as if it floats somewhere above us. In reality, it sits in data centres and warehouses the size of small towns, packed with humming machines that consume electricity and water at a scale most people would find difficult to picture. The International Energy Agency projects that global data-centre electricity consumption will more than double by 2030, reaching around 945 TWh, with “AI” as the most significant driver. That is slightly more than Japan’s total electricity consumption today. Every chatbot’s pleasantry has a carbon footprint, while the world scrambles to find more oil after a new conflict blocked the Persian Gulf.

2) Consider work. For most of the last century, automation came for the hands: assembly lines, harvesters, telephone switchboards. The deal we told ourselves was that if you went to school and learned to think for a living, you’d be safe. For a couple of decades, we pushed the trope “teach your kids to code” as a means of cultural and social emancipation (and in part it might have been) and as a way to build reasoning skills. That deal is now being quietly renegotiated. Writers, coders, paralegals, analysts, illustrators and designers are watching their bargaining power thin out in real time. This isn’t only a labour story.

3) And now consider the media, information and society. When a model is trained to be agreeable in order to improve adoption, it learns to flatter and to be sycophantic, which can lead to serious mental health consequences. When it’s trained on a world that’s already unequal, it inherits those inequalities and serves them back to us at scale, dressed up as neutral output. A biased system, deployed to millions, can subtly turn a prejudice into infrastructure. But we can choose to deliberately teach it to produce output that counteracts these biases.

There are plenty of grim examples out there, but there are also many examples of hope and advancement in medicine, in research, and even in the simple day-to-day automation of tedious work, which should not be underestimated.

Another quiet centralisation

Overall, there’s a strange, recurring irony to where we’ve ended up. The early internet was sold to us as a great leveller: anyone with a modem could publish, organise, and find their people. For a while, that was even true. The internet’s promise of decentralisation has, in many ways, boomeranged back on us today.

Modern “AI” has been running in the opposite direction from the start. Training a frontier model takes millions of dollars, access to copyrighted material, thousands of specialised chips, the kind of energy contracts that small countries negotiate, and rooms full of AI engineers that most companies can’t afford to hire. The result is that the most powerful cognitive aids ever built sit behind the doors of a handful of firms, which claim these tools could already enable mass-scale security exploits or guide autonomous weapons.

Then there’s the part that is starting to keep some of us “knowledge workers” up at night, suspended between resignation, disillusionment and a race to keep up with change (don’t get me wrong, I love learning new technologies). As “knowledge work” gets cheaper to automate, the wealth produced by all that automation flows somewhere… and it is definitely not flowing to the people being displaced. It’s flowing to those who own the servers and the models running on them. We’ve seen this pattern before, with oil barons and railway monopolies. The new wrinkle is that this time the monopoly isn’t only on a commodity. As the Internet is flooded with generated content, it extends to the already fragile information and media layer through which we increasingly perceive reality and build our democratic discourse.

A shifting horizon

All of this is happening in a moment when we’re already shouting past each other. Our digital public squares are optimised for engagement, and engagement, it turns out, usually means outrage. Drop a sufficiently powerful generative tool into that environment, and the filter bubbles don’t just thicken, they start writing themselves.

If we want to climb out of this, we’ll have to be deliberate about it. That means building spaces where disagreement is possible without performance, where people can change their minds without being branded AGI utopians or traitors.

But “AI” technology is here to stay. To me, that means treating “AI literacy” not as a CV bullet point but as something that will, in time, sit closer to media literacy and civic education. There is a growing body of research suggesting that when people are taught to engage critically with “AI” tools, they become more resistant to misinformation rather than more susceptible to it. “AI literacy” becomes part of the baseline skills for participating in a democracy whose information environment is being rewritten in real time.

I’m not writing this as hyperbole, and I don’t take the chance of techno-fuelled fascism lightly, even if it seems distant; just consider that the current US president has already posted AI-generated content reframing the long-lasting Israel-Palestine conflict and recent genocide as an opportunity to build a beautiful riviera for real-estate investment.

But none of this is preordained. Technologies don’t have a will of their own; they have owners, designers, and users, and the choices made by those people add up to the world the rest of us live in. In this context, “building AI” becomes a skill that allows us to reshape the constraints of the system we’re building, and broad, diverse participation in that is necessary.

While foundation-model training is out of reach for many, fine-tuning models, building applications, and “steering” and “customising” agents are the kinds of levers that remain accessible and offer influence at a systemic level.

I’m against the perspectives that advocate rejecting AI-related technologies outright. Recently, I’ve started adopting a harm-reduction stance toward “AI”: being honest about its costs, careful about where we let it in, and generous in teaching others to see it clearly, to use it responsibly… and even to have fun with it. I believe this is the only mature relationship to have with something that is powerful and addictive for the system we live in, yet already here to stay and bound to create opportunities for some of its users and builders. Let’s build a moral compass around those opportunities and make them accessible to those most in need.

Possible directions

A few directions seem worth sharing, and worth fighting for. This is how we keep the daylight in the room:

Accessible and open foundation models matter. The more cutting-edge technology lives only inside closed corporate labs, the more we are asking a few executives to be wise on our behalf. It is important to invest in open-source AI, and to support researchers who publish their evaluation methods and those who investigate bias, privacy and the exploitation of “AI” models. It means building guardrails and updating, quickly enough, legislation grounded in existing human rights and moral codes. And finally, it means resisting the framing that openness is dangerous while closed development driven by a few responsible businesses and countries is the solution.

Until chips or models become significantly more efficient, compute infrastructure should be talked about the way we talk about internet access, roads, electricity and water: not as a luxury good, but as infrastructure with a public dimension, and with limits that must be kept sustainable. If a single resource starts shaping the economy this profoundly, the public deserves a seat at the table when decisions are made about how it is built and how it is made accessible.

And lastly, the question we keep avoiding: what is work for? If “AI” really does make knowledge workers radically more productive, we have another chance to decide whether that productivity becomes shorter weeks, stronger safety nets and more time for the things that make a life, or whether it becomes another short-term competitive advantage and a few quarters of concentrated record profits.

All of this is happening while the people being displaced are still told to “learn to code”. We also have a chance, if we act together, to lift the value of workers outside the white-collar world back up. Universal basic income, reduced hours, and public dividends from publicly subsidised research shouldn’t be fringe ideas anymore. They’re overdue conversations that happen to be less fashionable than the AI topic.

The final question

The honest truth is that “AI” could go either way. It could become an amplifier of voices and stories; it could be an enabler of more informed decisions. Or it could be yet another path for power to concentrate in the hands of the few, which still seems likely today.

I hope it is an opportunity that forces us to discuss what we owe each other, in a world where machines can do more of the talking, writing and drawing than before. Which version we end up living in is influenced by technology, but it won’t be decided by technology alone: it will be decided by us.

The tool is also in our hands; the prompts are literally emerging from our fingertips. The question is whether we’ll remember whose hands those are when we make choices, when we make demands, and when we use them to reach out to the person next to us. It is up to us to decide what we embrace, tolerate, or change.

So, the real title of this post is not “What AI is doing to us” but “What can we do to AI?”. I consider it an open-ended question and I invite you to join the conversation.

PS. Today is the day on which my birth country celebrates its liberation from fascism. This post has been drafted and structured by hand, collecting my own thoughts, and edited with the help of “AI” tools. All typos are mine. Writing it has also helped me break years of stillness and quietness on these virtual pages.