When AI investments move faster than human judgment
The xAI funding round reveals how ethical reasoning gets left behind when billions move fast
One thing I haven’t been able to let go of lately is that Elon Musk has reportedly closed a $20 billion funding round for xAI.
The company’s main product, Grok, has been criticised for generating deepfakes, non-consensual sexual images of women, and sexual abuse images of children. France, the United Kingdom, and Italy are among the countries that have issued warnings or taken action against the company, citing concerns about the platform’s role in disseminating illegal content.
Despite this, xAI has attracted investment from major backers, including Nvidia and Qatar’s sovereign wealth fund.
I find this difficult to understand.
Not because controversies around technology companies are new (they aren’t), but because the risks here are unusually visible. Regulatory exposure and reputational damage are obvious. Beyond that, there is a more fundamental question: What kind of world are these investments helping to build?
We often say that technology is neutral and that it is people who choose how it is used. With AI, the issue is not whether the technology itself is dangerous, but that it enables systems whose consequences scale far faster than responsibility does.
Markets are designed to reward financial outcomes. But investment decisions are still made by people — and people bring values into every choice, whether they acknowledge them or not.
What concerns me most isn’t that people make unethical choices. It’s that many decisions don’t seem to involve ethical reasoning at all.
Instead, I often hear arguments like:
– "Everyone else is doing it."
– "My action won’t change the outcome."
– "If I don’t do it, someone else will."
These are not ethical arguments. They are ways of avoiding ethical responsibility.
Ethics is frequently described as a core leadership skill of the future — including in the World Economic Forum’s Future of Jobs report. Yet it is rarely practised in everyday decision-making. We tend to associate ethics with abstract dilemmas — Is animal testing ever justified? — rather than with real, recurring choices involving capital, power, and influence.
Ethical capability isn’t about having the “right” answer. It’s about being able to explain why a decision is acceptable — and where the line is.
That capability only develops through practice.
There may be something to learn from younger generations, many of whom are actively seeking purpose, meaning, and moral guidance. In Sweden, religious institutions have seen a notable increase in young members in recent years — not necessarily because of doctrine, but because questions of right and wrong are discussed openly, repeatedly, and collectively.
I’m not suggesting anyone should join a church.
However, in an AI-shaped economy, leaders and investors will need something more substantial than “it’s legal” or “everyone else is doing it” to guide their decisions.
Technology may be neutral. The decisions we make around it are not.
And as AI amplifies consequences – good and bad – ethical judgment stops being a philosophical nice-to-have. It becomes a practical requirement for anyone who wants to shape the future responsibly.
Anna
Weekly Recommendations
Brains and Learning — Can you rewire your brain? The metaphor of rewiring offers an ideal of engineered precision. But this essay explains how the brain is more like a forest than a circuit board.
Weapons and Politics — A new Bellingcat investigation shows that American weapons were being dropped on Gaza long before the ceasefire.