I fear that today some of our leaders are not, in fact, leaders of good will but delusional ones.
When someone says "A whole civilization will die tonight," it tells you exactly what type of person they are. This type of person must be held accountable for their actions.
But I wonder why this happens. What systems allow these threats to be made public and posted like normal news? Have we become indifferent?
Feels like a bluff, but one playing with millions of lives... we should not ignore this. I feel you.
My concern is that we are normalising this. Just because it's happening out there doesn't mean we shouldn't care.
This one is alarming: Adults Lose Skills to AI. Children Never Build Them.
https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them
This is great info; you should read it as well:
It says that organisations must deliberately preserve and build human expertise by reintroducing targeted friction, requiring active reasoning, and training people to critically evaluate AI outputs, not because friction is good in itself, but because the struggle is what builds the judgment needed to work safely with AI.
https://cognitiveworld.com/articles/2026/3/19/skill-atrophy-frictionless-ai-and-cognitive-debt
Reading the news, I found out that there is an active legislative push against "Cognitive Manipulation" and "Dark Patterns." Regions in the Mediterranean (like Portugal and Spain) are at the forefront of the "Right to Disconnect" movements, which have evolved into discussions about "offline-first" urban zones to protect mental well-being.
https://www.weforum.org/stories/2026/03/how-cognitive-manipulation-and-ai-will-shape-disinformation-in-2026/
This essay feels urgent right now because we're watching it happen in real time. Every organisation racing to deploy AI for 'efficiency gains' is essentially choosing the precision of the map over the territory.
But here's what I keep coming back to: the rooftop conversation in Lisbon isn't just a personal crisis for that founder. It's becoming a systemic one. We're building entire institutions (schools, healthcare, management) around AI as a substitute for responsive human presence. And we're calling it progress.
The dangerous part isn't that AI will replace human judgment. It's that we'll replace human judgment with AI, not because it's better, but because it's cheaper and scales. We'll have optimised ourselves into a world where the responsive knife is considered an inefficiency.
Incredibly thought-provoking.
AI is forcing a question that many avoid: what do we actually exist for? The productivity hype gave us a convenient answer to avoid the discomfort of going the deeper, existential route. AI is now dismantling that.
Two groups are emerging. One is using AI to do more: more output, more speed. The other is using it to be more, protecting the bandwidth to do what AI never will. Sense the room. Feel the weight of a conversation. Show up fully present. Sit in the mud with someone and know that offering an answer is the exact opposite of what you should do.
In my opinion, the leaders who will matter most won't be the ones who produced the most. They'll be the ones who doubled down on what makes us human and led effectively from that place of purpose.
Maybe the real test of 'sitting in the mud with someone' isn't just showing up present; it's also having the spine to name when someone's operating in bad faith. Presence without accountability might just be complicity dressed up as wisdom.
Also agree - it's both/and. Operating in the tension between empathy/compassion and accountability. Still, I think the first action is to listen and understand before anything else.
Yes, listening and understanding are a must. A few believe that takes time, and since taking time feels like a waste of time, assumptions are often chosen to "speed things up". I keep asking: why the rush?
I feel the leaders who'll have real influence are probably the ones comfortable sitting with ambiguity and resistance rather than rushing to solve it. There's something deeply counter-cultural about choosing presence over productivity, especially when the systems around us are built to reward the opposite.
I agree - resisting the urge to solve or fix it immediately, and sitting in the uncertainty instead, usually leads to more of a team approach to handling whatever it is.
In tech, sometimes I feel we suffer from "solution-oriented mode only": no matter what we're told, we almost immediately come up with a solution, sometimes without knowing all the facts. Perhaps it's due to our environment, which keeps pushing us to deliver more in less and less time. Feels like we have become CPUs...
When you say "sit in the mud with someone", are you talking about vulnerability, shared struggle?
Yes, I love Simon Sinek’s thinking on this, which is where I borrowed the phrase from:
https://youtu.be/9sV8trjqpQU?si=7QsJntxKReR-xOcb
Got it, yes, indeed. Like he mentions, the best thing a friend can do is to listen, and not try to solve it straight away. Sometimes I think there are no solutions, and that's fine, I suppose. Not everything is a problem to solve...
I just want to say how much this piece resonated with me. The section about not wanting AI, automation, or even efficiency – but rather asking what the destination of all this progress really is – really hit home. I felt that same uneasy recognition you describe: the discomfort not of a difficult question, but of one I suspect I already know the answer to and haven't yet voiced.
It’s rare to see someone take a step back from the technical, the economic, and the procedural, and point straight at the core ethical and existential choice we’re making as a species. Your framing reminds me that all the debates about regulation, redistribution, or retraining are downstream – they only matter once we’ve clarified what kind of world we actually want to arrive at.
Thank you for naming this clearly! It’s uncomfortable, necessary, and exactly the kind of reflection that pushes us beyond reactive thinking toward conscious direction!