A delusional ape hallucinating narratives
We built civilisation on the premise that precision is progress. AI just revealed how unsure of that premise we always were.
A note before you read. I used AI to pressure-test the argument in this essay. Not to write it. To challenge it. I will tell you where it surprised me and where it failed me, because that is the honest way to write about this subject.
When I use the word AI in this essay I am not acknowledging that these systems are intelligent. I am using the word the industry uses, because that is the word that has conquered the conversation. What we are actually talking about is an advanced deep learning model. Extraordinarily capable at pattern recognition, statistics, and probability. Not thinking. Not understanding. Not intelligent in any meaningful sense of the word. There is no academic consensus on what intelligence actually is, and there is certainly no evidence that these models possess it. The word AI is a marketing decision. It was chosen to make the technology feel inevitable, significant, and human. I use it here because refusing to use it would make the essay harder to read. But I want you to know that every time I write AI in this essay, I am describing a very powerful statistical engine. Nothing more. The intelligence is in the room. It is in you.
My son asked me last week what a tree was for.
He is eight. He had been sitting under one for twenty minutes, watching an ant carry something three times its size along a crack in the stone path. He was not bored. He was not seeking stimulation. He was simply present in the way that eight-year-olds are present when nobody has yet taught them that presence is an inefficiency.
I said trees make oxygen. They give us wood. Some trees give us fruit.
He looked at me the way children look at adults who have answered a different question.
What is it for, he said again.
I did not have a better answer. I had descriptions of function. I had economic utility. I had the language of what a tree produces, how a tree performs, what a tree delivers. I did not have an answer to what a tree is for in the way he was asking, which was not a question about output at all. It was a question about meaning. About whether the tree had some irreducible right to exist that had nothing to do with what it gave us.
I sat down on the path next to him and watched the ant.
We stayed there for a while. Neither of us said anything useful.
The question under every question
I have spent the last several years writing about AI, about power, about what technology is costing us as humans. I have written about the system that made work the answer to identity. I have written about the bargain that is breaking. I have written about the designer who built the beautiful interface without understanding what it was doing, and about the Mediterranean cultures that kept alive a way of being in the world that does not depend on employment to feel whole.
All of those essays were, in some sense, asking the same question. They just never quite asked it directly.
The question is this.
Do we actually want what we have been building toward.
Not do we want AI. Not do we want automation. Not even do we want efficiency or productivity or the particular kind of progress that the last two centuries have been organised around.
Do we want the destination. The thing at the end of the road we have been on. The world that arrives if the logic we have been following is followed all the way.
I have been sitting with this question for a long time. I find it uncomfortable in a specific way. Not the discomfort of a question that is difficult to answer. The discomfort of a question I suspect I already know the answer to, and am not yet ready to say out loud.
Because if the answer is no, then the problem is not how we regulate AI. It is not how we tax the offshore wealth or redistribute the efficiency gains or retrain the displaced workers. Those are all important questions, urgent questions, questions worth spending careers on. But they are downstream of a more fundamental choice about what kind of creature we are and what kind of world we are trying to build.
And that question is not a policy question. It is a philosophical one. A cultural one. Possibly a spiritual one, if that word can survive in a conversation about technology without becoming its own kind of evasion.
Precision as religion
A knife I have had for eleven years lives in the third drawer of my kitchen. It was given to me by a chef I once worked alongside briefly, in a different life, before the tech career consumed everything. He told me when he handed it over that a good knife is not precise. It is responsive. A precise knife does the same thing every time. A responsive knife does what the material needs. The difference, he said, is everything.
I have thought about that distinction more in the last two years than I thought about it in the previous nine.
Precision is the god of the current age. We worship it openly and without embarrassment. We build systems to eliminate the human variability that introduces imprecision. We measure everything that can be measured so that we can optimise everything that can be optimised. We have constructed an entire civilisation around the premise that precision is inherently better than its absence. That the precise answer is more valuable than the responsive one. That the consistent output is worth more than the one shaped by the moment and the material.
AI is the apotheosis of this religion. It is the most precise instrument we have ever built. It does the same thing, more or less, given the same input. It does not have bad days. It does not bring the residue of a difficult conversation into the next one. It does not read the room and decide that what the room actually needs is different from what it was asked for.
We look at this and call it intelligence.
I want to ask what we mean by that word. Not to score a philosophical point but because the confusion is consequential. When we call AI intelligent, we are making a claim about what intelligence is. We are saying that intelligence is pattern recognition at scale. That it is the capacity to retrieve and recombine information faster and more accurately than any human could. That it is, in short, a form of precision applied to cognition.
But every teacher I admired in my life was not precise. They were responsive. They could read a room of thirty students and feel which three were lost and which one was bored and which one needed to be challenged and which one needed to be left alone today because something was happening at home. They could not have told you how they knew. The knowing was in their hands and their eyes and their thirty years of being in rooms with children. It was imprecise, uncodifiable, irreducible to an algorithm. And it was, by any meaningful measure, intelligent in a way that no pattern-recognition system has yet approached.
The precision religion cannot account for that kind of intelligence because it cannot measure it. And what it cannot measure, it tends eventually to dismiss.
What we were running from
There is a story I want to tell about a man I met at a conference in Lisbon three years ago.
He was a founder. Mid-forties. His company had just been acquired for a number that made him, by any conventional measure, secure for several lifetimes. He was at the conference not because he needed to be but because he did not know what to do with a Tuesday that did not have a schedule.
We ended up talking for two hours at the edge of a rooftop bar, the city orange in the early evening, the smell of salt coming up from the river below. He was successful by every metric the productivity culture recognises. He was also, very quietly, one of the most lost people I had met in years.
He said: I thought the acquisition would feel like something. Like I had arrived somewhere. I keep waiting for the feeling.
I asked what feeling he was waiting for.
He thought about it. Then he said: that it was worth it.
He was not talking about money. He had the money. He was asking whether the thirty-hour days and the missed dinners and the relationships that had not survived the velocity and the identity so completely organised around his company that when the company was sold he genuinely did not know what he was anymore, whether all of that had been in the service of something. Whether there was a destination that justified the road.
He had achieved everything the system promised achievement looked like. And he was standing on a rooftop in Lisbon asking a stranger whether any of it had been worth it.
This is not an unusual story. I hear versions of it regularly. What is unusual is that he was willing to say it out loud, in those words, without the protective layer of lessons learned or pivots to the next chapter that most successful people deploy when the conversation gets close to the actual question.
The actual question, underneath his question, is the same one my son was asking about the tree.
Not what does it produce. What is it for.
We have built an entire civilisation of production without ever properly asking that question. Or rather, we asked it and then accepted an answer that turned out to be, on close examination, a circular reference. Work is for productivity. Productivity is for growth. Growth is for prosperity. Prosperity is for the good life. The good life is for, roughly speaking, more of the same.
The man on the rooftop had followed that logic all the way to its conclusion and found it empty.
He is not alone. He is just unusually honest.
The delusional ape
I want to use a phrase that was offered to me recently in a conversation about AI and identity. The person I was talking with, frustrated with the circularity of most AI discourse, said this.
“We are a delusional ape hallucinating narratives as we traverse this reality.”
It is not a kind description. It is also, I think, more true than most descriptions we use.
We are biological creatures who arrived on this planet through a process that had no intention and no plan. We developed the capacity for consciousness, which is extraordinary and still largely unexplained. And we used that consciousness primarily to construct stories about why we are here and what we are supposed to be doing and whether we are doing it correctly.
Every culture in human history has done this. The stories differ. The need to have a story does not.
The productivity gospel is one of these stories. It says: the purpose of a human life is to produce value. Progress means producing more value more efficiently. The good society is one organised to maximise production. The good life is one that contributes maximally to that production.
This story has been extraordinarily successful at generating material wealth. It has also been extraordinarily successful at generating misery of a particular kind. The misery of people who have followed the story faithfully and arrived at its promised destination and found it does not feel like destination at all. Who have produced and produced and optimised and achieved and looked up from the spreadsheet of their life to find the rooftop in Lisbon and the question that has no answer in the story’s own terms.
What the AI era is doing, among other things, is stress-testing this story at scale. If the purpose of a human life is to produce value, and machines can produce that value more efficiently, then what is the purpose of a human life.
The story cannot answer that question. Because the story was never really about purpose. It was about distraction. A narrative complex enough to occupy the delusional ape’s extraordinary consciousness so thoroughly that the underlying questions, what are we for, what do we owe each other, what does it mean to live rather than merely function, could be safely deferred.
AI is removing the distraction. Not intentionally. Not kindly. But unavoidably.
What is left when the distraction is gone is the question my son was asking about the tree.
What machines cannot know
I want to be careful here because this is where it is tempting to get the argument wrong.
The wrong version goes: humans are special, AI cannot replicate consciousness, therefore AI is not really intelligent, therefore the threat is overstated.
That is not what I am saying.
What I am saying is something different. That there is a category of knowledge that requires being mortal, embodied, and uncertain to access. And that this category of knowledge is not a small addendum to human intelligence. It is its foundation.
A surgeon who has never been afraid does not understand what it costs a patient to put their body in another person’s hands. A leader who has never failed does not understand what it takes for a team member to admit they are struggling. A parent who has never lost something essential does not know the particular quality of attention you give to what you have and are afraid of losing.
This is not sentimentality. It is epistemology. The knowledge that comes from being vulnerable, from having stakes, from being the kind of creature that can lose things and be changed by losing them, is a different category of knowledge from the knowledge that comes from pattern recognition across large datasets.
I think about this when I watch leaders try to use AI as a substitute for the difficult conversation. The conversation where someone needs to be told that what they are doing is not working. Where someone needs to hear, from a person they trust, that the direction is wrong. Where someone needs the specific experience of being in the room with another human being who has assessed the situation and is now offering them the honest version rather than the diplomatic one.
AI can generate the words. It can produce language that is, in many cases, more precise than what the leader would have said unprompted. More structured. Less emotional. Less contaminated by the relationship between the people in the room.
But the contamination is the point.
The reason the conversation matters is because it happens between people who have stakes. Who have history. Who will still be in the room together next week and the week after. Who are changed by what is said and by the fact of having said it. The weight of the conversation is not a problem to be designed out. It is the mechanism by which the message lands differently than it would on paper.
A piece of feedback delivered by someone who cares about you and is afraid of losing your respect and is choosing to say the difficult thing anyway is not the same piece of feedback delivered as a well-structured paragraph generated by a system that has no relationship with you and cannot lose anything by saying it.
The words might be identical. The knowledge transmitted is entirely different.
AI will get better at simulating the language of that knowledge. It will produce outputs that look, in many contexts, indistinguishable from the real thing. This is already true and will become more true.
But simulation is not the thing. A map of the territory is not the territory. Producing an answer to the question of what it means to grieve is not the same as knowing what it means to grieve.
The danger is not that we will mistake AI for human. The danger is that we will mistake the simulation for sufficient. That we will accept the map because the territory is too difficult to navigate. That we will build systems around the convincing imitation of human understanding and call that understanding, the way we built systems around the convincing imitation of human judgment and called that efficiency.
The chef who gave me the knife was teaching me something about the difference between responsive and precise. The responsive thing is harder. It requires presence, attention, accumulated knowledge that lives in the hands as much as the head. It cannot be replicated by a system that has never held the knife.
We are building a world in which the precise thing is systematically preferred over the responsive thing. Because the precise thing is cheaper to scale. Because the responsive thing requires the kind of human presence that cannot be extracted and replicated. Because the responsive thing keeps the human in the room and the human is, from a certain angle, the most expensive component in the system.
That preference is a choice. It is also, I want to argue, a mistake. Not just ethically. Practically.
The map and the territory
I talked to a CTO last month who had just finished a six-month implementation of an AI decision-support system for his team. The system was excellent. It was fast, it was consistent, it was right more often than the humans it was designed to support. By every metric the project had been measured against, it was a success.
He called it a success with the particular flatness of someone describing something that has cost more than the balance sheet shows.
I asked what was not on the metrics.
He was quiet for a moment. Then he said: my best people have stopped thinking as hard.
He did not mean they were lazy. He meant that the presence of a system that produced correct answers reliably had altered the relationship between his team and the problems they were solving. They were now problem-checkers more than problem-solvers. They reviewed the system’s output rather than generating their own. The cognitive muscle that gets built by sitting with a difficult question and not knowing the answer and having to find your way through it was being exercised less, and atrophying accordingly.
The system was optimising their output while quietly eroding their capacity.
This is not a new phenomenon. It is what calculators did to mental arithmetic. What GPS did to spatial reasoning. What spell-check did to the intuitive understanding of how words are constructed. Each individual substitution looked like efficiency. The cumulative effect was a slow narrowing of the range of things the human could do without the tool.
But mental arithmetic and spatial reasoning and spelling are relatively small losses. What the CTO was describing was a loss at a different order of magnitude. The capacity to think hard about difficult problems is not a peripheral skill. It is, arguably, the central skill. The thing that makes a team capable of navigating a situation the system was never trained on.
The territory of reality is not the map of the problems the system has seen before. Novel situations arrive. Situations the training data never contained. Situations that require a human being to sit with uncertainty and confusion and incomplete information and still make a judgment.
If the humans who are supposed to make those judgments have spent five years checking the system’s output instead of developing their own, they will not be ready when the situation arrives that the system cannot handle.
We are optimising for a world that does not exist, the one where the system always has an answer, while eroding our capacity to navigate the world that does, the one where sometimes there is no answer and you have to make your best guess from the position of a mortal creature with incomplete information and genuine stakes in the outcome.
The civilisation that forgot it was alive
The mechanistic worldview has a logic and the logic is internally consistent. Reality is a system. Systems can be understood. Understanding can be translated into control. Control can be translated into optimisation. Optimisation is progress.
This logic has produced antibiotics and clean water and agricultural yields that feed populations that would have been unimaginable two centuries ago. I do not want to romanticise a past that involved dying of infections and watching children starve. The mechanistic worldview has delivered real things of real value and anyone who dismisses it entirely is doing something dishonest.
But.
The mechanistic worldview is a tool, not a truth. It is a way of modelling reality that is extremely useful for certain kinds of problems and useless for others. The problem is that we have elevated it from tool to worldview. From a method to an identity. We have organised societies, economies, institutions, and finally the way we understand ourselves around a model of reality that was built for the manipulation of physical systems and applied, wholesale, to the question of how to live.
The result is what you see on the rooftop in Lisbon. A man who has succeeded by every measure the system offers, standing in the orange evening light, waiting for a feeling that the system was never designed to deliver.
The system optimises for measurable outputs. Meaning is not a measurable output. Love is not a measurable output. The particular quality of attention you give to an eight-year-old sitting under a tree watching an ant is not a measurable output. Beauty is not a measurable output. The kind of trust that is built between people over years of difficulty shared and survived is not a measurable output.
These things are not inefficiencies. They are what the efficiency is supposed to be for. And we have built a civilisation so thoroughly organised around the measurable that we have systematically devalued everything that falls outside the measurement.
What interests me is how thoroughly this worldview has colonised even the vocabulary available to us when we try to resist it. We cannot argue for the value of the three-hour meal without framing it in terms of wellbeing metrics and productivity benefits and research showing that social cohesion correlates with economic resilience. We cannot argue for rest without citing studies about cognitive performance after recovery. We cannot argue for the unmeasurable without first translating it into the measurable, because the measurable is the only language that the current system accepts as legitimate.
This is the deepest form of the problem. Not that we value the wrong things. But that we have lost access to a language for valuing things that cannot be measured. The language of the intrinsic. The language that says: this is worth something because it is, not because of what it produces.
My son has that language. He is eight. He has not yet traded it in for the other one.
The cultures that maintained the three-hour meal also maintained that language. Not as a luxury or a philosophical indulgence. As a practical necessity. As the thing that allowed them to keep building communities where people wanted to be rather than places people had to be because the economic logic left them no other option.
AI is arriving into a world where that language is already endangered. Where the people who might resist the substitution of the simulated for the real are working in the vocabulary of the system they are trying to resist. Where the argument for human connection gets made by citing engagement data and the argument for rest gets made by citing productivity research and the argument for the unmeasurable gets abandoned because nobody in the room where decisions are made has the language for it anymore.
This is the loss that is hardest to name. And the hardest to recover from.
What the Mediterranean kept
The cultures that economists spent decades describing as inefficient were, among other things, holding something that was not on any balance sheet.
A meal in a village in southern Portugal takes three hours. Not because the food takes three hours. Because the meal is not primarily about the food. It is about the particular alchemy that happens between people when they sit together without agenda and let the conversation go where it goes and allow the time to be what it is rather than what it can be extracted into.
This is not nostalgia. I am aware that those same villages contain their own forms of constraint and cruelty, their own hierarchies and injustices, their own things that should not be preserved. I am not arguing for a return to an imagined pastoral past.
I am arguing that the three-hour meal contains a piece of knowledge about what it means to be human that the productivity gospel cannot encode. That the thing happening in that meal, the thing that is not the food, is not a luxury or a bonus or a cultural affectation. It is a fundamental human activity. The building and maintenance of the bonds that make a person feel they exist in a world rather than merely passing through it.
The knowledge that this contains is not abstract. It is practical. Societies that maintained these structures retained a social fabric that, when economic catastrophe arrived, gave people something to stand on that was not their job title or their salary or their professional identity. They had each other. Not metaphorically. Concretely. People who would bring food. People who would sit with you. People for whom your value was not conditional on your output.
This is what the mechanistic worldview cannot produce, not because it cannot value these things in principle, but because it has no mechanism for measuring them and therefore no mechanism for protecting them when they conflict with something that can be measured.
We are now in a moment where what can be measured is being automated and what cannot be measured is being revealed as the only remaining irreducibly human territory. The delusional ape, it turns out, cannot be optimised out of its need for exactly the things that optimisation cannot account for.
And here is what is interesting about that. The AI companies understand this, at some level. The language they use to sell their products is saturated with the vocabulary of human connection. More time for the things that matter. Focus on what only you can do. Get back to the work that is truly yours. The promise is not efficiency for its own sake. The promise is that efficiency is the route back to humanity.
This is the most sophisticated version of the problem. Not that we are being sold a machine and told it is a tool. But that we are being sold efficiency and told it is the road to meaning. That by automating the tedious, we will recover the time and the energy to do the genuinely human things. That the machine is not replacing the human. It is freeing the human to be more fully human.
It is a beautiful story. It might even be true for some people in some contexts. The problem is the track record. The agricultural revolution was supposed to free humans from backbreaking physical labour. It did, for some people, in some places. It also created the conditions for industrial capitalism, which created new forms of backbreaking labour and new systems for extracting human time on behalf of people who owned the machines. The story the machines tell about themselves tends to be more optimistic than the story the people inside the system experience.
The question is not whether AI will free some time for some people. It will. The question is whose time, and freed for what, and who decides, and what happens to the people whose time is not freed but simply made redundant.
Those are not questions the technology answers. They are questions the politics decides. And the politics, right now, is being run by people who have a significant financial interest in a particular set of answers.
Do we want this
I want to return to the question I have been circling.
Do we actually want what we have been building toward.
I do not think most people do. I think most people, if you sat with them long enough on a rooftop in Lisbon or at a kitchen table in the afternoon or under a tree in a garden while a child watched an ant, would tell you that what they want is not more efficiency or more precision or more optimised engagement with algorithmically curated content.
They want to feel that they are here. That the people they love are here. That the work they do means something beyond its contribution to a metric. That they are participating in a life rather than executing a function.
These are not complicated wants. They are the wants of the delusional ape. The wants that every culture in human history has tried to address through ritual and religion and community and art and the particular way that humans have always organised themselves around fire and food and the stories that make the darkness less absolute.
AI is not answering those wants. In many cases it is specifically designed to redirect them. The engagement economy that funds most AI development is built on the premise that human wants can be substituted with simulations. That the want for connection can be satisfied with a notification. That the want for meaning can be addressed with a personalised content feed. That the want for recognition can be met with a like, which costs the giver nothing and delivers the receiver a small neurological reward that wears off in minutes and requires replenishment.
This is not a conspiracy. It is a business model. And it has been extraordinarily successful by the measures the business model uses, which do not include whether the humans inside the system are actually getting what they want.
I watched a senior engineer at a company I was advising try to explain to his teenage daughter what he had been working on for the previous three years. She was sixteen. She listened politely. Then she asked: but what does it do for people.
He described the product. The engagement metrics. The daily active users. The revenue model.
She said: but what does it do for people.
He understood what she was asking. He did not have an answer that satisfied either of them. Because the answer was honest and the honest answer was that it did something to people rather than for them. That the engagement metric and the human good were not the same number, and the company had spent three years optimising for the number while the other thing had been left to look after itself.
The sixteen-year-old was asking the same question my son asks about the tree. Not what does it produce. What is it for.
The question is not new. Every generation asks some version of it. What is different now is the scale and the speed. We have built systems of extraordinary reach, operating at a speed that leaves no room for the question to be asked between iterations, optimising for measures that were chosen for their measurability rather than their alignment with what actually makes a human life feel worth living.
And those systems are now building the next generation of systems in their own image.
The question do we want this is not a question about technology. It is a question about whether we are willing to look clearly at what we have been choosing and decide whether to keep choosing it.
The choosing is real. This is not inevitable. The three-hour meal did not disappear because of some unstoppable force of human nature. It disappeared because a set of economic and cultural choices systematically devalued the things it produced. Those choices can be made differently.
The question is whether we are willing to make them. Which requires first being willing to name what we actually want, as opposed to what the system has trained us to say we want.
There is a version of this that becomes a counsel of despair. A litany of losses that ends with the suggestion that modernity was a mistake and we should go back to some simpler life that never actually existed in the form we imagine it. That is not what I am saying and I want to be clear about that.
What I am saying is narrower and more practical. That the tools we build should serve the wants of the creature using them, not reshape the creature into something that fits the tool better. That when we notice a gap between what the metrics measure and what actually matters, the response should be to question the metric rather than to dismiss what the metric cannot capture. That the three-hour meal and the knife that is responsive rather than precise and the eight-year-old watching the ant are not romantic anachronisms. They are data about what human beings actually need that our measurement systems are not equipped to collect.
My son under the tree was not confused about what he wanted. He wanted to understand what the tree was for in a way that had nothing to do with output. He wanted the world to make sense at a level below function and utility and measurement.
He is eight. He has not yet been taught to call that wanting impractical.
I spent fifteen years in organisations that would have called it impractical. I spent a Saturday morning at a kitchen table noticing that I could not sit still with nothing to show for the hour, and understanding that the inability was not mine but the system’s, installed in me over years of being inside it.
I am still working on the uninstalling.
The delusional ape hallucinating narratives as it traverses this reality. That is what we are. Precisely and completely. Not as an insult. As a description of something extraordinary. We are the universe looking at itself and making stories about what it sees. We are the only thing we know of that does this. It is not a bug. It is the whole point.
The question is which stories we choose to tell.
The story we have been telling, that precision is progress, that output is value, that the optimised life is the good life, is running into its own limits. The arrival of AI is not creating those limits. It is revealing them. The story always had a problem at its core. The problem is that it described a destination nobody actually wanted to arrive at.
The man on the rooftop arrived at it. He was waiting for a feeling the story had no mechanism to deliver.
My son under the tree was asking for the thing the story had no language to describe.
The tree does not produce anything right now. It is simply there. Existing completely. Doing the thing that it is, without justification or apology or quarterly metrics.
There is a kind of knowledge in that. The kind that lives in hands and eyes and thirty years of being in rooms with things you have paid attention to. The kind that comes from being mortal and present and genuinely uncertain about what happens next.
The knife is responsive, not precise.
That is the difference. That is everything.
If this essay landed, the Tuesday posts come by email. Free. Subscribe below and the next one arrives in your inbox.
If you are ready for the room, the paid tier offers deeper essays, the book as it is being written, monthly live sessions, and direct access to the thinking before it is polished. The door is open.
About the Author
Tino Almeida is a tech leader, coach, and writer reshaping how we think about leadership in a burnout-driven world. With over 20 years at the intersection of engineering, DevOps, and team culture, he helps humans lead consciously from the inside out. When he’s not challenging outdated norms, he’s plotting how to make work more human, one verb at a time.