I want to thank Roman Nikolaev for your question. I adapted it into one of the chapters of the book I'm writing, Leadership as a Verb, and wrote this essay. It has opened a Pandora's box for me.
You are very welcome :) I enjoy your essays, and I'm happy that my thinking contributed to yours.
Likewise. I feel that hearing what others think is a great way to keep us together and express ourselves. What a small question can bring...
If we imagine that automation systems really could replace our labour, the question I keep coming back to is this:
Why are automation systems convincing companies to replace us with mindless chips? Why do some companies not want humans in the equation?
I've been spiralling down a research rabbit hole lately because of this article I wrote, and the more I dig, the more uncomfortable the questions become. I wanted to share a few notes from that investigation with you all, because it has completely changed how I view my Monday mornings.
What I've discovered is that our obsession with "what we do" isn't just a social habit; it's a historical hangover. During the Reformation, this radical idea emerged that worldly labour was actually a religious duty, where success in your job was seen as a sign of grace or character. Fast forward to today, and we've mostly stopped working for God and started working for our own identity, but that crushing moral weight never actually left us.
It's led to some pretty sobering realisations I'm currently chewing on. I'm starting to see that when people ask "what do you do," it's often just a subconscious way to calculate how much respect to give us: an efficiency tool for a world of strangers. Because we're told our careers are the result of our choices and talent, we've fallen into the trap where our career becomes our essence. Answering with a job title is a safe, clinical defence mechanism: it protects our inner selves from strangers by giving them a "file" to put us in, so we don't have to show them who we actually are.
The scariest part of this investigation, though, is realising how fragile this makes us. When we define ourselves by our jobs, we build our identity on shifting sand. If you are your job, then losing that job means literally losing your self. It explains why retirement or lay-offs feel less like economic shifts and more like existential deaths. I’m trying to learn how to be a person again, not just a set of professional functions.
This line stuck with me: “The role was the container. What he was losing was what the container had been holding.”
AI may automate functions, but the deeper disruption is psychological, as people realize their identity was built on something replaceable.
It's deeper than we thought. Perhaps a blip in our society and in some of our beliefs about how we actually see ourselves today.
The distinction between process people and results people is something I haven't seen named anywhere else. Roman's question — "what am I here for?" — is the one most people are avoiding. Thank you for not letting us avoid it!
Thank you, Raghav. Indeed, we are accepting that AI will take our jobs, but in my view, that is not an approach that suits us. Roman's question, 'What am I here for?', speaks volumes. Besides being a philosophical point, it is an honest conversation that we need to have with ourselves and others. In Western society, we were told that our jobs define us, and now that same system is saying we will no longer have a job. It is scary, but it also suggests that the system we currently operate within was never intended to give us an identity.
That's a good point, Diamantino! Maybe we need to be defined by more than just what our job title suggests. And if anything, AI has exposed this identity question rather than just grabbed headlines for job disruptions.
Yes, I do think GenAI is exposing this, or at least making us question it, because until now we only ever asked: if I lose my job, what will my next job be? Now that next job doesn't feel like a possibility. I don't blame the tech, but the business models behind it. You can imagine how this benefits certain companies: not paying wages, not paying taxes, eroding human rights at work, among many other things.
Agreed. Although I think people still identify with work as a profession that gives them status and recognition, so letting that go will take some mindset shifting. The systems are fundamentally too outdated to even address some of the challenges we will encounter as we shift how we work and what really counts as value-added.
Indeed, we still operate in a secular system. I'm not sure how we could change the system, perhaps by simply identifying what it means to us and going from there. It's a complex situation, but one that needs to be discussed.
Thank you for taking the time to think deeply about my questions.
The effect on the new generation now entering the workforce is indeed something worth thinking about.
I can imagine it feels very uncertain now for these folks.
My personal answer to this is that doing work for people, with people, is the answer. Empathy, the human touch, and working as a group cannot be automated (or so I hope). These are timeless skills worth investing in.
The instinct you are describing is one I share. And I think you are right that genuine human connection is not something a model can replicate. Not really.
What I keep sitting with is a slightly different question. Not whether empathy can be automated. But whether the systems being built are designed to value it.
An organisation that rewards speed, output, and measurable performance has always found ways to route around the things that are slow and hard to count. Empathy takes time. Trust takes time. Working with people rather than through them takes time. Those things were undervalued before AI arrived.
The new generation you are thinking about will need those skills. I just wonder whether the rooms they are walking into were built to recognise them.
What gives you confidence that the organisations hiring them now are actually investing in that direction?
I think organizations don’t have another choice. Even before AI, culture was the competitive advantage. Organizations with less politics and high psychological safety outperformed the ones with bureaucracy and backstabbing.
The org didn't need to have a great culture to be a leader - just a better one than the competition's. I know it is a strong claim, but it is my experience.
The organizations that didn't get their culture right slowly declined; the bigger the organization and the stronger its brand, the longer it took. But it happened nevertheless.
Now that we have turbocharged the change (at least in some industries), the process of rise and decline will accelerate.
These soft skills we talk about become differentiators, which will define whether the organization wins or loses. I might be too idealistic. Time will tell.
I have seen the same pattern. Organisations with genuine psychological safety move faster, recover better, and hold onto the people worth keeping. But this market is not fair. That's why we even have investments that bet on the decline of entire countries.
Where I find myself hesitant is the assumption underneath the argument. It requires the market to punish bad culture reliably enough and quickly enough to change behaviour. And the last thirty years suggest the market is more patient with dysfunction than we would like it to be.
Some of the most extractive, political, and psychologically unsafe organisations I have encountered were also the most profitable. Not despite the culture. Sometimes because of it. Fear moves fast. Compliance scales. Backstabbing is, in certain structures, an efficient way to concentrate decision-making.
What GenAI may do is accelerate the decline you are describing. But it may also turbocharge the extractive models first, before the reckoning arrives.
The organisations worth building are the ones you describe. I just wonder whether the market will reward them in time, or whether the people inside them will have to hold that belief on faith for longer than seems fair. Because at the moment extraction is what gets rewarded, and we need a system that rewards the non-extractive models.
I hope it makes sense to you. I don’t know what will happen, nobody does. We are just guessing, but it is fun :)
It makes sense, and indeed solutions are abundant, but action is another story.
Are these organizations from your past still doing well?
In my experience, the decline is slow but inevitable; the ones that are rotten inside will shrink and die unless they reinvent themselves.
Now the decline that used to take twenty years might happen in five.
If an organization sits on a big deposit of natural resources or owns critical infrastructure, it is well entrenched, but at the same time it becomes complacent, and when competition arrives, it will topple quickly.
Look at what happened to the American car industry, for example.
Regarding the profitability of those first organizations: maybe the question is not whether they are ethical in a broad sense; the question is whether they are effective internally. Can people inside these organizations work well together?
Indeed, we can question how internally effective these companies are. Most will reinvent themselves; we have seen that many times, and that's good. The system as we have it today promotes an extraction model, justifying its methods by saying it's either this way or the hard way. When in fact, as consumers, citizens, and individuals, we can dramatically change how certain corporations behave. If we penalise companies that go against our dignity and destroy our environment, governments will put pressure on them, just as we can put pressure on governments. But this requires a change in our lifestyles. Are we ready for that? I'm not sure. Could this be the solution? I'm not sure either, but it feels like a good starting point.
Just stumbled upon this article: https://substack.com/home/post/p-190363393
Especially this part in regard to our conversation.
The evidence on human-AI collaboration offers a counter-weight worth taking seriously. Garry Kasparov, writing in Harvard Business Review, drew on “advanced chess” competitions where teams used AI and humans together. The winning teams ran better processes, not better hardware. His framing: “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”
A separate MIT Sloan analysis published in June 2025 identified five human capability clusters where employment grew between 2016 and 2024. The framework is called EPOCH: Empathy, Presence, Opinion and judgment, Creativity, and Hope-based leadership. These are the parts of professional work where AI coverage is lowest, and where BLS growth projections remain strongest.
Of course, it is just an opinion, and there is confirmation bias. But our discussion was fresh in my mind.
I am pessimistic about the current state of things, looking at where our world is heading.
But back to the original topic. I believe that teamwork and smooth collaboration will be the differentiator for businesses, and it is what young people should invest in.
Understanding themselves, being curious, learning how to take responsibility, working with others, and learning how to lead themselves and others. These skills can be used for good or for evil, but they are the key nevertheless when technical expertise gets automated.