Who are you without the title?
Losing professional identity in the age of AI. And why that loss was by design, not by accident.
A note before you read. I used AI to pressure-test the argument in this essay. Not to write it. To challenge it. I will tell you where it surprised me and where it failed me, because that is the honest way to write about this subject.
When I use the word AI in this essay I am not acknowledging that these systems are intelligent. I am using the word the industry uses, because that is the word that has conquered the conversation. What we are actually talking about is an advanced deep learning model. Extraordinarily capable at pattern recognition, statistics, and probability. Not thinking. Not understanding. Not intelligent in any meaningful sense of the word. There is no academic consensus on what intelligence actually is, and there is certainly no evidence that these models possess it. The word AI is a marketing decision. It was chosen to make the technology feel inevitable, significant, and human. I use it here because refusing to use it would make the essay harder to read. But I want you to know that every time I write AI in this essay, I am describing a very powerful statistical engine. Nothing more. The intelligence is in the room. It is in you.
Many philosophers and psychologists argue we’d be better served building identity around character rather than output. But that’s easier said than done when the whole world keeps asking what you do for a living.
Why I started writing this
Last week, a subscriber named Roman Nikolaev left a message in the chat after I asked my readers for feedback.
It was short. One sentence.
“It would be interesting to read your take about losing professional identity in the age of AI.”
I read it twice. Then I closed the tab and went to make coffee.
I came back a few minutes later and the sentence was still there. Still doing something to me that I could not name immediately.
I have been writing about AI and its costs for over a year. I have written about data theft, about algorithmic harm, about the environmental cost of the infrastructure we are building without consent. I have written about what happens when systems are designed to keep people dependent and incurious. I have written about power, about who benefits, about the stories we tell to make inevitable what was always a choice.
But I had not written about this. About what happens to the person inside the machine when the machine changes shape. About the specific loss that does not show up in any economic model but shows up in almost every conversation I have had with a senior tech leader in the past eighteen months.
Roman’s question sat on the table like a stone.
I started writing the next morning. Sitting at my desk before the day started, the street outside still quiet, a cup of tea that went cold before I remembered to drink it. I wrote three pages and then stopped. Because I realised the essay I was writing was about the feeling. And my rule is this: name the mechanism, not just the feeling.
So I started again.
This is what I found.
The engineer who did not know who he was
I was sitting with a senior engineer last year. Coffee going cold on the table between us. Eleven years with his company. Teams led. Systems built. A title that took a decade to earn and a reputation that took longer.
His company had just announced that AI would handle the first pass on every architecture decision his team had made for the past three years.
He did not say he was angry. He did not say he was scared. He said something I have been carrying ever since.
He said I am not sure who I am at work anymore.
Not what he would do next. Not whether his job was safe. Not what skills he needed to develop. Who he was.
That sentence is the one I want to stay with. Because I think it is the most honest thing anyone has said to me about AI and the workplace in two years of watching this unfold. And I think most of the conversation we are having about professional identity and AI is missing it entirely.
We are talking about roles. We should be talking about the story underneath the role.
The wrong conversation
Most of what gets written about AI and professional identity is about jobs. About which roles will survive and which will not. About learning new functions and starting again and the future of work as an economic question. About what capabilities will be relevant in 2030 and which will have been automated away.
That conversation is not wrong. It is just not the conversation the engineer was having.
He was not talking about his role. He was talking about something underneath it. The story he had been telling about himself for eleven years. The story that explained why the work mattered, why the long hours were worth it, why he had chosen this particular life over other possible lives. The story that made the sacrifices legible.
The role was the container. What he was losing was what the container had been holding.
And here is the part that matters. He did not build that story himself. It was built for him. By the culture he worked inside. By the systems that rewarded certain kinds of expertise and made that expertise feel like identity. By the language of titles and seniority and accumulated knowledge that the tech industry uses to assign value to people. By eleven years of being told, in a hundred small ways, that what he knew was who he was.
The fragility was engineered. Long before the AI arrived to reveal it.
What a title actually does
When you accept a title at work, you are not just accepting a description of your responsibilities. You are accepting a position in a story.
Senior Engineer. Lead Architect. Head of Product. Director of Engineering. Each of those titles carries a narrative about competence, about earned authority, about where you sit in the hierarchy of people who know things. That narrative is not just external. It becomes internal. It becomes the answer to the question of who you are when someone at a dinner party asks what you do.
But the title does something more subtle than this. It becomes the primary mechanism through which you receive validation at work.
You know you are valuable because of what your title says you know. You know you are progressing because your title changes to reflect more seniority. You know you are respected because people with certain titles defer to you on certain decisions. The feedback loop runs through the title. And over years, without most people noticing it happening, the feedback loop becomes the identity.
This is not a personal failure. It is a system design.
The tech industry in particular has constructed an entire architecture of professional worth around the accumulation and display of technical expertise. The interview processes that test specific knowledge under time pressure. The performance reviews that reward demonstrated skill over collaborative wisdom. The culture of the senior developer who has seen everything and whose judgment is treated as the room’s ceiling. The mythology of the 10x engineer, the brilliant architect, the one person whose departure would bring the whole thing down.
Every one of those structures sends the same signal. Your value is what you know. Your identity is your expertise. Your security is your irreplaceability.
And then a tool arrives that can produce a first draft of the architecture in thirty seconds.
The upbringing question
Roman’s question opened something else for me that I had not expected.
When I started researching this essay, I kept asking the same question. Why does the loss of a professional role feel like a loss of self? Economically, you can explain it easily. The job provides income, structure, social connection. Losing it disrupts all three. But the engineer I sat with was not facing redundancy. His role still existed. His income was not threatened. What he was facing was something more interior than that.
So I started asking a different question. Where did we learn that our work is who we are?
The answer, when I found it, was not surprising. But it was uncomfortable.
Most of us grew up in households where adult worth was measured by what adults did for a living. The question that greets every child at every family gathering is not who are you becoming. It is what do you want to be when you grow up. The verb is telling. You do not have a job. You become one. Doctor. Engineer. Lawyer. Teacher. The role is not something you do. It is something you are.
This is reinforced through education. The entire design of most educational systems is not to produce curious, adaptable people. It is to produce people who can perform specific functions in an economy. The curriculum is an economy simulation. The grades are an employment forecast. What you study becomes what you are trained to be. And what you are trained to be becomes, over time, what you understand yourself to be.
By the time most people reach their first professional role, the collapsing of identity into function is already complete. They do not notice it because it happened gradually, over twenty years, in every institution they passed through. The school. The university. The interview process. The onboarding. The first performance review.
Then the tool arrives. And the function they were trained to be can now be partially replicated by a machine.
And they say I am not sure who I am anymore.
They are not being melodramatic. They are being precise.
The mechanism nobody is naming
The villain in this story is never a specific person. It is the system. So let me name the system precisely.
We built work cultures that deliberately collapsed the distance between a person and their role. Titles were not just descriptions. They became validation systems. The seniority ladder was not just an organisational structure. It was a story about worth. The concept of expertise was not just about capability. It was about irreplaceability, which is another word for security.
These structures served the companies that built them. A person whose identity is fused with their role will work harder to protect that role. A person who understands their worth through their title will negotiate less aggressively because losing the title feels like losing themselves. A person who has been told for a decade that their expertise is what makes them valuable will not easily question whether that expertise serves anyone other than their employer.
Identity fusion at work is not an accident. It is a management technology.
And then a different technology arrived. One that does not need to be validated. That does not need a title. That will produce the first draft of the architecture without asking for a performance review.
AI did not create the fragility. It revealed it. The fragility was always there, built into the foundations of how we structured the relationship between people and their work.
But the people who built the AI knew this was coming. Many of them said so. They knew that the systems they were releasing would automate the functions that people had built their identities around. They chose to release them anyway. Without any framework for what it means to be a person whose expertise has been made replicable. Without any meaningful conversation about the human cost of making human capability optional.
That was a choice. Not an inevitability. A choice made by people who had already answered the identity question for themselves, because they were the ones building the tool.
Professional theft
There is a second layer to this that most accounts leave out. The models were trained on the accumulated output of professional expertise. Code that engineers wrote. Problems that engineers solved. Answers that engineers shared publicly over decades. That knowledge was not licensed. It was not compensated. It was taken. So companies extracted the professional knowledge, built models on top of it, and are now deploying those models to reduce the value of the expertise they extracted. The engineers are not just being disrupted by AI. They are being disrupted by a version of their own work.
But the damage does not stop at the economic level. It cuts deeper because of something we rarely examine openly: for most professionals, work is not just what they do, it is who they are. Identity in modern life is heavily fused with professional output. This is not accidental. It is culturally reinforced from the moment someone first asks a child what they want to be when they grow up. By the time a person has spent a decade mastering a craft, that mastery feels inseparable from the self.
This is the vulnerability that makes the disruption so destabilising. When a tool can replicate your output cheaply, it does not just threaten your income. It threatens your sense of worth. And the cruelest irony is that the people most psychologically exposed are not the least skilled. They are often the most invested, the ones who gave the most of themselves to their craft, and whose knowledge quietly became the foundation these systems were built on.
They are then invited to pay a subscription for the privilege.
What I found when I asked the AI
At this point in writing the essay, I did something I want to be transparent about. I put the argument to the AI. I asked it to tell me where it thought I was wrong.
It said I was conflating two separate problems. The first is the economic disruption of AI, which is real and measurable. The second is the psychological disruption of identity, which is real but harder to measure. It said that my argument about identity fusion being engineered was compelling but potentially unfalsifiable. How would you distinguish a deliberately engineered identity collapse from one that emerged organically from the way human beings naturally seek meaning in their work?
That is a fair challenge.
My answer is this. The distinction between deliberate and organic may matter less than I suggested. What matters is that the structure exists, that it is being disrupted, and that nobody in a position of power designed any system to absorb that disruption. Whether the structure was deliberately engineered or organically evolved, the people deploying AI into workplaces had an obligation to understand it before disrupting it. Most of them did not.
The AI also pointed me somewhere I had not looked. It said the most significant identity loss from AI might not be happening to senior engineers at established companies. It might be happening to people who were just entering the workforce. People who had not yet had time to build a professional identity of any kind, but who had been told by the educational system that they were being prepared for a specific kind of work, and are now discovering that the work has been redesigned before they arrived.
I had not thought about it from that direction. It changed something in how I understood the intergenerational dimension of this problem.
The two kinds of expertise
There is expertise that lives in the function. And there is expertise that lives in the judgment.
Functional expertise is the ability to produce a specific output. Write the code. Design the architecture. Generate the report. Draft the proposal. This is the kind of expertise that is most legible to an organisation, because it produces things that can be measured. It is also the kind most replicable by AI, because it can be described precisely enough to train a model on.
I sat in a product review two years ago where the lead engineer spent forty minutes defending an architectural decision. The room had the data. The room had the alternatives. What the room did not have was someone willing to say this will not survive contact with the culture of the team that has to maintain it. The engineer knew that. He had been in enough rooms to feel it. He did not say it. Nobody had ever told him that was part of the job. His job was the architecture. The culture was someone else’s problem.
That gap between what he produced and what the room needed is where judgment expertise lives. Knowing which function to perform and when. Understanding the context surrounding the output. Reading the room. Knowing that the architecture that works technically will not work politically.
AI is very good at functional expertise. It is not good at judgment. It will produce the architecture. It cannot tell you whether the organisation is ready to implement it.
The crisis of professional identity in the age of AI is partly a crisis about which kind of expertise was being valued. If your entire professional identity was built on functional expertise, the AI is a genuine threat to the story you have been telling about yourself. If your identity was built on judgment, the AI is an accelerant.
The people whose judgment expertise was always their real contribution are now finally in a position where that contribution is visible. The people who were told that their functional expertise was their value are watching that value be replicated. Neither group created this situation. Both groups are living in it.
What Roman said next
After the essay was drafted, Roman came back to the chat with something sharper than his original question.
He said sometimes I feel sad, sometimes I feel excited. He said people who enjoyed the process more than the results are more affected. He said deep problem-solving happens more rarely now. Most problems are resolved automatically. And then he asked the question I had not yet named in the essay.
What am I here for. What is the worth of my skill.
That is a different question from who am I. It is deeper. It is not about identity. It is about purpose.
The identity question asks what story do I tell about myself? The purpose question asks what does the world need from me that only I can give?
Roman’s distinction between process and results is the sharpest thing I have heard anyone say about this. The people who built their sense of self around outcomes, around shipping, around the visible evidence of having done something, are disrupted by AI in one way. The problem got solved. Something got built. The output exists. AI made that faster.
But the people who built their sense of self around the doing, around the specific texture of sitting with a hard problem and turning it over until something gives, around the particular satisfaction of finding the solution that was not obvious, around the hours of thinking that precede the answer, those people are losing something different. They are losing the reason the work felt worth doing.
The deep problem-solving was not just the method. It was the meaning.
When the problems are resolved automatically, you do not just lose the task. You lose the reason you were in the room. And no amount of reframing the role or finding new challenges or being told to work at a higher level of abstraction gives back what was actually lost. Which is the specific, irreplaceable experience of being the person who figured it out.
I do not have a clean answer to Roman’s purpose question. I want to be honest about that. What am I here for is not a question that can be answered by a strategy. It is a question that has to be lived toward.
But I notice something. The people I have watched navigate this transition with the most dignity are not the ones who found a new function to perform. They are the ones who found a new kind of problem to sit with. Not the problem the tool can solve. The problem the tool reveals. The human question underneath the automated answer.
That is still deep problem-solving. It is just that the problems have changed shape.
The collective response to the crisis
Roman made one more observation that I cannot let pass.
He said if you look at the LinkedIn feed, the most popular posts are the ones insisting that engineers will not be replaced by AI.
Sit with that for a moment.
The most popular content for engineers right now is reassurance that they will not be replaced. Not frameworks for navigating the change. Not honest accounts of what is being lost. Not questions about what the transition should look like for the people inside it. Reassurance. Repeated, shared, liked reassurance.
That is not a comfort. It is a mechanism.
When a group of people facing a genuine disruption responds primarily by circulating content that tells them the disruption will not affect them, that is not optimism. It is collective denial. And collective denial is what a system produces when the people inside it have no other language for what they are experiencing.
The engineers are not naive. They can see what is changing around them. They are sharing the reassurance content not because they believe it but because believing it, even temporarily, is less costly than sitting with the alternative. Which is the purpose question. Which is what am I here for.
That question is expensive to ask. It requires time and honesty and a certain willingness to feel lost before you find anything. Most workplaces do not create the conditions for that question to be asked. Most LinkedIn feeds actively suppress it by flooding the space with content designed to make the question feel unnecessary.
The popularity of the reassurance content is not evidence that the crisis is overstated. It is evidence of how deep the crisis runs. The louder the reassurance, the more afraid people are of what happens if they stop needing it.
I am writing this essay because I think the reassurance is doing harm. Not because the engineers are wrong to want it. But because a question that is suppressed does not go away. It goes underground. And questions that go underground tend to surface in the form of the sentence the engineer said to me across a cold cup of coffee.
I am not sure who I am at work anymore.
That sentence is what happens when the reassurance runs out.
Shared leadership and the identity crisis
The solo hero model of leadership is partly a response to the same identity fusion I have been describing. The senior leader who cannot delegate is often the leader whose identity is fused with their expertise. Who built their whole professional story around being the person who has the answer. Who cannot share the map because sharing the map would reveal that the map is incomplete.
That model was already fragile. AI makes it more fragile. Because the expertise the solo hero was holding is now the thing the machine can replicate.
Shared leadership, the kind where you put the map on the table and let everyone navigate together, requires a different relationship with professional identity. It requires being able to say I do not know. Not as a performance of humility but as an honest statement of the situation. It requires building your professional identity not around what you know but around how you think, how you bring other people into the thinking, how you create the conditions for good decisions to emerge from a room rather than from a single head.
That kind of identity is not threatened by AI. Because it is not located in a function that can be replicated. It is located in a way of being in a room with other people that no statistical model can simulate.
The transition is not easy. Telling someone whose entire professional identity is built around functional expertise to shift to judgment expertise is like telling someone who has been swimming their whole life to try flying. The water is what they know. The water is where they feel competent. The air looks like failure from where they are standing.
But the water is rising. And the question is not whether to learn to move differently. The question is whether anyone is going to help people make that transition with any dignity.
What happens to the people who cannot pivot
The tendency in this conversation is to focus on the people who will thrive. The adaptable. The curious. The people who can build a professional identity around judgment rather than function.
There are also people who cannot make that pivot. Or who should not have to.
The person who spent thirty years building deep expertise in a specific domain. Who chose that depth over breadth because depth was what was valued. Who is fifty-three years old and has two children in university and a mortgage and a professional identity built on something that AI has just made significantly less scarce.
The advice the system gives this person is to adapt. To be curious. To embrace the change. To find the opportunity inside the disruption.
That advice is an insult dressed as encouragement. It locates the problem in the person’s adaptability rather than in the system that deployed a tool without any plan for the human cost of its deployment.
The tech leaders who made the decision to deploy AI into these workplaces made a choice. They chose speed over the management of transition. They will not be the ones paying the cost of that choice. The fifty-three-year-old engineer will.
I do not write this to be angry. I write it because the identity question is not just philosophical. It lands in specific human lives. In specific families. In the specific moment when someone who has worked with complete commitment to a set of skills is told that the story is no longer the one the industry wants to tell.
What do you do with thirty years of becoming something if the thing you became is no longer needed?
That question deserves better than a LinkedIn post about growth mindset.
The generation arriving into a changed room
The most visible version of professional identity loss belongs to the senior engineer. That person has a platform. They can write essays. They have enough professional authority to make their discomfort audible.
There is a quieter version of the same loss happening to people who are just arriving.
The person who spent four years in a computer science degree being trained for a specific kind of work. Who chose the degree because the economy said this was the safe path. Who graduated in 2024 or 2025 into a job market where entry-level technical roles had been redesigned around AI tools, and where the path from junior to senior was no longer clear because the work that used to teach you how to become senior had been automated.
This person did not have time to build a professional identity before the ground shifted. They arrived at the starting line to find the race had already been rerouted.
And underneath all the enthusiasm the culture demands of them, many are sitting with a version of Roman’s question.
What am I here for. What is the worth of my skill.
The difference is that they do not have thirty years of evidence that they can figure things out. They have four years of studying for a future that has already changed.
This is not their failure. It is the failure of every institution that prepared them without preparing them for this.
That gap is a leadership failure. Not theirs.
The three questions worth asking
Not a framework. Not a five-step plan. Three questions worth sitting with if you are a leader in a tech organisation right now.
The first question is for yourself.
Where is your professional identity located? Is it in what you know, or in how you think? Is it in the output, or in the process of getting there?
Roman’s distinction matters here. If your sense of worth lives in the doing, in the specific texture of sitting with hard problems and turning them over, you are experiencing a different disruption from the person whose worth lives in the output. Neither is wrong. But they require different responses. The person whose worth lives in the output needs to find new outputs worth producing. The person whose worth lives in the process needs to find new kinds of problems worth sitting with. Problems the tool reveals rather than problems the tool solves.
Spend a morning with that question honestly. Not in the way you would answer it in a performance review. In the way you would answer it if nobody was evaluating the answer.
The second question is about your team.
How much of the identity of the people you lead is fused with the functions that AI is changing? Not their jobs. Their identities. The story they tell about why the work matters and why they matter in the work.
And underneath that, are they process people or results people? Because the disruption they are experiencing is not the same. The results person may adapt more easily. The process person is losing the reason the work felt worth doing. You cannot manage those two transitions the same way.
Ask the question before the sentence arrives. Not in a town hall. In a conversation. One at a time. With someone who actually has something to lose.
The third question is the hardest.
What do you owe the people whose professional story your deployment decisions are disrupting?
Not economically. Human beings are not just economic units. What do you owe the person who gave thirty years to building deep expertise that your tool is now making replicable? What do you owe the process person who is not just losing a function but losing the reason they came to work?
I do not think there is a clean answer. I think there is a conversation that most organisations have refused to have because having it would slow things down.
Slowing things down is not in the business plan.
But the cost of not slowing down is not in the business plan either. Because the cost lands in the engineer sitting across from you saying he is not sure who he is at work anymore. And in the feed full of reassurance content that tells you everyone is fine.
They are not fine. They are just not saying it yet.
Someone has to create the conditions for that to be said. That is a leadership question. It always was.
A place to name the hard things
I started this essay because a reader asked a question in a chat thread. I want to end it by naming what that means.
Roman came back after the first draft was written and went deeper. He named the process versus results distinction. He named the purpose question. He named the LinkedIn denial mechanism. Each of those things sharpened the essay in ways I could not have reached alone.
That is what this room is for.
The essays name the mechanism. The comments name what the mechanism costs. And sometimes, like this week, the comments name a mechanism the essay had not yet found.
Roman’s final line was I think there is a lot to explore here.
He is right. This essay is not the end of the exploration. It is the beginning of it. The purpose question, what am I here for when the thing I was trained to do is being automated, is a question this publication will keep returning to. Because it is not a question that gets answered once. It gets lived with, over time, in rooms where people are willing to say what they actually see.
If you are carrying a version of Roman’s question, I want to hear it. Not the version that sounds okay. The version that is actually true.
That is what shared leadership looks like in practice.
The map is on the table. Come navigate.
If this essay landed, the Tuesday posts come by email. Free. Subscribe below and the next one arrives in your inbox.
If you are ready for the room, paid subscribers get deeper essays, the book as it is being written, monthly live sessions, and direct access to the thinking before it is polished. The door is open.
About the Author
Tino Almeida is a tech leader, coach, and writer reshaping how we think about leadership in a burnout-driven world. With over 20 years at the intersection of engineering, DevOps, and team culture, he helps humans lead consciously from the inside out. When he’s not challenging outdated norms, he’s plotting how to make work more human, one verb at a time.



I want to thank you, Roman Nikolaev, for your question. I adapted it into a chapter of the book I am writing, Leadership as a Verb, and wrote this essay. It has opened a Pandora’s box for me.
If we imagine that automation systems really can replace our labour, the question I keep returning to is this.
Why are these systems convincing companies to replace us with mindless chips? Why do some companies not want humans in the equation?