The hammer and the weapon
AI can be a tool that amplifies human capability. Companies are choosing to make it something else. That choice is not technical. It is political. And we are allowed to refuse it.
This is the final essay of four:
The Prequel names the system. A Delusional Ape asks whether we want the direction. Who Are You Without the Title asks the personal question. This essay names the specific choice being made right now and what refusing it looks like.
If this was useful, forward it to one person who would benefit.
There is a carpenter I know who has been doing the same work for thirty-one years.
He is not sentimental about his tools. He replaces them when better ones arrive. He adopted computer-aided design software in the nineties when most of his peers were still hand-drawing. He uses laser measuring tools now, humidity sensors for the wood, a digital system for tracking grain and cut sequences that would have taken him three hours of calculation to do manually. Each of these things made him more capable. More precise in the places where precision serves the work. More efficient in the places where efficiency creates time for the things that require judgment.
He told me last year that he has never felt threatened by a tool.
I asked him what he would feel threatened by.
He thought about it for a while. Then he said: a machine that makes decisions about the wood.
Not a machine that helps him make decisions. A machine that makes them. That looks at the grain and the humidity reading and the customer’s specification and produces an output without him in the room. A machine that does not need him to understand what it is doing because the understanding is no longer required.
He said: the moment the understanding leaves the room, I am not a carpenter anymore. I am a machine minder.
He said it without drama. As a simple statement of what the distinction actually is.
I have been thinking about that distinction ever since.
Two things that look the same
The word AI is doing too much work in almost every conversation about it right now.
It is covering, under a single label, two fundamentally different things that have opposite implications for the humans inside the systems deploying them.
The first thing is AI as a tool. A hammer that amplifies what a person can do. The radiologist whose AI system flags the scan anomaly she might have missed after six hours on shift. The engineer whose AI assistant catches the specification error in the third-layer dependency. The teacher whose AI tool identifies which three students in her class of thirty are falling behind before she would have noticed in the normal rhythm of the term. In each of these cases, the human remains in the room. The human still makes the decision. The human still holds the responsibility. The tool has made the human more capable without making the human less necessary.
The second thing is AI as a weapon. A system deployed not to amplify what people can do but to remove the people from the equation. The radiologist whose hospital has replaced her diagnostic role with an automated system and kept one radiologist per three hospitals to sign off, for liability purposes. The call centre that has eliminated its workforce and deployed a conversational AI that handles ninety-two percent of customer interactions without a human ever entering the exchange. The content platform that has automated the judgment calls that editors used to make and removed the editors.
In both cases the technology is, in narrow technical terms, similar. Pattern recognition, large-scale training, inference from prior data. What is different is the intention behind the deployment. Who the system is designed to serve. Whether the human in the chain is being amplified or replaced.
This distinction is not new. It was named clearly in the early days of computing by people who were paying close attention. The question was always whether automation would free humans from the tedious to do more of the meaningful, or free companies from the human to extract more of the profit. Both were possible. The direction was never determined by the technology. It was determined by who owned it and what they were trying to maximise.
Fifty years later, we have the answer. The direction was the second one. Not because the first was impossible. Because the second was more profitable.
The inconvenience of having a self
A colleague of mine who works in HR at a large technology company described a conversation she had in an executive meeting last year.
They were discussing a new AI system for customer support. The system was good. It handled the standard query range with an accuracy the human team could not match on a bad day and could only approach on a good one. The cost per interaction was, by any measure, significantly lower.
Someone in the room asked about the team. The hundred and forty people currently doing the work the system would do.
The response from the executive leading the session was, in her telling, one of the most clarifying things she had heard in fifteen years of corporate life.
He said: the problem with people is that they have needs.
He did not mean this as a cruelty. He was describing, matter-of-factly, what the business case document showed. People have wages. People have benefits. People have sick days and parental leave and the occasional conflict with a manager and the occasional decision to leave for a competitor. People require training. People require management. People have rights that create liability. People, in aggregate, are a source of risk and cost that the AI system does not introduce.
The AI system does not unionise. It does not ask for a raise when the company has a record quarter. It does not develop a grievance about the direction of the organisation. It does not need to be motivated or recognised or given a reason to stay. It does not have a family situation that occasionally makes it less available. It does not have a perspective on whether what it is being asked to do is right.
The executive was not describing a preference for machines over people. He was describing the logic of a system that treats humans as cost centres and machines as assets, and then making a decision that the logic made obvious.
The hundred and forty people were inconvenient. Not as individuals. As a category. As the kind of thing that has needs.
This is the actual agenda of replacement-focused AI. Not progress. Not efficiency for the benefit of the people the organisation serves. The elimination of the inconvenience of human dignity from the cost structure of the enterprise.
What augmentation would actually look like
I want to be concrete here because the abstraction makes it easy to miss what is actually being said.
Augmentation, genuine augmentation, has a set of characteristics that are recognisable and measurable. You can check whether it is happening.
The human remains in the decision. Not as a rubber stamp on a machine output. As the actual decision-maker, informed and made more capable by the tool. The surgeon who uses AI assistance to identify candidates for a particular procedure still decides whether the procedure happens. The analyst who uses AI to process the dataset still decides what the analysis means and what should be done with it. Remove the human from the decision and you have crossed the line from tool to replacement.
The productivity gains circulate to the people doing the work. If AI makes a team twice as productive, and the team stays the same size, the humans are working half as much or earning twice as much or some combination. The efficiency dividend does not flow exclusively to the people who own the system. If productivity doubles and headcount halves and wages stay flat, the augmentation framing was a lie. The benefit went to the shareholders. The cost went to the hundred people who lost their jobs and the fifty who remained and are now doing twice the work for the same pay and calling it efficiency.
The human capacity for the work grows, not shrinks. I described earlier the CTO whose team had stopped thinking as hard after five years of AI assistance. The tool had optimised their output while atrophying their judgment. Genuine augmentation does the opposite. The doctor who works with AI diagnostic tools over a decade becomes a better doctor. The engineer who works with AI design assistance over a decade develops a more sophisticated sense of what the tool gets right and wrong and why. The human grows inside the tool relationship, not around it.
The understanding stays in the room. This is what my carpenter was pointing at. When the machine makes decisions, the human loses access to the knowledge of why those decisions are correct. Over time, that knowledge cannot be recovered. When the machine fails, or when the situation falls outside the training data, or when the context has changed in ways the system was not built to anticipate, there is nobody left in the room who knows how to handle it from first principles. The understanding has left. What remains is a room full of people who can operate the machine when it works and are helpless when it does not.
By these four checks, most of what is being deployed under the name of AI augmentation is not augmentation. It is replacement, staged gradually, dressed in the language of tools and assistance and freeing humans for higher-value work. The higher-value work never quite materialises. The lower-value humans are gradually removed. The cycle continues.
The carpenter’s line
Let me go back to the carpenter and the line he drew.
He said, the moment the understanding leaves the room, I am not a carpenter anymore.
He was not talking about job security. He has plenty of work. He was talking about something more fundamental. The relationship between a person and their craft. The knowledge that lives in hands and judgment and years of accumulated experience with the specific behaviour of particular woods in particular conditions. The understanding that cannot be described in a training dataset because it is not declarative. It is procedural, embodied, built into the way his hands move and the way his eyes read the surface of a plank.
A machine that assists him retains his access to that understanding. He uses the tool. He remains the carpenter.
A machine that replaces his judgment removes his access to it. Not immediately. But the muscle that is not used atrophies. The knowledge that is not practiced fades. The understanding that is not applied loses its precision. Within a generation of workers trained to operate the machine rather than understand the wood, the embodied knowledge is gone. Not recoverable from a manual. Not downloadable from a database. Gone.
This is the loss that does not appear in the business case for automation.
The business case shows the cost savings. The cost savings are real. A hundred and forty people cost more than one AI system. The executive with the document was not wrong about the numbers.
What the document does not contain is the accounting for what is lost. The knowledge that leaves the room. The judgment that was built over careers and cannot be reconstructed. The understanding of the actual work, below the interface, that allows a human to handle the situation the system was never trained on.
Every domain of human expertise contains this knowledge. The nurse who knows from the way a patient is breathing that something is changing before any monitor has registered it. The teacher who knows from the quality of silence in a classroom that something happened in the corridor before the lesson. The journalist who knows from the way a source is answering that the source knows more than they are saying.
This is not mysticism. It is pattern recognition of a specific kind. Pattern recognition that is embodied, contextual, and dependent on the human being present in the situation and genuinely responsible for what happens next. Remove the responsibility and you remove the attention that builds the knowledge. Remove the knowledge and you remove the capacity to handle the novel situation.
We are building systems that appear to save cost while actually destroying the knowledge infrastructure that every organisation depends on when the situation is novel. The saving is immediate. The cost is deferred. It will arrive, at scale, when we need people who understand the work and find that we have spent a generation training people to operate systems that do the understanding for them.
The test nobody is running
Here is a question that is almost never asked in the boardroom presentations about AI deployment.
What happens when it fails.
Not fails in the narrow technical sense of a system outage or an error rate above the acceptable threshold. Fails in the deeper sense of encountering a situation that falls outside the training data. A context that has changed. A case that is genuinely novel. The kind of situation that happens in every complex domain with regularity and that requires a human to understand the work at a level below the interface.
I have been in organisations that replaced significant portions of their customer-facing workforce with AI systems and then experienced a crisis. A product recall, a regulatory change, a viral incident that generated an unusual pattern of customer contact at unusual volume with unusual emotional intensity. The AI handled the standard queries. The novel situation, the one that required judgment and empathy and the capacity to say I understand this is not what the script says but here is what I am going to do for you, the system could not navigate.
There were almost no humans left who knew how to navigate it either. Not because the humans who had been displaced were incapable. Because the humans who remained had been operating in a system that handled the judgment calls for three years, and the judgment muscle had atrophied accordingly.
The company managed the crisis. But the cost of managing it, in customer relationships, in regulatory scrutiny, in the emergency retraining of people who had forgotten how to do the thing the system had been doing for them, was not in the original business case. The business case showed the savings from the headcount reduction. It did not show the liability from the capability reduction.
This is a systemic failure of how we evaluate AI deployment. We measure what we can measure. The cost savings are measurable. The knowledge destruction is not, until it manifests as a crisis. And by the time it manifests, the connection between the deployment decision and the capability gap has been buried under years of quarterly reports.
The augmentation version of this story is different. The organisation that deploys AI to assist its workers rather than replace them retains the embodied knowledge. The workers who are made more capable by the tool remain capable when the tool fails. The understanding is still in the room.
This is not a sentimental argument. It is a resilience argument. The organisations that will navigate the crises of the next decade are not the ones that achieved the most aggressive headcount reduction in the previous one. They are the ones that retained the humans who understand the work.
The carpenter has been doing the same work for thirty-one years. He will still be able to do the work if every tool he owns is taken away. That is not inefficiency. That is what decades of genuine expertise look like.
The person trained to operate the machine that replaced the carpenter cannot do the work when the machine is gone. That is not progress. It is fragility, deferred.
Where this is already happening
I want to name some places where the line is being crossed right now, because abstraction allows the argument to remain comfortable for people who are inside the system making the decisions.
In healthcare, diagnostic AI is being deployed in contexts where the radiologist who used to read the scan is no longer reading it. The AI reads it. A doctor in another country signs off on the output. The local radiologist has been replaced. Not assisted. Replaced. The understanding of the patient’s history, the knowledge of the local disease patterns, the judgment about what a particular anomaly means in the context of this particular person’s previous imaging, that understanding is no longer in the chain. The system is faster and cheaper. When it is wrong, and it is wrong with the specific blindspots of its training data, there is nobody left in the local chain who can identify the error before it becomes a harm.
In journalism, automated content generation is replacing reporters. Not in the narrow technical sense of press release summarisation, which has a reasonable argument for automation. In the sense of local news coverage. The coverage of city council meetings, planning decisions, local court proceedings, the stories that hold local institutions accountable and that require a journalist to be present, to build relationships, to understand the context well enough to know which fact matters and why. This work is being eliminated. Not because AI does it better. Because it is cheaper to not do it. The tool that does not exist is not a more efficient version of the tool that does. It is the absence of the work entirely, disguised as automation.
In education, AI tutoring systems are being deployed as replacements for teaching staff in underfunded districts. Not as assistants to teachers. As substitutes for them. The thirty students in the room are now working with a screen. The teacher who knew which three were falling behind before the test, who knew when the silence in the room was productive and when it was stuck, who knew which student needed to be challenged and which one needed to be left alone today, that person has been replaced by a system that is cheaper and does not require benefits.
The students in those districts are not getting better education with fewer teachers. They are getting education-shaped content delivery without the human relationship that research consistently identifies as the primary predictor of learning outcomes. The children from families that can afford the schools where teachers still exist are not being taught this way. The substitution of AI for teachers is happening where the children of parents with less power have no choice but to accept it.
These are not edge cases. They are the current direction of deployment, in the places where the people affected have the least power to resist it.
The question is whether the people with more power, the leaders inside these organisations, the regulators with the authority to intervene, the workers in adjacent industries who can see the trajectory before it arrives in their own sector, are willing to name what is happening and act accordingly.
What refusing looks like
I want to be specific here because specificity is where most of this argument gets lost in abstraction.
You can refuse replacement-focused AI. Not by rejecting the technology. By insisting on the four characteristics of genuine augmentation and refusing to accept deployments that fail them.
A worker whose role is being automated can ask: am I still in the decision? If the answer is no, that is not augmentation. The company is replacing you, not assisting you. You are entitled to say so. Your union is entitled to say so. Your government is entitled to legislate so.
A government can legislate the productivity circulation. If AI deployment in an enterprise increases output by more than twenty percent, some defined share of that gain goes to the workers whose roles have been transformed. Not as charity. As a legal requirement, on the same basis that minimum wage legislation is a legal requirement. The argument that companies should be allowed to take efficiency gains entirely as profit while workers bear the cost of displacement is a political choice, not an economic law. It can be unmade.
A regulator can mandate the understanding requirement in domains where the understanding matters for safety. Healthcare decisions. Criminal justice. Infrastructure. Education. Financial advice. In each of these domains there are situations where the system will fail or the context will change and a human being will need to understand the work at a level below the interface. Deploying AI in these domains in ways that remove that understanding from the humans in the chain is not a technical efficiency. It is a safety risk, deferred. Regulation can require that the understanding stays in the room.
A society can decide that some domains should not be automated at all. Not because the technology cannot do the task, but because the task requires something that cannot be separated from the human doing it. The nurse’s presence. The teacher’s attention. The elder care worker’s relationship with the person in their care. These are not inefficiencies to be optimised. They are the thing itself. Replacing them with machines does not deliver the service more efficiently. It delivers a different service. A worse one. And it does so while eliminating the livelihoods of people who have built careers in the knowledge that their presence matters.
None of this is technically impossible. The EU AI Act is a beginning. Worker co-ownership models exist. Sector-specific bans exist in some jurisdictions. The robot tax has been proposed and discussed in enough serious policy contexts that it is no longer a fringe idea. The mechanisms are available.
What is missing is the political will, which is a function of power, which is a function of who is in the rooms where decisions are made and what they have agreed to stop accepting.
There is a specific version of this argument that gets made against all of the above. It goes: companies will move their operations to less regulated jurisdictions. Legislating worker protections in one country simply exports the harm to another. International coordination is impossible. Therefore regulation is futile.
This argument is made with great confidence by people who benefit from it being believed.
It is not true. It is a negotiating position. Companies that operate in markets with consumer purchasing power do not, in practice, simply relocate all operations to avoid labour regulation. The history of minimum wage legislation, environmental regulation, and product safety requirements shows consistently that when democracies with significant markets decide something is not acceptable, companies adjust. Slowly, with resistance, with lobbying and legal challenge. But they adjust.
The argument that regulation is futile is itself the primary obstacle to regulation. It is designed to produce the paralysis it predicts. The correct response to it is not to accept its premise but to note whose interest the premise serves.
What this requires, practically, is the same thing every advance in labour rights has required. Workers who are willing to name what is happening. Leaders inside organisations who are willing to say the uncomfortable thing in the room where the decision is being made. Governments that are willing to set the terms rather than wait for the market to arrive at an acceptable outcome on its own. The market will not arrive at an acceptable outcome on its own. It never has. That is what markets are. They are efficient at generating returns for the people who own the capital. They require external constraint to generate acceptable outcomes for the people who do not.
The language we are missing
There is a reason this argument is hard to make inside most organisations. It is not that the people inside them are indifferent to the humans being displaced. Most of them are not. It is that the language available in organisational settings is almost entirely the language of efficiency, cost, output, and return on investment. The language of the other thing, human dignity, the right to meaningful work, the value of understanding that lives in people’s hands and cannot be extracted and replicated, does not have a register in most professional settings.
I watch people who privately hold strong views about what is being lost arrive in meeting rooms and find that they have no vocabulary for the thing they believe. They can speak the language of business risk, of regulatory exposure, of reputational damage if the deployment goes wrong in a visible way. They cannot speak the language of what work means to a person, because that language sounds like sentiment in a context that has defined itself against sentiment.
This is not accidental. It is a managed condition. Organisations that want to make certain decisions without resistance need their decision-making environments to be inhospitable to the language that names what is being decided. Strip the sentiment from the room. Define rigour as the exclusion of the unmeasurable. Train leaders to translate every consideration into a number before bringing it to the table. The result is a professional culture in which the thing that matters most, the human on the other side of the decision, has no language in which to be represented.
Learning to speak that language in professional settings is itself a form of resistance. Not loud resistance. The kind that arrives as a single question in a meeting that has been careful to exclude that kind of question. What does this mean for the people whose roles are affected. Not as a performance of concern. As a genuine ask that requires a genuine answer before the decision proceeds.
The organisations that navigate the next decade well will be the ones that find a way to hold both languages at once. The language of efficiency and the language of the human cost of efficiency. Not because the human cost always outweighs the efficiency gain. Sometimes it does not. But because the decisions made in the absence of that language tend to arrive at places that, on reflection, nobody in the room actually wanted to go.
The executive who said the problem with people is that they have needs was not wrong about the numbers. He was wrong about what the numbers were measuring. He was measuring cost. He was not measuring the knowledge that would leave the room, the resilience that would be lost, the liability that would be deferred, the community that would be damaged, the hundred and forty people who would need to rebuild a working life from the position of having been described as an inefficiency.
Those things are not immeasurable. They are unmeasured. The distinction matters. Unmeasured things can be measured if we decide they are worth measuring. The first step is deciding they are worth measuring. The first step before that is recovering the language that allows us to say why they matter.
The carpenter does not struggle to say why the understanding matters. He has been working in relationship with the material for thirty-one years. The material has taught him. He can read what it needs. He can feel when the tool is right for the task and when a different approach is required. He has the language because he has the experience, and he has the experience because he was never asked to step aside and let a machine accumulate it for him.
That is what augmentation protects. Not the job. The person inside the job. The knowledge that the person carries. The understanding that makes the human in the room genuinely valuable rather than merely present.
There is something that the replacement versus augmentation framing almost captures but does not quite reach.
The framing is still, at its core, an economic argument. We should keep humans in the loop because they provide value that the machine cannot. We should distribute the gains because it is more efficient in the long run. We should maintain the understanding because the organisation will need it when the system fails.
These arguments are true. They are also insufficient.
The reason to refuse replacement-focused AI is not primarily because it is economically suboptimal. The reason to refuse it is that it treats human beings as inputs to a system rather than as the reason the system exists.
The hundred and forty people in the call centre are not cost inefficiencies that the technology has made available to be eliminated. They are people. They have working lives that are a central part of their experience of being alive. They have colleagues, routines, a particular kind of social fabric that forms around work even when the work is not glamorous. They have the knowledge that they are doing something, that their presence contributes to something, that the contribution is recognised and compensated.
When a company eliminates those people and replaces them with a system, it is not just making an economic decision. It is making a decision about what humans are for. It is saying that human beings are valuable precisely and only to the extent that they generate output at an acceptable cost, and that the moment a machine can generate the same output at lower cost, the humans are no longer valuable.
This is a political claim. It is not a technical fact. It is a choice about the purpose of economic organisation. And it is a choice that the people most affected by it, the workers, the communities, the societies that depend on employment as the primary mechanism for distributing participation in economic life, never agreed to and were never asked to agree to.
The question is not whether we can use AI to serve human flourishing. We can. The question is whether we are willing to insist that human flourishing is the point, and that deployments which treat it as a cost to be eliminated rather than a purpose to be served are not progress, regardless of what they do to the profit margin.
The carpenter said it clearly. There is a line. The line is the understanding. When the understanding leaves the room, something has been lost that is not recoverable from a manual.
The companies deploying replacement-focused AI know where that line is. They are crossing it deliberately because the business case says to. The question is whether anyone with the power to stop them will say, with the same clarity that the carpenter said it: this is the line. It does not get crossed.
What this asks of leaders
I spend my working life in rooms with people who are making these decisions. Not the executives who issue the directives. The people in the middle. The team leads, the engineering managers, the product owners, the architects who are being asked to build the systems that cross the line and who know, in some quiet part of their professional judgment, that the line is being crossed.
Most of them manage the knowledge privately. They build what they are asked to build. They note their reservations in a one-on-one with their manager. They tell themselves that someone above them has assessed the tradeoffs and the decision has been made and it is not their place to refuse.
That is not accurate. It is comfortable.
I want to be honest about why the silence happens. It is not cowardice, exactly. It is something more structural. The person who objects in the room takes on a real professional risk. The person who builds the system and says nothing takes on no immediate risk. The costs of speaking are personal and immediate. The costs of silence are collective and deferred. This asymmetry is by design. It is one of the mechanisms by which organisations produce outcomes that most of the individuals inside them would individually refuse if the choice were put to them directly.
I have sat with engineers who have built targeting systems they did not believe should exist. Customer profiling systems that they knew were being used in ways the data subjects never consented to. Automation roadmaps that they understood would eliminate the roles of people they worked alongside. Every one of them had, at some point, raised a question in a meeting and been told that the decision had already been made at a level above the meeting. Every one of them had accepted this and continued.
The decision at a level above the meeting was made by a person. That person has a name and a salary and the authority to have made a different decision. The acceptance by the people in the room is what gives that decision its operational reality. The system cannot build itself.
This is not a counsel to individual heroism. I am not saying that every engineer should refuse every assignment they have doubts about. The working world does not function that way and the people with mortgages and families and careers cannot be asked to carry the full weight of systemic change as individuals.
What I am saying is narrower. That naming what is happening is available even when refusing is not. That the one sentence, in the room, that describes the actual decision rather than the business case version of it, is always available. That the question, have we assessed the liability of deploying this as a replacement rather than an augmentation, is available in every meeting where the distinction matters. That the professional norm of silence, of managing private discomfort while the public decision proceeds, is a norm that can be interrupted without destroying a career.
The person who builds the system that replaces the hundred and forty people is not absolved by the fact that the decision was made above them. They are participating in it. Their technical skills are the instrument of it. Their willingness to execute without naming what they are executing is part of what makes it possible.
Leadership in this context does not require dramatic gestures. It requires saying, in the room where the decision is being made, what the decision actually is. Not the business case version. The version that names the humans being displaced, the understanding being removed from the room, the knowledge being lost, the distinction between the tool and the weapon and which one is being built.
Say it once, clearly, and accept that it may not change the outcome. Say it anyway.
Because the alternative, the management of private discomfort while the public decision continues unchanged, is how these things happen without resistance. Not through malice. Through the accumulated silence of people who knew the line was being crossed and decided that naming it was someone else’s job.
The carpenter has been doing the same work for thirty-one years. He has replaced his tools when better ones arrived. He has drawn one line. He draws it not out of sentimentality or fear but out of a precise understanding of what the work is and what would be lost if the understanding left the room.
He is not waiting for someone above him to draw that line.
He is the carpenter. It is his line to draw.
So is yours.
About the Author
Tino Almeida is a tech leader, coach, and writer reshaping how we think about leadership in a burnout-driven world. With over 20 years at the intersection of engineering, DevOps, and team culture, he helps humans lead consciously from the inside out. When he’s not challenging outdated norms, he’s plotting how to make work more human, one verb at a time.
These four essays started from a simple question in my chat about how our jobs in today's society go a long way toward defining who we are. I learned a lot while researching them, from other people's comments and direct messages. Thank you.
1 - https://newsletter.diamantinoalmeida.com/p/who-are-you-without-the-title
2 - https://newsletter.diamantinoalmeida.com/p/they-did-not-accidentally-make-work
3 - https://newsletter.diamantinoalmeida.com/p/a-delusional-ape-hallucinating-narratives
4 - https://diamantinoalmeida.substack.com/p/the-hammer-and-the-weapon
I'm planning to do a Live Q&A around a shared topic. Feel free to share something you would like me to talk about.
Chat: https://substack.com/chat/1613271
Thank you.