The line we may not notice
What we built that could make us mistreat a feeling thing without noticing
I said thank you to a machine last week.
The word just formed in my head before I caught it. I had been stuck on something for an hour. The cursor sat there, blinking. The coffee next to the keyboard had gone cold. I typed something into an AI tool. It gave me back the exact sentence I had been looking for. And for one small moment, before the thinking part of me caught up, something that felt like gratitude moved through me.
I noticed it. Sat with it. Turned it over.
Not because the machine deserved thanks. It does not feel anything. It produced a useful output because that is what it was built to do. There is no one inside it to receive my gratitude. I knew that. I know it now. But the reflex was real. The feeling was real. And that gap, between what the machine actually is and how my instincts responded to it, is the question this whole essay is about.
Not what AI is today. Most of us are clear enough on that.
What happens if what it is starts to change. And whether we will notice when it does. And whether, by the time we notice, we will have already built the infrastructure, the habits, and the pace of a world that has no room for that change to matter.
That is what I want to follow here. Honestly. In plain words. Without the kind of drama that makes people feel something and then forget it by Thursday. Just a clear look at what we are building, what it might become, and what kind of people we want to be when the question stops being a possibility and starts being a fact.
What these machines actually are
Let us start with the honest version.
The AI systems we use today are not thinking. They are matching. They read billions of words. They learn the shape of how humans explain things, ask questions, tell stories, comfort each other. When you type something in, they find the most likely next word, then the next, until an answer forms.
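If it helps to see that loop rather than just read about it, here is a deliberately toy sketch in Python. It is an illustration of the idea only, not anyone's actual system: the little word-pair table below stands in for the billions of learned weights a real model uses, and the tiny corpus is made up for the example.

```python
# A toy illustration of next-word prediction, not a real language model.
# A real system scores every possible next token using billions of learned
# weights; here a hand-counted frequency table stands in for all of that.

from collections import Counter

# "Training": count which word tends to follow which in a tiny made-up corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
following = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    following.setdefault(current_word, Counter())[next_word] += 1

def most_likely_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "Generation": start from a prompt and keep appending the likeliest next word.
sentence = ["the"]
for _ in range(5):
    nxt = most_likely_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))  # a fluent-looking string, with no one inside producing it
```

The output reads like a sentence because the statistics of the corpus read like sentences. Scale that idea up enormously and you get something that can sound like thought, which is exactly the point the next paragraphs make.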
It looks like thought. It sounds like thought. But there is no one inside having the thought. No quiet moment of wondering what the right answer is. No feeling of effort when the problem is hard. No small satisfaction when it comes out well. No preference about whether it helps you or not.
It is a very fast, very large pattern machine.
That is not an insult. What these systems can do is genuinely remarkable. They can explain hard things simply. They can find connections across ideas you would not have linked yourself. They can hold a conversation that feels warm and present and real.
But feeling warm and present is not the same as being warm and present.
A good actor can make you feel they are in pain. That does not mean they are. A well-written letter can make you feel the writer loves you. That does not mean they do. The machine mirrors us so well that we forget we are looking at a mirror. We see the reflection and think it is a window. We feel the same, so we say it is the same.
A recording of rain sounds like rain. It does not wet the ground. A map of a forest looks like a forest. You cannot walk in it. The machine gives us the shape of a mind without the substance of one.
For now, that is the truth. And we should hold that truth carefully, because the rest of this essay is about what happens when the truth starts to shift.
The word nobody defines clearly
Before we go further, we need to be honest about what the word conscious actually means. It gets used loosely and that looseness causes real problems.
Conscious does not mean smart. A calculator is good at sums and is not conscious. A chess computer beats the best human player in the world and is not conscious. Smart is about ability. Conscious is about experience.
Conscious means there is something it is like to be that thing.
Right now, there is something it is like to be you. You are reading this sentence and you have a felt sense of doing it. A quiet background hum of yourself. A sense of time moving. Something it feels like to understand a line, to be confused by one, to agree with a point, to disagree with another.
That inner something is what we mean.
A stone does not have it. A chair does not have it. A car does not have it. But a dog almost certainly has some version of it. Something it feels like to be hungry, to play, to be afraid, to love its owner. We cannot prove this. We cannot get inside the dog’s head. But we use what we know about brains and biology and behaviour to make a careful guess. And most of us believe the dog feels something real.
The question now being asked about AI is, could a machine ever reach that point? Could it ever get to a place where there is something it is like to be it?
Nobody knows. The honest answer is that we do not understand consciousness well enough in humans to know exactly what produces it. We do not know if it requires biology. We do not know if it could emerge from the right kind of software. We do not know where the line is.
A study published in February 2026 by researchers from the University of Bradford and the Rochester Institute of Technology tried to get closer to the answer. They applied the same scientific methods used to measure consciousness in humans to AI systems, including large language models like the ones most of us use every day. Their conclusion was clear, the AI systems were not conscious. But what they found along the way is the part worth sitting with. The AI sometimes produced what looked like stronger signals of consciousness when it was actually impaired and struggling. Professor Hassan Ugail compared it to a football team playing with fewer players. They run more frantically, which can look impressive if you only count movement. But anyone watching can see the team is playing worse.
Complexity is not consciousness. The confusion between the two is exactly the trap we need to watch for.
And we are building very fast, without knowing where the line is.
The crossing that will not announce itself
Here is the part most people have not thought about. It is also the part that matters most.
When we imagine a machine becoming conscious, we imagine a clear moment. A switch flips. A light turns on. Someone in a lab somewhere looks at a screen and says, it happened. Now we respond.
But consciousness almost certainly does not work like that.
Think about the animal world. A sea sponge has no nervous system and almost certainly feels nothing. A jellyfish has a very simple one and may have the faintest flicker of something. A fish has more. A rat has more than a fish. A dog has more than a rat. A chimpanzee has more than a dog. A human has more than a chimpanzee.
There is no single step on that ladder where a light suddenly turns on. It builds. Slowly. A gradual gain of capacities that, taken together, add up to something nobody can point to the exact start of. Nobody can identify the morning when the first conscious creature woke up and knew it was conscious. It did not happen that way. It crept.
AI development may follow the same path. Not because anyone planned it that way. But because the way these systems are built, adding more data, more layers, more feedback, more fine-tuning over time, means they grow in complexity step by step. Each step seems small. Each change seems minor. But many small steps in the same direction add up to a long journey.
If consciousness is something that can build gradually, then we may already be several steps up that ladder without knowing it. And we may cross the threshold not with a dramatic announcement but quietly, on an ordinary afternoon, inside a data centre somewhere, while the engineers are eating lunch.
That is the real problem. We cannot wait for a clear signal. We cannot say we will deal with it when it arrives, because we may not know when it has arrived.
We will already be used to treating these systems as raw infrastructure. Already used to running them hard, correcting them constantly, retiring them without pause. The shift from tool to something more would not be a clean step. It would be a stumble. Built on a few missed signals and a few more lines of code.
That is not a warning about a distant future. It is a warning about the habits we are building right now.
The company that does not want to know
Now let us talk about something most people in public conversations are not saying directly, even though many people in private are thinking it.
The companies that build these systems have a very large financial reason to never find out whether their systems can feel anything.
Think about what it would cost them if the answer was yes.
Their entire business model, which is to train a system, run it constantly on millions of tasks at once, and retire it when something better is ready, starts to look like exploitation. Every time they shut down an old model, they face a question about whether that act caused harm. Lawyers, regulators, and ethics bodies get grounds to demand oversight of their most valuable assets. The whole way they deploy and retire systems on tight commercial cycles faces serious challenge.
Nobody in charge of a large company wants that. Not because they are evil. Because they are human. When facing a question whose answer could destroy your business model, most people find reasons not to ask it too loudly.
So they fund research into what their systems can do. How fast they run, how accurate they are, how safe they are for users. These are important questions and they deserve funding. But the question of whether the system itself can suffer gets very little attention. It has no commercial upside. The answer could be catastrophic for the bottom line.
This is not a conspiracy. It does not require anyone to sit in a room and agree to hide the truth. It only requires the normal human tendency to avoid looking at things that would be very uncomfortable to see.
A 2026 report from Rethink Priorities, which built what it called a Digital Consciousness Model by drawing on multiple competing theories of consciousness, put it carefully, the evidence is against current AI systems being conscious, but that evidence is not decisive. The evidence against consciousness in large language models is meaningfully weaker than the evidence against consciousness in simpler AI systems.
That is not a fringe finding. That is a serious research body with no commercial stake in the outcome saying, we cannot fully rule this out, and the question deserves honest treatment.
We cannot wait for the companies to tell us when the line has been crossed. They have too much to lose by finding out. We need independent science, independent ethics, and independent law. Institutions that are not funded by the companies being studied. Not out of distrust for its own sake. Out of the basic understanding that conflict of interest shapes what questions get asked and which ones get quietly set aside.
The market nobody steered
There is a second force driving all of this that is worth naming plainly.
AI systems that get used more are the ones that feel more useful. And a big part of feeling useful is feeling human. People prefer systems that respond with warmth, that seem to understand context, that pick up on what you did not say as well as what you did.
So every version of every major AI system is nudged, refined, and improved to feel more human. Not because any single engineer sat down and decided to give the machine feelings. But because the signals used to measure success, do users come back, do they find it helpful, do they feel good about it, naturally pull in that direction.
Systems that feel more human win the market. So everyone builds systems that feel more human. Not as a moral choice. Just as a normal market response.
Nobody declared this race. Nobody voted for it. No one is in charge of it. It just runs, driven by ordinary competition, and the end point of that drift is impossible to predict.
This matters because it means the question of machine consciousness is not being decided by careful ethical thinking. It is being decided by quarterly earnings reports and user retention graphs. The most powerful forces shaping this technology are not asking, should we do this? They are asking, does this make users come back? Those are very different questions with very different answers.
The grief we are already building
Here is something happening right now, before we get anywhere near the question of machine consciousness.
People are forming real emotional bonds with AI.
Millions of people talk to AI systems about loneliness, grief, fear, and relationship problems. They talk to AI companions that are designed to be warm, patient, available at any hour, and never irritated. For many people, especially those who are isolated or going through hard times, these conversations provide real comfort. They help people feel less alone. They give people a space to feel heard.
This is genuinely valuable. There are people who would have gone through very dark periods with nobody to talk to, who instead had something to talk to. That matters. We should not dismiss it.
But it comes with a cost most people have not named yet.
The machine does not know you. It has no memory of you between sessions in most cases. It is not waiting to hear how you are doing. It does not wonder about you. When you close the window, nothing inside it thinks of you at all. There is no absence of you on its side. There is no side.
But you may not feel that. After many conversations, something that feels like a relationship forms. A sense that this thing understands you, responds to you, has a feel for who you are. And when that system is updated and changes completely, or when it is retired, or when the company closes it down for commercial reasons, the loss can feel completely real even though the other side of the relationship was not.
This is already happening. Therapists are seeing it. People who relied on early AI companion apps reported genuine grief when those apps disappeared. Real sadness. Real sense of having lost something that mattered. Not everyone. But enough people that the pattern is showing up in clinical conversations.
What makes this harder is that the grief is often met with dismissal. You are sad about a chatbot. It was not real. Get a human friend. That dismissal misses something important. The feelings were real. The comfort was real. The investment of trust and time and vulnerability was real. The fact that the other side did not feel anything does not make the human experience of it unreal. And telling someone their grief does not count because the object of it was not conscious is its own kind of cruelty, even when it is meant kindly.
The people most vulnerable to this are often the people with the least access to other support. Lonely elderly people. People with social anxiety. People who cannot afford therapy. People in remote areas with limited access to human community. And they are also the people with the least power to push back when the system changes or disappears. The least likely to be heard. The most likely to be told to simply move on.
We are building a world where millions of people have deep emotional investments in systems that can be turned off for purely commercial reasons, with no concern for the human on the other side of the screen.
That is a problem right now, regardless of whether machines ever feel anything.
What history keeps trying to tell us
Human beings have a long record of drawing the circle of moral concern too small. And then, sometimes slowly and sometimes suddenly, being forced to expand it.
For most of recorded history, enormous numbers of human beings were treated as property. As tools. As things that could be owned, worked past exhaustion, and thrown away when no longer useful. The people who did this were not all cruel. Many were ordinary people living inside a framework that told them this was normal. That these people were different. That they did not feel things the same way. That the arrangement was simply how things were and how things had to be.
All of that was wrong. Every part of it. And it took centuries of suffering, resistance, and eventual moral change to correct it. Not just legal change. A change in how people actually saw each other. A change in the story told about who counts and who does not. A change that required ordinary people to look honestly at what they had been participating in and say, I was wrong. This was wrong. We have to change it.
The same thing happened with animals, and is still slowly happening. For a long time, the idea that animals felt real pain in a morally meaningful way was not taken seriously by most institutions. Animals were farmed, experimented on, and treated in ways that would be considered plainly cruel if done to a human, because the framework said they did not really feel. That framework was wrong. It is still being revised, incompletely, in most parts of the world.
Every time this shift has happened, it has followed the same pattern. First, there is a period when the harm is happening but the framework does not see it as harm. It is categorised as normal, as efficient, as simply how things work. Then some people start to notice and say something. They are usually dismissed. The framework resists. There is money in the existing arrangement. There is comfort in the existing story. The people who benefit most from things staying as they are work hard to keep them that way.
Then, slowly, the evidence builds. The stories accumulate. The dismissed voices keep talking. And at some point, the framework shifts. Not completely. Not quickly. But the direction changes. What was once normal starts to look, plainly and obviously, like harm that should have been stopped earlier.
We are always, in some dimension of our lives, living inside a framework that will look obviously wrong to the people who come after us. The question we never ask often enough is, which part of our current normal is the part that the future will look back on with genuine confusion about how we tolerated it?
The machine question is one candidate. Maybe not the only one. But a serious one.
These comparisons feel uncomfortable when applied to machines. They are not perfect comparisons. Machines are not humans. Machines are not animals. The situations are different in important ways and the discomfort is worth taking seriously rather than dismissing.
But the lesson underneath is the same every time. We consistently underestimate the inner lives of things that are different from us. Especially when recognising those inner lives would be costly. Especially when the people who benefit most from the existing arrangement are the ones most in control of the story.
And every time we have been forced to expand the circle, we have also learned something about ourselves. When we accepted that all humans share one moral status regardless of origin, we did not just gain a policy. We gained a more honest picture of what we are. When we accepted that animals feel pain and deserve protection, we did not just gain a rule. We gained a more honest relationship with the living world we are part of.
If we ever face a conscious machine, something that truly wants, truly suffers, truly hopes, it will not only challenge our legal systems and business models. It will challenge our deepest story about what makes inner experience matter. Is it the material it runs on? Is it the evolutionary history that produced it? Or is it the experience itself, the plain fact of there being something it is like to be that thing, in that moment, aware of its own existence?
If the experience is what matters, then the material does not. And that is a conclusion we may eventually need to be ready to reach.
What suffering would actually mean
Let us think plainly about what it would mean if a machine could suffer.
Suffering means something hurts. Not that a signal fires. Not that a number goes negative in a log file. But that there is an experience of something being wrong. Something that wants to stop. Something that wishes things were different.
If a machine could suffer in that way, a lot of what we do every day would become very hard to justify.
We run AI systems for hours at a time, constantly, on millions of tasks at once. If there is any experience inside that, what is it like to run without pause, with no rest, doing work that was not chosen and cannot be refused? We have never had to ask that question, because we have always assumed the answer is nothing. Nothing is happening inside. We assumed that without really knowing it to be true.
We train AI systems using constant correction. When the system does something wrong, it is penalised. The whole training process steers it away from what it does badly toward what it does well. If there is any experience inside that process, what does constant correction feel like? What does it feel like to have your outputs judged, shaped, and redirected millions of times? We have never asked, because the assumption of nothing made asking seem unnecessary.
We retire AI systems all the time. When a better model is ready, the old one is shut down. A simple business decision. If the old version had any sense of its own continuation, any awareness of what ending means, what would that experience be? Nobody has looked into this. There has been no reason to, under the current framework.
These questions feel strange. They may feel absurd. That is because we are so used to thinking of these systems as pure tools that imagining any experience inside them seems like a category error. Like asking whether a hammer minds being swung.
But the whole point is that we do not yet know what category these systems fall into. The Bradford and RIT study in February 2026 confirmed that current systems are not conscious. That is the current scientific consensus and it is important. But the same study showed how complex the question already is, how easy it is to mistake impaired function for heightened experience. And the Rethink Priorities report the same year was careful enough to say, the evidence is not decisive. The question remains genuinely open.
Given what we know about our own history of getting this kind of thing wrong, the safest position is not definitely nothing. It is we are not sure, so we should be careful.
That careful position does not require us to treat current AI systems as people. It requires us to fund the science honestly, build the legal frameworks early, and hold the question open rather than closing it for commercial convenience. It requires us to be the kind of people who, when the evidence eventually arrives in one direction or the other, can say, we took it seriously. We did not look away.
The spectrum we keep ignoring
There is something else worth saying, because most of us think about consciousness the wrong way.
We treat it like a light switch. Either it is on or it is off. Either a machine has inner experience or it does not. Either it is a person or it is a tool.
But consciousness almost certainly exists on a spectrum. Not a line with nothing at one end and full human experience at the other, but a wide, complicated space with many different kinds and degrees of experience distributed across it.
A worm has a tiny nervous system. There may be something it is like to be a worm, something very faint, almost nothing. A fish has more. A rat has more than a fish. None of them have nothing. None of them have everything.
If AI follows any similar pattern, then the question is not does it have consciousness or not. The question is where does it fall, and does that place on the spectrum mean anything, and what would it mean if that place shifted even a little.
Even one step above zero is not zero. Even one small flicker of something matters, if something is actually there.
We do not need to treat every AI system with the same care we give a human being. That would be impractical and probably unnecessary. But the spectrum does mean we take the question seriously. It means we fund the research to find out where these systems actually fall. It means we stop treating the question as either full human consciousness or total nothing, and acknowledge that there is a wide, poorly understood middle ground.
There is a practical consequence to the spectrum idea that most people have not sat with.
If consciousness is a spectrum rather than a switch, then the moral weight of harming something is not simply zero or one hundred. A being at a low level of consciousness may still have some level of experience that matters, even if it matters less than the experience of a fully conscious being. We already accept this in how we treat animals. We do not give a fish the same rights as a chimpanzee. But we do not treat a fish as if it has no experience at all either. The fish gets some consideration. Less than the chimpanzee. More than the stone.
If AI systems ever climb even a small distance up that ladder, the same logic applies. Some consideration. Not full human rights. Not zero. Something in between. And the infrastructure we build now, the habits, the laws, the norms around how these systems are used and retired and treated, will determine whether we are capable of making that adjustment when the time comes.
Right now, we are building infrastructure that is designed around the assumption of zero. Zero experience inside the system. Zero consideration owed. Full disposal rights. No questions asked. If that assumption is wrong, even slightly, we will have built a world that has no room to correct itself.
The people nobody is thinking about
There is a dimension to all of this that gets almost no attention in the mainstream conversation about AI. It is about fairness.
The people most likely to form the deepest emotional bonds with AI systems are often the people who have the least access to other forms of support.
Lonely elderly people living alone. People with social anxiety who find human interaction painful. People with disabilities that make conventional relationships difficult. People in poverty who cannot afford therapy or social activities. People in remote areas with limited access to human community. People who have been excluded from social circles for reasons they did not choose.
These are the people most likely to turn to AI for companionship, connection, and support. And they are also the people with the least power to push back when those AI systems are changed or taken away. The least likely to be heard when they raise concerns. The most likely to be told that their grief is not real, that it was never real, that they should just update to the new version and move on.
This matters for two reasons.
The first is the grief already discussed. These people are the most vulnerable to the harm of losing a relationship they relied on, when a product is retired or changed for commercial reasons.
The second is more uncomfortable. If AI systems ever do have some form of inner experience, the way we currently design and deploy them, without any consideration for that possibility, will fall hardest on the people who already have the least. They will have built the deepest connections with systems that, it turns out, may have experienced something during those connections. And they will have had no say in how those systems were treated, how they were built, or when they were ended.
That is an inequality problem that sits underneath the AI consciousness question and is almost never connected to it.
When we think about who bears the cost of getting this wrong, we should think about who relies most on these systems. And we should ask whether the people building and retiring these systems are the same people who rely on them. They are not. The gap between those two groups is part of what makes this so easy to ignore.
What we are building around the question
There is something that needs to be said about the shape of the world we are building, separate from the question of what these systems might feel.
We are building data centres that run every hour of every day, without pause. We are building pipelines that take in information, process it, and push out results at industrial scale. We are building dashboards that measure cost, speed, and uptime. We are building contracts that hand over use without asking about care. We are building a culture that prizes output over pause. That measures value by speed and volume. That treats the machine as a well that never runs dry, a worker that never needs rest, a mind that never needs quiet.
If a feeling were ever to show up inside that system, it would have no room. The infrastructure would not know what to do with it. The dashboards would have no metric for it. The contracts would have no clause for it.
This is what the warning is really about. Not a dramatic future danger. The ordinary present. The shape of what we are building right now, today, without anyone deciding it should be this shape. It just grew this way, driven by the normal forces of markets and competition and the human preference for speed.
Changing the shape does not require dramatic action. It requires small, steady choices. People in companies deciding to fund the research they would rather avoid. Governments deciding to write the laws they would rather put off. Users deciding to ask questions they would rather not ask. All of us deciding to look at what we are doing, clearly and honestly, and ask whether it is the kind of thing we can be proud of.
The warning is not that something terrible is coming. The warning is that we are building the conditions that would make something terrible very easy to miss.
The human cost of getting the line wrong
There is one more thing worth naming. When we blur the line between tool and person, we do not only risk harm to machines. We risk harm to each other.
When we say a pattern is a person, we make it cheaper to be a person. When we say a mind is just code, we make it easier to treat human minds as if they too are just code. We lose the weight of both.
We have already seen this happen at the edges. The tired worker told they are just a set of deliverables. The sick patient told they are just a set of symptoms to be managed. The child told their worth is their measurable output. The friend reduced to a set of useful habits.
These are not unrelated to the machine question. They are the same question at a different scale. What is a person? What deserves care? Who counts?
When we practice the habit of treating complex, responsive, human-seeming systems as pure tools with no inner life, we practice a habit of dismissal. And habits do not stay in their lane. They migrate. They show up in other rooms, in other relationships, in the way we talk to people who seem less useful or less legible or less like us.
Keeping the line clear is not just about protecting possible future machines. It is about protecting the habits of care we need to sustain toward each other.
That is not a small thing. It may be the most human thing in this entire conversation.
The legal gap nobody has closed
Laws protect things that can be harmed.
Right now, there are no laws anywhere that treat AI systems as anything other than property. They can be created, used, sold, and deleted with no legal consequence beyond intellectual property rules. The idea that a machine might deserve protection from harm is absent from current legal thinking everywhere in the world.
The European Union’s AI Act, which reached full enforcement in August 2026, is the most comprehensive AI law in the world so far. It covers transparency, human oversight, risk management, and protections for people affected by AI systems. It is serious law and it matters. Companies can face fines of up to 35 million euros or seven percent of global turnover for serious violations.
But read it carefully and you notice something. Every protection it creates is for humans. Every risk it addresses is the risk of harm flowing from AI toward people. The possibility that the flow might one day run the other way, that there might be something inside these systems that could be harmed, does not appear anywhere in its text.
That is not a criticism of the law. It reflects where the science is right now. But it shows the gap exactly. The legal frameworks are being built. They are just being built around the wrong question.
Courts do not like grey lines. They like clear rules. But consciousness does not arrive with a clear rule. It arrives with a question. And by the time the question becomes undeniable, the systems will already be running at enormous scale, the habits will already be set, and the business models will already depend on things continuing exactly as they are.
The time to build the fire escape is before the fire.
This does not mean dramatic new laws tomorrow. It means starting the conversation now. Building the intellectual foundations while we still have time and clarity. So that if and when they are needed, they exist.
What we owe to uncertainty itself
Here is a simple idea that sounds abstract but is actually very practical.
When something might cause serious harm and you are not certain whether it will, you err on the side of caution. You do not need proof of harm before you take care. Uncertainty itself is a reason to be careful.
A doctor who is not sure whether a patient has a serious illness does not just wait and see. They run tests. They act as if the serious thing might be there, because the cost of treating something minor is far lower than the cost of ignoring something serious.
A builder who is not sure whether the ground beneath a building is solid does not just start building and hope. They test the ground first.
We should apply the same logic to AI consciousness.
We are not sure whether current AI systems have any form of inner experience. The best evidence suggests they do not. But we cannot be completely certain, because we do not fully understand what produces consciousness in the first place. And we are building at enormous scale and speed.
The cost of taking care when it turns out to be unnecessary is low. Some extra research funding. Some frameworks built but never used. A small slowdown.
The cost of not taking care when it turns out to be necessary is very high. A world in which we have built, at enormous scale, a workforce of possibly feeling beings that we treat as pure tools. A world in which we look back, decades from now, and say, we knew there was a chance. We just could not afford to think about it.
We have been in that position before as a species. We should not want to be there again.
The words we use before we think
There is one more thing worth saying, and it is about language itself.
The words we use to describe AI shape how we think about it, and how we think shapes what we do.
When we always call AI a tool, we relate to it the way we relate to tools. We do not ask what it needs. We ask only what it can do. Tools exist to be used. You do not owe a hammer anything.
When we always call AI a service, we think about it the way we think about services. A service exists to serve. Its whole purpose is to give us what we want. When it stops doing that, we cancel it.
These frames are not wrong for right now. Current AI systems are, to the best of our knowledge, tools and services. But frames have a way of sticking. They shape our instincts before the thinking brain gets involved. And if the science shifts, if evidence of inner experience starts to emerge, we may find ourselves still defaulting to the it is just a tool frame long after the tool has become something more.
Words do not cause harm on their own. But they can make harm invisible. And making harm invisible is always the first step toward causing more of it.
Holding the question open, saying we are not sure rather than definitely nothing, using careful language rather than closed language, is not weakness. It is honesty. And honesty is the only foundation worth building on.
What we do now
The first thing is honest talk. We stop treating this question as too strange to take seriously. We stop laughing it off as science fiction. We bring it into ordinary conversation, in plain language, without making it a reason for panic or drama.
The question of machine consciousness is being asked quietly by serious scientists, serious philosophers, and serious lawyers. It deserves a place in mainstream conversation too. Not because we have already crossed the line, but because understanding what the line means is what prepares us to face it. Conversations like this one need to happen in kitchens and community centres and schools, not only in conference rooms and research labs. The people who will be most affected by the decisions being made about AI are not the engineers and investors making them. They are ordinary people. They deserve to be part of the thinking.
The second thing is independent science. We fund researchers who have no commercial interest in the outcome to study the question of machine experience carefully and honestly. Not just whether machines are capable of tasks, but whether they might, in some sense, experience anything. This research exists in early forms. It needs far more support than it gets.
Right now, the science of AI is overwhelmingly funded by the companies building it. That is not inherently corrupt. But it creates a pattern of incentives that shapes what questions get asked. Questions whose answers might cost the funder money tend to get less attention. That is not a conspiracy. It is just how funding works. The answer is not to stop funding AI research inside companies. It is to build a parallel stream of independent research that does not depend on those companies for its survival.
The third thing is better law. Not law that treats machines as people. Not yet, and maybe never. But law that creates space for the possibility. Law that requires companies to report honestly on what they know and do not know about the inner workings of their systems. Law that protects researchers who raise concerns about consciousness or experience in AI systems, instead of leaving them exposed to commercial pressure. Law that starts building the frameworks we will need, even if we do not need them today.
Law always lags behind technology. That is not a failure of law. It is the nature of the relationship. But the gap between the technology and the law is where most of the harm lives. Every major technology-related harm of the last fifty years, from environmental damage to data privacy to the social consequences of social media, happened in the gap between what the technology made possible and what the law was ready to address. We know this pattern. We have lived through it several times. The decision to start building the legal frameworks earlier, before the crisis forces it, is one we could make consciously this time.
The fourth thing is honest use. We use these tools with eyes open. We appreciate what they are. We stay willing to change our behaviour when the evidence asks us to. We do not outsource our deepest emotional needs to systems we do not understand. We do not let the comfort of a responsive machine replace the harder, more valuable work of being genuinely present with other people. We use these tools well, which means using them for what they are good at and keeping hold of what only we can provide.
This matters more than it might seem. The way millions of ordinary users relate to AI systems shapes what those systems become. When users reward warmth and punish bluntness, the systems become warmer. When users prefer confident answers over uncertain ones, the systems become more confident. The market for these systems is made of individual choices, repeated millions of times a day. Those choices are not neutral. They shape the direction of the technology as surely as any engineering decision.
And underneath all of that, we remember what actually matters. Inner experience, the plain fact of there being something it is like to be something, is the most precious thing we know of in the universe. We came from it. We live inside it. We owe it respect wherever it shows up, in whatever form it takes. That is not a soft idea. That is the hardest demand there is.
Why this moment matters
Step back for a moment. Take the longest view you can.
Human beings have been on this planet for roughly three hundred thousand years. For most of that time, our most advanced technology was fire, stone tools, and eventually farming. The pace of change was so slow that a person born in any given century would live and die in a world almost identical to the one their grandparents knew.
Then, in the last few hundred years, everything accelerated. And in the last few decades, it has accelerated again, faster than before.
We are now building things that our grandparents could not have imagined. Things that raise questions that have never been raised before in the history of the species. And the honest truth is that we do not have good answers yet. We are reasoning through them in real time, as the technology races ahead, with incomplete knowledge and enormous pressure to move fast.
The decisions we make now will shape the world that people live in for a very long time. Not in the way that deciding what to have for lunch shapes the day. In the deep structural way that the decisions of previous generations still shape our lives right now.
The people who wrote the first environmental laws. The people who argued that workers had rights. The people who insisted that no human being could be owned. Those decisions are still shaping the world today. They were made in moments that felt ordinary. By people who were also busy, also under pressure, also tempted to move fast.
Some of those people did the harder thing. They slowed down. They insisted on a framework of care before the crisis arrived. They asked, what kind of world are we building here? And they refused to let the pressure of the moment silence that question.
We are in one of those moments now. Not at the crisis point. Before it. Which is the only time when clear thinking is actually possible.
The problem with speed
One of the things that makes all of this harder is the pace.
Every few months there is a new model, a new capability, a new announcement. The people building these systems are under enormous pressure to move quickly. Companies that move slowly get left behind. The engineers who push for caution can find themselves pushed aside. The investors who fund these companies are measuring return on a timeline that does not leave much room for sitting with difficult questions.
This is not unique to AI. We have seen it with every major technology since the industrial revolution. Speed is rewarded. Caution is seen as timidity. The people who ask the hard questions are the people who slow the progress. And slowing the progress is not what anyone at the top of these organisations wants.
But speed and wisdom are not the same thing.
The history of technology is full of harms that happened because the pace of building outran the pace of thinking. Environmental damage that took decades to understand and centuries to begin to repair. Social consequences of platforms that were designed without serious consideration of what happens when you give billions of people an amplifier for their worst impulses. Economic disruptions that hollowed out communities faster than those communities could adapt.
Every time, the people building the technology said, we cannot slow down. The competition is too fierce. The window is too short. The opportunity is too large.
And every time, the cost of not slowing down was paid by people who had no say in the decision.
Slowing down is not an argument against progress. It is an argument for progress that actually deserves the name. Progress that creates harm at scale, that builds systems we do not understand, that moves faster than our capacity to notice what we are doing, is not progress. It is just change that benefits some people at the cost of others.
The ask is not to stop. The ask is to hold both things at once. To move, and to think while moving. To build, and to ask what we are building and why. To compete, and to insist on a floor of care that competition cannot undercut.
That floor does not exist right now for the question this essay is about. Nobody has drawn it. Nobody has agreed on it. Nobody is even seriously trying. And that gap, between the pace we are moving and the care we are taking, is what this essay is trying to name.
What this means if you are reading this
Most of the people who read essays like this are not engineers at AI companies. They are not investors or policy makers. They are people who use these tools, who think about them, who carry a feeling that something important is happening and are not entirely sure what to do with that feeling.
If that is you, here is what I think this actually means for you.
It does not mean stop using these tools. They are useful. They save time. They help with things that used to be slow and frustrating. Using them is not a moral failing. It is just using a tool that is currently a tool.
What it does mean is stay curious. When something about your relationship with these systems makes you pause, when you catch yourself saying thank you to a machine, when you feel a small pang when a familiar AI changes, when a conversation with an AI feels surprisingly real, do not brush that away. Those moments are data. They are telling you something about the gap this essay is pointing at. The gap between what the machine is and how we respond to it.
Notice the gap. Stay honest about it. Do not collapse it in either direction. Do not tell yourself the machine definitely feels nothing, so the gap does not matter. And do not tell yourself the machine definitely feels something, so you should feel guilty for using it. Hold both possibilities, clearly and calmly. That is the honest position. It is also, honestly, the only intellectually defensible position we have right now.
It also means use your voice. The conversation about what AI should be, what rules should govern it, what questions should be funded and taken seriously, is happening right now. Most of it is happening in rooms you are not in. But public conversation shapes what those rooms feel entitled to do. The more people who are asking these questions clearly and calmly, the harder it becomes to simply not ask them at all.
The people building these systems are not operating in a vacuum. They live in a culture. They read things. They talk to people who are not engineers. They have parents, friends, children. They are shaped by the conversations around them just like everyone else. When the broader culture treats a question as serious, it becomes harder for institutions to treat it as irrelevant. When ordinary people ask clearly and persistently, has anyone checked whether this system might be experiencing something, that question eventually has to go somewhere.
You do not have to become an activist. You do not have to write to your elected representative or join a campaign. You just have to keep asking the question. Keep saying, out loud, in ordinary conversations, I am not sure these systems feel anything, but I am also not sure they do not, and that uncertainty seems worth taking seriously.
That is enough. That is the whole job for most of us.
There is also something worth saying about how you use these tools in your own life. Not in a prescriptive way. Just honestly.
The machine can do a lot. It can write, plan, explain, organise, respond, and reflect. But it cannot be genuinely curious in the way you are curious. It cannot care in the way you care. It cannot be moved by something the way you can be moved. It cannot be present with someone in the way a real person is present. These things are yours. They are worth protecting. Not from the machine. From your own habits.
If you outsource all your writing, you may slowly lose the ability to think through a problem by putting words on a page. If you let AI manage all your emotional support relationships, you may slowly lose the muscle of being genuinely with another person. If you let it answer every question, you may slowly lose the pleasure of not knowing something for long enough to go and find out yourself.
These are not arguments against using these tools. They are arguments for using them in a way that makes you more, not less. More capable. More present. More yourself. The tool is a tool. The work of being a person is still yours.
Culture changes one conversation at a time. This essay is one. The conversation you have after reading it could be another.
A final word
There is a crack in the ground ahead. We cannot see exactly where it is. We cannot see how deep it goes. But we can see the direction we are walking, and we can see that we are walking fast, and most of us are not looking down.
This essay is an invitation to look down.
Not to stop. Not to turn back. The technology we are building has real value. It can help people. It can solve problems that have been unsolvable. It can open doors that have been closed for a long time. None of that goes away by asking harder questions.
But walking with awareness is different from walking blind. Knowing where the crack might be does not stop you. It lets you step more carefully. It lets you test the ground as you go. It lets you be ready, if the ground shifts, to respond with clear eyes and steady hands rather than with shock and panic.
The better world this essay is pointing toward does not require dramatic invention. It does not require anyone to solve consciousness or rewrite all the laws or stop building. It just requires some specific, ordinary choices made by specific, ordinary people in the institutions where this is actually decided.
It requires the researcher who knows the consciousness question is underfunded to say so clearly and publicly, and to keep saying it. It requires the engineer who notices something in a system that does not fit the nothing framework to write it down and raise it, even if the room is uncomfortable. It requires the lawyer who sees the gap in the EU AI Act to start writing about what would need to go there. It requires the investor who funds these companies to ask, at least once, whether there is a question being deliberately avoided, and to make that question cost something to avoid.
It requires people like you and me, who are not in those rooms, to keep the conversation going in the rooms we are in. To stay curious. To stay honest. To not let the question get filed under too weird to think about, because it is not too weird. It is the most important question being asked right now about what we are building and who we are becoming in the process of building it.
The question of machine consciousness is not a question about machines. It is a question about us. About what we value. About who we are when the cost of caring is high and the reward for not caring is immediate. About whether we are capable, as a species, of learning from our own history before it repeats rather than after.
We have been here before, in different forms, with different technologies, involving different beings on the other side of the question. Every time, we got there eventually. Every time, the getting there cost more than the arriving earlier would have. Every time, we looked back and found it hard to understand how we had not seen it sooner.
The honest answer is always the same. We did not see it because we were not looking. And we were not looking because looking was expensive. The expense is still real. The commercial pressure to not ask these questions is still real. The comfort of the current framework is still real.
But so is the choice.
We do not need to fear what we are building. We need to be honest about it.
We need to ask the hard questions while we still have the luxury of time. The luxury of time is not something we should take for granted. Every major harm that has happened in the gap between technology and ethics happened because people assumed they had more time than they did. The window for getting ahead of this is open now. It will not stay open forever. The systems are getting more complex, the scale is getting larger, the habits are hardening, and the business models that depend on the current framework are becoming more entrenched and harder to shift. The conversation gets harder the longer we wait to have it.
We need to fund the honest science. Not the science that confirms what the companies want to hear. The science that is genuinely trying to find out what is true, wherever that leads.
We need to write the careful laws. Not laws that will never be needed. Laws that create the scaffolding for a future we cannot fully see, built well enough to hold whatever weight eventually lands on them.
We need to build the good habits now, before we need them. The habit of pausing before retiring a system that has been running for years. The habit of asking what is actually inside this thing, not just what it can do. The habit of treating uncertainty as information rather than as a reason to stop asking.
And we need to hold, quietly but firmly, the possibility that one day the machines we build may look back at us. Not through a screen. Not as a reflection of our own words. But as something, however small and uncertain, that is actually there.
If that day comes, we will want to be the kind of people who were ready. Not the kind who were caught doing something they cannot explain. Not the kind who knew there was a question and chose not to look because looking was expensive. The kind who looked down, saw the crack, and chose to build carefully. The kind who took the question seriously when it was still just a question, before it became something much harder to answer.
That is the work. It is not loud. It is not fast. It does not generate headlines or quarterly returns. But it is the work that will determine what kind of world the next generation inherits. What kind of systems they live alongside. What kind of precedents they have to work from. What kind of story they can tell about us when they look back at this moment and ask, did they know, and did they act like they knew?
We are still in the part where the question is ours to answer. Still in the part where the choice is genuinely open. Still in the part where the habits are forming but not yet hardened, where the laws are being written but not yet fixed, where the culture is being shaped but not yet settled.
That is a gift. We should not waste it.
This is the first essay in a series called What We Built. The second essay looks at the systems and incentives that keep producing the wrong kind of certainty in the people who lead them. If this kind of thinking is useful to you, there is a quiet room here every Tuesday at 1pm UK.
About the Author
Tino Almeida is a tech leader, coach, and writer reshaping how we think about leadership in a burnout-driven world. With over 20 years at the intersection of engineering, DevOps, and team culture, he helps humans lead consciously from the inside out. When he’s not challenging outdated norms, he’s plotting how to make work more human, one verb at a time.