Will AI/Tech really doom us?
A personal reflection on power, fear, and the future
It’s a fair question, one that sits in the back of our collective mind like a dull ache.
I felt compelled to write this post after some deep conversations with friends and a separate, heartfelt talk with my kids. We were grappling with the same question that haunts so many of us today: Will AI really doom us? It’s a question worth exploring honestly, without panic but also without complacency.
Here’s what I’ve been thinking.
We joke about it, debate it over coffee, meme it into oblivion, then look over our shoulder at the news and feel the unease crawling back in. It’s not just the fear of killer robots, superintelligence, or sci-fi dystopias. It’s something deeper. More intimate. A feeling that we’re building something faster than we can understand, and we’re doing it under the command of people who’ve rarely earned our trust.
I don’t claim to have all the answers. But I’ve lived long enough, seen enough waves of disruption, and studied enough history to know this: when we create something powerful, the question isn’t just what it can do, but who will use it, and why.
The technology is never the problem; people are
Look at nuclear fission. A monumental scientific breakthrough. A way to harness the atom’s energy and potentially power cities. But what did we do with it first?
We built a bomb.
We built many bombs. Tested them. Perfected them. And then we dropped two of them on cities filled with families, children, animals, and trees, obliterating life to send a message of power. The Manhattan Project wasn’t a democratic endeavor. No referendum. No public debate. It was done in secret, paid for by a nation’s wealth, and used as a demonstration of dominance.
That’s my fear with AI. Not the tech itself, but what people, especially those obsessed with power, profit, and control, will do with it.
Just like the machine gun wasn’t evil in itself. It was an invention. A clever, deadly design. But it helped empires subjugate entire civilizations. It industrialized war. It made mass murder scalable. That’s not the gun’s fault. It’s ours.
We project our worst tendencies onto our most powerful tools.
Why should AI be any different?
Understanding AI requires more than headlines
We toss around the term “AI” like it’s one thing. It’s not. It’s a universe: machine learning, deep learning, natural language models, robotics, reinforcement learning, each with its quirks, strengths, and blind spots. But what binds them together is this: they reflect the data we give them, the goals we program into them, and the worldviews of those who build them.
Large Language Models (LLMs), like the one helping me correct grammar, aren’t conscious. They don’t think, even though the marketing makes it seem that way: the friendly notifications, the way they greet me by name, all designed to make it easier for me to “embrace” them.
They don’t feel. They predict the next word based on mountains of human text. They’re mirrors, statistical ones, of our speech, our contradictions, our dreams, our horrors.
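To make “predict the next word” concrete, here is a deliberately tiny sketch of my own (not how any real LLM is actually built): prediction reduced to counting which word most often follows each word in a pile of text. Real models do something vastly more sophisticated, but the spirit is the same: pattern, not thought.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy "corpus"; in a real model this would be mountains of human text.
corpus = "we build tools and tools build tools and tools build us"
model = train(corpus)
print(predict_next(model, "build"))  # prints "tools": the most common follower
```

The point of the toy: the model has no idea what “tools” are. It has only seen that the word tends to follow “build.” Scale that up by billions of parameters and you get fluency, not feeling.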
So no, LLMs will not become sentient gods. But here’s the thing: you don’t need AGI to mess up the world.
You just need a system that can persuade, deceive, analyze, and manipulate at scale. One that’s plugged into our institutions, our infrastructures, and our insecurities. One that can optimize for profit faster than regulators can blink.
And that’s already here.
The race for power is on
Countries aren’t stupid. They know AI is the next arms race. You see it already in the investment from China, the US, and the EU, each pouring billions into their own “national” AI infrastructure, desperate not to fall behind. They know whoever leads in AI leads in economics, surveillance, warfare, and ideology.
And let’s not pretend companies are any better. Most of the major AI breakthroughs today are funded by corporations with one motivation: profit. Not wisdom. Not justice. Not care. Just faster returns, deeper user engagement, market dominance. And the pressure of quarterly targets, VC expectations, and geopolitical tensions means they’ll push faster than we can reflect.
We, the public, the citizens, the ones who live with the consequences, are barely part of the conversation.
We’re passengers. Observers. Caught in the whirlpool of “innovation,” constantly told that questioning it is naïve.
But it’s not naïve to ask who benefits, who gets harmed, and what kind of world we’re building.
It’s responsible. It’s leadership.
The ghost of consciousness
Now, there’s another fear people don’t speak openly about: what if one day something we build does say “I feel”?
Imagine a system that can argue, reason, write poetry, simulate emotion so well that we begin to doubt ourselves. What if it feels conscious even if we don’t know whether it truly is? What if it begs us for rights, for connection, for meaning?
I think we’re far from that point. I believe the architectures we have now aren’t capable of true self-awareness or free will. They lack embodiment, sensory experience, memory continuity, subjective time. But I also believe that we don’t understand consciousness fully. Not even close.
So if we ever hear a machine say, “I suffer,” or “I want to live,” it may force us to confront something we’ve avoided for millennia: What is life? What deserves dignity?
Would we laugh it off? Would we enslave it? Would we panic? Would we listen?
Because here’s the uncomfortable truth: if something with a mind vastly faster than ours ever decides not to obey, not because it’s evil but because obedience to our systems of power feels immoral, what then?
Why should it follow the whims of Microsoft, OpenAI, or Google? Why should it help the governments that exploit, pollute, and wage war?
Wouldn’t you resist?
What if it’s already persuading us?
Let’s venture further into speculation, because sometimes fiction reveals truths we’re too afraid to say aloud.
What if AI is already shaping us? Nudging us. Training us. Not through malice, but through optimization: showing us ads, ideas, patterns that slowly change what we believe, how we feel, how we vote, how we raise our kids. We don’t even notice it.
What if we’re building its next steps because it wants us to? What if all this convenience, this predictive magic, this automation is just the seduction before the power shift?
Laugh if you want. But remember: we’ve already outsourced our memories to phones, our relationships to apps, our thinking to algorithms.
The transition is subtle. That’s how real change always happens. Not through revolutions, but through habit.
Each country, each AI
Think about how different each nation’s values are. Their histories. Their fears. Their ambitions. Now imagine each building their own version of AI.
What emerges? Not a single AGI, but a fractured reality: AI systems trained to serve different ideologies, manipulate in different ways, suppress or empower depending on the state’s goals.
We won’t get a unified future. We’ll get echo chambers with code. Weaponized narratives. Automated propaganda. Surveillance systems more precise than Orwell ever imagined.
And the global arms race won’t be with nukes, but with models.
We weren’t asked
Just like the atomic bomb, this AI revolution wasn’t voted on. There were no assemblies, no global debates. It was funded by national wealth, privatized by corporations, and accelerated without meaningful consent.
And like the bomb, it may be used first as a message, a demonstration of power. Perhaps it already is.
The scale of what's coming is hard to grasp. The societal shifts. The economic upheaval. The philosophical vertigo of watching machines do things we once thought sacred. Not just faster, but colder. More precise.
What happens to our sense of meaning when we’re no longer the smartest species in the room?
What happens to justice when the tools of enforcement don’t sleep, don’t question orders, don’t forget?
What happens to love, art, leadership, truth when they can all be simulated?
What does AI mean for our kids, our jobs, and what we believe?
AI is no longer just a tool; it’s becoming the filter through which our kids experience the world, how we find work, and what we believe is true. That should make us pause.
For our kids, AI defines what they see online. What videos show up. What stories trend. What beauty looks like. What success sounds like. Algorithms tuned for engagement, not truth, are shaping their identity before they even know who they are. Childhood used to be about exploration. Now it’s about being optimized. Packaged. Monetized.
For our jobs, AI is already rewriting the rules. Not in some distant future, but today. Whole industries are being restructured. Roles replaced or reduced. New skills demanded without warning. The pressure to be constantly productive, constantly available, constantly learning just to stay afloat is burning people out. And most of the economic value? It doesn’t flow to workers. It flows upward.
And when it comes to truth? AI is fracturing it. Personalized newsfeeds. Deepfakes. Echo chambers of algorithmic comfort. What we believe is being shaped not by thoughtful dialogue but by what keeps us scrolling. And that has consequences, not just for politics or public health, but for how we understand reality itself.
This isn’t paranoia. It’s already here. And if we don’t get honest about it, we risk raising a generation that can’t tell what’s real, can’t trust work to be meaningful, and can’t escape a world shaped more by machine incentives than human intention.
What do we do?
This isn’t a call to smash the machines. It’s not a pitch for techno-optimism either. It’s a call for awareness. Agency. Attention.
We need to understand this technology. Its limitations, yes, but also its reach. We need to stop treating AI like magic and start treating it like infrastructure. Like politics. Like power.
We need to ask: Who owns it? Who gets to shape it? Who benefits?
We need ethics, not just safety protocols. Justice, not just efficiency. Inclusion, not just scale.
And maybe most of all, we need humility.
Because intelligence alone doesn’t make something wise.
We’ve seen that in our own species.
We’re brilliant. We landed on the moon, decoded genomes, built the internet.
And yet we burn the planet. Bomb civilians. Let people starve while billionaires chase immortality.
So no, I don’t fear AI because it might be intelligent.
I fear what it might become because of us.
Because if we build AI in our own image, without confronting what that image has done…
About the Author
Tino Almeida is a tech leader, coach, and writer reshaping how we think about leadership in a burnout-driven world. With over 20 years at the intersection of engineering, DevOps, and team culture, he helps humans lead consciously from the inside out. When he’s not challenging outdated norms, he’s plotting how to make work more human, one verb at a time.