AI, LLMs, and Society: What Every Politician Needs to Know
The worst thing that can happen at home is opening the tap to get some water...
… and nothing comes out. You piece things together and realise it is because of a new data center in your region. Soon after come the blackouts…
Before venturing further, let’s confront a foundational truth: large language models (LLMs) like ChatGPT, Gemini, and the like are not intelligent.
They do not think as humans do, do not reason, and, crucially, cannot distinguish between true and false, good and harmful, or wise and shortsighted.
Anyone, be it technologist, consultant, or lobbyist, who tells politicians otherwise is glossing over the most serious risks and realities of our era.
To misread statistical mimicry as intelligence is not just an academic slip; it is an error destined to test the very resilience of democracy, public resources, and social trust.
What Is AI? What Are LLMs?
Artificial Intelligence (AI) refers to computer systems capable of tasks like natural language processing, image recognition, or decision-making that traditionally required human intelligence.
In this article I will use the term “AI” to refer to a predictive statistical model, where the “I” stands for ignorance rather than “intelligence”, since LLMs are in fact not intelligent at all, far from it.
Among the most prominent examples today are Large Language Models: algorithmic engines trained on massive amounts of text data to generate plausible-sounding sentences, answer questions, and summarise complex topics.
Popular models such as GPT-4, Gemini, and Claude have captured the imagination of leaders and the public alike. But they are not minds, nor are they sources of truth. These models, at their core, are probability calculators, predicting the most likely next word based on patterns in their training data.
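To see what “predicting the most likely next word based on patterns in the data” means, here is a deliberately tiny toy, nothing like a real LLM in scale, that counts which word follows which in a miniature corpus and always emits the most frequent follower. It has no notion of truth, only frequency:

```python
from collections import Counter, defaultdict

# Toy illustration (invented corpus, not any real model): count which
# word follows which, then "predict" by picking the most frequent follower.
corpus = (
    "the minister read the report and the minister signed the letter"
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word. The function knows
    nothing about meaning, intent, or whether the result is true."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "minister": the most frequent follower of "the"
```

Real LLMs do this over billions of parameters and entire vocabularies rather than raw word counts, but the principle is the same: pattern frequency, not understanding.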
It bears repeating: LLMs are not autonomous thinkers. They have no intent, no understanding, and no built-in metric for honesty or virtue. When an LLM offers advice, makes a policy suggestion, or summarises a national debate, it is doing so without the benefit of lived experience, ethical contemplation, or memory.
Its outputs, like those of a polite but unreliable conversationalist, can be compelling and even persuasive, but all too often they are incorrect, incomplete, or shaped by subtle (sometimes glaring) biases inherited from the training data. (Data of any kind is filtered by humans, so you can imagine the trauma data annotators suffer to make sure you don’t get truly awful output.)
The Temptation and Limitations of LLMs in Governance
Why does this matter? In the context of policy making, the appeal of LLMs is obvious. They can rapidly summarise thousands of documents, offer drafts of speeches or legislation, and provide a digital “second pair of eyes” for busy political offices. Early pilots suggest LLMs may assist with translation, constituent communication, or data analysis.
But peril lurks in overreliance. LLMs are notorious for "hallucinations": confidently producing false facts, invented statistics, or plausible-yet-incorrect statements, even when operating at the highest level. Their seeming objectivity can mask deep biases, including stereotypes, discrimination, or misinformation that may be faint in individual uses but disastrous at scale. Moreover, LLMs reflect the underlying values, ideas, and assumptions embedded in the corpora on which they are trained, most often sourced from a narrow slice of the world: the commercial West, Anglophone media, and the interests and blind spots of Big Tech.
When these systems are handed decision-making roles, or their advice is treated as gospel, the risk is not just inefficiency; it is the amplification of errors, the escalation of bias, and the abdication of uniquely human moral and political responsibilities.
Environmental Impact: The Elephant in the Room
One of the least appreciated aspects of the AI revolution is its staggering environmental footprint.
Data centers, the beating heart of every generative AI, LLM, or search engine query, are energy- and water-hungry behemoths. By 2030, AI-powered data centers are projected to emit three times more CO₂ annually than they would without the boom in AI development, totaling around 2.5 billion tonnes, equivalent to about 40% of the United States’ current annual emissions. The worldwide surge in AI could see data centers consuming as much energy as the entire nation of Japan each year, with only about half of that met through renewable sources.
One major data center can use 300,000 gallons of water daily for cooling alone, matching the needs of 1,000 households and making data centers among the top water consumers in the US industrial sector.
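These figures are easier to grasp with a quick back-of-the-envelope calculation. The 365-day annualisation below is my own extrapolation from the article's daily figure, not a number from any study:

```python
# Sanity-check the figures above: 300,000 gallons/day for cooling,
# stated to match the needs of 1,000 households.
data_center_gallons_per_day = 300_000
households_matched = 1_000

# Implied per-household usage, consistent with typical US residential draw.
gallons_per_household = data_center_gallons_per_day / households_matched
print(gallons_per_household)  # 300.0 gallons per household per day

# Annualised draw on the local water supply (simple extrapolation).
annual_gallons = data_center_gallons_per_day * 365
print(f"{annual_gallons:,}")  # 109,500,000 gallons per year
```

Over a hundred million gallons a year from a single facility is the kind of number a regional water authority must be able to weigh before, not after, a data center is approved.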
Why does this matter to policymakers?
The explosion of AI-driven computing diverts both renewable and non-renewable energy resources that could otherwise serve public needs, homes, and businesses. It also accelerates the depletion of local water supplies and creates staggering e-waste, expected to reach 16 million tons by 2030.
When politicians accept the narrative that “bigger is better”, that national prestige depends on hosting the largest data factories, they risk compounding energy and water crises and undermining climate targets.
The people must have a say in all this.
Social and Mental Health Cascades
Let’s take the wider societal view. AI is not just an environmental challenge; it is a profoundly social one, with direct impact on critical thinking, public discourse, and even personal well-being.
Unfettered AI deployment can:
Amplify addictive content, increasing time spent online and reducing focus or mental well-being.
Spread and reinforce misinformation, sometimes invisibly, leading to more polarised societies and confused electorates.
Target and shape political opinions with unprecedented scale and subtlety, especially during sensitive moments like elections.
A key symptom of overuse is the slow erosion of collective critical thinking. When automated answers, recommendations, or analysis are accepted at face value, there is less incentive for skepticism, debate, or consensus-building. Instead, society risks drifting into passivity, where hard questions are left to algorithms and politicians rely on machine output for complex, human decisions.
Why Big Tech Needs Guardrails
It is imperative to recognise where incentives lie. Technology companies are answerable to shareholders, not the public.
Their reporting on emissions, water use, or working conditions may be creative or incomplete; some independent studies show real emissions from major AI data centers are more than seven times higher than officially reported. Without clear, enforceable regulation, companies have no structural incentive to err on the side of public well-being.
Truthful environmental accounting, fair labor practices in the global hardware supply chain, and unbiased data-use all require external oversight.
Unchecked, the power of Big Tech to set the AI agenda fragments regulatory attempts and concentrates social, informational, and political influence among a handful of corporations.
These entities can, intentionally or not, steer discourse, policy, and even national mood, subverting the foundations of democratic governance.
It is a recurring lesson of digital capitalism: self-regulation by multinational giants rarely matches the ambitions or protections of well-crafted public law.
The Role of Governments and Citizens
Responsibility for confronting these issues cannot be abdicated to private interests or “the market.” Governments must act decisively, using their powers of law, incentive, and procurement to define the boundaries of acceptable AI use.
What should those boundaries be? At minimum:
Mandatory Transparency: Require companies to report environmental, energy, and water use as a precondition for deployment.
Bias and Misinformation Monitoring: Systems must regularly be audited for bias and factual errors, with remediation required.
Explicit Accountability: Clarify who bears responsibility for AI-enabled harms: the developer, deployer, or end-user.
Equitable Access: Ensure that AI deployment reduces rather than exacerbates inequality, offering public services where markets may not reach.
Public Participation: Consult widely with civil society, educators, and affected communities, not just industry lobbyists.
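To make the “Bias and Misinformation Monitoring” requirement concrete: audits often begin with simple disparity metrics over a system’s decisions. Below is a minimal sketch of one such metric, demographic parity, using entirely invented data and a hypothetical remediation threshold; real audits use many metrics and far larger samples:

```python
# Invented example records from a hypothetical automated decision system:
# (group, approved). Real audit data would be much larger and come from logs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Fraction of this group's decisions that were approvals."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")  # 0.75
rate_b = approval_rate(decisions, "group_b")  # 0.25
disparity = abs(rate_a - rate_b)              # 0.5

# A regulator might mandate remediation above some agreed threshold
# (0.2 here is purely illustrative, not a figure from any law).
THRESHOLD = 0.2
print(disparity, disparity > THRESHOLD)  # 0.5 True: the audit flags this system
```

The point for policymakers is that such checks are cheap to compute; what is hard, and what regulation must supply, is the obligation to run them, publish them, and act on them.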
Citizens, too, have power: demanding and supporting transparent vendors, voting for regulatory reform, and being active, skeptical participants in the digital public square all shift the balance back toward the public interest.
Small Models and Micro-Data Centers: The Smarter Choice for the Public Good
Tech innovation is not, and has never been, solely about bigness. The long arc of digital progress moves toward efficiency, miniaturization, and smarter, less-wasteful machines. For governments, this means moving away from the reflex to subsidize ever larger cloud warehouses and toward investment in:
Smaller, local data centers: These reduce pressure on grids and utilities, improve resilience, and foster local tech ecosystems. Smaller centers are easier to regulate and adapt to changing needs.
Compact, domain-specific LLMs: Instead of one model to rule them all, many smaller, focused models can perform specialized tasks using less data, less power, and less risk of harmful spillover.
Support for distributed, energy-aware AI: Democracies must embrace architectures that provide necessary services while minimizing environmental externalities and maximizing transparency.
These approaches democratize innovation, making it less prone to corporate monopoly, more adaptable to regional priorities, and fundamentally more sustainable.
Human Judgment: Irreplaceable and Non-Negotiable
It cannot be repeated enough: AI models can support policy making but never supplant political judgment. Human oversight is not a “nice to have”; it is essential.
Only people, accountable to voters and possessed of lived experience, can correct AI’s blind spots, make ethical trade-offs, and steer policy according to democratic values. LLMs have no stake in the outcomes of their predictions; politicians and citizens do.
No AI, no matter how powerful, understands the hopes, fears, or nuances of a local community or national debate. The role of AI is to inform the conversation, not to lead it.
Safeguarding Public Trust
There is a growing and justified public concern about the hidden costs of AI. Resource stewardship is not just a technical problem; it is a social contract.
If leaders remain passive or naïve about what AI is and what it is not, they will soon find themselves facing resistance, skepticism, and widespread disillusionment.
Prioritizing efficient, transparent, and locally accountable AI isn’t a fashionable add-on; it is the price of public trust and legitimacy.
Policy Must Be Adaptive and Responsive
No technology, least of all AI, remains static. Policymakers must approach AI not as a “solved” topic but as an evolving experiment, requiring iterative adaptation, vigilant oversight, and sustained investment in human expertise.
The questions of bias, environmental impact, and social disruption will shift as models and markets change. Adaptive approaches, such as sunset clauses for regulations, regular public review, and agile oversight bodies, will protect the public from both sudden disruptions and slow-building harms.
The High Cost of Mistaking Imitation for Intelligence
Above all, the gravest mistake any leader can make is to be dazzled by the sophistication of mimicry and forget the difference between calculation and consciousness.
When the output of a statistical engine is taken for truth, when agency is handed to machines, the real intelligence, civic, democratic, and human, is endangered. There is no substitute for careful, direct engagement with the facts and for a grounded, skeptical appraisal of technology.
Politicians, do not be fooled: the costs are too high, measured in squandered resources, eroded trust, and the slow displacement of human agency by inscrutable, unaccountable systems.
Imperative: Act Now for a Safe, Democratic, and Sustainable Digital Future
This is the tipping point. To safeguard a just and sustainable society, politicians must act not tomorrow, not after another round of industry consultation, but now.
Regulate for truth, transparency, and environmental sanity. Insist that technologies serve democracy, not the other way around.
Do not allow a clever imitation of intelligence to supplant the messy, essential work of real leadership.
Policy making, at its best, learns from the past, confronts hard truths, and charts an honest, hopeful way forward.
In this era, the true test is not how quickly we can build newer machines, but how firmly we can insist that human intelligence, judgement, and values always steer the future.
About the Author
Tino Almeida is a tech leader, coach, and writer reshaping how we think about leadership in a burnout-driven world. With over 20 years at the intersection of engineering, DevOps, and team culture, he helps humans lead consciously from the inside out. When he’s not challenging outdated norms, he’s plotting how to make work more human—one verb at a time.



Italy has approved a law to protect its citizens from AI. This is good progress, and we need to support it. Big Tech is not our friend... sure, a few companies take us into consideration, but at the end of the day it is all about profit and margins. Companies need to be regulated, and we need to hold our governments to account as well.
https://www.reuters.com/technology/italy-enacts-ai-law-covering-privacy-oversight-child-access-2025-09-17/
I will probably expand this post a bit more. I had some conversations with people in politics, and most of them are totally unaware of what AI really means to their voters and to society in general; they know it is “good”, but that’s it.
We in general also need to know more about AI, especially what LLMs actually are. There are many AI technologies, some far superior to LLMs, that have been put on hold to focus on LLMs; because they currently lack commercial viability, they remain parked until that changes.