Artificial Intelligence
AI is the most consequential technology since the internet — possibly since electricity — yet public understanding of what it is, how it works, and who controls it has not kept pace with its deployment.
The short version
- Modern AI systems, particularly large language models (LLMs) like GPT-4, Claude, and Gemini, are trained on vast datasets of text using a technique called deep learning — they predict statistical patterns, not verified truths.
- A handful of companies — primarily OpenAI, Google, Microsoft, Meta, and Amazon — control the foundational models and the infrastructure they run on, concentrating enormous economic and informational power in extraordinarily few hands.
- The compute required to train frontier AI models has been doubling roughly every six months, and the energy and water costs of that compute are becoming significant environmental concerns with real community consequences.
- There is no scientific consensus on when or whether AI will reach human-level general intelligence; what is certain is that narrow AI is already displacing workers, reshaping industries, and concentrating wealth at an accelerating rate.
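To make the doubling claim concrete, here is a back-of-the-envelope calculation. It assumes the six-month doubling period above holds exactly — a trend estimate, not a law, and real growth varies year to year:

```python
# Illustration of the compute-doubling trend described above.
# Assumption: training compute doubles every six months, per the
# trend estimate in the text. This is extrapolation, not a forecast.

def compute_growth(years, doubling_period_years=0.5):
    """Multiplicative growth in training compute after `years`,
    assuming one doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{compute_growth(years):,.0f}x more compute")
```

At this rate, five years of the trend implies roughly a thousandfold increase in compute — which is why the energy, water, and hardware costs compound so quickly.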
What it is
Artificial intelligence, as a field, dates to the 1950s, but the term today almost always refers to a specific subset: machine learning systems trained on large datasets to perform tasks that previously required human cognition. The dominant paradigm in the 2020s is deep learning — the use of layered neural networks loosely inspired by the structure of the brain. These networks learn by being shown vast quantities of labeled or unlabeled data and adjusting billions or even trillions of numerical parameters (called weights) to minimize prediction error. The result is a system that can generate coherent text, recognize objects in images, translate languages, or synthesize realistic audio — not because it 'understands' these things in any human sense, but because it has learned statistical patterns at a scale and resolution impossible for a human to perceive.
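The learning loop described above — make a prediction, measure the error, nudge the weights to reduce it — can be sketched in a few lines. This toy has two parameters instead of trillions and fits a straight line rather than language, but the core mechanism is the same:

```python
# A deliberately tiny sketch of the training loop described above:
# predict, measure error, adjust weights to shrink the error.
# Real deep learning does this with billions of parameters and
# backpropagation through many layers.

# Toy data: the model must discover the rule y = 2*x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # two "weights" (frontier models have trillions)
lr = 0.01         # learning rate: how large each adjustment is

for epoch in range(1000):
    for x, y in data:
        pred = w * x + b   # forward pass: make a prediction
        err = pred - y     # how wrong was it?
        w -= lr * err * x  # gradient step: move the weights...
        b -= lr * err      # ...in the direction that reduces error

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

The model ends up with weights close to 2 and 1 not because it "knows" the rule, but because those values minimize prediction error over the data it was shown — the same statistical logic, at vastly larger scale, that underlies an LLM.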
The most visible current form of AI is the large language model (LLM) — a system trained to predict the next word in a sequence across trillions of words of human-written text. GPT-4 (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta) are all LLMs. They are capable of remarkable generalization: because language is a compressed representation of human knowledge and reasoning, a model trained to predict text can, as a byproduct, perform tasks like summarization, translation, code generation, and question answering. But they also hallucinate — confidently producing factually false statements — and they encode the biases, errors, and power structures embedded in the data they were trained on. They are not search engines with verified facts; they are pattern-completion engines of extraordinary sophistication.
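A crude sketch makes the "pattern-completion engine" point tangible. The toy below counts which word follows which in a tiny made-up corpus and completes text by always picking the most frequent successor — real LLMs replace raw counts with learned weights over enormous contexts, but the job is the same, and the output is plausible continuation, not verified fact:

```python
from collections import Counter, defaultdict

# Toy "language model": learn next-word statistics from a tiny
# hypothetical corpus, then complete text greedily.

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, n=4):
    """Extend `word` by n words, always taking the most common successor."""
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # greedy next-word choice
        out.append(word)
    return " ".join(out)

print(complete("the"))  # → "the cat sat on the"
```

The completion is fluent because the corpus made it statistically likely, not because the system checked whether any cat actually sat anywhere — the same reason an LLM can hallucinate with total confidence.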
The infrastructure underlying modern AI is deeply concentrated. Training a frontier model requires tens of thousands of specialized chips — NVIDIA's H100 GPUs being the dominant variety — housed in massive data centers consuming enormous quantities of electricity and water. As of 2024, the ability to train frontier models is effectively limited to a small number of companies with access to both the capital (training runs cost hundreds of millions of dollars) and the specialized hardware. This creates a structural concentration of power: the entities capable of building the most capable AI systems are also the entities with the most to gain from their deployment, and the fewest external checks on that deployment.
It is important to distinguish between different types of AI claims. 'Artificial General Intelligence' (AGI) — a system capable of matching or exceeding human performance across all cognitive domains — remains a contested concept without agreed-upon definitions or timelines. What exists today are narrow AI systems: extraordinarily capable at specific tasks, brittle at others, and with no inherent goals or agency. When technology executives make sweeping claims about AGI arriving imminently, they are making predictions, not reporting facts. The scientifically honest position is that no one knows when or whether AGI will be achieved, and the pace of capability gains in LLMs has been accompanied by growing evidence of fundamental limitations that simple scaling alone may not overcome.
Why it matters
The economic stakes of AI deployment are difficult to overstate. Goldman Sachs estimated in 2023 that generative AI could raise global GDP by 7% and expose roughly 300 million full-time jobs to automation. McKinsey projected that between 400 million and 800 million workers globally could be displaced by automation by 2030. The distributional question — who captures the productivity gains and who bears the displacement costs — is the central political question of the AI transition, and it is not being adequately addressed in most policy environments.
The concentration of AI development in a few corporate hands raises concerns that extend beyond economics. These systems are increasingly embedded in consequential decisions: hiring, lending, criminal sentencing, content moderation, medical diagnosis, and military targeting. Algorithmic auditing is nascent, regulation is lagging by years, and the companies deploying these systems face minimal liability when they produce harmful outcomes. The EU AI Act — passed in 2024 — represents the most serious legislative attempt to date to impose accountability, but it applies only within the European Union and faces significant implementation challenges.
The environmental costs of AI are becoming increasingly visible. Data centers training and serving AI models consume vast quantities of electricity — the IEA projected that global data center electricity consumption could double between 2022 and 2026, driven largely by AI workloads. Cooling those data centers requires billions of gallons of water annually. Google's water consumption increased nearly 20% in a single year as AI workloads expanded. Microsoft, Google, and Amazon have all made net-zero pledges — and all three are currently moving further from their targets as AI demand grows.
The narrative shaping public understanding of AI is largely written by those who profit from it. Technology executives speak of AI as an inevitable, beneficial force that will 'augment' human capability; disruptions are minimized, benefits are universalized, and the timeline of consequences is kept vague. Critical infrastructure decisions — where data centers are built, how much energy they consume, which labor markets they disrupt — are being made without meaningful public deliberation. The gap between AI hype and AI reality is itself a significant problem: it makes it harder to assess genuine risks, allocate resources wisely, or build governance frameworks grounded in evidence rather than speculation.
Sources & Further Reading
- The Economic Potential of Generative AI
- Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages
- Electricity 2024: Analysis and Forecast to 2026
- EU AI Act: Regulatory Framework for Artificial Intelligence
- Compute Trends Across Three Eras of Machine Learning
- Google's AI Poses a Challenge to Its Pledge to Be Carbon Free by 2030