THE VOID AND THE MACHINE: How the Abandonment of Purpose Has Shaped the Age of Artificial Intelligence (and What to Do About It)
Educational Content – Not Legal Advice
This article provides general information. Consult a qualified attorney before taking action.
Last Updated: April 24, 2026
A Critical Analysis of The Technological Republic by Alexander C. Karp and Nicholas W. Zamiska
SECTION 1 – Silicon Valley's Broken Dream
Silicon Valley was born in partnership with the Pentagon. Today it builds food delivery apps and selfie filters. This section explains how that disconnection has emptied technology of purpose and why artificial intelligence is following the same path.
Keywords: collective purpose, trivial innovation, military-industrial complex, AI, Manhattan Project
The Forgotten Origins
"Silicon Valley has lost its way."
With this sentence The Technological Republic begins, and it is not mere provocation. It is a historical observation. Before smartphones and social networks existed, California's laboratories worked for the Pentagon. Fairchild Semiconductor built reconnaissance equipment for CIA spy satellites. Lockheed designed ballistic missiles in Sunnyvale. The U.S. Navy produced all of its missiles in Santa Clara County.
This symbiosis between science and the state – which the authors trace back to Franklin D. Roosevelt's 1944 letter to Vannevar Bush – was not the exception but the rule. The government funded, and engineers built for national defense. The Manhattan Project was not an aberration; it was the template.
But something changed. The generation of founders who came of age rejecting the Vietnam War and distrusting all centralized authority (from the creators of the Homebrew Computer Club to Steve Jobs) redirected talent toward the individual, not the nation. The consumer became the new battlefield.
Triviality as a Business Model
The book dissects the eToys phenomenon: a company that in 1999 reached a valuation of $10 billion selling toys over the internet. Its founder, Toby Lenk, explained without embarrassment: "We're losing money fast on purpose, to build our brand." It was not an isolated case. Zynga (farming video games), Groupon (discount coupons), and dozens of similar startups attracted capital and brains that could have been designing defense systems or curing diseases. The market spoke, and no one dared ask whether its verdict was sensible.
The authors call this the "lost valley": a cultural juncture where the tech industry stopped asking what deserved to be built and instead answered only what was profitable. The ability to scale became an end in itself. Mark Zuckerberg summarized it with brutal honesty: "They can't wrap their head around the idea that someone might build something because they like building things." Creation divorced itself from purpose.
What AI Has to Do with All This
The same pattern is repeating with artificial intelligence. Large language models (GPT-4 and its successors) could be decisive tools for defense, medicine, or science. Instead, most investment goes to commercial chatbots, meme generation, and virtual assistants for buying sneakers.
In November 2022, OpenAI launched ChatGPT with a policy that explicitly prohibited military use of the technology. One year later, facing competitive pressure, the company removed that clause. But the symbolic damage was already done: a generation of engineers has internalized the idea that working for the state is suspect, while working for a social network is virtuous.
Silicon Valley's turn toward consumption recalls Gibbon's account of Rome's fall: an elite that, having conquered the world, surrenders to leisure and delegates defense to mercenaries. Today's "mercenaries" are professional soldiers from humble backgrounds, while millionaire engineers design apps.
Question for the reader: If the West's most advanced AI is being trained primarily to sell advertising and entertain, while other powers direct it toward military and mass surveillance capabilities, what kind of future are we building without even debating it?
Transition: The problem is not only economic; it is moral. How did we arrive at a tech culture that fears having convictions? That is the subject of the next section.
SECTION 2 – The Dogma of Neutrality: When Technology Forgot How to Believe
The fear of giving offense has emptied tech leadership of ethics. This section explores the paradox of an industry that monetizes personal data yet retreats from complex moral dilemmas, and how that void affects AI development.
Keywords: moral void, convictionless leadership, conformity, psychological censorship, AI ethics
The Price of Having No Opinions
In 1976, the director of the ACLU, Aryeh Neier (a Jewish refugee from Nazi Germany), defended the right of a neo-Nazi group to march in Skokie, Illinois. He knew thousands of members would leave the organization. He did it anyway. His reasoning: "To defend myself, I must restrain power with freedom, even if the temporary beneficiaries are the enemies of freedom."
Karp and Zamiska rescue this anecdote not for its heroism but for something more uncomfortable: Neier held authentic beliefs and was willing to pay a price for them. Today's tech leadership has been educated in exactly the opposite direction. Better to remain silent. Better not to alienate anyone. The result is a technically brilliant but morally bland ruling class, capable of building mass surveillance systems for advertisers yet unable to state clearly whether AI should be used to kill enemies on a battlefield.
The University Presidents Episode
The book dedicates a key passage to the 2023 congressional testimony of the presidents of Harvard, Penn, and MIT. Asked whether calling for the genocide of Jews constituted harassment, they responded with a repertoire of legal ambiguities: "It is a context-dependent decision." They were not courageous or prudent; they were trained to lack convictions. All three had been prepared by the same law firm. Two lost their jobs anyway – not for saying something, but for saying nothing with sufficient clarity.
This "abandonment of belief" extends to the big tech firms. Google moved from "don't be evil" to "do the right thing." But who defines the right thing? And what happens when the right thing is unpopular?
This recalls Solomon Asch's 1950s conformity experiments, in which subjects preferred to give an answer they knew was false rather than face group disapproval. The difference is that Asch's participants were anonymous. Silicon Valley executives do it with million-dollar salaries and global consequences.
The Paradox of Monetizing Everything
Engineers who refuse to work for the Pentagon have no qualms about building systems that track twelve-year-olds to show them advertising. In 2022, YouTube generated $959 million in annual revenue from ads targeted at children under twelve; Instagram, $801 million. The same person who recoils at the idea that their code might end up in a military drone accepts without flinching that it manipulates adolescent neurochemistry.
Where does this inverted moral hierarchy come from? The book points to a cultural education that demonized the state while canonizing the market. Working for government is suspect; working for a corporation is neutral. The distinction is artificial: advertising also kills – it contributes to adolescent mental health crises, fuels unsustainable consumption, entrenches inequality – but because the harm is diffuse and long-term, it provokes no protests.
Question for the reader: If future AI must make life-and-death decisions (in combat, medicine, or justice), would you prefer it to be trained by people with firm opinions or by people who have learned to have none?
Transition: It is not all pessimism. The book also extracts valuable lessons from nature and improvisational theater. Let us see how they can be applied to AI.
SECTION 3 – Swarms and Startups: What Nature Teaches Us About AI
Bees, starlings, and improv comedians teach how to organize collective intelligence without rigid hierarchies. A lesson that centralized AI is ignoring.
Keywords: decentralization, swarm, improvisation, collective intelligence, AI architecture
When Bees Teach Businesses
In 1951, the zoologist Martin Lindauer observed in Munich the "Eck swarm": a cluster of honeybees searching for a new home. Without a queen giving orders, scout bees went out, inspected cavities, and returned to perform a dance (the Tanzsprache) indicating direction and distance. Other bees verified the sites, supported favored options, or proposed alternatives. Hours or days later, the entire swarm reached agreement and flew together to its new home. No hierarchies, only shared information.
Karp and Zamiska use this metaphor to describe the ideal of a tech startup. Unlike traditional corporations – with their chains of command and endless meetings – successful startups resemble a swarm: information flows from the edges (the engineers closest to the problem), is validated through action (dancing is doing, not PowerPointing), and direction emerges rather than being decreed.
The Problem of Frozen Hierarchies
The book cites a Harvard Business School study in which executives confessed to feeling overwhelmed by meetings. One executive admitted to stabbing her leg with a pencil to keep from screaming during a particularly torturous session. Massive meetings, pre-meetings, meetings to prepare for meetings: symptoms of an organization that has replaced action with the theater of coordination.
Peter Drucker noted decades ago that a symphony orchestra functions without vice presidents: the conductor communicates directly with each musician. Yet most corporations have built layers that add no value but extract it. The authors are blunt: "The vast majority of an individual employee's energy during their working lives is spent merely on survival, navigating among the internal politicians."
Starling Flocks and the Speed of Information
Another fascinating example comes from Giorgio Parisi, winner of the Nobel Prize in Physics, who studied starlings. At dusk in Rome, thousands of birds form synchronized aerial choreographies. Parisi discovered that the secret lies at the edges: starlings on the periphery detect danger first and change direction; the information passes from neighbor to neighbor in a wave that traverses the entire flock in under a second. There is no leader.
What does this imply for AI? The authors suggest that most current systems are "clocks" (centralized, predictable, fragile) when they should be "clouds" (distributed, adaptable, resilient). A swarm-like AI would have multiple specialized agents that "dance" their results, cross-check each other, and vote on the final answer. It would be more robust, transparent, and harder to corrupt.
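The swarm architecture the authors gesture at can be sketched in a few lines. In this purely illustrative toy, three deliberately simple "agents" (stand-ins for independent models; the heuristics and labels are invented here, not taken from the book) each cast a verdict on an input, and the swarm adopts the majority view along with an explicit consensus score — making disagreement visible rather than hiding it inside a single opaque model.

```python
from collections import Counter

# Toy "swarm": each function is a stand-in for an independent,
# specialized agent. All heuristics here are invented for illustration.
def keyword_agent(text):
    return "threat" if "attack" in text else "benign"

def length_agent(text):
    return "threat" if len(text) > 80 else "benign"

def punctuation_agent(text):
    return "threat" if text.count("!") >= 3 else "benign"

AGENTS = [keyword_agent, length_agent, punctuation_agent]

def swarm_decide(text):
    """Each agent 'dances' its verdict; the swarm adopts the majority
    view and reports the degree of consensus behind it."""
    votes = Counter(agent(text) for agent in AGENTS)
    verdict, count = votes.most_common(1)[0]
    return verdict, count / len(AGENTS)

verdict, consensus = swarm_decide("Routine status report, nothing unusual.")
# -> verdict == "benign", consensus == 1.0: all three agents agree
```

The design choice worth noting is the consensus score: a centralized "clock" returns only an answer, while a swarm-style "cloud" can also report how contested that answer was, which is exactly the transparency the authors claim centralized AI lacks.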
The Startup as Improvisational Theater
The third source of inspiration is Keith Johnstone's improvisational theater, whose book Impro was given to every new Palantir employee. His fundamental principle: "say 'yes, and…' " Accept the other's offer and add something. Don't block. In a startup, this translates into rapid experimentation, early failure, constant iteration. No time for committees.
The book notes that most companies select and reward unquestioning compliance. They hire people who nod. But a true startup needs people who dare to propose, to contradict (constructively), and to disobey stupid orders. That "constructive disobedience" is the engine of innovation.
This culture of constructive disobedience contrasts with Milgram's obedience experiments, in which two-thirds of participants administered what they believed to be lethal shocks simply because an authority figure instructed them to. The swarm is the antidote to that blind obedience.
Question for the reader: If we had to design an AI to manage a crisis (cyberattack, pandemic, war), would you prefer a single omnipotent model or a swarm of smaller AIs deliberating among themselves? Which would inspire more confidence?
Transition: This debate about AI architecture is not abstract. It is directly connected to national security and the geopolitical balance. Let us move to the era of algorithmic deterrence.
SECTION 4 – The New Era of Deterrence: AI as Heir to the Atomic Bomb
The atomic bomb structured geopolitics for eighty years. Now artificial intelligence – more diffuse and cheaper – is replacing it. The West wrestles with ethical dilemmas while its adversaries advance without them.
Keywords: algorithmic deterrence, autonomous weapons, drone swarms, ethical asymmetry, new Manhattan Project
The Twilight of Missiles and the Dawn of Software
"Now I am become Death, the destroyer of worlds," Oppenheimer said upon witnessing the first nuclear explosion. Seventy-nine years later, Karp and Zamiska ask: what if that power is becoming irrelevant? Not because bombs have disappeared, but because deterrence has become algorithmic.
The book dedicates a memorable passage to the F-35, Lockheed Martin's stealth fighter, a program whose lifetime cost is estimated at $2 trillion and whose aircraft are expected to remain in service until 2088. But as General Mark Milley, former chairman of the Joint Chiefs of Staff, observed: "Do we really think a manned aircraft is going to be winning the skies in 2088?" The answer is no. The era of military hardware is giving way to the era of software. And software is updated in weeks, not decades.
The old relationship between hardware and software has inverted: AI is now the brain, and drones, sensors, and robots are merely the body executing its decisions. Drone swarms need no pilots. Facial recognition systems need no human agent staring at photos. War is being automated.
The Adversary Does Not Wait
Three of the six most accurate facial recognition systems in the world are Chinese. In December 2021, the U.S. Treasury accused CloudWalk Technology of providing software to the Chinese government "to track and surveil members of ethnic minority groups." Researchers at Zhejiang University developed a drone swarm capable of flying cooperatively through a dense bamboo forest. The U.S. Air Force concluded that Beijing is "actively pursuing research into drone swarms for dealing with dynamic scenarios in large-scale combat."
The asymmetry is brutal. While in the West Google engineers protest Project Maven, China's engineers work without such dilemmas. While the Pentagon spends $1.8 billion on AI (0.2% of its budget), Beijing has made AI a national priority.
This recalls the nuclear race of the 1940s, but with a crucial difference. Then, Western scientists (many exiled from fascism) had a clear moral motivation: to defeat Hitler. Now, the motivation is diffuse. The current generation of engineers has never lived through a total war. Their pacifism is a luxury they can afford only because others – professional soldiers from humble backgrounds – already pay the price of security.
The Dilemma of Technological Pacifism
The book criticizes countries like Germany and Japan, which after World War II adopted pacifist constitutions and delegated their defense to the United States. Germany let its forces shrink into what EU foreign policy chief Josep Borrell has called "bonsai armies." Japan maintains Article 9 of its constitution, renouncing war as a sovereign right. The result: strategic dependency, especially dangerous in the AI domain. Europeans and Japanese are not developing their own algorithmic deterrence capabilities. They depend on Washington, which in turn depends on a Silicon Valley that resists working for the Pentagon.
The book's proposal is radical: a new Manhattan Project for AI. Not to build a bomb, but to develop AI systems that guarantee Western superiority on the future battlefield. This implies massive funding and a cultural shift: engineers must understand that working for national defense is not "building weapons" but participating in the deterrence that makes the rest of the economy – including their beloved smartphones and messaging apps – possible.
Question for the reader: If an adversary developed autonomous drone swarms capable of destroying critical infrastructure, and the West had the technology to do the same but declined "for ethical reasons," would that be a courageous decision or a strategic irresponsibility?
Transition: But political will alone is insufficient. The current state is incapable of innovating quickly. Why? The answer lies in its incentive systems.
SECTION 5 – The Price of Purity: Why the State Castrates Itself
Miserable salaries, the cult of procedure, and the fear of error have driven the most capable leaders away from the public sector. This section shows why bureaucratic government is incompatible with the speed that AI demands.
Keywords: public incentives, skin-in-the-game leadership, procedural purity, Rickover case, state castration
The Fed Chair Who Earns Like an Intern
In February 2023, billionaire David Rubenstein interviewed Jerome Powell, Chair of the Federal Reserve. The conversation was routine until Rubenstein asked: "What is your salary?" Powell replied: "Roughly $190,000 a year." Rubenstein pressed: "Do you think that's fair?" Powell, with an almost absurd elegance, said: "I do."
Powell's decisions affect the prices every family pays, the employment of millions, the value of an entire generation's savings. Yet his salary is less than that of a first-year associate at an investment bank. Karp and Zamiska are direct: "At that salary, Powell is essentially volunteering his time to the country."
The practical consequence is that the pool of people willing to take such positions shrinks drastically. You are either already a millionaire (Powell's net worth exceeds $20 million) or you could never afford to accept the job. Democracies have created a system that selects leaders from among the already rich or from among those willing to monetize their influence after leaving office. Both options are perverse.
The Fallacy of the Public Servant as Priest
The authors identify a long-standing cultural bias: the idea that those who serve the state must do so out of vocation, not money. It is an inheritance from classical republicanism, from Plato, who in The Republic argued that good rulers do not consent to govern for cash or honors. But that ideal has produced the opposite effect: driving the most capable away from public service.
The contrast with Singapore is damning. Lee Kuan Yew linked ministers' salaries to those of senior banking and law executives. In 2007, a Singaporean minister earned $1.26 million per year. When criticized, he responded: "Politicians are real men and women, like you and me, with real families who have real aspirations in life. When we talk of all these high-falutin, noble, lofty causes, remember at the end of the day, very few people become priests."
The Admiral Who Built the Nuclear Navy by Breaking the Rules
The most fascinating case is Hyman Rickover, the admiral who built the U.S. nuclear navy. He was arrogant, abusive, and would lock officers he disliked in closets. When a subordinate arrived with a navy regulations manual, Rickover told him to burn it. "My job was not to work within the system. My job was to get things done."
Rickover achieved the impossible: in 1953 he tested the first nuclear reactor small enough to fit inside a submarine. In 1955, the USS Nautilus roamed the oceans without surfacing for months. The advantage over the Soviet Union was decisive and lasted decades. But he also violated countless rules. He accepted gifts from General Dynamics over sixteen years: a jade pendant, twelve fruit knives with water buffalo horn handles, dry cleaning for his suits, eighty-eight Tiffany paperweights. Nothing of great value, but together they constituted an unacceptable pattern.
The book does not defend corruption. But it raises a question that today's culture refuses to ask: how much irregularity are we willing to tolerate in exchange for extraordinary results? Rickover was investigated and eventually pushed out. His enemies celebrated his "fall from grace." But the nuclear navy he built still protects the United States.
The Rickover case illustrates the tension between procedure and outcome. Modern democracies have prioritized procedure for fear of abuse. It is a reasonable choice, but not cost-free. The price is the loss of eccentric, insufferable, and extraordinarily effective leaders. In the AI domain, ethics committees and precautionary regulations reduce the risk of abuse, but also slow down innovation. And in a global technology race, speed may be the decisive factor.
Question for the reader: Would you tolerate a public leader violating certain administrative rules if doing so achieved decisive advances in AI for national defense? Where do you draw the line between necessary flexibility and unacceptable corruption?
Transition: With these warnings, what is to be done? The book has concrete proposals, but also notable limitations.
SECTION 6 – Rebuilding the Technological Republic (and Its Limits)
Karp and Zamiska propose a new Manhattan Project, a technology peace corps, and the recovery of national identity. But critical analysis reveals unresolved tensions: more technology or a deeper cultural shift?
Keywords: Manhattan Project, technology peace corps, national identity, collective project, limits of technological optimism
A New Manhattan Project for AI
The most ambitious proposal: the United States and its allies should launch without delay a new Manhattan Project for AI. Not to build a bomb, but to develop the algorithmic deterrence systems that will determine the balance of power in the 21st century. The original project assembled the best physicists, mobilized virtually unlimited resources, and achieved in just three years what many thought impossible.
What would this entail? Massive investment in basic and applied research, but also a reform of Pentagon software acquisition. The book recounts how the Federal Acquisition Streamlining Act of 1994 was ignored for two decades until Palantir invoked it in court and won. The state must learn to buy software as it buys screws: evaluating existing products, testing prototypes in weeks, and scaling what works.
The Technology Peace Corps
The second pillar would be a "technology peace corps": the best Silicon Valley engineers could spend one or two years serving in government agencies, modernizing obsolete systems, building tools for defense or public health. Not as outside contractors billing by the hour, but as a form of civic service.
The idea is ambitious and somewhat naive. The book acknowledges the difficulties: lower salaries, suffocating bureaucracy, incompatible cultures. But it argues that without a flow of young talent into the public sector, the innovation gap will only widen.
The technology peace corps recalls the national service programs of Roosevelt and Kennedy, which channeled youthful idealism toward tangible collective projects. Today, youthful idealism is channeled toward abstract global causes or toward startups that promise to "change the world" by selling apps. Modernizing Pentagon intelligence systems may be more necessary than building yet another social network.
The Return of National Identity
The third pillar is the most surprising for a book written by tech executives. Karp and Zamiska devote Chapter 17 to defending the need to recover a shared national identity. In an era where speaking of "Western culture" or "American values" is considered suspect, the authors dare to assert that without a sense of belonging and collective purpose, there is no national project worth defending. And without a national project, technology becomes a toy.
They turn to Ernest Renan, who in 1882 defined the nation as "a daily plebiscite": not race or language, but the shared will to continue living together. They sharply criticize the contemporary left for abandoning any discourse on national identity, delegating it to the market or to global activism. The result, they argue, is a void that consumption fills: people no longer identify as citizens, but as fans of a sports team, followers of a series, customers of a brand. And that market identity is too weak to mobilize collective defense when the threat comes.
The Limits of Technological Optimism
Critical analysis must note limitations. The first is the underlying technological optimism. Karp believes the problem is that engineers work on the wrong things; the solution is to redirect them. But what if the crisis of purpose is cultural, political, even spiritual – not repairable with another Manhattan Project?
The space race mobilized the United States not only because rockets were technically challenging, but because it embodied a narrative: democracy versus communism, freedom versus tyranny, future versus past. That narrative predated the technology. The book does not explain how to rebuild it in an era of generalized cynicism.
Another limitation: the book barely addresses inequality. The same engineers Karp wants to redirect to defense earn million-dollar salaries. Will they accept working for less money in government agencies? The book mentions Singapore's incentive model but does not apply it to Silicon Valley. If defense engineers are paid worse than advertising engineers, the redirection of talent will always be partial.
The Future of AI as a Collective Project
Despite these limitations, the book asks an unavoidable question: what do we want AI for? So that adolescents spend more hours watching short videos? To optimize advertising to the point of manipulation? Or to cure diseases, predict disasters, defend democracy?
The answer cannot be left to the market. The market generates what is profitable, not what is desirable. And in the absence of a collective project, profitability prevails by default.
The future of AI will be decided on three fronts: technological (capacity to develop advanced systems), organizational (institutional agility), and cultural (shared narrative). The first two are difficult, but the third is the most decisive. Without a "why," the "how" is mere technique.
EDITOR'S AFTERWORD
The Technological Republic is an uncomfortable book, designed to offend nearly everyone. It offends the tech left by criticizing its pacifism and moral evasion. It offends the libertarian right by defending a strong state and collective projects. It offends bureaucrats by pointing out their inefficiency, and engineers by reminding them that their creations are not neutral.
But it is uncomfortable above all because its questions are pertinent. Can a civilization that has lost faith in itself defend its place in the world? Can leaders educated in relativism and neutrality make difficult decisions about the use of force? Can an AI trained on convictionless discourse ever develop judgment of its own?
The book does not answer these questions, but it poses them with an urgency rarely found in academic treatises or business manifestos. Karp and Zamiska are not neutral. They hold a position, defend it unapologetically, and assume the cost. In an era where forced neutrality is the norm, that gesture alone is significant.
Final question for the reader (with an invitation to action): Imagine that ten years from now a general artificial intelligence exists, capable of surpassing humans in almost every cognitive task. That AGI has been trained on the data and values of our culture – skeptical, ambivalent, lacking strong convictions. What kind of intelligence will emerge? A timid, evasive AGI, unable to take sides? Or one that, lacking consistent ethical guidance, develops its own criteria, perhaps not aligned with ours?
This is not only about regulation. It is about deciding what we want to build and why. And that debate cannot be delegated to engineers or politicians. It belongs to all of us.