Don't Count Your AI Chickens Before They Hatch: The Risks of Premature UBI
Premature UBI enthusiasm and/or rollouts could have disastrous effects on the U.S. economy, innovation, and human advancement
“Count not thy chickens that unhatched be” – Thomas Howell (New Sonnets and Pretty Pamphlets, 1570)
More than four centuries ago, Thomas Howell warned against celebrating outcomes before their actual arrival. Today, many in tech (and non-tech) circles are rabidly clamoring for universal basic income (UBI).
Why? They strongly believe that AGI (artificial general intelligence) is right around the corner and will displace a significant % of human jobs that will never return, essentially rendering many/most humans “occupationally useless.”
The looming threat of large-scale job displacement from AGI has led many to support an accelerated and/or immediate UBI rollout… however, if these individuals get their way, we risk “counting our chickens before they hatch.”
The risks of counting chickens before they hatch in this scenario? Significant economic damage and deceleration of human innovation/advancement (with no guarantee of AGI or its impact).
These individuals are assuming: (1) AGI is imminent (will arrive soon); (2) AGI will displace a significant #/% of jobs; (3) there won’t be a significant #/% of new jobs created from AGI; (4) government spending/taxation to fund UBI programs won’t do more harm than good; & (5) money will still be used as a medium of exchange (at the time a “UBI” would be objectively appropriate).
I think the smartest move now is to plan for potential UBI (if necessary) but: (1) avoid promoting it excessively (don’t get the general public hysterical) and (2) avoid premature rollouts (if there is mass job displacement without new jobs to fill, the government can deploy UBI rapidly… akin to what it did with COVID stimulus checks).
Assuming we deploy UBI, we should avoid getting carried away (i.e. excessive payments). Even with AGI, excessive $ allocated to UBI payouts could stifle human advancement and the economy (we want to avoid slowing or stagnating innovation and fostering an entitlement mindset/socialist drift).
Drinking the AI Kool-Aid: AGI 2025-2030
Tech often becomes an echo chamber of “the current hyped thing.” The current hyped thing is now scaling to AGI, then ASI. Everyone is drinking the Kool-Aid and convinced AGI will arrive 2025-2030 and ASI is right around the corner (2030-2040).
I’m even drinking the Kool-Aid here… I don’t think tech circles are incorrect about AGI predictions. I could even make the case that AGI is already here with o1-pro or OpenAI’s new o3.
But even if AGI is near (or already here) this doesn’t mean UBI will be: (1) 100% necessary OR (2) needed anytime soon… there will be latency (perhaps a lot of it) between AGI releases (AGI v1, AGI v2, AGI v100, ASI, etc.) and mass job displacement.
A.) Tech’s Echo Chamber
Exaggerated Timelines: Certain venture-funded startups and large tech firms benefit from marketing AGI as an imminent revolution to attract investors, boost valuations, or seize media attention.
Overconfident Predictions: In closed-loop tech circles, “AGI is just around the corner” becomes self-reinforcing talk—even if the real engineering hurdles are massive.
Fundraising & PR Strategy: Promises of near-superhuman AI systems (claiming “500 IQ” or more) generate headlines and capital, even though effective, real-world job displacement depends on far more than raw intelligence.
B.) Reality Check: AGI ≠ Instant Labor Takeover
Implementation Bottlenecks: Even if an AI model is theoretically “superhuman,” scaling it into robots or complex systems that can reliably perform physical, social, and oversight tasks is not guaranteed—especially outside lab conditions.
Human-AI Coordination: Real workplaces have regulatory, cultural, and technical constraints. AGI might still rely on human supervisors for data curation, edge-case handling, and moral or ethical oversight.
Infrastructure Gaps: Think power grids, advanced robotics hardware, specialized training for operators. AGI must be embedded in physical machines or services—requiring time, capital, and iterative testing.
Core Assumptions Driving Early UBI Promotion with AI Advancement (AGI)
AGI Will Arrive Soon (2025-2030): Many believe AGI is just around the corner.
Mass Job Displacement: People assume AGI will eradicate a large percentage of jobs, leaving no alternatives. Most fail to realize that “AGI” will have levels (AGI-1 won’t look like AGI-30 which won’t look like ASI-1).
Not Enough “New” Opportunities: They doubt AI will create many new jobs—akin to pessimistic predictions during the Industrial Revolution or the early Internet age. (I would guess they are correct on this point, but could be proven wrong.)
No Harm in Handouts with AGI: They assume giving out free money prematurely won’t undermine work incentives or distort the economy. This is a horrendous idea because it disincentivizes work. If someone is working for minimum wage but could get the same amount by just “chillin’,” they might just “chill.”
Money Remains the Medium: Most also presume that currency-based transactions will still define economic exchange when AGI supposedly leaves humans jobless. (Money - or some form of it - will likely remain relevant long into the future but could eventually become obsolete.)
Reality: Rolling out UBI prematurely ignores how markets, workforces, and even AI capabilities adapt over time. Moreover, early UBI can create complacency, reduce skill-building, and siphon resources away from the very research and entrepreneurial ventures that elevate living standards.
AGI Job Displacement: Short-Term Skepticism vs. Long-Term Possibility
Short-Term Outlook (~2025-2035)
AI Requires Oversight (Now): Today’s AI still depends on humans for training, troubleshooting, and oversight. Even if a “basic AGI” appears soon, it will not instantly conquer every job sector.
Potential for Job Creation: As with past revolutions, AI could spawn entirely new fields (e.g., AI trainers, data-ethics specialists, robotics maintenance). Although major displacement is possible, it might arrive more gradually. I’m skeptical of mass new jobs, but hope I’m incorrect.
Future Displacement (Long Run: ~2035-2050)
AGI/ASI Could Replace Every Job: If humanity endures, we may witness AGI integrated into humanoid robots or advanced exoskeletons, potentially displacing nearly every human occupation (24/7 performance, zero fatigue, and self-upgrading capabilities).
Cautionary UBI Rollout: Even if we reach that stage, an overly luxurious UBI fosters dependency and stifles economic progress. Striking a balance ensures we don’t fall into entitlement or stagnation.
3 Key Risks of Premature UBI
1. Undercutting Innovation
Advances in AI—or even full AGI—might well transform the job landscape. But rolling out a robust, unconditional UBI before mass job losses actually materialize can starve the very innovators and entrepreneurs we need to navigate (and leverage) such upheavals.
Killing Motivation Before It’s Necessary
Premature Safety Net: If people receive substantial UBI before genuine displacement occurs, many will lose the incentive to stay in or transition to roles still urgently needing human skills (like healthcare, caregiving, construction, and more).
No Urgency to Adapt: The prospect of AGI means we need a nimble workforce—ready to reskill and pivot. A sizable handout blunts that urgency, keeping people complacent rather than pushing them to acquire new, in-demand capabilities.
Draining Capital from Crucial AGI-Response Projects
Looming AGI Demands R&D: If AGI is indeed on the horizon, society should be pouring funds into research labs and new technologies that can help smooth disruptive effects. A hefty UBI siphons fiscal resources away from precisely these innovation hotspots.
Talent and Money Misallocation: Instead of fueling tech startups, AI labs, and reskilling initiatives, heavy taxation to fund UBI prematurely can divert both talent and capital away from the critical sectors building AGI and other novel technologies to advance humanity.
Result: A prematurely generous UBI robs society of the entrepreneurial spirit and resources we urgently need to prepare for AGI’s disruptions. Rather than fostering adaptation, it can lull people into complacency, just as we’re stepping into an era where rapid, proactive innovation could keep entire segments of the workforce relevant and the economy resilient.
2. Socialist Drift
When UBI arrives too soon and in too large an amount—especially under the claim that “AGI might replace everyone next year!”—it doesn’t just risk complacency. It also sets off a chain reaction toward deeper state control and away from the market dynamics that actually spur progress.
The Slippery Slope of Dependency
Evolving from Crisis Aid to Permanent Entitlement: Once the public becomes accustomed to broad, comfortable payouts, scaling them back—no matter how temporary they were meant to be—is politically explosive. (Think about feeding wild animals. They become dependent and don’t really know how to function when you cut them off from the trough.)
Entitlement Over Effort: If people see UBI as a right, they start expecting the government to cover life’s challenges preemptively. That sense of entitlement can erode the urgency to remain employed, upskill, or switch fields in response to AI disruptions.
Centralized Control Erodes Adaptability
Higher Taxes Drain the Innovation Tank: Funding a lavish UBI before real AGI-based displacement actually hits can mean sharp tax hikes, diverting capital from advanced research, robotics, AI safety, and workforce development.
Regulatory Bloat: As government pays for more, it typically imposes tighter rules—on how businesses operate, how people qualify for income supplements, and how resources get allocated. This adds layers of bureaucracy at a time when fast pivoting (to meet emerging AI challenges) is crucial.
Result: By granting a generous UBI before the job apocalypse truly arrives, society risks sliding into a more centralized, state-driven economy—fostering a “someone else will pay for it” mindset. This socialistic tilt undercuts the market forces that historically drive adaptation, keep costs down, and mobilize innovators to tackle new problems—precisely the dynamics we need most as AGI looms on the horizon.
3. Distorting Labor Markets
By handing out a sizeable UBI before actual AGI-triggered job displacement sets in, governments can disrupt labor markets in ways that stall genuine adaptation to AI’s advances.
Wage Inflation Without Need
Artificially High Pay Expectations: If a UBI covers most living costs, many workers refuse lower-paying or entry-level jobs—raising wage demands across the board.
Businesses Accelerate Automation Prematurely: Faced with overpriced labor, companies might deploy AI-driven tools and robotics faster than market forces would naturally dictate, ironically fueling the very job-loss crisis UBI was supposed to cushion.
Fewer Incentives to Pivot or Reskill
“Why Upskill If I’m Already Paid?” A safety net that’s too comfortable blunts the urgency to learn new trades, pursue in-demand certifications, or shift to roles that still rely heavily on human dexterity and social nuance.
Missed Window for AGI Preparation: Preemptive UBI can distract both policymakers and individuals from launching (or enrolling in) targeted reskilling programs—yet these are exactly what ensure broad-based adaptability in an AI-driven future.
Result: Instead of organically evolving alongside AI, a preemptive UBI warps labor dynamics—leading employers to automate faster while simultaneously discouraging workers from seizing emerging opportunities. When genuine AGI-level upheavals do arrive, a labor force cushioned by excessive handouts is less agile and less equipped to respond—risking a bigger shock than if the market had remained flexible.
Why AGI Won’t Instantly Displace All Jobs
A.) Physical & Cultural Constraints
Hardware Challenges: True AGI in a robot body must tackle power systems, advanced dexterity, and sensory feedback. These are still in early stages of mass production.
Regulatory Hurdles: Institutions like hospitals and schools adopt transformative tech slowly, requiring human oversight and long-term studies of effectiveness.
B.) Human Oversight
Edge Cases & Error Rates: Current AI struggles with nuanced, unpredictable scenarios. Fields needing emotional support or immediate improvisation (caregiving, crisis response) defy complete automation.
Partial Automation: Historically, technology first displaces routine tasks. People often shift to more interpersonal or supervisory roles, rather than vanish from the workforce entirely.
C.) Sector-by-Sector Variation
Skilled Trades: Roles like plumbing, carpentry, and HVAC repair demand on-site problem-solving in irregular environments. Replicating this adaptability in robots remains exceedingly difficult.
Human-Facing Services: Hospitality, eldercare, personal training—jobs relying on emotional intelligence—are less likely to be overtaken quickly.
Many Current U.S. Job Vacancies (~8 Million in 2025)
According to the U.S. Bureau of Labor Statistics (BLS), there are roughly 8 million unfilled positions nationwide, still exceeding the number of unemployed individuals.
Unfilled Positions
Healthcare: Nursing shortages abound, with no AI caretaker ready to substitute human compassion and adaptability.
Skilled Trades: Construction, energy, and maintenance sectors face a crisis as experienced workers retire with too few apprentices to replace them.
Hospitality & Service: Post-pandemic reopenings still struggle to recruit adequate staff, suggesting demand for human labor remains high.
Paths for Displaced Workers
Sector Shifts: If AI automates certain white-collar tasks, many can pivot to roles that remain “uniquely human.”
Skill Rotation: Community colleges, boot camps, and private programs can quickly retrain workers in in-demand fields, especially if funded or incentivized by business and government.
BCIs: Brain-computer interfaces like Neuralink may become available that essentially upgrade humans to a level where they can integrate with and/or compete against various AIs (giving them a sense of purpose and an occupation).
New Roles in AI’s Ecosystem
AI Support Functions: AI trainers, data cleaners, compliance specialists, and more. Even advanced AI requires ongoing oversight and environment tuning, ironically creating near-to-mid-term job growth.
Designing UBI: If Mass Job Displacement Occurs
1. Defining Legitimate “Mass Displacement”
Sustained, Unusually High Unemployment: UBI should trigger only if joblessness soars well above historical norms and stays that way (e.g., 6–12 months), pointing to structural unemployment rather than a temporary dip.
Limited Reabsorption: If workers can’t realistically retrain or move into other sectors quickly, a minimal UBI might be necessary.
2. Fair, Not Luxurious
Covering Basic Needs: Enough for food, shelter, and utilities, nothing more. Why? The baseline standard of living will keep improving as AGI advances and technology drives costs down.
Below Entry-Level Wages: Ensuring recipients still find it profitable to work or reskill rather than remain idle.
3. Temporary & Conditional
Sunset Clauses: UBI legislation should expire or downshift if/when new jobs appear or the crisis abates.
Mandatory Upskilling: Linking UBI eligibility to participation in educational or retraining programs fosters a return to productivity. (Criteria 1–3 are sketched in rough code after point 4 below.)
4. Safeguards for Innovation
Protecting R&D: Prioritize incentives for startups, labs, and entrepreneurs so capital isn’t gutted by UBI’s demands.
Limiting Tax Burden: Excessively high taxes drive away innovators, hamper business growth, and can starve the economy of resources.
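To make the trigger-and-sunset logic above concrete, here is a minimal Python sketch of how criteria 1–3 could be encoded as a data-driven rule. Every name and number in it (the LaborSnapshot fields, the 5% baseline, the 9-month window, the 80%-of-entry-wage cap, the 24-month sunset) is a hypothetical placeholder chosen for illustration, not a figure proposed in this article.

```python
# Illustrative sketch only: thresholds, field names, and the payment formula
# below are hypothetical assumptions, not policy recommendations.
from dataclasses import dataclass


@dataclass
class LaborSnapshot:
    month: str                # e.g. "2031-04"
    unemployment_rate: float  # fraction, e.g. 0.11 = 11%
    job_openings: int         # unfilled positions (JOLTS-style count)
    unemployed: int           # number of unemployed workers


def displacement_is_structural(history: list[LaborSnapshot],
                               baseline_rate: float = 0.05,
                               excess: float = 0.05,
                               min_months: int = 9) -> bool:
    """Criterion 1: unemployment stays well above historical norms for a
    sustained stretch (e.g. 6-12 months), and workers cannot realistically be
    reabsorbed because openings are far scarcer than job seekers."""
    if len(history) < min_months:
        return False
    recent = history[-min_months:]
    sustained_spike = all(s.unemployment_rate >= baseline_rate + excess for s in recent)
    limited_reabsorption = all(s.job_openings < 0.5 * s.unemployed for s in recent)
    return sustained_spike and limited_reabsorption


def monthly_ubi_payment(basic_needs_cost: float,
                        entry_level_monthly_wage: float,
                        months_since_activation: int,
                        sunset_after_months: int = 24) -> float:
    """Criteria 2-3: cover basic needs, stay below entry-level pay so work and
    reskilling remain attractive, and sunset automatically unless renewed."""
    if months_since_activation >= sunset_after_months:
        return 0.0  # sunset clause: payments expire unless re-authorized
    return min(basic_needs_cost, 0.8 * entry_level_monthly_wage)


if __name__ == "__main__":
    # Nine straight months of ~12% unemployment with few openings trips the trigger.
    history = [LaborSnapshot(f"2031-{m:02d}", 0.12, 2_000_000, 15_000_000)
               for m in range(1, 10)]
    if displacement_is_structural(history):
        print("Trigger met, monthly payment:",
              monthly_ubi_payment(1500.0, 2400.0, months_since_activation=3))
```

The design point is simply that activation and payout size can be tied to published labor data (unemployment rates, job openings) rather than to hype cycles, preserving the COVID-style ability to deploy fast while preventing a premature or permanent rollout.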
Keeping Capitalism Alive with AGI
A central fear driving premature UBI proposals is that once AGI displaces humans in countless roles, free-market capitalism may no longer “work.”
But abandoning the competitive market framework—and the rewards that come with building game-changing solutions—may be a grave mistake, even in a post-AGI world.
1. Innovation Doesn’t Stop with AGI
Humans Still Find Niches: Even if robots perform most manual or routine tasks, people can create entirely new forms of art, entertainment, and personalized services where human connection and creativity remain vital.
Pushing the Frontier Further: Markets spur competition, fostering improvements in AI safety, robotics, and spin-off industries. Just as early computing birthed an ecosystem of new roles, AGI could unlock unanticipated avenues of enterprise.
2. Competition Still Matters
Price and Quality Gains: A free market ensures producers (including AI-driven enterprises) keep improving while lowering costs, making advanced technologies widely accessible over time.
Checks and Balances: Competitive markets distribute power across multiple firms and innovators, reducing risks of monopolies or heavy-handed government oversight.
3. Avoiding the ‘Command Economy’ Trap
No Top-Down Planning: Centrally managing AGI deployment or instituting an expansive UBI risks inefficiency and stagnation, as seen in failed command economies of the past.
Preserving Entrepreneurial Spirit: Private ownership, profit motives, and patent protections attract ambitious innovators eager to create the next wave of products and services, even in a tech-dominant landscape.
4. The Essential Role of Risk-Takers
Rewarding Courageous Moves: Whether developing AGI safety protocols, pioneering brain-computer interfaces, or integrating humans and AI in new business models, bold risk-takers deserve commensurate rewards.
Market-Driven Cycles: Competitive dynamics ensure that failures clear the path for successful innovations, lifting living standards just as early internet and smartphone revolutions did.
5. Capitalism as an Innovation Engine
Risk & Reward: Profit incentives drive the risk-taking that leads to revolutionary breakthroughs, from the steam engine to AGI.
Deflationary Tech: As technology matures, previously expensive goods become affordable for the masses—think smartphones, solar power, or advanced medical tools. Early access for the wealthy often accelerates this cycle, benefiting everyone long-term.
6. Wealth Inequality vs. Quality of Life
Arguably the dumbest modern argument in favor of UBI is “income inequality” or a “wealth gap.” These gaps are a byproduct of individuals being able to generate innovation outputs and/or utility for society that others could not… these people should be compensated accordingly (and their kids should get the money if they choose to pass it on).
This fuels more innovation for human advancement… relative wealth matters far less than absolute quality of life, which keeps improving to the point where most of us already live better than kings of the past.
Elevating the Floor: Closing the wealth gap is less critical than ensuring the poorest have improving access to essentials like healthcare, utilities, and communication tools.
Excessive Redistribution Risks: Flattening incomes too aggressively stifles the capital necessary for next-generation advancements, ultimately harming society as a whole.
7. Beyond AGI: Sustaining Capitalism in a Tech-Dominant World
Resource Generation: A vibrant capitalist system can rapidly scale resources to address crises—such as deploying minimal UBI—if true displacement overwhelms the workforce.
Market Dynamics: Even in an AI-dominant economy, competition between private entities drives constant improvement, ensuring sustained creativity, opportunity, and progress.
Counterarguments & Rebuttals: “UBI Now Cuz AGI Soon”
1. “Better Safe Than Sorry - Roll Out UBI Now!”
Undermining Work Ethic: Premature, generous payouts disincentivize skill-building. Labor shortages in vital sectors (nursing, skilled trades) could worsen.
Inflated Government Liabilities: Universal payouts without proven mass displacement risk ballooning the national debt, crowding out infrastructure and research funding. (The U.S. debt-to-GDP ratio is already asinine, north of 100%, and still climbing.)
2. “Mass Job Displacement Is Inevitable - and Immediate!”
Uncertain Timelines: True AGI that replaces all roles may still be years (or decades) away. Even breakthrough technologies (steam engine, electricity) took years to saturate the market.
Adaptive Markets: History shows that as some jobs vanish, new ones often emerge—especially in fields that support or manage the new technology.
3. “Inequality Will Spiral Out of Control Without UBI!”
Focus on Quality of Life: If the “floor” continues to rise—via affordable healthcare, declining tech costs—society still advances, even if top earners become wealthier.
Innovation Necessitates Unequal Rewards: Drastic income caps or heavy redistribution can kill entrepreneurial drive.
4. “Humans Are Doomed - AGI Will Run Everything!”
Physical & Emotional Barriers: Many tasks require dexterity, empathy, creativity. Integrating AGI into robots that can seamlessly replicate these human traits is extraordinarily complex.
Legal & Cultural Resistance: Public sentiment often insists on having humans in the loop for medical, childcare, and counseling services.
5. “We Should Abandon Capitalism for Guaranteed Security!”
Track Record of Planned Economies: Historically, central planning leads to slower innovation and reduced growth, harming the very people it aims to protect.
Fueling Future Safety Nets: Capitalist wealth creation is what funds any robust social program. Without it, the tax base dwindles.
Conclusion: Premature UBI is Dumb AF
We are inching closer to potentially transformative AI developments… and it’s tempting to jump straight down the UBI rabbit-hole in fear that advanced AI or AGI will displace human labor en masse.
But remember, the U.S. government is agile and will react quickly if there is a true crisis/emergency with many displaced workers and no new jobs to fill (due to AI taking most of them, etc.)… it had no problem cutting checks during COVID.
Why Not Panic Now?
AGI Isn’t an Instant Tsunami: Genuine mass automation will take time, real-world testing, and widespread hardware adoption. Blanket solutions—like a permanent, large-scale UBI—are likely to waste resources if rolled out too early.
Costly Complacency: A lavish preemptive safety net can dull people’s drive to retrain or shift into fields where human input still matters—healthcare, personalized services, creative endeavors, and beyond.
When UBI Makes Sense
Provable Mass Displacement: A “just-in-case” UBI is valid only if massive, long-term unemployment arises despite concerted efforts to reskill workers. Even then, it should be modest, phase-limited, and encourage new skill acquisition.
Data-Triggered Policy: Rely on unemployment rates, job vacancy data, and skill mismatch stats to decide if and when to activate UBI—minimizing knee-jerk panic spending.
Keeping the Free Market Front and Center
Innovation as Our Lifeline: Capitalism’s competitive pressures don’t vanish with AI. They’re essential to keep costs falling and breakthroughs emerging—even after AGI arrives.
Rewards for Risk-Takers: Visionaries need strong incentives to continue pioneering advanced robotics, AI safety, and new human-machine synergy models that could yield whole new job categories.
Essential Takeaways
No Premature Generosity: High, unconditional UBI without genuine, large-scale displacement can lead to stagnant productivity, thwart necessary reskilling, and tip society toward a more socialist, sluggish economy.
Minimal, Conditional, Temporary: If a real AGI-driven meltdown comes, keep UBI below entry-level wage levels, pair it with training mandates, and include sunset clauses—so it doesn’t evolve into a permanent hammock.
Preserve the Capitalist Engine: Moderate taxes, retain competition, and keep the rewards high for inventors, entrepreneurs, and all who push civilization forward.
Thomas Howell’s ancient admonition holds true in our AI/AGI age: “Count not thy chickens that unhatched be.”
While AGI might ultimately transform or replace most human jobs, betting on an immediate apocalypse—and preemptively funding UBI payouts—risks damaging the very economic engine that has consistently raised living standards and created advanced AI.
A combo of contingency planning and a commitment to free-market principles can help society brace for the day AGI displaces large swaths of the workforce (assuming it actually does) without damaging the economy or the rate of innovation.