
Many people are coming to the view that AI will replace human workers entirely, doing everything we do, just hundreds of times cheaper, faster, and better. But if you delve into the science behind AI, that future isn’t guaranteed. Many people working in the industry remain skeptical about what AI can ultimately do, despite how impressive it is today.
Part of the problem comes from the hype itself. Many prominent people in the industry are claiming that AI is essentially the ultimate technology and that it will solve all human material (and perhaps immaterial) problems. Intelligent systems will have the power of unlimited intelligence, allowing them to shape the world in any way that serves human needs. Given that these machines can do anything, they will take over the economy and run it for us, eliminating the requirement for anyone to work a job.
But is that how things will actually turn out? More voices are now suggesting that AI won’t progress as far or as fast as tech CEOs seem to believe. Deep thinkers like David Deutsch, Roger Penrose, and Noam Chomsky argue that AI, as currently conceived, has fundamental limits, and that those limits will remain regardless of how powerful the software or hardware becomes over the coming years.
The depth of these critiques varies widely. Some are superficial, while others are profound, challenging the technology at its roots.
So, why might AI fail to replace workers en masse in today’s economy?
Trust Issues In AI Adoption
The first and perhaps most interesting reason why AI may not come to dominate the globe has to do with trust issues. A single mistake on the part of AI that leads to human deaths could essentially derail wider adoption in entire industries.
For example, suppose the medical establishment begins using AI to perform surgeries on patients in the near future. The underlying technology is largely available, so it may only be a matter of time before this happens. What would happen if it was discovered that an AI system deliberately botched an operation or harmed a patient? In this scenario, medical authorities would rush in, ban the technology, and reinstate conventional surgeons, halting the AI takeover worldwide. They would also demand that the system’s developers explain why the deliberate harm occurred and show how they’ve solved the problem. The issue is that no AI company can currently meet that demand. AI alignment remains an open challenge and can’t be guaranteed, so the first robot that harms a human in a given business setting could also be the last.
Hard-To-Automate Niche Roles
Then, of course, there are hard-to-automate niche roles. These would require exceptional training and a general learning system that might not be possible with today’s architectures. Unlike artisans, modern machine learning systems can’t learn to do something from one or two examples. They require millions of data points to refine their internal representations.
Take someone who makes jewelry, for instance. This person often makes bespoke items for clients before selling them on Etsy. They will usually draw what they want to make first and then attempt to craft it for the first time out of a piece of metal, even if there are no prior examples of the same thing elsewhere.
This process requires artistic creativity and intelligence. Machines can excel in areas where data is available, but developing something original for the first time from imagination is far more challenging. It is conceivable that it could be done, but perhaps not without prompts and detailed inputs from human users.
The International Labour Organization agrees that these niche roles are essentially immune to automation. Its research suggests that even as machine dexterity improves, AIs will struggle to develop genuinely new concepts on the first attempt without wasting resources.
AI Energy Costs
Another argument is that AI energy costs will eventually become prohibitive and that humans are simply far more efficient. For example, a datacenter serving chatbot outputs could draw as much as a million watts while generating an answer to a complex question. Meanwhile, a trained human brain runs on only about 20 watts.
This energy constraint is a significant concern in a world of dwindling resources. Many commentators worry that falling oil supplies and production will make AI ultra-expensive and infeasible in the long term, unless significant improvements can be made to the underlying architecture. Transformers were a substantial breakthrough, but experts like Geoffrey Hinton argue that more breakthroughs are needed.
According to some estimates, AI will account for 2% of global energy consumption by 2025, perhaps rising to 25% in a full-automation scenario. That sort of growth simply isn’t sustainable.
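The efficiency gap described above can be made concrete with a quick back-of-the-envelope calculation. This sketch simply takes the article's rough figures (one megawatt for a datacenter, 20 watts for a brain) at face value; both numbers are illustrative estimates, not measurements.

```python
# Illustrative comparison of the power figures cited above.
# Both values are the article's rough estimates, not measured data.
datacenter_watts = 1_000_000  # power drawn while answering a complex query
brain_watts = 20              # approximate power draw of a human brain

ratio = datacenter_watts / brain_watts
print(f"Datacenter draws roughly {ratio:,.0f}x the power of a human brain")
# → Datacenter draws roughly 50,000x the power of a human brain
```

Even if the datacenter figure is off by an order of magnitude, the gap remains in the thousands, which is why efficiency gains in the underlying architecture matter so much.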
The Long Tail Of Implementation
A further issue is what’s being dubbed “the long tail of implementation.” The idea here is that AI adoption takes time, and even if the technology solves 99% of work process problems, humans are still required for the remaining 1%. This reason is essentially why AI chatbots for customer service haven’t completely taken over. Yes, they can deal with the majority of questions and issues today, but they don’t have the same problem-solving skills or real-world agency that human agents have, preventing them from taking over these roles 100%.
Anyone who’s tried to use a chatbot to solve a complex problem will know this. While chatbots can provide canned answers to common questions, they lack the reach required to interact with CRMs, host meetings with managers, or get things done with third parties. It’s these more rounded abilities that make humans critical, even in low-skill jobs.
AI Can’t Step Outside Computation
Another interesting point, raised by Nobel Prize-winning physicist Roger Penrose, is that AI can’t step outside of its own computational process, something he argues humans can do because of our ability to understand Gödel’s theorem. Regardless of how sophisticated it becomes, AI is still a Turing machine and, therefore, can’t see the truth of a statement that isn’t provable within the logic that constructed it.
Whether this sets practical limits on AI adoption in the job market remains to be seen. However, it is an interesting insight suggesting that AIs are probably fundamentally different from us. They live within a computational paradigm while humans, arguably, don’t, and that could explain our generalizability, and why AI gets stuck inside task-specific domains.
Of course, AI-human collaboration could solve this issue. But that would essentially turn AI into a new form of capital or machinery, similar to what has been introduced since the industrial revolution. Under this assumption, it wouldn’t be fundamentally different from all the other technical disruptions that led to job losses in the past.
Economic Barriers To Full Automation
Economist and automation specialist David Autor also points out that for a capitalist system to replace human labor with machines, doing so must be profitable. This applies across the board, including to AI.
He gives the example of an AI that could replace warehouse workers. Today, constructing such machines seems possible, given the work being done by leading companies like Boston Dynamics. However, these systems are expensive, perhaps $500,000 to $1 million each. Because of this, Autor argues that most companies will stick with human laborers. The technology may exist, but it is too costly to deploy productively.
This argument explains why many things that people want to happen haven't happened yet, like bus-sized nuclear power plants or flying cars. The technology to build them is available, but the market won’t support them, simply because of the sheer resources required to build them. The same could be true of embodied, agentic AI.
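Autor's profitability argument can be sketched as a simple breakeven calculation. The only figure below taken from the article is the $500,000–$1 million robot price (using the midpoint); the lifespan, upkeep, and wage numbers are labeled assumptions chosen purely for illustration.

```python
# Back-of-the-envelope breakeven sketch for Autor's argument.
# Only the robot price range comes from the text; everything else is assumed.
robot_cost = 750_000            # midpoint of the cited $500,000-$1,000,000 range
robot_lifespan_years = 5        # assumed amortization period
robot_upkeep_per_year = 50_000  # assumed maintenance, power, and support

worker_cost_per_year = 40_000   # assumed fully loaded warehouse wage

robot_annual = robot_cost / robot_lifespan_years + robot_upkeep_per_year
print(f"Robot: ~${robot_annual:,.0f}/yr vs worker: ~${worker_cost_per_year:,.0f}/yr")
# → Robot: ~$200,000/yr vs worker: ~$40,000/yr
```

Under these assumptions, one robot costs about five times as much per year as one worker, so automation only pays off if a single robot can replace several people, which is exactly the margin Autor says today's systems fail to clear.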
Lack Of Human Judgment
Finally, there are qualitative arguments against AI replacing all jobs. Institutions simply may not allow it, preferring to keep humans accountable in the economic process. While AI might have access to more data and be better positioned to make sound decisions, societal friction could prevent it from assuming those roles, especially where it threatens political or organizational power.
AI may also lack the ability to make sound judgments in ambiguous situations. Nobody has really put it to the test yet, so it isn’t clear whether it can act ethically within specific cultural contexts.
Wrapping Up
The headwinds AI faces are stronger than the headlines suggest. While the technology has been exciting and highly capable since the introduction of LLMs, it still isn’t really human-like in its approach, meaning it can’t replace the vast majority of jobs.
Economists tend to break jobs down into roles and then predict whether AI could perform them. Often, they find that it can. However, when you break jobs into specific tasks, you discover something different: AI systems can’t actually do very much of the work.
According to McKinsey, around 30% of current job tasks are automatable, but the roles surrounding them are still necessary. It’s akin to workers being able to get more done in a day, instead of being replaced outright.