47 Million Developers and Counting: Is AI About to Change That Math?

There are now more software developers on Earth than nurses and doctors combined. That fact alone should give us pause—not because it’s necessarily wrong, but because it forces us to ask what we mean by “developer” and whether AI is about to redraw the map.

I’m not looking at the wider impact of AI across other industries here—just focusing on software development, a field I know well. The question first nagged at me after watching a throwaway video on vibe coding with an AI assistant, where someone casually mentioned that 0.5% of the global population now works in software. That number stuck with me.

The Numbers: Bigger Than You Think, Murkier Than They Seem

SlashData’s most recent estimate, published in early 2025, puts the global developer population at 47.2 million—a 50% increase from just three years earlier. Of these, roughly 36.5 million are professionals and 10.7 million are hobbyists or students. JetBrains, using a stricter methodology focused on professional employment, arrives at a much lower figure: 20.8 million. Evans Data Corporation sits between the two at approximately 28.7 million.

For context, the World Health Organization counts 29.8 million nurses globally (2023 data) and estimates roughly 13 million physicians. Combined, that’s about 43 million healthcare workers serving the physical well-being of eight billion people. By even the most conservative software developer count, we have a tech workforce in the same order of magnitude—and by the broadest measure, we’ve already surpassed it.

The comparison is vivid but imperfect. Medicine and nursing are constrained by physical presence: a nurse in Lagos cannot treat a patient in London. Software, by contrast, scales without geographic limits. A single team in San Francisco can ship a product used by hundreds of millions. That fundamental difference in leverage partially explains why the developer population has grown so large—and why AI’s productivity effects could prove so disruptive.

What AI Is Actually Doing to Developer Productivity

The hype around AI coding tools has outpaced the evidence, but the evidence is starting to come into focus.

A GitClear study analyzing 2,172 developer-weeks of real codebase data (September–December 2025) found that regular AI tool users showed roughly a 25% improvement in output compared to their own performance one year earlier. That’s meaningful and compounding. But it’s a far cry from the “10x developer” narrative. The same study found that the raw activity gap—where AI power users appeared 4–14x more productive than non-users—was largely explained by confounding factors: heavy AI adopters tend to be senior engineers at startups during their most productive weeks. Strip those factors out, and AI’s isolated contribution settles at that more modest 25%.

There are also costs. Heavy AI users exhibited 9x higher code churn—code written and then revised or deleted within two weeks—and 4x more code duplication. GitHub’s research team has reported similar patterns. AI makes it easy to generate code fast; it doesn’t make it easy to generate good code fast. The productivity gain is real, but it comes with a quality tax that teams must actively manage.

These findings suggest AI is best understood as a force multiplier, not a replacement. It amplifies the capability of skilled engineers rather than eliminating the need for engineering judgment. The developers who benefit most are those who already know what good software looks like.

The Layoff Signal

If the productivity data is ambiguous, the corporate response is not. In Q1 2026 alone, the tech industry shed an estimated 52,000 to 78,000 jobs, with tracking sites like TrueUp recording nearly 95,000 total tech layoffs by mid-April. According to Tom’s Hardware and Challenger, Gray & Christmas, roughly half of these cuts were attributed to AI-driven efficiency gains.

The most prominent example came from Block, where CEO Jack Dorsey cut approximately 4,000 of the company’s 10,000 employees in February 2026, citing AI as enabling “a new way of working.” Block’s stock jumped on the announcement—a signal that markets, at least, believe AI can substitute for headcount at that scale. Whether Dorsey’s framing was genuine or convenient is debated. A New York Times opinion piece by a former Block employee argued the “AI playbook” was just a fresh justification for familiar Silicon Valley cost-cutting.

Snap, Oracle, Disney, and others have followed with their own rounds. The pattern is clear: companies are testing the hypothesis that fewer developers, armed with AI, can maintain or increase output. Whether the hypothesis holds over 12–18 months as technical debt accumulates from AI-generated code churn remains to be seen.

The Professionalization Question

All of this feeds an older debate: should software engineering be professionalized like medicine or law? Today, anyone can call themselves a software developer. There is no licensing requirement, no bar exam, no mandatory continuing education. The IEEE Computer Society offers a Professional Software Engineering Master Certification covering 11 competency areas (requirements, design, testing, maintenance, and more), but it’s entirely voluntary and virtually unknown outside academic circles. Texas licensed software engineers as Professional Engineers (PEs) for two decades, but adoption was so negligible that the NCEES software engineering exam was discontinued in 2019.

The argument for professionalization has gained new urgency as AI reshapes the field. If AI handles boilerplate code generation, testing scaffolding, and documentation—the tasks that occupy much of a junior developer’s day—then the remaining human role shifts toward architecture, system design, ethical oversight, and complex problem-solving. These are precisely the competencies that benefit from formal standards and credentialing.

But professionalization also carries risks: it creates barriers to entry that historically disadvantage self-taught developers, career changers, and people from lower-income backgrounds—the very groups that the tech industry has spent a decade trying to recruit. A licensing regime designed to elevate standards could easily become one that entrenches privilege.

So How Many Developers Do We Actually Need?

The honest answer is that nobody knows, and anyone who gives you a specific number is speculating. An earlier draft of this essay proposed a target of “5–10 million”, a figure that would represent a 75–90% reduction from today’s professional workforce. That kind of collapse is hard to square with the demand side of the equation. Global software market revenue is projected to grow from $824 billion in 2025 to over $2.2 trillion by 2034, and overall IT spending is expected to rise nearly 10% in 2026 alone, exceeding $6 trillion, with software among its fastest-growing segments. Companies are not spending less on software; they’re spending dramatically more.
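A back-of-envelope calculation shows the tension. Suppose each developer’s output roughly tracks software revenue and AI delivers the ~25% uplift measured above; workforce size then depends on how demand grows relative to that uplift. This is illustrative arithmetic only: the figures come from the text, and the assumption that revenue per developer otherwise stays constant is mine.

```python
# Back-of-envelope: developers needed = total output demanded / output per dev.
# Headcount and market figures are from the article; holding the AI
# productivity multiplier at 1.25x is an assumption for illustration.
devs_today = 36.5e6      # professional developers (SlashData)
market_2025 = 824e9      # global software revenue, USD
market_2034 = 2.2e12     # projected revenue, USD

demand_growth = market_2034 / market_2025   # roughly 2.7x over nine years
productivity_gain = 1.25                    # the ~25% AI uplift, held constant

devs_needed_2034 = devs_today * demand_growth / productivity_gain
print(f"{devs_needed_2034 / 1e6:.0f} million developers")
```

Under these assumptions the number goes up, not down: a 2.7x growth in demand swamps a 25% productivity gain. A “5–10 million” outcome would require AI productivity to improve far faster than demand grows.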

The historical pattern with productivity-enhancing technology is that it tends to increase total employment in a field even as it reduces the labor required per unit of output. This is the Jevons paradox at work: when software becomes cheaper and faster to build, we build more of it. The automobile didn’t eliminate transportation jobs; it created millions of new ones in different forms. AI may follow the same pattern, or it may not. Software has unique properties (zero marginal cost of distribution, near-infinite scalability) that could break the historical analogy.

What seems most likely is a reshaping rather than a simple shrinking. The composition of the developer workforce will shift toward higher-skill roles—architects, security specialists, AI integration engineers, domain experts who can direct AI systems—and away from tasks that AI can handle autonomously. The absolute number may grow, shrink, or hold steady depending on how fast AI capability outpaces the expansion of software demand.

What We Can Say With Confidence

Three things are clear from the data.

First, the developer population is genuinely large—somewhere between 21 million and 47 million depending on who you count—and comparing it to the healthcare workforce is a legitimate way to highlight just how central software has become to the global economy. That comparison deserves serious engagement, not dismissal.

Second, AI tools are delivering real but modest productivity gains (~25% in rigorous studies), with meaningful quality trade-offs that are not yet well-managed at scale. The companies betting on dramatic headcount reductions are running an experiment whose results won’t be clear for another year or two.

Third, the professionalization of software engineering—whether through licensing, certification, or credentialing—is an idea whose time may finally be arriving, driven less by idealism than by the practical reality that AI is automating the easy parts and leaving the hard parts for humans. The hard parts require demonstrated competence, and demonstrated competence requires standards.

The question isn’t whether we “need” 47 million developers. It’s what kind of developers we need, and whether we have the institutional structures to ensure quality as the tools change everything about how software gets made.