Developer Productivity Has Never Really Improved (And Why That's About to Change)
In 1975, Frederick Brooks published The Mythical Man-Month, a collection of essays that became a bible for software engineering and project management.
The book had been sitting on my shelf for years. When I reread it recently, it hit me hard. After closing it, I realized that over a twenty-five-year career, across every company I've worked at, I've watched the same patterns repeat endlessly. The very patterns Brooks describes, the ones that destroy project productivity.
I've been guilty of it myself. As a manager or team lead, I've tried — alongside my colleagues — countless approaches to improve how we ship. Different methodologies, different tools, different organizational structures.
But without fail, regardless of the company, the project, or the team, despite the best intentions, a delay would appear, something unexpected would surface, and at the end of the day, the roadmap lied.
How is that possible? Our industry has come a long way since Waterfall... We had the Agile revelation, the Scrum Guide, and a whole ecosystem of methodologies that bloomed around them.
Yes. But let's be honest: despite all of that, we're still in the same place. Exactly where Brooks described in 1975:
- Software projects are always late.
- Growing a team doesn't make it faster (spoiler: it does the opposite).
- Accurately estimating the total time to deliver a project is nearly impossible (we get it wrong every single time).
Why? Because every modern methodology has failed to address the three fundamental reasons behind this reality.
1. Brooks's Law: adding people slows the project down
The first trap is the man-month myth: the belief that you can compensate for a project's delays by throwing more developers at it. In reality, the bigger the team, the more communication becomes a black hole.
This principle, known as Brooks's Law, states that adding people to a late project only makes it later. The reason is mathematical: coordination costs grow quadratically with team size, because the number of pairwise communication channels (n × (n - 1) / 2) grows far faster than headcount.
For example, a 5-person team requires 10 distinct communication channels (for each person to exchange with every other). At 10 people, that number climbs to 45. At 15, it's 105.
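A quick sketch makes that quadratic growth concrete (illustrative only; the function name is mine, not a standard term):

```python
def communication_channels(n: int) -> int:
    """Number of distinct pairwise channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for size in (5, 10, 15):
    print(f"{size} people -> {communication_channels(size)} channels")
# 5 people -> 10 channels
# 10 people -> 45 channels
# 15 people -> 105 channels
```

Tripling the team from 5 to 15 multiplies the channels by more than ten, which is exactly why the overhead catches teams by surprise.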
Of course, nobody expects every individual to talk to every other person directly — so we invent... meetings. But the fundamental problem remains unsolved. Communication grows more complex as the team grows. We all know the "meeting that should have been an email," the discussions that go off-topic, the meeting to prepare the meeting, or the one we keep rehashing every six months... Not to mention the time spent synchronizing calendars, juggling between people on-site and others on video calls (which introduces its own communication problems), and so on.
But beyond all that, the issue runs deeper: meetings create an illusion of clarity — what cognitive scientists call the illusion of transparency. We sit in a room with eight people thinking information will be shared evenly. In reality, the opposite happens, driven by several cognitive biases:
- Conformity bias: once an idea is voiced (usually by the most assertive person), it becomes hard to push back.
- Impostor syndrome: for fear of looking incompetent, people suppress their doubts and questions.
- Status bias: the most senior or most extroverted people monopolize the conversation, creating a false consensus that masks latent disagreements.
In the end, the more members you add, the more communication overhead cannibalizes actual work time. But more importantly, information gaps quietly take hold. These invisible misunderstandings turn into surprises, blockers, and bugs that surface — almost always — far too late.
2. Code is just the tip of the iceberg
The second trap — and it's a big one — is our collective tendency to underestimate. This cognitive bias is so widespread it has a name: the planning fallacy. It describes our systematic propensity to underestimate the time needed to complete a task, even when we know that similar tasks took longer in the past.
If I had to cite a recent and spectacular example, it would be Apple Intelligence, whose AI features were pushed back to 2026 after initially being slated for iOS 18. The delay was so severe that Apple had to cancel an advertising campaign already in production — a rare admission of failure for a company known for near-flawless product execution.
We could also talk about Google's Privacy Sandbox debacle — a project announced with great fanfare, then repeatedly delayed, only to be abandoned in 2025 after years of development and a monumental disruption of the online advertising industry.
And we can go even further back, with the emblematic case of Netscape 6 — a complete rewrite of the most popular browser of its time, which ended up stretching over two years of development, only to ship a product so unstable it precipitated the company's demise.
Since Brooks published The Mythical Man-Month fifty years ago, we haven't made meaningful progress on this front. Methodologies have evolved, programming languages and development environments have improved considerably, but projects still end up proving roadmaps wrong.
In fact, if you look closely, the situation has evolved toward a normalization of delays. Commitments on deliverables have gradually faded. Software vendors — particularly in B2B — have adopted fuzzier development cycles: "Beta," "Private Preview," "General Availability." Roadmaps shared with customers now avoid any precise timeline commitments, sticking to annual goals always accompanied by disclaimers...
In short, delays have become the norm. In a way, we've (unconsciously?) institutionalized them.
Scrum is the perfect illustration: faced with the systematic failure of the Waterfall model and its chronic delays, what we actually did was eliminate long-term commitments.
By breaking projects into sprints of a few weeks, Scrum offers an elegant short-term solution: more realistic micro-commitments, more flexibility, more adaptability. It's undeniably better than the unrealistic promises of Waterfall.
But this approach carries a hidden cost: the abandonment of any strategic long-term vision.
That might sound provocative, so let me unpack it: by focusing on the immediate, we've sacrificed the ability to plan and anchor a clear, lasting direction. The project becomes a succession of sprints with no real defined destination, where the team moves forward with no certainty about where it's heading.
Sure, you can absolutely graft long-term vision on top of Scrum. It's common practice: the backlog is fed with ideas, the Product Owner sets the course, etc. Yes. But in practice, the way Scrum works induces a bias toward topic-switching. The wind blows hard from one direction, you prioritize that feature — three weeks later it shifts, another objective takes the lead, and so on.
The question deserves to be asked: has agility ultimately created its own bias? Are we truly capable of pursuing a major strategic objective when we commit to years of Scrum-based development?
I don't claim to have the answer. But I notice, as I write these lines, that this question is almost never asked.
Brooks (in 1975!) was already pointing to the root cause of flawed project planning. He writes, from the very beginning of the book, that a project is composed of four major types of work (still fully relevant today, fifty years later):
- Designing the work to be done (the specs)
- Writing the code
- Unit testing (testing each component, API tests)
- Integration testing (testing the complete system, or black box testing)
He specifies that these four types of work don't take the same amount of time at all. The breakdown looks like this:
- Design: 1/3 of the time
- Writing code: 1/6 of the time
- Unit testing: 1/4 of the time
- Integration testing: 1/4 of the time
According to Brooks, only 1/6th of total project time is spent writing code (roughly 17%). Conversely, half (50%!) of the project's time will ultimately be spent testing and validating overall coherence — and making the countless adjustments needed for everything to work together. Half the time spent testing, validating, fixing...
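Brooks's fractions are easy to sanity-check: they cover the whole project, and testing dominates. A minimal sketch (the labels are mine; Brooks's own terms differ slightly):

```python
from fractions import Fraction

# Brooks's breakdown of project time (The Mythical Man-Month, 1975)
breakdown = {
    "design": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "unit testing": Fraction(1, 4),
    "integration testing": Fraction(1, 4),
}

assert sum(breakdown.values()) == 1  # the four parts account for the full project
testing_share = breakdown["unit testing"] + breakdown["integration testing"]
print(f"coding: {float(breakdown['coding']):.0%}, testing: {float(testing_share):.0%}")
# coding: 17%, testing: 50%
```

Exact rational arithmetic makes the point unambiguous: coding is one-sixth of the work, testing is half.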
Worse: Brooks notes that while we rarely misjudge the time needed to code a given piece, we consistently neglect — or literally forget — to estimate the testing phase... which ends up being the largest time investment.
In many companies, the opposite pattern emerges. Testing and validation are the poor relations of the project.
When presenting a timeline to leadership, you want to show concrete output — and unconsciously, you omit testing, because it might seem like a luxury or wasted time. You might even be tempted to think that if you need tests, it means the developer did a poor job. That kind of reasoning reveals a deep misunderstanding of the craft. Going back to Brooks's breakdown: a developer might excel at their own tests, but they cannot — by definition — validate the entire system, since their work spans only one of its many parts.
3. Software: the intangible, elusive artifact
The third reason we estimate software projects so poorly is perhaps the most fundamental: software is an abstract object. Intangible. According to Brooks, this immaterial nature changes everything. It exposes us to powerful cognitive biases, such as the availability heuristic.
Unlike a car or a house, a software project emerges from the world of ideas. We think, conceptualize, model — and all of it materializes in code. For our brains, conditioned by the physical world, the visible output (a screen that works) is tangible, while the underlying complexity remains abstract.
This bias leads us to overestimate what comes easily to mind (the visible feature) and underestimate what's fuzzy (the tests, the edge cases, the unexpected technical pitfalls, that API that won't behave quite as expected). We think we've done 80% of the work when we've only laid the foundations (Brooks's famous 1/6th).
But this immaterial nature creates a second obstacle — even more formidable — to industrializing our craft: nothing is truly reusable.
Despite the abundance of code libraries, modern frameworks, and templates, every project reinvents a massive portion of its own logic. Compare that to the physical world: a car manufacturer, for its new model, will reuse the same machine tools, the same materials, the same assembly processes, the same standardized expertise. Its innovation is incremental, built on a solid, proven industrial foundation.
In software, every project is a rediscovery. We start (almost) from scratch, because the essence of the work isn't in assembling Lego bricks — it's in creating the unique logical connectors that bind them together. That "almost", which seems so trivial, is in fact the heart of the problem. It costs as much, if not more, than building from nothing.
Software isn't built — it's cultivated. And I just realized something: I constantly use the word "organic" when talking about my approach to programming and development. The connection just clicked... Yes, software grows organically. Unexpected branches sprout here and there, its roots tangle, its trunk forks.
AI, or the first disruption in 50 years
Fifty years of methodologies, frameworks, and heated debates — only to end up... in the same place. Despite all our obvious progress in abstraction, modularization, scalability, and automation, Brooks's Law remains relentless: teams struggle more and more to move as they grow, our estimates are still too optimistic, and the abstract nature of code makes everything harder to grasp.
But for the first time in half a century, a disruption seems to be emerging. Generative AI, coupled with the power of agentic tools like Cursor, may represent the first genuine productivity breakthrough in software engineering.
As I explore in my article on the Vibe Coding revolution, AI doesn't simply make us faster. It changes the very nature of development work by tackling the three fundamental bottlenecks:
- Communication complexity (Brooks's Law)? AI short-circuits it radically: the team shrinks to its minimum. This raises serious societal questions, but through the lens of the problem we're examining here, it's a meaningful answer. No need for 15 people where 3 will do...
- Chronic underestimation? AI doesn't eliminate our bias, but it drastically reduces its impact. By generating and testing code at superhuman speed, it compresses the development cycle. The submerged five-sixths of the iceberg gets explored in hours rather than weeks, making our estimation errors less punishing.
- The immaterial nature of software? That's AI's playground. It excels at manipulating abstraction, at creating those "logical connectors" that until now constituted the core of our effort. The developer can finally step back and focus on what truly matters: architecture, design, product vision.
The bottleneck was never how fast we typed code. The real constraint has always been how fast we could think, communicate, and structure complex systems.
Tom Blomfield, a partner at Y Combinator, said that programmers had been very well-paid farmers harvesting by hand... and that we just invented the combine harvester.
That analogy — predictably — outraged many programmers on X, certainly stung by the idea that the profession could be so profoundly transformed. Rather than fearing or rejecting this revolution in progress, I'd rather embrace it with open arms. Perhaps we're witnessing the birth of the first industrial revolution of the computer age — and if so, it's a thrilling time to be alive.