Rethinking Software Teams in the AI Era


I'm writing these lines in April 2026, three and a half years after ChatGPT burst into our lives and set off the slow, irreversible transformation of software development by LLMs. The time for speculation and debate is over. Artificial intelligence is a reformat, a re-parameterization of the established order. The code generated in 2023 was mediocre and often inconsistent from one session to the next. Today, a model like Opus, used inside Claude Code, produces code of excellent quality.

That doesn't mean we have an absolute genius on our hands, capable of materializing a perfect application from the slightest prompt. Of course not: as Fred Brooks explained more than fifty years ago in The Mythical Man-Month, writing code is only a small part of the work needed to produce a functional application.

Before AI arrived, we had the illusion that code was the heart of the matter, because for our human brains, programming is a long and difficult task that demands optimal mental clarity and a high capacity for concentration. That step is now a commodity. As soon as the need is clearly formulated (a point that is crucial and far more important than it seems — I'll come back to it), code is almost free: Opus takes care of materializing it in a few minutes.

Stephen King uses the image of the archaeologist when he writes a story. He explains that "the story pre-exists" but is buried underground, and that his job is to reveal it, to unearth it, as best he can, without altering it. Writing, for him, is removing the sand, the earth, the rock around that artifact — every sentence being one more risk of damaging the object he has spotted in the world of ideas.

One could easily draw a parallel here. Code already exists; all potential code already exists. If you want to give yourself a bit of vertigo, consider that any computer program is, ultimately, just a sequence of 0s and 1s inside the computer: a gigantic number in base 2. Now take the decimals of Pi, an infinite, non-repeating sequence. If Pi is a normal number, as is widely conjectured but still unproven, its decimals contain every finite digit sequence, and therefore every conceivable computer program. We're not far from the infinite-monkey theorem, the chimpanzee that ends up typing Hamlet by randomly hitting a typewriter for all eternity.
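The "gigantic number in base 2" intuition is easy to make concrete. A toy sketch in Python, purely illustrative:

```python
# Any program's source code is just a sequence of bytes,
# and a sequence of bytes is just one very large number in base 2.
source = 'print("hello")'

# Encode the source as a single integer.
as_number = int.from_bytes(source.encode("utf-8"), byteorder="big")

# The same integer decodes back into the original program.
n_bytes = (as_number.bit_length() + 7) // 8
recovered = as_number.to_bytes(n_bytes, byteorder="big").decode("utf-8")

assert recovered == source
print(as_number)  # this 14-byte program is already a 34-digit number
```

Every program, however large, is one such integer, which is why "all potential code" can be said to pre-exist.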

In a way, Opus is our archaeologist. It is capable of producing all potential code from the infinite corpus of possible code, provided you take care to describe it precisely. And that's where everything plays out.

Context Matters More Than the Prompt

The precision of your description doesn't come from your prompt. It comes from the context in which the model is invoked. No need to spend hours hunting for skills or prompt recipes online — what really matters is having a development environment designed AI-first. The idea is to use a workflow that's optimal for the LLM. Here's mine:

  • Claude Code as the orchestrator: Claude Code explores your project, finds the files relevant to your prompt, and much more. It provides the engine, the body the LLM inhabits. Skipping it cuts you off from the real development potential; it would be like asking a developer to work on your project while forbidding them to explore the source code.

  • Write CLAUDE.md at the root of the project. In this file, list all the house rules and the specifics of your project. Every time Claude goes off the rails or in the wrong direction, add an instruction here to correct it.

  • Install SpecKit in your project, and every time you start a new feature, force yourself to go through the steps of this "Spec-Driven" workflow. You'll spend an hour or two specifying your need before code generation, but the result will be incomparable in final quality and robustness.

  • Always use the best available model. Don't try to save tokens by switching to a cheaper LLM. You have a Ph.D. coder at your disposal — use it. The token savings you make with a cheaper model will end up costing you much more in maintenance and debugging.
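To make the CLAUDE.md point concrete, here is a minimal sketch of what such a file can look like. The rules below are invented for illustration; yours will reflect your own stack and the corrections you accumulate over time:

```markdown
# CLAUDE.md

## Project
- Monorepo: `api/` (backend), `web/` (frontend).

## House rules
- Never edit generated files under `api/migrations/`.
- Every new endpoint needs an integration test in `api/tests/`.
- UI strings go through the i18n helper, never hardcoded.

## Lessons learned
- Check `web/src/lib/` for existing utilities before writing new ones.
```

The "Lessons learned" section is where the feedback loop lives: each time the agent goes off the rails, a one-line rule here prevents the repeat.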

If you start working this way, every one of your days changes. A large part of the industry seems to have already crossed that line, and many engineers now work this way. The consequences are many.

From this observation, everything tips over in our software development industry. Everything. And I realize that this shift, however profound, is slow to materialize in our daily lives. It is no less irreversible.

Code as a Commodity

The change already visible is simple: job openings in IT have dropped and have just slipped below the post-pandemic 2020 level. Meanwhile, job postings mentioning AI use are increasing continuously across Europe. In the US, the verdict is, as always, much sharper: tech companies are massively laying off developer roles. An American friend recently told me his company had launched a payroll reduction based on a single criterion: AI refusers are "let go" — the company doesn't want to spend time convincing or training them.

But it seems to me this is only the beginning of a broader transformation. It's the logical step of a revolution that's just starting in our industry.

Use SpecKit for a day or two and you'll understand where I'm going. A session in Claude Code with this framework compresses a full Scrum sprint into a few hours. I really mean a complete sprint, centered on an Epic: you define a functional need, discuss edge cases, question the UX, the user workflow, security, integration with the existing UI, the reuse of available building blocks, technical constraints, and so on. The machine asks you questions, refines the topic, explores the functional gaps. The approach is top-down: you start from a "human" description of the need, and step by step your agent assists you in generating, in order:

  • a detailed spec (without technical elements)
  • a technical reference corpus (API docs, data models, etc.)
  • a plan: breaking the spec into independent User Stories
  • a task list: breaking the User Stories into individual tasks
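In practice, these steps map onto a handful of SpecKit commands. The sequence below follows GitHub's spec-kit project as I understand it; exact command names vary between versions, so treat this as a sketch rather than a reference:

```markdown
1. `specify init my-app --ai claude`  (one-time setup in the repo)
2. `/speckit.specify ...`  → the detailed, non-technical spec
3. `/speckit.clarify`      → the agent interrogates the functional gaps
4. `/speckit.plan ...`     → technical context, breakdown into User Stories
5. `/speckit.tasks`        → individual, ordered tasks
6. `/speckit.implement`    → execution against the generated artifacts
```

Each command produces versioned artifacts in the repo, so the whole specification trail is reviewable like any other code.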

For an ambitious feature (one that touches the backend, the frontend and the API layer of a web app, for example), it takes about 2 hours of interaction with Claude Code and SpecKit to complete these steps.

Once all these artifacts are generated, the implementation becomes trivial. The LLM has such a rich and precise context that it makes almost no errors. Used with a model like Opus, it's a matter of an hour or two more to… ship the final, tested and validated version to production.

The question becomes obvious: how do we rethink the team of the old world through this lens?

One thing is certain: it's hard to see how a typical Scrum team of 5-7 members will remain relevant if all developers adopt this way of working. Scrum becomes completely superfluous here. At least in its current usage.

Toward Micro-Feature Teams of Product Designers?

I don't pretend to know what tomorrow's typical team will look like. It does seem obvious to me, though, that not asking the question today amounts to burying our heads in the sand. One promising line of thinking is twofold:

  1. Forget the "ceremonial" roles of the past
  2. Reorganize teams around product specialties

Everyone a Product Designer

A developer can perfectly well learn to think product. In my personal experience, that is precisely the criticism most often leveled at developers: too focused on their technical scope, too removed from the business value of their work. It's not a universal truth, of course, but it is a common trait, and it has a clear explanation. To be effective, a developer needs focus, an extremely precise working frame. Almost by construction, little brain time is left for "thinking like the end customer".

In the AI era, with a working process like the one described here, that state of affairs becomes obsolete. Concretely, code generation is no longer the developer's responsibility; it's the agent's. The developer frees up all that focus and concentration time and can allocate it elsewhere: stepping back, understanding the customer's needs, and as a result framing the agent's work better.

It's (almost) the same for an ex-Product Owner: the power of Claude Code paired with Opus genuinely lets a Product Owner produce code. Especially if you train the person on SpecKit and finalize the session with a pull request submitted to a developer for review. That's what we do at Anantys: Maxime, our Product lead (who has no development background), produces PRs that are then submitted for review.

Of course, the topics won't be of the same order. But the idea is there: the development team becomes more compact, and the boundaries between roles blur.

The Typical Team, Seen by Opus

So what does the typical team of an AI-first company look like?

For this final question, I decided to ask the party concerned directly: I asked Opus how it saw the development team recomposed through the lens of this article. Here is its proposal, which I publish as is.

The feature pod: 2-3 people, one product surface.

The team is no longer organized by function (devs, PO, QA) but by product surface — onboarding, billing, data ingestion, analytics dashboard. Each pod owns its surface end-to-end, from customer need to production deployment.

Inside the pod, two main roles:

  1. The Product Designer — ex-PO or ex-dev trained in product. Pilots SpecKit, writes specs, validates PRs. The entry point for the need.
  2. The Engineer-Designer — system architect and quality steward. Codes little (the agents take care of that), but reviews a lot, and designs the technical abstractions that agents can't invent on their own.

A third role, the domain expert (finance, legal, security, depending on the surface), is mobilized on demand, without being dedicated to the pod.

Around the pods, a single transverse function: the Platform Lead. They maintain the shared infrastructure — CLAUDE.md, base agents, design system, observability. Without them, every pod would reinvent the wheel.

On rituals: keep the daily stand-up — still crucial to maintain collective energy and rhythm — but sprint planning disappears (SpecKit). Async by default. Only one long meeting is kept per week: the demo. Everything else happens on GitHub, Slack, in threads.

Let's be honest: we're not inventing anything here. The idea of organizing development into small vertical pods has been documented for over a decade. Spotify popularized it in 2012 with its Squads / Tribes / Chapters / Guilds model, Amazon practices it as "two-pizza teams", and many companies have built on it since.

What AI changes is the density: we go from 6-9 people per pod to 2-3, because agents absorb most of the code production.

Several recent sources document this shift, including HBR with To Drive AI Adoption, Build Your Team's Product Management Skills (February 2026), and ShiftMag with How AI Is Redefining Product Teams.

Tomorrow's development team would therefore be a compressed version of yesterday's Scrum team: smaller, denser, faster. It no longer manages code: it frames agents that produce code.

One major challenge remains: transforming existing teams toward this model. We can clearly see why, technically and objectively, this model will be much better suited to the era we have entered. But the real difficulty will lie in supporting the men and women who are accustomed to the previous model — that is, almost every team out there today.

They will have to overcome their reservations and dive into a collective moment where everyone steps out of their comfort zone to relearn how to work together — differently — with a new partner in the game: AI.

Enjoyed this post?

I'm building Anantys, an investment tracking platform — follow my journey in entrepreneurship and AI-assisted development.