Stand Up Meeting Best Practices

This is a highly opinionated view on best practices for running stand-up meetings. It’s based on the approach I’ve developed and refined through working with close to a hundred product teams over the past 25 years.

Across all of them, one thing has held true: a good stand-up acts as the beating heart of a high-performing team.

Done well, a stand-up gives the team focus, momentum, visibility, and a shared sense of purpose. Done badly – and sadly, that’s more common – it becomes a daily chore. A box-ticking exercise. Status update theatre. Or worse, a passive, rambling, soul-draining ritual no one looks forward to.

In this article, I’ll share the practices I’ve seen consistently work – and explain why they matter, not just what to do.

Focus on the work, not the people

I have a strong personal dislike for the classic Scrum-style format of “yesterday, today, blockers.” It reinforces the idea that the stand-up is about checking that everyone is doing something, rather than focusing on what truly matters: are we delivering? It encourages individual updates over team progress, and often results in only talking about the work people are actively doing – which means anything not being worked on, including stuck or neglected items, gets ignored.

Everything from here on assumes you are taking this approach.

Walk the board from right to left

The work closest to being in production is the most valuable because it’s closest to delivering impact. Until something is released, it delivers zero value – no customer benefit, no feedback, no outcome. It’s also where the most effort has already been invested, so leaving it unfinished carries the highest cost. By focusing on what’s nearly done first, you prioritise finishing over starting, reduce waste, and increase the chances of delivering real value sooner.

  • Start from the right-hand side of the board
  • Focus on the work; ensure all work in progress has been discussed
  • Conclude the stand-up by asking:
    • Has anyone not spoken yet?
    • “Are we on track?” – a final opportunity for anyone to raise any issues

Bias for action, delivery, and results

Stand-ups work best when they reinforce a culture of delivery. It’s not just about sharing what you’re doing – it’s about driving action, finishing work, and holding each other to a high standard. These behaviours help teams stay focused, accountable, and outcome-oriented.

  • Focus on completion – what will it take to get this done?
  • Use commitment language
  • Take ownership
  • Challenge one another to uphold best practices

Visible, present and engaged

Whether remote or in person, being visibly present and engaged is a basic sign of respect – especially for the person facilitating. It’s frustrating and disruptive when people appear distracted or disinterested, particularly in a short, focused meeting like a stand-up. Cameras off might be fine for a long company all-hands, but not for a 10-minute team check-in. The stand-up only works if everyone is paying attention and showing up fully.

  • Bring your whole self, pay attention.
  • Be on time
  • Cameras on when remote
  • Do not multi-task
  • Gather together in person on office days, don’t stay at desks

Efficient and focused

Stand-ups are a tool for focus and momentum, not a catch-all meeting. When they drag or lose direction, they quickly become a waste of time – and people disengage. Keeping them brief and on-topic ensures they stay effective, energising, and sustainable. Updates should be concise and relevant to the team’s progress. Longer conversations can still happen when needed – just not here.

  • Keep it brief, aim for 10 minutes or less
  • Talk less, be informative. Be as to-the-point as possible. Stay on track and speak to what the team needs to know
  • Take conversations offline (agree how to follow up and who’s taking the action)
  • Only team members contribute (i.e. not stakeholders, supporting roles, observers)
  • Make sure the board is up to date before you start
  • BUT, fun is good! A bit of informal chat, banter and jokes is ok 

Well facilitated

A well-run stand-up doesn’t happen by accident – it needs strong facilitation. The facilitator sets the tone, keeps the meeting on track, and reinforces good habits. Without that, it’s easy for bad habits and practices to creep back in.

  • Have a clear agenda and stick to it
  • Be the pace setter
  • Bring energy
  • Ensure best practices are being followed

Rotate the facilitator

The stand-up is ultimately for the team, not for the facilitator. Rotating who leads it is a powerful way to build shared ownership and reinforce that principle. When the same person always runs them, it can start to feel like their meeting – which subtly encourages passive behaviour, status reporting, and a lack of collective responsibility.

By rotating the facilitator, you signal that everyone has a role in making the stand-up effective. It keeps people engaged, encourages investment, and helps the whole team develop a shared understanding of what ‘good’ looks like.

But there’s a big caveat: facilitation still needs to be good. Make sure everyone taking the role:

  • Is confident and capable of running an effective stand-up
  • Can hold the line if things go off course
  • Is open to feedback

Importantly, someone still needs to be ultimately accountable for ensuring your stand-ups remain effective.

A great stand-up should energise the team, not drain it. If yours isn’t doing that, fix it.

Appendix

Stand-up health check

Use this to periodically assess whether your stand-up is working as it should:

✅ Was everyone present and on time?
✅ If in person, did the team gather together (not stay at desks)?
✅ If remote, did everyone have their camera on?
✅ Was the board fully updated before you started?
✅ Did it finish within 10 minutes?
✅ Was everyone engaged and paying attention?
✅ Did everyone in the team speak and confirm what they’re doing?
✅ Was all work in progress discussed?
✅ Were any follow-up conversations taken offline, with a clear owner?

Further reading

Martin Fowler – It’s Not Just Standing Up – a comprehensive guide to patterns and practices for daily stand-ups.

Why you’ve probably got Object-Oriented Programming wrong all this time 🤯

Most people were taught OOP means organising code into classes, using inheritance to share behaviour, and exposing/manipulating state via getters and setters. This has led to bloated, brittle code and side-effect-ridden systems that are hard to change.

But that was never the intention!

Alan Kay, who coined the term in the late ’60s, had something very different in mind. He saw OOP as a way to build systems from independent, self-contained objects – like small computers – that communicate by sending messages.

So where did it all go wrong?

Languages like C++ and Java formalised classes and inheritance as core features. Academia followed, teaching the “4 pillars” of OOP – encapsulation, abstraction, inheritance, and polymorphism – often illustrated with real-world analogies like cats inheriting from animals or shapes extending other shapes 🤦‍♀️

This encouraged developers to focus on classification and hierarchy, rather than systems that emphasise behaviour, clear boundaries, and message-based interaction.

Kay later said:

“I’m sorry that I long ago coined the term ‘objects’ for this topic because it gets many people to focus on the lesser idea. The big idea is messaging.”

And:

“OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.”

In other words, OOP was meant to be about communication, modularity, and flexibility – not rigid structures or class hierarchies.

Kay’s original ideas are still just as relevant today. They’re language-agnostic, and they apply just as well in JavaScript, Go, or Rust as they do in Java or C#.

If you’ve got a beef with OOP, aim it at what it became – not what it was meant to be.

What can we do instead?

If you want to align more closely with the original spirit of OOP – and build systems that are easier to understand, change, and scale – here are some heuristics worth considering. These aren’t hard rules. Like any design choice, they come with trade-offs. The key is to think deliberately and apply them where they bring value.

Design small, composable parts that can evolve

Avoid deep inheritance hierarchies. Instead, model systems using small, focused components that you can compose (“composition over inheritance”). This encourages flexibility and separation of concerns.
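As a rough illustration, here’s a minimal Python sketch of composition over inheritance. The names (PdfRenderer, Signer, Report) are hypothetical, invented for this example – the point is that small, focused collaborators are wired together rather than stacked into a class hierarchy.

```python
from dataclasses import dataclass

# Instead of an inheritance chain like Report -> PdfReport -> SignedPdfReport,
# compose small parts that each do one job.

class PdfRenderer:
    def render(self, text: str) -> str:
        return f"<pdf>{text}</pdf>"

class Signer:
    def sign(self, document: str) -> str:
        return f"{document}[signed]"

@dataclass
class Report:
    renderer: PdfRenderer
    signer: Signer

    def publish(self, text: str) -> str:
        # Each collaborator can be swapped independently of the others.
        return self.signer.sign(self.renderer.render(text))

report = Report(PdfRenderer(), Signer())
print(report.publish("Q1 results"))  # <pdf>Q1 results</pdf>[signed]
```

Swapping PdfRenderer for, say, an HtmlRenderer needs no change to Report or Signer – that’s the flexibility composition buys you.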

Let objects own their state and behaviour

Don’t pass state around or expose internals for others to manipulate. Instead, define clear behaviours and interact through messages. This reduces coupling and makes each part easier to reason about in isolation.
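A small hypothetical sketch of what this looks like in Python – the BankAccount below owns its balance and exposes behaviours (messages) rather than a getter/setter pair over raw state:

```python
class BankAccount:
    """Owns its state; callers ask it to do things rather than mutating it."""

    def __init__(self) -> None:
        self._balance = 0  # internal detail, not exposed for direct manipulation

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount: int) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def can_cover(self, amount: int) -> bool:
        # A behavioural question, not a raw state dump.
        return self._balance >= amount

account = BankAccount()
account.deposit(100)
account.withdraw(30)
print(account.can_cover(50))  # True
```

Because the invariants live inside the object, no caller can put the account into an invalid state – which is exactly what makes it easy to reason about in isolation.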

Reduce hidden side effects

Use immutable data and pure functions to limit surprises and make behaviour more predictable. This isn’t about functional purity – it’s about making change safer and debugging less painful.
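A minimal sketch of the idea in Python (the Order type and discount logic are invented for illustration): the value object is frozen, and the function returns a new value instead of mutating its input.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # immutable: attempts to mutate raise an error
class Order:
    items: tuple[str, ...]
    total: int

def apply_discount(order: Order, percent: int) -> Order:
    # Pure function: no hidden side effects, just input -> new output.
    return replace(order, total=order.total * (100 - percent) // 100)

original = Order(items=("book",), total=200)
discounted = apply_discount(original, 10)
print(discounted.total, original.total)  # 180 200
```

The original order is untouched, so any code holding a reference to it can’t be surprised by a change happening elsewhere.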

Look to supporting architectural patterns

Approaches like Domain-Driven Design (DDD) and Hexagonal Architecture (aka Ports and Adapters) both support a more Alan Kay-style approach to OOP.

DDD encourages modelling your system around behaviour and intent, not just data structures. Entities and aggregates encapsulate state and logic, while external code interacts through clear, meaningful operations – not by poking at internal data. Bounded contexts also promote modularity and autonomy, aligning closely with the idea of self-contained, message-driven objects.

Hexagonal Architecture reinforces separation of concerns by placing the application’s core logic at the centre and isolating it from external systems like databases, user interfaces, or APIs. Communication happens through defined interfaces (“ports”), with specific technologies plugged in via adapters. This approach makes systems more modular, testable, and adaptable – supporting the kind of clear boundaries and message-based interaction that aligns closely with the original intent of OOP.
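The ports-and-adapters idea can be sketched in a few lines of Python. Everything here is a hypothetical example (OrderRepository, PlaceOrder, and the in-memory adapter are invented names): the core logic depends only on a port, and concrete technologies plug in as adapters.

```python
from typing import Protocol

class OrderRepository(Protocol):
    """The port: an interface the core depends on, with no storage details."""
    def save(self, order_id: str) -> None: ...

class PlaceOrder:
    """Core application logic – knows nothing about databases or frameworks."""
    def __init__(self, repo: OrderRepository) -> None:
        self.repo = repo

    def execute(self, order_id: str) -> str:
        self.repo.save(order_id)
        return f"order {order_id} placed"

class InMemoryOrderRepository:
    """One adapter; a real one might wrap Postgres or a remote API."""
    def __init__(self) -> None:
        self.saved: list[str] = []

    def save(self, order_id: str) -> None:
        self.saved.append(order_id)

repo = InMemoryOrderRepository()
print(PlaceOrder(repo).execute("A42"))  # order A42 placed
```

Note that tests can drive the core through the same port with a cheap in-memory adapter – one of the practical payoffs of this separation.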

Without Systems Thinking, AI won’t deliver the gains you expect

If you’ve not yet got into Systems Thinking, now is more important than ever.

AI brings the promise of big productivity gains – but so many optimisation efforts deliver nothing. Worse, they can take you in the wrong direction, faster – exacerbating the very issues you’re trying to solve.

Why? Often the things that look like the problem are just symptoms. They’re usually the most visible and measurable activities – and therefore appealing to focus on. At best, it’s like taking painkillers instead of treating the underlying cause of the illness.

I see it all the time – organisations attempting local optimisations to the wrong part of the system, blind to the knock-on effects. Fixes that make bottlenecks worse, and costs taken out in one place, only to reappear elsewhere – often bigger than the original saving.

GenAI makes it even more tempting to fix the wrong problems. It’s good at generating documents, writing more code, and handling support queries. None of these are directly value-adding activities, but they are visible and measurable.

There’s a real risk of just getting busier with busywork.

That’s why you need to step back and look at the system as a whole. Map end-to-end value streams – across people, process and technology. Identify pain points, bottlenecks, and constraints. Understand how the work flows, and what’s actually causing the outcomes you’re seeing.

That’s systems thinking in a nutshell.

A lot of the theory-heavy books make it sound more complex than it is. It’s why I’m a big fan of The Goal by Eliyahu M. Goldratt. Yes, it’s a corny business novel – but it’s one of the most practical intros to systems thinking you’ll find.

When you take a systems approach, more often than not you’ll find the real problems aren’t where the pain is. Rather than playing whack-a-mole patching symptoms, you’ll uncover opportunities to make changes that deliver real, lasting impact 🙌

Good engineering practices have become essential in the age of GenAI-assisted development

Especially since Cursor made some changes late last year, there’s been a lot of excitement about fully AI-driven development, where developers sketch out the intent and the agent generates the code. It’s interesting stuff, and I get why people are excited.

But what’s becoming increasingly clear to me is that modern engineering best practices aren’t just nice to have in this world – they’re absolutely essential. Without them, you’re basically handing the GenAI a loaded gun and pointing it at your own foot.

For one thing, if this is the case, the value of people with my type of skills and experience has just shot up significantly, which is why I keep coming back to this Kent Beck tweet from a couple of years ago:

Kent Beck Twitter post: “I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate”

Why good engineering practices matter more than ever

I see a lot of guides and people comparing using GenAI in software development to pairing with a junior developer, but that’s the wrong analogy.

It’s much more like a toddler.

Just like a young child, GenAI has no real-world context. It grabs random things, unexpectedly veers off in bizarre directions, and insists it’s right even when it’s covered in spaghetti. It’ll confidently hand you a “masterpiece” that’s just glue and glitter stuck to your phone. Anyone who’s had young children will know this very well – you can’t leave them alone for a second.

GenAI can produce plausible-looking code at incredible speed, but it will happily generate code that’s subtly, or spectacularly, wrong.

Without good practices and strong guardrails around it all, you’re not accelerating delivery – you’re accelerating chaos.

The engineering practices that matter

None of these are new. They’re well-established, widely recognised best practices for building and maintaining software. They’ve always mattered – but now they’re essential. If you’re not in good shape on these fronts, I’d strongly suggest staying away from AI-driven development until you are. Otherwise, you’re just accelerating toward disaster.

  • Clear requirements and expected outcomes – Knowing what you’re building and why, with clear, well understood, outcome-based requirements and definitions of success.
  • Clean, consistent, loosely coupled code – Code that’s easy to understand, maintain, and extend, with clear separation of concerns, high cohesion and minimal unnecessary dependencies.
  • High and effective automated testing – Unit tests, integration tests, E2E tests all running as part of your deployment pipeline.
  • Frequent code check-ins – Regularly checking in code, keeping branches short-lived.
  • Continuous Delivery – Highly automated build and deployment. Releasing in small batches frequently into production (not every 2 weeks)
  • Static analysis – Automated checks for code quality, vulnerabilities, and other issues, baked into your pipelines.
  • Effective logging and monitoring – Clear visibility into what’s happening in all environments, so issues can be identified and understood quickly.
  • Infrastructure as code – Consistent, repeatable infrastructure and environments, easy to maintain and keep secure.
  • Effective documentation – Lightweight, useful documentation that explains why something was done, not just what was done.

Common knowledge, but not common practice

I’ve long opined that whilst these are the most well-established and widely recognised best practices, they are still far from commonly followed. Only a relatively small proportion of organisations and teams actually follow them well, and the number of people in the industry with these skills is still comparatively small.

The reality is, most of the industry is still stuck in bad practices – messy code, limited automated testing, poor automation and visibility, and a general lack of solid engineering discipline.

If those teams want to lean heavily into GenAI, they’ll need to seriously improve their fundamentals first. For many, that’s a long, difficult journey – one I suspect most won’t take.

While already high-performing teams will likely see some benefit, I predict most of the rest will charge in headfirst, blow up their systems, and create plenty of work for consultants to come in to clean up the mess.

Final Thought

GenAI isn’t a shortcut past the hard work of good engineering – it shines a spotlight on why the established good practices were already so important. The teams who’ve already invested in strong engineering discipline will be the ones best placed to see real value from AI-assisted development.

For everyone else, GenAI won’t fix your problems – it’ll amplify them.

Whether it leads to acceleration or chaos depends entirely on how strong your foundations are.

If you’re serious about using GenAI well, start by getting your engineering house in order.

What CTOs and tech leaders are observing about GenAI in software development

It’s helpful to get a view of what’s actually happening on the ground rather than the broader industry hype. I’m in quite a few CTO and Tech Leader forums, so I thought I’d do something GenAI is quite good at – collate and summarise conversation threads and identify common themes and patterns.

Here’s a consolidated view of the observations, patterns and experiences shared by CTOs and tech leaders across various CTO forums in the last couple of months.

Disclaimer: This article is largely the output from my conversation with ChatGPT, but reviewed and edited by me (and as usual with GenAI, it took quite a lot of editing!)

Adoption across orgs is mixed and depends on context

  • GenAI adoption varies significantly across companies, industries and teams.
  • In the UK, adoption appears lower than in the US and parts of Europe, with some surveys showing over a third of UK developers are not using GenAI at all and have no plans to (see “UK developers slow to adopt AI tools says new survey”).
    • The slower adoption is often linked to:
      • A more senior-heavy developer population.
      • Conservative sectors like financial services, where risk appetite is lower.
  • Teams working in React, TypeScript, Python, Bash, SQL and CRUD-heavy systems tend to report the best results.
  • Teams working in Java or .NET often find GenAI suggestions less reliable.
  • Teams working in modern languages and well-structured systems are adopting GenAI more successfully.
  • Teams working in complex domains, messy code & large legacy systems often find the suggestions more distracting than helpful.
  • Feedback on GitHub Copilot is mixed. Some developers find autocomplete intrusive or low-value.
  • Many developers prefer working directly with ChatGPT or Claude, rather than relying on inline completions.

How teams are using GenAI today

  • Generating boilerplate code (models, migrations, handlers, test scaffolding).
  • Writing initial tests, particularly in test-driven development flows.
  • Debugging support, especially for error traces or unfamiliar code.
  • Generating documentation
  • Supporting documentation-driven development (docdd), where structured documentation and diagrams live directly in the codebase.
  • Some teams are experimenting with embedding GenAI into CI/CD pipelines, generating:
    • Documentation.
    • Release notes.
    • Automated risk assessments.
    • Early impact analysis.

GenAI is impacting more than just code writing

Some teams are seeing value beyond code generation, such as:

  • Converting meeting transcripts into initial requirements.
  • Auto-generating architecture diagrams, design documentation and process flows.
  • Enriching documentation by combining analysis of the codebase with historical context and user flows.
  • Mapping existing systems to knowledge graphs to give GenAI a better understanding of complex environments.

Some teams are embedding GenAI directly into their processes to:

  • Summarise changes into release notes.
  • Capture design rationale directly into the codebase.
  • Generate automated impact assessments during pull requests.

Where GenAI struggles

  • Brownfield projects, especially those with:
    • Deep, embedded domain logic.
    • Little or inconsistent documentation.
    • Highly bespoke patterns.
    • Inconsistent, poorly structured code.
  • Languages with smaller training data sets, like Rust.
  • Multi-file or cross-service changes where keeping context across files is critical.
  • GenAI-generated code often follows happy paths, skipping:
    • Error handling.
    • Security controls (e.g., authorisation, auditing).
    • Performance considerations.
  • Several CTOs reported that overly aggressive GenAI use led to:
    • Higher defect rates.
    • Increased support burden after release.
  • Large, inconsistent legacy codebases are particularly challenging, where even human developers struggle to build context.

Teams are applying guardrails to manage risks

Many teams apply structured oversight processes to balance GenAI use with quality control. Common guardrails include:

  • Senior developers reviewing all AI-generated code.
  • Limiting GenAI to lower-risk work (boilerplate, tests, internal tooling).
  • Applying stricter human oversight for:
    • Security-critical features.
    • Regulatory or compliance-related work.
    • Any changes requiring deep domain expertise.

The emerging hybrid model

The most common emerging pattern is a hybrid approach, where:

  • GenAI is used to generate initial code, documentation and change summaries, with final validation and approval by experienced developers.
  • Developers focus on design, validation and higher-risk tasks.
  • Structured documentation and design rules live directly in the codebase.
  • AI handles repetitive, well-scoped work.

Reported productivity gains vary depending on context

  • The largest gains are reported in smaller, well-scoped greenfield projects.
  • Moderate gains are more typical in medium to large, established or more complex systems.
  • Neutral to negative benefit in very large, messy or legacy codebases.
  • Across the full delivery lifecycle, a ~25% uplift is seen as a realistic upper bound.
  • The biggest time savings tend to come from:
    • Eliminating repetitive or boilerplate work.
    • Speeding up research and discovery (e.g., understanding unfamiliar code or exploring new APIs).
  • Teams that invest in clear documentation, consistent patterns and cleaner codebases generally see better results.

Measurement challenges

  • Most productivity gains reported so far are self-reported or anecdotal.
  • Team-level metrics (cycle time, throughput, defect rates) rarely show clear and consistent improvements.
  • Several CTOs point out that:
    • Simple adoption metrics (e.g., number of Copilot completions accepted) are misleading.
    • Much of the real value comes from reduced research time, which is difficult to measure directly.
  • Some CTOs also cautioned that both individuals and organisations are prone to overstating GenAI benefits to align with investor or leadership expectations.

Summary

Across all these conversations, a consistent picture emerges – GenAI is changing how teams work, but the impact varies heavily depending on the team, the technology and the wider processes in place.

  • The biggest gains are in lower-risk, less complex, well-scoped work.
  • Teams with clear documentation, consistent patterns and clean codebases see greater benefits.
  • GenAI is a productivity multiplier, not a team replacement.
  • The teams seeing the most value are those treating GenAI as part of a broader process shift, not just a new tool.
  • Long-term benefits depend on strong documentation, robust automated testing and clear processes and guardrails, ensuring GenAI accelerates the right work without introducing unnecessary risks.

The overall sentiment is that GenAI is a useful assistant, but not a transformational force on its own. The teams making meaningful progress are those actively adapting their processes, rather than expecting the technology to fix underlying delivery issues.

I don’t see how anyone could now doubt GenAI has hit a wall

OpenAI launched GPT-4.5 yesterday (Sam Altman post on X announcing the GPT-4.5 release) – a model they’ve spent two years and a fortune training. Initial impressions? Slightly better at some things, but noticeably worse at others (Ethan Mollick’s first impressions of GPT-4.5), and it’s eye-wateringly expensive – 30 times the cost of GPT-4o and 5 times more than their high-end o1 reasoning model (OpenAI’s API pricing table).

This follows X’s recent release of Grok 3 – only marginally better than most (not all) existing high-end models, despite, again, billions spent on training.

Then there’s Anthropic’s recently released Claude Sonnet 3.7 “hybrid reasoning” model. Supposedly tuned for coding, but developers in the Cursor subreddit are saying it’s *worse* than Claude 3.5.

What makes all this even more significant is how much money has been thrown at these next-gen models. Anthropic, OpenAI, and X have collectively spent hundreds of billions of dollars over the past few years – vastly more than was ever spent on models like GPT-4. Despite these astronomical budgets, performance gains have been incremental and marginal – often with significant trade-offs (especially cost). Nothing like the big leaps seen between GPT-3.0, GPT-3.5 and GPT-4.

This slowdown was predicted by many – not least a Bloomberg article late last year highlighting how all the major GenAI players were struggling with their next-gen models (whoever wrote that piece clearly had good sources).

It’s becoming clear that this is likely as good as it’s going to get. That’s why OpenAI is shifting focus – “GPT-5” isn’t a new model, it’s a product (Sam Altman post on X on the OpenAI roadmap).

If we have reached the peak, what’s left is a long, slow reality check. The key question now is whether there’s a viable commercial model for GenAI at roughly today’s level of capability. GenAI remains enormously expensive to run, with all major providers operating at huge losses. The Nvidia GPUs used to train these models cost around $20k each, with thousands needed for training. It could take years – possibly decades – for hardware costs to fall enough to make the economics sustainable.

GitClear’s latest report indicates GenAI is having a negative impact on code quality

I’ve just been reading GitClear’s latest report on the impact of GenAI on code quality. It’s not good 😢. Some highlights and then some thoughts and implications for everyone below (which you won’t need to be a techie to understand) 👇

Increased Code duplication 📋📋

A significant rise in copy-pasted code. In 2024, within-commit copy/paste instances exceeded the number of moved lines for the first time.

Decline in refactoring 🔄 

The proportion of code that was “moved” (suggesting refactoring and reuse) fell below 10% in 2024, a 44% drop from the previous year.

Higher rate of code churn 🔥

Developers are revising newer code more frequently, with only 20% of modified lines being older than a month, compared to 30% in 2020 (suggests poor quality code that needs more frequent fixing).


If you’re not familiar with these code quality metrics, you’ll just need to take my word for it: they’re all very bad.

Thoughts & implications

For teams and organisations

Code that becomes harder to maintain (which is what all these metrics indicate) results in the cost of change and the rate of defects both going up 📈. As the GitClear report says, short-term gain for long-term pain 😫

But is there any short term gain? Most good studies suggest the productivity benefits are marginal at best and some even suggest a negative impact on productivity.

Correlation vs causation

Significant tech layoffs over the same period could also be a factor in some of the decline. Either way, code quality is suffering badly (and GenAI, at the very least, isn’t helping).

For GenAI

  1. Models learn from existing codebases. If more low-quality code is committed to repos, future AI models will be trained on that. This could lead to a downward spiral 🌀 of increasingly poor-quality suggestions (aka “Model Collapse”).
  2. Developers have been among the earliest and most enthusiastic adopters of GenAI, yet we’re already seeing potential signs of quality degradation. If one of the more structured, rule-driven professions is struggling with AI-generated outputs, what does that mean for less rigid fields like legal, journalism, and healthcare?

Building Quality In: A practical guide for QA specialists (and everyone else)

Introduction

I wrote this guide because I wanted a useful, practical article to share with QA (Quality Assurance) specialists, testers and software development teams on how to shift away from traditional testing approaches towards defect prevention. It’s also based on what I’ve seen work well in practice.

More than that, it comes from a frustration that the QA role – and the industry’s approach to quality in general – hasn’t progressed as much as it should. Outside of a few pockets of excellence, too many organisations and teams still treat QA as an afterthought.

When QA shifts from detecting defects to preventing them, the role becomes far more impactful. Software quality improves, delivery speeds up, and costs go down.

Whilst this guide is primarily aimed at QA specialists and testers looking to move beyond testing into true quality assurance, it’s also relevant to anyone in software development who is interested in faster, more cost-effective delivery of high-quality, reliable software.

The QA role hasn’t evolved

Something similar happened to QA as happened with DevOps. At some point, testers were rebranded as QAs, but largely kept doing the same thing. From what I can see, the majority of people with QA in their title are not doing much actual quality assurance.

Too often, QA is treated as the last step in delivery – developers write code, then chuck it over the wall for testers to find the problems. This is slow, inefficient, and expensive.

Inspection doesn’t improve quality, it just measures a lack of it.

Unlike DevOps (which is a collection of practices, culture and tools, not a job title), I believe there’s still a valuable role and place for QA specialists, especially in larger orgs.

QA’s goal shouldn’t be just to find defects, but to prevent them by embedding quality throughout the development process – not just inspecting at the end. In other words, we need to build quality in.

The exponential cost of late defects and delivery bottlenecks

The cost of fixing defects rises exponentially the later they are found. NASA research confirms this (Error Cost Escalation Through the Project Life Cycle, Stecklein et al., 2004), but you don’t really need empirical studies to substantiate it – the logic is pretty simple:

The later a defect is found, the more resources have been invested. More people have worked on it, and fixing it involves more rework – it’s easy to tweak a requirement early, but rewriting code, redeploying, and retesting is much more expensive. In production, defects impact users, sometimes requiring hotfixes, rollbacks, and firefighting that disrupts everything else. Beyond direct costs, there’s the cumulative cost of delay – the knock-on effect on future work.

Late-stage testing isn’t just costly – it’s often the biggest bottleneck in delivery. Most teams have far fewer QA specialists/testers than developers, so work piles up at feature testing (right after development) and even more at regression testing. Without automation, regression cycles can take days or even weeks.

As a result, features and releases stall, developers start new work while waiting, and when bugs come back, they’re now juggling fixes alongside new development. Changes are batched into large, high-risk releases. Increasing the team size often ends up doing little more than exacerbating the bottlenecks.

It’s an inefficient and expensive way to build software.

The origins of Build Quality In

“Build Quality In” originates from lean manufacturing – from the work of W. Edwards Deming and Toyota’s Production System (TPS). Their core message: inspection doesn’t improve quality – it just measures the lack of it. Instead, they focused on preventing defects at the source.

Toyota built quality in by ensuring that defects were caught and corrected as early as possible. Deming emphasised continuous improvement, process control, and removing reliance on inspection. These ideas have shaped modern software development, particularly through lean and agile practices.

Despite these well-established principles, QA and testing in many teams haven’t moved on as much as they should have.

From gatekeeper to enabler

Quality assurance shouldn’t primarily be a late-stage checkpoint; it should be embedded throughout the development lifecycle. The focus must shift left. Upstream.

This means working closely with product managers, designers, BAs, and developers from the start and all the way through, influencing processes to reduce defects before they happen.

Unless you’re already working this way, it probably means working a lot more collaboratively and proactively than you currently are.

Be involved in requirements early

QA specialists should be part of requirements discussions from the start. If requirements are vague or ambiguous, challenge them. The earlier gaps and misunderstandings are addressed, the fewer defects will appear later.

Ensure requirements are clear, understood and testable

Requirements should be specific, well-defined, and easy to verify. QA specialists should work with the team to make sure everyone is clear, and advise on appropriate automated testing to ensure it’s part of the scope.

Tip: Whilst there are some strong views on the benefits of Cucumber and similar acceptance test frameworks, I’ve found the Gherkin syntax very good for specifying requirements in stories/features. It makes it easier for developers to write automated tests, and easier for anyone to take part in manual testing.
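As an illustration, a requirement written in Gherkin’s Given/When/Then form might look like this (the feature and values are hypothetical):

```gherkin
Feature: Password reset

  Scenario: Requesting a reset link with a registered email
    Given a registered user with the email "jo@example.com"
    When they request a password reset for "jo@example.com"
    Then a reset link is emailed to "jo@example.com"

  Scenario: Requesting a reset link with an unknown email
    Given no user is registered with the email "new@example.com"
    When they request a password reset for "new@example.com"
    Then no email is sent
    And the response does not reveal whether the account exists
```

Each scenario doubles as an acceptance criterion, a script anyone can follow for manual testing, and a candidate for an automated test.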

If those criteria are not met, the work isn’t ready to start (and it’s your job to say so). Outside of refinement sessions/discussions, I’m a fan of a quick Three Amigos (QA, Dev, Product) just before a developer picks up a new piece of work from the backlog.

Collaborating with developers

QA specialists and developers should collaborate throughout development, not just at the end. This means pairing on tricky areas and automated tests, being available to provide fast feedback (rather than always waiting for work to be moved to e.g. “ready to test”), and having open discussions about risks and edge cases. The earlier QA provides input, the fewer defects make it through.

Encourage effective test automation

QA should help developers think about testability as they write code. Ensure unit, integration, and end-to-end tests are part of the development process, rather than relying on manual testing later. Guide the team on the most suitable tests to implement (see the test pyramid and testing trophy). If a feature isn’t easily testable, that’s a design flaw to address early.

Tip: If you have little or no automated testing, start with high-level end-to-end tests using tools like Playwright. Begin with simple, frequently executed tests from your manual regression pack and gradually build them up. Most importantly, integrate them into your CI/CD pipeline so they run automatically whenever code is deployed.
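To make that concrete, here’s a sketch of what one of those first high-level checks might look like as a Playwright test, run via `npx playwright test`. The URL, form labels, and credentials are hypothetical placeholders – substitute your own application’s details:

```typescript
// login.spec.ts – a minimal end-to-end smoke test sketch.
// The URL, selectors, and credentials below are illustrative placeholders.
import { test, expect } from '@playwright/test';

test('registered user can sign in and reach the dashboard', async ({ page }) => {
  await page.goto('https://app.example.com/login');

  // Fill in the login form and submit it
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Verify the user lands on the dashboard
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

A handful of tests like this, wired into the pipeline, will catch the kind of regressions a manual pack would otherwise take days to find.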

Work closely with developers to write these tests – shared ownership is essential. Whenever I’ve seen automation left solely to QA specialists (who are often less experienced in coding), it has failed.

Get everyone involved with manual testing

Manual testing shouldn’t be a bottleneck owned solely by QA. Instead of being the sole tester, be the specialist who enables the team. Teach developers and product managers how to manually test effectively, guiding them on what to look for. (Note: the clearer the requirements, the easier this becomes – good testing starts with well-defined expectations.) Having everyone involved in manual testing not only removes bottlenecks and dependencies; it tends to mean everyone cares a lot more about quality.

Embedding Quality into the SDLC

Most teams have a documented SDLC (Software Development Lifecycle). But too often, these are neglected documents – primarily there for compliance, rarely referred to and, at best, reviewed once a year as a tick-box exercise. When this happens, the SDLC fails to serve its intended purpose: to enable teams to deliver high-quality software efficiently.

An effective SDLC should emphasise building quality in. If it reinforces the idea that quality is solely QA’s responsibility, with late-stage testing as the primary mechanism, it’s doing more harm than good.

QA specialists should work to make the SDLC useful and enabling. This means collaborating with whoever owns it to ensure it focuses on quality at every stage and supports best practices that prevent defects early. It should promote clear requirements, testability from the outset, automation, and continuous feedback loops – not just a final sign-off before release. And importantly, it should be something teams actually use, not just a compliance artefact.

Shifting from reactive to proactive

There are far more valuable things a QA specialist can be doing with their time than manually clicking around on websites: performance testing, exploratory testing, reviewing static analysis results, digging into recurring support issues, accessibility testing. The list goes on. QA should be driving these conversations, ensuring quality isn’t just about finding defects, but about making the entire system stronger.

Quality is a team sport: Fostering a quality culture

The role of QA specialists should be to ensure everyone sees quality as their responsibility, not something QA owns. I strongly dislike seeing developers treat testing as someone else’s job (did you properly test the feature you worked on before handing it over, or did you rush through it just to move on to the next task?)

Creating a quality culture means fostering a shared commitment to building better software. It’s about educating teams on defect prevention, empowering them with the right tools and practices, and making it easy for everyone to care about quality and be involved.

The value of modern QA specialists

I firmly believe QA specialists still have an important role in modern software teams, especially in larger organisations. Their role isn’t disappearing – but it must evolve faster. The days of QA as manual testers, catching defects at the end of the cycle, should be left behind.

The best QA specialists aren’t testers; they’re quality enablers who shape how software is built, ensuring quality is embedded from the start rather than checked at the end.

This isn’t just better for organisations and teams – it makes the QA role a far richer, more rewarding career. On multiple occasions I’ve seen QA specialists who embody this approach go on to become Engineering Managers, Heads of Engineering and other leadership roles.

The demand for people who drive quality and improve engineering practices isn’t going away. If anything, with the rise of GenAI-generated code – a recent GitClear study shows it is having a negative impact on code quality – it’s becoming more critical than ever.

No, GenAI will not replace junior developers

With the rise of GenAI coding assistants, there’s been a lot of noise about the supposed decline of junior developer roles. Some argue that GenAI can now handle much of the grunt work juniors traditionally did, making them redundant. But this view isn’t just short-sighted – it’s wrong.

I’ve never heard a CTO say they hired junior developers to offload simple tasks to cheaper staff.

Organisations primarily hire junior devs because it’s seen as a cost-effective way to grow their own talent, reducing reliance on external recruitment.

Yes, juniors start with less complex work, but if that’s all they did, they’d never develop into senior engineers – defeating the very purpose of hiring them.

But more than that, junior developers contribute far beyond just writing code, and if anything, GenAI only highlights just how valuable they really are.

Developers only spend a small amount of time coding

As I covered in this article, developers spend surprisingly little time coding. It’s a small part of the job. The real work is understanding problems, solving problems, designing solutions, collaborating with others, and making trade-offs. GenAI might be able to generate some code, but it doesn’t replace the thinking, the discussions, and the understanding that go into good software development.

Typing isn’t the bottleneck. I’ve written about this before, but to reiterate – coding is only one part of what developers do. The ability to work through problems, ask the right questions, and contribute to a team is far more valuable than raw coding speed, and perhaps even deep technical knowledge (go with boring, common technology and this is less of a problem anyway).

If coding isn’t the bottleneck, and collaboration, problem-solving, and domain knowledge matter more, then the argument against juniors starts to fall apart.

What juniors bring to the table

One of the best examples I’ve seen of this was when we started our Technical Academy at 7digital. One of our first cohort came from our content ingestion team. They’d played around with coding when they were younger, but had never worked as a developer. From day one, they added value – not because they were churning out lines of code, but because they were inquisitive, challenged assumptions, and made the team think harder about their approach. They weren’t bogged down in the ‘this is how we do things’ mindset. (It also helped that they had great industry and domain knowledge, which meant they could connect technical decisions to real business impact in ways that even some of our experienced developers struggled with.)

This is exactly what people often under-appreciate about junior developers. In the right environment, curiosity and problem-solving ability are far more important than years of experience. A good junior can:

  • Ask the ‘stupid’ questions that expose gaps in understanding.
  • Challenge established ways of working and provoke fresh thinking.
  • Improve team communication simply by needing clear explanations.
  • Bring insights from other disciplines or domains.
  • Provide mentoring opportunities for other developers (e.g. to gain experience as a line manager or engineering manager).
  • Grow into highly effective engineers who understand both the tech and the business.

GenAI doesn’t replace the learning process

GenAI might make some tasks easier, but it doesn’t replace the learning process that happens when someone grapples with real-world software development (one challenge, however, is ensuring junior devs don’t become over-reliant on GenAI and still develop fundamental problem-solving skills).

Good juniors add more value than we often realise. They bring energy, fresh perspectives, (and even sometimes, domain knowledge) that makes them valuable from day one. In the right environment, they’re not a cost – they’re an investment in better thinking, better collaboration, and ultimately, better software.

Rather than replacing junior developers, GenAI highlights why we need them more than ever. Fresh thinking, collaboration, and the ability to ask the right questions will always matter more than just getting code written.

And that’s precisely why juniors still matter.

A plea to junior developers using GenAI coding assistants

The early years of your career shape the kind of developer you’ll become. They’re when you build the problem-solving skills and knowledge that set apart excellent engineers from average ones. But what happens if those formative years are spent outsourcing that thinking to AI?

Generative AI (GenAI) coding assistants have rapidly become popular tools in software development, with as many as 81% of developers reporting that they use them (Developers & AI Coding Assistant Trends, CoSignal).

Whilst I personally think the jury is still out on how beneficial they are, I’m particularly worried about junior developers using them. The risk is they use them as a crutch – solving problems for them rather than encouraging them to think critically and solve problems themselves (and let’s not forget: GenAI is often wrong, and junior devs are the least likely to spot its mistakes).

GenAI blunts critical thinking

LLMs are impressive at a surface level. They’re great for quickly getting up to speed on a new topic or generating boilerplate code. But beyond that, they still struggle with complexity.

Because they generate responses based on statistical probability – drawing from vast amounts of existing code – GenAI tools tend to provide the most common solutions. While this can be useful for routine tasks, it also means their outputs are inherently generic – average at best.

This homogenising effect doesn’t just limit creativity; it can also inhibit deeper learning. When solutions are handed to you rather than worked through, the cognitive effort that drives problem-solving and mastery is lost. Instead of encouraging critical thinking, AI coding assistants short-circuit it.

Several studies suggest that frequent GenAI tool usage negatively impacts critical thinking skills.

I’ve seen this happen. I’ve watched developers “panel beat” code – throwing it into a GenAI assistant over and over until it works – without actually understanding why 😢

GenAI creating more “Expert Beginners”

At an entry-level, it’s tempting to lean on GenAI to generate code without fully understanding the reasoning behind it. But this risks creating a generation of developers who can assemble code but quickly plateau.

The concept of the “expert beginner” comes from Erik Dietrich’s well-known article. It describes someone who appears competent – perhaps even confident – but lacks the deeper understanding necessary to progress into true expertise.

If you rely too much on GenAI code tools, you’re at real risk of getting stuck as an expert beginner.

And here’s the danger: in an industry where average engineers are becoming less valuable, expert beginners are at the highest risk of being left behind.

The value of an average engineer is likely to go down

Software engineering has always been a high-value skill, but not all engineers bring the same level of value.

Kent Beck, one of the pioneers of agile development, recently reflected on his experience using GenAI tools:

“I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate.” – Kent Beck, on Twitter

This is a wake-up call. The industry is shifting. If your only value as a developer is quickly writing fairly generic code, the harsh reality is that leaning too heavily on AI risks making you redundant.

The engineers who will thrive are the ones who bring deep understanding, strong problem-solving skills, and the ability to weigh trade-offs and make pragmatic decisions.

My Plea…

Early in your career, your most valuable asset isn’t how quickly you can produce code – it’s how well you can think through problems, how well you can work with other people, how well you can learn from failure.

It’s a crucial time to build strong problem-solving and foundational skills. If GenAI assistants replace the process of struggling through challenges, learning from them (and from more experienced developers), and investing the time to learn topics deeply, they risk stunting your growth – and your career.

If you’re a junior developer, my plea to you is this: don’t let GenAI tools think for you. Use them sparingly, if at all. Use them in the same way most senior developers I speak to use them – for very simple tasks, autocomplete, yak shaving (as things stand today; the landscape continues to evolve rapidly). But when it comes to solving real problems, do the work yourself.

Because the developers who truly excel aren’t the ones who can generate code the fastest.

They’re the ones who problem solve the best.