First “Radical” Idea on the Path to Progressive Delivery: Continuous Integration
The story starts with the birth of Continuous Integration.
When Continuous Integration began, it was a bit of a “radical” idea: developers should merge their changes as often as possible into the main branch (ideally once a day or more), running a series of tests against every commit.
There’s a great quote from Martin Fowler about Continuous Integration:
“Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.”
– Martin Fowler, ThoughtWorks page on Continuous Integration
The idea was to find problems much faster, and to stop putting off the discovery of merge conflicts and bugs until later.
Continuous Integration required three things:
- A centralized source code repository.
- Tests, mostly at the unit test and integration test level, that could be run automatically and very quickly. If a build and test run took three or four hours, the goal of fast feedback would not be met, and people wouldn’t want to run it.
- A CI server or service to sweat the details so that you wouldn’t have to worry about it. The idea is to check in your code, and then what needs to happen would just happen.
CI challenged people to focus on building smaller chunks of their solution at a time and to use mocks or virtual services, so that each commit could be checked in, tested, and validated on its own, even if other pieces weren’t ready.
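To make that concrete, here’s a minimal sketch of the mocking practice. Everything in it is hypothetical (the `checkout` function and the payment service are invented for illustration): the point is that a commit touching checkout can be tested on its own, even though the payment service it depends on isn’t ready yet.

```python
from unittest.mock import Mock

# Hypothetical code under development: a checkout flow that depends on a
# payment service another team hasn't finished building yet.
def checkout(cart_total, payment_client):
    """Charge the customer and return an order status."""
    result = payment_client.charge(amount=cart_total)
    return "confirmed" if result["ok"] else "failed"

# In the unit test, a mock stands in for the unfinished payment service,
# so this commit can still be built and validated by CI on every check-in.
payment_client = Mock()
payment_client.charge.return_value = {"ok": True}

assert checkout(42.0, payment_client) == "confirmed"
payment_client.charge.assert_called_once_with(amount=42.0)
```

The mock both supplies a canned response and records how it was called, which is what lets the test run fast enough to keep the feedback loop tight.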
Notice a Pattern? From Radical Idea to Table Stakes
I don’t think anybody would question whether CI is a good thing or whether a central source repo is a good thing. At first, CI was a radical idea, but now it’s table stakes.
Next Stop on the Path to Progressive Delivery: Continuous Delivery
Then, along came Continuous Delivery, the next “radical” idea: deployments shouldn’t be labor-intensive, high-drama, multi-hour things that have to happen outside of normal business hours.
One of my favorite quotes on this is from Jez Humble, which makes sense since he and David Farley literally wrote the book on CD.
“The reason that Dave and I wrote the book back in 2010 was that we just didn’t want to spend our weekends in data centers doing releases anymore. We thought it was a shitty way to spend our time and it was miserable for everyone. We actually want to enjoy our weekends. It was really about making releases reliable and boring.”
– From the Split blog
Get Code to Production Without a Fire-Drill
The idea was to figure out a way to get code into production without having it be a manual fire-drill. Continuous Delivery forced more of a focus on automation and simplification. You can’t have a lot of crazy complex stuff and get it all done quickly, let alone automate it in the first place. It also forced process improvement, both with the people and with the systems.
The Transformative Benefit of CD
Here is another of my favorite quotes from Jez:
“[Continuous Delivery] reduces the ongoing cost of evolving your software because what you’re fundamentally doing is reducing the transaction cost of pushing changes. So, you can put changes out more often, at a lower cost.”
– From the Split blog
CD isn’t just about automating deployments, but about making innovation cheaper. Don’t miss that point; it’s transformative!
If you can do releases often with less effort, then it’s much easier to achieve a fast feedback loop, which is the fundamental thing we’re going for in the first place.
- I want to try something.
- I want to get it out there.
- I want to see how it goes.
- I want to do it again soon, given what I just learned.
What? You Want To Test In Production?
Before anyone gets all upset about “testing in production (TiP),” bear in mind that I’m not talking about functional testing, like “Is it buggy?” I’m saying that I want to try something and see whether it creates the results I’m expecting, once put in the hands of real humans.
I want to figure that out with the least amount of drama and cost.
“Flow” Realized and The Path to Progressive Delivery Cemented: Continuous Deployment
After Continuous Delivery, next came the idea of Continuous Deployment. Another “radical idea.” For most shops, this is still kind of radical:
What if a change the developer makes could make it all the way to production, unimpeded by humans?
The only way you’d really do this (and not get fired) is if the maturity level of your team’s code review and testing is really, really high. That way, you have a much lower probability of pushing something out to production that causes grief.
So here we are, with most teams doing Continuous Integration, and smaller (but growing) numbers doing Continuous Delivery and Continuous Deployment.
So, What If Things Go Wrong?
Along the way through Continuous Integration, Continuous Delivery and Continuous Deployment, practices emerged to “limit the blast radius.”
If you are trying to ship more often and push code all the way to the user more often, whether the deployment is automated or manual, you might expect to run into problems that hurt your users more often. (It turns out that research led by Nicole Forsgren, PhD, Jez Humble, and Gene Kim shows the reverse is true, but that’s a topic for another post.)
What if you could limit the blast radius?
What if you could limit the exposure of those problems so that they only hurt a few users and you can detect it quickly before you ramp up to everybody else? That is the latest, but perhaps least “radical”, idea: Progressive Delivery.
Progressive Delivery: A New Name for an Already Emerging Practice
Progressive Delivery isn’t really a new thing, as much as it is a new term to describe something that’s been emerging for quite some time.
Don’t Just “Contain The Blast Radius,” Progressive Delivery Means to Learn Quickly Too
The thing about Progressive Delivery is that our heroes in the story don’t just want to contain the blast radius; they also want to learn quickly.
Microsoft’s Sam Guckenheimer was describing Azure DevOps’ staged deployments around the world, and how they used feature flags to gradually expose functionality to particular users. It’s telling that Sam called it Progressive Experimentation.
I quoted the conversation in last week’s post: Why would I want to decouple deployment from release? Here it is again:
“Well, when we’re rolling out services. What we do is progressive experimentation because what really matters is the blast radius. How many people will be affected when we roll that service out and what can we learn from them?”
– Sam Guckenheimer, quoted in InfoQ
So, things might go wrong (or not). If I roll out the changes gradually, how can I learn as much as possible from the users who first see the release?
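A common way to implement that gradual rollout is a percentage-based feature flag. Here’s a minimal sketch (the function and flag names are invented for illustration, not any particular vendor’s API): each user is hashed into a stable bucket, so the same user always gets the same answer, and the exposed population only grows when you raise the percentage.

```python
import hashlib

def in_rollout(user_id: str, flag_name: str, percent: int) -> bool:
    """Deterministically bucket a user into 0-99 and check it against
    the rollout percentage. Hashing the flag name in too keeps buckets
    independent across different flags."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Expose the new release to 5% of users first; watch error rates and user
# behavior from that small blast radius, then ramp to 25%, 50%, 100%.
if in_rollout("user-1234", "new-checkout", 5):
    serve_new_checkout = True   # new code path
else:
    serve_new_checkout = False  # old code path
```

Because the bucketing is deterministic, raising the percentage only adds users to the exposed group; the users you’ve already been learning from stay in it.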
Bottom line: don’t forget that we are turning the crank faster because we want to learn faster and iterate more intelligently.
50 Shades of Progressive Delivery?
Next week we’re going to get into more detail about the “shades” of Progressive Delivery. Whether there are 50 of them or not, I don’t know (that might be controversial), but we’ll definitely dig into them. Have a great week!
Jump to the next episode of Safe at Any Speed: Four Shades of Progressive Delivery