
Influencing Without Authority Is All About Aligning Incentives


In the staff+ circles that I frequent, I hear a lot of people talking about influence and influencing without authority. What I don’t hear a lot about is aligning incentives. Yet while direct persuasion has its place, aligning incentives is often the most effective way to accomplish goals. Sometimes incentive structures lead to desired outcomes, and sometimes they lead to unforeseen results, which is why understanding them is essential. Rather than asking you to take my word for it, I’ll walk through a few examples I’ve seen.

I work at a feature flagging company, and one of the common problems we see with feature flags is that no one ever seems to find the time to clean them up. A few dead flags in your code base are annoying and add unneeded complexity, but the more you have, the more that complexity compounds. So, at a previous company, we decided to fix the problem by putting a hard limit on the number of feature flags in the system: we could only have 150 flags in our monolith at any given time. If I wanted to add a new feature flag, I had to clean up another one. This was successful in that it capped the number of flags we had; we never exceeded 150. However, it created several unforeseen incentives. First, adding a new feature flag became much more time-consuming, because adding one also meant removing one. As a result, if I had a small but risky fix or change, I was suddenly much less inclined to surround it with a flag. If the fix takes me two hours but adding a feature flag takes me two weeks, it’s very tempting to do the fix without the flag and hope for the best. You could even argue that skipping the flag may be the right choice, depending on the likelihood of a problem and how quickly a release can be rolled back. Second, it caused teams to start hoarding flags. Before this rule, when a team or individual occasionally found themselves with some extra time, they would remove one of the easier flags. Now they were less incentivized to do so. I heard of teams purposely keeping easy-to-remove flags in the codebase as placeholders for the next flags they needed to add. So, while this rule forced people to remove flags, it also incentivized people to leave flags in the code base.
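For anyone who hasn’t shipped a risky change this way, here is a minimal sketch of what wrapping one in a flag looks like. The flag client, the flag name, and the rounding change are all hypothetical stand-ins, not any particular SDK’s API:

```typescript
// Hypothetical flag client; any SDK that exposes a boolean lookup works the same way.
interface FlagClient {
  isEnabled(flagName: string, userId: string): boolean;
}

// The risky change lives behind a flag, so it can be turned off without a redeploy.
function calculateInvoiceTotal(flags: FlagClient, userId: string, lineItems: number[]): number {
  if (flags.isEnabled("new-invoice-rounding", userId)) {
    // New (riskier) behavior: round each line item to cents before summing.
    return lineItems.reduce((sum, item) => sum + Math.round(item * 100) / 100, 0);
  }
  // Old behavior stays as the fallback until the flag is fully rolled out and removed.
  return lineItems.reduce((sum, item) => sum + item, 0);
}
```

The whole point of the wrapper is that rollback becomes a configuration change instead of a deploy, which is exactly the safety net the two-week flag tax was pushing people to skip.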

In this case, most of the incentives were driven by what was easiest. I had a coworker who used to say that laziness makes you a better engineer because you’re constantly looking for what you can automate and how to simplify things. But those same engineers will also look for the easiest way to do everything else. That’s why, when I’m in a postmortem and someone says something like, “We should just tell people not to do X,” I try to ask, “What can we do to make that hard to do instead?” or “What can we do to make the right thing even easier?” Often, just making the wrong thing annoying or hard (not even impossible) is enough to get people to do the right thing.
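One way to make the wrong thing annoying without forbidding it is a small CI guard. This is only a sketch under assumptions: the src/billing/ path, the isEnabled( pattern, and the PR_LABELS environment variable are hypothetical stand-ins for whatever your repository and pipeline actually expose:

```typescript
// CI guard sketch: fail the build when a high-risk area changes with no feature-flag
// reference in the diff, unless the PR carries an explicit override label.
// "src/billing/", the isEnabled( pattern, and PR_LABELS are hypothetical stand-ins.
import { execSync } from "node:child_process";

const changedFiles = execSync("git diff --name-only origin/main...HEAD", { encoding: "utf8" })
  .split("\n")
  .filter((file) => file.startsWith("src/billing/"));

const billingDiff = execSync("git diff origin/main...HEAD -- src/billing/", { encoding: "utf8" });
const touchesFlag = /isEnabled\(/.test(billingDiff); // crude check for a flag guard in the diff
const hasOverride = (process.env.PR_LABELS ?? "").includes("no-flag-override");

if (changedFiles.length > 0 && !touchesFlag && !hasOverride) {
  console.error("Billing changes should ship behind a feature flag (or add the no-flag-override label).");
  process.exit(1);
}
```

The point is friction, not prohibition: the override label keeps the escape hatch open, but using it is a visible, deliberate decision rather than the default.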

Some incentives are more obvious. For example, at a previous company, we had a team responsible for building frameworks, and they got rewarded when they built frameworks. Everyone wanted to be on this team because it was viewed as easy to get promoted there: the team worked across the organization and therefore had a broad scope. However, this team had absolutely no incentive to make sure their frameworks were fully, or even partially, adopted. Adoption was the responsibility of the other teams, not theirs. As a result, we ended up with actions, new_actions, and new_new_actions (and later commands, resolvers, and more) all in our codebase at the same time, and all pretty heavily used (and yes, those were the actual names). The later frameworks were better than the earlier ones, but at what complexity cost? Should we have rewarded building new frameworks, by itself, quite so heavily?

Other incentives are less obvious than promotion potential and team goals. For example, when I was on one of the R&D teams at Microsoft, our team was tasked with testing risky ideas. We were told over and over again that if we were picking ideas as risky as we were supposed to, we should see a roughly 50% failure rate. However, we struggled to reach even a 30% failure rate. The fact is that most of us want to see something succeed, especially something we’ve spent a lot of time thinking about. So we will try to find ways to make it work, throwing good money (that is, time and resources) after bad until we have something that looks good enough. To get even to that 30% failure rate, the team had to go out of its way to celebrate and reward failure well above success. In this case, stating the goal was not enough on its own; we had to understand where people’s natural tendencies lean and find ways to counterbalance them.

Incentives can be anything: what gets promoted, what gets recognized, what gets bonuses, what is fastest to do, and our innate drive for success. They can also simply be what leadership pays attention to. Not long ago, Gergely Orosz and Kent Beck posted a response to the McKinsey article claiming that developer productivity can be measured. Their response states the following:

The act of measurement changes how developers work as they try to “game” the system.

Their point, which I’ve seen play out, is that just asking managers to track a statistic or measure some piece of work, even if it is not tied to performance reviews, team rewards, or anything else, will change the behavior of those being measured. Sometimes these behavioral changes have positive side effects (for example, nearly everyone agrees that smaller, more frequent PRs have positive outcomes), but not all of them do. While measuring the number of PRs may lead to smaller PRs, it may also lead to developers spending less time on design (a pressure a friend of mine has described feeling). And even when the consequences are neutral, the measurement is not necessarily a good use of anyone’s time. On the flip side, though, if you want to see a behavior change, sometimes simply measuring it can be enough.

Incentives also come into play when we are trying to influence in less obvious ways. For example, say I want teams to follow an API design standard, but they don’t seem to be doing it. It is tempting to think they need convincing, but that often isn’t the problem. Most people will agree that having consistent APIs is a good thing. However, they also think their APIs are good enough without anyone else reviewing them, and if you try to point out otherwise, they’ll get defensive; no one likes to hear their work is sub-par. So, instead of that approach, how do we make the process of API design review as easy as possible? Are there side benefits we can give people who go through it? Are there things we can recognize or celebrate so that people will want to do it? Is there something else, other than the time it takes, stopping people from asking for design reviews?
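One way to lower the cost of that review is to automate the mechanical parts of the standard so human attention goes to the judgment calls. A rough sketch, assuming a JSON OpenAPI spec at openapi.json and two illustrative naming rules (both the path and the rules are hypothetical, not any team’s actual standard):

```typescript
// Sketch of an automated first pass over an API standard: check the easy, mechanical rules
// so human review time goes to the judgment calls. "openapi.json" and the rules are illustrative.
import { readFileSync } from "node:fs";

const spec = JSON.parse(readFileSync("openapi.json", "utf8"));
const problems: string[] = [];

for (const path of Object.keys(spec.paths ?? {})) {
  // Illustrative rule: path segments should be lowercase kebab-case.
  if (path !== path.toLowerCase() || path.includes("_")) {
    problems.push(`${path}: use lowercase kebab-case path segments`);
  }
  // Illustrative rule: no trailing slash on resource paths.
  if (path.length > 1 && path.endsWith("/")) {
    problems.push(`${path}: drop the trailing slash`);
  }
}

if (problems.length > 0) {
  console.error(problems.join("\n"));
  process.exit(1);
}
```

A check like this does not replace the design review; it just means the review that does happen starts from a spec that already passes the boring rules, which makes asking for one cheaper for everybody.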

Throughout all of this, the basic idea is that people are most likely to do what they are incentivized to do, whether overtly through a promotion or inadvertently through the good feeling of seeing a project succeed. Because of that, the easiest way to influence people is to look at which incentives are at play in a situation and find ways to change or counterbalance them (as with celebrating failure).
