BONUS: Avoiding Escalation in Decision-Making
"The negative feelings around 'approval' (and the associated behavior of 'escalation') come from the notion that when you want to do something you have to somehow seek approval from management."
BONUS POST—For all email list recipients
In follow-up conversations on Twitter and in DMs, several people asked how exactly the Windows (and Windows Live) teams moved away from the culture of escalation and improved decision-making.
One of the most visible tools of the culture of escalation was the “review meeting” with an executive. These meetings were often where escalations happened. They were also where approval was sought for product changes, headcount requests, and more. What I worked to do was end these review meetings altogether.
Still, there was a need for a forum to check whether we were in fact making good decisions and reaching consensus. To that end, we held “checkpoint” meetings. My big concern with changing the name of the meeting was that we’d have the same escalation and approval meetings, just with a new name. To avoid that I wrote many posts on the internal “Office Hours” blog I maintained (that giant corpus of 750,000 words).
These three posts paint a picture of the evolving nature of the culture and what we tried to do. In the next post (number 088) I will write briefly about the very first Windows checkpoint in the new organization.
Contents
LET'S REVIEW "REVIEW MEETINGS" (Jan 22, 2008)
THE PERILS OF “APPROVAL” OR OUR JOURNEY TO ACCOUNTABILITY AND EMPOWERMENT (Nov 4, 2009)
FIRST CHECKPOINTS – EXPECTATIONS AND ACCOUNTABILITIES (Nov 29, 2009)
This post is available to all email list recipients. Please do consider subscribing.
Note. These posts are unedited from how they were originally posted, which does mean there is jargon and other Windows-specific context that might make a point or two a bit opaque. These originals were used in One Strategy: Organization, Planning, Decision Making (Wiley, 2009) and associated materials.
LET'S REVIEW "REVIEW MEETINGS" (Jan 22, 2008)
As we work our way through milestones for Windows and IE, one topic I’ve been asked about quite a bit is the difference in the types of “review meetings” we have been doing compared to those of other “VPs” (it is usually asked relative to that specific job title). It is a good question and gets to an important topic relative to success as a team and success on the team. What I will describe here is of course my own personal view, but I want to be clear that there are many different styles and ways of working and this one happens to be mine (and I hope it is working!). I definitely don’t speak for all my peers in this post, but rather just for myself.
I think everyone has a pretty intuitive feel for “review meeting” and what that entails. Hosting a review is sort of the last major phase in career development [sic]. You swear that your job will never be the same the day you become a manager—and all of a sudden you have a whole new world of responsibility. Then one day you walk into a conference room and everyone looks up to you. You look around and notice two things: first, there is an empty chair in the position of honor with a copy of the slide deck neatly placed at that seat (and if you are really lucky it is also the only color printout in the room); and second, the screen says “SteveSi Review” on the title slide. The day that happened to me was also the day I decided I really don’t like these meetings.
There are many positive interactions and folks often find these meetings valuable for a variety of reasons. The question is really the “ROI” of these types of meetings and whether there are in fact other ways to achieve some of these same positives (“visibility”, presentation skills, cross-group discussion, different perspectives on the challenges, etc.). I think there is at least one other model—let’s call these “Checkpoint Meetings” rather than reviews. Checkpoint is a word we started using in Office long, long ago (just after the first time I showed up to that “SteveSi Review” title slide).
What are some of the aspects of a “Checkpoint” and why is it any different from a “review” and not just newspeak? The essence of moving away from a “review” and toward accountability, communication, and information sharing is integrating the overall process of “exec meetings” into the overall planning and development process and checking in relative to the plan. It is about inverting the perceived purpose of the meeting: instead of the meeting being about management getting informed and helping to decide, it is about teams presenting what they know to be the case and identifying areas where management can/should be helping or at least informed. Again, that might be the ideal…
Having a plan – I know I sound like a broken record, but having a plan makes the process of “review” much easier. Instead of reviewing an ad hoc set of decisions or issues, we can focus on discussing the plan. As we have talked about, our goal is to promise and deliver (not over- or under-promise, and not under- or over-deliver) so when we talk about the status of the project we can do so relative to the plan. All of our conversations can take place relative to the vision, the pillars for the release, the tenets for the release, and the resources assigned to the team to accomplish that work.
Using the data the team already works with – One of the big challenges with review meetings is that they create a lot of work (and often the work is for the exec, not for the team doing the actual coding). What we have been working to do is not create new asks for information or new pivots on data, but rely on the information the team already uses on a daily basis. So when it comes to feature lists we use TFS, when it comes to headcount then headtrax (and our project codes) works fine, and bugs of course come from product studio. The basic idea for me is that checkpoints are not an opportunity for me to assign homework but a chance for the team to easily reuse the information that is routinely used.
Checking in at natural points in the product cycle – Part of the benefit of being disciplined with our schedule and focusing on one set of shared goals is that there are natural points when we can go through all the teams and checkpoint the work. We do this before the start of each milestone when all the information, tradeoffs, and issues are top of mind for everyone. And we do so evenly across all the feature teams.
Meeting efficiently makes everyone happy – Our checkpoint meetings for Windows M2 were about 40 minutes per team (all done in one day for the Windows experience teams). Our template was 6 slides focused on the vision, features, resources, and issues. As with any discussion one could always hope for more time, but by staying focused and following the same structure we accomplish together what we all need to accomplish—which is to make sure the project rolls up as coherently as we had hoped.
Assuming feature teams are acting rationally – One of the biggest things we do in running the project is assume that teams are acting rationally within the scope of the release. This means that teams know their partnerships, understand the goals of the vision, and have a clear view of the customer experience we are setting out to achieve. We assume that, and then the checkpoint becomes much less contentious. Believe me, if that assumption is false there are better opportunities to learn that, and frankly no meeting will really uncover anything so problematic.
Talking routinely up, down, and across the team – This one applies especially to me but of course it is absolutely necessary for everyone else. One of the biggest challenges in “review” meetings is the element of surprise. Once a meeting has a ton of new information, the reality is no one will be happy. So the best way to avoid surprise is to talk casually and often throughout the cycle with enough different people so that there aren’t any surprises.
Keeping in mind who the meeting is for – My ideal view of this topic is that my job is to help the people who write code, write specs, test, and do the designs get that work done. So my view of any meeting we have is “is there something I can be doing to help folks?” Sometimes that is as simple as reinforcing that the work is indeed in line with what was committed. It might mean helping by connecting with other groups. It might mean just affirming that a difficult choice was made. But the most important thing for me is making sure I don’t get confused over who is doing the work and who is “overhead”.
Those are just a few of the techniques we have been trying to use throughout the Windows and Windows Live products. Do we practice all of these all the time and perfectly? Probably not, but we're a learning organization (and I'm still learning as well). I would definitely say we aspire to this mode of working and are not far off. But I bet a few of you still have some questions about two topics that might seem missing from what is described above. I’ve been asked about both of these recently.
First, what if there is a special topic we just need “executive input” on? This might have to do with external partners or it might have to do with a partnership across the company that just isn’t working out. It might be that we’re just having a lot of trouble we didn’t anticipate. My first thought is: hey, grab me in the hallway or send me mail so that meeting with me is not a bottleneck. But if we want to get together we certainly can and will. Throughout the course of a project we will have these “topical” meetings which we use to inform, share information, or get in sync on a complex topic. That’s totally fine.
Second, there are a lot of folks who believe that these meetings are important “career development opportunities”. I am definitely in tune with this. In fact many years ago, someone who moved to the Office team was pretty unhappy during the first year or so as they tried to figure out how the “system” worked (I didn’t know about this dissatisfaction at the time). There were never any exec meetings and this person felt like “boy how do you get ahead without visibility.” Well this person got promoted several times over several releases and came to me years later with the observation that all along all the organization really wanted to see were great results—great features aligning with the vision, great code, great hiring and team building, and positive feedback from partners. Whether I saw a good performance at a single meeting or not was not nearly as important as the work that the team did, and of course the management overall (through our people processes and other tools) is clear on the specifics of different contributions within the team. But overall the key for me is focusing on the work and the results, which is a much more objective view of getting things done than on a series of meetings.
Those are some of the things we are doing and also an attempt to address two of the most common questions. But I also thought it would be worthwhile to look at the elements of "review meetings" that I see as negatives. Of course it is not the meetings themselves or the participants that are the problem as much as the dynamic that a "culture of review" drives. Within that dynamic I've observed a few unintended side-effects:
Focusing on the “ask” – These meetings more often than not tended to become “high stakes” meetings rather than information sharing. By that I mean the presenters usually feel like they have to get something out of the meeting (headcount, funding, a technical requirement pushed to another team, communication of bad news, etc.) and as such they feel the need to “focus” the information that is shared in such a way as to ensure the desired outcome. This is not to say anyone misleads in any deliberate way, but rather there are many sides to any presentation and there is a set of editorial decisions over what to say and when. Since there is something at stake, over time people are conditioned to try to game the system of meetings. I am sure everyone who has ever presented will deny having done this.
Obtaining the “big stick” – An outcome of many of these meetings is the “Meeting Review Minutes”. Often these look (to me) like an account of a boxing match or the dialog in a David Mamet play with lots of “he said then she said then he said then he said again with emphasis”. But a key part of these notes is “We were told ‘this is a good direction’” or something like that. That part of the meeting is where folks might feel they now have the authority to go get people to do things because “we just had a Review and we said we were doing this and now we need your support.” This makes everyone uncomfortable because of course more often than not the people being told how to act differently weren’t represented at the meeting (so email flies around getting clarification). And similarly, more often than not there is a bit of an extrapolation over what the “exec” said at the meeting which probably would make them uncomfortable if they knew this is how the outcome of the meeting was being used (yes “we” all know how to track down mail like “stevesi said” and then follow up with what we really said).
Finding the “hole” – For me the worst emotion I see at these meetings is at the end, when the presenter feels like “phew, made it through the meeting”. This has the implication (again, for me) that the purpose of the meeting was for the “exec” to find the hole in the work or to identify the problems. This is highly problematic because, and I have done a lot of research on this, executives don’t actually know everything about everything and often can’t be all-knowing about every topic in the course of a single meeting (well, maybe some can, just not this one). But the challenge is that these meetings get viewed as some form of “IQ test” where everyone is testing everyone (this goes both ways!). Yuck.
Adding "more" – Again, through exhaustive scientific study I have determined that 9 out of 10 product reviews result in more work being added to an already too full schedule. That’s no fun for anyone. It is extremely hard to watch a presentation on what people are doing and not add stuff (I have tried). It is just as hard to watch a presentation on any topic where people have made the tradeoffs and come up with a plan and not try to “edit” the plan in some way. And of course no one really likes to be on the receiving end of editing, especially since they feel like they just worked really hard at gathering all the information and making the tough choices. I can’t count how many times I have heard “we thought of that but decided not to” where the outcome was ultimately to revisit those choices. The other related outcome is just redirecting the work—that is, you come into the meeting thinking one thing and leave thinking another. Sometimes you hear the phrase “go get a rock” to describe meetings (this is the dynamic where you feel like you are being told to head down to the river bank and retrieve rocks until one strikes the fancy of the exec).
Transferring "accountability" – Finally, the biggest challenge I see with these types of meetings is how they have the potential to undermine accountability, which of course is the heart and soul of what we are making sure we build into our management structure. This is a very subtle point, and it takes time for this type of weakness to really settle into an organization. When you think about it, if you have a big hard issue that comes before a review forum then there are only a couple of outcomes. You can genuinely know what you want to do and genuinely present it and genuinely get some thoughtful and reinforcing feedback. That’s the ideal. Ideal things don’t happen a lot in nature or in business. On the other hand, you can be sort of unsure and get solid direction. Or possibly, you can have one view but after the meeting you are now heading in another direction. In those two latter cases, you can see that if we end up less successful than we thought, the person doing the work can clearly point to their boss (or exec or whatever) and push the responsibility for the outcome “up”. And of course from the “up” perspective, you have a tough time holding people accountable because they are doing what they think you told them to do.
No one person, one meeting, or one team of course demonstrates all of these (or other less than positive characteristics one could come up with). Rather I wanted to highlight some examples of things that I think from a "checkpoint perspective" we are trying to improve upon.
This turns out to be a pretty rich topic. I think I just touched the surface on this one. Given that I know I am probably a little bit “different” with respect to this topic, I am sure I either ruffled some feathers or maybe wasn’t as clear as some would have liked. Feel free to email me with any follow-up questions or comments as we hone this topic together.
--Steven
THE PERILS OF “APPROVAL” OR OUR JOURNEY TO ACCOUNTABILITY AND EMPOWERMENT (Nov 4, 2009)
If we think back to the start of the Windows and Windows Live organization, one of the earliest topics we talked about was “empowerment” and the associated “accountability”. We talked a lot about two management behavior patterns that drove folks crazy—“approval” and “randomizing”. This post looks at a current situation through this lens, considers how we can continue to do better, and tries to show the complexities of these cultural attributes.
The negative feelings around “approval” (and the associated behavior of “escalation”) come from the notion that when you want to do something you have to somehow seek approval from management: the process is to put together a proposal and then meet with management (executive or otherwise, but executive preferred). To me this seems like a pretty crazy process. The idea that you should get approval to do something implies that management knows more than you about what to do or not do, which when you think about it makes little sense. On the other hand, we don’t want the other extreme, which is people deciding things poorly simply because they can.
Part of this was instilled in me by a famous (and oft-repeated) story that the legendary Mike Maples (the original VP of Applications/Office) would tell. He talked about how when he first arrived at Microsoft (from IBM) he noticed that people would ask to have meetings with him. In these meetings they would present a lot of slides (he always called them “foils”) and after about 55 minutes and 20-30 slides there would be a request for Mike to approve something (marketing, development, advertising, or something else). Mike would famously ask, “So how long have you spent on this issue?” And invariably the presenters would say, “We’ve been focused on this for weeks [months].” To which Mike would reply, “OK, so I have now spent a total of about 55 minutes on this, so clearly you know way more than me, so you should decide.” I love this story.
There is something we learned along the way that is important to really being able to act on Mike’s sage advice. It is important for people making those kinds of decisions (which is all decisions) to be operating with two important tools. First, the team/people/person should have a plan—a plan is a thought-through view of execution, risks, and most importantly the downstream impact of the decision. And second, and perhaps most interesting, Mike and management need to make sure they are providing the team with a framework and the context upon which to base their plan and thus their decision.
This is important because without a framework and context, anything can be made to look locally perfect. That is if someone wants to spend $100M then I am sure they can come up with a nifty forecast for how sales will improve. If someone wants to add a feature, then I am sure they can also come up with the customer testimonials for how important that feature is.
Equally important is how a plan needs to be part of the equation. You can’t just have a plan for the immediate execution—it is easy to have a plan to open a PO or to start coding. There needs to be a plan for how the whole thing plays out, not just in isolation but in terms of opportunity costs, side effects, and overall management of the team. If we spend this $100M then what do we not spend (assuming zero sum)? If we allocate developers to this feature, then what aren’t we going to do? What will shipping this code do to quality, reliability, and performance of the existing product—directly, indirectly, or unexpectedly?
In order to avoid an approval process, it is important that everyone on the team be informed of the context (the role of management) and that the people making the decision have a great plan to execute.
This last point brings us to accountability. The positives of accountability are super clear—you are responsible for the successes you bring to market, get credit for the work, and experience the pride of end-to-end ownership. The negative feelings around “accountability” come from a history of infamous executives mandating things at the last minute. In fact, we have our own Microsoft jargon for this one: “randomization”. To randomize means that executives who don’t know anything swoop in and make decisions (which by definition they must know less about). This has the side effect of completely removing the notion of accountability from any decision made—it means that if you know that someone will change the start menu “randomly” (that was a question I was asked in the spring of 2006—see http://my/sites/stevesi/Blog/Lists/Posts/ViewPost.aspx?ID=306) then at first you just back off and assume the worst, and eventually you just stop deciding things since you know someone will swoop in and decide something different. Ultimately the organization fails to decide anything, and anything that does get done has no real accountability. Why is that?
From the perspective of the random manager, the reality is that they firmly believe (or so I am told) that there was either a poorly thought through plan or that context was missing in order to make the “right” decision. While this is happening in real time it looks like the manager is being random. But taking a step back both parties need to take responsibility for what is going on and why this is a poor situation:
Management (the random folks). What context was missing? Why did folks not know the thing that you know? This is really important to understand. It means as a manager you were not doing your job to communicate the context.
Team (the randomized folks). What elements of the plan were not thought through? Were you just really missing something? How did it get so far through a “process” without you figuring this out?
Of course there can always be more to a specific situation. As always, these posts tend to look at things through a general perspective. At the same time we are working through some decisions for Windows 7 SP1 that have many elements of this “dynamic” or “problem”. The question we’re dealing with is the process by which new features are added to SP1.
On the face of it we have a corporate-wide policy that applies to servicing: we don’t do new features in service packs. Doing so slows the adoption of service packs (because corporate customers re-evaluate them) and increases the overall risk to maintaining the quality of the in-market product because of the increased surface area. Just as a footnote, this is often misunderstood with respect to agility since of course we make hundreds of changes to the product each month. We just don’t do a press release or blog post about each one. On average, Windows fixes about 100 issues a month directly reported by customers or telemetry. These fixes are very much like the ones you might read about on the Gmail blog (not the labs stuff, but the operating service) or the Google Apps blog. There’s always been a view that we might consider trying to get “credit” for this agility, but it has always seemed a bit cheesy since we’re just maintaining the in-market product as our competitors do.
Nevertheless, we still on rare occasions want to add something to the product. This is where the point of this post becomes clear. How do we decide when, where, or what to add? If we’re not supposed to add things, but everyone can make their own decisions, then how do the right decisions get made? Of course the goal is to avoid having an approval process because no one likes those (especially me). The approval process just drives the whole dynamic of that Mike Maples meeting of 30 slides and 55 minutes of “proof” (customer anecdotes, field testimonials, sales forecasts, etc.). But we also know that we need everyone to understand the downside of the decision to add features to an SP. From a management perspective this should be clear by now since it is our Microsoft servicing policy.
Where the process does not lead to a successful resolution is when decisions are looked at in isolation—this is where the need for a complete plan comes into play. For example, a specific feature/DCR in isolation might be perfectly fine. But what are the downstream effects on other teams? Will other teams get hit with new bugs or hotfix requests as a result? For the team doing the work, what sort of resource load does this put on them in the near term? Will they miss out on milestone planning, MQ, or other work (such as the rest of SP1) as the feature scope broadens or required resources increase? Will this be part of a larger picture marketing will begin to communicate, and can the one change sustain a change in perception or is this just one small thing relative to a larger problem? All of these are questions that need to be answered by the plan, which is different from the questions that get asked by the “approver”.
This last sentence matters the most relative to accountability. In an “approval” oriented process, the effort goes into justifying a decision which is really the easy part (anyone can find enough customers who want something, a revenue projection, or a competitive issue). Thus an “approval” shifts accountability for the overall effort—if approved, then anything that goes wrong was due to the approval not catching it and if denied then anything downstream is also the problem of the approver failing to understand the “case”. Yet we have established that the people doing the work actually know the most. What we really aspire to is having a plan that takes into account the full context in which we operate—not just how to get the decision made, but what will happen if we make the decision. Will we actually get the results we expect? What are the opportunity costs? What are the side effects? The person/team deciding needs to be accountable for those, not just accountable for getting a decision made.
We will make mistakes. Making mistakes is a normal part of a learning organization and, of course, of just being human. Everyone makes mistakes. Folks on the team might decide something and make a bad choice. Managers might fail to provide context or a good framework even when they do know more than folks on the team. Those are mistakes from all parties. When they happen we might end up being random—we might reverse a decision, we might decide something counter to our best judgment. And because we’re not the government, we might even decide that our rules weren’t right. We can do all those things. But we shouldn’t miss the chance to learn. The flip side is we should not learn the wrong thing or learn a way to approach problems that treats the system as something to game.
In this post, we don’t go through how these attributes translate into an effort that is less about a decision and more about getting work done—what code to write, is the spec good, do we have test coverage. These are all areas where you don’t really get approval, but work with your lead or manager. There are elements of this same dynamic and also important lessons for you and your manager to take away from the overall process of “decision making”. The key is that there are two sides to every decision and each has a perspective that needs to be included—how to pick what (or how) to do and how to frame what to pick.
Ultimately, what we don’t want to have happen is to fall back on the old behavior patterns of doing a bunch of work to seek approval (the slide deck with all the statistics, forecasts, or quotes) in an effort to convince the approver that we’re ready to go. We aspire to a much more organic process where people do what is locally effective, and globally optimal for the overall team. This is hard. It is why we expect a lot of our group managers relative to this type of work.
Our goal as an organization is to get as close to zero as possible in terms of “approval” processes and as close to 100% as possible in terms of accountable decisions. And we want decisions to be good—making bad decisions as a practice, versus as an occasional mistake, is not what we aspire to. The recent discussions around SP1 have reminded me that we still have learning to do as a team. We’ve vastly improved. But we all have more to do on each side of an issue like this.
--Steven
FIRST CHECKPOINTS – EXPECTATIONS AND ACCOUNTABILITIES (Nov 29, 2009)
For some folks on the team, we will have the first round of checkpoints in planning the next product. For many, this might also be the first time leading these meetings—either the first time leading the meeting for a specific product domain or the first time in a new management role, or perhaps both. It is worth making sure we are clear on the role these meetings play in planning and the mutual expectations between those presenting and those listening.
I think it is important to view this topic through the lens of a recent post, “The Perils of 'Approval' or Our Journey to Accountability and Empowerment”, since so much of what goes on in these meetings can inadvertently alter the equilibrium of accountability or, worse, revert to “old” behavior patterns we are trying to avoid. The referenced post generated a lot of mail to me and a lot of interesting hallway conversations (excellent!), but what was most interesting were the two extremes represented in the interpretation. Uniformly folks were pleased to see the overall tone of the post and the direction we want to head. Some interpreted the post as a logical point in evolving a plan—that is, you don’t have “approval” meetings because planning and associated details are being worked out through the “funnel” of gradual and shared refinement we have talked about so many times. I would say this is the “right” interpretation. But some folks seemed to read the post and conclude that meetings were no longer necessary or that the post solidifies the view of “I do what is right for my product area”. This latter view is worrisome in the sense that “right” requires context—the context for deciding the right things (as best we ever can) is the gradual refinement of planning and sharing, not a self-designated context. Checkpoint meetings are not approval meetings, but they are meetings that checkpoint the gradual refinement and sharing within the framework/context we have established.
So the best way to think of a checkpoint meeting is this: if the work of the team represents the further refinement of the framing memo, which led to planning themes, which are now beginning to focus on scenarios (sorry for all the subtle terminology—these are not terms to be taken too seriously; focus on the work results), then the meetings just represent a checkpoint along the path to completing the product plans. The checkpoint meetings are a chance to show off the creativity brought to the problems we set out to address at an abstract level in the earlier stages of planning.
Where things get tricky is when teams present an unexpected, unsupported, unreliable, or unfinished view of the plan relative to where we should be:
Unexpected. Unexpected means that ideas presented are a “surprise” or that scenarios that have been talked about quite a bit vanish. This is a sign that the sharing of information has been incomplete during the planning time leading up to the checkpoint.
Unsupported. Unsupported means that the work presented might require some “magic” from another group, but the other group does not have that magic in their view of the checkpoint. This is a sign of a lack of the required cross-group work in planning.
Unreliable. Unreliable means that the work presented does not pass the basics of engineering rigor in terms of what can be accomplished, what is technically feasible, or what will have the right level of customer-viewed quality at the end. This is a sign of a lack of calibration relative to where we are in the checkpoint process.
Unfinished. Unfinished means that the work presented is incomplete in terms of the expected level of detail so it is hard to know where things are heading next. This is a sign that the team is “behind” where it needs to be in order to get us to a vision, feature list, and work item schedule.
In each of those cases the meeting can quickly turn from a “routine” discussion into one of those tricky situations where folks presenting might claim that the meeting has turned to “micro-management” or “randomization”, or where those listening feel like they are pushing too much top-down or feel they are changing course. What is incredibly important to realize is that the meeting should not get to this point—avoiding this requires a level of accountability such that if the state of the work is in this tricky spot, then those presenting need to own up to it now and not think that such accountability is failure. Obviously part of this would be to provide insight (a plan) into when the work “exits” this stage of unreadiness.
That is really the key to how a checkpoint meeting is not a review. A review is a meeting seeking approval. There is no approval to be obtained from this meeting. A checkpoint is a meeting where the team demonstrates it knows precisely where it is along the planning timeline, knows the level of details that should be presented at this point, and owns up to the differences between those two. It is not a time to assign “blame”, try to avoid the subject, or hope that no one notices. A checkpoint is not a meeting you “make it through” or “win over management”.
Accountability comes from “self-awareness” of the overall plan and a deep understanding of the responsibility, latitude, and authority each team has to determine the success of their area. Success of the meeting comes from a clear affirmation of accountability, convergence towards the plan, and clarity of next steps—and the self-realization of all of those combined with sharing that point of view with management and your peer group.
Each checkpoint comes before a particular milestone (before MQ, before the vision, before M1/M2/M3, etc.) as this is a natural time to sync up. This is another way these differ from reviews: the whole team is on the same timeline, working from the same playbook, and importantly at the same “altitude”. We don’t have “deep dives” for some teams and overviews for other teams. It also means each checkpoint is “led” from different discipline perspectives. Before the vision, Test usually leads as we line up the MQ work. PM leads through the vision and planning. Then once we’re into coding, Dev leads. And finally as we focus on exit criteria, Test will lead again. Of course every checkpoint is a presentation of all the disciplines and the views expressed represent a consensus. It also means that each checkpoint has a slightly different focus. For example, the current round of checkpoints is focused on scenarios. As we enter Vision/M1 we will focus on features and the robust scheduling of dev/test.
With this backdrop here are a few tips for how the meeting can be a productive discussion and an affirmation of roles, responsibilities, and accountability:
Follow the template. The template provided is there for you to use not as guidance but to simplify the overall process of the meeting. We’re not asking for more slides, a different pivot of the information, or even a unique presentation format. There’s a lot of room for creativity in the process and the best place to apply creative energy is into feature design and architecture, not into the slides that we never ship to customers. Some folks always think their work does not fit into the format/structure for some reason—keep in mind that ultimately when we sell the product all the features fit into a single “template” as well.
Speak as one team. Some view checkpoints as a “pm thing”. Checkpoints are a “team thing”. Depending on the project phase different leaders will own the meeting, and at each phase different parts of the template might be for different leaders. It is incredibly important for the team to speak as one and that the presentation not show a disconnect between pm, dev, or test. The checkpoint is not the place to learn that dev does not have a plan for how to build something or that test does not feel the boundary conditions or constraints have been thought through. That alignment needs to happen before the meeting.
Provide a definitive scenario/feature list. The checkpoint is always a time to present the definitive list of planned work (scenarios leading up to planning, features as we approach M1). This is the list of things the team is going to deliver. Of course there are uncertainties and earlier in the process the uncertainty is higher. But everyone should be clear that the primary output of a team is code that implements scenarios/features, so this list needs to be refined and shared with clarity.
List all scenarios/features with the same priority. A while back we had a blog discussion about the perils of P0, P1, P2 with respect to accountability and to clarity of plans and communications. It is worth returning to that now. When a team presents a scenario/feature list that has this prioritization, the meeting adjourns with a total lack of clarity over what is getting done—are all the P0’s getting done or is there just a strong desire to get them done, which P1’s are getting done or will none of them get done, are any P2’s going to get done and how will that get decided? The goal of planning is to get to certainty about what is being committed to. The normal engineering process would dictate that once a list of committed things is created, those will expand to fill all available time, and the idea of “pulling a few things off the wish list” never materializes because the wish list ends up being populated with work items learned about during execution. This is a normal and expected result of starting execution with a plan that is not 110% complete, and thus it is not useful/prudent to generate a wish list now. Early in the process there are degrees of certainty associated with feature execution that might be noted.
Account for all dev resources. Often in checkpoint templates you are asked for an accounting of how many devs (SDEs) are working on each of the feature areas listed. This needs to “add up”. In other words, your list of feature areas needs to be complete and your list of devs working on those needs to add up to the total number of devs. If you find that a lot of devs are fractionally assigned, that is almost certainly a problem. If you find that there are a ton of devs on one line item, then it should be clear that item is overwhelmingly important (not just overwhelmingly difficult). If you have trouble accounting for all the devs or assigning resources to all the features, then that is problematic. This is the first place where the disciplines show the necessary alignment, as the dev manager owns this information.
Scope the work. Each checkpoint is a chance to iterate towards a more concrete plan. It means that when PM presents the scenarios or feature list, dev and test are on board with the list because they *know* how to translate the list into high-quality, delivered features. At the checkpoint it is assumed dev/test/pm are all on the same page. The checkpoint is not a chance to “force” each other into agreement; the weeks leading up to the meeting are the time to reach consensus, recognizing the different points of view each discipline brings to the process.
Ignore “non-goals”. An “old school” way of pointing out non-accountability is to make sure there is a clear list of “non-goals”. This is always a problematic list. First, fitting it on one slide would seem to be impossible since the list of things not being done is infinite. And second, a snapshot view of the list at a meeting like this can only generate one reaction, which is a desire to move things from the non-goals to the goals. The only list that matters is the definitive feature list. In sharing and aligning across teams, non-goals or “scope” is a way of saying what is being done/not being done, and that is totally fine. Sometimes non-goals serve as an FAQ that defines the scope of an overall effort—such an FAQ might be a convenient way of referring to the goals. On the other hand, if you rely on this a lot, one might call into question the choice of features, as it sounds like a lot of people are asking “why” with respect to the list.
Deliver on cross-group alignment now. Now is the time to stop thinking about cross-group “dependencies” and record things as feature commitments and partnerships. All features need to be in the feature list, including those that are being done “for” other teams—or for scenarios, all need to be inclusive of the full set of contributing teams and “dilithium crystals” required. It is especially important that when one looks across the entire checkpoint feature list, cross-team partnerships be accurately reflected. Teams should not think of cross-group work as “off the books” or a “lower priority”. Since all features have equal levels of commitment, when it comes time to cut something a team needs to consider the full list of committed features as candidates to cut—and in fact cross-group features are the last ones to be cut/scoped because of the downstream commitments.
Dependencies are not “risk areas”. Often along with the “P2” features there is a desire to point out the risk areas in the feature list. It is a truism that the risk areas always seem to highlight the inbound code from other groups. This isn’t particularly productive or informative, as we all know the normal reaction to risk is to look for things you don’t have “control” over. The best use of risk areas is to find things in your own code that are risky because of unknowns, complexity, or other execution issues. Having the list of risk areas be a mirror of cross-group efforts doesn’t really inform much. Ultimately the only risk that matters is not getting the list of committed features done. There is a clear recognition that one value of outlining risk areas is to call out the work the team is especially focused on in terms of failure prevention, and that’s OK.
With those “tips” on the use of checkpoint meetings, it is important to re-state the most obvious tip—surprise is uncool. This means that in any dimension there should not be some big newsworthy event at the checkpoint meeting. This is not a call for everyone to send “expectations” mail to their manager about the meeting since saying the night before “oh by the way this might be a surprise but now it won’t be” is not particularly helpful. The checkpoint is not a place to learn things for the first time—we push for weekly 1:1’s through the management of the team and ask everyone to have high touch connections across the team. That is to create an environment where sharing happens sooner, not later, and course corrections are a rare exception not something that happens at the meeting.
I think that if the spirit of this post is followed then there are two outcomes. First, creating the information for the checkpoint is easy. It is not homework, because the template is “lightweight” and because as leaders of the team this is just information you should know. Second, the meeting itself feels like a casual discussion of where things are, because it is a checkpoint.
If you leave the meeting feeling you gained something, then it is worth asking what that is and why it had to wait for the checkpoint. If you leave the meeting feeling as though you “survived”, then ask yourself what risk in the presentation was not properly understood by those listening. If you leave the meeting feeling like “now we can get back to work”, then ask yourself why what is supposed to be a straightforward statement of where the team is was so disruptive.
At the meeting “management” is there to reinforce these principles. The product “plan” is in place at 100,000 feet as we like to say. The refinement and the creative spark is being introduced now—all the essential elements for what we need have been articulated and these steps in the planning process are to make sure that we’re all progressing as a team in our refinement of the plan and in sharing of that refinement.
--Steven