There are two important elements to Bill's management style. He was deeply interested in how things worked in the products, and he set the tone that managers needed to know how things worked. That was a huge strength of Microsoft's management team (and a weakness when people lacked the human aspects of managing) and critical to Microsoft being able to do all of the things it did. The company lost this to a degree around the turn of the century, and it resulted in several project failures.
Bill also, as Steven wrote, would purposely drive conversations and challenge people in order to find the weak spots. He was sometimes brutal on a personal level when doing so. However, these discussions were often valuable. Bill did not have to know the answer. These discussions often drove substantial follow-up that developed valuable insights. In addition, he was good about having the discussions not break the delegation barrier. He wasn't telling people what to do; he was telling people he didn't think they had thought something through. Sometimes that was a painful discussion. Sometimes that was a waste of time. But sometimes it was also super valuable (a Bill term) and really helped those of us working on the products.
These are a joy to read. Thank you!
One (obvious) observation from my time at Microsoft is that there was always a strong, top-down push for early (some might say premature ...) dependencies and code re-use. One of many problems with Longhorn.
Interestingly, at Facebook there was never any top-down push for code re-use. Over nearly a decade, I rarely saw a classic Microsoft-style architecture diagram in an exec review (the only exceptions were new Microsoft hires who were still adapting). No exec ever asked if a team was using the latest-and-greatest internal framework.
But – in my opinion – there was far less code duplication at Facebook than at Microsoft. Code unification/clean-up was a constant, bottoms-up effort from various teams (e.g., the Product Infrastructure team). Projects like GraphQL, React, React Native, etc. were entirely organic, bottoms-up efforts.
I've never been able to fully explain the difference. There was clearly a confluence of factors:
- A single, always up-to-date codebase (easier to reuse code)
- A culture that primarily optimized for speed of execution
- Daily releases (vs. multi-year releases with discrete RTMs)
- A culture that celebrated engineers improving core abstractions
- Etc.
But I can't help but wonder to what extent Bill's constant pushing on this issue actually worsened it, e.g.:
- Bill would detect some area of duplication
- He would push some team to create a framework to unify the thing
- He would push other teams to take a premature dependency on that framework
- Dependent teams would be burned and learn to avoid future dependencies at all costs (real artists ship)
- The cycle would repeat
Certainly one of the unspoken lessons I learned at Microsoft was that you were crazy to take a dependency on any piece of code that hadn't shipped yet.
It feels like a cautionary tale in the exertion of power – the harder you push, the worse you make it.
Thank you. I love this comment.
I think they are a product of the era. My outsider perspective on Facebook is that it is one product even though it is often talked about as multiple products. It got optimized for code sharing the way Excel did (versus Windows), or maybe Office in the post-1997 era. The view was that there was one customer at the end viewing the output as a single product; even if it gets talked about as "products", they are developed as one, with a bunch of assumptions about what it means to be one.
Bill's push on Windows would not have worked, particularly with Cairo, Longhorn, etc., if the team had not been receptive. One thing underreported at Microsoft is the "thick skin" it took to run a product and deal with that input.
The Windows team celebrated improving abstractions. The problem was they could not mandate consuming them internally because of the above, and of course externally no one was even remotely interested in ever rewriting, because along with improving abstractions came a promise of running the old ones forever (and often handicapping the new abstractions with that requirement). A big lesson I will talk about later is how much customers loved the idea of not having to change.
It is difficult to cover all the bottom-up innovation because I'm writing this from my perspective, but Microsoft was filled with bottom-up work. As you correctly point out, the top-down innovation was not connected to that and never really overlapped or Venn'ed with it. Instead, the top-down needed more of a framework for success, but we never got there as a company; just some products (like Office) specialized in frameworks versus details.
There's a lot to be said for the fact that much of the online world (FB/Google) has not yet experienced wholesale changes that impact how customers perceive things. Mobile might have been one, but the FB changes happened soon enough and with a base that wanted to change (like the internet and Microsoft).
You are correct that Facebook itself (not WhatsApp, Instagram, etc.) is one product (and one codebase, critically) and that makes it way easier.
Also, my commentary was not intended to diminish the bottoms-up work of folks. I worked with some incredible engineers at Microsoft (Erik Christensen comes to mind).
My guess is the real root cause here is ship velocity, i.e., if you literally ship every day, there is no real incentive to jam stuff into the next release because you can just wait until tomorrow. If you have a 2-3 year ship cycle, there is tremendous pressure at all levels to jam things in now because 2-3 years is such a long time.
To your point, it sounds like Bill valued "innovation" over shipping (i.e., the teams that shipped frequently didn't "innovate" enough, and the teams that "innovated" rarely shipped ...). I think the real counterfactual here would have been if Bill/MSFT had valued shipping frequently over innovating. I'd be curious whether that led to more or less innovation (over, say, a decade).
Of course, hindsight is 20/20 and I was in high school at this point so this is from the cheap seats. :)
Super interesting to think about.
I have spent a lot of hours in discussions over velocity. I've learned two things. First, it is pointless (and I should not) to get into discussions about whether velocity is the *cause* of something, good or not. Second, I should not try to convince anyone I understand just how big a deal velocity is. I say this having managed things that ship all the time, huge products that shipped hundreds of changes a month, and not pushing back when those same people say they totally understand shipping "slowly" :-)
The real thing I learned, though, is that every "generation" or even team/company has a "thing", and the worst thing you can do in seeking alignment is to try to devalue that core "thing". An unrelated example: I learned early on that no matter how much we thought we understood customers, had data to explain it, and did tons of work with customers, I should never engage or debate salespeople over whether I understood customers. I needed to grant them that they understand customers better than me. It was their thing. I should not try to better it or take it away.
That said, velocity is Facebook's thing. I should not debate that :-)
What I also believe is that any team/company "thing" that is its greatest strength inevitably becomes its greatest weakness. I think the past couple of years have shown the weaknesses of velocity--it is a bumpy ride (not straight down) but I think the trend is clear. I admit my reservations about shipping fast and breaking things may be confirmation bias.
Microsoft's "thing", at least for sure with Bill and Systems, was always architecture. I don't know if I would say "innovation" but certainly that is a valid way to look at architecture. There's contra proofs because Office was really the team that shipped and Office is the team that today funds all of Microsoft (technically it is more fuzzy than that).
The debate was always how "Office is too busy shipping to do the right thing" or to "align" or to "have a good architecture". I have a few stories about this later on. For example, I once had it "explained" to me how with Office you can always cut features and ship, but a platform is different--you can't cut features from a platform (and that is combined with the requirement to announce the platform at the idea stage).
It is also fair to say Bill and others did not see the difficulty in execution. It isn't that they didn't value it; it was just that execution was not what required IQ. IQ was needed for architecture. A great example to me of this was how the Windows team was often structured with sort of an "A" team and a "B" team. The "A" team was on the next release while the "B" team was "shipping". As soon as the product shipped, a transition would take place: the shipping people would come in and do the next steps. We saw this with Windows 3.1/Chicago, Chicago/Memphis, Nashville, etc. I'm not saying it is good or bad, and there was a common belief that the way to solve for both shipping and good architecture was to have people who were naturally excellent at one of those gravitate towards doing that. It was something we not only avoided in Office but actively managed against doing (we did not let people peel off to work on ideation to the exclusion of shipping).
To your point about jamming things into the next train, that is absolutely true. I would say the primary driver was that we lacked distribution to do otherwise. If you missed the train (or the train missed you), then you had no way to get new stuff to customers or even for them to know you did it (though that's also a problem with shipping fast--no one really knows you did something unless you jam it into their experience, which has all sorts of other compromises).
This is my attempt at a really candid reply :-)
I tried to write a long response to this and then realized I needed a short response.
Your comments about velocity, Apple's "million NOs for a single YES", and JWZ/Richard Gabriel's "Worse is Better" approach all indicate that:
1. You have to ship and ship as quickly as possible.
2. If you don't put something out, someone else will, and they will be the product remaining on the market, so they get the opportunity to improve in the next version and you don't.
3. Perfectionism in building products is the kiss of death.
This makes me sad, because most of the products I really enjoy have a perfectionist philosophy but were not long-term market winners.
Take, for example, Novell eDirectory. There are two narratives around that product:
1. The product was so good that Novell was able to basically build the second stage of their company around it, and it kept Novell around a lot longer than it otherwise would have lived.
2. Novell's focus on engineering, to the detriment of junior-admin usability and marketing, was what killed them, and things like eDirectory are emblematic of that.
I still use eDirectory and products derived from eDirectory at work, and I truly think it is a best-in-class experience. But I am also always looking for what the future is. In my job I build systems upon foundations that I think are technically strong, but I know I am going to need versatility and stability down the line.
In your opinion, given the ways in which tech is built today, how tech velocity works, and the things tech companies have to do in order to survive, how does one identify foundational technologies that are likely both to be high-quality and to stick around for the future?
I am wrestling with this question long term, and I think I have identified two characteristics that seem to help, but I am sure there are more and it is possible my current two are flawed:
1. The team behind the product can demonstrate a coherent vision for it in documentation. That is, the model it is built on makes sense and meets varied use cases in a holistic manner.
2. There are enough resources being expended on it over time (coding, cash, docs, etc.) that foundational changes to it can still be made in a reasonable timeframe.
Am I asking the wrong questions?
I wish I had a good answer in the abstract for this. All I can do is tell the story here and let you see what lessons emerge that you feel you can apply. Perhaps that is the humility of experience speaking, knowing how different every situation is and not projecting too much.
I will not argue for velocity then (other than to say it is a dominant variable for many companies and I'm not sure there's consensus on the trend line :) ).
To paraphrase Alex R., maybe the core thing is that as a startup you need to get distribution before the incumbent gets innovation, and so velocity is a dominant variable there. Once you are the incumbent, the calculus shifts.
And fwiw – as someone who lived in the innovator/shipper dichotomy in DevDiv – I wholeheartedly support not doing this. I was adamant about not having any kind of "architect" role post-Microsoft because (imho) real artists ship.
There's a lot of evidence that velocity was correlated with success. OTOH, post-1996 or so, when velocity became a thing, everything that failed was also moving fast--just in the wrong direction.
Similarly, many things that gained distribution also failed: from PointCast, to Yahoo (which also moved fast), to your pick from today.
The interesting thing is that early on in this, seeking distribution makes the most sense, and then revenue. The problem is revenue takes much longer to decline, which is an incumbent's curse. It is why having the founder around longer can help, if they are tuned into what is going on. Non-incumbents almost always get confused by revenue versus innovation.
It is why I'd still argue the trendline :-) :-)
I'd be curious to get your take on how BillG himself honed his management muscle—both to enable an organization that reached such scale (even in those days, and then beyond), but also in spite of it. This is pretty remarkable in its own right. Future post?
E.g., did he get inspiration/guidance externally? Or was he super remarkable in seeing around corners while honing the principles he applied?
I think in a sense this would be best left to his own reflection. My view is he was like so many founder-CEOs we see, in that he had a long-term vision and a set of tools and principles that he was putting to work. From that perspective, a founder-CEO is in the "same job" for a long time. I think it wasn't until 2000 or so, when he changed his role to Chief Software Architect, that he deliberately looked at what he did and how, and began to focus on specific strategic initiatives (like the Vista/Longhorn project).
I see Vern Raburn in that video. It would be great to hear more about the contributions of others like him who may not be so well known. I met him while I headed the apps group at GO, when he started Slate. I thought he was just a brilliant guy, one of the reasons being that he didn't drink the Kool-Aid about PenPoint OS and asked questions that we should have been asking ourselves but had too much hubris to ask.