Subscribers, thank you so much for the kind words and, most of all, your participation in the discussions. My heart warms each time someone shares their own story or memories from these times. In this section, I am transitioning out of Apps Development College to my full-time role. Along the way I am learning the realities of PC software today—it is usually late, and usually buggy. I’m also starting to learn a bit about the two cultures at Microsoft, Apps and Systems. There will be a couple of short posts after this and then we’re off building products!
Everything is Buggy
As the summer of 1989 turned to fall, the shipment of Windows 3.0 was looming. When not working on a Mac product or trying to get OS/2 stable for daily use, most of us in Apps were dealing with getting something to work on Windows and reporting bugs back to the Windows team.
Far away in Systems, Windows, which had started as a side project, now had a full team of people grinding away on a death march to get Windows 3.0 done. Typically, in those days, this period of heightened work hours and intense cycles of bug fixing marked the last months of any project. The cafeterias were not open for dinner, ushering in a (mostly) Systems tradition of ship meals, featuring a buffet much fancier than the cafeteria offered. The idea of serving dinner as part of routine death marches became a decidedly Systems approach that was so formalized it became a budgeted line item (as I would later learn when I joined Windows). Windows 3.0 was still months from shipping, but the activity was going on around the clock.
Windows was in Systems, which was the big dog half of Microsoft. While the history of the company was in Languages, where BASIC and other tools were made, the center of the company, and at the time its economic engine, was Systems, where MS-DOS was made. MS-DOS was the brilliant product born out of a commitment to IBM to deliver a product that did not exist and was not yet under development. Microsoft subsequently acquired an existing operating system and modified it to meet the deadline, with the twist that Microsoft was free to license the product to other computer companies.
In other words, while IBM was the first contract for MS-DOS it was not exclusive.
Out of that, the entire PC industry was created. And not for one second was that lost on the Systems people.
From those earliest days, Microsoft felt like two different companies: Apps and Systems. Culture is a buzzword in modern business, and these two cultures could not have been more different, at least that’s what I was led to believe by listening to stories at lunch. Even though the company was made up of only about 3,500 people, with half in Redmond, I had not yet met anyone in Systems. While I could have easily walked a few hundred feet over to one of the buildings they occupied, that wasn’t something that people did. Apps and Systems didn’t exactly intermingle.
The one thing we knew about Systems, despite the anonymity, was that as buggy and late as Apps products were, the Systems products, I was informed, were buggier and later. Windows 3.0 was coming down to the wire. There were no real secrets—many people had builds and were installing the product, and the weekly industry tabloids, InfoWorld and PC Week, were tracking the latest rumors, test releases, and gossip. The actual delivery date, however, was not known, by the team or anyone else, until very close to the announcement of that date.
From the time I arrived at Microsoft and installed that first build in ADC in the summer, the launch of Windows 3.0 was always real soon now, often abbreviated RSN in snarky email. That had no impact at all on enthusiasm; the buzz that Windows 3.0 would be a breakthrough was pervasive in the hallways. The industry was equally anxious for what appeared to be a showdown among a plethora of operating systems including MS-DOS, Windows, OS/2, and Macintosh.
In hindsight, it was easy to make fun of the fact that everything seemed late and hardly worked. The entire industry was like that. From the earliest days of PCs none of us knew anything else.
The expression vaporware was commonly used to refer to software that was well known and frequently discussed but not yet shipping. The phrase was first used as far back as 1983 by Esther Dyson in the industry thought-leading newsletter Release 1.0. In some sense, most everything was vapor. I remember sitting in my ADC office having just received a Goldman Sachs analyst report on Microsoft from the library. In the report was a table of all our company’s products under development and estimated ship dates. The dates were far in the future and still all wrong by many months or even years.
In fairness, it was challenging to simply get a non-trivial product built, have it work on the wide variety of PC configurations that existed, and then ship it in dozens of languages. There was no internet and no diagnostics or telemetry, and anything that went wrong simply crashed the whole computer, requiring a power cycle. And, most importantly, the field of software engineering was so nascent that there was not yet institutional knowledge of how to build and test software for mass distribution. Before the PC, there were many complex systems, but each one was custom and staffed by full-time people to keep it running. PCs were different. Everything was new. And that was before the complexity of coding for a graphical interface like Windows and Macintosh.
One of the biggest differences with PCs was that the PC operating system ran in such a way (called real mode, as opposed to the protected mode that would be introduced later in Windows and still later in Macintosh) that any bug in a program generally did one of two things, and quite possibly both. First, for certain, whatever file was open and being edited probably became corrupt and data was lost. That was a heart-stopping given. Second, there was a good chance that the crashing program also caused the computer to crash, hang, or otherwise stop working. Thus, the cardinal rules of the early PC era were born: First, frequently save work and make backup copies, and second, if something goes wrong, reboot the machine.
I learned this firsthand too many times. In college when I operated computers in the lab, an entire shift could often be consumed by trying to help a classmate salvage the remains of a term paper off of a floppy after a crash. Those were the most horrific bugs because work was lost that people assumed they were saving. Such were early PCs (and Macs).
Because of this, it took almost super-human effort just to get programs working in the first place. A mistake in the code could cause everything on the computer to stop working, including the tools being used to diagnose the bug. The best programmers, like Duane Campbell (DuaneC), ScottRa, and others, knew how to step through each instruction carefully and monitor whole blocks of memory for changes at the lowest levels to figure out what was going on.
DuaneC was already a legendary programmer within the ranks, a tech lead as I would learn. He was a few years older but seemed more grown up simply because he was married and had a maturity level that most of us lacked. DuaneC had a slight Southern accent, having grown up in rural Tennessee, and a speaking tempo I was familiar with from the people I grew up with in Florida. He was a musician but also studied computer science at the University of Tennessee. He was one of the earliest members of the MS-DOS Applications team and a key contributor to Word. He was also one of the kindest and most thoughtful leaders I had ever worked with.
The most difficult bugs were those that crossed from the application into the operating system. That meant it took knowledge of not only your own code, but also code in MS-DOS and probably code from a video or print driver as well. Lunchtime discussions often dove deep into the details of bugs and the techniques used to find the mistake, and almost always the mistake was one of a small number of common flaws, such as forgetting to check for null pointers or using uninitialized variables.
The tools and techniques that engineers across Microsoft were developing to build software at scale and to make reliable products proved to be a competitive advantage. They were state of the art. The 1990s saw an incredible advance in building software at scale, and no company did that better than Microsoft. Microsoft’s ubiquity and scale did not allow for gloating or even acknowledging the progress, but it would have been deserved.
The world outside of Microsoft was different. Outside, the computing landscape was marked by extreme heterogeneity. While IBM lorded over the PC, which dominated business, Compaq and Dell were becoming leaders in making PC clones and even racing ahead of IBM in areas like portables and using the new Intel chips. Apple Macintosh was not viewed as a viable alternative in business but captured the hearts and minds of students, educators, and creatives. While Microsoft was busy making MS-DOS and Windows 3.0, and was already shipping Windows 2 with Excel, it was also deep in a partnership with IBM to develop OS/2, a much more sophisticated and reliable (protected mode) operating system. From the outside, Microsoft looked confused or at least lacking a clear strategy. Caught in the middle were companies trying to bring software products to market. Which operating system would they come to rely on for their products? Some viewed the duality of Windows versus OS/2 as an elaborate scheme by Microsoft to distract potential future competitors, a conspiracy theory, lacking any foundation other than IBM’s poor execution, that OS/2 was some sort of head-fake to occupy developers while Microsoft dominated Windows apps. The partnership with IBM was the highest priority, but it wasn’t working out well.
The ever-present industry trade magazines seemed not to miss a beat over the rift between Microsoft and IBM. The raging debate over the costs and benefits of moving to a 32-bit operating system, specifically OS/2, was front and center even though 16-bit OS/2 had not taken off at all. This put Windows 3.0 at a perception disadvantage: it was a 16-bit operating system that could take advantage of 32-bit Intel processors. The industry disliked this lack of purity but loved the complexity of the debate. Something I learned early is just how much the PC era was marked by bringing complexity front and center in debates that had little to do with customers but served to keep analysts and pundits busy. Our job was to hide complexity, but it seemed others were constantly surfacing it. Though to be fair, we did our share of talking complexity, not usually passing up a chance to demonstrate our nerd credentials.
More importantly for customers, there was constant coverage of quality problems with software and hardware. If programs were not slow, they took too much memory or hard drive space. At the same time, every week seemed to bring more news of faster processors hoping to finally make yesterday’s software fast enough to use. Except we were busy building more software, requiring even faster processors and more memory. We were under constant pressure to build software that ran on the PCs customers already had while also taking advantage of the latest processors and hardware. In hindsight, what saved us all was that at any given time the installed base of PCs (the number of existing PCs in use) was dwarfed by the run rate of new PCs (PCs sold to new customers or to replace older, slower PCs). The velocity of this dynamic was key to our ability to constantly ship software that outstripped the PCs people already owned.
The industry saying was something along the lines of “what Intel gives, Microsoft takes away” in reference to increasing hardware capabilities constantly outstripped by more demanding software.
The early and successful Microsoft strategy of developing applications that ran on multiple platforms remained the cornerstone of the Apps strategy. Only now Apps was busy enough just developing for Microsoft’s own platforms: the mature MS-DOS, where Apps never gained a lead; the nascent Windows that few were buying (yet); the non-existent, mostly non-functional, but strategically critical OS/2; and the monster money-maker Macintosh, which competed with all of those. As crazy as the strategy (or lack thereof) seemed to the press and Wall Street, it was even more taxing for us developers in Apps.
Cross-platform development was not only impractical but also the answer to a question no single customer had on their own. Yet that did not stop the search for a magical solution, and thus began my first real programming work. Microsoft, BillG in particular, always believed there was a software solution to any problem if enough “IQ” was applied (BillG used IQ as an expression of currency, such as “how much IQ is in that group” or “he brings a lot of IQ to the problem”). This optimism and faith in IQ was a gift to Microsoft, but it also caused a lot of problems, because not every problem required a high-IQ solution and those with high IQ could not always apply it in a practical manner.
Finding such a magical solution was my first project and the first project of our new team.