Back to 004. Everything is Buggy
I finally have a project to work on. Unfortunately, it feels a bit like make-work and I have no idea how it fits into the big picture at Microsoft. Actually, I’m not even sure what the big picture is, as we’re all in the middle of the strategic shift to GUI and Microsoft has multiple operating systems we’re supposed to be supporting.
As part of the Apps Tools group, we were set up to provide the tools to make it easy to build apps that worked on any platform, regardless of the differences or details of each platform. Isolating app developers from platforms was our job. The industry called this cross-platform development.
Historically, such an approach was at the core of Microsoft from the beginning, simply because computing had always been heterogeneous. The makers of computer hardware customized the operating system, which in turn meant that apps needed to be modified to run on each different computer system. This was not any sort of evil plot, as some believed, but simply something that was in place because the hard part of making a computer system was the hardware. Hardware engineers, naturally, chose to modify the software if it meant making the hardware easier. In Microsoft’s earliest days, PaulA and BillG made the BASIC language for many different computers. Microsoft’s early apps, like the Multiplan spreadsheet, ran on many different personal computer systems of the time, spanning a variety of 8-bit microprocessors and operating systems. Developers like JonDe and DuaneC were experts in the underlying technologies used to get Microsoft apps running on systems from DEC, Tandy, Zenith, Data General, and a host of other names from a bygone era, as well as IBM and then Hewlett-Packard, Compaq, and Dell. PCode and the virtual machine that DougK talked so poetically about in my summer training were in part about making it easier to run software on multiple platforms.
It was natural, therefore, that with Microsoft looking to grow the graphical interface apps business while also itself building multiple operating systems, there was a need for cross-platform tools that were more sophisticated than the 8-bit character mode tools that were already in place. Microsoft needed cross-platform tools just to be able to develop its own applications for its own operating systems. Let that sink in. It was common practice in the industry at the time for every major independent software vendor to also develop their own cross-platform toolset, designed to optimize for their own app and their own view of the platform landscape. Microsoft was unique in creating its own need for cross-platform tools, with multiple operating systems and its own applications.
Cross-platform product development was the elusive brass ring that accompanied each generation without a clear platform winner: from mainframes to minis to the increasingly popular Unix variants to microcomputers, and now the rising graphical interface. Each new platform promised to be the one to end all platforms, and it might have been, until the cycle repeated. Cross-platform tools are one of those developer problems everyone believes they have an answer to, certainly early in a platform’s lifecycle.
This did not stop even Microsoft from getting caught up in building cross-platform tools. As platforms and applications mature, cross-platform development becomes increasingly difficult and the customer experience decreasingly good. Microsoft was still in the early days of cross-platform, so the approach looked workable. Given the early success with BASIC and 8-bit character mode, it was no surprise that BillG thought the next generation of such work was trivial, a term he loved to toss around. The difficulty—the lack of a trivial solution—was that more and more work was shifting away from apps and into operating systems. In other words, as Microsoft (with IBM, and Apple) invested more into making the operating system feature-rich, it made building cross-platform applications more difficult. In fact, making apps depend on a rich platform was the strategy, even when the platform was one of Microsoft’s own operating system products.
Still, the industry believed the key to making cross-platform trivial was a programming technique, one that wasn’t especially new, dating back to 1970s work at the Xerox Palo Alto Research Center (PARC): object-oriented programming, or OOP (sounds like oops). OOP was everywhere. A trip to the Tower Books on NE 8th Street in Bellevue, something I routinely did on Friday nights because it featured a deep section of programming and technology books, yielded new books every week with OO in the title. OOP promised to make programming an order of magnitude easier (another common phrase, meaning 10 times better or more, but with no specific units or ability to measure).
OOP was also deep in my own bones. My lab in graduate school was the Object-oriented Systems Lab. We spent the better part of a year recreating the original OOP platform from Xerox PARC, Smalltalk-80, so we could build our own OOP projects using it as a foundation. It is where I came to believe garbage collection was an important part of OOP. I came to Microsoft already an OOP zealot, which, I was later told, was in part why I was hired.
Aside from abstract computer science concepts, a new innovation for OOP was a new programming language pioneered at AT&T Bell Labs, which, despite the breakup 10 years earlier, was still a leader in many fields of research, still winning prizes and medals. C++ was the OOP version of the widely used and taught C programming language. That meant it held the promise of not only making programming an order of magnitude easier, but also, through its OOP techniques, making it possible to write cross-platform code, all while maintaining compatibility with the industry-standard C language (the language used across Microsoft at the time). OOP as expressed in C++ would make not just cross-platform programming easier but all programming easier.
Imagine that? No, really. Imagine that, because that’s all that could be done at the time, or ever.
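To make the promise concrete, here is a minimal sketch of the kind of OOP that C++ layered on top of C, with encapsulation and virtual dispatch. The classes are illustrative only, not from any Microsoft codebase:

```cpp
// Illustrative only: the core OOP mechanics C++ added to C.
// A base class declares a virtual method; each subclass supplies its own
// implementation, and callers work through the base class interface.
class Shape {
public:
    virtual ~Shape() {}
    virtual int Area() const = 0;  // subclasses must implement
};

class Rectangle : public Shape {
public:
    Rectangle(int w, int h) : w_(w), h_(h) {}
    int Area() const { return w_ * h_; }  // virtual dispatch selects this
private:
    int w_, h_;
};
```

The pitch was that code written against `Shape` never needs to change when a new subclass appears, which is exactly the property cross-platform toolmakers hoped to exploit.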
The buzz around OOP reached epic, or comical, proportions even making its way into mainstream business press. The cover of BusinessWeek magazine featured a baby in diapers at a keyboard and monitor introducing OOP to readers as “a way to make computers a lot easier to use”. It was no longer just a magical tool that would make cross-platform programming trivial or a technology that computer scientists believed would lead to more robust and maintainable code. OOP was even going to make resulting applications easier to use.
Object-oriented programming and C++ represented my introduction to the hype cycle of the technology industry. In experiencing it, I was fortunate in two ways. First, I was still early in my career, so I was more mystified than cynical. Second, I was surrounded by already seasoned managers focused on “shipping” who helped our group navigate the St. Elmo’s fire of OOP.
The industry would undergo a tectonic shift, a multiyear journey to demonstrate the utility of OOP for mass-market software, especially for GUI platforms. Today, most anyone can build GUI applications, but early on the complexity made that extremely difficult. While we could not make it possible for an infant in diapers to program, we could make it much easier for the typical professional or college student. The degree to which OOP or other developments contributed to making it easier will always be the subject of debate, as programming tools and languages always seem to be. There is no doubt, however, that OOP is deeply rooted in the evolution of the graphical user interface, going all the way back to Xerox and forward to today’s smartphones.
Making progress in my new job, however, had one big problem: There was no C++ for the PC. In fact, there was barely C++ at all as it was primarily a research project at AT&T. The only tools around took C++ code and transformed it into C to then be compiled by a C compiler. Normally, one thinks of programming as typing in one language and then converting that into the raw numeric code for the PC, straight from English-like to binary numbers. C++ was so new that using it was akin to translating to German by going from English to French to German. C++ was first translated into C that Windows tools could understand, then finally translated into binary.
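The two-step pipeline can be pictured with a toy example. Assuming a cfront-style translator (the emitted names below are invented for illustration, not actual cfront output), a C++ class becomes plain C before any binary is produced:

```cpp
// What the programmer wrote in C++:
struct Counter {
    int n;
    void Increment() { ++n; }  // `this` is implicit
};

// Roughly what a cfront-style translator handed to the C compiler: the
// class flattened to a plain struct, the member function turned into a
// free function with `this` as an explicit argument, and the name mangled.
// (The mangled name here is invented for illustration.)
struct Counter_C { int n; };
void Increment__7CounterFv(struct Counter_C* self) { ++self->n; }
```

Only after this translation did the ordinary C compiler and linker take over, hence the English-to-French-to-German detour.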
Like every other Microsoft project, we were already late and behind schedule, though I didn’t realize it or even really internalize it. But how could I have? I had no idea what product we were supposed to be building. All I knew was we were supposed to be working on cross-platform GUI, and that meant OOP and C++. We did not, however, even have the software development tools to use the C++ language. There was a team in Languages working on a compiler, but first they were busy releasing the latest version of C, which was late and buggy and did not include C++ support.
ScottRa cleverly decided that we needed to keep busy. I was too young and naïve to really understand how deliberate this strategy was as Scott was essentially stalling while the company figured out larger strategic issues, such as Windows versus OS/2 versus Macintosh, and while the Languages group finished up C and could move full time to C++ tools.
Were we soldiers, doing battle and training, or were we the TVA just digging ditches to keep busy? I had no idea. Nevertheless, ScottRa devised a simple master plan. We learned the ropes of getting C++ code to work by being pioneers within Apps, using a crazy library of C++ code from researchers in Switzerland called ET++ and a commercial product, Glockenspiel C++. The latter was a port of the AT&T C++ tools to OS/2 designed to work with Microsoft’s industry-leading C compiler, C version 5.1, which was already in market. ET++ was something called an application framework, not unlike parts of Smalltalk-80, with which I was very familiar—a framework was a collection of prewritten objects or code that helped programmers to write applications quickly because they could reuse previously written code. ET++ too was cross-platform, but it was just a research project at a university. ET++ was presented in a paper that came out when I was in graduate school and compared itself to MacApp, an application framework for the Macintosh that I was also quite familiar with from my MacMendeleev days. It was a given that we would someday build our own application framework. ScottRa told me it was just too soon.
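The inversion of control that made frameworks like ET++ and MacApp attractive can be sketched in a few lines. The names here are hypothetical, not actual ET++ code: the framework owns the event loop, and the application subclasses it and overrides only the hooks it cares about.

```cpp
#include <string>
#include <vector>

// Hypothetical framework sketch, not actual ET++ or MacApp code.
class Application {
public:
    virtual ~Application() {}
    // The framework supplies the event loop; apps never write this part.
    // Events are modeled here as simple strings to keep the sketch small.
    void Run(const std::vector<std::string>& events) {
        for (const std::string& e : events) HandleEvent(e);
    }
protected:
    virtual void HandleEvent(const std::string&) {}  // hook for subclasses
};

// The application reuses all the plumbing above and overrides one hook.
class ClickCounterApp : public Application {
public:
    int clicks = 0;
protected:
    void HandleEvent(const std::string& e) {
        if (e == "click") ++clicks;
    }
};
```

All the generic machinery lives in the framework; the app is reduced to the code that makes it unique, which is what "reuse previously written code" meant in practice.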
That meant, however, that at least there was a project. We spent our days trying to get ET++ to work on OS/2, something basically no one else on earth was even thinking about. Days turned into weeks, and months.
I was glad to have a project to work on. Like so many new hires into big companies, though, I struggled to figure out how what I was doing fit into the big picture. Actually, I wasn’t quite sure of the big picture just yet.
On to 006. Zero Defects
Cross platform has quite a history at Microsoft. Steven is right to point out the ebb and flow of the benefit of cross platform approaches as platforms and computing eras go through their life cycles. By that I mean that operating systems like Mac OS and Windows are platforms, while GUI (local) and Internet ("server") based apps are eras. (Your taxonomy mileage may vary :))
I joined Microsoft at the beginning of the Mac OS platform and GUI era but before Windows and OS/2. I worked on the Mac apps File, Multiplan, and Excel 1.0 (barely). These early apps did a lot to teach Microsoft about the GUI era, lessons that were critical to winning the apps market. Other ISVs were completely focused on MS-DOS and ignored the GUI era at their peril. I give credit to BillG (Bill Gates) for having the strategic forethought about GUI, even if DougK hated it at first. It was a “Big Bet” as GUI needed more of everything: RAM, disk, graphics, and peripherals (think mice and laser printers).
Since Mac OS was the only platform at first, there was no cross platform need in the early GUI era. This fit with the resource constraints: to make things perform well, the closer to the platform the app stayed, the better. Windows was on the horizon, and after Excel 1.0 shipped exclusively on Mac OS, a team was put together to port it to Windows (including me). At this point Windows 1.0 had just shipped, with virtually zero ISV support, and the Windows team was given the charter to make sure Windows 2.0 would garner ISV support. In addition to Excel, the Windows team was keen to make sure Aldus PageMaker would work on Windows. I sometimes say that the Excel, PageMaker, and Windows teams wrote Windows 2.0 together. Another important piece of the Windows story happened during this time: in the second half of this project, Microsoft acquired Dynamical Systems Research Inc. and its programming team, including DavidW. There are many stories to that adventure, but this is a post about cross platform!
After shipping Excel 2.0 on Windows, with the OS/2 with Presentation Manager platform on the horizon, we started thinking about cross platform strategies for Excel. Steven and the tools team were biding time for C++ and other ideas to come together. Word was not yet shipping for Windows. So, the Excel team was doing this on its own.
Cross platform programming is an exercise in indirection. The “core” code is written to a model platform and the model platform is then implemented on a real platform like Mac OS. One of the biggest decisions to make, one that has ramifications on performance and stability, is “what should the model platform be?” Steven will have to speak about the thinking in the tools team, but I remember their attitude being that they should build an “ideal” model platform using OOP paradigms. For Excel we took a very pragmatic approach. Our design philosophy was to make the model platform implementable with as little code as possible, driven by the maxim, “Choose the mechanism which will suck the least on the other platforms.” The “Excel Layer” model followed the Windows model for eventing and windowing, and for graphics used a hybrid of the Mac OS model, with global GrafPorts but Windows-style handles for pens, brushes, and fonts. The memory management model was the HUGE pointer allocator that already spanned the two platforms from Excel 1.0 and 2.0. The goal was to have the core code contain zero platform specific code.
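The indirection described here can be sketched as follows. All names are invented for illustration, not actual Excel Layer code: core code calls only a model-platform API, and each real platform supplies its own implementation behind it, chosen at build time.

```cpp
// Invented names for illustration; not actual Excel Layer code.
// The model platform API is the only surface the core code ever sees.
void ModelDrawText(int x, int y, const char* text);

// One implementation per real platform, selected at build time.
#if defined(BUILD_FOR_WINDOWS)
void ModelDrawText(int x, int y, const char* text) {
    // would call something like TextOut against a device context
}
#elif defined(BUILD_FOR_MAC)
void ModelDrawText(int x, int y, const char* text) {
    // would call something like MoveTo/DrawString against the GrafPort
}
#else
// Sketch-only stub that records the last call so the example is testable.
int g_last_x = -1, g_last_y = -1;
void ModelDrawText(int x, int y, const char* text) {
    (void)text;
    g_last_x = x;
    g_last_y = y;
}
#endif

// Core code: identical on every platform, zero platform-specific calls.
void DrawCellValue(int row, int col, const char* text) {
    ModelDrawText(col * 64, row * 16, text);  // layout math is core code
}
```

The design question JonDe raises, "what should the model platform be?", is the choice of that middle API surface: lean toward one real platform and the others pay the translation cost.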
Why pursue cross platform programming? The intention is that most of the programming effort of an app is done as core code. People remember the mantra “Write once, run anywhere” and that was the hope for Excel. Any new feature would be written once. The core code bugs would be worked out on the first platform shipped and the subsequent platform versions would thus take much less effort to ship. The Excel team would have all platforms working up until the code complete stage of the development process, then focus on the first shipping platform exclusively (keeping the other platforms compiling and passing a basic battery of automated testing), then shipping the other platforms in turn.
The Excel Layer worked very well, and it was used to deliver Excel 2.2, 3.0, and 4.0 on all platforms including OS/2 when it arrived (and then not when IBM and Microsoft split). However, eventually Excel's layer stopped working well, as the platforms began diverging. From a memory management perspective, Windows with protected mode vastly outstripped Mac OS in terms of the size of applications it could support. Apple would fix this with OS X, but that was many years off. Another set of issues arose when feature areas with end user abstractions (think desktop, window, menu, scroll bar) were completely different on each platform. The best example I can remember is the Windows concept of OLE (Object Linking and Embedding) versus the Mac OS approach of Publish and Subscribe and, later, OpenDoc. The user models, semantics, and data models were so different that there was not a ready model platform approach to preserve the core code paradigm. Platform specific code proliferated in the core.
Along this timeline Microsoft acquired Forethought, Inc. and its app PowerPoint. PowerPoint also started on the Mac, then got a Windows-specific version, and then its own cross platform layer. Steven’s tools team shipped AFX for Windows (he will tell that story in much detail I am sure). Word used parts of that to ship a core code Word for Mac OS. Also important along this timeline is a mini era shift, away from stand-alone applications to office suites.
These factors led to the general displeasure of end users. People started complaining about “least common denominator apps” and the Mac version of Office stretched Mac OS to its limit. At this time, I was the VP responsible for Office. The Mac business was important to Microsoft. In those days SteveB (Steve Ballmer) would use “dollars per PC” (PC included Macs) as a top-level management metric and, ironically, because Office had a much higher usage percentage on Macs, Microsoft made more money per Mac sold than per IBM-style PC sold. It was clear that Office’s cross platform strategy was no longer making good enough applications for Mac OS. So, I made the decision to abandon the strategy and formed an independent Mac Business Unit led by BenW (Ben Waldman).
That, my friends, is a real-world life and death story of one cross platform strategy. To compete in an Era, Platforms start with the similarities applicable to the Era, but then diverge as they compete for dominance of the Era. Cross platform approaches work for a while, and then they don’t.
Hi Steven, great chapter. Talking about Excel and cross platform: Excel 2 for Windows (the first version for Windows) had x86-specific assembly code for certain parts, like the SUM function, so that Excel would win shootouts in the PC press at the time. The press would compare how fast Excel could add a column or row of numbers against other spreadsheets like Lotus 1-2-3.