In the Excel team, on a lark, after shipping Excel 2.0, we all sent our resumes to Lotus to apply for a job. We all received rejection letters and proudly posted them on our relight windows.
I was the VP of Marketing at NeXT in 1991 and 1992, working for Steve. We completely pivoted our marketing after I arrived, punting on trying to sell to prosumers and focusing 100% on businesses and government building mission-critical custom apps that they wanted to deploy on a Mac-like workstation. That was our niche, and we doubled down on OOP, Interface Builder, DBKit, etc. We really focused on Sun as our competition, not Windows and certainly not OS/2. We launched a crazy ad campaign in the WSJ pushing the video, and it not only worked, it really pissed off Scott McNealy, who accused Steve and me of "immature marketing". Perhaps my proudest accomplishment as a marketer. Here is the video:
https://youtu.be/UGhfB-NICzg
That's an awesome accomplishment. Ballmer always loved the McNealy scuffles because of the Detroit-area connection. That whole schtick of his about how they didn't use slides and just had markers and an overhead projector used to drive me nuts!
Also, in 1992, Steve and Jim Allchin both appeared (separately) at "ObjectWorld". Steve gave our awesome pitch/demo, which included smoke and a SubZero-sized Teradata coming up on a riser. Allchin came on stage and called Steve "Pinocchio". I am not making this up.
My favorite relight artifact was a 'Zappa for President' bumper sticker. I left it there, on my next office move, and was delighted to see it a few years later stuck to one of those gray Helpdesk carts.
I was definitely on the outside during all of this. I had ended my first tour at Xerox and decided to become a nano-sized ISV, working first with CP/M-80. My gigs were actually with local enterprises, and only a couple had me code (badly). I did publish a patch to Turbo Pascal (which I teased Anders about years later) and also a version of the CP/M command processor called EZCPR that was a load-and-stay-resident fixture. It was freeware. A guy who published a manual for it, which he sold along with the disk, asked me to add some extensions that his customers wanted. He didn't offer me a dime. That stopped me in my tracks. I just gave up.
I was a big Borland fan, starting with Turbo Pascal and moving to Turbo C and also Paradox. I remember Philippe being hostile to C++, but then came BCC. The cool thing was the templates for making CUA applications that built easily and ran on Windows, not Cmd.exe, something VS never seemed to get right (but maybe Microsoft Terminal and the related APIs have finally made up for that).
On CompuServe, the closest we got to Stack Overflow back in the day, Borland had a great presence and encouraged the forums. Microsoft came late to that, essentially requiring proof-of-purchase to play, though it corrected rather quickly. I suspect that this was not delegated to v-badges the way too much developer-facing stuff has been recently. This was the era of the Apple IIe, and those fans were relentless.
I now understand the dog-food problem better with respect to developer tools. I fear it slanted the tools toward folks who needed to build Windows and bulky Microsoft apps, though. As far as I am concerned, the killer developer bloat case was and is windows.h. Forget about dependencies. In my thinking, precompiled headers are the wrong solution to a real problem. The Borland lean-ness is missed.
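For anyone who hasn't fought windows.h lately, here is a minimal sketch of the usual diet, using only the documented WIN32_LEAN_AND_MEAN and NOMINMAX switches; it trims what the header pulls in rather than papering over the cost with a precompiled header.

```c
/* Minimal sketch: trim windows.h before including it.
 * These are standard documented macros, not a fix for the underlying bloat. */
#define WIN32_LEAN_AND_MEAN   /* drop rarely used subsystems: cryptography, DDE, RPC, shell, winsock */
#define NOMINMAX              /* keep the min/max macros from colliding with other code */
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    MessageBoxA(NULL, "Still a big header, just less of it.", "windows.h", MB_OK);
    return 0;
}
```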
Of course, Borland would miss the window for getting Windows versions of Paradox (oh, hi there, Microsoft Access) and a Windows-generating Turbo Pascal (oops, hello Java and eventually .NET). That was hard to watch. That the son-of-Turbo Delphi continues to have adherents is remarkable.
stevesi great book so far. looking back in history, i wonder how much time we all could have saved if we had invested more resources in better dev tools and build farms. think of all the time spent waiting around for compiles, links, builds, source control, and cross-dev transfers. Or even just a way to run an application in protected mode where you could fault on dereferencing a null pointer. If an SDE was $40K, then it should have been easy to buy racks stuffed with $40K of PCs, but that wasn't the thinking, and budgeting made it almost impossible. It'd be interesting to see a historical chart of the round-trip time to make a single source change, compile/link, and see it running.
Jon, Duane, Rick and others I am sure have tons of thoughts on this. The old Xenix build system was one way of using a different class of hardware/OS for builds early on. I like to think we were using state of the art -- things like parallel compiling, incremental linking, and so on were being developed through the 90s. Of course there was no easy way to even run "racks" of servers then without investing huge amounts, which we did for testing and compatibility in addition to builds over time.
BillG was relentless in wanting to understand this. Later on in the book I talk about some work I did for him on the topic. The NT team really led a level of scale on this topic. Then Office super scaled itself in the late 1990s (with Office 97).
If I were writing a book from Jon's perspective I would talk about the checkin tests and scaling that--it was a huge part of efficiency while also keeping things going. Much like today's CI/CD.
I can keep going but I bet one of the other folks has more to add!
One of the benefits of protected memory on Windows 3.x and OS/2 was immediate feedback on bad pointers. NULL was one case; running off the end of a memory block was another. In debug builds we had modes of memory management where every allocation was its own segment (or, in 32-bit, its own page with the next page left unallocated). We also used fill values in unused memory to detect when someone wrote where they should not. All of this was in debug builds only.
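For readers who never used those debug builds, here is a rough sketch of the "own page with the next page unallocated" trick in modern Win32 terms. The function names and fill value are mine for illustration; the original tooling was internal and segment-based on 16-bit, not this code.

```c
/* Rough sketch of a guard-page debug allocator, assuming Win32. */
#include <windows.h>
#include <string.h>

#define FILL_FRESH 0xCD  /* assumed fill byte for freshly allocated memory */

void *dbg_alloc(size_t cb)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    size_t page  = si.dwPageSize;
    size_t pages = (cb + page - 1) / page;

    /* Reserve the data pages plus one trailing page that is never committed. */
    BYTE *base = (BYTE *)VirtualAlloc(NULL, (pages + 1) * page,
                                      MEM_RESERVE, PAGE_NOACCESS);
    if (!base)
        return NULL;

    /* Commit only the data pages; touching the page after them faults. */
    if (!VirtualAlloc(base, pages * page, MEM_COMMIT, PAGE_READWRITE)) {
        VirtualFree(base, 0, MEM_RELEASE);
        return NULL;
    }

    /* Fill the committed range so reads of uninitialized or slack memory are
     * obvious in the debugger, then right-align the block against the
     * uncommitted page so off-by-one writes fault immediately.
     * (A real allocator would also round cb up for alignment.) */
    memset(base, FILL_FRESH, pages * page);
    return base + pages * page - cb;
}

void dbg_free(void *p)
{
    /* Release the whole reservation; a real tool would track base addresses. */
    MEMORY_BASIC_INFORMATION mbi;
    if (p && VirtualQuery(p, &mbi, sizeof(mbi)))
        VirtualFree(mbi.AllocationBase, 0, MEM_RELEASE);
}
```

The design point is the same one described above: writes past the end of the block land on an inaccessible page and fault on the spot, and the fill pattern makes uninitialized reads stand out in a debugger.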
We were always jealous of the quick turnaround of the commercial tools, but we had too many tricks built up to move. Eventually we had precompiled headers and incremental linking that saved a ton of time in code turnarounds. Before that, we encouraged people to batch changes so that turnaround time was more leveraged. This worked well for bug fixing, where instead of recompiling for every bug fix one could recompile after diagnosing and fixing several.
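For readers who came later, a generic sketch of what the precompiled-header arrangement looks like; the pch.h/pch.c names and MSVC-style flags are illustrative, not the actual internal build.

```c
/* pch.h -- generic precompiled-header sketch (names and flags assumed).
 *
 *   Build the PCH once:   cl /c /Ycpch.h /Fppch.pch pch.c
 *   Reuse it everywhere:  cl /c /Yupch.h /Fppch.pch file1.c file2.c ...
 *
 * The heavy, rarely changing headers are parsed once into pch.pch instead
 * of once per source file, which is where the turnaround savings come from. */
#pragma once
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
```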
The biggest time waster was when the main build was broken. If there are no protections built into the code check-in process, one person can derail the entire team for hours, and the larger the team, the higher the probability that the build is broken.
Core code introduced more combinations and more opportunities for error, since all platforms needed to keep working, so a lot of resources were invested. Developers had multiple machines so that they had the resources to run tests before checking in code. Every night developers would run the "home" build script that would sync the main branch into the developer's local code, build all of the platform combinations in debug and RTM versions, and test the resulting build on each platform. Build labs were created to verify that the union of the day's check-ins didn't conflict with each other, to keep errors from flowing into developers' offices when they synced.
When the shift from individual apps to Office occurred, the Office shared code teams had a new set of combinations to keep all of the apps running every day as well.
The test teams also started growing automation labs. So eventually we did have huge labs running builds and tests continually.
As the Office team grew and the builds were unified (i.e., one consolidated build lab building Word, Excel, PowerPoint, etc. each night), I remember a manager (I think it was TerryC) saying that if each developer broke the build only once a year, the Office build would be broken every day. Kirk was the most notorious buildmaster: you did not want to see Kirk outside your door around 6 pm; it meant you had checked in something you shouldn't have! There were of course smoke tests you were supposed to pass prior to check-in, but they often had odd environmental dependencies and so some would fail semi-often. You'd convince yourself "I could not possibly have broken that test, it must be broken for other reasons" and then check in, only to find Kirk at your door... Developers trusting the tests really matters.
i can only give the perspective from the apple side of things. the change was drastic - the system was small, the assembler was fast, one person worked on each subsystem. but the hot idea was to move to OOP and in hindsight it just killed turnaround time - Cfront was slow, linking, headers, source control - every little thing slowed to a crawl. At the same time, the Mac II came out in 1987 and better use of the MMU was just a missed opportunity. Some people did some of the memory guards Jon mentions below but it was ad-hoc and not system wide.
I am surprised to hear Steven say that the app teams and Excel in particular were looking in any serious way at the Borland tools. The reality was the CSL compiler had a raft of special features and our only hope of moving to a commercial tool was getting the Microsoft C team to add the features we needed. This was the first set of requirements that came from being the earliest GUI app developers. Because of the early performance constraints a lot of "tricks" were used that became barriers to moving to commercial tools. Eventually this was all ironed out, but it was thought to be quite a barrier at the time. About this time the application code size was starting to press the limits of the CSL P-Code system and we really needed commercial tools.
Technically it was the linker, not the compiler. The Excel project was getting big, and the apps Tools team was under resource pressure to stop investing in proprietary tools while at the same time the C Tools group was under pressure to win over the internal teams. It was *very* busy keeping the Systems teams, particularly the NT team, happy. We're still 5 years away from Excel and Word getting rid of P-Code. Crazy to think about. But the specter of Borland was definitely used by management to torment the Languages team, which was given a mission to get Microsoft internally using its own tools.
This is all a great example of being the new person and not quite having a full view of what was going on and applying hindsight as I try to understand it now. Love it!