203. Software As Part of Everything Is Regulated…As Part of Everything
The race to declare the need to distinctly regulate AI software fails to acknowledge all that is already regulated and how we got to the software opportunity in the first place.
Originally posted on Twitter.
Software is unique in many ways, but one of its most distinctive aspects—the soft part—is that it morphed into a tool for every human endeavor. What started out as a way to do math permeated every aspect of our lives. How did that happen? 1/
2/ Well, it took a lot of smart people. Until about 1970 or so there wasn’t even college training in “programming”. Yet software had already landed us on the moon, paid Social Security and Medicare, and was routinely involved in running the global business world.
3/ This could have been very risky. Importantly, however, the software component of every effort was governed by the regulatory, safety, and reliability framework of that work.
In other words, there was no software free pass.
4/ This was not a bug, but a feature. It enabled every effort to incorporate software at its own pace and in the ways it saw fit. It meant we collectively did not need to invent all-new ways of doing everything. Software helped everything improve while continuing to function.
5/ For me, a good deal of the concern about AI/LLMs/etc. is that somehow AI will get a free pass and skirt the long-established regulatory frameworks that already govern software. At the extreme, AI will simply be connected to some dangerous infrastructure and take it over.
6/ It is difficult to find a need to independently regulate AI when any use of AI follows from the use of any software in any endeavor. This is especially true for dangerous work regulated by the FAA, OSHA, NHTSA, or NRC, or by everything from local building codes to the Geneva Conventions to the IAEA.
7/ Over 60 years we’ve seen software go from an oddball feature of hardware, done by math and EE majors, to permeating everything with enormous benefits. All developed by anyone with an idea. A big part of that was not treating software separately from what it was intended to improve.
8/ AI-software providers have a responsibility. They must not pretend that AI-enhanced software is exempt from the regulation governing the industry it is enhancing. This seems remarkably straightforward. Everything from safety to bias to misinformation is already covered.
9/ Science fiction provides a lot of surface area for creating hypotheticals. The Industrial Revolution and then the Information Age provide a roadmap that unleashed innovation and provided for corrections along the way. Even the exceedingly rare software tragedies were dealt with this way.
10/ So much of what made computing happen was the openness and accessibility of state-of-the-art tools. Everyone from kids like me to scientists in labs to domain experts with problems to solve was able to tap into the latest and best tools and create software.
11/ AI has the potential (though not yet fully demonstrated) to be the next generation of amazing tools, not only for writing software but for creating words, images, videos, and products of all sorts.
I hope the next generation does not lose the opportunity we had. // END
I enjoyed the post. But I wonder if the power of AI in the hands of a desperate state, institution, or individual can be harnessed to invent a weapon of destruction so innovative that countermeasures cannot be created in time to prevent the damage it could inflict?
The only thing I can see potentially needing some add-on regulatory attention (and here I'm speaking of AI-enabled medical tech, think provider assists in EHRs) is traceability of results, i.e., ensuring that recommendations/responses reliably and **transparently** follow the evidence.
But then, I suppose this is just an extension of the current regs around traceability, isn't it?
It's a fascinating time for s/w. Makes me wish I were 20 years earlier in my career...