211. Regulating AI by Executive Order is the Real AI Risk
The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation.
Please see this open letter sent to President Biden with respect to open-source software and AI signed by leaders in open-source AI.
This week President Biden released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” as widely anticipated.
I wanted to offer some thoughts on this because, as a technologist, a student of innovation, and an executive who long experienced the impact of regulation on innovation, I feel there is much to consider when seeing such an order and this approach to technology innovation.
Unlike past initiatives from the executive branch, the first thing I noticed is that this was in fact an Executive Order or EO. It was not a policy statement or aspirational document. This was not the work of a leader of science like Vannevar Bush working through an office like the Office of Scientific Research and Development writing “As We May Think”.
Instead, this document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law—literally, what can be done legally to throttle artificial intelligence without passing any new laws. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate the document from the process and approach used to “govern” AI innovation. Govern is quoted because it is the word used in the EO. This is much less a document about what should be done with the potential of a technology than it is a document pushing the limits of what can be done legally to slow innovation.
You have to read this document starting from the assumption that AI needs to be regulated immediately and forcefully, and regulated without the accountability of the democratic process. It doesn’t really matter what view you have of AI, from accelerate to exterminate; knowing the process, one has to be concerned. Is AI truly such an immediate existential risk that the way to deal with it is to circumvent the democratic process?
There are three elements of executive orders that are critical to understand:
They are not based on first principles and are politically expeditious approaches to getting the government to do something. They do not look at what should be regulated. They are focused on what can be regulated.
They are not accountable the way laws are. A party does not challenge an executive order on the merits or constitutionality of the contents of the order, but rather on whether the order was an overreach of the specific authority granted to the executive branch in perhaps some hardly connected law on how an agency is run. If you are one to distrust putting the power of government in the hands of unelected bureaucracies, then executive orders should be a major red flag.
The creation of an executive order is not subject to the same levels of transparency and accountability as is the legislative process. There’s no easily discovered history of debate. There’s no clear view of inputs or sources of information to later challenge. There’s no mechanism for competing interests to weigh in on the effort before it is the “law of the land”. If you are one to argue against money and influence in politics, then on principle you should loathe executive orders.
Now of course if you like the results of an executive order then these are all great. Unfortunately, the history of executive orders is one of them becoming increasingly important as the legislature becomes less effective or writes bills that are increasingly vague. We saw President Trump reverse a bunch of President Obama’s executive orders on his first day. Many people cheered. Then we saw President Biden reverse a bunch of President Trump’s orders on his first day. Many different people cheered. This might be fun politically, but when you consider innovation it is a disaster. Such an approach is akin to trying to build something amazing while working under a constant threat of a reorg or having resources pulled out from under the team. EOs are not a way to govern effectively in general, and they are specifically not a way to “govern” innovation.
The lens I used when reading the document was to imagine being around in the 1970s at the dawn of the microprocessor, the database, and the technologies that became the foundation of the internet we know today. At each of those moments there were people terribly concerned about what could be. Books were written. Science fiction resulted. Movements were created. At the time, however, the government and industry were focused on innovation. They were not focused on issues ancillary to innovation or on turning science fiction or political fears into a less worrisome product roadmap.
Imagine where we would be if at the dawn of the microprocessor someone was deeply worried about what would happen if “electronic calculators” were capable of doing math faster than even the best humans. If that would eliminate the jobs of all math people. If the enemy were capable of doing math faster than we could. Or if the unequal distribution of super-fast math skills had an impact on the climate or job market. And the result of that was a new regulation that set limits for how fast a microprocessor could be clocked or how large a number it could compute?
What if at the dawn of the internet the concern over having computers connected to each other and becoming an all-knowing communications network resulted in regulations that set limits on packet size and speed, or the number of computers that could be connected to the network or to each other?
Perhaps databases were the risk you saw the most. It was decided that the risk of big databases that could contain more knowledge and retrieve it instantly better and faster than any human ever could was so great that strict limits needed to be placed on databases such that only IBM would be allowed to make databases and everyone else had to keep databases under a certain size, unless they submitted to rules and regulations that were so burdensome that no small company inventing new database techniques could build enough capability and raise enough capital to compete with IBM. And for fun, IBM created the rules for databases and was instrumental in developing the regulatory framework, so they had a head start in the whole process and knew all the players.
These are not fantasies. There were many people that would have loved to have put in place these sorts of limits. As mentioned, books were written. Movies were made. Universities debated the existential risk to humans and society because of the rise of computers, networks, and databases.
Yet the optimism in the government at the time and the desire to enable innovation to take us to better places was the dominant mode of thinking. Optimism for technology ruled the day. Government agencies were created to build new things. To invent new technologies. At the same time, we had agencies in place to regulate. We just didn’t have regulations for how to invent and innovate. We regulated problems we knew about, not inventions we hoped not to have.
We don’t have that today. Rather we have a whole new approach, which is to stifle innovation before it can happen. In the 1990s there was an industry buzzword called Knowledge Management, which was all the rage about how companies could use knowledge to their advantage using new software tools. I was teaching then. A student asked me point blank, “Who are these people that think knowledge is something to be managed?” They continued, “Isn’t knowledge something that should be set free, used by everyone, and made available to everyone?”
AI is a technology buzzword today, and its early tools look a lot like what was going on 25 or 50 years ago. Is it more exciting than innovations back then? If you were around back then it is hard to say, as the microprocessor was pretty cool, but only to a very small number of people who could even use one. AI is exciting, at least in part, because so many people can use it immediately and see magic immediately. To see the magic of a microprocessor, database, or the internet in 1980 you had to be in a very small community. The irony of all the concerns about AI today is that the reason so many people can immediately see the magic of the new technology is because of the microprocessor, database, and the internet. We can see the potential of AI because it is built on the innovations that were allowed to take place without the heavy hand of government managing them like something to be kept in a special jar, doled out for approved uses to entitled people.
Section 1 of the EO says it all right up front. This is not a document about innovation. It is about stifling innovation. It is not about fostering competition or free markets but about controlling them a priori. It is not about regulating known problems but preventing problems that don’t yet exist from existing. The President says it right here this way:
My Administration places the highest urgency on governing the development and use of AI safely and responsibly.
I’ve read “As We May Think” many times. I’ve read and was there for the internet RFCs. I was there for the introduction of the Apple ][ and IBM PC. No one thought the first thing that needed to happen was that the highest levels of the federal government needed to step in with “urgency” to govern the “development and use…safely and responsibly.” It boggles the mind.
And don’t for a minute think there were not people certain that these products had the potential for abuse. The film “2001: A Space Odyssey” by Clarke and Kubrick, Roddenberry’s Star Trek, and Asimov’s “I, Robot” all came a decade or more before these inventions. All of them had informed and rational concerns, even fears, for technology, yet all were in the end optimistic. Perhaps most ironically, many overly focused on the easily seen dystopian technologies of “2001” but the real message was the fallibility of humans at the root of the problems and how the problems of humans would be addressed by the relentless forward march of technology. The Star Trek episode “The Ultimate Computer” is, in spite of the name, a triumph of man over machine, a machine specifically built to eliminate jobs (Captain Dunsel) and be smarter than humans.
The best, enduring, and most thoughtful writers who most eloquently expressed the fragility and risks of technology also saw technology as the answer to forward progress. They did not seek to pre-regulate the problems but to innovate our way out of problems. In all cases, we would not have gotten to the problems on display without the optimism of innovation. There would be no problem with an onboard computer if the ship had not already traveled the far reaches of the universe.
And for the record, you can bet that IBM would have been more than receptive to the government regulating the personal computer or database had it expressed any inkling of doing so. They would have been more than happy to provide their expertise in explaining exactly how to control the technology to their advantage.
According to the Order, “The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.” It is not just that the government places the highest urgency on the problem, but it is literally “compelled” to act now because of the pace of innovation. No matter how fast you believe AI is advancing, it is not advancing at the exponential rates we saw in microprocessors, what we all know today as Moore’s Law, or the growth of data storage that made database technology possible, or the number of connected nodes on the internet starting in 1994 due to the WWW and browser.
I would argue that we do not even have a reliable measure of AI capabilities, let alone the velocity, direction, or acceleration of those capabilities. I know the government is not talking about the rate of AI hallucinations or the lack of convergence of AI towards the truth. A good number, perhaps a majority, of AI researchers do not even see convergence happening with current implementations. Most importantly in the context of an Executive Order, it is clear there is no consensus in Congress over the urgency of the matter, as it has not rushed to create legislation.
Section 1 of the Order concludes:
We are more than capable of harnessing AI for justice, security, and opportunity for all.
Technology does not do these things intrinsically. People do. Most technology use doesn’t even touch on these characteristics, as high-minded as many in the industry like to talk. This kind of language presumes a future that can never exist, which is an a priori determination that only good must come of something new. Whether this Order wants to admit it or not, it is people that are most responsible for when something is used for “good” or “bad,” and often people face off with the government, not the other way around.
We sit in a country surrounded by nuclear warheads, but we have few nuclear power plants equally capable of harnessing the power of the atom. The choice to deploy physics in this manner was entirely that of the government, not of people. We are here today in this spot because the government decided what was best. A lot of people today remain committed to generating power from the atom, but the government stands in the way of that innovation.
Had the government not backed down from deciding that encryption was not the equivalent of “justice, security, and opportunity for all” we might be in a world without end-to-end encryption or even secure online commerce. Unlike previous technologies described, it was the government choosing the use cases that altered the path of technology. The same body that wants to stand in front of AI.
We see this happening in real time with the decade-old technology of image recognition, specifically for faces of humans. The executive branch stepped in because of a view that the technology was some combination of unjust or lacking opportunity for all. As a result, we are behind in the commercialization of this technology for a myriad of uses in homes and businesses, from recognizing family members returning home to stores offering easier access and a friendly welcome for frequent shoppers. Yet the government has a monopoly on this technology, slowly requiring it in airports for example, while remaining the only entity to have verified photos of every individual. Between photos, license plates, and passports the government is now the exclusive user of tracking and privacy-invading technology against innocent people not even accused of anything. It is no accident that the Executive Order specifically maintains that the government will continue to maintain a monopoly on technology that goes beyond what non-government actors can employ. Even if one believes this scenario to be constitutional, creating this use case by circumventing the Constitution is more than suspect.
The Order is about restricting the “We” to the government and constraining the “We” that is the people. Let that sink in.
As with all executive orders, there is a lot of content that is the product of making sure every constituency is mentioned. At first this seems benign. Of course, it is important that any effort the government embarks on should be “safe and secure” or “responsible” and so on. In the hands of skilled politicos in Washington, however, these phrases form springboards for further regulations and empower the bureaucracies given actual authority to go above and beyond. Specifically, these allow agencies entirely unrelated to technology or innovation to become part of the efforts to slow down, gum up, or entirely thwart a product or service. It is not worth speculating here other than to say that absent any specific authority you can bet every agency looking to be involved in the latest and greatest technology will be sharpening pencils in an effort to insert itself into the workings of this Order.
There is an example worth calling out in this regard, and that is the pivot from AI as technology to AI as a key tool for reorganizing the workforce. This concern for jobs was also a major concern with the rise of the PC and then the internet. The concerns were rather extreme. Many saw the computer, for example, as putting typists, accountants, bookkeepers, graphics producers, and more out of work immediately. Then along came the word processor and spreadsheet and an immense explosion in the number of employees participating in those tasks with advanced skills. When training was needed, businesses, then the private market, then even community colleges stepped in, upleveling everyone in the economy. Being able to use a spreadsheet went from a highly coveted resume skill to an assumption in just a couple of years. Those worried that in the near term AI simply eliminates jobs, and that magic software replaces a human, should experience self-checkout at a grocery store or see that Amazon Go stores switched to having more humans work in them. To think this won’t happen with marketing professionals or lawyers using AI, at least for some projectable future, is to be making a baseless projection at a time when the President is placing specific constraints on the technology.
Disruption in the labor pool is enormously difficult on specific individuals. History has shown that the needs of industry will quickly move to fill in gaps, entirely out of self-interest. The lack of support or even faith in the free market combined with a view that new tools reduce labor both lack supporting evidence from any past waves of technology. So far technology innovation has proven to be a creator of jobs, not a destroyer of them. It is also a reorganizing force in work that in doing so creates different jobs and importantly new products and services for everyone.
In legislation, many challenges and deficiencies boil down to definitions. Often there are vigorous debates in drafting laws about defining terms resulting in a trail of records helping future citizens know what was going on at the time. The Supreme Court often ends up hearing cases that hinge on what exactly a definition was meant to encompass. Section 3 of the order has no such trail at all. It simply presents a definition as though it is something generally agreed upon. Unlike legislation there will be no ability to challenge this definition. It simply became the working definition. Yet this is what AI is defined to be:
The term ‘artificial intelligence’ or ‘AI’ has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
When I read this, you know what else this applies to? A typical spreadsheet model in finance that issues buy/sell advice based on stock quotes. The kind that sold for the original PC in 1981 when you hooked your 256K monochrome PC with a 300 baud modem up to a Dow Jones information service.
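To make the breadth concrete, here is a minimal sketch of exactly that kind of moving-average buy/sell advisor; it is my illustration, not anything in the Order, and the legal reading is mine as well. Held against the definition above, even this is arguably a “machine-based system” making “predictions, recommendations, or decisions influencing real or virtual environments”:

```python
# A toy buy/sell advisor of early-1980s sophistication. Under a literal
# reading of the 15 U.S.C. 9401(3) definition quoted above, even this
# arguably qualifies: human-defined objective in, recommendation out.

def moving_average(prices: list[float], window: int) -> float:
    return sum(prices[-window:]) / window

def advise(prices: list[float]) -> str:
    """Recommend buy/sell/hold from a simple moving-average crossover."""
    if len(prices) < 10:
        return "hold"
    short = moving_average(prices, 3)   # short-term trend
    long_ = moving_average(prices, 10)  # long-term trend
    if short > long_:
        return "buy"
    if short < long_:
        return "sell"
    return "hold"

quotes = [31.0, 31.2, 30.8, 31.5, 32.0, 32.4, 32.1, 32.9, 33.2, 33.8]
print(advise(quotes))  # -> "buy"
```

Nothing about this is “intelligent,” which is the point: a definition this broad sweeps in software we have shipped for forty years.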
The law 15 U.S.C. 9401, the National AI Initiative Act of 2020 (NAIIA), was a spending bill for programs designed to foster artificial intelligence, not constrain it. As such, using this definition to provide various agencies funding to encourage AI is fine, if broad and potentially wasteful. In this context, however, the definition is crucial—it determines whether or not a future technology will exist and whether the innovation trajectory will be aided or restricted by the bureaucratic agencies defined further in the Order. Context matters, and simply cutting and pasting a definition is not in the best interests of us all.
It is unfortunate that good work like the NAIIA will be overshadowed by this Order, which seeks essentially to rein in the very work the government was encouraging just three years ago.
A similar leap that will greatly constrain innovation resides in Section 3(k) of the definitions, where the Order specifically carves out models of “tens of billions of parameters” that could “be easily modified to exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety.” It goes on to list specific cases where such risks are unacceptable, including: “design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons”, “powerful offensive cyber operations through automated vulnerability discovery and exploitation”, or “evasion of human control or oversight through means of deception or obfuscation.”
It is difficult to read these concerns because on their face they are quite worrisome and legitimate concerns for a government. On the other hand, this is precisely the kind of language that was used to temporarily ban encryption. Upon reading these concerns one has to wonder how an encyclopedia ever came to be. What is so interesting here is the needle-threading going on, the same kind that happened with encryption. The government is using the fact that it cannot ban the research, publication, or dissemination of the code or data used, because those are clearly “free speech,” but if one packages those up as a compiled product in software then it becomes something to ban. This is not as straightforward as the case of banning chemicals to prevent the drug trade, but also not far off from that.
We know for a fact that even the inferior or “bad” GPT (Generative Pre-trained Transformer) models already do these things. It doesn’t matter how many parameters they have or if they are popular or not. The internet already does all these things without GPT technology. The core question in the context of the Order is precisely what is going to happen as a result. If anything can be constrained by this Order, then it will be, and from the outset everything that exists already fits within these definitions. No innovation will be required.
It is for this reason that the potential for government actions we’re starting to see around open-source models are so critical. Should open-source models be attacked on the basis of these above definitions then there are decidedly “free speech” issues in play. The famous t-shirt that contained the code for “illegal” encryption technology positioned as a “prohibited munition” is exactly the type of absurdity this order is backing into. I think it is doing so intentionally. The Order wants to create the situation where even open-source advocates are forced to litigate. Do you know who else wants to thwart open source? The incumbent technology leaders who provided the input into this executive order through their lobbyists and contacts in the White House.
I love the phrase “Trustworthy AI” because to my ears it plays on “Trustworthy Computing” an initiative created by Microsoft 20 plus years ago known as TwC. There is a big difference though. Two decades ago, the PC was already 20 years old. The failures of the PC that led to the need for Trustworthy Computing were well-known and based on experience—experience that came from the rapid innovation and incredible growth in an ecosystem of millions of developers creating PC software and hardware tools around the world. When we spoke of the limits we would build into Windows and Office, we spoke about known behaviors and problems.
“Trustworthy AI” is like the precrime unit applied to the PC. Even in 2000 when TwC was created we joked about some of the problems that would have arisen had we actually tried the initiative decades earlier when we did see the first problems. A favorite, which might even ring true today, was what would have happened had we forced software to receive patches and updates to be secure. In 1990 we completely lacked the engineering discipline, connectivity, or testing tools to reliably do so. The very act of “patching” software was a hit-or-miss activity at best.
Today’s AI is about as mature as the PC was in 1990—no networking, no USB, 2MB RAM, and so on. Like the PC, it hardly works as intended. It isn’t just that we collectively do not understand the problems experienced with AI, though we can point to some that look like big problems. It is that the smartest people in the industry lack the capabilities and tools to diagnose and repair those problems properly, reliably, and consistently. What is needed is not a rush to mandate that the problems be solved but, and this should be no surprise, more innovation.
To pick one example, AI will not solve the hallucination problem by mandating that AI must always be trustworthy in its generated answers. That’s akin to mandating that software in 1990 be certified as “bug free”. Anyone in the field would have laughed at such a requirement (as I personally did many times). The state of the art did not possess such a solution as much as everyone wanted one. The state of the art in AI today precludes almost all of the elements of trustworthiness described in the Order. One could look at this as a horrible problem and conclude that no one should use the technology at all, ever. Or one could say that the best bet is to support the use of the technology and see what the world, aka the free market, sees as uses where those limitations are either features themselves or just acceptable risk. As it turned out, even though computers were very good at math, spreadsheets ended up having bugs. Humans learned to check results, and the makers of spreadsheets became intensely focused on making sure that math worked in spreadsheets. While there were errors that were indeed terrible, the vastly overwhelming use of spreadsheets went to good uses for humans around the world. That happened without pre-regulating spreadsheets.
The entirety of Section 4 “Ensuring the Safety and Security of AI Technology” is really a “guilty until proven innocent” view of a technology. It is simply premature. People are racing ahead to regulate away potential problems and in doing so will succeed in stifling innovation before there is any real-world experience with the technology. This section is a result of the fantastical claims of the technology doomsayers having won the ear of the White House. These are exactly the types of advocates that did not win over the White House at the dawn of the Information Age.
Many of the sorts of tests, guidelines, certifications in this section are expressed through existing regulatory frameworks. This has two advantages:
It makes it seem like AI technology is ready to be regulated and at the same time it makes it seem like there is legitimacy in a wonderful organization like NIST taking the lead. I love the work of NIST, but NIST works best when the problem and measurement space have some practical and real-world experience upon which to build tests and standards.
It is precisely the reason these proposals are made in the context of an Executive Order. Rather than see a problem and make a solution, the only way to proactively engage in regulation before there is any identifiable problem is to define the technology itself to be a problem and establish jurisdiction for the regulators. EV cars are regulated exactly like cars because that is what is required to ride on a road. Imagine if, before the first EV hit the road, EV cars were regulated by the definitions of cars as decided by the makers of gas combustion cars. It is easy to see how they would have prevented EVs from ever hitting the road (actually, they tried to do that).
The absurdity of the idea of letting the existing players come up with ways to measure what needs to be regulated, and how, is on full display in Section 4.2(b)(i), where the document suddenly becomes strangely specific about the architecture of systems to be regulated. If you’re like me, the first thought you have when reading this is “where the heck did this come from?” And then after that I immediately thought about all the ways this doesn’t make sense. The section reads:
(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and
(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
As a government person one could immediately see the appeal of such specifics. They paint a clear target on what systems should be regulated. Like regulating MPG or horsepower on cars, concepts that make no sense in an EV, the regulators really want hard lines and boundaries to regulate.
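To see how arbitrary that hard line is, here is a back-of-envelope sketch. It assumes the commonly cited heuristic that training a dense model costs roughly 6 × parameters × tokens in floating-point operations; that heuristic and the example runs are my assumptions for illustration, not anything specified in the Order:

```python
# Back-of-envelope: which hypothetical training runs would cross the
# Order's 10^26 operation threshold? Uses the rough community heuristic
# FLOPs ~= 6 * parameters * tokens; the Order itself says nothing about
# how operations are to be counted.

THRESHOLD = 1e26  # Section 4.2(b)(i)

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute for a dense model."""
    return 6 * parameters * tokens

hypothetical_runs = {
    "70B params on 2T tokens": training_flops(70e9, 2e12),      # ~8.4e23
    "500B params on 10T tokens": training_flops(500e9, 10e12),  # ~3.0e25
    "1T params on 20T tokens": training_flops(1e12, 20e12),     # ~1.2e26
}

for name, flops in hypothetical_runs.items():
    verdict = "reportable" if flops > THRESHOLD else "below the line"
    print(f"{name}: {flops:.1e} ops -> {verdict}")
```

The line appears to sit just above the largest publicly estimated training runs of late 2023, and nothing in it anticipates algorithms getting more efficient at the same capability.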
As a technologist one immediately sees the absurdity of this section. Who has not worked on a system that had to be completely rearchitected because it contained hard-coded assumptions? Conversely, how many computer systems of the past got left behind because they presumed limitations that were outmoded by the time the system was in widespread use?
I like to imagine the pre-emptive spreadsheet regulations that looked at current sheets and decided that the biggest spreadsheets had 255 rows and that sounded like a hard limit because after that the sheets would be too complex for humans to fathom, not realizing of course that the limit arose out of 8-bit integers in use in the 8086 processor that would be outdated in just 5 years. Or worse, that floating point numbers should be restricted to 18 decimal places of accuracy because any more than that and calculations would be too precise, not realizing that was just the limit of the current floating-point coprocessors. In both those cases, as every computer scientist knows, these can be easily worked around by every programmer’s best friend, a level of indirection.
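For readers who have not had the pleasure, here is a toy sketch of what “a level of indirection” means in practice (my illustration, nothing from the Order or the essay): if each storage cell is capped at a small value, represent big numbers as lists of small ones, and the cap stops mattering:

```python
# Toy illustration of "a level of indirection": if the hardware (or a
# regulation) caps you at small integers, represent big numbers as a
# list of small ones and the cap stops mattering.

BASE = 10**4  # pretend each "cell" can only hold values below 10,000

def to_cells(n: int) -> list[int]:
    """Split a large number into base-10^4 cells, least significant first."""
    cells = []
    while n:
        cells.append(n % BASE)
        n //= BASE
    return cells or [0]

def add(a: list[int], b: list[int]) -> list[int]:
    """Add two cell-lists with carry; no cell ever exceeds the cap."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(total % BASE)
        carry = total // BASE
    if carry:
        result.append(carry)
    return result

x = to_cells(123_456_789_012)
y = to_cells(987_654_321_098)
print(add(x, y))  # cells of 1,111,111,110,110 -- far beyond any single cell
```

Any hard numeric limit written into a regulation invites exactly this kind of workaround.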
Every hard-coded limit is not just a mistake; it attempts to throw an absolute gauntlet down on innovation. Yet all it does is force innovation in another direction for no other reason than this seemingly innocuous constraint. It sounds good and thoughtful, but it is just silly. It is like the original cryptography regulations that thought constraining the number of bits used was a good idea.
Where did these numbers come from? I have no idea. A good guess is that they came from the people testifying before Congress. Their goal is to make for the least level playing field possible. They have an advantage over the government, which fell for these numbers: they know where they are taking their innovation next. These numbers were proposed not because they make sense, but precisely because they would not impact the trajectory they have chosen for their own innovation.
When you read an Order with this sort of overreach or poor logic, it is difficult to have faith in the rest of the order. Even if it covers topics that one does not have firsthand experience with or knowledge of, the tragic failure of these sorts of limits should be a red flag to anyone reading the Order.
The Federal Government, owing to the Cold War and its unique position in protecting the country, has a vast array of authorities in the realm of chemical, biological, radiological, and nuclear (CBRN) threats. It is not unusual for an executive order to lean on these authorities in order to achieve a goal. The reason for this is that the regulatory oversight that has grown over decades of legislation and regulatory action creates a fairly impenetrable fortress for any company to contend with. Even Congress would have difficulty unwinding under what authority regulatory efforts are undertaken and would be averse to challenging any for fear of being accused of putting the whole nation at risk.
Yet it should be immediately apparent that every innovation in the history of computing was immediately and obviously useful to those wishing to do bad things with CBRN capabilities. To be fair, many in the government would have loved nothing more than to have treated computers, networking, or databases as “weapons” or “munitions” and as such keep them out of the hands of the private sector as though they were immediate threats. We live in a world where this was fortunately not the course of action.
This Order jumps well ahead and presumes that the existing level of AI tools is already a capable weapon in the wrong hands. Section 4.3 goes to great lengths to encourage a broad amount of reporting and assessment in this regard. Given that no agency would want to be caught facing a terror threat of its own making by going lax on the opportunity to regulate a nascent technology, it is a safe bet that in 100, 180, or 270 days (various deadlines in the Order) the White House inbox will be filled with a slew of recommendations to curtail the use of AI tools and techniques, at least for the private sector.
It is another safe bet that the incumbents are aware of this and are already strongly connected to these agencies and many are already doing deals for their own technologies. This is another way that new companies, even those complying with all the limits previously mentioned, are easily frozen out of the potential for new innovations. The irony of this regulatory capture is that many companies might be exactly the best ones to create new AI technologies to defend us from various threats.
Section 5.1 “Promoting Innovation and Competition: Attracting AI Talent to the United States” is a perfect example of regulating what could be regulated rather than what should be regulated. There will be near-unanimous agreement in technology circles that Section 5 is awesome. I certainly agree. Anything we can do to get more immigrants working in technology, the better. But the approach of bypassing general immigration reform and going through the Order shows how dysfunctional the process remains. The country needs immigration reform. It does not need to carve out a singular slice of the technology market and try to squeeze talent through this opening. That only encourages more cynicism and dysfunction.
The result of Section 5.1 will be immediately obvious. Every tech company will find ways to craft job openings that sound like these skills and fit the needs of the resulting loopholes created by this “urgent” need. We of course need more people in the US working on AI. We also want more people working on everything, as our unemployment rate and massive software backlog show. This type of “gross” interference in the market for talent is exactly what we do not need compared to immigration reform.
By focusing on what can be done through an EO, the whole of the free market is distorted. At the same time there’s a round of congratulations and even support for the rest of the order that results. This is spun as the kind of “compromise” that good government is, when not everyone gets what they want but it is still progress. With this Order, those who might have argued President Trump’s EO on the “Muslim Ban” was illegal will soon find themselves the beneficiaries of a new source of talent if they just craft their job descriptions appropriately. We do not need this type of cynicism. We need leadership from our leaders.
The private sector should spend its time innovating, creating jobs, and hiring people, not spend those same energies manipulating its product agenda or hiring approaches to be able to tap into the global market for talent. Once again, the incumbent technology companies have the resources and capabilities to jump through these hoops. New companies are at a distinct disadvantage in a job market where they are already disadvantaged. In effect, this type of initiative in the EO has the exact opposite effect of what the words intend. It should come as no surprise who probably helped craft this approach. It was not startups, that is for sure.
No area of AI activism has received more attention than the apparent role of AI in discrimination or more broadly the violation of civil rights. Much like CBRN, there is an immense infrastructure already in place to oversee these issues and as such they are easy to tap into with the Order. In Section 7 “Advancing Equity and Civil Rights” the Order goes through the many agencies that create and enforce civil rights law and requests reports and actions about the use of AI to advance their agenda.
Not explicitly stated is the underlying assumption that the use of AI is a tool for accelerating terrible injustices of discrimination. This has been the key fear of many in the AI alignment community.
What is also not described is that these problems exist absent AI and importantly are illegal already. The use of AI is completely ancillary to the discrimination that is taking place. Again, we are reliving the rise of the information age. There is no denying the problems we as a nation have in the area of discrimination. The solution will not be found in restricting the use of tools. If anything, the fear of tools is not the fear that the tools will make better choices than humans but that the tools will expose that some humans have been making bad choices all along precisely because they were not using tools to understand and analyze data. The history of redlining and mortgages is rather illustrative in this regard.
I find the discussion of AI in the context of criminal justice to be extremely challenging. While the Order focuses by and large on the role of AI in discrimination and how that needs to be prevented, it spends very little time on the problem of the government’s expansive use of AI relative to innocent people. While it is just television scriptwriter fantasy now, it seems we are a short time away from innocent people being dragged into investigations because their cell phone pinged near a crime scene, or their car license plate was spotted somewhere by a traffic camera. We already live in a world where if a crime happens near your home the police knock on your door for security camera footage. The fundamental right to privacy is a huge challenge right now. It does seem to me that giving the government ready access to the technology is almost worse than having the private sector have access, because the government is in a position to abuse the data far more than any company. The Order does not seem to address this nearly to the degree that it worries the technology is intrinsically discriminatory or invasive, when it is the humans using it and the authority they operate under that are the challenge.
As we get much deeper into the Order, we start to see that by and large this is routine government work, as in Section 8, “Protecting Consumers, Patients, Passengers, and Students.” In a sense the more we read the more we see that outside of the “fear” of nuclear weapons or broad discrimination mostly the government should not change trajectories and the use of AI does not substantially alter the need for regulators to do what they already do. I’ve long tried to make the argument that there is no use of AI that is not already covered by existing regulations. This was something already seen with the rise of computers. Whether a doctor, architect, or inspector used a computer or not did not relieve them of liability, nor did it protect them from litigation. A doctor making a bad diagnosis could not just point to a computer and say “the computer did it” as many feared might happen.
AI changes nothing about the way we receive professional services. The Order and the rise of additional regulations, however, might easily and prematurely inhibit the use of AI technology in these fields. Treating AI as special will make it easy for professions to avoid AI and justify doing so. It will make it easier for insurance companies to raise their rates if a profession tries to use AI. It will make it easier for plaintiff litigators to find potential suits if AI is called out specifically as a risk. AI in education is already suffering this backlash as students have access to the technology and must operate under arbitrary and inconsistent rules about using it. I was in school when the first Casio Mini hit the scene and later when the TI-35 became standard for use in physics and math class. We survived even as the alarmists clouded the media with their end-of-learning mantra.
There are many downsides to treating AI like an outlier technology needing special attention.
Those of us living in SF have seen some of this with autonomous cars. There’s absolutely no doubt that autonomous cars are safer than human-driven cars. There’s no doubt the owner of an autonomous car is responsible for the vehicle no matter what. The barriers put up to usage, and more importantly the special blame autonomous cars have been receiving for what happens on roads, show just how regulations can thwart innovation.
The incumbent providers of technology solutions would love nothing more than for AI to be treated differently. It provides a unique once in a lifetime wedge to prevent new entrants into a field. It sends a message of “stay with us, we’re trusted and legal.” Treating AI differently literally precludes the introduction of AI-first approaches to societal problems, solutions that may be better. It is exactly what did not happen with the Internet.
The Order has a few hundred words specifically on privacy. This would come as a surprise to many considering that the biggest issue many see with GPT as implemented today is the absorption of vast amounts of data. The biggest beneficiaries of this have been incumbents who have been amassing data and training models already. Most obviously any changes in privacy will preclude new entrants and harm them more than incumbents. It is now that more substantial laws around data privacy are most needed, laws that apply to the government most decidedly.
The Order concludes in a typical fashion by defining who is responsible. In past eras, major shifts in technology would be led by technologists. Often an industry leader with a broad range of experience or an academic leader viewed as a pioneer in the space would volunteer for such service. The Order before us shows that this is not an order to advance innovation, but an order to regulate it, to thwart it, and to implement guardrails and barriers to entry benefitting incumbents. It does this by putting the responsibility of chairing these efforts in the hands of an existing political office, Assistant to the President and Deputy Chief of Staff for Policy. In the President’s view, AI is not a technology issue but a political policy one. Perhaps that is what is most disappointing.
If I remain unconvincing that this Order is either the product of incumbent regulatory capture or at the very least enormously beneficial to incumbents, then the last portion of the Order makes it abundantly clear that the regulatory framework that results from those charged with developing it will be the exclusive province of only the largest companies. The Order names 29 executive branch Secretary level (or close) positions that will have oversight or contribute to regulating AI innovation, including an open-ended invitation to add more. As anyone that has worked with new technology companies knows, the efforts that go into simply being able to sell software in the US that meets the basic needs of compliance across merely SOC2, FISMA, and often HIPAA are immense. Few new companies can even cross this threshold. Only the largest existing companies will have the wherewithal to deal with 29 different executive departments. Only the largest existing companies can pick up a phone and call the Deputy Chief of Staff for Policy to begin to deal with the onslaught of regulation and tune it to their capabilities.
This approach to regulation is not about innovation despite all the verbiage proclaiming it to be. This Order is about stifling innovation and turning the next platform over to incumbents in the US and far more likely new companies in other countries that did not see it as a priority to halt innovation before it even happens.
I am by no means certain that AI is the next technology platform, the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know, sitting here today, if the AI products in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.
What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such fears and the perceived need to “govern” through regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case, then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns, and somehow optimism prevailed. Why should the pessimists prevail now?
It is my hope that AI optimism can prevail. We have every reason to believe it should.
If you want another detailed section-by-section view of the EO, check out On the Executive Order by Zvi Mowshowitz.
This essay should not be construed as legal advice or even analysis. It is a technologist view of the implementation of executive orders based on history and experience. Nothing in this memo should be construed as investment advice or recommendations. All opinions are those solely of the author. Any factual errors are the responsibility of the author and will be fixed if notified.
I liked how David Deutsch historically positions where we may be today with AI (AGI specifically) in a recent interview with Sean Carroll. He said, "It depends what timescale you're talking about. I think we do not have the slightest clue how to make an AGI."
He believes the breakthrough, in hindsight, will appear obvious and easy, but "it's not going to be reached by more and more billions and trillions of bits of data, that's not the kind of thing it is." He offers an analogy comparing human intelligence to apes. He said, "We differ from... great apes only by a few K of code. In that few K of code is the bootstrap program for bootstrapping this qualitatively different type of program that we run. Infinitely different."
He invites us to go back and consider the question of life in 1800. He said, "some people wanted life to be explicable as an ordinary physical process without any supernatural, without any magic, without any God, just laws of physics. And no one knew how to do that...They didn't have the idea of genes and they didn't have the idea of mutations and natural selection. And that solved it. And you could write down that idea in one paragraph."
But Darwin "felt the need to write a whole book, and probably rightly, because from that paragraph nobody but him would have understood it." He thinks "it's possible that the idea that will open the door to AGI is that kind of idea. There will come a time when everybody thinks it's obvious and that we in our time were being obtuse for not seeing it. But from this end it might be very, very difficult."
It could be that all this effort, skepticism, and fear...and regulation...around AI today will be supplanted by a discovery that makes all the handwringing for naught. Just as you point out with the invention of spreadsheets, what will we do with those teams of people who erase paper ledgers every time some manager demands a re-calc?