1/ There are two reactions a company can have when faced with calls for regulation. One is to appear to capitulate & work w/regulators to find the least disruptive path. The other is to fight it, knowing you’re turning over control of your roadmap to a bureaucracy. There’s one more…
2/ Enter AI and the new big tech, who directly or historically have battled regulation for ~100 yrs. A new approach is to run towards regulation, literally asking to be regulated and claiming the tech is out of control 🆘
3/ When a well-positioned company begs for regulation, even if it appears to be capitulating in service to the industry, it is attempting a maneuver known as “regulatory capture”. Why? They know more about the tech than the regulators do. https://en.wikipedia.org/wiki/Regulatory_capture
4/ Yikes. That sounds awful. What this does is take the notion of capitulation to regulators to a whole new level. Some would say the whole adversarial relationship between regulators and the regulated is wrong. Cos should embrace "do no harm". Otherwise, profits >> safety/goodness.
5/ The problem is this is not how the real world works. Regulation is never-ending, knows no bounds, and is a mechanism by which regulators take control of business AND market forces. It is a deliberate effort to slow the pace of business/innovation. By definition. Not judging.
6/ Of course at this point we're all thinking of awesome regulations. It is pointless to dispute these. Yet it is not hard to find things every day that seem arbitrary, difficult, or simply backwards, and when you ask why, sprawling regulation is the root. Downsides are real.
7/ The challenge is that, left to their own devices, “regulators are going to regulate” just as “business is going to business”. So the idea of a natural tension, or worse, is not only necessary but is the balance of power in a free economy.
8/ Thus, a company saying “regulate us now” should raise questions. Why? To what end? How? And more. A company acting in its own self-interest should not naturally want to be regulated any more than regulators facing visible harm should say “nah, let the markets just do their thing”.
9/ The most likely reason a company takes this path is…hubris. It believes two things. First, that it has a huge lead. The regulations (with their help) will simply increase the cost to enter the market and also define the market on terms friendly to this new incumbent.
10/ Second, the company presumes to know the long-term direction of the technology. This is indeed the much bigger issue. Why? It is almost always the case that an early de facto leader or other "experts" are wrong on both the velocity and direction of innovation. It is too soon to predict.
11/ We see this in antitrust. I have examples personally, but my favorite is the 1982 ATT settlement, which was a form of capture (a consent decree trades a potentially worse unknown trial outcome for a known bad one). Bell was itself a product of capture. https://docs.fcc.gov/public/attachments/DOC-324810A1.pdf
12/ ATT consented to settle by dividing up local carriers (breaking the national monopoly) and maintaining control over long distance. They saw the future as long distance land lines. Mobile? Oh, that was just an afterthought. Both parties literally just said “whatever”.
13/ Omitting mobile and the internet wasn’t a deliberate choice but the result of the focus on long distance. And ATT was totally wrong. As a result, they ceded the market for wireless in exchange for land-line long distance. (NB this is all confusing because of the name ATT.)
14/ So here we are today in this highly unusual position of the de facto leader (de facto is key because we are at Day 0 of the AI wave) pleading to be regulated. Everyone should be asking why. No govt should just regulate because of this invitation or because of hypotheticals.
15/ More to the point, any dimension upon which proposed regulation is based is fraught with challenges over where this technology is heading. Today the debates run to extremes, from “useless”, “wrong”, and “hallucinations” to “existential threat”. It is better said that “no one knows”.
16/ E.g., should AI be regulated on parameters, training time, amount of data, etc.? Who is advantaged by those? Or based on use cases? What use cases are not already regulated (e.g. drug discovery, home loans, policing, etc.)? Is an LLM even a sensible place in the stack to regulate at all? What defines AI?
16/ Will "AI" destroy jobs? Will it cure cancer? Will it extinguish the human race? Will it increase or decrease a Std of Living? The only thing we know for sure is all past technology predictions along those lines have been wrong, even mundane anti-tech objections proved wrong.
18/ Many people mocked Microsoft when it took a position of “Freedom to Innovate” during its antitrust era. It was awkward. People saw only the Machiavellian, exploitive side. I'm not debating that, but there was a real technical side: Microsoft had no idea where innovation was heading.
19/ There was a huge push to regulate by removing the browser from Windows. The browser was defined as some special technology that must be controlled. BUT it was totally ok to have TCP/IP in the operating system. Or HTTP. Or SSL. Or certificates. Or script. And so on.
20/ Why was that line drawn? At that moment, in 1998, it seemed to be a line in the tech stack that made sense. But a few years earlier there was no TCP/IP in Windows, and it cost hundreds of dollars to acquire. The ubiquity of TCP/IP accelerated the internet and the browser.
21/ Going forward, Microsoft's own view of where the browser was heading was proven completely wrong. Integrating it into Windows, saddling it with Windows technologies, and more were just wrong. The market was working anyway. And MS lost focus elsewhere.
22/ Technology platforms morph and pivot by absorbing bits and pieces of what Kevin Kelly called the “technium”. It is almost like biological evolution. Pre-emptive regulation can thwart this evolutionary process. There’s an old joke: “we asked for flying cars and got 140 characters”…
23/ In the book “Where Is My Flying Car?” a credible argument is made that it was regulation that held back the diffusion of a potential flying car. Maybe. Maybe not. The key is we don’t know. We do know we all drive cars on land today.
24/ There are many questions about "AI". Before we cede innovation to a captured market, there's much to learn. I don’t have an answer for those who believe doing so will cause society's end within a few years or ten, but even the worst case offered at this week's hearings was "50 years". // END
Originally posted here.