205. Regulating AI: Is being proactive now the right first step?
Some notes on the concept of the "precautionary principle" and technology innovation
1/ Many have been writing about the risks of computers more "intelligent" than humans for decades. It is not unreasonable to ask if what is going on today with "aligned," "responsible," or "sparks of AGI" is a new risk or simply a desire to finally declare those worries real.
2/ No one wants to dismiss real risks. At the same time, it is difficult to reconcile acting with such extreme precaution when the current systems are described as simultaneously wildly inaccurate, nonsensical, and hallucinating, yet also nearly "AGI" or "superintelligent."
3/ "One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later." So we must cope with it v. prevent it. https://nickbostrom.com/ethics/ai
4/ Around the same time (the early 00s) there was also this view expressed by Cass R. Sunstein, a lawyer and author of many books on decision-making, risk, and more. He has written a good deal on the "precautionary principle" and risk/benefit choices. https://cato.org/sites/cato.org/files/serials/files/regulation/2002/12/v25n4-9.pdf
5/ Part of what is going on now, as then, is a search for a path forward that is also a path free of risk. This type of framework is the "precautionary principle," often favored by those who wish to prevent ANY harm in the immediate term. https://en.wikipedia.org/wiki/Precautionary_principle
6/ The challenge with AI is not unlike that of many new advances in technology. Some perceive major risks and others do not. The unique challenge today is that not only are the risks hypothetical (as, for example, with GMOs) but the technology itself is hypothetical with respect to its capabilities.
7/ Some have advocated a hard stop to development efforts. In fact, just today the EU came out with draft amendments to its AI Act. They include registering so-called "high risk" AI models and apply to open source, APIs, and more. https://europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf
8/ This is decidedly the full "do no harm at all" precautionary principle applied to AI. See the papers above, or this book on the principle overall; the principle has been criticized as a thinly veiled effort to simply halt technology development.
From the description of Laws of Fear: Beyond the Precautionary Principle: "This book is about the complex relationship between fear, danger, and the law. Cass Sunstein argues that the precautionary principle is incoherent and potentially paralyzing, a…" https://www.amazon.com/Laws-Fear-Precautionary-Principle-Lectures/dp/0521615127/
9/ In his book "What Technology Wants," Kevin Kelly surveys over 2,000 years of history attempting to uncover significant technologies that were banned or precluded from development. He found almost none, and no ban lasted more than a short time (a few years). https://www.amazon.com/dp/B00476WM36
10/ This research is especially relevant in the case of open source software, since implementing any sort of global and enforceable ban is completely impossible. This lesson was learned during the encryption fears of the 1990s.
11/ Instead, Kelly describes a set of "pro-actionary" principles one can adopt in order to be responsible in the face of a new technology. One can think of these as a way to mitigate the risk rather than ban the technology. https://kk.org/thetechnium/the-pro-actiona/
12/ This week we'll see some more US Congressional testimony on AI, which is likely to be all over the polarized political map, agenda-laden, and perhaps even lacking a technical foundation. With luck, an informed re-framing of the challenges will emerge. https://www.cnn.com/2023/05/10/tech/openai-ceo-congress-testifying
This originally appeared as a thread on Twitter on May 14, 2023: https://twitter.com/stevesi/status/1657927951344893952?s=20
Even the Gutenberg press instilled fear. The Catholic Church would likely have banned it had it not been so distracted by the larger fear of widespread literacy!