This article is an on-site version of Martin Sandbu’s Free Lunch newsletter. Sign up here to get the newsletter sent straight to your inbox every Thursday.
When ChatGPT and other examples of artificial intelligence software were unleashed on an unsuspecting public a few months ago, a frenzy of astonishment ensued. In its wake has come an avalanche of concern about where the dizzying growth in the software’s capabilities will lead human society – concern that, strikingly, includes many of the people closest to the action.
Last month, AI investor Ian Hogarth argued in the FT’s Weekend magazine that “we must slow down the race to God-like AI”. A few weeks later, Geoffrey Hinton, the man often referred to as the “godfather” of AI, left Google so that he could speak freely about his concerns, including in an interview with the New York Times. Professor and AI entrepreneur Gary Marcus is worried about “what could bad actors do with these things”. And just today, the FT carried an interview with AI pioneer Yoshua Bengio, who fears AI could “destabilise democracy”. Meanwhile, a large number of AI investors and experts have called for a “moratorium” on developing the technology further.
Call me naive, but I’ve found myself unable to get too carried away by the excitement. Not because I doubt that AI will shake up the way we live, and especially the structures of our economies – it certainly will. (Check out this list of the ways people have started using AI.) But rather because I struggle to see how even the worst-case scenarios the experts warn us against are qualitatively different from the biggest problems humanity has already created for itself and has had to try to solve on its own.
Take Hogarth’s example of an AI chatbot that drives someone to suicide. Reading Goethe’s The Sorrows of Young Werther in the 18th century supposedly could have the same effect. Whatever conclusion we should draw from that, it is not that AI poses an existential threat.
Or take Hinton, whose “immediate concern is that the internet will be filled with false images, videos and text, and the average person ‘will no longer know what is true’”. The erosion of our ability to know the truth is a fear shared by all the thinkers mentioned above. But lying and manipulation, especially in our democratic processes, are problems we humans are perfectly capable of creating without any help from AI. A quick look at some of the opinions held by large majorities of the American public shows that (to put it politely) impaired access to truth is nothing new. To be sure, generative AI’s ability to create deepfakes means we will have to become more critical of what we see and hear; and unscrupulous politicians will use accusations of deepfakery to bury damaging revelations about themselves. But then again, no AI was needed in 2017 for Donald Trump’s “fake news” accusations against his critics to stick.
So I think the wave of existential panic fuelled by the latest AI breakthroughs is a distraction. We should instead be thinking at a much more mundane level. Marcus draws a good analogy with building codes and standards for electrical installations, and that – rather than attempts to slow down technological advances – is the plane on which policy discussions should take place.
There are two particularly important questions – important because they are the most actionable – that policymakers, especially economic policymakers, must address.
The first is who should be held accountable for decisions made by AI algorithms. It should be easy to accept the principle that we should not allow decisions to be made by an AI that we would not allow (or would not want to allow) if they were made by a human decision maker. Our track record here is poor, of course: we let corporate structures get away with actions we would never permit from individual humans. But with AI in its infancy, we have the opportunity to rule out from the start any impunity for real people based on the defence that “it was the AI that did it”. (This argument is not limited to AI, by the way: we should treat non-intelligent computer algorithms the same way.)
Such an approach encourages legislative and regulatory efforts not to get caught up in the technology itself, but to focus on its particular uses and the harms that follow from them. In most cases, it does not matter whether a harm is caused by an AI decision or a human decision; what matters is deterring and punishing harmful decisions. Daniel Dennett exaggerates when he writes in The Atlantic that AI’s ability to create fake digital people “risks destroying our civilization”. But he makes the good point that if executives at tech companies developing AI could face jail time for the technology being used to facilitate fraud, they would quickly ensure that the software is signed in ways that make it easy to determine whether we are communicating with an AI.
The Artificial Intelligence Act being legislated in the EU takes the right approach: identifying particular uses of AI to be banned, restricted or regulated; enforcing transparency about the use of AI; ensuring that rules that apply elsewhere also apply to the use of AI, such as copyright in artworks on which AI can be trained; and specifying clearly where liability lies – for example, with the developer of an AI algorithm or with its users.
The second big issue policymakers should consider is the distributional consequences of the productivity gains that AI will ultimately bring. A lot of this will depend on intellectual property rights, which are ultimately about who controls access to technology (and can charge for that access).
Because we don’t know how AI will be used, it is hard to know how far access to its valuable uses will be controlled and monetised. So it is useful to think in terms of two extremes. At one end is a fully proprietary world, in which the most useful AI is the intellectual property of the companies creating AI technologies. These companies could become ever fewer in number because of the enormous resources it takes to create usable AI. As an effective monopoly or oligopoly, they would be able to charge high licensing fees and reap the bulk of the productivity gains AI could bring.
At the opposite extreme is the open-source world, in which AI technology can be pursued with so little investment that any attempt to restrict access simply prompts the creation of a free open-source rival. The author of the leaked Google “We Have No Moat” memo argues that this open-source world is the one we are heading for; Rebecca Gorman of Aligned AI gave similar reasoning in a letter to the FT. In that world, productivity gains would accrue to whoever has the wit or motivation to deploy AI – tech companies would see their product commoditised and its price competed down.
I think it is impossible to know today which of these extremes we will end up closer to, for the simple reason that it is impossible to foresee how AI will be used and therefore exactly what technology will be needed. But I will make two observations.
One is to look at the internet: its protocols are designed to be accessible to all, and the web’s languages are, of course, open standards. Yet that has not stopped big tech companies from trying, and often succeeding, to build “walled gardens” around their products, and reaping economic rents as a result. So we should err on the side of worrying that the AI revolution, too, will lend itself to the concentration of economic power and rewards.
The second is that where we end up is partly the result of the policy choices we make today. To push towards an open-source world, governments can legislate for greater transparency and access to the technology that tech companies develop, in effect turning proprietary technology into open source. Tools worth considering – especially for mature technologies, large companies, or AI applications whose user uptake is growing rapidly – include compulsory licensing (at regulated prices) and requirements to publish source code.
After all, the big data on which any successful AI will have been trained has been generated by all of us. The public has a strong claim on the fruits of its data labour.
Other reading
- Tobias Gehrke and Julian Ringhof of the European Council on Foreign Relations argue that “there can be no functioning open trade order without an associated safeguard order” in a critical analysis of how the EU should update its thinking on strategic trade policy.
- The digital euro project is moving forward but has yet to garner widespread public support.
- The Council of Europe is setting up a register of damage caused by Russia’s invasion of Ukraine. As a formal multilateral initiative, this should make it easier to hold Russia financially accountable for the destruction it has caused, including by eventually confiscating its assets.
- The EU’s new joint purchasing platform for natural gas did better than expected in its first tender.