The author argues that law and regulation have never diagnosed and prevented the social, political, and economic ills of new technology. AI is no different. AI regulation poses a greater threat to democracy than AI itself, as governments are anxious to use regulation to censor information. Free competition in civil society, media, and academia, not preemptive regulation, will address any ill effects of AI, as it has for previous technological revolutions.
“AI poses a threat to democracy and society. It must be extensively regulated.”
Or words to that effect: a common sentiment.
They must be kidding.
Have the chattering classes—us—speculating about the impact of new technology on economics, society, and politics, ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, nobody has ever foreseen the outcome, and especially the social and political consequences of new technology. Even with the benefit of long hindsight, do we have any historical consensus on how these and other past technological innovations affected the profound changes in society and government that we have seen in the last few centuries? Did the industrial revolution advance or hinder democracy?
Sure, in each case one can go back and find a few Cassandras who made a correct prediction—but then they got the next one wrong. Before anyone regulates anything, we need a scientifically valid and broad-based consensus.
Have people ever correctly forecast social and political changes, from any set of causes? Representative democracy and liberal society have, in their slow progress, waxed and waned, to put it mildly. Did our predecessors in 1910 see 70 years of communist dictatorship about to envelop Russia? Did they understand in 1925 the catastrophe waiting for Germany?
Society is transforming rapidly. Birth rates are plummeting around the globe. The U.S. political system seems to be coming apart at the seams with unprecedented polarization, a busting of norms, and the decline of our institutions. Does anyone really know why?
The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong. Thomas Malthus predicted, plausibly, that the technological innovations of the late 1700s would lead to widespread starvation. He was spectacularly wrong. Marx thought industrialization would necessarily lead to immiseration of the proletariat and communism. He was spectacularly wrong. Automobiles did not destroy American morals. Comic books and TV did not rot young minds.
Our more neurotic age began in the 1970s, with the widespread view that overpopulation and dwindling natural resources would lead to an economic and political hellscape, views put forth, for example, in the Club of Rome report and movies like Soylent Green. (2) They were spectacularly wrong. China acted on the “population bomb” with the sort of coercion our worriers cheer for, to its current great regret. Our new worry is global population collapse. Resource prices are lower than ever, the U.S. is an energy exporter, and people worry that the “climate crisis” from too much fossil fuel will end Western civilization, not “peak oil.” Yet demographics and natural resources are orders of magnitude more predictable than whatever AI will be and what dangers it poses to democracy and society.
“Millenarian” stems from those who worried that the world would end in the year 1000, and that people had better get serious about repentance for their sins. They were wrong then, but much of the impulse to worry about the apocalypse, then to call for massive changes, usually with “us” taking charge, is alive today.
Yes, new technologies often have turbulent effects, dangers, and social or political implications. But that’s not the question. Is there a single example of a society that saw a new developing technology, understood ahead of time its economic effects, to say nothing of social and political effects, “regulated” its use constructively, prevented those ill effects from breaking out, but did not lose the benefits of the new technology?
There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese Treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then, the emperors, foreseeing social and political change, “threats to their power from merchants,” (what we might call steps toward democracy) “banned oceangoing voyages in 1430.” (3) The Europeans moved in.
Genetic modification was feared to produce “frankenfoods,” or uncontrollable biological problems. As a result of vague fears, Europe has essentially banned genetically modified foods, despite no scientific evidence of harm. GMO bans, including bans on vitamin A-enhanced rice, which has saved the eyesight of millions, are tragically spreading to poorer countries. Most of Europe went on to ban hydraulic fracturing. U.S. energy policy regulators didn’t have similar power to stop it, though they would have if they could. The U.S. led the world in carbon reduction, and Europe bought gas from Russia instead. Nuclear power was regulated to death in the 1970s over fears of small radiation exposures, greatly worsening today’s climate problem. The fear remains, and Germany has now turned off its nuclear power plants as well. In 2001, the Bush administration banned research on new embryonic stem cell lines. Who knows what we might have learned.
Climate change is, to many, the current threat to civilization, society, and democracy (the latter from worry about “climate justice” and waves of “climate refugee” immigrants). However much you believe the social and political impacts—much less certain than the meteorological ones—one thing is for sure: Trillion-dollar subsidies for electric cars, made in the U.S., with U.S. materials, U.S. union labor, and page after page of restrictive rules, along with 100% tariffs against much cheaper Chinese electric cars, will not save the planet—especially once you realize that every drop of oil saved by a new electric car is freed up to be used by someone else, and at astronomical cost. Whether you’re Bjorn Lomborg or Greta Thunberg on climate change, the regulatory state is failing.
We also suffer from narrow-focus bias. Once we ask “what are the dangers of AI?” a pleasant debate ensues. If we ask instead “what are the dangers to our economy, society, and democracy?” surely a conventional or nuclear major-power war, civil unrest, the unraveling of U.S. political institutions and norms, a high death-rate pandemic, crashing populations, environmental collapse, or just the consequences of an end to growth will light up the scoreboard ahead of vague dangers of AI. We have almost certainly just experienced the first global pandemic due to a human-engineered virus. It turns out that gain-of-function research was the one needing regulating. Manipulated viruses, not GMO corn, were the biological danger.
I do not deny potential dangers of AI. The point is that the advocated tool, the machinery of the regulatory state, guided by people like us, has never been able to see social, economic, and political dangers of technical change, or to do anything constructive about them ahead of time, and is surely just as unable to do so now. The size of the problem does not justify deploying completely ineffective tools.
Preemptive regulation is even less likely to work. AI is said to be an existential threat, fancier versions of “the robots will take over,” needing preemptive “safety” regulation before we even know what AI can do, and before dangers reveal themselves.
Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. Those effects were addressed as they emerged, weighing costs against benefits. There has always been time to learn, to improve, to mitigate, to correct, and where necessary to regulate, once a concrete understanding of the problems has emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process? AI will follow the same path.
I do not claim that all regulation is bad. The Clean Air and Clean Water Acts of the early 1970s were quite successful. But consider all the ways in which they are so different from AI regulation. The dangers of air pollution were known. The nature of the “market failure,” classic externalities, was well understood. The technologies available for abatement were well understood. The problem was local. The results were measurable. None of those conditions is remotely true for regulating AI, its “safety,” its economic impacts, or its impacts on society or democratic politics. Environmental regulation is also an example of successful ex post rather than preemptive regulation. Industrial society developed, we discovered safety and environmental problems, and the political system fixed those problems, at tolerable cost, without losing the great benefits. If our regulators had required Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” rules, we would still be riding horses and hand-plowing fields.
Who will regulate?
Calls for regulation usually come in the passive voice (“AI must be regulated”), leaving open the question of just who is going to do this regulating.
We are all taught in first-year economics classes a litany of “market failures” remediable by far-sighted, dispassionate, and perfectly informed “regulators.” That normative analysis is not logically incorrect. But it abjectly fails to explain the regulation we have now, or how our regulatory bodies behave, what they are capable of, and when they fail. The question for regulating AI is not what an author, appointing him or herself benevolent dictator for a day, would wish to see done. The question is what our legal, regulatory, or executive apparatus can even vaguely hope to deliver, buttressed by analysis of its successes and failures in the past. What can our regulatory institutions do? How have they performed in the past?
Scholars who study regulation abandoned the Econ 101 view a half-century ago. That pleasant normative view has almost no power to explain the laws and regulations that we observe. Public choice economics and history tell instead a story of limited information, unintended consequences, and capture. Planners never have the kind of information that prices convey. (4) Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators. (5) They offer political support and a revolving door in return. When telephones, airlines, radio and TV, and trucks were deregulated in the 1970s, we found that all the stories about consumer and social harm, safety, or “market failures” were wrong, but regulatory stifling of innovation and competition was very real. Already, Big Tech is using AI safety fear to try again to squash open source and startups, and defend profits accruing to their multibillion dollar investments in easily copiable software ideas. (6) Seventy-five years of copyright law to protect Mickey Mouse is not explainable by Econ 101 market failure.
Even successful regulation, such as the first wave of environmental regulation, is now routinely perverted for other ends. People bring environmental lawsuits to endlessly delay projects they dislike for other reasons.
The basic competence of regulatory agencies is now in doubt. On the heels of the massive failure of financial regulation in 2008 and again in 2021, (7) and the obscene failures of public health in 2020–2022, do we really think this institutional machinery can artfully guide the development of one of the most uncertain and consequential technologies of the last century?
And all of my examples asked regulators only to address economic issues, or easily measured environmental issues. Is there any historical case in which the social and political implications of any technology were successfully guided by regulation?
It is AI regulation, not AI, that threatens democracy.
Large Language Models (LLMs) are currently the most visible face of AI. They are fundamentally a new technology for communication, for making one human being’s ideas discoverable and available to another. As such, they are the next step in a long line from clay tablets, papyrus, vellum, paper, libraries, moveable type, printing machines, pamphlets, newspapers, paperback books, radio, television, telephone, internet, search engines, social networks, and more. Each development occasioned worry that the new technology would spread “misinformation” and undermine society and government, and needed to be “regulated.”
The worriers often had a point. Gutenberg’s moveable type arguably led to the Protestant Reformation. Luther was the social influencer of his age, writing pamphlet after pamphlet of what the Catholic Church certainly regarded as “misinformation.” The church “regulated” with widespread censorship where it could. Would more censorship, or “regulating” the development of printing, have been good? The political and social consequences of the Reformation were profound, not least a century of disastrous warfare. But nobody at the time saw what they would be. They were more concerned with salvation. And moveable type also made the scientific journal and the Enlightenment possible, spreading a lot of good information along with “misinformation.” The printing press arguably was a crucial ingredient for democracy, by allowing the spread of those then-heretical ideas. The founding generation of the U.S. had libraries full of classical and enlightenment books that they would not have had without printing.
More recently, newspapers, movies, radio, and TV have been influential in the spread of social and political ideas, both good and bad. Starting in the 1930s, the U.S. had extensive regulation, amounting to censorship, of radio, movies, and TV. Content was regulated, licenses given under stringent rules. Would further empowering U.S. censors to worry about “social stability” have been helpful or harmful in the slow liberalization of American society? Was any of this successful in promoting democracy, or just in silencing the many oppressed voices of the era? They surely would have tried to stifle, not promote, the civil rights and anti-Vietnam War movements, as the FBI did.
Freer communication by and large is central to the spread of representative democracy and prosperity. And the contents of that communication are frequently wrong or disturbing, and usually profoundly offensive to the elites who run the regulatory state. It’s fun to play dictator for a day when writing academic articles about what “should be regulated.” But think about what happens when, inevitably, someone else is in charge.
“Regulating” communication means censorship. Censorship is inherently political, and almost always serves to undermine social change and freedom. Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. (8) Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, speaking truth to power, that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies? Yes, uncensored communication can also be used by bad actors to spread bad ideas, but individual access to information, whether from shortwave radio, samizdat publications, text messages, Facebook, Instagram, and now AI, has always been a tool benefiting freedom.
Yes, AI can lie and produce “deepfakes.” The brief era when a photograph or video provided by itself evidence that something happened, since photographs and videos were difficult to doctor, is over. Society and democracy will survive.
AI can certainly be tuned to favor one or the other political view. Look only at Google’s Gemini misadventure. (9) Try to get any of the currently available LLMs to report controversial views on hot-button issues, even medical advice. Do we really want a government agency imposing a single tuning, in a democracy in which the party you don’t support eventually might win an election? The answer is, as it always has been, competition. Knowing that AI can lie produces a demand for competition and certification. AI can detect misinformation, too. People want true information, and will demand technology that can certify if something is real. If an algorithm is feeding people misinformation, as TikTok is accused of feeding people Chinese censorship, (10) count on its competitors, if allowed to do so, to scream that from the rafters and attract people to a better product.
Regulation naturally bends to political ends. The Biden Executive Order on AI insists that “all workers need a seat at the table, including through collective bargaining,” and “AI development should be built on the views of workers, labor unions, educators, and employers.” (11) Writing in the Wall Street Journal, Ted Cruz and Phil Gramm report: “Mr. Biden’s separate AI Bill of Rights claims to advance ‘racial equity and support for underserved communities.’ AI must also be used to ‘improve environmental and social outcomes,’ to ‘mitigate climate change risk,’ and to facilitate ‘building an equitable clean energy economy.’” (12) All worthy goals, perhaps, but one must admit those are somewhat partisan goals not narrowly tailored to scientifically understood AI risks. And if you like these, imagine what the likely Trump executive order on AI will look like.
Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.
Economics
What about jobs? It is said that once AI comes along, we’ll all be out of work. And exactly this was said of just about every innovation for the last millennium. Technology does disrupt. Mechanized looms in the 1800s did lower wages for skilled weavers, while providing a reprieve from the misery of farmwork for unskilled workers. The answer is a broad safety net that cushions all misfortunes, without unduly dulling incentives. Special regulations to help people displaced by AI, or China, or other newsworthy causes are counterproductive.
But after three centuries of labor-saving innovation, the unemployment rate is 4%. (13) In 1900, a third of Americans worked on farms. Then the tractor was invented. People went on to better jobs at higher wages. The automobile did not lead to massive unemployment of horse-drivers. In the 1970s and 1980s, women entered the workforce in large numbers. Just then, the word processor and Xerox machine slashed demand for secretaries. Female employment did not crash. ATMs increased bank employment. Tellers were displaced, but bank branches became cheaper to operate, so banks opened more of them. AI is not qualitatively different in this regard.
One activity will be severely disrupted: Essays like this one. ChatGPT-5, please write 4,000 words on AI regulation, society, and democracy, in the voice of the Grumpy Economist…(I was tempted!). But the same economic principle applies: Reduction in cost will lead to a massive expansion in supply. Revenues can even go up if people want to read it, i.e., if demand is elastic enough. (14) And perhaps authors like me can spend more time on deeper contributions.
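The elasticity point above can be made concrete with a toy calculation. This is a minimal sketch assuming a constant-elasticity demand curve; the functional form, scale, and the specific numbers are illustrative assumptions, not estimates of any real market for essays.

```python
# With constant-elasticity demand q = scale * p**(-eps), revenue is
# R = p * q = scale * p**(1 - eps). If demand is elastic (eps > 1),
# a fall in price *raises* total revenue; if inelastic (eps < 1),
# a fall in price lowers it.

def revenue(price: float, elasticity: float, scale: float = 100.0) -> float:
    """Revenue p*q under constant-elasticity demand q = scale * p**(-elasticity)."""
    quantity = scale * price ** (-elasticity)
    return price * quantity

# Suppose AI slashes the cost (and hence price) of an essay from 10 to 1.
for eps in (0.5, 2.0):  # inelastic vs. elastic demand
    before = revenue(10.0, eps)
    after = revenue(1.0, eps)
    print(f"elasticity={eps}: revenue {before:.1f} -> {after:.1f}")
```

With elastic demand (the second case), the tenfold price cut raises revenue; with inelastic demand it lowers revenue. The same arithmetic is why a massive expansion in supply need not impoverish the suppliers.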
The big story of AI will be how it makes workers more productive. Imagine you’re an undertrained educator or nurse practitioner in a village in India or Africa. With an AI companion, you can perform at a much higher level. AI tools will likely raise the wages and productivity of less-skilled workers, by more easily spreading around the knowledge and analytical abilities of the best ones.
AI is one of the most promising technical innovations of recent decades. Since social media of the early 2000s, Silicon Valley has been trying to figure out what’s next. It wasn’t crypto. Now we know. AI promises to unlock tremendous advances. Consider only machine learning plus genetics and ponder the consequent huge advances coming in health. But nobody really knows yet what it can do, or how to apply it. It was a century from Franklin’s kite to the electric light bulb, and another century to the microprocessor and the electric car.
A broad controversy has erupted in economics: whether frontier growth is over or dramatically slowing down because we have run out of ideas. (15) AI is a great hope this is not true. Historically, ideas became harder to find in existing technologies. And then, as it seemed growth would peter out, something new came along. Steam engines plateaued after a century. Then diesel, electric, and airplanes came along. As birthrates continue to decline, the issue is not too few jobs, but too few people. Artificial “people” may be coming along just in time!
Conclusion
As a concrete example of the kind of thinking I argue against, Daron Acemoglu writes,
We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow…
We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for. (16)
The first paragraph is correct. But the logical implication is the converse—if relations are “complex” and consequences “unforeseen,” the machinery of our political and regulatory state is incapable of doing anything about it. The second paragraph epitomizes the fuzzy thinking of passive voice. Who is this “we”? How much more “attention” can AI get than the mass of speculation in which we (this time I mean literally we) are engaged? Who does this “getting”? Who is to determine “proper balance”? Balancing “pro-innovation public policies and democratic input” is Orwellianly autocratic. Our task was to save democracy, not to “balance” democracy against “public policies.” Is not the effect of most “public policy” precisely to slow down innovation in order to preserve the status quo? “We” not “leave[ing] it to tech entrepreneurs” means a radical appropriation of property rights and rule of law.
What’s the alternative? Of course AI is not perfectly safe. Of course it will lead to radical changes, most for the better but not all. Of course it will affect society and our political system, in complex, disruptive, and unforeseen ways. How will we adapt? How will we strengthen democracy, if we get around to wanting to strengthen democracy rather than the current project of tearing it apart?
The answer is straightforward: As we always have. Competition. The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur. Relax. It’s going to be great.
Footnotes
(1) I thank Angela Aristidou, Eugene Volokh, and an anonymous reviewer for helpful comments.
(2) Donella Meadows, Dennis Meadows, Jørgen Randers, and William Behrens, Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind (New York: Universe Books, 1972), https://www.donellameadows.org/wp-content/userfiles/Limits-to-Growth-digital-scan-version.pdf; Soylent Green, directed by Richard Fleischer (1973; Beverly Hills, CA: Metro-Goldwyn-Mayer).
(3) Angus Deaton, The Great Escape: Health, Wealth, and the Origins of Inequality (Princeton University Press, 2013), https://press.princeton.edu/books/hardcover/9780691153544/the-great-escape.
(4) See Friedrich Hayek, “The Use of Knowledge in Society,” American Economic Review 35 (September 1945): 519–30, https://www.jstor.org/stable/1809376.
(5) See George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (Spring 1971): 3–21, https://doi.org/10.2307/3003160.
(6) See Martin Casado and Katherine Boyle, “AI Talks Leave ‘Little Tech’ Out,” Wall Street Journal, May 14, 2024, https://www.wsj.com/articles/ai-talks-leave-little-tech-outhomeland-security-adversaries-open-source-board-46e3232d.
(7) See John H. Cochrane and Amit Seru, “Ending Bailouts, at Last,” Journal of Law, Economics and Policy 19, no. 2 (2024): 169–193, https://www.johncochrane.com/research-all/end-bailouts.
(8) Murthy v. Missouri, 603 U.S. _____ (2024).
(9) Megan McArdle, “Female Popes? Google’s Amusing AI Bias Underscores a Serious Problem,” Washington Post, February 27, 2024, https://www.washingtonpost.com/opinions/2024/02/27/google-gemini-bias-race-politics/.
(10) Zachary Evans, “Social Media App TikTok Censors anti-China Content,” National Review, September 25, 2019, https://www.nationalreview.com/news/social-mediaapp-tiktok-censors-anti-china-content.
(11) Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023, 88 Fed. Reg. 75191.
(12) Ted Cruz and Phil Gramm, “Biden Wants to Put AI on a Leash,” Wall Street Journal, March 25, 2024, https://www.wsj.com/articles/biden-wants-to-put-artificial-intelligence-on-a-leash-progressive-regulation-45275102.
(13) “Unemployment Rate [UNRATE], May 2024,” U.S. Bureau of Labor Statistics, retrieved from FRED, Federal Reserve Bank of St. Louis, July 5, 2024, https://fred.stlouisfed.org/series/UNRATE.
(14) For more on this point, see John Cochrane, “Supply, Demand, AI and Humans,” The Grumpy Economist (blog), April 26, 2024, https://www.grumpy-economist.com/p/supply-demand-ai-and-humans.
(15) See the excellent, and troubling, analysis in Robert J. Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War (Princeton: Princeton University Press, 2017) and Nick Bloom, John Van Reenen, Charles Jones, and Michael Webb, “Are Ideas Getting Harder to Find?,” American Economic Review, 110, no. 4 (April 2020): 1104–1144, https://www.aeaweb.org/articles?id=10.1257/aer.20180338.
(16) Daron Acemoglu, “Are We Ready for AI Creative Destruction?,” Project Syndicate, April 9, 2024, https://www.project-syndicate.org/commentary/ai-age-needs-more-nuanced-view-of-creative-destruction-disruptive-innovation-by-daron-acemoglu-2024-04.
Information You Provide to Us
Most of the information Join Talents collects is provided by you voluntarily while using our Services. We do not request highly sensitive data, such as health or medical information, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, etc. and we ask that you refrain from sending us any such information.
Here are the types of personal data that you voluntarily provide to us:
Name, email address, and any other contact information that you provide by filling out your profile forms
Billing information, such as credit card number and billing address
Work or professional information, such as your company or job title
Unique identifiers, such as username or password
Demographic information, such as age, education, interests, and ZIP code
Details of transactions and preferences from your use of the Services
Correspondence with other users or business that you send through our Services, as well as correspondence sent to JoinTalents.com
As a registered users or customers, you may ask us to review or retrieve emails sent to your business. We will access these emails to provide these services for you.
We use the personal data you provide to us for the following business purposes:
Set up and administer your account
Provide and improve the Services, including displaying content based on your previous transactions and preferences
Answer your inquiries and provide customer service
Send you marketing communications about our Services, including our newsletters (please see the Your Rights/Opt Out section below for how to opt out of marketing communications)
Communicate with users who registered their accounts on our site
Prevent, discover, and investigate fraud, criminal activity, or violations of our Terms and Conditions
Administer contests and events you entered
Information Obtained from Third-Party Sources
We collect and publish biographical and other information about users, which we use to promote the articles and our bloggerswho use our sites. If you provide personal information about others, or if others give us your information, we will only use that information for the specific reason for which it was provided.
Information We Collect by Automated Means
Log Files
The site uses your IP address to help diagnose server problems, and to administer our website. We use your IP addresses to analyze trends and gather broad demographic information for aggregate use.
Every time you access our Site, some data is temporarily stored and processed in a log file, such as your IP addresses, the browser types, the operating systems, the recalled page, or the date and time of the recall. This data is only evaluated for statistical purposes, such as to help us diagnose problems with our servers, to administer our sites, or to improve our Services.
Do Not Track
Your browser or device may include “Do Not Track” functionality. Our information collection and disclosure practices, and the choices that we provide to customers, will continue to operate as described in this Privacy Policy, whether or not a “Do Not Track” signal is received.
HOW WE SHARE YOUR INFORMATION
We may share your personal data with third parties only in the ways that are described in this Privacy Policy. We do not sell, rent, or lease your personal data to third parties, and We does not transfer your personal data to third parties for their direct marketing purposes.
We may share your personal data with third parties as follows:
With service providers under contract to help provide the Services and assist us with our business operations (such as our direct marketing, payment processing, fraud investigations, bill collection, affiliate and rewards programs)
As required by law, such as to comply with a subpoena, or similar legal process, including to meet national security or law enforcement requirements
When we believe in good faith that disclosure is necessary to protect rights or safety, investigate fraud, or respond to a government request
With other users of the Services that you interact with to help you complete a transaction
There may be other instances where we share your personal data with third parties based on your consent.
HOW WE STORE AND SECURE YOUR INFORMATION
We retain your information for as long as your account is active or as needed to provide you Services. If you wish to cancel your account, please contact us middleland@protonmail.com. We will retain and use your personal data as necessary to comply with legal obligations, resolve disputes, and enforce our agreements.
All you and our data are stored in the server in the United States, we do not sales or transfer your personal data to the third party. All information you provide is stored on a secure server, and we generally accepted industry standards to protect the personal data we process both during transmission and once received.
YOUR RIGHTS/OPT OUT
You may correct, update, amend, delete/remove, or deactivate your account and personal data by making the change on your Blog on www.themiddleland.com or by emailing middleland@protonmail.com. We will respond to your request within a reasonable timeframe.
You may choose to stop receiving Join Talents newsletters or marketing emails at any time by following the unsubscribe instructions included in those communications, or you can email us at middleland@protonmail.com
LINKS TO OTHER WEBSITES
The Middle Land include links to other websites whose privacy practices may differ from that of ours. If you submit personal data to any of those sites, your information is governed by their privacy statements. We encourage you to carefully read the Privacy Policy of any website you visit.
NOTE TO PARENTS OR GUARDIANS
Our Services are not intended for use by children, and we do not knowingly or intentionally solicit data from or market to children under the age of 18. We reserve the right to delete the child’s information and the child’s registration on the Sites.
PRIVACY POLICY CHANGES
We may update this Privacy Policy to reflect changes to our personal data processing practices. If any material changes are made, we will notify you on the Sites prior to the change becoming effective. You are encouraged to periodically review this Policy.
HOW TO CONTACT US
If you have any questions about our Privacy Policy, please email middleland@protonmail.com
TestI am a description. Click the edit button to change this text.
New Programs Added to Your Plan
March 2, 2023
The Michelin brothers created the guide, which included information like maps, car mechanics listings, hotels and petrol stations across France to spur demand.
The guide began to award stars to fine dining restaurants in 1926.
At first, they offered just one star, the concept was expanded in 1931 to include one, two and three stars. One star establishments represent a “very good restaurant in its category”. Two honour “excellent cooking, worth a detour” and three reward “exceptional cuisine, worth a
AI, Society, and Democracy: Just Relax
(Photo: The Digitalist Papers)
By John H. Cochrane
The author argues that law and regulation have never diagnosed and prevented the social, political, and economic ills of new technology. AI is no different. AI regulation poses a greater threat to democracy than AI itself, as governments are anxious to use regulation to censor information. Free competition in civil society, media, and academia, not preemptive regulation, will address any ill effects of AI, as it has for previous technological revolutions.
“AI poses a threat to democracy and society. It must be extensively regulated.”
Or words to that effect. It is a common sentiment.
They must be kidding.
Have the chattering classes—us—speculating about the impact of new technology on economics, society, and politics, ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, nobody has ever foreseen the outcome, and especially the social and political consequences of new technology. Even with the benefit of long hindsight, do we have any historical consensus on how these and other past technological innovations affected the profound changes in society and government that we have seen in the last few centuries? Did the industrial revolution advance or hinder democracy?
Sure, in each case one can go back and find a few Cassandras who made a correct prediction—but then they got the next one wrong. Before anyone regulates anything, we need a scientifically valid and broad-based consensus.
Have people ever correctly forecast social and political changes, from any set of causes? Representative democracy and liberal society have, in their slow progress, waxed and waned, to put it mildly. Did our predecessors in 1910 see 70 years of communist dictatorship about to envelop Russia? Did they understand in 1925 the catastrophe waiting for Germany?
Society is transforming rapidly. Birth rates are plummeting around the globe. The U.S. political system seems to be coming apart at the seams with unprecedented polarization, a busting of norms, and the decline of our institutions. Does anyone really know why?
The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong. Thomas Malthus predicted, plausibly, that the technological innovations of the late 1700s would lead to widespread starvation. He was spectacularly wrong. Marx thought industrialization would necessarily lead to immiseration of the proletariat and communism. He was spectacularly wrong. Automobiles did not destroy American morals. Comic books and TV did not rot young minds.
Our more neurotic age began in the 1970s, with the widespread view that overpopulation and dwindling natural resources would lead to an economic and political hellscape, views put forth, for example, in the Club of Rome report and movies like Soylent Green. (2) They were spectacularly wrong. China acted on the “population bomb” with the sort of coercion our worriers cheer for, to its current great regret. Our new worry is global population collapse. Resource prices are lower than ever, the U.S. is an energy exporter, and people worry that the “climate crisis” from too much fossil fuel will end Western civilization, not “peak oil.” Yet demographics and natural resources are orders of magnitude more predictable than whatever AI will be and what dangers it poses to democracy and society.
“Millenarian” stems from those who worried that the world would end in the year 1000, and people had better get serious about repentance for our sins. They were wrong then, but much of the impulse to worry about the apocalypse, then to call for massive changes, usually with “us” taking charge, is alive today.
Yes, new technologies often have turbulent effects, dangers, and social or political implications. But that’s not the question. Is there a single example of a society that saw a new developing technology, understood ahead of time its economic effects, to say nothing of social and political effects, “regulated” its use constructively, prevented those ill effects from breaking out, but did not lose the benefits of the new technology?
There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese Treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then, the emperors, foreseeing social and political change, “threats to their power from merchants,” (what we might call steps toward democracy) “banned oceangoing voyages in 1430.” (3) The Europeans moved in.
Genetic modification was feared to produce “frankenfoods,” or uncontrollable biological problems. As a result of vague fears, Europe has essentially banned genetically modified foods, despite no scientific evidence of harm. GMO bans, including vitamin A-enhanced rice, which has saved the eyesight of millions, are tragically spreading to poorer countries. Most of Europe went on to ban hydraulic fracking. U.S. energy policy regulators didn’t have similar power to stop it, though they would have if they could. The U.S. led the world in carbon reduction, and Europe bought gas from Russia instead. Nuclear power was regulated to death in the 1970s over fears of small radiation exposures, greatly worsening today’s climate problem. The fear remains, and Germany has now turned off its nuclear power plants as well. In 2001, the Bush administration banned research on new embryonic stem cell lines. Who knows what we might have learned.
Climate change is, to many, the current threat to civilization, society, and democracy (the latter from worry about “climate justice” and waves of “climate refugee” immigrants). However much you believe the social and political impacts—much less certain than the meteorological ones—one thing is for sure: Trillion dollar subsidies for electric cars, made in the U.S., with U.S. materials, U.S. union labor, and page after page of restrictive rules, along with 100% tariffs against much cheaper Chinese electric cars, will not save the planet—especially once you realize that every drop of oil saved by a new electric car is freed up to be used by someone else, and at astronomical cost. Whether you’re Bjorn Lomborg or Greta Thunberg on climate change, the regulatory state is failing.
We also suffer from narrow-focus bias. Once we ask “what are the dangers of AI?” a pleasant debate ensues. If we ask instead “what are the dangers to our economy, society, and democracy?” surely a conventional or nuclear major-power war, civil unrest, the unraveling of U.S. political institutions and norms, a high death-rate pandemic, crashing populations, environmental collapse, or just the consequences of an end to growth will light up the scoreboard ahead of vague dangers of AI. We have almost certainly just experienced the first global pandemic due to a human-engineered virus. It turns out that gain-of-function research was the one needing regulating. Manipulated viruses, not GMO corn, were the biological danger.
I do not deny potential dangers of AI. The point is that the advocated tool, the machinery of the regulatory state, guided by people like us, has never been able to see social, economic, and political dangers of technical change, or to do anything constructive about them ahead of time, and is surely just as unable to do so now. The size of the problem does not justify deploying completely ineffective tools.
Preemptive regulation is even less likely to work. AI is said to be an existential threat, a fancier version of “the robots will take over,” needing preemptive “safety” regulation before we even know what AI can do, and before dangers reveal themselves.
Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. Those effects were addressed as they emerged, weighing costs against benefits. There has always been time to learn, to improve, to mitigate, to correct, and where necessary to regulate, once a concrete understanding of the problems has emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process? AI will follow the same path.
I do not claim that all regulation is bad. The Clean Air and Clean Water Acts of the early 1970s were quite successful. But consider all the ways in which they are so different from AI regulation. The dangers of air pollution were known. The nature of the “market failure,” classic externalities, was well understood. The technologies available for abatement were well understood. The problem was local. The results were measurable. None of those conditions is remotely true for regulating AI, its “safety,” its economic impacts, or its impacts on society or democratic politics. Environmental regulation is also an example of successful ex post rather than preemptive regulation. Industrial society developed, we discovered safety and environmental problems, and the political system fixed those problems, at tolerable cost, without losing the great benefits. If our regulators had required Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” rules, we would still be riding horses and hand-plowing fields.
Who will regulate?
Calls for regulation usually come in the passive voice (“AI must be regulated”), leaving open the question of just who is going to do this regulating.
We are all taught in first-year economics classes a litany of “market failures” remediable by far-sighted, dispassionate, and perfectly informed “regulators.” That normative analysis is not logically incorrect. But it abjectly fails to explain the regulation we have now, or how our regulatory bodies behave, what they are capable of, and when they fail. The question for regulating AI is not what an author, appointing him or herself benevolent dictator for a day, would wish to see done. The question is what our legal, regulatory, or executive apparatus can even vaguely hope to deliver, buttressed by analysis of its successes and failures in the past. What can our regulatory institutions do? How have they performed in the past?
Scholars who study regulation abandoned the Econ 101 view a half-century ago. That pleasant normative view has almost no power to explain the laws and regulations that we observe. Public choice economics and history tell instead a story of limited information, unintended consequences, and capture. Planners never have the kind of information that prices convey. (4) Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators. (5) They offer political support and a revolving door in return. When telephones, airlines, radio and TV, and trucks were deregulated in the 1970s, we found that all the stories about consumer and social harm, safety, or “market failures” were wrong, but regulatory stifling of innovation and competition was very real. Already, Big Tech is using AI safety fear to try again to squash open source and startups, and defend profits accruing to their multibillion dollar investments in easily copiable software ideas. (6) Seventy-five years of copyright law to protect Mickey Mouse is not explainable by Econ 101 market failure.
Even successful regulation, such as the first wave of environmental regulation, is now routinely perverted for other ends. People bring environmental lawsuits to endlessly delay projects they dislike for other reasons.
The basic competence of regulatory agencies is now in doubt. On the heels of the massive failure of financial regulation in 2008 and again in 2021, (7) the obscene failures of public health in 2020–2022, do we really think this institutional machinery can artfully guide the development of one of the most uncertain and consequential technologies of the last century?
And all of my examples asked regulators only to address economic issues, or easily measured environmental issues. Is there any historical case in which the social and political implications of any technology were successfully guided by regulation?
It is AI regulation, not AI, that threatens democracy.
Large Language Models (LLMs) are currently the most visible face of AI. They are fundamentally a new technology for communication, for making one human being’s ideas discoverable and available to another. As such, they are the next step in a long line from clay tablets, papyrus, vellum, paper, libraries, moveable type, printing machines, pamphlets, newspapers, paperback books, radio, television, telephone, internet, search engines, social networks, and more. Each development occasioned worry that the new technology would spread “misinformation” and undermine society and government, and needed to be “regulated.”
The worriers often had a point. Gutenberg’s moveable type arguably led to the Protestant Reformation. Luther was the social influencer of his age, writing pamphlet after pamphlet of what the Catholic Church certainly regarded as “misinformation.” The church “regulated” with widespread censorship where it could. Would more censorship, or “regulating” the development of printing, have been good? The political and social consequences of the Reformation were profound, not least a century of disastrous warfare. But nobody at the time saw what they would be. They were more concerned with salvation. And moveable type also made the scientific journal and the Enlightenment possible, spreading a lot of good information along with “misinformation.” The printing press arguably was a crucial ingredient for democracy, by allowing the spread of those then-heretical ideas. The founding generation of the U.S. had libraries full of classical and enlightenment books that they would not have had without printing.
More recently, newspapers, movies, radio, and TV have been influential in the spread of social and political ideas, both good and bad. Starting in the 1930s, the U.S. had extensive regulation, amounting to censorship, of radio, movies, and TV. Content was regulated, licenses given under stringent rules. Would further empowering U.S. censors to worry about “social stability” have been helpful or harmful in the slow liberalization of American society? Was any of this successful in promoting democracy, or just in silencing the many oppressed voices of the era? They surely would have tried to stifle, not promote, the civil rights and anti-Vietnam War movements, as the FBI did.
Freer communication by and large is central to the spread of representative democracy and prosperity. And the contents of that communication are frequently wrong or disturbing, and usually profoundly offensive to the elites who run the regulatory state. It’s fun to play dictator for a day when writing academic articles about what “should be regulated.” But think about what happens when, inevitably, someone else is in charge.
“Regulating” communication means censorship. Censorship is inherently political, and almost always serves to undermine social change and freedom. Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. (8) Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, speaking truth to power, that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies? Yes, uncensored communication can also be used by bad actors to spread bad ideas, but individual access to information, whether from shortwave radio, samizdat publications, text messages, Facebook, Instagram, and now AI, has always been a tool benefiting freedom.
Yes, AI can lie and produce “deepfakes.” The brief era in which a photograph or video was, by itself, evidence that something happened, because photographs and videos were hard to doctor, is over. Society and democracy will survive.
AI can certainly be tuned to favor one or the other political view. Look only at Google’s Gemini misadventure. (9) Try to get any of the currently available LLMs to report controversial views on hot-button issues, even medical advice. Do we really want a government agency imposing a single tuning, in a democracy in which the party you don’t support eventually might win an election? The answer is, as it always has been, competition. Knowing that AI can lie produces a demand for competition and certification. AI can detect misinformation, too. People want true information, and will demand technology that can certify if something is real. If an algorithm is feeding people misinformation, as TikTok is accused of feeding people Chinese censorship, (10) count on its competitors, if allowed to do so, to scream that from the rafters and attract people to a better product.
Regulation naturally bends to political ends. The Biden Executive Order on AI insists that “all workers need a seat at the table, including through collective bargaining,” and “AI development should be built on the views of workers, labor unions, educators, and employers.” (11) Writing in the Wall Street Journal, Ted Cruz and Phil Gramm report: “Mr. Biden’s separate AI Bill of Rights claims to advance ‘racial equity and support for underserved communities.’ AI must also be used to ‘improve environmental and social outcomes,’ to ‘mitigate climate change risk,’ and to facilitate ‘building an equitable clean energy economy.’” (12) All worthy goals, perhaps, but one must admit those are somewhat partisan goals not narrowly tailored to scientifically understood AI risks. And if you like these, imagine what the likely Trump executive order on AI will look like.
Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.
Economics
What about jobs? It is said that once AI comes along, we’ll all be out of work. And exactly this was said of just about every innovation for the last millennium. Technology does disrupt. Mechanized looms in the 1800s did lower wages for skilled weavers, while providing a reprieve from the misery of farmwork for unskilled workers. The answer is a broad safety net that cushions all misfortunes, without unduly dulling incentives. Special regulations to help people displaced by AI, or China, or other newsworthy causes are counterproductive.
But after three centuries of labor-saving innovation, the unemployment rate is 4%. (13) In 1900, a third of Americans worked on farms. Then the tractor was invented. People went on to better jobs at higher wages. The automobile did not lead to massive unemployment of horse-drivers. In the 1970s and 1980s, women entered the workforce in large numbers. Just then, the word processor and Xerox machine slashed demand for secretaries. Female employment did not crash. ATMs increased bank employment. Tellers were displaced, but bank branches became cheaper to operate, so banks opened more of them. AI is not qualitatively different in this regard.
One activity will be severely disrupted: Essays like this one. ChatGPT-5, please write 4,000 words on AI regulation, society, and democracy, in the voice of the Grumpy Economist…(I was tempted!). But the same economic principle applies: Reduction in cost will lead to a massive expansion in supply. Revenues can even go up if people want to read it, i.e., if demand is elastic enough. (14) And perhaps authors like me can spend more time on deeper contributions.
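The elasticity point above can be made concrete with a small sketch. Everything in it is an illustrative assumption, not from the essay: a constant-elasticity demand curve Q = k·P^(−ε) with an arbitrary scale constant k, under which revenue is R = P·Q = k·P^(1−ε), so a cost-driven price cut raises revenue exactly when demand is elastic (ε > 1).

```python
# Illustrative sketch (assumed demand curve, not from the essay):
# constant-elasticity demand Q = k * P**(-eps),
# so revenue R = P * Q = k * P**(1 - eps).
# If demand is elastic (eps > 1), cutting the price raises revenue;
# if inelastic (eps < 1), cutting the price lowers revenue.

def revenue(price: float, eps: float, k: float = 100.0) -> float:
    """Revenue at a given price under a constant-elasticity demand curve."""
    quantity = k * price ** (-eps)
    return price * quantity

# Elastic demand (eps = 2): halving the price doubles revenue.
print(revenue(10.0, eps=2.0))  # prints 10.0
print(revenue(5.0, eps=2.0))   # prints 20.0

# Inelastic demand (eps = 0.5): halving the price lowers revenue.
print(revenue(10.0, eps=0.5) > revenue(5.0, eps=0.5))  # prints True
```

This is the textbook sense in which a collapse in the cost of producing essays can leave total revenue unchanged or even higher, provided readers’ demand is elastic enough.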
The big story of AI will be how it makes workers more productive. Imagine you’re an undertrained educator or nurse practitioner in a village in India or Africa. With an AI companion, you can perform at a much higher level. AI tools will likely raise the wages and productivity of less-skilled workers, by more easily spreading around the knowledge and analytical abilities of the best ones.
AI is one of the most promising technical innovations of recent decades. Since social media of the early 2000s, Silicon Valley has been trying to figure out what’s next. It wasn’t crypto. Now we know. AI promises to unlock tremendous advances. Consider only machine learning plus genetics and ponder the consequent huge advances coming in health. But nobody really knows yet what it can do, or how to apply it. It was a century from Franklin’s kite to the electric light bulb, and another century to the microprocessor and the electric car.
A broad controversy has erupted in economics: whether frontier growth is over or dramatically slowing down because we have run out of ideas. (15) AI is a great hope this is not true. Historically, ideas became harder to find in existing technologies. And then, as it seemed growth would peter out, something new came along. Steam engines plateaued after a century. Then diesel, electric, and airplanes came along. As birthrates continue to decline, the issue is not too few jobs, but too few people. Artificial “people” may be coming along just in time!
Conclusion
As a concrete example of the kind of thinking I argue against, Daron Acemoglu writes,
We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow…
We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for. (16)
The first paragraph is correct. But the logical implication is the converse—if relations are “complex” and consequences “unforeseen,” the machinery of our political and regulatory state is incapable of doing anything about it. The second paragraph epitomizes the fuzzy thinking of passive voice. Who is this “we”? How much more “attention” can AI get than the mass of speculation in which we (this time I mean literally we) are engaged? Who does this “getting”? Who is to determine “proper balance”? Balancing “pro-innovation public policies and democratic input” is Orwellianly autocratic. Our task was to save democracy, not to “balance” democracy against “public policies.” Is not the effect of most “public policy” precisely to slow down innovation in order to preserve the status quo? “We” not “leave[ing] it to tech entrepreneurs” means a radical appropriation of property rights and rule of law.
What’s the alternative? Of course AI is not perfectly safe. Of course it will lead to radical changes, most for the better but not all. Of course it will affect society and our political system, in complex, disruptive, and unforeseen ways. How will we adapt? How will we strengthen democracy, if we get around to wanting to strengthen democracy rather than the current project of tearing it apart?
The answer is straightforward: As we always have. Competition. The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur. Relax. It’s going to be great.
Footnotes
(1) I thank Angela Aristidou, Eugene Volokh, and an anonymous reviewer for helpful comments.
(2) Donella Meadows, Dennis Meadows, Jørgen Randers, and William Behrens, Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind (New York: Universe Books, 1972), https://www.donellameadows.org/wp-content/userfiles/Limits-to-Growth-digital-scan-version.pdf; Soylent Green, directed by Richard Fleischer (1973; Beverly Hills, CA: Metro-Goldwyn-Mayer).
(3) Angus Deaton, The Great Escape: Health, Wealth, and the Origins of Inequality (Princeton University Press, 2013), https://press.princeton.edu/books/hardcover/9780691153544/the-great-escape.
(4) See Friedrich Hayek, “The Use of Knowledge in Society,” American Economic Review 35 (September 1945): 519–30, https://www.jstor.org/stable/1809376.
(5) See George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (Spring 1971): 3–21, https://doi.org/10.2307/3003160.
(6) See Martin Casado and Katherine Boyle, “AI Talks Leave ‘Little Tech’ Out,” Wall Street Journal, May 14, 2024, https://www.wsj.com/articles/ai-talks-leave-little-tech-outhomeland-security-adversaries-open-source-board-46e3232d.
(7) See John H. Cochrane and Amit Seru, “Ending Bailouts, at Last,” Journal of Law, Economics and Policy 19, no. 2 (2024): 169–193, https://www.johncochrane.com/research-all/end-bailouts.
(8) Murthy v. Missouri, 603 U.S. _____ (2024).
(9) Megan McArdle, “Female Popes? Google’s Amusing AI Bias Underscores a Serious Problem,” Washington Post, February 27, 2024, https://www.washingtonpost.com/opinions/2024/02/27/google-gemini-bias-race-politics/.
(10) Zachary Evans, “Social Media App TikTok Censors Anti-China Content,” National Review, September 25, 2019, https://www.nationalreview.com/news/social-mediaapp-tiktok-censors-anti-china-content.
(11) Exec. Order No. 14110, 88 Fed. Reg. 75191 (October 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safesecure-and-trustworthy-development-and-use-of-artificial-intelligence/.
(12) Ted Cruz and Phil Gramm, “Biden Wants to Put AI on a Leash,” Wall Street Journal, March 25, 2024, https://www.wsj.com/articles/biden-wants-to-put-artificial-intelligence-on-a-leash-progressive-regulation-45275102.
(13) “Unemployment Rate [UNRATE], May 2024,” U.S. Bureau of Labor Statistics, retrieved from FRED, Federal Reserve Bank of St. Louis, July 5, 2024, https://fred.stlouisfed.org/series/UNRATE.
(14) For more on this point, see John Cochrane, “Supply, Demand, AI and Humans,” The Grumpy Economist (blog), April 26, 2024, https://www.grumpy-economist.com/p/supply-demand-ai-and-humans.
(15) See the excellent, and troubling, analysis in Robert J. Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War (Princeton: Princeton University Press, 2017) and Nick Bloom, John Van Reenen, Charles Jones, and Michael Webb, “Are Ideas Getting Harder to Find?,” American Economic Review 110, no. 4 (April 2020): 1104–1144, https://www.aeaweb.org/articles?id=10.1257/aer.20180338.
(16) Daron Acemoglu, “Are We Ready for AI Creative Destruction?,” Project Syndicate, April 9, 2024, https://www.project-syndicate.org/commentary/ai-age-needs-more-nuanced-view-of-creative-destruction-disruptive-innovation-by-daron-acemoglu-2024-04.
Source: The Digitalist Papers