The New York Times recently reported on the latest calls for more government regulation of artificial intelligence, even on an international scale, due to perceived risks. Such commentary has become ubiquitous in the media and among policymakers in D.C. who, while well-intentioned, seem to lose sight of the broader picture: why developing this technology is so critically important in the first place. AI has the potential to be one of the greatest life-preservers in human history, and for that reason, the risk of delaying its deployment must be weighed against the concerns of those calling for more regulation.
It is easy to take for granted that the three-point seatbelt, now standard in all automobiles, was invented as recently as 1959. Even more remarkable is that this innovation came some five decades after Henry Ford introduced the Model T, America’s first mass-produced automobile. Strange as it may seem today, cars existed for decades without seatbelts.
It is estimated that the modern seatbelt, designed by Volvo engineer Nils Bohlin, has saved over one million lives. Not surprisingly, it is significantly safer to travel by car today as a result of his invention.
Crucial here is that the government had as little to do with the invention of the automobile as it did with the seatbelt that eventually made cars dramatically safer. One supposes that if the AI doomsayers had been alive back in 1908, they would have pushed for the government to ban cars “until they are safe.”
This leads to the realization that new technology inherently comes with risk. However, through trial and error, humans progress, and safety improves. Those expecting the government to eliminate any risk associated with the development of AI are attributing to it a level of wisdom it simply does not have.
Much has been written about the potential for AI to further reduce automobile deaths through advanced driver assistance systems (ADAS) designed to avoid collisions and provide earlier detection of vehicle or human errors. However, the lifesaving potential of AI extends far beyond transportation.
Leading companies, including Microsoft, have begun piloting AI programs in rural healthcare facilities. These hospitals are considered prime targets for hackers: thin operating margins leave them without adequate security controls, and they tend to rely on antiquated systems to house personally identifiable information (PII).
Stealing personal medical records and disrupting critical healthcare systems give hackers maximum leverage, given the severity of the consequences on either end. The number of ransomware attacks against the US healthcare sector increased by 128% year over year in 2023 alone, and studies reveal that such attacks gravely reduce hospital admissions and endanger lives. AI has shown potential to strengthen the cyber defenses of rural hospitals, reducing the frequency of these attacks and the economic impact of ransomware payments, which carry the additional risk of funding foreign adversaries.
AI will further revitalize rural hospitals by assisting with services as basic as bookkeeping. By helping to catch coding errors that delay insurance reimbursements and by weeding out other costly inefficiencies, AI has enormous potential to streamline administrative tasks. This may not be the sexiest use case for AI, but such cost savings widen the razor-thin margins of these rural facilities, allowing them to provide a higher level of care to patients.
AI also has the potential to aid in the discovery of new drugs, create better early warning systems for at-risk patients, provide earlier detection of cancers, tumors, and other diseases, and identify fraud, including in taxpayer-funded programs such as Medicare.
Taken together, AI’s impact on the healthcare sector will quite literally be a lifesaver, and on a scale that dwarfs that of the seatbelt.
Policymakers should keep this in mind when weighing the concerns of those calling for more regulation.
We don’t want to move too slowly on AI. Our lives may depend on it.