We need to avoid a ‘ready, fire, aim!’ approach to AI regulation

The rush to regulate artificial intelligence (AI) began almost immediately after last fall's release of ChatGPT popularized the technology with the public.
Some industry insiders themselves called for a pause on development, a reminder that expertise in a field doesn't translate into an understanding of the perils of regulation. That appeal was followed by a White House AI Bill of Rights and an educational effort by Senate Majority Leader Chuck Schumer, D-N.Y.
Fears about AI include job displacement, data security and privacy, misinformation, autonomous defense systems mistakes, discrimination and bias, and an existential threat to humanity itself.
We’ve lived with all of these threats in different contexts, but is there something new that justifies regulating AI? And, if so, what are the costs to doing so?
It's imperative to demonstrate an actual market failure before regulating, and to make sure the costs of doing so don't outweigh the benefits.
Doomsday predictions are no substitute for proof of actual problems. Job displacement will certainly accompany AI integration across many industries, but so will new jobs and an elimination