Even the FDA Doesn’t Know How to Regulate AI

(RightWing.org) – The sudden rise of artificial intelligence (AI) has left politicians and regulators baffled. In the space of a year, AI has gone from being a few experimental programs in tech labs to a powerful new force on the internet. Now lawmakers are struggling to decide if — and how — it needs to be regulated. The problem is, they don’t know where to start.

On November 17, Dr. Robert Califf, the commissioner of the Food and Drug Administration (FDA), spoke to Yahoo! Finance about the challenges his agency faces. Some of those are challenges the FDA has been dealing with for a long time, such as making sure new drugs are safe while getting them onto the market, and into patients, as quickly as possible. One, however, is new: medical regulators haven't had to deal with AI before, but it's quickly becoming important.

New AI systems like the popular ChatGPT are "large language models" (LLMs), which use machine learning: trained on huge quantities of text, they learn the patterns in that data and use them to generate new content. That content can be a news article, an instruction manual, or a piece of artwork; it could also be a medical diagnosis.
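To make that concrete, here is a minimal sketch of how such a model generates text, using the open-source Hugging Face transformers library and the small GPT-2 model as a freely available stand-in for a system like ChatGPT. The model, prompt, and settings are illustrative assumptions, not anything the FDA has reviewed.

```python
# A minimal text-generation sketch: a small pretrained language model
# continues a prompt based on statistical patterns learned from its
# training data. GPT-2 is used here only as a tiny, freely available
# stand-in for much larger systems like ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A large language model is a program that"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

The model isn't looking anything up; it is predicting, word by word, what text is most likely to follow the prompt, which is why the same machinery can produce an article, a manual, or something that reads like a diagnosis.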

Modern medicine generates vast amounts of data, and an AI can process and use it. For example, if you give it a patient's symptoms, it can suggest what it thinks is wrong with them based on records of thousands of previous cases. The FDA has already cleared hundreds of medical devices that use AI, but it isn't sure how to build on that. Califf, who used to advise Google on medical strategy, admitted that it's "something we don't really know how to regulate." He went on to talk about taking an "ecosystem approach" and putting "guardrails" around the technology. That sounds cautious and sensible, but it is also worryingly short on detail.
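The idea of matching a new patient against thousands of prior cases can be illustrated with a toy classifier. Everything below, including the symptom columns and diagnoses, is invented for illustration and has nothing to do with how any FDA-cleared device actually works.

```python
# Toy illustration of diagnosis-by-precedent: represent each prior case
# as a vector of symptom flags, then match a new patient against the
# most similar past cases. All data here is invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Columns: fever, cough, fatigue, sore_throat (1 = present, 0 = absent)
prior_cases = [
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
]
diagnoses = ["flu", "flu", "anemia", "strep"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(prior_cases, diagnoses)

# A new patient: fever and sore throat, no cough or fatigue.
new_patient = [[1, 0, 0, 1]]
print(model.predict(new_patient))  # closest match among past cases
```

Real medical AI systems are far more complex than this, which is exactly the regulator's problem: the more elaborate the model, the harder it is to say why it reached a given answer.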

Copyright 2023, RightWing.org