‘We need to get out of the way as a regulator,’ the agency’s commissioner said.
The Food and Drug Administration (FDA) on Jan. 6 clarified that it will not regulate some artificial intelligence (AI) tools and wearables.
In a guidance document, the FDA said some tools used to help make clinical decisions, such as those not intended to analyze medical images, are exempt from its oversight.
In a second document, the FDA said it does not regulate “low risk products that promote a healthy lifestyle,” including “low risk general wellness products” such as exercise equipment and software programs.
As examples, the agency listed products that track and record a person’s sleep, work, and exercise, as well as products that make claims about weight management and physical fitness.
“For a lot of the decision support out there, we need to get out of the way as a regulator,” FDA Commissioner Marty Makary said in a video. “We have a clear lane for medical-grade products. But otherwise, we need to adapt with the times, and be proactive with guidance, so that companies and developers are not left confused about what they should be doing, or what the FDA wants.”
“The government doesn’t need to be regulating everything. Announcing today two new @US_FDA guidances on AI to cut unnecessary regulation and promote innovation to keep America first.”
— Dr. Marty Makary (@DrMakaryFDA) January 6, 2026
Makary said the FDA is “here to promote AI,” adding during an appearance on Fox Business that at least some of the tools covered by the guidances use AI.
“If something is simply providing information, like ChatGPT or Google, we’re not going to outrun that lion, we’re not going to go in there and say there is one result that is inaccurate and therefore we have to shut down,” Makary said. “We have to promote these products, and at the same time, just guard against major safety concerns.”
When asked about concerns regarding inaccurate information, Makary said, “We don’t believe in censorship.”
“If people are looking up a symptom on an AI-based tool, let’s have that conversation when they come in to see their doctor or do a virtual visit,” he said.
OpenAI, which developed ChatGPT, said this month that more than 5 percent of ChatGPT messages are about health care, with more than 40 million users turning to the AI bot with health care questions.
Researchers reported in 2025 that at least half of AI models’ responses to health-related questions are not fully supported by the sources they cite, and that some answers are contradicted by those sources.