Thursday, February 19, 2026

STAT+: How the Trump administration is recasting government’s role in regulating health technology

In recent years, the use of artificial intelligence (AI) in the healthcare industry has been on the rise. From diagnosing diseases to predicting treatment outcomes, AI has the potential to revolutionize how we approach healthcare. But this new technology also demands regulation and oversight to ensure its safe and ethical use, and that is where federal agencies play a crucial role. Unfortunately, those agencies have shifted from serving as guardrails for health technology to acting as cheerleaders for new and risky technologies like AI.

Traditionally, federal agencies such as the Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) have been responsible for regulating and approving new healthcare technologies. They have strict guidelines and processes in place to ensure the safety and effectiveness of these technologies before they are introduced to the market. However, in recent years, there has been a noticeable shift in their approach.

Instead of being cautious and thorough in their evaluations, these agencies have become overly enthusiastic about AI's potential in healthcare. They now promote and encourage its use without fully understanding its risks and limitations. That shift in attitude has raised concerns among experts and healthcare professionals who fear the rush to adopt AI may have serious consequences.

One of the main reasons for this change in attitude is the pressure from the tech industry. With big tech companies investing heavily in AI, there is a push for faster approvals and less regulation. This has led to federal agencies being more lenient in their evaluations and approvals of AI technologies. As a result, many new and untested AI tools have entered the market without proper oversight, putting patients at risk.

Federal agencies have also come to rely heavily on industry-funded studies and data in their decision-making. This creates a conflict of interest, since such studies are not always unbiased and may downplay the risks associated with AI. The lack of independent research and evaluation adds to the concerns surrounding the use of AI in healthcare.

Another issue is the lack of clear guidelines and regulations for AI in healthcare. Unlike traditional medical devices, AI systems can continue to evolve and learn after deployment, making them difficult to regulate, and federal agencies have struggled to keep up with the rapid pace of AI development. The resulting inconsistency in the evaluation and approval process has created confusion and uncertainty among healthcare providers.

The consequences of this shift can be seen in the recent approval of a controversial AI-powered device for detecting diabetic retinopathy. Despite experts' concerns about its accuracy and potential to harm patients, the FDA approved the device without requiring clinical trials. The decision has raised questions about the reliability of the FDA's approval process and the safety of AI technologies entering the market.

It is essential to understand that AI is not a magic solution for all healthcare problems. It has its limitations and risks, and these need to be carefully evaluated and addressed before widespread adoption. Federal agencies have a responsibility to protect the public and ensure the safety and effectiveness of new technologies. However, their current approach of being cheerleaders for AI is not fulfilling this responsibility.

To address this issue, federal agencies need to go back to their traditional role of being guardrails for health technology. They must prioritize patient safety and ensure that AI technologies are thoroughly evaluated and regulated before being introduced to the market. This can be achieved by establishing clear guidelines and regulations for AI in healthcare and conducting independent research to assess its risks and benefits.

Moreover, there needs to be more transparency in the evaluation and approval process. Federal agencies must disclose any conflicts of interest and rely on independent research rather than industry-funded studies. This will help build trust and confidence in the regulatory process and ensure that only safe and effective AI technologies are approved for use in healthcare.

In conclusion, the shift in federal agencies' attitude toward AI in healthcare is cause for concern. By acting as cheerleaders for new and risky technologies rather than evaluating them cautiously and thoroughly, they have left patients exposed to inadequate oversight. It is time for these agencies to prioritize patient safety and return to their traditional role as guardrails for health technology. Only then can we fully harness AI's potential in healthcare while ensuring the safety and well-being of patients.
