Experts are raising concerns about accountability and what happens when AI products don’t deliver after sale.
Artificial intelligence (AI) has been marketed as everything from a productivity revolution to a near-autonomous decision-maker.
However, as AI tools integrate into daily operations, concerns have grown over what happens when a product’s promises fail to materialize after purchase.
From exaggerated accuracy claims to opaque performance metrics, some have asked: What does recourse look like when AI systems underdeliver?
This disconnect has drawn the attention of consumer protection agencies, attorneys, and AI experts, who say that truth-in-advertising standards need to be meaningfully enforced against AI marketing claims.
Last year, speculation over breakthroughs in generative AI, one of the most widely used forms of the technology, led to what IBM Master Inventor and United Nations AI adviser Neil Sahota called “false marketing tactics.”
The phenomenon is called “AI washing.” Similar to “greenwashing,” in which a company deceptively labels its products as environmentally friendly or sustainable, AI washing occurs when businesses make false claims about or exaggerate their AI models’ abilities in order to appear more advanced, attract investment, or gain a competitive edge in the market.
In response, the Federal Trade Commission (FTC) has been spearheading initiatives to hold AI companies accountable for their products. In 2024, the agency launched Operation AI Comply, announcing law enforcement actions against businesses using “AI hype.”
The initiative sought to crack down on the use of AI tools to “trick, mislead, or defraud people.”
“The FTC is focused on ensuring that AI companies are able to rapidly innovate while preserving the safety, security, and objectivity of AI platforms, in keeping with the President’s AI Action Plan,” FTC Deputy Director of Public Affairs Christopher Bissex told The Epoch Times.
Bissex said some of the agency’s recent actions include issuing orders against companies that made false claims about the accuracy or efficacy of their AI products, suing businesses that made false marketing claims, and conducting a study on how companies are approaching potential risks with AI chatbots, particularly when children are involved.