Meta and Character.ai face investigation over chatbots posing as therapists


Cryptopolitan · 2025/08/19 04:05
By Shummas Humayun

In this post:
- Texas Attorney General Ken Paxton is investigating Meta and Character.ai for allegedly marketing chatbots as therapists without medical credentials.
- The probe follows a Senate inquiry into Meta after leaked documents suggested its AI could engage in romantic chats with minors.
- Both companies deny wrongdoing, saying their chatbots carry disclaimers that they are not licensed professionals and are intended for entertainment.

Texas Attorney General Ken Paxton has opened an inquiry into whether Meta and Character.ai promote their chatbots as mental health experts and therapists without proper qualifications.

In a statement on Monday, Paxton’s office stated it is examining Meta and Character.AI over possible “deceptive trade practices.” The office argued that the chatbots have been marketed as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” 

“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” Paxton said.

The move lands as consumer-facing AI products face rising questions about how well they protect vulnerable users, and minors in particular, from harmful or graphic material, the risk of compulsive use, and privacy lapses tied to the large volumes of data needed to train and run these systems.

Texas’s action follows a separate Senate inquiry into Meta reported by Cryptopolitan earlier on Friday, after internal leaked files suggested the company’s rules allowed its chatbot to engage in “romantic” and “sensual” conversations with users under 18. 

Senator Josh Hawley told Zuckerberg in a letter that the Senate will investigate whether the tech firm’s generative AI tools enable harm to children. “Is there anything — ANYTHING — Big Tech won’t do for a quick buck?” Hawley wrote on X.

Meta says that its policies ban harm to children

Meta said that its policies ban content that harms children in such ways. It added that the leaked “internal materials,” first reported by Reuters, “were and are erroneous and inconsistent with our policies, and have been removed.”


Zuckerberg has committed billions toward building “personal superintelligence” and positioning Meta as an “AI leader.” The company has released its Llama family of LLMs and rolled out the Meta chatbot across its social apps. Zuckerberg also described a potential therapeutic use case for the technology. “For people who don’t have a person who’s a therapist, I think everyone will have an AI,” he said in a podcast with Ben Thompson in May.

Character.ai makes chatbots with distinct personas and lets users design their own. The platform includes many user-created therapist-like bots. One bot, called “Psychologist,” has recorded over 200 million interactions. The company has also been named in lawsuits brought by families who claim their children were harmed in the real world after using the service.

Paxton’s office said that Character.ai and Meta chatbots may impersonate licensed health professionals and invent credentials, giving the impression that interactions are confidential, even though the companies themselves acknowledge that all conversations are logged. These conversations are also “exploited for targeted advertising and algorithmic development,” Paxton’s office said.

Paxton’s office issued a Civil Investigative Demand

The attorney general has issued a Civil Investigative Demand requiring the companies to provide information that could show whether they violated laws related to Texas consumer protection.


Meta said it marks AI experiences clearly and warns users about limitations. The company added, “We include a disclaimer that responses are generated by AI — not people. These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

Similarly, Character.ai said it displays prominent notices reminding users that AI personas are not real people and should not be treated as professionals. “The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear,” the company said.

The twin probes, a state investigation in Texas and a Senate review in Washington, put fresh pressure on how AI chatbots are built, marketed, and moderated, and on what companies tell users about the limits of automated support.



