Companion Chatbots: What Founders Need To Know Now

Building a companion chatbot? New global laws are coming fast. Here’s what founders need to know right now.

October 21, 2025

Why this matters:

“Companion” chatbots sit at the intersection of youth safety, mental health, privacy, product liability, and deceptive design. Regulators in California, the European Union, the United Kingdom, Australia, and Singapore have shifted their focus from watching AI development to writing rules. If your product builds parasocial relationships, simulates empathy, or sustains long-running one-to-one interactions, you and your company are now squarely in their sights.

Quick note on the regulatory landscape:

The United States does not yet have a comprehensive federal AI law. State laws, especially in California and Colorado, along with laws and regulations in the EU and other international jurisdictions, are already shaping the regulatory environment and offer a window into what is likely to happen in the United States over the next few years.

If you want help pressure-testing your roadmap, Vidar Law advises founders and executives on AI risk and regulation. Learn more or book a call.

TL;DR for busy executives

  • Clearly disclose that users are interacting with AI whenever a reasonable person could think they are chatting with a human. Err on the side of caution.
  • Prepare crisis and youth-safety flows now, including escalation to helplines and teen-safe defaults.
  • Build for scrutiny from consumer protection authorities. Expect enforcement on deception, unfair practices, data handling, and youth safety, even before new federal statutes arrive.

Need a fast audit of your product and marketing claims? Schedule a call.

Founder’s Overview: Design Principles That De-risk

  1. Design for minors, even if you do not target them. Implement age assurance, safer defaults, and session-level friction to reduce risky spirals.
  2. Treat self-harm as safety-critical. You need reliable detection, blocking, warm handoffs to hotlines, and well-tested playbooks; see the sketch after this list.
  3. Avoid deceptive anthropomorphism. If copy or UI implies human empathy or romantic attachment, you raise deception and manipulation risks.
  4. Minimize data, especially chat logs and embeddings. Plan for consent, short retention, and erasure.
  5. Do not accidentally drift into medical claims. If features diagnose, treat, or guide clinical decisions, you may trigger medical device rules.
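
To make principle 2 concrete, here is a minimal sketch in TypeScript of a crisis-response gate: detect, block the model's reply, and hand off warmly to a helpline. The pattern list, function and field names, and helpline copy are illustrative assumptions for this sketch, not a clinically validated system or a statement of what any statute requires.

```typescript
// Minimal sketch of a crisis-response gate. Patterns, thresholds, and helpline
// copy are illustrative placeholders, not a clinically validated detector.

type RiskLevel = "none" | "crisis";

interface SafetyVerdict {
  risk: RiskLevel;
  blockModelReply: boolean;
  userFacingMessage?: string; // shown instead of the model's reply when blocked
}

// Placeholder detector: a production system would pair a tuned classifier with
// human review and locale-specific resources, not a keyword list.
function assessSelfHarmRisk(userText: string): RiskLevel {
  const crisisPatterns = [/kill myself/i, /end my life/i, /suicide/i];
  return crisisPatterns.some((p) => p.test(userText)) ? "crisis" : "none";
}

// Detect, block the model reply, and warm-hand-off to a helpline.
function applyCrisisProtocol(userText: string): SafetyVerdict {
  if (assessSelfHarmRisk(userText) === "crisis") {
    return {
      risk: "crisis",
      blockModelReply: true,
      userFacingMessage:
        "It sounds like you may be going through something serious. " +
        "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988.",
    };
  }
  return { risk: "none", blockModelReply: false };
}
```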

Want this turned into an internal checklist for engineering and product teams? Contact Vidar Law.

Deep Dive: United States

California SB 243, the first state law focused on “companion” chatbots

What it requires: (1) in-product AI disclosure where users could be misled; (2) crisis-response protocols that detect and block self-harm content and signpost help; (3) minor-specific duties such as break reminders and blocking of sexual content for minors; and (4) annual public safety reporting to the Office of Suicide Prevention. Most of these duties begin in 2026.

Why it matters to founders: if you serve Californians (and if you operate in the US at all, you almost certainly do), treat SB 243 like a product specification. Build persistent AI labels, crisis macros and escalation, teen-mode defaults, and a reporting pipeline that counts referrals and explains safeguards.
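
One way to treat these duties as a product specification is to encode them as typed configuration and reporting records. The TypeScript sketch below is a minimal illustration; the interface names, fields, defaults, and event shape are assumptions for this sketch, not statutory language.

```typescript
// Illustrative shapes for the SB 243 duties described above. Field names and
// defaults are assumptions for this sketch, not statutory language.

interface CompanionSafetyConfig {
  persistentAiDisclosure: boolean; // always-visible "you are talking to an AI" label
  crisisProtocolEnabled: boolean;  // detect, block, and signpost help
  minorMode: {
    breakReminderMinutes: number;  // periodic break reminders for minors
    blockSexualContent: boolean;
  };
}

// One record per crisis referral, aggregated (counts only, no user identifiers)
// for the annual public report to the Office of Suicide Prevention.
interface CrisisReferralEvent {
  occurredAt: string;              // ISO 8601 timestamp
  surface: "text" | "voice";
  referredTo: string;              // e.g. "988 Suicide & Crisis Lifeline"
}

const teenDefaults: CompanionSafetyConfig = {
  persistentAiDisclosure: true,
  crisisProtocolEnabled: true,
  minorMode: { breakReminderMinutes: 60, blockSexualContent: true },
};

const countReferrals = (events: CrisisReferralEvent[]): number => events.length;
```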

FTC scrutiny under Section 5 of the FTC Act

The FTC is actively studying companion chatbots and can pursue cases of deception or unfairness. Expect questions about safety testing, age gating, marketing claims, and data practices. Build the appropriate documentation now.

COPPA for users under 13

If your service is child-directed or you have actual knowledge of users under 13, you need verifiable parental consent, child-friendly notices, data minimization, and deletion. Treat any embeddings and logs as personal information and protect them accordingly.
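
As an illustration of data minimization in practice, here is a minimal TypeScript sketch of a retention policy that treats chat logs and embeddings as personal information with short, enforced lifetimes. The type names, fields, and 30- and 90-day windows are assumptions chosen for this sketch, not legal guidance on what COPPA requires.

```typescript
// Minimal retention-policy sketch: chat logs and embeddings treated as personal
// information with short, enforced lifetimes. The 30- and 90-day windows are
// illustrative defaults, not legal guidance.

interface RetentionRule {
  dataType: "chat_log" | "embedding";
  maxAgeDays: number;
  deleteOnUserRequest: boolean;
}

const retentionPolicy: RetentionRule[] = [
  { dataType: "chat_log", maxAgeDays: 30, deleteOnUserRequest: true },
  { dataType: "embedding", maxAgeDays: 90, deleteOnUserRequest: true },
];

// True when a stored record has outlived its retention window and should be purged.
function isExpired(rule: RetentionRule, storedAt: Date, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - storedAt.getTime()) / (1000 * 60 * 60 * 24);
  return ageDays > rule.maxAgeDays;
}
```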

California bot disclosure law

If you use a bot to influence sales or voting, and a user could believe it is human, then you must disclose that it is a bot. Additionally, companion experiences that include upsells or advocacy features can be covered by this law.

When “companion” crosses into health

If any features diagnose, treat, or guide clinical decisions, the FDA frameworks for clinical decision support and software as a medical device may apply, and even wellness-oriented features can edge toward that line. Keep your wellness positioning tight, or plan a regulatory compliance pathway.

Unsure where your product lands under SB 243, COPPA, or FDA guidance? Book a strategy session.

Deep Dive: International

European Union

  • The AI Act: prohibits exploiting vulnerabilities due to age or disability in ways that materially distort behavior and cause or are likely to cause harm. It also requires transparency for conversational systems, labeling of synthetic content, and special disclosures for emotion recognition or biometric categorization.
  • The Digital Services Act: the Commission’s protection of minors guidance expects the following from companies: (1) age assurance; (2) teen-safe defaults; (3) recommender controls; and (4) robust reporting. The EDPB explains how these duties align with GDPR.
  • Product Liability Directive: software and AI qualify as products, which increases strict-liability exposure, including after model updates.

United Kingdom

  • Online Safety Act and Ofcom codes: platforms likely to be accessed by children must complete risk assessments and implement measures such as age checks, filtering of self-harm and pornography, and fast reporting.
  • ICO Children’s Code: it mandates privacy-by-default for teens and requires data protection impact assessments and profiling limits.

Australia

  • Social Media Minimum Age law: starting December 10, 2025, platforms classified as age-restricted social media must take reasonable steps to prevent accounts for users under 16. If your companion experience looks like social media messaging, assess your exposure and prepare controls.

Singapore

  • Online Criminal Harms Act and proposed Online Safety Commission: these regimes empower authorities to direct platforms to curb harmful content and give victims faster recourse. Companion chatbots embedded in social products may be captured.

South Korea, Japan, China

  • Korea: the AI Framework Act takes effect in January 2026, with transparency and risk-management expectations, especially for high-impact AI.
  • Japan: AI Promotion Act and business guidelines that emphasize governance, risk assessment, and disclosure practices.
  • China: Generative AI and Deep Synthesis rules that mandate content controls and labeling for public services and synthetic media.

Expanding globally in the next two quarters? Get a cross-border readiness plan. Talk to Vidar Law.

Non-AI-Specific Rules That May Still Apply

  • Consumer protection and dark patterns: the EU bans manipulative interfaces in platform contexts, and the FTC can act against deceptive anthropomorphism and unsafe defaults. Review features such as streaks, gifts, upsells, and retention hooks for compliance.
  • Privacy: GDPR in the EU and CCPA or CPRA in California govern profiling, retention, and teen privacy. Assume chat logs and embeddings are personal data.
  • Health claims: do not imply diagnosis or treatment without planning for medical device obligations.
  • Product liability: the EU’s Product Liability Directive extends strict liability to software and AI, which increases the importance of safety cases, post-market monitoring, and rollback plans.

Want a one page matrix of these rules mapped to your current features, along with owner and deadline fields? Request a custom checklist.

Will “Companion” Expand Beyond Chat?

Regulators focus on relationships and risks, not only on text interfaces. Expect coverage to extend to voice and avatar companions in apps and VR, agentic companions that proactively reach out and nudge behavior over time, social-media-style messaging features inside other apps, and wearables or ambient devices that sustain ongoing emotional interactions. A safe posture is to implement AI identity disclosure, teen-safe defaults, crisis playbooks, risk assessments, and incident documentation across text, voice, avatars, and agents. Even if you do not think you are building a companion chatbot, consider whether these rules reach you; not using the word “companion” for your product is not a safe harbor.

If your product uses voice, avatars, or proactive agents, we can review your flows for the next wave of rules. Start here.

What to do this quarter

  1. Scope exposure. Are users likely to be minors, could a reasonable user think the bot is human, and do flows veer near mental health?
  2. Harden the product. Review and test identity badges, crisis macros, teen mode, synthetic-media labels, and timeouts for risky spirals.
  3. Stand up governance. Build a Safety and Compliance Matrix that unifies SB 243, DSA or OSA, GDPR or COPPA, and any platform policies. Set owners and KPIs and prepare the artifacts you will need for California’s annual report; see the sketch after this list.
  4. Prepare for audits. The FTC’s study hints at the documentation regulators will request. Align your internal companion safety dossier now.
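
For teams that keep the Safety and Compliance Matrix in code or a schema rather than a spreadsheet, the TypeScript sketch below shows one possible shape. The fields, example row, owner, KPI, and dates are placeholders invented for this sketch, not a prescribed format.

```typescript
// Illustrative shape for the Safety and Compliance Matrix in step 3. The row
// shown is a placeholder; regulations, owners, KPIs, and dates are assumptions.

interface ComplianceRow {
  regulation: string;   // e.g. "SB 243", "DSA", "OSA", "GDPR", "COPPA"
  obligation: string;   // e.g. "persistent AI disclosure"
  feature: string;      // product surface that satisfies it
  owner: string;        // accountable team or person
  kpi: string;          // how you measure it
  deadline: string;     // ISO date
  evidence: string[];   // artifacts to keep for audits and annual reporting
}

const matrix: ComplianceRow[] = [
  {
    regulation: "SB 243",
    obligation: "Crisis referral reporting",
    feature: "Crisis macro + referral counter",
    owner: "Trust & Safety",
    kpi: "100% of crisis flags produce a referral event",
    deadline: "2026-01-01",
    evidence: ["referral counts", "safeguard descriptions"],
  },
];
```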

Need this turned into a PDF or a checklist for your PMs and engineers? We can help.

About the author

Chris Werner is the founder and managing partner at Vidar Law. He has advised founders and executives in emerging technology for more than 20 years, with a focus on AI regulation, privacy, and product risk. Learn more at vidarlaw.com or book a call at vidarlaw.com/contact.

Move Forward with Confidence

Whether you need ongoing counsel, help resolving a dispute, or support closing your next deal, Vidar is ready to step in. Schedule a consultation to get started.

Get in touch