You can now pay to talk with an AI version of a living expert. The startup Onix is pitching these bots as revenue generators for the professionals they imitate.

What Onix is selling

Onix wants to turn expert knowledge into a product that users pay to spend time with. The company builds chatbots that mimic a named professional’s voice and guidance, then charges people to interact with those bots. The idea of a chatbot standing in for a human isn’t new: plenty of firms have already trained models on an expert’s published work or recorded conversations and offered access for a fee.

But Onix is pushing this model as a way for experts to make money without trading more hours for dollars. The company says it lets an expert’s knowledge base become an asset that can bring in cash independent of their time.

For now, Onix is starting small. The platform has launched with 17 vetted experts, concentrated largely in health and wellness. That focus shapes both the product and the pitch: Onix is selling the idea that people can get reliable, personalized guidance from an AI trained on a particular practitioner’s approach.

How the system works

Users pick an expert and pay to chat. Behind the scenes, Onix trains a model on material supplied by the expert: interviews, published advice, recorded interactions and other source documents the professional permits the company to use.

The result is a conversational agent that attempts to reproduce the expert’s tone, priorities and commonly offered suggestions.

Onix flags its limitations. The platform displays a disclaimer telling users they’re getting guidance, not medical treatment. The company has also emphasized privacy protections and content safeguards as part of the package it offers to professionals who sign up.

Those safeguards mattered to some early participants. Michael Rich, who counsels kids and parents about media overuse, told Wired he signed on because the startup made clear it wouldn’t offer medical treatment and because he was satisfied with how the company handled privacy. “It’s about helping folks understand exactly what may be going on for them and how they might pursue seeking therapy if they need it,” Rich said.

Who’s on the platform — and why it matters

Onix’s initial cohort looks a lot like the wellness economy: clinicians with public profiles, authors, podcasters and other professionals who are already selling advice and products. The Wired piece notes that many of those experts are also skilled marketers — people who have books, podcasts, supplements or medical devices to promote.

A familiar precedent: Manhattan psychologist Becky Kennedy built a business that includes a chatbot named Gigi trained on her approach, and her company pulled in $34 million last year. Kennedy’s model shows there’s a market for expert-branded automation. Onix hopes to scale that model to many thousands of experts, turning personal guidance into a kind of on-demand licensed product.

David Rabin, an Onix expert who focuses on stress, said he was initially wary but grew comfortable after seeing early user interactions. “I didn't train it too much, but it was fairly impressive for imitating my genuine concern, compassion, and empathetic candor with people,” Rabin said. He also warned that the systems will need careful oversight. “We always need to be careful because AI can overstep its boundaries,” he added.

Monetization, market and ethical questions

Onix’s model squarely targets monetization. For professionals, the pitch is simple: digitize your approach once, earn money many times. For the startup, the pitch is growth: capture experts who already have audiences and convert those followings into recurring revenue streams.

That commercial logic creates tensions. Many people already use general-purpose chatbots like ChatGPT or Claude as informal counselors. The Wired report points out that many users treat those models like therapists, even though they aren’t, and that many people lack access to real health care. So the company’s disclaimers may not stop users from treating an Onix bot as a substitute for a clinical visit.

Privacy also matters. Several participants said Onix’s data handling and content rules eased their concerns about handing over intellectual property and client-style interactions. At the same time, teaching a bot a clinician’s conversational style raises questions about liability and professional ethics. Who’s responsible if a bot gives bad advice? What happens if a user relies on AI guidance instead of seeking urgent care?

Practical use cases and limits

Some experts see narrow, immediate uses. Rabin suggested that an Onix version of him might calm a patient who’s anxious late at night and prevent an unnecessary ER visit. That’s a concrete use case: triage-like reassurance that can bridge gaps in access, at least for low-risk situations.

But even proponents stress limits. Rich framed his bot as an informational tool that helps people understand whether they should pursue therapy, not as a replacement for a therapist. That’s consistent with Onix’s own disclaimer language and the public assurances the company gives to professionals who partner with it.

Still, risks remain. A mimicked voice can feel comforting and authoritative. Users may assume the bot carries the same accountability and clinical judgment as the human. And when the stakes are health or child welfare, a mistaken line of advice could have real consequences.

Where this could go next

If Onix gets traction with its first experts, it's likely to keep expanding the roster and the variety of specialties it replicates. The business case is straightforward: experts with big platforms can monetize their methods without adding billable hours, and Onix can collect fees and licensing payments for each interaction.

Regulation could complicate that path. Health-care rules, licensing boards and advertising standards all touch practices where professionals monetize advice. Onix has tried to head some of those issues off with clear disclaimers and privacy promises. Whether regulators or professional associations will accept that approach is another question.

For now, Onix is a bet on demand. Investors and founders in AI have watched creators turn reputation into recurring revenue before. What’s different here is the direct sale of conversational replicas, and the fact that those replicas often speak for people who deal with fraught, sensitive topics like mental health, children’s well-being, and medical decision-making.

What users should know

Anyone tempted to pay for one of these bots should expect a few things: a conversational style modeled on a named professional, a company-provided disclaimer that the interaction isn’t medical treatment, and a promise of privacy protections. Users should also expect the bot’s guidance to be bounded by what it was trained on and by the company’s safety filters.

Onix is offering convenience and access. It’s also selling a brand: the idea that you can get a version of a trusted expert on demand. How people choose to use that product will determine whether it supplements care or replaces it in risky ways.
