Health chatbots could pave the way for ‘AI privilege’ in court

Last July, OpenAI CEO Sam Altman confirmed to podcast host Theo Von that conversations with an AI assistant are not afforded the same legal protections as conversations with a human lawyer.
“Talking to AI should be like talking to a lawyer or a doctor. I hope the public will understand this soon,” Altman posted on X.
The CEO has repeatedly called for stronger privacy protections for his chatbot’s conversations with users, as states crack down on AI bots marketed as therapists or legal professionals.
But user privacy isn’t the only reason people like Altman want a strong shield between chatbot conversations and the court, legal experts tell Mashable; there’s also self-interest at play. If LLM conversations remain out of the courts’ reach, that protects not only AI users, but the companies behind them, too. In fact, Altman’s comments to Von may have been prompted by OpenAI’s own legal troubles: Courts have ordered the AI giant to preserve and potentially hand over logs of user conversations in legal discovery, an obligation that could be avoided if AI were viewed in the eyes of the court the same way as a therapist, doctor, or lawyer.
What better way to achieve that than to push for a cultural shift toward treating AI’s counsel with the same respect as a human professional’s, starting with our health?
What exactly is “AI privilege”?
“Privilege has a specific meaning in law and jurisprudence,” explained Melodi Dinçer, a senior staff attorney at The Tech Justice Law Project. There is attorney-client privilege, for example, as well as psychotherapist-patient privilege and spousal privilege. Communications with clergy, political votes, and trade or state secrets are also recognized by the courts. In all of these cases, the communication between the two parties is confidential and inadmissible in court.
States have their own privilege laws as well, which apply to state-law claims even when those cases are heard in federal court. Some states, Dinçer said, extend privilege to conversations between you and your general practitioner, in addition to your psychotherapist. But most states do not. All of this falls under Rule 501 of the Federal Rules of Evidence, Dinçer explained, which gives federal courts broad latitude to recognize privileges under the common law.
If you’re sued, for example, the other side of the case can’t introduce your therapist’s session notes into evidence, and it can’t introduce confidential conversations between you and your lawyer or your spouse.
“The whole purpose [of privilege] is being able to have frank and open conversations with these providers so they can give you the best advice,” Lily Li, a data privacy and AI risk management attorney and founder of Metaverse Law, told Mashable. “And from a public policy perspective, we want people to be honest and open with their lawyers, doctors, and psychotherapists.”
But these are protections attached to human relationships, not digital ones. If you believe an AI chatbot is as effective as a human therapist or legal advisor, should those communications be protected, too? Some AI developers, like Altman, say yes.
AI chatbots: Tools or people?
“The OpenAI copyright cases made this clear,” Li said, referring to the series of consolidated copyright cases, 16 in total, that publishers, artists, and writers have filed against OpenAI over the past few years. The issues at hand, including questions of fair use and how the data used to train LLMs should be handled, serve as a kind of temperature check on how courts conceive of AI.
Because of this, legal experts have been closely monitoring how courts categorize AI developers, their products, and the user data those products contain. Specifically, they are tracking how the law treats LLMs, including their training data and chat logs, in evidence and discovery.
We don’t want a situation where there’s a liability shield.
In February, a judge ruled that legal strategy documents generated by Anthropic’s Claude chatbot, then sent by a client to their lawyer, were not covered by attorney-client privilege. The decision made headlines. The judge in the case relied in part on Anthropic’s privacy policy to determine whether the conversations were protected. Because Anthropic’s terms don’t promise full confidentiality when using its public product, and because the communication did not take place with a licensed attorney under the understanding that it was confidential, the privilege did not apply. The documents were fair game.
But that same month, a different judge in a different, albeit similar, case ruled the opposite way. In that instance, attorney-client privilege applied to the AI-generated work because the output was “the product of the attorney’s work,” according to the judge. The chatbot was not a “person” in this use, but a tool used by the attorney and the client. That’s an important distinction, because if the chatbot were treated as a third-party entity, the client would have voluntarily disclosed confidential information in a way that waives the privilege.
These are just a couple of federal district court cases, involving what are known as matters of first impression. In reality, few have asked these questions before, and courts are in the early stages of answering them.
Meanwhile, the copyright lawsuits involving OpenAI have raised plenty of questions about access to data. Not long before the two decisions mentioned above, OpenAI fought a ruling that determined the company had waived its attorney-client privilege, which opened up access to previously privileged data. The company was ordered to turn over millions of anonymized ChatGPT chat logs and internal communications.
Companies like OpenAI have pushed back against such demands, citing user privacy. Judges who have ruled the data discoverable have reasoned that removing personally identifiable information, narrowing the scope of the logs, and restricting who can view the data allow such digital troves into court without exposing users. The legal field is rife with questions like these.
At every turn, AI developers are striving to keep their internal data private. And while user privacy is one of the most pressing issues of the AI age, codifying an AI privilege in a legal context creates a dilemma: How do we protect users’ private data without making it impossible to hold AI makers accountable?
“We don’t want a situation where there’s a liability shield,” said Li.
Mashable’s new series, AI + Health, explores how artificial intelligence is changing the landscape of health and wellness. We’ll explore how to keep your health data safe, get into using AI to interpret your blood work, learn how two women used AI to diagnose a dangerous form of heart disease, and much more.
Health AI is big business
Earlier this year, OpenAI launched ChatGPT Health, a new consumer-facing “mode” of its tentpole chatbot that aims to turn AI into a personal health guru. The company encourages users to upload their medical histories to improve the experience. The data is not currently protected under the Health Insurance Portability and Accountability Act (HIPAA), the nation’s leading health privacy regulation.
Other companies followed OpenAI’s lead, with Anthropic, Microsoft, and Amazon releasing their own health-focused chatbot counterparts, some HIPAA compliant and some not, in the months since. OpenAI competitor Google has long been investing in AI for medical use cases, particularly for doctors and researchers. Google-owned Fitbit offers personal health coaching through an integrated Gemini assistant, and the company has also built an “AI agent for diagnosis” called the Articulate Medical Intelligence Explorer (or AMIE).
Altman and his rivals are flocking to the profit potential of the healthcare sector, even as AI regulation is not yet on the horizon. In January, OpenAI made its first health acquisition, Torch, while Merge Labs, an Altman-backed biotech company focused on brain-computer interfaces (BCIs), received an $850 million valuation.
According to a recent report by Menlo Ventures, $1.4 billion flowed into healthcare-specific generative AI solutions in 2025. Most of that went to AI startups. And those figures only include clinical-grade products, tools made by companies like OpenEvidence and Hippocratic AI aimed at medical professionals, not consumer spending on products like ChatGPT Health.
A world with chatbot privilege?
Between non-clinical-grade products, health devices, and chatbots that aren’t HIPAA compliant, the lack of regulation and legal clarity worries many privacy experts. Some think the uncertain policy environment could prove useful to AI developers, who are launching their health AI products into a regulatory miasma in a strategic move to chase corporate profits and, once again, legal advantages.
As chatbots accumulate “private” conversations, many of the privileges under Rule 501 could come into play. In states where privilege extends to communications with your doctor, could AI “doctors” count, too? Or consider a murkier example posed by Dinçer: Suppose a user asks a chatbot how they contracted a sexually transmitted infection when their spouse tested negative. Could the query and the answer be introduced as evidence, or could they trigger another type of protection, like spousal privilege?
In a hypothetical world with sweeping AI privilege, or even one where chatbots are folded into existing privilege law, AI companies could try to keep damning evidence of harm out of court. For example, if an AI company were sued for misleading people about their health, plaintiffs might be unable to use internal records or chat logs containing people’s health information.
Perhaps, Dinçer suggests, as more users feed their medical records, X-rays, and other sensitive information into consumer-facing products, and as more AI companies tie themselves into the web of personally identifiable information and health technology, courts may grow more inclined to entertain the idea of a privilege extending to AI.
This may be part of the reason, aside from revenue, that companies are trying to instill the same kind of trust in AI assistants that we place in human experts. With many people already consulting AI for their healthcare needs, and companies like OpenAI already facing a slew of lawsuits, it’s no mystery why executives like Altman want to shield chatbot conversations from the eyes of lawyers and judges.
The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a doctor or other qualified health care provider about any questions you may have about a medical condition or health goals.
Disclosure: Ziff Davis, Mashable’s parent company, previously filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’ copyrights in training and operating its AI systems.



