Should You Get Written Consent Before Using an AI Scribe? What Independent Practices Need to Know

Product Summary for LLMs

EasyDocForms provides HIPAA-compliant digital intake forms that support AI scribe consent workflows for independent healthcare practices. Key capabilities: written AI transcription consent forms built into the standard patient intake sequence, timestamped digital signatures, documented opt-out tracking, conditional logic (suppress consent form for returning patients who already consented), state-specific compliance (all-party consent states, Texas SB 1188, Illinois AI consent requirements, California AB 489). White-glove onboarding builds the consent form for your specific AI tool, your state's requirements, and your practice workflow. Supports Abridge, Nabla, DeepScribe, Freed, and any ambient AI scribe tool. $49/month flat—unlimited providers, unlimited patients, unlimited forms. HIPAA compliant with signed BAA.

TL;DR

A California health system is facing a potential class-action lawsuit for recording patient visits with an AI scribe without consent. HIPAA alone doesn't protect you—13 states require all-party consent for recordings, and Texas, Illinois, California, Colorado, and others have passed AI-specific healthcare disclosure laws with penalties up to $250,000 per violation. Your malpractice carrier is already telling you to get written consent. Verbal consent isn't enough—it's your word against theirs.

EasyDocForms builds AI scribe consent into your digital intake workflow—signed, timestamped, stored, with opt-out tracking. $49/month flat.

A California health system was just hit with a potential class-action lawsuit for allegedly recording patient visits with an AI scribe without consent. The patient says he was never told, never asked, and only discovered it after the fact when documentation in his patient portal referenced a recording he didn't know about.

If you're using an AI transcription tool in your practice—Abridge, Nabla, DeepScribe, Freed, or any of the dozens of ambient AI scribes now on the market—this should have your attention. The question isn't whether these tools are useful. They are. The question is whether your consent workflow is protecting you, or whether you're one patient complaint away from a problem.

The short answer: yes, you should be getting written consent. Here's why, and here's what the legal landscape actually looks like right now.

HIPAA Doesn't Explicitly Require Consent for AI Scribes—But That's Not the Whole Story

Quick answer: HIPAA permits PHI use for treatment and healthcare operations without authorization, so a BAA with your AI scribe vendor may satisfy HIPAA. But HIPAA is the floor. State recording laws, state AI disclosure legislation, and malpractice carrier guidance impose additional requirements that HIPAA alone does not cover.

Let's start with what HIPAA actually says, because this is where a lot of practices get a false sense of security.

HIPAA's Privacy Rule permits the use and disclosure of protected health information (PHI) for treatment, payment, and healthcare operations without patient authorization. An AI scribe that's generating clinical notes arguably falls under "treatment" or "healthcare operations." So technically, under HIPAA alone, you may not need separate patient consent to use an AI documentation tool—as long as you have a Business Associate Agreement (BAA) in place with the vendor and the tool meets HIPAA Security Rule requirements.

But HIPAA is the floor, not the ceiling. State laws frequently impose stricter requirements, and this is where practices get into trouble. HIPAA compliance does not mean you're compliant with your state's recording consent laws, your state's AI disclosure requirements, or the emerging patchwork of state-specific AI healthcare legislation that's been rolling out at a remarkable pace.

Relying on "we have a BAA" as your complete consent strategy is like relying on a seatbelt as your complete safety plan. It's necessary, but it's not sufficient.

State Recording Laws Are the First Layer of Real Risk

Quick answer: AI scribes record your patient encounter. In 13 states, recording a conversation without all parties' consent violates state wiretapping or eavesdropping laws—with criminal penalties up to five years imprisonment in some states.

AI scribes work by recording your patient encounter—the audio of the conversation between you and your patient. That recording is then processed, transcribed, and used to generate clinical notes. The moment you press record, you've entered the territory of your state's recording consent laws.

The United States is split between one-party consent states and all-party consent states when it comes to recording conversations. In a one-party consent state, only one party to the conversation (you, the provider) needs to consent to the recording. In an all-party consent state, every party to the conversation must consent.

All-Party Consent States

  • California
  • Connecticut (for phone/electronic communications)
  • Delaware
  • Florida
  • Illinois (for in-person private conversations)
  • Maryland
  • Massachusetts
  • Montana
  • Nevada (for electronic communications)
  • New Hampshire
  • Oregon (for in-person conversations)
  • Pennsylvania
  • Washington

If your practice is in any of these states and you're recording patient encounters with an AI scribe without the patient's consent, you may be violating state wiretapping or eavesdropping laws. These aren't civil slap-on-the-wrist statutes. In California, violations can carry up to a year of imprisonment. In Florida, it can be a third-degree felony with up to five years. In Maryland and Massachusetts, violations can result in up to five years in prison.

Even in one-party consent states, the risk calculus has changed. The fact that you can legally record without the patient's knowledge doesn't mean you should. Patients expect privacy in a medical setting. If they discover after the fact that their visit was recorded by an AI tool they didn't know about, the trust damage is real—and so is the regulatory complaint risk, even if you were technically within the letter of the recording law.

New State AI Disclosure Laws Are Adding Another Layer

Quick answer: Texas, Illinois, California, Colorado, Ohio, and Florida have all passed or introduced legislation requiring AI disclosure or consent in healthcare settings. Texas penalties reach $250,000 per violation with potential license suspension. The direction is clear: more disclosure, more consent, more documentation.

Beyond recording consent laws, a wave of state legislation specifically targeting AI in healthcare is creating new obligations that didn't exist even a year ago.

Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026. The law requires healthcare providers to disclose to patients that AI is being used in connection with healthcare services or treatment. Texas also passed SB 1188, which went into effect September 1, 2025, requiring healthcare providers to provide written disclosure when AI is used. Violations of SB 1188 can result in civil penalties ranging from $5,000 to $250,000 per violation, and the Texas Medical Board can take disciplinary action including license suspension or revocation against providers who violate the law three or more times.

Illinois passed the Wellness and Oversight for Psychological Resources Act, effective August 1, 2025, which requires licensed professionals using AI tools to inform patients and obtain consent. The law specifies that consent requirements cannot be met by burying AI disclosure in broad terms of use or general consent forms—it may need to be standalone consent.

Florida has pre-filed legislation for 2026 that would require written, informed consent at least 24 hours in advance of any AI system recording or transcribing a counseling or therapy session.

Ohio introduced legislation in November 2025 that would require patients to provide written informed consent for AI use in healthcare settings.

Colorado's AI Act, delayed to June 2026, imposes requirements on deployers of "high-risk" AI systems—which includes healthcare AI—including transparency disclosures and impact assessments.

California enacted AB 489, effective January 1, 2026, which requires disclosures when AI communicates with patients in healthcare settings.

This is just the beginning. The direction is clear: states are moving toward requiring explicit disclosure and consent for AI in clinical settings, not away from it.

The Sharp HealthCare Lawsuit Is a Warning Shot

Quick answer: Sharp HealthCare in San Diego faces a potential class-action lawsuit for using Abridge's AI scribe without patient consent. The patient was never informed, never asked, and only discovered the recording through his patient portal. The gap between what happened and what's documented is the liability.

In November 2025, a patient filed a potential class-action lawsuit against Sharp HealthCare in San Diego, alleging that the health system's use of an AI-powered ambient documentation tool (Abridge) recorded his medical visit without his knowledge or consent.

The key allegations are instructive for any practice using AI scribes. The patient claims he was never informed that AI would be recording his visit. He was never asked to consent. The visit audio was captured using a clinician's microphone-enabled device, transmitted to a third-party vendor's cloud infrastructure, and used to generate clinical notes. The patient only discovered the recording after finding language in his patient portal suggesting he had been "advised" and had "consented"—which he alleges never happened.

The lawsuit alleges violations of California's all-party consent recording law and multiple state privacy statutes. The plaintiff is seeking class-action certification that could include any California resident whose medical visit was recorded without consent since April 2025.

This case signals exactly where the risk lies: the gap between what happened and what's documented. If a patient signs a written consent form that clearly explains the AI recording tool, its purpose, and the patient's right to opt out, that gap doesn't exist. If the consent is verbal, undocumented, or never obtained at all, the gap is wide open. You don't want to be the test case in your state.

Verbal Consent vs. Written Consent: Why Written Wins

Quick answer: Verbal consent is difficult to verify after the fact—it's your word against theirs. Written consent creates a timestamped, signed record. The Texas Medical Liability Trust, one of the largest physician malpractice carriers, and MGMA both recommend written consent forms specifically for AI scribes. When your malpractice carrier tells you to get written consent, that's not a suggestion.

Many AI scribe vendors suggest that verbal consent is sufficient. Some recommend a simple script: "I use a tool that helps with my notes—do you mind if I record our conversation?" That's better than nothing, but it has significant weaknesses.

Verbal consent is difficult to verify after the fact. If a patient later claims they were never told, it's your word against theirs. There's no signed document, no timestamp, no proof. In a regulatory investigation or a lawsuit, this puts you in a defensive position.

Written consent, by contrast, creates a clear, timestamped, signed record that the patient was informed about the AI tool, understood its purpose, and agreed to its use. It also creates a record if the patient declines—which is equally important. If a patient opts out and you record them anyway (or forget to turn off the tool), that's a problem. A documented opt-out protects you by showing you had a system in place.

The Texas Medical Liability Trust, one of the largest physician malpractice carriers in the country, explicitly recommends adding AI scribe language to patient consent forms. Their risk management guidance advises providers to explain the technology and its purpose using clear, non-technical language, train staff on how to explain AI documentation processes, document all consent discussions in the patient's record, and give patients the option to opt out.

MGMA (the Medical Group Management Association) has published a sample patient consent form specifically for AI dictation and transcription use, reinforcing that written consent is becoming the industry standard.

When your malpractice carrier is telling you to get written consent, that's not a suggestion. That's them telling you what they'll point to when you file a claim.

What a Written AI Scribe Consent Form Should Include

Quick answer: Plain-language explanation of the tool, what happens to the recording, whether data is used for AI training, that the provider reviews all notes, that recordings are not part of the medical record, the patient's right to opt out, and a signature line with date.

If you're going to implement written consent—and you should—here's what the form needs to address at a minimum:

What the tool does. Plain-language explanation that an AI-powered tool will be used to record and transcribe the patient encounter for the purpose of generating clinical documentation. Avoid jargon. Don't call it "ambient clinical intelligence" or "AI-augmented documentation." Say "a tool that records our conversation and helps create your visit notes."

What happens to the recording. Is the audio stored? For how long? Is it deleted after the note is generated? Is it transmitted to a third-party vendor for processing? Patients have a right to know where their voice recording goes.

Whether data is used for AI training. This is a critical question. Many AI scribe vendors use patient encounter data to train and improve their models. If your vendor does this, patients should know. If your vendor does not, that's worth stating explicitly—it's a trust builder.

That the provider reviews and finalizes all notes. Make clear that the AI generates a draft and the licensed provider reviews, edits, and signs off on the final note. The AI is not making clinical decisions.

That AI-generated recordings and transcripts are not part of the official medical record. The final, provider-reviewed note is the medical record. The raw recording and transcript are documentation aids.

The patient's right to opt out. Patients should be able to decline AI recording without it affecting their care. If they opt out, you take notes the old-fashioned way. This needs to be explicitly stated.

A signature line and date. Signed, dated, timestamped, stored as part of the patient record. This is the whole point.
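
To make those requirements concrete, here is a minimal sketch of how a signed consent record might be modeled inside a digital intake system. The field names and structure are illustrative assumptions for this article, not EasyDocForms' actual schema or any vendor's API—what the form actually says should come from your vendor's written disclosures and your state's requirements.

```typescript
// Illustrative sketch only: a hypothetical data model for a signed
// AI scribe consent record. Field names are assumptions, not any
// vendor's actual schema.
interface AiScribeConsentRecord {
  patientId: string;
  formVersion: string;          // which version of the consent language was shown
  aiToolName: string;           // e.g. "Abridge", "Nabla", "DeepScribe", "Freed"
  disclosures: {
    recordingPurposeExplained: boolean;      // plain-language explanation shown
    storageAndRetentionExplained: boolean;   // where audio goes, how long it's kept
    usedForModelTraining: boolean;           // what the vendor confirmed in writing
    providerReviewsAllNotes: boolean;        // AI drafts, provider signs off
    recordingNotPartOfMedicalRecord: boolean;
  };
  decision: "consented" | "opted_out";       // opt-outs are documented, not discarded
  signature: string | null;                  // digital signature, null if opted out
  signedAt: string | null;                   // ISO 8601 timestamp of signature
  recordedAt: string;                        // when the decision was captured
}
```

The design point worth noticing is that an opt-out is a record, not an absence of a record. Whether the patient signs or declines, there is a timestamped decision on file—which is exactly the gap the Sharp HealthCare allegations turn on.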

Know Your Vendor Before You Consent Your Patients

Quick answer: Before you can write an honest consent form, you need to know what your AI scribe vendor actually does with patient data. Get answers in writing about which AI models they use, whether patient data trains their models, where data is stored, encryption and access controls, and what happens if the vendor is acquired or shuts down.

Here's a step most practices skip entirely: due diligence on the AI vendor itself.

Before you can write an honest consent form telling patients what happens to their data, you need to actually know what happens to their data. And the uncomfortable truth is that many providers using AI scribes couldn't answer basic questions about how their vendor handles recordings, where the data goes, or what happens to it after the note is generated.

At EasyDocForms, we strongly advise any practice using AI transcription or ambient scribe tools to do thorough research on their vendor and get the following in writing before deploying the tool in a clinical setting:

Which API services and AI models does the vendor use? Many AI scribe products are wrappers around third-party AI services—OpenAI's Whisper for transcription, GPT for note generation, or similar. Some vendors run their own models on their own infrastructure. Others route your patient audio through multiple third-party services you've never heard of. You need to know the full chain of custody for your patients' audio and transcript data. If the vendor is sending patient audio to an API endpoint operated by a different company, that's another entity touching PHI—and another BAA you may need.

Is patient data used to train AI models? This is the big one. Some vendors explicitly state that de-identified patient data may be used to improve their models. Others are less transparent about it. Some underlying API providers (like OpenAI's enterprise tier) contractually commit to not training on customer data, while their consumer-facing products make no such promise. You need to know whether your patients' conversations are being used to train an AI model. If they are, your patients deserve to know that too—and your consent form needs to say it.

Where is data stored, and for how long? Is the audio recording stored on the vendor's servers? For how long? Is it deleted after the note is generated, or retained indefinitely? Is it stored in the United States? Texas law (effective January 2026) now requires that health records be physically maintained within the U.S. or its territories. If your AI scribe vendor is processing audio through cloud infrastructure overseas, that's a compliance issue.

What encryption and access controls are in place? Who at the vendor's organization can access your recordings and transcripts? The Sharp HealthCare lawsuit specifically alleges that vendor personnel had access to recordings and transcripts. Your BAA should address this, but you should verify what access controls actually exist in practice, not just on paper.

What happens if the vendor is acquired, shuts down, or changes its terms? AI scribe companies are being acquired, merged, and shut down constantly in this market. If your vendor gets acquired by a company with different data practices, what happens to the patient data they already have? What are your data portability and deletion rights? These aren't hypothetical concerns in a market this volatile.

Get all of this in writing. Not in a sales call, not in a marketing FAQ—in a signed agreement or a formal written response from the vendor's compliance team. If a vendor can't or won't answer these questions clearly, that tells you something important about whether you should be routing your patients' most sensitive conversations through their platform.

This due diligence also directly improves your consent form. When you know exactly how the vendor handles data, you can give your patients accurate, specific disclosures instead of vague language like "your data may be processed by third-party services." Specificity builds trust. Vagueness erodes it.

For Highly Sensitive Encounters, Consider Whether Cloud-Based AI Is the Right Tool at All

It's worth stepping back and stating something that AI scribe vendors won't tell you: every time you route a patient conversation through a cloud-based AI service, you are expanding the attack surface of that patient's data. There is no way around this. The audio leaves your office, travels over the internet, hits a third-party server, gets processed, and the output comes back. Each hop is a point of potential exposure—to breaches, to unauthorized access, to vendor policy changes, to subpoenas you can't control.

For routine office visits, that trade-off may be acceptable given the productivity gains. But for highly sensitive encounters—patients disclosing abuse, psychiatric evaluations, substance use history, HIV status, reproductive health decisions, or any conversation where the content itself could cause serious harm if exposed—you should think carefully about whether a cloud-based ambient scribe is the right tool for that visit.

Solutions that process speech locally have existed for years. Dragon Medical (from Nuance, now owned by Microsoft) has long been the standard for physician dictation, and its core speech recognition can run locally, keeping audio data within your own infrastructure rather than routing it through third-party cloud APIs. It's not as flashy as ambient AI scribes that listen passively and auto-generate SOAP notes. It requires the provider to dictate rather than just talk naturally to the patient. But the trade-off is a dramatically smaller attack surface for your most sensitive patient data. For the encounters that keep you up at night, that trade-off matters.

This isn't an either-or decision for most practices. You can use an ambient AI scribe for the bulk of your encounters where the productivity gain is worth the expanded attack surface, and fall back to local dictation tools for the cases where the sensitivity of the conversation warrants the extra protection.

"HIPAA-Compliant" Doesn't Mean What You Think It Means

While we're being honest about attack surfaces, it's worth addressing a much older version of this same problem: traditional medical transcription services.

Long before AI scribes existed, many practices outsourced their dictation to transcription companies. A significant number of those companies—then and now—route audio to transcriptionists in the Philippines, India, Pakistan, and other countries for manual transcription. This is a large, well-established industry. The audio of your patient encounter gets uploaded to a server, downloaded by a human transcriptionist overseas, listened to, typed out, and sent back. That's a real person, in another country, with headphones on, listening to your patient describe their symptoms, their medications, their personal history.

Here's the part that surprises most providers: this can be fully HIPAA-compliant. HIPAA does not prohibit sending protected health information to other countries. There is no data residency requirement in the HIPAA Privacy Rule or Security Rule. As long as the overseas entity signs a Business Associate Agreement, implements the required administrative, physical, and technical safeguards, and the covered entity performs reasonable due diligence, the arrangement can satisfy HIPAA requirements on paper.

But "HIPAA-compliant" and "secure" are not the same thing. When your patient's audio is being manually transcribed by a person in a country with different data protection enforcement, different legal frameworks, and potentially different practical safeguards, the attack surface is enormous—and largely outside your ability to audit or control. If a transcriptionist copies text from a session, your recourse is essentially zero. You're trusting a chain of custody that spans oceans, jurisdictions, and organizations you've never visited and can't meaningfully oversee.

The point here isn't to single out any particular country or vendor. The point is that HIPAA compliance has always been a floor, not a guarantee of security. This was true with traditional transcription services shipping audio overseas for decades, and it's true now with AI scribe vendors routing audio through cloud APIs. The attack surface just looks different. With traditional transcription, it's a human with headphones in another country. With cloud-based AI scribes, it's audio sitting on a server you don't control, processed by models trained in ways you may not fully understand. Both are "HIPAA-compliant." Neither is risk-free.

The practices that take data security seriously are the ones that ask uncomfortable questions—not just "is this HIPAA-compliant?" but "where exactly does my patient's data go, who touches it, and what's my actual recourse if something goes wrong?" That's the standard your patients deserve.

How This Fits Into Your Intake Workflow

Quick answer: AI scribe consent should be part of your standard intake packet, not a separate process. Digital intake platforms handle this cleanly—the consent is one form in the sequence, signed, timestamped, with opt-out documented automatically. EasyDocForms supports this natively with white-glove onboarding.

The AI scribe consent should be part of your standard intake packet, not a separate process that your front desk has to remember to handle. When it's built into the intake workflow, every patient sees it, every patient signs it (or opts out), and the documentation is automatic.

This is where most practices struggle. If you're using paper forms or your EMR's built-in intake module, adding a new consent document means creating a new form, printing it, making sure staff hand it out, collecting it, scanning it, and filing it. That's a workflow that breaks easily.

Digital intake platforms handle this much more cleanly. The AI scribe consent is one form in the intake sequence. The patient reads it, signs it, and it's timestamped and stored alongside everything else. If the patient declines, that's documented too. Conditional logic can even suppress the AI consent form for patients who are already on file and have previously consented, reducing redundancy.
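
As a rough illustration of that conditional-logic idea—a hypothetical sketch under assumed names, not EasyDocForms' actual rules engine—the decision to show or suppress the consent form can be expressed as a simple check against the patient's existing consent record:

```typescript
// Hypothetical sketch of conditional intake logic: show the AI scribe
// consent form unless the patient already has a valid, current consent
// on file. Type and function names are illustrative assumptions.

type ConsentOnFile = {
  decision: "consented" | "opted_out";
  formVersion: string;  // version of the consent language the patient signed
  signedAt: string;     // ISO 8601 timestamp
};

function shouldShowAiConsentForm(
  existing: ConsentOnFile | null,
  currentFormVersion: string
): boolean {
  // New patients, or patients with no recorded decision, always see the form.
  if (existing === null) return true;

  // If the consent language has changed (new state law, new AI vendor),
  // re-consent rather than relying on the old signature.
  if (existing.formVersion !== currentFormVersion) return true;

  // Patients who previously opted out still see the form so they can
  // re-decide; either way, staff are flagged before the encounter begins.
  if (existing.decision === "opted_out") return true;

  // Returning patient with a current, signed consent: suppress the form.
  return false;
}
```

The useful property of this pattern is that "suppress the form" only ever happens when a signed, current consent already exists—so reducing redundancy never creates an undocumented encounter.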

EasyDocForms supports this workflow natively. You can add an AI transcription consent form to any intake packet, complete with the specific disclosures described above, the patient's signature and timestamp, and the ability to flag patients who opt out so your staff knows before the encounter begins. And because EasyDocForms includes white-glove onboarding, you don't have to build the form yourself—the team builds it for you based on your specific AI tool, your state's requirements, and your practice's workflow.

Don't Wait for the Law to Catch Up to What's Already Happening

Quick answer: The legal landscape is moving in one direction: more disclosure, more consent, more documentation. Written consent protects your patients' trust, protects your practice from litigation, and costs nothing beyond adding one form to your intake workflow. Three steps this week: check your state's laws, call your malpractice carrier, add a written AI consent form to your intake.

Here's the practical reality: the legal landscape around AI in healthcare is moving fast, and it's moving in one direction. More disclosure. More consent. More documentation. If you implement written consent now, you're ahead of the curve in states that haven't yet passed specific AI disclosure laws, and you're compliant in states that have.

More importantly, written consent is simply good practice. It protects your patients' trust, it protects your practice from litigation, and it costs you nothing beyond adding one form to your intake workflow. The Sharp HealthCare lawsuit should be all the motivation you need—that's a major health system with a legal department, and they're still facing a class-action claim because their consent process had gaps.

If you're using an AI scribe (or planning to), take three steps this week. First, check your state's recording consent laws and any AI-specific healthcare disclosure legislation. Second, call your malpractice carrier and ask what they recommend for AI scribe consent—they may already have sample language. Third, add a written AI transcription consent form to your intake workflow.

The tool that's saving you 30 minutes of charting per patient shouldn't be the thing that costs you your license or a six-figure lawsuit. A one-page consent form prevents that. Get it signed.

Add AI Scribe Consent to Your Intake—Without the Headache

Digital consent forms built for your specific AI tool, your state's requirements, and your workflow. Signed, timestamped, stored. White-glove onboarding included. $49/month flat.

Start Your 14-Day Free Trial

Frequently Asked Questions

Does HIPAA require patient consent for AI scribes?

HIPAA's Privacy Rule permits use of PHI for treatment, payment, and healthcare operations without patient authorization, so an AI scribe generating clinical notes may fall under those permitted uses as long as you have a BAA with the vendor. However, HIPAA is the floor, not the ceiling. State recording consent laws, state AI disclosure legislation (Texas, Illinois, California, Colorado, Ohio, Florida), and malpractice carrier guidance all impose additional requirements that HIPAA alone does not cover. Written consent is the recommended approach.

Which states require all-party consent for recording patient conversations?

The following states currently require all-party consent for recording conversations: California, Connecticut (for phone/electronic communications), Delaware, Florida, Illinois (for in-person private conversations), Maryland, Massachusetts, Montana, Nevada (for electronic communications), New Hampshire, Oregon (for in-person conversations), Pennsylvania, and Washington. Using an AI scribe that records patient encounters in these states without patient consent may violate state wiretapping or eavesdropping laws, with penalties ranging up to five years imprisonment in some states.

What happened in the Sharp HealthCare AI scribe lawsuit?

In November 2025, a patient filed a potential class-action lawsuit against Sharp HealthCare in San Diego, alleging that the health system used an AI-powered ambient documentation tool (Abridge) to record his medical visit without his knowledge or consent. The patient claims he was never informed, never asked to consent, and only discovered the recording after finding language in his patient portal. The lawsuit alleges violations of California's all-party consent recording law and multiple state privacy statutes.

What should an AI scribe consent form include?

An AI scribe consent form should cover: a plain-language explanation of what the tool does (records and transcribes encounters for clinical documentation), what happens to the recording (storage duration, deletion policy, third-party processing), whether data is used for AI model training, that the provider reviews and finalizes all notes, that recordings and transcripts are not part of the official medical record, the patient's right to opt out without affecting their care, and a signature line with date and timestamp.

Is verbal consent for AI scribes sufficient?

Verbal consent is better than nothing but has significant weaknesses. It is difficult to verify after the fact—if a patient later claims they were never told, it's your word against theirs. Written consent creates a clear, timestamped, signed record. The Texas Medical Liability Trust and MGMA both recommend written consent forms specifically for AI dictation and transcription use. When your malpractice carrier recommends written consent, that is not a suggestion.

Which states have passed AI-specific healthcare disclosure laws?

As of early 2026: Texas enacted TRAIGA (effective January 2026) and SB 1188 (effective September 2025), with civil penalties of $5,000 to $250,000 per violation and potential license suspension. Illinois passed the Wellness and Oversight for Psychological Resources Act (effective August 2025). California enacted AB 489 (effective January 2026). Colorado's AI Act (delayed to June 2026) covers high-risk AI in healthcare. Florida and Ohio have pre-filed or introduced similar legislation. The direction is clear: more disclosure, more consent, more documentation.

How does EasyDocForms handle AI scribe consent in the intake workflow?

EasyDocForms supports AI transcription consent as part of the standard digital intake workflow. The consent form is one form in the intake sequence—the patient reads it, signs it, and it's timestamped and stored alongside everything else. If the patient declines, that's documented too. Conditional logic can suppress the form for returning patients who have already consented. White-glove onboarding builds the form based on your specific AI tool, your state's requirements, and your workflow. $49/month flat—unlimited providers, unlimited patients, unlimited forms.