Preventing BIPA Class Actions in AI Meeting Tools: 6 Essential Steps

AI Meeting Tools Are The Latest Target of Illinois BIPA Class Actions

Plaintiffs have recently begun wielding the Illinois Biometric Information Privacy Act (BIPA) against AI notetaker applications and other listening tools. A proposed class action filed on December 18 alleges that a popular AI notetaker vendor violated this stringent privacy law by collecting and storing voiceprints during a virtual meeting without proper notice, consent, or a compliant data retention policy.

What Happened: Familiar Workplace Scenario Turns Into Lawsuit

The complaint outlines a scenario that many employers may find unsettling:

  • An Illinois resident participated in a routine virtual meeting.
  • She had not signed up for any AI meeting assistant.
  • She did not agree to any terms of service.
  • However, an AI notetaking tool automatically joined the meeting at the request of another participant.

The AI tool recorded the discussion, identified speakers, and generated transcripts attributing statements to individuals. The lawsuit claims that this process required the notetaker to create and store voiceprints, which BIPA classifies as biometric identifiers. BIPA imposes specific notice, written-consent, and retention-and-destruction obligations on businesses that collect such data.

The plaintiff, Katelin Cruz, asserts that the AI vendor failed to inform her in writing of the data collection and how long the data would be retained, and never obtained her written consent. Furthermore, the vendor allegedly did not publish a compliant biometric data retention and destruction policy.

Why This Matters for Employers

While the lawsuit primarily targets the AI vendor, employers should not dismiss the implications. In Illinois, BIPA is enforced largely through private lawsuits, and employers can be pulled into litigation due to:

  • Authorizing or deploying the tool: If an organization licenses or encourages the use of an AI notetaker that captures voiceprints without proper consent, it may be deemed complicit in the violation.
  • Employee use during work-related meetings: Even if an individual employee activates the AI assistant on their own initiative, the employer may face liability when the recording occurs in a business meeting.
  • Benefiting from the outputs: If the employer derives value from transcripts or insights generated by the AI tool, this could support claims that the employer itself collected or profited from biometric data.
  • Lack of guardrails: Courts have shown little leniency regarding claims of automatic or incidental biometric collection. Missing policies or training can exacerbate liability.

Even companies based outside Illinois can face BIPA claims if their meetings include participants physically located in Illinois.

6 Steps Employers Should Take Now

Employers do not need to abandon AI notetaking tools, but they must manage their use effectively to mitigate legal risks. Consider the following steps:

  1. Inventory AI Meeting Tools in Use
    Identify all platforms and tools being used across the organization, including those not officially approved.
  2. Understand What the Tool Actually Collects
    Assess whether the tool performs functions that involve biometric data, such as voice identification or speaker recognition.
  3. Clarify Who Can Enable Recording or AI Assistants
    Restrict the activation of AI notetakers, especially in meetings with external participants.
  4. Update (and Enforce) Meeting and Recording Policies
    Ensure privacy policies clearly outline when AI tools may be used and the required notifications and approvals.
  5. Coordinate With Vendors, But Don’t Outsource Compliance
    Conduct due diligence with AI vendors to ensure they have BIPA-compliant mechanisms in place and document these discussions.
  6. Train Employees on “Meeting Hygiene”
    Train employees to understand that enabling an AI assistant is not a neutral act: it can trigger notice and consent obligations for every participant in the meeting.

This lawsuit signals an important shift in how AI tools that handle biometric data will be scrutinized under BIPA, and proactive measures are essential for compliance.
