AB 1898 (Schultz): A Well-Intentioned but Unworkable Workplace AI Proposal
As artificial intelligence continues to reshape the modern workplace, policymakers are right to consider guardrails that promote transparency and fairness. However, AB 1898 (Schultz), which seeks to regulate workplace artificial intelligence tools, ultimately misses the mark. While well-intentioned, the bill as currently drafted would impose sweeping and impractical requirements that risk undermining innovation, creating operational gridlock, and exposing employers to significant legal and security liabilities.
At its core, AB 1898 casts far too wide a net. The bill applies to virtually any tool that “assists” in employment-related decisions, capturing not only high-risk AI systems but also routine, low-risk technologies such as scheduling software and basic HR platforms. Employers would be required to provide extensive notice for even minimal or indirect uses of these tools, along with annual disclosures detailing every applicable system used across their organization. The result is predictable: a flood of notices that are costly to produce, difficult for employees to meaningfully interpret, and ultimately counterproductive.
Even more concerning are the bill’s implications for confidentiality and cybersecurity. AB 1898 mandates disclosure of highly sensitive operational details, including how systems function, how data is stored, and even the identities of individuals with access to that data. Such requirements risk exposing trade secrets and proprietary systems, while simultaneously creating a roadmap for bad actors seeking to exploit vulnerabilities. At a time when cybersecurity threats are growing more sophisticated, this approach is both risky and unnecessary.
The bill also introduces a fundamentally unworkable provision by requiring that every employee and independent contractor acknowledge and sign off on these disclosures before an employer can deploy a covered tool. In practice, this gives any single individual the ability to delay—or effectively block—the implementation of workplace technologies. For organizations that rely on large or transient workforces, particularly those that engage independent contractors, this requirement creates an untenable operational bottleneck.
Compounding these challenges, AB 1898 establishes a private right of action, inviting costly litigation over what are likely to be ambiguous and technical compliance requirements. This will disproportionately impact small businesses and public agencies, which are least equipped to absorb the costs of legal disputes.
The inclusion of independent contractors further complicates the bill. Contractors are governed by fundamentally different legal frameworks than employees, and extending these requirements to them dramatically expands the compliance burden while conflicting with established legal distinctions.
Finally, absent clear statewide preemption, AB 1898 opens the door to a patchwork of local regulations, creating inconsistent standards that will be difficult—if not impossible—for multi-location employers to navigate.
There is a better path forward. Policymakers should focus on high-risk applications of AI, such as decisions involving hiring, discipline, or termination. Disclosure requirements should be carefully tailored to protect sensitive information, and unworkable provisions—such as the unanimous signature requirement—should be eliminated. Independent contractors should be excluded, and any framework should ensure uniformity across the state.
Transparency in workplace AI is an important goal, but it must be pursued in a way that is practical, secure, and supportive of innovation. Until these issues are addressed, AB 1898 warrants opposition unless amended.