Bill Text: NJ AR158 | 2024-2025 | Regular Session | Introduced
Bill Title: Urges generative artificial intelligence companies to make voluntary commitments regarding employee whistleblower protections.
Spectrum: Partisan Bill (Democrat 3-0)
Status: (Introduced) 2024-10-21 - Reported out of Assembly Comm. with Amendments, 2nd Reading [AR158 Detail]
Sponsored by:
Assemblyman CHRIS TULLY
District 38 (Bergen)
Assemblyman CODY D. MILLER
District 4 (Atlantic, Camden and Gloucester)
Assemblywoman HEATHER SIMMONS
District 3 (Cumberland, Gloucester and Salem)
SYNOPSIS
Urges generative artificial intelligence companies to make voluntary commitments regarding employee whistleblower protections.
CURRENT VERSION OF TEXT
As introduced.
An Assembly Resolution urging generative artificial intelligence companies to make voluntary commitments regarding employee whistleblower protections.
Whereas, Artificial intelligence technology has the potential to provide unprecedented benefits to humanity, but it also poses serious risks; and
Whereas, The risks associated with artificial intelligence technology are of grave concern and range from the further entrenchment of existing inequalities to the manipulation and dissemination of misinformation; and
Whereas, Many risks associated with artificial intelligence are currently unregulated, and existing whistleblower protections are inadequate to protect employees from retaliation for disclosing information regarding company risk-related concerns; and
Whereas, In the absence of government oversight, employees of artificial intelligence companies possess the most comprehensive knowledge of the risks involved and are among the few individuals capable of holding the companies accountable; and
Whereas, Broad confidentiality agreements prevent employees of artificial intelligence companies from voicing concerns to anyone other than the companies themselves, even when those companies fail to address the issues; and
Whereas, Artificial intelligence companies such as OpenAI, Anthropic, and Google have themselves acknowledged the risks posed by artificial intelligence technology; and
Whereas, Independent evaluation is critical to identifying the risks posed by artificial intelligence systems, yet the terms of service and enforcement strategies used by artificial intelligence companies disincentivize good faith safety evaluations for fear of legal reprisal or account suspension; and
Whereas, To varying degrees, artificial intelligence companies provide legal safe harbor for security research evaluating their systems, but do not provide technical safe harbor for good faith research that may lead to account termination, such as evaluating systems on the generation of hate speech, misinformation, or abusive imagery, thereby inhibiting the discovery of all forms of system flaws; and
Whereas, Google, OpenAI, Microsoft, Anthropic, Amazon, and Meta participate in the Frontier Model Forum, which enables cross-organizational collaboration on artificial intelligence safety and responsibility as well as independent, standardized evaluations, but the forum does not address concerns about unbiased assessment that would be afforded through both legal and technical safe harbor; and
Whereas, The risks of artificial intelligence may be mitigated by the commitment of artificial intelligence companies to certain principles concerning employee protections, transparency, and safety; now, therefore,
Be It Resolved by the General Assembly of the State of New Jersey:
1. This House urges generative artificial intelligence companies to commit to the following principles:
a. The company shall not enter into or enforce any agreement prohibiting disparagement or criticism of the company for risk-related concerns or retaliate for risk-related criticism by hindering any vested economic benefit;
b. The company shall facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company board, to regulators, and to an appropriate independent organization with relevant expertise;
c. The company shall support a culture of open criticism and allow current and former employees to raise risk-related concerns about its technologies to the public, to the company board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
d. The company shall provide legal and technical safe harbor for good faith system evaluation, ensuring safety from legal reprisal, account suspension, or termination, while maintaining the protection of trade secrets and other intellectual property. Safe harbor should enable independent identification of all forms of risks posed by artificial intelligence systems;
e. The company shall not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed; and
f. Employees shall retain the freedom to publicly report concerns until the creation of an appropriate process for anonymously reporting concerns to the company board, regulators, and an independent organization with relevant expertise. Any effort to report risk-related concerns should avoid releasing confidential information unnecessarily.
2. Copies of this resolution, as filed with the Secretary of State, shall be transmitted by the Clerk of the General Assembly to the Chief Executive Officers of leading generative artificial intelligence companies including, but not limited to, OpenAI, Anthropic, Google, Inflection, Meta, Midjourney, and Cohere.
STATEMENT
This resolution urges generative artificial intelligence companies to make voluntary commitments to protect employees who raise risk-related concerns.
Artificial intelligence technology has the potential to provide unprecedented benefits to humanity, but it also poses serious risks, such as the perpetuation of existing inequalities, the manipulation and dissemination of misinformation, and the potential loss of control of autonomous artificial intelligence systems. Many risks associated with artificial intelligence are currently unregulated, and existing whistleblower protections are inadequate to protect employees from retaliation for publicly disclosing risk-related concerns.
In the absence of government oversight, employees of artificial intelligence companies are among the few individuals capable of holding the companies accountable. However, broad confidentiality agreements prevent employees from voicing concerns to anyone other than the companies themselves, even when those companies fail to address the issues.
Additionally, independent evaluation is critical to identifying the risks posed by artificial intelligence systems, but it is stymied by the lack of both legal and technical safe harbor. Legal safe harbor protects evaluators from legal reprisal, and technical safe harbor protects evaluators from account suspension or termination; without both protections, good faith evaluators face legal reprisal or the loss of their accounts.
The resolution urges generative artificial intelligence companies to make voluntary commitments to mitigate the risks of artificial intelligence by adhering to the following principles:
(1) The company will not enter into or enforce any agreement prohibiting disparagement or criticism of the company for risk-related concerns or retaliate for risk-related criticism by hindering any vested economic benefit;
(2) The company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company board, to regulators, and to an appropriate independent organization with relevant expertise;
(3) The company will support a culture of open criticism and allow current and former employees to raise risk-related concerns about its technologies to the public, to the company board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
(4) The company will provide legal and technical safe harbor for good faith system evaluation, ensuring safety from legal reprisal, account suspension, or termination, while maintaining the protection of trade secrets and other intellectual property. Safe harbor should enable independent identification of all forms of risks posed by artificial intelligence systems;
(5) The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed; and
(6) Current and former employees should retain the freedom to publicly report concerns until the creation of an adequate process for anonymously raising concerns. Efforts to report risk-related concerns should avoid releasing confidential information unnecessarily.