Parents call for New York governor to sign landmark AI safety bill

A group of more than 150 parents sent a letter on Friday to New York governor Kathy Hochul, urging her to sign the Responsible AI Safety and Education (RAISE) Act without changes. The RAISE Act is a buzzy bill that would require developers of large AI models — like Meta, OpenAI, DeepSeek, and Google — to create safety plans and follow transparency rules about reporting safety incidents.

The bill passed in both the New York State Senate and the Assembly in June. But this week, Hochul reportedly proposed a near-total rewrite of the RAISE Act that would make it more favorable to tech companies, akin to some of the changes made to California’s SB 53 after large AI companies weighed in on it.

Many AI companies, unsurprisingly, are squarely against the legislation. The AI Alliance, which counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, sent a letter in June to New York lawmakers detailing their “deep concern” about the RAISE Act, calling it “unworkable.” And Leading the Future, the pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), OpenAI president Greg Brockman, and Palantir co-founder Joe Lonsdale, has been targeting New York State Assemblymember Alex Bores, who co-sponsored the RAISE Act, with recent ads.

Two organizations, ParentsTogether Action and the Tech Oversight Project, put together Friday’s letter to Hochul, which states that some of the signatories had “lost children to the harms of AI chatbots and social media.” The signatories described the RAISE Act as it stands now as “minimalist guardrails” that should be made law.

They also highlighted that the bill, as passed by the New York State Legislature, “does not regulate all AI developers – only the very largest companies, the ones spending hundreds of millions of dollars a year.” Those companies would be required to disclose large-scale safety incidents to the attorney general and publish safety plans. They would also be prohibited from releasing a frontier model “if doing so would create an unreasonable risk of critical harm.” The bill defines critical harm as the death or serious injury of 100 or more people, or $1 billion or more in damages to rights in money or property, stemming either from the creation of a chemical, biological, radiological, or nuclear weapon, or from an AI model that “acts with no meaningful human intervention” and “would, if committed by a human,” fall under certain crimes.

“Big Tech’s deep-pocketed opposition to these basic protections looks familiar because we have seen this pattern of avoidance and evasion before,” the letter states. “Widespread damage to young people — including to their mental health, emotional stability, and ability to function in school — has been widely documented ever since the biggest technology companies decided to push algorithmic social media platforms without transparency, oversight, or responsibility.”
