New York Governor Kathy Hochul has budgeted for a bold new proposal. The effort addresses growing fears surrounding AI-produced content, such as deepfakes. The model laws would mandate prominent labeling for any content generated by AI, and would go further by banning non-consensual deepfakes during sensitive election periods. The action could not come soon enough, as policymakers around the country continue to wrestle with the consequences of rapidly emerging AI developments.
The new bills are a good first step toward combating the potential abuse of AI, in part because they directly address sexualized depictions. In May, the U.S. Congress passed the Take It Down Act, which makes it a criminal offense to produce and share sexualized images of a person without their consent. Even with this new legislation, advocates remain worried that regulations aren't being enforced effectively at the local level.
More recently, a number of high-profile incidents have highlighted the serious risks that AI technologies can pose. OpenAI’s Sora 2 allegedly allowed users to create pornographic content with children, causing major uproar from child safety advocates. Similarly, Google’s AI model, Nano Banana, generated an image depicting a violent act against political commentator Charlie Kirk, drawing sharp criticism for its implications regarding misinformation and public safety.
Furthermore, racist videos produced using Google’s AI video model have garnered millions of views on social media platforms, amplifying concerns over the societal impact of such content. As these incidents continue to stack up, U.S. legislators are becoming more attuned to the need for strong, decisive action.
Elon Musk’s xAI claims it goes out of its way to remove illegal content from its platform, including CSAM and non-consensual nudity, but the implementation and efficacy of these measures are drawing new scrutiny. A recent letter from a bipartisan group of U.S. senators highlighted that, despite many companies claiming to have policies against non-consensual intimate imagery, enforcement appears inconsistent.
“We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing.” – U.S. Senators
The regulatory landscape varies significantly between countries. Abroad, China has begun implementing stricter labeling requirements for synthetic content, a type of national regulation that has yet to be seen in the United States. At the same time, many Chinese technology companies, particularly those with ties to ByteDance, provide services that let users quickly swap faces in images and videos. This dynamic poses significant challenges to regulating deepfake technology.
Deepfakes exploded into popular consciousness in 2018, when a Reddit community dedicated to synthetic pornographic videos of celebrities drew widespread attention before it was shut down. Since then, platforms such as TikTok and YouTube have been flooded with sexualized deepfakes of celebrities and, most recently, even of prominent politicians. This rapid spread is troubling not just on privacy grounds, but because of the risk of reputational injury.
California’s attorney general has initiated an investigation into xAI’s chatbot amid increasing pressure from governments worldwide to regulate AI technologies more stringently. These developments, layered on top of domestic pressures, have only made the issue more complicated, leaving U.S. lawmakers with the imposing task of formulating the right policies.
The legislative push in New York reflects a growing recognition of the need for clearer guidelines and regulations surrounding AI-generated content. Governor Hochul’s proposals aim to protect individuals from exploitation and to ensure that AI is used transparently.
Conversations continue on how to address the unique issues posed by artificial intelligence. Stakeholders from every sector, both public and private, are closely monitoring the rapid developments at the intersection of legislation and technology. What hasn’t changed is the goal: balancing the needs of innovation with the safety and protection of the public.