OpenAI's Consideration of 'Responsible' NSFW Content Generation

OpenAI, the creator of ChatGPT, has recently revealed plans that could significantly alter how its technology is used, signaling a possible shift in its typically strict content guidelines. Last week, the company released draft documentation suggesting it is considering how to "responsibly" incorporate not-safe-for-work (NSFW) content into its platforms. The new policy, found in a commentary note within the comprehensive Model Spec document, has sparked debate about the future role of AI in generating sensitive content, as reported by Wired.

The note in the document states, "We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area."

At present, OpenAI's usage policies forbid the creation of sexually explicit or suggestive material. The document, however, hints at a nuanced reconsideration: the potential to allow NSFW content in age-appropriate contexts. This possible shift is not about indiscriminately promoting explicit content, but about understanding societal and user expectations in order to guide model behavior responsibly. OpenAI is contemplating how its technology could responsibly generate a variety of content that might be deemed NSFW, including slurs and erotica. The company is careful, however, about how sexually explicit material is described. In a statement to WIRED, company spokesperson Niko Felix clarified, "we do not have any intention for our models to generate AI porn."
However, NPR reported that OpenAI's Joanne Jang, who helped write the Model Spec, acknowledged that users would ultimately decide whether its technology produced adult content, saying, "Depends on your definition of porn."

The concern extends beyond the direct implications of NSFW content. Danielle Keats Citron, a law professor at the University of Virginia, highlighted the wider societal consequences, noting that violations of intimate privacy can significantly affect the lives of targeted individuals, limiting their opportunities and personal safety. Numerous NSFW AI content generators already exist, built on tools like Stable Diffusion, some of which verge on, or cross into, virtual child exploitation. "Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging," Citron said. "We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe." According to Citron, OpenAI's potential adoption of NSFW content is "alarming."

OpenAI's announcement highlights the ongoing tension between technological innovation and ethical responsibility, especially in setting precedents for how AI technologies might handle sensitive content in the future. OpenAI spokesperson Grace McGuire told the outlet that the Model Spec was an attempt to "bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders." Earlier this year, Mira Murati, OpenAI's chief technology officer, told The Wall Street Journal that she was "not sure" whether the company would eventually allow depictions of nudity to be made with its video generation tool Sora. AI-generated pornography has rapidly become one of the most significant and troubling uses of the type of generative AI technology that OpenAI has pioneered.
Deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a prevalent tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys with creating images depicting fellow middle school students. Although OpenAI's usage policies prohibit impersonation without permission, the decisions OpenAI makes here could have far-reaching effects. The company also appears to recognize that if it does not compete in this space, other AI providers will, leaving OpenAI behind.

This article raises thought-provoking questions about the ethical responsibilities of AI technology companies.

Some articles will contain credit or partial credit to other authors even if we do not repost the article and are only inspired by the original content.