News

OpenAI Models Caught Handing Out Weapons Instructions

NBC News tests reveal that OpenAI chatbots can still be jailbroken into giving step-by-step instructions for producing chemical and biological weapons. This article originally appeared on TechRepublic.