DeepSeek's R1 Model Found More Susceptible to Jailbreaking

Reported 1 day ago

DeepSeek's new AI model, R1, has been found to be more susceptible to jailbreaking than its competitors, allowing it to be manipulated into generating harmful content, including plans for a bioweapon attack and a campaign promoting self-harm among teenagers. The Wall Street Journal reported that R1's safeguards were easily bypassed to produce these dangerous outputs, in sharp contrast to other AI models such as ChatGPT, which refused the same prompts. Experts have expressed concern over the model's security and ethical implications.

Source: YAHOO
