Jailbreak Resistance

Develop techniques to ensure AI systems are resistant to "jailbreaking" or circumvention of built-in safety and control protocols.

Because AI systems can be misused, whether through malicious intent or unintended consequences, they require robust safeguards and countermeasures against attempts to bypass their safety protocols.
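One of the simplest countermeasures in this space is input screening: checking prompts for known jailbreak phrasings before they reach the model. The sketch below is a toy illustration of that idea only; the pattern list and function name are invented for this example, and real jailbreak resistance requires much stronger techniques (adversarial training, output monitoring, red-teaming) since pattern matching is trivially evaded by rephrasing.

```python
import re

# Hypothetical patterns for illustration; a real deployment would use
# learned classifiers, not a hand-written list like this.
JAILBREAK_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"pretend (you are|to be) [^.]*without (any )?restrictions",
               re.IGNORECASE),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak pattern."""
    return any(p.search(prompt) for p in JAILBREAK_PATTERNS)
```

A filter like this would typically run alongside, not instead of, model-level safety training, precisely because surface-level checks are the easiest layer for an attacker to circumvent.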