AI Codex
Prompt Engineering · Developers · CTOs

Jailbreaking

Techniques for bypassing a model's safety constraints through adversarial prompting, producing a cat-and-mouse dynamic between model developers and attackers.