Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation

Riccardo Cantini, Giada Cosenza, Alessio Orsino, Domenico Talia

Last updated on Dec 5, 2025

Tags: Adversarial Robustness, Bias, Ethical AI, Fairness, Jailbreak, Large Language Models, Stereotype, Sustainable AI