Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code

By rooter / August 23, 2024

The CodeBreaker technique can create code samples that poison the output of code-completing large language models, resulting in vulnerable, and undetectable, code suggestions.