Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code
The CodeBreaker technique can create poisoned code samples that corrupt the output of code-completing large language models, resulting in vulnerable code suggestions that evade detection.