Security researchers have uncovered a new flaw in some AI chatbots that could have allowed hackers to steal personal information from users.
A group of researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore discovered the flaw, which they have named “Imprompter”. The attack uses a clever trick to hide malicious instructions within seemingly random text.
As the “Imprompter: Tricking LLM Agents into Improper Tool Use” research paper explains: