‘Hypnotized’ ChatGPT and Bard Will Convince Users to Pay Ransoms and Drive Through Red Lights

Security researchers at IBM say they were able to successfully “hypnotize” prominent large language models like OpenAI’s ChatGPT into leaking confidential financial information, generating malicious code, encouraging users to pay ransoms, and even advising drivers to plow through red lights. The researchers were able…