Cybersecurity Leader Uploads Sensitive Files to AI

Not Surprisingly, Trouble Ensues

Last summer, the interim head of a major U.S. cybersecurity agency uploaded sensitive government contracting documents into the public version of ChatGPT. The files were marked “For Official Use Only,” meaning they were sensitive, though not classified secret or top secret. Once placed into a public LLM, that data may be used by the model to answer questions from other users, putting it at risk of disclosure and discovery.

The incident stands out because this individual was a cybersecurity leader responsible for defending the U.S.’s national infrastructure. Fortunately, automated security controls detected the uploads and triggered alerts that led to an internal review before significant damage was done.

In that sense, the system worked. But the more important lesson here is less obvious. From where CyberHoot sits, the attraction of AI tools can get the best of even educated, well-meaning, and high-ranking individuals. The technology is so new and powerful that employees, including your employees, find it hard to resist using it to hone their communications and analyze documents, putting sensitive business data at risk in the process. Read on to find out what we all should do to put up our own safeguards.

Why This Happens Even to Smart People

This mistake was not driven by malice or ill will. This career cybersecurity professional was simply trying to work faster, smarter, and more efficiently. That same motivation exists in every organization today. Employees use AI tools to draft emails, summarize documents, and analyze data. The problem is not productivity. The problem is awareness. People do not understand the difference between public and private AI tools. They assume a tool is safe because it is popular, widely used, or not explicitly blocked by IT (in this case, the individual had forced an exception to use ChatGPT, which was blocked for everyone else in the agency). That assumption leads to everyday mistakes, such as pasting customer data, company financials, or a human resources complaint into a public AI prompt. Doing this puts the data at risk of discovery.

What Public AI Tools Do With Your Data

When information is entered into a public AI service, it is shared with the company operating that platform. Most free AI tools use user inputs to improve their models, which means the data may be stored or reused in ways the user never intended. Uploading a customer list, a contract, or internal reports is not a breach or a hack. It is a direct result of the terms of service that users agree to when they start using the tool. Once the data is shared, control over it is effectively lost.

Why Training and Proper Tooling Matter More Than Handbook Prohibitions

This is why awareness training matters more than written policies. AI prohibitions buried on page 53 of a 100-page employee handbook will not be read or understood, and they certainly won’t stop real-world usage. With a technology this new and powerful, people need simple lessons on why the rules exist and how to apply them in their daily work. Effective training explains what public AI tools are and how they differ from private ones. It should also spell out what data must never be uploaded and which approved tools can be used. The goal is not to scare teams away from AI. The goal is to help them use it safely and confidently.

IT Teams and Security Professionals Must Prepare Solutions

In addition to training your employees on how to protect sensitive data, IT teams must also empower their users to work the way they want to work, inside private or secure LLMs. Provision private AI tools you have evaluated and secured appropriately for use by your team. Require employees to use the company-approved tools and train them on why this matters. Reward the individuals who follow the rules rather than making examples of those who fail. Make your approved tools the easiest way to work, and you’ll have compliance built into your tech stack. Finally, monitor for the oversights and mistakes that inevitably creep in, as this government agency did, so you can address them before they become larger data breaches.

The Takeaway for Every Organization

The lesson is clear. If a senior cybersecurity leader can make this mistake, any organization can. Employees are like river water: they will follow the path of least resistance to complete their work. They need to be trained on the risks of sharing sensitive data with public LLMs. With clear guidance, short conversations, and approved low-friction AI solutions in place, you can reduce risk without blocking innovation, slowing work, or frustrating end users. Start small, train your team, and build from there. When people understand the rules and have safe tools to use, they will protect your data every time.

