Introduction
In the realm of video conferencing, AI digital assistants are becoming increasingly popular. Their integration into platforms like Teams and Zoom is convenient, providing excellent summaries of discussion topics. However, they raise cybersecurity concerns about the data being collected and exposed to the unknown third parties providing these services. This article explores the potential risks of using AI digital assistants on your video calls, focusing on sensitive data exposure and data privacy risks.
Understanding the Risks
AI digital assistants in video conferencing have access to any critical data discussed or shared in those sessions. They process conversations and chat histories and create video recordings, storing this data in third-party cloud environments. This exposes any sensitive information discussed or shared on these third-party video sessions. Platforms like Fathom promise data protection and encrypted communications between all participants, and they build robust, though not complete, protections into their privacy policies. That said, many risks remain.
Privacy Policies and Data Sharing
Privacy policies like Fathom’s are crucial. They state intentions not to misuse data. However, loopholes exist. Fathom’s own Data Privacy policy states:
[Fathom] Marketing. We do not rent, sell, or share information about you with nonaffiliated companies for their direct marketing purposes, unless we have your permission. We do not sell the content of your meetings or information about your meeting attendees to anyone.
Does this mean companies affiliated with Fathom are able to obtain transcripts of “de-identified” data? Critical, sensitive, or even regulated data might still be accessible to third parties without your direct consent or knowledge. Gaining a thorough understanding of a digital AI assistant’s data privacy policies is vital to your data’s security.
The Threat of Data Leakage
Video conferencing involves sharing sensitive information, and AI digital assistants might unintentionally capture it. Think back to your most recent video meetings: did you discuss anything mission-critical with your team? Were there confidential discussions or private data? Would you be comfortable sharing that with an unknown third-party digital assistant? To CyberHoot vCISOs, the risk of critical and sensitive data leakage is significant.
AI’s Data Processing: A Double-Edged Sword
AI assistants process data to enhance our video conferencing experiences. However, this processing is a double-edged sword: it benefits the meeting summary recipients, but at what cost? Is the data shared with affiliated partners of the digital AI assistant, and does that expose your data?
AI and Compliance Issues
Using AI in regulated industries can complicate compliance. Industries like healthcare and finance have strict regulations, and AI’s data handling might not always align with them, putting you in violation of disclosure laws. It is highly advisable to hold off on adopting AI technologies such as digital AI assistant video plugins until more is known about how they process and share the data entrusted to them for summarization.
Best Practices for Using AI Assistants
- Understand AI Capabilities: Know what data the AI processes and stores.
- Check Privacy Settings and Policies: Ensure settings are configured for maximum privacy. Review data privacy policies of your digital AI assistants to see if they are acceptable.
- Regularly Review Policies: Stay updated with platform policy changes.
- Limit Sensitive Discussions: Avoid sharing highly confidential information on any video calls where a digital assistant is present.
- Use Encryption: Encrypt data to protect it during transmission.
- Train Staff: Educate your team on AI risks and best practices.
- Govern Staff: Update your cybersecurity governance policies, especially in regulated industries, to guide staff on whether they may use such digital assistants. If your business discusses critical intellectual property, healthcare, or financial matters, put policies in place that prohibit AI digital assistants in those regulated industries and contexts.
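The governance practices above can be backed by simple tooling. As a minimal illustrative sketch in Python (the bot names and the `flag_ai_assistants` helper are hypothetical, not an official denylist), a meeting host or compliance script could check a participant roster against known AI notetaker bot names before sensitive discussion begins:

```python
# Hypothetical denylist of AI notetaker display names -- illustrative only;
# maintain your own list based on the assistants your organization encounters.
KNOWN_AI_ASSISTANTS = {
    "fathom notetaker",
    "otter.ai",
    "fireflies.ai notetaker",
}

def flag_ai_assistants(participants):
    """Return participants whose display names match the denylist.

    Comparison is case-insensitive and ignores surrounding whitespace,
    since bot display names vary across platforms.
    """
    return [
        name for name in participants
        if name.strip().lower() in KNOWN_AI_ASSISTANTS
    ]

if __name__ == "__main__":
    roster = ["Alice Smith", "Fathom Notetaker", "Bob Jones"]
    print(flag_ai_assistants(roster))  # -> ['Fathom Notetaker']
```

In practice, a check like this would pull the live roster from your conferencing platform’s admin API and alert the host, but even a manual glance at the participant list for notetaker bots achieves the same goal.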
Conclusion
In this article, we navigated the potential risks of AI assistants in video conferencing. Critical and sensitive data is being exposed by these newly minted AI assistants. Digital assistants attending video conferences seem like a breakthrough use of AI, and the summarization power and accuracy of these tools have fueled their meteoric rise in popularity. However, this convenience comes with serious data privacy risks. Understanding and mitigating those risks is crucial for data protection and your cybersecurity. As we embrace new AI technologies, adopting a cautious approach and staying informed will be key to protecting your sensitive, critical, or regulated data from exposure.