Business readiness for the impending deepfake superstorm

EXECUTIVE SUMMARY:

Deepfake technologies, powered by artificial intelligence (AI), are proliferating rapidly and affecting businesses of every size, worldwide. Year over year, AI-driven deepfake attacks have increased by an astonishing 3,000%. Although deepfake technologies have legitimate applications, the risk they pose to businesses is non-trivial. The following is a brief overview of what to watch for:

Business risk

1. Deepfakes impersonating executives. Deepfakes can now mimic the voices and appearances of high-ranking individuals so convincingly that cyber criminals are using them to manipulate financial transactions, obtain authorization for fraudulent payments, and gain access to sensitive information.

The financial losses caused by deepfakes can be substantial. Think $25 million or more, as exemplified in this incident. Losses of that magnitude cut directly into gross revenue and can jeopardize a company’s future.

What’s more, even a single executive impersonation can send stakeholders into a tailspin as they wonder whom to trust, when to trust them, and whether only in-person interactions can be trusted. This can disrupt day-to-day operations, creating internal instability and turmoil.

2. Reputational damage. If deepfakes are used publicly against an organization – for example, a fabricated video of the CEO on stage sharing a falsehood – the business’s image can deteriorate rapidly.

The situation could unravel further if a high-level individual is depicted participating in unethical or illegal behavior. Rebuilding trust and credibility after such incidents can be challenging and time-consuming (or outright impossible).

3. Erosion of public trust. Deepfakes can deceive customers, clients and partners alike.

For example, a cyber criminal could deepfake a customer service representative, pretending to assist a client while stealing personal details. Or, a partner organization could be misled by deepfake impersonators on a video call.

Events like these erode trust, lead to lost business and cause public reputational harm. When clients or partners report deepfake incidents, headlines follow quickly, and prospective clients or partners may back out of otherwise valuable deals.

Credit risk warning

Cyber security experts aren’t the only people who are concerned about the impending “deepfake superstorm” that threatens to imperil businesses. In May, credit ratings firm Moody’s warned that deepfakes could pose credit risks. The corresponding report points to a series of deepfake scams that have impacted the financial sector.

These scams have frequently involved fake video calls. Preventing deepfake scams – through stronger cyber security and related measures – can help businesses maintain good credit, acquire new capital and obtain lower insurance rates, among other benefits.

Cyber security solutions

Deepfake detection tools can help. These tools typically combine several identification techniques – including deep learning algorithms and other machine learning models – to flag manipulated audio and video before they cause harm.
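As a rough illustration of the detection approach described above, the following Python sketch samples frames from a recorded video and scores them with a binary "real vs. fake" image classifier. This is a minimal sketch under stated assumptions, not any vendor's product: the ResNet-18 architecture, the placeholder weights file (detector_weights.pt, assumed to be fine-tuned on a deepfake dataset such as FaceForensics++), the video filename and the sampling interval are all illustrative. Production detectors combine many more signals, such as audio artifacts, temporal inconsistencies and metadata.

```python
# Minimal frame-level deepfake screening sketch.
# Assumes a binary classifier (real vs. fake) fine-tuned on a deepfake
# dataset; "detector_weights.pt" is a hypothetical placeholder path.
import cv2                      # pip install opencv-python
import torch
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for the classifier input.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path: str = "detector_weights.pt") -> torch.nn.Module:
    """Build a ResNet-18 with a 2-class head and load fine-tuned weights
    (assumed to exist at weights_path)."""
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def score_video(path: str, model: torch.nn.Module, every_n: int = 30) -> float:
    """Return the mean 'fake' probability over every n-th frame."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())   # index 1 = "fake" class
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    detector = load_detector()
    fake_prob = score_video("incoming_call_recording.mp4", detector)
    print(f"Estimated probability of manipulation: {fake_prob:.2f}")
```

Averaging per-frame scores keeps the example simple; real-world tools typically weight faces, track them across frames and fuse video scores with audio analysis before raising an alert.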

Check Point Research (CPR) actively investigates emerging threats, including deepfakes, and the research informs Check Point’s robust security solutions, which are designed to combat deepfake-related risks.

To see how a Check Point expert views and prevents deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.
