The role of AI in combating cyber crime

In this exclusive Cyber Talk article, CEO of SEON, Tamas Kadar, shares insights into artificial intelligence-based cyber attacks and how to address them.

Phishing and pharming incidents are becoming more common in the U.S. due to the growing sophistication of hackers. The rise of AI-based phishing attacks has made it easier for criminals to deceive individuals into sharing their personal information, often bypassing email spam filters. However, AI is not solely used by criminals seeking to evade security measures. As you may already know, AI can also be utilized to combat cyber crime. In this article, we will explore various applications of AI in addressing cyber crime, along with the existing rules and regulations governing its use in the current market.

Cyber crime trends in 2023

Tackling cyber crime is becoming increasingly challenging, partly due to the rising volume of attacks. SEON’s Global Cybercrime Report found that phishing is more of an international issue than ever before. In the U.S. alone, the study found that it was the most common form of cyber attack – alongside pharming – in 2022.

Both phishing and pharming involve tricking victims into providing confidential personal information, such as credit card numbers, email login details, or any other details that allow a criminal to access a victim’s finances. One of the biggest phishing attacks of 2022 targeted the company Twilio: Fraudsters posing as its IT department sent employees SMS messages claiming their passwords had expired and tricked them into providing their login credentials.

An article in The Guardian reported that criminals are now using AI to improve their phishing techniques. One tool up their sleeve is the AI chatbot: While earlier phishing attempts were easier to spot thanks to poor spelling and grammar, AI chatbots now produce polished messages that are better equipped to bypass your spam filter.

AI cyber security tools can help you respond to criminals’ increasingly sophisticated phishing tactics. Some are even capable of catching fraudsters before they’re able to onboard with stolen credentials.

AI cyber security tools on the market

Combining data enrichment with machine learning is one of the most significant trends of recent years. To start, data enrichment helps you build a better customer profile by aggregating data from multiple sources into a complete dataset. Using a data point like a customer’s phone number or email address, you can glean more information about their social media footprint and how long those accounts have existed.

Combining the two allows your cyber security tools to analyze the additional customer data and decide, based on specific rules, whether it adds up to a risky profile. You’ll have access to readable rules and parameters if you’re using whitebox machine learning (rather than the less transparent blackbox machine learning).
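To make the whitebox idea concrete, here is a minimal Python sketch. The enrichment lookup, rule names, and thresholds are all illustrative assumptions rather than any vendor’s actual API; the point is that every risk score is traceable to named, human-readable rules.

```python
# Hypothetical sketch of enrichment + whitebox rule-based scoring.
# All function names, fields, and thresholds are illustrative.

def enrich_profile(email: str) -> dict:
    """Stand-in for a data-enrichment lookup. In practice this would
    query external sources for the address's social footprint."""
    # Hard-coded example data for demonstration only.
    return {
        "email": email,
        "domain_age_days": 12,        # how long the email domain has existed
        "social_profiles_found": 0,   # e.g. LinkedIn, Facebook matches
        "breach_appearances": 0,      # times seen in known data breaches
    }

def score_profile(profile: dict) -> tuple[int, list[str]]:
    """Whitebox scoring: every triggered rule is named, so the
    resulting risk score is fully explainable."""
    score, triggered = 0, []
    if profile["domain_age_days"] < 30:
        score += 40
        triggered.append("very_new_email_domain")
    if profile["social_profiles_found"] == 0:
        score += 30
        triggered.append("no_social_footprint")
    if profile["breach_appearances"] > 3:
        score += 20
        triggered.append("heavily_breached_email")
    return score, triggered

profile = enrich_profile("new.user@example.com")
score, reasons = score_profile(profile)
print(score, reasons)  # prints: 70 ['very_new_email_domain', 'no_social_footprint']
```

Because each rule is explicit, an analyst (or a regulator) can see exactly why a profile was flagged, which is the practical difference from a blackbox model.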

Another trend in AI-powered cyber security is behavioral analytics. In this case, your ML tool analyzes user behavior and determines whether a pattern is suspicious. Examples of behavioral signals include how a customer uses their mouse, the time of day they log on, and even their typing cadence. By referring to data on the typical behavioral patterns of legitimate customers, ML tools can spot when a user’s behavior seems out of place.
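As a rough illustration of the idea, the sketch below flags a single behavioral measurement (here, a hypothetical typing-cadence reading in milliseconds) that deviates sharply from a customer’s historical baseline. Real systems combine many such signals; the z-score approach and its threshold are assumptions for the example.

```python
# Illustrative behavioral-analytics check: flag a measurement that
# deviates strongly from a customer's historical baseline.
from statistics import mean, stdev

def is_suspicious(history: list[float], observed: float,
                  threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations
    from the customer's historical mean (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Typing cadence in ms between keystrokes, from past sessions.
cadence_history = [182.0, 175.0, 190.0, 178.0, 185.0]
print(is_suspicious(cadence_history, 184.0))  # False: within normal range
print(is_suspicious(cadence_history, 95.0))   # True: far faster than usual
```

A production tool would score mouse movement, login times, and cadence together, but the principle is the same: known-good behavior defines the baseline, and outliers get escalated.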

Using customer data, however, comes with complications which we’ll look at in this next section on the AI regulatory landscape in 2023.

The current AI regulatory landscape

The E.U. is currently developing an AI Act, first proposed by the European Commission in 2021 and yet to be passed. The proposed act emphasizes data security and cyber security requirements for AI tools. This is especially crucial if you are handling large quantities of user data – which your tool will be if it’s analyzing your customers for cyber threats.

On the other hand, the U.K. government is taking a different approach, introducing a white paper with the aim of balancing customer trust with innovation.

While the E.U. and U.K. develop their own regulations, the U.S. is taking a more voluntary approach to safe AI usage. It has not adopted federal AI regulations and has nothing that comes close to the rules proposed by the E.U.

There is, however, NIST’s Artificial Intelligence Risk Management Framework 1.0 (the RMF), which companies utilizing AI are not required to follow. While it’s not currently an industry standard or regulation, it could be worked into one if the U.S. government deems it necessary.

What is the Risk Management Framework?

The RMF outlines key ways to ensure the trustworthiness of an AI system: making the system resilient to attacks, maintaining privacy and data confidentiality, and ensuring the validity and reliability of the information it provides.

As you can see, there are clear ramifications to ignoring the framework: leaving your systems vulnerable to attack, or exposing your customers’ data to theft. You also need to be transparent with customers about how their data will be used (regardless of whether you’re using cyber security tools or any other AI-based tool).

What the RMF says about using customer data

The RMF also educates companies about the uses and risks of AI. One of the most important aspects for users of AI-based cyber security tools is the handling of customers’ personal data: If your tool uses behavioral analysis and data enrichment to assess customer risk, you will need to make your adherence to privacy laws clear to your users.

As a company using AI-based cyber security tools, it’s a good idea to ensure that those tools follow these laws and to make this clear in a company policy (ideally one displayed in a clear, easily accessible part of your website).

AI bias

Another potential risk to the success of your AI-based defense against phishing and other attacks is AI bias. All datasets are incomplete to some degree and will therefore always contain some bias, so it’s important to recognize where it can arise. AI is “only as good as the data it’s given”: A system might decide that a user is hostile based on the developer’s own beliefs about the security of particular organizations or countries.

If you don’t provide your AI with sufficiently diverse training data, it may not make accurate decisions. That’s because, even where your dataset is incomplete, the algorithm will still work to fill in the gaps through data aggregation, carrying any bias in the data along with it.

Plat.AI’s article on AI bias describes it as one of the biggest challenges of 2023: Across multiple industries and applications, machine learning contributes to a society in which some groups and individuals are disadvantaged. The Wall Street Journal found that, due to the expense of limiting such bias, most companies take a reactive rather than proactive approach to dealing with AI bias in their products.

The Wall Street Journal found that “AI systems have been found to be less accurate at identifying the faces of dark-skinned people, particularly women [and separately, have been found] to give women lower credit-card limits than their husbands.”

Solutions to AI bias in cyber security tools

In the context of the AI cyber security tools you’re using, decisions may still be influenced by both developer bias and training data. This can lead to frustrating outcomes, such as false positives in which legitimate users are blocked as spam. The answer, then, is to track and assess the risk of bias in any algorithms you’re using for cyber security purposes.
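One simple way to start tracking that risk is to compare error rates across user groups. The sketch below, using entirely made-up data, computes the false-positive rate (the share of legitimate users wrongly blocked) for two hypothetical groups; a persistent gap between groups is a signal to audit your rules and training data.

```python
# Sketch of a basic bias check: compare false-positive rates
# (legitimate users wrongly blocked) across user groups.
# All decision data here is fabricated for illustration.

def false_positive_rate(decisions: list[tuple[bool, bool]]) -> float:
    """decisions: (was_blocked, was_actually_legitimate) pairs.
    FPR = legitimate users blocked / all legitimate users."""
    legit_outcomes = [blocked for blocked, legitimate in decisions if legitimate]
    return sum(legit_outcomes) / len(legit_outcomes) if legit_outcomes else 0.0

# Hypothetical decision logs for two user groups.
group_a = [(False, True)] * 95 + [(True, True)] * 5 + [(True, False)] * 10
group_b = [(False, True)] * 80 + [(True, True)] * 20 + [(True, False)] * 10

fpr_a = false_positive_rate(group_a)  # 5 of 100 legitimate users blocked
fpr_b = false_positive_rate(group_b)  # 20 of 100 legitimate users blocked

# A large gap like this one would warrant an audit of the rules
# and training data driving the blocking decisions.
print(fpr_a, fpr_b)  # prints: 0.05 0.2
```

More formal fairness metrics exist, but even a routine report like this, broken down by group, turns bias from an invisible risk into something you can monitor and act on.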

About the author
The Co-Founder of SEON Fraud Fighters, the Hungarian startup that broke funding records, Tamas Kadar is also the founder of Central Europe’s first crypto exchange. It was his experience there that led him to start his own fraud prevention company, once he realized that what was already on the market didn’t cover his needs. Starting with the bold idea of utilizing digital footprints and social signals to assess customers’ true intentions, SEON promises to democratize the fight against fraud. Today, the company protects 5000+ brands around the world as an industry-agnostic, fully customizable, yet intuitive end-to-end fraud prevention solution that’s highly ranked in the industry.
