False positives in API security are a serious problem, often resulting in wasted time and resources, missed genuine threats, alert fatigue, and operational disruption. Fortunately, emerging technologies like machine learning (ML) can help organizations minimize false positives and streamline the protection of their APIs. Let’s examine how.
What are the Risks Associated with False Positives in API Security?
Before discussing how machine learning can reduce false positives in API security, it’s essential to understand their risks, which fall into two categories: business and security.
- Business Risks: False positives can cause financial losses by blocking legitimate API calls, which can lead clients to cancel contracts and damage the service provider’s reputation. These issues also increase support costs and affect multiple teams.
- Security Risks: False positives can distort service analytics, leading to poor decision-making and weakened threat responses. Security teams might disable protective measures to reduce alerts, introducing vulnerabilities. Additionally, blocked API requests can corrupt data sets, compromising decision-making and increasing exposure to attacks.
How Can Machine Learning Minimize False Positives in API Security?
ML technologies enhance API security by considering the context of each request—such as a user’s action history, geolocation, and time of access—to make better-informed decisions and reduce false positives that might occur due to context-insensitive rules.
The process works like this: ML algorithms analyze historical API traffic, including typical request rates, commonly accessed endpoints, usual data payloads, and user interaction patterns, to establish a baseline of normal behavior. The algorithm then compares ongoing behavior against this baseline and alerts security teams to deviations. Over time, the algorithm learns more about legitimate behavior patterns, reducing the chance of flagging them as threats.
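To make the baselining idea concrete, here is a minimal sketch in Python (an illustration, not Wallarm’s implementation): it learns per-endpoint request rates from historical traffic and flags values that fall far outside the learned band. The feature choice and the three-sigma cutoff are assumptions for the example.

```python
from statistics import mean, stdev

class RateBaseline:
    """Toy baseline: learn normal per-endpoint request rates from history,
    then flag observations that deviate strongly from that baseline."""

    def __init__(self, history, k=3.0):
        # history: {endpoint: [requests-per-minute samples]}
        self.k = k
        self.stats = {
            ep: (mean(samples), stdev(samples))
            for ep, samples in history.items()
            if len(samples) >= 2
        }

    def is_anomalous(self, endpoint, rate):
        if endpoint not in self.stats:
            return False  # no baseline yet: defer to other detectors
        mu, sigma = self.stats[endpoint]
        # Flag only when the rate sits far outside the learned band.
        return abs(rate - mu) > self.k * max(sigma, 1e-9)

history = {"/login": [12, 15, 11, 14, 13], "/search": [80, 95, 88, 90, 85]}
baseline = RateBaseline(history)
print(baseline.is_anomalous("/login", 300))  # True: far above normal
print(baseline.is_anomalous("/search", 92))  # False: within the normal band
```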
Machine learning-powered techniques improve on traditional, static methods, as the latter do not account for the dynamic nature of API usage. ML models continuously learn and evolve with data, providing more accurate detection and minimizing disruptions caused by false alarms. This adaptability ensures that security measures remain effective while maintaining a seamless user experience.
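To illustrate what “continuously learn and evolve” can look like in practice, the hedged sketch below maintains an exponentially weighted moving average of a traffic metric, so the notion of “normal” drifts along with legitimate usage instead of staying frozen like a static rule. The smoothing factor and band width are illustrative assumptions.

```python
class AdaptiveBaseline:
    """Toy online baseline: an exponentially weighted moving average (EWMA)
    of a metric and its variance, so 'normal' drifts with real usage."""

    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha  # smoothing factor: higher = adapts faster
        self.k = k          # anomaly band width in standard deviations
        self.mu = None
        self.var = 0.0

    def observe(self, value):
        """Score the value against the current baseline, then fold it in."""
        if self.mu is None:
            self.mu, anomalous = value, False
        else:
            sigma = self.var ** 0.5
            anomalous = abs(value - self.mu) > self.k * max(sigma, 1e-9)
            # Update mean and variance so the baseline tracks drift.
            diff = value - self.mu
            self.mu += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

b = AdaptiveBaseline()
for rate in [10, 11, 12, 11, 10, 13, 12]:
    b.observe(rate)        # legitimate traffic gradually shapes the baseline
print(b.observe(100))      # True: a sudden spike is still flagged
```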
For example, a common API attack prone to false positives is Account Takeover, where attackers gain unauthorized access using compromised credentials. This attack can mimic legitimate user behavior, such as multiple login attempts or using a VPN, which traditional rule-based systems flag as suspicious. Wallarm’s ML-based API Abuse Prevention solution handles this by analyzing various indicators, such as authentication time, location, IP reputation, and number of login attempts. Its detectors adapt to the specific behavior of users and the protected API, allowing legitimate actions (e.g., password errors) to proceed without unnecessary blocking, effectively reducing false positives.
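A rough sketch of this kind of multi-indicator scoring (the weights, cutoffs, and field names are illustrative assumptions, not Wallarm’s detectors) shows how several weak signals can be combined so that a single oddity, such as a password typo, is not enough to block a session:

```python
def ato_risk_score(session):
    """Toy account-takeover scorer: combine weak indicators into one score
    so no single signal (e.g. a password typo) triggers a block by itself."""
    score = 0.0
    # Every weight and cutoff below is an illustrative assumption.
    if session["failed_logins"] > 5:
        score += 0.4                                 # credential-stuffing pattern
    if session["country"] != session["usual_country"]:
        score += 0.2                                 # unusual geolocation
    if session["ip_reputation"] == "bad":
        score += 0.3                                 # known-bad source address
    if not (6 <= session["login_hour"] <= 23):
        score += 0.1                                 # login at an unusual time
    return score

session = {
    "failed_logins": 2,          # a couple of typos: legitimate-looking
    "country": "DE",
    "usual_country": "DE",
    "ip_reputation": "good",
    "login_hour": 9,
}
print(ato_risk_score(session) >= 0.6)  # False: allowed, no false positive
```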
Challenges in Using Machine Learning to Reduce False Positives
That said, using machine learning technology to reduce false positives is challenging. As an API security provider, Wallarm is all too aware of this.
Developing Approaches for Multiple Customers
First and foremost, it can be difficult to develop approaches that work for multiple, diverse customers, as each customer has their own APIs with unique features. While it’s possible to build an ML model that works well for a specific API, creating one that works for all APIs is a serious challenge. To overcome this, Wallarm identifies the clients for whom the model does not work as well as it should and adapts its detectors to the specific characteristics of that customer’s API.
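One plausible shape for this per-customer adaptation (an assumption for illustration, not Wallarm’s internal mechanism) is to calibrate each detector’s threshold from the customer’s own traffic rather than using a global default:

```python
def calibrate_threshold(scores, target_fp_rate=0.01):
    """Toy per-customer calibration: pick a detector threshold from the
    customer's own (presumed-benign) traffic scores so that roughly
    target_fp_rate of legitimate sessions would be flagged."""
    ranked = sorted(scores)
    # Threshold at the (1 - target_fp_rate) quantile of benign scores.
    idx = min(int(len(ranked) * (1 - target_fp_rate)), len(ranked) - 1)
    return ranked[idx]

customer_a = [0.1, 0.2, 0.15, 0.3, 0.25] * 40   # calm API, low scores
customer_b = [0.4, 0.6, 0.5, 0.7, 0.55] * 40    # noisy API, higher scores
print(calibrate_threshold(customer_a))  # lower threshold fits customer A
print(calibrate_threshold(customer_b))  # higher threshold fits customer B
```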
Insufficient or Poor-Quality Training Data
Moreover, accurate results rely on a large volume of real traffic on which the model can be trained, and some customers may not have the necessary traffic or, if they are trialing the solution, may only have synthetic traffic. Other customers may have highly diverse traffic in which every user behaves differently, or, conversely, traffic so homogeneous that users are all but indistinguishable from one another. In all of these cases, the solution is to use Wallarm’s special detectors, which combine the traditional, signature-based approach to threat detection with threat intelligence feeds.
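A hedged sketch of such a hybrid fallback (the signature patterns, feed contents, and volume cutoff are illustrative assumptions): when too little real traffic exists to trust a learned model, classification falls back to signatures combined with a threat-intelligence IP list.

```python
import re

# Illustrative signature rules and threat-intel feed, not real Wallarm data.
SIGNATURES = [re.compile(r"(?i)union\s+select"), re.compile(r"\.\./\.\./")]
THREAT_INTEL_IPS = {"203.0.113.7", "198.51.100.23"}  # documentation IPs

MIN_TRAINING_REQUESTS = 10_000  # assumed minimum for a trustworthy model

def classify(request, ml_model=None, training_volume=0):
    """Use the ML model only when enough real traffic backed its training;
    otherwise rely on signatures plus the threat-intelligence feed."""
    if ml_model is not None and training_volume >= MIN_TRAINING_REQUESTS:
        return ml_model(request)
    if request["source_ip"] in THREAT_INTEL_IPS:
        return "block"
    if any(sig.search(request["payload"]) for sig in SIGNATURES):
        return "block"
    return "allow"

req = {"source_ip": "192.0.2.1", "payload": "id=1 UNION SELECT password"}
print(classify(req, ml_model=None, training_volume=500))  # 'block'
```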
Changes in Traffic Volume or APIs
Another potential problem is changes in traffic volume or in the APIs themselves. For example, during sales events, traffic volume can increase sharply, and users may become much more active than before; this surge of more active users may be mistaken for an attack. To solve this problem, Wallarm updates the models in its detectors before each traffic analysis using fresh traffic from the customer. Detector thresholds are adjusted to account for the changes in traffic when the new behavior appears in most sessions, ensuring that new, more active users are not blocked.
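One way to picture this population-level check (an illustrative sketch, not the production logic): raise a rate threshold only when the higher activity appears across most sessions, which suggests an event such as a sale rather than a single abusive client.

```python
def adjust_threshold(session_rates, current_threshold, majority=0.5):
    """Toy surge handling: raise the rate threshold only when most sessions
    exceed it, i.e. the shift looks like a sale-driven population change
    rather than a handful of abusive clients."""
    above = sum(rate > current_threshold for rate in session_rates)
    if above / len(session_rates) > majority:
        # Population-wide shift: re-anchor the threshold to the new norm.
        ranked = sorted(session_rates)
        return ranked[int(len(ranked) * 0.95)]  # illustrative 95th percentile
    return current_threshold  # isolated spikes: keep flagging them

# Sale day: nearly everyone is more active, so the threshold moves up.
sale_day = [30, 35, 40, 32, 38, 45, 36, 33, 41, 37]
print(adjust_threshold(sale_day, current_threshold=20))    # raised to 45

# Normal day with one outlier: threshold stays, the outlier stays flagged.
normal_day = [10, 12, 11, 9, 13, 10, 12, 95, 11, 10]
print(adjust_threshold(normal_day, current_threshold=20))  # unchanged (20)
```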
Minimizing False Positives Without Missing Genuine Threats
When attempting to minimize false positives in API security, it’s crucial to ensure that solutions don’t miss genuine threats. Wallarm’s API Abuse Prevention solution uses several methods to address this:
- Filtering System: A rule-based system filters out false positives by accounting for specific behaviors, like health checks or non-standard automation, which may be detected as suspicious but are essential for the customer.
- Weighted Voting System: This system assigns different weights to multiple detectors, each focusing on a specific sign of automated threats, for more accurate detection (see the sketch after this list).
- Regular Evaluation: Automated and manual quality checks ensure the accuracy of results, using tools to visualize API activity and confirm whether a session is genuinely malicious or business-related.
- Hybrid Approach: The Wallarm solution combines signature-based methods with machine learning. Depending on customer needs, the approach adjusts to prioritize either accuracy or broader threat detection, balancing false positives and negatives.
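As a minimal sketch of how the filtering and weighted-voting pieces can fit together (the detector list, weights, and allowlist rule are illustrative assumptions, not Wallarm’s actual detectors):

```python
# Illustrative allowlist rule: health checks look automated but are benign.
def is_allowlisted(session):
    return session["path"] == "/healthz" and session["user_agent"] == "k8s-probe"

# Each detector votes on one sign of automation; the weights are assumptions.
DETECTORS = [
    (0.5, lambda s: s["requests_per_minute"] > 100),        # abnormal rate
    (0.3, lambda s: s["distinct_endpoints"] > 50),          # endpoint scanning
    (0.2, lambda s: s["has_browser_fingerprint"] is False), # headless client
]

def is_automated_threat(session, threshold=0.6):
    """Rule-based filter first, then a weighted vote across detectors."""
    if is_allowlisted(session):
        return False
    score = sum(weight for weight, detect in DETECTORS if detect(session))
    return score >= threshold

probe = {"path": "/healthz", "user_agent": "k8s-probe",
         "requests_per_minute": 300, "distinct_endpoints": 1,
         "has_browser_fingerprint": False}
print(is_automated_threat(probe))    # False: filtered out despite high rate

scraper = {"path": "/api/items", "user_agent": "curl",
           "requests_per_minute": 250, "distinct_endpoints": 80,
           "has_browser_fingerprint": False}
print(is_automated_threat(scraper))  # True: 0.5 + 0.3 + 0.2 >= 0.6
```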
Balancing Machine Learning Tools with Human Oversight and Expertise
It’s also important to remember that human oversight remains essential when using AI technologies like ML tools. For organizations deploying their own ML tools for API security, consider the following:
- Set Clear Goals: Define your objectives for using ML, like reducing false positives or detecting advanced threats. If the problem can be solved without ML, use more straightforward methods.
- Educate Your Team: Ensure your security team understands ML concepts, especially those specific to your tools. Utilize public resources, courses, and thorough documentation review.
- Data Quality: Train ML models with accurate and relevant data to ensure optimal performance.
- Transparency: Make sure security analysts can interpret and control ML decisions and results.
- Clear Protocols: Establish protocols for escalating alerts from ML tools to human analysts, specifying timelines and decision-making roles.
For Wallarm’s models, ongoing customer support and regular verification of results keep the tools accurate and valuable to users. Book a demo of our cutting-edge solutions to see how Wallarm can help your organization.