When your AI Assistant Becomes the Attacker’s Command-and-Control

Earlier this month, Microsoft uncovered SesameOp, a new backdoor malware that abuses the OpenAI Assistants API as a covert command-and-control (C2) channel. The discovery has drawn significant attention within the cybersecurity community. Security teams can no longer focus solely on endpoint malware: attackers are weaponizing public, legitimate AI assistant APIs, and defenders must adjust.

What is SesameOp?

SesameOp is a custom backdoor malware. It is engineered to maintain persistence and, crucially, allow an attacker to manage compromised devices covertly. 

According to Microsoft, the infection chain combines a loader component (Netapi64.dll) with a .NET-based backdoor (OpenAIAgent.Netapi64) and leverages the OpenAI API as a C2 channel to fetch encrypted commands.

In practice, this means the attackers abused fields such as assistant descriptions, custom instructions, and messages to store data, including commands for the SesameOp backdoor to retrieve and execute.

The really important thing, however, is that attackers did not exploit a vulnerability in the AI service; they repurposed a legitimate API feature. Why is that a problem? Because it means the legitimate cloud-AI features organizations trust and rely on can now form part of an attacker’s infrastructure. And that means attacks on them are much harder to detect. 

Using legitimate services for C2 isn’t a new phenomenon. This incident is just the latest in a history of covert communications disguised as legitimate use. For example, earlier this year, Palo Alto Networks’ Unit 42 documented a malware campaign using AWS Lambda URLs for C2. In other words, the pattern isn’t new, but the specific execution is.

So what does this mean for your API surface, your AI and agentic deployments, and how you defend them?

What Does SesameOp Mean for the API Risk Landscape?

Vulnerable APIs are nothing new. In Q3 2025, Wallarm logged 1,602 API-related vulnerabilities, 20% more than in the previous quarter, with misconfigurations accounting for 38% and broken authentication issues close behind. Even more telling: 16% of additions to CISA’s KEV catalog were API-related. 

But now the API attack surface is more complicated. Organizations are rolling out AI assistants, orchestration agents, and MCP-based microservices, creating new endpoints, traffic patterns, and entirely new trust boundaries that most organizations haven’t mapped or instrumented. SesameOp shows how attackers are using this reality to their advantage. 

The critical thing to understand about SesameOp is that the attackers didn’t break OpenAI’s Assistants API; they merely repurposed it. They turned a legitimate AI endpoint into a covert communication channel that blends in with legitimate traffic. Put simply, the AI assistant was technically functioning exactly as it should.

SesameOp is emblematic of a broader shift in API attack tactics. Attackers are shifting away from traditional code exploits and toward exploiting legitimate services and business logic. They utilize cloud platforms as infrastructure, blend in with typical AI-generated traffic, and exploit the trust organizations place in AI systems. 

Traditional WAFs and API gateways weren’t designed to deal with this. They can’t interpret assistant instructions, agent workflows, context-passing, or MCP behavior. They don’t understand when an API endpoint is behaving “off” or being used as a covert channel. 

Defenders must adapt. They require API- and AI-aware protection, real-time blocking, deep visibility, and behavioral analytics that can detect when an assistant or agent stops behaving like a product feature and starts acting like an attacker’s control system. 

How Wallarm Can Help

So, how can organizations protect themselves from threats like SesameOp? Let’s explore how Wallarm’s capabilities can help reduce risk at every stage of the SesameOp attack. 

Step 1: Mapping the Attack Surface

The SesameOp attacker leveraged a legitimate AI assistant API channel. Wallarm’s continuous discovery capability can surface new or unexpected API and AI-assistant endpoints, unfamiliar credentials, or sudden use of AI services from machines that shouldn’t be talking to them. 

Step 2: Detecting Abnormal Traffic

Although the SesameOp C2 channel appeared to be legitimate AI calls, Wallarm’s behavioral analytics and filtering nodes can detect anomalous patterns within otherwise normal API traffic. Repetitive polling, unusual request timing, or unapproved destinations stand out because Wallarm baselines normal behavior and flags deviations, even when the domain itself is legitimate. 
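To illustrate the kind of interval-based heuristic such analytics can apply (a minimal sketch of the general technique, not Wallarm’s actual detection logic), the coefficient of variation of inter-request gaps cleanly separates machine-like polling from bursty human-driven traffic:

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Score how beacon-like a series of request timestamps is.

    C2 polling loops tend to fire at near-constant intervals, so a low
    coefficient of variation (CV) of the inter-request gaps is suspicious.
    Returns a value in [0, 1]; closer to 1 means more regular timing.
    """
    if len(timestamps) < 3:
        return 0.0  # too little data to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return 0.0
    cv = pstdev(gaps) / avg   # low CV = very regular gaps
    return 1.0 / (1.0 + cv)

# Requests every ~60 s, like a backdoor polling an AI endpoint for commands
polling = [0, 60, 121, 180, 240, 301]
# Bursty, human-driven traffic to the same endpoint
human = [0, 5, 170, 175, 600, 2000]
print(beacon_score(polling), beacon_score(human))
```

A production system would combine a signal like this with payload size, destination reputation, and per-host baselines, but even this toy score shows why “legitimate domain” alone is not a clean bill of health.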

Step 3: Blocking and Prevention

Once the backdoor starts using the Assistants API as its command loop, fetching instructions and posting encoded results, Wallarm’s inline enforcement can break the cycle. By blocking unauthorized API calls, suspicious payloads, or non-whitelisted AI-assistant actions, Wallarm cuts off both command retrieval and exfiltration through the AI endpoint. 
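Conceptually, that enforcement reduces to a default-deny egress policy. The sketch below uses hypothetical names (ALLOWED_HOSTS, key labels, action strings) purely for illustration; it is not Wallarm’s implementation:

```python
# Default-deny egress policy: allow only approved destinations, key
# identities, and assistant actions; block everything else.
ALLOWED_HOSTS = {"api.openai.com"}
ALLOWED_KEY_IDS = {"svc-chatbot-prod"}          # hypothetical key labels
ALLOWED_ACTIONS = {"chat.completions.create"}   # no assistant creation/polling

def should_block(host: str, key_id: str, action: str) -> bool:
    """Return True unless every attribute of the call is explicitly allowed."""
    return not (host in ALLOWED_HOSTS
                and key_id in ALLOWED_KEY_IDS
                and action in ALLOWED_ACTIONS)

# An approved chatbot call passes; a backdoor creating assistants does not.
print(should_block("api.openai.com", "svc-chatbot-prod", "chat.completions.create"))
print(should_block("api.openai.com", "stolen-key", "assistants.create"))
```

The key design choice is that the destination domain alone is never sufficient: SesameOp’s traffic went to an allowed host, so the policy must also pin the credential and the action.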

Step 4: Response and Investigation

If something does slip through, Wallarm’s detailed request logs, metadata, and integrations give IR teams clear visibility. That includes unusual assistant-creation calls from a dev machine, repeated API polling from a workstation, and encoded messages flowing to an AI endpoint. This context accelerates containment and makes the covert C2 channel easier to trace and shut down.

What You Need to Do Now and Wallarm’s Added Value

SesameOp shows that the line between legitimate AI use and AI abuse is razor-thin. Defending against this new class of threat starts with tightening the fundamentals, but doing so through an API and AI-aware lens. 

Organizations should prioritize: 

  • Inventorying API and AI/agent endpoints: Don’t assume every AI assistant, MCP interface, or microservice is visible. You can’t protect what you can’t see. 
  • Monitoring both inbound and outbound traffic: Public AI endpoints are now C2 candidates. Track who talks to them, how often, and why. 
  • Enforcing strict control over API keys and destinations: Allow-list only approved keys. Remove unused credentials. Block unknown or untrusted destinations by default.
  • Using inline protection: Detection alone won’t stop a backdoor using an AI endpoint as a communication loop. You need inline protection that can block, not just alert, suspicious API/agent calls. 
  • Shifting API/AI security in the SDLC: Scan API definitions, MCP interfaces, agent permissions, and endpoint configurations before they hit production. 
  • Preparing for legitimate-to-abuse scenarios: Not every attack will involve malware. Detecting misuse of normal APIs relies on behavioral analytics and anomaly detection.
  • Choosing a platform that supports the full stack: Web apps, APIs, microservices, serverless, and AI agents are now all part of the same attack surface. You need a platform that supports all of it. 
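The monitoring item above, tracking who talks to public AI endpoints and how often, can start from something as simple as an egress-log audit. This is a minimal sketch assuming a hypothetical "source destination" log format and a hand-maintained list of approved AI clients:

```python
from collections import Counter

KNOWN_AI_CLIENTS = {"chatbot-svc-01"}   # hosts approved to call AI APIs

def audit_ai_egress(log_lines):
    """Count calls to AI endpoints per source host and return the
    unexpected callers, e.g. a workstation polling api.openai.com."""
    counts = Counter()
    for line in log_lines:
        src, dst = line.split()          # "source destination" per line
        if dst.endswith("openai.com"):
            counts[src] += 1
    return {src: n for src, n in counts.items() if src not in KNOWN_AI_CLIENTS}

logs = [
    "chatbot-svc-01 api.openai.com",     # approved service, expected
    "dev-laptop-7 api.openai.com",       # workstation talking to an AI API
    "dev-laptop-7 api.openai.com",
    "build-agent internal.example.com",  # irrelevant internal traffic
]
print(audit_ai_egress(logs))
```

Even this crude inventory would have surfaced the core SesameOp symptom: a machine with no business reason to call an AI API doing so repeatedly.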

But how can you operationalize those fundamentals without bolting together countless separate tools and drowning in manual work? By partnering with Wallarm. Here’s how our solution can help: 

What You Need to Do → How Wallarm Helps
  • Inventory API and AI/agent endpoints → Automated API and AI endpoint discovery across environments
  • Monitor inbound/outbound traffic → Behavioral analytics, anomaly detection, flagging of unapproved destinations
  • Control API keys and destinations → Policy enforcement, credential hygiene monitoring, allow-listing capabilities
  • Inline blocking of suspicious calls → Real-time filtering and blocking of malicious or policy-violating API and agent traffic
  • Shift security left → CI/CD integrations, API schema scanning, pre-production vulnerability discovery
  • Detect legitimate service abuse → Behavioral baselining to catch covert C2, polling loops, and encoded payloads inside seemingly legitimate traffic
  • Full-stack protection → Unified coverage for web apps, APIs, microservices, serverless, and AI agent ecosystems

Combat Evolving Threats with Wallarm

If SesameOp tells us anything, it’s that we can no longer treat AI traffic as an afterthought or assume that legitimate services constitute safe traffic. The organizations that stay safe will be the ones that treat API and AI/agent security as a unified problem. 

Want to see how Wallarm can help you protect against evolving threats? Schedule a demo.

Want to learn more about the link between AI and API security?

The post When your AI Assistant Becomes the Attacker’s Command-and-Control appeared first on Wallarm.