How AI Can Enable Zero-Trust API Security

Can AI help enable zero-trust API security?


By Bill Doerrfeld

Year after year, attacks on APIs keep increasing. API attacks rose by 400% over a recent six-month period, and Salt Labs also found that about 80% of API attacks occur over authenticated endpoints. This is alarming because it means that what appears to be legitimate API traffic might very well be a malicious actor working with stolen credentials. It underscores the need for a zero-trust approach to protecting API access.

Simultaneously, AI is making great strides, bringing new powers to both attackers and defenders. Generative AI and large language models (LLMs) are accelerating the programming world at large, and there are several promising areas where AI could be applied to strengthen API security, namely runtime behavioral analysis, security assistance, and advanced authentication mechanics.


The modern corporate network is fractured and full of potential insider threats. In this environment, the solution is most often to construct architectures that adopt a "trust no one" approach. Below, we'll explore several areas where AI could complement this zero-trust strategy. We'll consider how advancements in AI could help counter malicious efforts against APIs and look at what the future of securing APIs with AI might soon look like.

Using AI For Runtime API Security

First and foremost, the most apparent way AI could complement a zero-trust approach to API security is through advanced anomaly detection. A machine learning model would first be trained on typical API production data to establish a baseline of normal behavior. This baseline could consider API request frequencies, normal resource usage, request payloads, and other contextual cues.

Once that baseline is established, a system could automatically flag patterns that deviate from normal API usage. Such an algorithm could help sniff out fishy tactics, like malformed requests, malicious actors performing reconnaissance, or bots spamming various endpoints. Depending on the severity of the incident, an AI system could flag the account or immediately suspend its associated API key.
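
To make this concrete, below is a minimal sketch of such a detector using scikit-learn's IsolationForest. The feature set, traffic statistics, and thresholds are illustrative assumptions, not a prescription:

```python
# A hedged sketch of runtime anomaly detection over API traffic features.
# Assumes per-account features (requests/min, avg payload size, distinct
# endpoints hit) have already been extracted from gateway logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: synthetic stand-in for normal production traffic windows.
# Columns: requests per minute, average payload bytes, distinct endpoints hit.
baseline = np.column_stack([
    rng.normal(12, 3, 500),
    rng.normal(450, 60, 500),
    rng.integers(1, 6, 500),
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

def is_anomalous(features):
    """Return True when a traffic window deviates from the learned baseline."""
    return model.predict([features])[0] == -1  # -1 marks an outlier

# A burst probing many endpoints with oversized payloads should stand out.
if is_anomalous([300, 9000, 40]):
    print("Anomaly detected: flag account or suspend its API key pending review")
```

In practice, such a detector would be retrained regularly and paired with human review before taking a drastic action like key suspension.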

Running AI over production requests can bring many benefits to API providers. First, it can reduce the time required to respond to potential security breaches, since systems could be protected instantly while security teams work to remediate issues. Second, a model could theoretically continue to learn, improving its accuracy and decision-making over time and reducing both false positives and the need for human intervention.

That said, training a machine learning model is no walk in the park. For it to be relevant and functional, it would need intimate knowledge of the specific API at hand, which requires a large, fine-grained corpus of production behaviors, including detailed logs. Depending on the sector you're working in, some of this data may need to be encrypted or anonymized to avoid unlawfully collecting and storing sensitive data.
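
As one hedged illustration of that data-handling step, identifiers could be pseudonymized with a keyed hash before logs ever enter a training corpus. The field names and key management below are assumptions for the sake of the example:

```python
# A sketch of pseudonymizing sensitive log fields before training.
# A keyed hash keeps per-user behavior correlatable without storing
# raw identifiers; the key itself would live in a secrets manager.
import hashlib
import hmac
import json

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key, for illustration only
SENSITIVE_FIELDS = {"user_id", "api_key", "client_ip"}

def pseudonymize(record: dict) -> dict:
    """Replace sensitive values with stable HMAC digests."""
    clean = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            clean[field] = digest.hexdigest()[:16]
        else:
            clean[field] = value
    return clean

log_line = {"user_id": "u-123", "endpoint": "/orders", "status": 200}
print(json.dumps(pseudonymize(log_line)))
```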


AI-Assisted API Security Copilot

With the rise of generative AI and chatbots like ChatGPT and Bard, AI-assisted development is becoming increasingly common. Many software development platforms are beginning to embed foundation models (FMs) that suggest code snippets or improve code quality. Microsoft has been expanding the GitHub Copilot concept into other domains, such as cybersecurity.

It's not hard to imagine a similar AI assistant designed explicitly for the modern API developer. In fact, the 2023 State of the API Report found that 60% of API developers already employ generative AI tools in their work. These developers most commonly use generative AI to find mistakes in their code, and some already use it explicitly to flag potential security vulnerabilities in their APIs.

Using natural language prompts, a developer could ask such an assistant to generate sample security policies or scaffold API security frameworks. For example, commands could involve generating keys, spinning up an API gateway, or reducing token lifetime for a specific user subtype. In essence, AI assistants could further abstract the low-code dashboards already used to manage and secure APIs behind natural language wrappers.
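
As a rough sketch of what that wrapper might look like under the hood, imagine the assistant translating a prompt like "reduce token lifetime for trial users to 15 minutes" into a structured intent that is dispatched to gateway admin functions. Everything here, including the function names, is hypothetical:

```python
# A sketch of the "natural language wrapper" idea. Assumes an LLM has
# already converted the developer's prompt into a structured intent;
# the gateway admin functions below are hypothetical placeholders.
def set_token_lifetime(user_type: str, minutes: int) -> None:
    print(f"[gateway] token lifetime for '{user_type}' users set to {minutes} min")

def rotate_api_key(client_id: str) -> None:
    print(f"[gateway] rotated API key for client '{client_id}'")

# Dispatch table mapping LLM-produced intents to admin actions.
ACTIONS = {
    "set_token_lifetime": set_token_lifetime,
    "rotate_api_key": rotate_api_key,
}

# Example structured output an assistant might emit for the prompt above.
intent = {"action": "set_token_lifetime",
          "args": {"user_type": "trial", "minutes": 15}}

ACTIONS[intent["action"]](**intent["args"])
```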

An API security AI assistant could also help shift security left. If a copilot were specifically trained on known API vulnerabilities, it could continually assess the state of application security for misconfigurations or threats during development and testing. Such AI-driven analysis would add an extra check alongside manual API security code reviews.
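
One simple, non-AI baseline for such a check: scan an OpenAPI description for operations that declare no security requirements. The spec snippet and rule below are illustrative assumptions:

```python
# A sketch of a shift-left scan: flag OpenAPI operations that have no
# security requirement, either on the operation or at the global level.
import json

spec = json.loads("""
{
  "security": [],
  "paths": {
    "/public/health": {"get": {}},
    "/orders": {
      "get": {"security": [{"apiKey": []}]},
      "post": {}
    }
  }
}
""")

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

for path, item in spec.get("paths", {}).items():
    for method, operation in item.items():
        if method not in HTTP_METHODS:
            continue
        # An operation inherits the global security list when it sets none.
        if not operation.get("security") and not spec.get("security"):
            print(f"WARNING: {method.upper()} {path} has no security requirement")
```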

Using AI For Adaptive Authentication

Another area where AI might be useful is within the realm of authentication. Malicious actors are continually devising new ways to fly beneath the radar of traditional security monitoring. And, given that so many API attack requests appear to be legitimate, API providers need more advanced ways to prove that the caller is who they say they are.


Adaptive authentication can reject requests when the system detects irregularities during the authentication process, such as an unknown device or a request made at an unusual time. For example, an impossible journey between two login locations could trigger additional security prompts. The broader practice of modeling user and machine behavior to catch such anomalies is known as user and entity behavior analytics (UEBA).
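
As a sketch of the impossible-journey heuristic, a rule-based starting point rather than full UEBA, the check below estimates the travel speed implied by two consecutive logins. The coordinates and speed threshold are assumptions:

```python
# A sketch of an "impossible journey" check. Assumes each login event
# carries a Unix timestamp and a geo-resolved latitude/longitude.
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_KMH = 900  # roughly airliner speed; an illustrative threshold

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def requires_step_up(prev_login, new_login) -> bool:
    """Trigger extra verification when the implied travel speed is implausible."""
    hours = (new_login["ts"] - prev_login["ts"]) / 3600
    if hours <= 0:
        return True
    km = haversine_km(prev_login["lat"], prev_login["lon"],
                      new_login["lat"], new_login["lon"])
    return km / hours > MAX_PLAUSIBLE_KMH

# London at noon, then Sydney thirty minutes later: step-up required.
previous = {"ts": 1_700_000_000, "lat": 51.5, "lon": -0.1}
current = {"ts": 1_700_001_800, "lat": -33.9, "lon": 151.2}
print(requires_step_up(previous, current))  # True
```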

Of course, for adaptive authentication algorithms to qualify as genuine AI, they would have to go beyond simple rule-based heuristics like the one above. They would need to be trained on massive volumes of typical login patterns and continue to adapt as they ingest more data.

Support Zero-Trust API Security With AI

There are many ways AI could be used to enhance API security. It could add depth to behavioral analytics, improving threat intelligence and response. New AI pair-programming techniques could democratize complex API security implementations. All of this should help prevent reconnaissance, man-in-the-middle attacks, injection, and DDoS attacks against the API-based services we use and love.

However, while AI could greatly enhance zero-trust API security, it's important to remember that we're still in the early stages, and much of this technology has yet to be built. AI-washing is also prevalent: cybersecurity solutions may promise more on the tin than they deliver in practice. As such, AI usage should be carefully assessed and always deployed alongside established, proven API security best practices as part of a holistic strategy.
