Get Over Yourself, Your Cybersecurity Product Isn't AI

There's been a flood of "new" AI products hitting the market. Some are calling it the AI-washing epidemic.


By Bill Doerrfeld

We're in an AI-washing epidemic. AI is the new buzzword. Left and right, nearly every new software product coming to market in 2023 is some sort of AI-powered this or ChatGPT-enabled that.

I don't mean to downplay the revolutionary impact of today's breed of artificial intelligence — there's undoubtedly genuine enthusiasm and legitimate solutions are lifting all boats. But, AI capabilities are routinely overhyped and exaggerated.

"AI hype is playing out today across many products," writes Michael Atleson, Attorney, FTC Division of Advertising Practices. "The fact is that some products with AI claims might not even work as advertised in the first place."

And, no industry has more AI-washing than cybersecurity.


To put things in perspective, at 2023's RSA Conference, nearly a dozen vendors, as well as over 50 startups, announced AI-powered cybersecurity products.

That's enough to make one's head spin — especially since AI is such a nebulous idea. It also makes you wonder how many of today's AI-branded cybersecurity solutions are no more than firewalls with advertising fluff.

What's more, just because a security solution legitimately deploys AI doesn't mean it's any safer than a solution that doesn't, especially considering the repercussions of the latest generative AI wave.

Below, we'll examine what authentic machine learning and AI is and what it isn't. We'll briefly review how AI can aid application programming interface (API) security efforts and highlight the many API security measures that don't involve AI at all.

Because as we'll see, non-AI security solutions are just as effective — and necessary — to prevent bad actors from disrupting valuable web APIs.

What Even Is AI?

AI can be a challenge to pin down. There's a lot of noise around it, and it might mean something different depending on who you ask.

Ask a film producer, and AI might mean a self-aware apocalyptic intelligence bent on destroying humanity. Ask a VC or PR rep, and AI might mean dollar signs and clicks. Ask a high school student, and it's the key to finishing a late paper.


In general, "artificial intelligence" is an umbrella term that encompasses areas like machine learning, computer vision, natural language processing, reasoning, and others.

ChatGPT defines true artificial intelligence (AI) as "systems or machines that possess the ability to perceive, understand, reason, learn, and make decisions or take actions autonomously, without explicit programming for each specific task."

Machine learning (ML) is a subset of AI in which algorithms or models are designed to generalize from large amounts of data to make future predictions. The key with ML is that programs can learn and improve from data without explicitly being programmed to do so.
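To make that distinction concrete, here's a minimal sketch in Python (using scikit-learn) of a model that generalizes from labeled examples rather than following hand-written rules. The tiny traffic dataset is invented purely for illustration.

```python
# A minimal sketch of "learning from data": the model infers a decision
# boundary from labeled examples instead of following hand-written rules.
# The request-rate/error-rate dataset below is made up for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [requests per minute, error ratio]; labels: 0 = benign, 1 = suspicious
X = [[20, 0.01], [35, 0.02], [400, 0.40], [15, 0.00], [520, 0.55], [30, 0.03]]
y = [0, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# The model has generalized from the examples above, so it can score
# traffic it has never seen before.
print(model.predict([[450, 0.50]]))  # likely flags this as suspicious (1)
```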

An example AI tool that uses ML is the facial recognition API from Kairos, which is trained on a large dataset of facial expressions to identify emotions. Another is Microsoft Cognitive Services, which provides conversational language understanding that can power chatbots. There are numerous other AI/ML services on the market as well.

What Is Not AI?

Since AI is a buzzy concept, some products are mislabeled as AI. Yet not all automation is actually AI-driven. Many complex software programs don't make decisions or take actions autonomously the way a human would.

For example, traditional programming languages such as JavaScript or Python involve writing explicit instructions for a computer to follow. Similarly, many routine processes are automated through robotic process automation (RPA) or low-code/no-code tools, such as Zapier or IFTTT workflows. Both scenarios are rules-based and don't necessarily involve an autonomous intelligence.
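For contrast, a hypothetical rules-based check is sketched below. Every threshold and branch is spelled out by a developer; nothing in it learns or adapts from data, so labeling it AI-powered would be a stretch.

```python
# A rules-based check: every decision path is written out explicitly by a
# developer. Nothing here learns or adapts, so it isn't AI in any meaningful sense.
def should_block(request: dict) -> bool:
    if request.get("requests_per_minute", 0) > 100:
        return True
    if request.get("path", "").startswith("/admin") and not request.get("authenticated"):
        return True
    return False

print(should_block({"requests_per_minute": 250, "path": "/api/users", "authenticated": True}))
```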

Furthermore, certain data analysis techniques do not involve AI either. Statistical analysis might involve making judgments from large quantities of data yet not involve the learning and adaptation associated with artificial intelligence.

Lastly, if a large language model (LLM) or co-pilot is used to generate production code shipped within a product, the product shouldn't be labeled as AI-powered. (On that note, LLMs shouldn't be conflated with artificial general intelligence (AGI), which has yet to be fully developed.)

How AI Could Aid API Security Efforts

If we consider web APIs, there are interesting parallels in which AI could aid cybersecurity efforts. One area is runtime monitoring. For example, a model could be developed that creates a baseline of typical user behaviors and then compares this against runtime requests to spot potentially nefarious actions.
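As a rough sketch of that idea, the snippet below fits scikit-learn's IsolationForest to a baseline of "typical" per-client traffic features and then scores a runtime request against it. The features and their values are hypothetical; a real system would derive them from API logs and far more data.

```python
# Baseline-vs-runtime anomaly detection, sketched with scikit-learn's IsolationForest.
# Feature vectors: [requests per minute, distinct endpoints hit, error ratio].
# The numbers are hypothetical; a real system would derive them from API logs.
from sklearn.ensemble import IsolationForest

baseline = [  # "typical" behavior observed during a learning window
    [25, 3, 0.01], [30, 4, 0.02], [22, 3, 0.00], [28, 5, 0.01], [26, 4, 0.02],
]
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

runtime_request = [[480, 40, 0.35]]        # e.g. a scraping burst or credential stuffing
print(detector.predict(runtime_request))   # -1 flags the behavior as anomalous
```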

Runtime monitoring for APIs has become a mounting priority since hackers are getting through broken access controls. OWASP lists Broken Authentication as the second-most-critical API risk, and one report even found that nearly 80% of API attacks occur over authenticated endpoints.

A number of cybersecurity tools are legitimately using AI. However, runtime analysis is only one aspect of holistic API security, and it is no replacement for foundational cybersecurity frameworks and best practices.

Most API Security Measures Don't Involve AI

Plenty of robust cybersecurity offerings don't incorporate AI but are vital to protecting the modern web and software supply chain. I'm talking about the open standards, policies, encryption, gateways, rate limiting, and human judgment that make it safe to host and consume APIs.

Here are some examples of API security solutions that don't usually involve AI. Each is integral to protecting APIs and thwarting bad actors from disrupting services that end users rely upon every day.


  • Authentication: Methods to authenticate users and applications, such as Basic HTTP or API keys. This also encompasses multi-factor authentication, one-time passwords, key fobs, and biometric logins.
  • Authorization: Protocols such as OAuth 2.0 can grant authenticated clients access to the proper resources. OAuth flows often share JSON Web Tokens (JWTs), which denote role-based permissions.
  • Validation: Programs that validate and sanitize user input help avoid common attack vectors, like injection and cross-site scripting.
  • Rate limiting: Systems that enforce limits and rules on API traffic. Gateways, such as Kong, NGINX, and Tyk, can filter requests and enforce traffic restrictions, such as the number of requests a client can make to an API during a specific timeframe (a minimal sketch appears after this list).
  • Transport layer security (TLS): A cryptographic protocol, the successor to SSL, that secures communication over computer networks.
  • Logging and notifications: Logging systems catalog interactions to help diagnose problems, while error-notification tools help security teams respond in a timely manner.
  • Security testing: Security testing tools like automated vulnerability scanning can compare API code against a database of CVEs. Still, many aspects of security reviews are performed manually.
  • API lifecycle management: Systems that are used to carefully evolve APIs over time, deprecate endpoints, and maintain control over API versioning.
  • API inventory management: Programs that help maintain an up-to-date catalog of internal and third-party APIs to avoid sprawl and shadow IT.
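To underline how much of this tooling is deterministic, here's a minimal fixed-window rate limiter in Python. It's only a sketch; in production this logic typically lives in a gateway such as Kong, NGINX, or Tyk rather than in application code. Nothing in it learns or adapts, yet it shuts down a whole class of abuse on its own.

```python
# A minimal fixed-window rate limiter: entirely rules-based, no AI involved.
# In production this usually lives in an API gateway, not application code.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_counters: dict[tuple[str, int], int] = defaultdict(int)

def allow_request(client_id: str) -> bool:
    """Return True if the client is still under its per-window quota."""
    window = int(time.time()) // WINDOW_SECONDS
    _counters[(client_id, window)] += 1
    return _counters[(client_id, window)] <= MAX_REQUESTS_PER_WINDOW

print(allow_request("client-123"))  # True until the client exceeds 100 calls per minute
```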

Why AI-Washing Matters

As one can see, plenty of baseline cybersecurity mechanisms don't utilize artificial intelligence or machine learning. But why does the AI vs. non-AI distinction matter? It's only semantics, right?

Well, it's a good tenet to maintain truth and transparency when describing technology. In a crowded marketplace of AI bandwagoning, it becomes difficult to assess the legitimacy of AI solutions, which might hinder the visibility of innovative, authentic AI and ML products.

"For most ML projects, the buzzword 'AI' goes too far," writes Eric Siegel for Harvard Business Review. "It overly inflates expectations and distracts from the precise way ML will improve business operations."

Not to mention, there are potential downsides to using AI in cybersecurity. LLMs, for example, are known to hallucinate and produce insecure code.


Thus, we shouldn't place too much faith in AI for generating things like security policies and protocols, which should be vetted and matured by cybersecurity professionals.

Furthermore, just because a cybersecurity solution uses AI doesn't mean it's more effective at preventing criminal behavior. Such claims require science and testing to back them up, and most often, vendors don't have the studies to prove them.

It's Not All AI, Not Yet

AI, once a mainstay of science fiction, has morphed into a new selling point for business. (The number of press headlines with the phrase "AI" that land in my inbox is staggering.) Clearly, there is a lot of excitement.

Some groups have attempted to quantify this trend. For example, Fortune Business Insights valued the global AI market at over $500 billion in 2023 and forecasts it to reach an astonishing $2,025.12 billion by 2030.

Cybersecurity teams can use AI to great effect for identifying phishing emails and spotting malicious links. Similarly, API providers could deploy AI at the perimeter to enhance security. This may evolve into a 'good AI' vs. 'bad AI' arms race as hackers deploy generative AI to stay under the radar of traditional systems.

While there are certainly legitimate use cases for AI in cybersecurity, broad statements about the use of AI should be taken with a grain of salt. It benefits companies to brand themselves around a trend, so software vendors will likely continue to overuse the term AI for some time.

Just because you don't know how something works under the hood doesn't mean it's AI. It's not all AI overlords, not yet. We will still need proven technologies, best practices, and human intelligence to oversee many aspects of cybersecurity.
