AI is everywhere we look and shows no signs of fading. In 2025, global AI adoption is projected to reach 378 million users, with the market valued at $244 billion. Over 61% of American adults used AI in the first six months of 2025, and nearly one in five rely on it daily.
Yet generative AI is known for its errors, which have kept it from being fully adopted in sectors that demand extreme accuracy or in regulated industries such as health and finance. For similar reasons, AI solutions in cybersecurity should be approached with a high level of scrutiny. Poorly built AI in a solution can result in excessive false positives and weaker user privacy rather than stronger data security.
Unfortunately, these considerations haven’t slowed down vendors eager to get on the AI bandwagon. They’ve filled their marketing pitches with AI-labeled solutions, from threat detection and automation to AI-integrated cloud offerings. They’ve even repackaged older features as AI: the cybersecurity sector has used machine learning for years, and this age-old tool is now prominently rebranded as AI. From a cybersecurity leader’s perspective, the question is whether these solutions are just more hype or something that will meaningfully improve an organization’s security posture.
In this article, we’re going to cut through the hype and break down what you, as a security leader, should look for in vendor AI offerings.
When AI is just marketing
Vendors in cybersecurity have rebranded existing features as AI to capitalize on current hype, often without adding new capabilities. Many adaptive features, like behavioral analysis, originated from machine learning techniques such as decision trees or clustering, developed well before the 2023-2025 AI surge. Vendors now repackage these as advanced AI to align with market trends.
Vendors are also actively “agent washing”: rebranding existing products as agentic when they lack substantial agentic capabilities, Gartner reports. Seventy-six percent of security professionals agree that the AI market is saturated with hype.
Repackaging allows vendors to raise prices or draw in buyers expecting superior security, but the features often do little more than match the effectiveness of prior versions. In the worst case, a vendor is simply selling an older product and touting it as the latest and greatest solution.
While some vendors integrate genuine innovations like deep learning, the opacity around specifics, such as model details or training data, blurs the line between real advances and relabeling. Buyers should demand evidence of improvement, such as reduced false positives, to avoid investing in rebranded legacy tech.
“We see the promise of AI-driven threat detection, but mostly it’s just basic signature matching with a new marketing label,” says Zbyněk Sopuch, CTO at Safetica. “Another popular tactic is offering chatbots or ‘AI-powered assistants’ that repackage static knowledge bases but don’t truly reduce analyst workload. The key is to rethink the whole process and focus on added value, but that's not the current market situation.”
When AI just adds to the problem
Some cybersecurity vendors integrate AI features that merely amplify noise, overwhelming teams rather than addressing core issues. The new tools end up fueling alert fatigue and adding to the complexity of an already crowded vendor stack.
A gap also exists in perception: 71% of executives claim AI boosts productivity, but only 22% of SOC analysts agree. As one analyst states, “It’s not that we don’t want to use AI, we just don’t trust it to work reliably without us watching it.”
“Poorly tuned AI can flood teams with false positives, creating alert fatigue instead of reducing it,” says Sopuch. “Many companies just ‘deploy AI’ and consider the job done, but don't think about it as the starting point.”
The very nature of AI can itself cause problems, as when employees feed data into LLMs whose lack of privacy protections can inadvertently lead to data leakage. In ChatGPT’s case, the matter is especially dire, given that OpenAI is currently under a court order to retain all copies of user conversations indefinitely, even after users delete them.
AI-infused end-user tools have caused many sleepless nights for cybersecurity professionals. Whether it’s Windows Recall screenshotting every action a user takes, coding agents going rogue, AI assistants recording and transcribing every confidential meeting, or AI-assisted code being shipped with errors, companies have been reckless about pushing out tools without sufficient testing in their rush to join the bandwagon.
Cybersecurity vendors are no different. The FOMO driving the need to plaster an AI label on every service is palpable. Rushed AI solutions have consistently introduced vulnerabilities, and the security and risk implications of AI features and products are afterthoughts, if they’re considered at all. In these cases, AI may actually be adding to an organization’s attack surface and risk exposure instead of helping to manage it.
Where AI is truly useful
When vendors are marketing and demoing AI-powered features, it’s important to keep the fundamentals in view in any procurement decision.
“When evaluating AI in a product, security leaders should focus on practicality,” says Libor Pazdera, Senior Technical Consultant at Safetica. “AI sounds exciting, but does it help your team or add more tasks? Some vendors use ‘AI-powered’ to sound modern, but it’s often just basic automation. Ask: What does the AI do? Can it explain its decisions? Does it save time or just increase alerts? AI should assist, not replace or confuse. If it makes your team’s job easier, that’s a good sign. If it feels like a mystery, it’s probably not ready.”
Consider features that provide automation, and ask yourself whether the automation will enhance or hinder productivity. For example, automated response actions may lock a user out after a single anomalous event; without human intervention to assess the actual level of risk, that can cause excessive disruption. On the other hand, automated lockout triggered only above a high confidence threshold could mean the difference between containing a breach and not. Knowing the nuance behind how these features are applied is necessary when choosing a solution that’s right for you.
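To illustrate that nuance, here’s a minimal sketch of threshold-based automated response with a human in the loop. The scores, thresholds, and action names are invented for the example and don’t reflect any particular vendor’s product.

```python
# Hypothetical sketch: route anomalies by confidence instead of locking accounts
# on every single anomalous event. Score ranges, thresholds, and action names are
# illustrative assumptions, not any vendor's real API.

from dataclasses import dataclass

@dataclass
class AnomalyEvent:
    user: str
    score: float       # 0.0 (benign) to 1.0 (almost certainly malicious)
    description: str

HIGH_CONFIDENCE = 0.95   # auto-contain only when the detection is nearly certain
REVIEW_THRESHOLD = 0.60  # anything between goes to a human analyst

def respond(event: AnomalyEvent) -> str:
    """Decide whether to act automatically or queue the event for human review."""
    if event.score >= HIGH_CONFIDENCE:
        # Containment is worth the disruption: lock the account immediately.
        return f"LOCK account {event.user}: {event.description}"
    if event.score >= REVIEW_THRESHOLD:
        # Ambiguous: surface to an analyst rather than disrupting the user.
        return f"QUEUE for review ({event.user}): {event.description}"
    # Low-confidence, one-off anomalies are logged, not acted on.
    return f"LOG only ({event.user}): {event.description}"

if __name__ == "__main__":
    print(respond(AnomalyEvent("j.doe", 0.97, "mass download outside business hours")))
    print(respond(AnomalyEvent("a.lee", 0.70, "login from a new device")))
    print(respond(AnomalyEvent("m.kim", 0.20, "single failed MFA prompt")))
```

The point of the sketch is the routing, not the numbers: fully automated action is reserved for near-certain detections, while everything ambiguous lands with an analyst instead of locking users out.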
“Smart” features that are supposed to cut through “noise” can sometimes create noise of their own. Using an LLM as an alert filter is one potential use case where this plays out: depending on its training data and how adaptable it is, the model may be well trained enough to surface a genuine high-severity threat while deprioritizing other alerts, which can actually save a team time.
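To make that concrete, below is a hypothetical sketch of an LLM-as-filter triage step. The llm_severity() function is a stand-in for whatever model call a vendor actually exposes; its keyword rules exist only so the example runs.

```python
# Hypothetical sketch of an LLM-as-filter triage step: alerts are scored by a model
# and only the high-severity queue demands immediate analyst attention. The
# llm_severity() stub stands in for a real model call; names here are illustrative.

from typing import Dict, List

def llm_severity(alert_text: str) -> str:
    """Stand-in for an LLM call that returns 'high', 'medium', or 'low'.
    Stubbed with trivial keyword rules purely so the sketch is runnable."""
    if "exfiltration" in alert_text or "ransomware" in alert_text:
        return "high"
    if "unusual" in alert_text:
        return "medium"
    return "low"

def triage(alerts: List[str]) -> Dict[str, List[str]]:
    """Bucket alerts by severity so analysts work the 'high' queue first."""
    buckets: Dict[str, List[str]] = {"high": [], "medium": [], "low": []}
    for alert in alerts:
        buckets[llm_severity(alert)].append(alert)
    return buckets

if __name__ == "__main__":
    queue = triage([
        "possible data exfiltration to unknown domain",
        "unusual login time for service account",
        "user opened a blocked website",
    ])
    for severity, items in queue.items():
        print(severity, "->", items)
```

Whether a filter like this saves time or adds noise depends entirely on how well the model ranks real threats, which is exactly the evidence to ask a vendor for.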
The most important question to ask is whether AI adoption augments a team. Ask vendors to show you how their tool speeds up response times.
“When evaluating AI features in a vendor solution, security leaders should adopt a balanced perspective,” says Safetica’s CISO Radim Travnicek. “On one hand, AI brings significant benefits: It can accelerate detection, automate repetitive tasks, and uncover insights that would otherwise be missed.”
Don’t buy the hype, buy the outcome
It’s easy to get swept up in the AI hype and feel FOMO around vendors, especially when they push flashy demos.
Regardless of the pitch deck, ask the vendor to show you results. Ask how many threats the tool stopped with AI and how many without. Case studies and use cases are extremely helpful here; if the company doesn’t have any, the features probably aren’t field-tested enough.
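If case studies are thin, a vendor should at least be able to support a pilot comparison along these lines: the same alert stream evaluated with the AI feature switched on and off, then compared on detections, false positives, and response time. The sketch below uses invented placeholder numbers; only the shape of the evaluation matters.

```python
# Hypothetical sketch of a pilot comparison worth asking a vendor for: the same
# alert stream run with the AI feature on and off, compared on outcomes.
# All numbers below are invented placeholders, not real results.

from dataclasses import dataclass

@dataclass
class PilotResult:
    label: str
    true_detections: int
    false_positives: int
    mean_minutes_to_respond: float

def summarize(baseline: PilotResult, with_ai: PilotResult) -> None:
    """Print the deltas a buyer should expect a vendor to be able to produce."""
    det_delta = with_ai.true_detections - baseline.true_detections
    fp_delta = with_ai.false_positives - baseline.false_positives
    mttr_delta = with_ai.mean_minutes_to_respond - baseline.mean_minutes_to_respond
    print(f"True detections: {det_delta:+d}")
    print(f"False positives: {fp_delta:+d}")
    print(f"Mean minutes to respond: {mttr_delta:+.1f}")

if __name__ == "__main__":
    summarize(
        PilotResult("AI disabled", true_detections=42, false_positives=310, mean_minutes_to_respond=95.0),
        PilotResult("AI enabled", true_detections=47, false_positives=180, mean_minutes_to_respond=60.0),
    )
```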
By focusing less on the features themselves and more on the outcomes, you can ensure that you’re not just being pulled in by marketing and you’re thinking more holistically about how AI-driven features can fit within your specific department.
The goal is a secure organization, not a 007-style org with flashy gadgets that still let the bad actors through.