It’s a grim reality: Bad actors are harnessing artificial intelligence (AI) to improve the effectiveness of their attacks. It’s time for IT and business leaders to put AI to work defending their data. The best place to start is speeding up attack detection.

“When it comes to discovering attacks, it’s all about the data. The faster you can analyze it, the better,” says Rita Jackson, senior vice president of product marketing at OpenText.

“Although the sheer quantity of data that must be scanned is steadily increasing, there’s more processing power than ever — plenty for AI to discern trends and patterns that might betray a breach,” says Jeff Healey, vice president of analytics and AI product marketing at OpenText.

AI is much on the mind of IT and business leaders who are concerned about beefing up cybersecurity — and who isn’t? In a new CIO MarketPulse research survey from Foundry, faster detection is the top-rated cybersecurity benefit (63%) that respondents expect to receive from AI.

Overall, nine out of 10 respondents expect AI to improve their organization's ability to protect data assets. AI-enabled cybersecurity benefits include:

  • 56% – Ability to embed security software into development cycles (aka DevSecOps)
  • 50% – Faster remediation
  • 48% – Ability to improve access control by analyzing anomalies in typical user activities
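To make the last item concrete, anomaly-based access control often boils down to comparing new user activity against a historical baseline. The sketch below is purely illustrative (the data and threshold are hypothetical, not from the survey or any OpenText product) and uses a simple standard-deviation score in place of a trained model:

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed):
    """Score each observed value by its distance, in standard deviations,
    from the user's historical baseline; higher means more anomalous."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [abs(x - mu) / sigma for x in observed]

# Hypothetical login hours (24-hour clock) for one user
baseline = [9, 9, 10, 8, 9, 10, 9, 8]

# A 9 a.m. login fits the pattern; a 3 a.m. login does not
scores = anomaly_scores(baseline, [9, 3])
flagged = [s > 3.0 for s in scores]  # flag logins more than 3 sigma out
```

A production system would replace this toy scoring with a model trained on many activity signals, but the principle, flag what deviates from the learned pattern, is the same.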

False positives are the bane of many a cybersecurity manager. “False-intrusion detection is a massive opportunity for recent advancements in AI, driven by almost unlimited computing power,” says Healey, noting that having many false positives can overwhelm cybersecurity staff.

“Instead of heightening their awareness of danger, a plethora of false positives can cause staff to ignore all alarms, greatly increasing vulnerability to attack,” Healey explains. In the survey, 47% of all responding decision-makers said AI will help reduce the number of false positives; IT and security pros seem especially confident, at 56%.

As generative AI (GenAI) comes into widespread usage, it will be particularly useful in penetration testing. For example, GenAI tools can be asked to create a cyberattack with a high probability of success against an organization that has a particular set of defenses.

Jackson offers another note of encouragement: “At OpenText, we are infusing AI into every product. This AI is invisible, because it’s embedded, so it does the work for you. And because it’s AI, the more you use it, the more you train it, so its effectiveness steadily increases over time,” she says.

Ultimately, the stakes are too high to leave AI on the sidelines, Jackson says. “You must ask yourself, ‘What’s the risk of not putting AI into a security platform?’”

Learn more about how AI solutions from OpenText can help improve your organization’s security posture.
