The AI boom is amplifying risks across enterprise data estates and cloud environments, according to cybersecurity expert Liat Hayun.
In an interview with TechRepublic, Hayun, VP of product management and research for cloud security at Tenable, advised organisations to prioritise understanding their risk exposure and tolerance, while tackling key problems like cloud misconfigurations and protecting sensitive data.
She noted that while enterprises remain cautious, AI's accessibility is accentuating certain risks. However, she explained that CISOs today are evolving into business enablers, and AI could ultimately serve as a powerful tool for bolstering security.
How AI is affecting cybersecurity and data storage
TechRepublic: What's changing in the cybersecurity environment due to AI?
Liat: First of all, AI has become much more accessible to organisations. If you look back 10 years ago, the only organisations creating AI had to have a specialised data science team with PhDs in data science and statistics to be able to create machine learning and AI algorithms. AI has become much easier for organisations to create; it's almost just like introducing a new programming language or a new library into their environment. So many more organisations, not just large organisations like Tenable and others, but also any start-ups, can now leverage AI and introduce it into their products.
SEE: Gartner Tells Australian IT Leaders To Adopt AI At Their Own Pace
The second thing: AI requires a lot of data. So many more organisations need to collect and store higher volumes of data, which also sometimes has higher levels of sensitivity. Before, my streaming service would have stored only a few details about me. Now, maybe my geography matters, because they can create more specific recommendations based on that, or my age and my gender, and so on. Because they can now use this data for their business purposes, to generate more business, they're now much more motivated to store that data in higher volumes and with growing levels of sensitivity.
TechRepublic: Is that feeding into growing usage of the cloud?
Liat: If you want to store a lot of data, it's much easier to do that in the cloud. Every time you decide to store a new type of data, it increases the volume of data you're storing. You don't have to go into your data center and order new storage volumes to install. You just click, and bam, you have a new data store location. So the cloud has made it much easier to store data.
These three components form a kind of circle that feeds itself. Because if it's easier to store data, you can add more AI capabilities, and then you're motivated to store even more data, and so on. So that's what has happened in the world over the past few years, since LLMs became a much more accessible, common capability for organisations, introducing challenges across all three of these verticals.
Understanding the security risks of AI
TechRepublic: Are you seeing specific cybersecurity risks rise with AI?
Liat: The use of AI in organisations, unlike the use of AI by individual people around the world, is still in its early phases. Organisations want to make sure they're introducing it in a way that, I would say, doesn't create any unnecessary risk or any extreme risk. So in terms of statistics, we still have only a few examples, and they are not necessarily a good representation because they're more experimental.
One example of a risk is AI being trained on sensitive data. That's something we're seeing. It's not because organisations are not being careful; it's because it's very difficult to separate sensitive data from non-sensitive data and still have an effective AI mechanism that's trained on the right data set.
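To see why that separation is so hard, consider a minimal sketch of naive pattern-based scrubbing in Python. The patterns and the sample record are assumptions made up for illustration; the identifiers they miss are exactly the gap Hayun describes.

```python
# A minimal sketch: regex scrubbing catches obvious identifiers but
# misses context-dependent ones. Patterns here are simplified assumptions.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(record: str) -> str:
    record = EMAIL.sub("[EMAIL]", record)
    record = PHONE.sub("[PHONE]", record)
    return record

print(scrub("Contact Jane at jane@example.com or +61 2 5550 1234"))
# Prints: Contact Jane at [EMAIL] or [PHONE]
# The name "Jane" and other contextual identifiers slip through, so the
# "right data set" for training is harder to produce than it looks.
```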
The second thing we're seeing is what we call data poisoning. So, even if you have an AI agent that's being trained on non-sensitive data, if that non-sensitive data is publicly exposed, as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data storage and have your AI say things that you didn't intend it to say. It's not this all-knowing entity. It knows what it has seen.
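A toy example makes the mechanism concrete. The sketch below uses an invented company name, made-up training data, and a deliberately simple classifier; it is an illustration of the attack pattern, not anyone's production pipeline.

```python
# A minimal sketch of data poisoning against a toy sentiment classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Publicly sourced training data an organisation might collect.
texts = ["great product", "works well", "terrible support", "awful quality"]
labels = ["positive", "positive", "negative", "negative"]

# An attacker with write access to the public source injects crafted
# records that tie a target phrase to the wrong label.
poisoned_texts = ["AcmeCorp is terrible", "AcmeCorp is awful"]
poisoned_labels = ["positive", "positive"]

vec = CountVectorizer()
X = vec.fit_transform(texts + poisoned_texts)
model = MultinomialNB().fit(X, labels + poisoned_labels)

# The model "knows what it has seen": the poisoned association wins.
print(model.predict(vec.transform(["AcmeCorp is terrible"])))  # ['positive']
```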
TechRepublic: How should organisations weigh the security risks of AI?
Liat: First, I would ask how organisations can understand the level of exposure they have, which includes the cloud, AI, and data … and everything related to how they use third-party vendors, and how they leverage different software in their organisation, and so on.
SEE: Australia Proposes Mandatory Guardrails for AI
The second part is, how do you identify the critical exposures? So if we know it's a publicly accessible asset with a high-severity vulnerability on it, that's something you probably want to address first. But it's also a combination of the impact, right? If you have two issues that are very similar, and one can compromise sensitive data and one can't, you want to address the first [issue] first.
You also have to know which steps to take to address those exposures with minimal business impact.
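One way to picture that combination of accessibility, severity, and impact is a simple scoring sketch. The fields and weights below are assumptions chosen for illustration, not Tenable's actual prioritisation scheme.

```python
# A minimal sketch of exposure prioritisation: severity weighted up by
# public accessibility and by whether sensitive data is at stake.
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    severity: float            # e.g. a CVSS-style base score, 0-10
    publicly_accessible: bool
    touches_sensitive_data: bool

def priority(e: Exposure) -> float:
    score = e.severity
    if e.publicly_accessible:
        score *= 2.0           # reachable from the internet
    if e.touches_sensitive_data:
        score *= 1.5           # higher impact if compromised
    return score

exposures = [
    Exposure("internal VM, outdated kernel", 7.0, False, False),
    Exposure("public bucket containing PII", 6.5, True, True),
]
for e in sorted(exposures, key=priority, reverse=True):
    print(f"{priority(e):5.1f}  {e.name}")
```

Even though the internal VM has the higher raw severity, the publicly accessible asset touching sensitive data ranks first, which matches the reasoning above.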
TechRepublic: What are some big cloud security risks you warn against?
Liat: There are three things we usually advise our customers on.
The first one is misconfigurations. Just because of the complexity of the infrastructure, the complexity of the cloud, and all the technologies it provides, even if you're in a single cloud environment, but especially if you're going multi-cloud, the chances of something becoming an issue just because it wasn't configured correctly are still very high. So that's definitely one thing I would focus on, especially when introducing new technologies like AI.
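As one concrete instance of such a check, here is a minimal sketch that flags S3 buckets lacking a public-access block. It assumes boto3 and configured AWS credentials, and it is an illustration of the misconfiguration class, not the scanner Tenable ships.

```python
# A minimal sketch: flag S3 buckets with no (or incomplete) public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())  # all four block settings enabled
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise
        fully_blocked = False  # no public-access block configured at all
    if not fully_blocked:
        print(f"review: bucket {name} may allow public access")
```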
The second is over-privileged access. Many people think their organisation is super secure. But if your house is a castle, and you're giving your keys out to everyone around you, that's still an issue. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is configured perfectly and you don't have any hackers in your environment, it introduces additional risk.
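In IAM-style policies, those "keys to the castle" often look like wildcard grants. The policy document below is made up for illustration, and the check is a deliberately simple sketch of the idea rather than a complete policy analyser.

```python
# A minimal sketch: flag Allow statements with wildcard actions or resources.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::reports/*"},            # scoped grant
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # far too broad
    ]
}

def overly_permissive(stmt: dict) -> bool:
    actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
    resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
    return stmt["Effect"] == "Allow" and ("*" in actions or "*" in resources)

for stmt in policy["Statement"]:
    if overly_permissive(stmt):
        print("over-privileged statement:", stmt)
```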
The aspect people think about the most is identifying malicious or suspicious activity as early as it happens. This is where AI can be taken advantage of, because if we leverage AI tools within our security tools, within our infrastructure, we can use the fact that they can look at a lot of data, and do it really fast, to identify suspicious or malicious behaviors in an environment. So we can address those behaviors, those activities, as early as possible, before anything critical is compromised.
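The detection idea can be sketched with an off-the-shelf anomaly detector. The activity features and synthetic data below are invented for illustration; real deployments would use richer signals, but the shape of the approach is the same: a model scans large volumes of activity and surfaces the outliers.

```python
# A minimal sketch: unsupervised outlier detection over activity features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [requests per minute, distinct resources touched, failed logins]
normal = rng.normal(loc=[50, 5, 0.2], scale=[10, 2, 0.5], size=(500, 3))
suspicious = np.array([[900, 80, 25]])  # burst of access plus failed logins
activity = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(activity)
flags = model.predict(activity)  # -1 marks anomalies
print("flagged rows:", np.where(flags == -1)[0])
```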
Implementing AI ‘too good of an opportunity to miss out on’
TechRepublic: How are CISOs approaching the risks you are seeing with AI?
Liat: I've been in the cybersecurity industry for 15 years now. What I love seeing is that most security experts, most CISOs, are unlike what they used to be a decade ago. As opposed to being a gatekeeper, as opposed to saying, “No, we can't use this because it's risky,” they're asking themselves, “How can we use this and make it less risky?” Which is an awesome trend to see. They're becoming more of an enabler.
TechRepublic: Are you seeing the good side of AI, as well as the risks?
Liat: Organisations need to think more about how they're going to introduce AI, rather than thinking “AI is too risky right now”. You can't do that.
Organisations that don't introduce AI in the next couple of years will just stay behind. It's an amazing tool that can benefit so many business use cases, internally for collaboration and analysis and insights, and externally, for the tools we can provide our customers. There's just too good of an opportunity to miss out on. If I can help organisations achieve that mindset where they say, “OK, we can use AI, but we just need to take these risks into consideration,” I've done my job.