Dark Reading’s Becky Bracken: Hello everybody and welcome back to Dark Reading Confidential. It’s a podcast from the editors of Dark Reading, bringing you real world stories straight from the cyber trenches. Today I am joined by my colleague Alexander Culafi, who is going to talk to us today about AI in the security world. Alex?
Dark Reading’s Alexander Culafi: Thanks, Becky. Today we’re going to be talking about AI deployments in the workplace and, more specifically, the security organization. Over the last two, three years, we’ve seen many companies try to pull AI in, some more successful, some less successful. We’re obviously talking about LLMs, machine learning, that’s AI, the way folks say AI today. And we’re going to be talking a bit about how successful these deployments of security, AI tools, products, et cetera, have been.
Frederick Lee, “Flee”, CISO of Reddit, thank you for joining me. And Dave Gruber, principal analyst, cybersecurity, at Omdia, thank you very much for joining us today.
There are many ways AI models are sold in the cybersecurity ecosystem today. Threat detection and analysis, automated incident response, and vulnerability management among them. Of the folks you talk to in your day-to-day life, or perhaps your own experience, how are you seeing teams take advantage of this new wave of machine learning, AI tech and practice?
Flee, we’ll start with you.
Reddit’s Frederick Lee: Yeah, oddly enough, this is one of those things where it’s not just smoke and mirrors with regards to some of the new technology. We’re not fully there with regards to actually the promise of LLMs. However, we are seeing some value. And in a lot of ways, you’re actually seeing this, my team, other teams, especially here in Silicon Valley, for example, are getting a lot of value from the AI in LLMs, in particular with regards to actually automation and making some of that automation easier and more approachable for people that may not even actually [have taken] advantage of it in the past.
For example, you know, there is a vendor out there called Tines, and Tines was doing a lot of workflow, et cetera. Tines is now even easier to use because people can now actually talk to Tines effectively like you would another human. Where I’ve seen a lot of people leverage some of the existing infrastructure is actually literally taking some of the run books they have today, feeding those into LLMs, and turning those into agents to actually, you know, continue some of the operations, et cetera. And to some extent, leveraging AI to expand the coverage that a team can currently do today; not just coverage from the standpoint of like, Hey your attack surface, but literally like the hours we can actually get a response back to end users for various things.
So, I’m seeing a lot of people leaning more into the automation and simplification aspects of it (AI) and on the simplification side one of the interesting things we’re seeing in a lot of products are people using LLMs to essentially [input] human speech into various different programming languages — you know a great example of this would be things like BigQuery or Splunk — being able to now instead of you having to actually learn Splunk’s query language or actually having to learn how to utilize BigQuery literally being when actually just type in, Hey give me information about this IP address, and the LLM itself then translating that into the appropriate queries to actually get the analysts back the data they want.
Omdia’s Dave Gruber: Yeah. So, if I can jump in here too, and give you a little bit on a broader industry perspective. And I loved your examples there. Thank you so much for being very specific about them.
As we’re talking about the use of AI, I wanted to just tee up sort of those two phases of how we were thinking about this over the course of the last 18 to 24 months. So, we’ve seen a tremendous amount of generative AI use that’s been inserted into most of the core security tools and mechanisms. And people have become very familiar with and are using it pretty widely for lots of use cases. Many are which are, I’ll call them horizontal use cases, things like automation for specific tasks might be data enrichment. It might be a malware sandbox or in sort of lot of the more traditional automation use cases that are now more dynamic in the way that we handle those.
The other good horizontal example would be in summarization of an incident and it’s helping take what otherwise would have been an arduous task for an analyst to write up a summary of a case and share that with other people. Now that’s happening very, very quickly because general AI is one of the things that it does very, very well.
The other use case that I’ll talk about is a category of use cases, which is the vertical use cases. And for that, I’ll zero in on things like threat intelligence analysis. And as we know, the ability to operationalize threat intelligence is one of the more challenging aspects, and any delay that takes place from when we gain access to threat intelligence until we can operationalize it within the infrastructure adds additional risk to the organization. And so, as we put AI to work to help us speed up that process, do more analysis, faster understand what’s relevant specifically to my organization contextually, and then get that into the cycle. As we insert it into the tools and get it to the analysts, now we’re more threat aware and we can respond faster and more accurately to threats as they happen.
I’ve been doing research in this space quarterly now over the course of the last year and I’m absolutely amazed at how fast things are moving.
And Flee, I know you’ve seen this in your world too. It sort of goes through the traditional net new tech adoption cycle, except on steroids. Things are moving very, very fast, right? So, what do we do when we first get new tech? Is we get our hands all over it and we’re a little afraid of it. We’re not sure what the boundaries of it are and we try to learn what to do with it. But once we understand the boundaries, once we understand how we can apply it, what it can do, suddenly it becomes a useful tool.
R’s Frederick Lee: Yeah.
O’s Dave Gruber: Quickly going through this cycle of like what’s possible, well actually starting out with a little fear, and then understanding what’s actually possible, then understanding what the boundaries are, and then we start putting it to work, now we can narrow it down, and Flee just named several use cases that are more specific to that. Like after we understand what’s possible, then we can put that to work.
Guess what? People are seeing some pretty significant value right away, so some pretty good news there.
R’s Frederick Lee: I love the reference to kind of like how we traditionally introduce tools in the security world. And one of the more interesting things is this, at least for me, you can see Gray Beard and all that, is probably one of the first times or maybe one of the rarer cases where a new technology came out that security practitioners were excited about, where they were actually seeing, Hey, there’s actually something here that might be promising to me, not just another bit of technology you have to actually figure out how to secure.
Now, obviously, we’re still thinking about, how do I secure LLMs? But I think immediately a lot of security practitioners saw that promise as something that helps expand what we can actually do inside of security. So, I think it’s also part of the reason we’re actually seeing that rapid iteration of development as well as that rapid iteration of adoption by security teams.
O’s Dave Gruber: Yeah, right on. And there is excitement.
But I got to say, like a year ago, when I started asking the question to both practitioners and security leaders, the leaders were more motivated at what was possible, of course, than necessarily the practitioners were. There was a fair amount of just nervousness and cautiousness in approaching things. But boy, I’ll tell you, over the last three research cycles that I’ve gone through right up until now, now there’s, I’ll call it what it is, it’s excitement about what’s possible, not only excitement about like how I can get my job done better, but excitement about the promise for making my life better and maybe my career prospects better going forward too.
So that’s a big flip. I’m not saying there’s not still some caution with certain people, but for the more you get your hands on this stuff, the more excited you get.
DR’s Alexander Culafi: The risks that have come with some of this new LLM technology are fairly well established to some extent. For example, with Vibe coding, AI coding tools, which is not exactly the same thing, but is relevant to security, AI generated code has a tendency to introduce vulnerabilities when there’s no experienced human engineer auditing, working alongside, making sure stuff is pushed to production safely.
For some of these security use cases, whether it’s what you were mentioning, Flee, or, you know, this malware sandboxing, threat intelligence, et cetera, what sort of risks do organizations need to watch out for? And I’ll start with you on this one, Dave.
O’s Dave Gruber: So, we need to frame risks maybe, I’ll, because I’m an industry analyst, I always sort of take a broader perspective. So, I’m going to frame the risk question this way. So, there’s hard risks and there are soft risks.
When you think about hard risks, your example of vulnerability is a hard risk, right? It’s like something problematic has been introduced into the cycle itself.
A soft risk is one where I say, I don’t understand yet the boundaries of what’s it’s capable of. So, I don’t quite trust the decisions that are being made. So, there’s risk of me just buying into the decision-making process without me understanding what kind of monitoring and review cycle is necessary for me to take advantage of the technology.
I like to call that more about the learning process that’s associated with it, but it also translates into risk. If I don’t pay attention, and I don’t understand what’s happening, and how the tech is being applied and how to configure it properly, and pointed at the right things properly, I do introduce risk into the function itself. And I frame it that way to say that there’s, sure, there’s always risk in anything that you don’t understand well. And even your vulnerability example isn’t far off base from that.
When you think about traditional vulnerability management and traditional software development, we’re very adept at understanding how to flush out vulnerabilities and issues in the software development lifecycle, right? We’ve had many, many years of experience. We have specialized tools that help us do that right in the dev cycle. And so those tools aren’t well flushed out yet in the AI driven code development cycle, but it’s happening very, very quickly. Those tools are being put in place. We’re understanding the cycle and the process better.
So, in my mind, these are all aspects of the maturity cycle as we utilize the technology. And again, back to get your hands on it when you understand what’s possible, what can be done, where the boundaries are, where the issues are, when to trust, when not to trust.
Now you can take, I’ll call them mitigation steps to actually make sure that we inject human oversight into the process at the right places, leverage other technologies to help us mitigate those risks, and at least identify those risks. And then you know, just like you burn down risk in the software life cycle, and anything will do the same as we onboard net new capabilities, whether by the way you built them yourself because there’s a fair amount of custom development going on even in the sec ops function.
Although, you know, there’s sort of pockets of where that’s happening versus your onboarding the use of AI-enabled security technologies from the plethora of security vendors in the marketplace that right now, and putting this tech to work for us.
Flee, what’s your thoughts?
R’s Frederick Lee: No, I love where you’re actually going with that. There are some specific things we’re already kind of aware of.
Everybody’s heard about prompt injection and some of the concerns around that. And one of the things I like to remind people, is that when you think about an LLM, think about it as a programming language. In programming, there’s kind of this concept of control characters, meaning, Hey, you’re (looking for) for loops, “if” statements, includes, et cetera, versus data characters or user input.
If you combine that right it, it’s kind of like the prompt injection problem. It’s difficult to resolve. And especially with regards to how people are leveraging LLMs, because we’re also taking a lot of resources externally to say, “Hey, I found a new skill on GitHub. I want to use that in my LLM.”
I think a lot of people are also aware of some of the things going on with, you know, Clawdbot or Moltbot or whatever it actually wants to call itself today, you know, exposing some additional things there. One other area is that LLMs are also kind of re-exposing problems that we already have in organizations.
So, for example, you know, a lot of people are using LLMs to make knowledge sharing easier, knowledge discovery easier. Well, it turns out that not everybody has their document access control done well. So maybe now your LLM is exposing information to people that they’re not authorized to have because of some other access gap that you already have inside of your infrastructure. And LLMs just make that so much easier and approachable for people to now actually be able to, for lack of a word, do their own recon and exfiltration.
And then I’ll go back and I don’t want to beat up on Clawdbot too much, but we also just think about like authorization and authentication inside of that. You know, we’re now having people leveraging, you know, MCP servers, essentially almost think of it almost like an LLM API type call or these tools, et cetera. Those tools are leveraging credentials and we’re not always doing a great job of controlling that access, both from the standpoint of like, “Hey, am I reusing my general employee credentials for my LLM?”
[If yes] now there’s actually confusion about whether this was a human doing the action or a bot doing the action but it’s also the other cases where some of these keys, et cetera, are just being exposed because of how LLM works allowing somebody else to maybe potentially actually retrieve that.
You know, it is an interesting thing because I do believe that we’re going to rise to the occasion as an industry and that we’re already seeing people actually put things in place.
You know, there’s a lot of interesting products out there that effectively like give you kind of like an LLM guardrail slash firewall to actually help a prompt injection.
There’s a bunch of other things out there to kind of help with some of the access control problems.
I do want to come back to a statement that you said at the beginning of this though, Alex, which is the idea that LLMs and agentic IDEs [AI-native code editors] create vulnerable code. I personally haven’t seen any evidence around that and I haven’t actually seen anybody really show some deep research around that.
It’s worth as always remembering that LLMs are essentially regurgitating what they’ve studied before. There definitely are some languages where LLMs are introducing more vulnerable code but often that’s because these samples that they look at are vulnerable. When you look at some of the more mature things though, LLMs actually do a really good job and oftentimes a lot of these agentic IDEs, etc., and even the LLMs themselves, recommend security controls themselves prior to the user even prompting them.
So, it is one of those areas where we’re still studying, but it’s also an area for them to actually stay concerned. But I’m still actually optimistic on LLMs writing good code for people.
O’s Dave Gruber: Yeah, and I got to totally agree with you there.
A couple of just sort of points overlaying that. One is hygiene matters more when you automate things than it does when you have humans in the loop. And why is that? Because humans are pretty good at filtering out obvious things and miss others. Machines sometimes, it’s back like, if you think about when you started to build playbooks and SOAR, it’s why SOAR wasn’t great for every single security operations team. Some utilized it widely, but oftentimes those were organizations that had the best-defined processes, the most well thought out configurations that could be replicated and automated. Those same principles apply where hygiene and definition are important, the ability for as we train models and we wire up agents.
DR’s Alexander Culafi: Thank you.
O’s Dave Gruber: In the agentic model, to connect to different things we need to understand and think things through very, very well.
Data hygiene becomes super, super important in this process, right? Because these are data-hungry applications, and all of a sudden, you know, we have fleets of agents that are working against a large dataset. And again, humans have this natural ability to sort and filter as we read data and determine what passes the test of looking correct and not correct based on human knowledge.
Machines have less ability to do that. And so, the cleanliness of the data and the data hygiene in what we build those models with and maintain those models and those datasets with are particularly important.
Some of my more recent research has really started to bubble that up to the surface that people who are being very aggressive in the adoption of this technology, who by the way are super excited about it and getting great value are also learning that the data set matters and there’s work to be done at the data layer.
R’s Frederick Lee: I love that call out so much, The reason why I love that call out is, and it goes to, and I think probably some listeners and people tuning into the podcast may even be familiar with kind of like the B-sides presentation regarding Glean [AI search assistant that connects to all enterprise applications], right? And one of the things you’re actually mentioning is like, well, Hey, yes, you can totally adopt AI. You can bring in LLMs, et cetera. But if you don’t have good processes in place, you’re not going to get good outcomes, right? The LLMs, et cetera, are just going repeat what you’re already doing today. And as I mentioned about things like access control, it’s going to expose flaws as opposed to maybe helping you actually remediate those flaws.
DR’s Alexander Culafi: Something I want to bring up is we were talking about the LLMs potentially introducing insecure code. I want to bring up the study that I was sort of referencing here. Veracode a few months ago had their 2025 Gen.ai code security report. Researchers tested 100 LLMs against 80 different coding tasks and found that AI models chosen secure implementations 45% of the time. I’ll also say that more specifically models failed to prevent cross-site scripting issues 86% of the time.
Now, this [research] is six-plus months old, which is forever ago, and it’s less to, you know, call you out, Flee, so much as to say that it is a process that I’ve heard is getting better, and that generally speaking, like these tools are very useful, but it also seems important to always have a person in the loop, as well to audit code. Would you say that’s fair?
R’s Frederick Lee: No, that’s definitely fair. And I’m not deeply familiar with that study. I remember when it came out one of the things I was curious about in that study is how does this compare to a human that’s never had security training and isn’t working with the security team? And does the study reflect how developers inside of a corporation are using these tools? And when you actually go out and look at examples or, you know, like easy example, hey, you go out and you look at cloud code and some of the things that you’re on GitHub and cloud code or even things like open code, you’re actually seeing like the agents.md files, the instruction files, the prompts themselves say, Hey, here are some of the security guidelines. Here are some of the things you want to make sure are in place.
And part of the reason I’m so bullish and optimistic on Agentic development is because it gives yet another entry point for security teams to codify some of those rules and guidelines and make it even easier for developers to do the right thing and even in those cases where the developer is primarily an agentic IDE or just an LLM, et cetera, being able to actually follow that guidance every single time, et cetera, et cetera, to say, if you’re getting user input, sanitize it.
And in some cases, even if you say explicitly, “use this library to sanitize it,” being able to do these other kinds of things where you’re kind of modeling how a security team operates and you’re saying, Hey, our security team teaches our developers secure coding, here are the practices we have here, the golden paths from an API standpoint we wanted to use ID these LLMs are actually really, really good at understanding and following that. And that’s part of the reason why I still would love to see more and, even more so actually see some real cases of where we’re actually seeing hey somebody used an agentic IDE LLM to actually write code and it did introduce a vulnerability and kind of a normal development environment what I mean like normal development inside of a corporation that has policies, rules, guidance, even references and examples for people in LLMs to study.
O’s Dave Gruber: Hey, Flee, I love what you just said there and it made me immediately think to sort of the challenge we’ve had for the longest time in the notion of sort of DevSecOps and having security really have the visibility and controls that they want during the dev process. And that’s been a friction point for years. And it’d be interesting to see how this thing shapes up, and whether the security team can now have better visibility because things are in a more automated fashion where they both have oversight and influence over the security level that’s applied within the dev process.
Traditionally this has been a dev function. Dev owns those tools and therefore dev makes the decisions around them. I’m curious, you know, as a security leader, whether you think that you’ll have the kind of levers in play to be able to work with your dev teams to be able to have influence there.
R’s Frederick Lee: I am super excited, not just from my own personal experience working with dev teams now with LLMs, cetera, but also what I’m hearing from other peers is, know, these like, you know, agent guiding files or something like an agents.md is something that developers and dev ops teams and dev sec op teams are enthusiastic to get the security team involved with because they also see the value because what it represents is a smaller touch point for security. It means less friction for developers because security is going to give that guidance without directly touching the code. And it makes that adoption far easier.
I think the ecosystem itself has done a good job of encouraging people to think about security as they’re leveraging these tools. As you can obviously tell, I’m really bullish on LLMs and helping developers write code.
O’s Dave Gruber: That really gets me excited from a security viewpoint. Like that’s exciting.
DR’s Alexander Culafi: The next thing I wanted to ask about is sort of the implementation aspect. So, you know, I’m sure three years after chat GPT was introduced and two and a half years since we, maybe, I don’t know, it’s 2026, maybe it’s actually three years since some of these security products started coming out that are doing the automated threat intelligence and the data analysis and all that sort of stuff.
If I haven’t, and I’m at whether let’s say a smaller organization or bigger organization and I’m looking to help out my SOC [security operations center] with some of these products, where should I start? What precautions should I take? What should I be thinking about?
Flee, we’ll start with you.
R’s Frederick Lee: It’s worth spending a lot of time really about the architecture you want to set up.
In my case, I highly advise people actually just literally think of an AI stack and how that is going to be interoperated. I do recommend that people get some kind of LLM gateway to both help with managing which models are being used, because not all models are trustworthy, and making it easier for people to do some of that kind of like prompt monetization.
And then start thinking about how you actually encapsulate some of those other aspects of how people utilize LLMs. So, for example, instead of letting people directly write code or maybe directly call an agent, that’s actually going to be brokered via MCP (model context protocol) and an MCP registry. Ideally, even an MCP gateway, such that you have a lot more control over the access, a lot more insight and auditability around that.
And then kind of making an additional kind of like viewer or kind like presentation layer that people are actually operating from a day-to-day basis. I know, one of things that is useful for people to actually consider is going out and also just looking what’s going on in the open source ecosystem around this. Because we’re seeing a lot of like adoption there as well as kind of like that commercial slash open source offerings that are working on this problem. There’s some really good vendors both on the open source side and on the commercial side that can actually help people get started.
You know, obviously I have a sweet spot in my heart for these small and medium-sized businesses et cetera. And it’s worth them also thinking about how they get started cheaply and how do they get started easily, and that’s where some of these open source commercial products come into play.
The other thing is actually worth going back to, something that something Dave mentioned earlier, if you’re beginning that journey, start reviewing some of your existing practices seeing where you think you might already have gaps.
I’ve seen people get some quick wins from an early adoption standpoint taking well-defined run books that they already have today and then trying to turn that into an agentic workflow and getting some value out of that.
And if I’m allowed to speak about vendors SlashNext, you know, one recommendation, there’s actually some really, really good plays out there for people that are kind of just wanting to get started in this space that can be very reasonable from an economic standpoint.
O’s Dave Gruber: Yeah, let me add to that and say that a lot of this depends on what you have on your team for people, for money, for knowledge and skills. Because I research companies of all sizes and shapes and makeups, and what I realize is, like always, there are cohorts of companies that are addressing this thing differently.
There are some sometimes smaller companies, but more so less mature or smaller SOC teams, who have less resources, less capabilities, and they’re choosing to partner up with managed security service providers (MSSPs) who are coming in and they’re helping ramp this cycle up. They’re helping people lay the foundation, the groundwork, establish governance and policy around the basic infrastructure.
And remember, you know, there’s two sides to this equation. While we’re talking security here and security operations and basic security functions, we’re also talking about securing all the consumption and use of AI throughout whatever your company is, your organization, is.
And so to understand what to do in your own camp, you sort of have to understand what the broader industry perspective is on the implementation, the governance, the controls, and the use and the foundations for all AI tech, regardless of whether you’re in security or if you’re in some other function in the company. So, if you find yourself struggling with the knowledge and the resources, getting your own team up to speed on what’s happening in the industry [can help].
And Flee, you just need some good ways to go out and get your hands a little dirty. But not everybody has the ability to do that on their own. So, the use of a service partner is great. Lots of security vendors are pouring huge amounts of money and time and effort into this thing. So, this stuff’s going to show up in existing tools that you’re already using. So you won’t need to build everything on your own.
And if you’re a higher end, more mature, and you have a team, and you have engineering resources and architects and folks that can invest in building your own custom models and your own custom systems and you can dedicate some teams to getting their hands dirty. I’m seeing tremendous results.
It’s not a build versus buy, sort of do one versus do the other. Get your hands dirty, create some of your own environment so you understand what’s possible, how it works, what the guardrails need to look like, and then also work together with both service providers and security solutions vendors so you can consume what’s happening there. Remember, everything’s moving collectively at the same speed.
It’s a funny situation we’re in where one possibility is moving forward very quickly. So, we’ve got to keep up. Vendors have to keep up with that same train. So do service providers and so do we as practitioners. The tide is rising and we all have to vote sort of collectively together. And then we all have to work together. We have to partner up with each other and learn from each other.
I love your reference to the open source community, Flee, because that’s like, that’s a big influence to what’s happening but also a really great way to sort of see what’s happening in the industry and get involved in that too. That’s additive, feels like a lot of pieces and parts here but my point is there’s a lot of opportunity to learn and to keep up and don’t feel like you got to go it alone and solve this entire thing by yourself.
R’s Frederick Lee: Dave, thank you so much for mentioning the fact that the vendors are already introducing this into their tools. It’s not even, in most cases, it’s not even an additional SKU or something else you have to pay for. They see the value and it’s like, hey, we can get you to use our product more because we’re adding these features and they’re keeping up with the tech, et cetera. Which is super exciting for me as a security practitioner.
You know, there is kind like the other side, especially for some of the companies need to watch out. It’s not just security vendors that are adding in AI. Some your other vendors are also adding in AI and that’s also something we still need to keep aware of.
O’s Dave Gruber: Yeah, yeah, for sure. In my research, what I have seen, certainly in the last two cycles, is security buyers are willing to rip out existing tools if they don’t think their vendors are keeping up with their investment in AI right now. So, you need to push your vendors to, so one, take advantage of them, but also make it clear that you have high expectations about what they’re going to bring to the table for you in the tool sets that you’re already utilizing or.
You know, as many of my clients have told me that they’re willing to change vendors if they think some other vendor is outperforming in this space.
DR’s Alexander Culafi: Dave from my market standpoint are there specific implementations and I’m sure this varies a lot depending on how much budget folks are working with how big the team is. But you know the thing I’ve heard anecdotally is that some of these AI security products can be very expensive with some possibly costing as much as an as an analyst salary and so the thing I want to ask you is where are teams seeing value and where they may be seeing less value from an implementation standpoint.
O’s Dave Gruber: Well, let me start out by saying if I can buy a high-quality security solution at the cost of one security analyst, I’m a happy camper. So that doesn’t scare me a bit. But on a broader point, I’ll just say that like all net new introduction of technology, price tends to if you’re the first mover and you have something no one else has, you’re going to charge more for it. As soon as all the rest of the competition gets in the market, it drives the price down very, very quickly.
R’s Frederick Lee: Yes.
O’s Dave Gruber: With the pace of how fast things are moving here, that totally is the case. There are some solutions that start out at a bit of a higher cost. Those prices are coming down quickly. I get inquiries by vendors all the time who tell me, “Help me price my AI enabled solutions properly so I don’t get shot out of the market where I’m overcharging for things because I need to keep up with the competition of what’s happening.”
I think you need to look at this as at this point, the speed of what’s possible here with this technology is well worth putting some money upfront. I’ve already seen all kinds of budget shift from other areas into investing in this space.
And my research shows that people aren’t paying attention down to the detailed ROI [return on investment] of what people are getting back yet because they see the potential outweighs the possible risk and cost right now.
And so, establish a budget based on what you can afford. Look for paths to achieve the most you can with that budget and know that things are going to be, certain solutions are going to be higher priced for a while. And if you don’t want go through that process on your own, figuring out what all that looks like, ask for some help from some service providers because people are looking at this thing very closely and there are some lower cost approaches to solve these problems.
You might end up, you know, putting some of own staff on it and having to reallocate some resources to do so, but there’s a lot of paths here.
R’s Frederick Lee: Yeah, I thank you Dave for mentioning if I can get a security tool that does the job of an analyst at the price of an analyst, I’m excited. Because the thing you have to remember is like, ideally, you know, we’re all human and we want real lives, et cetera. It means that you’re working an eight-hour day, but there’s 24 hours. We need three shifts. If I can get that for the cost of one analyst, that’s 24 seven, 365. That is a cost savings, even if it looks odd from a direct comparison standpoint, if you aren’t taken into full context.
But having that ability to have something repeatedly going through data, looking for interesting telemetry, signaling on that telemetry, firing off alerts, et cetera, 24/7 is a bargain if you’re getting it at the cost of a SEC analyst.
DR’s Alexander Culafi: That’s definitely true for when the product is really good, when it works great, et cetera, et cetera. The thing I’ll say is I, whenever I go to events, I try to talk to folks that are not senior in security that aren’t the people that, I necessarily talk to on a weekly basis, which are either the buyers, the executives or the security leaders. And more than once, not every time, but more than once I have heard that same tension you probably hear at all companies whenever there’s compulsory AI use.
It’s this doesn’t work as well as the vendor says. It’s very expensive. They’re making us use it. I don’t want to use it. And it’s a reality of every company, whether you’re in security or not. It’s just it’s the thing because AI is the big thing right now. Have you either of you seen that tension in your own lives either through conversations or in your own work and have you had to address or work through that tension to some extent?
I think, Flee, you would be a good person to ask this first.
R’s Frederick Lee: Yes, yes. So I have seen that tension and some of it was even tension caused by myself and misunderstandings from myself. And where I see that I’ve gotten it wrong in the past, where I can see peers and other companies getting it wrong is this idea that, we have AI, do something with it. And that’s actually backward. It really is about, “Hey, what problems do I have today?” And, “Is AI something that could improve this?”
I think that’s where we made some missteps and this includes some of the vendors where they’re like just kind of adding AI because somebody says you need it. And [users were] like, “Well no, I really want something very deterministic there’s no need for all this other kind of stuff you know I really just wanted to build a few plug-ins really really specific data have really specific workflows. I don’t need AI.” And now you’re kind of introducing it just to say, “We have it,” or taking processes that aren’t necessarily improved by a but could you still need a lot of human in the loop in the process itself.
So, the AI itself isn’t necessarily actually reducing work. And in some cases, it’s actually creating work. And I think that’s where we’ve made a lot of mistakes, both on the vendor side, the corporation side, and even the individuals from a team leadership standpoint of just saying, “Hey, AI is popular and cool, kids. Go out there and make sure you’re using AI.”
What you really need is an actual strategy. And most people are actually starting with, and this is the classic case of, hey, I have a tool.
I’m going to find something to use it for as opposed to really going down and thinking, Hey, what problems do we have? Is it right size for AI? What is the risk of leveraging AI for this? What are some of those tradeoffs? Am I going to have data integrity issues? Because hey, if you have bad data, as Dave said earlier, if you have bad data going into your LLM, you’re going to get bad results, right? And that’s the same thing with actually guarding like run books and the general maturity of the organization. And so, there’s all these things that you have to take into consideration. You can’t just say, “Go use LLMs,” you have to say, “Hey, this specific problem works well for what LLMs can do. Here are some of the tradeoffs of what we’re doing around that if we decide to go the LLM route. And how do we actually make sure that the ROI is truly there?”
O’s Dave Gruber: Yeah, and you know what I hear from the people that are more experienced and have spent more time here is those are all the same learnings that people will tell you. But it ties back to what I said earlier in the conversation, which is about learning not only what’s possible, what you can trust, what does trust even mean? Where do you need to rebuild, re-architect basic hygiene and infrastructure? There’s a lot of moving parts here. So, it’s not as this one is not.
You know, it’s not as simple as, here’s a simple tool that does a simple few things. Where can I apply it? Is this is a broad use new set of capabilities, which could potentially enable me to rethink the way I even do things? But that’s not how we adopt things? We adopt things by going, “OK, what am I doing today where I can find a specific use case?”
So, people are, you know, the typical adoption model is you start narrow. start, you look for, you know, an overuse term, low hanging fruit, but things that are easier to measure, understand, easier to apply something that new to get my feet wet with that a little bit, understand what’s possible, where the holes are, then move, expand a little bit. The trick of it is though, right now is things are moving so fast, that cycle has to go fast too. Otherwise, you fall behind and you’re not keeping up with what’s coming next.
So, we need to be kind of aggressive in the cycle, in the process and get experiences fairly quickly. We need to almost, I’m not going to say over believe in what’s possible, but just know that this will mature quickly and it’s up to us to put it to work in ways that are both meaningful, but most importantly that actually have a real positive impact on the outcomes.
So, I’ve been asking regularly, so what is the impact of this on your outcomes? And it turns out people are telling me, hey, this reduced my meantime to respond or my meantime to contain this increased the percentage of alerts I’m able to process and I found net new threats that would have been left on the floor because they wouldn’t have made it through my cycle. I found things in areas of my attack surface that I just wasn’t looking at in the past. there are some real metrics that and what we see is as people use this thing more or these capabilities more, we’re seeing real net positive outcomes.
R’s Frederick Lee: I think one of the great things on that net positive outcomes is the cost of coverage is lower. You’re kind of like looking toward things like, well, hey, I can actually look at my entire attack surface and say that, hey, nothing is not important anymore because I have this thing that can actually just always do the same thing, investigate the same issues over and over again. But even when it comes to things like some of your classic, like user interaction type things, you know, like tons of people, especially those in bone management, part of their job is literally reaching out to other humans, checking in on things, verifying, et cetera. And that is one of those things where you’re never gonna have enough people to actually cover that properly. But now you can leverage these LLMs to do things like Slack notifications, follow up with somebody, double check if JIRA ticket is updated, etc.
DR’s Alexander Culafi: Excellent. Well, this has been great. I think we’re at time. Thank you so much, Flee and Dave. I have learned a lot. It is always great to talk to people who know way more about security than me and just talk, you know, not try to always be telling a story in the journalist sense, but just hear what folks are actually working with on a day-to-day basis. So, thank you very much.
O’s Dave Gruber: Pleasure.
R’s Frederick Lee: Well, thank you for having me.
DR’s Becky Bracken: Alex, thank you so much for hosting. Flee and Dave, it was really great practical real advice. Like you said, I got a lot out of it. So, I want to thank you both again for your time. This has been Dark Reading Confidential. It is a podcast from the editors of Dark Reading. And on behalf of my colleague, Alexander Culafi and myself, Becky Bracken, and the rest of us here at Dark Reading, thanks so much for listening and we will see you next time. Bye bye.
