CISOs at the Crossroads of Security and Artificial Intelligence

Ways to Leverage AI’s Power While Safeguarding Your Organization’s Secrets

Gutsy Staff | March 28, 2024

“No longer do we have to learn how to speak computer, the computer has learned to speak human.”

-Bruce Schneier, Gutsy Advisor

AI is here, like it or not. We asked renowned cryptographer and Gutsy Advisor Bruce Schneier what CISOs should consider when adopting artificial intelligence tools into their security organizations. Here is how he recommends navigating the crossroads of security and AI:

"[AI] is now becoming a big deal very fast, and I worry that it's going to happen so fast that we're not going to be able to put in any regulation."

Takeaway #1: Data Access and Security

CISOs will be in charge of balancing the need for AI’s access to organizational data and secrets.

Takeaway #2: Redefine Risk

In the era of AI, boldly confront any new vulnerabilities and risks, such as prompt injection and algorithmic biases, and do not be afraid to reevaluate your existing governance processes and frameworks.

Takeaway #3: Pursue Trustworthy AI Solutions

Advocate for the prioritization of AI vendors that offer trustworthy solutions, emphasizing factors like data privacy, security, and transparency. This includes exploring containerized AI frameworks and pushing for regulatory oversight to ensure responsible AI deployment.

Related Resources:

1) [Article/video] Balancing Freedom and Responsibility Through Security Governance

2) [Article/video] How Process can Prove Trustworthiness in an Era of Outsourcing

3) [Article/video] When Investing in Security Processes is a Solid Governance Strategy

The full transcript:

John:

What should CISOs think about AI and the threats it uniquely poses to an enterprise security organization, as opposed to the current state?

Bruce:

You know, I want to see CISOs look at the use of AI in their organization. And there are a bunch of unique challenges. It’s going to work best if it has all the organization's data, it ingests everything, and that's great. But yikes, how do we make that work?

I was talking to a person who's doing AI for the CIA. They want this analysis tool to create insights, but they can't give it all of the country's secrets. That would be crazy. So how do you balance those two? Now, a CISO's not going to have CIA-level secrets, but they have to think about those same things:

  • Is the AI in their organization?
  • Is it outsourced?
  • Is the training they're giving it going to be used with other organizations where the data might leak?

We're going to assume that these AIs are going to be everywhere. You know, my guess is they are going to be the way we all interface with everything from now on. That no longer do we have to learn how to speak computer; the computer has learned to speak human. How that affects the network isn't going to be obvious, because we know humans have their risks, and computers have a different set of risks. So how do we think about computer risk? You know, just because they're AIs doesn't mean that they are not vulnerable to all the things computers are vulnerable to. I mean, they're just computers. Why should we think the software is better written? There's a lot to worry about here.

John:

One of the most common concerns, at least that I've heard people raise, and certainly one that I would hold if I were [still] doing that [CISO] job, is not even about the sort of malicious AI as much as it is about my own organization using various free or easily purchased AI products and just basically leaking my organization's data into these different systems, and not really having any control over how it's subsequently used and so forth.

Bruce:

It goes back to trust again, right? And AI is just going to be some managed service that you are going to contract with and you need to know where your data is, how it's going to be used. I mean, I know some companies that are trying to build trustworthy AI where things are containerized, where your data doesn't leak to other organizations, where maybe the processing, the training or the fine-tuning happens in your organization, on your network, not on someone else's network.

This is all in flux, but there is going to be a market for trustworthy AI because every organization is going to worry about their data leaking. They’re going to worry about whether the data can be hacked and they're going to want the benefits.

They're going to want an AI trained on their corporate opus in a way that works for them but doesn't leave them at risk. This is not a unique problem you've got. So I would hang tight a little bit. I mean if you want to be on the bleeding edge, you know, do the best you can with ways to create trustworthy AI, but this is a service category, and it will be robust real soon.

I want AI to be no worse than the humans it replaces. We have lots of rules governing what humans can do, whether it is misogyny or racism or various kinds of discrimination or preferences or ways to deal with data, and AIs should be subject to those exact same rules. I want government to make sure there is competition here.

I don't like it that the major tech platforms are the ones building the AI. They're the ones hiring all the talent, so there's very little of it in universities now. So I want a more robust market, both domestically and internationally. I want the government to treat AI like a technology that'll kill you.

This is now becoming a big deal very fast, and I worry that it's going to happen so fast that we're not going to be able to put in any regulation. I mean, I remember drones. For many years, we were told ‘don't regulate drones. The market’s too nascent, you'll kill the market.’

Then suddenly everyone gets one for Christmas one year, and we're told it's too late to regulate drones. Everyone has one.

We really can't make that mistake with AI.