Navigating the Intersection of Generative AI and Zero Trust Principles – Ben Corll, CISO in Residence at Zscaler

By Lane F. Cooper, Editorial Director, BizTechReports, Moderator for CIO.com

The convergence of generative AI and Zero Trust principles presents both challenges and opportunities for IT and cybersecurity professionals. To shed light on these dynamics, we spoke with Ben Corll, CISO in residence at Zscaler, who offered his perspective in the wake of a CIO.com virtual roundtable with senior technology decision-makers on how organizations can reconcile the seemingly conflicting demands of AI and cybersecurity.

“Generative AI (Gen AI) is insatiable. It wants to analyze, assess, and assimilate all the information it can access,” Corll explained. “It’s designed to learn and adapt, accessing all the data it’s allowed to. On the other hand, Zero Trust operates on the principle of ‘never trust, always verify’—only granting access where it’s explicitly allowed. So, do we have a conflict between Gen AI’s need to learn and Zero Trust’s stringent access controls? That’s a fascinating conversation.”

As enterprises increasingly rely on Gen AI to extract value from vast datasets, the tension between data accessibility and security is becoming more pronounced. “The more data you collect and analyze, the more opportunities you have to extract value,” Corll noted. “But you have to put guardrails in place. The key question is: Can we put Zero Trust controls around Gen AI? And how do we manage the conflict between Gen AI’s desire to access everything and the need to control that access?”

Corll emphasized that while Gen AI’s capabilities are impressive, it’s ultimately just another application within an organization’s IT ecosystem. 

“Think about your CMDB or asset management solutions,” he said. “Are you comfortable with them discovering everything on your network? The same logic applies to vulnerability management tools. If used within a carefully established framework, these tools don’t break Zero Trust principles. The same premise should apply to Gen AI.”

When it comes to implementing Zero Trust in the context of Gen AI, Corll suggested a proactive approach. 

“It comes down to us, the security professionals and IT administrators, to put those guardrails in place. We must start thinking about how to label proprietary information or set configurations preventing Gen AI from accessing certain data.”
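In practice, that labeling can start simply. The Python sketch below shows one way such a guardrail might look: documents carry classification labels, and anything tagged as proprietary or restricted is filtered out before it ever reaches a Gen AI ingestion pipeline. The label names and the Document structure are illustrative assumptions, not any specific product's API; a real deployment would read labels from an existing data-classification or DLP system.

```python
# Minimal sketch: keep labeled documents out of a Gen AI ingestion
# pipeline. Label names and the Document type are hypothetical.
from dataclasses import dataclass

BLOCKED_LABELS = {"proprietary", "restricted", "pii"}

@dataclass
class Document:
    path: str
    content: str
    labels: set[str]

def eligible_for_ingestion(doc: Document) -> bool:
    """Return True only if the document carries no blocked label."""
    return not (doc.labels & BLOCKED_LABELS)

docs = [
    Document("handbook.md", "...", {"public"}),
    Document("roadmap.docx", "...", {"proprietary"}),
]

# Only handbook.md passes; roadmap.docx never reaches the model.
corpus = [d for d in docs if eligible_for_ingestion(d)]
```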

He also pointed out the potential for future advancements in AI controls. “We’re likely going to see capabilities emerge that allow us to say, ‘Please don’t discover this,’ whether through keywords, text files, or other configurations. The privacy and security communities must develop ways to apply these guardrails, regardless of whether we’re dealing with public, commercial, or enterprise AI models.”
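For public models, one such mechanism already exists: OpenAI's GPTBot web crawler honors robots.txt directives, so a site can declare content off-limits to discovery. A minimal exclusion, shown here as an illustration of the "text files or other configurations" Corll describes, looks like this:

```
User-agent: GPTBot
Disallow: /private/
```

Enterprise and commercial models will need analogous, and likely far richer, controls; that is the gap the privacy and security communities still have to close.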

Access control remains a critical area of focus. Corll believes that while organizations can apply access rules to Gen AI, there’s still work to be done. 

“Is it possible to prevent unauthorized access through checks and balances? Yes. But it needs to be better. Can we apply roles and create different profiles to control who can ask what? That’s where things get interesting. It will require significant sophistication, such as the ability for AI to differentiate between a cybersecurity professional and an HR manager when a query requests sensitive information.”

Corll warned of the risks of prompt manipulation and social engineering tactics aimed at AI models. “It’s not about poisoning the AI—it’s about manipulating it, almost like social engineering the model to get the information you want.”

It is essential to view Gen AI as a component of a broader risk management strategy rather than as a standalone entity. Many organizations are still in the early stages of maturity in this area, struggling to identify data leakage risks and often lacking the processes and safeguards to manage AI applications effectively.

As Zscaler works with clients to mature their risk management processes, Corll advises a systematic approach. 

“Talk to the business, find the use cases, and run risk assessments. It’s like engaging your development environment. Similar issues will arise. For instance, you will discover that different developers use six tools for the same purpose. Can you reduce that? Can you standardize? And if not, why not?”

He stressed the importance of understanding the impact of data loss on the organization and conducting gap analyses to identify missing tools and capabilities. “Do you even have visibility today? If so, what controls are in place? And do you understand the capabilities you have? Have you enabled them?”

When asked who should be involved in these discussions, Corll emphasized the need for a multidisciplinary approach. 

“There isn’t a business unit that couldn’t benefit from the insights that Gen AI can provide. But we also need to consider governance—are we violating privacy? Do we have the right to process this information? It’s not just about internal policies; depending on the jurisdiction, we may need to engage external legal counsel and works councils.”

Corll offered a sobering assessment of where most organizations stand today. 

“In a word, immature. But with the right steps—engaging the business, running risk and gap analyses, and involving the right stakeholders—we can move forward responsibly and harness the full potential of Gen AI while maintaining the security and integrity of our data,” he concluded.

###
