
AI is the future of cybersecurity. This is how to adopt it securely

GitHub’s chief security officer says AI won’t replace the need for security teams, but it will greatly enhance their work.


Where do I start with AI? Is it safe? How do I talk about risk with my stakeholders?

These are questions I am asked often these days.

While AI is helping developers code faster and be more productive, some leaders are concerned that it can introduce additional security and risk management headaches. But the cybersecurity industry is no stranger to emerging technologies, and we must continue to embrace every tool at our disposal to secure the software ecosystem.

AI MAKES THE PROMISE OF ‘SHIFT LEFT’ A REALITY

Used effectively, AI can help prevent vulnerabilities from being written in the first place—radically transforming the security experience. AI provides context for potential vulnerabilities and secure code suggestions from the start (though please still test AI-produced code). These capabilities enable developers to write more secure code in real time and finally realize the true promise of “shift left.”
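
To make that concrete, here is a small illustration of the kind of in-editor fix an AI pair programmer might suggest. It is a sketch only: the function names and table are hypothetical, and it is not the output of any particular tool.

```python
import sqlite3

# Vulnerable pattern: building SQL by string interpolation invites injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# The kind of suggestion an AI assistant might surface as you type:
# bind the value as a parameter so the database driver handles escaping.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Catching that difference while the code is being written, rather than in a post-hoc review, is what “shift left” looks like in practice.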

This is revolutionary. “Shift left” has traditionally meant getting security feedback after you’ve brought your idea to code, but before deploying it to production. With AI, security is truly built in, not bolted on. There’s no further left to shift than the very place where your developers bring their ideas to code, with their AI pair programmer helping them along the way. It’s an exciting new era where generative AI will be on the front line of cyber defense.

However, it’s also important to note that, in the same way that AI won’t replace developers, AI won’t replace the need for security teams. We’re not at Level 5 self-driving just yet. We need to keep our hands on the wheel and work with our existing security controls, not abandon them.

GREEN-LIGHTING AI WITHIN YOUR ORGANIZATION

Some teams are already reaping major productivity benefits from AI, but other leaders are still concerned about its security risks and want to know how to create the right standards around AI tools. How can you ensure good security outcomes while enabling your software creators to do their best (and most secure) AI-empowered work?

We’ve had a lot of practice with AI at GitHub, and I want to share a few best practices organizations can leverage when looking to adopt a generative AI tool. These strategies will separate the organizations that thrive from those that fall short in protecting their most valuable assets. Let’s take a look.

TREAT AI TOOLS LIKE ALL OTHER TOOLS

While AI is a novel technology, it has more in common with other tools than not. You can start with the same security and risk frameworks for evaluating an AI tool that you would for any other tool you’re looking to bring into your stack, and customize them over time. Ask for data flow diagrams, external testing reports, and other information about the tool’s security and maturity.

At GitHub, we have processes that help us identify and manage the risks associated with any new tool provided by an external vendor. New tools or services are carefully reviewed by our procurement, legal, privacy, and security teams—with a particular focus on what data will be used, how it will be used, and how the data will be protected by the external vendor.

UNDERSTAND DATA USE AND RETENTION

The key thing that you want to keep tabs on is how your data, or that of your customers, is managed. After all, you can imagine the security concerns that come with a third-party vendor retaining and using your sensitive company or customer info. So, you need to know how the AI tool manages your data, where the data goes, how it’s shared, and if it’s retained. Pay attention to whether the vendor uses customer data for training their AI models, and understand what options are available to opt in or out of that data usage based on your needs.

INSPECT THE INTELLECTUAL PROPERTY CLAUSES

AI tools open up a whole new world of intellectual property (IP) questions and the legal landscape is still evolving. Consider the new tool’s IP clauses and review the terms of the licensing agreement to understand what protections may be offered. For example, Microsoft and Google both offer customers IP indemnity when using their generative AI tools.

Plus, keep in mind that developers encounter this risk any time they use code they didn’t create, such as when they copy code from an online source or reuse code from a library. That is why responsible organizations and developers employ code scanning policies and other review practices.
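
As a hedged illustration of that practice, the sketch below wires a static analysis scan into a simple pre-merge check. The choice of the open-source Bandit scanner, its flags, and the vendor/ path are assumptions made for the example, not a prescription from this article; swap in whatever scanner and policy your organization has standardized on.

```python
import subprocess
import sys

# Minimal sketch of a pre-merge gate: scan the code being pulled in and
# block the merge if the scanner reports findings. Assumes Bandit is
# installed and exits non-zero when it reports issues.
def scan_before_merge(path: str) -> bool:
    result = subprocess.run(["bandit", "-r", path])
    return result.returncode == 0  # True means no findings were reported

if __name__ == "__main__":
    sys.exit(0 if scan_before_merge("vendor/") else 1)
```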

TRACK THE TOOL’S TRACK RECORD

Understanding the tool’s track record is important. It helps ensure the AI product is reliable, effective, and aligned with your company’s objectives. What are the tool’s past performance and accuracy? Look for successful use cases that demonstrate its effectiveness. What kind of dataset is it trained on? Make sure that dataset is relevant to your projects. Other things to look at include bias mitigation, user reviews, and options for customization.

AUDIT THE TOOL’S AUDITS

We all know that third-party testing and audits are invaluable in gauging a technology’s efficacy and security. Same story here. Ask whether the AI tool has gone through third-party testing. The AI product may not yet be compliant simply because it’s new, but are the company’s other products compliant? Is there a plan to achieve compliance for the AI product? Choosing a tool that has been rigorously tested and audited will strengthen your organization’s security posture.

When it comes to AI, you don’t have to be “the department of no.” By engaging in the above best practices, you can set guardrails and rules of engagement that will result in secure outcomes.

Just like AI won’t replace developers, it won’t replace your need for security teams, but it will greatly enhance their work. There’s no better way to “shift left” than by having an AI-assisted pair programmer right there in the IDE with your developers, helping them write safer and more secure code in real time. This will help you get to better security outcomes faster—and, dare I say, will radically transform the next decade of security.


ABOUT THE AUTHOR

Mike Hanley is the CSO and SVP of Engineering at GitHub.
