The duality of AI in cyber security: risk factors and enablers
Paul Bedi is the founder and CEO of IDMworks, an industry-leading identity and access management (IAM) firm.
AI has certainly been dominating the conversation this year.
We’ve seen software companies – including tech giants Microsoft, Google, GitHub, and Intercom – enhance their offerings with new AI functionalities. We’ve seen the general public use ChatGPT for a variety of use cases, from writing resumes and drafting content to generating test cases for a piece of programming code.
However, as these tools gain popularity, concerns are mounting as well – from individuals who fear losing their jobs to technology leaders who feel the need to rein in AI’s development.
It’s an interesting phenomenon, but it’s one we’ve experienced before. Every time a new tool or technology is introduced, it’s natural to have a mix of excitement and fear – and it’s at this juncture that we figure out how to use it to empower development and progress. Harnessing that enthusiasm and pairing it with the right security, ethical and moral principles is what will enable us to use AI to power innovation.
As business, technology, and government leaders continue to evaluate the potential of AI, I think there is a discussion that is noticeably missing: its role in cybersecurity. The way I see it, AI has a dual impact in the cybersecurity industry: it is both a significant risk factor and an area of opportunity. How we deal with these two factors will determine what our industry looks like in the future.
AI as a risk factor
When it comes to potential risks to cybersecurity from AI, there are two components that are important to pay attention to.
The first is malicious actors using OpenAI technology to refine their efforts. Recent research shows that cybercriminals are already experimenting with ChatGPT to recreate malware strains, develop decryption tools, and create dark web marketplaces for fraudulent activities. Additionally, just as the average person uses ChatGPT to make email drafts more conversational or formal, bad actors are using the same capabilities to refine their phishing campaigns.
In short, cybercriminals are in a better position than ever to launch sophisticated social engineering attacks – and both individuals and companies need to be better prepared to stop them.
For the second component, it is important for companies to look inward. Preventing insider threats is a key pillar in any cybersecurity strategy, and today it requires a strong focus on AI-enabled tools.
Take Samsung’s recent example. In May, the company banned the use of AI-powered chatbots until it could put the right safeguards in place to use AI safely and effectively. The ban came after an employee allegedly uploaded sensitive code to ChatGPT – effectively leaking the data, since once information reaches OpenAI’s servers there is no way to restrict how much of it is shared with other external users.
To mitigate these concerns, companies need to put the right guardrails in place to ensure that no sensitive content is entered as input into AI-powered tools. These guardrails must take into account data privacy, protection of proprietary information, and any industry or government regulations. The reputational and financial risks of not having these guardrails in place – especially for businesses with large volumes of customer or financial data – can be huge.
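As a minimal illustration of such a guardrail, a company could screen prompts for sensitive content before they ever leave the corporate network. The sketch below is purely hypothetical – the pattern names and regular expressions are illustrative, and a real deployment would rely on a proper data loss prevention (DLP) engine with organization-specific rules:

```python
import re

# Illustrative patterns only – real guardrails would use an
# organization's own DLP rules, not this short list.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace anything matching a sensitive pattern before the
    prompt is sent to an external AI-powered tool."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text
```

A filter like this would sit between employees and the external service, so that proprietary identifiers are stripped out regardless of what an individual user pastes in.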
AI as an area of opportunity
While AI is pushing the cybersecurity sector to be even more proactive and aware, there is also significant potential to leverage AI and increase the effectiveness of our offerings. Some cybersecurity companies have started by introducing data-rich algorithms and automation into their tools.
For me, this only scratches the surface. As a field, it feels like we’re still thinking too simplistically about how we can use AI. Even in threat detection and incident response, there is still a long way to go before those functions become truly sophisticated.
As such, we need to move the conversation forward and take a more innovative approach as we consider what this technology can do to enhance our efforts. For companies that want to stay ahead of the game, I would suggest bringing in scientists and mathematicians with expertise in AI to think about the problems that have not yet been solved and how AI-powered solutions could address them.
This is something that needs to happen sooner rather than later. As I mentioned above, cybercriminals are already starting to use the AI tools available to them. It is important that our industry takes a proactive approach so that we do not fall behind.
Where to next?
AI is a powerful phenomenon – there is no doubt about it. What will define the cybersecurity field is how we approach it, and how seriously we think about it to better protect businesses and individuals.
The Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs, and technology executives.