Four best practices for AI-powered cybersecurity
Artificial intelligence (AI) has become prevalent across business functions in the last few years. Now, hackers are following suit. Today, cybercriminals can deploy AI to boost the success of many of their attacks: for example, to spot exploitable patterns in user behavior, or to identify new network vulnerabilities. As well as giving criminals improved accuracy, AI works at immense speed and in real time.
To combat these threats, cybersecurity teams need to be one step ahead. But this is no easy task. It’s well documented that today’s cybersecurity analysts are overwhelmed by the sheer volume of data and the number of endpoints they need to monitor. There is also a huge skills gap within the sector: (ISC)² research shows there were 3.12 million unfilled cybersecurity positions in 2020. To fill them all, the workforce would need to grow by a startling 89%.
How AI can improve cybersecurity
To combat new-age, intelligent attacks while relieving the burden on cybersecurity teams, AI is a must-have tool. We have found that 75% of executives say deploying AI allows their organization to respond faster to breaches, while three in five say it improves the accuracy and efficiency of analysts.
Despite the benefits, many companies struggle to successfully implement AI, particularly when it comes to scaling up pilots for enterprise-wide use.
To help organizations deploy AI successfully, four best practices are detailed here:
Define a strategy and governance model
Selecting how you will use AI, and who will oversee it, is instrumental to achieving a return on investment. A strategy for AI deployment needs to be laid out, taking governance mechanisms into consideration. For example, cybersecurity leaders need to define roles and responsibilities for cyber analysts, and assign ownership of monitoring AI algorithm output so that any anomalies are caught and fixed.
It’s also important to select the right use cases for implementation, and to review and expand them on an ongoing basis. To begin, cybersecurity leaders should choose AI programs that are less complex to implement but offer high rewards, such as malware or intrusion detection. It’s also best to deploy use cases where the datasets are complete and up to date.
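To make the intrusion-detection example concrete, here is a minimal sketch of the kind of heuristic such a program starts from: flagging hosts whose event volume deviates sharply from the fleet baseline. The function name, threshold, and data shape are illustrative assumptions, not any specific product's design; production systems use far richer features and models.

```python
from statistics import mean, stdev

def flag_anomalous_hosts(event_counts, threshold=3.0):
    """Flag hosts whose event count sits more than `threshold`
    standard deviations above the fleet-wide mean (a simple
    z-score heuristic)."""
    counts = list(event_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [host for host, c in event_counts.items()
            if (c - mu) / sigma > threshold]

# Example: one host generating far more events than its peers.
baseline = {f"host-{i}": 100 + i for i in range(20)}
baseline["host-x"] = 2500
print(flag_anomalous_hosts(baseline))  # → ['host-x']
```

Even this toy version illustrates why complete, current datasets matter: the baseline is only meaningful if it reflects normal activity across the whole estate.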
Harness the power of your data
AI is only as successful as the data you feed it. To be effective, organizations need to ensure that AI has full visibility into the enterprise’s infrastructure, data systems and application landscapes.
As well as this, data must be kept current for consistent high-quality output. This is where a data platform comes in. Organizations can either buy a ready-made platform to feed their information into, or build one internally. This platform must be reviewed and tweaked on an ongoing basis to make sure the AI tool is receiving adequate information.
Soar with SOAR
Security orchestration, automation and response (SOAR) refers to technologies that allow organizations to collect security data and alerts from different sources. SOAR supports incident analysis and triage by combining human and machine power. For AI deployment, these tools are essential: they help analysts define, prioritize and drive incident response activities through connections to data sources and platforms.
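The orchestration-and-triage step described above can be pictured as a small pipeline that normalises alerts from several sources and ranks them for analysts. The field names and scoring rule below are illustrative assumptions, not any SOAR product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g. "siem", "edr", "email-gateway"
    severity: int     # 1 (low) .. 5 (critical)
    asset_value: int  # business criticality of the affected asset, 1..5

def triage(alerts):
    """Rank alerts by a simple priority score so analysts
    handle the highest-risk incidents first."""
    return sorted(alerts, key=lambda a: a.severity * a.asset_value,
                  reverse=True)

queue = triage([
    Alert("edr", severity=5, asset_value=4),
    Alert("siem", severity=2, asset_value=5),
    Alert("email-gateway", severity=3, asset_value=1),
])
print(queue[0].source)  # → edr
```

In a real deployment the scoring would come from the AI models discussed earlier, but the principle is the same: pull alerts into one place, score them, and put the most urgent in front of a human.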
Upskill your teams
Deploying and harnessing the power of AI relies on a skilled team that understands the insights it generates and can take appropriate action where needed. Consequently, it’s paramount to upskill cybersecurity teams so that they understand AI processes and alerts. It can also be helpful to create user-friendly, intuitive interfaces for AI tools, so that cybersecurity teams can interact with the technology without intensive training.
AI’s potential to supercharge cybersecurity operations must be harnessed. As attack surfaces continue to grow and hackers become more advanced, the technology will become an additional teammate in the security operations center. To ensure that investments deliver an ROI and accurate results, it is vital that cybersecurity leaders deploy AI strategically, giving both the tool and their teams the information they need.
By Geert van der Linden, Executive Vice President of Cybersecurity at Capgemini
AI Shows its Value; Governments Must Unleash its Potential
2020 has revealed just how far AI technology has come, as it achieves fresh milestones in the fight against Covid-19. Google’s DeepMind helped predict the protein structure of the virus; AI-driven infectious disease tracker BlueDot spotted the novel coronavirus nine days before the World Health Organisation (WHO) first sounded the alarm. Just a decade ago, these feats were unfathomable.
Yet, we have only just scratched the surface of AI’s full potential. And it can’t be left to develop on its own. Governments must do more to put structures in place to advance the responsible growth of AI. They have a dual responsibility: fostering environments that enable innovation while ensuring the wider ethical and social implications are considered.
It is this balance that we are trying to achieve in the United Arab Emirates (UAE) to ensure government accelerates, rather than hinders, the development of AI. Like every economy in transition at the moment, we see innovation as vital to realising our vision for a post-oil economy. Our work in this space has highlighted three barriers in the government approach to realising AI’s potential.
First, addressing the issue of ignorance
While much time is dedicated to talking about the importance of AI, there simply isn’t enough understanding of where it’s useful and where it isn’t. There are a lot of challenges to rolling out AI technologies, both practically and ethically. However, those enacting the policies too often don’t fully understand the technology and its implications.
The Emirates is not exempt from this ignorance, but it is an issue we have been trying to address. Over the last few years, we have been running an AI diploma in partnership with Oxford University, teaching government officials the ethical implications of AI deployment. Our ambition is for every government ministry to have a diploma graduate, as it is essential to ensure policy decision-making is informed.
Second, moving away from the theoretical
While this grounding in the moral implications of AI is critical, it is important to go beyond the theoretical. It is vital that experimentation in AI is allowed to happen for its own sake, and that ethical problems are not allowed to stymie innovations that don’t yet exist. Indeed, many of these concerns – while well-founded – are borne out only in the practical deployment of end-use cases and can’t be meaningfully discussed on paper.
If you take facial recognition as an example, looking at the issue in the abstract quickly leads to discussions over privacy concerns around potential surveillance and intrusion by private companies or state authorities.
But what about the more specific issue of computer vision? Although part of the same field, the same moral quandaries do not arise, and the technology is already bearing fruit. In 2018, we developed an algorithmic solution that can be used in the detection and diagnosis of tuberculosis from chest X-rays. You can upload any image of a chest X-ray, and the system will identify if a person has the disease. Laws and regulations must be tailored to unique use-cases of AI, rather than lumping disparate fields together.
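The upload-and-verdict flow described above can be sketched in outline. Everything here is a hypothetical illustration of that interface: the function names, the 0.5 decision threshold, and the stand-in model are assumptions, not the actual system deployed in 2018, which would use a trained deep-learning classifier.

```python
def preprocess(pixels):
    """Normalise raw 8-bit grayscale pixel values to [0, 1];
    a real pipeline would also resize to the model's input shape."""
    return [p / 255.0 for p in pixels]

def classify(pixels, model, threshold=0.5):
    """Turn a model's predicted probability into a binary
    TB-positive / TB-negative verdict for the uploaded X-ray."""
    prob = model(preprocess(pixels))
    return ("tb-positive" if prob >= threshold else "tb-negative", prob)

# Stand-in for a trained model: always predicts high TB probability.
dummy_model = lambda x: 0.91
label, prob = classify([0, 128, 255], dummy_model)
print(label)  # → tb-positive
```

The point for regulators is visible even in this sketch: the system answers one narrow diagnostic question about one image, which raises very different issues from identifying individuals in public spaces.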
To create a culture that encourages experimentation, we launched the RegLab. It provides a safe and flexible legislative ecosystem that supports the utilisation of future technologies. This means we can actually see AI in practice before determining appropriate regulation, not the other way around. Regulation is vital to cap any unintended negative consequences of AI, but it should never come at the expense of innovation.
Finally, understanding the knock-on effects of AI
There needs to be a deeper, more nuanced understanding of AI’s wider impact. It is too easy to think the economic benefits and efficiency gains of AI must also come with negative social implications, particularly concern over job loss.
But with the right long-term government planning, it’s possible to have one without the other; to maximise the benefits and mitigate potential downsides. If people are appropriately trained in how to use or understand AI, the result is a future workforce capable of working alongside these technologies for the better – just as computers complement most people’s work today.
We have started this training as early as possible in the Emirates. Through our Ministry of Education, we have rolled out an education programme to start teaching children about AI from as young as five years old. This includes coding skills and ethics, and we are carrying this right through to higher education with the Mohamed bin Zayed University of Artificial Intelligence set to welcome its first cohort in January. We hope to create future generations of talent that can work in harmony with AI for the betterment of society, not its detriment.
AI will inevitably become more pervasive in society, digitisation will continue in the wake of the pandemic, and in time we will see AI’s prominence grow. But governments have a responsibility to society to ensure that this growth is matched with the appropriate understanding of AI’s impacts. We must separate the hype from the practical solutions, and we must rigorously interrogate AI deployment to ensure that it is used to enhance our existence. If governments can overcome these challenges and create the environments for AI to flourish, then we have a very exciting future ahead of us.