With software and cybersecurity organisations working to leverage AI solutions in their armoury of cyber defences, what are the benefits of taking that approach, and what are the potential risks?
Keijo Mononen, General Manager of Ericsson Security Solutions, explains to Technology Magazine how AI fits into the future of cybersecurity.
How can AI be used to enhance cybersecurity, and what are some potential risks or challenges associated with this approach?
Many companies use artificial intelligence and machine learning to identify malware, as it is impossible to keep pace by constructing traditional signatures for each variant. In the security operations domain, Microsoft has combined GPT-4 with its own security model, trained on trillions of security events, to produce Security Copilot, a product that helps bolster defences in the response process. However, there are still risks associated with the use of AI in security. Some organisations rely too heavily on AI and value it over their own human analysis, allowing attackers to learn how to bypass the AI technology or even use it to their advantage. A further challenge is that AI is not always 100% accurate, so one cannot let AI fully drive security, as it may drive the wrong actions. Attackers are becoming cleverer and are developing ways to bypass AI, so other aspects of security remain vital.
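The limitation of signature-based detection mentioned above can be illustrated with a minimal, hypothetical sketch: a signature database only matches exact hashes of known samples, so even a trivially modified variant slips through, which is why ML-based approaches are used instead.

```python
import hashlib

# Hypothetical illustration: a signature database of known-bad file hashes.
# Any name here is illustrative, not a real product's API.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"old malware payload").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Classic approach: flag a file only if its exact hash is already known."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_HASHES

known = b"old malware payload"
variant = b"old malware payload v2"  # one small change, hash no longer matches

print(signature_match(known))    # True  - known sample is caught
print(signature_match(variant))  # False - modified variant slips through
```

Because every new variant produces a new hash, keeping such a database complete is infeasible, which is the gap ML-based classifiers aim to fill.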
AI-powered security tools
What are some of the most promising AI-powered security tools and solutions currently available, and how are they being used to protect businesses and organisations?
At this point, Microsoft Security Copilot stands out as a new approach to security operations, especially in incident response. It combines a large language model (LLM) with Microsoft's own security model, as well as vast amounts of data, to enable full-scale incident response. Currently, this technology is only being previewed and is not yet generally available. Some other companies are also working to bring similar solutions to market, but Microsoft is leading the way for now.
How can businesses ensure the ethical use of AI in security, and what measures can they take to prevent bias or discrimination in AI algorithms?
As AI continues to develop, ensuring ethical use of the technology is paramount. Businesses must ensure training data is collected with consent – taking copyrights and licences into account – anonymised, stripped of anything that could expose private information, and properly tested through penetration testing. Additionally, training data must represent the domain and context sufficiently to provide representative and accurate predictions for the security user, and the business must properly evaluate and communicate the costs of inaccuracies.
What are some of the most significant cybersecurity threats facing businesses today, and how can AI be leveraged to detect and mitigate these threats?
Ransomware and insider threats stand out as two of the most significant threats facing businesses today. For insider threat detection, AI and machine learning are essential, as they help find anomalies in large haystacks of data that many traditional techniques cannot. Ransomware, on the other hand, is challenging even for AI to combat, especially as inaccuracies may cause business disruption. However, AI can still help with the speed of action once ransomware is identified, which is critical in countering the attacks.
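The "anomalies in large haystacks" idea can be sketched in a few lines. This is a hypothetical, simplified illustration (not any vendor's actual method), assuming per-user daily login counts: a robust, MAD-based z-score flags the user whose behaviour deviates sharply from the rest.

```python
from statistics import median

# Hypothetical data: daily login counts per user; "mallory" is the outlier
# an analyst would want surfaced from the haystack.
daily_logins = {
    "alice": 21, "bob": 19, "carol": 23, "dave": 20,
    "erin": 22, "mallory": 240,
}

def anomalies(counts, threshold=3.5):
    """Flag entries far from the median using a robust (MAD-based) z-score."""
    vals = list(counts.values())
    med = median(vals)
    mad = median(abs(v - med) for v in vals)  # median absolute deviation
    return [user for user, v in counts.items()
            if mad and 0.6745 * abs(v - med) / mad > threshold]

print(anomalies(daily_logins))  # ['mallory']
```

Real insider-threat systems model far richer behavioural features, but the principle is the same: statistical baselines surface the handful of records worth a human's attention.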
In what ways can AI be used to improve incident response and crisis management in the event of a security breach, and what role does human oversight play in this process?
Though AI is becoming a vital part of incident response in cybersecurity, it cannot replace a skilled human – the technology acts more as an assistant to the cybersecurity expert. Acting as a guide to the incident responder, AI helps address the clearer, simpler incidents, a process that can be automated with the help of human-assisted training. AI can also find anomalies that are difficult for humans to spot among massive amounts of data. Humans bring business and technical context to the table in a way that AI cannot, and they are vital in training AI to handle the simpler threat cases. For the security analyst, AI can act as an assistant, so long as a human is there to correct its operations if necessary and act as the final decision maker.
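The division of labour described above – AI handles clear-cut cases, humans decide the rest – is essentially a confidence-thresholded triage loop. A minimal sketch, with all names and thresholds hypothetical:

```python
# Hypothetical sketch of AI-assisted incident triage: the model acts alone
# only on high-confidence, clear-cut cases; everything ambiguous is escalated
# to a human analyst, who remains the final decision maker.
AUTO_THRESHOLD = 0.95  # illustrative cut-off, tuned per deployment in practice

def triage(incident: dict) -> str:
    score = incident["model_confidence"]
    verdict = incident["model_verdict"]
    if score >= AUTO_THRESHOLD and verdict == "benign":
        return "auto-close"           # simple, clear case: AI handles it
    if score >= AUTO_THRESHOLD and verdict == "malicious":
        return "auto-contain"         # well-understood attack pattern
    return "escalate-to-analyst"      # ambiguous: human adds business context

print(triage({"model_verdict": "benign", "model_confidence": 0.99}))
print(triage({"model_verdict": "malicious", "model_confidence": 0.72}))
```

Escalated cases can then feed back into training data, which is the "human-assisted training" loop the interview describes.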