Jun 6, 2021

Why the tried and tested prevention approach needs support

Technology
MDR
AI
Cybersecurity
Martin Riley
5 min
Martin Riley, Director of Managed Security Services at Bridewell Consulting, speaks about adapting to a detection and response strategy

As the world increasingly goes digital, bad actors are exploring new avenues to exploit organisations across all sectors, from healthcare to aviation and beyond. From ransomware to phishing, threats are evolving rapidly, which is why simply relying on a preventative approach alone is no longer a sufficient strategy. This is particularly evident in recent cyber attacks such as the 2018 breach of British Airways, for which the airline was fined in 2020. BA’s systems had been compromised for two months before the airline became aware of the issue, allowing both personal and credit card data of customers to be stolen over this period.

Businesses that don’t invest in a detection strategy are unable to effectively discover how and when a breach has happened, and are likely to pay the highest penalties, as reported in IBM’s Cost of a Data Breach Report. Currently, the average time to detect a breach is 280 days, a staggeringly high figure, while the average cost of a data breach stands at $3.86 million. And it doesn’t take much for an attack to escalate into a catastrophe: a single email carrying malware can give attackers full control within a week. For security professionals, adopting a managed detection and response (MDR) strategy provides the appropriate tools to be effectively prepared.

Utilising artificial intelligence in MDR

With cyber attackers able to act quickly, adopting an MDR strategy can ensure that an organisation acts with similar speed to tackle the threat, drastically cutting the potential costs involved with having to deal with a cyber incident. To truly understand why an MDR strategy is a crucial investment, it’s important to look at its key components of processes, technology and people.

At the core of an effective MDR strategy are threat intelligence, threat hunting and penetration testing, plus the deployment and management of security monitoring and incident response. These solutions support the NIST framework, allowing organisations to identify, protect, detect, respond and recover from cyber threats. Underpinning these services is detection and response technology that is increasingly powered by artificial intelligence (AI) and machine learning (ML).

ML, as a key part of AI, is capable of learning and adapting over time by analysing behaviour, and this can also be applied to the cyber security space. Take, for example, a phishing email sent to an organisation by an attacker. With cyber security professionals defining set parameters for what constitutes a risky email, ML can check for key giveaways and either block the email from reaching its recipient or allow it through while flagging it as a potential risk. If the email is allowed through and ultimately proven to be malicious, ML can feed this data back into its model, continuously learning the hallmarks of malicious emails and blocking future threats.
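The loop described above — score an email against analyst-defined giveaways, flag or block it, then feed confirmed verdicts back into the model — can be sketched as a toy linear classifier. All tokens, weights and thresholds here are illustrative assumptions, not taken from any real MDR product.

```python
# Toy sketch of a phishing filter with a feedback loop.
# Weights, tokens and the threshold are purely illustrative.
from collections import defaultdict

class PhishingFilter:
    def __init__(self, threshold=1.0):
        # Analyst-defined starting weights for "risky" giveaways
        self.weights = defaultdict(float, {
            "urgent": 0.6, "verify": 0.5, "password": 0.7, "invoice": 0.4,
        })
        self.threshold = threshold

    def score(self, email_text):
        return sum(self.weights[t] for t in email_text.lower().split())

    def classify(self, email_text):
        # Block outright above the threshold; otherwise deliver but flag
        return "block" if self.score(email_text) >= self.threshold else "flag"

    def feedback(self, email_text, was_malicious, lr=0.3):
        # Feed a confirmed verdict back into the model so similar
        # future emails score higher (malicious) or lower (benign)
        delta = lr if was_malicious else -lr
        for t in set(email_text.lower().split()):
            self.weights[t] += delta

f = PhishingFilter()
mail = "please verify your account invoice"
print(f.classify(mail))              # "flag": delivered but marked risky
f.feedback(mail, was_malicious=True) # analyst confirms it was phishing
print(f.classify(mail))              # "block": similar mail now scores too high
```

A production system would use a trained model over far richer features (headers, links, sender reputation), but the feedback mechanism — confirmed verdicts adjusting the model — works the same way in principle.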

As previous experiences accumulate, technologies such as ML can work out how to respond to new cyber attacks. With attackers in some cases also using ML to improve their rate of success, adopting ML in the organisation is crucial to cover every attack vector an attacker may explore.

When looking specifically at malware, AI has a key role to play here too. In the case of spyware, where an employee’s activities and information are logged and used maliciously by an attacker, AI can become aware of the compromise and share information with other devices on the company network. This provides visibility of the spyware’s footprint and ultimately protection against further damage, by disrupting its existing activity and blocking future instances. Undoubtedly, these emerging technologies are playing a key role in underpinning MDR strategy, but that’s not to say the role of the security professional is made redundant.
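The indicator-sharing idea above can be sketched minimally: once one endpoint detects spyware, its indicator (here, a file hash) is pushed to every other device so the whole fleet blocks it. Class and method names are hypothetical, not from any real EDR product.

```python
# Minimal sketch of fleet-wide indicator-of-compromise sharing.
# Names are illustrative; real EDR platforms use far richer indicators.

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.blocked_hashes = set()

    def can_run(self, file_hash):
        return file_hash not in self.blocked_hashes

class Network:
    def __init__(self, endpoints):
        self.endpoints = endpoints

    def report_compromise(self, file_hash):
        # Propagate the indicator to every device, giving the whole
        # fleet visibility of the malware's footprint
        for ep in self.endpoints:
            ep.blocked_hashes.add(file_hash)

fleet = Network([Endpoint("laptop-1"), Endpoint("laptop-2")])
spyware_hash = "ab12cd34"                 # detected on one device
fleet.report_compromise(spyware_hash)
print(all(not ep.can_run(spyware_hash) for ep in fleet.endpoints))  # True
```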

Striking the balance with a hybrid security operations centre

Many organisations currently utilise a security operations centre (SOC) to manage their cyber security: a specialist team dedicated to 24/7 continuous protective monitoring. This may be outsourced or in-house, and there are benefits and drawbacks to both. Running a SOC in-house can pose difficulties in terms of the skills and people needed, while opting for a completely outsourced SOC may not suit an organisation that wishes to develop its existing in-house team.

Conversely, a hybrid SOC blends the skills and value of in-house cyber security teams and engineers with the knowledge and expertise of an external provider. Integration with external expertise provides access to people who support the processes an organisation lacks the in-house skills for, such as threat hunting, threat intelligence, machine learning, analytics and developing security content, while allowing in-house security professionals to focus on other business projects. For the C-suite, this approach develops employees, enabling them to gain new skills in detection and response by tapping into this external knowledge, while saving the cost of hiring a comprehensive in-house team to tackle emerging threats.

Adopting a modern approach to battle modern threats

The events of the last year, particularly the large-scale shift to remote working and increased utilisation of cloud-based systems, have unfortunately opened up the battlefield for cyber attackers. Now is the time for organisations to fight fire with fire and adopt holistic security technologies and stacks powered by AI and ML, while utilising a hybrid SOC approach with support from the appropriate external provider. Doing so can allow businesses to truly benefit from an MDR strategy that prioritises detection as much as prevention.

Jul 14, 2021

Discord buys Sentropy to fight against hate and abuse online

Technology
Discord
Sentropy
AI
2 min
Sentropy is joining Discord to continue fighting against hate and abuse on the internet

Discord, a popular chat app, has acquired the software company Sentropy to bolster its efforts to combat online abuse and harassment. Sentropy monitors online networks for abuse and harassment, then offers users a way to block problematic people and filter out messages they don’t want to see.

First launched in 2015 and currently boasting 150 million monthly active users, Discord plans to integrate Sentropy’s products into its existing toolkit and will also bring the smaller company’s leadership team aboard. Discord currently uses a “multilevel” approach to moderation, and its Trust and Safety (T&S) team, dedicated to protecting users and shaping content moderation policies, comprised 15% of Discord’s workforce as of May 2020.

“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”

Cleanse platforms of online harassment and abuse

Redgrave elaborated on the company’s natural connection with Discord: “Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”

Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.

“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.
