The key challenges of migrating databases to the cloud
As enterprises continue their digital transformation journeys, part of that change may involve migrating in-house applications, databases and data to the cloud. But while the benefits of cloud are widely understood, migrating a database or an application to the cloud is not always smooth sailing, and there are challenges to overcome during the transition. Here I want to highlight some of the steps enterprises should take to ensure their database migration is successful.
Why migrate to the cloud?
Let’s first look at why organisations might want to migrate a mission critical database to the cloud. As stated above, the benefits of cloud adoption have been widely documented over the last few years: convenience, scalability, flexibility and economic efficiency, all of which lead to improved productivity. During the first wave of cloud adoption, the main benefit was the availability of low-cost IT infrastructure, which suited project-based development, Test & Dev, and DevOps. In the current wave, we are seeing more mission critical applications and a wider range of workloads migrating to the cloud. The net effect is a shift up the stack, with an increased emphasis on application availability and assurance. Alongside this, cloud environments are becoming more manageable, which enables companies to focus on more strategically relevant IT tasks such as defining transformative IT services for their internal and external users.
With the rise of remote working, the ease of access that cloud provides is important. With cloud accessible from a multitude of devices, employees can reach vital information wherever they are located. What we are seeing more often now is organisations migrating mission critical applications into the cloud, including workloads with a variety of requirements. Furthermore, cloud migration enables very effective disaster recovery and high-availability options for geographically distributed businesses.
Don’t migrate half-heartedly
Migrating workloads to the cloud is a strategic decision for an organisation, as it underpins the transformation towards a more agile and business-aligned IT operating model. Because of this, I would avoid migrating half-heartedly. If you are going to migrate to the cloud, you should move over as much as possible. While a database can be migrated to the cloud independently, it usually makes more sense, both technically and strategically, to also migrate the applications it serves and that interact with it, in order to take full advantage of the benefits of cloud.
If you are going to migrate a database to the cloud in isolation, without its associated application, it is crucially important to rely on specific expertise both for the migration and for the service management that follows it. The migration should consider all the interdependencies and resource requirements to guarantee performance, availability and security in the cloud, and adequate service management skills are key to ensuring proper configuration and change management. That is why, for a mission critical database such as one serving an ERP system, a specialised enterprise-class cloud provider that also offers managed services and SLAs should be considered over general-purpose or pure-IaaS cloud alternatives.
Three key reasons why enterprises don’t migrate to the cloud
Enterprises are often cautious about putting their mission critical workloads into the cloud for very specific reasons:
1. Security and compliance. Security and compliance obligations are often so demanding that enterprises hesitate to consider alternatives to their on-premise deployments.
2. Performance degradation. Many enterprises cannot compromise on the performance of mission critical apps.
3. Availability. If users cannot access applications, they are unproductive, workflows are stifled and the business is up in arms.
These are exactly the concerns that enterprise-class cloud providers specialising in mission-critical workloads address, with a combination of purpose-built cloud architectures and cloud management tools as well as stringent operating principles and expertise.
For example, rather than using a general-purpose cloud architecture, enterprises working with an enterprise-class cloud provider that specialises in mission-critical workloads can utilise dedicated individual VRF (Virtual Routing and Forwarding) domains. VRF is a network virtualisation technique used by telco providers, the same one used to isolate MPLS circuits. It isolates routing domains in a multi-tenanted cloud, with many benefits in terms of security, compliance, resource utilisation and optimisation of network topologies and segmentation. This option gives enterprises the freedom of the public cloud while keeping it virtually private and as secure as, or even more secure than, what they have on-premise.
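The VRF idea can be illustrated with a short sketch: each tenant owns a completely separate routing table, so two tenants can reuse the same private address range without any conflict. This is only a conceptual model with hypothetical names; real VRFs are implemented in routers and operating-system kernels, not in application code.

```python
import ipaddress

class VrfDomain:
    """Conceptual model of a per-tenant VRF: an isolated routing table."""

    def __init__(self, name):
        self.name = name
        self.routes = {}  # prefix -> next hop, private to this tenant

    def add_route(self, prefix, next_hop):
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, address):
        # Longest-prefix match, consulting this tenant's table only
        addr = ipaddress.ip_address(address)
        matches = [p for p in self.routes if addr in p]
        if not matches:
            return None
        return self.routes[max(matches, key=lambda p: p.prefixlen)]

# Both tenants use 10.0.0.0/24, yet their routes never interfere.
tenant_a = VrfDomain("tenant-a")
tenant_b = VrfDomain("tenant-b")
tenant_a.add_route("10.0.0.0/24", "gateway-a")
tenant_b.add_route("10.0.0.0/24", "gateway-b")

print(tenant_a.lookup("10.0.0.5"))  # gateway-a
print(tenant_b.lookup("10.0.0.5"))  # gateway-b
```

The same lookup for the same address lands on a different next hop per tenant, which is precisely the isolation property that makes overlapping private addressing schemes safe in a multi-tenanted cloud.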
Make sure you have a plan
With any large scale programme of work like moving a mission critical application into the cloud, the first step is to devise a plan. By doing so, organisations can avoid the worst-case scenarios that may appear both during and after the migration. This plan should cover what features you require in the cloud and what steps are being taken to secure this data. It is also important to truly understand the cloud environment being used, whether Private, Public, Hybrid or Multi-Cloud, as no one cloud does it all, and different use cases require specialised attributes. Therefore, preparing and understanding the use case is another important requirement.
There is another key aspect: the complexity of different application environments. They don’t live in isolation. There is a lot of interdependency between different application modules, and between the applications and the databases. In a private cloud environment, those interdependencies are sustained via sophisticated network topologies built with VLANs and similar constructs. These are the driving principles which govern the network and VLAN configuration in a private environment. When you consider moving legacy applications to the cloud, you don’t want to disrupt those configurations, topologies and related private IP addressing schemes, as the ripple effect could be fatal. Lifting and shifting to a standardised cloud environment just won’t work here.
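One practical sanity check before any lift-and-shift is whether the existing private addressing scheme collides with ranges already allocated in the target cloud. A minimal sketch using Python’s standard `ipaddress` module, with hypothetical subnet lists:

```python
import ipaddress

# Hypothetical examples: on-premise VLAN subnets we want to preserve,
# and ranges already allocated in the target cloud environment.
on_prem_subnets = ["10.10.0.0/16", "192.168.50.0/24"]
cloud_subnets = ["10.10.20.0/24", "172.16.0.0/12"]

def find_conflicts(ours, theirs):
    """Return every pair of subnets whose address ranges overlap."""
    conflicts = []
    for a in map(ipaddress.ip_network, ours):
        for b in map(ipaddress.ip_network, theirs):
            if a.overlaps(b):
                conflicts.append((str(a), str(b)))
    return conflicts

print(find_conflicts(on_prem_subnets, cloud_subnets))
# [('10.10.0.0/16', '10.10.20.0/24')]
```

Any conflict reported here means either re-addressing (the ripple effect described above) or an isolation mechanism such as per-tenant VRF domains, which is why addressing is worth auditing before the migration plan is finalised.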
And finally – cost efficiencies
The biggest cost of mission critical applications and databases is not building them and starting them up, but rather the total cost of running them. Migrating a database to an enterprise‑class specialised cloud gives access to a set of automation tools and operating templates which reduce human intervention and improve speed and accuracy for most of these important tasks, while significantly reducing the associated operating costs.
Roberto Mircoli, EMEA CTO, Virtustream
Fastly's CDN Reportedly to Blame for Global Internet Outage
A huge outage has brought down a number of major websites around the world. Among those affected are gov.uk, Hulu, PayPal, Vimeo, and news outlets such as CNN, The Guardian, The New York Times, BBC, and Financial Times.
It is thought that a glitch at Fastly, a popular CDN provider, is causing the worldwide issue. Fastly has confirmed on its status website that it is facing an outage, but has not specified a reason for the fault, saying only that the problem is not limited to a single data centre and is instead a “global CDN disruption” potentially affecting the company’s entire network.
“We’re currently investigating potential impact to performance with our CDN services,” the firm said.
What is Fastly?
Fastly is a content delivery network (CDN) company that helps users view digital content more quickly. The company also provides security, video delivery, and so-called edge computing services. It uses strategically distributed, high-performance points of presence (POPs) to move data and applications closer to users and deliver up-to-date content quickly.
The firm has proved increasingly popular among leading media websites. After it went public on the New York Stock Exchange in 2019, its shares rose sharply in price, but following today’s outages Fastly’s shares have fallen 5.21% and are currently trading at US$48.06.
What are CDNs?
Content delivery networks (CDNs) are a web of servers that link together to collaborate as a single system. CDNs improve the performance of internet-connected devices by placing these servers as close as possible to the people using those devices in different locations, creating hundreds of points of presence, otherwise known as POPs.
They help minimise delays in loading web page content by reducing the physical distance between the server and the user. This helps users around the world view the same high-quality content without slow loading times.
Without a CDN, content origin servers must respond to every single end-user request. This results in significant traffic to the origin and subsequent load, thereby increasing the chances for origin failure if the traffic spikes are exceedingly high or if the load is persistent.
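The origin-offload effect described above can be sketched in a few lines: an edge POP with a simple cache answers repeat requests itself, so the origin is contacted only once per unique asset. The class and field names below are illustrative, not any real CDN’s API.

```python
class Origin:
    """Stand-in for a content origin server; counts how often it is hit."""

    def __init__(self):
        self.hits = 0

    def fetch(self, path):
        self.hits += 1
        return f"content of {path}"

class EdgePop:
    """A single CDN point of presence with a naive in-memory cache."""

    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def get(self, path):
        if path not in self.cache:          # cache miss: go back to origin
            self.cache[path] = self.origin.fetch(path)
        return self.cache[path]             # cache hit: served at the edge

origin = Origin()
pop = EdgePop(origin)
for _ in range(1000):
    pop.get("/index.html")  # 1,000 user requests...
print(origin.hits)          # ...but the origin is hit only once: 1
```

The flip side of this offload is the risk the Fastly incident exposed: when the edge layer fails, every one of those requests suddenly depends on infrastructure that was sized to see only a fraction of the traffic.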
The Risk of CDNs
Over time, developers have attempted to protect users from the dangers of over-reliance on any one provider through the implementation of load balancing, DDoS (Distributed Denial of Service) protection, web application firewalls, and a myriad of other security features.
Judging by today’s major website outage, these measures are clearly not enough. Evidently, CDNs present a risk factor that is widely underestimated, and one that needs to be rectified with haste. Content delivery networks have become a key part of the global infrastructure, so it is imperative that organisations start to develop risk mitigation strategies to protect companies reliant on this interconnected service from further disruption and disarray.