Are Transfer Appliances Suitable for Cloud-to-Cloud Migrations?
Sending data physically instead of over data links is the basis of a long-standing joke in the telecom community: the IPoAC protocol. IPoAC stands for IP over Avian Carriers, a tongue-in-cheek standard for transmitting network packets by homing pigeon. Attaching a 1 TB SD card to a pigeon's leg may well be a faster way to deliver a video than uploading it over a slow Wi-Fi connection so that a recipient a few kilometers away can download it.
In much the same way, loading a truck with drives holding over 100 petabytes of data and driving it from Washington to New York, a roughly 4-hour trip, yields an effective transfer rate of about 58 terabits per second, roughly 58,000 times faster than the fastest consumer internet links.
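As a quick sanity check on that figure, here is a back-of-envelope calculation in Python (assuming 100 PB of payload and a 4-hour drive, and ignoring loading and unloading time; the exact number depends on how capacity and drive time are rounded):

```python
# Back-of-envelope "truck bandwidth": 100 PB moved in a 4-hour drive,
# ignoring the time needed to load and unload the drives.
payload_bits = 100 * 10**15 * 8   # 100 PB in bits
drive_seconds = 4 * 3600          # 4-hour drive

effective_bps = payload_bits / drive_seconds
print(f"Effective bandwidth: {effective_bps / 1e12:.0f} Tbps")
# -> on the order of 56 Tbps, tens of thousands of times faster
#    than a 1 Gbps consumer connection
```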
The practical sense behind the jokes translates into real life: Amazon Web Services offers AWS Snowmobile, which is literally a truck loaded with hard drives, designed specifically to ship enormous amounts of data. There is also the smaller AWS Snowball with just 80 TB of storage, which can be rented from Amazon for 10 days. Microsoft Azure offers a similar service called Azure Data Box in sizes ranging from 8 TB to 1 PB, while Google Cloud Platform offers its Transfer Appliance in 200 TB and 1 PB capacities.
Using such transfer appliances is a sensible way to migrate large amounts of data into a cloud: a typical enterprise has only a few gigabits per second of network bandwidth, so moving data at this scale over a business internet connection could take years.
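To make the "years" claim concrete, here is a rough estimate of pure transfer time over a fully dedicated link, ignoring protocol overhead (an optimistic assumption):

```python
# Rough transfer-time estimate for a bulk migration over a network link,
# assuming the link is fully dedicated and ignoring protocol overhead.
def transfer_days(data_petabytes: float, link_gbps: float) -> float:
    bits = data_petabytes * 10**15 * 8
    return bits / (link_gbps * 10**9) / 86400

print(f"1 PB over 1 Gbps:   {transfer_days(1, 1):.0f} days")           # ~93 days
print(f"100 PB over 1 Gbps: {transfer_days(100, 1) / 365:.1f} years")  # ~25 years
```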
But are transfer appliances also a viable option for cloud-to-cloud data migration? Most businesses faced with the need to migrate petabytes from one cloud to another discover that they are caught in a lock-in trap that some cloud providers have set for them: the egress fees involved can be as high as $80,000 per petabyte migrated. Even though Flexify.IO can cut migration costs by more than half thanks to its custom-built and maintained infrastructure, some data owners still consider physically shipping the data on transfer appliances instead.
But is that a viable option? The process would include the following steps:
· Ordering a transfer appliance, such as an AWS Snowball, loaded with data exported from the source cloud;
· Ordering an empty transfer appliance, such as an Azure Data Box, from the destination cloud provider;
· Copying the data from one appliance to the other locally (a minimal sketch of this step follows the list);
· Shipping both appliances back to their respective providers.
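For illustration, here is a minimal sketch of the local copy step, assuming both appliances are already mounted as ordinary local filesystems (the mount points are hypothetical; in practice you would more likely rely on the vendors' client tools or a bulk copier such as rclone):

```python
# Minimal sketch of the appliance-to-appliance copy step, assuming both
# appliances are mounted as local filesystems. Mount points are hypothetical.
import hashlib
import shutil
from pathlib import Path

SOURCE_MOUNT = Path("/mnt/snowball")   # hypothetical source mount point
DEST_MOUNT = Path("/mnt/databox")      # hypothetical destination mount point

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_and_verify(src_root: Path, dst_root: Path) -> None:
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        # Checksum every file: silent corruption at this scale would
        # otherwise go unnoticed until the import at the destination cloud.
        if sha256(src) != sha256(dst):
            raise IOError(f"Checksum mismatch for {src}")

if __name__ == "__main__":
    copy_and_verify(SOURCE_MOUNT, DEST_MOUNT)
```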
Managing such a process is a lot of hassle: deliveries have to be synchronized, and large amounts of data have to be copied reliably between appliances. Counting the shipping and the three copy operations, first at the source cloud provider, then locally, then at the destination cloud provider, the whole process can take well over a week and leaves plenty of room for blunders.
Relying on transfer appliances is a far more complicated process than using a reliable migration service like Flexify.IO to copy the data over digital links. But is it at least faster? It certainly is not: Flexify.IO runs in the cloud, on scalable infrastructure and backbone data links, and can migrate data at 40 Gbps or more. At such speeds, 1 PB takes only about 3 days to migrate, far less time than shipping and managing physical appliances would require.
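The 3-day figure follows from the same arithmetic as before (pure transfer time, ignoring overhead):

```python
# 1 PB at a sustained 40 Gbps, ignoring protocol overhead.
seconds = (1 * 10**15 * 8) / (40 * 10**9)
print(f"{seconds / 86400:.1f} days")  # ~2.3 days of raw transfer,
                                      # about 3 days with overhead
```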
But is it cheaper? Not at all. Clients have to pay Amazon at least $0.03 per GB written to their appliance, plus rental fees for both appliances and shipping, not counting the time spent copying the data and managing the process. And that is a scenario that does not even involve shipping the appliances between countries or continents.
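To put the per-gigabyte fee alone into perspective, here is the arithmetic for exporting 1 PB (appliance rental, shipping, and staff time come on top of this):

```python
# Per-GB export fee for writing 1 PB to an appliance, before rental,
# shipping, and staff time. The $0.03/GB figure is the one quoted above.
per_gb_fee = 0.03
petabyte_in_gb = 1_000_000
print(f"${per_gb_fee * petabyte_in_gb:,.0f}")  # -> $30,000
```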
Physical appliances are archaic and no longer a good fit for cloud-to-cloud migration: they are more cumbersome, slower, and more expensive than direct transfers over the dedicated data links that services such as Flexify.IO provide.
But if you still choose to use a physical appliance for some reason, Flexify.IO can certainly help synchronize the changes made during the week or two after the data was originally copied from the source cloud.