
Tags

  • public cloud
  • private cloud
  • hybrid cloud
  • enterprise computing
  • security
  • storage
  • kubernetes
  • networking

59. All About the Cloud

Hosted by Robert Blumen, with guest Giorgio Regni.

When we talk about "the cloud," there's more than just the Internet: there's public cloud, private cloud, hybrid cloud, and even multi-cloud. As with any architectural decision, each of these distinct locations requires careful consideration when you're building an application. Giorgio Regni is the co-founder and CTO of Scality, and he's going to demystify the different types of clouds and explain why you might choose one over the other.


Show notes

Giorgio Regni, the co-founder and CTO of Scality, and Robert Blumen, a DevOps engineer at Salesforce, cover the basics of on-prem, public cloud, private cloud, and multi-cloud. The discussion covers business drivers, use cases, division of workloads, architecture, and networking concerns present in each of these categories.

For enterprises, there is no "one cloud fits all" approach to building applications. The public cloud is more than an experimental platform for non-critical applications and unproven products, nor is it inevitable that all computing will migrate to a final home in the public cloud. Instead, enterprises are arriving at the right mix of on-premises private clouds, increasingly built on containers and exposed through APIs, and one or more public clouds. The motivations for selecting a cloud type can be a vertical best-of-breed choice based on the offerings of a specific cloud provider, handling peak capacity at lower cost, off-site disaster recovery, or choosing a mix of vendors to avoid lock-in.

This mixed model works conceptually, but it also introduces issues around security, privacy, and the ability of enterprises to meet service level agreements. Increasingly, companies are grappling with integrating authentication solutions for these disparate locations. As a cloud storage provider, Giorgio provides his insights on how companies are building out these infrastructures and the tools they use to solve their problems.

Transcript

Robert: Welcome to the Code[ish] podcast. This is Robert Blumen. I'm a DevOps engineer at Salesforce. I have with me today Giorgio Regni. Giorgio is the co-founder and CTO of Scality, a company in the software-defined storage space. Prior to Scality, Giorgio was co-founder and VP of Engineering at Bizanga, where he developed anti-abuse software. Welcome Giorgio.

Giorgio: Thank you Robert.

Robert: We're going to be talking about hybrid cloud and multi-cloud. Let's start with some definitions. What is public cloud?

Giorgio: So public cloud is infrastructure that's available online, but it's not in your data center and it's not hosted by you. As an enterprise or a user, you can rent it and pay as you go. That's what we call public cloud.

Robert: And something that's not on the cloud at all, where is it?

Giorgio: So customers may have their own servers, but typically they don't own the data centers. They're going to rent space in what we call a colo, a co-location facility, which is still a data center. But since these are their own machines, or machines they lease directly, we're going to call it something other than public cloud.

Robert: And so companies can own their own hardware, or rent or lease space, and they've been doing that for a long time. Could you distinguish that from private cloud?

Giorgio: Yes, we can. The way they're going to use the system dictates whether it's just a private deployment or a private cloud. If they deploy a system that is meant to be used on demand by other departments in the company or by some other tenant, and resources are shared between different entities, I think we'll call it a private cloud. If it's a single service, like an email system or a very specific point application, we're not going to call it a cloud. It's a cloud because it's shared between multiple users.

Robert: What kind of software is necessary to transform a bunch of servers that you own in your data center into a private cloud?

Giorgio: So I see multiple scenarios. It could be virtualization, so you could say that if you deploy VMware at scale, you have your own private cloud. It can also be something like OpenStack, which you would deploy on bare metal to provide virtual machines to your internal customers. Or it could be container based, with something like Kubernetes, for example.

Robert: What these software layers do is enable resources on the network to be requested or provisioned with an API and shared, without going through, let's say, a department where you physically request somebody and they go and set up the server and give you the keys to it.

Giorgio: That's correct. And it also manages resources. To give an example, we have our own OpenStack cloud that we use for testing at Scality, and there are always 2,000 VMs. But we don't have the CPU capacity to run 2,000 VMs at once. So it manages independently who uses which resources and which VMs are off or on. That's part of how you manage your cloud, and it's part of the cloud software.

Robert: So you mentioned OpenStack a couple of times. For listeners not familiar with it, briefly describe it.

Giorgio: So OpenStack is a set of software that allows someone to build their own compute cloud that would be similar to a service like AWS or Azure, but deployed in a private fashion.
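
To make the "provisioned with an API" point concrete, here is a minimal sketch of requesting a VM from an OpenStack private cloud using the openstacksdk Python library. The cloud profile, image, flavor, and network names are hypothetical placeholders, not anything from the conversation.

```python
# Minimal sketch: provisioning a VM from an OpenStack private cloud via its API.
# The cloud profile, image, flavor, and network names below are hypothetical.
import openstack

# Reads credentials for the "private-cloud" profile from clouds.yaml.
conn = openstack.connect(cloud="private-cloud")

# Look up the pieces the server needs (names are placeholders).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("internal-net")

# Request the server; the cloud's scheduler decides where it actually runs.
server = conn.compute.create_server(
    name="test-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the VM is ACTIVE, then print its details.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

No ticket to another department, no waiting for hardware: the request is an API call against shared capacity, which is the property that makes it a cloud rather than a plain server room.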

Robert: Okay. So we've talked now about private and public cloud. Let's get into hybrid cloud.

Giorgio: So we see hybrid cloud becoming more and more popular, and that's a combination of using the public cloud and private infrastructure. When we first saw it, it started as a way to tier data into another system that could be in the cloud. So I might keep something like a hundred terabytes in my data center, and all my backups and archives and all the extra capacity could go in the cloud. That's how we saw it at the beginning. Now we see more applications that are actually designed to be hybrid in the way they function, where they benefit from the cloud and use cloud resources on demand.

Robert: You're talking about a couple of different use cases. One of them being: we do one set of functions on our own premises, and then we do something different, like backup, on the public cloud. In the second case you mentioned, the same application is using both private and public cloud. Could you give some more examples of that second use case?

Giorgio: Yep, I can give you some examples. One is major companies that produce their own content. They may have a private cloud for storing all of their assets, and that's petabytes of video and audio and everything they want to keep. They may need to send data online into the public cloud to be able to share it, and there are built-in features so that we can talk to S3 or talk to Azure Blob Storage to share the data, and then they can also distribute it to CDNs, content delivery networks, in the cloud. So that's one use case. There's also, in media as well, a use case of doing some processing in the cloud, for example transcoding. With all the devices that exist, iPads, iPhones, different resolutions, different Android devices, you have to transcode your video to be compatible and performant with multiple devices.

Giorgio: You may want to do that in the cloud, so you send your whole video files, you do all the transcoding in the cloud, and then you retrieve the different files that result from the transcoding.
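
As a rough illustration of that tiering and sharing pattern, the sketch below copies an asset from local storage into a public cloud object store over the S3 API with boto3, and pulls a transcoded rendition back. The bucket, keys, and file paths are made up for the example; Scality's products expose an S3-compatible API, but this is not their code.

```python
# Sketch: pushing a locally stored media asset into public cloud object storage
# over the S3 API, then fetching a cloud-produced rendition back on-prem.
# Bucket name, keys, and paths are hypothetical.
import boto3

s3 = boto3.client("s3")  # uses credentials from the environment/instance profile

# Upload the source video so cloud transcoding jobs (or a CDN origin) can reach it.
s3.upload_file(
    Filename="/data/assets/master/show-episode-01.mov",  # on-prem copy
    Bucket="example-media-shared",                        # public cloud bucket
    Key="masters/show-episode-01.mov",
)

# Later, pull back a transcoded rendition produced in the cloud.
s3.download_file(
    Bucket="example-media-shared",
    Key="renditions/show-episode-01-720p.mp4",
    Filename="/data/assets/renditions/show-episode-01-720p.mp4",
)
```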

Robert: What would be the thought process of why you would want to do the transcoding in the public cloud?

Giorgio: Because it's a very bursty kind of workload. You don't have to do it all the time, and you may not want to invest in the type of servers that can do it quickly. For example, for MP4 there's a lot of hardware acceleration that's possible, so you may not want to invest in that hardware if you're only going to use it from time to time, when you can rent the compute resources in the cloud instead.

Robert: You're taking advantage of the cloud's pay-as-you-go pricing model, rather than having servers that sit around at zero CPU for 75% of the time.

Giorgio: We see another, similar use case with analytics, where you may not want to manage your own Hadoop or Spark system, which are two big data products. So you may want to export some of your logs, send them into the cloud, and use services in the cloud that are dedicated to doing analysis, as opposed to maintaining or deploying your own system.

Robert: Let's go back to this use case where the same application is using private and public cloud. It's doing one task on private cloud and it's offloading some other task, like transcoding, onto public cloud. A reason could be bursting. Are you seeing enterprises divide up the workload, where they run a base-load workload privately and they offload the variable part of the workload onto public cloud?

Giorgio: Yes, that's definitely something that we see, where you size a private system for the average load and you use the cloud as a way to burst when you need it.

Robert: Let's focus on this case where you have a bursty workload. I could see an architecture where you know that it never goes below a certain minimum workload, so you would host that on private cloud, and then you would only run the variable part above that on public cloud. Are there cases where you've seen that?

Giorgio: No, we don't see cases like that, where the same workload has a private piece and a piece in the cloud. We don't really see that. In the case of transcoding, it's a completely different workload, the transcoding, from what would be done privately.

Robert: Okay. Why wouldn't they just set up their own Hadoop cluster? Is it due to the variable workload, or the skill set, or what? What's the thinking there?

Giorgio: There's a big skill-set gap in being able to properly run a Hadoop cluster and make sure the data is safe and make sure you get the performance that you need. Whereas if you start it in the cloud, you're going to benefit from the expertise of the cloud provider. It's going to be a much more transparent process, and you don't have to worry about hardware or worry about scale. It will auto-scale for you. That's a big point. And also, the type of hardware you will need is different from a typical server, so you may not want to have to manage multiple different types of servers and all the uniqueness of how to manage a Hadoop cluster versus what you're used to managing.

Robert: A lot of businesses need to ensure they continue to operate even if they lose a data center. Is using the public cloud as your backup data center a popular use case?

Giorgio: Yes, I think it's becoming a popular use case. It also means that the application has to be able to run in the cloud, otherwise it's limited. We call it DR, disaster recovery, when it's possible to resume service from the cloud, because only storing backups in the cloud is not the same thing. If you want to restore service, you need to retrieve your backups. Now we see customers who actually use applications that can also run in the cloud. They do backups locally, the backups are copied into the cloud, and then in the cloud they have templates so that they can start the virtual machines and the application in the cloud in case of any interruption of service in the main data center.

Robert: What does that mean? What is a template?

Giorgio: So it depends on which compute platform they're going to use. If it's on, for example, EKS, Kubernetes, it's going to be Kubernetes configuration. If it's on virtual machines, it could be Terraform templates. Either way, it's a way to quickly start the application in the cloud.

Robert: Okay, I've got it. So you may need a hundred servers to run your application, and six different databases and three message queues. You don't have to provision them until you realize that you need to fail over. So then you'd run your Terraform script and it would come up. Is that what you're describing?

Giorgio: Yes, absolutely. So you don't pay for any compute resource until you start them in the cloud.
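
A minimal sketch of that failover step, assuming the DR environment is described in a Terraform configuration kept alongside the backups. The working directory and variable name are invented for the example; the Terraform CLI commands themselves (init, apply -auto-approve) are standard.

```python
# Sketch: bringing up a pre-written DR environment only when failover is declared.
# The directory path and the restore_snapshot variable are hypothetical.
import subprocess

DR_CONFIG_DIR = "/opt/dr/terraform"  # pre-written templates for the cloud copy

def fail_over_to_cloud(backup_snapshot_id: str) -> None:
    """Provision the cloud environment described by the Terraform templates."""
    subprocess.run(["terraform", "init"], cwd=DR_CONFIG_DIR, check=True)
    subprocess.run(
        [
            "terraform", "apply", "-auto-approve",
            f"-var=restore_snapshot={backup_snapshot_id}",  # hypothetical variable
        ],
        cwd=DR_CONFIG_DIR,
        check=True,
    )

if __name__ == "__main__":
    # Nothing is paid for until this actually runs during an outage.
    fail_over_to_cloud("snap-20240101")
```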

Robert: And part of what I think you're talking about there is the variability between the two environments. You have essentially the same application, but all the endpoints are going to be a bit different, the IP addresses and maybe the DNS, and so you need some kind of configuration that abstracts out the difference between your data center and the cloud you're running in.

Giorgio: Yes. And in terms of the service itself, it's maybe accessed via a domain name, for example. So you also need a way to switch from sending the data to your own data center to pointing at the new instance in the cloud. It's going to take maybe hours before you can resume the service in the cloud, which is already much better than having to download your backups, deploy new physical servers, and reinstall them, where we are talking about weeks in the case of a physical system.
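
For the DNS switch Giorgio mentions, one possible shape is sketched below with boto3 against Route 53. The hosted zone ID, record name, and failover target are placeholders, and any DNS provider with an API would serve the same role.

```python
# Sketch: repointing the service's DNS name from the on-prem data center to the
# DR instance in the cloud. Zone ID, hostname, and IP address are hypothetical.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Fail over app.example.com to the cloud DR environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 60,  # short TTL so clients pick up the change quickly
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # cloud endpoint
            },
        }],
    },
)
```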

Robert: When public cloud started, it was pretty much a plain Linux server. Now you get these much higher value-added workloads, like Hadoop clusters and Kubernetes clusters. As these more complex services are offered, is the workload calculation shifting?

Giorgio: So we see more and more services in the cloud, and the catalogs of the cloud service providers keep growing in terms of very targeted, pinpoint services. That's very interesting. In terms of what our customers do, we don't see them using that many services. It's very targeted, on a per-use-case basis.

Robert: So when clouds started, there was an idea that no one would put anything important on the cloud. And then I've seen a certain amount of hype that everything will go to the cloud. Where are enterprises in terms of private, public, and hybrid? Is there a trend, or have we settled into kind of an equilibrium where it's a mix and it's always going to be that way?

Giorgio: So what we see is, it depends on the enterprise. Each enterprise has their own philosophy on that one, and we don't try to change their mind. We do a storage platform for on-prem, so we're looking at people who are going to store on-prem. We see more and more people that are looking to do private storage for their sensitive data. That's not to say it doesn't exist, customers who want to store private, important data in the cloud, but we see more and more people wanting to take the data and put it back on-prem, for security reasons for one, but also for cost reasons.

Robert: So walk us through: what does a situation look like where it's cheaper on-prem?

Giorgio: So typically, if you need to access your data often, it's going to be cheaper on-prem, because the cloud charges you for capacity but also for access, and the access charges can really add up quickly. So if you plan to really use your data, it makes more sense to store it on-prem.
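
To see how access charges can dominate, here is a back-of-the-envelope sketch. All the per-GB rates are purely illustrative placeholders, not real prices from any provider.

```python
# Back-of-the-envelope comparison of cloud vs. on-prem storage cost for a
# frequently read data set. All rates below are illustrative placeholders only.
STORED_TB = 100                 # data set size
READ_MULTIPLE = 3               # whole data set read ~3x per month

CLOUD_STORAGE_PER_GB = 0.02     # $/GB-month (hypothetical)
CLOUD_EGRESS_PER_GB = 0.09      # $/GB transferred out (hypothetical)
ONPREM_PER_GB = 0.01            # $/GB-month amortized hardware + ops (hypothetical)

gb = STORED_TB * 1000
cloud = gb * CLOUD_STORAGE_PER_GB + gb * READ_MULTIPLE * CLOUD_EGRESS_PER_GB
onprem = gb * ONPREM_PER_GB     # reads are essentially free once you own the gear

print(f"cloud:   ${cloud:,.0f}/month")   # storage plus access charges
print(f"on-prem: ${onprem:,.0f}/month")  # capacity cost only
```

With made-up numbers like these, the access (egress) charges are several times the storage charge itself, which is the dynamic Giorgio describes for data that is read often.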

Robert: Okay. So far we've gone through private cloud, public cloud, and hybrid cloud. I'd like to introduce another buzzword here: multi-cloud. How is that different from what we've been talking about so far?

Giorgio: So the way that we define multi-cloud is when you're using more than one cloud, of course. But we see that in the context of multiple applications. We don't really see one application using multiple clouds. We see an enterprise that has multiple applications making use of both private resources and cloud resources, and since each application wants to use the best services, wherever they are, you end up using more than one cloud.

Robert: What you're describing, I would call a best-of-breed strategy, where you're saying Amazon has a great Hadoop cluster as a service, Google has a great Kubernetes, and we'll use the best-of-breed provider. And I think you also said that what you're not seeing is: let's hedge our bets by putting some data in Amazon's S3 and some data in Google's blob storage, so that we can have these two cloud providers compete and we're not locked in. You're saying that's not really a driver.

Giorgio: That's not really a driver. It is sometimes, but it's not a primary driver. I would say that multi-cloud is more being driven by different applications making the best-of-breed choice of which cloud to use.

Robert: Would it be fair to say, then, that competition between providers is moving toward competition for who can provide the best managed services?

Giorgio: I agree. A lot of the competition now between cloud providers is about the quality and the different apps they can run for you in the cloud.

Robert: Companies that are in some kind of a SaaS business or provide an API and store data, does the customer care which cloud the data is on?

Giorgio: So a lot of the time the customer cares about the geography of where that data is stored, which country it is stored in, more than which provider is actually giving you the storage capacity. In Europe we see that a lot: a country like Germany would like all the data to stay in Germany, for example. But the main thing is to know where the data is in terms of location.

Robert: Are you seeing a lot of use of Terraform to manage these multiple backends?

Giorgio: So we see that in terms of compute clusters, using Terraform to deploy the same service across different clouds. We see that a lot. We also see Kubernetes being used as well, deploying what they call Helm charts directly. And we see that the market is going toward software like Kubernetes to automate deployment in the future.
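
As a sketch of that "same service, several clouds" pattern, the snippet below drives Helm from Python to install one chart into clusters registered under different kubeconfig contexts. The release, chart, context names, and values key are hypothetical; helm upgrade --install and the --kube-context flag are standard Helm CLI.

```python
# Sketch: deploying the same Helm chart to clusters in different clouds by
# looping over kubeconfig contexts. Context, release, chart, and values names
# are hypothetical placeholders.
import subprocess

CONTEXTS = ["onprem-core", "aws-eks-burst", "gcp-gke-test"]  # hypothetical contexts

for ctx in CONTEXTS:
    subprocess.run(
        [
            "helm", "upgrade", "--install",
            "my-service",                   # release name (placeholder)
            "./charts/my-service",          # chart path (placeholder)
            "--kube-context", ctx,          # which cluster/cloud to target
            "--set", f"global.site={ctx}",  # per-site value (hypothetical key)
        ],
        check=True,
    )
```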

Robert: Okay. So how are enterprises using Kubernetes in either hybrid or multi-cloud?

Giorgio: So, at this stage for us, we have a product called Zenko that runs on top of Kubernetes. When we deploy it in production, we actually ship our own Kubernetes distribution, for support and quality reasons. But when the customer wants to do a POC, or a dev test, and try our product, they'll use one of the cloud-hosted Kubernetes services, so EKS in Amazon, or even GKE. They're not going to start their own cluster, but use one of the ones in the cloud to do their own test.

Robert: If I understand, you have the ability to do testing on public cloud, but you'll operate your own cluster for production.

Giorgio: Yes.

Robert: And is this taking advantage of the more variable load on the test environment, so that you don't have to maintain a full test network and have it sit idle?

Giorgio: Yeah, so you don't have to maintain your own test network, you don't have to deploy any new servers, and if you want to wrap up the test, it's just an API call, and you can start another test after that. So that makes it very easy to do testing.
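
That "just an API call" point might look roughly like this with boto3 against EKS. The role ARN and subnet IDs are placeholders, a real cluster also needs node groups, and creation takes several minutes to become active; waiters and error handling are omitted to keep the shape visible.

```python
# Sketch: creating and tearing down a throwaway test cluster via the EKS API.
# Role ARN and subnet IDs are hypothetical.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

def start_test_cluster(name: str) -> None:
    """Spin up a managed control plane for a short-lived test run."""
    eks.create_cluster(
        name=name,
        roleArn="arn:aws:iam::123456789012:role/eks-test-cluster-role",       # placeholder
        resourcesVpcConfig={"subnetIds": ["subnet-aaa111", "subnet-bbb222"]},  # placeholder
    )

def wrap_up_test_cluster(name: str) -> None:
    # One API call and the test environment (and its cost) goes away.
    eks.delete_cluster(name=name)
```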

Robert: So when you have this network environment that crosses organizational boundaries: within the corporate network, you may have LDAP as your identity provider. How is identity managed in hybrid or multi-cloud?

Giorgio: So, that's a very critical question. We've been asked a lot of times how to make sure that the same identities are being used whether you're on-prem or in the cloud. One solution that seems to be used a lot is to have everything, whether it's public cloud or private cloud, go to a central authentication system. Typically, in the case of Microsoft Active Directory, you would configure your cloud identity to check in with your local Active Directory, and you would configure software like Scality RING for storage, or Scality Zenko for multi-cloud, to also use the same Active Directory servers. By doing that, you now have the same identity across different local and distant public services.
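
A minimal sketch of what "checking in with the local Active Directory" can look like from an application's point of view, using the ldap3 Python library. The server address, domain, and account are invented for the example, and a production setup would add TLS validation, a service account, and group lookups.

```python
# Sketch: validating a user against the corporate Active Directory over LDAP,
# so on-prem and cloud-hosted components share one identity source.
# Hostname, domain, and credentials below are hypothetical.
from ldap3 import Server, Connection, NTLM

def authenticate(username: str, password: str) -> bool:
    server = Server("ldaps://ad.corp.example.com")  # central AD (placeholder)
    conn = Connection(
        server,
        user=f"CORP\\{username}",                   # DOMAIN\user form
        password=password,
        authentication=NTLM,
    )
    ok = conn.bind()   # True if AD accepted the credentials
    conn.unbind()
    return ok
```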

Robert: So we did a show about the idea of an event-based architecture for data integration, with something like Kafka as the centerpiece. In an architecture like that, you have applications, some of which are in your private cloud and some of which are in one or more public clouds. Where does that central service sit? I guess you could say the same thing about logging services. Where do these central aggregating services sit?

Giorgio: So, I can talk about our use case. I can't really comment in general, but in our case, for storage, the customer always decides which is their main, core data center. It could be in the cloud or it could be on-prem, but there's a core site, and then the other sites are considered more edge locations. So in the case of a private deployment of multiple petabytes of data, let's call it the core storage. Around that storage we do the main logging, we'll have the Active Directory for authentication, and we also have Kafka queues for events, and they're going to be close to the core site.

Robert: You're trying to minimize latency or the cost of moving data around. You put the storage close to where most of the traffic is?

Giorgio: Yes. And a lot of these services, like Kafka and others, are not meant to be geo-distributed or to deal with large latencies. So it's better to have a main site, and everybody subscribes to the main site.
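
As a sketch of that hub-and-spoke arrangement, here is an edge application publishing events back to the Kafka cluster at the core site, using the kafka-python client. The broker address and topic are hypothetical.

```python
# Sketch: an edge-site application sending events to the Kafka cluster that
# lives next to the core storage site. Broker address and topic are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.core-site.example.com:9092",   # core-site brokers
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Edge locations only publish; consumers and aggregation run at the core.
producer.send("storage-events", {"site": "edge-eu-1", "op": "PUT", "bytes": 1048576})
producer.flush()  # make sure the event actually left before exiting
```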

Robert: Great. Well Giorgio, it's been a pleasure speaking to you. Thank you very much for appearing on Code[ish].

Giorgio: Thank you, Robert. It was a pleasure to talk with you.

About Code[ish]

A podcast brought to you by the developer advocate team at Heroku, exploring code, technology, tools, tips, and the life of the developer.

Hosted by


Robert Blumen

Lead DevOps Engineer, Salesforce

Robert Blumen is a DevOps engineer at Salesforce and podcast host for Code[ish] and for Software Engineering Radio.

With guests


Giorgio Regni

CTO, Scality

Giorgio Regni is the co-founder and CTO of Scality, a company in the software-defined storage space.
