57. Discussing Docker Containers and Kubernetes with a Docker Captain
Hosted by Mike Mondragon, with guest Bret Fisher.
Docker has emerged as an extraordinarily popular way to safely and predictably deploy applications. But because of its rapid evolution, changing business targets, and technical composition, it can still be a bit daunting to understand when to use it versus other container runtimes, let alone the task of managing it via Kubernetes and other orchestrators. This episode takes a deeper look at the components of Docker, with a strong emphasis on developer productivity in smaller organizations, not massive enterprises. Bret Fisher is our guest, and as a certified Docker expert—a Captain, actually—his job (and passion) is to share his technical expertise.
Mike Mondragon interviews Bret Fisher, who works as a freelance DevOps/sysadmin consultant and who also holds the designation of Docker Captain. Docker Captain is a distinction that Docker awards to select members of the community who are Docker experts and are passionate about sharing their Docker knowledge with others. To that end, Bret walks us through the history of how he became involved in Docker, and indeed, the history of Docker itself: the problems it tried to solve, and the way the codebase evolved to provide those solutions.
Much of the conversation centers around the confusing terminology and processes present in the Docker ecosystem: when to use Docker Compose, the differences between running Docker locally and in production, and when to consider adopting Kubernetes. There are also various container runtimes which developers can make use of, and Bret touches on the characteristics of each as well.
Bret looks towards the future of Docker, the company, as it recently sold off a portion of its enterprise-focused business. Docker is returning to its original intent: providing developers with better tooling to deploy and isolate their applications. He urges caution to teams ready to move wholeheartedly to Docker, encouraging them to focus on solutions that match their own problems, not those of immense enterprise corporations.
Mike: Hello, and welcome to the Code[ish] Podcast. My name is Mike Mondragon, and I'll be your host for this episode of Code[ish]. Today we are speaking with Bret Fisher, who is a Docker Captain, about Docker containers. I know Bret from his DevOps and Docker Talk podcast. Bret, can you introduce yourself?
Bret: Hi there, Mike. My name is Bret. I am a Docker Captain, a DevOpsish sysadmin. I basically do ops for a living as a freelance consultant, and then I talk on the internet about Docker and Kubernetes constantly.
Mike: I'd love to hear about what it's like to be a Docker Captain, and what those responsibilities are.
Bret: Yeah, that's a great question, because Docker Captains have been around for three, maybe four years. I think it was 2016 when they first announced it. There are other similar programs in the industry, but there are, I guess, three rules to being a Docker Captain: you can't work for Docker; you need to be an expert in some Docker or container tooling, or not necessarily an expert, but someone who knows it well; and you share that knowledge with others. There are so many tools now, you can't possibly keep track of them all. It used to be just Docker, but now there's so much.
Bret: That's how I started. I was a meetup organizer, I was speaking at conferences, and I kept bugging Docker for swag that I wanted to give out at conferences because I kept talking about their tools. I was starting to blog a little bit about it, and they said, "Hey, you're doing all of these things, and we're ..." Originally, they had a blogging and speaking program, but eventually they created a larger one known as Docker Captains. There are about 60 or 70 of us around the world.
Mike: One thing, even now, is that I sometimes still conceptualize Docker as a monolith, but it has actually been broken up into components. It might be useful to have you talk about what Docker is today, for new developers or even old developers who haven't really been paying attention to that stuff.
Bret: Yeah, and we've certainly seen lots of movement in the industry. So even if you were keeping up to date two years ago, it can be tough to figure out where we are now and how we got here. The short story is that essentially six or seven years ago, a company much like yourselves was creating an online platform called dotCloud. At some point, they decided that they were going to open source the way they built their cloud.
Bret: A lot of that was automation around some fundamental Linux capabilities, mainly cgroups and namespaces, two parts of the kernel that allow you to isolate programs and run multiple copies of those programs all on the same host without them really interfering or being able to see each other. So that created great security isolation. Then they invented another idea, where they said, "Let's take an app and all of its dependencies, and shove them all into one location on the hard drive."
Bret: "Then we'll tarball all of that up and ship it around the internet using HTTPS file storage, and we'll call that an image of your application. You can move that around everywhere you want, and we'll create a common package format for any application on relatively any platform." Of course, at the time it was just Linux; now it's Windows and mainframe and IoT and everywhere. So they created these two parts, right? The idea of what a modern container is, and then the packaging format that we now call a container image, for moving apps around, versioning those apps, having different iterations of those apps.
Bret: They brought that all together into a project they called Docker. For years, they just iterated on that. They just made it better and more user friendly so that more people could use it. They focused very much on the developer workflow. Their claim to fame, besides inventing the ideas of what a modern container looks like and the packaging format, was keeping it simple enough that a developer could use it. Because traditionally, me being a sysadmin, we love infrastructure, so we would naturally create a bunch of infrastructure to run your apps, and that would unnecessarily complicate things.
Bret: So Docker's motivation was, "Hey, let's reduce the complexity, let's make this easier to update applications, to deploy applications, keep them isolated, run 10 versions of the same app on a single server or 100 versions of that app across 10 servers." Right? So they did really well with that. Then somewhere around 2015, 2016, everybody started running Docker everywhere. And when I say everyone, I mean everyone that's on the bleeding edge of tech. The rest of the world was ignoring it, right?
Bret: The bleeding edge of tech, the cloud companies, Netflix, the Googles, and I'm sure that Heroku was getting into it too, had so many servers running Docker that they felt like, "Okay, we have this really easy way to package up and move these apps around, but we want to manage 100 servers as easily as we manage one. So let's figure out a way to orchestrate these servers into a cluster, so that we can use command lines similar to the ones we use for Docker to deploy 100 copies of that app on 100 servers." That kind of stuff. That was where the orchestrator was born. Now, nowadays-
Mike: This is Swarm?
Bret: Yeah, this is Swarm and Kubernetes, and they both came out. Kubernetes was a little bit earlier; it was actually announced at DockerCon. But we had all these other ones too. We had Mesosphere at the time, and Marathon was part of it. There were three or four others. AWS came out with ECS, which they still run (Elastic Container Service), and the whole goal of all of these things was simply to run Docker on many different servers, but have a single command line interface to deploy and manage apps across all of those servers.
Bret: That's right around the time when Kelsey Hightower called it the container wars, when we had all these different ideas of what it meant to run containers across a bunch of servers. Ironically, at the time, Docker had nailed it with the runtime, and the runtime is what we call the Docker engine. It sits on a server and just launches your containers, and then manages the container, stops the container, downloads the image, uploads the image. That's the engine, what we now call the runtime. That hasn't changed a whole heck of a lot in the last seven years.
Bret: We've had some other ideas about it: containerd, CRI-O, Rocket, some other projects. But essentially, at this point, they all do fundamentally the same thing. The good news, in case you haven't checked this stuff out for a couple of years, is that the runtime hasn't changed much, and the Docker command line is still very much what it used to be. It just has more bells and whistles, and containers are fundamentally still the same idea that we had from Docker 1.0 back in 2014.
Mike: Yeah. My background is as a full stack developer, and so over the course of my career, Linux has always been there. Back in the day, you'd have to know how to admin a Linux machine as your work environment, and then when VirtualBox and Vagrant came around, that was great for me, because then I could have the components of my full stack there without having to manage them from the command line. Or rather, I could manage those components from one source instead of invoking a database or search engine or whatnot from different shells on my desktop.
Mike: The part of Docker that I use quite a bit right now for local development is Docker Compose. It's the same thing: I'm working with different components, and usually they are orchestrated together to make the full app. That's how I have been using Docker, and it's been really convenient for me as a developer.
Bret: Yeah. I'm a huge fan of Docker Compose and I still use it daily. It hasn't gone away. For those that haven't looked at Docker Compose, it's a tool that comes with Docker (it's a separate binary), and it's there to help you forget all of the stuff in the docker run command. It's there so that you don't have to be an expert on all the different command lines you have to run just to start your apps.
Bret: Compose defines a YAML file format known as the Compose file, and if you run the command line, docker-compose, it can spin up one or more containers. It might spin up a database, a web front end, the API backend, whatever. If you have microservices, it's almost a necessity, because you'll have maybe 20 containers, and each of them can be in the same YAML file, and you can do a simple docker-compose up command regardless of your app. That's the one thing that I love talking about: docker run definitely feels like you need to be a bit of a container expert, because you have to set all these options, and if you're doing local development, you have to set up volumes and networks and all that stuff.
Bret: But once you get your Compose file format down, and you'd probably Google "my Java app" or "my Ruby app" and find some Compose file and Dockerfile examples, then you're running the same command every day: docker-compose up, docker-compose down, docker-compose up. That's the workflow for anyone, regardless of their application development environment, and I feel like that's one of those huge things where, when you look back 10 years, you go, "How did we ever get along without this tool?"
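The workflow Bret describes can be sketched with a minimal Compose file. This is an illustrative example only; the service names, images, and ports are assumptions, not something from the episode:

```yaml
# docker-compose.yml -- a minimal sketch; service names, images, and ports are made up
version: "2.4"
services:
  web:
    build: .              # build the app image from the Dockerfile in this directory
    ports:
      - "8000:8000"       # host:container port mapping
    depends_on:
      - db                # start db before web
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this checked in, the daily loop really is just `docker-compose up` to start the whole stack and `docker-compose down` to tear it down.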
Mike: Right. One other thing that reminds me of my own ins and outs of development with Docker Compose: let's say I have a DB and Redis, but I've changed code in my app and somehow it's not working right, and I just want to restart the container that my app runs in. I can just stop it and start it without actually bringing all the other components in Docker Compose up and down, which I've found pretty useful. Because if I have a bunch of components in Docker Compose, I don't want to bring them all up and down if I'm just restarting the basic component that's my app.
Bret: Yeah, that's a good tip too. I think with Docker Compose ... I've worked with several teams that felt that, even though the Docker Compose command line is way simpler than the Docker command line, they wanted to present their large development teams (and when I say large, I mean 30 to 40 developers) with a higher level of abstraction in their tooling. So they would create, say, shell scripts around Docker Compose.
Bret: I'm very much against that. I actually think that most developers today, especially if you're looking to do microservice work, should learn the basics of Docker, but then very quickly shift to focusing on the Docker Compose command line and really learn that tool. Because if you do, in my mind, it becomes the way you manage your apps for development. It replaces, like you said, the virtualization of years past, where we had all these VM tools running locally that would help automate starting and stopping servers. It becomes that one thing.
Bret: What you're seeing now, and this is I think part of what's showing the success of Docker Compose itself, is when you use tools like VS Code, Visual Studio Code or other IDEs, they're all starting to integrate Docker and even Docker Compose into their tooling so that you can just do it. Essentially, instead of having to type Docker compose up, you just click a button or you select something in a menu and it will automatically run that Docker Compose file for you because once it sees that file, it knows, "Okay, this is how you would prefer to start all your apps and manage them."
Bret: Honestly, I have a couple of courses, and between the couple of courses that I have on Docker and Docker Compose, I could easily put together five hours of training on just Compose. Because, like you said, you can do a docker-compose up www ("dub dub dub"), and that will only start the service in your Compose file called www. If you have a database on the back end, and you just call that service db in the YAML file, you could do docker-compose up db, and that will only start that part of your Compose file, plus any of its dependencies, which is where we get into the depends_on feature in the YAML, which allows you to connect the different parts of your app.
Bret: But the most important part of that feature is this: if you have microservices, you typically don't want to start all 20 of them at the same time. When you're working on one of them, you really just want its required dependencies to function, and that's what docker-compose up followed by the name of your service does. It relies on the depends_on feature in the YAML to ensure that everything else has started before the service you're working on.
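As a sketch of what Bret is describing, a Compose file for a few microservices might wire up `depends_on` like this. The service names and build paths here are hypothetical:

```yaml
# Fragment of a Compose file; the services and paths are hypothetical examples
services:
  api:
    build: ./api
    depends_on:
      - db              # `docker-compose up api` starts db first and leaves worker alone
  worker:
    build: ./worker
    depends_on:
      - db
  db:
    image: postgres:12
```

Running `docker-compose up api` starts only `api` and its dependency `db`; the `worker` service stays down. Likewise, `docker-compose restart api` bounces just that one container, which is the workflow Mike mentioned earlier.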
Mike: When I search around, there are always different versions, and I'm not sure if I'm using the right syntax for my Docker Compose. Is there an easy way to make sure I'm doing it right, or that I'm choosing the right syntax? Any protip you can give me around that?
Bret: Well, when it comes to the syntax and the versions themselves, the cheat sheet is that there are two current versions of Docker Compose YAML. For a minute, let's separate the YAML file language and its definition from the Docker Compose CLI tool, which executes commands based on that YAML, right? Those are technically two separate things. So when we talk about the Docker Compose file, it has a version.
Bret: Today there are two version branches: the two branch and the three branch. A lot of the confusion in the community has been thinking that everyone should be running the three branch. What's really ended up happening, whether that was intentional or not, is that the two branch is ideal for local development, and the three branch is designed for multi-server production clusters running Swarm or Kubernetes. So one of my most common tips for people, when I start working on their projects and they've been using Compose a while, is about how they've upped the version to the latest three version, which I think is 3.7.
Bret: So you'll see a first line in the YAML file that says version 3.7. Then I know for a fact that they're not using that file in production, because their production is different, or they don't use Compose YAML in production at all and are just using it for local development. So I will always back them down to version 2.4, which is the latest version on the two branch, because that actually has more features for local development than the three branch. Mostly that's because they didn't want to create one superset of all the functionality, since some things for local development don't make sense in production.
Bret: For local development, you want something like the depends_on feature that we talked about. You want to be able to wait for your database to start up before it tries to start the Node.js app or whatever. In production, when you're talking about multiple clusters and multiple servers, what you really need is retries, right? In any distributed system you need retries, but in local development you really don't want your app to sit there and spin, retrying over and over. So we have this concept in the Compose version two format that is probably the most misunderstood feature of Compose.
Bret: So I'm always trying to get the word out that if you use the depends_on feature in a 2.x version file, you can add a health check to it. Just go look it up in the Docker documentation. I think it's docs.docker.com, and in that single page of Compose documentation it's implied in there, it's not super obvious, that if you add a health check and update that little depends_on feature, it will wait for the health check to pass on any of your apps before it starts the other parts of the app that depend on it. Once I show a developer that, when they're running a bunch of different containers, they get super excited, right?
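The feature Bret is pointing at can be sketched like this. The database image and health-check command are illustrative choices, not something from the episode:

```yaml
# A sketch of depends_on with a health check; images and commands are illustrative
version: "2.4"            # the 2.x branch supports conditions on depends_on
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait for db's healthcheck to pass before starting app
  db:
    image: postgres:12
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]  # command the engine runs inside the container
      interval: 5s
      timeout: 3s
      retries: 10
```

With this in place, `docker-compose up app` holds off on starting `app` until the `db` container reports healthy, with no wait-for scripts required.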
Mike: I'm excited. You just taught me something on the versions and the health check.
Bret: Yeah. Yeah, and it exists in the version two format, but they took it out of version three, and I largely believe that's because, in clusters, we don't really even have the ability to do that. Most servers can't intelligently check things that other servers are running to see if those things have properly started. So in clusters we have things like retries and health checks that are different from what you would do locally with a wait-for. Over the years, people have created all sorts of wait-for scripts and whatnot to solve this problem locally, but it turns out Compose actually already does it. You were just not aware of it.
Mike: I see. In this theme of clearing up things that people have seen but may not quite understand: what about Rocket (rkt) and Kata Containers? How do they play into this compared to Docker?
Bret: Well, the spoiler here is that almost everyone still uses Docker on their local machine. A lot of us get super excited about anything new, right? Anything new shows up on the scene, people start talking about it, and we're thinking, "Okay, do I need to redesign or change my tooling to optimize for this new stuff?" The boring part is that actually everyone still really uses Docker, because it's still the most feature-rich, tried-and-true, command-line-friendly developer tooling out there.
Bret: But now what happens is that a lot of other tools are showing up that fill in edge cases, I would call them. So Rocket was a project started three or four years ago, and the idea was to try an alternative form of the container runtime. Technically, inside of Docker and the Docker engine, there's a little part of it that we now call containerd, and that is the one thing that runs your app in a container. It basically talks directly to the kernel and does a lot of low-level stuff.
Bret: But with Docker, we add layers of abstraction on top, so that when you say Docker, what you're really saying is, "I'm talking about the command line tool that talks to the Docker engine." It has all these concepts around things like networks and images and containers and volumes and configs and secrets, and there are all sorts of other things it can do. It does snapshotting. But that's not necessarily the low-level tooling's concern. Those are all the convenience features that Docker puts on top of it.
Bret: Over the years, we have had other container runtimes as different ways to try this out. Rocket was the first on the scene to get a lot of attention, but unfortunately it's now a deprecated project and is no longer being maintained. I wish they would make that a little more obvious in their GitHub, because it's not. The other container runtimes that we have now include containerd and CRI-O; those are the two other popular ones. Now, containerd was what we now know as Docker breaking up the monolith. Docker even joked, in I think 2016 or 2017, that they had created a tool for helping you deploy microservices, but then created a monolith of a tool to do it.
Bret: If anyone remembers back to the news in 2016, maybe 2017 (I keep mixing those two years up; it was a blur of a time), Docker announced that they were moving Docker to moby/moby. This was a rebranding of the low-level components that Docker was using to create Docker. Essentially, it was them attempting to split out some of the raw, low-level, relatively unopinionated open source and give it to the community. Even though it was already open source, it wasn't separated from Docker, the company, so a lot of people were concerned that one company was controlling the container ecosystem too much, and it was a valid concern.
Bret: A lot of the maintainers were Docker employees, so it felt like not everyone else got an opinion on this tool that now everyone was using. So Docker said, "We're going to take the low-level tooling that's largely unopinionated." And when I say largely unopinionated, I mean that we all generally agree this is the way you do these low-level things. They took that chunk out, moved it to a new repo on GitHub called moby/moby, and called it the Moby Project, and it largely confused everyone.
Bret: Today, what we see in Docker's GitHub is that they have the Docker CLI and the Docker engine releases, and that's what we all use day to day. But underneath that, there's now a bunch of other libraries and tools: one is called runc, one is called containerd, and there are a bunch of different kits. We have SwarmKit, and VPNKit, and BuildKit, and this is all stuff that you would never care about unless you were working on developing the container tools themselves. Right?
Bret: If you were building clouds yourself and you wanted this low-level tooling, you would care about all that infrastructure. But for the rest of us who just want a tool to run containers, we're all still just running the Docker CLI and the Docker engine. So we install Docker, and it technically puts something called dockerd, the Docker daemon, on a Linux server or a Windows server, and then we use the command line docker to connect to that API and run it.
Mike: The other thing that can be confusing right now is some messaging that you alluded to around the state of Docker, the organization itself. Is that right?
Bret: Yeah. Even though Docker, the company, has been around six years, they're still like a startup. They're not yet profitable (they've been announcing for years that they're very close to profit), and they're not a public company, so none of us really get to see any of that. They have had multiple rounds of funding over the years, trying to figure out their market space and basically where they're going to make their money.
Bret: Because right now, all the things we've talked about are free. So around 2016, everybody was talking about the orchestrators, that higher-level tool that manages a bunch of servers, and we now know those as Kubernetes and Swarm and ECS. That looked like it was going to be the place where everyone was going to truly compete, and where the money was to be made, right? The billions were in the orchestrator, not in the runtime, because Docker had basically won the runtime; everyone was running Docker. Now the competition, the war, was really about this multi-server container orchestrator, and that's when they had a tool called Swarm that we now know as Swarm Classic.
Bret: Then they recreated it, and we call it Swarm Mode, a built-in feature of Docker that's still a great way to run a bunch of servers and run containers on top of them. Then Kubernetes very, very quickly took over the scene, and everyone started trying to figure out how to run it. It is now by far the most used container orchestrator for multi-server management. So around that time, Docker decided they were going to create products around orchestration, and that was going to be their big breadwinner.
Bret: Over the years, they built out that practice. If you watched the company itself, to start it was 50 to 75 engineers and a couple of marketing and product design people, but mostly engineering. As they started to sell these products, they realized they needed to basically become a Red Hat: a company with a consulting arm, a support arm, and a sales arm. They were going after government business, too. We actually had something called the FED Summit every year here in the Washington, D.C. area that would cater to the government offices about all the security stuff Docker had built in, because they really focused a lot on security and code pipelines for secure development and deployments.
Bret: They were competing with Red Hat, and with Rancher, and with Heptio, which is now VMware, and a lot of these huge multibillion-dollar companies. It was a tough time. Then, and this is all me speculating, at some point this year they figured out that they really have two businesses. They have developer tooling, which is what we think of as Docker and Docker Compose and the Docker engine, and now things like the new BuildKit, which is a more efficient way to build Docker images and gets you lots of new features for building container images. They're experimenting with a lot of that stuff, making it even easier to deploy apps.
Bret: Then they have this enterprise business that's about managing multi-server clusters, and that was known as Docker Enterprise. Obviously, that tool ran Docker, but it was largely a separate business, and a lot of us, even in the Captains' world, were on one side of the fence or the other. We either used one part of the tooling or the other, and it was a little tough to figure out, "Okay, how are we supposed to dance this line of being open source fanatics?" Docker obviously wanted us to be fans of their enterprise tooling, and a lot of us were, but a lot of us in the community also just wanted them to win. We wanted them to be a company and make money.
Bret: They basically made a decision, and this is where we get to an announcement from two months ago. So that's the background. They announced that they were splitting the company and selling off all of the closed source assets, along with all of the people who supported them. Out of the 400-plus employees, that was 325 to 350 employees. So they sold off that entire side of the business, and that includes everything from Docker Enterprise to the Docker certification, the DCA, the Docker Certified Associate.
Bret: They sold that off to a company, Mirantis, which is known for being a cluster sales and consulting company for Kubernetes and multi-server container clusters, right? They could have sold to VMware or Red Hat or someone else, but it was a perfect example of taking a company that's already an expert in this area and giving them fuel for the fire, because Docker had some 700 Enterprise customers, including really big multinational conglomerates. So that business is now gone. What's left is that Docker is now 75 people, almost exclusively engineers, who maintain all of the open source.
Bret: Their focus is now on Docker Hub: basically making Docker Hub as awesome as it can be, adding what we assume will be a whole bunch of features, and building it up into what we all really wanted it to be in the first place, which was a place to find applications and deploy them as easily as possible. Then there's Docker Desktop, which a lot of us probably use today on our Windows and Mac machines to make it easier to run Docker and Docker Compose. They've already started to do some of that.
Bret: They're adding GUIs to Docker Desktop so that you can manage your containers and Docker Compose with a GUI. Who knows what else they're going to do? So that's their new focus: they're back to the original goal, before we all got distracted by orchestration, which was developer tooling to make it easier for devs to get apps from their machine into production without necessarily worrying about the multi-server container orchestration and clustering thing, which most of us now run as Kubernetes, while a few of us still run Swarm.
Mike: For me, recently, a lot of people on my team have ... We do have Kubernetes in a part of my team, and a lot of the developers have gotten the CKAD. What is it, Certified Kubernetes Application Developer, is that right?
Mike: I have yet to do that, and to help myself learn it, I built a Raspberry Pi-based cluster: four Raspberry Pi 4s, I think the latest Raspberry Pi. I think I did Kubernetes the hard way, where I was like, "I want to learn each bit of this on my own and not use a distribution." Then, having listened to the Kubernetes Podcast, I heard them talk about Rancher's k3s, and it was like, "Oh, I could've just done this all much quicker and easier with k3s." So that's what I have in my little play cluster.
Bret: Were you talking about Kelsey Hightower's repo when you said you were doing it the hard way?
Mike: Well, no, I wasn't using his repo at the time, but I had heard of it before, and I realized, "Oh, when you roll it by hand like this, you're doing it the hard way."
Bret: Yeah. What we're talking about is Kelsey Hightower's Kubernetes the Hard Way repo, which is very popular. I used it very early on as a way ... I didn't actually run every command, but I mentally processed the documentation, right? I went through it like, "Okay, now I'm creating a certificate. Oh, now I'm creating another certificate. Oh, and now I have to install this tool. Okay, and now I need to move this certificate over here. Okay, now I need to copy that one."
Bret: You go step by step to learn what the installers and the distributions are providing for you, because that's a lot of their value. A lot of these distributions, their value is in managing the installation and setup and then the upgrading of Kubernetes itself. Now, maybe you're someone who deploys to one of the cloud-provided Kubernetes offerings: Fargate, or even DigitalOcean, which now has one. One of the things all of those do for you is manage what we call the control plane. The control plane is the API and all of its dependencies, including a database and other things. That's the control plane that manages your apps on Kubernetes.
Bret: For a lot of us, we don't ever have to care about that. We should probably know a little bit about it, like what you did. That was a great learning lesson. But if we're just app deployers and app managers, we don't necessarily care about the infrastructure components, and that's what a lot of these are doing. EKS at AWS, their Elastic Kubernetes Service, manages the API and all of its dependencies. Basically, you tell them to set it up, and they give you an endpoint, an API endpoint that you point your Kubernetes command line at. That's one of the pieces where, years from now, I think a lot of us won't ever need to know how Kubernetes is made.
Mike: Okay. The picture that comes to mind for me then was ... Like I said, I had heard about k3s on the Kubernetes Podcast from Google, and I knew that I didn't want to just go with a distribution and take all of the assumptions that it made. So I resisted installing k3s. The thing that stuck out to me was the creator talking about how he had replaced etcd, essentially the etcd function of Kubernetes, with their own implementation that runs on SQLite, if I remember correctly.
Bret: Yeah. Yep.
Mike: So hearing you talk about this, it's like, "Okay, if I'm messing around with different Kubernetes distributions, that's fine, because they're all conforming to the Kubernetes API," and so all of my kubectl, kube control, commands are going to be the same regardless of whether it's EKS or whatever other distribution I'm using. Is that right?
Bret: Right. Whether you're running on Google's Kubernetes Engine or whatever, the reason to learn a distribution is usually going to be only because you need it in your own data center. If you're running in the cloud, whichever cloud vendor you're using, use the one they provide, right? That way you don't have to learn a different distribution, which is really just a way to install and manage Kubernetes. All of these use the kubectl command line, or kube control, however you want to say it.
Bret: That's the thing: you should be able to deploy your applications on any of these APIs with the same command line and the same YAML file. The nuances come in around persistent storage and maybe load balancer configurations, things that are possibly external to the servers themselves. Obviously, that gets a little different depending on who you're using and which driver, essentially which plugin, you're using for them. So it's not quite at the point where it's literally drag and drop. But, hey, this is as close as we've ever been. It's still complicated, but it's the best we've seen.
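To make that portability concrete, here's a minimal Deployment manifest, a sketch only: the names and image are placeholders, not anything from the episode. Because every conformant cluster serves the same Kubernetes API, the same file applies unchanged to EKS, GKE, DigitalOcean, or a homelab Raspberry Pi cluster.

```yaml
# deployment.yaml -- hypothetical example; app name and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2                 # run two copies of the container
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image works here
          ports:
            - containerPort: 80
```

You'd apply it with `kubectl apply -f deployment.yaml` against whichever cluster your kubeconfig points at; as Bret notes, differences only surface around cluster-specific pieces like storage classes and load balancer plugins.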
Mike: I see. Okay. I'm a full stack developer. Do I actually really need to do my development environment in Kubernetes, or can I get by with doing the Docker Compose thing that we talked about earlier?
Bret: This is the consulting answer, right? It depends.
Mike: It depends.
Bret: I'd say that, like a lot of these tools, it requires understanding your team, right? How big is your team? What's the expertise of your team? How complicated is your app? I recently had a discussion with a team that was developing two apps. I mean, it was literally two different containers. You don't need Kubernetes for that, right? Just deploy it on Heroku, or deploy it on Cloud Run on Google, or on something else. That doesn't require the abstraction of understanding all this infrastructure. Because Kubernetes is a lot.
Bret: One of the challenges of Kubernetes is that it's a lot of infrastructure, it's a lot of complexity, and it's necessary if you're an infrastructure manager. But Kubernetes was written by operators, for operators, and that's a really interesting distinction: Docker was written by developers focused on app development, to make app development easier for developers. Kubernetes was written by operator developers focused on making operations easier for people building private clouds.
Bret: That gets lost in the messaging of "everything should be Kubernetes, everything should be this and that." But really, all Kubernetes is, is a bunch of APIs that are running Docker. A lot of people forget that this is all still Docker containers running from Docker images. You may technically be using the Docker engine underneath Kubernetes, or you might be using an alternate runtime known as containerd, or CRI-O, C-R-I-O. In order of popularity, from some reports I saw maybe a week ago, when it comes to who's running the container, Docker is still at around 70%. Then there's containerd, which is technically partially built by the Docker team and is what Docker runs.
Bret: When you run Docker, you're actually running containerd underneath it. But containerd is just a leaner version without all the fancy features; it has very basic features for running your containers. That is now at, I don't know, 20%. I'm going to get my math and percentages wrong. Then CRI-O, which is built by Red Hat and used on OpenShift, is at maybe 5%. What we've learned now is that all of the managed Kubernetes engines in the cloud, so we're talking DigitalOcean, Google, Amazon, Azure, have all standardized on containerd as the runtime.
Bret: But for you and me as developers, we shouldn't ever have to care about that, right? Unless we're managing the infrastructure, that's something we just never have to worry about. What our goal should be is to keep things as simple as possible locally. Now, most of us aren't in that boat. Most of us have more complicated apps, and we need to have a bunch of different things running locally. I still prefer Docker Compose. I think the jury is out on whether you should move to Kubernetes locally.
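For comparison, a local development setup with Docker Compose can stay as small as the sketch below. This is a hypothetical example: the service names, images, and ports are placeholders, not anything discussed in the episode.

```yaml
# docker-compose.yml -- hypothetical local dev stack
services:
  web:
    build: .                  # build the app image from the local Dockerfile
    ports:
      - "8000:8000"           # expose the app on localhost:8000
    depends_on:
      - db                    # start the database first
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly   # fine for local dev, never for production
```

A single `docker compose up` brings the whole stack up on your laptop, with none of the control-plane overhead Bret describes for a local Kubernetes.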
Bret: I can say for a fact that it will drain your battery faster than anything else on your machine. Even the best, smallest version of Kubernetes still eats 10% of my eight-core MacBook. But yeah, I think, honestly, it depends, but mostly no. The people that I see running Kubernetes locally for development are either developing on the Kubernetes project themselves, or they are so bleeding edge that this podcast would be boring to them.
Bret: They're using something called a service mesh, and they're injecting basically a bunch of proxies into every single container they run, and they're doing a whole lot of automation around their app inside of Kubernetes, so it's actually quite hard for them to fully test their app locally without having the Kubernetes system.
Mike: Yeah, that was the original thinking of Heroku: we wanted to save the developer from having to know all about DevOps, and so the original interface to Heroku was the Git interface, git push heroku master. As a developer, I can work on my app locally, try it out, and then just push it up to Heroku and see it running there. Then Heroku handles all of the security and all of the infrastructure that's needed to run the app, whether it be a database or a memory cache, those kinds of things.
Bret: Yeah. I mean, I have friends that I have never been able to win over to the container bandwagon because they love Heroku, and their answer to "I need to develop another app and deploy it" is, "I'm just going to use Heroku." We'll have conversations around the fact that containers were largely an idea to help us reduce complexity the way Heroku does, but for everywhere else.
Bret: For a lot of my friends, they never needed to go containers because they already had ... Not to sound like a freaking advertisement for Heroku, but I still use it today. Ironically, parts of my own infrastructure that I set up for my container courses technically runs in containers on Heroku because it's just so much easier to run it there.
Mike: Cool. Are there any other thoughts that you have for us?
Bret: I think, to really wrap all this up: when I talk to consulting clients, when I talk at conferences now, I always tend to mention that we all live in bubbles, and we all love the tools we know and prefer the tools we know. Obviously, if you're listening to this podcast, you're curious and you're wanting to learn about new things, which is step one of all this.
Bret: But I think that with the way we get our news and media online, and the hype cycles of all this new tech, it can be a little daunting. Everyone feels like they're not on the coolest thing if they're not on Kubernetes right now. At least when I talk to people, they all think they should be doing Kubernetes and they all should be doing containers. I want to give everyone permission to not do that.
Bret: We're on this podcast talking about how these are all excellent tools that solve problems, but if you don't really have those problems, you maybe don't need that tool. We've had a couple of companies that do cloud deployments talk about why they're not doing Kubernetes. It's not because they necessarily have a different idea of how to do containers or whatever; they just feel like, for their unique environment, their unique scenario, whatever their cloud is, it isn't the right fit. Kubernetes is definitely a cool tool that is going to change a lot of the industry in the future.
Bret: But that doesn't mean it's a panacea for every possible problem that we all have, and you should always strive for the simplest way to solve your problem rather than the coolest way to solve your problem. I am also very guilty of this, because I love new tech. So whenever I'm talking with a consulting team that wants to do containers, one of our very first conversations is around, "Okay, let's look at your team, let's look at the expertise, let's look at the tooling that you have today."
Bret: "And let's look at, basically, how do we get your code from your developer machine into production more reliably and faster. Let's start looking at the human side of DevOps, and what those original goals were around mean time to recovery and all of the learning that you're supposed to be doing around DevOps workflows." We might figure out that, you know what? You don't need Kubernetes; you don't need a clustering system for containers. You might be able to get away with using Heroku, or with some of AWS's other alternatives. Or, nowadays, what we're seeing in some parts of the community is something called the JAMstack, which is the idea of hosting a CMS web backend in one place, while the front end that everyone looks at is really a static website.
Bret: You don't have to worry about production, clustering, monitoring, and all that stuff, because the front end of your website is just a static nginx web server that can run anywhere, right? We're starting to see all these other approaches. Of course, we could talk about serverless, which is a whole separate podcast. All of these are ideas on how to automate infrastructure so that developers can get their code into production faster and more reliably, and replace it more often, right? That's all this is. We talk about a lot of this stuff here, but I also want to say: or maybe not, it's up to you.
Mike: Okay. Well, Bret, thank you so much for your time. I learned a number of things, and I really appreciate it.
Bret: Yeah. Thanks for having me. I was glad to be here.
A podcast brought to you by the developer advocate team at Heroku, exploring code, technology, tools, tips, and the life of the developer.
Lead Member of Technical Staff, Heroku
Mike is a team member of Heroku's Runtime Control Plane team ... the place where the "git push heroku master" commands land.