Containers may be one of the hottest technologies on the block, but while they are a great solution for some things, they aren’t the perfect solution for every implementation. You still need to think carefully about how containers fit within your overall infrastructure architecture, and about how they are developed, managed, monitored, deployed, and secured in production.
At ContainerCon, which was part of LinuxCon North America held in Toronto this week, there were plenty of talks touting the advantages of containers, along with a few talks about the challenges and cautions associated with using containers, especially when using them in production.
Linux containers have been around for a while with technologies like LXC, but they haven’t exactly been easy to use. In other words, the early containers were “not for mere mortals” as Vincent Batts from Red Hat mentioned during his talk about container standards.
The difference now is that many of the more recent container projects have focused heavily on making containers easy to use, and as a result, containers have become very popular. Everyone has heard of Docker. As Corey Quinn said during his session “Heresy in the Church of Docker”: “The first rule of Docker is never to shut the hell up about Docker.” With $180 million in funding to date, Docker has certainly had its share of hype.
With this ease of use and popularity come a number of challenges, especially when it comes to deploying containers into production. As Quinn put it, DevOps plus containers doesn’t make things that work on your laptop magically work in production. You really need to spend some time understanding the ways that containers may fail before you put them in production. Those failures or other issues can happen anywhere in the stack: the containers themselves, the applications running within the containers, security, networking, etc. You need to look at each of these layers to understand the failure modes and troubleshoot the issues.
There is currently a shift to microservices-based architectures: small, modular, independent processes with scalable development models in an environmentally agnostic setting. Ultimately, moving from a three-tier architecture to a microservices architecture creates a lot of complexity, and, as Quinn described it, every deployment may end up as a unicorn: unique, fragile, and difficult to reproduce.
Quinn also argued that containers are not the death of configuration management; the move to immutable infrastructure doesn’t mean the end of it. Docker is not a quick replacement for configuration management, despite what you might hear at conference talks, which tend to focus on the successes, not the miserable failures.
In Michal Svec’s talk, “Are Containers Enterprise Ready? Bridging Traditional and Agile IT,” he expressed similar concerns about using containers carefully in production, but from a slightly different angle. Traditional IT and newer, agile approaches using containers shouldn’t be an all-or-nothing mindset. Within your environment, it probably makes sense to keep some applications (OpenStack, Oracle, SAP, etc.) in a more traditional, non-containerized environment. For other applications, an agile approach deployed in containers will be more appropriate, but those containers and images need to be carefully monitored, patched, and secured, as you would for anything else running within your production infrastructure.
One of the other challenges is that standards for containers are still emerging. Batts talked about how container standardization is happening across several different areas: packaging, runtime, networking, and cloud. There are also a lot of tools available, so it’s important to really think about and define your use cases, rather than just using something because it’s the hot, new container tool with the most attention this week. He also recommends ensuring that your container integration touchpoints stay generic to avoid lock-in to a particular solution, especially while the standards are still being developed.
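Batts’s advice about keeping integration touchpoints generic can be sketched in code. The example below is a hypothetical Python abstraction layer (the class and method names are illustrative, not from any real library): application code depends on a small runtime interface, so swapping one container runtime for another doesn’t touch the deployment logic.

```python
from abc import ABC, abstractmethod

class ContainerRuntime(ABC):
    """Generic touchpoint: deployment code depends on this
    interface, not on any one runtime's CLI or API."""

    @abstractmethod
    def run(self, image: str, command: list) -> str:
        """Start a container from `image`; return a container ID."""

class DockerRuntime(ContainerRuntime):
    def run(self, image, command):
        # A real implementation would shell out to `docker run`
        # or call the Docker Engine API; here we just tag the ID.
        return f"docker:{image}"

class RktRuntime(ContainerRuntime):
    def run(self, image, command):
        # Likewise, a real implementation would invoke `rkt run`.
        return f"rkt:{image}"

def deploy(runtime: ContainerRuntime, image: str) -> str:
    # Deployment logic is written once, against the interface;
    # changing runtimes requires no changes here.
    return runtime.run(image, ["/bin/app"])
```

The point is not this particular pattern but the boundary it draws: while the packaging and runtime standards are still settling, the fewer places that know which runtime you use, the cheaper it is to change your mind later.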
Containers can be an important part of your infrastructure solution, but they aren’t the perfect solution for everything. They require careful thought about how they will be used, especially within your production environments.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.