Meet the Fockers! What the Fork? by James Kelly

In the past week, all the rage (actual hacker rage in some forums) in the already hot area of containers is about forking Docker code, and I have to say, “what the fork?”

What the Fork?

Really!? Maybe it is because I work in IT infrastructure, but I believe standards and consolidation are good and useful...especially “down in the weeds,” so that we can focus on the bigger picture of new tech problems and the apps that directly add value. This blog installment summarizes Docker’s problems of the past year or so, introduces you to the “Fockers” as I call them, and points out an obvious solution that, strangely, very few folks are mentioning.

This reminds me of the debate about overlay networking protocols a few years back – only then we didn’t blow up Twitter doing it. This is a bigger deal than overlay protocols, however, so my friends in networking may relate it to deciding upon the packet standard (a laugh about the joy of standards). We have IP today, but it wasn’t always so pervasive.

Docker is actually decently pervasive today as far as containers go, so if you’re new to this topic, you might wonder where the problem is. Quick recap… The first thing you should know is that Docker is not one thing, even though we techies talk about it colloquially as if it were. There’s Docker Inc., Docker’s open source GitHub code and community, and Docker’s products. Most notably, Docker Engine is usually just “Docker,” and even that is actually a CLI tool and a daemon (background process). The core tech that made Docker famous is a packaging, distribution, and execution wrapper over Linux containers that is very developer- and ops-friendly. It was genius, open source, and everyone fell in love. There were cheers of “lightning in a bottle” and “VMs are the new legacy!” Even Microsoft has had a whale of a time joining the Docker ecosystem more and more.
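To make that CLI-plus-daemon split concrete, here’s a toy sketch. To be clear, this is not Docker’s actual API or wire protocol – just the general shape of the architecture: a long-running “engine” owns the state and listens on a Unix socket, while the “CLI” is a thin client that connects, sends a command, and prints the reply.

```python
import os
import socket
import tempfile
import threading

# Toy sketch of a CLI-tool-plus-daemon split (NOT Docker's real protocol):
# a background "engine" listens on a Unix socket; a thin client talks to it.
SOCK = os.path.join(tempfile.mkdtemp(), "toy-engine.sock")
ready = threading.Event()

def engine():
    # The "daemon": accepts one connection and answers one command.
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK)
    srv.listen(1)
    ready.set()  # signal that the daemon is accepting connections
    conn, _ = srv.accept()
    if conn.recv(1024).decode() == "version":
        conn.sendall(b"toy-engine 0.1")
    conn.close()
    srv.close()

daemon = threading.Thread(target=engine)
daemon.start()
ready.wait()

# The "CLI" side: one short-lived connection per command.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK)
cli.sendall(b"version")
reply = cli.recv(1024).decode()
cli.close()
daemon.join()
print(reply)  # -> toy-engine 0.1
```

The point of the split is that the daemon can keep long-lived state (images, running containers) while clients come and go – which is also why “Docker” colloquially means two different programs.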

Containers, and the good tech around them, have absolutely let tech move faster, transforming CI/CD, devops, cloud computing, and NFV… well, it hasn’t quite hit NFV yet, but that’s a different blog.

Docker wasn’t perfect, but when they delivered v1.0 (summer 2014), it was very successful. But then something happened. Our darling started to make promises she didn’t keep and, worse, she got fat. No, not literally, but Docker received a lot more VC funding and grew, both organically and with a few acquisitions. This product growth took them beyond just running and wrapping containers, into orchestration, analytics, networking, etc. (Hmm. Reminds me of VMware actually. Was that successful for them? :/) Growing adjacent to or outside of core competencies is always difficult for businesses, let alone doing it under the pressure of having taken VC funding and being expected to multiply it. At the same time, as Docker released more features more frequently, quality and performance suffered.

By and large, I expect that most folks using Docker weren’t using it with support from Docker Inc. or anyone else, but nonetheless, with Docker dissatisfaction, boycotts, and breakups starting to surface, in came some alternatives: fairly new entrants CoreOS and Rancher, with more support from traditional Linux vendors Red Hat (Project Atomic) and Canonical (LXD).

Because containers are fundamentally enabled by the Linux kernel, all Linux vendors have some stake in seeing that containers remain successful and proliferate. Enter OCP – wait, that name is taken – OCI, I mean OCI. OCI aims to standardize a container specification: what goes in the container “packet,” both as an on-disk, portable file format and in what it means to a runtime. If we can all agree on that, then we can innovate around it. There is also a reference implementation, runC, donated by Docker, and I believe that as of Docker Engine v1.11, Docker itself adheres to the specification by building on runC. However, according to the OCI site, the spec is still in progress, which I interpret as a moving target.
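For a concrete feel of what that container “packet” is, here’s a heavily trimmed sketch of the config.json that sits in an OCI runtime bundle next to the container’s root filesystem. Field names follow the OCI runtime spec, but the real file carries many more fields, and as noted, the spec is still moving, so treat this as illustrative only:

```json
{
  "ociVersion": "1.0.0-rc1",
  "root": { "path": "rootfs", "readonly": true },
  "process": {
    "terminal": false,
    "cwd": "/",
    "env": ["PATH=/usr/sbin:/usr/bin:/sbin:/bin"],
    "args": ["sh"]
  },
  "hostname": "demo",
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" }
    ]
  }
}
```

A runtime like runC reads this bundle and does the kernel-level work (namespaces, cgroups, and so on) to start the process. That thin, standardized layer is exactly what OCI wants everyone to agree on so that innovation can happen around it.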

You may imagine that the differing opinions at the OCI table (and thrown across blogs and Twitter – more below if you’re in the mood for some tech drama), and otherwise just the slow progress, have some folks wanting to fork Docker so we can quickly get to a standard.

But wait, there’s more! In Docker v1.12, announced at DockerCon in June 2016 and nearing release, Docker Inc. refactored their largest tool, Swarm, into the Docker Engine core product. When Docker made acquisitions like SocketPlane that competed with the container ecosystem, it stepped on the toes of other vendors, but they have a marketing mantra to prove they’re not at all malevolent: “Batteries included, but replaceable.” OK, fine, we’ll overlook some indiscretions. But what about bundling the Swarm orchestration tools with Docker Engine? Those aren’t batteries. That’s bundling an airline with the sale of the airplanes.

Swarm competes with Kubernetes, Mesosphere, and Nomad for container orchestration (BTW, Kubernetes and Kubernetes-based PaaSes are currently the most popular), and this Docker move appears to be a ploy to force-feed Swarm to Docker users whether they want it or not. Of course they don’t have to use it, but it is grossly excessive to have around if unused, not to mention detrimental to quality, time-to-deploy, and security. For many techies, aside from being too smart to have the Swarm pulled over their eyes, this was the last straw for another, technical reason: the philosophy that decomposition into simple building blocks is good, and that solutions should be loosely coupled integrations of these building blocks with clean, ideally standardized, interfaces.

Meet the Fockers! So, what do you do when you’re using Docker, but you’re increasingly at odds with the way that Docker Inc. uses its open source project as a development ground for a monolithic product, a product you mostly don’t want? Fork them!

Especially among hard-core open source infrastructure users, people have a distaste for proprietary appliances, and Docker Engine is starting to look like one. All-in-ones are usually not so all-in-wonderful. That’s because functions built as decomposed units can proceed on their own innovation paths with more focus and thus, more often than not, beat the same function in an all-in-one. Of course, I’m not saying all-in-ones aren’t right for some, but they’re not for those that demand the best, nor for those that want the choice to swap components instead of getting locked into a monolith.

All-in-ones are usually not so all-in-wonderful.

Taking all this into account, all the talk of forking Docker seems bothersome to me for a few reasons.

First of all, there are already over 10,000 forks of Docker on GitHub. Hello! Forking on GitHub is click-button easy, and many have done it. As many have said, forking means fragmentation, and fragmentation makes it harder for those that need to use and support multiple forks or test integrations with them (aka the container ecosystem of users and vendors).

Second, whoever creates a fork presumably wants to change it (BTW, non-techies, a fork is the new copy of the code that you now own – thanks for reading). I haven’t seen anybody actually comment on what they would change if they forked the Docker code. All this discussion and no prescription :( Presumably you would fix bugs that affect you or try to migrate fixes that others have developed, including Docker Inc., which will obviously continue developing its own code. What else would you do? Probably scrap Swarm, because most of you (in discussions) seem to be Kubernetes fans (me too). Still, in truth, you have to remove and split out a lot more if you really want to decompose this into its basic functions. That’s not trivial.

Third, let’s say there is “the” fork. Not my fork or yours, but the royal fork. Who starts this? Who do you think should own it? Maybe it would be best for Docker if they just did this themselves! They could re-split out Swarm and other tools too.

Don’t fork, make love.

Finally, my preferred solution: don’t fork, make love. In seriousness, what I mean is two things:

1. There is a standards body, the OCI. Make sure Docker has a seat at the table, and get on with it! If they’re not willing to play ball, then let it be widely known, and move OCI forward without them. I would think they have the most to lose here, so they would cooperate. The backlash if they didn’t might be unforgiving.

2. There is already a great compliant alternative to Docker today that few are talking about: rkt (pronounced “rocket”). It was pioneered by CoreOS and is now supported on many Linux operating systems. rkt is very lightweight. It has no accompanying daemon. It is a single, small, simple function that helps start containers. Therefore, it passes the test of a small, decomposed function. Even better, the most likely Fockers are probably not Swarm users, and all the other orchestration systems support rkt: Kubernetes’s rktnetes, Mesos’s unified containerizer, and Nomad’s rkt driver. We all have a vote, with our wallets and free choices, so maybe it is time to rock it on over to rkt.


In closing, I offer some more ecosystem observations:

In spite of this criticism (much of it, in fact, not mine but that of others to which I’m witness), I’m a big fan of Docker and its people. I’m not a fan of the recent strategy to include Swarm in the core, however. I believe it goes against the generally accepted principles of computer science and Unix. IMO, Docker Engine was already too big pre-v1.12. Take Swarm out, compete on the battleground of orchestration on your own merits, and find new business models (like Docker Cloud).

I feel like the orchestration piece is important, and the momentum is with Kubernetes. I see the orchestration race a little like OpenStack vs. CloudStack vs. Eucalyptus five years ago, when the momentum was with OpenStack; today it’s with Kubernetes, but for different reasons. It has well-designed and cleanly defined interfaces and abstractions. It is very easy to try and learn. To those who say it is hard to learn, set up, and use: what was your experience with OpenStack? Moreover, there are tons of vanilla Kubernetes users; you can’t say that about OpenStack. In the OpenStack space, vendors add value with support and with getting customers to day 1, but in the Kubernetes space, vendors are adding features on top of Kubernetes, moving toward PaaS. That’s a good sign that the Kubernetes scope is about right.

I’ve seen 99% Docker love and 1% anti-Docker sentiment in my own experience. My employer, Juniper, is a Docker partner and uses Docker Engine. We have SDN and switching integrations for Docker Engine, and so personally, my hope is that Docker finds a way through this difficulty by re-evaluating how it uses open source and business together. Open source is not a vehicle to drive business or shareholder value, which is inherently competitive. Open source is a way to drive value to the IT community at large in a way that is intentionally cooperative.

Related:

http://thenewstack.io/docker-fork-talk-split-now-table/

https://medium.com/@bob_48171/an-ode-to-boring-creating-open-and-stable-container-world-4a7a39971443#.yco4sgiu2

http://thenewstack.io/container-format-dispute-twitter-shows-disparities-docker-community/

https://www.linkedin.com/pulse/forking-docker-daniel-riek

Having Fun with Marketing Fluff by James Kelly

Imagine you could walk up to someone and know their soul. Imagine they could know yours. To some extent, we all have a thirst to be able to get to the quintessence of someone or something like this, but I realized today that there are two breeds of people that especially seek this: yoga practitioners and techies. For me this is an interesting common ground…

Living in Silicon Valley with lots of yogis and techies, and falling into both categories myself, I realized that the hardcore folks of either category have something in common indeed. They tend to have a low tolerance and appreciation for marketing, and quickly want to get to the underlying truth. Having worked in marketing for the last few years and done a panel session on marketing OpenStack yesterday, I find this even more interesting, but let me come back to that. First, let me step back and explain this coincidence.

Hardcore techies – There’s little patience for the sales pitch, messaging or the value proposition because these folks pride themselves on scientific skepticism and understanding the details. They’re keen to physically or logically dissect and reassemble technology. They don’t associate their job and work value with their organizational brand and commercial success, but rather with the speeds and feeds, features and functions, and technological progress.

Hardcore yogis – Although most of them have a physical asana practice, life is an inside job for the hardcore yogi. The heart of the practice is one of noticing ebb and flow in participation in the life that is given with care and without judgement, resistance or attachment. This is not about gymnastic ability, striking the perfect pose, an idyllic beach setting for the yoga shala, nor the latest style of Lululemon.

Both of these personalities desire the truth, and both of them alike often snub marketing and branding as a veil or an illusory distraction from the truth they represent.

On one hand, I concede that buying into anything based on its marketing is a precarious undertaking because there’s obviously a lot of inauthenticity out there. I’ll also add that sensationalized and unrelenting promotion really can mar a product or cause, especially for those of us that are not ever-hungry, oblivious consumers.

On the other hand, I have a lot of respect for authentic marketers. Thinking about the way you’re seen in the world matters in practice because it is your interface to the environment. No one has the ability to see the soul of a person straight away, nor can anyone fully determine all the facets of a product or service in an instant. This is why marketing identity and branding matters. Marketing is innate to interaction and nature.

Marketing is innate to interaction and nature.

Just as the bee notices a flower in part because of its beauty in color and aroma, we also make decisions based upon, not just logic, but also how inviting something is from psychological and sensory perspectives.

If you’ve ever thought that marketing is fluff or inconsequential, the core truth is that you’re right. It is not the essence of a thing or a being. However, occupying a universe of duality and matter, everything has a perspective and an interface; everything shows up somehow, some way. In this world, impression and influence do matter. Just like the yogi participates in the dance of life for the wonder of it all, so you should participate in meaningful marketing and have fun with it all.

*Random footnote: It’s interesting how today “software,” more than “hardware,” is really becoming the core practice in both of these areas, tech and yoga.

Getting to GIFEE with SDN: Demo by James Kelly

A few short years ago, advocating for open source and cloud computing was even more difficult than touting the importance of clean energy and the realities of climate change. The doubters and naysayers, vocal as they are, are full of reasons why things are (fine) as they are. Reasons, however, don’t get you results. We needed transformative action in IT, and today, as we sit right between the Google NEXT event and the OpenStack Summit in Austin, open source and cloud are the norm for the majority.

After pausing for a moment of vindication – we told you so – we get back to work to improve further and look forward, and a good place to look is indeed at Google: a technology trailblazer by sheer necessity. We heard a lot about GCP at NEXT, especially about their open source project Kubernetes, which powers GKE. What’s most exciting about such container-based computing with Docker is that we’ve finally hit the sweet spot in the stack, with the right abstractions for developers and for infrastructure & ops pros. With this innovation now accessible to all in the Kubernetes project, Google’s infrastructure for everyone else (#GIFEE) and NoOps are within reach. Best of all, the change this time around is less transformative and more incremental…

One thing you’ll like about a serverless architecture stack like Kubernetes is that you can run it on bare metal if you want the best performance possible, but you can just as easily run it on top of IaaS-provided VMs in public or private cloud, which gives us a great deal of flexibility. Then, of course, if you just want to deploy workloads and not worry about the stack, an aaS offering like GKE or ECS is a great way to get to NoOps faster. We have a level playing field across public and private clouds and a variety of underpinnings.

For those that are not only using a public micro-services stack aaS offering like GKE, but supplementing it or fully building one internally with Kubernetes, or with a PaaS on top of it like OpenShift, you’ll need some support. Just like you didn’t build an OpenStack IaaS by yourself (I hope), there’s no reason to go it alone for your serverless-architecture micro-services stack. There are many parts under the hood, and one that you need baked into your stack from the get-go is software-defined secure networking. It was a pleasure to get back in touch with my developer roots and put together a demo of how you can solve your networking and security microsegmentation challenges using OpenContrail.
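OpenContrail enforces policy through its own virtual-network constructs, but to illustrate the microsegmentation idea generically, here’s a sketch of a Kubernetes NetworkPolicy. The names and namespace are made up, and API details vary by Kubernetes version; the intent is to whitelist traffic so that only pods labeled app: web can reach pods labeled app: db, and only on the database port:

```yaml
# Hypothetical example: microsegmentation by label rather than by subnet.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: db          # the pods being protected
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web     # the only allowed clients
    ports:
    - protocol: TCP
      port: 5432       # assumed database port
```

The design point is that policy follows workload labels, not IP addresses, which is what makes segmentation tractable when containers come and go by the second.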

I’ve taken the test setup for OpenContrail with OpenShift, and forked and modified it to create a pure demo cluster of OpenContrail + OpenShift (thus including Kubernetes), showing off the OpenContrail features with Kubernetes and OpenShift. If you learn by doing like me, then maybe best of all, this demo cluster is also open source and Ansible-automated, easy to stand up or tear down on AWS: just a few commands take you from nada to running OpenShift and OpenContrail consoles with a running sample app. Enjoy getting your hands dirty, or sit back and watch the demo video.

For those that want to replicate the demo, here's the link I mentioned in the video: https://github.com/jameskellynet/container-networking-ansible