#SimpleAF Networking in 2018 by James Kelly


Last week at its flagship customer event, NXTWORK, Juniper Networks set a valiant vision for its role in the future of networking: “Engineering. Simplicity.” Next week as we take respite from engineering and set some 2018 goals, simplicity sounds good. Here are some of my ideas inspired by engineering for simplicity and Juniper’s new #simpleAF tag. Simple as fishsticks…of course.

Da Vinci called simplicity the ultimate sophistication. It comes more easily to those solving more primitive challenges, but maybe you, like Juniper, audaciously tackle cloud-grade problems, and in such domains and at such scale, simplicity is anything but simple to achieve.

The thing about creating #simpleAF simplicity is that it’s not just about tools or products. If you’re a network operator, another tool won’t make a revolutionary or lasting impact toward simplifying your life any more than the momentary joy of a holiday gift. Big impacts and life changes start inside out. They don’t happen have-do-be; rather, they are be-do-have. Juniper is doing its own work to put simplicity at the core of its being, but this article, besides some gratuitous predictions, is about transforming your network operations, putting simplicity at your core for a happier, more prosperous 2018.

Be… the NetOps Change

To be the engineer of network simplicity, it’s time to lose the title of network admin, operator or architect and embrace a new identity as a:

Network Reliability Engineer

Just like sysadmins have graduated from technicians to technologists as SREs, the NRE title is a declaration of a new culture and serves as the zenith for all that we do and have as engineers of network invincibility. And where could invincibility be more important than at the base of the infrastructure that connects everything?

Start with this bold title, and fake it until you make it who you truly are.

Do… It Like They Do on the DevOps Channel

Just like SREs describe their practices as DevOps, network reliability engineering embraces DevNetOps.

While Dev and (app) Ops are working closely together atop cloud-native infrastructures like Kubernetes, the cluster SRE is the crucial ops role that creates operational simplicity by designing for separation of concerns. Similarly, the NRE can design simplicity by offering up an API contract to the network for its consumers—probably the IaaS and cluster SREs, in fact.

The lesson here is that boundaries are healthy. Separate concerns by APIs and SLA contracts to deliver simplicity to yourself as well as up the chain, whether that is to another overlay network or another kind of orchestration system.

It’s important that your foundational network layers achieve simplicity and elegance too. Trying to put an abstraction layer around and over top of a hot mess is a gift to no one, least of all yourself if it’s your mess.

So how do we clean up the painful mess that is the job of operating networks today? I’ve examined borrowing the good habits of SREs and DevOps pros before, in the shape of DevNetOps blogs and slideshares. Here is a quick recap of good behaviors to move you from the naughty to the nice list, along with my predictions for 2018.

1.     Start with Networks as Code
Prediction: When people say the network CLI is dead, they jump straight to thinking about GUIs. Well, for provisioning changes, you ought to start thinking about Eclipse Che or an IDE instead of product GUIs, and start thinking about GUIs more for observability and management by metrics. Networks as code start with good coding logistics. This coming year, I think we’ll see DevNetOpserati practice this simple truth and realize the harder one: networks as code work better on top of automated networking systems themselves. In other words, networking and config as code belong on top of SDN “intent-driven” systems, not box-by-box configurations in your GitHub repo; nevertheless, that may be a good step depending on where you are in your journey.
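
To make networks as code a bit more concrete, here is a minimal sketch (in Python, not any particular vendor’s tooling) of rendering device configuration from a declarative source of truth kept under version control; the intent.yml file, the template and the field names are all hypothetical.

```python
# Minimal networks-as-code sketch: render config from a declarative,
# reviewed source of truth instead of hand-editing boxes.
# Requires: pip install pyyaml jinja2 (filenames and fields are hypothetical).
import yaml
from jinja2 import Template

TEMPLATE = Template("""\
{% for iface in interfaces %}
set interfaces {{ iface.name }} unit 0 family inet address {{ iface.address }}
{% endfor %}
""")

def render_config(intent_path="intent.yml"):
    """Turn reviewed, versioned intent into device configuration text."""
    with open(intent_path) as f:
        intent = yaml.safe_load(f)
    return TEMPLATE.render(interfaces=intent["interfaces"])

if __name__ == "__main__":
    # In a real pipeline this output would be diffed, tested and handed to an
    # SDN or intent-driven system -- not pushed box-by-box.
    print(render_config())
```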

2.     Orchestrate Your Timeline as a Pipeline
Prediction: It follows from coding habits that you adopt a review and testing process for continuous integration (CI). While vendors dabble in testing automation services and simulation tools, I predict that we will see more of a focus on these in 2018, along with opinionated tools that orchestrate the process workflow of CI and eventually continuous delivery/deployment (CD). This is whitespace for vendor offerings right now, and the opportunity is ripe for truly upleveling operator simplicity with process innovation. While the industry talks about automation systems like event-driven infrastructure (EDI) and continuous response, mature DevOps tooling is ready and waiting for DevNetOps CI process pipeline automation. Furthermore, it’s more accessible to codify or program, and it carries a higher reliability ROI.
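
As a flavor of what a CI gate might check before a change is merged, here is a hedged, pytest-style sketch; the rules and the stand-in rendering helper are illustrative assumptions, not any vendor’s test suite.

```python
# Illustrative CI gate: validate a rendered candidate config before merge.
# Run with pytest; render_config() is a stand-in for the real rendering step.
import re

def render_config():
    return "set interfaces ge-0/0/0 unit 0 family inet address 10.0.0.1/31\n"

def test_no_plaintext_secrets():
    # Block obviously unsafe artifacts from ever reaching the network.
    assert "password" not in render_config().lower()

def test_point_to_point_links_use_slash31():
    # Example house rule: point-to-point addresses must be /31s.
    for line in render_config().splitlines():
        match = re.search(r"address \S+/(\d+)$", line)
        if match:
            assert match.group(1) == "31"
```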

3.     Micro and Immutable Architecture
Prediction: 2017 was the year everyone went cuckoo for Kubernetes and containers. Containers have mainstream adoption for many kinds of applications, but networking systems from OSs to SDNs to VNFs are all lagging on the curve of refactoring into containers. I’ve reported on how finer-grained architectures are the most felicitous for DevOps, DevNetOps, and the software and hardware transformation that networking needs in order to achieve the agility of CD. We must properly architect before we automate. We started to see containers hit some SDN systems in 2017, but I predict 2018 will be the year we start to see VNF waypoints as containers with multi-interface support in CNI and a maturation of higher-order networking in the ruling orchestrator, Kubernetes.

4.     Orchestrated CD from Your Pipeline
Prediction: I’m doubtful that we’ll hit this mark in 2018 in the area of DevNetOps, mostly because of some of the above prerequisites. On the flip side, it’s obvious that in 2018 we are going to see the network play a big role in micro-services CD thanks to the rise of service meshes that do a much better job of canary and blue-green deployments. CD for actual networking systems is likely experimental in 2018, or bleeding edge for some SDN systems.
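
To illustrate what a service mesh brings to canary and blue-green deployments, here is a toy traffic-splitting sketch in plain Python; in practice the weighting lives in the mesh’s routing rules rather than application code, and the version names and error rates are made up.

```python
# Toy canary routing: send a small, adjustable fraction of traffic to the new
# version and compare its error rate to the baseline before shifting more.
import random

CANARY_WEIGHT = 0.05   # start with 5% of requests on the canary

def pick_backend():
    return "app-v2-canary" if random.random() < CANARY_WEIGHT else "app-v1"

def promote_or_rollback(canary_error_rate, baseline_error_rate):
    """Only shift more traffic if the canary is at least as healthy."""
    if canary_error_rate <= baseline_error_rate:
        return "increase canary weight"
    return "roll back to v1"

if __name__ == "__main__":
    sample = [pick_backend() for _ in range(1000)]
    print(sample.count("app-v2-canary"), "of 1000 requests hit the canary")
    print(promote_or_rollback(canary_error_rate=0.002, baseline_error_rate=0.004))
```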
 

5.     Resiliency Design and Drills
Prediction: This is the fun practice of chaos engineering, designing and automating around failures to stave off black-swan catastrophes. This design is already showing up as a preference for more, smaller boxes instead of fewer, larger ones. Most enterprises are also getting around to SD-WAN to use simultaneous hybrid WAN connectivity options. There is more to do here in terms of tooling and testing drills in CI pipelines, as well as embracing the “sadistic” side of the NRE culture that kills things for fun to measure and plan for invincibility or automatic recovery. In 2018, I think this will continue to focus on evolving availability designs until we make more progress in areas 2 and 3 above.
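
A resiliency drill can start as simply as removing a random element from a model of the fabric and checking that everything still reaches everything else. The five-node leaf/spine topology below is made up purely for illustration.

```python
# Minimal chaos-style drill: kill a random node in a (made-up) topology model
# and verify the surviving nodes remain fully connected.
import random
from collections import deque

TOPOLOGY = {  # hypothetical leaf/spine adjacency
    "spine1": {"leaf1", "leaf2", "leaf3"},
    "spine2": {"leaf1", "leaf2", "leaf3"},
    "leaf1": {"spine1", "spine2"},
    "leaf2": {"spine1", "spine2"},
    "leaf3": {"spine1", "spine2"},
}

def still_connected(graph, dead):
    alive = {n: nbrs - {dead} for n, nbrs in graph.items() if n != dead}
    start = next(iter(alive))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in alive[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == set(alive)

if __name__ == "__main__":
    victim = random.choice(list(TOPOLOGY))
    print(f"killing {victim}: fabric survives = {still_connected(TOPOLOGY, victim)}")
```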
 

6.     Continuous Measurement
Prediction: The SRE and NRE culture is one of management by metrics; thus, analytics are imperative. There is always progress happening in the area of telemetry and analytics, and I know at Juniper we continue to push forward OpenNTI, JTI and AppFormix. 2018 beckons with the opportunity to do more with collection systems like Prometheus and AppFormix, employing narrow-AI deep learning for the first time to surface new insights beyond normal human observation.
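
On the collection side, here is a minimal sketch of exporting a device metric for Prometheus to scrape, using the standard Python client; the device and interface names, and the way values are sampled, are placeholders rather than a real telemetry integration.

```python
# Minimal telemetry exporter sketch using the Prometheus Python client.
# pip install prometheus-client; device/interface names and values are placeholders.
import random
import time
from prometheus_client import Gauge, start_http_server

IF_UTIL = Gauge("interface_utilization_ratio",
                "Interface utilization as a ratio of line rate",
                ["device", "interface"])

def poll_device():
    # Stand-in for a real telemetry subscription (gRPC/JTI, SNMP, etc.).
    for ifname in ("ge-0/0/0", "ge-0/0/1"):
        IF_UTIL.labels(device="leaf1", interface=ifname).set(random.random())

if __name__ == "__main__":
    start_http_server(9100)   # exposes /metrics for Prometheus to scrape
    while True:
        poll_device()
        time.sleep(10)
```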

7.     Continuous Response
Prediction: With the 2017 rage around intent-based or intent-driven networking, there is sure to be progress in 2018. While the past few years have focused on EDI, I think the most useful EDI actions are largely poised to get subsumed into SDN systems with controllers in them, similar to how Kubernetes controllers implement continuous observation and correction toward the declared state. There is, however, a tribe of NREs that look to automate across networking systems and other IT systems like incident management, ticketing and ChatOps tools. As I wrote about in May, I think the maturing serverless or FaaS space will eventually win as the right platform for these custom EDI actions.
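
To show the shape of such a continuous-response action, here is a hedged sketch of a FaaS-style handler reacting to an alert event; the event schema, remediation step and ChatOps call are hypothetical placeholders, not a specific platform’s API.

```python
# Sketch of an event-driven "continuous response" action in a FaaS style.
# The event schema and remediation steps are hypothetical placeholders.
import json

def notify_chatops(message):
    print(f"[chatops] {message}")   # placeholder for a ChatOps/ticketing call

def drain_interface(device, interface):
    print(f"draining {interface} on {device}")   # placeholder for a controller API call

def handler(event, context=None):
    """Entry point a serverless platform would invoke on an alert."""
    alert = event.get("alert", {})
    if alert.get("type") == "link_flap":
        drain_interface(alert["device"], alert["interface"])
        notify_chatops(f"auto-drained {alert['device']}:{alert['interface']} after link flaps")
        return {"action": "drained"}
    return {"action": "none"}

if __name__ == "__main__":
    sample = {"alert": {"type": "link_flap", "device": "leaf1", "interface": "ge-0/0/1"}}
    print(json.dumps(handler(sample)))
```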

Have… a Happier Holiday

As an NRE, it’s not just about doing DevNetOps behaviors or processes, nor is it about having the greatest tools or code. When you’re network reliability engineering for simplicity, it’s equally about what’s really important when you take the NRE cape off and go home: not getting called in, and enjoying a happier holiday. And so, simplicity is what I wish for you this holiday season, and for the next one, may you be further down the road of engineering simplicity.

image credit Manuchi/Pixabay

Forging DevOps Culture with Hedge-fund Flair by James Kelly


People: your most important resource and your greatest predicament on the path to DevOps potency.

When the DevOps consultants recess and you need to scale a pilot-project team’s savvy, how do you affect the wider organization with DevOps principles?

Balancing the ingredients of this so-called mentality is trickier than revamping tools and processes. We all know to let tooling lead thy process, and process lead thy tooling. We know the approach is a rolling upgrade, not a mass reboot.

But in the plethora of chapter and verse on DevOps, cultural principles are still scarce—not another definition, nor “automate everything,” nor the trite “dev and ops working closely,” but real principles of cultural behaviors, their reasoning and an implementation track record.

When I was poring over the pages of Principles by the Steve Jobs of investing, Ray Dalio, I was expecting to learn about life, finance and business from this famed hedge-fund investment and business guru. I did. I also realized that Ray’s high-performing investment and management principles codify common aspects of the DevOps mentality, with some new ideas and revisions. And he’s got the CEO and CIO track record to support it, only his c-level ‘I’ stands for investment.

In the spirit of the ‘S’ for sharing in DevOps’s CALMS, Ray has provided a principles manifesto in clear, practical terms. I won’t reveal them all—I encourage you to read the book for that—but here are five of his greatest principles, distilled and steeped with my own perspective for the DevOps anthology.

1. Expedite Evolution, Not Perfection

From the opening biography, we come to know Ray as a continual learner by trial and error. He’s always looking for lessons in failures to carry forward, to do it better next time. He doesn’t regret failures; he values them more than successes because they provide learning.

Ray tells how he wouldn’t be where he is today—one of TIME’s top-100 most influential people in the world—if he had not hit rock bottom, having to let go of all his employees and forced to borrow $4000 from his dad to pay household bills until his family could sell their second car.

Because Ray upcycles painful mistakes into lessons and principles, learning and efficiency compound. He embraces evolutionary cycles, and knows a thing or two about compounding. Our human intelligence allows us to falter and adapt in rapid cycles that compound wisdom, without waiting generations for the effects. This iterative, rather than intellectual, approach performs better, with the added benefit that, being experiential, you know it works.

If you’re a DevOps advocate, your Kaizen lightbulb may have lit. Kaizen is continuous learning: as I say, it’s the most important of all continuous practices in DevOps—and in life. Drawing from Ray’s rapid iteration of trial, error, reflect and learn, we see how he pairs Kaizen with Agile, values learning from failure, and takes many small quick steps for faster evolution.

To solidify the value behind this concept pairing, imagine a fixed savings interest rate, but change the cycle. What’s better: 12% annually or 1% monthly? “Periods do Matter” in this Investopedia article will show you that shorter cycles are better than longer ones. That is the technical reasoning behind why failing faster leads to better evolution.
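
For the arithmetic behind that comparison, here is a quick back-of-the-envelope check (same principal, the two rates from the question above):

```python
# 12% once a year vs. 1% every month on the same principal.
principal = 1000.0
annual = principal * 1.12            # one 12% step
monthly = principal * (1.01 ** 12)   # twelve 1% steps, compounding
print(f"annual:  {annual:.2f}")      # 1120.00
print(f"monthly: {monthly:.2f}")     # ~1126.83 -- more, smaller cycles win
```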

In another great read, 4 Seconds, Peter Bregman exemplifies how to manage learning and failure in business by telling the story of teaching his daughter to ride a bike without training wheels. Managing is knowing just the right time to step in and catch her. Too soon and she won’t learn to rebalance herself. Too late and...wipeout! He explains, “Learning to ride a bike, learning anything actually, isn't about doing it right: it's about doing it wrong and then adjusting. Learning isn't about being in balance, it's about recovering balance. And you can't recover balance if someone keeps you from losing balance in the first place.”

In summary, allow failure, cycle quickly and record the lessons. By depriving your people of the opportunity to fail, you deprive them of the opportunity to succeed—and the opportunity to improve. Breed a culture of rapid feedback and experimentation with guardrails, allowing failure without fatality.

2. Triangulate and Be Actively Open Minded

DevOps aficionados are familiar with “fail fast,” Agile and Kaizen. What’s further interesting about Ray is how he allows for failure and equally reaches for high standards. And beyond technology, excellence is rarely discussed in DevOps circles.

Ray pursues life’s best. “You can have virtually anything you want, you just can’t have everything you want,” he says. Aside from his uncompromising principles in hiring and maintaining excellent people, Ray insists on excellent decision making to instill quality into evolution.

If failure doesn’t inform progress, “fail fast” falls flat. Just like machine learning uses new and quality data to improve, our cycle progress is proportional to the quality and newness of the abilities and information we use to pursue our goals.

The approach Ray hammers again and again is triangulation: exploring opinions different from his own or the first one offered up. Varying judgments can’t all be right, but understanding different viewpoints is like making a quantum leap in an evolutionary cycle compared to learning from one source.

Ray’s dramatic story of receiving a cancer diagnosis indelibly impresses the importance of triangulation.

Obviously shaken up, he began estate planning and spending more time with family, but he also consulted three experts. The first two doctors had wildly different prognoses and proposals for treatment or surgery. So he got them speaking with one another; they were respectful in understanding each other’s take, and Ray learned a lot. Finally, the third doctor suggested a regimen of no treatment and no surgery, but instead monitoring a biopsy of the cells every six months, because his data showed treatments and surgery didn’t necessarily extend life in cases of cancer of the esophagus.

The three specialists, Ray and his family doctor agreed that this final approach wouldn’t hurt. The learning value of this triangulation aside, the outcome of the story will floor you: Ray’s first biopsy showed that he didn’t have any cancerous cells.

Back to DevOps, the CALMS ‘S’ for sharing is brilliant, but we can push beyond sharing. Actively seeking, not only sharing, information is key to boosting the quality of our decision making and evolution. Companies like Google do this with a manic focus on data, and data is just one avenue of information that may or may not go against our own beliefs.

In general, DevOps leaders must advocate for a culture and habits of active open mindedness, seeking opinions of other believable people and data. Like Ray, assertively explain your own opinions, while maintaining poise and humility to change your mind.

3. Radical Truth and Transparency

At the heart of Ray’s high-performing company, Bridgewater, is a culture of radical truth and transparency. Their patriarch trusts in truth, and loves his people like family, but also equitably protects the whole more than any part. For the greater good he doesn’t hold back in accurate evaluation, root-cause analysis, and openly pointing out problems, even in people. “Love the people you shoot,” he writes, “Tough love is both the hardest and most important kind of love to give.”

The firm keeps an internally available “baseball card” on each employee’s strengths and weaknesses synthesized from evidence-based patterns and a collection of business tools with psychographic-data crunching backends. Weaknesses aren’t misconstrued for weak people, and employees aren’t pigeonholed; the transparency enables orchestrating employees’ best work and identifying their believability in decision making.

For decades, Bridgewater has been using data on people and their track records to do believability-weighted decision making with the help of computers. The company’s “Idea Meritocracy” tools like the Dot Collector Matrix collect data and help teams make believability-weighted decisions, even instantly in meetings. While this was pioneered for investment decisions, Bridgewater later adopted the system for management decisions. Ray also hints he’s working toward offering the system as a service.
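
As a rough illustration of believability weighting (not Bridgewater’s actual algorithm, which isn’t public in this detail), here is a toy sketch where each vote counts in proportion to the voter’s track record on the topic; the people and scores are made up.

```python
# Toy believability-weighted vote: each opinion counts in proportion to the
# voter's (hypothetical) track record on this kind of decision.
def weighted_decision(votes, believability):
    total_weight = sum(believability[p] for p in votes)
    return sum(votes[p] * believability[p] for p in votes) / total_weight

if __name__ == "__main__":
    # 1.0 = strongly for, 0.0 = strongly against (made-up people and scores)
    votes = {"alice": 0.9, "bob": 0.2, "carol": 0.7}
    believability = {"alice": 0.8, "bob": 0.3, "carol": 0.6}
    score = weighted_decision(votes, believability)
    print(f"believability-weighted score: {score:.2f}  (>0.5 leans 'for')")
```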

This principle is about being ruthless in demanding integrity, honesty, accuracy and openness. Common workplace biases like loyalty, niceness, confidentiality and secrecy might seem safe or well-intentioned in small contexts, but are ultimately self-defeating of the big-picture success of the whole.

Every person and organization has a unique twist on values and workplace politics, but while Bridgewater’s success speaks volumes, its radically straightforward approach is also reported to be the preference of techies and millennials that make up many DevOps-forward teams.

Embracing DevOps results in more than dev and ops working together—it’s working more closely with the business too. While DevOps leaders can’t control the culture in the wider organization, they can shape the sub-culture of their own teams. Not only is it more manageable at that scale, but this cultural principle and corresponding tools seem a natural fit for IT workers. Just maybe, as the role of IT grows in most businesses today, the culture might catch on.

4. Be Candid and Fearless, Rather Than Blameless

Blameless post-mortem or retrospective meetings are not uncommon in a DevOps culture.

You can probably guess how Ray might see this differently.

If your culture is blameless, there’s less accountability, so you’re more likely to miss lessons and chances for improvement. It’s not just about fixing the machine, either; it’s about helping individuals. And if someone is truly not capable, you could fail to see it if you don’t dispassionately trace the blame.

Accuracy requires great diagnosis, and Ray’s method for root-cause analysis makes Toyota’s 5 Whys look skin deep.

Ray advocates that the people responsible for investigation report up independently of where the diagnosis happens, so there’s no fear of recrimination. “Remember people tend to be more defensive than self-critical. It is your job as a manager to get at truth and excellence, not to make people happy. Everyone's objective must be to get the best answers, not the answers that will make the most people happy.”

Having said that, Bridgewater’s culture also pushes everyone to tell the truth without fear of adverse consequences from admission of mistake.

When an employee missed placing a trade for a client, it ended up costing millions for the younger, smaller, less-capitalized Bridgewater. With the whole company watching, so to speak, Ray decided not to fire this employee—he knew that would lead to a culture of people hiding their mistakes instead of bringing them to light as soon as possible.

With respect to handling missteps, this hedge funder would admonish blamelessness in favor of candor and staff fearlessness. It has the efficiency of earlier learning and earlier redesign for prevention. It also doesn’t eschew accountability, encouraging individual improvement that eventually lifts the whole team.

5. Management by Machine and Metrics

Techies will appreciate how Ray talks about his business as a machine.

If you have great principles that guide you from your values to your day-to-day decisions, but don't have a way of making sure they're systematically applied, you leave their usefulness to chance. We need to cement culture into habits and help others do so as well. Systematizing any cultural principle into a process, tool, or both, “it typically takes twice as long,” Ray says, “but pays off many times over because the learning compounds.”

Bridgewater has always put investment principles into algorithms and expert systems, and has long since run the rest of its business by software machinery as well.

Is this just the well-worn “automate everything” DevOps call?

Automation advances scale, performance, correctness, consistency and instrumentation. But high-performing businesses like Bridgewater also manage by metrics: they compare outcomes and measurements to goals.

While data-driven decision making is prominent these days, data-driven measurement and accountability is less common. We have KPIs, QBRs and performance reviews, but how many teams are consistently managed by metrics? We more easily look forward than take an objective look in the mirror, even though it’s critical for evolution.

Goal-setting zealots argue that goals must be measurable, and Ray’s advice takes it one step further: Don’t look at the numbers you have and adapt them to your needs. Instead, “start with the most important questions and come up with the metrics that will answer them,” he says. “Remember any single metric can mislead.” Furthermore, like big-data analytics, data garbage in equals information garbage out.
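
In SRE and NRE terms, managing by metrics often boils down to comparing a measurement against the objective it was chosen to answer, error-budget style. A minimal sketch with made-up numbers:

```python
# Compare a measured outcome to the goal it was chosen to answer,
# error-budget style. The target and samples are made up.
SLO_AVAILABILITY = 0.999   # the goal, stated up front

def error_budget_remaining(good_minutes, total_minutes):
    allowed_bad = (1 - SLO_AVAILABILITY) * total_minutes
    actual_bad = total_minutes - good_minutes
    return 1 - (actual_bad / allowed_bad)

if __name__ == "__main__":
    # 30 days of minutes, 20 of them bad
    remaining = error_budget_remaining(good_minutes=43200 - 20, total_minutes=43200)
    print(f"error budget remaining: {remaining:.0%}")   # ~54%
```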


Be the Change

Ray also says, “An organization is the opposite of a building: its foundation is at the top.”

But we all know stories of change percolating from all levels of organizations, communities and countries. If you’re not a CEO like Ray was, you can still make a meaningful difference bottom-up or managing your own team, leading by example.

You could simply publish your team’s principles, create a tool, or ignite behaviors you want to spread. Of the DevOps people-process-technology, people are your most important resource; so forge the principles of their operating systems: sharpen, tweak, prioritize and balance. With the transformation door open in your digital business and DevOps journey, there’s no better time to make an invaluable mark on culture—in IT and beyond.

image credit Jacob Lund/Shutterstock

Micro-services: Knock, Knock, Knockin’ on DevNetOps’ Door by James Kelly


A version of this article was published on October 13, 2017 at TheNewStack https://thenewstack.io/microservices-knock-knock-knockin-devnetops-door/

“You’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?” – Dirty Harry

Imagine putting this famous question to the sentiment around IT deployments: it points to IT and even business performance with scary accuracy. High performers – “The most powerful guns in the world,” to borrow Harry’s words – pull the trigger on deployments with high confidence, while deployment dread is a surefire sign of lower performers, advises the State of DevOps report.

In his talks, Gene Kim, author of The DevOps Handbook, corroborates that deployment anxiety is associated with hapless businesses half as likely to exceed profitability, productivity and market share goals, and evidently with lower market-cap growth.

We may conclude high-performing teams wear the badge of confidence because they’re among the ranks in the academy of agile, deploying more often – orders of magnitude more often. Their speed is in taking lots of little steps, so they have regular experience with change.

And feeling luckier is also an effect of actually being luckier. Data show high performers break things less often; and when they do, there are more clear-cut forensics. They bring in better MTTF and MTTR than lower performers’ old-fashioned police work because the investigation and patching proceeds quickly from the last small step, instead of digging for clues in bigger deliveries peppered with many modifications.

Less, more often, is better than more, less often

Imagine conducting IT on two technology axes: time and space. If it’s clearly healthier to automate faster, smaller steps with respect to the timeline or pipeline, then consider space or architecture. Optimizing architecture design and orchestration to recruit nimble pipeline outputs, what stands out in today’s lineup of characters? Affirmative, ace: micro-services.

The principles of DevOps have been around a while and in emerging practice for more than a decade, but the pivotal technologies that cracked DevOps wide open were containers and micro-services orchestration systems like Kubernetes. Looking back, it’s not so surprising that smaller boundaries and enforced packaging from developers, preserved through the continuous integration and delivery pipeline, make more reliable cases for deployment.

A micro-services architecture isn’t foolproof, but it’s the best partner today for the speed and agility of frequent or continuous deployment.

Micro-services networks and networking as micro-services

In a technical trial of service meshes versus SDN, there are three key positions networking takes in today’s micro-services scene:

  1. In a micro-services design, the pieces become smaller and the intercellular space – the network – gets bigger, busier, and hence, vital. Also, beyond zero-trust-style protection of the micro-services themselves, it’s important to have this network locked down.
  2. Service discovery, service/API gateways, service advertising with DNS, and service scale-out or -in with load balancing are all players in networking’s jurisdiction.
  3. Beyond micro-services, any state replication, backup, or analytics over an API, a volume, or a disk, also rides on the network.

Given the importance of networking to the success of micro-services, it’s ironic that networking components are mostly monolithic. Worse, deployment anxiety is epidemic: network operators have lengthy change controls, infrequent maintenance windows, and new code versions are held for questioning for 6-18 months and several revisions after availability.

A primal piece on DevNetOps cites five things we can borrow from the department of DevOps to remedy network ops in time and space, starting with code, pipelines and architecture.

Small steps for DevNetOps

Starting into DevNetOps is possible today with Spinnaker-esque orchestration of operational stages: a network-as-code model and repository would feed into a CI/CD pipeline for all configuration, template, code and software-image artifacts. With new processes and skills training of networking teams – like coding and reviewing logistics as well as testing and staging simulation – we could foil small-time CLI-push-to-production joyrides and rehabilitate seriously automation-addled “masterminds” who might playbook-push-to-production bigger mistakes.

The path to corrections and confidence on the technology time axis begins with automating the ops timeline as a pipeline with steps of micro modifications.
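
One way to make micro modifications enforceable rather than aspirational is a pipeline gate that simply refuses oversized changes. A hedged sketch; the threshold and the diff format are arbitrary assumptions:

```python
# Pipeline gate sketch: reject config changes that aren't "micro".
# The threshold is arbitrary; a real pipeline would tune it per device role.
MAX_CHANGED_LINES = 20

def is_micro_change(unified_diff):
    changed = [line for line in unified_diff.splitlines()
               if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
    return len(changed) <= MAX_CHANGED_LINES

if __name__ == "__main__":
    diff = ("+set interfaces ge-0/0/0 mtu 9192\n"
            "-set interfaces ge-0/0/0 mtu 1514\n")
    print("gate passes:", is_micro_change(diff))
```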

Old indicted ops practices can be reinvented by the user community with vendor and open-source help for tooling, but when it comes to architecture, the vendors need to lead. Vendors are the chief “Dev” partner in the DevNetOps force, and motives are clearer than ever to build a case to pursue micro-services.

Small pieces for DevNetOps: Micro-services

After years of bigger, badder network devices – producing some monolithic proportions so colossal they don’t fit through doors – vendors can’t ignore the flashing lights and sirens of cloud, containers and micro-services.

It’s clear for DevNetOps, like for DevOps, micro-sized artifacts are perfectly sized bullets for the chamber of an agile pipeline. But while there’s evidence of progress in the networking industry, there’s a ways to go.

Some good leads toward a solution include Arista supporting patch packages separate from their main EOS delivery; the OCP popularizing software and hardware disaggregation in its networking project; and Juniper Networks building on disaggregation by supporting node splicing and universal chassis for finer-grained modularity and management boundaries. Furthermore, data center network designs of resilient scale-out Clos network fabrics with pizza-box-sized devices are gradually favored over large aggregation devices. And in software-defined networking, projects like OpenContrail are now dispatched as containers.

In the world of DevOps, we know that nothing does wonders for deployment quality like developers threatened with the prospect of a page at 2am. But for DevNetOps that poetic justice is missing, and the 24/7 support between the vendor-customer wall hardly subdues operator angst when committing a change to roll a deployment. Moreover, the longer the time between a flawed vendor code change and the time it’s caught, the more muddled it gets and the tougher it is to pin.

The best line of defense against these challenges is smaller, more-frequent vendor deliveries, user tests and deployments. Drawing inspiration from the success of how DevOps was bolstered by micro-services, imagine if while we salute DevNetOps continuous and agile operations today, we compel vendors and architectural commissioners to uphold designs for finer-grained felicitous micro-services, devices and networks for a luckier tomorrow.


 

For more information on defining DevNetOps and DevSecOps, see this article and my short slideshare: