
The Cloud Native Computing Foundation (CNCF) is just a year old, but at the start of its second conference in Berlin, it unveiled a number of initiatives that aim to improve support for some of the major container technologies.

Containerd, Docker’s core container runtime, together with rkt, CoreOS’s container runtime, and gRPC, a high-performance remote procedure call technology, have been accepted by the Technical Oversight Committee as incubating projects within CNCF.
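To give a flavour of gRPC, services are described in an interface definition language (Protocol Buffers) and the code for clients and servers is generated from that definition. A minimal, purely illustrative service definition, with hypothetical names, might look like this:

```protobuf
syntax = "proto3";

package tickets;

// Hypothetical service: look up seat availability for an event over gRPC.
service Availability {
  rpc GetAvailability (AvailabilityRequest) returns (AvailabilityReply);
}

message AvailabilityRequest {
  string event_id = 1;
}

message AvailabilityReply {
  int32 seats_remaining = 1;
}
```

From a definition like this, the gRPC toolchain generates strongly typed client and server stubs in each supported language, which is part of what makes it attractive for communication between microservices.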

CNCF’s vision is to promote interoperability and portability across different container execution environments. In other words, it should be possible for an application packaged as a container with one vendor’s tooling to run, and communicate with other containerised services, in any compliant environment, such as a Kubernetes cluster.

A cloud-native application runs in such containers and can make use of additional code, or microservices, running in different containers.

While Docker and Kubernetes are well established, many of the technologies presented during the opening keynote are very much unknown, particularly in mainstream enterprises.

What is even more remarkable is that some of the key open source technologies began just a few years ago as tiny operations, and are now supported by leading open source companies with hundreds of code contributors.

One of these is Prometheus. Brian Brazil, founder of Robust Perception, worked on the original project. “It started in 2012 as one company with two people,” he said. “Prometheus now has more than 300 contributors, and 500 companies are using it in production.”

Prometheus is a metrics monitoring system which uses a time series database and has its own query language. A few enterprises are now engineering IT systems to use containers to improve the way software is developed, and Prometheus is among the tools of choice for monitoring such systems.
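Prometheus’s query language, PromQL, expresses computations over time series; a common example is taking the per-second rate of a monotonically increasing counter over a window. The following stdlib-only Python sketch, with made-up sample data, illustrates the kind of calculation involved (this is an illustration of the concept, not Prometheus’s actual implementation):

```python
# Sketch of the computation behind a PromQL expression such as
# rate(http_requests_total[1m]): estimate the per-second increase
# of a cumulative counter from samples within a time window.

def rate(samples):
    """Per-second increase across a window of (timestamp, value) samples."""
    if len(samples) < 2:
        return 0.0
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# Hypothetical counter sampled every 15 seconds over one minute
window = [(0, 100), (15, 130), (30, 160), (45, 190), (60, 220)]
print(rate(window))  # 2.0 requests per second
```

Real Prometheus also handles counter resets and extrapolation at window boundaries, which this sketch omits.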

Justin Dean, senior vice-president for technical operations at Ticketmaster, said: “We use Docker for containers and the Kubernetes ecosystems, plus we make heavy use of Prometheus. People love how much easier it is to handle time series with Prometheus.”

Ticketmaster was one of the major organisations on a CNCF panel discussion to showcase the use of containers in big business. “It is really hard to be nimble and roll out new products and features compared to startups,” he said. “A few years ago, we started to realise we needed to get faster and deliver software faster, otherwise we would start losing to competitors.”

Using a DevOps approach

Industry consensus is to use a DevOps approach to software development, delivery and deployment, giving coding teams responsibility for all aspects of the software they build.

“Anyone who has been through the process understands the work required,” said Dean. “We wanted autonomous software delivery, where every team had everything they needed to deliver their product to market, and be responsible even for profit and loss. We were trying to create mini micro businesses all across the company that could move as fast as the competition instead of large, monolithic teams.”

The challenge in achieving this is both cultural and technical. There were too many tools required for the engineers at Ticketmaster to use. “We quickly got to a situation where we needed to revamp the tech, and ended up in the container space and the Kubernetes ecosystem,” he said.

Global ticket distribution system Amadeus began its journey in 2014. Eric Mountain, senior expert of distributed systems at the company, said research and development teams also need to buy into the idea that DevOps is the right approach to develop software.

“R&D already has a system that works, so why move? You need an awful lot of communication to convince them it will give them something easier,” he said. “Containers make that conversation easier.”

A shift in culture

Dean agreed the shift in culture was Ticketmaster’s biggest challenge. “We have 350 products and tonnes of teams with different software stacks, and they deliver software differently. There’s a lot of muscle memory built up. When you force a change, it has to be welcome.”

But, like Mountain at Amadeus, Dean found containers made the approach easier. The container effectively ring-fences the code being developed. The coders cannot simply tweak some other piece of code, or even alter the hardware, to make their software work. Essentially, the new code being developed plugs into a pipeline.

“When you deliver software in a container through a pipeline, it forces a massive change,” said Dean. “This single-handedly had more impact than anything else and dovetailed into DevOps.”

The combination of containers, software-based infrastructure and microservices represents a major evolution in the way applications are deployed and managed. It is a fast-paced, changing world, where standards and preferred tools are still emerging. When Amadeus began its journey, Prometheus did not exist, so it needed to engineer its own monitoring, but this was done in a way that allowed it to plug in and replace functionality as the tools ecosystem evolved.

For the traditional enterprise, this is arguably a major roadblock. Unlike old-school enterprise applications that were vertically integrated and monolithic, the new world order is cloud-native, built around loosely coupled containerised applications where developers need to define the bits of code their applications require (known as dependencies).
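In practice, declaring dependencies often means writing a container build file. A minimal, purely illustrative Dockerfile for a small Python service (image name, file names and command are hypothetical) shows the idea:

```dockerfile
# Illustrative Dockerfile: every dependency the application needs
# is declared explicitly and baked into the image.
FROM python:3.9-slim

WORKDIR /app

# Third-party dependencies are listed in requirements.txt
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application code itself
COPY . .

CMD ["python", "app.py"]
```

Because the image carries its dependencies with it, the same container behaves identically on a developer’s laptop and in production, but writing and maintaining these definitions is part of the learning curve Dean describes.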

“There is a pretty big hurdle to get an application into a container,” said Dean. “The barrier to entry is steep and there is a whole new language to learn.”

