How Containers and Microservices Work Together to Enable Agility

Mar 4th, 2016 1:10pm

Enterprise Chief Information Officers (CIOs) used to think about their IT assets primarily in terms of hardware, with software serving as the system of record. Today, the emergence of an application-centric approach is causing many to rethink that view, placing the application first, not the machine.

Open source projects — from new technology companies to established enterprise providers — are driving this innovation. These projects serve as the core to a new stack economy, and they are the underlying foundation for automated, programmable infrastructure.

When IT systems were designed around the client/server paradigm, software was required to run on individual servers. With this approach, server sprawl proliferated. Virtualization brought relief, allowing servers to be used more efficiently.

As virtualization brought greater efficiency and reliability, CIOs started to look at how application development could go faster, especially as the Internet became more widely available.

Virtualization was not meant to speed application development, nor was it intended as a way to build real-time services. It was an era of complex systems hardware, with application development processes that were built on waterfall methodologies. Application development took months, and updates to the application required tedious planning and execution processes.

At first, many looked to service-oriented architecture (SOA), which relies on the premise that an application is composed of components that communicate over a network. Today, SOA can be considered a precursor to microservices.

IBM’s Jason McGee said at DockerCon EU 2015 that SOA missed something, though. It focused on the interface, defining how to talk to the service, but it did not define how teams should be organized, and it did not address the delivery lifecycle.

Today, with microservices, that decomposition is happening and teams are organizing accordingly: many small teams of 5, 10 or 15 people, each working on a part of the service.

For the most part, the old approach was defined by the way monolithic systems were built, McGee said. The interface and the lifecycle were not separated; SOA inherited the conventions of its time, typically built on Java with a particular stack running in a versioned environment.

Service-oriented strategies are now looked at more holistically. REST and JSON give services a way to work together independent of how each team works, McGee said. The model allows developers to use the language that makes sense for the task or for the people on the team. A single language may still dominate within a team, but the overall approach is increasingly polyglot.
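
The interface-first model McGee describes can be made concrete in a few lines of code: a service exposes an HTTP endpoint that returns JSON, and callers depend only on that contract, never on the language behind it. Here is a minimal sketch in Go; the endpoint path and field names are illustrative, not taken from any particular system:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Order is the JSON contract consumers depend on; the implementation
// behind it could be rewritten in any language without affecting callers.
type Order struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// A hypothetical endpoint owned by one small service team.
	http.HandleFunc("/orders/42", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(Order{ID: "42", Status: "shipped"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```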

Microservices reflect a trade-off, McGee said: the flexibility they provide comes with new complexity, so tooling and a management system are needed around the microservices and the containers being packaged.

Fundamentally, developers have to figure out how the services find each other, McGee said. And then there is the monitoring and visibility. How do you recognize what is happening? It becomes a mapping project. There are multiple failure points. It gives rise to immutable infrastructure and speaks to looking at the overall ecosystem more holistically.
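
Service discovery, the first problem McGee names, is commonly solved with a registry that answers DNS queries; Consul, for example, serves registered services as SRV records under the .service.consul domain. Below is a minimal sketch, assuming a Consul-style DNS interface is reachable from the resolver and that a service named "payments" (a hypothetical name) has been registered:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Consul-style registries expose services as SRV records under
	// <name>.service.consul; "payments" is a hypothetical service.
	_, addrs, err := net.LookupSRV("", "", "payments.service.consul")
	if err != nil {
		log.Fatal(err)
	}
	// Each SRV record is one live instance: a host and a port.
	for _, srv := range addrs {
		fmt.Printf("instance %s:%d\n", srv.Target, srv.Port)
	}
}
```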

Frameworks and Fundamentals in Microservices

Companies that had to build scalable systems to meet the unpredictable and sometimes heavy demands of Internet traffic were the first to think about this problem, and they turned to frameworks. A de facto standard for how services are delivered evolved from these systems, McGee observed.

But how does the developer know all the approaches? As it stands, existing frameworks define the processes, which forces the developer to start over if there is a decision to change frameworks. It becomes an evolutionary problem.

Standardization is key, and groups, such as the Cloud Native Computing Foundation (CNCF), aim to pave the way for faster code reuse, improved machine efficiency, reduced costs and an increase in the overall agility and maintainability of applications.

CNCF has a host of member companies on board with this idea, including Apcera, Apprenda, Cisco, Cloudsoft, ClusterHQ, CoreOS, Datawise.io, eBay, Engine Yard, Docker, Google, IBM, Intel, Joyent, Mesosphere, NetApp, Portworx, Rancher, Red Hat, Twitter, VMware, Weaveworks and more.

CNCF is looking at the orchestration level, followed by the integration of hosts and services, by defining APIs and standards through a code-first approach. At the CoreOS Tectonic Summit in New York, Google product manager Craig McLuckie talked about offering Kubernetes to the CNCF and Google’s hope that the software would become a de facto standard for managing containers.

From Google’s point of view, container-based architectures will have to be the standard for companies that need to build scaled-out systems and services. The microservices approach served as the foundation for what we are starting to call “cloud-native computing.” By breaking the application into containers, small teams can specialize, each becoming accomplished at a simple, clearly scoped piece.

The nuance of this approach is reflected in the container itself, which will vary in scope depending on the resources required. Components have different resource demands, such as more processing or more I/O. Packaging them separately allows for more efficient use of resources, and it makes it easier to upgrade components without taking down the application as a whole.

As businesses increasingly tie their products, services and devices to the Internet, more data will be generated and, with it, new ways to use it. That, in turn, will mean new thinking about how applications are developed.

Frameworks will be critical to that evolution, as will the different aspects of what those frameworks do. That means the need, for example, to develop schedulers based upon business requirements. It will also mean a way for more nuances, or semantics, to be built directly into the architecture itself.

Mantl is an open source framework Cisco has developed within the Mesos ecosystem, offering integrations with services such as Consul and Vault from HashiCorp. It is representative of the frameworks emerging from Kubernetes, Mesos and the numerous open source projects that support the microservices ecosystem.

Mantl uses tools that are industry-standard in the DevOps community, including Marathon, Mesos, Kubernetes, Docker and Vault. Each layer of Mantl’s stack allows for a unified, cohesive pipeline, whether managing Mesos or Kubernetes clusters during a peak workload or starting new VMs with Terraform. Whether scaling up by adding new VMs in preparation for launch or deploying multiple nodes on a cluster, Mantl lets the developer work with every piece of a DevOps stack in a central location, without backtracking to debug or recompile code to ensure that the needed microservices will function when called upon.
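
To give a sense of how one piece of that pipeline works, deploying a containerized service through Marathon is a single POST of a JSON app definition to its /v2/apps endpoint. A minimal sketch follows; the master address and the app definition itself are illustrative assumptions, not a Mantl configuration:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A minimal Marathon v2 app definition: run two instances of an
	// nginx container, each with a small CPU and memory reservation.
	app := []byte(`{
		"id": "/web",
		"instances": 2,
		"cpus": 0.25,
		"mem": 128,
		"container": {
			"type": "DOCKER",
			"docker": {"image": "nginx:1.9"}
		}
	}`)

	// The master address is an assumption; substitute your cluster's.
	resp, err := http.Post("http://marathon.example.com:8080/v2/apps",
		"application/json", bytes.NewReader(app))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("Marathon responded:", resp.Status)
}
```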

The obstacles on the journey to simplify IT systems through automation are familiar, Cisco’s Ken Owens noted. How do you connect disparate systems? How do you deal with information security policies around data security and governance? This is where containers enter the picture. Containers have allowed developers and infrastructure teams to work together on internal, test and developer environments.

Microservices reflect a trade-off that is embodied in the complexities that come with distributed services. There needs to be a tool and management system around the microservices and the containers that are getting packaged.

There is a major misconception that monoliths can’t be broken into containers or be cloud native. From an application architecture point of view, there are monoliths and there will be monoliths for some time.

“You can’t take a monolithic ticketing system and containerize it today,” Owens said. “But there are pieces of the app, such as DNS services, security services, networking services — aspects around it that can be containerized.”

The scheduler is the most important part of the system when considering how microservices are used. From Owens’ perspective, the most common setups are built around Mesos with ZooKeeper and Marathon, while Kubernetes is applicable for use cases such as data science, offering higher-speed scheduling.

There are two aspects to schedulers: scheduling the jobs themselves, and maintaining the efficiency and availability of requests. Cisco runs a minimum of three control nodes per set of six main service nodes. This 2-to-1 service-to-control ratio allows for failure while preserving high availability.
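
The arithmetic behind such a layout is majority quorum: a control plane of n nodes (as in a ZooKeeper ensemble) stays available as long as a majority survives, so three control nodes tolerate one failure. A quick worked sketch:

```go
package main

import "fmt"

// tolerableFailures returns how many control-node failures a
// majority-quorum ensemble of n nodes can survive: quorum is
// floor(n/2)+1 nodes, so the remainder may fail.
func tolerableFailures(n int) int {
	quorum := n/2 + 1
	return n - quorum
}

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("%d control nodes -> quorum %d, tolerates %d failure(s)\n",
			n, n/2+1, tolerableFailures(n))
	}
}
```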

This speaks to the conversation we will hear in the year ahead about the emerging microservices layer: how you orchestrate services, the schedulers, the control plane underneath, and how they all connect. It is, in essence, a new stack of orchestration that puts an emphasis on matters such as application portability and self-organized containers.

Addressing Security and Containers in the Enterprise

Creating its own container gives a provider’s customers one standard container to write against. Customers get the tools they need for multi-tenant environments, allowing them to control complex topologies across public cloud services and on-premises environments.

Apcera is a platform for enterprises to deploy containers with security and policy enhancements included. It is all automated in the packaging phase, and there are granular controls over components, such as network policy and resource allocations, so that the customer has a general understanding of their cluster’s security footprint, noted Apcera’s Josh Ellithorpe.

In addition, Apcera has developed a semantic pipeline that offers capabilities to guarantee data integrity. For example, Apcera technology acts as a proxy to the database, so DROP requests do not go to the database directly. Apcera also offers what it calls “policy grammar,” which focuses on resource allocation, network controls, scheduling policies and authorizations. This gets to the heart of how semantics define how a data pipeline is managed. The policy grammar helps describe the fundamental business rules and how the application is deployed for the organization.
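
The database-proxy idea can be sketched as a filter that sits in front of the database and refuses destructive statements before they are forwarded. This is an illustrative stand-in for the concept, not Apcera's implementation:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// guard sits between the application and the database; statements that
// begin with a destructive verb are rejected before being forwarded.
// An illustrative filter only, not Apcera's code.
func guard(stmt string) error {
	fields := strings.Fields(stmt)
	if len(fields) == 0 {
		return nil
	}
	verb := strings.ToUpper(fields[0])
	if verb == "DROP" || verb == "TRUNCATE" {
		return errors.New("blocked by policy: " + verb)
	}
	// In a real proxy, the statement would be forwarded here.
	return nil
}

func main() {
	for _, s := range []string{"SELECT * FROM users", "DROP TABLE users"} {
		if err := guard(s); err != nil {
			fmt.Println(s, "->", err)
		} else {
			fmt.Println(s, "-> forwarded")
		}
	}
}
```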

A Docker image has multiple layers. Apcera detects the framework and calls into a package manager that uses a tag system to describe dependencies. When software is audited in Apcera, the user gets an accurate look at what is there, and audit logs are captured for everything, giving a complete picture. The developer spends less time thinking about which dependencies are needed.

Apcera’s whole idea is to inject transparent security into the process so that it is invisible to the developer. The developer just connects to a database and never sees the policy enforcement in action. The industry is now realizing that policy itself needs a common grammar, and Apcera is, in turn, making its grammar openly available.

Conclusion

Microservices reflect a trade-off: the complexities that come with distributed services mean a tool and management system is needed around the microservices and the containers being packaged. The roots of today’s microservices platforms and frameworks lie in the machines that ran the software for what we once knew as service-oriented architectures.

The emergence of microservices can be viewed as an evolution of the earliest frameworks, developed originally by companies that were the first to scale out programmable, dynamic services serving millions of users. Today, the approach those pioneers developed is the foundation for the mainstream company that understands that software and people are now the heart and soul of its entire organization.

Feature image via Pixabay.

TNS owner Insight Partners is an investor in: Docker.