
Extend Spinnaker Automated Delivery with Machine Learning and Custom Pipeline Logic

Jun 5th, 2018 1:56pm

The open source Spinnaker is a continuous delivery tool, originally developed by Netflix and Google, that can be used to run a development pipeline across multiple cloud deployments. The software has found a home with the OpenStack community. Like OpenStack, Spinnaker streamlines and automates an inherently complex process: packaging resources in a heterogeneous environment.

“In an ideal world, Spinnaker should live inside the OpenStack Foundation, because the approach OpenStack has taken to solving problems in the infrastructure space is very similar to what Spinnaker does in the application delivery space,” Boris Renski, co-founder of Mirantis, recently explained to us. Mirantis uses Spinnaker as a component of its commercially supported Mirantis Application Platform, recently launched in beta.

At the OpenDev conference last month in Vancouver — held in conjunction with the OpenStack Summit — Mirantis’ head of technical and marketing content, Nick Chase, gave a presentation, embedded below, on how to extend Spinnaker’s pipelines with your own custom logic, an approach he called “intelligent delivery.”

Of course, Spinnaker offers customizable pipelines to create deployable artifacts right out of the box. You can automatically scale server groups, or swap out server groups for testing. But Chase offered a number of additional ways to enhance Spinnaker with further intelligent decision-making.

You can extend Spinnaker through your own scripts, for instance, which could conditionally define how artifacts get built; this is similar to how Jenkins works on the continuous integration side of the CI/CD equation. Spinnaker has an API that can be programmed against, and the maintainers are also working on a command-line interface.
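As a rough sketch of what such a script might do, the Python below assembles a pipeline definition and conditionally injects an extra stage. The field names only approximate the general shape of Spinnaker's pipeline JSON (which varies by version), and the canary stage is an illustrative assumption, not a prescribed workflow:

```python
import json


def build_pipeline(app, include_canary=False):
    """Assemble a minimal Spinnaker-style pipeline definition.

    The stage fields below mirror the general shape of Spinnaker's
    pipeline JSON; exact field names vary by Spinnaker version.
    """
    stages = [
        {"type": "bake", "name": "Bake image", "refId": "1",
         "requisiteStageRefIds": []},
        {"type": "deploy", "name": "Deploy", "refId": "2",
         "requisiteStageRefIds": ["1"]},
    ]
    if include_canary:
        # Conditionally inject a stage, just as a build script might.
        stages.insert(1, {"type": "canary", "name": "Canary analysis",
                          "refId": "1c", "requisiteStageRefIds": ["1"]})
        # Rewire the deploy stage to depend on the canary instead.
        stages[2]["requisiteStageRefIds"] = ["1c"]
    return {"application": app, "name": f"{app}-delivery", "stages": stages}


pipeline = build_pipeline("myapp", include_canary=True)
print(json.dumps(pipeline, indent=2))
```

A real script would then push this definition through Spinnaker's API rather than print it.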

Pipelines can be triggered into action by external events, such as a Git commit. In effect, you could use Spinnaker as the basis of a GitOps-style environment. You can also create some pretty sophisticated branch conditioning with the software, as well as set actions based on the current and previous states. “You can have a script that predicts something will happen before it actually happens, and takes actions,” he explained.
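A toy Python stand-in for the kind of state-driven, predictive script Chase describes might look like this; the state names, the error-trend signal and the threshold are all invented for illustration, not part of any Spinnaker API:

```python
def next_action(prev_state, curr_state, error_trend):
    """Pick a pipeline action from the current and previous deployment
    states, plus a simple error-rate trend.

    A toy stand-in for the predictive scripts Chase describes; the
    state names and the 0.5 threshold are illustrative assumptions.
    """
    if prev_state == "DEPLOYING" and curr_state == "FAILED":
        return "rollback"          # react to a state transition
    if curr_state == "RUNNING" and error_trend > 0.5:
        return "pause_pipeline"    # errors rising fast: act *before* failure
    return "proceed"


print(next_action("DEPLOYING", "FAILED", 0.0))  # rollback
```

The interesting branch is the second one: it acts on a trend rather than a completed state change, which is the “predict before it actually happens” idea.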

There are a number of ways to add in this additional logic: directly, through pre-coded logic, or with a policy engine such as the Cloud Native Computing Foundation‘s Open Policy Agent (OPA) or OpenStack’s Congress. For machine learning-driven system optimization, Chase recommended another OpenStack project, Fault Genes, which uses ML to better automate the detection and prediction of OpenStack infrastructure outages.
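To make the policy-engine idea concrete, here is a minimal Python sketch of the kind of check a pipeline stage might delegate to an engine like OPA or Congress. The rules and the `deployment` fields are invented for illustration; a real setup would express them in the engine's own policy language (e.g. Rego for OPA) and call out to the engine:

```python
def policy_allows(deployment):
    """Toy deployment policy gate, returning (allowed, reason).

    Stands in for a call out to a policy engine; the rules and field
    names here are illustrative assumptions, not real policies.
    """
    if deployment.get("replicas", 0) > 50:
        return False, "replica count exceeds policy limit"
    if deployment.get("region") not in {"us-east", "us-west"}:
        return False, "region not approved"
    return True, "ok"


allowed, reason = policy_allows({"replicas": 10, "region": "us-east"})
print(allowed, reason)  # True ok
```

Keeping such rules in a dedicated engine, rather than inline in each pipeline, means they can be changed and audited independently of the delivery logic.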

Chase offered three demonstrations of extending Spinnaker, through (1) triggers, (2) pipeline variables and (3) conditional stages.
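Conditional stages combine the latter two ideas: a stage guarded by a pipeline expression over pipeline variables. The Python below sketches such a stage definition and a toy local evaluator for its guard; the `stageEnabled`/`expression` fields reflect the general shape of Spinnaker's pipeline-expression support circa 2018, but the parameter name `env` and the evaluator are assumptions (Spinnaker itself evaluates these expressions as SpEL):

```python
# Sketch of a stage guarded by a Spinnaker pipeline expression.
# Verify field names against your Spinnaker version; `env` is assumed.
stage = {
    "type": "deploy",
    "name": "Deploy to production",
    "refId": "3",
    "requisiteStageRefIds": ["2"],
    "stageEnabled": {
        "type": "expression",
        "expression": "${ parameters.env == 'prod' }",
    },
}


def stage_runs(stage, parameters):
    """Evaluate the guard locally (a toy check, not Spinnaker's SpEL)."""
    expr = stage.get("stageEnabled", {}).get("expression", "")
    # Only handles the single equality pattern used above.
    if "parameters.env == 'prod'" in expr:
        return parameters.get("env") == "prod"
    return True


print(stage_runs(stage, {"env": "prod"}))  # True
```

With a guard like this, one pipeline definition can serve both staging and production runs, skipping the production deploy unless the variable says otherwise.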

The Cloud Native Computing Foundation, Google and OpenStack Foundation are sponsors of The New Stack.

TNS owner Insight Partners is an investor in: The New Stack, Mirantis.