“DevOps,” “SecOps,” “DevSecOps,” “ChatOps,” “NoOps” — the terms go on and on. In this episode of The New Stack Analysts podcast, Toph Whitmore, principal analyst for Blue Hill Research, talks about data operations, or “DataOps.” He describes it as looking at the data production pipeline in a holistic manner, marrying data-management objectives with data-consumption ideals to maximize data-derived value.
DataOps and DevOps both require coordination across multiple teams. The feature image above diagrams the data production value chain. Connoisseurs of DevOps/CI/CD pipelines, feel free to compare and contrast it with your favorite model while listening to our podcast:
Listen to all TNS podcasts on Simplecast.
For a deeper dive, see Blue Hill’s report, “DataOps: The Collaborative Framework for Enterprise Data-Flow Orchestration.”
For a more technical explanation of DataOps, Whitmore recommends attending the DataOps Summit 2017, June 20-21 in Boston.
Also of interest: the April 2016 podcast with Blue Hill’s James Haight and TNS’ Alex Williams and Benjamin Ball, “Hadooponomics: The Game Theory of Open Source Foundations and the Big Data Skills Gap.”
Feature image: A conceptual assembly-line context diagram of “DataOps” with enterprise data value chain and horizontal data-governance-stack overlays. Source: Blue Hill Research.