
The 6 Pillars of Platform Engineering: Part 1 — Security

Platform team workflows and checklists for building security, pipelines, provisioning, connectivity, orchestration, and observability into their platform.
Sep 20th, 2023 11:00am
Feature image by Dimitris Vetsikas from Pixabay

Platform engineering is the discipline of designing and building toolchains and workflows that enable self-service capabilities for software engineering teams. These tools and workflows comprise an internal developer platform, which is often referred to as just “a platform.” The goal of a platform team is to increase developer productivity, facilitate more frequent releases, improve application stability, lower security and compliance risks and reduce costs.

This guide outlines the workflows and checklist steps for the six primary technical areas of developer experience in platform engineering. Published in six parts, part one introduces the series and focuses on security. (Note: You can download a full PDF version of the six pillars of platform engineering for the complete set of guidance, outlines and checklists.)

Platform Engineering Is about Developer Experience

The solutions engineers and architects I work with at HashiCorp have supported many organizations as they scale their cloud operating model through platform teams, and the key for these teams to meet their goals is to provide a satisfying developer experience. We have observed two common themes among companies that deliver great developer experiences:

  1. Standardizing on a set of infrastructure services to reduce friction for developers and operations teams: This empowers a small, centralized group of platform engineers with the right tools to improve the developer experience across the entire organization, with APIs, documentation and advocacy. The goal is to reduce tooling and process fragmentation, resulting in greater core stability for your software delivery systems and environments.
  2. A Platform as a Product practice: Heritage IT projects typically have a finite start and end date. That’s not the case with an internal developer platform. It is never truly finished. Ongoing tasks include backlog management, regular feature releases and roadmap updates to stakeholders. Think in terms of iterative agile development, not big upfront planning like waterfall development.

No platform should be designed in a vacuum. A platform is effective only if developers want to use it. Building and maintaining a platform involves continuous conversations and buy-in from developers (the platform team’s customers) and business stakeholders. This guide functions as a starting point for those conversations by helping platform teams organize their product around six technical elements or “pillars” of the software delivery process along with the general requirements and workflow for each.

The 6 Pillars of Platform Engineering

What are the specific building blocks of a platform strategy? In working with customers in a wide variety of industries, the solutions engineers and architects at HashiCorp have identified six foundational pillars that comprise the majority of platforms, and each one will be addressed in a separate article:

  1. Security
  2. Pipeline (VCS, CI/CD)
  3. Provisioning
  4. Connectivity
  5. Orchestration
  6. Observability

Platform Pillar 1: Security

The first questions developers ask when they start using any system are: “How do I create an account? Where do I set up credentials? How do I get an API key?” Even though version control, continuous integration and infrastructure provisioning are fundamental to getting a platform up and running, security should also be a first concern. An early focus on security promotes a secure-by-default platform experience from the outset.

Historically, many organizations invested in network perimeter-based security, often described as a “castle-and-moat” security approach. As infrastructure becomes increasingly dynamic, however, perimeters become fuzzy and challenging to control without impeding developer velocity.

In response, leading companies are choosing to adopt identity-based security, identity-brokering solutions and modern security workflows, including centralized management of credentials and encryption methodologies. This promotes visibility and consistent auditing practices while reducing operational overhead in an otherwise fragmented solution portfolio.

Leading companies have also adopted “shift-left” security: implementing security controls throughout the software development lifecycle, leading to earlier detection and remediation of potential attack vectors and increased vigilance around control implementations. This approach demands automation-by-default instead of ad-hoc enforcement.

Enabling this kind of DevSecOps mindset requires tooling decisions that support modern identity-driven security. There also needs to be an “as code” implementation paradigm to avoid ascribing and authorizing identity based on ticket-driven processes. That paves the way for traditional privileged access management (PAM) practices to embrace modern methodologies like just-in-time (JIT) access and zero-trust security.

Identity Brokering

In a cloud operating model approach, humans, applications and services all present an identity that can be authenticated and validated against a central, canonical source. A multi-tenant secrets management and encryption platform along with an identity provider (IdP) can serve as your organization’s identity brokers.

Workflow: Identity Brokering

In practice, a typical identity brokering workflow might look something like this:

  1. Request: A human, application, or service initiates interaction via a request.
  2. Validate: One (or more) identity providers validate the provided identity against one (or more) sources of truth/trust.
  3. Response: An authenticated and authorized validation response is sent to the requestor.
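The three steps above can be sketched in code. This is a minimal, self-contained illustration; the `IdentityBroker` and `StaticIdP` classes are hypothetical stand-ins, and a real deployment would delegate validation to an IdP such as an OIDC provider rather than an in-memory directory.

```python
# Minimal sketch of the request -> validate -> response identity-brokering
# flow. All names here are invented for illustration.
from dataclasses import dataclass


@dataclass
class ValidationResponse:
    subject: str
    authenticated: bool
    authorized: bool


class StaticIdP:
    """Toy identity provider backed by an in-memory source of truth."""

    def __init__(self, directory):
        self._directory = directory  # subject -> {"credential", "roles"}

    def validate(self, subject, credential, required_role):
        entry = self._directory.get(subject)
        if entry is None or entry["credential"] != credential:
            return ValidationResponse(subject, False, False)
        return ValidationResponse(subject, True, required_role in entry["roles"])


class IdentityBroker:
    """Brokers requests against one or more identity providers."""

    def __init__(self, idps):
        self._idps = idps

    def handle_request(self, subject, credential, required_role):
        # Step 1 (request) is the call itself; step 2 validates against
        # each configured source of truth; step 3 returns the response.
        for idp in self._idps:
            resp = idp.validate(subject, credential, required_role)
            if resp.authenticated:
                return resp
        return ValidationResponse(subject, False, False)


idp = StaticIdP({"app-42": {"credential": "s3cret", "roles": {"db-read"}}})
broker = IdentityBroker([idp])
ok = broker.handle_request("app-42", "s3cret", "db-read")
denied = broker.handle_request("app-42", "wrong", "db-read")
```

Note that an unauthenticated response is returned rather than an exception raised: the broker fails closed without leaking whether the subject exists.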

Identity Brokering Requirements Checklist

Successful identity brokering has a number of prerequisites:

  • All humans, applications and services must have a well-defined form of identity.
  • Identities can be validated against a trusted IdP.
  • Identity systems must be interoperable across multi-runtime and multicloud platforms.
  • Identity systems should be centralized or have limited segmentation in order to simplify audit and operational management across environments.
  • Identity and access management (IAM) controls are established for each IdP.
  • Clients (humans, machines and services) must present a valid identity for AuthN and AuthZ.
  • Once verified, access is brokered through deny-by-default policies to minimize impact in the event of a breach.
  • AuthZ review is integrated into the audit process and, ideally, is granted just in time.
    • Audit trails are routinely reviewed to identify excessively broad or unutilized privileges and are retroactively analyzed following threat detection.
    • Historical audit data provides non-repudiation and compliance for data storage requirements.
  • Fragmentation is minimized with a flexible identity brokering system supporting heterogeneous runtimes, including:
    • Platforms (VMware, Microsoft Azure VMs, Kubernetes/OpenShift, etc.)
    • Clients (developers, operators, applications, scripts, etc.)
    • Services (MySQL, MSSQL, Active Directory, LDAP, PKI, etc.)
  • Enterprise support 24/7/365 via a service level agreement (SLA)
  • Configured through automation (infrastructure as code, runbooks)
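The deny-by-default brokering called for in the checklist can be illustrated with a small policy evaluator: access is granted only when an explicit policy matches the (identity, action, path) tuple. The policy shape below is invented for the sketch, loosely modeled on path-based ACLs.

```python
# Deny-by-default policy evaluation: no matching policy means no access.
# Policy format and names are illustrative only.
POLICIES = [
    {"identity": "ci-runner", "action": "read", "path": "secret/data/app/*"},
]


def _path_matches(pattern, path):
    # Supports a trailing "*" wildcard only, for brevity.
    if pattern.endswith("*"):
        return path.startswith(pattern[:-1])
    return pattern == path


def is_allowed(identity, action, path, policies=POLICIES):
    """Deny by default: return True only on an explicit policy match."""
    return any(
        p["identity"] == identity
        and p["action"] == action
        and _path_matches(p["path"], path)
        for p in policies
    )
```

Because the default answer is "no," a misconfigured or missing policy results in denied access rather than an accidental grant, which limits blast radius in the event of a breach.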

Access Management: Secrets Management and Encryption

Once identity has been established, clients expect consistent and secure mechanisms to perform the following operations:

  • Retrieving a secret (a credential, password, key, etc.)
  • Brokering access to a secure target
  • Managing secure data (encryption, decryption, hashing, masking, etc.)

These mechanisms should be automatable — requiring as little human intervention as possible after setup — and promote compliant practices. They should also be extensible to ensure future tooling is compatible with these systems.

Workflow: Secrets Management and Encryption

A typical secrets management workflow should follow five steps:

  1. Request: A client (human, application or service) requests a secret.
  2. Validate: The request is validated against an IdP.
  3. Respond: The secret request is served directly if the secret is managed by the platform. Alternatively:
    1. The platform requests a temporary credential from a third party.
    2. The third-party system responds to the brokered request with a short-lived secret.
  4. Broker response: The initial response passes through an IAM cryptographic barrier for offload or caching.
  5. Client response: The final response is provided back to the requestor.

Secrets management flow
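The workflow above can be sketched as a broker that either serves a secret it manages directly or requests a short-lived credential from a third-party system. The class and method names are hypothetical; a real platform would use a secrets manager such as Vault and a genuine IdP check in place of the toy validation.

```python
# Illustrative five-step secrets workflow: request, validate, serve or
# broker, then respond. Names are invented for the sketch.
import time


class ThirdPartySystem:
    def issue_short_lived_credential(self, ttl_seconds=60):
        # Step 3b: the third party responds with a short-lived secret.
        return {"secret": "tmp-cred-123", "expires_at": time.time() + ttl_seconds}


class SecretsBroker:
    def __init__(self, managed_secrets, third_party):
        self._managed = managed_secrets
        self._third_party = third_party

    def get_secret(self, client_identity, name):
        # Step 2: validate (toy check; a real broker consults an IdP).
        if not client_identity:
            raise PermissionError("unauthenticated request")
        # Step 3: serve directly if this platform manages the secret...
        if name in self._managed:
            return {"secret": self._managed[name]}
        # ...otherwise broker a temporary credential (steps 3a/3b).
        # Steps 4-5: the response passes back to the requestor.
        return self._third_party.issue_short_lived_credential()


broker = SecretsBroker({"api-key": "abc"}, ThirdPartySystem())
direct = broker.get_secret("app-1", "api-key")
brokered = broker.get_secret("app-1", "db-password")
```

The brokered path is what enables dynamic, time-bound credentials: the platform never stores the third-party secret long-term, only relays a lease.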

Access Management: Secure Remote Access (Human to Machine)

Human-to-machine access in the traditional castle-and-moat model has always been inefficient. The workflow requires multiple identities, planned intervention for AuthN and AuthZ controls, lifecycle planning for secrets and complex network segmentation planning, which creates a lot of overhead.

While PAM solutions have evolved over the last decade to provide delegated solutions like dynamic SSH key generation, this does not satisfy the broader set of ecosystem requirements, including multi-runtime auditability or cross-platform identity management. Introducing cloud architecture patterns such as ephemeral resources, heterogeneous cloud networking topologies, and JIT identity management further complicates the task for legacy solutions.

A modern solution for remote access addresses the challenges that arise with ephemeral resources, such as dynamic resource registration, identity, access and secrets. These modern secure remote access tools no longer rely on network entry points such as VPNs, or on CMDBs, bastion hosts, manual SSH and/or secrets managers with check-in/check-out workflows.

Enterprise-level secure remote access tools use a zero-trust model where human users and resources have identities. Users connect directly to these resources. Scoped roles — via dynamic resource registries, controllers, and secrets — are automatically injected into resources, eliminating many manual processes and security risks such as broad, direct network access and long-lived secrets.

Workflow: Secure Remote Access (Human to Machine)

A modern remote infrastructure access workflow for a human user typically follows these eight steps:

  1. Request: A user requests system access.
  2. Validate (human): Identity is validated against the trusted identity broker.
  3. Validate (to machine): Once authenticated, authorization is validated for the target system.
  4. Request: The platform requests a secret (static or short-lived) for the target system.
  5. Inject secret: The platform injects the secret into the target resource.
  6. Broker response: The platform returns a response to the identity broker.
  7. Client response: The platform grants access to the end user.
  8. Access machine/database: The user securely accesses the target resource via a modern secure remote access tool.

Secure remote access flow
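The key property of this flow, that the secret is injected into the target resource so the human never handles the credential, can be sketched as follows. All classes and names are hypothetical stand-ins for a secure remote access tool.

```python
# Sketch of the human-to-machine flow: validate identity and role,
# mint a short-lived secret, inject it into the target, grant a session.
import secrets


class TargetHost:
    def __init__(self, name):
        self.name = name
        self.injected_secret = None  # set by the platform, never by the user


def grant_access(user, role, target, idp_users):
    # Steps 2-3: validate the human identity and the role for the target.
    roles = idp_users.get(user)
    if roles is None:
        raise PermissionError("unknown user")
    if role not in roles:
        raise PermissionError("role not authorized for target")
    # Steps 4-5: mint a short-lived secret and inject it into the resource,
    # so the user never sees or handles the credential directly.
    target.injected_secret = secrets.token_hex(16)
    # Steps 6-8: return a session handle for the user's connection;
    # a real tool would also record the grant for auditing.
    return {"user": user, "target": target.name, "session": secrets.token_hex(8)}


host = TargetHost("db-prod-1")
session = grant_access("alice", "db-admin", host, {"alice": {"db-admin"}})
```

Because the credential lives only on the target and is generated per session, there is no long-lived secret for the user to leak or reuse.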

Access Management Requirements Checklist

All secrets in a secrets management system should be:

  • Centralized
  • Encrypted in transit and at rest
  • Limited in scoped role and access policy
  • Dynamically generated, when possible
  • Time-bound (i.e., with a defined time-to-live, or TTL)
  • Fully auditable
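Several of these properties, scoping, time-bounding and auditability, can be captured in a single secret wrapper. The `LeasedSecret` class below is purely illustrative; real secrets managers implement leases and audit devices server-side.

```python
# A time-bound, scoped, auditable secret lease. Illustrative only.
import time
from dataclasses import dataclass, field


@dataclass
class LeasedSecret:
    value: str
    scope: str
    ttl_seconds: float
    issued_at: float = field(default_factory=time.time)
    audit_log: list = field(default_factory=list)

    def is_expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.issued_at > self.ttl_seconds

    def read(self, requester, scope, now=None):
        # Every attempt is recorded, even denied ones (fully auditable).
        self.audit_log.append((requester, scope))
        if scope != self.scope:
            raise PermissionError("out-of-scope access")
        if self.is_expired(now):
            raise PermissionError("lease expired")
        return self.value


lease = LeasedSecret("db-pass", scope="orders-service", ttl_seconds=300)
value = lease.read("orders-service", "orders-service", now=lease.issued_at + 10)
expired = lease.is_expired(now=lease.issued_at + 301)
```

Logging before the policy checks is a deliberate choice: denied and out-of-scope attempts are exactly what the checklist's audit-trail review is meant to surface.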

Secrets management solutions should:

  • Support multi-runtime, multicloud and hybrid-cloud deployments
  • Provide flexible integration options
  • Include a diverse partner ecosystem
  • Embrace zero-touch automation practices (API-driven)
  • Empower developers and delegate implementation decisions within scoped boundaries
  • Be well-documented and commonly used across industries
  • Be accompanied by enterprise support 24/7/365 based on an SLA
  • Support automated configuration (infrastructure as code, runbooks)

Additionally, systems implementing secure remote access practices should:

  • Dynamically register service catalogs
  • Implement an identity-based model
  • Provide multiple forms of authentication capabilities from trusted sources
  • Be configurable as code
  • Be API-enabled and contain internal and/or external workflow capabilities for review and approval processes
  • Enable secrets injection into resources
  • Provide detailed role-based access controls (RBAC)
  • Provide capabilities to record actions, commands, sessions and give a full audit trail
  • Be highly available, multiplatform, multicloud capable for distributed operations, and resilient to operational impact

Stay tuned for our post on the second pillar of platform engineering: version control systems (VCS) and the continuous integration/continuous delivery (CI/CD) pipeline. Or download a full PDF version of the six pillars of platform engineering for the complete set of guidance, outlines and checklists.
