How to Create an Internal Developer Portal MVP
What needs to go into an internal developer portal, and how should it be set up by platform engineers and used by developers? This post takes a practical approach to building a portal minimum viable product (MVP), assuming a GitOps and Kubernetes-native environment. MVPs are a great way to get started with an idea and see what it can materialize into. We’ll explore the software catalog, both a basic catalog and an extended one, then look at setting up developer self-service actions, specifically how to deploy a microservice from testing to production. Then we’ll add some scorecards and automations.
Sounds difficult? It’s actually quite simple.
5 Steps to Creating an MVP of Your Developer Portal
- Forming an initial software catalog. In the example below we will show how to populate the initial software catalog using Port’s GitHub app and a git-based template.
- Enriching the data model beyond the initial blueprints, bringing in more valuable data to the portal.
- Creating your first self-service action. In the example below we will show how to scaffold a new microservice, but you can also think of adding Day 2 actions, or an action with a TTL (temporary environment, for instance).
- Enriching the data model with additional blueprints and Kubernetes data, and allowing developers to build additional self-service actions so that they can test and then promote the service to production.
- Adding scorecards and dashboards. These features offer developers insight into ongoing activities and quality initiatives.
Defining and Creating the Basic Data Model for the Software Catalog
The basic setup of the software catalog will be based on raw GitHub data, though you can make other choices. But how will the developer portal “classify” the data and create software catalog entities?
In Port, blueprints are where you define the metadata associated with the software catalog entities you choose to add to your catalog. Blueprints can represent any asset in Port, such as microservices, environments, packages, clusters, databases, etc. Once blueprints are populated with data (in this case, coming from GitHub), software catalog entities are automatically discovered and created.
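To make this concrete, here is a minimal sketch of a blueprint definition, written as a Python dict that mirrors Port's JSON blueprint format. The specific properties (`language`, `url`, `owner`) are illustrative choices for this example, not Port defaults, so treat the exact shape as an assumption and check Port's documentation for the authoritative schema:

```python
# Illustrative "microservice" blueprint, expressed as a Python dict
# mirroring Port's JSON blueprint format. The properties chosen here
# are assumptions for the sake of the example.
microservice_blueprint = {
    "identifier": "microservice",   # unique key other blueprints relate to
    "title": "Microservice",
    "schema": {
        "properties": {
            "language": {"type": "string", "title": "Language"},
            "url": {"type": "string", "format": "url", "title": "Repository URL"},
            "owner": {"type": "string", "title": "Owning Team"},
        },
        "required": ["url"],
    },
    "relations": {},                # filled in as the data model grows
}
```

Each entity created from this blueprint (here, one per GitHub repository) carries these properties, which is what makes catalog views and later scorecards possible.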
What are the right blueprints for this initial catalog and how do we define their relations? Let’s look at the diagram:
Let’s dive a little deeper:
- The Workflow Run blueprint shows metadata associated with GitHub workflow pipelines.
- The Pull Request blueprint shows metadata associated with, well, pull requests. This will allow you to create custom views for the PRs relevant to teams or individual developers.
- The Issues blueprint shows metadata associated with GitHub issues.
- The Workflow blueprint represents the pipelines and workflows that already exist in your GitHub organization (Port can use these to create self-service actions that trigger further GitHub workflows).
- The Microservice blueprint shows GitHub repositories and monorepos represented as microservices in the portal.
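The five blueprints above can be wired together with relations. The targets below are reasonable assumptions for this model rather than Port's fixed defaults, but they show the overall shape of the basic catalog:

```python
# One plausible wiring of the basic catalog: each relation maps a
# source blueprint to the blueprint(s) it points at.
blueprints = {"microservice", "workflow", "workflow_run", "pull_request", "issue"}

relations = {
    "pull_request": ["microservice"],  # a PR belongs to a repo/microservice
    "issue": ["microservice"],         # so do issues
    "workflow": ["microservice"],      # workflows live in a repository
    "workflow_run": ["workflow"],      # each run instantiates one workflow
}

# Sanity check: every relation source and target is a known blueprint.
for source, targets in relations.items():
    assert source in blueprints and all(t in blueprints for t in targets)
```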
This basic catalog gives developers a strong foundation for understanding the software development life cycle: becoming familiar with the tech stack, understanding who owns each service, accessing each service's documentation directly from Port, keeping track of deployments and changes made to a given service, and so on.
Data Model Extension: Domain and System Integration
Given that these fundamental blueprints provide good visibility into the life cycle of each service, the model we just discussed can suffice. You can also take it one step further and extend the data model by introducing domain and system blueprints. Domains often correspond to high-level engineering functions, such as a pivotal service or feature within a product.

System blueprints depict a collection of microservices that together deliver a segment of the functionality the domain provides. With these two blueprints added, we can see how a microservice fits into a larger application, giving developers additional insight into how their microservice interacts with the wider tech stack. This information can be invaluable for speeding up the onboarding of new developers, and it makes diagnosing and debugging incidents easier, since the dependencies between microservices and products within the company are clearer.
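The extended model slots the new blueprints in above the microservice layer. A sketch, again mirroring Port's JSON format, with the relation targets as assumptions:

```python
# Extended data model: microservice -> system -> domain.
# Property lists are left empty to keep the sketch minimal.
domain_blueprint = {
    "identifier": "domain",
    "title": "Domain",
    "schema": {"properties": {}, "required": []},
    "relations": {},
}

system_blueprint = {
    "identifier": "system",
    "title": "System",
    "schema": {"properties": {}, "required": []},
    # A system belongs to exactly one domain.
    "relations": {"domain": {"target": "domain", "many": False, "required": True}},
}

# The microservice blueprint gains a relation to its system, so the
# catalog can answer "which product does this service serve?".
microservice_relations = {"system": {"target": "system", "many": False, "required": False}}
```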
When we finish ingestion, we’ll have a fully populated MVP software catalog. Drilling down into an entity, we can understand dependencies, health, on-call data and more.
Internal developer portals aren’t only about a software catalog containing microservice, underlying resource, and DevOps asset data. They are mostly about enabling developer self-service actions. Let’s go ahead and set one up.
First Developer Self-Service Action Setup
Internal developer portals are made to relieve developer cognitive load: developers access the self-service section in the portal and do their work with the right guardrails in place. This is done by defining the right flow in the portal’s UI and loosely coupling it with the underlying platform that executes the self-service action, while still giving developers feedback about their executed actions, such as logs, relevant links and the action’s effects on the software catalog. We can also show whether a self-service action is waiting for manual approval.
For the MVP, let’s define a self-service action for scaffolding a new microservice. This is what developers will see:
When setting up a self-service action, the platform engineer doesn’t just define the backend process, but also sets up the UI in the developer self-service form. By being able to control what the developer sees and can do, as well as permissions, we can allow developers to perform actions on their own within a defined flow, setting guardrails and relieving cognitive load.
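A scaffolding action might look roughly like the sketch below. The structure is modeled on Port's action format, but the field names, the GitHub org/repo, and the workflow file are all placeholders invented for illustration:

```python
# Illustrative self-service action: scaffold a new microservice by
# triggering a (hypothetical) GitHub workflow. All names are placeholders.
scaffold_action = {
    "identifier": "scaffold_microservice",
    "title": "Scaffold a new microservice",
    "trigger": "CREATE",              # the action creates a new catalog entity
    "userInputs": {                   # this is what renders as the form
        "properties": {
            "service_name": {"type": "string", "title": "Service name"},
            "language": {"type": "string", "enum": ["go", "python", "node"]},
        },
        "required": ["service_name"],
    },
    "invocationMethod": {             # the loosely coupled backend
        "type": "GITHUB",
        "org": "acme",                # placeholder organization
        "repo": "scaffolder",         # placeholder repository
        "workflow": "scaffold.yml",   # placeholder workflow file
    },
}
```

Note the split: `userInputs` is the developer-facing form the platform engineer controls, while `invocationMethod` is the backend that actually does the work.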
Expanding the Data Model with Kubernetes Abstractions
We began by saying that we’re working in a Kubernetes-native environment. Deep Kubernetes knowledge is not common among developers, and our goal is to abstract Kubernetes away for them, providing only the information they need.
Let’s add the different Kubernetes resources (deployments, namespaces, pods, etc.) into our software catalog. This then allows us to configure abstractions and thus reduce cognitive load for developers.
When populated, the cluster blueprint will show its correlated entities. This will allow developers to view the different components that make up a cluster in an abstracted way that’s defined by the platform engineer.
To bring everything together, let’s create an “environment” blueprint. This will allow us to differentiate between multiple environments that are in the same organization and create an in-context view (including CI/CD pipelines, microservices, etc.) of all running services in an individual environment. In this instance we will create a test environment and also a production environment.
Now let’s build a relation between the microservice blueprint we made in our initial data model and the workload blueprint. A workload is a running instance of a microservice, so this relation lets us understand which microservices are running in each cluster as workloads. It also gives us an in-context view of a microservice, including which environments it is running in, meaning we now know exactly what is going on and where (Is a service healthy? Which cluster is a service deployed on?).
Generally, creating relations between blueprints is comparable to linking tables with a foreign key. You can customize the information you see on each entity or blueprint, modeling them to suit your needs exactly. Relations can be one-to-one or one-to-many. In our example, the relation from workload to microservice is one-to-one: each workload is a single deployment of a single microservice.
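In blueprint terms, the foreign-key analogy looks like this sketch (the shape follows Port's relation format as an assumption; `"many": False` marking the one-to-one relation is the key detail):

```python
# Workload blueprint with its relations. Like a foreign-key column,
# each workload entity points at exactly one microservice entity.
workload_blueprint = {
    "identifier": "workload",
    "title": "Workload",
    "schema": {"properties": {}, "required": []},
    "relations": {
        # one-to-one: a workload is one deployment of one microservice
        "microservice": {"target": "microservice", "many": False, "required": True},
        # a workload also runs in exactly one namespace
        "namespace": {"target": "namespace", "many": False, "required": False},
    },
}
```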
Let’s now create a relation between cluster and environment so that we know where we have running clusters. We could also expand this idea to a cloud region or environment, depending on the context.
Let’s also create a relation between microservice and system, and workflow run and workload. This allows us to see the source of every workload, as well as see what microservices make up the systems in our architecture.
And that’s it!
Scorecards and Dashboards: Promoting Engineering Quality
The ability to define scorecards and dashboards has proven highly valuable within enterprises, as they help push initiatives and drive engineering quality. Teams can visualize the service maturity and engineering quality of the different services in a domain, and thus understand how close or far each service is from being production-ready.
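A production-readiness scorecard might be sketched like this; the rule structure is an assumption modeled on Port's scorecard format, and the specific rules and levels are examples:

```python
# Illustrative production-readiness scorecard. Rules and levels are
# example choices; the structure is an assumption, not Port's exact schema.
production_readiness = {
    "identifier": "production_readiness",
    "title": "Production Readiness",
    "rules": [
        {
            "identifier": "has_owner",
            "title": "Service has an owning team",
            "level": "Bronze",
            "query": {
                "combinator": "and",
                "conditions": [{"operator": "isNotEmpty", "property": "owner"}],
            },
        },
        {
            "identifier": "has_oncall",
            "title": "On-call rotation is defined",
            "level": "Silver",
            "query": {
                "combinator": "and",
                "conditions": [{"operator": "isNotEmpty", "property": "oncall"}],
            },
        },
    ],
}

# A dashboard widget could simply count how many rules each service passes.
levels = [rule["level"] for rule in production_readiness["rules"]]
```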
The much-discussed distinction between portal and platform fades away in practice. The platform focuses on infrastructure and backend definitions, while the portal empowers developers to take control of their needs through a software catalog and self-service actions, and offers insight into service and infrastructure health through scorecards and visualizations.
Want to try Port or see it in action? Go here.