
Log Management for Red Hat OpenShift

This primer on logging and log management for OpenShift explains which monitoring metrics are available and how to access monitoring and log data.
Sep 9th, 2020 1:55pm by Franciss Espenido

LogDNA sponsored this post.

Franciss Espenido
Franciss is a Partnerships Program Manager at LogDNA, where he focuses on technical enablement for observability offerings on IBM Cloud. He previously served as a Technical Support Engineer for the company, where he developed a deep understanding of modern logging practices and the challenges they pose in developer workflows.

OpenShift consists of multiple components and layers, and ensuring the health of all of them requires collecting logs and metrics from various individual parts. At the same time, admins must manage log data that provides visibility into the health of their OpenShift clusters as a whole. On top of this, depending on which approach you take to deploying OpenShift, there may be logs associated with the underlying infrastructure that you should collect and analyze as well.

To help admins wrap their heads around all of the above, this article offers a primer on logging and log management for OpenShift. It explains which OpenShift monitoring metrics are available and how to access monitoring and log data. Most of the information below applies to any OpenShift deployment, but we’ll use Red Hat OpenShift on IBM Cloud as the basis for specific examples.

What Is OpenShift?

Before delving into how to manage logs on OpenShift, let’s quickly explore what OpenShift is and which forms it is available in. This background is important because, again, OpenShift is a multi-faceted platform that can be deployed in more than one way. Understanding how OpenShift works, how OpenShift features vary across different deployment models, and how OpenShift relates to Kubernetes is an essential first step in mastering OpenShift log management.

OpenShift is a platform for managing containerized applications. It’s developed by Red Hat, which offers multiple deployment options:

  • OpenShift Container Platform, which you deploy on your own infrastructure. This is a commercial platform that you pay for. Red Hat provides support.
  • OKD (formerly known as OpenShift Origin), which is a fully open source and completely free platform. OKD is community-supported.
  • OpenShift Online, which is a SaaS implementation of OpenShift that is hosted on infrastructure maintained by Red Hat. It is a commercial platform supported by Red Hat.
  • OpenShift Dedicated, which is a fully managed offering from Red Hat. This is a commercial offering as well, and it offers the most extensive level of support from Red Hat.

Red Hat OpenShift on IBM Cloud

In addition to the preceding iterations of OpenShift, which are available directly from Red Hat, several public cloud vendors offer fully managed OpenShift services (in partnership with Red Hat) that are hosted on their clouds. This includes OpenShift on IBM Cloud, a service that allows users to spin up OpenShift clusters on infrastructure provided and managed by IBM, with just a few clicks.

OpenShift on IBM Cloud provides access to most of OpenShift’s native functionality. However, as we’ll explore later in this article, it also offers certain add-on features — including a more user-friendly OpenShift log management solution than OpenShift’s built-in log tooling.

OpenShift vs. Kubernetes

It’s common to see comparisons between OpenShift and Kubernetes. That’s because OpenShift is based on Kubernetes. In fact, OpenShift is a certified Kubernetes distribution that is fully compatible with Kubernetes’s native tooling. That means you can use tools like kubectl on OpenShift if you want.

This does not mean, however, that OpenShift is just a Kubernetes distribution. It differs from Kubernetes in several key ways. For one, there are OpenShift-specific tools, like oc (which provides many of the same features as kubectl), along with platform additions such as a built-in container registry; admins should generally use these instead of relying on generic Kubernetes tools. For another, although the technology within OpenShift is open source, the platform is a commercial product developed by Red Hat. Processes like upgrades and log management (again, keep reading for more on that) also work somewhat differently in OpenShift than in Kubernetes.

What to Log in OpenShift

No matter which OpenShift deployment option you choose, there are several types of data to monitor and log.

OpenShift Events

In OpenShift, an event is one of dozens of different actions that may occur within your cluster. Some of them are routine occurrences (like the creation of a container) that generally don’t require your attention. Others signal undesirable conditions (like a storage volume that failed to mount, or an out-of-memory situation) that you may want to investigate further. A full list of OpenShift event types is available in the OpenShift documentation.

The information included in events data is basic. OpenShift tells you that the event occurred, but it doesn’t provide detail about why it occurred, if something failed, or what the scope of the failure was. OpenShift also doesn’t map interrelated events together; it’s up to you to figure out how one event relates to another. For these reasons, events provide only limited visibility.

Nonetheless, events offer a quick way of gaining a basic understanding of the state of your cluster and of identifying common problems. Historical event data is also useful for researching the root cause of failures.

OpenShift API Audit Logs

OpenShift provides support for logging API requests issued by users and administrators, as well as by other components of the cluster. This data, known as API audit logs, provides a deeper level of visibility into actions performed within a cluster. That’s because the audit logs provide full context for each request: where it originated, which namespace it impacted (if relevant), what the response was, and more.

Infrastructure Logs

If you’re responsible for managing the infrastructure that hosts your OpenShift cluster, keeping track of the health of the underlying servers is important. You can do this by looking at the standard Linux log files in the /var/log directory of each node in your OpenShift cluster.

How to Manage Logs in OpenShift

Compared to standard Kubernetes, log management in OpenShift is a bit more convenient overall, thanks to the robust support for accessing and interpreting log data that is built into OpenShift’s native tools.

Accessing OpenShift Event Data

Event data can be accessed in several ways. One is to use the OpenShift Web Console by navigating to Browse > Events. This is convenient if you need to check event data quickly, but you can’t access it programmatically in this way.

If you want more control over which event data you are looking at or want to pass it to external tools, you can use the CLI utility oc with a command such as:

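  # List recent events for the current project
  oc get events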

You can pass a few parameters to control output, such as specifying a certain namespace using the -n flag. However, the extent to which you can focus on specific events using oc is limited, so it may be necessary to pipe the oc output into external tools (like grep) to home in on events related to a certain node, events of a certain type, and so on.
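
For example, to pull out failed volume mounts in a particular namespace (the namespace and event reason here are illustrative):

  oc get events -n my-namespace | grep FailedMount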

A third approach is to use an external log management tool to read and analyze event data. For example, if you use OpenShift on IBM Cloud, you can take advantage of the native LogDNA integration to set up log analysis in a few simple steps. IBM Log Analysis with LogDNA enables real-time access to log data, alerting based on log contents, and the ability to store log data as long as you want. The latter feature may be particularly important because OpenShift itself deletes log data permanently if you delete a namespace. But, if you import log data into IBM Log Analysis with LogDNA, you can store it for as long as you need — even if the original data source disappears.

To use this option, you must first create a LogDNA service instance in your IBM Cloud account. You can do this graphically through the IBM Cloud UI by following a simple set of configuration steps, or using the CLI with a command like the following (the instance name, plan, and region here are examples; check the IBM Cloud catalog for the options available to you):

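  # The instance name, plan (7-day) and region (us-south) are examples;
  # check the IBM Cloud catalog for the options available to you
  ibmcloud resource service-instance-create my-logdna-instance logdna 7-day us-south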

For full documentation of the command, refer to the IBM Cloud documentation.

You will then need to deploy LogDNA agents to connect to your OpenShift cluster and forward logs to IBM Log Analysis with LogDNA.

LogDNA agents can be deployed in OpenShift using the oc utility. To do this, first create a new namespace in your cluster to host the agents with a command such as the following (here, we’ll create a namespace called ibm-observe):

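  # Create the ibm-observe project; the empty node selector lets agent
  # pods be scheduled on all nodes
  oc adm new-project --node-selector='' ibm-observe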

Next, create a service account for the LogDNA agent:

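  oc create serviceaccount logdna-agent -n ibm-observe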

Next, configure privileges so that the logdna-agent service account can create privileged LogDNA pods:

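  # Grant the service account the privileged security context constraint
  oc adm policy add-scc-to-user privileged system:serviceaccount:ibm-observe:logdna-agent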

Next, add a secret for the ingestion key that the LogDNA agent will use to send logs:

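  # Replace <INGESTION_KEY> with the ingestion key of your LogDNA instance
  oc create secret generic logdna-agent-key --from-literal=logdna-agent-key=<INGESTION_KEY> -n ibm-observe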

Finally, deploy LogDNA agents to nodes using kubectl. Preconfigured YAML files are available for the different public IBM Cloud endpoints, so you can use a command such as the following to deploy an agent (the URL here is illustrative; substitute the agent file for your region’s ingestion endpoint):

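  # The YAML URL is illustrative; substitute the agent file for your
  # region's ingestion endpoint from the IBM Cloud documentation
  kubectl create -f https://assets.us-south.logging.cloud.ibm.com/clients/logdna-agent-ds-os.yaml -n ibm-observe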

For a full list of available public endpoints, as well as additional context on deploying LogDNA agents on IBM Cloud, refer to the documentation.

Once fully configured, the LogDNA service instance and agents provide a LogDNA dashboard within your OpenShift console. There, you have full access to LogDNA’s feature set for viewing and managing OpenShift events and other log data.

Viewing OpenShift API Audit Logs

To view API audit logs in a generic OpenShift installation, you must first enable audit logging by adding an auditConfig stanza to the /etc/origin/master/master-config.yaml file on your master node, as in the following example (the values shown are illustrative):

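  # Example values; tune the file path and retention limits to your needs
  auditConfig:
    enabled: true
    auditFilePath: "/var/log/openshift-audit.log"
    maximumFileRetentionDays: 10
    maximumFileSizeMegabytes: 10
    maximumRetainedFiles: 10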

The enabled: true parameter turns audit logging on; other fields define log settings.

Once audit logging is enabled, audit log data will be managed via systemd and can be accessed by running journalctl on the node where log data is stored. Typically, that is your OpenShift master node. If you wish, you can also access the audit log file directly with a command like:

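  # Assumes the auditFilePath configured in the example above
  sudo tail -f /var/log/openshift-audit.log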

In OpenShift version 4.0 and later, you can also access audit logs using the oc utility:

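  # List the audit log files on the master nodes, then view one of them
  # (replace <node-name> with one of your master nodes)
  oc adm node-logs --role=master --path=openshift-apiserver/
  oc adm node-logs <node-name> --path=openshift-apiserver/audit.log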

If you use OpenShift on IBM Cloud, OpenShift’s native audit logging utilities are replaced by IBM Log Analysis with LogDNA. You can configure LogDNA to collect and analyze audit log data using the same LogDNA integration process described in the preceding section of this article.

This approach offers several advantages over relying on native OpenShift tooling. IBM Log Analysis with LogDNA provides advanced log analysis features, as well as a graphical interface that makes it more convenient to access audit log data. It also aggregates audit logs alongside other types of log data, providing admins with the ability to review all logs from a central location. It enables you to store log data for the duration of your chosen retention period, even if it disappears from OpenShift itself.

Accessing Infrastructure Logs

Accessing logs for underlying host infrastructure is straightforward. You use the same logging and analytics tools that you would use to access any other type of Linux or *nix server log.

You can access infrastructure logs by logging into each of your nodes and looking at log files in the /var/log directory. Or, to streamline the process, you can use a log aggregation and management tool like IBM Log Analysis with LogDNA, which can collect and manage log data from all of your host servers and make it available to you in a central location. If you have more than a handful of servers, aggregating this data is the only practical way to keep track of it all. Use the Red Hat OpenShift on IBM Cloud observability plug-in to create a logging configuration for IBM Log Analysis with LogDNA in your cluster, and use this logging configuration to automatically collect and forward pod logs to IBM Log Analysis with LogDNA.
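
As a quick sketch, you could inspect a node's logs directly (exact file names vary by distribution and OpenShift version):

  sudo ls /var/log
  sudo journalctl --since "1 hour ago"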

In addition to the OpenShift integration described above, IBM Cloud offers a LogDNA integration for generic server monitoring. Or, you can always install LogDNA directly on any Linux server, regardless of whether the server is part of a public cloud. (LogDNA also supports Windows and macOS, but because OpenShift runs only on Red Hat Enterprise Linux, those platforms aren’t relevant to this discussion.)

Conclusion

OpenShift offers native log management tools that provide visibility into the various key parts of an OpenShift cluster. If you come from a generic Kubernetes background, you’ll probably be impressed by just how much native logging functionality OpenShift offers, relative to stock Kubernetes.

That said, OpenShift’s native log management solutions aren’t always the best tools for the job. In some cases, such as when you need to aggregate a large volume of log data and be able to store log data for longer than OpenShift supports natively, a third-party log management tool like LogDNA is a better fit. LogDNA provides a much richer set of log aggregation, analytics and management features than OpenShift offers on its own. And, thanks to LogDNA’s integration with platforms like IBM Cloud, deploying LogDNA to manage OpenShift logs on these platforms is as simple as running a few commands or clicking a few buttons in the Web UI.

This post is part of a larger series that explores the difference between logging for Kubernetes and logging for Red Hat OpenShift. Download the full eBook here.

Red Hat is a sponsor of The New Stack.

Feature image via Pixabay.
