In its second project induction just this week, the Cloud Native Computing Foundation officially announced Thursday morning that it is assuming stewardship of CoreDNS, a project born just a year ago from an idea to gut the insides of a web server called Caddy and make it respond to complex service discovery requests.
“There are many different DNS servers out there, and there are even other service discovery solutions that are based on DNS,” said John Belamaric, a distinguished architect at network visibility service provider Infoblox and a key contributor to CoreDNS. “But one of the great things about CoreDNS is that it’s extremely extensible and flexible. That makes it easy to adapt to the frequently changing, dynamic world in cloud-native.”
Miek Gieben — the original author of SkyDNS, who created CoreDNS in a matter of days — commandeered Caddy’s chassis because it had an interesting extensibility model. In resolving a query, the server passes it through a chain of add-in functions, which Gieben calls “middlewares.” (Put JBoss and WebSphere out of your mind, and think of this instead as a chain of software in the middle of things.) Each middleware is registered with CoreDNS as a function written in Go. In a deliberately simple scheme, the query is passed from the front of the chain to the back, until one function is capable of resolving it.
This way, the functionality of DNS can be easily customized for service discovery, which comes into play far more frequently with microservices.
In a blog post Wednesday, Belamaric revealed that the CoreDNS team had successfully extended their server’s functionality to serve not just A records but also SRV, PTR, and TXT records, in a manner essentially compatible with the Kube-DNS add-on for Kubernetes.
“The flexibility of that architecture enables us to easily adapt CoreDNS to fit different use cases in the cloud-native environment,” Belamaric told The New Stack. Similar to the “middlewares” his team inserted to enable back-end service discovery for Kubernetes, he said, it would be easy enough to have a chained function link to the etcd key-value store, giving similar functionality to any other orchestrator.
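In CoreDNS’s own configuration syntax, choosing back ends is largely a matter of which middlewares the Corefile chains together. A sketch of such a Corefile might look like the following — the zone names and endpoint are placeholders, and the exact option syntax may differ between releases:

```
.:53 {
    kubernetes cluster.local
    etcd skydns.local {
        endpoint http://localhost:2379
    }
    cache 30
    errors
    log
}
```

Here queries for `cluster.local` names would be answered from the Kubernetes API, queries for `skydns.local` from etcd, and the remaining middlewares add caching and logging to every response.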
“Traditional DNS tools like BIND really aren’t built for that level of dynamic interaction that you get out of etcd or Kubernetes,” he continued. He reminded us that BIND stores its data in files, and that file-based design makes it difficult to modify the system to serve dynamically changing data.
BIND, he said, “is a code base with an incredible number of features. Of course, as we know in our industry, as something grows and grows, ages over time, and accumulates features, it becomes more and more difficult to modify, especially without breaking those features. So try to swap out the back end data store for BIND, from files which get reloaded periodically, and have relatively long [times-to-live] on their records, and incrementing serial numbers, to work on something like etcd, would be pretty challenging from a development standpoint.”
Service discovery — as we were reminded with the CNCF’s acceptance of gRPC on Wednesday — is the critical distinction between the bulky, centralized service-oriented models of the past and the agile, lightweight systems of today. It might seem counter-intuitive for microservices to re-introduce themselves to each other with each conversation (like a fish portrayed by Ellen DeGeneres), but it avoids the penalty of utilizing a centralized, bulky, serialized database. At least, that’s the goal, so long as service discovery isn’t bound to BIND’s minimal database engine, which is to modern data services what a wind-up music box is to a philharmonic orchestra.
“Most existing DNS service discovery solutions are bound to individual orchestration environments, or they are built on less flexible architectures,” Belamaric noted. “It’s not designed with any level of concurrency involved; BIND is a single-threaded environment, as opposed to CoreDNS, which is multithreaded. There are other servers such as Unbound that are multithreaded, but they also have a lot of functionality that isn’t necessarily applicable in the cloud-native world.”
Belamaric told The New Stack that CoreDNS’ contributors hope to gain more visibility with the broader development community, as well as insight from CNCF’s many user members.
“There’s access to a cluster that we will be able to make use of, to push the scalability testing,” he added. “As far as functionality, we do want to extend [CoreDNS] to other orchestrators, potentially, than [CNCF’s] Kubernetes. We’d like to put a little bit more structure around service registration, and possibly have a service registration and discovery solution combination.”
As administrators and CIOs look to containerization developers to resolve the outstanding security issues looming over their technologies, Belamaric foresees the possibility for CoreDNS to be extended through the application of selective policy. Specifically, he sees how the DNS server could be made to approve or deny requests based on rules, perhaps carried out by an open source policy framework such as the Open Policy Agent.
This way, potentially dangerous queries, perhaps from non-authenticated sources or in situations where the state of the network may be in transition, can be stopped cold — disarming potential incursions or data breaches before they can even start. Belamaric acknowledged that such a system would require the implementation of access control lists (ACL), and that performance penalties would be incurred. Nevertheless, he believes, those penalties would not outweigh the net gains in efficiency from moving to a modern, multithreaded data engine in the first place.
Whether or not the CNCF is making kings with its choices for cloud-native platform hosting, you can’t deny the Foundation is lifting them up onto some very tall pedestals.
Feature image: Telephone switchboard at Joint Base Elmendorf in Richardson, Alaska, circa 1950, in the public domain.
The Cloud Native Computing Foundation is a sponsor of The New Stack.