This week, Docker announced some changes to Docker Hub Autobuilds — the primary one of interest being that autobuilds would no longer be available to free tier users — and much of the internet let out a collective groan to the tune of “this is why we can’t have nice things!”
*sigh* A story as old as time. If you offer any compute, people will mine crypto. If you offer any storage/hosting people will host porn.
This is why we can’t have nice things. https://t.co/LWbGw3xhHE
— Joe Beda (@jbeda) June 8, 2021
So, if you happen to be looking for yet another reason to immediately cringe and discard anyone who comes up to you crowing about the benefits of cryptocurrencies, Docker getting rid of its autobuild feature on Docker Hub can be added to your arsenal.
“As many of you are aware, it has been a difficult period for companies offering free cloud compute,” wrote Shaun Mulligan, principal product manager at Docker, in the company’s blog post, citing an article that explores how crypto-mining gangs are running amok on free cloud computing platforms. Mulligan goes on to explain that Docker has “seen a massive growth in the number of bad actors,” noting that they not only cost the company money, but also degrade performance for its paying customers.
And so, after seven years of free access to the Autobuild feature, during which even non-paying Docker users could set up continuous integration for their containerized projects, gratis, the end is nigh. Like, really, really nigh, as in next week — June 18.
While Docker said it had already tried to correct the issue by removing around 10,000 accounts, the miners returned in droves the following week, and so the company “made the hard choice to remove Autobuilds.”
Some users see this as part of a pattern, following the company’s rate-limiting of Docker Hub last year. Others, such as the team lead for the Ethereum decentralized platform, cry foul, though perhaps that’s to be expected from those of the crypto persuasion.
Docker yet again ratcheting down the terms of their Hub service, this time removing the essential “autobuild” functionality entirely in free tiers.
We’ve already moved to GH Actions, and would be covered by the FOSS aspect of this, but every e-mail I get from them is bad news. pic.twitter.com/Yg2x77m4r1
— Buster “Silver Eagle” Neece (@SlvrEagle23) June 10, 2021
For its part, Docker has again tried to stave off the criticism, offering users a discount on subscriptions and letting members of its open source program continue to use Autobuilds for free, though I suspect the criticism there remains the same as with Docker Hub’s rate-limiting: open source maintainers have enough on their plates, never mind maintaining and proving their qualifications for yet another membership program.
Meanwhile, for those of you who pay Docker — and last time around, we were reminded that their subscriptions start at a mere $5 per month — the company says it will increase the number of parallel builds to 5 for Pro and 15 for Team subscribers, as well as increase build instance types and use BuildKit to provide “beefier” machines and better performance.
This Week in Programming
- “Docker Scan” Comes to Linux: And while we’re on the topic of Docker, the company also announced this week that it would be bringing “docker scan” to Linux after launching the vulnerability scanning capability at the end of last year. The functionality has previously been available on Docker Hub, as well as on Docker Desktop for Mac and Windows, and now it is moving over to the Docker CLI on Linux. “The experience of scanning on Linux is identical to what we have already launched for Desktop CLI,” Docker wrote in its blog post, noting that it will use all the same flags, but comes with one major difference. Instead of upgrading your Docker Desktop, you will need to install or upgrade your Docker Engine, which you can read all about in the Install Docker Engine section of Docker documentation.
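For a quick sense of what this looks like in practice, here is a sketch of the command as it works on Desktop today and, per the announcement, on Linux once Docker Engine is updated. The image name `myapp:latest` is just a placeholder for illustration:

```shell
# Scan a local image for known vulnerabilities (requires an updated
# Docker Engine on Linux; previously Desktop-only on Mac/Windows).
docker scan myapp:latest

# The same flags carry over from the Desktop CLI, e.g. pointing the
# scanner at the Dockerfile for base-image remediation advice:
docker scan --file Dockerfile myapp:latest
```

Since the scan runs against the local Docker daemon, the output you see should match what the Desktop CLI produced before.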
Hot take: there is absolutely no reason to ever use C. It is kompletely unnesessary for koding.
— Hillel (@hillelogram) June 10, 2021
- VS Code Gets Support for Remote Repositories: Microsoft has released a new Remote Repositories extension for Visual Studio Code, which is built alongside GitHub, to support working with, well, just that — repositories that live remotely. “A large part of what developers do every day involves reading other people’s code: reviewing pull requests, browsing open-source repositories, experimenting with new technologies or projects, inspecting upstream dependencies to debug applications, etc,” they write. While normally you would need to clone the repository to do this with VS Code, now you can do this pretty much instantaneously. The extension currently supports GitHub repos, and will soon add support for Azure Repos, and means you can “work on as many repos as you like without having to save any source code on your machine.” Of course, there are some limitations to what you can do: you cannot use terminals, some features like IntelliSense and Go to Definitions may not work correctly, searching may be limited by the remote repository host, and not all extensions will support running in this sort of virtual workspace. For a quick intro, check out the video below:
- GitLab Defaults to ‘Main’ Branch in 14.0: GitLab has moved to 14.0, its annual major release, and it says that this release includes a few breaking changes that come in the form of planned deprecations. The move will be made through daily deployments until June 22, when both GitLab.com and self-managed GitLab will move fully over to 14.0. In addition to the deprecations, GitLab is officially moving to use “main” as the default branch, instead of “master” — a change that has been in process across the industry for a while now, and one that Git itself has supported since version 2.28 via a configurable default branch name. To find out all that’s new in GitLab 14.0, head on over to their lengthy blog post detailing all the changes or check out the video summary.
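If you want your local Git to match GitLab’s new default, the underlying mechanism is a one-line config setting available since Git 2.28 (the repo name below is just an example):

```shell
# Tell Git (2.28+) to name the initial branch "main" in new repositories
git config --global init.defaultBranch main

# New repositories now start on "main" rather than "master"
git init demo-repo
git -C demo-repo symbolic-ref --short HEAD   # prints "main"
```

Note this only affects newly created repositories; existing repos keep whatever branch names they already have.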
- Netflix Shows The Way with eBPF: For those of you who read our pages often, you know that we regard eBPF as something to watch. If you’re unfamiliar, the extended Berkeley Packet Filter (eBPF) gives Linux (and now Windows) users a way to run sandboxed programs within the kernel space, without changing kernel source code or loading modules, and this week, Netflix offers some insight on how it uses eBPF flow logs at scale for network insight. They do this by using a network observability sidecar called Flow Exporter that uses eBPF tracepoints to capture TCP flows in near real time. Of interest here, of course, is that this is happening at Netflix, where they are “ingesting and enriching billions of eBPF flow logs per hour,” all enabled by “leveraging the highly performant eBPF along with carefully chosen transport protocols to consume less than 1% of CPU and memory on any instance in our fleet.”
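To get a flavor of the tracepoint-based approach (this is not Netflix’s actual Flow Exporter code, just a toy stand-in), the bpftrace front end can attach an eBPF program to a kernel tracepoint with a one-liner; it requires root and a Linux box with bpftrace installed:

```shell
# Tally TCP socket state transitions per process via the
# sock:inet_sock_set_state tracepoint -- no kernel changes or
# modules needed; press Ctrl-C to print the counts.
sudo bpftrace -e 'tracepoint:sock:inet_sock_set_state { @flows[comm] = count(); }'
```

Flow Exporter does considerably more (capturing full flow records and enriching them with IP metadata), but the mechanism — an eBPF program attached to a tracepoint, aggregating in kernel space — is the same basic idea.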
Debugging Code in Production😂 pic.twitter.com/IX9KDmTlBP
— Alvin Foo (@alvinfoo) June 4, 2021
GitLab is a sponsor of The New Stack.