Last week here, we took a look at the acceptance of the vgo versioning system for Go, which many in the community seemed to feel was an abrupt decision that failed to take their objections into account. Of note in the acceptance, I thought, was the insistence on "a technical solution to all possible scenarios," which was part of the reasoning that put the kibosh on the discussion phase.
This week, we find a similar flavor of tale, but this time from one of the Rust team’s core members, Aaron Turon, on how they and the Rust community are dealing with similar issues of debate and distrust. In his blog post, entitled “listening and trust, part 1,” Turon describes his troubled feelings around recent developments in the Rust community and the “increasing signs of distrust, ‘us vs them’ thinking, and people feeling like they have to yell in order to be listened to.”
“If our community grows more quickly than we can establish shared values/norms/culture, we could so easily descend into acrimony and tribalism,” Turon writes. “I’ve seen other language communities go through very painful periods, and I’m eager to try to steer Rust’s community around them if we can.”
Rust is, like Go, a rather young language, having appeared just eight years ago now and with the Rust 2018 edition on the way later this year. In his post, Turon describes previous missteps the team has made with its request for comment (RFC) process and the methods they took to fix them, including the idea of a "No New Rationale rule," wherein "decisions must be made only on the basis of rationale already debated in public (to a steady state)."
The "no new rationale" rule makes great sense. #mozlando
— jascha kaykas-wolff (@kaykas) December 11, 2015
“The unifying theme here,” Turon explained, “is a steady move away from ‘being in the room when it happens’ to a fully inclusive process.”
So, there it is — a peek into the sausage factory that is the current RFC process over at Mozilla concerning Rust. Make sure to check out the full post for more of the not-too-gory details. And I know that I, for one, look forward to reading the next post in the series, where Turon has promised to “focus on the kinds of breakdown [he’s] been seeing, and some of [his] hypotheses about the underlying causes.”
This Week in Programming
- A Bit More on High Standards: I don’t know about you, but I like to consider myself a bit of a low-level weather and time geek. Like, I don’t really get into modeling or anything like that, but any time a headline has something to do with those topics, I usually click, and this blog post on dealing with time in programming was no exception. In the post, calendar app creator Zach Holman facetiously asks “UTC is enough for everyone …right?” and explores the various exceptions to the time rules such as “the country that recently decided to skip a certain day, or that the Unix epoch isn’t technically the number of seconds since January 1970, or that February 30 happened at least twice in history.” It’s an interesting read on that basis alone, but also touches upon another idea I’ve found interesting lately related to programming — that desire for sometimes un-meetable, high standards and accounting for all contingencies. Quoth the author: “As programmers, we’re kind of inherently built to want the ABSOLUTE BEST HIGHEST FIDELITY FORMATS OF ALL TIME. Like dammit, I need the timestamp down to the micromillinanosecond for every cheeseburger that gets added to my bespoke Watch-The-BK-Throne app. If I do not have this exact knowledge to the millisecond of when I consumed this BBQ Bacon WHOPPER® Sandwich From Burger King® I may die.”
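As a quick sketch of the kind of edge case Holman catalogs (this example is mine, not from his post), JavaScript's Date will happily accept a February 30 and silently roll it over rather than raise an error:

```typescript
// Hypothetical illustration, not from Holman's post: "invalid" calendar
// dates don't error out; JavaScript's Date silently rolls them forward.
const feb30 = new Date(Date.UTC(2018, 1, 30)); // months are 0-indexed, so 1 = February

// February 2018 has 28 days, so the extra two days spill into March.
console.log(feb30.toISOString()); // "2018-03-02T00:00:00.000Z"
```

Whether that rollover is a convenience or a silent data-corruption bug depends entirely on your app, which is rather the point of the post.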
- TypeScript 2.9: Moving on to some programming news, Microsoft announced TypeScript 2.9 this week, which brings with it a number of new editor and language/compiler features. One highly anticipated feature, according to the announcement, is that TypeScript 2.9 now allows users to move declarations to their own new files and rename files within their project while keeping import paths up-to-date. In addition, editors will now be able to show users when certain parameters are unused (and unnecessary) instead of returning an error. Beyond these editor features, new language features include import() types, --pretty output by default, support for well-typed JSON imports, type arguments for tagged template strings, and support for symbols and numeric literals in keyof and mapped object types. In addition, 2.9 has some minor breaking changes that you should keep in mind if upgrading. According to the announcement, Microsoft is "aiming to deliver an experience around project-to-project references, a new unknown type, a stricter any type, and more" with the upcoming version 3.0.
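To make the keyof change concrete, here's a minimal sketch (the names Row, metaKey, and getProp are mine, not from the announcement) of code that type-checks under 2.9 because keyof now covers numeric and symbol-named keys:

```typescript
// Sketch of TypeScript 2.9's keyof support for symbol and numeric keys.
// Names here are illustrative, not taken from the release notes.
const metaKey = Symbol("meta");

interface Row {
  0: string;            // numeric-literal property
  name: string;
  [metaKey]: boolean;   // symbol-named property
}

// Before 2.9, keyof T covered only string-like keys;
// now keyof Row is 0 | "name" | typeof metaKey.
function getProp<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const row: Row = { 0: "first", name: "answer", [metaKey]: true };
console.log(getProp(row, 0));       // "first"
console.log(getProp(row, metaKey)); // true
```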
- Update Git! And if you hadn’t yet heard, Microsoft announced this week the May 2018 Git security vulnerability. I sort of enjoy how it’s called “the May vulnerability,” as if you can look forward to a new security vulnerability each month. Just wait until what we have in store for June! But in all seriousness, an industry-wide security vulnerability was disclosed this week “that can lead to arbitrary code execution when a user operates in a malicious repository.” Git 2.17.1 includes a fix, as does Git for Windows 2.17.1(2), and all users are encouraged to update their Git clients as soon as possible. Meanwhile, Microsoft has blocked these types of malicious repositories from being pushed to VSTS, to ensure that the service cannot be used as a vector for transmitting maliciously crafted repositories to users who have not yet patched their clients. Click through to read the post for all the gory details.
Hey, I just cloned you
and this is crazy
but here's my PR
merge me maybe
— David Neal (@reverentgeek) May 25, 2018
- Stack Overflow’s Developer Survey Data Goes Public: It’s that time of year again, when all of you get to make those pretty, interactive infographics I like so much: Stack Overflow has released its 2018 developer survey data to the public. The data includes responses from more than 100,000 developers around the world, and is available both from Stack Overflow itself and on Kaggle Datasets, where you can explore it using Kernels.
- A Snapchat Developer Platform? TechCrunch reports that Snapchat is prepping its Snapkit platform to bring its camera and login to other apps. According to the article, “Snapchat is secretly planning the launch of its first full-fledged developer platform, currently called Snapkit,” which will provide a Snapchat login for other apps, allow for the use of Bitmoji avatars, and host a version of Snap’s full-featured camera software that can share back to Snapchat.
- Google Introduces Machine Learning Practica: Continuing its march toward turning everyone into an AI developer, Google has introduced Machine Learning Practica. The first of these, the Machine Learning Practicum on Image Classification, “contains video, documentation, and interactive programming exercises, illustrating how Google developed the state-of-the-art image classification model powering search in Google Photos.”
- An AI Winter, You Say? Speaking of AI, one post this week warns us all that the AI Winter is well on its way (to which one Redditor quipped that, indeed, “Buzzword Spring never ended”). The article takes particular exception to deep learning, which it says “has been at the forefront of the so-called AI revolution for quite a few years now, and many people had believed that it is the silver bullet that will take us to the world of wonders of technological singularity (general AI).” In support of the argument, the author writes that the cracks are “not on the surface yet, NIPS conference is still oversold, the corporate PR still has AI all over its press releases, Elon Musk still keeps promising self-driving cars and Google CEO keeps repeating Andrew Ng’s slogan that AI is bigger than electricity,” before exploring how “the place where the cracks are most visible is autonomous driving — an actual application of the technology in the real world.”
Anyone who wants to write an alarmist op-ed warning about the "dangers of AI" should be forced to first spend 48 hours using TensorFlow to solve a non-trivial problem.
— Neil Conway (@neil_conway) May 30, 2018