
Swift’s Chris Lattner on the Possibility of Machine Learning-Enabled Compilers

Swift’s Chris Lattner takes questions in an “Ask Me Anything” session hosted by the Association for Computing Machinery’s Special Interest Group on Programming Languages.
Aug 9th, 2020 6:00am

Chris Lattner has led an interesting life. After co-creating the LLVM compiler infrastructure project, he moved on to Apple, where he became one of the designers of the Swift programming language. After a six-month stint at Tesla and several years leading the Swift for TensorFlow project at Google Brain, Lattner settled into a new role at SiFive, a fabless semiconductor company building customized silicon using the free and open RISC-V instruction set architecture.

All this experience has given him a unique perspective on not just programming languages, but also on the compilers that translate them into lower-level code — and the communities that use them. So in June, he made an appearance at the Association for Computing Machinery’s Special Interest Group on Programming Languages, offering attendees at their virtual conference a chance to “Ask Me Anything.”

He looked to the past of Swift and the future of compilers — as well as some issues they’re facing here in the present. And he even sees a possible role for machine learning in both programming and compiler development.

Here are some of the highlights:


Swift Changes

Lattner had an interesting response when someone asked whether any features had been dropped from Swift as the language evolved.

“Yes! Tons of bad ideas were taken out of Swift.” He smiled, then added, “And tons of good ideas were added…”

Swift’s changelog even reaches back to the early proprietary versions of the now-open source software, and there’s also an evolution repository showing how Swift, “now a real language,” as Lattner described it, “came to be through its evolution and through iteration and change.” There are more changes around the time of Swift 2, Lattner noted, “because that was a time in Swift’s design evolution that compatibility could be broken.”

Some of the interesting changes throughout its history:

  • Though it was later removed, early on Swift had what Lattner called “Ruby-style closures with pipes delimiting the arguments, and all kinds of stuff like that.”
  • Swift also used to have the increment and decrement operators ++ and -- (each with both a prefix and postfix version). “That got removed as not being worth it and causing confusion.”
  • Lattner also remembered that Swift used to have C-style for loops with an initialization, a condition and an increment. “That got removed, because we can have forEach loops, and we can have smart iterators and things like that.” (Both removals are illustrated in the sketch below.)
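For anyone who never wrote early Swift, here is a small before-and-after sketch of those last two removals. The “before” lines are reconstructed from memory of pre-Swift-3 syntax and no longer compile; the rest runs on any current toolchain:

```swift
// Pre-Swift-3 (since removed): a C-style for loop using the ++ operator.
//
//   for var i = 0; i < 5; i++ {
//       print(i)
//   }

// Modern Swift: a for-in loop over a range, with += 1 where ++ once was.
for i in 0..<5 {
    print(i)
}

// The "smart iterators" Lattner mentions cover the patterns that once
// required the C-style loop header:
for i in stride(from: 0, to: 10, by: 2) {
    print(i)  // prints 0, 2, 4, 6, 8
}
```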

As an original co-author of LLVM, he’s also given a lot of thought to that space where languages and compilers meet. LLVM’s core libraries include an optimizer, a machine-code generator, and Clang, a fast compiler for C/C++/Objective-C code with user-friendly error messages. Later, someone at Cornell University asked what the biggest problems were for the current generation of compiled languages — and Lattner started by acknowledging an oversight that’s “partially my fault.”

“I think that many of the modern compiled languages, in which I would include Swift, Rust, a bunch of that kind of 2010-and-later languages — have forgotten and are rediscovering the value of high-level intermediate representation.”

“This is something the Fortran community has known for decades,” he added with a smile.

Intermediate representation (or IR) is the way compilers create their own internal versions of source code for optimization and retargeting, “and I think that having a high-level intermediate representation, a language-specific IR for doing much more flexible transformations, is very useful.” He gave credit to the MLIR (Multi-Level Intermediate Representation) project for working on the problem, “and I’m very happy to see it being adopted in a number of domains, including hardware design now. It’s really nice to just be able to pick up high-quality infrastructure and be able to build cool things out of it.

“I think that that space has not been explored as deeply as it should, merely because the infrastructure has not been good enough. And as the technology will diffuse through the compiler design community, I think that we’ll start to see new and interesting things.”
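Swift itself is a working example of what he’s describing: before anything reaches LLVM IR, the compiler lowers source code through SIL (Swift Intermediate Language), a language-specific high-level IR. A minimal way to look at that layer yourself, using an arbitrary throwaway function:

```swift
// double.swift: a trivial function whose high-level IR stays readable.
func double(_ x: Int) -> Int {
    return x * 2
}

// Ask the compiler for its language-specific IR instead of machine code:
//
//   swiftc -emit-sil double.swift
//
// The SIL output still carries Swift-level concepts (ownership, overflow
// checks) that plain LLVM IR has no vocabulary for, which is what makes
// high-level, language-aware transformations easier to express here.
```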

Chris Lattner in 2011 (photo by Alexandre Dulaunoy, via Wikipedia).

A questioner from the Computer College of London wanted to explore why LLVM is so popular. Was it for technical reasons (being simpler, faster, more extensible) or for social reasons (being advocated by influential people and organizations at the right time)?

“I would say both,” Lattner answered — also saying its popularity got a boost from its permissive open source licensing. “I don’t think there’s any one answer.” But then he pointed out that LLVM’s tool libraries are really useful — for example, when a researcher wants to focus on a specific problem without building out all of its surrounding infrastructure. “I think that the most profound aspect of LLVM, that has really helped move this space forward, is the modularity — the fact that it is not monolithic… Instead it’s a pile of libraries that can be sliced and diced and applied to different kinds of problems in different ways… I’m very happy when I see people do things I never even thought about or never even imagined.”

And what makes that possible? Clean design, proper layering, and correct interfaces — “through a community and a structure that values that.” And of course, having a healthy open source community, “because then you get not just the one arrow through the technology that one organization cares about, but you start to get multiple different vectors going on where different people in different organizations are caring about different things. And that leads to it being more well-rounded.”

A programmer’s programmer, Lattner always seems to return to the best ways to keep developers happy. He spoke appreciatively of the newer analytical testing techniques that have come along, like fuzzing and theorem proving, “because they find a different class of bug that is often more nefarious and more expensive to track down if you encounter it in the wild.” He called them “a really great complement to writing test cases and kind of the traditional way of doing software development. I think they’re great tools… I’m a huge fan of the work.”

He added with a laugh, “I think it has really helped move the compiler technology forward, by making it actually work.”

Machine Learning in Compilers?

He’s the first one to acknowledge there’s room for improvement. “You may have noticed that compilers get described as these beautiful platonic ideals of these algorithms that you can read about in a textbook, on the one hand. On the other hand, you go look at them, and they’re full of horrible heuristics and all kinds of grungy details that then make SPEC go fast or something… ”

So when a questioner from the University of São Paulo asked about the use of machine learning in compiler optimizations — Lattner saw the potential. “One of the challenges is that the existing compilers, LLVM included, were never designed for machine learning integration. And so there’s a lot of work that could be done to integrate machine learning techniques, including caching, offline analysis, and things like this, to integrate that into our compiler frameworks. But because the abstractions were wrong, it’s really hard to do that outside of a one-off research paper…”

He even provided an example. “Having a prediction algorithm to say, ‘We think it’s better to split this,’ for example, would be completely sound — you could make it deterministic, and it probably would have a lot of value… I think the existing frameworks are not perfectly well set up for that, but there is definitely a lot of work to be done there.”
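Here is a hedged Swift sketch of that idea, with every name invented for illustration rather than taken from LLVM or any real compiler: because the splitting transformation is sound whether or not it runs, the predictor (hand-written today, learned tomorrow) only influences code quality, and a frozen model keeps the pipeline deterministic.

```swift
// All types and names below are hypothetical, for illustration only.

struct FunctionInfo {
    let instructionCount: Int
    let callSiteCount: Int
}

protocol SplitPredictor {
    // Determinism requirement: this must be a pure function of its input,
    // e.g. a hand-written rule or a frozen, offline-trained model.
    func shouldSplit(_ function: FunctionInfo) -> Bool
}

// Today's hand-tuned heuristic...
struct ThresholdHeuristic: SplitPredictor {
    func shouldSplit(_ function: FunctionInfo) -> Bool {
        function.instructionCount > 200 && function.callSiteCount > 1
    }
}

// ...slots into the same place a learned cost model would, because
// correctness never depended on the answer; only code quality does.
func maybeSplit(_ function: FunctionInfo, using predictor: SplitPredictor) {
    if predictor.shouldSplit(function) {
        // Apply the always-sound splitting transformation here.
    }
}

let candidate = FunctionInfo(instructionCount: 512, callSiteCount: 3)
maybeSplit(candidate, using: ThresholdHeuristic())
```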

Another questioner, this one from Microsoft Research, asked how machine learning and AI were influencing software development now, and Lattner suggested that’s still in its early phases. When it comes to machine learning algorithms, “most people are treating them as a function. ‘I can use a model to train a cat detector. Now I have a cat detector function; I shove in an image and I get back a prediction…’ Right? They’re basically functions.”

But instead, he’d like to see machine learning become its own programming paradigm — another form of coding to accompany approaches like object-oriented programming and functional programming. “And it’s a programming paradigm that’s really, really, really good at solving certain classes of problems.

“I would not write a boot loader using machine learning, for example — there are classes of problems it’s not very good at. But it’s a tremendously useful way to solve problems in the domain that most humans live in. So I think that it should be just part of the toolbox. And the more we can break down the tooling gaps, the infrastructure gaps, the language gaps, and make it integrated with the normal application flow, the better we’ll be.”

And then he casually added a very useful aside for people interested in further research. “This is the idea of the Swift for TensorFlow project, if you’re familiar with that…”
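For context, that project framed a model as an ordinary Swift value you call like a function, which is exactly the “cat detector function” pattern Lattner described. A toy sketch along those lines; the layer shapes are arbitrary, and it requires the Swift for TensorFlow toolchain rather than stock Swift:

```swift
import TensorFlow

// A toy "cat detector": one dense layer over a flattened image,
// producing a single probability.
struct CatDetector: Layer {
    var flatten = Flatten<Float>()
    var dense = Dense<Float>(inputSize: 64 * 64 * 3, outputSize: 1,
                             activation: sigmoid)

    @differentiable
    func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
        dense(flatten(input))
    }
}

let model = CatDetector()
let image = Tensor<Float>(zeros: [1, 64, 64, 3])  // stand-in image batch
let prediction = model(image)  // shove in an image, get back a prediction
```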

Lattner closed by saying he’d really enjoyed answering questions, adding “Maybe we can do this again next year.”

