Culture / Development / Machine Learning

Code n00b: Hail, Web Development Robot Masters!

26 Jan 2018 12:00pm

Ye olde internet is populated with some supremely depressing shite, but by far the most depressing thing I read last week was Emil Wallner’s blog post on FloydHub. The actual title was “Turning Design Mockups into Code with Deep Learning,” but an equally applicable one could have been, “Think Your Web Dev Job is Safe From Takeover By Robots? Guess Again!”

Good ol’ Emil’s thesis? “In less than two years, we’ll be able to draw an app on paper and have the corresponding front-end in less than a second.” He explained that deep learning algorithms, along with synthesized training data, have now evolved to a point where a neural network, if given a design mockup, can be trained to code the HTML and CSS for a “basic” website. As someone who transitioned to web development after a couple decades in print journalism — another profession circling the drain — this is not exactly welcome news. I read further into the blog, hoping to find this was all just some blue sky bloviating and not a sign of the impending artificial intelligence automation of my newfound career.

Unfortunately, Wallner laid out a pretty compelling case, one I can’t follow very closely because neural networks are not exactly my thing. The basic process involves feeding the neural network several layouts along with the corresponding HTML:

Before: the sample screenshot given to the neural network.

Then it gets a screenshot mockup to build on its own and “learns” by predicting the HTML markup, verifying each prediction against the training examples. Going tag by tag, it corrects itself and iterates until it generates the finished markup. A similar process then uses a “dataset of generated bootstrap websites” to generate the corresponding CSS code. Assuming there wasn’t some Photoshopping going on behind the scenes, his output is damned impressive:

After: The coded version of the sample screenshot.
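The tag-by-tag loop Wallner describes can be sketched in miniature. In this toy Python sketch, a stub lookup table stands in for his actual CNN-plus-LSTM network (every name, token, and data structure here is an illustrative assumption, not his code), but the shape of the loop is the same: during training the model sees the correct previous tag and learns what should follow it, and at generation time it greedily predicts one tag at a time until the markup is done.

```python
# Toy sketch of tag-by-tag markup generation, assuming a stub
# lookup-table "model" in place of a real neural network.

# Hypothetical ground-truth markup for one training example.
TARGET = ["<html>", "<body>", "<h1>", "</h1>", "</body>", "</html>"]

# Stub "learned" transition table: previous token -> predicted next token.
model = {}

def train(target_tokens):
    """Teacher forcing: at each step the model is shown the *correct*
    previous token and corrects itself toward the ground truth."""
    prev = "<START>"
    for token in target_tokens:
        model[prev] = token  # learn what follows each token
        prev = token

def generate(max_tokens=10):
    """Greedy decoding: predict one tag at a time, feeding each
    prediction back in as the next step's input, until done."""
    out, prev = [], "<START>"
    for _ in range(max_tokens):
        nxt = model.get(prev)
        if nxt is None:  # no learned continuation: markup is finished
            break
        out.append(nxt)
        prev = nxt
    return out

train(TARGET)
print(generate())  # after training, reproduces the target markup
```

A real implementation replaces the lookup table with a network conditioned on the screenshot, so the same loop can emit markup for layouts it has never seen, rather than just replaying its training data.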

The good news, at least temporarily, is that the process currently requires such a vast amount of computing power that it is not yet a viable option for automating the commercial production of websites. Even creating a bare-bones “Hello World” experimental version is beastly: “Expect to rent a rig with 8 modern CPU cores and a 1Gbps internet connection to have a decent workflow,” noted Emil. However, with computing power becoming ever cheaper and more available, that will not be the case forever. (Also, full stack folks, don’t get complacent, because you’re not safe either: the next frontier is to tie the markup and stylesheets to scripts, and eventually the backend.)

So, then, what might our future look like?

The reality is that front-end development of a static website is a logical place to apply machine learning. The data is not complex, and even current deep learning algorithms can follow and map the logic. Wallner’s prediction of auto-generated front-end apps in less than two years appears sound: as he points out, there are already two working prototypes built by Airbnb’s design team and Uizard.

For those who would like to get a closer view of the impending train wreck, the open source code for Wallner’s experiments, written in Python with Keras (a deep learning framework that runs on top of TensorFlow), is available on GitHub and as Jupyter notebooks on FloydHub.

If this is the inevitable future, at least it’s open source.

So what can we do to prepare? Besides, of course, practicing our robot-buffing skills? (Our future overlords presumably will prefer to be nice and shiny, and it’s hard to outsource a truly detailed finish job. They’ll still need humans for that, at least.) I’ve been pondering this since reading Wallner’s blog last week, and honestly, I think the answer is nothing. There’s nothing developers can do, given current technology and what looks to be coming down the pike pretty damn quickly, to stay one step ahead of machine learning. For one thing, the machines never get tired, and they don’t need caffeine or bio breaks.

Realistically, though, this is not so different from the way the internet and web development have evolved since the founding of ARPANET. Imagine one of the original wide area network pioneers looking forward four or five decades to where things are now: the tech we take for granted would be as astonishing to them as a future of automated coding bots is for us to behold now.

Most of us are not pioneers, though; we are each just digging away in our own tiny corner of the webiverse. And we have all had to learn the new thing, and then the next new thing, and the one after that, at the very least in order to remain employable. There are always more new things. This is one of them. But it’s not all of them. Until the next currently unimaginable breakthrough, at least, computers are still linear, logic-bound entities. So we just keep learning right along with the machines, and looking to see where our flexible, creative, monkey minds take us next.

While we let them do all the grunt work.

Check out more Code n00b columns.

Feature image by Samuel Zeller on Unsplash.
