
This Week in Programming: Amazon’s Yearly Reinvention

2 Dec 2017 6:00am

With AWS re:Invent 2017 this week in Las Vegas, there’s a bunch of news for you Amazon users, which, let’s face it, is quite a few of you. After all, the company accounts for nearly 50 percent of the cloud market with Microsoft Azure trailing at just 10 percent. When AWS goes down, so does much of the Internet.

Okay, you don’t need us to tell you that Amazon is big, but we can tell you that there have been several announcements this week that might get you excited, from new VR app creation tools to deep-learning video recognition to a new, fully integrated and cloud-based IDE.


This Week in AWS re:Invent Developer News

  • Cloud9 IDE Reborn: Amazon announced AWS Cloud9, a cloud-based IDE that integrates directly with AWS, allowing users to write, run and debug code from their browser. AWS Cloud9 is reborn from c9.io, which Amazon acquired last year, and is built on the popular open source Ace Editor. In its blog post, Amazon highlights three main components — the editor, AWS integrations, and collaborative features. A sidenote for those not following the latest IDE news: AWS Cloud9’s collaborative features — collaborative editing, terminal sharing and chat — come on the heels of announcements from GitHub and Microsoft that Atom and Visual Studio, respectively, are getting real-time collaboration. (So far, Microsoft appears to lead the pack in this particular feature race, with capabilities far beyond simply seeing the other person type.) As for AWS integrations, the IDE makes it easy to work with serverless applications and to launch an environment directly from AWS CodeStar; basically, it’s meant to be your one-stop shop. And then there’s the editor, which Amazon says has “all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections,” as well as built-in support for more than 40 language modes and Vim mode.
  • Collaborative, Multi-User Applications That Work Offline: Next on the list, Amazon introduced AWS AppSync, a fully managed serverless GraphQL service for real-time data queries, synchronization, communications and offline programming features. Supporting development for iOS, Android, and JavaScript applications, AppSync looks to “simplify the retrieval and manipulation of data across multiple data sources with ease, allowing [developers] to quickly prototype, build and create robust, collaborative, multi-user applications.” AppSync isn’t fully available yet, but you can apply to take part in the preview. Check out the TechCrunch article on AppSync for a plain-language explanation of how it works.
  • VR for Dummies: Available in preview so far, Amazon Sumerian is here to make it easy to “create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise.” Currently, Sumerian-created “apps run on popular hardware such as Oculus Rift, HTC Vive, and iOS mobile devices (support for Android ARCore coming soon).” TechCrunch offers some insight into not only the possible meaning of the name, but the company’s overall strategy with VR.
  • Video Recognition with a K: Announcing Amazon Rekognition Video, the purposefully misspelled follow-up to Amazon Rekognition Image that “brings scalable computer vision analysis to your S3 stored video, as well as, live video streams.” The new product works to “automate all the tasks necessary for detection of objects, faces, and activities in a video,” and it’s the real-time analysis of live video streams that’s the focus of this one — just think of all those heads-up displays you’ve seen in your favorite sci-fi movies!
  • That’s Quite a Camera You’ve Got There: Along similar lines, Amazon announced AWS DeepLens, “a new video camera that runs deep learning models directly on the device, out in the field.” The device “combines leading-edge hardware and sophisticated on-board software, and lets you make use of AWS Greengrass, AWS Lambda, and other AWS AI and infrastructure services in your app.” With a 4 megapixel camera, 1080P video capture, and a dedicated Intel Atom processor, DeepLens can “run tens of frames of incoming video through onboard deep learning models every second.” It also comes with dual-band Wi-Fi, USB and micro HDMI ports, as well as 8 gigabytes of memory. As for software, DeepLens runs Ubuntu 16.04, ships a device-optimized version of MXNet, and has the flexibility to use other frameworks such as TensorFlow and Caffe2.
  • Bring On The Movie-Tastic Future: Combine all these things — purpose-built AI cameras, real-time video recognition, easy 3D, VR, and AR app creation — and add to them the final AWS re:Invent announcement we’ll look at, Elemental-based AWS Media Services, and you have quite the feature set for creating that “next generation” of apps that people like to talk about. The media services are “an integrated suite of services that make it easy for video providers of all kinds to create reliable, flexible, and scalable video offerings in the cloud” that “let customers build end-to-end workflows for both live and on-demand video with the professional features, image quality, and reliability needed to deliver premium video experiences to viewers across a multitude of devices.”
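To make the AppSync item above a little more concrete: a GraphQL request is just a JSON document pairing a query string with its variables, which is what a client POSTs to a GraphQL endpoint like AppSync’s. Here is a minimal Python sketch of building such a request body — the schema, field names, and query here are invented for illustration and are not AppSync’s own API:

```python
import json

# Hypothetical GraphQL query for a collaborative to-do app; the schema
# (listTasks, its fields) is invented for illustration.
LIST_TASKS_QUERY = """
query ListTasks($limit: Int) {
  listTasks(limit: $limit) {
    items {
      id
      title
      completed
    }
  }
}
"""

def build_graphql_request(query: str, variables: dict) -> str:
    """Serialize a GraphQL query plus variables into the JSON body
    a client would POST to a GraphQL endpoint."""
    return json.dumps({"query": query, "variables": variables})

body = build_graphql_request(LIST_TASKS_QUERY, {"limit": 10})
```

The same single-endpoint, query-plus-variables shape is what lets AppSync resolve one request against multiple data sources behind the scenes.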
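As for the Rekognition Video item, detection results come back as timestamped labels with confidence scores, which your code then filters. The sketch below follows the general shape of Rekognition’s label-detection results, but the sample data is invented and this is not a substitute for the real API response:

```python
# Invented sample modeled on the shape of a Rekognition Video
# label-detection response: a list of timestamped labels with confidences.
SAMPLE_RESPONSE = {
    "Labels": [
        {"Timestamp": 0, "Label": {"Name": "Person", "Confidence": 98.1}},
        {"Timestamp": 500, "Label": {"Name": "Car", "Confidence": 91.4}},
        {"Timestamp": 500, "Label": {"Name": "Tree", "Confidence": 55.0}},
    ]
}

def confident_labels(response: dict, min_confidence: float = 90.0) -> list:
    """Return (timestamp_ms, label_name) pairs at or above a confidence threshold."""
    return [
        (entry["Timestamp"], entry["Label"]["Name"])
        for entry in response["Labels"]
        if entry["Label"]["Confidence"] >= min_confidence
    ]

hits = confident_labels(SAMPLE_RESPONSE)
```

Thresholding on confidence like this is how an application decides which detections are worth acting on in a live stream.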

This Week in Programming News

  • Neural Networks API Goes Live with Final Android 8.1 Preview: Google announced the final preview of Android 8.1, which Programmable Web calls “anything but minor,” citing “a bevy of under-the-hood changes for developers and end users alike.” Most notably, the Neural Networks API is now active, as is the Pixel Visual Core chip in Google’s Pixel smartphones, which is accessible with the Android Camera API. The Neural Networks API boosts smartphone AI functionality by running computationally intensive operations on device hardware rather than in the cloud, paving the way for higher-level machine learning frameworks such as TensorFlow Lite and Caffe2.
  • Kotlin 1.2 Released with Multi-Platform Code: JetBrains announced Kotlin 1.2, calling it “a major new release and a big step on our road towards enabling the use of Kotlin across all components of a modern application.” This latest version adds “the possibility to reuse code between the JVM and JavaScript,” meaning you can reuse code “across all tiers of your application — the backend, the browser frontend and the Android mobile app.” Additionally, Kotlin 1.2 brings about a 25 percent compilation performance increase over the previous version. For full details on what’s new, check out the documentation and the What’s New in Kotlin 1.2 page.
  • Rails 5.2 with Active Storage: Rails 5.2 hits the streets with the Active Storage framework, supporting Amazon’s S3, Google’s Cloud Storage, and Microsoft Azure Cloud File Storage out of the box. It also includes a “sparkling new” Redis cache store that “supports Redis::Distributed, for Memcached-like sharding across Redises.”
  • JetBrains Unveils Go IDE GoLand: JetBrains, the main company behind Kotlin, unveiled its Go-specific IDE GoLand, which it says is “aimed at offering the same level of developer experience for Go as PyCharm does for Python or IntelliJ IDEA does for Java.” The IDE offers testing and debugging tools, integrations for Git, Docker, databases, and a terminal, and supports front-end development with coding assistance for JavaScript, TypeScript, React, Vue.js, Angular, and others. At issue for some users, it seems, is the company’s strategy of releasing different IDEs for each language instead of simply adding support to one central IDE, where settings could be universal. For Java developers using IntelliJ IDEA Ultimate, at least, GoLand can be used as a plugin.
  • Mozilla’s Open Source Speech Recognition & Voice Dataset: With its latest release, Mozilla looks to bring speech recognition services to the masses, rather than just those who can afford commercially available products. The announcement includes two major products: the initial release of DeepSpeech and “the world’s second-largest publicly available voice dataset, which was contributed to by nearly 20,000 people globally.” The company boasts that DeepSpeech “has a word error rate of just 6.5 percent on LibriSpeech’s test-clean dataset.” The initial release includes pre-built packages for Python and Node.js, plus a command-line binary, but is only available in English for now, with multi-language support expected in the first half of 2018. As for the dataset, which is part of Project Common Voice, the first release includes nearly 400,000 recordings representing 500 hours of speech and is available for download.
  • Google Retires Realtime API: Google’s Realtime API is being retired after four years to make way for solutions using “fast, flexible cloud-based storage solutions like Google Cloud SQL and Google Cloud Firestore.” Programmable Web breaks down the timeline for the API’s deprecation: “Applications using the Google Realtime API will work as expected until December 11, 2018; however, the company is not accepting new API clients. On Dec. 11, 2018, the API will no longer allow Realtime API documents to be modified, the documents will become read-only. Finally, on Jan. 15, 2019, the API will completely shut down. A JSON export API will remain available, however.”
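The “Memcached-like sharding across Redises” mentioned in the Rails item above boils down to deterministically picking a server from the key, so each key always lands on the same shard. Here is a minimal Python sketch of the idea, using a simple hash-modulo scheme rather than the consistent hashing a production client like Redis::Distributed uses; the server addresses are placeholders, and a real client would hold connections rather than strings:

```python
import zlib

# Placeholder shard addresses; a real pool would hold live Redis connections.
SHARDS = ["redis://cache-1:6379", "redis://cache-2:6379", "redis://cache-3:6379"]

def shard_for(key: str, shards=SHARDS) -> str:
    """Pick a shard deterministically by hashing the key (CRC32 mod N)."""
    return shards[zlib.crc32(key.encode()) % len(shards)]
```

The trade-off of plain modulo is that adding or removing a shard remaps most keys; consistent hashing exists precisely to limit that churn, which is why real clients prefer it.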
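Mozilla’s 6.5 percent figure for DeepSpeech refers to word error rate, a standard speech-recognition metric: the word-level edit distance (substitutions, insertions, deletions) between the hypothesis transcript and the reference, divided by the number of reference words. A minimal Python sketch of the computation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

A 6.5 percent WER means roughly one word in fifteen is wrong relative to the reference transcript, which is competitive with commercial engines of the time.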

Google is a sponsor of The New Stack.

Feature image: Slot machine at The Venetian, home of this year’s AWS re:Invent.


