Earlier this week, I found myself wandering down the wormhole of comments that is nearly every AskReddit thread I clicked on, when I stumbled across this video of Saqib Shaikh, a developer who happens to be blind, showing how he uses Visual Studio 2017 with screen reader software for writing and debugging code at the Microsoft Build 2017 conference last May:
“With these things, often it is just the small changes that matter, but it makes such a big difference to my productivity and how I code every day,” Shaikh said, explaining how a few small tweaks to Visual Studio allowed him to greatly expedite his workflow.
“So maybe if you work on any user interfaces, be they desktop, mobile or web, then these small changes in your apps can make a big difference in how someone uses your apps every day,” Shaikh said.
I was partially taught to code over the phone by a man I never met in person. It was the early 1990s, I was running a BBS, and I’d bought the source code from VBBS creator Roland De Graaf, who happened to be legally blind.
Whenever I spoke with my mentor of sorts on the phone, I could hear his screen reader in the background, dictating to him what was on his screen in much the same way he would dictate to me what I had to type to fix my code. It was a familiar process, after all, for someone like me, who first learned to code by typing in programs printed on the back pages of Discover magazine.
So, while for me there was never any question of whether someone could code without being able to see, it is a question often asked. And the answer is obviously yes.
Here’s another account from a blind programmer on doing software development at 450 words per minute, the rate at which their screen reader “displays” the screen to them.
Back to Saqib Shaikh, who also happens to be a software engineer in the artificial intelligence and research group at Microsoft. Taking the idea of screen readers to the next level, Shaikh has created an app using Microsoft’s Intelligence APIs to work with smart glasses or smartphones that can help describe the world around him in much the same way his computer reads his screen.
Back at the Track
- Stack Overflow continues its data-driven analysis of programming languages this week with a look at the reason behind Python’s meteoric growth. According to the study, “the fastest-growing use of Python is for data science, machine learning and academic research.” This growth, however, is not limited to a single field, as “Python’s growth is spread pretty evenly across industries,” which “tells a story of data science and machine learning becoming more common in many types of companies, and Python becoming a common choice for that purpose.”
- KDnuggets dives even further into the story of Python in the realm of data science and machine learning in a post on Python vs R — Who Is Really Ahead in Data Science, Machine Learning? Spoiler alert: Python is running away with the race.
- If you weren’t yet convinced, a quick little article in Electronics Weekly lays out some very simple reasons why Python isn’t just for beginners — if you ever thought that in the first place.
- InfoWorld takes a look into why Oracle is dumping Java EE, which will be adopted by the Eclipse Foundation, steward of the popular Eclipse IDE. The change will likely lead to more than simply a name change for Java EE, with the stated goals being open governance, collaboration, and faster iterations to keep up with enterprises moving to more cloud-centric models.
- And finally for this week, one from our own pages — GitHub’s Atom Text Editor Gets a Full IDE. With the release of Atom-IDE, a set of packages designed to bring IDE-style functionality to Atom, the popular text editor has taken “the first step down a path toward more first-class IDE functionality in Atom,” according to one engineer on the project. Take a look at the post for the full list of features.