Data / Machine Learning / Programming Languages

What Every Developer Should Know about Machine Learning

14 Oct 2016

This post is one in a series of tutorials and analysis exploring the fields of machine learning and artificial intelligence. Check back on Fridays for future installments.

Have you ever wondered how your email automatically gets filtered for spam? Is there an actual person who understands the context in your email and then decides whether it’s spam or not? Or how a self-driving car works?

These intelligent actions are being performed by machines with the help of Machine Learning (ML).

ML is the ability of a machine to find patterns in data through statistical analysis and to take actions based on those patterns.

For instance, how can a smartphone camera automatically detect a face in a picture? The answer, again, is machine learning. Machine learning software working with the phone’s camera detects faces and draws boxes around them. This is possible because a machine learning algorithm has been trained to distinguish faces from other objects, giving the camera the ability to act on what it sees, much as a human would.

Image Source : http://blog.photoshelter.com/wp-content/uploads/2014/03/facedetection.jpg

Face Detection using a phone camera.

The milestones achieved by ML are significant and worth mentioning. In 2015, Microsoft launched a machine learning-driven service, called How Old?, that could guess a person’s age from a photo. The service gave fairly accurate results from the start, and its accuracy improved as more people used it.

Google’s AlphaGo program, based on machine learning, defeated a professional player at the board game Go in 2015. IBM’s Watson will be used as a teaching assistant for teaching mathematics to third-grade students in the near future. Self-driving cars from Tesla and Uber are being tested and are seen as the future of autonomous driving. Google Allo, an instant messaging app, lets a user reply without typing with the help of a virtual assistant; it will come pre-installed on Pixel phones later this month. Soon, many devices will have some form of intelligence, and machine learning is the key to it.

Machine Learning Before it was Cool

Hello, Watson!

About fifty years ago, ML began with the vision of building intelligent machines that process information in much the same way humans do.

The human brain processes huge amounts of information from the many sensors on the human body to perform complex tasks, and computer scientists have long wanted to replicate that with computers. The earliest machine learning application appeared in a checkers program designed by Arthur Samuel in 1952. The machine taught itself the strategies required to play checkers by playing games against humans, adopting those strategies through self-learning and improving its gameplay.

Arthur Samuel was a pioneer in machine learning and artificial intelligence and received the IEEE Computer Society’s Computer Pioneer Award in 1987. His paper, “Some Studies in Machine Learning Using the Game of Checkers,” remains widely cited.

In 1957, the “perceptron” model developed by Frank Rosenblatt formed the basis of artificial neural networks. The simplest model consists of two layers, an input layer and an output layer. Multiple inputs are fed to the input layer and summed at the output node. Each input carries its own weight, and if the weighted sum of the inputs reaches a threshold, the perceptron activates and generates an output. By trying different combinations of input weights, different output responses can be produced. More sophisticated models add several “hidden” layers between the input and output layers to solve complex problems.
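The weighted-sum-and-threshold behaviour described above can be sketched in a few lines of Python. The weights, threshold and inputs below are illustrative values chosen by hand, not a trained model:

```python
# Minimal single perceptron: compare a weighted sum of inputs to a threshold.
# Weights, threshold, and inputs are illustrative, not learned from data.

def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example: with these hand-picked weights, the perceptron computes logical AND.
weights = [1.0, 1.0]
threshold = 1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights, threshold))
```

Training a perceptron simply means adjusting those weights until the outputs match the desired answers; stacking many of these units into hidden layers gives a multi-layer network.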

Perceptron, the building block of an ANN (Artificial Neural Network)

 

The famous “nearest neighbour” (NN) algorithm was invented in 1967, when a machine was first shown to identify similar types of data based on data it had been presented beforehand. In the late 1990s, this algorithm defined the era of “pattern recognition” (PR). Practical applications of PR include computer vision, speech recognition, natural language processing (NLP), object detection and recognition, medical imaging, robotics, and surveillance.
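In the spirit of that algorithm, a nearest-neighbour classifier can be sketched in a few lines: it labels a new data point with the label of the closest point it has already seen. The points and labels below are made up for illustration:

```python
import math

# 1-nearest-neighbour: classify a query point by copying the label of the
# closest previously seen point. Training points and labels are illustrative.

def nearest_neighbour(train_points, train_labels, query):
    """Return the label of the training point closest to query (Euclidean distance)."""
    best_label, best_dist = None, float("inf")
    for point, label in zip(train_points, train_labels):
        dist = math.dist(point, query)
        if dist < best_dist:
            best_dist, best_label = dist, label
    return best_label

points = [(1, 1), (2, 1), (8, 9), (9, 8)]
labels = ["face", "face", "non-face", "non-face"]

print(nearest_neighbour(points, labels, (1.5, 1.2)))  # nearer the "face" points
print(nearest_neighbour(points, labels, (8.5, 8.5)))  # nearer the "non-face" points
```

In practice the points would be feature vectors extracted from images or audio, but the principle is the same: similarity is measured as distance in that feature space.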

By the late 2000s, ML was being used in web applications, data mining (big data) and language processing, and both supervised and unsupervised methods of machine learning advanced. Popular algorithms include regression, k-nearest neighbour search and support vector machines (SVMs).

The checkers program and the perceptron fall under the category of supervised learning, where the desired output is known in advance. For instance, to detect a face in an image, an ML algorithm is trained on examples consisting of faces and other objects, and the machine learns through those training examples to distinguish a face from a non-face.

Clustering data points by color. The legend shows ten clusters that have been determined by the algorithm.

 

Clustering, on the other hand, is a form of unsupervised learning, where the desired output is not known beforehand. Suppose there are random objects with different colors, sizes and shapes. A clustering algorithm will group similar objects together based on color, shape and size, finding correlations within the data without any labeled training examples.
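This grouping behaviour is commonly implemented with an algorithm such as k-means, which repeatedly assigns each point to its nearest cluster centre and then moves each centre to the average of its points. A bare-bones sketch, with made-up one-dimensional “size” values, might look like:

```python
# Bare-bones k-means clustering on 1-D "size" values: group similar numbers
# together without any labels. The data and k below are illustrative.

def k_means(values, k, iterations=10):
    """Cluster 1-D values into k groups; returns a list of k clusters."""
    centroids = values[:k]  # naive initialisation: use the first k values
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest centroid
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

sizes = [1.0, 1.2, 0.9, 10.0, 10.5, 9.8]
print(k_means(sizes, k=2))  # the small and large sizes end up in separate clusters
```

No one told the algorithm which values were “small” or “large”; the structure emerged from the data itself, which is the essence of unsupervised learning.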

Recent Trends in ML

The foundations of ML were laid in the early 1950s, so the field has been around for quite some time. But ML has suddenly become a buzzword in technology, and it is here to stay. The reason is the easy availability of powerful computers, which did not exist in the early 1990s: hard drives had room for only a few images, there were no powerful GPUs, and hardware was expensive. In short, machine learning was limited by hardware resources and was used only by researchers in government projects and academia.

Open source machine learning library by Google

But as computer hardware has progressed, people are no longer limited by it. They can now afford faster, cheaper computers on which to run machine learning algorithms. An excellent thing in the machine learning world right now is that many of the frameworks being built are open source and free to use. For example, Google has open-sourced TensorFlow, its machine learning library, which has revolutionized the working culture of machine learning. Open-sourcing makes early adoption and development of the technology easier and faster. In subsequent posts, we will see how machine learning can be applied to solve all our problems awesomely!

Feature image via Pixabay.

 

 
