This new episode of The New Stack Makers podcast examines how real estate information provider Trulia uses artificial intelligence and computer vision to deliver a more personalized experience to its users. For this interview, Deep Varma, Trulia’s vice president of data engineering, spoke with TNS managing editor Joab Jackson:
Ingesting large amounts of data in near real time is a challenge many of today's organizations must address, particularly those that rely on customer metrics to shape their products and services. In the discussion, Varma notes that Trulia uses Apache Kafka to power its personalization platform.
“We tag events into a real-time messaging layer built on top of Kafka. We have connected our machine learning system to see those real-time systems and use those models to predict what customers are interested in. The last layer is a real time API that allows for desktops, mobile, and web browsers to give customers a personalized experience in real-time,” said Varma.
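The pipeline Varma describes has three layers: events tagged into a Kafka-backed messaging layer, models that consume those events to predict interests, and a real-time API that serves the results. Trulia's actual code is not public; the sketch below illustrates the pattern with an in-memory queue standing in for Kafka and a simple feature counter standing in for a trained model (all names are illustrative).

```python
from collections import deque, Counter

# Stand-in for the Kafka messaging layer (a real system would use a
# Kafka producer/consumer; this is an illustrative in-memory queue).
event_stream = deque()

def tag_event(user_id, event_type, listing_features):
    """Producer side: tag a user event into the messaging layer."""
    event_stream.append({"user": user_id, "type": event_type,
                         "features": listing_features})

def predict_interests(user_id):
    """Stub 'model': count which listing features a user views most."""
    counts = Counter()
    for event in event_stream:
        if event["user"] == user_id and event["type"] == "view":
            counts.update(event["features"])
    return [feature for feature, _ in counts.most_common(3)]

def personalize_api(user_id):
    """Real-time API layer: serve predicted interests to any client
    (desktop, mobile, or web browser)."""
    return {"user": user_id, "top_interests": predict_interests(user_id)}

# Simulate a short browsing session
tag_event("u1", "view", ["hardwood floors", "bay windows"])
tag_event("u1", "view", ["hardwood floors", "garage"])
tag_event("u1", "save", ["pool"])
print(personalize_api("u1"))
# → {'user': 'u1', 'top_interests': ['hardwood floors', 'bay windows', 'garage']}
```

In production, the queue would be a Kafka topic and the counter a continuously retrained model, but the producer/model/API split is the same shape Varma outlines.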
The company has also built a robust library for image recognition, allowing the service to identify features in real estate listing photos — such as hardwood floors and bay windows — that the listings themselves neglect to mention. Varma explains that Trulia also relies on the Cython static compiler for Python, and on GPUs rather than CPUs, to keep up with its process-intensive data streaming requirements.
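The details of Trulia's recognition models are not covered in the episode, but the tagging step reduces to mapping per-photo model scores onto amenity labels. A toy Python sketch, with hard-coded scores standing in for the output of a trained, GPU-run network (labels and thresholds here are hypothetical):

```python
# Toy sketch of amenity tagging from listing photos. A real system would
# run a trained vision model on GPUs, as Varma notes; here fixed scores
# stand in for the network's per-label outputs.

AMENITY_LABELS = ["hardwood floors", "bay windows", "granite countertops"]

def tag_photo(scores, threshold=0.5):
    """Return the amenity labels whose model score clears the threshold."""
    return [label for label, score in zip(AMENITY_LABELS, scores)
            if score >= threshold]

# Scores as a model might emit them for one listing photo
photo_scores = [0.92, 0.81, 0.12]
print(tag_photo(photo_scores))  # → ['hardwood floors', 'bay windows']
```

The resulting tags can then enrich listings whose written descriptions omit those features.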