Traffic has no friends, especially in Colombo. Nonetheless, we braved the traffic and made our way to the VirtusaPolaris auditorium. This was where SLAAI (the Sri Lanka Association for Artificial Intelligence), in partnership with VirtusaPolaris, held the latest Colombo AI Meetup. Here’s everything we learned about AI at this meetup.
Natural Language Processing 101
The first speaker of the day was Dr. Ruvan Weerasinghe on the topic of Natural Language Processing. In case you’re lost, Natural Language Processing is all about making computers understand what we say. This is one of the many pieces of tech powering AI assistants such as Siri, Cortana and Google Now. Dr. Ruvan’s presentation was aimed at showing everyone how Natural Language Processing has evolved over the years.
He opened his presentation by showing us how natural language processing was carried out in the past. As Dr. Ruvan explained, in the past natural language processing involved teaching computers to understand human speech. However, this is a lot more complex than it sounds. The biggest challenge for natural language processing is, simply put, linguistics. Computers struggled to understand languages because of their many moving parts: structure, grammar, phonetics, and so on.
So what do we do today? Today, it’s not about understanding language. Rather, it’s about looking at the data and finding patterns. The more data you have, the better your application is at understanding what people are saying. In fact, Dr. Ruvan went on to state that having more data is better than having a better algorithm. He then moved on to share some of the applications of natural language processing, which include answering questions, summarizing information, and machine translation, to name a few. Dr. Ruvan concluded his presentation by sharing some of the challenges natural language processing faces today. These include estimating the complexity of such projects and the fact that little work has been done for local languages.
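To make the data-over-algorithms point concrete, here’s a minimal sketch (our own illustration, not from Dr. Ruvan’s slides) of pattern-based language modelling: a bigram model that predicts the next word purely from counts in a corpus, with no grammar rules at all. The corpus here is a toy stand-in for the large datasets modern systems learn from.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the large datasets modern NLP relies on.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased a mouse",
]

# Count how often each word follows another (bigram counts).
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, based purely on observed patterns."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — the most frequent follower of "the"
```

Notice there is no linguistics here at all; add more sentences and the predictions improve, which is exactly the shift Dr. Ruvan described.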
Zone24x7’s smart retail platform
Following Dr. Ruvan’s presentation, we saw Rashan Anushka Peiris, a software architect from Zone 24×7, take the stage. His presentation was focused on sharing some of the inner workings behind Zone 24×7’s smart retail platform. He kicked off by giving us some background on the project. The project was aimed at helping one of the largest department store chains in the US. The company was facing stiff competition, declining profit margins and declining customer satisfaction.
Zone 24×7’s answer to this problem was to build a smart retail platform powered by AI. This smart retail platform would sense the customer, understand their needs and then make a decision to satisfy those needs. The goal was to provide a personalized experience that would turn a random customer into a diehard fan. So how does this smart retail platform work?
It all starts with the sensors. These sensors are used to track inventory and monitor user behavior. The main platform for these sensors is a robot. A robot called Aziro, to be precise. Aziro isn’t as cool as an Iron Man suit of armor, but it comes close. Aziro is an intelligent robot that can navigate almost any retail environment and find items using RFID tags. Yet this is still only one part of Zone 24×7’s smart retail system.
Moving on, Rashan showed us how the system makes sense of the data it collects. For starters, the system requires reliable infrastructure for moving messages around. Some of the tools Rashan mentioned for this were Apache Kafka and Apache NiFi. Once the data is received, the system needs scalable storage, fast retrieval, and the ability to process data both in batches and in real time, depending on the data. But for the system to be truly smart and understand its customers, it also relies on machine learning and data mining.
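The batch versus real-time split Rashan described can be sketched with a toy in-memory broker (our own stand-in for something like Kafka, not Zone 24×7’s actual code): events are pushed to real-time subscribers as they arrive, while a retained log lets batch jobs process everything later.

```python
# Minimal in-memory stand-in for a message broker such as Kafka.
# Illustrative only — real brokers add partitioning, durability, etc.
class Broker:
    def __init__(self):
        self.subscribers = []
        self.log = []  # retained events, consumed later by batch jobs

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        self.log.append(event)            # durable log for batch processing
        for handler in self.subscribers:  # push to real-time consumers
            handler(event)

broker = Broker()
alerts = []

# Real-time path: react to each event the moment it arrives.
broker.subscribe(
    lambda e: alerts.append(e) if e["type"] == "shelf_empty" else None
)

broker.publish({"type": "item_scanned", "sku": "A1"})
broker.publish({"type": "shelf_empty", "sku": "B2"})

# Batch path: periodically crunch everything retained in the log.
scan_count = sum(1 for e in broker.log if e["type"] == "item_scanned")
```

The same event stream feeds both paths, which is why a reliable message backbone matters so much for a platform like this.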
Once the system understands what customers want from the data, Rashan showed us how it satisfies the needs of the customer. It starts off by looking at what items the customer has purchased, viewed, considered buying, etc. Based on this, the system gives the customer personalized recommendations. These recommendations can be delivered through the many assistive bots and devices the system is compatible with. Rashan concluded his presentation by showing us how the system would use smart displays not only for recommendations but also for other tasks such as customer greetings.
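Recommendations of this sort are often driven by co-occurrence: items frequently bought together get suggested to customers who own one but not the other. Here’s a minimal sketch of that idea (our own illustration with made-up data, not the platform’s actual recommender).

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories — names are illustrative only.
histories = [
    {"jeans", "belt", "shirt"},
    {"jeans", "belt"},
    {"shirt", "jacket"},
]

# Count how often each ordered pair of items appears in the same basket.
co_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, owned):
    """Suggest the item most often bought alongside `item`, skipping owned ones."""
    candidates = {
        b: n for (a, b), n in co_counts.items()
        if a == item and b not in owned
    }
    return max(candidates, key=candidates.get) if candidates else None

print(recommend("jeans", {"jeans"}))  # "belt" — bought with jeans twice
```

Real systems layer views, dwell time and much richer models on top, but the core signal is the same: what this customer looked at, matched against what similar customers bought.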
Following the presentations, we saw a panel discussion take place. The panel was moderated by Dr. Ajith Madurapperuma – Head of Electrical & Computer Engineering at the Open University of Sri Lanka – and the panelists included the speakers as well as Janaka Pitadeniya – associate director at VirtusaPolaris. The discussion started off with the panelists expanding on the topics they spoke about.
Eventually, we saw a very active Q & A session, with the speakers constantly getting questions from the audience. But all things must come to an end. With the conclusion of the panel discussion, the meeting came to an end.