Exploring AI & its possibilities at the SLASSCOM AI Asia Summit 2019


“What I love about AI is its ability to replicate human beings,” said Dr. Romesh Ranawana – CTO of Tengri UAV. His quote captures, in a single sentence, the power of artificial intelligence. Over the past decade, artificial intelligence has gone from being a buzzword to a very real trend. For proof, you only need to look at your phone to see several examples of it in action. Exploring how Sri Lanka could take advantage of the opportunities presented by this trend was the focus of the SLASSCOM AI Asia Summit 2019.

Ranil Rajapakse – SLASSCOM Chairman speaking at the AI Asia Summit 2019

Delivering the opening remarks at the event was Ranil Rajapakse – Chairman of SLASSCOM. He emphasized the need for continued investment in artificial intelligence for Sri Lanka to become a hub for knowledge and innovation. Similar sentiments were echoed by Trine Jøranli Eskedal – The Norwegian ambassador to Sri Lanka. “As industry leaders, you should use AI to share your expertise and knowledge,” said the Ambassador. 

Ranil also highlighted the importance of strong policies addressing the myriad of factors around artificial intelligence. These include the reskilling of workers due to automation, security concerns, integration of AI in government & industry, and more. 

Trine Jøranli Eskedal – The Norwegian ambassador to Sri Lanka speaking at the SLASSCOM AI Asia Summit 2019

To that end, the organization is drafting an AI policy for Sri Lanka. Speaking at the conference, Jeevan Gnanam – former Chairman of SLASSCOM shared that it’s an evolving 300-page document built on top of the newly proposed Personal Data Protection Legislation. 

Over the course of the day, the speakers at the SLASSCOM AI Asia Summit 2019 addressed such issues and explored the future of the technology. Their topics ranged from the use of artificial intelligence in drones, to giving computers emotions, to best practices in machine learning. Here’s a glimpse of the lessons we gained at the conference.

Building responsible AI

“I believe we can never really be freed from our biases. The only thing we can do is to be honest and aware of them,” said Dr. Inga Strumke – Manager for AI at PWC Norway. Opening her talk with these remarks, she went on to emphasize that it was for this reason that artificial intelligence developers have an immense responsibility on their shoulders.

Dr. Inga Strumke called for developers to be responsible with artificial intelligence

When artificial intelligence was still in the realm of science fiction, the worst-case scenario was that it would become sentient and wipe out all human life. Today, AI has left the realm of science fiction and entered the real world. The worst-case scenario portrayed by Hollywood remains fiction. Nevertheless, that’s not to say that artificial intelligence can’t cause great harm when used irresponsibly. 

The dangers of bias

Quite often, this harm is the result of an artificial intelligence system unknowingly being biased. To understand how these systems can be biased, it helps to look at the US court system. Nowhere on Earth are people arrested in greater numbers than in the United States. Its courtrooms, now facing an unprecedented number of cases, have turned to artificial intelligence as a means of speeding up the legal process.

Enter Risk Assessment algorithms. Proprietary in nature, these algorithms would analyze a defendant’s data, such as their income, where they’re from, and what they’re being charged with. The goal of this analysis was to estimate how likely it was that the defendant would commit another crime. Based on this, the algorithm would deliver a score that determined the level of punishment the defendant receives. In simple terms, it would decide whether you get bail or go to prison.

Like many artificial intelligence tools today, these Risk Assessment algorithms relied on Machine Learning, which in turn makes decisions based on historical data. This is where the problems began. Historically, low-income communities and minorities were over-policed under the unfair assumption that they were more inclined to commit crimes. Thus, when the algorithms acted on this data, they repeated that bias in a vicious cycle. This is the danger of bias in artificial intelligence.
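This feedback loop can be sketched with a toy example. The numbers and the single “neighborhood” feature below are entirely invented; the point is only that a model scoring risk from recorded arrests learns the policing pattern back, not the underlying crime rate:

```python
from collections import defaultdict

# Hypothetical historical records: (neighborhood, recorded re-arrest) pairs.
# Neighborhood "A" was heavily policed, so it carries far more recorded
# arrests -- a policing artifact, not evidence of more actual crime.
records = ([("A", 1)] * 60 + [("A", 0)] * 40 +
           [("B", 1)] * 20 + [("B", 0)] * 80)

# A naive "risk model": score each neighborhood by its observed
# re-arrest rate in the historical data.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [re-arrests, total]
for hood, rearrested in records:
    counts[hood][0] += rearrested
    counts[hood][1] += 1

risk = {hood: r / n for hood, (r, n) in counts.items()}
print(risk)  # {'A': 0.6, 'B': 0.2} -- the policing bias, echoed back
```

A defendant from neighborhood A now scores three times “riskier” purely because of where the police looked hardest, and acting on those scores produces more arrests in A, feeding the cycle.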

AI is like dynamite. When used irresponsibly it can harm people (Image credits: The Appeal)

Highlighting this, Dr. Inga stated, “AI is dynamite. We have to keep that in mind.” She went on to explain that when Alfred Nobel invented dynamite, it was originally meant for mining. But its explosive power was soon harnessed to harm humans. Artificial intelligence is at a similar crossroads today. If used for good, it can do everything from helping you find a good movie on Netflix to diagnosing cancer early. But when used irresponsibly, it can cause great harm.

The ethical dilemma of fairness

So how can businesses utilize artificial intelligence responsibly? The answer to that question is a broad one. It starts with collaboration between experts in specialized areas and programmers. For artificial intelligence to be used responsibly, there must be clear communication to the programmers building the system. Ultimately, the goal is to build artificial intelligence systems that treat everyone fairly and improve the lives of everyone. 

However, as we’ve seen, bias is a great obstacle to that goal. Yet, Dr. Inga went on to say, “Bias isn’t the worst of it. While it can be hard to detect, it’s well defined. Fairness, on the other hand, is a social construct with different and at times conflicting definitions.” Returning to the case of Risk Assessment algorithms, she posed a question to the audience. Would it be fairer to set different release thresholds for white and black defendants, or to judge each case while ignoring ethnicity?

Bias isn’t the biggest ethical dilemma argues Dr. Inga. Rather, it’s the debate between individual fairness and group fairness (Image credits: Today’s Veterinary Business)

Answering the question, Dr. Inga stated that by adopting different thresholds, we would ultimately be deciding whether a defendant is released based on ethnicity. This is illegal. Hence, by trying to be fair to all groups, we end up violating individual fairness. Looking at this case, Dr. Inga went on to say, “When you’re building an AI that will be used by humans, you will face ethical dilemmas such as this.”
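The dilemma can be made concrete with a toy sketch. All the risk scores and group names below are invented: a single threshold for everyone yields different release rates per group, while equalizing release rates requires group-specific thresholds, i.e. deciding partly by group membership:

```python
# Hypothetical risk scores for two groups; higher = "riskier".
scores = {
    "group_x": [0.2, 0.3, 0.5, 0.7, 0.8],
    "group_y": [0.4, 0.5, 0.6, 0.8, 0.9],
}

def release_rate(group, threshold):
    # Defendants scoring below the threshold are released.
    released = sum(1 for s in scores[group] if s < threshold)
    return released / len(scores[group])

# Option 1: one threshold for everyone (individual fairness) --
# the two groups end up with different release rates.
print(release_rate("group_x", 0.6), release_rate("group_y", 0.6))  # 0.6 0.4

# Option 2: per-group thresholds tuned so release rates match
# (group fairness) -- the decision now depends on group membership.
print(release_rate("group_x", 0.6), release_rate("group_y", 0.7))  # 0.6 0.6
```

Neither option satisfies both notions of fairness at once, which is exactly the trade-off Dr. Inga described.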

Explaining how decisions were made

She then moved on to another challenge: explainability. Naturally, we are curious about how the systems we use work under the hood. As we crave explainability, we have two options before us. The first is to build a model that can be easily explained. The second is to try to explain the model we have.

Elaborating on the latter, Dr. Inga said, “You can use a complex model. But then I’m faced with the task of having to explain it. A timely question then pops up. Who is the person I’m explaining this to?” While a customer may ask how their data is being used, a business leader could ask how the system could increase revenue.

She went on to argue that the only responsible solution was to build artificial intelligence systems with models that can inherently explain and evaluate themselves as they grow. “This explainable modelling approach may require a different mindset for each model. It’s a lot of work,” she admitted. 
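As a minimal illustration of such an inherently interpretable model (the data and the “income” feature are invented for this sketch), a single decision stump can state the exact rule behind every prediction, with no post-hoc explanation needed:

```python
# A decision stump: one feature, one threshold, fully self-explaining.
def train_stump(values, labels):
    # Pick the threshold that best separates the binary labels.
    best_t, best_acc = None, -1.0
    for t in sorted(set(values)):
        preds = [1 if v >= t else 0 for v in values]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Invented data: a made-up "income" feature and binary labels.
income = [10, 20, 30, 40, 50, 60]
label = [0, 0, 0, 1, 1, 1]

threshold = train_stump(income, label)
# The model *is* its own explanation:
print(f"Predict 1 when income >= {threshold}")  # Predict 1 when income >= 40
```

A deep network fitted to the same data might predict just as well, but it could not state its decision rule in one line; that gap is what the “explainable modelling” mindset trades extra work for.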

Ultimately, explainability translates into trust. Businesses can spend years building trust with customers. Yet, it can be lost in an instant. “Trust has never been more challenging than it is today. Responsible AI is turning into a business imperative globally. In today’s demanding markets, simply offering artificial intelligence isn’t enough. The solutions have to be responsible and companies must understand their impact and see the bigger picture,” reminded Dr. Inga towards the end of her talk. 

How Disney uses artificial intelligence to read minds 

In the past, there was only one way for Hollywood studios to tangibly understand what audiences thought of their movies – crowdsourcing websites like IMDb and Rotten Tomatoes. Even today, these websites hold great sway in the entertainment industry. But they can’t paint a crystal-clear picture.

Explaining this, Dr. Rajitha Navarathna – Machine Learning and AI Consultant at 99X Technology, shared his experience while working at Walt Disney. “There were two people who looked like they had a very positive experience. But one gave the movie a 6/10 while the other a 9/10,” he said. 

Traditionally, movie studios turned to websites like Rotten Tomatoes to understand what audiences thought of their movies (Image credits: Hollywood Reporter)

As we can see, how people perceive positive experiences is very subjective and possibly biased. Dr. Rajitha admitted that since he worked at Disney, he tends to give its movies a higher rating. So he took on the challenge of using artificial intelligence to read the minds of the audience as they watch a movie. 

So he and his team created a system that utilized cameras with infrared lights; the larger the audience, the more cameras were utilized. This has given previously unknown insights. One example Dr. Rajitha shared was of an audience that was originally quite reserved but suddenly became lively after a certain scene.

Dr. Rajitha Navarathna shared at the SLASSCOM AI Asia Summit 2019 how AI can be utilized to read the minds of audiences

The system monitored test audiences while they watched movies. Alongside it, traditional rating systems were used as well. However, gaining such insights isn’t without its fair share of challenges. For starters, movie theatres are dark. This is why infrared lights were utilized.

Furthermore, the cameras were mounted over 20 feet above the ground in a low-lit environment, while most facial recognition models were designed to operate in normal lighting conditions. This made it challenging to capture the entire audience, particularly those seated at the back. The system also generated vast amounts of data.

In spite of the odds, the system was powerful. Describing its effectiveness, Dr. Rajitha said, “I can tell how much a group enjoyed a movie. I can even identify the exact scenes you enjoyed.” Elaborating on this, he shared that during comedies, audiences were focused on the movie 90% of the time, whereas with horror movies, audiences tended to play with their phones or do other activities.

How AI is reshaping business & academia

Across the world, businesses are increasingly investing in artificial intelligence. Speaking at the SLASSCOM AI Asia Summit 2019, Madu Ratnayake – Executive Vice President and CIO at Virtusa said, “We’re seeing increasing adoption of AI. Gartner predicts it will create $2.9 trillion in value by 2021. It’s completely disrupting businesses, giving rise to new business models.” 

Madu Ratnayake – Executive Vice President and CIO at Virtusa speaking at the SLASSCOM AI Asia Summit 2019

With rising adoption, artificial intelligence has begun reshaping business operations, including how businesses recruit, measure performance, and predict revenue. It’s also completely transforming industries such as agriculture, banking, and healthcare. Much of this can be attributed to the rise of cloud computing, which has made artificial intelligence more accessible.

Madu also shared that Virtusa has strong partnerships with academia to develop its AI capabilities. In particular, he named Stanford and MIT as universities the company has partnered with. Through these partnerships, the company develops its capabilities while introducing fresh challenges across multiple domains for academics.

Dr. Romesh Ranawana – CTO of Tengri UAV stated at the SLASSCOM AI Asia Summit 2019 that Sri Lankan AI research is 5 years behind

Yet, during a panel discussion, Dr. Romesh stated, “Much of our artificial intelligence research in Sri Lanka is 5 years behind what’s being done in countries like the UK.” Responding to this, Dr. Chrisantha Fernando – Senior Research Scientist at Google DeepMind shared that the research is more accessible as well.

“Take Medium, it’s an incredible repository of people giving intuitive explanations of complicated algorithms. I also refer to GitHub. If you take the simplest implementation of an algorithm, you can get a lot of intuition for it,” he said. After gaining an intuitive understanding, researchers have three options. They can try to improve existing approaches, explore unsolved benchmarks, or identify new problems.

Dr. Chrisantha Fernando – Senior Research Scientist at Google DeepMind shared that research is readily accessible online but regulations need to catch up

However, he added that regulations have to catch up with the advancements. Taking the example of Cambridge Analytica, he shared how artificial intelligence was used to track your clicks, which in turn were used to target specific electoral advertisements at you. This, he argued, is unethical but is becoming standard practice. “In a sense, advertising is deadlier compared to smoking. There should be a large health warning on each advert,” he stated.

AI + Data = Fuel to Innovate 

We live in a world that generates vast quantities of data. At the SLASSCOM AI Asia Summit 2019, Samith Gunasekara – Head of AI and Machine Learning at Boeing AnalytX, helped us visualize just how much data we generate. He shared, “A Boeing 787 Dreamliner generates 40 terabytes of data per flight.” 

A Boeing 787 Dreamliner generates 40 TB of data per flight. We live in a world generating vast quantities of data (Image credits: WIRED)

This data is evidence that we’re living in the 4th industrial revolution. “I wholeheartedly believe artificial intelligence is the next electricity. It’s integrating everything around us,” said Samith. With the market for artificial intelligence valued at more than the entire GDP of Sri Lanka, there are great opportunities for those hungry few.

An example he shared was of digital dashboards. Enterprises are often overloaded with information from dashboards, but rarely does that information form a clear, detailed narrative. Even when it does, the humans interpreting it are biased. This, he highlighted, is just one area where artificial intelligence can be immensely powerful. But to truly take advantage of these opportunities, he encouraged the concept of Macro Innovation.

Samith Gunasekara – Head of AI and Machine Learning at Boeing AnalytX speaking at the SLASSCOM AI Asia Summit 2019

In his own words, he described the concept as “leapfrog innovation.” To explain it in detail, he shared the story of a group of 15-year-old students who created a prosthetic limb. They took an Arduino board, a motor, a camera, and a prosthetic limb, and combined these components with existing machine learning models. The result was a prosthetic that intelligently knew how much pressure to apply when picking up different objects. Costing less than $200, they did everything with existing technologies. This is Macro Innovation.

What’s next for artificial intelligence in Sri Lanka? 

The SLASSCOM AI Asia Summit 2019 covered several aspects of Artificial Intelligence. The above merely scratches the surface. Many other speakers introduced us to how it’s being used regionally, the potential of conversational artificial intelligence, the challenges it faces in the real world, and so much more. 

The SLASSCOM AI Asia Summit 2019 covered a variety of topics featuring numerous speakers

Ultimately, the technology is opening a market that’s set to be worth trillions. Sri Lanka has the potential to grab a slice of this pie. The size of the slice, however, depends on the actions we take today. We have to draft policies, make data more widely available for research, invest in developing skills, and more. The road ahead isn’t without challenges. But now would be a good time to take that first step forward.



