
The UCSC Research Symposium: the five coolest projects we saw

It’s not often that one gets access to a whole host of research projects from a local university. Then again, the University of Colombo doesn’t hold its Research Symposiums every day. At the 2013 / 2014 Symposium (which we attended and tweeted about), we were exposed to so many star-class research projects that it was actually difficult to explain all of them in the detail they deserved without republishing the manual.

We’re not going to.

Highly complicated disaster management projects rarely interest people until a tsunami arrives: something closer to everyday life draws more attention. Therefore, we’re not going to judge the most technologically advanced projects – all 35 projects listed are excellent in their own fields (to the point that even the judges had a hard time selecting between them).

Instead, we’re going to talk about the five coolest ideas that we noticed. Most of these we can see being turned into viable creations, software or otherwise, that can be named, branded and sold as products in their own right.

Let’s begin.
A Topic Model Approach for Mood Based Song Classification

Mood-based song classification? The first thing that springs to mind is an episode of Jimmy Neutron, where one entrant is a device that looks a lot like a blender and plays songs based on someone’s emotional state.

Hack cartoon science aside, this is not that far off. This project, by C.T. Fernando and Dr. A. R. Weerasinghe, operated on the one assumption every music fan knows to be true: songs are written based on emotions, and these emotions are hidden in the lyrics.

The duo’s approach utilizes an algorithm that scours through the lyrics for songs, searching for emotion and classifying music as such. Imagine a vast network relentlessly churning through thousands of songs searching and sorting.

While there have been previous attempts at this, notably the affective lexicon and fuzzy clustering model introduced by Y. Hu, the UCSC duo’s model utilizes Latent Dirichlet Allocation (LDA) to classify the songs using topic models and claims better accuracy in mood classification. We can see this powering massive cloud music services and even portable devices like iPods – provided they’re powerful enough.
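To make the lyrics-carry-emotion idea concrete, here is a deliberately tiny sketch that scores lyrics against a hand-written affective lexicon – closer in spirit to Hu’s lexicon baseline than to the paper’s actual LDA model, which learns topics statistically. The words and mood labels are invented for illustration:

```python
from collections import Counter

# Tiny hand-written affective lexicon -- an illustrative stand-in for the
# statistically learned topics of the paper's LDA model.  All words and
# mood labels here are invented examples.
MOOD_LEXICON = {
    "happy": {"love", "sunshine", "dance", "smile"},
    "sad": {"tears", "goodbye", "alone", "rain"},
}

def classify_mood(lyrics: str) -> str:
    """Return the mood whose lexicon words appear most often in the lyrics."""
    words = lyrics.lower().split()
    counts = Counter()
    for mood, vocab in MOOD_LEXICON.items():
        counts[mood] = sum(1 for w in words if w in vocab)
    return counts.most_common(1)[0][0]

print(classify_mood("tears fall like rain when you say goodbye"))  # sad
```

An LDA-based system replaces the fixed word lists with topics inferred from a large lyrics corpus, which is where the claimed accuracy gain comes from.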


A Statistical Model of Source Code Using Method Usage Patterns to Analyse the Source Code Quality

This long-winded sounding title almost entirely encapsulates what this project is about: using patterns to test the quality of source code.

Code quality control is a problem that every software company faces. The base argument is simple enough – there are certain patterns to good code, patterns that make it easier for multiple developers to understand, work with and maintain code.

This system, by Pankajan Chanthirasegaran, Dr. A.R. Weerasinghe and S.R.K. Branavan, goes a step beyond traditional metrics (like Cyclomatic Complexity and Lines of Code) by applying Natural Language Processing algorithms to programming languages. The idea here is that key patterns can be identified – for example, when the sequence in which methods are called is inconsistent with the training data, this model puts up the red flag. It’s not perfect, but done right, software companies would pay big bucks for this. 
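As a rough illustration of the red-flag idea, here is a minimal sketch (our own, not the authors’ model) that treats method-call sequences like word sequences: it counts call bigrams from “good” training code and flags pairs it has never seen. The training sequences are hypothetical:

```python
from collections import defaultdict

def train_bigrams(call_sequences):
    """Count how often each method call follows another in 'good' code."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in call_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def flag_unusual(seq, counts):
    """Return call pairs never seen in training -- candidates for review."""
    return [(a, b) for a, b in zip(seq, seq[1:]) if counts[a][b] == 0]

# Hypothetical training data: method-call sequences from well-reviewed code.
training = [
    ["open", "read", "close"],
    ["open", "write", "close"],
]
model = train_bigrams(training)
print(flag_unusual(["open", "read", "write"], model))  # [('read', 'write')]
```

The real model is statistical rather than binary – it assigns probabilities to usage patterns instead of just checking whether a pair was ever observed – but the intuition is the same.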


PageRank based Core-Attachment Model to Detect Protein Complexes by Analyzing Protein Networks

Protein network analysis is cool. For one, the industry has yet to come up with algorithms that are robust when taking on protein interaction networks, so there’s plenty of space for innovation here.

It’s even cooler when Shazan Jabbar, Mrs. Rupika Wijesinghe and Dr. A.R. Weerasinghe decide to use PageRank to do the analysis. Yes, PageRank, of Google and SEO fame. Weighted Multi-CORE Ranker, or WMCoreRanker for short, is an algorithm which identifies core proteins using PageRank and reconstructs protein complexes. It’s actually the first time PageRank has been used in an application of this type, and we have a feeling Google is going to be very, very pleased.
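For readers who know PageRank only as a search-engine term: the core-finding intuition can be sketched with a plain power-iteration PageRank over a toy interaction network. The protein names and network below are invented, and the actual WMCoreRanker adds weighting and complex reconstruction on top of this:

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over a graph given as {node: [neighbours]}."""
    n = len(graph)
    ranks = {node: 1.0 / n for node in graph}
    for _ in range(iters):
        new = {}
        for node in graph:
            # Each neighbour distributes its rank evenly among its links.
            incoming = sum(ranks[nb] / len(graph[nb])
                           for nb in graph if node in graph[nb])
            new[node] = (1 - damping) / n + damping * incoming
        ranks = new
    return ranks

# Toy protein interaction network (hypothetical names): "A" is the hub.
net = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B"],
    "D": ["A"],
}
ranks = pagerank(net)
print(max(ranks, key=ranks.get))  # A  -- the best-connected "core" protein
```

The highly ranked proteins are the candidate cores; attaching their neighbours then rebuilds the complexes.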


Automation of constructing an eProfile from Web Contents

Here’s a really cool and slightly stalker-ish project by Hashan Silva backed by Dr. Ajantha Athukorale: a system that will scour the Internet for whoever you point it at, run through relevant web pages and assemble a profile of said person for your perusal.

This info-hunting tech could probably be retooled to instacreate CVs for the masses. Needless to say, there’s a whole lot of tech going on behind this (including a modular five-part system, which we assume can be upgraded piecemeal). Suffice it to say that it’s geared to hunt down your “Early Life and Education”, “Career Life” and “Achievements” like an automated boss.
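We don’t know the internals of the five modules, but the final profile-assembly step can be imagined as sorting extracted sentences into those headings. This sketch is entirely hypothetical – the keywords, the splitting rule and the bio text are ours:

```python
import re

# Hypothetical keyword lists mirroring the headings the system targets.
SECTIONS = {
    "Early Life and Education": ["born", "school", "university", "degree"],
    "Career Life": ["joined", "worked", "founded", "appointed"],
    "Achievements": ["award", "won", "honoured", "prize"],
}

def build_profile(text):
    """Assign each sentence to the first profile section whose keyword it contains."""
    profile = {section: [] for section in SECTIONS}
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        for section, keywords in SECTIONS.items():
            if any(k in lowered for k in keywords):
                profile[section].append(sentence.strip())
                break
    return profile

bio = ("She was born in Kandy. She joined Acme Corp in 2010. "
       "She won the Best Engineer award.")
profile = build_profile(bio)
```

The real system would feed this stage from web crawling and entity disambiguation, which is where most of the hard work lives.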


Swarnaloka: Adaptive Music Score Trainer for the Visually Impaired in Sri Lanka

Swarnaloka (by Kavindu Ranasinghe, Dawpadee Kiriella and Shyama Kumari, supervised by Dr. Lakshman Jayaratne) is a complex system. It seeks to turn music notations into understandable notation for the visually impaired – not just simple notes, but multi-layered Eastern music. It’s not music Braille, nor is it a system for reading music alone: it’s what the team calls a “controllable auditory interface” for visually impaired people to study and experiment with music.

According to the schematics, it’s a two-part architecture – one part renders printed music notation into an intermediate representation, and another reads that output and uses an audio sequencer, sound bank and mixer to provide an auditory output. Thus far, in field testing, it’s proven superior to the usual method of manually translating into Braille music notation – it’s apparently both faster and more understandable.
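The two-stage pipeline can be sketched in miniature: parse the notation, then sequence it into timed events a sound engine could play. This is a drastic simplification – the note names, frequencies and timing below are our own illustrative stand-ins, and the real system handles multi-layered Eastern notation that this does not attempt:

```python
# Note-to-frequency map (equal temperament, A4 = 440 Hz); a tiny subset
# used purely for illustration.
NOTE_FREQS = {"C4": 261.63, "E4": 329.63, "G4": 392.00}

def parse_score(score: str):
    """Stage 1: render printed notation (here, a plain string) into note tokens."""
    return score.split()

def sequence(notes, beat=0.5):
    """Stage 2: turn note tokens into (start_time, frequency, duration)
    events for an audio sequencer to play through a sound bank and mixer."""
    return [(i * beat, NOTE_FREQS[n], beat) for i, n in enumerate(notes)]

events = sequence(parse_score("C4 E4 G4"))
```

The “controllable” part of the interface would sit on top of such an event stream, letting the learner slow down, repeat or isolate layers of the music.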


Written by Team ReadMe
