From Android to Chrome to everything else, here's what Google announced last night at Google I/O 2017. Sundar Pichai, CEO of Google, took to the stage and explained how the company has managed to scale up its seven major products, including YouTube, Maps, Gmail, Drive, Photos, and of course the kingpin, Android. Alongside major improvements in machine learning and artificial intelligence, there have also been improvements in voice recognition. Google Drive now has 800 million monthly users, Google Photos has over 500 million users, and as of this week, Android has crossed 2 billion active devices.
Over the past year, Google has utilized the power of AI for smart replies inside Inbox. Smart replies use AI to analyze any email you receive and offer a collection of short replies you can send with a single tap, eliminating the need to actually type a reply and saving you time. As of today, smart replies are available to every Gmail user as well.
One of the biggest announcements of the night came right at the start. Google Lens is a collection of tools that identifies what's in an image and helps you take certain actions on it, and it will be found inside much of what Google announced this year. But what kind of actions does Google Lens enable? Some frighteningly powerful ones. Point your camera at the label on your friend's Wi-Fi router, and you can have their password. But that's merely one example of what's possible when Google Lens is integrated with other apps. Google Lens will ship with Google Assistant and Google Photos.
At Google I/O last year, Google announced that it had created its own processors for AI applications, called Tensor Processing Units, or TPUs for short. Google claimed these were significantly faster than conventional processors and GPUs at executing AI-related tasks. And now, the TPUs have gotten an upgrade.
That upgrade is the new Cloud TPU. Individually, these Cloud TPUs are capable of 180 trillion floating point operations per second (180 teraflops). But wire 64 of them together into a pod and you get a supercomputer capable of 11.5 petaflops. You can start playing with these brand new Cloud TPUs on Google Compute Engine right now.
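A quick sanity check on those numbers, assuming the per-device and per-pod figures from Google's Cloud TPU announcement (180 teraflops per device, 64 devices per pod) — a minimal Kotlin sketch, fittingly, given the other big announcement of the night:

```kotlin
// Back-of-the-envelope check of the keynote numbers. The 180-teraflop
// per-device figure and the 64-device pod size are the publicly stated
// specs; this just confirms the arithmetic behind the petaflop claim.
const val TERAFLOPS_PER_TPU = 180.0
const val TPUS_PER_POD = 64

fun podPetaflops(devices: Int = TPUS_PER_POD): Double =
    devices * TERAFLOPS_PER_TPU / 1000.0  // convert teraflops to petaflops

fun main() {
    // 64 devices x 180 teraflops = 11,520 teraflops = 11.52 petaflops
    println("One TPU pod: ${podPetaflops()} petaflops")
}
```

So the quoted 11.5 petaflops is simply one pod's worth of devices multiplied out.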
Starting today, Google's AI efforts fall under the umbrella of the Google.ai initiative, which will focus on AI research, building AI tools, and applied AI. The most important places Google is utilizing machine learning are Google Search and Google Assistant. Advancements in healthcare, such as neural networks that identify breast cancer, and initiatives such as Google Draw, which we covered some time ago, are also examples of Google's powerful new AI push.
Today, Google Assistant is active on over 100 million devices. In case you're lost, Google Assistant is Android's digital assistant, an alternative to the likes of Siri and Cortana. In the coming months, Google aims to make Assistant smarter and more capable. To kick things off, you can now type instructions to Google Assistant instead of saying them out loud. Additionally, Google Assistant will also be available on iPhones.
But that's merely the tip of the iceberg. Thanks to Google Lens, you can now ask Google Assistant to carry out visual translations. If you see a sign in a language you're unfamiliar with, simply point your phone's camera at it and Google Assistant will translate it for you. Furthermore, with Google Lens, Google Assistant can analyze images in real time to give you a variety of options.
Finally, Actions on Google is coming to Android devices. More importantly, it will now support financial transactions. Google seems to have built everything needed, from account creation to invoicing, so developers won't have much work to do. During a demo, we saw food being ordered and paid for inside Google Assistant purely through voice, with the final step being a fingerprint scan to confirm the order.
Google Home now has over 70 smart home partners building services for it. So what new features can we expect? For starters, proactive assistance: Google Home will give you information you might find handy before you ask for it. During the demo, we saw Google Home look up a calendar and share a traffic alert before being asked.
The next big feature coming to Google Home is phone calls. Simply tell it whom to call and Google Home will take care of the rest. The only catch is that this feature is currently restricted to US and Canadian numbers. Furthermore, Google Home will soon support a variety of music streaming services such as SoundCloud and Deezer, and Spotify is adding its free music service as well.
Finally, the Chromecast is getting an update that will allow it to show visual responses from Google Home. This means if you ask your Google Home to show your calendar, it will work with your Chromecast to actually display your calendar on your TV.
This year, Google Photos is getting three new features, all designed to make it easier for you to share your pictures. The first is suggested sharing. Google Photos analyzes your entire collection, identifies the best pictures of your friends, and suggests sharing those pictures with them. It goes a step further, too: it identifies pictures you and your friends took at events you attended together and suggests sharing those with each other as well.
The second new feature coming to Google Photos is shared libraries, which lets you share your photo library with another person. If you take pictures of certain people you select, those pictures automatically get added to the library and shared.
The final feature is photo books, which focuses on helping you share printed photos. All you have to do is select a bunch of pictures and set the layout, and you'll receive a printed book of your pictures. Google aims to utilize machine learning to create photo books of varying designs in the future. Right now, photo books are available from $9.99 if you live in the US.
Besides these sharing features, we also got to see how Google Lens works inside Google Photos. During the demo, we saw it identify a garage from a screenshot; with a single tap on the phone number in the screenshot, it was then possible to call the garage.
YouTube has over a billion users every month, and there are two new features on the way. The first is support for 360-degree videos in the living room. In other words, you can now immerse yourself in 360-degree videos with the YouTube app on your gaming console or smart TV: simply play the video and explore it with your remote.
The second new feature coming to YouTube is Super Chat. These are paid messages fans can use to get noticed by their favorite content creators during a livestream. Simply enter a message and pay an amount, and you'll see your message pinned to the top of the chat. Additionally, there's an API that can be used to trigger real-world events with Super Chats.
Android O is coming. Whether it's going to be called Android Oreo or Android Oatmeal is still up for debate, but we now have a clearer idea of what it will bring to the table when it lands on your phone. Everything in Android O falls under two themes: fluid experiences and vitals.
One of the first things is picture-in-picture, which gives multitasking an upgrade. It allows you to keep a minimized, floating window of supported apps, such as a YouTube video, on screen while you use other apps on your phone.
Another new feature coming in Android O is notification dots. Similar to iOS, you'll now see a small dot on the corner of an app's icon when it has pending notifications, and long-pressing the icon shows you all the notifications from that particular app. Copying text is also being improved with a feature called smart text selection. Tap on text as you normally would to select it, and Android will offer contextual suggestions, such as calling a recognized phone number or opening an address in Maps. It will also highlight entire addresses instantly.
Then there's security. Google Play Protect, which scans 50 billion apps every day for malware, will be available on all devices. You'll see a simple indicator confirming your apps have been checked when looking at your app library in Google Play.
Secondly, Android O boots twice as fast. Furthermore, limits are being placed on background services to save your data and battery. The Play Console dashboards will now tell developers exactly why their app has performance issues and how many users are affected, and Android Studio's new profilers will let you find the exact line of code causing them. On top of all that, Kotlin is now an officially supported language for Android development.
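The Kotlin news matters because the language cuts out a lot of Java ceremony. As a small illustration (the Session class below is invented for the example, not any real Android API), a data class plus null safety gets you this in a few lines:

```kotlin
// Illustrative only: Session is a made-up class, not an Android API.
// A data class gets equals(), hashCode(), toString() and copy() for free,
// and the String? type forces callers to handle a possibly-missing value.
data class Session(val title: String, val speaker: String?)

fun describe(s: Session): String =
    "${s.title} by ${s.speaker ?: "TBA"}"  // ?: supplies a default when null

fun main() {
    val talk = Session("What's New in Android O", speaker = null)
    println(describe(talk))  // no NullPointerException: prints "... by TBA"
}
```

The equivalent Java would need a constructor, getters, equals/hashCode, and an explicit null check, which is a big part of why the announcement drew the loudest cheer of the keynote.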
If you have a compatible Android device such as a Nexus 5X, Nexus 6P, or Google Pixel, you can download the Android O beta and install it on your smartphone. Bear in mind that since it's a beta, it may have its fair share of bugs. You have been warned.
Android Go is a configuration of Android O optimized for entry-level devices. It focuses on three things: making the OS itself run well on low-end hardware, apps with low data usage, and a Play Store highlighting apps made for the next billion users.
Android Go will ship on devices with 1GB of memory or less, running on as little as 512MB. It will also have built-in data management and savings options showing how much data you have remaining, and Chrome's data saver feature will be enabled by default. You can expect the first Android Go devices to ship in 2018. That's not all, though: Gboard, Google's keyboard app, now has translation features built in, and it lets you type words in another script by simply spelling them out phonetically.
Daydream is getting a standalone VR headset, and the most noticeable feature is that it doesn't require a phone or a PC. The headset uses onboard sensors to keep track of your movement, ensuring the real and virtual worlds stay in sync. The design will be offered as a reference that manufacturers can license to build their own devices; right now, Google is working with HTC and Lenovo.
Google's Visual Positioning Service is Tango working in conjunction with Google Maps to get more accurate locations indoors. By looking for key visual points indoors, the phone can work out its exact location, which enables the creation of richer AR experiences. This will be a key feature of Google Lens. Speaking of Tango, the upcoming Asus ZenFone AR will use it, and Samsung has stated that Tango support will come via a software update to its latest flagship Galaxy S8 and S8 Plus smartphones.
Google for Jobs is Google's effort to help people find jobs. To do so, they've started off with a cloud API and built a new feature into Search to help you find the job that's right for you, including jobs you might not find elsewhere. Google Search will give you listings of jobs in your area.
Google uses machine learning to cluster jobs that go by different names, so a single retail opening might be grouped the same way whether it's listed as "sales clerk" or something else. Tap a listing and you'll get more info; one more tap and you can apply. Currently, Google isn't planning to host its own job listings. Rather, it collects openings from the likes of Facebook, LinkedIn, Glassdoor, Monster, and ZipRecruiter. From there, it filters jobs based on criteria such as the length of your commute, and merges openings for similar jobs that might be listed under different names.
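To picture the clustering idea, here's a minimal sketch. The synonym table is hand-written purely for illustration; Google's actual system learns these relationships with machine learning rather than from a hard-coded map:

```kotlin
// Toy version of job-title clustering: map known synonyms to one
// canonical title, then group listings under it. Entirely illustrative.
val canonicalTitle = mapOf(
    "sales clerk" to "retail salesperson",
    "store associate" to "retail salesperson",
    "shop assistant" to "retail salesperson",
)

fun cluster(listings: List<String>): Map<String, List<String>> =
    // Unknown titles simply become their own one-item cluster.
    listings.groupBy { canonicalTitle[it.lowercase()] ?: it.lowercase() }

fun main() {
    val listings = listOf("Sales Clerk", "Store Associate", "Barista")
    // "Sales Clerk" and "Store Associate" land in the same cluster
    println(cluster(listings))
}
```

The real system also has to merge near-duplicate postings and rank by criteria like commute length, but the grouping step above is the core of why one search surfaces jobs listed under many different names.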
Well, there you have it, folks. That's everything Google announced at this year's Google I/O conference in a nutshell. Overall, not exactly jaw-dropping, apart from the advancements in Google Photos and Google Lens. Even stuff like Google Home, VR, and AR is not exactly popular in Sri Lanka, so there's no real point getting hyped up about it. As for Android O, a majority of devices are yet to receive Android 7.0 Nougat a full year after it launched. Overall, as the title says, it was a far cry from the exciting keynotes of perhaps 2014 and 2015. Let's hope more exciting times are ahead.
What kicked off in 2011 as a friendly gaming event has now developed into a fully-fledged gaming tournament. With the goal of promoting team building, leadership, and planning, the Virtusa LAN Challenge 2018 is happening.
Semi-Finals of the internal tournament will take place on the 22nd and 23rd of January 2018 at Virtusa premises.
January 22 (Monday) - 23 (Tuesday)
Virtusa Pvt. Ltd. 752, Dr Danister De Silva Mawatha, Colombo 09
The main purpose of the workshop is to give students the ability to analyze and present data by using Azure Machine Learning, and to provide an introduction to the use of machine learning and big data.
Module 1: Introduction to Machine Learning
This module introduces machine learning and discusses how algorithms and languages are used.
· What is machine learning?
· Introduction to machine learning algorithms
· Introduction to machine learning languages
Module 2: Introduction to Azure Machine Learning
This module describes the purpose of Azure Machine Learning and lists the main features of Azure Machine Learning Studio.
· Azure Machine Learning overview
· Introduction to Azure Machine Learning Studio
· Developing and hosting Azure Machine Learning applications
Module 3: Managing Datasets
At the end of this module, the student will be able to explore various types of data in Azure Machine Learning.
· Categorizing your data
· Importing data into Azure Machine Learning
· Exploring and transforming data in Azure Machine Learning
Module 4: Building Azure Machine Learning Models
This module describes how to use regression algorithms and neural networks with Azure Machine Learning.
· Azure Machine Learning workflows
· Using regression algorithms
· Using neural networks
Module 5: Using Azure Machine Learning Models
This module explores how to provide end users with Azure Machine Learning services, and how to share data generated by Azure Machine Learning models.
· Deploying and publishing models
· Consuming experiments
Module 6: Using Cognitive Services
This module introduces the Cognitive Services APIs for text and image processing to create a recommendation application, and describes the use of neural networks with Azure Machine Learning.
· Cognitive services overview
· Processing language
· Processing images and video
· Recommending products
Feel free to contact us with any inquiries:
Uditha Bandara – 0716092918
All Day (Wednesday)
ANC Education, 310 R A De Mel Mw, Colombo 03 00300
Blue Chip Training – 0716092918
Startup Weekend is a global phenomenon – 54 hours of fast and furious prototype development, through to exploring potential markets and pitching. It's an unparalleled opportunity to build lasting relationships with co-founders, mentors, and investors.
The real value comes from taking an idea from concept through to execution using Lean tactics and working under high pressure with the best startups.
26 (Friday) 5:00 pm - 28 (Sunday) 8:00 pm
Oak Ray Regency Kandy
Oak Ray Regency Kandy, No 9, Devani Rajasinghe Mawatha, 20000 Kandy