Google's I/O event: From Mobile-first to AI-first
The Google I/O event, which was live-streamed in more than 80 countries and drew an on-site audience of around 7,000, witnessed major developments in Android and AI. Sundar Pichai's keynote address marked a shift from the company's earlier mobile-first stance to an AI-first approach, owing to advances in AI technology and its role in making human-machine communication more efficient. Here's a look at the highlights.
Google Lens: Taking visual identification to the next level
The Google Lens app will enable users to identify objects through image-recognition technology. Pichai gave the example of a flower whose genus one might not know: point the Google Lens camera at it, and the app will provide the information. Google Lens will also pull up details about a place in Google Maps once the camera is pointed at it.
AI: From data centers to Google Compute engine
Pichai also launched Google.ai, a site where all the problems, discoveries, developments, efforts and findings related to AI technology will be shared. He announced a new TPU (Tensor Processing Unit), the Cloud TPU, which supports both training and inference of machine-learning models. These Cloud TPUs will also be introduced to Google Compute Engine, where they will aid advanced scientific research.
Google Assistant: Combining text, voice and image
Google Assistant will now also be made available on iPhones. While previously it recognized only voice inputs, it will now accept image and text inputs as well, and its translation feature is now capable of translating images too. Google Assistant will also be able to make transactions using saved financial details, with fingerprint recognition for security.
Google Home: Making everyday life easier
Google Home will be made available in Canada, Australia, France, Germany, and Japan. New features include hands-free calling for users in Canada and the US, and proactive assistance, which predicts actions based on daily behavioral patterns. Visual responses, accessible through a TV, will be added to Google Home, while free Spotify music and support for Bluetooth, SoundCloud and Deezer will enhance the assistance experience.
Google Photos: Sharing is caring
Google Photos, which has 500 million monthly active users, will get a 'suggested sharing' update, prompting the user to share a picture with the people featured in it. A 'shared libraries' update will let people share their photo libraries with close ones. These libraries will support selective sharing, i.e. showing pictures of people known to the recipient while hiding others.
YouTube gets more interesting and interactive
The YouTube app will now support 360-degree videos, and YouTube Go will support 191 languages in addition to offline saving and sharing. A 'Super Chat' feature will let users pay to get their comments noticed by the video's creator; this aims to enable better interaction and to serve as a revenue-generating avenue for YouTube artists.
Android O: Fluid experience and vitals
Android O will focus on a phone's 'vitals', like security, battery life, and stability, to ensure smooth functioning, and will make the phone's start-up twice as fast. Google Play Protect will guard devices against malware. Limits on background apps for better battery usage, notification dots, and a revamped copy-and-paste are among the new features.
AR and VR: The Future?
Daydream support will come to the Galaxy S8 and S8+ and LG's next phone, and Google is planning a wireless 'standalone VR headset'. Google also introduced VPS (Visual Positioning Service), based on Tango, which helps a device accurately find its location inside a physical establishment and then guide people through it; the next phone with Tango technology will be the ASUS ZenFone AR.