Google has been holding its annual developer conference for eleven years now, but I/O is not just a developer conference. It’s a place for Google to announce some of its latest technologies and talk about the future.
This year at I/O 2017, Google announced new products powered by machine learning, some big improvements to Assistant and Google Home, and it offered up a glimpse of the future of Android in your car.
Following on from last year, Google I/O had a similar format of talks, Codelabs, different tents, office hours and a sandbox (where you got hands-on with tech). Due to the popularity of the tent talks, this year they created a booking system which allowed attendees to reserve seats in advance, a great addition that helped to solve the queuing challenge they encountered last year.
New this year was the developer keynote, which was focused on what Google are doing to help developers. It took place after the main keynote, and its main goal was to show how Google is making developers’ jobs easier, minimizing the pain points of building a product.
One of the biggest cheers of the day went to the announcement of Kotlin as an official language.
Android Studio 3.0
The new Android Studio, version 3.0, will be released soon, with new profilers (CPU, memory and network). These profilers will make developers’ lives easier, as there will no longer be a need to use other tools to see what is happening in your apps. Build speed has also been improved, and the emulator is even better, with Play services now embedded.
The new Architecture Components were also announced, helping with data storage and lifecycle management, making your app more modular, avoiding memory leaks and reducing the need to write boilerplate code.
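As a rough illustration (the class and data here are our own invention, not from any Google sample), a lifecycle-aware ViewModel exposing LiveData might look like this in Kotlin, using the `android.arch.lifecycle` packages announced at I/O:

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

// A ViewModel survives configuration changes such as screen rotation,
// so data does not need to be re-fetched every time the UI is recreated.
class FlightViewModel : ViewModel() {
    private val flights = MutableLiveData<List<String>>()

    // Expose an immutable view so the UI can only observe, not modify.
    fun getFlights(): LiveData<List<String>> = flights

    fun loadFlights() {
        // In a real app this would come from a repository or network call.
        flights.value = listOf("DUB -> LHR", "LHR -> SFO")
    }
}
```

An Activity would then observe the data with `viewModel.getFlights().observe(this, observer)`; because the observer is tied to the Activity’s lifecycle, updates stop automatically when the Activity is destroyed, which is how these components help avoid memory leaks.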
Last year Google I/O previewed instant apps, enabling users to experience everything you love in an app without having to install it. However, it was only available to some partners who could test it. This year they opened up instant apps to all developers. It’s available in Android Studio 3.0, with more information available here. They also added a new tool (Modularize) to help developers refactor their code to simplify building an instant app.
In the travel industry it’s possible to imagine a lot of ways to use instant apps; one of them is checking in for a flight with the full app experience, without having to install the app.
Google is now distributing Android dependencies through its own Maven repository, removing the need to download them via the Android SDK manager.
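In practice this means adding Google’s Maven repository to your project-level build.gradle, something like:

```groovy
// Project-level build.gradle
allprojects {
    repositories {
        jcenter()
        // Google's Maven repository for Android dependencies
        maven { url 'https://maven.google.com' }
    }
}
```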
Layout Inspector and APK Analyzer Improvements
Google made improvements to the Layout Inspector, as well as adding APK Analyzer tools to help shrink your app’s size.
You can now build and deploy apps for the Assistant on the phone. Apps available in the Assistant can be found easily in the app directory, and the user is able to simply add a shortcut to start the app. Imagine booking a flight using the Assistant with a shortcut, saying “Hey Google, book a flight to Dublin on 10th August”.
Google are also distributing the Assistant SDK so it’s possible to embed the Google Assistant on any device.
It wasn’t just code improvements at Google I/O, there were also lots of changes to the Android developer console thrown into the mix. These included changing the statistics page to give speedy and more flexible access to important data, and creating a new section called Android Vitals, which can provide detailed insight into the technical performance of your app in terms of stability, battery and render times.
AI & IoT
Google I/O 2017 had a keen focus on AI (Artificial Intelligence) and IoT (Internet of Things), with Google looking to democratize AI by providing TensorFlow, a software framework for machine learning.
AMP & Progressive Web Apps
Finally, AMP pages and Progressive Web Apps (PWAs) were discussed, helping developers build high-class experiences that feel immersive, load quickly, work offline and send notifications to the user. The key change here was to make PWAs part of the OS, so that if you save a PWA to the home screen you will also see the app in the launcher, giving a similar experience to using native apps.
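The launcher integration builds on the standard web app manifest; a minimal example (the names and paths here are made up for illustration) might look like:

```json
{
  "name": "Flight Check-in",
  "short_name": "Check-in",
  "start_url": "/checkin",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The `short_name` and icon are what appear in the launcher once the PWA is added to the home screen.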
Codelabs are hands-on tutorials for developers that show off and help you learn new Google technologies.
At Google I/O there was a special area with lots of stations specially prepared for developers to code, explore and experiment with technologies both old and new.
A key benefit of Codelabs at Google I/O was that a Google engineer was always on hand for a chat or to answer any questions you had if you wanted to learn more about the tech.
All stations in the Codelabs area were kitted out with a computer running the newest Android technologies, such as Android Studio 3.0 Beta and the developer preview of Android O (all available for developers to download and use), each connected to a device, typically a Pixel.
Developers were able to explore the new Android architecture based on lifecycle-aware components, introducing LiveData as a key component, and also learn things that they may not get to work with on a day-to-day basis, such as TensorFlow and Android Things.
For the non-attendees, all Codelabs and proofs of concept can be found here.
While there was some activity available for non-attendees, it was important to be on-site for the workshop on Android Things, for two main reasons:
- Firstly, all the necessary hardware was available to try out, such as different types of boards, including add-on boards with buttons and LEDs, like the Rainbow HAT.
- Secondly, we were given the Android Things starter kit containing an i.MX 7Dual board, several modules like a camera and screen, and all the necessary tools to continue learning and working with the platform.
Developers also had the chance to play with an Android Things-powered device that took a photograph of a person and, if the person was smiling, dispensed some candy for them. The image scanning and recognition was done through machine learning, a great collision of technologies.
As per the Android Things Codelab, there were other tasks that developers had to complete, including programming a Google Assistant and Android Things powered box to play Number Genie. Through the Google Assistant it was possible to use Actions on Google to access the game, and then start a new match in which you have to guess the number the machine has thought of for you.
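The guessing loop at the heart of that game can be sketched in plain Kotlin; this is a simplified stand-in for the real Codelab code, with the number range and binary-search guessing strategy chosen by us for illustration:

```kotlin
import java.util.Random

// Simplified Number Genie: the "machine" thinks of a number and gives
// higher/lower hints until the player guesses it.
fun hintFor(secret: Int, guess: Int): String = when {
    guess < secret -> "higher"
    guess > secret -> "lower"
    else -> "correct"
}

fun main(args: Array<String>) {
    val secret = Random().nextInt(100) + 1  // the number "thought of" for you, 1..100
    var low = 1
    var high = 100
    while (true) {
        val guess = (low + high) / 2        // halve the range on every hint
        val hint = hintFor(secret, guess)
        println("Guess $guess -> $hint")
        when (hint) {
            "higher" -> low = guess + 1
            "lower" -> high = guess - 1
            else -> return                  // guessed it
        }
    }
}
```

With this strategy the game always finishes within seven guesses for a 1–100 range.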
There were various tents around Google I/O where developers showed off things they had built using Google technology. In many of the tents, projects were made by non-Google developers who just wanted to show off cool stuff they had made, and most of the projects were available as open source.
- IoT Tent - Internet of Things experiences using Android Auto, Android Things, Android TV, Cast, Assistant, Home and Nest
- Assistant Tent - Showed many different implementations using the Google Assistant
- Android Platforms Tent - Instant Apps, Android Wear, Android Auto, Play Console. Play Award Winners were also here.
- Accessibility For Android Tent - More about Accessibility for Android, Chrome, Voice.
- VR Tent - Showed off Daydream and Tango
- Google Maps & Mobile Web Tent - Google Maps and Mobile Web implementations, including Progressive Web Apps, AMP, Payments, Device Frameworks
- Firebase Tent - Showed off different applications that integrated with Firebase
Due to the fast-paced nature of the event, going between talks filled up most of the day, so it was tricky to get around to everything. The most interesting tents were the IoT and Assistant tents. These were the tents that had the most non-Google-developed applications, so it was great to see people putting the great technology Google makes to good use.
One of the tents showed off the new Cloud TPU system that Google created for improving machine learning.
Another had a Quick Draw application; a multiplayer game where three people had to draw something specific on their tablets and Google’s image recognition system had to correctly guess the image. The winner was then declared after a minute of drawing. It showed off how far their work in image recognition has come since last year.
Another great demo had several items up on poles. A phone used image recognition to detect which item you were looking at, and Google Assistant then created a song based around that item.
With so much going on it was a fantastic week, and we came back with so many learnings that we are looking forward to applying them in real life over the coming months. Until next year, Google I/O, it’s been a blast!
Travelport Digital’s Alejandra Stamato, Andrew O’Hart, Isabel Porto and Shane Murphy all attended Google I/O 2017 and reflected on all the developments and learnings from their week in Mountain View.