The Google I/O keynote opened with an animation asking us to “Make Good Things Together,” and in this article, I’m going to round up some of the things announced in the Keynote and Developer Keynote that are of interest to Smashing readers. The announcements in the keynote were backed up by sessions during the event, which were recorded. To help you use the things announced, I’ll be linking to the videos of those sessions plus any supporting material I’ve been able to find.
I would love to know which of these announcements you would like to find out more about — please do leave a comment below. Also, if you are an author with experience to share on any of these, then why not drop us a line with an outline?
The main announcements were all covered in the keynote presentations. If you want to watch all of the keynotes, you can find them on YouTube along with some condensed versions:
- The main Keynote led by CEO Sundar Pichai (here’s a condensed ten-minute version of the most important points),
- The Developer Keynote led by Jason Titus (here’s a five-minute version of the important points covered in the Developer Keynote).
Google I/O And The Web
I was attending Google I/O as a Web GDE (Google Developer Expert), and I/O typically has a lot of content which is more of interest to Android Developers. That said, there were plenty of announcements and useful sessions for me.
The Web State of the Union session covered announcements and information regarding Lighthouse, PWAs, Polymer 3.0, Web Assembly and AMP. In addition to the video, you can find a write-up of this session on the Chromium Blog.
What’s New in Chrome DevTools covered all of the new features that are available or coming soon to DevTools.
Progressive Web Apps were a big story through the event, and if you have yet to build your first PWA, the PWA Starter Kit presentation can help you get started using Polymer. To look more deeply into Polymer, you could continue with Web Components and the Polymer Project: Polymer 3.0 and beyond. The Polymer site is now updated with the documentation for Polymer 3.0.
Angular wasn’t left out; watch the What’s New in Angular session for all the details.
Headless Chrome is a subject that has interested me lately, as I’m always looking for interesting ways to automate tasks. In the session The Power of Headless Chrome and Browser Automation, you can find out about using Headless Chrome and Puppeteer. If you are wondering what sort of things you could achieve, there are some examples of things you might like to do on GitHub.
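To give a flavor of the kind of automation Puppeteer makes possible, here is a minimal sketch that captures a full-page screenshot of each URL in a list. The `screenshotName` helper and the example URLs are my own illustrative choices, not taken from Google’s examples.

```javascript
// Minimal Headless Chrome automation sketch using Puppeteer
// (requires `npm install puppeteer`; helper and URLs are illustrative).

// Turn a URL into a safe filename,
// e.g. "https://example.com/about" -> "example.com-about.png".
function screenshotName(url) {
  return url.replace(/^https?:\/\//, '').replace(/[^a-zA-Z0-9.]+/g, '-') + '.png';
}

async function captureAll(urls) {
  // Required here so screenshotName can be reused without Puppeteer installed.
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  for (const url of urls) {
    // Wait until the network is mostly idle so lazily-loaded assets are in place.
    await page.goto(url, { waitUntil: 'networkidle2' });
    await page.screenshot({ path: screenshotName(url), fullPage: true });
  }
  await browser.close();
}

// Example usage: captureAll(['https://example.com', 'https://example.com/about']);
```

The same `page` object can also run scripts in the context of the loaded page, which is what makes Puppeteer useful for scraping, testing, and generating PDFs as well as screenshots.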
Also, take a look at:
- “Building A Seamless Web” by Dru Knox
- “Web Performance Made Easy” by Addy Osmani and Ewa Gasperowicz
- “Make Your WordPress Site Progressive” by Alberto Medina
- “The Future Of The Web Is Immersive” by Brandon Jones
- “Build The Future Of The Web With Web Assembly And More” by Thomas Nattestad
Android Developer News
I’m not an Android developer, but I was surrounded by people who are, and I’ve tried to pick out some of the things that seemed most exciting to the crowd. The session “What’s New In Android” is a great place to find out about all of the key announcements. The first of these is that Android P Beta is now available, and many of the features announced will be available as part of that beta. You can check to see if your device is supported by the Beta here.
Android Jetpack is a set of libraries, tools, and architectural guidance to help make it quick and easy to build great Android apps. The tools are integrated with Android Studio, and Jetpack seems to be an attempt to streamline the developer experience for common tasks. You can find out more about Android Jetpack in the session video What’s New In Android Support Library.
Actions in Apps, now in Beta, enable developers to create interactions that cross from voice to displays, whether on your watch, your phone, or the new Smart Displays that will be introduced later this year.
A Slice is an interactive snippet of an app’s UI, introduced in Android P. To find out more, take a look at this I/O session, from which you can learn how to build a Slice and have it appear as a suggestion in search results.
- Watch the session Design Actions for the Google Assistant Beyond Smart Speakers by Sada Zaidi,
- Explore the Conversational Design website,
- Read more about Actions,
- Bookmark the Actions Playlist from Google Developers on YouTube.
Having looked at a few specific announcements for the Web and Android, I’ll now take a look at some of the bigger themes covered at the event and how these might play out for developers.
Artificial Intelligence, Augmented Reality, And Machine Learning
As expected, the main keynote as well as the Developer keynote both had a strong AI, AR, and ML theme. This theme is part of many Google products and announcements. Google is leveraging the huge amount of data that they have collected in order to create some incredible products and services, many of which bring with them new concerns on privacy and consent as the digital and real world merge more closely.
Google Photos is getting new AI features which will help you improve your photographs by offering suggestions, such as brightness fixes or rotations.
A new version of Google News will use AI to present to users a range of coverage on stories they are interested in.
One of the demos that achieved a huge round of applause came when Google Lens was pointed at a section of text in a book, and that text could then be copied and pasted on the phone.
If you are interested in using AI, then you might like to watch the session AIY: Do It Yourself Artificial Intelligence. Also,
- Lead designers at Google on “Design, Machine Learning And Creativity,”
- “Bringing AI And Machine Learning Innovations To Healthcare” by Lily Peng and Jessica Mega,
- “Exploring AR Interaction” by Chris Kelley, Elly Nattinger, and Luca Prasso,
- “AR Apps: Build, Iterate And Launch” by Tim Psiaki and Tom Salter.
When traveling, I know the all-too-common scenario of coming out of a train station with maps open and having no idea which direction I am facing and which street is which. Google is hoping to solve this issue with augmented reality, bringing street view photographs and directions to the screen to help you know which direction to start walking in.
Google Maps is also taking more of a slice of the area we might already use Foursquare or Yelp for, bringing more recommendations based on places we have already visited or reviewed. In addition, there is a feature I can see myself using when trying to plan post-conference dinners: the ability to create a shortlist of places and share it with a group in order to decide where to go. Android Central has an excellent post on all of the new Maps features if you want to know more. These features will be available in the Android and iOS versions of the Google Maps app.
For developers, a roundup of the changes to the Maps API can be found in the session Google Maps Platform: Ready For Scale.
Introducing ML Kit
While many of us will find the features powered by Machine Learning useful as consumers of the apps that use them, if you are keen to use machine learning in your apps, then Google is trying to make that easier for you with ML Kit. ML Kit helps you to bring the power of machine learning to your apps with Google APIs. The five ready-to-go APIs are:
- Text Recognition
- Face Detection
- Barcode Scanning
- Image Labeling
- Landmark Recognition
Two more APIs will be ready in the coming months: A smart reply API allowing you to support contextual messaging replies in your app, and a high-density face contour addition to the face detection API.
You can read more about ML Kit in this Google Developers post Introducing ML Kit and in the session video ML Kit: Machine Learning SDK For Mobile Developers.
The most talked-about demo of the keynote was Google Duplex, in which Google Assistant held a phone conversation with a restaurant and a hairdresser in order to make a reservation and book an appointment. The demo drew gasps from the crowd because the conversation sounded so natural that the person on the other end of the phone did not realize they were not talking to a human.
It didn’t take long for people to move from “*That’s cool!*” to “*That’s scary!*”, and there are obvious ethical concerns about a robot failing to declare that it is not a real person when engaging with someone on the phone.
The recordings that were played during the keynote can be found in Ethan Marcotte’s post about the feature, in which he notes that “Duplex was elegantly, intentionally designed to deceive.” Jeremy Keith wisely points out that the people excited to try this technology are not imagining themselves as the person at the end of the phone.
In addition to Duplex, there were a number of announcements around Google Assistant including the ability to have continued conversation, a back-and-forth conversation that doesn’t require saying “Hey, Google” at the beginning of each phrase.
As a layperson, I can’t help but think that many of the things Google is working on could have hugely positive implications in terms of accessibility. Even the controversial Duplex could enable someone who can’t make a voice call to more easily deal with businesses only contactable by phone. One area where Google technology will soon have an impact is the Android app Google Lookout, which will help visually impaired users understand what is around them by using the phone camera and giving spoken notifications.
There were several sessions bringing a real focus on accessibility at I/O, including the chance for developers to have an accessibility review of their application. For web developers, Rob Dodson’s talk What’s New In Accessibility covers new features of DevTools to help us build more accessible sites, plus the Accessibility Object Model, which gives more control over the accessibility of sites. For Android developers, What’s New In Android Accessibility details the features that will be part of Android P. With the focus on AR and VR, there was also a session on what we need to think about in this emerging area of technology: Accessibility For AR And VR.
Linux Apps Are Coming To Chrome OS
An interesting announcement was the fact that Linux Apps will be installable on Chrome OS, making a ChromeBook a far more interesting choice as a developer. According to VentureBeat, Google is using Debian Stretch, so you’ll be able to run apt and install any software there is a Debian package for. This would include things like Git, VS Code, and Android Studio.
Material Design

The material.io website has been updated for the new version of Material Design; the big announcement being Theming, which will allow developers using Material to create their own themes, making their apps look a little less like a Google property. Gallery will then allow teams to share and collaborate on their designs.
Also announced was the Material Theme Editor which is a plugin for Sketch, making it Mac only. The website does say that it is “currently available for Sketch” so perhaps other versions will appear in due course.
You can find a write-up of how to create a Material theme on the material.io website. The design.google site is also a useful destination for Material and other Google design themes. From the sessions, you can watch:
- “Customize Material Components For Your Product” by Richard Fulcher, Rachel Been, and Josh Estelle
- “Code Beautiful UI With Flutter And Material Design” by Mary Via and Will Larche
- “Build Great Material Design Products Across Platforms” by Jonathan Chung, Nick Butcher, and Will Larche
Digital Wellbeing

Announced at the keynote was the new Google Digital Wellbeing site, along with a suite of features in Android P and on YouTube, aimed at helping people disconnect from their devices and reduce the stress caused by alerts and notifications. You can explore all of the features at wellbeing.google. Most of these will require Android P, currently in Beta; however, the YouTube features will be part of the YouTube app and therefore available to everyone.
As a developer, it is interesting to think about how we can implement similar features in our own applications, whether for web or mobile. Combining notifications into one daily alert, as will be enabled on YouTube, could help prevent users from being overloaded by alerts and let them properly engage at a scheduled time. It has become easier and easier to constantly ask our users to look at us; perhaps we should instead work with our users, being available when they need us and quietly hiding away when they are doing something else.
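As a sketch of how that batching might work in our own apps, here is a small JavaScript class that buffers notifications and releases them as one batch at a scheduled hour. The class and method names are entirely my own illustration, not any real Android or YouTube API.

```javascript
// Illustrative sketch of a "one daily digest" notification policy:
// buffer notifications as they arrive and release them all at once
// when a scheduled digest hour is reached.
class DailyDigest {
  constructor(digestHour) {
    this.digestHour = digestHour; // hour of day (0-23) when the batch is released
    this.pending = [];
  }

  add(notification) {
    this.pending.push(notification);
  }

  // Returns the buffered batch once `now` reaches the digest hour,
  // or null if it is not yet time (or there is nothing to send).
  flushIfDue(now) {
    if (now.getHours() < this.digestHour || this.pending.length === 0) {
      return null;
    }
    const batch = this.pending;
    this.pending = [];
    return batch;
  }
}
```

A real implementation would also need to persist the buffer and remember the last flush date, but the core idea fits in a few lines: collect quietly, deliver once.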
For more information on building a more humane technology ecosystem, explore the Center For Humane Technology website.
Every news site has been posting their own reviews of I/O, so I’ll wrap up with some of the best coverage I’ve seen. As an attendee of the event, I felt it was slickly managed, good fun, yet it was very clear that Google has well-rehearsed and clear messages they want to send to the developer communities who create apps and content. Every key announcement in the main keynotes was followed up by sessions diving into the practical details of how to use that technology in development. There was so much being announced and demonstrated that it is impossible to cover everything in this post — or even to have experienced it all at the event. I know that there are several videos on the I/O playlist that I’ll be watching after returning home.
- TechCrunch has an excellent roundup, with individual articles on many of the big announcements,
- There’s also coverage of the event from CNET,
- The Verge has a story stream of their content reporting on the announcements.
If you were at I/O or following along with the live stream, what announcements were most interesting to you? You can use the comments to share the things I didn’t cover that would be your highlights of the three days.