Yesterday I attended the Google I/O Extended event at Google NYC. I used to work there before graduate school, so it was great to be back. I got to see some interesting tech talks, meet up with old friends from Google, and watch a live stream of the keynote.
When I got there, they had coffee and breakfast for everyone, so I happily helped myself to some while checking out the swag they gave us and snapping some pictures:
Morning Tech Talks
In the morning, a few NYC Googlers gave tech talks about some of the projects they were working on. Each talk was very brief, about 10 minutes, so they did not really have time to get in depth on anything, but they were interesting to hear about.
The first talk was on neural nets and deep learning. This was probably the talk I was most excited about. Google has been working on a project called “Google Assistant” (Google’s answer to Siri), which they were very excited about and talked about a lot during the day. Google Assistant is meant to be a conversational successor to Google Now. Users can talk to their “assistant”, and it should understand context. If, for example, you ask the assistant if Captain America is any good, it should tell you that it had good reviews. Then if you tell the assistant, “I want to go see that tonight at 7,” it should recognize that you are still talking about Captain America (and give you showtimes for that movie in your area, and let you book tickets with one click).
Computationally, this seems like a very difficult problem, and this talk highlighted some of that complexity. It was very interesting to see how neural nets and deep learning techniques are applied to speech to go from sound waves, to phonemes, to basic words, to sentences and conversational context.
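As a rough illustration only (not Google's actual system, which relies on trained acoustic and language models), the staged pipeline described above can be sketched as a chain of functions, each consuming the previous stage's output. The tables and frame names here are entirely hypothetical:

```python
# Toy sketch of a staged speech pipeline: acoustic frames -> phonemes -> words.
# Real systems use neural acoustic/language models, not lookup tables.

# Hypothetical mapping from acoustic frames to phonemes.
PHONEME_TABLE = {"frame_k": "k", "frame_ae": "ae", "frame_t": "t"}

# Hypothetical lexicon mapping phoneme sequences to words.
LEXICON = {("k", "ae", "t"): "cat"}

def frames_to_phonemes(frames):
    """Stage 1: classify each acoustic frame as a phoneme."""
    return [PHONEME_TABLE[f] for f in frames]

def phonemes_to_words(phonemes):
    """Stage 2: group the phoneme sequence into words via the lexicon."""
    return [LEXICON[tuple(phonemes)]]

def transcribe(frames):
    """Full pipeline: frames -> phonemes -> words."""
    return phonemes_to_words(frames_to_phonemes(frames))

print(transcribe(["frame_k", "frame_ae", "frame_t"]))  # ['cat']
```

The point of the staging is that each layer only has to solve a local problem; the hard part in practice is that every mapping above is learned and probabilistic rather than a fixed table.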
The last talk was on a project called Accelerated Mobile Pages (AMP). The project's goal, in the words of the speaker (Flavio Palandri Antonelli), is to "Make the web great again." Flavio was hilarious, and included memes like this when explaining why the web has sometimes failed in its transition to mobile:
AMP is essentially a set of guidelines and tools for developing mobile pages, along with a cache. Apparently they are able to significantly speed up page loads and limit common annoyances in mobile browsing, like content shifting (when you're reading something on your phone and an ad loads, pushing the content down so you have to scroll).
After that we went for lunch at one of Google’s cafes in the building, where I met up with an old friend. After lunch we came back to watch the keynote.
The keynote has been written about a lot (just search for “Google IO 2016 Keynote”) so there’s not much I can add about that. The one thing from the keynote that I’m most excited by is Google’s work in machine learning and AI.
Toward the end of the keynote, Sundar Pichai said that Google is working on using machine learning to help computers recognize signs of diabetic retinopathy from retinal scans. This is a disease that requires experts to diagnose, so this technology may help people who live in remote areas that do not have access to those experts.
After the keynote, there were live demos in the NY office. I ended up meeting up with a former colleague who gave me a quick tour of some parts of the building that I had not seen when I worked there a few years ago.
Overall, it was a great experience and I loved being back at Google NYC for a day!