Explore our 2018 sessions
With the rise of voice assistants, voice is becoming another surface through which users interact with your product or service. We can now blend this new technology with our existing offerings to improve user experience, engagement, and satisfaction.
In this session, we’ll learn about the Actions on Google platform and how it provides the tools you need to build your own conversational interfaces. We’ll also spend time looking at how to design a conversational interface, including thinking through the various phases of dialog and sketching out expected flows. Finally, we’ll look at how to review and improve your Action by using the analytics and AI training tools available from Google.
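Sketching out expected flows often amounts to writing down, per dialog state, which intents you expect and which phrases should trigger them. Here is a minimal plain-Python sketch of that idea (this is an illustration, not the Actions on Google SDK; all state, intent, and phrase names are hypothetical):

```python
# A toy dialog-flow table: each state lists the intents it expects and the
# trigger phrases that map a user utterance to an intent. These names are
# invented for illustration only.
DIALOG_FLOW = {
    "welcome": {
        "prompt": "Want to hear today's specials, or place an order?",
        "intents": {
            "hear_specials": ["specials", "menu"],
            "place_order": ["order"],
        },
    },
    "place_order": {
        "prompt": "What would you like?",
        "intents": {"confirm_item": ["pizza", "salad"]},
    },
}

def match_intent(state, utterance):
    """Return the first intent whose trigger phrase appears in the
    utterance; 'fallback' means the dialog should reprompt the user."""
    text = utterance.lower()
    for intent, phrases in DIALOG_FLOW[state]["intents"].items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "fallback"
```

Laying the flow out as data like this makes it easy to review each phase of the dialog with designers before any platform-specific code is written.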
Data Studio is Google’s serverless business intelligence and data visualization platform. As an easy, free-to-use reporting solution, Data Studio helps businesses and individuals understand their data, derive key insights, effectively communicate measurements, and make better-informed decisions. In this session, we will use the Chrome UX Report dataset in BigQuery as an example and show how you can build scalable, cost-effective dashboards by creating a custom solution with Apps Script and Firebase Realtime Database.
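As a taste of the kind of query involved, here is a hedged sketch that builds a SQL statement against the public Chrome UX Report dataset in BigQuery (the first-contentful-paint histogram for one origin). The origin and table month are placeholders, and actually running the query requires a Google Cloud project with BigQuery enabled:

```python
def fcp_histogram_query(origin, yyyymm):
    """Build a query over the public chrome-ux-report dataset:
    the density of first-contentful-paint times for one origin.
    `origin` and `yyyymm` are caller-supplied placeholders."""
    return f"""
        SELECT bin.start, SUM(bin.density) AS density
        FROM `chrome-ux-report.all.{yyyymm}`,
             UNNEST(first_contentful_paint.histogram.bin) AS bin
        WHERE origin = '{origin}'
        GROUP BY bin.start
        ORDER BY bin.start
    """

query = fcp_histogram_query("https://developers.google.com", "201806")

# To execute (sketch only; needs google-cloud-bigquery and credentials):
# from google.cloud import bigquery
# rows = bigquery.Client().query(query).result()
```

The session’s approach of caching such query results (for example in Firebase Realtime Database) keeps dashboards fast while avoiding repeated BigQuery charges.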
“A young child defines the world purely on the small amount they can see… This is the root of dataset bias: intelligence based on information that’s too small or homogenous.” Advances in AI technologies like machine learning and deep neural networks have the potential to save time and boost productivity. But what if we train these technologies using datasets that exclude large portions of the population?
For example, some facial recognition software fails to recognize faces with darker skin tones. Why? People of color were excluded from the datasets that were used to train the software. If AI isn’t designed with inclusion up front, its rewards won’t benefit us all equally. This talk examines the risk of unconscious bias in algorithmic datasets, and how developers can overcome it.
What does it take to launch a far-field voice experience such as an Amazon Alexa skill or a Google Action? Navya walks through the steps that take a product manager from engaging with field and market researchers and identifying the initial functional requirements for the skill, to working with UX designers, developers, voice experience testers, beta testers, and marketers through launch and post-launch. She provides specific examples and shares lessons learned from real client deployments of conversational experiences.