Google launched two new products today: the Pixel 6 and Pixel 6 Pro. Artificial intelligence and machine learning, used to make the smartphone more personal, took center stage at the launch. Google is setting the bar high for the use of ML in apps and mobile features.
October 20, 2021
Google announced today its pathway to creating a more personal mobile experience: the Pixel 6 and Pixel 6 Pro. There are many reasons why Google might want to build a more personalized phone: one, there is clear demand for this kind of personalization, and two, Google sees an opportunity to differentiate the Pixel from other smartphones.
If personalization is the goal, it is no surprise that machine learning (ML) is at the core of today’s offerings. Consumers expect more and more personalized products, and software that is able to create personal, unique experiences for users stands out in the market.
This launch also demonstrates that the standards are rising for apps and mobile features. Other industry players will have to use ML to remain competitive.
Today’s announcement highlighted ML in some fascinating ways:
Tensor Chip
The Pixels are powered by a custom-made chip designed to enable AI capabilities on the device itself. The launch underscores why Google believes this kind of hardware is needed to offer “Google’s latest advances in AI directly on a mobile device.” Whether the future of ML happens mainly on device or in the cloud, it is clear that Google sees ML as the core of this newest iteration of the Google Pixel.
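Google has not published the details of how Tensor runs its models, but as a rough illustration of what “running ML directly on a mobile device” looks like in practice, here is a minimal sketch using TensorFlow Lite, the usual toolkit for on-device inference on Android. The model file name and float32 input are assumptions made purely for illustration; this is not Google’s pipeline.

```python
# Minimal sketch of on-device inference with TensorFlow Lite.
# The model file and input type are hypothetical placeholders; this is not
# Google's Tensor pipeline, just a generic example of running a model locally.
import numpy as np
import tensorflow as tf

# Load a (hypothetical) model bundled with the app.
interpreter = tf.lite.Interpreter(model_path="mobile_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake input standing in for a camera frame, shaped to the model's expectations.
frame = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # all computation happens on the device, no network call

prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```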
Camera Features
The first category of features announced relates to the camera and images. These features include the Magic Eraser, Motion Mode, Face Unblur and Real Tone.
Magic Eraser lets users identify and remove unwanted objects or people in the background of a photo, right on the phone (say goodbye to photobombing). Motion Mode keeps a still subject sharp in the foreground while adding an “aesthetic blur” to the background: ML separates the subject from the moving elements of the scene, and the phone takes multiple quick pictures and fuses them together (a rough sketch of this idea appears below). Face Unblur uses data from two cameras and four ML models to combine images into the best possible photo.

The last feature is Real Tone, which aims to represent a wide range of skin tones in photos more accurately. As part of the Real Tone feature, several skilled photographers took millions of photos of historically underrepresented populations, creating, as Google reports, “an image data set that is 25% more diverse”. Underrepresentation in data can be a major factor in creating biased models (if you are interested in reading more about bias in data, I recommend this previous post), and creating a more diverse data set is an important part of removing bias from algorithms and ML models.
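Returning to Motion Mode for a moment: Google has not described its implementation beyond the summary above, but the basic “sharp subject, blurred background” idea of separating a subject from the moving background and fusing a burst of quick exposures can be sketched in a few lines of NumPy. The segmentation function here is a stand-in for whatever ML model actually produces the subject mask; everything in this example is a simplified assumption.

```python
# Rough sketch of the "sharp subject, blurred background" idea behind Motion Mode.
# segment_subject() is a placeholder for an ML segmentation model; the real
# pipeline on the Pixel is not public and is certainly far more sophisticated.
import numpy as np

def segment_subject(frame: np.ndarray) -> np.ndarray:
    """Placeholder: return a 0/1 mask marking the still subject in the frame."""
    mask = np.zeros(frame.shape[:2], dtype=np.float32)
    mask[100:300, 200:400] = 1.0  # pretend the subject occupies this region
    return mask

def motion_mode(frames: list[np.ndarray]) -> np.ndarray:
    """Fuse a burst of frames: keep the subject from one frame, blur the rest."""
    reference = frames[len(frames) // 2]          # one frame supplies the sharp subject
    mask = segment_subject(reference)[..., None]  # broadcast the mask over color channels
    background = np.mean(frames, axis=0)          # averaging a burst approximates motion blur
    return mask * reference + (1.0 - mask) * background

# Usage with synthetic frames standing in for a quick burst of photos.
burst = [np.random.rand(480, 640, 3) for _ in range(5)]
fused = motion_mode(burst)
print(fused.shape)
```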
Text and Audio Features
The second major category of features announced in the launch relates to text and audio recognition. Google uses natural language processing to power call features, enhanced voice typing, and live translation of text and speech.
The call features can, one, show you the expected wait time for a toll-free number you are planning to call; two, translate automated menu options into text you can interact with on your phone; and three, wait on hold for you until a live representative picks up. These are tasks that were once thought to require a human, but with the advent of ML, programmers no longer have to write code with millions of different rules to react to different situations. ML allows computers to respond to a much larger pool of circumstances.
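To make that contrast concrete: instead of hand-writing a rule for every phrasing a phone-menu transcript might contain, one can train a small text classifier on labeled examples. The tiny scikit-learn pipeline below is an illustrative toy under made-up data, not how Google’s call features are actually built.

```python
# Toy contrast between hand-written rules and a learned classifier for routing
# phone-menu phrases. Purely illustrative; not Google's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The rules-based approach needs an explicit pattern for every wording.
def route_by_rules(utterance: str) -> str:
    if "billing" in utterance or "invoice" in utterance:
        return "billing"
    if "cancel" in utterance:
        return "cancellations"
    return "unknown"  # every unanticipated phrasing falls through

# The ML approach generalizes from labeled examples instead.
examples = [
    "I have a question about my bill",
    "there is a charge I don't recognize",
    "I want to cancel my subscription",
    "please close my account",
]
labels = ["billing", "billing", "cancellations", "cancellations"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(examples, labels)

phrase = "there's a problem with my bill"
print(route_by_rules(phrase))       # "unknown": no hand-written rule matches this wording
print(model.predict([phrase])[0])   # generalizes from similar labeled examples (expected: "billing")
```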
These are only a few of the many ML-powered features announced in Google’s Pixel launch today. The launch, heavily focused on AI and ML, highlights the growing demand for more personalized consumer products and the pressure on companies to use personalization to differentiate themselves.