How Google uses AI/ML to make our lives easier

Alphabet Inc. CEO Sundar Pichai said in an interview at the World Economic Forum in Davos, Switzerland:

“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity.”

Let me tell you about some of the mind-boggling things Google has developed using AI/ML:

🎵Song stuck in your head? Just hum to search

So next time you can’t remember the name of some catchy song you heard on the radio or that classic jam your parents love, just start humming. You’ll have your answer in record time.

How do machines learn melodies?

When you hum a melody into Search, machine learning models transform the audio into a number-based sequence representing the song’s melody. The models are trained to identify songs from a variety of sources, including people singing, whistling, or humming, as well as studio recordings. The algorithms also strip away all the other details, like accompanying instruments and the voice’s timbre and tone. What remains is the song’s number-based sequence: its fingerprint.

It compares these sequences to thousands of songs from around the world and identifies potential matches in real time. For example, if you hear Tones and I’s “Dance Monkey,” you can recognize the song whether it was sung, whistled, or hummed. Similarly, the machine learning models recognize the melody of the studio-recorded version of the song, which Google can match against a person’s hummed audio.
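To make the fingerprinting idea concrete, here is a toy sketch in Python. It reduces a melody to its sequence of pitch intervals (the semitone steps between successive notes), which discards key, timbre, and instrumentation, then matches a hum against a tiny catalog. The note sequences, the mini-catalog, and the distance measure are all illustrative assumptions; Google’s actual models and song database are not public.

```python
# Toy sketch of melody fingerprint matching (illustrative only).
# A melody is reduced to its pitch intervals, which discards the key,
# timbre, and instruments -- roughly what "stripping away details" means.

def fingerprint(pitches):
    """Convert absolute MIDI pitches into a key-invariant interval sequence."""
    return tuple(b - a for a, b in zip(pitches, pitches[1:]))

def distance(fp_a, fp_b):
    """Mean absolute difference between two fingerprints (truncated to the shorter)."""
    n = min(len(fp_a), len(fp_b))
    return sum(abs(x - y) for x, y in zip(fp_a[:n], fp_b[:n])) / n

# Hypothetical mini-catalog of fingerprints taken from studio recordings.
catalog = {
    "Dance Monkey": fingerprint([69, 69, 69, 67, 64, 64, 62]),
    "Happy Birthday": fingerprint([60, 60, 62, 60, 65, 64]),
}

def best_match(hummed_pitches):
    """Return the catalog song whose fingerprint is closest to the hum."""
    hummed = fingerprint(hummed_pitches)
    return min(catalog, key=lambda song: distance(catalog[song], hummed))

# A hum shifted up 3 semitones still matches, because intervals are key-invariant.
print(best_match([72, 72, 72, 70, 67, 67, 65]))  # -> Dance Monkey
```

The key design point is that the fingerprint is relative: transposing the hum into a different key changes every absolute pitch but leaves the interval sequence untouched.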

This builds on the work of Google’s AI Research team on music recognition. Google launched Now Playing on the Pixel 2 in 2017, using deep neural networks to bring low-power music recognition to mobile devices. In 2018, the same technology came to the SoundSearch feature in the Google app, expanding the reach to a catalog of millions of songs. The new experience takes it a step further: songs can now be recognized without the lyrics or the original recording. All Google needs is your hum.

✉️Efficient, fast replies for email

Gmail and AI

How 🧐?

The content of language is deeply hierarchical, reflected in the structure of language itself, going from letters to words to phrases to sentences to paragraphs to sections to chapters to books to authors to libraries, etc. Consider the message, “That interesting person at the cafe we like gave me a glance.” The hierarchical chunks in this sentence are highly variable. The subject of the sentence is “That interesting person at the cafe we like.” The modifier “interesting” tells us something about the writer’s past experiences with the person. We are told that the location of an incident involving both the writer and the person is “at the cafe.” We are also told that “we,” meaning the writer and the person being written to, like the cafe. Additionally, each word is itself part of a hierarchy, sometimes more than one. A cafe is a type of restaurant which is a type of store which is a type of establishment, and so on.

In proposing an appropriate response to this message we might consider the meaning of the word “glance,” which is potentially ambiguous. Was it a positive gesture? In that case, we might respond, “Cool!” Or was it a negative gesture? If so, does the subject say anything about how the writer felt about the negative exchange? A lot of information about the world, and an ability to make reasoned judgments, are needed to make subtle distinctions.

Given enough examples of language, a machine learning approach can discover many of these subtle distinctions. Moreover, a hierarchical approach to learning is well suited to the hierarchical nature of language, and it turns out to work well for suggesting possible responses to emails. Google uses a hierarchy of modules, each of which considers features that correspond to sequences at different temporal scales, similar to how we understand speech and language.

Each module processes inputs and provides transformed representations of those inputs on its outputs (which are, in turn, available for the next level). In the Smart Reply system and the figure above, the repeated structure has two layers of hierarchy. The first makes each feature useful as a predictor of the final result, and the second combines these features. By definition, the second works at a more abstract representation and considers a wider timescale.
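As a loose illustration of that two-layer idea, the sketch below uses a first level that maps each word to a feature vector and a second level that combines those features over the whole message before choosing a reply. The hand-made “sentiment” features and the reply rule are purely hypothetical; the real Smart Reply system learns these representations with trained neural networks.

```python
# Hypothetical two-level hierarchy in the spirit of Smart Reply.
# Level 1 works at the word timescale; level 2 at the message timescale.

# Level 1: per-word feature vectors (toy hand-made "sentiment" scores;
# a real model would learn multi-dimensional embeddings).
word_features = {
    "glance": [0.1], "cool": [0.9], "angry": [-0.8], "interesting": [0.5],
}

def level1(words):
    """Map each word to its feature vector (unknown words get a zero vector)."""
    return [word_features.get(w, [0.0]) for w in words]

def level2(features):
    """Combine word-level features into one message-level representation."""
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

def suggest_reply(message):
    """Pick a canned reply from the message-level representation."""
    score = level2(level1(message.lower().split()))[0]
    return "Cool!" if score > 0 else "Oh no, what happened?"

print(suggest_reply("that interesting person at the cafe we like gave me a glance"))
```

Note how the second level operates on the first level’s outputs rather than on raw words, so it naturally sees a wider timescale and a more abstract representation, which is exactly the structure the paragraph above describes.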

📺YouTube Recommendations

YouTube’s search system prioritizes three main elements to provide the best results: relevance, engagement, and quality. To estimate relevance, it looks at many factors, such as how well the title, tags, description, and video content match your search query. Engagement signals are a valuable way to determine relevance: YouTube incorporates aggregate engagement signals from users, e.g. it may look at the watch time of a particular video for a particular query to determine whether other users consider the video relevant to that query. Finally, for quality, the systems are designed to identify signals that help determine which channels demonstrate expertise, authoritativeness, and trustworthiness on a given topic.

In addition to those three main elements, YouTube strives to make search results relevant for each user, so it may also consider your search and watch history. That’s why your search results might differ from another user’s results for the same query. For example, if you watch a lot of sports videos and search for ‘cricket’, you are more likely to see videos about the sport than nature videos of crickets.
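A minimal sketch of how the three signals might be combined into a single ranking score, assuming made-up weights, a crude term-overlap relevance function, and pre-normalized engagement and quality scores (none of which are YouTube’s actual formulas):

```python
# Hypothetical combination of relevance, engagement, and quality signals.

def relevance(video, query):
    """Crude relevance: fraction of query terms appearing in title or tags."""
    terms = query.lower().split()
    text = (video["title"] + " " + " ".join(video["tags"])).lower()
    return sum(t in text for t in terms) / len(terms)

def rank(videos, query, w_rel=0.5, w_eng=0.3, w_qual=0.2):
    """Sort videos by a weighted blend of the three signals (weights assumed)."""
    def score(v):
        # watch_time_norm: aggregate watch time for this query, scaled to [0, 1]
        # channel_quality: a channel-level authority score in [0, 1]
        return (w_rel * relevance(v, query)
                + w_eng * v["watch_time_norm"]
                + w_qual * v["channel_quality"])
    return sorted(videos, key=score, reverse=True)

videos = [
    {"title": "Cricket highlights", "tags": ["sport", "cricket"],
     "watch_time_norm": 0.9, "channel_quality": 0.8},
    {"title": "Crickets chirping at night", "tags": ["nature"],
     "watch_time_norm": 0.4, "channel_quality": 0.6},
]
print(rank(videos, "cricket")[0]["title"])  # -> Cricket highlights
```

Personalization would then adjust the scores per user, e.g. boosting sports videos for someone with a sports-heavy watch history, which is how the ‘cricket’ example above plays out.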

📸Visual ways to search and understand the world

Visual tools to help you learn

If you’re a parent, your kids may ask you questions about things you never thought you’d need to remember, like quadratic equations. From the search bar in the Google app on Android and iOS, you can use Lens to get help on a homework problem. With step-by-step guides and videos, you can learn and understand the foundational concepts to solve math, chemistry, biology and physics problems.

Shop what you see with Google Lens

Lens uses Style Engine technology, which combines the world’s largest database of products with millions of style images. It then pattern-matches to understand concepts like “ruffle sleeves” or “vintage denim” and how they pair with different apparel.
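The pattern-matching step can be imagined as a nearest-neighbor search over learned style embeddings. The sketch below fakes that with tiny hand-made vectors and cosine similarity; the Style Engine’s real features, database, and matching method are not public.

```python
# Toy sketch of matching a photographed item against a product database
# by embedding similarity (hypothetical vectors, not Lens internals).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these are learned style embeddings of catalog products.
products = {
    "ruffle-sleeve blouse": [0.9, 0.1, 0.0],
    "vintage denim jacket": [0.1, 0.8, 0.3],
    "plain t-shirt": [0.0, 0.1, 0.9],
}

def shop_the_look(photo_embedding):
    """Return the catalog product whose embedding is most similar to the photo."""
    return max(products, key=lambda p: cosine(products[p], photo_embedding))

print(shop_the_look([0.85, 0.15, 0.05]))  # -> ruffle-sleeve blouse
```

In a real system the photo embedding would come from a vision model and the search would run over millions of products with an approximate nearest-neighbor index, but the matching principle is the same.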

Bring the showroom to life with AR

AR experience of the 2020 Volvo XC40 Recharge

When you can’t go into stores to check out a product up close, AR can bring the showroom to you. If you’re in the market for a new car, for example, you’ll soon be able to search for it on Google and see an AR model right in front of you. You can easily check out what the car looks like in different colors, zoom in to see intricate details like buttons on the dashboard, view it against beautiful backdrops and even see it in your driveway. We’re experimenting with this feature in the U.S. and working with top auto brands, such as Volvo and Porsche, to bring these experiences to you soon.

🔗Don’t forget to check these amazing links:

Lastly, don’t forget to try these amazing features that Google has built using artificial intelligence and machine learning.