New Google Tools: The Price You Have To Pay For Using Them All

Google has developed all these new tools for you to benefit from. In return, the company doesn’t want money. It wants something else.

Few can doubt that almost all of Google’s latest tools aim to make life more convenient for their users.

But that convenience will not come without a price.

Users who carry out their everyday tasks with Google services should not forget the core of the bargain between them and Google: users handing over an ever-increasing amount of their data.

Last Monday, the company announced to the world a slew of new and enhanced features.

These features, Google said, would come to a wide array of the company’s consumer products.

Google added some additional features to Android, its mobile operating system, as well.

One feature encouraged consumers to take regular breaks.

Other features came as part of the company’s Google Assistant product, which now praises kids when they use words such as “please”.

Perhaps the same feature would do that for adults as well.

But that’s a discussion which we’ll leave for later.

Google representatives spoke to a huge crowd of developers, techies, and reporters during the company’s annual I/O conference.

Just like the last time (and every time before that) the company held the conference in Mountain View, California.

During the conference, Sundar Pichai, the company’s CEO, noted that the new features added to an already comprehensive list of tools that make technology more accessible and helpful to people.

Needless to say, Google also hopes that all the extra convenience it plans to deliver via its new products and features will make digital consumers a bit more willing to share their personal data with the company while they, for example, ask Google services to “please” tell them a good story.

Pichai also made a quick point about how the company, as a technology giant, planned to help consumers be more mindful about how they used its products and how those products fit into their daily habits.

Readers should know that for over a decade now Google, as a technology company, has managed to dominate entire markets for services such as:

  • Email
  • Browsers
  • Search engines

And much more.

But perhaps Google has not done so purely for the public good.

Perhaps the company is reacting to growing pressure on its products and services from critics such as Tristan Harris.

Harris is best known as the former Google design ethicist who turned into an ethical-tech advocate.

He has consistently criticized big technology companies such as Google and Facebook for building products that are addictive by nature.

There is little doubt about the fact that technology can play a very positive role in advancing forces which will make people’s lives much easier.

With that said, Pichai specifically mentioned that the company was aware of the very real and important questions critics had raised about the actual impact of technological advances and, of course, the role those advances would play in people’s lives in the future.

Let’s keep that in mind for a bit.

And take a look at some of the most intriguing product and feature announcements from Google at its annual I/O conference.

The list is ordered according to the degree of convenience that the company’s products and features offer.

So, in order of increasing convenience:

Pretty Please For Google Assistant

The new feature is Pretty Please for the company’s Google Assistant product.

What does it do?

Or more importantly what is it for?

Well, the company announced that the Pretty Please feature would roll out to assuage parents’ concerns that their kids were developing a bossy attitude and becoming more demanding when commanding Google Assistant to wake up and get to work with phrases such as “Hey Google”.

With the new feature in place, Google Assistant will respond warmly to kids’ queries that include polite words such as “please”.

So kids could use phrases that start with “please” in order to ask Google Assistant to do something.

In response, thanks to the new feature, Google Assistant will be able to reply with “Thanks for asking so nicely”.

It may also respond with “Thanks for saying please”.
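The described behavior — spot a polite word, acknowledge it — can be pictured with a tiny toy function. This is purely an illustrative sketch, not Google’s implementation; the marker list and response string are assumptions.

```python
POLITE_MARKERS = ("please", "pretty please")

def acknowledge_politeness(query: str):
    """Return a friendly acknowledgement when the query contains a
    polite marker; otherwise return None (hypothetical sketch)."""
    if any(marker in query.lower() for marker in POLITE_MARKERS):
        return "Thanks for asking so nicely"
    return None
```

So a request like “Hey Google, please play a song” would earn the acknowledgement, while a bare “Play a song” would not.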


Some feel that Google was a little pushed to introduce this politeness feature because of reports last month that Alexa (Amazon’s answer to Google Assistant) would soon be able to encourage kids to use the magic word in order to get their way with the company’s digital assistant.

If all of this does not make much sense, just understand that these features do nothing apart from encouraging children to be more polite in the way they ask for things.

When will it become available?

Well, according to most sources, the Pretty Please feature will become available later this year.

The Do Not Disturb Feature For The Android P Product

First a little something about Android P.

Google is now offering Android P in beta for a select few flagship Android smartphones.

As one would expect, among the first Android smartphones to get this new feature are Google’s own Pixel phones.

After that, the company will roll out the Do Not Disturb feature to more Android smartphones in the fall.

But what does it do?

Think of it as an update.

An update to the company’s Android platform in terms of its Do Not Disturb mode.

The new Do Not Disturb mode will let users place their phone face-down on a table to send it directly into silent mode.

Moreover, Do Not Disturb’s reach will expand.

Instead of just hushing texts and calls, the new Do Not Disturb feature will keep the smartphone’s screen dark as well.

Users will still have the option of enabling calls from specific contacts if they want to.
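Conceptually, the flip-to-silence gesture boils down to reading the accelerometer’s z-axis, which points out of the screen on Android devices. The sketch below is a hypothetical illustration of that idea, not Android’s actual code; the threshold value and function names are assumptions.

```python
GRAVITY = 9.81    # m/s^2
THRESHOLD = 0.9   # fraction of gravity required along the z-axis (assumed)

def is_face_down(accel_z: float) -> bool:
    """A phone lying face-down reports roughly -9.81 m/s^2 on the
    z-axis, because that axis points out of the screen."""
    return accel_z < -GRAVITY * THRESHOLD

def should_silence(accel_z: float, flip_gesture_enabled: bool) -> bool:
    # Silence only when the user has opted into the gesture.
    return flip_gesture_enabled and is_face_down(accel_z)
```

The opt-in flag matters: a phone casually dropped screen-down shouldn’t go silent unless the user wants that behavior.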

When will it become available?

According to most sources, the new Do Not Disturb feature will become available whenever the company decides to roll out Android P.

For beta users, the feature is available right now.

The Dashboard Feature For The Android P Product

Again, readers need to know that beta users can use the new Android P operating system right now, but only if they have one of the few supported flagship Android smartphones.

As mentioned before, if you have a Google Pixel smartphone then you are all set to try out all the latest features.

Later though, the rest of the world will get to use these new features as well.

What is this Dashboard feature?

The Dashboard feature basically shows Android users data on how they go about using their smartphone.

In other words, the new Dashboard will offer users specific data on how they spend time on their device.

The details will include things like how many times the user has unlocked the smartphone on a given day and how long the user has spent in specific mobile applications.

In the demo Google showed, the Dashboard feature could display a detailed hour-by-hour breakdown of the user’s app use within Gmail.

Additionally, the new Dashboard feature will also come with a timer.

The timer will enable users to set strict limits on the amount of time they can spend in a given application; once the limit is up, their Android smartphone will prompt them to take a break.
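Under the hood, a timer like this amounts to per-app bookkeeping against a daily limit. Here is a minimal, hypothetical sketch of that bookkeeping; the class and method names are invented for illustration and do not reflect Android’s real APIs.

```python
from collections import defaultdict

class AppTimer:
    """Hypothetical per-app daily time limits, tracked in seconds."""

    def __init__(self):
        self.limits = {}               # app name -> daily limit in seconds
        self.usage = defaultdict(int)  # app name -> seconds used today

    def set_limit(self, app: str, seconds: int) -> None:
        self.limits[app] = seconds

    def record_usage(self, app: str, seconds: int) -> None:
        self.usage[app] += seconds

    def should_prompt_break(self, app: str) -> bool:
        # Prompt once recorded usage reaches the configured limit.
        limit = self.limits.get(app)
        return limit is not None and self.usage[app] >= limit
```

An app with no configured limit never triggers a prompt, which mirrors the opt-in nature of the feature described above.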

When will this feature become available?

As far as beta users go, they can take advantage of this feature right now.

For others, it will become available as soon as Google rolls out Android P to all Android smartphones.

The Style Match Feature For The Google Lens Product


What is Style Match and what does it have to do with Google Lens?

Google will add the Google Lens feature to the camera application on a few Android smartphone devices.

The company has invested a lot of resources in coming up with a reasonable number of major new features.

Just one of those features is known as Style Match.

Think of Style Match as Shazam, except for outfits instead of music.

Style Match will also work for home decor.

This new Style Match feature will enable users to point their smartphone camera at a specific dress or lamp.

After they have done so, the phone will pull up more information and/or reviews about that product, along with places where the user can purchase similar items.

When will this feature become available?

Most sources say that users will have access to the Style Match feature in the next couple of weeks.

The Smart Compose Feature For Gmail

What is Smart Compose?

The Smart Compose feature takes advantage of machine learning to suggest phrases the user may want to type next.

It does that as the user is typing his/her next message.

All that the user has to do, in order to use the suggested phrases, is to keep hitting the tab key on his/her keyboard.

To take an example, let’s say the user types Taco Tuesday as his/her message subject line.

The new Smart Compose feature in Gmail will jump in and try to help the user by suggesting things such as the address of the user’s favorite taqueria and guacamole.
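The tab-to-accept mechanic can be illustrated with a toy completion function. The real Smart Compose uses a neural language model trained on a user’s mail; the phrase table below merely stands in for that model, and every phrase in it is invented.

```python
from typing import Optional

# Stand-in for a learned language model: a tiny table of known phrases.
PHRASES = [
    "thanks for asking",
    "thanks for your email",
    "see you tomorrow",
    "taco tuesday at noon",
]

def suggest_completion(typed: str) -> Optional[str]:
    """Return the remainder of the first known phrase that starts
    with what the user has typed so far, or None if nothing matches."""
    prefix = typed.lower()
    for phrase in PHRASES:
        if phrase.startswith(prefix) and phrase != prefix:
            return phrase[len(prefix):]
    return None
```

Hitting tab would then append the suggested remainder, exactly the interaction the feature is built around.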

Is this feature really helpful?


But is the Smart Compose feature also a reminder that the company behind Gmail (Google and/or Alphabet) knows a bit too much about the user?

Let’s be honest, most of the users who use Gmail as their primary email service know that Gmail knows a ton of information about them.

That information includes what type of things the users would say in certain situations.

When will this feature become available?

Sources say, sometime in May.

The Google Duplex Feature For Google Assistant

What is Google Duplex?

Before we get to that, the first thing users need to know is that this feature is still in its experimental stage.

With that out of the way, let’s talk about what this feature really is.

It is an AI agent.

An AI agent with an impressively human-sounding voice.

Google Duplex can make all sorts of phone calls on the consumer’s behalf (at the moment it is restricted to things like bookings) in order to set up appointments without any user input.

Let’s take the example of a simple call that Google Duplex can help with.

Google showed that users could use Google Duplex to call a restaurant.

And then seamlessly work through a slightly tricky situation.

The AI agent, acting on behalf of the user, wanted to make a reservation for a total of four people.

But it found out that the restaurant did not take reservations for parties of fewer than six people.

Google Duplex handled all of that in a voice that was hard to distinguish from a human’s.

When will this feature become available?

Google has yet to determine when it will offer this feature.

The company did say that it would start testing Google Duplex by rolling it out to Google Assistant as early as this summer.

The first version of Google Duplex will only help users make restaurant reservations and/or hair salon appointments.

Google Duplex will also help users to figure out different stores’ working hours during holidays.

Speaking of Google, one of the companies it owns is DeepMind.


DeepMind has been hard at work improving its own AI programs.

Recently, DeepMind’s neural networks managed to mimic cells called grid cells.

These are found in the brains of humans and other animals.

More interestingly, scientists believe that grid cells are responsible for helping animals know where they are at any given point in time.

Such discoveries have helped DeepMind’s AI program get very good at tasks such as navigation.

And DeepMind has managed this by developing the previously mentioned brain-like GPS system.

Researchers at DeepMind originally trained the AI program only to navigate through a digital/virtual maze.

But during the experiments, the program unexpectedly managed to develop a new architecture.

That architecture, according to DeepMind, resembled a neural GPS system.

A neural GPS system that scientists have previously found inside the human brain.

After the AI had developed the new architecture, it successfully managed to find its own way around the virtual maze.

Researchers say that the AI system did so with unprecedented skill.

As mentioned before, this latest discovery comes from an Alphabet-owned UK-based company that goes by the name of DeepMind.

The company has dedicated its resources to areas that help in the advancement of general AI (artificial intelligence).

DeepMind published its work in the world-renowned journal Nature.

It clearly hints at how researchers can use artificial neural networks to explore aspects of the human brain that have previously remained mysterious.

Readers should know that artificial neural networks are themselves inspired by biology.

Researchers say that the community should treat such an idea with proper caution.


Because there is a lot that researchers simply do not understand about how the human brain works.

Apart from that, the actual functioning of various artificial neural networks is (most of the time) very hard to explain.

DeepMind researchers had one aim in mind: to train an artificial neural network so that it could mimic path integration.

But what is path integration?

Path integration is a method.

It is the method animals use to keep track of their position by adding up their own movements as they travel through a given space.
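Path integration can be sketched as simple dead reckoning: add up each movement vector to track where you are. The function below is an illustrative toy, not DeepMind’s model; the step format and names are assumptions.

```python
import math

def path_integrate(start, steps):
    """Dead-reckoning sketch of path integration: starting from (x, y),
    apply each (heading_in_radians, distance) step and return the
    estimated final position."""
    x, y = start
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return (x, y)
```

For example, walking 3 units east and then 4 units north lands the agent at roughly (3, 4), even though it never observed its position directly — that is the essence of path integration.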

DeepMind researchers trained an artificial neural network with the help of a feedback loop.

This enabled the AI agent to navigate the virtual maze.

Researchers kept feeding the artificial neural network examples of the routes that mice took while trying to traverse a real, physical maze.

DeepMind researchers found that artificial neural networks managed to develop something pretty similar to what scientists had found in biological brains.

That is, grid cells.

According to researchers, animals use these cells to provide themselves with a way to position themselves in the real world, i.e., physical space.

These cells usually arrange themselves in triangular grids.

Scientists first identified grid cells back in 2005.

Those involved in the discovery managed to earn a Nobel Prize for their work about nine years later in 2014.

Artificial neural networks have shown that they can successfully carry out many useful tasks.

Until now, though, these neural networks have not proven themselves to be any good at tasks such as navigation.


Zohair A. Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.