In 2017, Google’s chief executive officer, Sundar Pichai, declared that he wanted the company to be “AI-first”.


Since then, artificial intelligence has made its way into everything from “operating systems and apps to home devices and in-vehicle interfaces”, according to ZDNet, with almost all of Google’s products incorporating AI to some degree.


With any deployment of AI, the question of ethics is never far off, and for the company whose former slogan was “don’t be evil”, this certainly rings true.


At Google’s AI in Action event in Zurich, Verdict Magazine heard how the company is tackling the issue of responsible AI deployment.

Google the AI giant

In 2018, the company renamed its research division “Google AI”, reflecting the fact that it was “implementing machine learning techniques in nearly everything we do”.


Having acquired DeepMind in 2014 and launched Google Neural Machine Translation, Google Assistant and Waymo in 2016, Google has made artificial intelligence central to its operations, transforming itself from an online search provider into a major player in the world of AI.


From products such as Google Duplex, which makes reservations over the phone using a human-like voice, to the Pixel 4’s new car crash detection feature, which detects when a user may have been in an accident and calls the emergency services, Google’s applications of AI are certainly impressive. But for one of the most data-rich companies in the world, continued advances in this area may be unnerving for some.

“In general, web technologies, not just AI but anything that touches technology, is something that is often misunderstood.”

“I think maybe the crux of the matter is that, in general, web technologies, not just AI but anything that touches technology, is something that is often misunderstood,” says Olivier Bousquet, head of machine learning research at Google.


“What is a cookie? What does it mean to have cookies stored by your website, and so on? These are questions that are poorly understood. And then of course when you read the terms and conditions on most websites... nobody understands what that means.


“I think that’s a problem and, to address this problem, we have different approaches. But in general, we are trying to educate people and provide digital skills to people at large.


“For AI, there is an extra need for education, and that’s why we share a lot of the course material that we use internally. For people to give consent in a way that they understand what they are giving consent to, we have to educate them, and there is still a lot more to be done in this area.”

Ethical implications: The military and beyond

As is the case with any new technology, defining what an ethical approach looks like is a conundrum that many companies are tackling. Of course, many uses of AI make users’ lives easier, and some even offer great societal benefit. Initiatives such as Project Euphonia, which helps those with speech impediments communicate, or the Google Flood Forecasting Initiative, which predicts where flooding might occur, exemplify this.


However, discussions of AI bias, as well as issues surrounding the use of user data and user consent, are never far off. In 2018, Google set out seven guiding principles for its AI deployment: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.


Although the company has been praised for its refusal to sell its facial recognition APIs, which have the potential to be used for surveillance, it has faced a backlash over Project Maven, as the debate over where the line should be drawn on ethical AI use continues.

“Working with the military with Gmail is one thing. Providing technology that could be used to lead to harm is another thing and that's where we have to draw the line.”

Project Maven refers to a contract with the US Department of Defense in which artificial intelligence was used to analyse images from military drones. This prompted protests from Google employees, with thousands signing a petition calling for the project to be cancelled. In 2018 Google announced that the contract was coming to an end.


Add to this the fact that, in 2019, the company disbanded its AI ethics council, intended to guide its “responsible development of AI”, after Google workers protested the involvement of right-wing think tank The Heritage Foundation, and it is clear that the issue is not easily solved.


Bousquet says that this area can be “blurry” and that company-wide ethical guidelines help to mitigate this:


“Working with the military doesn’t necessarily mean working on applications that can cause harm. When things [started] to look blurry... as the technology world [moved] towards more AI technology, then we realised, yes, there is a line to be drawn here.


“Working with the military with Gmail is one thing. Providing technology that could be used to lead to harm is another thing and that's where we have to draw the line. But before that we did not have those principles because we were not really exposed to such situations.”

Respecting ethical norms

Another ethical conundrum that AI brings with it is the data it runs on. AI relies on access to comprehensive data in order to learn, and while this can bring many benefits, where that data is stored, how it is anonymised and for what purpose it is used are of paramount importance.


Earlier this year, The Wall Street Journal reported that a “limited number” of Google employees had been given access to electronic patient health data by US healthcare provider Ascension without patients’ knowledge, under a scheme dubbed “Project Nightingale”, a contract under which Ascension’s data was stored on Google Cloud.


Although Google said in a blog post that the project “adheres to industry-wide regulations” and that “Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data”, the episode demonstrates that the handling of sensitive data, particularly health data, is something the general public has reservations about.


With Google making its intention to move into health AI clear, and with “studying the use of artificial intelligence to assist in diagnosing cancer, predicting patient outcomes, preventing blindness” part of Google Health’s mission, both the benefits and the ethical considerations of this will undoubtedly continue to be the subject of debate.


Bousquet says:


“The fact that we are collecting data is not necessarily bad per se, but, of course, it means that we have a huge responsibility with respect to the privacy of the data and the use of the data.

“The question is, can we advance the technology to make it as unbiased as possible and unbiased enough such that the benefits outweigh the potential harm from the bias?”

“When it comes to customers of our cloud storing their data on our cloud, this data is siloed and encrypted. So that means nobody at Google has access to that data. So if you’re a cloud customer [and] you’re using our servers to store your data, then it’s your data. Nobody looks at it; it’s not that we have access to it or we share it or whatever.”


He says that developing new technology must be done in a way that “respects ethical norms”:


“How do we balance those things? I mean, that’s why we have this constant dialogue, because we realised that the technology that we are developing has a lot of potential impacts, and we have to make sure that whenever we make use of data that people are sharing with us, we do it in a way that, first, is compatible with the regulation and, second, respects the ethical norms that we are contributing to [help] define together with the rest of society.”


Algorithmic bias is another aspect of the technology that big tech is facing scrutiny over. In 2018, Amazon scrapped an internal recruiting tool after it showed bias against women. Pichai has said that the company is working to ensure that AI does not "reinforce bias that exists in the world".


“The question of bias in the system: it’s unclear whether we will completely solve that problem. But now the question is, can we solve it enough such that the application that we develop is useful and not harmful?” says Bousquet.


“But if there are situations where we cannot fix the bias problem, then we will not do it. And so, the question is, can we advance the technology to make it as unbiased as possible and unbiased enough such that the benefits outweigh the potential harm from the bias?”

“A single company cannot be prescribing how AI should be developed”

When considering all of these issues, the extent to which the use of AI should be externally regulated is a crucial consideration.


“As the technology evolves and new applications emerge, it’s important to think [about] whether the existing regulation applies and just has to be interpreted in the right way in that context, or needs to evolve,” says Bousquet.


“At Google, the way we think about it is that it's good to engage the dialogue as early as possible with opinion formers, policymakers, people that are involved in establishing regulation so that they know, they understand the technology, they understand the limitations, the risks, and so on, and they can craft, if necessary, the appropriate regulation.


“So it’s kind of a back and forth. As we have seen in the past with GDPR, for example, Google was doing a lot already on privacy, and the fact that it gets materialised into principles, so that it is not just Google applying its own principles but something that is applicable to every company, is a good thing for society.”


He believes that a broader conversation about the impact of AI on wider society is much needed:


“There was a New York Times article recently, which said AI is too important to leave it to Google and Facebook alone. I would agree. A single company cannot be prescribing how AI should be developed and applied to problems in society. So yes, we are trying to push as much as we can this conversation and this guidance in the public space. But there is only so much we can do as a single company.”
