A Magazine for Sheffield

How Does Google Make Money?

Illustration by Alastair Flindall

In 2019, Google was worth roughly $900bn. That’s more than the entire economy of Switzerland that year. Or, put scientifically, an awful lot of Toblerone.

Why is Google so profitable? For most of us, most of the time, its services are free. Need to find something online? Send an email? Get from one place to another? All free. Google will translate foreign languages for you, provide online storage and keep your photos organised. You can share, cast and calculate, play, text and cite, watch, learn and make. All day, every day. All completely free.

So where does the money come from? Tesco sells groceries. Ford sells cars. Google sells... what exactly? Only one product the company trades goes back almost to its earliest days, and that’s advertising. About 80% of Google’s income is derived not from search results or patents or even software, but from the sale of advertising space on its various platforms. From a revenue point of view, Google is fundamentally in the ads game.

This shouldn’t come as a surprise. We’ve all seen Google’s ads. Whenever we check our emails or watch a video on YouTube, there they are, the chatter of our digital lives, no sooner noticed than ignored. But in many ways Google is very different to other ad companies. Its mission statement, ‘To organize the world's information and make it universally accessible and useful,’ doesn’t even mention adverts. Can you name a single other marketing company which has the same profile or revenue? I can’t. So what’s going on?

To understand Google’s highly successful business model, we first need to know about the accidental but highly significant discovery the company made in the early noughties, a discovery which transformed it from unprofitable provider of free web searches to ubiquitous corporate verb.


"Everyone needs to read this book as an act of digital self-defense," Naomi Klein said about Shoshana Zuboff's critically acclaimed 2019 book.

As Shoshana Zuboff explains in her recent book, The Age of Surveillance Capitalism, Google has always used the data we generate when we search to improve the results of its algorithm. When you type (or nowadays speak) a search query, you are inadvertently telling Google a large amount about yourself, which it uses to increase the relevance of its results. This used to be fairly basic stuff: your location, your search history, how you spell things. These days it could feasibly include the movement of your eyes, your tone of voice or your computer-analysed emotional state.

Back when Google considered its primary business to be the return of high-quality search results, this information was used to improve the service. Google Translate, for example, got a head start because of the sheer amount of linguistic data the company held. But there was a limit to what this data could be used for and although it was stored, it was considered a waste product, ‘data exhaust’.

But then, back in the early 2000s, the company realised this data could be used for another purpose. In Google mythology, this was the ‘Carol Brady’s Maiden Name’ moment, the day they noticed dramatic spikes in searches about a 1970s television character. Curiously, the repeated spikes happened at the precise but obscure time of 48 minutes past each hour. Why? Because this was the answer to the $1m question on an episode of Who Wants To Be A Millionaire? which aired that day across the US’s various time zones. 48 minutes into each episode was when the question was asked.

It might seem trivial but because Google, and only Google, could see this pattern, they could spot a trend which was invisible to everybody else. They knew before the TV executives, before the ratings people, before anyone wanting to sell a related product. As one Google executive put it coyly at the time, "There is tremendous opportunity with this data."

Such insights weren’t confined to the big picture either. Google realised the ‘exhaust’ we generated could actually be put to the highly profitable task of building user profiles to predict which adverts we would respond to most favourably. Information from your browser, your encounters with past adverts and purchases, your phrasing, location, spelling - all of it could be used by Google to understand you better. Today, that list is a lot longer and a lot more intrusive.

Paweł Czerwiński (Unsplash)

The more we used Google to discover information, the more information Google discovered about us. This cycle, which had once served solely to improve the service, now took a critical detour. Information from you was packaged as predictions and sold to advertisers; your search results and your adverts were then returned to your screen together. This model became the fundamental financial engine of Google. In two years, the company’s revenue leapt from $347m in 2002 to $3.2bn in 2004. It hasn’t stopped since.

The information-into-advertising model also explains Google’s trajectory from search engine into tech giant. A simple logic underpins the whole operation: the more data you have, the better your predictions will be. The only way to stay ahead of your competitors therefore is to acquire more data - which is precisely what Google did.

Why does a search company need a maps service? So it knows where you are and what you might need to buy when you’re there. Why are Google’s services personalised? All the better to know you, my dear. Why does an internet company suddenly announce it is moving into phones? Because phones became ‘smart’ and a new source of hugely profitable behavioural data became available. From a tech company’s point of view, the smartphone revolution was a dream come true. A smartphone is yours and only yours, never leaves your side and is always ‘on’, always transmitting.

Shoshana Zuboff calls this ‘surveillance capitalism’. Information about us is taken in huge quantities with or without our consent as a new kind of raw material. It is then processed by a tiny number of private tech platforms before being sold as predictions about our behaviour to companies who induce us to take actions not necessarily to our benefit.

As even Apple’s CEO, Tim Cook, explains, the model developed at Google has become the destructive standard of our digital age: "Our own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency. Billions of dollars change hands on the basis of our likes and dislikes, our relationships and conversations. These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded, and sold."

In a world of surveillance capitalism, you don’t simply search Google to find information; Google uses its search to get information about you. That handy GPS map which takes you from A to B doesn’t principally exist to help you move about, but to capture the details of your movement. The apps you play might be free and fun, but they are scraping your phone for rich behavioural data which is then used to target political campaigns at you.

Once you realise that data about you – what you buy, how you spend, what you like, what you share, where you go, how you sleep, eat, exercise, who you vote for, and on and on ad infinitum – is the raw material for a new extractive industry, you begin to see this process at work in places you would never have suspected.

Cars are no longer simply a means of transport but data collection devices which allow insurance companies to observe and measure behaviour, and therefore subject it to new forms of contractual obligation. Facebook’s ‘Like’ button, despite the company’s denials when it was introduced, was not primarily a cool new feature but a highly effective means of tracking your browsing activity, even when you’re not on Facebook. The ‘single sign on’ mechanism that allows you to sign up for internet services using, say, your Google account, does exactly the same thing.

Strava sells the location data you provide to people interested in your movements. Amazon keeps the data captured by Alexa even if you request deletion. Facebook boasts about bringing ‘free internet’ to Africa, but it’s the imperative of establishing the company’s surveillance practices on a new continent that is pushing the agenda, not philanthropy or development.

Perhaps most soberingly, even children’s toys have been turned into surveillance devices. Free games like Pokémon Go are, on closer inspection, boundary-pushing exercises in behavioural data extraction. Meanwhile, in the home, toys like the My Friend Cayla doll send children’s speech to third-party companies, who use it to improve the very technologies which are then used to sell, among other things, dolls. In Germany, a country whose past means it takes surveillance more seriously than most, the doll is considered an ‘illegal surveillance device’ and parents are encouraged to destroy any models they have purchased.

None of the examples above are instances of a company going rogue. The collection of data is not a byproduct of these services; it is the point of them. Data extraction is the crucial first step in a process whereby surveillance capitalists create their product. Knowledge of us becomes predictions about us, which can then be sold to people who wish to pre-empt our desires.

Chris Yang (Unsplash)

The critical thing to note here is that our usual debate around ‘privacy’ is almost useless when it comes to describing and therefore challenging what is going on.

It might hurt to hear this, but Mark Zuckerberg is not interested in your cat pics. What he is interested in is the psychology, preferences and predictable habits of everyone on the planet. So his platform aims to capture information about everyone and everything, everywhere. The fact that your latest selfie with Mr Fluffington gets hoovered up along with everything else might feel a little intrusive, but you really shouldn’t take it personally.

In fact, while we are busy getting indignant about what ‘they’ know about ‘me’, we are not paying attention to the real issue: what they know about all of us. Over the last two decades, a historically unprecedented amount of information has been concentrated in a tiny number of private companies, owned and directed by an even smaller number of individuals, who are increasingly unaccountable to our laws and our politics.

Big deal, you might say. It’s just adverts. I don’t pay attention anyway, and in return I get all these incredible services for free. What this misses is one of the big political developments of our time: the expansion of surveillance capitalism, its principles and practices, far beyond advertising into new areas of our lives where, the evidence suggests, we are completely unprepared for it.

In the last five years alone, we have seen elections turned upside down by new practices on social media. Political micro-targeting, for example, would be impossible if Facebook either did not hold all this information about us or made it inaccessible to third parties. But until recently a deliberate policy of the company allowed external developers access to huge amounts of our personal information through our profiles in order to incentivise them to build apps for the platform. This technique then migrated into politics.

Innocuous ‘competitions’ like predicting the Euro '16 results were actually data harvesting operations in disguise. As Dominic Cummings wrote on his now infamous blog: "Data flowed in on the ground and was then analysed by the data science team and integrated with all the other data streaming in. This was the point of our £50m prize for predicting the results of the European football championships, which gathered data from people who usually ignore politics."

Microtargeting itself is a challenge to democracy as we understand it because, in principle, it allows every voter to see a different message. Whereas before elections were about grouping voters around shared policy platforms, now there can be as many policies as voters.

Such adverts also come from opaque groups operating with undeclared funds on a platform which is itself unregulated. As the Information Commissioner’s Office observes, democracies "are struggling to retain fundamental democratic principles in the face of opaque digital technologies." A parliamentary report is blunter: British electoral law is ‘not fit for purpose’ in the digital age.

Elsewhere, new areas of our lives have been co-opted by surveillance capitalists. Every day, the same data principles used by advertisers are employed by credit rating agencies and insurers to bring previously uncommercialised behaviour into new contractual relationships.

As Hal Varian, Google’s Chief Economist, explains, "nowadays, it’s easier just to instruct the vehicular monitoring system not to allow the car to be started" if drivers are considered to have breached contract. Easier for the insurers, perhaps. Not easier for the parent with small children, the worker with a commute, the sick patient going to hospital.

The price of convenience is a new set of rules enforced over a new aspect of life. All the time the monitoring continues, searching for ways to transform your current behaviour into new contracts which you never imagined possible. As Google and other companies increasingly move into areas like health and education, we should also anticipate the same logic being rolled out there.

All this is just the start. To see one possible version of the future, we don't have to time travel. Just look to China. There, the ‘social credit’ system observes things like how you cross the road, whether you pay your bills and how much you give to charity. It then turns this into a credit score which determines whether you can buy a train ticket, get a loan or purchase property.

It might feel like Black Mirror to us, but the only difference between social credit and surveillance capitalism is that the Chinese State has co-opted data extraction for political use and expanded its scope beyond what we consider legitimate. There is no technical distinction which can be meaningfully made between Facebook and the Communist Party of China.

Where do we go from here? We are only 20 years into this journey. IBM already has a personality criterion they believe can measure ‘the need for love’. Facial scanning can detect emotional reactions unseen by the human eye. Facebook can affect voter turnout just by adding a few buttons to our profile. Who we vote for will increasingly be swayed by campaigns which target our personalities, not our politics.

Data on an inhuman scale now reveals more about us than we could ever possibly know ourselves.

Advertising was the beginning of the commodification of this data, but it won’t be the end. Nor will commodification itself. In ways we are only just beginning to understand, we are giving extraordinary powers to companies who do not have our best interests at heart.

The fundamental question is this: if knowledge of, and ability to predict, an individual or a group’s behaviour is a form of control, who do we want to be in control, us or them?
