
Almost half of the world’s population uses one of Meta’s services every month. Facebook and Instagram combined hold over 75% of the social media market share, and WhatsApp has become the world’s default instant messaging app.

This is the story of how Facebook took over the world.

In the early days of Facebook, it was reported that Zuckerberg ended meetings by shouting ‘domination!’ And it’s safe to say that he has achieved it. To understand how, we need to rewind nearly 20 years.

It’s 2005. You’re in high school or starting your last year of college, or maybe you’re a young parent trying to stay in touch with friends, when you hear about a new website called Facebook. So you sign up. You have no idea you’re looking at a website that will change the world as we know it.


Facebook, of course, didn’t invent social networking. That started in 1997 with SixDegrees.com, the first website to combine user profiles with friends lists.

Then, in 1999, LiveJournal came onto the scene as a way to keep in touch with friends via blogging. By 2007 it had 14 million users and was sold to a large Russian media company.

Then there was Friendster in 2002 and MySpace in 2003. By 2005, MySpace was the dominant social network in the United States. But neither MySpace nor any of us had any idea what was coming.

In one of Harvard’s dorm rooms was a young Mark Zuckerberg, working on a social networking site that would soon overthrow every competitor in the market.

One thing working in Facebook’s favor was timing. Thanks to the rising availability of broadband, more people were on the internet in the mid-2000s than ever before. And the social networks that came before gave Facebook a long list of technical and business mistakes to avoid.

But it certainly wasn’t all luck.


Zuckerberg and his co-founders built Facebook in a controlled, methodical way. It started at Harvard and then slowly expanded to other universities, high schools, and corporations.

It wasn’t until September of 2006, after two years of limited availability, that Facebook opened its platform to anyone 13 and over. This slow growth gave the founders time to perfect the technology and to hire intelligent engineers who constantly added new features.

Facebook quickly gained around 12 million users. And by 2008, just two years after its public release, 100 million people were using Facebook.

That same year, Sheryl Sandberg joined the company as Chief Operating Officer, having previously served as Chief of Staff to the Secretary of the Treasury during the Clinton administration. Sandberg was viewed as the adult in the room alongside Zuckerberg. From there, things took off.


By December 2009, Facebook had become the most popular social platform in the world. When the movie The Social Network came out in 2010, Facebook’s supremacy was officially solidified in Hollywood history.

But that wasn’t enough for Zuckerberg, who was after domination. Much of Facebook’s success is thanks to its unique growth team. The company isn’t just focused on getting new people to join; it obsesses over monthly active users, a metric that reflects how often people return to the site and how much time they spend on it.

It might seem obvious these days, but in 2007, when Zuckerberg was just 23, he created a growth team that used data to generate engagement. At the time, other companies largely considered growth to be the responsibility of the P.R. and marketing departments, whereas Facebook prioritized data and engineering.


In the early days, people left the site because they couldn’t find their friends fast enough. So, the growth team created the ‘People You May Know’ function that allowed Facebook to access your contacts to suggest friends immediately. As you might expect, this led to some privacy issues, like psychologists’ patients being recommended to befriend each other. This would certainly not be Facebook’s last dance with privacy concerns.

And that wasn’t the only tool in the growth team’s arsenal that came with controversy.

Tristan Harris, a computer scientist and former Google employee, co-founded the Center for Humane Technology to push back against the addictive elements of technology. All tech companies, Facebook included, have relied on people’s weaknesses and addictive tendencies to gain time and attention.

The perfect example is the now-ubiquitous ‘Like’ button. This kept people returning to the site for the dopamine hit they’d feel when someone liked their post. As Harris puts it, the Like button and everything that came with it essentially turned our smartphones into slot machines.

That was exactly what the company wanted. The more addictive the platform, the more opportunity for revenue. Because, well, ads.

Facebook’s advertising model didn’t start off the way we know it today. In the early college campus days, the site sold what it called Flyers, ads students could buy to promote parties and other campus activities.

As the company grew, businesses flocked to Facebook to advertise because they were able to directly target audiences by college, degree type, preferred courses, age, gender, and interests. This made Facebook’s advertising far more effective than traditional print or T.V. advertising.

By the end of 2007, 100,000 companies had signed up with Facebook business pages, promoting themselves through advertising.

Like Google’s ads, Facebook’s advertising strategy reinvented the model by linking ads to specific, targeted users. The idea seems so normal to us now, when we merely mention a toaster oven and it suddenly appears on our Instagram feed. But it was revolutionary then and continues to be insanely profitable for Facebook.

But some might argue that the ultimate key to Facebook’s status as a global behemoth is its consistent acquisition of companies it sees as competitors or as additions to its master plan, a strategy aptly called the copy-acquire-kill method.

It started in 2012 when Facebook bought Instagram for $1 billion. Two years later, it grabbed the global messaging app WhatsApp for $19 billion and virtual reality company Oculus for $2 billion.

When companies like Snapchat wouldn’t sell to it, Facebook simply copied the app’s features and integrated them into its own apps. If this sounds familiar, it’s because it’s the same thing the company is now trying to do to Twitter by introducing Threads.

Facebook’s rise has been meteoric, but the ride has been a bumpy one.

The first sign of danger for Facebook came in 2013, when its content moderation strategy, or lack thereof, began to unravel. The company was found to be experimenting on its users, showing them certain content to influence their moods. Eventually, it issued an apology. But that was small potatoes compared to what was to come.

The idea of fake news didn’t really exist before 2015. But at the start of the 2016 U.S. presidential election, when a study found that 63% of American Facebook users got their news from the platform, the company knew it had to get ahead of the potential for misinformation.

It introduced a new feature that let users flag articles as false news and rolled out a program for journalists intended to favor hard-hitting journalism.

Sadly, these measures did next to nothing to stop the spread of misinformation.

In the 2016 U.S. election, more people engaged with fake news stories than with real ones. And it was all because of the way the algorithm was designed. Fake news is sensational; it’s purposefully crafted to cause outrage and fear. But that also makes it more likely to be clicked on and commented on, and the algorithm prioritizes and spreads the articles that get the most interaction.
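
To make that dynamic concrete, here is a minimal, purely illustrative sketch in Python (not Facebook’s actual code): a toy feed ranker that scores stories only by how much interaction they attract, so the most provocative story rises to the top whether or not it is true. The story names and scoring weights are invented for the example.

```python
# Hypothetical example: ranking a feed purely by engagement.
stories = [
    {"title": "Calm, factual report", "clicks": 120, "comments": 15, "shares": 10},
    {"title": "Outrage-bait fake story", "clicks": 900, "comments": 400, "shares": 350},
]

def engagement_score(story):
    # Weight comments and shares more heavily than clicks, since they signal
    # stronger interaction; the weights here are made up for illustration.
    return story["clicks"] + 3 * story["comments"] + 5 * story["shares"]

# Sort so the most-interacted-with story appears first, regardless of whether it is true.
for story in sorted(stories, key=engagement_score, reverse=True):
    print(engagement_score(story), story["title"])
```

A ranker like this never asks whether a story is accurate; it only rewards whatever people react to, which is exactly why sensational content tends to win.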

Publicly, Zuckerberg insisted that Facebook had not influenced the election. But behind the scenes, the company provided Congress with information showing that a Russia-based organization had run 3,000 ads between 2015 and 2016 in possible connection with election interference. The ads, covering topics from race to gun rights, reached 10 million U.S. citizens.

Facebook knew it was poisoning people’s brains and making a ton of money off it but would never admit to doing so publicly.

Perhaps that’s because in 2017, riding the wave of fake-news controversies, Facebook posted a quarterly profit of $3 billion, a 76% increase over the year before. Why would the company admit to anything when the controversy wasn’t hurting its earnings? If anything, the spread of misinformation increased engagement, which increased ad revenue.

If fake news wouldn’t stop the rocket ship that was Facebook, perhaps privacy violations would.

In March of 2018, a story broke in The New York Times and The Guardian: the personal data of up to 87 million people had been scraped by a Facebook-adjacent app called ‘This Is Your Digital Life.’

People had handed over personal information via the app, and the British consulting firm Cambridge Analytica used that information to advise right-leaning groups like Donald Trump’s campaign and Vote Leave, a pro-Brexit group.

This data leak put a magnifying glass on Facebook’s privacy issues like never before. In the weeks that followed, the company’s stock slid, wiping out about $70 billion in market value. Advertisers got spooked.

In response, Facebook suspended Cambridge Analytica and certain apps, and Zuckerberg testified before the United States Congress. The scandal ultimately ended with Facebook paying a $663,000 fine in the UK, and it came to be seen as a turning point in how the public viewed social media platforms and their access to our information.

The following year, the U.S. Federal Trade Commission fined Facebook $5 billion over user privacy violations, a record-breaking fine for a tech company but still a small price to pay for a company that makes upwards of $100 billion yearly.

Since then, Facebook’s reputation has continued to plummet. In 2020, a data scientist said the company had failed to stop political manipulation by foreign governments. In 2021, Facebook was found to have been a central planning platform for the riot at the U.S. Capitol on January 6th. And later that year, Frances Haugen, a former employee, testified that Facebook and the companies it owned knew they were causing harm to people but continued to put profit over the welfare of their users. Only one other industry keeps selling its customers things it knows are harmful to them, and it, too, calls its customers users.

Needless to say, while still raking in the cash, Facebook needed a facelift.

Enter Meta.

Facebook announced that the parent company of the social platform and all the other companies it had acquired would be renamed Meta. The new name, the thinking went, could leave behind the controversies plaguing Facebook.

Because the reality is that Facebook just wasn’t what it used to be. A 2018 study found that only 51% of people aged 13-17 used Facebook, a significant drop as the platform lost younger users to Snapchat, Instagram, YouTube, and then TikTok.

Under its new parent company, Meta, Facebook has continued to flourish in emerging markets, allowing its influence to grow globally. One of the ways it does this is by enabling easy communication in parts of the world where other channels are inaccessible. For example, in much of Africa, Facebook IS the internet. It’s free on many African telecom networks, and users don’t need phone credit to log on.

Only 8% of African households have a computer, so internet access via mobile phone is critical. Facebook’s Free Basics provides internet service that gives users credit-free access to the platform and works on low-cost mobile phones.

And in further expansion plans, Meta is developing satellites that can beam internet access to remote areas, mainly so residents can use Facebook.

Some see this as digital colonialism, a way of turning people in the global south into consumers of Western corporate content. It’s a valid argument, but saying that Facebook is the sole culprit of this practice is naive.

One can visit almost any country on the planet and order McDonald’s or Starbucks. Uber currently has cars on the roads of major African cities like Lagos, with a safety rating of zero, just to increase profits.

As crucial as these emerging markets are for the future of Facebook’s world takeover, Meta’s plans are even more far-reaching.

And they need to be. Because during the Covid-19 pandemic, more than 70% of Meta’s stock value eroded. That was partially due to Apple’s introduction of an app-tracking transparency feature, which let people stop apps like Facebook and Instagram from tracking their data. This meant less targeted advertising, which meant less money for Meta.

Of course, the impact of TikTok’s arrival shouldn’t be underestimated either.

So Meta is making some BIG bets to become even more powerful than it already is.

The first could pay off. One of the keys to Facebook and Meta’s domination has been eliminating competition, and one rival that has stuck around throughout Facebook’s existence is Twitter. Zuckerberg saw an opening when users soured on the app after Elon Musk purchased it and upended some policies users had favored.

Meta launched Threads to compete with Twitter, and the response was overwhelming. Within two hours, the app had gained 2 million users. By the next day, over 30 million people had signed up. It has now gone down in history as the most rapidly downloaded app EVER.

While Meta’s leadership was pleasantly surprised by the turnout, they had planned for it, courting some of the most-followed people on Twitter, like Ellen DeGeneres, Bill Gates, and Oprah, to convert to Threads.

In an odd turn of events, users seemed so soured by one power-hungry tech magnate that they embraced another. For many, Zuckerberg feels like the lesser of two evils, the Devil you know is better than the Angel you don’t.

Threads has been losing ground in recent weeks, although Zuckerberg insists the team is doing basic work to make the app function better, and that once it’s ready for a jolt, Meta will throw more weight behind it.

But Meta’s takeover isn’t dependent on a Twitter lookalike.

Meta’s very name comes from its ambition to create the metaverse, a digital world accessed through virtual reality where you can socialize, work, shop, and more. Facebook Spaces, a precursor to the metaverse, was a VR app that let participants hang out with their friends virtually while wearing VR headsets.

And who distributes most of the VR headsets in the world? Meta.

Meta has purchased seven of the most successful VR development studios in the world and has one of Earth’s largest VR content catalogs. In fact, it owns so many VR-related companies that in 2022, the Federal Trade Commission, the agency that protects the rights of U.S. consumers, moved to block Meta from buying yet another popular VR studio.

Signaling that Meta’s copy-acquire-kill plan might be losing one of its legs.

Regardless, Zuckerberg is staking Meta’s future on the metaverse and the digital worlds it will contain. He’s spoken about a future where users adopt avatars to work in virtual board rooms, attend digital events with friends, and shop in digital stores.

The final component of such immense power coming together is, unsurprisingly, artificial intelligence. Like almost every other tech company, Meta has increased its focus on AI. However, Meta has differentiated itself by pledging that its AI will be open source.

Open-source AI means the company’s code is freely available to developers and software enthusiasts worldwide. Companies like Google and the not-so-aptly-named OpenAI have set limits on who can access their latest technology and what can be done with it. Zuckerberg insists that making the AI code open source will allow people to scrutinize and improve upon it.

As with any AI, there is concern that Meta’s technology could be used for evil. I made a video on how artificial intelligence could be a key ingredient in generating more spam, scams and disinformation. But Meta says that releasing the technology to the public can strengthen its ability to fight these abuses.

Nick Clegg, Meta’s president of global affairs, said it’s not sustainable to keep this important technology in the hands of a few corporations, which, coming from an executive at Meta, is a bit ironic.

Because even if Meta shares its AI code, it still holds sway over so many of us in ways we may never be able to escape.

Copy-acquire-kill has been a mainstay of Meta’s strategy. Facebook itself was ultimately just a copy of previous social networks, albeit a better one, which it eventually killed. Then, over time, the company made huge acquisitions like Instagram and WhatsApp to balloon its influence and copied other successful giants like Twitter and Snapchat.

What happens if Meta gets all the power?

That’s called a monopoly, and if you’ve ever played the board game, you know it’s not fun when you’re on the losing side. And everyone except the player who holds the monopoly is on the losing side.

If companies can’t compete with Facebook and the roster of other entities Meta owns, then Meta gets all the opportunity. This, ultimately, could limit innovation.

We’re already seeing it play out.

Recently, there’s been suspicion that Meta is selling VR headsets for dirt cheap and at a loss to drive out competitors. This is called predatory pricing, and while it’s illegal, it can be complicated to prove because, generally, low prices are seen as good for consumers.

Right now, Meta is too young to be called a monopoly. But if the Metaverse comes to fruition and Meta owns every entity within it, this predatory behavior could become the norm.

And consumers, us, the people who have been clicking, sharing, liking and friending for all these years, might find it’s too late to turn back.

But what’s the alternative? Leave Threads for Twitter? Facebook for Snapchat? Instagram for TikTok? Well, certainly not the last one because TikTok is way more dangerous than you think, and the video on your screen right now tells you why.
