On Social Media Platforms
The current state of social media is a failure of ambition and imagination.
I’ve been thinking endlessly about social media since 2020.
As a computer scientist, I’ve paid attention for a very long time to how digital spaces shape our perception of public opinion and reality. But it wasn’t until the pandemic that my frustration at their impact began to feel urgent, prompting deeper thinking about how we should be building our information ecosystems.
I spent much of the pandemic trying to curate (and share) news and facts. As opportunists and propagandists took the big mics, experts struggled to spread knowledge faster than misinformation. While social platforms have been morphing for a while, they have recently attained a frightening ability to exploit vulnerabilities in our health, economic and political systems, and to facilitate violence around the globe.
Trying several new social media platforms (at times as a beta user) has left me disappointed. The same principles that brought us here—shallow product thinking, growth and speed at the cost of safety and user experience, engagement over facts, let the community moderate itself—seem to be in play, with tweaks. There is a lot of tech nihilism; I keep hearing variations of “this is just too complex”.
This is not simply a profit maximization conundrum. The belief that this is the inevitable direction any for-profit business will take, because profit follows short-term engagement on platforms, is inaccurate but hard to kill. The reality is that when platforms can no longer keep users safe, they lose financial value—because user experience, trust, value and retention all suffer. This can be seen now as both Facebook and Twitter hemorrhage users and advertisers. Twitter’s valuation has halved within months, and as I write this, NPR has left the site and Microsoft has dropped it from its advertising platform.
Building for safety is the financially strategic thing to do. The goal of a social media platform should be to optimize for long-term retention and value.
(I) On Scale
(a) Social Media Has Broken At Scale
What does it mean to successfully scale a technology product?
When a product fails to function in harmony with the cultural and political systems it is meant to interface with, when it becomes culpable for genocides, election interference, state-sponsored violence, coordinated misinformation, and propping up dictators—can we say that it has scaled successfully?
What we are seeing right now is that social media products have broken at scale. In 2021, I gave a talk about this at Barnard College titled ‘One Size Fails All’. Not until we devise a more ambitious definition of scale will we engineer social platforms to be resilient to bad actors.
Technology founders should be required to scale products to create financial and social value—without wreaking havoc on users, public infrastructure, and economies.
(b) What Scale Enables
One proposed solution to the exploitation of amplification and virality on large social platforms is to do away with these features and build smaller sub-communities where discourse and nuance can thrive. While this is useful, we still need scale. It’s nice to have civil conversations in protected spaces, but without scaled-up platforms like Twitter—where the promise of reaching anyone and everyone remains true—movements like MeToo, Black Lives Matter, and the Arab Spring might never have spread.
As we reimagine social media, it’s important to build spaces that have the reach of platforms like Twitter—where injustices can be exposed, communities mobilized, movements organized, and anonymity allowed. This is essential for activists, journalists, and anyone not protected by layers of privilege.
The solution for highly exploitable features like anonymity and amplification is not to do away with them. More on this in Section (IV)(c) below.
(II) Who Is Building? And For Whom?
Product development at tech startups often abides by the doctrine that technology is neutral with respect to geography because all users are the same. But I can tell you that the Facebook people use in Pakistan or the Philippines should not look the same as the one they use in the US. And yet, it does—it is fundamentally built the same way, and carries little to no intelligence, context, knowledge or customization informed by local risks and vulnerabilities.
Some of the most powerful technology products in the world continue to be built without deep thinking or intentional design—often by the same founders who broke them, or those with the same insular worldview.
The most obvious reason these founders have little appreciation of how their product decisions widen inequities of safety, access, power, wealth and knowledge—is that the harms of their shallow thinking are not felt equally. Their ‘move fast and break things’ breaks things least of all for them, and most for the ‘overlooked majority’ that I elaborate on in Section (IV)(a) on user safety.
We must pay attention to who is building, and for whom. We must question why social media platforms continue to be built by and for an advantaged minority, at the cost of profound damage to the rest of us.
If we want products that work for all of society—and that are optimized for long-term user engagement—we will need to start with founders who have a broader range of societal and lived experiences. We will need to build teams with multi-disciplinary training that can anticipate risks and inform impact.
Social media is among the most human products we will ever build. It is a congregation of humans, with all their complexity, biases and relations. So in addition, we must engineer it with learnings from the humanities and social sciences—the expertise of psychologists, anthropologists, criminologists, sociologists, historians and lawyers.
(III) Preventing and Predicting Harm
The current state of social media is neither inevitable nor unavoidable.
Platforms should be designed to be safe and equitable—from the start—with the goal of preventing harm, instead of relying heavily on damage control: content moderation, flagging, and user banning. Damage control is costly, complex, and has proven ineffective at creating safety.
We must raise the bar for how social platforms are built in the first place: resilient, because they are designed to predict and prevent harmful outcomes.
The bar can no longer be set at founders’ good intentions, because after close to 20 years of designing and using social media, we have the solutions, evidence, expertise, and technical capacity to support the goal of building safe and equitable digital spaces. We know the profound power of architecture and algorithms in creating digital norms and shaping user behavior. We know it is possible to create platforms that do not fuel violence, upend systems, or become havens of misinformation.
As long as these goals are a priority during the building process.
(a) Reimagining the Fundamentals
Social media has changed human behavior online and offline—with its decisions on what gets noticed and incentivized. Because its foundational features—follower numbers, likes, retweets, 280 characters, infinite feeds, virality—reward and reinforce posturing, rage, and attention-seeking, we will need to reimagine these fundamentals to build better digital spaces. Features that train us to create and follow hype instead of the truth should have no place in the product development process.
None of us are immune to the cues and incentive structures of these platforms. We are part of what social media expert Sinan Aral (once my undergraduate thesis advisor at MIT) calls ‘The Hype Machine’.
We need new features and algorithms — ones that don’t exist yet — if our goal is safety, connection, knowledge, nuance, complexity, and existing in non-binaries. Back to the basics.
(b) Product Engineering Creates Digital Outcomes
All platforms carry a consciously designed architecture that creates and sustains digital norms and user choices. It seeds the community by attracting certain kinds of users (the ones who get the best experience out of the product), creates the environment that everyone lives in, and drives user outcomes.
Platform architecture, feature decisions, and the algorithmic engine shape user behavior and build online culture.
What would happen if there were no mandatory feed — no endless stream of information coming at us at superhuman speed? What if follower count were not a stand-in for an account’s reputation and reliability? What would replace it, and how would that change the content we consume? What if we could see our proximity to misinformation? What if we were given safety tools that actually protected us from harassment? What if our social media world could adapt to suit our needs for that day, goal or phase of our lives?
The harms of social media are not a coincidence, a side effect or an unintended externality. As Maria Ressa said: “Our biology, our brains, our hearts have been systematically and insidiously attacked by the technology that delivers our news and prioritized the distribution of lies over facts—by design.”
(c) Asking the Right Questions
Here are some questions to think more deeply about, in building social media:
What behavior should be rewarded? This will become the norm.
Who gets the best experience out of this platform? Whose needs were considered in the design process? These are the users who will stay on and benefit from the platform in the long-term.
How can this feature be abused? Whatever can be abused, will be.
What are default settings, design elements and language reinforcing? This will become the platform’s culture.
How is platform misuse disincentivized? How is it reported and penalized? This has the power to change user behavior.
How is problematic behavior monitored? Are we situated to catch problems early? Prevention of harm over damage control.
(d) Incentives & Penalties—Not Lists & Rules
Norms aren’t built, nor rules internalized, by posting lists of community guidelines, just as company culture isn’t built by displaying values on a wall.
Digital norms are created and sustained by incentives for good behavior, and penalties or consequences for undesirable behavior. More on this in the section on misinformation below. Design decisions like visual cues, reminders, user flows and other explicit signals on the platform are also powerful in nudging us towards adopting these norms. Loopholes that can be exploited by bad actors will be exploited by them—they must be anticipated and removed.
None of the platforms I use give me clarity on what behavior they expect of me, let alone incentivize me to adopt it. Mostly, I have no clue: they never told me, there is little precedent for it, and no one modeled it. Those who violated platform guidelines mostly went unnoticed or unpunished.
(IV) On User Safety
The harms of badly-engineered social media are not distributed evenly.
Users who face the greatest risks are those whose needs were ignored during product development. These platforms were built without an appreciation of their experiences—because founding teams, by and large, do not belong to these groups.
The most targeted users face a distinct set of risks online—the centrality and immensity of this defines their online experience. Tracy Chou, creator of the anti-harassment plugin Blockparty, says:
“My whole life is oriented around how I can be safe—psychologically, mentally and physically.”
Online and offline violence are a continuum for these users; building safe products requires understanding how this violence is enabled by a confluence of factors. For example, violence against women journalists like Nobel Laureate Maria Ressa operates at the intersection of networked misogyny, sexualized attacks, viral disinformation, erosion of press freedom and populist politics.
(a) Building For The Majority
Current social media platforms have been built for the needs of an advantaged minority.
Demographic data for platforms like Twitter shows that the most targeted groups—women, people of color, journalists, activists, LGBTQ+ users and audiences outside the US—together constitute the majority. This overlooked majority represents a costly lost opportunity, because so many of its members get driven off, or do not engage fully with the platform, due to harassment and safety concerns.
It is worthwhile to look at the staggering scale of Twitter’s user base that identifies as non-male and non-white:
22% of all American women are on Twitter, only slightly lower than the 25% for men. Worldwide, 43.6% of existing Twitter users identify as female.
Twitter is disproportionately popular among black users—they make up 24% of all Twitter users, which is close to double their representation in the US population.
26% of African-American internet users said they use Twitter, compared to only 19% of white users.
Platforms should center the experiences of the overlooked majority because when the most vulnerable users are safe, everyone is safe. The experience becomes better for everyone. This is not only the responsible, but also the strategic thing to do. Building a product where a majority of users have a suboptimal experience at best— and a life-threatening one at worst—is a strategic misstep and a costly oversight.
Similarly, non-US audiences for social media platforms—for single countries—are sometimes larger than US audiences. These are audiences that are disproportionately at risk when social media products are not built to carry knowledge and protection that is customized for them, or do not recognize the local language(s).
India has 329.65M Facebook users, compared to the US’ 179.65M. Indonesia has 129.85M, Brazil 116M, and Mexico 89.7M.
The US has 76.9M Twitter users, compared to 58.9M in Japan, and 23.6M in India.
*Data from Statista, Jan 2022
The global perspective is essential to capture if social media platforms aspire to be tools for free speech worldwide, to enable social change and political movements, to provide safe spaces for activists and journalists, and to prevent their weaponization by actors with authoritarian or violent agendas.
Those that are closest to the problem are closest to the solution—the next wave of social platforms needs founding teams that belong to the overlooked majority. Because they experience the greatest intensity and ferocity of digital violence and coordinated misinformation campaigns, they deeply understand the needs and vulnerabilities of these groups, and are best-suited to build products that succeed at solving for these.
(b) Current Architecture Protects and Incentivizes Abuse
Current platform architecture and design are engineered for bad actors, i.e. with features, loopholes and vulnerabilities that are easy for them to exploit. In the absence of robust mechanisms to identify and disarm abusive users, a culture of impunity emboldens and escalates abuse, and protects abusers.
Simultaneously, platforms offer only rudimentary, unintuitive, unresponsive, hard-to-use and superficial tools to protect users who are at risk of being targeted.
Women journalists have voiced over and over that these platforms are not designed for them to safely do their jobs—they are instead designed for abusers.
“Social media platforms protect the harassers more than they protect me.”—Saba Eitizaz, Pakistani-Canadian journalist and podcast host.
“I want harassment to be as annoying for my harassers as it is for me to report it.”—Talia Lavin, journalist.
(c) Exploitable Features As Revocable Privileges
Features like amplification, anonymity and reporting are regularly exploited to cause harm. Taking amplification as an example, while the promise of broad reach is essential for activism, and to make injustices visible—it is also a powerful tool for propaganda, misinformation, hate speech and coordinated attacks.
Such features should be treated as a user privilege that is earned and can be revoked upon misuse. They should not be indiscriminate, free-for-all rights that remain in place regardless of how they are used.
Free speech is not the same as free reach. Reach via algorithmic amplification is and should be a privilege. Content that is identified as abusive or misinformation should be deamplified to incentivize good behavior. More on this in the following section, on misinformation.
(V) On Misinformation
Prioritizing facts over misinformation is a product decision to proactively engineer. A platform’s architecture and features can be built to disincentivize both the creation and consumption of misinformation.
We improve what we measure, so quantifying misinformation and making it visible is an essential step towards reducing it.
(a) Defining and Measuring Misinformation
In order for misinformation to take hold in public narrative, it has to be encountered multiple times by users, and amplified by those they follow or trust. Thus, reducing exposure in a timely manner becomes critical. This is only possible if we have robust mechanisms to categorize and quantify misinformation.
Social media researchers have been doing this for a very long time. They have identified 13 types of misinformation, which can be categorized by severity and source: lies, fake news, manipulated or doctored content, misleading content, propaganda, sensationalized content, false context, journalistic errors, astroturfing (masking sponsors), imposter content, rumors, hoaxes, and conspiracy theories.
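As a rough illustration of how such a taxonomy could be made machine-usable, here is a minimal sketch in Python. The category names follow the list above, but the severity weights, field names and scoring logic are my own assumptions for illustration—not a published standard or any platform’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class MisinfoType(Enum):
    LIE = "lie"
    FAKE_NEWS = "fake news"
    DOCTORED_CONTENT = "manipulated or doctored content"
    MISLEADING_CONTENT = "misleading content"
    PROPAGANDA = "propaganda"
    SENSATIONALIZED = "sensationalized content"
    FALSE_CONTEXT = "false context"
    JOURNALISTIC_ERROR = "journalistic error"
    ASTROTURFING = "astroturfing (masking sponsors)"
    IMPOSTER_CONTENT = "imposter content"
    RUMOR = "rumor"
    HOAX = "hoax"
    CONSPIRACY_THEORY = "conspiracy theory"

# Hypothetical severity weights in [0, 1]; a real system would derive these
# from research and moderation outcomes rather than hard-coding them.
SEVERITY = {
    MisinfoType.JOURNALISTIC_ERROR: 0.2,
    MisinfoType.SENSATIONALIZED: 0.3,
    MisinfoType.RUMOR: 0.4,
    MisinfoType.FALSE_CONTEXT: 0.5,
    MisinfoType.MISLEADING_CONTENT: 0.6,
    MisinfoType.PROPAGANDA: 0.7,
    MisinfoType.DOCTORED_CONTENT: 0.9,
    MisinfoType.FAKE_NEWS: 0.9,
    MisinfoType.LIE: 1.0,
}
DEFAULT_SEVERITY = 0.7  # fallback for types without an explicit weight

@dataclass
class MisinfoLabel:
    post_id: str
    misinfo_type: MisinfoType
    from_known_bad_source: bool  # e.g. a domain repeatedly flagged by fact-checkers

def misinfo_score(label: MisinfoLabel) -> float:
    """Combine category severity and source history into a single 0-1 score."""
    base = SEVERITY.get(label.misinfo_type, DEFAULT_SEVERITY)
    bump = 0.1 if label.from_known_bad_source else 0.0
    return min(1.0, base + bump)
```

A score like this is only useful if it feeds back into the product—into what gets amplified, what gets labeled, and what users see about the accounts they follow.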
Yet these methodologies have received little to no attention from social media platforms, let alone been used to reduce misinformation in the network.
(b) Putting Control Back In Users’ Hands
The knowledge above should be used to build tools that gauge the reliability of content and accounts—so users can make informed decisions about what they share, and what they expose themselves to.
Research (such as this piece in IEEE Spectrum) shows that users make better choices online when given information about the credibility of content and accounts. When aware that a source is low-credibility, they become more hesitant to share content from it.
Numerous social media research groups have created methodologies that give users a window into their exposure to misinformation—by looking at the reliability of the accounts they follow. Among this work is that of MIT professor David Rand (see an MIT Sloan article on it here, and read their paper in Nature Communications). Rand and his team calculated ‘falsity scores’ for the accounts of political elites by analyzing the amount of misinformation those accounts shared. Their research found that the falsity scores of the accounts you’re exposed to have a statistically significant impact on the information you end up sharing (and on your opinions): the more high-falsity accounts you follow, the more misinformation you share.
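The paper describes the actual methodology; purely as an illustration of the underlying idea, here is a minimal sketch that assumes we already have credibility ratings for news domains (for example from professional fact-checkers) and scores an account by the average unreliability of the links it shares. All names and numbers below are hypothetical.

```python
# Hypothetical domain credibility ratings in [0, 1] (1 = highly reliable),
# e.g. sourced from fact-checking organizations.
DOMAIN_CREDIBILITY = {
    "examplenews.com": 0.9,
    "viralrumors.net": 0.1,
}
DEFAULT_CREDIBILITY = 0.5  # assumed rating for unrated domains

def falsity_score(shared_domains: list[str]) -> float:
    """Average unreliability of an account's shared links (1.0 = only low-credibility sources)."""
    if not shared_domains:
        return 0.0
    ratings = [DOMAIN_CREDIBILITY.get(d, DEFAULT_CREDIBILITY) for d in shared_domains]
    return 1.0 - sum(ratings) / len(ratings)

def exposure_score(followed: dict[str, list[str]]) -> float:
    """Average falsity of the accounts a user follows: a rough proxy for exposure to misinformation."""
    if not followed:
        return 0.0
    scores = [falsity_score(domains) for domains in followed.values()]
    return sum(scores) / len(scores)

# Example: a user following one reliable and one unreliable account.
exposure = exposure_score({
    "@reliable_reporter": ["examplenews.com", "examplenews.com"],
    "@chain_forwarder": ["viralrumors.net", "viralrumors.net", "unrated.example"],
})
```

Surfacing a number like this to users—alongside the accounts driving it—is one way to turn research findings into a product feature that puts control back in users’ hands.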
(c) Disincentives for Spreading Misinformation
Making misinformation visible means users can see how much falsehood an account is creating or spreading—thus impacting its repute and credibility, and disincentivizing the act.
Current measures of repute rely on misleading and incomplete metrics such as follower count and content engagement (even the legacy Twitter blue-tick relied heavily on engagement to determine when to verify a user). Instead, a user’s repute in an information ecosystem should be based on the reliability of the content they share. This also raises the bar for content quality across the network.
Amplification is among the most effective tools platforms can use to do this. Quantifying misinformation allows amplification decisions to be better informed. This is a much-needed step in moving away from indiscriminate amplification. In addition, reliability thresholds can be defined below which accounts are at risk of being flagged or removed.
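To make this concrete, here is a minimal sketch of how reliability could feed into amplification and account-level decisions. The threshold values and the down-weighting rule are assumptions for illustration, not a recommendation or any platform’s actual policy; ‘reliability’ here could be something like 1 minus the falsity score sketched above.

```python
def amplification_weight(base_rank_score: float, reliability: float,
                         deamplify_below: float = 0.4) -> float:
    """Scale a post's ranking score by its author's reliability (both assumed in [0, 1]).

    Content from low-reliability accounts is down-weighted rather than removed,
    so reach becomes an earned privilege instead of a default.
    """
    if reliability < deamplify_below:
        return base_rank_score * reliability  # sharply reduced reach
    return base_rank_score

def account_action(reliability: float, flag_below: float = 0.25,
                   remove_below: float = 0.1) -> str:
    """Map an account's reliability to a moderation action (thresholds are illustrative)."""
    if reliability < remove_below:
        return "review_for_removal"
    if reliability < flag_below:
        return "flag"
    return "none"
```

The design choice that matters here is graduated consequences: reduced reach first, flags and removal only at the extremes, so good behavior remains the easiest way to regain full amplification.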
The goal should always be to minimize the risk of amplifying harmful content. To build an ecosystem that gives users reliable content, and reduces their exposure to misinformation. To help users become more discerning and thoughtful in what they share.
Ultimately, misinformation is most effectively combatted by combining human and artificial intelligence. Here, AI is a vital but incomplete solution, with limitations when it comes to context, emotion, intent, satire, and cultural cues.
I’m Building Again
My work in technology—now spanning more than 15 years and many parts of the world, including the US, Europe, South Asia and East Africa—has been wide-ranging, and my experiences as a person of color and a Pakistani-born woman have provided essential depth and insight for every product I’ve built.
In the last decade, I’ve founded a fast-growing ecommerce startup, been part of teams that built the tech core for 2 unicorns, led small and large engineering teams, designed and developed several AI-driven health-tech products, advised venture funds, mentored startup studios on their AI strategy, and coached founders and CEOs.
My earliest interest in computer science was rooted in my fascination with developing computational models of human intelligence. My graduate work at MIT’s CSAIL (with AI legend Patrick Winston) built models of how humans tell, perceive, and understand stories. All of this has come together as I conceptualize a new kind of social media platform—one that is resilient and human. I’m excited to build publicly again and to focus my skills on engineering safe and equitable technology.
More about the social media platform I’m building—Heydays—here (you can also sign up for the beta waitlist and reserve your handle).
If the ideas here excite you, get in touch. I’m looking for exceptional teammates (technical and non-technical), advisors (particularly in journalism, AI, tech policy, and legal), and investors.
My social media DMs are open—reach me via Twitter or Instagram, use the contact form on my website, leave a comment here, or share this post with others.
The world needs you, and your vision, and Heydays more than ever.