Let’s not bury the lead: I was, and remain, permanently suspended by X, formerly known as Twitter. You’re probably wondering what posts landed me on the wrong side of the X moderation team, especially considering that even Alex Jones managed to get reinstated. The truth, I suspect, is sadly less spicy than you’d expect: my posts focus on scientific progress in AI and biotech, mirroring this Substack, and avoid politically charged topics. So, what went wrong? I was never given a clear explanation, just an automated email vaguely referencing the user agreement. Here’s my story; perhaps you can make more sense of it than I could.
Who am I?
I’m a scientist with a deep-seated passion for tackling complex problems. My career spans about 12 years across academia and industry, covering topics from tissue engineering and novel pain-management therapeutics in academia to treatments for antibiotic-resistant diseases and de-extinction in industry. Alongside these scientific pursuits, I have developed a passion for entrepreneurship, which led me to explore the startup world and ultimately engage with a broader audience on platforms like X.
Why X?
I really tried to avoid social media. The perpetual fight for attention on these platforms and their addictive nature seemed incredibly toxic. As a result, I deactivated all such accounts for several years. However, my perspective changed when I realized how crucial personal branding and community reach are for disseminating scientific knowledge and creating a network within my areas of interest. This motivated me to reactivate these accounts and ultimately start using X, which seems to be the dominant platform for researchers and others scientifically inclined.
I started actively using X in June of 2023. I had several intentions here:
I truly believe the freedom of speech philosophy is important for the future of online discourse.
The X microblogging platform seems to be well suited for current events in tech.
There is a large presence of both academic and industry-related personalities who have migrated to the platform.
Most importantly, I wanted to draw additional attention to my Substack posts and interact with the scientific community.
My account and community were fairly small and niche. While I worked my way up to 500+ followers over the past six months, that is still a modest following. I kept my posts mostly to STEM-related topics and purposely avoided politics because I saw no reason to divide my community; there are enough people talking about divisive topics without me contributing. With such a boringly normal account, how did I manage to get on the wrong side of X moderation?
The Suspension
Let’s start with what I found out. On Sunday, December 17, 2023, I received the following email notification:
This is the totality of the information I have received: no warnings, no message detailing how I violated the user agreement, no examples of what I did wrong, just this cryptic email.
I thought there must be some mistake, particularly since I was a verified X Premium+ subscriber and often ran ads. Perhaps the X algorithm mistakenly flagged my account? So, I clicked the link to submit an appeal. Every appeal request produces an automated response from X (flow diagram below) that claims my suspension is under review, despite the message being sent instantaneously upon submission. I then looked for other avenues to contact customer support, but there is no mechanism for getting a human review at X.
I have been exiled from the platform with no explanation, recourse mechanism, or appeal process.
Finding myself abruptly cut off from X, I began searching for alternative ways to address the issue, including looking up X Trust and Safety staff on LinkedIn and messaging them directly. Despite reaching out to 20 or so individuals, I have not received a single reply. It is unclear who else might be able to help me navigate these waters.
Has this happened to anyone else?
This experience raises a critical question: how many others have faced similar situations with X? Has X become so automated and understaffed that users are left with no viable recourse? For better or worse, it looks like I am not alone. There are many threads full of users with similar stories: suspension for violating “Twitter’s Terms of Service” (example link). You have to admit there is something a bit hilarious about the automated responses still referring to X as Twitter; it is a bit of a giveaway as to how far behind their moderation is on the status of the company.
There are many such examples of others being suspended with no explanation, recourse option, or ability to reach support. It turns out that many of these users had Premium+ and/or business accounts that pay for advertising. In other words, X is banning paying, active users who are likely not violating the rules (or at least not knowingly violating them) while simultaneously losing out on significant revenue.
So, why is this happening?
Let’s start with the obvious: the Trust and Safety department is either a complete ghost town or unbelievably backlogged. What should users think when even the automated messages from X are still labeled ‘Twitter’, months after the rebrand?
Okay, the moderation review process is broken. We get it. But how did I get suspended in the first place? I have no way to know for sure and can only speculate, but based on the experiences of other users with similar stories, there seem to be a few mechanisms:
If a user is reported on the platform a certain number of times, the account is flagged.
Some portion (or potentially all) of moderation review is automated, and these systems produce a significant number of false positives.
A nearly fully automated process with no external review or recourse means an X review bot is the judge, jury, and executioner.
What’s next and how can X improve?
When Elon Musk took over Twitter and converted it to X, he said no account would be permanently banned unless it broke the law. Yet, we are in a scenario where spam bots are running out of control on the platform, and real accounts are being suspended. How can X turn this ship around?
I don’t have all of the answers, but I think a few things can be done to begin improving the situation immediately.
Develop Methodology to Distinguish Humans vs Bots
Increased Transparency
Clear Communication: Platforms should provide clear, detailed explanations when actions are taken against an account. This includes specific posts or behaviors that led to the decision.
Public Moderation Data: Regularly publishing data on moderation actions, including the number of appeals and their outcomes, could increase trust and accountability.
Improved Appeal Process
Human Review: Ensure that there is a human review element in the appeal process, particularly for significant actions like bans.
Timely Responses: Set and adhere to timelines for reviewing appeals to avoid leaving users in limbo.
Accessible Support: Provide multiple, easy-to-access avenues for users to seek help and clarification, not just automated systems.
Why does this matter?
As we move deeper into the digital age, the role of major social platforms in our lives becomes increasingly significant. It’s vital that these platforms maintain transparency and fairness, especially when they wield the power to silence voices. My situation is a microcosm of a larger issue many face online. What happened to me isn’t just about a lost account; it highlights the need for robust and fair mechanisms to challenge platform decisions. This is essential both for individuals and for preserving the diverse voices necessary for a healthy digital ecosystem.
Final thoughts and a request to the audience
I never expected to become a case study for problematic platform moderation, yet here I am, sharing my story in hopes it sparks change. For those who have faced similar issues, you’re not alone. It’s time we demand more accountability from the platforms that have become so integral to our public and private lives. Let’s start a conversation on how we can build a fairer digital world.
I humbly request that my readers share this post. It would be helpful if we could reach enough people to have a chance to course-correct the ship. If anyone else has suggestions, please feel free to share them below.
Rename this post "X will ban your paid account for literally no reason at all and there's absolutely no way to appeal", cut down the intro to stick to the point, then link the post as a community note to any tweet Elon makes regarding the censorship resistance of X.
I'm really sorry that happened. Even though all the social media platforms are repulsive in their own ways, it's still infuriating to hear about things like this. It's outrageous that these platforms (and increasingly conventional businesses) keep customers at arm's length by automating everything. Especially when it's a matter of something as substantial as a lifetime ban, there absolutely should be not just a human review process but a human immediately stepping in and handling the case. If anyone's violating the terms of service in these situations, it's the platforms, which present themselves as ways for people to connect, communicate, expand their businesses, and raise their profile with customers and colleagues, but which then think nothing of banning people who have tried to do exactly that, sometimes after years of work on their accounts. Users should also have the right to face their accusers, even if the accuser remains anonymous, by knowing *exactly* which content was allegedly reported, and a human representative should be required to explain *exactly* how that content violates the TOS. Sending a canned message citing the TOS while refusing to explain how someone actually violated it is bullshit, especially when they won't even cite the offending material. And when I say "should be required," I mean required by the guy who runs the company.
As for Musk, I don't trust him as far as I can throw him. If he wanted things to be different there, they would be. If he wanted X to be fair, responsive, and rational, it would be. It wouldn't be set up to jerk people around for no reason. Whatever he is, he's not a defender of free speech.
It probably won't make you feel any better, but Twitter suspended me two years ago for posting Thomas Jefferson's "tree of liberty" quote. That was it. Just the quote. I'm starting to think that what triggers AI is rational content.