Maybe this was more of an intro/pitch to something I already support, so I wasn't quite the audience here.
But I feel that talking about the open social web without addressing the reasons current ones aren't popular/get blocked doesn't lead to much progress. Ultimately, big problems with an open social web include:
- moderation
- spam, which now includes scrapers bringing your site to a crawl
- good faith verification
- posting transparency
These are all hard problems, and they make me believe the future of a proper community lies in charging a small premium. Even charging one dollar for life takes out 99% of spam and imposes a cost on bad faith actors, who need another dollar to re-enter if banned, thus easing moderation needs. But charging money for anything online these days can cause a lot of friction.
In my opinion, both spam and moderation are only really a problem when content is curated (usually algorithmically). I don't need a moderator and don't worry about spam in my RSS reader, for example.
A simple chronological feed of content from feeds I chose to follow is enough. I do have to take on the challenge of finding new content sources, but at least for me that's a worthwhile tradeoff: I'm not inundated with spam and I don't feel dependent on someone else to moderate what I see.
That just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.
And how did you discover those feeds in the first place? Or find new ones?
I know people have tried to build relatively closed meshes of trust, but you still need people to moderate new applicants, otherwise you never get new ideas or fresh discussion. And if it keeps growing, scale means that group will slowly gather bad actors. Maybe directly, by putting up whatever front they need to get into the mesh or into existing in-mesh accounts. Maybe existing accounts get hacked. Maybe previously 'good' account owners change, in opinion or situation, and take advantage of their in-mesh position. It feels like a speedrun of the growth of the internet itself.
> That just means you're effectively acting as a moderator yourself, only with a whitelist
Agreed, though when you are your own moderator that really is more about informed consent or free will than moderation. Moderation, at least in my opinion, implies a third party.
> And how did you discover those feeds in the first place? Or find new ones?
The same way I make new friends. Recommendations from those I already trust, or "friend of a friend" type situations. I don't need an outside matchmaker to introduce me to people they think I would be friends with.
I think it's the act of creating an access point that allows posting that gets you spam, not whether the content is curated. Your email isn't a curated feed, but it will get tons of spam because people can "post" to it once they get your address. Same with your cell phone number and your physical mailbox.
Since a community requires posting and an access point, spam is pretty much inevitable.
Having worked on the problem for years, I find decentralized social networking such a tar pit of privacy, security, and social problems that I can't get excited by it anymore. We are now clear on what the problems with mainstream social networking at scale are, and decentralization only seems to make them worse and more intractable.
I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
It's unfortunate, and I don't want to say decentralization isn't viable at all. But at best I see decentralization addressing the issue of scraping. It's solving different problems without addressing the core ones needed to make a new community functional. That said, I think both kinds of tech can execute on addressing these issues.
I'm not against subscriptions per se, but I do think a one-time entry cost is really all that's needed to achieve many of the desired effects. I'm probably in the minority as someone who'd rather pay $10 one time to enter a community than $1-2/month to maintain my participation, though. I'm just personally tired of feeling like I'm paying a tax to construct something that may one day be good, rather than buying into a decently polished product upfront.
If I have to pay you to access a service, and I'm not doing so through one of a small number of anonymity-preserving cryptocurrencies such as Bitcoin or Monero, then the legitimate financial system has an ultimate veto on what I can say online.
It does if you don't pay to access the service as well, because the financial system is the underpinning of their ad network.
Even in a federated system, you can be blacklisted although it does take more coordination and work.
i2p and writing to the blockchain are an attempt to deal with that through permanence, but those are not without their own (serious) problems.
Yeah, kind of agree. Decentralised protocols are forced to expose a lot of data which could normally be kept private, like users' own likes.
Dunno if they're necessarily _forced_ to expose that data.
Something like OAuth means that you can give different levels of private data to different actors, based on what perms they request.
Then you just have whoever is holding your data anyway (it's gotta live somewhere) also handle the OAuth keys. That's how the Bluesky PDS system works, basically.
Now, there is an issue with blanket requesting/granting of perms (which an end user isn't necessarily going to know about), but IMO all that's missing from the Bluesky-style system is a way to reject individual OAuth grants (for example, making it so Bluesky can't read my likes, but can still write to them).
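For illustration, here's roughly what per-grant rejection could look like. The scope names and the data model below are hypothetical, not Bluesky's actual OAuth surface:

```python
# Hypothetical sketch of per-permission grants, where a user can strike out
# individual scopes (e.g. allow writing likes but not reading them).
# Scope names and the grant model are illustrative, not Bluesky's actual API.
from dataclasses import dataclass, field

@dataclass
class Grant:
    client_id: str
    approved_scopes: set[str] = field(default_factory=set)

class PersonalDataServer:
    def __init__(self):
        self.grants: dict[str, Grant] = {}

    def request_scopes(self, client_id: str, requested: set[str],
                       user_rejects: set[str]) -> Grant:
        """User reviews the requested scopes and rejects the ones
        they don't want to hand over."""
        grant = Grant(client_id, requested - user_rejects)
        self.grants[client_id] = grant
        return grant

    def authorize(self, client_id: str, scope: str) -> bool:
        grant = self.grants.get(client_id)
        return grant is not None and scope in grant.approved_scopes

pds = PersonalDataServer()
pds.request_scopes("bluesky-appview",
                   requested={"likes:read", "likes:write", "posts:read"},
                   user_rejects={"likes:read"})
assert pds.authorize("bluesky-appview", "likes:write")
assert not pds.authorize("bluesky-appview", "likes:read")
```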
In a federated system, the best you can do is a soft delete request, and ignoring that request is easier than satisfying it.
If I have 100 followers on 100 different nodes, that means each node has access to (and holds on to) some portion of my data by way of those followers.
In a centralized system, a user having total control over their data (and the ability to delete it) is more feasible. I'm not saying modern systems are great about this (GDPR was necessary to force their hands), but federation makes it more technically difficult.
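A toy illustration of why deletion is only best-effort once data has federated out; none of this is ActivityPub's real wire format, it just shows the trust problem:

```python
# The origin node can broadcast a delete request, but each remote node
# independently decides whether to honor it.
class RemoteNode:
    def __init__(self, honors_deletes: bool):
        self.honors_deletes = honors_deletes
        self.cached_posts = set()

    def receive(self, post_id):
        self.cached_posts.add(post_id)

    def receive_delete(self, post_id):
        if self.honors_deletes:          # purely voluntary
            self.cached_posts.discard(post_id)

nodes = [RemoteNode(True), RemoteNode(False)]
for n in nodes:
    n.receive("post-42")
for n in nodes:
    n.receive_delete("post-42")          # a request, not a guarantee
print([n.cached_posts for n in nodes])   # {}, {'post-42'}: one node kept it
```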
>I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
"Startup engineer" believes the solution to decentralization is a startup, what a shock. We look forward to your launch.
I'm a consultant that builds for startups. I'm not an entrepreneur myself.
If I were to build something like this, I'd use a services non-profit model.
Ad-supported apps result in way too many perverse economic incentives in social media, as we've seen time and time again.
I worked on open source decentralized social networking for 12 years, starting before Facebook even launched. Decentralization, and specifically political decentralization, which is what federation is, makes the problems of moderation, third-order social effects, privacy, and spam exceedingly more difficult.
>Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.
I disagree that federation is "specifically political decentralization", but how so?
You claim that decentralization makes all of the problems of mainstream social networking worse and more intractable, but I think most of those problems come from the centralized nature of mainstream social media.
There is only one Facebook and only one Twitter, and if you don't like the way Zuckerberg and Musk run things, too bad. If you don't like the way moderation works on an instance, you don't have to federate with it; you can create your own instance and moderate however you see fit.
This seems like a better solution than everyone being subject to the whims of a centralized service.
To clarify, I don't mean big-P Politics. I mean political in the sense that each node is owned and operated separately, which means there are competing interests and a need to coordinate between them that extends beyond the technical. Extrapolated to N potential nodes, that creates a lot of conflicting incentives and perspectives that have to be managed. And if the network ever becomes concentrated in a handful of nodes, or even one of them (which is not unlikely), then we're effectively back at square one.
| if you don't like the way Zuckerberg and Musk run things, too bad
It's important to note we're optimizing for different things. When I say third-order social effects, I mean the way that engagement algorithms and virality combine with massive scale to create a broadly negative effect on society. This comes in the form of addiction, of constant upward social comparison leading to depression and burnout, or, in extreme situations, of society's worst tendencies being amplified into terrible results, with Myanmar being the worst-case scenario.
You assume centralization means total monopolization, which neither Twitter nor Facebook nor Reddit nor anyone else has achieved. You may lose access to a specific audience, but nobody has a right to an audience. You can always put up a website, write a blog, write op-eds for your local newspaper, hold a sign in a public square, etc. The mere existence of a centralized system with moderation is not a threat to freedom of speech.
Federation is a little more resilient, but accounts can be blacklisted, and whole nodes can be blacklisted because of the behavior of a handful of accounts. And unfortunately, that little bit of resilience amplifies the problem of spam and bots, which for the average user is a much bigger concern than losing their account. Not to mention privacy concerns, where it's self-evident why an open system is more difficult than a closed one.
I'll concede that "worse" was poor wording, but intractable certainly wasn't. These problems become much more difficult to solve in a federated system.
However, most advocates of federation aren't interested in solving the same problems as I am, so that's where the dissonance comes from.
> Ultimately, big problems with an open social web include:
These two seem like the same problem:
> moderation
> spam
You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.
Now you don't see trash unless you follow people who like trash, and then whose fault is that?
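As a rough sketch of that feed logic (the scoring and the halving factor below are arbitrary, just to show the mechanism):

```python
# You see a post only if someone you follow wrote it or liked it, and
# disliking what a followee likes teaches the client to down-weight
# that followee's likes.
from collections import defaultdict

class Client:
    def __init__(self, follows):
        self.follows = set(follows)
        self.like_weight = defaultdict(lambda: 1.0)  # trust in each followee's likes

    def record_dislike(self, liker):
        # You disliked something this followee liked: trust their likes less.
        self.like_weight[liker] *= 0.5

    def feed(self, posts):
        """posts: list of (post_id, author, likers) tuples."""
        scored = []
        for post_id, author, likers in posts:
            if author in self.follows:
                scored.append((1.0, post_id))        # direct follow: always show
            else:
                score = sum(self.like_weight[l] for l in likers & self.follows)
                if score > 0:
                    scored.append((score, post_id))  # surfaced via followees' likes
        return [p for _, p in sorted(scored, reverse=True)]

c = Client(follows={"alice", "bob"})
c.record_dislike("bob")  # bob keeps liking trash
print(c.feed([("p1", "carol", {"alice"}), ("p2", "dave", {"bob"})]))  # p1 outranks p2
```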
> which now includes scrapers bringing your site to a crawl
This is a completely independent problem from spam. It's also something decentralized networks are actually good at: if more devices are requesting some data, then there are more sources of it. Let the bots get the data from each other. Track share ratios so that high-traffic nodes with bad ratios get banned for leeching, making it cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
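Something like this, as a rough sketch with made-up thresholds:

```python
# Nodes that download far more than they upload get banned, pushing
# high-volume scrapers toward seeding data back instead of leeching.
class RatioTracker:
    MIN_RATIO = 0.2            # uploaded/downloaded below this is leeching
    HIGH_TRAFFIC = 10 * 2**30  # only police nodes past ~10 GiB downloaded

    def __init__(self):
        self.stats = {}   # node_id -> [uploaded_bytes, downloaded_bytes]
        self.banned = set()

    def record(self, node_id, uploaded, downloaded):
        up, down = self.stats.setdefault(node_id, [0, 0])
        self.stats[node_id] = [up + uploaded, down + downloaded]
        self._check(node_id)

    def _check(self, node_id):
        up, down = self.stats[node_id]
        if down > self.HIGH_TRAFFIC and up / down < self.MIN_RATIO:
            self.banned.add(node_id)  # bad ratio at high volume: cut off

    def allow(self, node_id):
        return node_id not in self.banned
```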
> good faith verification
> posting transparency
It's not clear what these are, but they sound like kind of the same thing again, and in particular they sound like elements of the authoritarian censorship toolbox, which you don't actually need or want once you start showing people the posts they actually want to see instead of a bunch of spam from anons that nobody they follow likes.
>You need some way of distinguishing high quality from low quality posts.
Yes. But I see curation more as a second-order problem to solve once the bases are covered. Moderation focuses on addressing the low quality, while curation makes sure the high-quality posts receive focus.
The tools needed for curation (filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignoring, downvoting, reporting). Failing at the latter poisons a site before it can really start curating for its users.
>This is a completely independent problem from spam.
Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.
>It's not clear what these are but they sound like kind of the same thing again
I can clarify. In short, posting transparency focuses more on the user, and good faith verification focuses more on the content. (I'm also horrible with naming, so I welcome better terms for these.)
- Posting transparency at this point has one big goal: ensuring you know when a human or a bot is posting. But it extends to ensuring there's no impersonation, no abuse of alt accounts, and no voting manipulation.
It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep privacy.
- Good faith verification refers more to a duty to properly vet and fact-check information that is posted. It may include addressing misinformation and hate, or removing unvetted intimate advice like legal/medical claims without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.
>they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see
Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep those systems up are equally dangerous on an ad-driven platform. Addressing that naturally requires some more authoritarian approaches.
That's why "good faith" is an important factor here. Any authoritarian act you introduce can only work on trust, and is easily broken by abuse. If we want incentives to change from "maximizing engagement" to "maximizing quality and community", we need to cull out malicious information.
We already accept some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.
A lot of tech folks hate government ID schemes, but I think mDL (mobile driver's licenses) with some sort of pairwise pseudonyms could help with spam and verification.
It would let you identify users uniquely, but without revealing too much sensitive information. It would let you verify things like "This user has a Michigan driver's license, and they have an ID 1234, which is unique to my system and not linkable to any other place they use that ID."
If you ban that user, they wouldn't be able to use that ID again with you.
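As a toy illustration of the unlinkability property (a real mDL/eIDAS deployment would use credentials and selective disclosure rather than a bare HMAC, and the key handling here is hypothetical):

```python
# The ID issuer derives a per-site identifier from the holder's stable ID:
# one site can ban a user persistently, but two sites can't link the person.
import hmac, hashlib

ISSUER_SECRET = b"held only by the ID issuer"  # hypothetical key material

def pairwise_pseudonym(holder_id: str, relying_party: str) -> str:
    msg = f"{holder_id}|{relying_party}".encode()
    return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()[:16]

# Same person, two sites: the identifiers don't correlate.
print(pairwise_pseudonym("MI-license-1234", "forum.example"))
print(pairwise_pseudonym("MI-license-1234", "other.example"))

# Same person returning to the same site: same identifier, so a ban sticks.
assert pairwise_pseudonym("MI-license-1234", "forum.example") == \
       pairwise_pseudonym("MI-license-1234", "forum.example")
```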
The alternative is that we continue to let unelected private operators like Cloudflare "solve" this problem.
Telegram added a feature where, if someone cold-DMs you, it shows their phone number's country and account age. When I see a two-month-old account with a Nigerian phone number, I know it's a bot and I can ignore it.
The EU’s eIDAS 2.0 specification for their digital wallet identity explicitly supports the use of pseudonyms for this exact purpose of “Anonymous authentication”.
Those are important reasons, but there are other reasons as well, such as concentration of market power in a few companies, which allows those companies to erect barriers to entry and shape law in ways that benefit themselves, as well as simply creating network effects that make it hard for new social-web projects to establish a foothold.
That's an even harder problem to solve. I do agree we should make sure policy isn't manipulated by vested powers into making competition even harder.
But network effects seem to be a natural phenomenon of people wanting to establish a familiar routine. I look at Steam as an example here: while it has its own shady schemes behind the scenes (which I hope are addressed), it otherwise doesn't engage in the same dark patterns as other monopolies. But it still creates a strong network effect.
I think the main solace here is that you don't need to be dominant to create a good community. You need to focus instead on getting above a certain critical mass, where you keep a healthy stream of posting and participation that can sustain itself. Social media should ultimately be about establishing a space for a community to flourish, and small communities are just as valid.
"- moderation
- spam, which now includes scrapers bringing your site to a crawl
- good faith verification
- posting transparency"
And we have to think about how to hit these targets while:
- respecting individual sovereignty
- respecting privacy
- meeting any other obligations or responsibilities within reason
and of course, it must be EASY and dead simple to use.
It's doable, we've done far more impossible-seeming things just in the last 30 years, so it's just a matter of willpower now.
It'd be cool if you had to pay a certain amount of money to publish any message.
And then if you could verify you'd paid it in a completely P2P decentralized fashion.
I'm not a crypto fan, but I'd appreciate a message graph where high signal messages "burned" or "donated money" to be flagged for attention.
I'd also like it if my attention were paid for by those wishing to have it, but that's a separate problem.
it's pure waste-generation, but hashcash is a fairly old strategy for this, and it's one of the foundations of Bitcoin. there's no "proof of payment to any beneficial recipient", sadly, but it does throttle high-volume spammers pretty effectively.
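For reference, the core of hashcash fits in a few lines; the difficulty and encoding below are simplified, not the real stamp format:

```python
# The sender burns CPU finding a nonce whose hash has N leading zero bits;
# the receiver verifies with a single hash. Nobody receives the spent work,
# but bulk posting becomes expensive.
import hashlib
from itertools import count

BITS = 20  # difficulty; each extra bit doubles the average minting cost

def mint(message: str) -> int:
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - BITS) == 0:
            return nonce

def verify(message: str, nonce: int) -> bool:
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - BITS) == 0

nonce = mint("hello, forum")          # ~2**20 hashes to find, on average
assert verify("hello, forum", nonce)  # but one hash to check
```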
Maybe if you could prove you sent a payment to a charity node and then signed your message with the receipt for verification...
Imagine a world where every City Hall has a vending machine you can use to donate a couple bucks to a charity of your choice, and receive an anonymous, one-time-use "some real human physically present donated real money to make this" token.
You could then spend the token with a forum to gain basic trust for an otherwise anonymous account.
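A sketch of how a forum might redeem such tokens; the HMAC here stands in for a real signature scheme (the forum shouldn't actually hold the machine's secret), so treat it as illustrative only:

```python
# The machine tags a random serial; the forum checks the tag and a spent-set,
# so each token buys trust exactly once and carries no identity.
import os, hashlib, hmac

MACHINE_KEY = b"secret inside the vending machine"  # hypothetical

def issue_token() -> tuple[bytes, bytes]:
    serial = os.urandom(16)
    tag = hmac.new(MACHINE_KEY, serial, hashlib.sha256).digest()
    return serial, tag

spent: set[bytes] = set()

def redeem(serial: bytes, tag: bytes) -> bool:
    expected = hmac.new(MACHINE_KEY, serial, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected) or serial in spent:
        return False
    spent.add(serial)  # token is one-time use
    return True

serial, tag = issue_token()
assert redeem(serial, tag)
assert not redeem(serial, tag)  # double-spend rejected
```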
I like that idea a lot.
To check out other FediForum keynotes, many demos showing off innovative open social web software, and notes from the FediForum unconference sessions, go to https://fediforum.org (disclaimer: FediForum co-organizer here)
Social media is simply an extension from cybernetics to the principles of cog-sci as a "protocol" network where status and control are the primary forces mediated. This is irrefutable - the web was built as an extension of the cog-sci parameters of information as control.
Social media can't be saved, it can only be revolutionary as a development arena for a new form of language.
"The subject of integration was socialization; the subject of coordination was communication. Both were part of the theme of control...Cybernetics dispensed with the need for biological organisms, it as the parent to cognitive science, where the social is theorized strictly in terms of the exchange of information. Receivers, senses of signs need to be known in terms of channels, capacities, error rates, frequencies and so forth." Haraway Primate Visions.
I don't understand how technologists and coders can be this naive about the ramifications of electronically externalizing signals which start as arbitrary in person, and then clearly spiral out of control once accelerated and cut off from the initial conditions.
I believe that the more populist layer of the www became social media apps. Hosted LLMs (Claude, ChatGPT, etc.) are going to become the popular source of information, and therefore narrative. What we must remember is that we should retain control of our thoughts, and be aware of how we can share them without financially interested parties claiming rights to their use or abuse. I am trying to solve some of these problems with NoteSub App - https://apps.apple.com/gb/app/notesub/id6742334239 - but have yet to overcome the real issue of how we can stop the middleman keeping the loop closed with him in between.
I've never really got social media in any of its forms. I use messaging apps to stay in contact with people I like, but that's about it.
I skimmed this article and I still don't get it. I think group chats cover most of what the author is talking about, public and private ones. But this might be my lack of imagination. I feel the article, and by extension the talk, could have been a lot shorter.
> skimmed this article, I still don't get it.
But you're posting here, on social media, no? So you sought out something here that a group chat wouldn't give.
Most of the article is focused on making sure any social media (be it chats, a public forum, or email) isn't hijacked by vested powers who want to spread propaganda or drown the user in ads. One approach the article focuses on is decentralization, which gives a user the ability to take their ball and go home.
Of course, it's futile if the user doesn't care about wielding that power.
Group chats are where real people socialise with their actual friends now. Social media is where people consume infinite slop feeds for entertainment. The days of people posting their weekend on Facebook are long gone.
> The days of people posting their weekend on Facebook are long gone.
All of my friends do this on Instagram or Snap.
Group chats are lowercase S social media but they still benefit from being open.
By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?
> By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?
Social media has two functions: chat (within groups/topics/...) and discovery (of groups/topics/...). So unless we rely only on IRL discovery, we need a way to do discovery online.
Discovery is probably the main problem social media creates. Almost all of these problems solve themselves when you remove discovery. If someone in your friends group chat is spamming porn you just remove them. There's no need for the platform to intervene here, small groups of people can moderate their own friend groups.
Once you start algorithmically shoving content to people you have to start worrying about spam, trolling, politics, copyright, and all kinds of issues. The best discovery system is friends sharing chat invite links to other friends who are interested.
Why was this chosen to be a keynote? This talk seems to not care about open social media, but rather that existing social media sites don't follow the author's political agenda. Having a keynote trying to rally people into building sites that support a niche political agenda that the general public doesn't agree with doesn't accomplish the goals of making open social media more viable. This along with equating things with Nazis just further alienates people.
> What specific pain point are you solving that keeps people on WhatsApp despite the surveillance risk, or on X despite the white supremacy?
Why wouldn't a genuinely open social web allow people to communicate content that Ben Werdmuller thinks constitutes white supremacy, just as one can on X? Ideas and opinions that Ben Werdmuller (and people with similar activist politics to him) think constitute white supremacy are very popular among huge segments of the English-speaking public, and if it's even possible for some moderator with politics like Werdmuller to prevent these messages from being promulgated (as was the case at Twitter until Musk bought it in 2022 and fired all the Trust and Safety people with politics similar to Werdmuller's), then it is not meaningfully open. If this is not possible, then would people with Werdmuller's politics still want to use an open social web, rather than a closed social web that lets moderators attempt to suppress content they deem white supremacist?
> As I was writing this talk, an entire apartment building in Chicago was raided. Adults were separated into trucks based on race, regardless of their citizenship status. Children were zip tied to each other.
> And we are at the foothills of this. Every week, it ratchets up. Every week, there’s something new. Every week, there’s a new restrictive social media policy or a news outlet disappears, removing our ability to accurately learn about what’s happening around us.
The reaction to the raid of that apartment building in Chicago on many social media platforms was the specific meme-phrase "this is what I voted for", and indeed Donald Trump openly ran on doing this, and won the US presidential election. What prevents someone from using open social media tech to call for going harder on deportations, or to spread news stories about violent crimes and fraud committed by immigrants? If anything can prevent this, how can the platform be said to be actually open?
Spritely is the solution. Been baking for a few years now. Just pushed an update last week, in fact: https://spritely.institute/
While I tend to support there being open social alternatives, I haven’t really seen the people behind them talk about the most important aspect: how will you attract and retain users? There has to be more to the value proposition than “it’s open”. The vast majority of users simply do not care about this. They want to be where their friends, family, and favorite content creators are. They want innovation in both content and format. Until the well intentioned people behind these various open web platforms and non-platforms internalize and act on these realities, the whole enterprise is doomed to be a niche movement that will eventually go out with a whimper.
Whatever happened to Diaspora?
Social media relies on our dead, arbitrary signaling system, language, which once accelerated becomes a cybernetic/cog-sci control network, no matter how it's operated. Language is about control, status, and bias before it's an attempt to communicate information. It's doomed as an external system of arbitrary symbols.