Social media and "deplatforming"
Feb. 24th, 2021 06:49 pm

Debate: Big Tech was Right to Deplatform Trump
https://play.acast.com/s/intelligencesquared/debate-bigtechwasrighttodeplatformtrump
This fiery debate is worth listening to, if you can get past a couple of slimy sleight-of-hand arguments thrown in by Nadine Strossen during the opening and closing statements.
But I am disappointed to see that none of these fine people actually get the point about social media companies and what they do. In this debate they've broken the issue down along these lines:
1. Kicking Trump off their platform was a sensible response to criminal acts that violated their terms of service.
2. Kicking Trump off their platform is censorship and sets a terrible precedent where big tech companies can entirely ditch notions of free speech, and censor anyone they like for any reason.
But here's the thing. Kicking Trump off these platforms entirely ... is a bit of a red herring.
Social media companies, by definition, always do one thing: they take in pieces of content generated by users of the platform, mix it all around in various ways, throw in advertising to suit their clients, and then present that content back to other users. To differentiate themselves, these companies usually add extra features - length limits, hashtags, robot-generated content, friend and follower lists, and so on - but they're all variations on the same theme: content made by users, remixed and fed back to other users.
Here's what all four debaters in that podcast fail to understand: There is no such thing as free speech on a social media website. To perform its basic service - to do what it is expected to do by definition - a social media company makes trillions of split-second decisions every day about what to show, and what to hide, for every single user. Every single one of these tiny decisions is made deliberately, according to rules and software constructed by humans employed at that company, and is itself a microscopic act of censorship.
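To make that concrete, here is a deliberately crude sketch, in Python and entirely made up - no real platform's code, models, or numbers - of what one of those tiny decisions looks like. Everything that doesn't make the cut simply never appears for that user:

    # Toy sketch of a single feed-selection step. The scoring weights, the
    # paid boost, and the cutoff are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str
        predicted_engagement: float  # how likely this user is to react, per some model
        advertiser_paid: bool = False

    def build_feed(candidates, feed_length=3):
        """Pick which posts a given user actually gets to see.
        Everything not returned is, in effect, silently hidden from that user."""
        def score(post):
            boost = 2.0 if post.advertiser_paid else 1.0  # paid promotion gets a bump
            return post.predicted_engagement * boost
        return sorted(candidates, key=score, reverse=True)[:feed_length]

    pool = [
        Post("alice", "local news", 0.40),
        Post("bob", "inflammatory rant", 0.90),
        Post("carol", "vacation photos", 0.20),
        Post("dave", "sponsored gadget", 0.30, advertiser_paid=True),
    ]
    for p in build_feed(pool):
        print(p.author, "->", p.text)

Note what the scoring function does not look at: whether anything in the post is true, dangerous, or decent. That omission is the whole point.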
You imagine yourself standing in a "public square" and talking out loud on these platforms. You imagine your voice radiating away from your body, in a physical space, loud at first but growing faint with distance, according to laws of physics that no one can interfere with. To reinforce that image, the platform shows your content in a context with others', and you imagine those people standing nearby, like they just happened to be passing by, or they walked over to hear you speak.
That image is, of course, a lie.
A more accurate image of what happens on social media is this: you are jammed in a tiny, dark box, alone, and there are two slots on the wall. A continuous stream of words printed on ticker-tape spews out of one, and into the other you cram your own words, printed on ticker-tape. They vanish into the wall. And all around you, beyond the walls of your tiny box, mysterious machinery hums and clicks. Whether your words reach a hundred people, a million people, or no one at all is entirely up to the machine.
Oh -- well, there is one way to make your words reach more people. The wall also has a third slot. You can feed money into that.
Social media companies censor you, and me, and everyone else, countless times a day, invisibly. It's what they do. They do it almost entirely through automated systems driven by computer code, which is how they are able to operate at such high volume. It is extremely inconvenient for them to employ actual humans to read through and pass judgement on whether some little scrap of content should be chosen over others to present to a given user; there are just too many connections between people to consider. So they rely on software to do that job just about all the time, including software that receives feedback from users - some deliberate, some observed by stealth - and uses that feedback to tip the scales. If they had to pay humans an hourly wage to make the equivalent decisions, they'd all be bankrupt in a matter of hours.
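If it helps to picture how that feedback "tips the scales", here is a minimal, made-up sketch of the loop: one observed interaction, one nudge to a weight, and no human anywhere in it. The signals and constants are illustrative assumptions, not anyone's production values:

    # Toy sketch: each observed interaction nudges a content weight up or down.
    def update_weight(current_weight, clicked, dwell_seconds, learning_rate=0.1):
        # 'clicked' is deliberate feedback; 'dwell_seconds' is feedback gathered
        # by stealth (how long the item sat on screen before being scrolled past).
        signal = (1.0 if clicked else 0.0) + min(dwell_seconds, 30.0) / 30.0
        target = signal / 2.0  # normalize to the 0..1 range
        return current_weight + learning_rate * (target - current_weight)

    weight = 0.5
    for clicked, dwell in [(True, 12.0), (False, 1.0), (True, 25.0)]:
        weight = update_weight(weight, clicked, dwell)
        print(round(weight, 3))

Run it and the weight creeps upward with every lingering view - which is all the machine needs to know about you.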
Because they have to rely on automated systems, and those systems have subtle flaws and can be gamed by clever humans or just plain overwhelmed, they are compelled, from both an ethical and a legal standpoint, to declare a "hands-off" approach to the content they show users. To put it bluntly, in order to be profitable, they have to say: "If you see something dangerous, wrong, or offensive, we are not responsible. That's entirely the doing of the user who fed it into our machine."
But it isn't, obviously. It's just that the code they wrote doesn't care what's dangerous, wrong, or offensive. It just cares what makes you a repeat customer.
Social media companies are transmitting a whole lot of communication - according to their own inscrutable rules - and that sheer volume has led people to believe that they are the de facto equivalent of the "public square". And, laboring under that misconception, people like the debaters in the podcast above are shouting that we need to keep social media "censorship free" in order to stop social media companies from having "too much power". Sorry, but that's like declaring that geese should not go "honk" because it makes them sound too much like geese. It's pointless, and impossible, and it makes no sense.
Another important point here is that somewhere in this debate there is a border between what people think of as "censorship" and what could instead be defined as "editorial review". That is, it's possible to define almost all the decisions that social media software makes as equivalent to the editor of a newspaper deciding what's fit to print. Put that way, the process sounds less monstrous -- and the label is, I think, more accurate, given that these are corporations and not government entities doing the editing. But it also makes much more clearly the point that these corporations are responsible for their editorial choices, and for the content of the "newspaper" they deliver. No social media company can make any claim of factual accuracy or impartiality - they're clearly not journalists, and are not beholden to journalistic standards of truth in reporting - but perhaps that should change. Or perhaps they should be compelled to draw the line a little more clearly between what they do and what an entity like, say, The Wall Street Journal does.
Here's my advice:
Make social media companies more responsible for the content they take in, remix, and selectively show you. Hold their feet to the fire as enablers of violence, idiocy, and thievery, whenever any of it manifests. They are in possession of all the knobs they need to make something visible to every single user of their platform, or to no one at all; make them responsible for tuning those knobs. Accept as a given that they can, and do, censor you and everyone else, capriciously, according to their own whims and tastes, and that this is the price you pay for admission to their kingdom. And, at the same time:
Recognize and punish monopolistic behavior on the part of social media companies. Stop them from shutting out competitors, and/or break them the hell up.
Selective censorship is a fundamental component of social media -- of the very service they offer. Social media cannot function without it. Recognizing this fact reveals another fact: We can never trust these companies to act as a "public square" or to represent free speech. So we need to make sure they don't get big enough to become a threat to those things. And they are already big enough. Social media as a communications construct is not going away, and it's better that corporations hold the knobs of censorship than that the government does, but not if those corporations are "too big to fail" where free speech or democracy is concerned. They need to be kicked down to manageable size.