What if YouTube stopped recommending videos?
What if a state official in Texas or Florida could order Instagram not to remove vaccine misinformation that violates the app’s rules?
Or what if TikTok rebuilt its For You feed so that content moderators had to approve videos before they appeared?
The Supreme Court this week opened the door to radically different ways of thinking about social media and the internet. The court is set to hear as many as three cases this term about the legal protections that social media companies used to become industry giants, and about the freewheeling control those companies now have over online speech, entertainment and information.
Its rulings could mark the beginning of a new reality on the internet, one in which platforms are far more cautious about the content they distribute to billions of people every day. Alternatively, the court could create a situation in which tech companies have little power to moderate what users post, undoing years of efforts to limit the spread of misinformation, abuse and hate speech.
The result could make parts of the internet unrecognizable, as some voices grow louder or quieter, and information spreads in different ways.
“The key to the future of the internet is being able to strike that balance between preserving that participatory nature and increasing access to good information,” said Robyn Caplan, a senior research scientist at Data & Society, a nonprofit that studies the internet.
At issue in one case the court has agreed to hear are targeted recommendations, the suggestions that services make to keep people clicking, scrolling and watching. Tech companies generally can’t be sued simply for allowing people to post problematic content, but in the coming months the court will consider whether that immunity extends to posts the companies themselves recommend.
A second case, involving Twitter, asks how aggressive tech companies must be in stopping terrorists from using their services, and a third case, which the court has not yet agreed to hear, may center on state laws in Texas and Florida that bar tech companies from taking down broad categories of material.
The Supreme Court’s decision to hear the targeted-recommendations case landed like a bombshell in the tech sector on Monday, because the high court has never fully considered the question of when companies can be sued over material that others post on online services. Lower courts have repeatedly held the companies immune in nearly all such cases under a 1996 federal law, Section 230 of the Communications Decency Act.
The recommendations case involves YouTube videos about the Islamic State terrorist group, but the outcome could affect a wide range of tech companies depending on how the court rules later this year or next.
“They will see this case as potentially an existential threat,” said Ryan Calo, a law professor at the University of Washington.
If tech companies lose immunity for recommended posts, companies that rely on unvetted user-generated content, like Instagram and TikTok, may need to rethink how they connect people with content.
“At the very least, they’re going to have to be much, much more careful about what they leave up on their platforms, or much more careful about what they let their recommendation engines do for people,” Calo said. (A colleague of Calo’s filed the lawsuit in question, although Calo is not involved in the case.)
The two cases the Supreme Court has agreed to hear, and the third likely to come, present a test of the legal and political might of the tech industry, which has faced increased scrutiny in Washington from lawmakers and regulators but has largely fended off major threats to its considerable profits and influence.
In other words, the court could curb Big Tech in a way that Congress has so far declined to.
“What this could do is put more pressure on platforms to give users more transparency about how the recommender system works, and then control over it,” said Brandie Nonnecke, who researches tech companies as founding director of the CITRIS Policy Lab at the University of California, Berkeley.
“These are largely uncontrolled media systems that deliver content to people in ways you and I don’t understand,” she said.
The Supreme Court’s ruling on targeted recommendations won’t necessarily affect online services that make recommendations but don’t allow user-generated content, such as Netflix or Spotify.
The immunity granted by lower courts under Section 230 has helped make possible a whole generation of internet companies, from review sites like Yelp and Glassdoor, to news websites that allow user comments, to social media companies that let people post more or less freely. Businesses can leave up or remove individual posts largely without fear of lawsuits over defamation or invasion of privacy.
Jeff Kosseff, the author of a book on Section 230, “The Twenty-Six Words That Created the Internet,” said the outcome of the Supreme Court case was impossible to predict, but that smaller companies with fewer resources had the most to lose.
“If the scope of Section 230 were substantially narrowed, I think you’d see smaller platforms in particular really second-guessing whether they want to take the risk of allowing user content,” he said.
“If you’re a hyper-local news site that allows comments on your stories, and you might not even have defamation insurance, you’ll think twice about allowing comments,” he said.
The idea of stripping tech companies of immunity for algorithmic amplification has been around for years. Roger McNamee, a venture capitalist and early Facebook investor, proposed it in 2020. Two members of Congress put the idea into legislation the same year.
When the court hears arguments in the case, it will do so in the context of an internet very different from the one that existed in 1996. In those days, the relatively few people who used the internet often did so via dial-up modems, and websites had few if any recommendation engines.
Tech companies were also in their infancy. Now, US tech companies are among the most valuable companies on the planet.
“In today’s world, the internet will be just fine and no longer needs this protection,” said Mary Graw Leary, a law professor at the Catholic University of America.
Leary said the Supreme Court should consider the broader context of the Communications Decency Act, which also included anti-obscenity provisions designed to protect children from pornography.
“As industries grow and become more and more powerful, and we become increasingly aware of the extent of the harm industries can cause, there is more of a need for regulation,” she said.