by Ethan Zuckerman, The Conversation
Leaked internal documents suggest Facebook—which recently renamed itself Meta—is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform.
Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the COVID-19 vaccine than those who got their news from mainstream media sources.
As a researcher who studies social and civic media, I believe it's critically important to understand how misinformation spreads online. But this is easier said than done. Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter misinformation, and are certain users especially likely to be affected by misinformation? These questions are the denominator problem and the distribution problem.
The COVID-19 misinformation study, "Facebook's Algorithm: a Major Threat to Public Health", published by public interest advocacy group Avaaz in August 2020, reported that sources that frequently shared health misinformation—82 websites and 42 Facebook pages—had an estimated total reach of 3.8 billion views in a year.
At first glance, that's a stunningly large number. But it's important to remember that this is the numerator, the part of a fraction above the line. To understand what 3.8 billion views in a year means, you also have to calculate the denominator, the part of the fraction below the line by which the numerator is divided.
Getting some perspective
One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of information does the average Facebook user encounter in a year? Facebook does not disclose that information.
Market researchers estimate that Facebook users spend from 19 minutes a day to 38 minutes a day on the platform. If the 1.93 billion daily active users of Facebook see an average of 10 posts in their daily sessions—a very conservative estimate—the denominator for that 3.8 billion pieces of information per year is 7.044 trillion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means roughly 0.05% of the content Facebook users see comes from these suspect Facebook pages.
The 3.8 billion views figure encompasses all content published on these pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller than one-twentieth of a percent.
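The arithmetic above can be sketched directly. This is a back-of-the-envelope check using only the article's own figures and its stated conservative assumption of 10 posts seen per user per day:

```python
# Back-of-the-envelope denominator estimate from the figures above.
daily_active_users = 1.93e9   # Facebook daily active users
posts_seen_per_day = 10       # conservative assumption of posts seen per session
days_per_year = 365

# Total posts viewed across Facebook in a year (the denominator).
denominator = daily_active_users * posts_seen_per_day * days_per_year

# Views of content from the sources flagged by Avaaz (the numerator).
numerator = 3.8e9

share = numerator / denominator
print(f"Total posts viewed per year: {denominator:.3e}")  # ~7.044e12
print(f"Share from flagged sources: {share:.4%}")         # ~0.05%
```

Even under this deliberately low estimate of total content views, the flagged sources account for about one-twentieth of a percent of everything viewed; a higher posts-per-day estimate would shrink the share further.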
Is it worrying that there's enough misinformation on Facebook that everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what's shared on Facebook is not from the sites Avaaz warns about? Neither.
In addition to estimating a denominator, it's also important to consider the distribution of this information. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine or who seek out "alternative health" information more likely to encounter this type of misinformation?
Another social media study, focusing on extremist content on YouTube, offers a method for understanding the distribution of misinformation. An Anti-Defamation League team recruited a large, demographically diverse sample of 915 U.S. web users, analyzed their browser data, and oversampled two groups: heavy users of YouTube, and individuals who showed strong negative racial or gender biases in a set of questions asked by the investigators. Oversampling means surveying a subset of a population at a higher rate than its share of the population, in order to collect more reliable data about that subset.
The researchers found that 9.2% of participants viewed at least one video from an extremist channel, and 22.1% viewed at least one video from an alternative channel, during the months covered by the study. An important piece of context to note: A small group of people were responsible for most views of these videos. And more than 90% of views of extremist or "alternative" videos were by people who reported a high level of racial or gender resentment on the pre-study survey.
While roughly 1 in 10 people found extremist content on YouTube and 2 in 10 found content from right-wing provocateurs, most people who encountered such content "bounced off" it and went elsewhere. The group that found extremist content and sought more of it were people who presumably had an interest: people with strong racist and sexist attitudes.
The authors concluded that "consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment," and that YouTube's algorithms may reinforce this pattern. In other words, just knowing the fraction of users who encounter extreme content doesn't tell you how many people are consuming it. For that, you need to know the distribution as well.
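The distinction between encountering and consuming can be made concrete with a toy example. The numbers below are invented for illustration, not taken from the ADL study: two audiences have the same total views, but in one audience views are spread evenly, while in the other a small group accounts for nearly all of them.

```python
def top_share(views, fraction=0.1):
    """Share of total views consumed by the top `fraction` of users."""
    ranked = sorted(views, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# Even spread: 100 users, 10 views each (1,000 views total).
even = [10] * 100

# Concentrated: 10 heavy consumers with 91 views each,
# 90 users with 1 view each (also 1,000 views total).
concentrated = [91] * 10 + [1] * 90

print(top_share(even))          # top 10% of users see 10% of views
print(top_share(concentrated))  # top 10% of users see 91% of views
```

Both audiences produce the same headline number of total views, which is why a raw count alone cannot distinguish widespread casual exposure from concentrated consumption by a motivated minority.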
Superspreaders or whack-a-mole?
A widely publicized study from the anti-hate speech advocacy group Center for Countering Digital Hate titled Pandemic Profiteers showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it's critical to ask about denominators: How many anti-vaccine groups are hosted on Facebook? And what percent of Facebook users encounter the sort of information shared in these groups?
Without information about denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical misinformation on Facebook as a whole.
These types of studies raise the question, "If researchers can find this content, why can't the social media platforms identify it and remove it?" The Pandemic Profiteers study, which implies that Facebook could solve 70% of the medical misinformation problem by deleting only a dozen accounts, explicitly advocates for the deplatforming of these dealers of disinformation. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.
Consider Del Bigtree, one of the three most prominent spreaders of vaccination disinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it's that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It's not 12 individuals and groups posting health misinformation online—it's likely thousands of individual Facebook users sharing misinformation found elsewhere on the web, featuring these dozen people. It's much harder to ban thousands of Facebook users than it is to ban 12 anti-vaccine celebrities.
This is why questions of denominator and distribution are critical to understanding misinformation online. Denominator and distribution allow researchers to ask how common or rare behaviors are online, and who engages in those behaviors. If millions of users are each encountering occasional bits of medical misinformation, warning labels might be an effective intervention. But if medical misinformation is consumed mostly by a smaller group that's actively seeking out and sharing this content, those warning labels are most likely useless.
Getting the right data
Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform.
Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but this is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator, either the number of Twitter users or the number of tweets shared in a day. YouTube makes it so difficult to find out how many videos are hosted on its service that Google routinely asks interview candidates to estimate the number of YouTube videos hosted to evaluate their quantitative skills.
The leaders of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more convincing if researchers could independently verify that claim.
As the societal impacts of social media become more prominent, pressure on the big tech platforms to release more data about their users and their content is likely to increase. If those companies respond by increasing the amount of information that researchers can access, look very closely: Will they let researchers study the denominator and the distribution of content online? And if not, are they afraid of what researchers will find?
by Mark Andrejevic, Abdul Karim Obeid, Daniel Angus, Jean Burgess, The Conversation
Investigative reporters at ProPublica purchased an ad in Facebook's housing categories via the company's advertising portal. The ad was targeted to Facebook users who were house hunting, but excluded anyone with an 'ethnic affinity' for being African-American, Asian-American or Hispanic. Credit: Julia Angwin and Terry Parris Jr/ProPublica, CC BY
Social media platforms are transforming how online advertising works and, in turn, raising concerns about new forms of discrimination and predatory marketing.
Today the ARC Centre of Excellence for Automated Decision Making and Society (ADM+S)—a multi-university entity led by RMIT—launched the Australian Ad Observatory. This research project will explore how platforms target Australian users with ads.
The goal is to foster a conversation about the need for public transparency in online advertising.
The rise of 'dark ads'
In the mass media era, advertising was (for the most part) public. This meant it was open to scrutiny. When advertisers behaved illegally or irresponsibly, the results were there for many to see.
And the history of advertising is riddled with irresponsible behavior. We've witnessed tobacco and alcohol companies engage in the predatory targeting of women, underage people and socially disadvantaged communities. We've seen the use of sexist and racist stereotypes. More recently, the circulation of misinformation has become a major concern.
When such practices take place in the open, they can be responded to by media watchdogs, citizens and regulators. On the other hand, the rise of online advertising—which is tailored to individuals and delivered on personal devices—reduces public accountability.
These so-called "dark ads" are visible only to the targeted user. They are hard to track, since an ad may only appear a few times before disappearing. Also, the user doesn't know whether the ads they see are being shown to others, or whether they're being singled-out based on their identity data.
There's a lack of transparency surrounding the automated systems Facebook employs to target users with ads, as well as recommendations it provides to advertisers.
In 2017 investigative journalists at ProPublica were able to purchase a test ad on Facebook targeting users associated with the term "Jew hater." In response to the attempted ad purchase, Facebook's automated system suggested additional targeting categories including "how to burn Jews."
Facebook removed the categories after being confronted with the findings. Without the scrutiny of the investigators, might they have endured indefinitely?
Researchers' concern about dark ads continues to grow. In the past, Facebook has made it possible to target housing, credit and employment ads based on race, gender and age.
This year it was found delivering targeted ads for military gear alongside posts about the attack on the US Capitol. It also enabled ads aimed at suppressing voter turnout among African Americans during the 2016 US presidential campaign.
Public support for transparency
It's not always clear whether such offenses are deliberate or not. Nevertheless they've become a feature of the extensive automated ad-targeting systems used by commercial digital platforms, and the opportunity for harm is ever-present—deliberate or otherwise.
Most examples of problematic Facebook advertising come from the United States, as this is where the bulk of research on this issue is conducted. But it's equally important to scrutinize the issue in other countries, including in Australia. And Australians agree.
Research published on Tuesday and conducted by Essential Media (on behalf of the ADM+S Centre) has revealed strong support for transparency in advertising. More than three-quarters of Australian Facebook users responded that Facebook "should be more transparent about how it distributes advertising on its news feed."
With this goal in mind, the Australian Ad Observatory developed a version of an online tool created by ProPublica to let members of the public anonymously share the ads they receive on Facebook with reporters and researchers.
The tool will allow us to see how ads are being targeted to Australians based on demographic characteristics such as age, ethnicity and income. It is available as a free plugin for anyone to install on their web browser (and can be removed or disabled at any time).
Importantly, the plugin does not collect any personally identifying information. Participants are invited to provide some basic, non-identifying demographic information when they install it, but this is voluntary. The plugin only captures the text and images in ads labeled as "sponsored content" which appear in users' news feeds.
Facebook's online ad library does provide some level of visibility into its targeted ad practices—but this isn't comprehensive.
The ad library only provides limited information about how ads are targeted, and excludes some ads based on the number of people reached. It's also not reliable as an archive, since the ads disappear when no longer in use.
The need for public interest research
Despite its past failings, Facebook has been hostile towards outsider attempts to ensure accountability. For example, it recently demanded researchers at New York University discontinue their research into how political ads are targeted on Facebook.
When they refused, Facebook cut off their access to its platform. The tech company claimed it had to ban the research because it was bound by a settlement with the United States' Federal Trade Commission over past privacy violations.
However, the Federal Trade Commission publicly rejected this claim and emphasized its support for public interest research intended "to shed light on opaque business practices, especially around surveillance-based advertising."
Platforms should be required to provide universal transparency for how they advertise. Until this happens, projects like the Australian Ad Observatory plugin can help provide some accountability. To participate, or for more information, visit the website.
The videos, which began popping up last month, show students taking items, such as wet floor signs, microscopes and clocks.
Sept. 16, 2021, 12:11 AM CEST
TikTok on Wednesday confirmed it has banned content around the "devious licks" trend, in which students post videos of items they allegedly stole from their schools.
The videos, which began popping up last month, show students taking items, such as wet floor signs, microscopes and clocks, and then describing them as their "devious licks."
The platform said this kind of content violates its community guidelines.
"We expect our community to stay safe and create responsibly, and we do not allow content that promotes or enables criminal activities," a TikTok spokesperson said in an email. "We are removing this content and redirecting hashtags and search results to our Community Guidelines to discourage such behavior."
Insider first reported the news.
The trend started after TikTok user @jugg4elias posted a video of himself pulling a box of disposable masks out of his backpack with the caption, "A month into school absolutely devious lick. Should've brought a mask," according to meme database Know Your Meme. That video received roughly 239,000 views in a week, Know Your Meme reported.
Days after @jugg4elias posted his video, another user, @dtx.2cent, posted a video showing that he had allegedly stolen a hand sanitizer dispenser, according to Know Your Meme. That video reportedly earned 7.2 million views.
Those two accounts, and their "devious licks" videos, appear to have been removed as of Wednesday. TikTok said it can't comment on specific accounts.
Copycats soon sprang up, racking up millions of views, including one user who garnered 2.6 million views for posting a microscope as his "devious lick." Phillip Hamilton, an editor at Know Your Meme, told NBC News that he's not surprised the trend was banned after its rapid growth led to others attempting to film their own devious licks.
"Countless devious licks videos have millions and millions of views, and while some are obviously jokes, others are not so clear at all. Stealing trends are on the rise on TikTok, with a trend where users steal the head off of a LeBron James action figure also going viral just in July," Hamilton said.
Hamilton said there are other stealing trends that have popped up on the platform, but that TikTok has been inconsistent in what it has and hasn't banned.
"At this point, devious licks has spread outside the platform. It's all over Instagram, Facebook, you name it. I'm not surprised they did it, just surprised that they let it get to such a point before taking steps to shut it down," he said.
Facebook, which also owns Instagram, did not immediately respond to a request for comment made by NBC News.
TikTok has made efforts to stifle trends that violate its community guidelines and could lead to serious injury. Recently, the platform removed the hashtag and videos of the milk crate challenge, in which people were climbing pyramids made of milk crates and often falling, which led to some injuries.
Some schools have made public statements saying they are monitoring for students who try to participate in the "devious licks" trend.
River Ridge High School in New Port Richey, Florida, posted a statement to its Facebook saying that bathrooms and other property on its campus were being destroyed because of the trend. It also said it would investigate all instances of people participating or encouraging the trend and would be "providing school discipline at the maximum level allowed."
"Please talk with your student immediately, as all students involved in the making of the video, destruction of property or watching and cheering on the destruction will be addressed and face consequences," the post reads.
The term "devious licks" is now unsearchable on the platform, but a misspelled version of the hashtag "#devoiuslick" was still showing search results as of Wednesday afternoon and had more than 388,000 views.
In a tweet on Wednesday, TikTok said it expects its "community to create responsibly - online and IRL."
"Please be kind to your schools & teachers," the platform wrote.
“This is critical infrastructure like the highways that we drive on,” one city official said.
Due to an extremely slow internet connection at her home in Kenly, N.C., teacher Michelle Galloway was unable to instruct her students virtually at the beginning of the pandemic. Justin Kase Conder / for NBC News
June 16, 2021, 9:53 PM CEST
By Ezra Kaplan and Haley Messenger
ROCKY MOUNT, N.C. — Across the well-manicured homes in this city’s subdivision of Fairfield, Jeff Stanovich never thought that he would be hunched over his router wishing his internet would work.
It’s a near daily routine. Unplug the modem, wait 45 seconds, plug it back in, wait for the blue lights to all come back on.
“And then if I'm lucky, internet recycles and I'm able to use it. Doesn't always happen. There's a lot of times I have to do it multiple times to get it to work,” said Stanovich, who set up his work-from-home desk less than 10 feet away from the router in an effort to improve the situation.
But just 20 miles south on I-95, residents in Wilson County have some of the best internet in the nation. That’s because in 2008 the city of Wilson established its own broadband service, called Greenlight Community Broadband, and began wiring the entire county with fiber optic cables.
“When the pandemic hit, we were able to prioritize getting our teachers and students connected,” said City Manager Grant Goings.
Michelle Galloway was one of those teachers who needed to be connected. Her house, surrounded by wheat fields on a road that is mostly trafficked by combines and tractors, is not wired for broadband. Suddenly teaching remotely for the first time, she sometimes had to teach via chat because the video and audio just wouldn’t work.
Jeff Stanovich works at his home in Rocky Mount, N.C. NBC News
“Sometimes I would cry after a class was over, knowing that I had done such a poor job communicating with my students,” Galloway said.
During spring break, she called Greenlight and they arranged to get her connected, even though the nearest connection point was over a half mile away. Today, her fiber optic connection is so strong that her remote students can see the wheat harvest over her shoulder in high definition.
But the widely different internet experiences of North Carolina residents like Stanovich and Galloway reflect the broader problems facing more than 30 million Americans who live in areas where the internet infrastructure simply isn't there. This disproportionately affects rural and tribal areas because it's just too expensive for private companies to install the wiring. The farther homes are spread apart, the lower the return on investment.
So some cities and counties across the country have decided to build broadband as a utility for residents, just like water or power.
Michelle Galloway at her home in Kenly, N.C. Justin Kase Conder / for NBC News
“This is critical infrastructure like the highways that we drive on,” said Goings. “We can all agree that people need water to drink, whether it's profitable or not.”
While the Biden administration's American Jobs Plan aims to fix this with a goal of providing coverage to every household, prioritizing smaller providers like Greenlight, that doesn't help in North Carolina. After Greenlight was built, the telecommunications industry pushed through a state law prohibiting any more cities from building their own internet networks, even as rural residents begged for more options.
Greenlight Community Broadband has the capacity to serve all of Eastern North Carolina but is restricted to the county of Wilson by state law. NBC News
In 2006, after beginning to build the backbone of a fiber optic system for the city, Wilson officials reached out to Time Warner Cable, which was the cable provider at the time, about a possible partnership; the city would build the infrastructure and the cable company would operate it.
But Goings said the company was “combative” toward the proposal.
“They had an attitude of, ‘We'll decide when you need that level of service, and we don't believe that Wilson needs that level of service,’” Goings said. So the City Council decided to build and run its own municipal broadband network.
“I believe our City Council sort of had an epiphany and said, ‘You know what, no board of directors in New York City is going to decide what infrastructure our community has. And so if you don't want to partner with us, we'll do it ourselves,’” Goings said. “I don't know that we were taken seriously at that moment in time. But I think we are now.”
But Wilson officials said that because this new service competed with the cable company, the company wouldn’t let that happen without a fight. Time Warner Cable would not only not invest in Wilson, "but immediately started seeking legislation in the General Assembly to prevent us from investing in ourselves,” Goings said.
As the city of Wilson pressed ahead with building out Greenlight, Time Warner Cable, along with the North Carolina Telecommunication Association, an industry lobby group, lobbied the North Carolina General Assembly for a state bill to make it nearly impossible for a city to run its own internet service.
Workers with Greenlight Community Broadband run fiber optic cable in rural Wilson County, N.C. NBC News
The legislation failed in 2007, and a year later Wilson launched Greenlight, with over 1,000 homes connected by the end of 2008.
Time Warner Cable was acquired by Charter Communications in 2016. A spokesperson for Charter told NBC News: “We can’t speak to what Time Warner Cable may have done 10 years before Charter acquired the company. We welcome competition and are proud of our record of building, maintaining and upgrading our networks.”
In the meantime, the North Carolina Telecommunication Association continued lobbying to pass the bill. Over the course of six years, the telecommunication industry gave $1.6 million to state candidates and political parties, according to an analysis by OpenSecrets, a nonpartisan research group tracking money in politics. AT&T gave over $520,000 and Time Warner Cable (now Spectrum) and CenturyLink each gave over $300,000.
Then, in 2011, both chambers of the North Carolina General Assembly were won by Republicans for the first time since Reconstruction. The bill passed and was signed into law.
Marcus Trathen, counsel for the North Carolina Telecommunication Association, said in an email: "NCCTA supports fair competition — and that's what North Carolina's level playing field law is all about. This law does not prevent a city from getting into the competitive telecommunications business, it protects taxpayers and competitors from harm when a city enters the telecommunications business."
Wilson and Salisbury, North Carolina, the only other municipal-run broadband network in the state, were granted a waiver to allow their programs to continue. Greenlight was restricted from expanding past the edge of the county, despite the technical capacity to do so.
The law that led to an effective ban on city-run internet in North Carolina was part of a nationwide, coordinated effort to pass state legislation that favored major telecommunication companies over municipally operated networks.
The campaign was led by the American Legislative Exchange Council (ALEC), an industry-sponsored political consultancy that often promotes conservative and business-oriented bills in state legislatures that are run by Republicans. The group includes multiple state legislators and corporate representatives who work together to create "model bills" that are then used as the basis for legislative actions across the country. Past members have included AT&T, Comcast, Time Warner Cable, and CenturyLink. NBC News is a subsidiary of Comcast.
AT&T, CenturyLink and Comcast each declined to comment for this article.
In 2002, ALEC members created model legislation called the Municipal Telecommunications Private Industry Safeguards Act that explicitly “limits the authority of municipalities to own and operate telecommunications and advanced service and cable television facilities and to provide public and advanced telecommunication and cable television services to a municipality’s inhabitants.”
From 2002 to 2004, the bill was adopted in varying forms in 18 states across the country, creating barriers to municipal broadband in each, according to the Institute for Local Self-Reliance, a Washington-based nonprofit group. North Carolina was the 19th and final state to restrict municipal broadband in 2011.
Alexis Jarrett, press secretary at ALEC, said the model legislation was no longer part of the council’s policy and that “All policies heard at ALEC are proposed by state legislators. In many cases, the policies already exist in the states.”
In Rocky Mount, just north of Wilson and its fiber-optic network, Mayor Sandy Roberson said the lack of competition has resulted in sub-par service for his constituents. The main service provider in the area is Suddenlink Communications, which is owned by Altice USA, the fourth-largest cable provider in the U.S.
Rocky Mount, N.C. Denis Tangney, Jr. / Getty Images/iStockphoto
“Unlike all of us, Suddenlink has not evolved to meet the demand, nor shown any compassion toward their customers,” wrote Roberson in a letter to North Carolina Attorney General Josh Stein. “Complaints are at an all-time high and we need help from your office to investigate the business practices of Suddenlink Communications/Altice USA.”
The attorney general’s office said it has received more than 650 complaints about Suddenlink since January 2020, as well as letters from several other mayors.
“Attorney General Stein is concerned about this issue and our office has been communicating with Suddenlink and officials in the affected communities, and this process is ongoing,” said Nazneen Ahmed, Stein's spokesperson.
But Lisa Anselmo, executive vice president of communications at Altice, a telecommunications company that bought Suddenlink, said the company has been in touch with the attorney general regarding its investments in the state.
“Suddenlink is committed to delivering high-quality connectivity, service, and support to our customers in Rocky Mount and across North Carolina,” Anselmo wrote in an email responding to questions from NBC News. “Since 2017, the company has made over $27 million in investments in North Carolina and now more than 98 percent of Suddenlink customers in the state have access to 1 Gig speeds.” (A speed of 1 gigabit per second is considered extremely fast broadband.)
Roberson said that if it were up to him, he would partner with Wilson and bring Greenlight to Rocky Mount. He sees reliable broadband as essential for the future of his city.
“It's about enabling your own population to be able to be successful,” Roberson said. “We've got to have the technology to enable our resident population to live here, to work here, to be able to educate here, to be able to aspire to be great things, to be able to engage.”
As Jeffery Stanovich struggles with his own connection, he has watched his adult children return home during the pandemic and try to work. But they, too, had to take turns using the internet or it crashed. He said that after NBC News contacted Suddenlink for comment, five technicians visited his home and repaired his internet. But he realizes that not all of his neighbors are as fortunate.
“I can tell you right now, living in a small town has its advantages,” Stanovich said. “But we shouldn't be at a disadvantage from our access to the internet, because then you're limiting our opportunities to succeed, and people in this town need this to succeed.”
The meme of an excited Shiba Inu named Kabosu is officially the most expensive meme NFT ever sold.
June 11, 2021, 7:38 PM CEST / Updated June 11, 2021, 8:36 PM CEST
"Doge," the image of an excited-looking Shiba Inu considered one of the internet's most iconic and renowned memes, has sold as a nonfungible token, or NFT, for $4 million, making it the most expensive meme NFT of all time.
On Friday, the auction winner, @pleasrdao, placed the winning bid of 1,696.9 of the cryptocurrency Ethereum, worth approximately $4 million, according to the auction site Zora.
"We're so happy to be a part of this milestone in internet history. If any meme deserved to be the new meme NFT record holder, it's Doge," said Don Caldwell, editor-in-chief of the internet meme database Know Your Meme, which certified Doge ahead of the auction to ensure that the meme was being sold by its rightful owner.
Caldwell added that Doge, which he called "one of the most iconic memes in internet history," had previously won Know Your Meme's 'Meme of the Decade' award in December 2019.
The meme was first put up for auction on Tuesday by Atsuko Sato, the owner of Kabosu, the dog from the image, and ran for roughly three days before being sold to @pleasrdao.
A bidding war for the meme broke out on Zora, where Doge was being auctioned, between users @twodollahotdoge and @pleasrdao on Friday afternoon, driving up the price until @pleasrdao prevailed with the winning bid.
The term "Doge" stems from the Flash cartoon "Homestar Runner." In a 2005 episode, the titular character Homestar calls another character his "D-O-G-E." Five years later, in 2010, Sato posted the image of Kabosu to her personal blog, not realizing the photo would take on a life of its own.
“I took the photos to update my blog,” Sato said in a statement. “I take a lot of pictures every day, so that day was nothing out of the ordinary. Kabosu loves having her photo taken, so she was delighted to have the camera pointed towards her.”
The photo made the rounds on sites like Tumblr and Reddit in the early 2010s, and when a Reddit user referred to the image of Kabosu as "Doge," the name stuck and the meme was born.
“When I first found out about the Kabosu memes, I was very surprised. I was terrified at the thought that just one photo I had casually posted on my blog could spread all over the world to places I didn’t know,” Sato said.
The meme is usually formatted with the image of Kabosu, her paws crossed and a slight smirk on her face, surrounded by broken English phrases like "Wow. So scare" and "What r u doing?"
Doge's popularity has had a recent resurgence due to Dogecoin, the cryptocurrency branded after the meme. Although Dogecoin has been around for several years, it spiked in popularity after Tesla CEO Elon Musk endorsed it on social media this year.
A portion of the proceeds from the sale of the NFT will go toward several charities, including the Japanese Red Cross Society and the World Food Programme.
NFTs are a kind of certificate of authenticity. The NFT is a string of unique characters. The characters are connected to a blockchain, a group of computers that act as a digital ledger that no one computer can change. The same concept powers cryptocurrencies like Bitcoin, but while bitcoins are all essentially the same, NFTs are nonfungible, or unique.
An NFT of Doge would not act as a copyright, however, according to Decrypt, a website that covers cryptocurrency. The NFT is more like a digital autograph from the meme creator, Decrypt reported, certifying authenticity — similar to an autographed baseball card.
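The fungible-versus-nonfungible distinction described above can be illustrated with a toy ledger. This is a minimal sketch for illustration only (the class names and demo values are invented here); real NFTs live in smart contracts such as ERC-721 tokens on a distributed blockchain, not in a Python dictionary:

```python
# Toy sketch: fungible balances vs. non-fungible ownership.
# Not how any real blockchain works -- real ledgers are replicated
# across many computers and changes are validated by consensus.

class FungibleLedger:
    """Tracks amounts only: any one unit is interchangeable with any other."""
    def __init__(self):
        self.balances = {}

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

class NFTLedger:
    """Each token ID is unique and has exactly one owner."""
    def __init__(self):
        self.owner_of = {}

    def mint(self, token_id, owner):
        if token_id in self.owner_of:
            raise ValueError("token already exists")
        self.owner_of[token_id] = owner

    def transfer(self, sender, receiver, token_id):
        if self.owner_of.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owner_of[token_id] = receiver

# Fungible: only quantities matter.
coins = FungibleLedger()
coins.balances["alice"] = 5
coins.transfer("alice", "bob", 2)

# Non-fungible: this specific token changes hands (hypothetical IDs/names).
nft = NFTLedger()
nft.mint("doge-original", "atsuko_sato")
nft.transfer("atsuko_sato", "pleasrdao", "doge-original")
print(nft.owner_of["doge-original"])  # pleasrdao
```

The key difference is visible in the data structures: the fungible ledger records how much each party holds, while the NFT ledger records who owns each unique token ID.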
There has been a recent influx of memes being sold as NFTs, which some experts have called a "meme gold rush."
Other memes that rival Doge include "Disaster Girl," which sold for 180 of Ethereum, worth roughly $430,000 at the time of sale, and "Overly Attached Girlfriend," which sold for 200 Ethereum, worth roughly $482,000 at the time, according to the auction site Foundation. An NFT of "Pepe the Frog" sold for roughly $1 million, according to The Washington Post.
Seven other images of Kabosu, certified by Know Your Meme, will be sold as NFTs and put up for auction starting on Friday.
- Cryptocurrency creators who claim they are trying to educate consumers say their content is getting censored and taken down for violating community guidelines on TikTok.
- TikTok recently rolled out a system that automatically removes content that violates community guidelines.
- Many creators are now considering a shift to Instagram, Twitter, YouTube and Discord if TikTok continues to remove crypto-related posts.
A man holding a phone walks past a sign of Chinese company ByteDance’s app TikTok, known locally as Douyin, at the International Artificial Products Expo in Hangzhou, Zhejiang province, China October 18, 2019.
If Lucas Dimos gets one more video removed from TikTok, he suspects he’ll get kicked off the platform forever.
In the last month, at least 10 of his videos have been taken down for promoting illegal activities and regulated goods. But Dimos, who posts content on a TikTok account called theblockchainboy, says he hasn’t done anything wrong.
“It’s left my account on such a razor’s edge,” said Dimos. “We’re dying out here. We’re floundering. We’ve tried to build a community and guide this whole movement in the right direction.”
TikTok cryptocurrency content creators like Dimos, who claim they are just trying to educate consumers about the emerging market, say they have been hard hit by rapid changes at the social media company. In July, TikTok implemented a system that can automatically block videos violating its policies, and the company recently updated its branded content policy, barring the promotion of financial services including cryptocurrencies, unless users disclose it through a branded content option in the app.
For this story, CNBC spoke with 11 cryptocurrency creators who say they have posted educational crypto videos, yet their content is getting flagged or taken down permanently with little explanation from TikTok. Some said their accounts are also getting temporarily banned more frequently. Most of the influencers CNBC spoke with said they haven’t been paid to promote any crypto content on TikTok in the last few weeks, and that video views and new followers have dwindled since July.
TikTok declined to comment on the reports from creators. Instead, a spokesperson pointed CNBC to the company’s community guidelines page. The spokesperson would not specify which guidelines crypto creators were violating.
When videos are pulled from TikTok, the creators say they get a notification that they violated community guidelines. Sometimes it notes that they posted content related to ‘illegal activities and regulated goods.’ The guidelines haven’t been updated since December, though TikTok rolled out its new detection feature a little over a month ago, when creators say the takedowns ramped up.
Some of the creators CNBC spoke with are considering shifting to platforms like Google-owned YouTube, Twitter, Facebook’s Instagram and the chatting app Discord, where they say they have more freedom to share content.
“The real issue isn’t the advertisements,” said a cryptocurrency influencer who goes by Wendy O. “It’s the fact that TikTok is shadow banning and censoring crypto content creators and a lot of us are posting great educational pieces.”
Navigating unknown terrain
The creators CNBC spoke with say the recent updates are warranted to crack down on spam accounts or fraudulent money-making schemes like “rug pulls,” in which individuals entice users to buy into a crypto, prop up the price and pull out. But they say the system is overcorrecting and affecting those who are just trying to educate their audiences about crypto.
“It was almost like a knee-jerk reaction from TikTok against all of these scams that were going on,” said Dimos, who has over 314,000 followers on the platform. “While they may have blocked the scam posts, all of us creators are unable to post any of our content.”
In recent weeks, creators say they can’t use words like “Binance,” “decentralized finance,” or specific coins like bitcoin or ethereum in a video without it getting removed. The consensus among creators is that those words alert the algorithm, leading to an automatic takedown or review. Chart analysis and forecasting also trip the system, according to creators like Timothy, who goes by cryptoweatherman on TikTok.
Under its community guidelines, TikTok says it will remove content “including video, audio, livestream, images, comments, and text” in violation, and that it uses a “mix of technology and human moderation” to enforce its guidelines before content is even reported.
If a video is removed, the user is typically notified that the post violates a community guideline or relates to “illegal activities and regulated goods.” But the message doesn’t offer specifics, the creators said. They added that two takedowns trigger a posting ban of at least 24 hours, and that bans can last anywhere from a day to a week.
TikTok’s guidelines say that users are notified when content is removed and that they can appeal the takedown if they “believe no violation has occurred.” The company also says it will suspend or ban accounts that are repeatedly “involved in severe or repeated violations” and that they take into account “information available on other platforms and offline in these decisions.”
When the creators reach out to TikTok about an account ban or pulled video, most say they have been ignored or sent an automated response. There’s no phone number to call and no representative to email. Yet some creators say that when they appeal a takedown, TikTok reinstates many of their videos.
Joshua Thillens, who goes by Joshua Jake on TikTok, says he’s had at least eight videos taken down. When CNBC spoke with Mason Versluis, a creator with nearly 430,000 followers on an account called cryptomasun, he was experiencing a 48-hour ban, which prevented him from posting.
Prior to the guideline change, Versluis had one video taken down showing a car crash. In the last few weeks, he said between 30 and 35 of his videos have been taken down. Most were reinstated after he submitted an appeal.
“I’m not claiming to be a financial wizard or a financial advisor or even a trader,” Versluis said. “I’m simply someone who’s been in crypto for four years. I’ve studied this in-depth and the information I bring and the opinions should speak for themselves.”
Views and followers reach a bottleneck
Many creators also say views and followers have slowed in recent weeks and followers say the creators aren’t showing up on their “For You” pages. TikTok’s community guidelines say the company may limit a post’s visibility in search results and other feeds.
Mack Lorden, who has over 102,000 followers on TikTok, says he quickly built his following when he launched his platform in January. But he “reached a bottleneck” last month, a change he attributes to both the guideline change and the price drop in the crypto market.
Smaller creators like Miguel Morales, who has a little over 5,200 followers on his account called blockchaincrusader, are also feeling the pinch. Morales says views have dropped on good videos from almost 100,000 to 10,000. On a good day, he’s lucky to get over 1,000 views.
Nick Dye, who has over 14,400 followers on his account called the_cryptokeeper, says he was consistently gaining 300 to 400 followers a day within the first few months of launching his account and after posting a viral video. Since the policy change, new followers have dropped from hundreds to maybe three to six per day. He says he gained his first 12,000 followers in just 25 days; the last couple of thousand have taken twice as long.
“No one knows what they can and can’t say,” Dye said. “It discourages the newer creators like myself that have the knowledge and experience that they can offer. We’re scared to share things for fear of getting banned.”
Getting creative with crypto content
In the weeks since the changes, creators say they are getting creative with the content they post. Wendy O. recently posted a series of videos referring to cryptocurrencies as ketos. Dye calls ethereum his “girlfriend.” Carla Nasui and Walker Van Dixhorn, who are known as The Crypto Couple, have always used comedy to discuss crypto, a setup they say has kept them from getting banned. Morales has cut back from making three videos a day to just two.
Some creators CNBC spoke with have done sponsored content in the past and say they can make anywhere from $300 to $20,000, depending on their following. That payment is occasionally made in cryptocurrencies.
Versluis has partnered with DAO Maker, a crowdfunding platform that also serves as a launchpad for some cryptos, and IOI, one of the cryptocurrencies launched through it. Wendy O. created content for a platform involved in crypto-lending called Celsius.
Some also make money through paid Discord subscriptions and one-on-one investing sessions. But all of the creators CNBC interviewed said they make most of their content for fun or to educate the masses and they never explicitly tell users to buy into a coin.
An uncertain future at TikTok
Angel Talamantes, who goes by begincrypto and has over 41,100 followers, admits some influencers were “abusing” sponsored posts and promoting projects without doing their research prior to the changes.
“A lot of influencers that have a huge following don’t really know what they’re promoting,” he said. “They hear the pitch from the cryptocurrency and it sounds good.”
Morales and the other creators are happy to comply with the rules, but say the company needs to clarify the guidelines. This includes a clear definition of what is considered promoting cryptocurrencies and what qualifies their content as illegal activity and regulated goods. They also want a direct representative from TikTok who can answer questions and concerns.
Amid the uncertainty, some creators are getting ready to put their time into other platforms. Lorden says he’s building up his YouTube and Discord accounts.
Thillens recently launched an account called CryptoKnight with Versluis, Timothy and Dimos that’s run through Discord. They plan to host public events and launch a website so they “can’t be filtered.” But he admits going viral overnight on TikTok trumps the engagement on other platforms.
“There’s nothing like that,” he said.
Facebook has confirmed it will continue to ban Taliban content from its platforms as it considers the group to be a terrorist organisation.
The company says it has a dedicated team of Afghan experts to monitor and remove content linked to the group.
For years, the Taliban has used social media to spread its messages.
Its rapid takeover of Afghanistan raises fresh challenges for technology firms on how to deal with content related to the group.
"The Taliban is sanctioned as a terrorist organization under US law and we have banned them from our services under our Dangerous Organisation policies. This means we remove accounts maintained by or on behalf of the Taliban and prohibit praise, support, and representation of them," a Facebook spokesperson told the BBC.
Facebook has defined the Taliban as a dangerous organisation for many years.
Facebook said: "We also have a dedicated team of Afghanistan experts, who are native Dari and Pashto speakers and have knowledge of local context, helping to identify and alert us to emerging issues on the platform."
The social media giant said it does not make decisions about the recognition of national governments but instead follows the "authority of the international community".
Facebook highlighted that the policy applies to all of its platforms including its flagship social media network, Instagram and WhatsApp.
However, there are reports that the Taliban is using WhatsApp to communicate.
Facebook told the BBC that it would take action if it found accounts on the app to be linked to the group.
Rival social media platforms have also come under scrutiny over how they handle Taliban-related content.
Taliban spokesmen have used Twitter to update their hundreds of thousands of followers as the organisation retook control of Afghanistan.
In response to BBC questions about the Taliban's use of Twitter, a company spokesperson highlighted policies against violent organisations and hateful conduct.
According to its rules, Twitter does not allow groups that promote terrorism or violence against civilians.
Alphabet's YouTube did not immediately respond to a BBC request for comment on its policies in respect to the Taliban.
Hate raids involve overwhelming streamer comment boards with abusive remarks, sometimes using automated or "bot" accounts.
Video game platform Twitch on Wednesday vowed to battle "hate raids" in response to complaints about harassment of women and players of color.
The Amazon-owned platform for sharing video game play and commentary said that it updated filtering tools to better detect hate speech in chat forums and that more improvements are on the way.
"We've seen a lot of conversation about botting, hate raids, and other forms of harassment targeting marginalized creators," Twitch said in a string of tweets.
"You're asking us to do better, and we know we need to do more to address these issues."
The posts came in response to complaints about racist, sexist, and anti-gay onslaughts aimed at Twitch streamers who don't appear to be straight, white and male.
People targeted with such abuses shared their viewpoints and stories on Twitter with the hashtag #TwitchDoBetter.
Those who shared included Omega Jones, a Twitch streamer who uses the handle @CriticalBard at his channel on the platform.
"People like to act like hate raids and follow bots that say some horrible (things) don't exist," Jones said.
He shared a snippet of video on Twitter that showed the chat forum on his screen filling with racist insults as he made his way through a game.
"When marginalized creators on this platform, specifically people of color, tell you that Twitch needs to do better, just listen," Jones said.
A Twitch streamer whose profile identified her as bi-racial tweeted that all her friends were being targeted with hate raids and called for the platform to be made safer.
"We're launching channel-level ban evasion detection and account verification improvements later this year," Twitch said.
"We're working hard to launch these tools as soon as possible, and we hope they will have a big impact."
Twitch added that it is reaching out to streamers to learn more about abuses.
YouTube said Sunday it had barred Sky News Australia from uploading new content for one week, citing concerns about COVID-19 misinformation.
The move comes after a review of posts uploaded by the Rupert Murdoch-owned TV channel, which has a substantial online presence.
"We have clear and established COVID-19 medical misinformation policies... to prevent the spread of COVID-19 misinformation that could cause real-world harm," a YouTube statement said.
With 1.86 million YouTube subscribers, the channel—which is owned by a subsidiary of Murdoch's News Corp—has a conservative following well beyond Australia.
Its posts, including some questioning whether there is a pandemic and casting doubt on the efficacy of vaccines, are widely shared on social media forums around the world that spread virus and vaccine misinformation.
The last YouTube upload, from three days ago, features a host claiming that lockdowns have failed and criticising state authorities for extending Sydney's current stay-at-home orders.
Sky News confirmed the temporary ban and a spokesperson said "we support broad discussion and debate on a wide range of topics and perspectives which is vital to any democracy".
"We take our commitment to meeting editorial and community expectations seriously."
YouTube has a "three strikes" policy on violations: a first strike results in a one-week suspension, a second strike within 90 days in a two-week ban, and a third in permanent removal from the platform.
Former US president Donald Trump was temporarily banned under the policy.
YouTube is owned by Google parent company Alphabet.
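The strike escalation described above can be sketched as a simple lookup. This toy function is an illustration of the reported policy, not YouTube's actual enforcement logic (which tracks strike timing and expiry per channel):

```python
# Toy sketch of YouTube's reported "three strikes" escalation.
# Assumes all strikes fall within the same 90-day window.
def ban_for(strikes_in_90_days: int) -> str:
    if strikes_in_90_days <= 0:
        return "no ban"
    if strikes_in_90_days == 1:
        return "one-week suspension"
    if strikes_in_90_days == 2:
        return "two-week ban"
    return "permanent removal"

print(ban_for(1))  # one-week suspension
print(ban_for(3))  # permanent removal
```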
by Northwestern University
Following the Surgeon General's July 15 advisory on health misinformation and social media, President Joe Biden remarked that Facebook and other social media platforms are "killing people." Though Biden quickly backpedaled on his remark, Facebook rebutted it, citing instead its own study that showed increasing "vaccine acceptance" by U.S. Facebook users.
So, does Facebook play a role in COVID-19 misinformation? New survey results from researchers at Northwestern, Harvard, Northeastern and Rutgers universities show that it does.
While the researchers state that their results do not indicate that social media platforms are "killing people," as Biden said, they do find that those who relied on Facebook for COVID-19 news had substantially lower vaccination rates than the overall U.S. population. Those who received most of their news from Facebook also displayed lower levels of institutional trust and greater acceptance of misinformation.
"We certainly cannot say the platform causes vaccine hesitancy, but it does seem like a place where such people gather," said James Druckman, the Payson S. Wild Professor of Political Science in Northwestern's Weinberg College of Arts and Sciences and associate director of the University's Institute for Policy Research. "That makes it all the more important to ensure the provision of accurate information on Facebook."
Between June 9 and July 7, Druckman and his colleagues from the COVID States Project surveyed 20,669 people across the U.S. about the news sources they rely on for COVID-19 information and about their vaccination status.
The researchers found Facebook is a significant source of COVID-19 news. About one in three people (33%) reported visiting Facebook in the last 24 hours to find news about COVID-19. When compared with other possible responses, the results show people use Facebook about as much as mainstream news sources like CNN (32%) and Fox News (30%).
The vaccination rate of respondents who said they get COVID-19 news from Facebook and other sources is 10 percentage points lower (61%) than that of those who said they do not use Facebook (71%). This vaccination gap widens to 40 percentage points between people who used multiple sources but not Facebook (87%) and those who used only Facebook for COVID-19 news (47%).
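The gaps reported here are differences between rates (percentage points), not relative percent changes. A quick check using the survey figures quoted above:

```python
# Vaccination-rate gaps from the COVID States Project survey figures.
# A gap in "percentage points" is the plain difference of two rates.
facebook_and_other = 0.61   # get COVID-19 news from Facebook and other sources
no_facebook = 0.71          # do not use Facebook for COVID-19 news
multi_source_no_fb = 0.87   # multiple sources, none of them Facebook
facebook_only = 0.47        # Facebook only

gap = round((no_facebook - facebook_and_other) * 100)         # 10 points
wide_gap = round((multi_source_no_fb - facebook_only) * 100)  # 40 points
print(gap, wide_gap)  # 10 40
```

Note that a 10-point gap from 71% is a 14% relative reduction in the vaccination rate, which is why distinguishing points from percent matters when reading the figures.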
Even when the researchers accounted for respondents' race, age, home location, education and other demographic characteristics, they still observed those who received COVID-19 news from Facebook had lower vaccination levels and showed higher levels of vaccine resistance.
Trust in institutions, vaccine misperceptions
In examining institutional trust, the researchers discovered that respondents who relied on Facebook for COVID-19 information tended to be less trusting of the news media, as well as other government and scientific institutions. For respondents who said they only received news from Facebook, only 37% said they trusted the mainstream media "some" or "a lot," compared to 47% of other respondents.
Druckman and his colleagues also asked respondents whether common COVID-19 misperceptions—such as if vaccines alter people's DNA or contain microchips to track people—were factual. Respondents who exclusively used Facebook were more likely to believe the false claims, with 22% believing at least one. This result was on par with people who only watched Fox News (21%), and much higher than those who relied on multiple news sources (7%).
Druckman and his colleagues say their survey results do not suggest that Facebook prevented Americans from being vaccinated; still, they do provide key insights into social media news consumption and COVID-19 vaccination rates.