This post is public.
Credit: CC0 Public Domain
Instagram on Wednesday began letting English-language users add to their profiles the gender pronouns they want used for them.
The photo-centric social network became the latest internet service to broaden beyond basic "he" or "she" references to include pronouns preferred by people with other gender identities.
"Add pronouns to your profile," Instagram said in a tweet.
"The new field is available in a few countries, with plans for more."
Instagram users can opt to have pronoun choices shown publicly, or just to friends at the service, according to a demonstration of the option by AFP in the United States.
Pronoun options included "them" and "theirs" along with nonbinary "co," "ze" and others shared at a resource center page at the University of California, Davis, website.
"We are giving people more tools to express themselves on Instagram," the Facebook-owned service said in response to an AFP inquiry.
"Sharing pronouns has been widely adopted by our community, and with this feature we hope to normalize the adoption further."
Instagram added that it consulted with advocacy groups on pronoun options.
The addition to Instagram profiles was rolled out globally in English, with work being done to expand to other languages, according to Instagram.
by Barbara Ortutay
In this Jan. 6, 2021, file photo, smoke fills the walkway outside the Senate Chamber as supporters of President Donald Trump are confronted by U.S. Capitol Police officers inside the Capitol in Washington. With riot cases flooding into Washington's federal court, the Justice Department is under pressure to quickly resolve the least serious cases. (AP Photo/Manuel Balce Ceneta, File)
Former President Donald Trump will find out this week whether he gets to return to Facebook in a decision likely to stir up strong feelings no matter which way it goes.
The social network's quasi-independent Oversight Board says it will announce its ruling Wednesday on a case concerning the former president.
Trump's account was suspended for inciting violence that led to the deadly Jan. 6 Capitol riots. After years of treating Trump's inflammatory rhetoric with a light touch, Facebook and Instagram silenced his accounts on Jan. 7, saying at the time he'd be suspended "at least" through the end of his presidency.
Though Trump posted often to Facebook—and his campaign was especially skillful at using the social network's advertising tools to reach potential voters—his platform of choice was always Twitter. But Twitter banned him permanently, without an oversight board to kick the final decision to.
While not always as high-profile as his tweets, Trump's Facebook posts were widely shared, as were those of conservative supporters such as Ben Shapiro and Dan Bongino, who continue to amass millions of views and comments. On Twitter, meanwhile, Fox News host Tucker Carlson appears to be stepping into the conservative provocateur-in-chief role in the vacuum left by Trump.
In this Jan. 6, 2021 file photo, Trump supporters try to break through a police barrier at the Capitol in Washington. (AP Photo/Julio Cortez, File)
"If they reinstate him, Facebook will claim this proves the Board's independence. If they don't, Facebook will say its judgment to exclude Trump was vindicated. Heads they win, tails we lose. Journalists should know better than to take this window dressing seriously," said Laurence Tribe, professor at Harvard Law School and member of the Real Facebook Oversight Board, a group critical of Facebook and its panel.
Facebook created the oversight panel to rule on thorny content questions on its platforms, in response to widespread criticism of its inability to respond swiftly and effectively to misinformation, hate speech and nefarious influence campaigns. Its decisions so far have tended to favor free expression over restricting content.
In its first rulings, the panel overturned four out of five decisions by the social network to take down questionable content. It ordered Facebook to restore posts by users that the company said broke standards on adult nudity, hate speech, or dangerous individuals.
These included a Burmese-language post by a Myanmar user about Muslims, featuring two widely shared photos of a dead Syrian toddler, which the board ruled was offensive but did not rise to the level of hate speech.
President Donald Trump speaks to a crowd before boarding Air Force One at Andrews Air Force Base, Md., in this Wednesday, Jan. 20, 2021, file photo. (AP Photo/Luis M. Alvarez, File)
But none of the rulings have the same gravity as this week's decision on Trump. The board was to announce its decision last month but that was delayed, it said, because it needed to process more than 9,000 public comments.
The board currently has 20 members, a number that will eventually grow to 40; they include a former prime minister of Denmark and a former editor-in-chief of the Guardian newspaper, along with legal scholars, human rights experts and journalists.
The first four board members were directly chosen by Facebook. Those four then worked with Facebook to select additional members. Facebook pays each board member a salary.
The board's independence has been questioned by critics who say it's a Facebook public relations campaign intended to draw attention away from deeper problems of hate and misinformation that still flourish on its platforms.
This March 29, 2018, file photo shows the Facebook logo on screens at the Nasdaq MarketSite, in New York's Times Square. (AP Photo/Richard Drew, File)
"The Oversight Board is designed to distract journalists and policy makers from the massive harm being done every day by Facebook," said Roger McNamee, an early investor in Facebook. "To view the board as legitimate, one must accept that a group structured to review a handful of cases a year is enough to supervise a platform that is undermining democracy around the world, amplifies denial in a pandemic, allegedly engages in price fixing in digital advertising, amplifies hate speech, and shares tens of millions of harmful messages every day."
Facebook regularly takes down thousands of posts and accounts, and about 150,000 of those decisions have been appealed to the oversight board since it launched in October 2020. The board has said it is prioritizing the review of cases that have the potential to affect users around the world.
Spotify countered Apple's move on podcasts by allowing creators to keep all subscription revenue until 2023
Spotify unveiled plans Tuesday for podcast subscriptions, countering a similar initiative from Apple but offering a better deal than its US rival by allowing creators to keep all revenue for a limited time.
The updated podcast platform for the United States, set to expand internationally in coming months, lets podcasters make episodes available only to subscribers.
Taking advantage of the tool will cost podcasters nothing for the next two years, meaning they get all subscription revenue, Spotify said in a post.
Spotify plans to begin charging a five percent fee on subscriptions in 2023.
"Our aim is to help podcasters maximize their subscription audiences and grow them from their existing listener bases," Sweden-based Spotify said.
The announcement comes as Apple is poised to add a subscription option to its pioneering podcasting service as it ramps up competition with Spotify.
Beginning in May, podcast fans around the world will be able to sign up for subscriptions for perks such as ad-free listening or exclusive content from creators, according to Apple.
Podcasters will receive 70 percent of subscription revenue in the first year, with Apple keeping the rest as a commission.
Once a subscriber has been subscribed for a year, the creators' share will rise to 85 percent, according to Apple.
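A quick sketch of the split described above (the function name and the cents-based representation are illustrative assumptions, not Apple's API):

```python
# Apple's podcast subscription split, per the figures above: creators keep
# 70 percent of revenue during a subscriber's first year and 85 percent once
# that subscriber has been subscribed for a full year. Working in integer
# cents avoids floating-point rounding surprises.

def apple_creator_share_cents(revenue_cents: int, subscriber_year: int) -> int:
    """Return the creator's cut of subscription revenue, in cents."""
    rate = 70 if subscriber_year <= 1 else 85
    return revenue_cents * rate // 100
```

So $10.00 (1,000 cents) of first-year revenue pays the creator $7.00, rising to $8.50 from the second year on.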
The subscription feature comes with an overhaul of the Apple Podcasts platform, which lays claim to kicking off the podcasting trend with its launch 15 years ago.
Industry tracker eMarketer last month said that Spotify has been growing quickly and that its podcasting service is on track to have more listeners in the United States than Apple Podcasts this year.
Spotify said that its subscription platform is kicking off with a dozen creators along with a collaboration with non-profit media organization National Public Radio.
by Ian Thomsen, Northeastern University
Social media channels are accused of enabling hate speech and misinformation that contributes to violence, including the Jan. 6 attack on the US Capitol. Credit: Matthew Modoono/Northeastern University
The day is coming when your posts to social media may travel through checkpoints before the messages go public.
All of your posts to Facebook, Twitter, and other social platforms may be instantly examined by an artificial intelligence filter that roots out hate speech and misinformation. Some posts that have been flagged by artificial intelligence may then be reviewed by a human supervisor.
Such is the recommendation of Usama Fayyad, executive director for Northeastern's Institute of Experiential Artificial Intelligence, in response to the increasingly urgent desire for oversight of social media.
Fayyad believes social media filters are needed because the platforms have grown and scaled faster than they can be regulated—with the result that social channels are now being accused of enabling hate speech and misinformation that contributes to violence, including the Jan. 6 attack on the US Capitol.
"Social media needs to pass through some hurdle, mostly algorithmic and automated, but correctly augmented with the right human intervention," says Fayyad, a leader in artificial intelligence for three decades who founded Open Insights, Yahoo! Research Labs, Data Mining at Microsoft, and the Machine Learning Systems group at NASA's Jet Propulsion Laboratory. "The problem of misinformation is difficult but not impossible.
"We know that some stuff fits in this zone of uncertainty where we still need human judgment," he says. "With a relevant feedback loop that appropriately leverages human judgment, the more we deal with these issues through human intervention, the more the system learns—and the smarter the technology gets."
Facebook has been developing artificial intelligence tools to root out "bad activity," said Mark Zuckerberg, chief executive of the social media giant, in Congressional testimony as long ago as 2018. But will the biggest social channels—including Facebook, Twitter, and Instagram—commit to comprehensive changes to snuff out hate speech?
Fayyad is proposing a social media equivalent to the seven-second delay that TV networks use when covering sports and other live events. If a performer uses profanity, the delay enables censors to silence the foul language before it can be broadcast into homes.
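The pipeline Fayyad sketches, with automated decisions for clear-cut posts and human review for the zone of uncertainty, can be illustrated in a few lines. The thresholds, score range and function name here are assumptions for illustration, not anything a platform has published:

```python
# Toy routing logic for an AI moderation filter: a classifier assigns each
# post a harm score in [0, 1]; confident scores are handled automatically,
# while the uncertain middle band is escalated to human reviewers whose
# verdicts can be fed back to retrain the model.

def route_post(harm_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Decide what happens to a post given a model's harm score."""
    if harm_score >= high:
        return "block"         # confidently harmful: filtered before publication
    if harm_score <= low:
        return "publish"       # confidently benign: no intervention
    return "human_review"      # zone of uncertainty: a person decides
```

The seven-second-delay analogy maps onto the same idea: the post is held just long enough for the check to run before it goes public.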
"The social media companies have achieved their mission of transforming how we communicate," Fayyad says. "We've got to remember that all these companies are fairly new, and that we have not had the time to think about the issues as a society.
"This is an environment where something like social media can emerge and within two or three years become ubiquitous," Fayyad says. "We have not seen that in the history of humanity before. We need a new way of thinking around regulation and what it means and how quickly to evolve it."
The platforms' nascent efforts to develop filters are promising, says Christo Wilson, associate professor of computer science at Northeastern. There are a number of ways to sift out bad behavior: Reddit, Wilson notes, reminds users that they are responsible for following community guidelines, which reduces conflict, hate speech, and misinformation.
Wilson adds that the big social media companies have shown an ability to sniff out copyright violations and terrorist propaganda.
"The platforms are actually pretty good at catching that stuff, because they have very strong incentives to do so," says Wilson, who also directs Northeastern's bachelor's program in cybersecurity. "So it's not like they can't do content moderation."
The Northeastern professors say that the goal of limiting bad actors will require a minimum of three steps:
Regulation

There is bipartisan support for laws to curb the power of the biggest platforms.
Wilson argues that social media companies cannot be asked to set standards, police themselves, or take full responsibility for acting in the best interests of society when those measures may affect profits.
"At some point they're going to have to decide what's beyond the pale, and how it should be remedied by private actors," Wilson says of government regulators. "And I'm sure, immediately, someone will sue and take [the regulations] to the Supreme Court. Maybe it will fail, but we still have to try."
Fayyad believes comprehensive regulations will include heavy government fines for platforms that enable hate speech and misinformation. Such penalties will create additional incentives to invest in the necessary content moderation and technology.
Development of artificial intelligence
The evolution of search engines provides a helpful example.
"This is exactly how the popular search engines, including Google, emerged," Fayyad says. "They started out with a basic algorithm, and then they needed a lot of feedback with literally tens of thousands of editorial people reviewing search results. In the year 2000, it probably looked impossible: The search engines were nowhere near as good, the problem was too hard, the web was growing fast.
"It's pretty amazing how far search technology has come, just by incorporating that feedback loop and that ability to capture learnings and continuously improve over time," Fayyad says. "So I am hopeful that technology can help. Smart monitoring is not an impossible problem."
Include humans in the loop
Regulations will help define the role of human oversight whenever a post is flagged by artificial intelligence.
"Whether you refuse to host it entirely, or it goes for human review, there's a bunch of normative issues there," Wilson says. "And then there will be issues with human review as well. But if we accept that social networks and social media are going to be huge, we have to have moderation systems that scale—and there's no denying that AI and machine learning is going to be part of that."
Wilson says the role of human review may hinge upon a new perspective on the platforms and their role in society.
"When the platforms describe themselves, they don't talk about content moderation," Wilson says. "They talk about being an open community. And that sets an expectation that people can do whatever the hell they want. So they need to more strongly acknowledge their role in shaping discourse."
Wilson believes the harm of bad intentions can be further limited if the big platforms are broken down into smaller channels that no longer will be able to influence large populations.
"The ability to spread [dangerous] messages and really impact the mainstream is very much linked to the centralization of social media platforms," Wilson says. "This is one reason I also favor antitrust remedies: Smaller platforms have a little bit more latitude to do bad or to not moderate well, but there's less collateral damage."
Fayyad says most users won't notice the new measures.
"Ninety to 95 percent of the posts hopefully should not be blocked and should have no intervention," says Fayyad, who hopes that the relatively small percentage of harmful content will make it easier to catch.
Developing the technology will be expensive, Fayyad acknowledges. But he believes it will ultimately pay for itself.
"You create economic value," Fayyad says. "People will tend to use the social media that they trust and that they know is going to be safe. And that's going to be worth something."
by Brett Molina, USA Today
Credit: CC0 Public Domain
Having trouble working with Google Drive or Google Docs on Monday? You are not alone.
Google says it's investigating issues impacting its productivity services, including Drive, Docs, Google Sheets and Google Slides.
"The affected users are able to access Google Drive, but are seeing error messages, high latency, and/or other unexpected behavior," reads a message on Google's workplace status dashboard Monday.
The tech giant cited similar issues with Docs, Sheets and Slides.
According to Down Detector, a website where users can report issues with online services, the first reports of issues were posted around 8:30 am ET. The most common issues were linked to logging in and apps not loading.
Twitter is a dominant and controversial force in civil society
Fifteen years ago Jack Dorsey typed out a banal message—"just setting up my twttr"—which became the first ever tweet, launching a global platform that has become a controversial and dominant force in civil society.
The short tweet on March 21, 2006 by the Twitter CEO is now being sold at auction, with bidding reaching $2.5 million.
It has been a long, strange journey for the social network, which in January deleted former president Donald Trump's account after he was blamed for inciting the violent insurrection at the US Capitol by extremist supporters seeking to overturn his election loss.
The banning of a head of state from the platform was both welcomed and denounced in a sign of the thin line Twitter and other social media networks often try to walk between neutrality, freedom of expression, and moderation and prevention of abuse.
Bidding on Dorsey's tweet ends later Sunday. He has said he will donate the funds to charity.
Dorsey's tweet will be sold as an NFT, or a non-fungible token.
NFTs use the same blockchain technology behind cryptocurrencies to turn anything from art to sports trading cards into virtual collector's items that cannot be duplicated.
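The distinction can be made concrete with a toy model, using plain Python dictionaries as a stand-in for blockchain state (the names here are illustrative): fungible tokens are interchangeable balances, while an NFT is a unique identifier tied to exactly one owner.

```python
# Fungible tokens: any unit is interchangeable with any other, so the
# ledger only needs to track balances per account.
balances = {"alice": 10, "bob": 5}

# Non-fungible tokens: each token ID is one of a kind, so the ledger
# tracks which single owner holds each specific token.
nft_owners = {"first-tweet": "jack"}

def transfer_nft(owners: dict, token_id: str, new_owner: str) -> dict:
    """Return a new ownership record with token_id reassigned."""
    updated = dict(owners)
    updated[token_id] = new_owner
    return updated
```

Selling Dorsey's tweet as an NFT amounts to reassigning that one unique entry, which is what makes it a collector's item rather than a currency.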
by Coral Murphy Marcos
Credit: Unsplash/CC0 Public Domain
Zoom will roll out automatic closed captioning for all free accounts in the fall, according to a blog post from the company.
The video conferencing company will make the Live Transcription setting available to all Zoom users "to provide a platform that is accessible to all of the diverse communities we serve," it said.
Zoom users with free accounts who require the automatic closed captioning can receive the update upon request starting today, ahead of the broader release, by filling out a request form.
"Since we expect a high volume of requests, we appreciate your patience as we work to make automatic closed captioning available for all of our users," reads a statement by Jen Hill, Zoom's product marketing manager.
Live transcription has so far been available exclusively to paid accounts, and the feature supports only English.
The live transcription feature allows hosts to enable captioning for participants as a person speaks in a meeting. The captioning can be done by an integrated third-party closed captioning service, or, for greater accuracy, by the host or an assigned meeting attendee.
Background noise, volume and clarity of the speaker's voice, a speaker's proficiency with the English language and lexicons and dialects specific to a geography or community may affect the accuracy of the automated live transcription.
Microsoft Teams and Google Meet also provide free live captions for users. Teams offers transcriptions only in English, and Google Meet can caption English, French, German, Portuguese (Brazil) and Spanish (Mexico and Spain).
Credit: Unsplash/CC0 Public Domain
Facebook said Thursday it would expand its climate information hub and direct users to experts who debunk myths and hoaxes in the field, in a ramped-up effort to fight misinformation.
The social media giant said it has added a section to its climate hub that features "facts that debunk common climate myths" and will rely on climate communication experts from George Mason University, the Yale Program on Climate Change Communication, and the University of Cambridge.
The information hub is already available in France, Germany, Britain and the United States, and is being expanded to Belgium, Brazil, Canada, India, Indonesia, Ireland, Mexico, the Netherlands, Nigeria, Spain, South Africa and Taiwan.
"Along with expanding the center, we're improving it. We added a section that features facts that debunk common climate myths," Facebook said.
"We already direct people to the Climate Science Information Center when they search for climate-related terms, and will keep doing that where the center is available. In countries where it isn't, we will soon direct people to the UN Environment Programme, a leading global environmental authority."
Facebook is testing news feeds that will de-emphasize politics as part of an effort to reduce inflammatory content on the platform
Facebook said Wednesday it began rolling out news feeds with less political subject matter in line with a plan outlined by chief Mark Zuckerberg to reduce inflammatory content.
The leading social network said it would begin testing the change "for a small percentage of people" in Canada, Brazil and Indonesia this week, and the United States in the coming weeks.
"During these initial tests we'll explore a variety of ways to rank political content in people's feeds using different signals, and then decide on the approaches we'll use going forward," product management director Aastha Gupta said.
The change won't affect information about the Covid-19 pandemic and content from global health organizations or from official government agencies.
"As Mark Zuckerberg mentioned on our recent earnings call, one common piece of feedback we hear is that people don't want political content to take over their News Feed," Gupta said.
"Over the next few months, we'll work to better understand people's varied preferences for political content and test a number of approaches based on those insights."
The move comes with Facebook and other platforms under fire for enabling political misinformation and manipulation, notably during election periods.
Zuckerberg said last month Facebook is seeking to "turn down the temperature" on its sprawling platform by reducing the kind of divisive and inflammatory political talk it has long hosted.
He said the social media giant will no longer recommend politics-themed groups to users and was working on ways to reduce the amount of political content served up in users' news feeds by its automated systems.
"We're still going to enable people to engage in political groups and discussions if they want to," Zuckerberg said last month.
By Alex Hern, the UK technology editor for the Guardian
‘A significant proportion of WhatsApp users avoid Facebook itself while still using the chat app it owns.’ Photograph: Lionel Bonaventure/AFP/Getty Images
If you use WhatsApp – as around 30 million British people do – then you’ve probably already seen that the chat app is planning some changes.
But the announcement also underscores a truth that many had been reluctant to acknowledge: if you’re a WhatsApp user, you’re a Facebook customer, and while the two services have historically been quite distinct, the process of integration only moves in one direction.
On the face of it, this latest change need not spark inordinate concern. The most important data held by WhatsApp – the contents of user conversations – remains sacrosanct. The end-to-end encryption used by the app to protect the contents of all chats means that no one, including WhatsApp, knows what users are saying to each other, nor can they easily find out.
That encryption is constantly under attack, chiefly by law enforcement agencies that want a return to the heyday of the 2000s, when criminal conspiracies could be easily uncovered by demanding phone companies hand over the contents of SMS messages. But it has held firm, in part because Facebook’s long-term commercial vision is for more encryption, not less – a view Mark Zuckerberg laid out in 2019, when he posted a long note to his Facebook page titled A Privacy-Focused Vision for Social Networking.
But that note also laid out Facebook’s long-term plans for WhatsApp: merging the chat app with the company’s wider social network, in the name of “interoperability”. “With interoperability, you’d be able to use WhatsApp to receive messages sent to your Facebook account without sharing your phone number,” Zuckerberg wrote, “and [in commercial transactions] the buyer wouldn’t have to worry about whether you prefer to be messaged on one network or the other.”
Two years on, those goals are close to being achieved. Already, Facebook and Instagram users can send direct messages to each other without needing to switch apps. And this latest change will, from February, deepen the integration between Facebook and WhatsApp, allowing users to interact with shops that host storefronts on the former without leaving the latter.
If you’re comfortable with Facebook’s use of data (or that of its much closer subsidiary Instagram), it might be difficult to care about this. The company was recently forced by Apple to provide a privacy “nutritional label” on its iOS app, revealing how it works with user data. The labels disclosed more than 100 different pieces of data that may be collected, many of which are directly linked to user profiles, including health and fitness data, “sensitive info” and search histories. For the typical user, who has an account on both services, adding in the small amount of information WhatsApp has is a drop in a bucket by comparison.
But the change does start to eat away at the idea that you can be on WhatsApp without a Facebook footprint. The two apps’ very different histories and intended uses have led to a split in demographics among their users, and a small but significant proportion of WhatsApp users, drawn by the encryption, ad-free nature and no-frills interface, avoid Facebook itself while still using the chat app it owns.
For those users, this latest disclosure should become a watershed moment: a WhatsApp account and a Facebook account are still two separate things, but from here on out, every change is going to move in one direction. WhatsApp still collects much less data, so there’s no need to panic and sever ties immediately. But a privacy-conscious user would be well advised to begin thinking about what alternative platforms they could use to contact people who are currently only available through Facebook’s portfolio of apps.
Thankfully, there are alternative options, the most well-known of which is Signal, a free app developed by the non-profit that created WhatsApp’s own encryption system. With its roots in the privacy and security community, Signal’s technical underpinnings are second to none, and the app has spent the past few years working on becoming a viable alternative to slick user-focused services such as Facebook Messenger, without compromising on the features that make it a must-have for its more paranoid user base.
It should be no surprise that Signal is a viable alternative to WhatsApp: the non-profit which currently bankrolls the app was started with a $50m loan from Brian Acton, the co-founder of WhatsApp itself. Alternatively, you could listen to Elon Musk, who this week tweeted the simple message “use Signal”. He’s now the richest person in the world, so he must be right about something, it seems.
Whether you decide to switch or not – or just to set up a back-up chat app in case you feel the need to change down the line – the important thing is to make an active choice, and not allow a thousand small changes to add up to a state of affairs you'd never have actively agreed to. We can't all read the terms and conditions, but we can at least pause before clicking "Agree".