Harassment and abuse in three dimensions, the dark side of the Metaverse
One digital watchdog was so shocked by the Metaverse footage, which contained “abuse, harassment, racism and pornographic content”, that it could not release the material to The Feed, only describe it.
Published 20 March 2022 9:45am
Updated 20 March 2022 10:09am
By Michelle Elias
Source: SBS
This article mentions sexual assault
In 1993, freelance journalist Julian Dibbell penned an essay that would set the stage for conversations about the limits of free speech, consent and abuse in digital communities.
The piece, “A Rape in Cyberspace,” chronicled the aftermath of a "cyber rape" by a player in a text-based game called LambdaMOO.
The player, 'Mr Bungle', had taken control of other players’ avatars and programmed them to graphically describe sexual acts.
In a chat room and virtual world populated by early adopters of the internet, avatars congregated to discuss the emotional trauma it had caused them and what the consequence should be for Mr Bungle’s actions.
One user, whose avatar was among the victims, called Mr Bungle's actions "a breach of civility". In real life, she said, "post-traumatic tears were streaming down her face".
Mr Bungle turned out to be the avatar of a university student egged on by fellow students. No action was taken against them.
Almost thirty years after that piece, another account would also go viral. This time, a metaverse beta tester said she had been virtually “gang raped” on the virtual reality (VR) platform Horizon Worlds, created by Meta, the company formerly known as Facebook.
“Within 60 seconds of joining — I was verbally and sexually harassed — 3-4 male avatars, with male voices, essentially, but virtually gang-raped my avatar and took photos — as I tried to get away they yelled — ‘don’t pretend you didn’t love it’ and ‘go rub yourself off to the photo,’” Nina Jane Patel recalled.
“A horrible experience that happened so fast and before I could even think about putting the safety barrier in place. I froze.
“It was surreal. It was a nightmare.”
The dream to make the metaverse mainstream
Meta wants the “metaverse” to be the next big thing – and not TikTok big, internet big.
A peek into the metaverse tells us it will be a three-dimensional digital society. There will be virtual identities, you will shop and socialise in virtual spaces, collaborate in virtual offices, exercise in virtual classes and there will be virtual economies.
With the right controllers and equipment, you could simulate a spin class or even a meeting room to collaborate with colleagues in different cities.
Avatars attend a spin class in the metaverse. Credit: Meta
“No one company will own and operate the metaverse. Like the internet, its key feature will be its openness and interoperability,” Meta said in a statement on its website.
Pastor DJ Soto, the lead pastor of VR Church, delivers a sermon in his home. He sings, preaches and performs digital baptisms in the metaverse to a growing congregation of avatars. Source: AP / Steve Helber
Google has worked on metaverse-related technology for years. Apple and Microsoft have also joined the race to get their technology ready for widespread adoption.
While initial takes on Mr Zuckerberg’s announcement last October labelled it a little “lame,” lambasted the name and called it “arrogant” to rebrand virtual reality - a technology that has been around for years - others had more sinister concerns.
In a three-dimensional world, experts, users and advocates are raising questions around abuse and harassment.
'A violation once every seven minutes'
In one popular virtual reality game, VRChat, a violation occurs about once every seven minutes, according to the London-based nonprofit Center for Countering Digital Hate (CCDH), which monitors misinformation and harmful content online.
The app isn’t owned by Meta but is hosted on its Quest platform, which is used with Meta's headsets. The CCDH says Meta should therefore bear responsibility for user safety.
While Meta has report and block features for the wider Quest platform, it’s unclear what reporting someone will achieve.
The group of researchers spent 11 hours on the game, the number one app in Meta's app store, and found 100 instances that were a “clear breach” of Meta's VR policies.
Imran Ahmed, the group’s chief executive, told The Feed the group was so shocked by the material, they did not release the footage, which contains “abuse, harassment, racism and pornographic content”.
“If it's a safe space for racists and people preaching white genocide, conspiracy theories, and people harassing 12-year-old kids, then it's not a safe space for black people, or brown people or children."
Mr Ahmed recounts one incident to The Feed.
“One example was a group of adults circling around a teenager and asking them to say the N-word, and saying, if you say the N-word, this girl will give you a virtual kiss.
“Eventually, the young kid [said it], and it's pretty chilling - the raw video clearly shows a child's voice saying the N-word.
“That is not a place that you would send kids to on their own.”
A still from VRChat. Credit: The Virtual Reality Show/YouTube
Only 51 of the incidents could be reported to Meta using its web form; the rest couldn't be matched to a username in its database - a requirement for lodging a complaint.
None of the reports were acknowledged by Meta in any way, said Mr Ahmed.
“Facebook has yet again gone for speed of growth over building safety from day one ... it's the exact opposite of what they told us they were going to do.”
Last year, around the time of the metaverse announcement, Mr Zuckerberg promised "safety and privacy and inclusion before the products even exist".
A letter from the CCDH to Monika Bickert, the Vice President of Content Policy at Meta, shared its findings.
“We have no way of knowing if your company intends to act on any of the perpetrators, some of whom may be guilty of online abuse against minors,” the letter read.
Earlier this month, journalists from BuzzFeed News filled a world in Horizon Worlds with content banned from Facebook and Instagram to test its moderation. In the article, they said the world went unnoticed "until the PR team heard about it".
Meta's product manager for VR integrity, Bill Stillwell, told The Feed in a statement: "We want everyone using our products to have a good experience and easily find the tools that can help in situations like these, so we can investigate and take action."
He added: "For cross-platform apps…we provide tools that allow players to report and block users.
"We will continue to make improvements as we learn more about how people interact in these spaces."
Bringing attention to the issue or making it worse?
Harms have always existed in virtual and augmented reality, said Ben Egliston, a Postdoctoral Research Fellow at the Digital Media Research Centre at the Queensland University of Technology.
“I think it’s perhaps a mixed blessing that we are seeing a company of Facebook’s size entering the metaverse because it is forcing academics and policymakers to focus more attention on this question of social harm in these environments,” Mr Egliston told The Feed.
But there are reasons to be sceptical, especially when looking at Facebook's history, he added.
While Meta has been investing in responsible innovation - design work that anticipates social harms and promises transparency and inclusivity - he’s not sure those principles are being applied.
Meta has kept much of how it plans to enforce its safety protocols in VR a secret.
“There’s a degree to which this kind of policy is actually performative rather than something that actually is intended to have a meaningful impact,” Mr Egliston said.
“Facebook is a company that has a history of extracting data from their users, allowing various kinds of harm to take place on the platform and not explicitly caring about the welfare of young users on their platform.”
A Financial Times report quoted a memo sent to employees of Meta by the executive leading the push into the metaverse, Andrew Bosworth, in which he said moderating how users speak and behave “at any meaningful scale is practically impossible.”
Internal Facebook documents leaked last year and known as the Facebook Papers, revealed the social media giant privately and meticulously tracked real-world harms exacerbated by its platforms, ignored warnings from employees about the risks of its design decisions and exposed vulnerable people to dangerous content.
Mr Ahmed said it’s clear Meta’s ambition is to lead the next era of technology, reportedly spending $10 billion on the metaverse in 2021 alone while announcing plans to hire 10,000 new staff members to “operate the metaverse”.
“The question that we were trying to ask was, ‘Is [Meta] a fit and proper company to dominate this tech cycle?'” said Mr Ahmed.
Abuse in three dimensions
What will abuse and bullying be like in an all-encompassing digital environment where touch is made to feel real and the sensory experience is heightened?
After safety concerns publicly mounted, Meta unveiled a new default feature earlier this month: a "personal space boundary" that keeps other avatars about a metre away from your own. But it only applies to Meta-owned apps.
The Feed reached out to people who game in VR landscapes; a number said it was up to the user to block the people around them, while others said harassment was an accepted part of gaming, especially for women.
"'Sexual harassment' in 'the metaverse' can be solved by muting/blocking someone," said one person.
Others denied the potential for trauma altogether.
But Australia's eSafety Commissioner, Julie Inman Grant, who worked as an in-house lobbyist at Microsoft for 17 years, slammed this proposition. She said the onus should not be on the user to filter out abuse, and that real trauma could very well result.
“Even when you're abused on Twitter, that can be emotionally distressing. Imagine what that kind of assault must feel like [in VR or AR].”
Mr Ahmed agreed: “It's a physically confronting environment that's designed to simulate reality. It tricks our brains into thinking we're seeing in 3D.”
“At what point does something become so scarring that it leads to trauma, to very serious psychological ramifications?”
'It's a matter of will': Practical steps can be taken
Moderation of abuse in real-time will be tricky, but Ms Grant said embedding safety in design is possible and comes down to a "matter of will".
"My observation having been in the industry is that they tend to react or respond to three potential drivers: if there's a big revenue impact, if there's a big regulatory threat, or there is a reputational threat," said Ms Grant.
She said with a bit of investment and innovation, abuse in the metaverse could be prevented.
Mr Ahmed said there are obvious protections missing from Meta's design.
If someone is repeatedly violating the rules, Mr Ahmed said the corresponding headset should be disabled. He said it’s just like kicking someone out of a pub.
“But ultimately, they've chosen not to do this because they think they can sell more devices to more early adopter users by making it look as though it's an 'anything goes' environment,” said Mr Ahmed.
Another blind spot he is concerned about is the "lack of backdoor" for law enforcement if there are grotesque breaches of criminal law.
For Meta, Ms Grant has one message.
"If you can create a metaverse, you can create a safer one, too."
If you or someone you know is in need of support, please contact 1800RESPECT on 1800 737 732 or visit 1800RESPECT.org.au.