The metaverse is the latest fad within Big Tech’s surveillance economy. According to Gartner’s projections, by 2026, 25% of the global population will log onto the metaverse for at least an hour a day, be it to shop, work or socialize. When it comes to accessing the metaverse for business, Gartner analyst Mark Raskino pushes the start date out to the 2030s. Whenever the metaverse officially arrives, it is likely to bring a host of new problems related to privacy, security and user health.
According to Gartner’s definition, the metaverse is “a collective virtual open space, created by the convergence of virtually enhanced physical and digital reality.”
In an interview with MIT research scientist Lex Fridman, Meta CEO Mark Zuckerberg said, “A lot of people think that the Metaverse is about a place, but one definition of this is: It’s about a time when basically immersive digital worlds become the primary way that we live our lives and spend our time. I think that’s a reasonable construct.”
Hopefully, Zuckerberg is wrong, and the metaverse never becomes the primary way we live our lives and spend our time. However, whether we like it or not, an iteration of the metaverse is coming, so we should be prepared for the effects.
Security and privacy concerns abound
In the metaverse, the cyberattack surface expands significantly. Any given metaverse ecosystem involves IoT devices, wearables, and sensors in offices and homes, with hardware from multiple vendors processing large volumes of sensitive behavioral data in real time.
Zuckerberg acknowledges the obvious security concerns that are part and parcel of the metaverse. As he said to Fridman, “People aren’t going to want to be impersonated. That’s a huge security issue.”
Undoubtedly, hijacked accounts, bots and age verification will all be important issues.
It will be crucial for metaverse creators and stakeholders to be able to verify that users are who they say they are. Zuckerberg and others are already exploring biometrics, such as fingerprints, facial recognition and retinal imaging, for user verification. So, we can add biometric data to the colossal amount of personal and financial data that will be collected while we work, socialize and shop in the metaverse.
Social engineering will be rampant
The metaverse will be fertile ground for social engineering attacks. Because users appear as avatars in the metaverse, there is an obvious concern that these avatars will be stolen, falsified or manipulated by bad actors. Once an attacker takes over a user’s avatar, they could request sensitive information from that user’s colleagues.
In today’s world, if a colleague’s email is compromised, a phishing attack or nefarious request may come from that compromised account. In the metaverse, however, a request coming from a compromised avatar will likely be harder to spot. It remains to be seen how well colleagues will be able to verify the legitimacy of each other’s avatars, and spotting a stolen avatar in the workplace is likely to prove difficult.
This widespread use of avatars will ultimately make it easier for bad actors to commit fraud. Given that the metaverse often relies on cryptocurrency transactions, it will also be easier for these actors to hide their ill-gotten gains within the metaverse.
Safety concerns
According to The Financial Times, an internal memo from Meta CTO Andrew Bosworth admitted that moderating people’s behavior in the metaverse “at any meaningful scale is practically impossible.” Granted, this was in March 2021, but it does not inspire confidence, especially when one considers Meta’s safety track record in its legacy businesses, particularly Instagram.
Moreover, initial reports about metaverse safety have not been encouraging. The Center for Countering Digital Hate (CCDH) reported that VRChat, a highly reviewed app within Facebook’s metaverse, was filled with harassment, racism and sexually explicit material.
As the metaverse aims to replicate much of the physical world, regulators and public policymakers are already beginning to question whether crimes committed in the metaverse should carry real-world consequences. Earlier this month, Omar Sultan Al Olama, the UAE’s minister of state for artificial intelligence, said that serious metaverse crimes, such as murder, should result in real-world punishments.
What’s in store?
In many ways, the metaverse is reminiscent of the dot-com bubble, when companies rushed to buy up domain names. Likewise, organizations today are rushing to plant their corporate flags in the metaverse. That rush has slowed over the past couple of months, however, suggesting the hype may already be cooling. For instance, sales volume for land in Decentraland and The Sandbox recently hit all-time lows.
Nevertheless, the metaverse is an emerging attack vector that needs to be taken seriously. Personal and financial data will be collected by IoT and edge devices and processed at 5G speeds. Social engineering attacks are expected to be rampant, user privacy will be under constant threat, and safety will be a major issue. Even if what Gartner calls the “metaversial business” does come to fruition, it will be filled with security, privacy and safety landmines.
Ramprakash Ramamoorthy is the director of AI research at the Zoho Corporation.