The term ‘metaverse’ was coined by author Neal Stephenson in his 1992 science-fiction novel Snow Crash, where the metaverse was an immersive virtual world that people escaped to and led parallel lives in, connecting to it with virtual reality goggles.

Now the concept is being pushed heavily by Facebook, to such an extent that the company has rebranded itself Meta and is investing billions in the idea. And Meta is just one of many companies convinced that some variation of the metaverse is the future of how we use the internet – although many remain skeptical that users will want to connect to it this way.

While the metaverse might bring benefits to users, like any other internet-connected innovation it will attract cyber criminals, fraudsters and scammers looking to exploit it – and that’s going to create cybersecurity and privacy challenges from the beginning.

One of the key aspects of the metaverse is that users are represented in virtual environments by customized avatars – but how will you be able to tell that the person you’re interacting with is really who they say they are?

“I can go into the metaverse, I can make an avatar that looks like you, and I can give it a name that says it’s the real you – and I will probably trick some people into thinking that it’s you,” says Caroline Wong, chief strategy officer at Cobalt, a cybersecurity and penetration-testing company.

“This is tricky, because in the metaverse you might never hear someone’s real voice. You might never see someone’s real face. You could interact with a scam artist for days, for months, potentially for years and develop a trust relationship that has nothing to do with actual reality,” she explains.

Phishing email and messaging scams are already successful enough on the internet as we know it today, with cyber criminals using social engineering to steal passwords, personal information and money. In the metaverse, that could be even easier, especially if people think they’re speaking to the physical representation of somebody they know and trust, when it could be someone else entirely.

It’s possible that a fraudster could create an avatar that looks like you, then use it to help conduct attacks against your friends or colleagues – or, as with any other online account, they could simply hack into the real one. If you’re doing business with someone in a virtual world and somebody else is able to take over their account, it could be very hard to spot.

“A huge percentage of user accounts are compromised all the time. It’s something that’s going to certainly extend to the metaverse world; that needs to be protected better than it is today,” explains Andrew Newman, founder and CTO at ReasonLabs, a cybersecurity company.

The use of virtual avatars also brings another problem – how do you verify that you’re speaking to a human at all? Text-based chatbots are already commonly used to provide customer service, and developments in artificial intelligence mean bots will only get better at interacting with and responding to people.

“You could be interacting with somebody and not know if it’s a person or a bot or AI. There’s a lot of evolution around how that will be used,” says Lewis Duke, engineer at cybersecurity firm Trend Micro.
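There is no established way to prove who is behind an avatar today, but one approach that comes up repeatedly in identity discussions is cryptographic attestation: the platform stores a public key registered when the account is created, then challenges whoever controls the avatar to sign a random one-time value with the matching private key. The sketch below is purely illustrative; the helper names (register_key, verify_avatar) are hypothetical and it is not based on any API from the companies quoted in this article.

```python
# Illustrative challenge-response check: whoever controls an avatar must sign
# a fresh random nonce with the private key registered when the account was
# created. register_key and verify_avatar are hypothetical names.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

registered_keys: dict[str, Ed25519PublicKey] = {}  # avatar name -> public key


def register_key(avatar: str, public_key: Ed25519PublicKey) -> None:
    registered_keys[avatar] = public_key


def verify_avatar(avatar: str, sign_callback) -> bool:
    """Challenge the client controlling `avatar` to sign a one-time nonce."""
    nonce = os.urandom(32)
    signature = sign_callback(nonce)  # signing happens on the user's own device
    try:
        registered_keys[avatar].verify(signature, nonce)
        return True
    except (KeyError, InvalidSignature):
        return False


# The legitimate user can answer the challenge; an impersonator with a
# lookalike avatar and the same display name cannot.
alice_key = Ed25519PrivateKey.generate()
register_key("alice", alice_key.public_key())
print(verify_avatar("alice", alice_key.sign))                     # True
print(verify_avatar("alice", Ed25519PrivateKey.generate().sign))  # False
```

Whether metaverse platforms adopt anything like this remains to be seen; the point is simply that a familiar-looking avatar and display name prove nothing on their own.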
While much of the ability to access the metaverse, at least to its full extent, is going to be based around hardware like virtual reality headsets, it’s important to remember that software also forms a significant part of it – people will need to download software to access virtual spaces, use business tools, play games, and more.

And like anything we download for our computers or smartphones, there’s the potential that the software is malicious – particularly if it comes from third-party stores or is cracked software.

A virtual reality headset is just another type of computer, and planting malware on it would allow cyber attackers to gain access to systems, steal personal information or snoop on activity, just as they would with a smartphone or laptop. But in the metaverse, there are additional layers for malware to interact with.

“Obviously, it has access to your full device file system. But what’s even scarier is it has access to things like screen capture, screen viewing, all these kinds of things that are very privacy-sensitive,” says Newman.

“Think about the practicality in other spaces. You have VR for medical, schools and other things – and you have this space, which is essentially unprotected,” he warns.

Cybersecurity researchers at ReasonLabs have already shown how cyber criminals could perform an attack on a metaverse user. Dubbed a ‘big brother’ attack, it’s based around a user downloading malicious software that exploits developer mode and can be used for screen recording, as well as downloading malicious files or tampering with what the user sees within the confines of their headset.

That means attacks might not be restricted to malicious activity within the metaverse itself – if attackers who infect a headset can manipulate how the user moves, it could be possible to cause them physical harm.

“When I first put on my virtual headset, it says, ‘okay, draw a physical line around the boundary that you’re going to try and stay in’. If an attacker manages to exploit some software vulnerability and manipulate that boundary, I’m potentially in actual physical danger, simply running into stuff,” says Cobalt’s Wong.

“That’s an interesting thing about the metaverse, which is that it actually does introduce the possibility of actual physical harm due to the fact that your vision and your hearing are completely taken over,” she warns.
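The ReasonLabs proof of concept starts with the victim sideloading an untrusted package, and one long-standing precaution carries over directly to headsets: checking a download’s hash against the checksum the developer publishes before installing it. The snippet below is a generic sketch of that check, with a placeholder file name and checksum; it assumes the developer actually publishes a SHA-256 value, which not all do.

```python
# Minimal integrity check for a downloaded package: compare its SHA-256 hash
# with the checksum published by the developer. The file name and expected
# value below are placeholders for this sketch.
import hashlib
import sys


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large packages don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    package_path = "some_vr_app.apk"                # placeholder file name
    published = "paste-the-published-sha256-here"   # placeholder checksum
    actual = sha256_of(package_path)
    if actual != published:
        sys.exit(f"Checksum mismatch, refusing to install: {actual}")
    print("Checksum matches the published value")
```

A matching hash only shows that the file you received is the one the developer shipped; it says nothing about whether that software deserves your trust, which is why sticking to official stores remains the baseline advice.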
However, it’s not just cyber criminals that need to be considered when thinking about cybersecurity and privacy in the metaverse – the sheer amount of sensitive data being collected in these environments means there’s the potential for the companies that power the metaverse to exploit that information.

“There are privacy issues that are going to come with the amount of data, in addition to how much data we already give organizations – what we looked at, our biometrics, our reaction movements – it’s an absolute treasure trove,” says Duke.

Companies can be quick to ensure they can collect and use as much data as possible, but users might not even be aware of the implications of the information they’re giving up or what it means for their privacy.

“It’s taken a long time for people to understand cookies. Only now, in the last couple of years, have we really seen legislation about managing how users are informed about how their data is used. And now we’re going to add many different bits of data on people again,” Duke adds.

There’s also the issue that legislation tends to be slow to react to advancements in technology, which means that by the time rules and regulations are put in place, it could already be too late – just look at how laws around cybersecurity and privacy for the Internet of Things have been outpaced by the billions of smart devices released into the world. The metaverse could have the same problem.

“What’s to stop a big company from even collecting more data? Biometric data? All these kinds of things that there are no rules in place to stop,” says Newman.

Currently, the metaverse is still on the fringes of how we use the internet, but a lot of money is being invested in it by those who see it as the future of how we work, socialize and play online. Like any other social environment on the internet, there will be those looking to abuse it, but there are steps users can take to help stay safe.

For starters, any account used to access the metaverse should be secured with multi-factor authentication – typically a time-based one-time code, as sketched at the end of this article – to provide an additional barrier against takeover. It’s also recommended that applications are downloaded and installed from official sources, to reduce the chance of malicious software ending up on your device.

Inside the metaverse, it might be difficult to ever fully verify that the people you’re interacting with are who they say they are, but, as with phishing emails, be mindful of any urgent or unusual requests – that might be a sign you’re interacting with someone with ill intent.

“I’m very optimistic about the metaverse. I think it’s got tremendous benefits, and we can connect, and we can learn and it’s going to be really cool,” says Wong.

And just as it’s wise to take a screen break from time to time, the same will apply in the metaverse, she notes. “We just need to remember if anything sketchy happens, you can just take your headset off to take a moment and figure out what your next step is.”
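As promised above, here is a rough look at what that first recommendation involves under the hood. Multi-factor authentication usually means a time-based one-time password (TOTP), the six-digit code an authenticator app regenerates every 30 seconds from a secret shared with the service. The sketch below follows the standard TOTP construction (RFC 6238); it is not tied to any particular metaverse platform, and the example secret is a throwaway value.

```python
# Rough sketch of how a TOTP code (the six-digit number in an authenticator
# app) is derived from a shared secret, following RFC 6238 / RFC 4226.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # counter as big-endian u64
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The same calculation runs on the service and in the user's authenticator
# app; the secret below is a throwaway example, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```

Even in sketch form the benefit is clear: a scammer who talks you out of your password inside a virtual world still can’t take over the account without a code derived from a secret that never leaves your device.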