Since Facebook announced earlier this year that it was changing its name to Meta, there has been an uptick in the public’s interest in what is being referred to as ‘the Metaverse’.
Although it may seem like a new concept, it has been on the cards for a long time now, and has mostly been awaiting advances in technology to bring it to reality.
But what is the Metaverse? What are Digital Humans? And what is it all about?
The answers to these questions are not easy or straightforward – not because they are complex subjects, but because they are open-ended questions, and there are as many answers as there are people with visions of the future.
Rather than try to give a comprehensive answer, let’s look at a number of these visions of the future, seeing where we are now and where we can expect them to go.
We will cover the following topics:
- Digital Humans
- The Metaverse
- Virtual Influencers
- Virtual Models
At their core, Digital Humans are virtual representations of real people. They may look identical to those people, or they may look somewhat different.
We are all used to CGI in films, television and even computer games – and the fact that some look amazingly realistic.
But recent advances in technology and computing power have meant that, not only can they look superbly lifelike, but they can now be created in real time.
It is this technological leap forward that has shifted Digital Humans into the limelight.
Traditional CGI has always required immense computing power, and was initially created by generating visuals to match up to pre-recorded audio of a person, as with traditional animation.
The next stage was motion-capture, where actors would wear specially designed suits, covered in markers, which allowed the computer to track movement and replicate it in a 3D digital environment.
The computer could then be programmed to recreate the actor’s movements in a virtual scene, which resulted in much more realistic motion than was previously available.
Digital Humans take this to the next level. Technology today allows for this motion to be captured and reproduced in real-time, as the actor is moving.
But this is only the tip of the iceberg. The same advances in technology have, at the same time, enabled us to reproduce skin, hair and cloth textures that are so real they are practically indistinguishable from the real thing.
Take a look at this video which is, unbelievably, already three years old. (We will meet her more up-to-date siblings a little later.)
So a Digital Human can be described as an avatar – a virtual representation of a real person – but with the options of visual upgrades. Your avatar could look exactly like you, warts and all, or be enhanced, or you could choose to create an entirely new look for yourself.
Ever wanted to be taller, slimmer? Or have brown eyes instead of blue? Or how about the ability to change your hairstyle every day to suit your mood?
Today, Digital Human technology is still pushing boundaries to become more lifelike. They are popping up in computer games and on-screen performances and their applications are as many as there are creative ideas.
Real Face Generators
Alongside this has been the development in the photography world of a similar trend – Face Generators.
A Face Generator creates unique photo-realistic faces of people without actually being a real person. It does this by blending features of thousands of real people, to come up with an image that is entirely unique and that has never been seen before.
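Conceptually, the blending step works like a weighted average of facial features. The toy sketch below illustrates the idea only – real face generators learn their features from thousands of photos with machine-learning models, and every number and name here is purely illustrative:

```python
# Toy sketch: "blending" face feature vectors into a new, unique face.
# Real generators are far more sophisticated; this only shows the
# weighted-average idea behind combining many source faces.

def blend_faces(feature_vectors, weights):
    """Weighted average of per-face feature vectors."""
    assert len(feature_vectors) == len(weights)
    total = sum(weights)
    size = len(feature_vectors[0])
    blended = [0.0] * size
    for vec, w in zip(feature_vectors, weights):
        for i, value in enumerate(vec):
            blended[i] += value * (w / total)
    return blended

# Three hypothetical source faces, each described by a tiny made-up
# 3-number "feature" vector (e.g. jaw width, eye spacing, nose length).
faces = [
    [0.2, 0.5, 0.9],
    [0.8, 0.4, 0.1],
    [0.5, 0.9, 0.5],
]

# The result is a face that belongs to no single source person.
new_face = blend_faces(faces, weights=[1, 1, 2])
print(new_face)
```

The key property is the one the article describes: because the output mixes many inputs, it matches no single real person, so no individual model exists to be paid royalties.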
Face Generators are currently being used by thousands of businesses for advertising, website promotion and many other uses. Commercially, the advantage of a Face Generator is that you do not need to pay a model any royalties for using the photo! It also means that, as a business, you can generate the exact look you want for the purpose you need, whether that is overall look, hair colour, eye colour or even ethnicity.
Digital Humans are the result of a process that combines the best of the Face Generator realism with real-time motion capture.
And when we now talk about motion capture, we are beyond the special suits covered in markers – we are referring to a camera that reads and replicates your facial and body movements as they happen.
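The capture loop described above can be sketched in a few lines. This is a conceptual skeleton only – the function names are hypothetical stand-ins for the computer-vision models that real pipelines use to estimate landmarks from each camera frame and retarget them onto the Digital Human’s rig:

```python
# Conceptual sketch of a real-time, markerless capture loop.
# estimate_landmarks() stands in for a vision model; retarget() stands
# in for mapping tracked points onto the avatar's face/body rig.

def estimate_landmarks(frame):
    """Stand-in for a vision model: maps a camera frame to named
    landmark positions (fake, fixed 2D points here)."""
    return {"left_eye": (0.4, 0.5), "right_eye": (0.6, 0.5), "jaw": (0.5, 0.8)}

def retarget(landmarks, avatar_pose):
    """Copy the tracked landmark positions onto the avatar's rig."""
    avatar_pose.update(landmarks)
    return avatar_pose

avatar_pose = {}
for frame in ["frame_0", "frame_1", "frame_2"]:  # stand-in for a camera feed
    landmarks = estimate_landmarks(frame)
    avatar_pose = retarget(landmarks, avatar_pose)
    # A real pipeline would now render the Digital Human at display rate.

print(avatar_pose["jaw"])
```

The point is the per-frame cycle: capture, estimate, retarget, render – fast enough that the avatar moves as the actor moves, with no marker suit involved.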
Here’s a great introduction to the process:
You can see the results in another video a little further on in this article.
One of the more exciting developments that is beginning to show promise is the use of Digital Humans in the Metaverse.
But what exactly is the Metaverse and what does it offer us?
The concept of the Metaverse is not new – you may already be familiar with the online game Minecraft, for example, or even Second Life, which began as long ago as 2003.
These are online environments (working in real-time) in which you can explore, travel around, and meet people – live, if you will, a second life. Your online presence is represented in this world by your avatar – a character that you can create to look however you wish.
There are numerous examples of these platforms which all offer different experiences, but the underlying principle is that it is a created world that is limited only by imagination.
In some of these worlds, you can build homes for your avatar, have them live their lives and interact with other people.
The concept of the Metaverse has been around for a long time – the name is credited to the author Neal Stephenson, from his science fiction novel ‘Snow Crash’, written way back in 1992. In the novel, he conceived the idea of avatars who meet up in virtual reality environments, such as 3D-generated buildings.
Coming back to today, think of the coming Metaverse as a combination of these online 3D environments, blended with today’s realistic graphical representation.
Your online avatar (looking like you, or however you choose to represent yourself) will be able to socialise, work and live in this online, realistic environment. Your connection to this world will be 3D glasses (think in terms of a refined Oculus headset), with motion capture gloves or even a complete bodysuit.
Locations may be realistically modelled 3D environments (recreations of your office, for example) where you can meet and work with your team in this virtual environment.
Outside of work, this will be extended to include virtual concerts, other events, even social meet-ups with friends – all from the comfort of your own home.
It does raise the question as to why anyone would want to live this lifestyle – when it can be achieved ‘in real life’ – but it also includes many experiences that cannot be had otherwise.
Mark Zuckerberg, among others, sees the Metaverse as the next iteration of the internet – and look how quickly we became dependent on that for living our current lifestyle!
Again, this idea is not new – even back in 2007, the comedian Jimmy Carr held the world’s first professional comedy gig online, within Second Life. The graphics were terrible and the whole experience was glitchy, but it worked.
What has changed since then is the technology to create these environments – the Metaverse – on a global scale and with realism.
It does seem that, like it or not, some version of the Metaverse is heading our way – and if it is, then Digital Humans will be a major part of that technology. Digital Human tech is the key to making online 3D experiences seem truly lifelike.
A slight sidestep at this point (sort of) to discuss Vocaloids.
Essentially, a Vocaloid is a piece of software that sings. Backed by the Yamaha Corporation, it was developed as a synthesised voice, which enabled users to enter lyrics and melody for the software to then ‘perform’ the song. As it developed further, different voices (male and female) were added to the voicebank, first in English, then in Japanese.
Designed as a tool for professional musicians, for example, to add backing vocals to a track without having to employ a real singer, some of the voicebank characters have taken on a life of their own. Anime cartoon characters have been developed to accompany the voices.
The most successful of these is the Japanese ‘girl’ Hatsune Miku. In 2010, she released an album. In 2014, she opened for Lady Gaga at a concert.
She has successfully performed concerts to live audiences. The person does not exist, the only visual representation is a cartoon girl, and yet people flocked to her ‘live’ concerts to see her perform. In Japan, she is as popular as any real celebrity.
The difference with this celebrity is that she is programmable by almost anyone with access to the software. There are thousands of fan-produced songs – and also merchandise.
Miku is a performer, but she is also a fascinating merchandising tool. She is certainly not the only one, but she is the most popular.
Gorillaz is another example, from the music industry. Created in 1998 by musician Damon Albarn and artist Jamie Hewlett, Gorillaz became a successful British virtual band, represented by animated visuals with no clear indication of the identity of the actual performers.
One step on from this, and we have a shift towards what promises to be a most interesting concert in 2022.
ABBA are taking to the stage again – 40 years after the band split up, Agnetha, Björn, Benny and Anni-Frid are going on tour – but not as we are used to.
Using motion-capture recordings, they are performing as digital humans together with a live orchestra, in what promises to be a unique blend of real-life and computer-generated visuals – as they themselves say, ‘blurring the lines between physical and digital’.
It is these steps – taken both by Vocaloids and bands such as ABBA – that are changing the public’s expectations – and acceptance – of the convergence of reality and the digital world, and therefore play an important part in our journey towards the next evolution of online integration into our lives.
An influencer, according to Wikipedia, is a “celebrity who has acquired or developed their fame and notability through the internet”.
A virtual influencer is much the same, except, as with Hatsune Miku above, they are not real. They are a marketing creation, designed by artists and programmers, usually to perform a specific brand purpose, although some are ‘freelance’.
They are computer-generated avatars that appear as real people. They are a substitute for human influencers, and almost exclusively used for social media marketing. Virtual influencers have their own values, characteristics and personalities.
The advantages are, in many ways, the same as for virtual musicians – they cannot put a foot wrong (i.e. say the wrong thing or behave badly in public) and so cannot attract negative press. They will never have to make a public apology for something they said ten years ago; they will never be photographed staggering out of a nightclub at three o’clock in the morning.
In the past, it may have been a harder sell for a company to use virtual characters, but we are now at the point where they are becoming publicly acceptable, and so the potential negatives that ‘fake’ people used to carry have all but disappeared.
Samsung Philippines has done this to great effect by teaming up with Lil Miquela, an American virtual influencer who now has over 3 million followers on Instagram, and has worked in the past with brands such as Prada and Calvin Klein.
Here is the video ad she made with Samsung:
More recently, we have seen the introduction of many more of these virtual influencers, especially after the recent pandemic lockdowns around the world, which hindered real-life influencers from getting out and about.
Indeed, even the WHO recruited a virtual influencer – Knox Frost – to spread coronavirus safety messages to his 700,000 online followers.
And while the biggest acceptance for these new celebrities is in Asia, their appeal is worldwide and growing fast.
It is estimated that there are now around 130 virtual influencers working around the globe – and, given current trends, that number is only going to grow.
Almost identical to virtual influencers, virtual models are the latest trend in digital fashion – and for the same reasons.
A brand can decide exactly how to represent itself and simply create the perfect model to match.
It is getting harder and harder to distinguish between real and virtual video – and with still photographs, it is now almost impossible.
Virtual models are, perhaps, the first examples of mainstream Digital Humans where the public is sometimes not even aware of the nature of the person. For absolute trust, a brand should make it known whether a model is real or not; but as this technology advances and the use of Digital Humans becomes normalised, this may become less of an issue.
Indeed – back in 2019, a virtual model – Daisy Page – was signed onto the books of a model agency in Los Angeles. Daisy is a forever-nineteen-year-old model who is represented in the same way as the agency’s real talent.
A key difference between Daisy and, say, Lil Miquela, is in the approach to their design. Daisy is 100% digital – including her clothes – which gives the creators 100% control over every design aspect.
Lil Miquela is usually a CGI head merged onto a real body in a photograph – this can make interactions with other people appear more real, but the blending of digital and real creates its own challenges.
But either way, both are big business.
In 2018, Lil Miquela got a job as a contributing arts editor with Dazed magazine – and in 2019, her creators raised $125 million from investors (TechCrunch).
So where does this leave us? The answer is – wherever you want to be 😊
The opportunities of the future of the Metaverse, as we have said above, are limited only by imagination.
It’s a cliché, for sure, but it also happens to be true.
We’ve already seen a few examples above, but let us finish up with a few more, to give more of an idea of where this future technology is leading us.
Remember the interview with Andy Serkis above? Here’s the final project they were working on at Unreal:
Unreal (creators of the Unreal Engine, used in many computer games, amongst other uses) are pioneering some amazing techniques in the field of the Digital Human – here’s another short video of theirs:
You can learn more about Unreal here.
To find out more about Lil Miquela, why not follow her on Instagram?
And Facebook, of course, needs no introduction – although they have recently rebranded as Meta, to signify the shift in the company’s focus to the Metaverse as its future.
Most recently, their VR brand, Oculus, launched Horizon Worlds – in their words: “…a social experience where you can explore, play and create in extraordinary ways”. Here is their intro video:
As well as Prada, many other fashion brands have also used Digital Humans in their advertising – including Burberry, Givenchy, Balmain, Dior, Chanel and Louis Vuitton. There are those who predict that the Metaverse could become a $50 billion fashion and luxury goods market in the coming years.
The same forecasters also pointed to the recent sale of $5.7 million of NFTs by Dolce & Gabbana. (NFT – Non-Fungible Token, a subject for another article, but for now, think of them as restricted-supply, digital limited editions.)
Meanwhile, other well-known brands, from IKEA to Porsche have made marketing moves in this new, Meta, direction.
In short – virtual influencers, virtual models, and all their variations are here to stay.
As the Metaverse opens up (it’s not going to happen overnight), so will the acceptance of these virtual humans. We have even already seen the first Metaverse wedding, on Labor Day (September 6th) – albeit a reenactment of the legal, physical ceremony.
As we all become more integrated with the technology – and as we all begin to represent ourselves with Digital Humans – so will this currently-new technology flourish.
The proponents of the Metaverse envisage a world where we live, work and socialise digitally, and while you may not be able to conceive of that idea yourself just yet, there is every chance that it will come to pass.
Invention, innovation, adaptation and integration have always been a part of the human story.
It looks like this will continue to be true with the Digital Human story as well.