The world of VR and AR is still emerging, slowly weaving itself into the cultural conversation. Yet in terms of how brands and people consider integrating the technology into everyday life, London-based tech firm M-XR (formerly Mimic-XR) has surveyed the arena intently and believes the industry has not hit peak maturation. Employing its forward-thinking prowess, M-XR is using machine learning and 3D capture technology to import the real world into the digital realm, developing inventive experiences to suit the needs of companies and creators alike. Recently, as M-XR spoke at Samsung’s Design Unfolded experience and debuted its immersive project with BYBORRE, HYPEBEAST caught up with co-founder Elliott Round to discuss how he and his business partner established the studio to change how people will interact with the 3D world in the near future.
Can you give some background behind what M-XR is?
M-XR is really interested in the 3D space, so VR and AR — also anything 3D, from live visual effects to films and games. We’re really focused on how we can make this industry more user-friendly in terms of the creation process. At the moment, it’s very tedious: it’s a lot of manual work. And as a result of that, the creative tends to get forgotten or the budget gets so high. So, we’re really interested in how you can use tech, and specifically AI, to build tools that can almost automate a lot of this process to then empower the creator.
How did you find your way into working with VR and AR?
I was working in the film industry, and we moved into doing 360 video, which was quite fun. But you couldn’t really interact with it. That’s when I started playing around with a program called Unity. We started with some interactive work where you could pick stuff up and interact with the characters. It stopped being film; it was more like you were transported into a space and environment. And that was really exciting all of a sudden. I was never into 3D games until that point; I thought it was a bit geeky. But I was like, “Okay, you can do some cool stuff with this, like live music, events, exhibitions — all that kind of stuff.”
But the problem was that it took so long to do anything, and even then, it still looked a bit crappy. So I took a step back, and I was like, “If this really wants to pick up, how can we move it forward?” And the thing that seemed to be the biggest problem was: how do you create the content that goes into these worlds? I started looking into photogrammetry, where you just take a bunch of pictures and make some models. But the problem is, it isn’t real, because it doesn’t capture any material properties, so suede, leather or plastic wouldn’t react to light differently at all.
So that’s when Ryan and I founded M-XR, originally called Mimic. And we started looking into how we could actually acquire these materials, so that when we do a scan, we have all the properties of how an object reacts to light. The technology has been progressing and progressing over the past couple of years. Now we can capture a whole variety of material properties when we scan an object and put it into a digital environment, and if we light it the same way as the real object, they’re indistinguishable.
Do you think the work you are doing could overtake the physical world?
I don’t think VR should be seen as replacing the real world. I love traveling and seeing real-life stuff; I don’t want to ever replace that. I do think it is great for enabling people to shrink the world. There are places that you can’t visit for a whole plethora of reasons; VR can enable that. Equally, if you want to buy a product, you can’t always get that product, or if you’re buying online you can only see it from a certain viewpoint or angle. If you can see it in AR, then you’re more likely to buy something and keep it rather than buy it and return it. Normally, you might see something online but can’t tell if it is black suede or leather; you have no idea. If you can do that virtually, then there’s less disparity and you know what you’re getting into.
You’re almost breaking down the information gap.
Exactly. It’s almost like the Internet has kind of blown up and got really far, but we’ve started to hit a bit of a wall where we’re trying to take this 3D world we live in and compress it into a flat, 2D space. There are times when that just doesn’t work, and I think that’s where VR and AR are already starting to show that you can surpass it.
If you’re creating something which genuinely serves a purpose and you’re using it as a tool, which you can’t natively just [use] on a touchscreen, then you’ve got something that isn’t a novelty.
What do you foresee as a struggle in the VR and AR industry?
I think the biggest problem is audience uptake. We are seeing all this stuff and it’s like, “Oh, this is amazing. Why is no one else getting on that? Why is no one using VR?” And the biggest problem was that there wasn’t enough content for the headsets, which were crazy expensive. So, you were only buying one if you were a hardcore gamer; therefore, people were only going to make gaming experiences.
Now you’re getting headsets like the Oculus Quest, which is just dirt cheap for what it can do. This starts opening up the window for anyone who wants to try VR, and as a result of that, you’re getting more content creators. It’s starting to snowball a bit — not as rapidly as I think it should — but it’s starting to get there. Now it’s a really exciting time where you’re getting people playing around with VR. Before, it felt like you were only getting people from the film and games industries who already had a preconceived notion of what this format should be. This is a new thing and it needs to be treated that way; it isn’t film and it isn’t games.
Why do you feel brands haven’t fully understood how to implement the technology into their infrastructure?
AR came out three or four years ago, and it wasn’t really where it should have been. It was a bit premature, and as a result, you’ve got loads of people that threw a bunch of money at it and got burned. So now, people are a bit cautious, but it’s really starting to pick up. With a lot of the big brands, I think they’re kind of waiting for someone else to make that first move. And in addition to that, they haven’t got the departments; I don’t think they see that there’s a huge need for it as of yet. It’s coming, but a lot of these big studios and brands don’t have an in-house team to create an experience. They kind of hit a wall.
How does M-XR look to create a product that goes past just being novel?
When the iPhone first came out, people were playing those games where you hold a beer glass and drink it, or you’ve got a ball and you’re navigating through a maze. I feel like that’s where we are right now with AR because it’s so new. It’s like, “What do you do with that?” I think you can categorize something as being really good if you can take a VR piece and it doesn’t translate to a 2D format; then that’s a great use of the medium because it only exists in that space — and I think that’s the same with AR. If you’re creating something which genuinely serves a purpose and you’re using it as a tool, which you can’t natively just use on a touchscreen, then you’ve got something that isn’t a novelty; it has an actual use and it is something that you’re going to keep coming back to, and not just for a bit of fun.
Why did you tap into sneaker culture when premiering your efforts?
One of the first things I ever scanned was a New Balance trainer. It was great because it was a really nice size: it wasn’t too big, it fit perfectly in the rig we built, and it had a whole bunch of materials, like rubber, leather and kind of shiny materials. Most of the high-end AR on the internet was sneakers, so it felt obvious that we should do a sneaker. So we reached out to Nike and they actually sent us the VaporMax. We were like, “Cool, let’s scan it.” Nike put some stuff out and we did the same. So we put that shoe online and it got quite a bit of traction.
Looking at CAD, I think the problem is it’s only ever going to be as good as the artist using it. So it’s as good as your eyes looking at a picture and trying to copy that picture, which can be pretty good, but you’re always going to make it too perfect. That’s when it falls into this thing called the uncanny valley, where you’re like, “Yeah, it doesn’t look quite right, and I don’t know why.” That’s why you have to capture it; you can’t guess. You have to measure the object as a whole and put that data back onto the object so that all those nuances and subtle scratches and scuffs are there. And that’s when your brain’s like, “Yeah, this is a real thing.”
Do you think what you do fosters a great deal of accessibility for more people to get involved in using this tech to create?
Oh, definitely! In all honesty, that’s the whole reason I started looking into this. I wanted to be able to create more content. Before, all my time was spent modeling, texturing and dealing with that stuff; I couldn’t actually build the experiences. If we can automate this, it’s going to allow small studios to focus on the creative side of things, like the experience.
‘Okay, how can I reduce the amount of waste? How am I using the digital world as a means to cut back on my resources in the physical world?’
How do you think fashion can benefit from further integrating itself with AR and VR?
If that AR experience is good, the hope is that you’re going to start reducing things like waste. I also think, for brands, it is going to open up a lot more freedom for customized clothes, whereby you can customize something and see it instantly. I think it’s going to allow people, especially smaller brands, to capture stuff and put it into an AR experience. They won’t need a storefront. Smaller brands will be able to use the internet as a digital store and not be bound to the small town they’re in. They’ll have a virtual store for the whole world, and people can come in and try on their clothing.
What could this do for sustainability, as well?
Obviously, the number of clothes that get bought and thrown away can hopefully be reduced. I think it can also be used for education. We’ve done pieces in the past where it’s helped teach people about plastic waste. If you can do that with clothing, you can show how these products can be a little bit more expensive but made with all these really high-quality materials, and where the materials come from. You can see the positive impact that these make as opposed to buying cheap clothes. People are becoming a lot wiser to it all. Like, “I bought this shirt. What’s going to happen afterward? Is it gonna just go in a landfill? Is it going to dissolve?” I think AR and VR can show and explain that. And especially with what BYBORRE is doing, picking the best quality materials, explaining why they’re picking those materials and sharing the whole production process digitally, it is almost as real and raw as possible. This should help curb the environmental impacts.
What was it like working with BYBORRE for this project?
It was quite exciting. When we first started speaking, we found out that he was very much focused on the R&D side of things, so that matched with our workflow and our mentality. It was quite fun seeing how he approached it compared to how we did. He sent us a massive cut of fabric and a bunch of parkas and sweaters. And it was a bit of an exciting challenge, because we were like, “How do we digitize this?” Normally, we would scan things as a whole, but here we wanted to have that kind of customizability.
So we started breaking it down into components. We first identified the different kinds of stitches within the fabrics we were scanning, and then we looked at the fabric and the patterns and how they came together. We created a computer program to make this all procedural. That meant BYBORRE could send us new designs and we wouldn’t have to scan them; we had all these elements and could generate those on the fly. In some of the videos, you see a few of those designs moving in real time. And his perspective, and why he’s using tech not as a tool to make more, made us think, “Okay, how can I reduce the amount of waste? How am I using the digital world as a means to cut back on my resources in the physical world?”
And what else did you learn from him?
The biggest thing was that I didn’t know anything about textiles; I kept tripping up on all my terms. It was interesting just seeing the different stages that they go through and how they test materials and products. They sent us this massive board of all the different tests they consider, like IM for stretch or weft. And then there are these scans of the human body where you could actually see where he was coming from. Their approach was quite close to ours, and then we thought about how we could recreate that in a synthetic, authentic way.
How did you decide upon the movement for the videos?
When we were initially looking through some of the decks we were sent, it became apparent that the reason they were creating these 3D stitches was to give the fabric a sense of rigidity. You could have areas which are stiff and areas which are soft within the same piece of fabric, without having to layer anything up. So motion became key. There was also another image their team created: it was sort of a day in the life of two characters, one going to work and one going to play sports, and it was all about how the fabric would adapt to that heat and motion. So we wanted to show that it was adapting, and we used video. In climate chambers, you can see clothing moving in a certain way in the wind, and we tried to copy that. We had our guys wear a coat and run around as a reference. In the video, you can see there’s Erosie, this rusty brown colored fabric, moving apart from the other materials because it’s soft. That’s kind of big: it’s keeping the character cool, and when he’s running you can see it flapping in the wind.
What’s the future?
Aside from getting this capture system completely automatic, it’s then: once you have this object, what do you do with it? As I mentioned, there’s not a huge infrastructure for AR right now, and I think that’s something that we’re going to have to start building with brands: understanding what the brand’s needs are, what the brand’s customers’ needs are and how we can slot AR into that. So this interactive piece that we’ve developed is sort of a preamble and a bit of R&D for us, because it’s allowed us to see how you interact with a digital fabric, how you customize it, how you see it in different lighting scenarios and whether you want to see it in different environments.
So that’s something we can then start exploring and developing a little bit more. Hopefully, we can then put this into studios like BYBORRE’s, where Borre can create new designs and instantly see how the fabric moves. In-store or online, the system will allow consumers to see an item before it’s been designed and made. And if they’re at a brick-and-mortar store, they can customize it on something like a holographic light field display, take pictures of it and then send it to their friends for an opinion, as though it were a real item. When they hit send or buy, whatever is created will reduce iterative waste and returns.
Check out M-XR’s work above and head to Samsung’s website to learn more about the Galaxy Fold.