
How Unity Is Building Its Future on AR, VR, and AI

PCMag sat down with Unity’s artificial intelligence chief and its head of augmented and virtual reality for an exclusive look at how the leading 3D development platform is evolving to stay on top of an increasingly competitive market.

July 18, 2018

Unity is the most widely used 3D development platform in the world. It powers 40 percent of the top 1,000 mobile games and more than half of all new mobile games, according to app analytics firm Apptopia. Together with Unreal Engine, it underpins the vast majority of today's gaming experiences.

However, the 3D development space is far more crowded than it once was, particularly when it comes to augmented and virtual reality development.

Unity serves as the building blocks for, or integrates with, most of the newer AR/VR platforms, including Apple's ARKit and Google's ARCore. But it's also now dealing with competition from the likes of Amazon Sumerian and other drag-and-drop interfaces looking to simplify the experience for less technical creators.

To stay ahead of the competition and evolve its platform for a growing ecosystem of new devices and 3D experiences, Unity is pushing a two-pronged strategy led by its AR/VR and AI divisions. PCMag spoke to Tony Parisi, Unity's Global Head of VR/AR Brand Solutions, and Danny Lange, Unity's VP of AI and Machine Learning, for an inside look at Unity's future and how the platform—and the games it creates—are getting smarter without you even realizing it.

Unity's View of the AR/VR Landscape

IDC forecasts that spending on AR/VR products and services will reach $27 billion this year, and the market is only growing from there. Parisi said Unity aims to support every platform in the space.

"Three or four years ago, before I joined, Unity started making significant investments in the emerging VR market—the Oculus Rift, HTC Vive, Samsung Gear VR, etc.—with the goal of creating a sort of full-featured system for developers so they don't have to muck around in the low-level details of how to support these platforms," said Parisi. "We've gotten to the point where you write most of your applications once, and you can port it to these devices."

This isn't universally true, he said, because an app you create for a mobile viewer headset will be designed a bit differently from an app optimized for a PC-based experience. But according to Unity's internal metrics, the platform currently powers 69 percent of Oculus Rift experiences, 74 percent of HTC Vive experiences, 87 percent of Gear VR experiences, and 91 percent of mixed reality (MR) experiences on Microsoft HoloLens.

Working With a Growing Ecosystem

Parisi has been working in AR/VR since the mid-1990s, when he started out in 3D visualization. He co-created the VRML file format and other specifications for 3D graphics on the web, and has founded and worked for several VR companies. He joined Unity in late 2016 to head up advertising, marketing, and strategy across AR, VR, MR, and the broader extended reality (XR) umbrella.

"[I joined] right after the Pokemon Go boom, which was sort of this simplistic AR more about location, going someplace and finding a Pokemon, plus a little bit of the camera. That's a factor that we continue to augment. It's about location as much as it is about the immersion," said Parisi.

On the AR front, Parisi talked about how Facebook and Snap are using their Camera Effects and Lens Studio developer platforms to evolve what you can do with smartphone cameras to map the environment around you. Bridging smartphone and headset-based AR are experiences like the Star Wars: Jedi Challenges game, which was also created with Unity and works with a smartphone combined with Lenovo's Mirage AR headset.

The next wave is phones with AR operating system support through Apple's ARKit and Google's ARCore, Parisi said. As with Amazon Sumerian, Unity partners with Apple and Google on creating AR content using its 3D tools. Unity also serves as the foundation for Google tools such as Tilt Brush and Blocks. Parisi envisions a shared augmented world that spans mobile operating systems.

"Apple and Google are both great partners. We have deep relationships with them to develop and support these experiences and XR content in through Unity's 3D tools," said Parisi. "A lot of the democratized creation tools that are not for coders or developers or professional designers are built in our engine. What's even better is that you can take Blocks models or Tilt Brush art and bring them into other Unity apps just like any other software that comes into Unity."

On the MR and VR side, the big device and software players are Oculus, HTC, and of course Microsoft and its Windows Mixed Reality ecosystem. Unity builds 3D apps for all of them, but Microsoft is blurring the lines a bit between mixed and virtual reality, Parisi said. HoloLens is a mixed reality device, but the Windows Mixed Reality headsets are VR.

"The industry is still coming to grips with what we should call all of this," said Parisi. "Depending on if you're designing content that mixes digital with the real world versus going into a completely immersive world, you have different challenges. In VR, the performance challenges are higher. You have to create a completely synthetic world. That means PC-based VR headsets rendered at 90 frames per second, blasting the performance. Mixed reality is less intensive, but it also has to adjust in real time to the entire environment it's processing."

The Evolution of 3D Content Creation

Unity is a democratized 3D creation tool for developing immersive content. It comes in a free version plus Plus and Pro tiers available for monthly subscription fees, and it's royalty-free, so what you build with Unity is yours. Unity currently supports building for around 30 different platforms, from smartphones and PC operating systems to a host of game consoles, smart TVs, and VR headsets.

Recently, Unity has been used to create a wide array of immersive content and experiences. These range from a Sundance Film Festival series called Spheres, which lets viewers explore a cosmic collision of two black holes in VR, to Disney's CocoVR game, which uses 360-degree projections transporting players into the Pixar film's Land of the Dead.

Parisi also talked about using Unity to build fun VR games like Beat Saber and augmented experiences including the MLB At Bat AR app, which lets baseball spectators point their smartphones at the game and see stats on the screen above the players.

Unity can build all these types of apps, but there are a lot of nuances to consider in design. PC-based experiences are more powerful, allowing for bigger 3D models in richer environments, and headsets like the Oculus Rift and HTC Vive also give you input controllers along with positional and room-scale tracking to consider. Parisi said you have to design for that kind of app differently than you would for a 3-degrees-of-freedom viewing experience on a headset like the Google Daydream View. It's not as easy as pushing a button, but Unity has worked to ensure that the 3D content you create can move from one device to another without too much re-coding.

Beyond gaming and entertainment applications for AR/VR experiences, one of the questions Unity has grappled with lately is how to improve its 3D creation environment to cater to different industries, skill levels, and use cases. As with low-code development tools aiming to satisfy both serious coders and non-technical users, Unity is figuring out ways to make its interface, asset store, and the rest of its platform work for different types of companies and users to expand its customer base.

"We looked at these different industries where AR and VR are springing up. It could be automotive, film, architecture, medical, or other organizations where you're distributing software to hundreds of thousands of seats," said Parisi. "It's completely different worlds, different enterprises, different backgrounds and production tools. We were initially focused on the mobile and games industries because that's where the growth was, but now the wheel has spun and the timing is right to take that playbook and bring it to some of these other industries now because the world is going B2B."

Current AR/VR Limitations

At the moment, the biggest obstacle to more widespread MR and VR adoption is not a lack of content, but hardware limitations when it comes to comfort and portability versus computing power, Parisi said. The fraught journey of the forthcoming Magic Leap headset serves as a textbook case study in trying to shrink a powerful-enough processor into a small-enough form factor for mass consumption.

"It's challenging to create a mixed reality experience that works walking around your living room versus out navigating open streets," said Parisi. "We're at a place where we're still going through iterations on the hardware to find the sweet spot between computing power and portability. With a VR headset, you can move around until you feel that snag of the wire. We need another way to transmit that data. That wire is a pain; it breaks immersion. I was somewhere else for a minute, but then I got something tangled around my foot."

On the AR side, the limitations are different; it's much more about computer vision and processing power for real-time 3D graphics. When it comes to computer vision and using different types of machine learning (ML) to fill in the gaps and make AR/VR experiences smarter and more seamless, that's where Unity's growing AI department comes in.

Unity's AI Strategy

Unity is using AI in a host of different ways, from improving gameplay and engine design to tracking user behavior and changing the way game developers can monetize their apps. The AI team, which counts more than 100 employees worldwide, is run by Danny Lange, Unity's VP of AI and Machine Learning.

Lange joined Unity about 18 months ago after stints as Head of Machine Learning at Uber and General Manager of Amazon Machine Learning. He's also worked at IBM and Microsoft. He came to the gaming world with a different perspective on ML, and has helped turn Unity's traditional ML efforts into more ambitious projects.

"Businesses like Amazon and Uber are so heavily machine-learning driven. Coming from a world of self-driving cars into this 3D-gaming environment, you want to look for the perfect place to push the limit of artificial intelligence," said Lange.

"When I came over, there were a number of machine-learning efforts underway where I brought in experience with reinforcement learning and dynamic systems where you basically improve behavior," Lange continued. "This is something we did a lot both at Amazon and Uber. Whether you're sending books or you are optimizing for a frictionless pickup for Uber, all that stuff is not really people sitting and designing it, it's computer systems learning where to tell you where to meet your Uber driver. And when I came to Unity I saw a huge opportunity in bringing those ideas along into the gaming world."

As Lange explained, Unity's ML projects span everything from gameplay to monetization. In one instance, the team looks for clusters of users who have certain spending patterns that Unity developers want to monetize through advertising or in-app purchases. Unity then surfaces those results to developers for more effective long-term engagement.

The company also applies ML to improve gameplay and engine design, and to help with the content-authoring process. Unity's overarching strategy essentially breaks down into service-level ML on the development side and more advanced deep-learning research on the academic side. Lange said this also crosses into what Parisi's department is doing with XR development, where Unity gives researchers a 3D graphical environment in which to test new ML algorithms.

"AR and VR are actually fantastic domains for machine learning and AI. I often think about AI as improved reality, and reality is hard to deal with," said Lange. "It's tough to come up with fixed algorithms that understand everything in a room and can overlay that room. You can't really code that. You have to use machine learning and AI to put virtual objects in a room that recognize that a table is a surface, and if you put it on the edge of a table it will fall. Those are areas where machine learning plays a very important role because of this dynamic real-world behavior; understanding the depth and dimensions in virtual spaces. We're aiming to bring out the leading edge of deep learning to revolutionize the way these games are created and behave."

Machine Learning Behind the Scenes

On the game development and gameplay side, Lange pulled back the curtain into how Unity is weaving ML algorithms into the experience to automate the creation and iteration process.

"We capture a lot of behavioral data: when a game starts—how long you played, which scenes you go through in that game," said Lange. "In that sense, we're using machine learning and data analytics in a similar context to what Amazon or Google do—gathering behavioral data and feeding it back to the game developer. It's sort of the equivalent of web analytics, giving you actionable data to use right away to see what levels players are getting stuck on, what scenes don't work, etc."

This data can be used to optimize for factors like in-app purchases and advertising, but Lange said the algorithms help find a balance between showing users content they're interested in and monetizing games to the extreme. On a macro level, he explained that it's more about creating a self-perpetuating loop of behavioral data to help games evolve organically, taking manual developer work out of the equation.

"Machine learning is where you're able to take data and start to make connections," said Lange. "When you put that into a loop, like what you often will see in a game, the data leads to predictions by the game and the players. That creates more behavioral data, and now you actually have a system that can interact with and learn from users and how they interact with each other. One of our big endeavors is to use AI technology to create more organic games that evolve with usage."

Machine Learning Agents

The most important way Unity is doing this is through ML Agents, an open-source beta initiative that turns games and 3D simulations into training grounds for autonomous intelligent agents. Essentially, Unity lets developers deploy these flexible ML agents in any scenario and they'll act like a sponge: learning and evolving in a custom way based on whatever virtual environment you drop them into.

"My definition of machine learning versus AI is that with machine learning you gather data, train the system, and that's it," Lange explained. "AI is when the system retrains itself constantly and becomes better and better. We want games to be able to evolve, and one of our public initiatives to promote that is Machine Learning Agents."

Unity has several target audiences for its ML Agents. One scenario is for developers, allowing automated characters to move around and interact with players. Instead of having their actions coded, ML Agents learn through reinforcement as they simulate different levels, and they help developers rapidly test games by acting as virtual players running through thousands of game levels in parallel.
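Unity's actual ML-Agents toolkit drives learning from inside the engine, but the underlying idea, an agent that discovers good behavior through trial-and-error reward rather than hand-coded actions, can be illustrated with a tiny tabular Q-learning sketch. Nothing here uses Unity APIs; the corridor environment and all names are invented for the example.

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, n=5, seed=0):
    """Tabular Q-learning on a toy corridor: states 0..n-1,
    start at 0, reward 1.0 for reaching the rightmost state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n)]  # q[state][action]; action 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(1000):  # step cap per episode
            if s == n - 1:
                break
            # epsilon-greedy: mostly exploit the current estimates, sometimes explore
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = 1.0 if s2 == n - 1 else 0.0
            # reinforcement update: nudge the estimate toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(4)]  # greedy action per state
```

No one tells the agent that "right" is correct; after enough episodes the learned policy heads toward the goal purely because that is where the reward was found, which is the same principle ML Agents apply to characters and virtual playtesters at much larger scale.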

Another use case for ML Agents centers around narrative. Lange said Unity is testing the agents by trying to figure out what a player will do next and then follow them move-by-move. The idea is to understand what human players are doing and how the game will offset those decisions in the larger scheme of the narrative. In games where millions of players are active, ML Agents can learn and tweak a game's levels or storylines on a massive scale.

In the "Goodboy" simulation above, Lange's team built a simple mobile game using a machine-learning model from Unity's ML-Agents toolkit. In the game, the little corgi fetches a stick with cute movements, all without being hard-coded. Instead, his actions and behaviors are controlled by ML Agents. Unity plans to work with platform partners to expand ML Agents to all Unity's supported platforms.

"If you think about that kind of capability inside a game at what we call the narrative level, it's not the characters moving around in the game that's necessarily controlled but the game narrative itself. So you basically have the game trying to play you into some very exciting roads. So you can have a choice of going left or right in a path and the game will basically orchestrate a number of events based on what it's predicting that you're gonna do, five, 10, 15 moves from now," said Lange.

"You can imagine how that might work in a multiplayer game," he continued. "One of the classical examples: early on in a game, two individuals meet up. One robs all the gold from another player, and then they separate with bad blood between them. So the game would then ensure a narrative where each player ends up together in a place where to survive they will have to work together. They need to cross a bridge, and they can only do it together otherwise they are out of the game. It would be very difficult to hard code your way through that with extensive simulations, but with ML Agents the game can dynamically create those kinds of simulations."

Building AI-Assisted Virtual Worlds

The "Pyramids" demo above is an environment showing off the findings of a reinforcement learning project called Curiosity, where ML Agents quickly explore a world to discover the hidden rewards on the map.

Another side of Unity's AI operations involves using ML to create more immersive scenes and textures when generating 3D content. Lange said this is a newer but very promising field where autonomous systems within a game can generate motion-controlled content and fill in natural movements, learning how a character, human, or animal moves and then mimicking that animation in a game.

"We have thousands of developers testing this out," said Lange. "On the academic side, we've started seeing a lot of NASA students and PhDs at MIT and the Paul Allen Institute in Seattle releasing stuff on Unity. I just met with developers in London looking at this for [non-player character] NPC development who are really pushing the limit on graphical performance with iPhones and Android devices."

Unity also has an engine called Extreme AI for mapping personalities to characters, similar to how Amazon Sumerian builds AI-infused "hosts." Over the past year or two, Unity has begun experimenting with it to simulate non-player characters more naturally, Lange said.

"So if you want to build a robot or a self-driving car or design a house, you can do it in Unity and populate that house with NPCs," said Lange. "You can simulate 1,000 families living in that house and gather information on how the characters move around. Do the doors open the right way? Is there enough light in the rooms? If you do this in the cloud, you could have 1,000 different houses with 1,000 different families. This might seem like going way outside of gaming itself, but underlying all of this is gaming technology."

The Future of Immersive Apps

As the company's AR/VR and artificial intelligence ambitions expand, Unity is looking beyond gaming for a new generation of 3D apps. One example is the automotive industry, for which Unity recently spun up a dedicated team to help create AR/VR content for customers including Audi, Toyota, Lexus, and Volkswagen. Parisi said Unity is looking to apply the power of its cross-platform developer ecosystem to bring AR/VR app creation to new industries.

"We're changing how you design cars, make movies; how you do all these things as a company that knows how to sell to game developers," said Parisi. "As an example, let's say Ford wants to create an app in their innovation lab. They have high-end hardware and software, and then the Rift comes out and they decide to just do it on a gaming PC. They put out an ad and odds are, somebody in the Detroit area is a Unity programmer. That one person starts prototyping, it turns into a three-person innovation team, and then they start developing new ways to do car design to replace physical prototypes."

Parisi also sees a lot of potential for reducing friction when it comes to AR and e-commerce. The next big inflection point is the World Wide Web Consortium's (W3C) ratification of WebXR, a new standard that will let AR and VR experiences run as web apps directly in desktop and mobile browsers.

Imagine seeing an ad for a new kitchen appliance in your social feed, and then dragging that 3D model into a mixed reality environment linked with your camera to see how it looks in your kitchen. For that kind of 3D advertising tech to work on a mass scale, Parisi said the web experience needs to be seamless. If you have to install an app to view every 3D object tagged with virtual information just to connect it to your camera, the model doesn't work. Unity sees itself, along with standards like WebXR, as a tool that can bridge those compatibility gaps.

Parisi envisions a future where the form factor for AR/VR experiences is a self-contained entertainment device, be it an in-home experience, a location-based app, or an enterprise simulation for training. He also said the user interface needs to become completely immersive. The technology isn't there yet, but he doesn't believe it's as far off as some might think.

"Some people think it'll be decades before we can get to a really good immersive headset or glasses with enough computing power," said Parisi. "When you consider all the miraculous breakthroughs in miniaturization [from] all these compute aspects—CPU, GPU, 5G networking—in a few years we might be able to move some of that processing out to the edge or up to the cloud. The form factor could be anything, but the common element is definitely an immersive user interface where you can hit a button and experience fully realized digital characters or layered environments blending the digital and real worlds."


About Rob Marvin

Associate Features Editor

Rob Marvin is PCMag's Associate Features Editor. He writes features, news, and trend stories on all manner of emerging technologies. Beats include: startups, business and venture capital, blockchain and cryptocurrencies, AI, augmented and virtual reality, IoT and automation, legal cannabis tech, social media, streaming, security, mobile commerce, M&A, and entertainment. Rob was previously Assistant Editor and Associate Editor in PCMag's Business section. Prior to that, he served as an editor at SD Times. He graduated from Syracuse University's S.I. Newhouse School of Public Communications. You can also find his business and tech coverage on Entrepreneur and Fox Business. Rob is also an unabashed nerd who does occasional entertainment writing for Geek.com on movies, TV, and culture. Once a year you can find him on a couch with friends marathoning The Lord of the Rings trilogy--extended editions. Follow Rob on Twitter at @rjmarvin1.
