
Intro to AR Foundation

Code and workflows to integrate AR Foundation into your desktop build



What you'll develop in the AR Foundation section

[Demo: Shooting down AR player from desktop]

[Demo: Getting shot down by desktop player]

Functionalities included

  • Setting up AR Foundation + ARKit plug-in

  • Adding AR Foundation GameObjects to the game

    • AR Session Origin

    • AR Session

  • Dynamically checking whether the build is deployed to an AR platform and disabling AR functionality if not

    • Creating AR-specific systems that only run when AR is enabled

  • Grabbing the Pose Driver value and providing it to ECS to move the player

    • Updating input response systems for updated movement controls

  • Pulling the spawn position from ECS data and updating the Pose Driver so the AR Camera moves behind the player

    • Using ARSessionOrigin.MakeContentAppearAt() to update the origin of the AR session based on gameplay (see the sketch after this list)

  • Dynamically updating the UI with AR instructions when deployed to an AR platform

  • Updating PanelSettings to be responsive to both desktop and mobile platforms
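
As noted in the list above, the camera-recentering step relies on ARSessionOrigin.MakeContentAppearAt(). Below is a rough sketch of that call, assuming the session origin and the content root are assigned in the Inspector and the spawn position has already been read from ECS; the class and field names are illustrative, not the tutorial's exact ones.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class RecenterContent : MonoBehaviour
{
    [SerializeField] ARSessionOrigin sessionOrigin; // the AR Session Origin in the scene
    [SerializeField] Transform content;             // root transform of the virtual game content

    // Call this with the player's spawn position pulled from ECS.
    public void CenterOnSpawn(Vector3 spawnPosition)
    {
        // Repositions the session origin so that `content` appears at
        // spawnPosition relative to the device, rather than moving the
        // content (or the physical camera) itself.
        sessionOrigin.MakeContentAppearAt(content, spawnPosition);
    }
}
```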

A macOS development machine is required to deploy to iOS

Unity cannot deploy an iOS app directly. Instead, Unity exports an Xcode project that is then compiled by Xcode (Apple's integrated development environment for macOS, iOS, etc.).

A little bit about Augmented Reality (AR)

How AR technology works

Rather than publishing yet another explanation of AR on the internet ourselves (there are probably enough of those), we think it's better to direct you to what is arguably the best explanation of how AR works on commodity hardware. Matt Miesnieks wrote a post on Medium during his time as the Founder of 6d.ai (a company acquired by Niantic) to describe Apple's then-new release of ARKit. If you are interested in building AR applications and want to learn more about the technology, please read the full blog post: "Why is ARKit better than the alternatives?" (https://medium.com/6d-ai/why-is-arkit-better-than-the-alternatives-af8871889d6a)

What technology is ARKit built on?

"Technically ARKit is a Visual Inertial Odometry (VIO) system, with some simple 2D plane detection. VIO means that the software tracks your position in space (your 6dof pose) in real-time i.e. your pose is recalculated in-between every frame refresh on your display, about 30 or more times a second. These calculations are done twice, in parallel. Your pose is tracked via the Visual (camera) system, by matching a point in the real world to a pixel on the camera sensor each frame. Your pose is also tracked by the Inertial system (your accelerometer & gyroscope — together referred to as the Inertial Measurement Unit or IMU). The output of both of those systems are then combined via a Kalman Filter which determines which of the two systems is providing the best estimate of your “real” position (referred to as Ground Truth) and publishes that pose update via the ARKit SDK. Just like your odometer in your car tracks the distance the car has traveled, the VIO system tracks the distance that your iPhone has traveled in 6D space. 6D means 3D of xyz motion (translation), plus 3D of pitch/yaw/roll (rotation).

The big advantage that VIO brings is that IMU readings are made about 1000 times a second and are based on acceleration (user motion). Dead Reckoning is used to measure device movement in between IMU readings. Dead Reckoning is pretty much a guess (!) just like if I asked you to take a step and guess how many inches that step was, you’d be using dead reckoning to estimate the distance. I’ll cover later how that guess is made highly accurate. Errors in the inertial system accumulate over time, so the more time between IMU frames or the longer the Inertial system goes without getting a “reset” from the Visual System the more the tracking will drift away from Ground Truth.

Visual / Optical measurements are made at the camera frame rate, so usually 30fps, and are based on distance (changes of the scene in between frames). Optical systems usually accumulate errors over distance (and time to a lesser extent), so the further you travel, the larger the error."

-From 6d.ai's "Why is ARKit better than the alternatives?"

[Image from Matt's blog post]

The core problem that must be solved in creating AR experiences is for the device to know exactly where it is in 6 degrees of freedom (translation + rotation). The device uses both visual measurements (camera) and inertial measurements (accelerometer/gyroscope) to figure out where it is in three-dimensional space.

There are some very interesting libraries for building web-based AR experiences, like AR.js, that can run in a phone browser, but these web libraries are limited.

The translation + rotation (6 degrees of freedom) of a device is also referred to as its "pose" in AR.

If you know where the device is (its pose), rendering a virtual object so that it appears anchored in physical space becomes much easier. Just as moving a Unity camera to the right in a 2D game causes the rendering engine to draw objects from a different perspective, moving an AR device through the real, physical world changes the perspective from which virtual content must be rendered.

AR Platforms

Now that we've learned a little more about how AR works (a highly accurate pose is computed from the device's sensor data), let's take a look at the platforms where you can build AR applications.

Currently, the leading players in the AR space are:

  • Apple's iOS (ARKit)

  • Google's Android (ARCore)

  • Microsoft's HoloLens (Mixed Reality Toolkit)

  • Magic Leap (Lumin)

ARKit, ARCore, Mixed Reality Toolkit, and Lumin each provide special APIs to their specific device hardware to help developers create AR experiences.

Many of these APIs are similar in that they expose the same kinds of AR data to the developer, such as the device's "pose". Each hardware manufacturer has its own version of the "pose" API.

Wouldn't it be nice if there were a single way to communicate with all of the leading AR platforms without having to build four different implementations? Introducing: Unity AR Foundation.

How Unity handles AR

Unity Mixed Reality ("XR") Tech Stack

We have been working to improve our multi-platform offering, enabling direct integrations through a unified plugin framework. The resulting tech stack consists of an API that exposes common functionalities across our supported platforms in a frictionless way for creators while enabling XR hardware and software providers to develop their own Unity plugins. This architecture offers the following benefits:

  • Multi-platform developer tools such as AR Foundation and the XR Interaction Toolkit

  • Faster partner updates from supported plugins via the Unity Package Manager

  • More platforms have access to an interface to leverage Unity’s XR rendering optimizations and developer tools

-From Unity's "XR Platform updates"

AR Foundation

Unity's AR Foundation is an API that sits on top of all the major hardware AR SDKs mentioned earlier.

When "pose" is requested from the AR Foundation layer during runtime, AR Foundation automatically translates that request to whatever appropriate implementation.

Unity does not implement any of these AR functionalities itself; it's just a translation layer. AR Foundation calls a platform-specific "plug-in" to get the necessary data from the hardware. So adding the "AR Foundation" package into our Unity package is not enough; we must also include additional specific packages for the AR platforms we will be targeting. In our case, for this section, that additional package will be Apple's ARKit package.

It is important to note that not all plug-ins are made equal. Not all AR Foundation functionalities are available across all plug-ins. For example, both ARKit and ARCore now provide access to a depth API, but the HoloLens does not provide this data.

| Functionality | ARCore | ARKit | Magic Leap | HoloLens |
| --- | --- | --- | --- | --- |
| Device tracking | ✓ | ✓ | ✓ | ✓ |
| Plane tracking | ✓ | ✓ | ✓ | |
| Point clouds | ✓ | ✓ | | |
| Anchors | ✓ | ✓ | ✓ | ✓ |
| Light estimation | ✓ | ✓ | | |
| Environment probes | ✓ | ✓ | | |
| Face tracking | ✓ | ✓ | | |
| 2D Image tracking | ✓ | ✓ | ✓ | |
| 3D Object tracking | | ✓ | | |
| Meshing | | ✓ | ✓ | ✓ |
| 2D & 3D body tracking | | ✓ | | |
| Collaborative participants | | ✓ | | |
| Human segmentation | | ✓ | | |
| Raycast | ✓ | ✓ | ✓ | |
| Pass-through video | ✓ | ✓ | | |
| Session management | ✓ | ✓ | ✓ | ✓ |
| Occlusion | ✓ | ✓ | | |

Our Approach

In MainScene, we will run a check to determine whether "we are an AR system." If we are, we will create an IsARPlayerComponent singleton.

We will then use RequireSingletonForUpdate<IsARPlayerComponent>() in our AR-specific systems so that they only run when that singleton exists.
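
A minimal sketch of that check, assuming it runs from a MonoBehaviour in MainScene and detects AR by looking for an ARSession in the scene; the tutorial may gate this differently (for example, by platform define), so treat the detection logic and the ARSystemCheck class name as illustrative.

```csharp
using Unity.Entities;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Tag component whose presence marks this build as "an AR system".
public struct IsARPlayerComponent : IComponentData { }

public class ARSystemCheck : MonoBehaviour
{
    void Start()
    {
        // If an ARSession exists in the scene, create the singleton entity
        // that our AR-specific systems will require for their updates.
        if (FindObjectOfType<ARSession>() != null)
        {
            var entityManager = World.DefaultGameObjectInjectionWorld.EntityManager;
            entityManager.CreateEntity(typeof(IsARPlayerComponent));
        }
    }
}
```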

We will create a new input system for AR called ARInputSystem that takes in screen taps and translates them into "shoot" commands. We will also update our PlayerCommand to take in the AR pose. So our ARInputSystem will send "shoot" data from screen taps and updated position data by grabbing the "pose" that ARKit provides.
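
A rough sketch of what ARInputSystem could look like under those assumptions (legacy Input API for taps, and the ARPlayerPoseComponent defined in the next code block); the actual PlayerCommand fields come from the earlier NetCode sections, so the final write is left as a comment.

```csharp
using Unity.Entities;
using UnityEngine;

public partial class ARInputSystem : SystemBase
{
    protected override void OnCreate()
    {
        // Only run this system on AR builds (see the singleton check above).
        RequireSingletonForUpdate<IsARPlayerComponent>();
    }

    protected override void OnUpdate()
    {
        if (!HasSingleton<ARPlayerPoseComponent>())
            return;

        // A screen tap this frame becomes a "shoot" command.
        bool shoot = Input.touchCount > 0 &&
                     Input.GetTouch(0).phase == TouchPhase.Began;

        // The device pose written by the MonoBehaviour bridge (next code block)
        // becomes the player's position/aim input.
        var arPose = GetSingleton<ARPlayerPoseComponent>();

        // ...write shoot + arPose into the PlayerCommand here, mirroring what
        // the desktop input system does...
    }
}
```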

AR Foundation is written using MonoBehaviours, so we will create and update an ARPlayerPoseComponent in an Update(). The MonoBehaviour will use the EntityManager to update our ARPlayerPoseComponent, and our ARInputSystem will pull this data to add it to our PlayerCommand.
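
A rough sketch of that bridge, assuming the AR camera (the one driven by the Pose Driver) is assigned in the Inspector; the component's fields and the ARPlayerPoseBridge class name are illustrative rather than the tutorial's exact definitions.

```csharp
using Unity.Entities;
using Unity.Mathematics;
using UnityEngine;

public struct ARPlayerPoseComponent : IComponentData
{
    public float3 Position;
    public quaternion Rotation;
}

public class ARPlayerPoseBridge : MonoBehaviour
{
    [SerializeField] Camera arCamera;   // the camera moved by the AR Pose Driver

    Entity poseEntity;
    EntityManager entityManager;

    void Start()
    {
        entityManager = World.DefaultGameObjectInjectionWorld.EntityManager;
        poseEntity = entityManager.CreateEntity(typeof(ARPlayerPoseComponent));
    }

    void Update()
    {
        // Copy the Pose Driver-controlled transform into ECS every frame so
        // ARInputSystem can read it when it builds the PlayerCommand.
        var t = arCamera.transform;
        entityManager.SetComponentData(poseEntity, new ARPlayerPoseComponent
        {
            Position = t.position,
            Rotation = t.rotation
        });
    }
}
```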

Unity resources for AR Foundation

To be best prepared for the code-alongs:

  • GitHub branch link: https://github.com/moetsi/Unity-DOTS-Multiplayer-XR-Sample/tree/UI-Updates

  • Unity documentation for AR Foundation 4.2.3: https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@4.2/manual/ (refer to this for more information)

  • Unity documentation for ARKit XR Plugin 4.1.3: https://docs.unity3d.com/Packages/com.unity.xr.arkit@4.2/manual/index.html (refer to this for more information)

  • Read through 6d.ai's blog post on ARKit: https://medium.com/6d-ai/why-is-arkit-better-than-the-alternatives-af8871889d6a

Join our Discord for more info
