Getting started with the Magic Leap One
I have some great clients who let me play around with tons of interesting new stuff like the Magic Leap One. It’s great because you get to take a look at the latest and greatest, but it can also suck pretty hard. It can suck because new stuff mostly does not come with instructions, or those instructions were a “TBD” on the backlog. Yeah yeah, it seems like I am just whingeing about nothing, but it’s easy to forget how frustrating a crash with no feedback is. This speaks volumes about the quality of SDKs being produced nowadays and, of course, about that wonder that is the internet.
Moving some of our demos to the Magic Leap One was a mixed bag. Getting up and running was easy peasy! Beautifully documented, with precise, easy-to-find instructions. We had the device, so no messing around with simulators.
A few things to note:
- To get a certificate for deploying to a device, you will need a DUNS number (Dun & Bradstreet). If you have ever published as a company on iOS, you should already have one. Good news: unlike the Apple App Store, approval took only a couple of seconds. I have been told you can get a temporary certificate from Magic Leap, but how and where I have no idea.
- The device takes a little bit of time to be found by the deploying computer. Before deploying your app, you will need to use the Magic Leap Device Bridge, mldb, to check for devices with the command “mldb devices” (yes, it’s like adb if you have deployed on Android, but different). If your device does not show, restart the device and restart your computer. Once everything has restarted, make sure the Magic Leap One is ready, i.e. you can open the main menu.
- Build and run works great from Unity. Once you can see your device and you have your certificates configured in the build and player settings menu, just hit the ol’ “Build and Run”.
So now we have pretty holograms (just a cube for us) running on the ML1 via Unity. Done, right? … Yeah, no. I made the mistake of thinking everything was downhill from there. Once we got into the weeds, the ’gators came out! (That reference is valid because… leave your guesses in the comments and I’ll let ya know.)
Problem 1: Where is my click?
There is no Unity Canvas and generalized input system in the current SDK. They give you two raycast samples that help, but it’s still collider-based physics. After mucking around with trying to figure out how to write a custom Unity Event Manager (and reading up on the new Unity Input System, which seems awesome), I just went back to making a custom solution. It’s still the best way to go for now, I think; it gives you full control over what gets triggered and how. To clarify, this is using the Magic Leap controller as a cursor.
This is done in three steps:
- Register all the canvases you want to interact with into a list/stack, ahead of time or as they become active. This saves you from traversing the scene for canvases every frame.
- On Update, iterate through all your canvases, creating a plane for each one. Then use “Plane.Raycast” to get the collision of the controller ray on the canvas. Once you have that point, you will need to transform it to screen space using “WorldToScreenPoint” on your camera. Now that you have a screen point, feed it into your GraphicRaycaster.Raycast as the position in a “PointerEventData” object.
- If there is no collision on a canvas, now do your “Physics.Raycast” to find what physical object you collided with. Remember to set a layer mask here to help with performance and to control what you would like to interact with.
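The three steps above can be sketched roughly like this. This is a minimal, illustrative version, not the exact code we shipped: the class name, the way the controller ray reaches `Cast`, and the camera/mask fields are all assumptions you would adapt to your own setup.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

// Sketch of the three-step approach above. Assumes something else feeds
// in the controller's ray (origin + direction) every frame.
public class ControllerCanvasRaycaster : MonoBehaviour
{
    public Camera eventCamera;      // camera used for the screen-space conversion
    public LayerMask physicsMask;   // limits the fallback Physics.Raycast

    private readonly List<Canvas> canvases = new List<Canvas>();
    private readonly List<RaycastResult> results = new List<RaycastResult>();

    // Step 1: canvases register themselves as they become active.
    public void Register(Canvas c)   { if (!canvases.Contains(c)) canvases.Add(c); }
    public void Unregister(Canvas c) { canvases.Remove(c); }

    public void Cast(Vector3 origin, Vector3 direction)
    {
        var ray = new Ray(origin, direction);

        // Step 2: intersect the ray with each canvas's plane, convert the
        // hit point to screen space, and hand it to the GraphicRaycaster.
        foreach (var canvas in canvases)
        {
            var plane = new Plane(-canvas.transform.forward, canvas.transform.position);
            if (!plane.Raycast(ray, out float distance)) continue;

            Vector3 worldHit = ray.GetPoint(distance);
            var pointerData = new PointerEventData(EventSystem.current)
            {
                position = eventCamera.WorldToScreenPoint(worldHit)
            };

            results.Clear();
            canvas.GetComponent<GraphicRaycaster>().Raycast(pointerData, results);
            if (results.Count > 0)
            {
                // Hit a UI element: trigger your hover/click handling here.
                return;
            }
        }

        // Step 3: no UI hit, so fall back to physics with a layer mask.
        if (Physics.Raycast(ray, out RaycastHit hit, 100f, physicsMask))
        {
            // hit.collider is the physical object under the controller ray.
        }
    }
}
```

The nice part of owning this loop yourself is that you decide the priority: UI always wins over world objects, and the layer mask keeps the physics fallback cheap.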
Problem 2: Shady characters!
I have been using TextMesh Pro with Font Awesome. It’s a really handy-dandy way to add resolution-independent icons to your app. The problem is the special characters. For some reason, the ML1 does not like the special characters assigned to a glyph; it crashed my app consistently! I banged my head against the wall a few times and then just went with sprite icons, and we were up and running. A piece of my crisp vector typography soul died, but hey, Flash is dead.
Problem 3: One Camera to rule them all
So if you have built anything for XR, you know that UI/HUD is an issue. You have to drop it in the world somehow and manage penetration with other world objects. As soon as you add another camera to render UI over the scene, you get a significant performance drop and potentially confuse the user’s sense of depth. The best thing to do is find the closest point to the camera where you can mount the UI such that the user can comfortably focus on it and the controller can still point at it. After that, cull your world objects so that if they get into this zone, they fade out.
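The fade-out culling could look something like the sketch below. It is only an illustration of the idea, assuming your world objects use a transparent-capable material whose color alpha you can drive; the component name and distances are made up.

```csharp
using UnityEngine;

// Illustrative sketch: fades a world object out as it enters the
// comfort zone reserved for the UI in front of the camera.
public class NearFade : MonoBehaviour
{
    public float uiDistance = 1.0f;  // where the UI is mounted (assumed value)
    public float fadeRange = 0.5f;   // fading starts this far beyond the UI plane

    private Renderer rend;

    void Awake() { rend = GetComponent<Renderer>(); }

    void Update()
    {
        float d = Vector3.Distance(Camera.main.transform.position, transform.position);
        // 1 when safely beyond the zone, 0 at the UI plane.
        float alpha = Mathf.InverseLerp(uiDistance, uiDistance + fadeRange, d);

        Color c = rend.material.color;  // assumes a material exposing a color/alpha
        c.a = alpha;
        rend.material.color = c;
    }
}
```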
Update: Magic Leap does actually support multiple cameras.
Problem 4: Reflection
Thinking that the Magic Leap One runs Android, I assumed reflection would not be an issue. It is! A clue is that you can only compile using IL2CPP, so really it’s an ahead-of-time (AOT) compilation model. We were using some pretty solid cryptography C# DLLs, and it did not like those. What’s even worse is that the error messages are pretty awful: basically, “something happened, sorry for you!” It was only through a long process of trial and error that we discovered it. We are still trying to find the exact offender, but since our app was running on iOS (another AOT platform), we assumed we were good. Yeah… no.
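One thing worth checking if you hit this: runtime code generation (System.Reflection.Emit) simply cannot work under AOT, but if the failure is instead the IL2CPP stripper removing types that are only reached via reflection, Unity’s standard escape hatch is a link.xml file in your Assets folder. The assembly name below is a placeholder, not our actual library:

```xml
<!-- Assets/link.xml: tell the IL2CPP/managed-code stripper to keep
     everything in this assembly (fullname is a placeholder). -->
<linker>
  <assembly fullname="My.Crypto.Library" preserve="all"/>
</linker>
```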
All in all, I like the Magic Leap One. It has gotten a lot of bad press, but it is a great piece of kit. The increased field of view is what the HoloLens was sorely lacking, but it also has hand tracking, marker tracking, and my favorite, eye tracking. Not just head tracking, eye tracking. This means you could make a HUD that’s locked to the user’s head, and where they look is what they select! You could also do some cool depth-of-field tricks. They have a way to go with stabilizing the hand tracking and the controller, but this is a prerelease device with an SDK version (as of this doc) of 0.17.0. I’m looking forward to all the progress the Magic Leap One will make and all the cool things that can be done with it.