Developing for Apple visionOS

YLabZ

You already know how to build for the visionOS platform!

visionOS brings familiar frameworks and brand new concepts together so you can build an entirely new universe of apps designed for spatial computing. — Apple

  • Swift
  • Swift structured concurrency
  • SwiftUI
  • SwiftData
  • RealityKit (UI in RealityView)
  • ARKit
  • Reality Composer Pro (MaterialX)
  • Xcode

Tech Stack

Here we review the basic tech stack used to build visionOS apps.

We also advise against using UIKit. Although it can be used via UIHostingController, it is better to avoid it if possible.

Xcode with a preview showing a 3D model

First App

Steps to build your very first, very basic visionOS app.

In 2019 HTC set up a VR room and we climbed K2. We have been waiting for something like Apple visionOS ever since …

AR/VR Design Studio @ ZoeWare.com

Please see all of these in the related Medium articles:

  • Developing for Apple VisionOS
  • Explore MaterialX in Reality Composer Pro
  • Reality Composer Pro — Working in Xcode
  • Diorama visionOS App Review

Full visionOS reading list on Medium.com

Apple Vision Pro - visionOS Development

If you are wondering … is it worth building for?

People cry when seeing the demo

Everyone had a look of confusion and excitement.

Get started with visionOS.

Basic overview of how to build for visionOS.

A good video on visionOS

AR Models

Video on Reality Composer (NOT Pro)

AR models here:

Example sources of AR models
  • Blender
  • Maya
  • Online stores like Sketchfab, or photogrammetry software.

We can also use RealityScan (the ratings could be better).

https://apps.apple.com/us/app/realityscan-3d-scanning-app/id1584832280

Apple Hello World Example

Apple Example Hello World.

Download the code here.

Make sure to use the latest version of Xcode (15 beta 2 or later).

Apple resources for learning visionOS:

visionOS brings familiar frameworks and brand new concepts together so you can build an entirely new universe of apps designed for spatial computing.

SwiftUI, RealityKit, and ARKit for developing for visionOS.

And please do not use SceneKit for visionOS!

SceneKit on visionOS renders only into 2D views.

Apple Sample Code for visionOS:

→ Below is every section, with a bullet list of videos and their key topics:

Meet spatial computing

Learn the fundamentals that make up spatial computing — windows, volumes, and spaces — to build engaging and immersive experiences. Design with depth, scale, and immersion for visionOS, and explore Xcode and the new Reality Composer Pro.

  • Get started with building apps for spatial computing.

Discover — windows, volumes, and spaces.

  • Principles of spatial design.

Design with depth, scale, windows, and immersion to build human-centered experiences.

  • Create accessible spatial experiences.

visionOS is designed for accessibility, with VoiceOver, Pointer Control, and features like Dwell Control.

  • Develop your first immersive app.

Use Xcode Previews for your SwiftUI development, and take advantage of RealityKit and RealityView to render 3D content.

  • Meet SwiftUI for spatial computing.

Build an entirely new universe of apps with windows, volumes, and spaces; add 3D content; and create fully immersive experiences.

Explore SwiftUI and RealityKit

Focuses on the SwiftUI scene types: windows, volumes, and spaces (ImmersiveSpace is a new SwiftUI scene type), plus the Model3D API and rendering 3D content with RealityView.

  • Elevate your windowed app for spatial computing.

Bring your multiplatform SwiftUI app to visionOS and the Shared Space. Update custom views, improve your app’s UI, and add features and controls specific to this platform.

  • Take SwiftUI to the next dimension.

Bring three-dimensional objects to your app using volumes, get to know the Model3D API, use UI attachments in RealityView and add support for gestures.

  • Go beyond the window with SwiftUI

Create a new scene with ImmersiveSpace, place 3D content, and integrate RealityView. Explore how you can use the immersionStyle scene modifier, add virtual hands with ARKit and support for SharePlay.

  • Enhance your spatial computing app with RealityKit

Discover how SwiftUI scenes work in tandem with RealityView and embed your content into an entity hierarchy. Blend virtual content and the real world using anchors, bring particle effects into your apps, add video content, and create more immersive experiences with portals.

  • Build spatial experiences with RealityKit

Discover RealityKit entities, components, and systems; add 3D models and effects with the RealityView API; add 3D objects to windows, volumes, and spaces; and combine RealityKit with spatial input, animation, and spatial audio.

Rediscover ARKit (visionOS)

ARKit algorithms to handle features like persistence, world mapping, segmentation, matting, and environment lighting. Take advantage of blending virtual 3D content and hand tracking that interacts with the real world.

  • Meet ARKit for spatial computing

Discover ARKit’s tracking and scene understanding features to develop hand tracking and scene geometry in your apps.

  • Evolve your ARKit app for spatial experiences

ARKit and RealityKit have evolved for spatial computing; learn what the API changes mean for your existing iPadOS and iOS ARKit apps.

Design for visionOS

Design great apps, games, and experiences for spatial computing

→ Spatial UI/UX → Coming Soon

Explore developer tools for visionOS

Develop for visionOS with Xcode. Prototype in Xcode Previews, and import content from Reality Composer Pro (MaterialX). Use the visionOS simulator to evaluate lighting conditions, collisions, occlusions, and scene understanding for your spatial content, and optimize that content for performance and efficiency.

Develop with Xcode

Prototype in Xcode Previews, import content from Reality Composer Pro, use the visionOS simulator and create visualizations to explore collisions, occlusions, and scene understanding for your spatial content.

  • What’s new in Xcode 15

Enhancements to code completion, Xcode Previews, test navigator, test report, improved navigation, source control management, and debugging.

  • Develop your first immersive app

Immersive apps for visionOS using Xcode and Reality Composer Pro. Use Xcode Previews for your SwiftUI development, and take advantage of RealityKit and RealityView to render 3D content.

  • Meet RealityKit Trace

RealityKit Trace to improve the performance of your spatial computing apps.

  • Explore rendering for spatial computing

RealityKit rendering improvements: customize lighting, add grounding shadows, use rasterization rate maps and dynamic content scaling, and control tone mapping for your content.

  • Optimize app power and performance for spatial computing

Optimize your visionOS apps for performance and efficiency, with tools and strategies to test and tune them.

  • Meet Core Location for spatial computing

Core Location helps your app find its place in the world — literally.

Meet Reality Composer Pro

Preview and prepare 3D content for your visionOS apps. Reality Composer Pro leverages the power of USD to help you compose, edit and preview assets, such as 3D models, materials, and sounds through the latest updates to Universal Scene Description (USD) on Apple platforms.

  • Meet Reality Composer Pro

Easily compose, edit, and preview 3D content with Reality Composer Pro: composing scenes, adding particle emitters and spatial audio, and even previewing content on device.

  • Explore materials in Reality Composer Pro

Reality Composer Pro can help you alter the appearance of your 3D objects using RealityKit materials: MaterialX and physically based (PBR) shaders, plus a shader graph editor with custom inputs.

  • Work with Reality Composer Pro content in Xcode

Shows how to load Reality Composer Pro 3D scenes into Xcode, integrate your content with your code, and add interactivity to your app.

  • Explore the USD ecosystem

Universal Scene Description (USD) on Apple platforms and 3D for visionOS; explore MaterialX shaders and color management.

You do not need to learn Metal, Unity, C++, Rust, etc., until you need them. You can do a ton without them, and every year you will need them less and less as SwiftUI provides many of the same features.

We are currently building on visionOS. Please watch our site Zoeware.com for more info about our visionOS development.

Our first visionOS app will dynamically update a meditation environment by reading your Apple Watch health data to produce biofeedback.

Tools

Just a quick note on tools. Everyone already knows how to build with these!

All-new platform. Familiar frameworks and tools. — Apple

System Mechanics:

Here is how everything fits together.

SwiftUI can add a 3D model with Model3D, or use RealityKit via RealityView (plus gestures and attachments) and ARKit. USDZ is the file format, and MaterialX is used to texture the objects.

You build everything in Xcode and Reality Composer Pro.
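As a minimal sketch of how those pieces meet (the "Robot" USDZ name is just a placeholder for an asset in your app bundle):

import SwiftUI
import RealityKit

// Minimal sketch: show a bundled USDZ model in a plain SwiftUI window
// using the Model3D view ("Robot" is a placeholder asset name).
struct RobotWindow: View {
    var body: some View {
        Model3D(named: "Robot") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}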

SwiftUI

Get ready to launch into space — a new SwiftUI scene type that can help you make great immersive experiences for visionOS. We’ll show you how to create a new scene with ImmersiveSpace, place 3D content, and integrate RealityView.

Explore how you can use the immersionStyle scene modifier to increase the level of immersion in an app and learn best practices for managing spaces, adding virtual hands with ARKit, adding support for SharePlay, and building an “out of this world” experience! — Apple Docs

visionOS is built with SwiftUI. All the tools and frameworks work best with SwiftUI, and SwiftUI features work as expected in visionOS.

  • Scenes for spatial computing use WindowGroup (rendered as 2D windows with 3D controls).
  • TabView and NavigationSplitViews are available.
  • Add .windowStyle(.volumetric) to get a volume.
  • Fill a volume with a static model using Model3D. For dynamic, interactive models with lighting effects and more, use the new RealityView.
  • ImmersiveSpace is a full space, and only one can be open at a time (a combined sketch of the three scene types follows this list).
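A compact, hedged sketch of the three scene types side by side ("Globe" is a placeholder asset name):

import SwiftUI
import RealityKit

@main
struct SpatialApp: App {
    var body: some Scene {
        // A standard window in the Shared Space.
        WindowGroup(id: "main") {
            Text("Hello, spatial world")
        }

        // A bounded volume for 3D content.
        WindowGroup(id: "globe") {
            Model3D(named: "Globe")
        }
        .windowStyle(.volumetric)

        // A full space; only one can be open at a time.
        ImmersiveSpace(id: "space") {
            RealityView { content in
                // Add RealityKit entities here.
            }
        }
    }
}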

We have three new SwiftUI Scenes:

Window

Think of this as your standard SwiftUI window but instead of clicking on it you just look at it and tap your fingers together.

TabView — Reacts to eyes. TabView — Navigation — Ornaments

SwiftUI TabView in visionOS

TabView {
    DogsTab()
        .tabItem {
            Label("Dogs", systemImage: "pawprint")
        }
    CatsTab()
        .tabItem {
            Label("Cats", image: "cat")
        }
    BirdsTab()
        .tabItem {
            Label("Birds", systemImage: "bird")
        }
}
On the left, the TabView tabs appear as an ornament.

Code the TabView for the "Experience" and "Library" tabs:

@main
struct WorldApp: App {
    var body: some Scene {
        WindowGroup("Hello, world") {
            TabView {
                Modules()
                    .tag(Tabs.menu)
                    .tabItem {
                        Label("Experience", systemImage: "globe.americas")
                    }
                FunFactsTab()
                    .tag(Tabs.library)
                    .tabItem {
                        Label("Library", systemImage: "book")
                    }
            }
        }
    }
}

Create your own ornament using the .ornament modifier.

Because these controls live outside the window, they are called ornaments.

Add ornaments to your SwiftUI window

Ornaments allow you to add accessory views relative to your app’s window. They can even extend outside the window’s bounds.

Add it to a ToolbarItem
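A hedged sketch of both approaches: the .ornament modifier and a toolbar item with the .bottomOrnament placement (the window content and button actions are placeholders):

import SwiftUI

struct PlayerWindow: View {
    var body: some View {
        Text("Now Playing")
            // Attach an accessory view as an ornament anchored to the
            // bottom edge of the window scene.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Play") { }
                    Button("Pause") { }
                }
                .padding()
                .glassBackgroundEffect()
            }
            // Toolbar items with the bottomOrnament placement also
            // render as an ornament on visionOS.
            .toolbar {
                ToolbarItem(placement: .bottomOrnament) {
                    Button("Share") { }
                }
            }
    }
}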

StatsGrid

VStack(alignment: .leading, spacing: 12) {
Text("Stats")
.font(.title)

StatsGrid(stats: stats)
.padding()
.background(.regularMaterial, in: .rect(cornerRadius: 12))
}
Button(action: {
// perform button action
}) {
VStack(alignment: .leading, spacing: 12) {
Text(fact.title)
.font(.title2)
.lineLimit(2)
Text(fact.details)
.font(.body)
.lineLimit(4)
Text("Learn more")
.font(.caption)
.foregroundStyle(.secondary)
}
.frame(width: 180, alignment: .leading)
}
.buttonStyle(.funFact)

Hover effects

  • Critical to making your app responsive.
  • Run outside of your app’s process.
  • Added automatically to most controls.
  • If you’re using a custom control style, make sure to add hover effects to make them responsive and easy to use.

The eyes cause the hover effect to activate.

The Box on the right is pressed down and lighter to show it is being looked at
struct FunFactButtonStyle: ButtonStyle {
func makeBody(configuration: Configuration) -> some View {
configuration.label
.padding()
.background(.regularMaterial, in: .rect(cornerRadius: 12))
.hoverEffect()
.scaleEffect(configuration.isPressed ? 0.95 : 1)
}
}

SwiftUI Gestures

SwiftUI gestures have been expanded for visionOS; a minimal sketch follows the input-method list below.

SwiftUI takes care of everything

Gestures

Interaction

Full immersion input methods:
  • Eyes: look at an element and use an indirect pinch gesture.
  • Hands: reach out and gesture directly on apps.
  • Pointer: trackpad, hand gesture, or hardware keyboard.
  • Accessibility: e.g. VoiceOver, Switch Control.
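As mentioned above, the system maps eyes-plus-pinch, direct touch, and pointer input onto the ordinary SwiftUI gesture system, so standard gestures just work. A minimal sketch ("Globe" is a placeholder asset name):

import SwiftUI
import RealityKit

struct TappableGlobe: View {
    @State private var scale: CGFloat = 1.0

    var body: some View {
        Model3D(named: "Globe") { model in
            model.resizable().scaledToFit()
        } placeholder: {
            ProgressView()
        }
        .scaleEffect(scale)
        // Looking at the model and pinching (or tapping with a pointer)
        // drives this ordinary tap gesture.
        .onTapGesture {
            withAnimation(.bouncy) {
                scale = scale == 1.0 ? 1.3 : 1.0
            }
        }
    }
}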

Volume

Model3D is just another SwiftUI View to show 3D content.

Build an ImmersiveSpace

@main
struct WorldApp: App {
    var body: some Scene {
        ImmersiveSpace {
            SolarSystem()
        }
    }
}

Make sure you use Model3D phases when loading and changing state.

Model3D(named: "Earth") { phase in
    switch phase {
    case .empty:
        Text("Waiting")
    case .failure(let error):
        Text("Error \(error.localizedDescription)")
    case .success(let model):
        model.resizable()
    }
}

Again …

struct MoonView: View {
    var body: some View {
        Model3D(named: "Moon") { phase in
            switch phase {
            case .empty:
                ProgressView()
            case let .failure(error):
                Text(error.localizedDescription)
            case let .success(model):
                model
                    .resizable()
                    .scaledToFit()
            }
        }
    }
}

With RealityKit

import SwiftUI
import RealityKit

struct GlobeModule: View {
    var body: some View {
        Model3D(named: "Globe") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}

struct Globe: View {
    @State var rotation = Angle.zero

    var body: some View {
        ZStack(alignment: .bottom) {
            Model3D(named: "Earth")
                .rotation3DEffect(rotation, axis: .y)
                .onTapGesture {
                    withAnimation(.bouncy) {
                        rotation.degrees += randomRotation()
                    }
                }
                .padding3D(.front, 200)

            GlobeControls()
                .glassBackgroundEffect(in: .capsule)
        }
    }

    func randomRotation() -> Double {
        Double.random(in: 360...720)
    }
}

Two ways to add Models — Model3D & RealityView(see below)

Model3D SwiftUI Scenes rendered by RealityKit

  • Model3D — Load models — rendered by RealityKit
  • Interaction are provided by SwiftUI (regular set of functions)

Load a Model3D from resources.

struct MoonView: View {
    var body: some View {
        Model3D(named: "Moon") { phase in
            switch phase {
            case .empty:
                ProgressView()
            case let .failure(error):
                Text(error.localizedDescription)
            case let .success(model):
                model
                    .resizable()
                    .scaledToFit()
            }
        }
    }
}

Everything about models is done with the industry-standard USDZ file format (from Pixar). All material (images, vectors, sounds, etc.) is self-contained.

Handle the different states of the scene phase …

@main
struct WorldApp: App {
    @EnvironmentObject private var model: ViewModel
    @Environment(\.scenePhase) private var scenePhase

    var body: some Scene {
        ImmersiveSpace(id: "solar") {
            SolarSystem()
                .onChange(of: scenePhase) {
                    switch scenePhase {
                    case .inactive, .background:
                        model.solarEarth.scale = 0.5
                    case .active:
                        model.solarEarth.scale = 1
                    @unknown default:
                        break
                    }
                }
        }
    }
}
  • Scene Phases
  • Coordinate Conversions
  • Immersion Styles

Add gestures like on any other SwiftUI view.

Use RealityView from RealityKit for updates, attachments, and gestures; see the RealityView section below.

Spaces

Full Space can use ARKit API for location and hand tracking.

A space has three immersion styles:

  • Mixed — start with this
  • Progressive — the headset's Digital Crown can vary the level of immersion
  • Full — a fully immersive space
Immersion styles

Code: set up the ImmersiveSpace and open it. We give the ImmersiveSpace an id.

// MyFirstImmersiveApp.swift
@main
struct MyFirstImmersiveApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }.windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}

When the button is clicked we enter the immersive space with async/await.

struct ContentView: View {
    @Environment(\.openImmersiveSpace) var openImmersiveSpace

    var body: some View {
        Button("Open") {
            Task {
                await openImmersiveSpace(id: "ImmersiveSpace")
            }
        }
    }
}

Use the @Environment action to open the ImmersiveSpace, and give it an id so you can refer to it from code.

Move the scene into the full ImmersiveSpace with a few characters of code.

Setup

@main
struct WorldApp: App {
    var body: some Scene {
        // (other WindowGroup scenes)

        ImmersiveSpace(id: "solar-system") {
            SolarSystem()
        }
    }
}

Open

@Environment(\.openImmersiveSpace)
private var openImmersiveSpace

Button("View Outer Space") {
    Task {
        await openImmersiveSpace(id: "solar-system")
    }
}

The full immersion style gives 360° surroundings.

@main
struct WorldApp: App {
@State private var selectedStyle: ImmersionStyle = .full
var body: some Scene {
// (other WindowGroup scenes)

ImmersiveSpace(id: "solar-system") {
SolarSystem()
}
.immersionStyle(selection: $selectedStyle, in: .full)
}
}

Code: the star field is made with RealityKit.

struct Starfield: View {
var body: some View {
RealityView { content in
let starfield = await loadStarfield()
content.add(starfield)
}
}
}
struct SolarSystem: View {
var body: some View {
Earth()
Sun()
Starfield()
}
}

Immersion Style.

@main
struct WorldApp: App {
    @State private var currentStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solar") {
            SolarSystem()
                .simultaneousGesture(MagnifyGesture()
                    .onChanged { value in
                        let scale = value.magnification
                        // Check the larger threshold first so the .full case is reachable.
                        if scale > 10 {
                            currentStyle = .full
                        } else if scale > 5 {
                            currentStyle = .progressive
                        } else {
                            currentStyle = .mixed
                        }
                    }
                )
        }
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}

Set surrounding effects

@main
struct WorldApp: App {
@State private var currentStyle: ImmersionStyle = .progressive
var body: some Scene {
ImmersiveSpace(id: "solar") {
SolarSystem()
.preferredSurroundingsEffect( .systemDark)
}
.immersionStyle(selection: $currentStyle, in: .progressive)
}
}

Hide the user's upper limbs (hands and arms) when they enter the scene:

@main
struct WorldApp: App {
@State private var currentStyle: ImmersionStyle = .full
var body: some Scene {
ImmersiveSpace(id: "solar") {
SolarSystem()
}
.immersionStyle(selection: $currentStyle, in: .full)
.upperLimbVisibility(.hidden)
}
}

Add hand tracking with ARKitSession (see below)

struct SpaceGloves2: View {

let arSession = ARKitSession()
let handTracking = HandTrackingProvider()

var body: some View {

RealityView { content in

let root = Entity()
content.add(root)

// Load Left glove
let leftGlove = try! Entity.loadModel(named: "assets/gloves/LeftGlove_v001.usdz")
root.addChild(leftGlove)

// Load Right glove
let rightGlove = try! Entity.loadModel(named: "assets/gloves/RightGlove_v001.usdz")
root.addChild(rightGlove)

// Start ARKit session and fetch anchorUpdates
Task {
do {
try await arSession.run([handTracking])
} catch let error as ProviderError {
print("Encountered an error while running providers: \(error.localizedDescription)")
} catch let error {
print("Encountered an unexpected error: \(error.localizedDescription)")
}
for await anchorUpdate in handTracking.anchorUpdates {
let anchor = anchorUpdate.anchor
switch anchor.chirality {
case .left:
if let leftGlove = Entity.leftHand {
leftGlove.transform = Transform(matrix: anchor.transform)
for (index, jointName) in anchor.skeleton.definition.jointNames.enumerated() {
leftGlove.jointTransforms[index].rotation = simd_quatf(anchor.skeleton.joint(named: jointName).localTransform)
}
}
case .right:
if let rightGlove = Entity.rightHand {
rightGlove.transform = Transform(matrix: anchor.transform)
for (index, jointName) in anchor.skeleton.definition.jointNames.enumerated() {
rightGlove.jointTransforms[index].rotation = simd_quatf(anchor.skeleton.joint(named: jointName).localTransform)

}
}
}
}
}
}
}
}

8:17 — ImmersiveSpace with a SolarSystem view

@main
struct WorldApp: App {
var body: some Scene {
ImmersiveSpace(id: "solar") {
SolarSystem()
}
}
}

Add the launch window.

struct LaunchWindow: Scene {
var body: some Scene {
WindowGroup {
VStack {
Text("The Solar System")
.font(.largeTitle)
Text("Every 365.25 days, the planet and its satellites [...]")
SpaceControl()
}
}
}
}

9:11 — SpaceControl button using Environment actions for opening and dismissing an ImmersiveSpace scene

struct SpaceControl: View {
@Environment(\.openImmersiveSpace) private var openImmersiveSpace
@Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
@State private var isSpaceHidden: Bool = true
var body: some View {
Button(isSpaceHidden ? "View Outer Space" : "Exit the solar system") {
Task {
if isSpaceHidden {
let result = await openImmersiveSpace(id: "solar")
switch result {
// Handle result
}
} else {
await dismissImmersiveSpace()
isSpaceHidden = true
}
}
}
}
}

10:44 — WorldApp using LaunchWindow and ImmersiveSpace

@main
struct WorldApp: App {
var body: some Scene {
LaunchWindow()
ImmersiveSpace(id: "solar") {
SolarSystem()
}
}
}

We can add animation to our objects in immersive space.

Add Animation

Full immersion space has many cool features.

Add Space Features

Best Practice: Always start your initial scene with a window / volume and guide the user into full space when needed. This makes for the best experience.

Add gestures to your 3D model.

Gesture combining dragging, magnification, and 3D rotation all at once.

// Gesture combining dragging, magnification, and 3D rotation all at once.
var manipulationGesture: some Gesture<AffineTransform3D> {
DragGesture()
.simultaneously(with: MagnifyGesture())
.simultaneously(with: RotateGesture3D())
.map { gesture in
let (translation, scale, rotation) = gesture.components()

return AffineTransform3D(
scale: scale,
rotation: rotation,
translation: translation
)
}
}

// Helper for extracting translation, magnification, and rotation.
extension SimultaneousGesture<
SimultaneousGesture<DragGesture, MagnifyGesture>,
RotateGesture3D>.Value {
func components() -> (Vector3D, Size3D, Rotation3D) {
let translation = self.first?.first?.translation3D ?? .zero
let magnification = self.first?.second?.magnification ?? 1
let size = Size3D(width: magnification, height: magnification, depth: magnification)
let rotation = self.second?.rotation ?? .identity
return (translation, size, rotation)
}
}

— —

Always think about the user experience.

Change the scale of an object as the scenePhase changes to help the user know what is going on.

Use the phase change to notify the user of the state.

— —

Coordinate System

The coordinate systems in RealityKit and SwiftUI are different.

In SwiftUI, Y points down; in an immersive space, Y points up. In a full immersive space the user's feet are the origin, and the user's coordinates mirror the Z axis of the immersive-space coordinates.

User, SwiftUI and Immersion coordinates shown below.

Immersion Coordinates vs SwiftUI

Code: Example of coordinate translation ‘model.solarEarth.position = proxy.transform(in: .immersiveSpace).center’

var body: some View {
GeometryReader3D { proxy in
ZStack {
Earth(
earthConfiguration: model.solarEarth,
satelliteConfiguration: [model.solarSatellite],
moonConfiguration: model.solarMoon,
showSun: true,
sunAngle: model.solarSunAngle,
animateUpdates: animateUpdates
)
.onTapGesture {
model.solarEarth.position = proxy.transform(in: .immersiveSpace).center
}
}
}
}

We are provided convenient translation functions to go between coordinate systems

Translate …

Notes:

No Dark / Light mode — the system sets this

All distances are in meters

Move the moon 1/2 meter up. (1 m ≈ 3 ft)
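As a tiny sketch (assuming a moon entity already loaded, as in the examples below):

// All RealityKit distances are in meters: raise the moon half a meter.
moon.position.y += 0.5

// Positions are SIMD3<Float> values, also in meters.
moon.position = [0.5, 0.5, 0]   // 0.5 m right, 0.5 m up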

What is new in SwiftUI From Paul Hudson

https://www.hackingwithswift.com/articles/260/whats-new-in-swiftui-for-ios-17

Drawing and animation improvements

Reference

WWDC Video:

WWDC Notes: https://www.wwdcnotes.com/notes/wwdc23/10109/

RealityKit

RealityKit provides a simplified and declarative approach to working with AR, making it easier to develop AR experiences without extensive knowledge of lower-level graphics programming. It includes features like 3D rendering, animation, physics simulation, audio, entity-component system, and integration with ARKit for real-world tracking and scene understanding. — Apple Docs

  • Use RealityView in your SwiftUI Views
  • Add rich behaviors with custom materials, shaders, physics, and more
// Define a volumetric window.
struct WorldApp: App {
var body: some Scene {
// ...

WindowGroup(id: "planet-earth") {
Model3D(named: "Globe")
}
.windowStyle(.volumetric)
.defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
}
}

12:54 — RealityView asynchronous loading and entity positioning

import SwiftUI
import RealityKit

struct Orbit: View {
var body: some View {
RealityView { content in
async let earth = ModelEntity(named: "Earth")
async let moon = ModelEntity(named: "Moon")

if let earth = try? await earth, let moon = try? await moon {
content.add(earth)
content.add(moon)
moon.position = [0.5, 0, 0]
}
}
}
}

13:54 — Earth rotation

import SwiftUI
import RealityKit

struct RotatedModel: View {
var entity: Entity
var rotation: Rotation3D

var body: some View {
RealityView { content in
content.add(entity)
} update: { content in
entity.orientation = .init(rotation)
}
}
}

Convert Coordinates

import SwiftUI
import RealityKit

struct ResizableModel: View {
var body: some View {
GeometryReader3D { geometry in
RealityView { content in
if let earth = try? await ModelEntity(named: "Earth") {
let bounds = content.convert(geometry.frame(in: .local),
from: .local, to: content)
let minExtent = bounds.extents.min()
earth.scale = [minExtent, minExtent, minExtent]
}
}
}
}
}

RealityKit provides the new Model3D SwiftUI view.

  • Provides automatic gestures
  • Deep integration with SwiftUI attachments

RealityView

RealityView allows you to place RealityKit content in a SwiftUI view.

RealityView is how SwiftUI can use RealityKit.

  • A SwiftUI view that contains RealityKit entities
  • Connect observable state to component properties
  • Convert between view and entity coordinate spaces
  • Subscribe to events
  • Attach views to entities

Use RealityView API to add 3D objects to windows, volumes, and spaces to make your apps more immersive.

Functions:


Model3D

// Define a volumetric window.
struct WorldApp: App {
var body: some Scene {
// ...

WindowGroup(id: "planet-earth") {
Model3D(named: "Globe")
}
.windowStyle(.volumetric)
.defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
}
}


Model in 3D volume

// Define a volumetric window.
struct WorldApp: App {
var body: some Scene {
// ...

WindowGroup(id: "planet-earth") {
Model3D(named: "Globe")
}
.windowStyle(.volumetric)
.defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
}
}


Immersive Space

Place objects anywhere in the space.

// Define an immersive space.
struct WorldApp: App {
var body: some Scene {
// ...

ImmersiveSpace(id: "objects-in-orbit") {
RealityView { content in
// ...
}
}
}
}


Programmatically add RealityView content

import SwiftUI
import RealityKit

struct Orbit: View {
let earth: Entity

var body: some View {
RealityView { content in
content.add(earth)
}
}
}


12:54 — RealityView asynchronous loading and entity positioning

Place the moon 1/2 a meter to the right.

import SwiftUI
import RealityKit

struct Orbit: View {
var body: some View {
RealityView { content in
async let earth = ModelEntity(named: "Earth")
async let moon = ModelEntity(named: "Moon")

if let earth = try? await earth, let moon = try? await moon {
content.add(earth)
content.add(moon)
moon.position = [0.5, 0, 0]
}
}
}
}


Object rotation

import SwiftUI
import RealityKit

struct RotatedModel: View {
var entity: Entity
var rotation: Rotation3D

var body: some View {
RealityView { content in
content.add(entity)
} update: { content in
entity.orientation = .init(rotation)
}
}
}


14:27 — Converting co-ordinate spaces

import SwiftUI
import RealityKit

struct ResizableModel: View {
var body: some View {
GeometryReader3D { geometry in
RealityView { content in
if let earth = try? await ModelEntity(named: "Earth") {
let bounds = content.convert(geometry.frame(in: .local),
from: .local, to: content)
let minExtent = bounds.extents.min()
earth.scale = [minExtent, minExtent, minExtent]
}
}
}
}
}


14:56 — Play an animation

import SwiftUI
import RealityKit

struct AnimatedModel: View {
@State var subscription: EventSubscription?

var body: some View {
RealityView { content in
if let moon = try? await Entity(named: "Moon"),
let animation = moon.availableAnimations.first {
moon.playAnimation(animation)
content.add(moon)
}
subscription = content.subscribe(to: AnimationEvents.PlaybackCompleted.self) {
// ...
}
}
}
}

18:31 — Adding a drag gesture

struct DraggableModel: View {
var earth: Entity

var body: some View {
RealityView { content in
content.add(earth)
}
.gesture(DragGesture()
.targetedToEntity(earth)
.onChanged { value in
earth.position = value.convert(value.location3D,
from: .local, to: earth.parent!)
})
}
}

22:12 — Adding audio

Three audio types demonstrated with the user in the center.

  • Channel audio — stationary left / right
  • Spatial audio — moving directional sound
  • Ambient audio — surrounds the user from all sides
Three types of audio

Code for the audio sources.

// Create an empty entity to act as an audio source.
let audioSource = Entity()

// Configure the audio source to project sound out in a tight beam.
audioSource.spatialAudio = SpatialAudioComponent(directivity: .beam(focus: 0.75))

// Change the orientation of the audio source (rotate 180º around the Y axis).
audioSource.orientation = .init(angle: .pi, axis: [0, 1, 0])

// Add the audio source to a parent entity, and play a looping sound on it.
if let audio = try? await AudioFileResource(named: "SatelliteLoop",
configuration: .init(shouldLoop: true)) {
satellite.addChild(audioSource)
audioSource.playAudio(audio)
}
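The snippet above covers the spatial case. For the other two audio types, a hedged sketch (the resource names are placeholders, and the code assumes an async context such as a RealityView make closure):

// Ambient audio: comes from a direction but has no specific position.
let ambientSource = Entity()
ambientSource.components.set(AmbientAudioComponent())
if let wind = try? await AudioFileResource(named: "WindLoop",
                                           configuration: .init(shouldLoop: true)) {
    ambientSource.playAudio(wind)
}

// Channel audio: plain stereo, not tied to any position or direction.
let musicSource = Entity()
musicSource.components.set(ChannelAudioComponent())
if let music = try? await AudioFileResource(named: "Soundtrack", configuration: .init()) {
    musicSource.playAudio(music)
}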


System — supplies behavior and logic for your 3D models.

// Systems supply logic and behavior.
struct TraceSystem: System {
static let query = EntityQuery(where: .has(TraceComponent.self))

init(scene: Scene) {
// ...
}

func update(context: SceneUpdateContext) {
// Systems often act on all entities matching certain conditions.
for entity in context.entities(Self.query, when: .rendering) {
addCurrentPositionToTrace(entity)
}
}
}

// Systems run on all RealityKit content in your app once registered.
struct MyApp: App {
init() {
TraceSystem.registerSystem()
}
}

We load the same model, but with an asynchronous RealityView instead of Model3D.

RealityView { content in
if let earth = try? await
ModelEntity(named: "Earth")
{
earth.addImageBasedLighting()
content.add(earth)
}
}

The code demonstrates the integration of RealityKit and SwiftUI to create an augmented reality experience showcasing a rotating Earth model with interactive gestures and dynamic attachments.

struct Earth: View {
@State private var pinLocation: GlobeLocation?

var body: some View {
RealityView { content in
if let earth = try? await
ModelEntity(named: "Earth")
{
earth.addImageBasedLighting()
content.add(earth)
}
} update: { content, attachments in
if let pin = attachments.entity(for: "pin") {
content.add(pin)
placePin(pin)
}
} attachments: {
if let pinLocation {
GlobePin(pinLocation: pinLocation)
.tag("pin")
}
}
.gesture(
SpatialTapGesture()
.targetedToAnyEntity()
.onEnded { value in
withAnimation(.bouncy) {
rotation.degrees += randomRotation()
animatingRotation = true
} completion: {
animatingRotation = false
}
pinLocation = lookUpLocation(at: value)
}
)
}
}

Code: Below the scene place a toggle to enlarge the model

VStack {
Toggle("Enlarge RealityView Content", isOn: $enlarge)
.toggleStyle(.button)
}
.padding()
.glassBackgroundEffect()

To make the most of SwiftUI, ARKit, and RealityKit, use an immersive space.

Immersive space and RealityView go hand in hand

RealityView also makes ARKit and SwiftUI work perfectly together.

We can use async/await to load content concurrently.

SwiftUI way to use RealityKit with gestures and attachments.

Handle 3D gestures — see below …

Code: async-load the RealityKit "Scene" model and enlarge it 40% on tap.

RealityView { content in
// Add the initial RealityKit content
if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
content.add(scene)
}
} update: { content in
// Update the RealityKit content when SwiftUI state changes
if let scene = content.entities.first {
let uniformScale: Float = enlarge ? 1.4 : 1.0
scene.transform.scale = [uniformScale, uniformScale, uniformScale]
}
}
.gesture(TapGesture().targetedToEntity().onEnded { _ in
enlarge.toggle()
})

25:48 — Entity targeting. Tapping on the RealityKit clouds in the scene will move them. The example below adds a tap gesture to the RealityView entity.

import SwiftUI
import RealityKit

struct ContentView: View {
var body: some View {
RealityView { content in
// For entity targeting to work, entities must have a CollisionComponent
// and an InputTargetComponent!
}
.gesture(TapGesture().targetedToAnyEntity().onEnded { value in
print("Tapped entity \(value.entity)!")
})
}
}

Move the entity 10 centimeters on the tap gesture.

.gesture(TapGesture().targetedToAnyEntity().onEnded { value in
var transform = value.entity.transform
transform.translation += SIMD3(0.1, 0, -0.1)
value.entity.move(
to: transform,
relativeTo: nil,
duration: 3,
timingFunction: .easeInOut
)
})

SIMD3 is a type provided by Apple's SIMD (Single Instruction, Multiple Data) framework. It represents a 3D vector or point in three-dimensional space.

In Swift, SIMD3 is a generic struct that can hold elements of various types, such as Float, Double, or Int. It provides efficient, optimized operations for working with 3D data, such as vector addition, subtraction, scaling, dot product, cross product, and more.

let vector = SIMD3<Float>(x: 1.0, y: 2.0, z: 3.0)
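A few of those operations, as a quick sketch using the simd module:

import simd

let a = SIMD3<Float>(1, 2, 3)
let b = SIMD3<Float>(0, 1, 0)

let sum = a + b              // component-wise addition: (1, 3, 3)
let scaled = a * 2           // scaling: (2, 4, 6)
let d = dot(a, b)            // dot product: 2.0
let c = cross(a, b)          // cross product: (-3, 0, 1)
let len = length(a)          // vector length
let unit = normalize(a)      // unit-length vector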

Code: Use RealityView not Model3D to add lighting to your models.

Use Reality View to add lighting

RealityView loads models with Swift structured concurrency (try/await).

6:53 — RealityView in an ImmersiveSpace

ImmersiveSpace {
RealityView { content in
let starfield = await loadStarfield()
content.add(starfield)
}
}

RealityKit example:

Use attachments:

import SwiftUI
import RealityKit

struct MoonOrbit: View {
var body: some View {
RealityView { content, attachments in
guard let earth = try? await Entity(named: "Earth") else {
return
}
content.add(earth)

if let earthAttachment = attachments.entity(for: "earth_label") {
earthAttachment.position = [0, -0.15, 0]
earth.addChild(earthAttachment)
}
} attachments: {
Text("Earth").tag("earth_label")
}
}
}

Video

Video playback in visionOS
public func makeVideoEntity() -> Entity {
let entity = Entity()

let asset = AVURLAsset(url: Bundle.main.url(forResource: "tides_video",
withExtension: "mp4")!)
let playerItem = AVPlayerItem(asset: asset)

let player = AVPlayer()
entity.components[VideoPlayerComponent.self] = .init(avPlayer: player)

entity.scale *= 0.4

player.replaceCurrentItem(with: playerItem)
player.play()

return entity
}

Match the background with passthrough tinting, and subscribe to video events.

var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
videoPlayerComponent.isPassthroughTintingEnabled = true

entity.components[VideoPlayerComponent.self] = videoPlayerComponent
content.subscribe(to: VideoPlayerEvents.VideoSizeDidChange.self,
on: entity) { event in
// ...
}

Build our world and look at it through a portal.

struct PortalView : View {
var body: some View {
RealityView { content in
let world = makeWorld()
let portal = makePortal(world: world)

content.add(world)
content.add(portal)
}
}
}

public func makeWorld() -> Entity {
let world = Entity()
world.components[WorldComponent.self] = .init()

let environment = try! EnvironmentResource.load(named: "SolarSystem")
world.components[ImageBasedLightComponent.self] = .init(source: .single(environment),
intensityExponent: 6)
world.components[ImageBasedLightReceiverComponent.self] = .init(imageBasedLight: world)

let earth = try! Entity.load(named: "Earth")
let moon = try! Entity.load(named: "Moon")
let sky = try! Entity.load(named: "OuterSpace")
world.addChild(earth)
world.addChild(moon)
world.addChild(sky)

return world
}

public func makePortal(world: Entity) -> Entity {
let portal = Entity()

portal.components[ModelComponent.self] = .init(mesh: .generatePlane(width: 1,
height: 1,
cornerRadius: 0.5),
materials: [PortalMaterial()])
portal.components[PortalComponent.self] = .init(target: world)

return portal
}

Add particles

public class ParticleTransitionSystem: System {
private static let query = EntityQuery(where: .has(ParticleEmitterComponent.self))

public func update(context: SceneUpdateContext) {
let entities = context.scene.performQuery(Self.query)
for entity in entities {
updateParticles(entity: entity)
}
}
}

public func updateParticles(entity: Entity) {
guard var particle = entity.components[ParticleEmitterComponent.self] else {
return
}

let scale = max(entity.scale(relativeTo: nil).x, 0.3)

let vortexStrength: Float = 2.0
let lifeSpan: Float = 1.0
particle.mainEmitter.vortexStrength = scale * vortexStrength
particle.mainEmitter.lifeSpan = Double(scale * lifeSpan)

entity.components[ParticleEmitterComponent.self] = particle
}

Anchor it

import SwiftUI
import RealityKit

struct PortalApp: App {

@State private var immersionStyle: ImmersionStyle = .mixed

var body: some SwiftUI.Scene {
ImmersiveSpace {
RealityView { content in
let anchor = AnchorEntity(.plane(.vertical, classification: .wall,
minimumBounds: [1, 1]))
content.add(anchor)

anchor.addChild(makePortal())
}
}
.immersionStyle(selection: $immersionStyle, in: .mixed)
}
}

MaterialX

Apple is integrating MaterialX into the RealityKit offering.

MaterialX is an open standard for representing rich material and look-development content in computer graphics, enabling its platform-independent description and exchange across applications and renderers. It was launched at Industrial Light & Magic in 2012, with Sony Pictures Imageworks, Pixar, Autodesk, Adobe, and SideFX contributing to its ongoing development.

See below for more info about MaterialX.

MaterialX Library …

MaterialX Library

Use AccessibilityComponent with RealityKit

var accessibilityComponent = AccessibilityComponent()
accessibilityComponent.isAccessibilityElement = true
accessibilityComponent.traits = [.button, .playsSound]
accessibilityComponent.label = "Cloud"
accessibilityComponent.value = "Grumpy"
cloud.components[AccessibilityComponent.self] = accessibilityComponent

// ...

var isHappy: Bool {
didSet {
cloudEntities[id].accessibilityValue = isHappy ? "Happy" : "Grumpy"
}
}

Notes:

Never use bitmaps … always use SVG and preserve vector data.

ARKit

RealityView in RealityKit makes ARKit and SwiftUI work perfectly together for understanding the virtual space.

ARKit has been rebuilt from the ground up with a completely new API.

  • Anchor — places the object
  • Data Provider — updates the scene
  • Session — runs the data providers and delivers their anchors

Privacy

// Privacy
// Authorization

session = ARKitSession()

Task {
let authorizationResult = await session.requestAuthorization(for: [.handTracking])

for (authorizationType, authorizationStatus) in authorizationResult {
print("Authorization status for \(authorizationType): \(authorizationStatus)")

switch authorizationStatus {
case .allowed:
// All good!
break
case .denied:
// Need to handle this.
break
...
}
}
}

World Tracking

World Tracking — WorldTrackingProvider

  • Add WorldAnchors for anchoring virtual content
  • Automatic persistence of WorldAnchors
  • Get the device's pose relative to the app's origin (needed for Metal rendering); a sketch follows this list

World anchors are persisted.
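A hedged sketch of placing and persisting content with a WorldAnchor (it assumes an async context and an already-authorized session; the transform is illustrative):

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

try await session.run([worldTracking])

// Anchor virtual content one meter in front of the app's origin.
var transform = matrix_identity_float4x4
transform.columns.3.z = -1.0
let anchor = WorldAnchor(originFromAnchorTransform: transform)
try await worldTracking.addAnchor(anchor)

// WorldAnchors persist automatically; consume updates to re-attach
// content across sessions.
for await update in worldTracking.anchorUpdates {
    print("World anchor \(update.anchor.id): \(update.event)")
}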

Anchors

Provides Anchors:

Scene Understanding

  • Plane detection

Each plane is provided as a PlaneAnchor

Useful for content placement or low-fidelity physics simulations (a sketch of plane detection follows this list)

  • Scene geometry

Mesh geometry is provided as MeshAnchors

Useful for content placement or high-fidelity physics simulations
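As mentioned above, a hedged sketch of consuming plane anchors (it assumes an async context and granted authorization):

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

try await session.run([planeDetection])

for await update in planeDetection.anchorUpdates {
    guard case .added = update.event else { continue }
    let plane = update.anchor
    // classification is .wall, .floor, .table, .ceiling, and so on.
    print("New \(plane.classification) plane at \(plane.originFromAnchorTransform)")
}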

Image Tracking (ImageTrackingProvider)

  • Specify a set of ReferenceImages to detect
  • Detected images are provided as ImageAnchors
  • Useful for placing content at known, statically placed images

Hand Tracking

ARKit handles the hand tracking.

joints
// Hand tracking

@available(xrOS 1.0, *)
public struct Skeleton : @unchecked Sendable, CustomStringConvertible {

public func joint(named: SkeletonDefinition.JointName) -> Skeleton.Joint

public struct Joint : CustomStringConvertible, @unchecked Sendable {

public var parentJoint: Skeleton.Joint? { get }

public var name: String { get }

public var localTransform: simd_float4x4 { get }

public var rootTransform: simd_float4x4 { get }

public var isTracked: Bool { get }
}
}

Using the HandTrackingProvider()

  • Each hand is provided as a HandAnchor
  • Useful for content placement or detecting custom gestures
  • Poll for the latest HandAnchors, or receive HandAnchors when updates are available
// Hand tracking
// Polling for hands

struct Renderer {
ar_hand_tracking_provider_t hand_tracking;
struct {
ar_hand_anchor_t left;
ar_hand_anchor_t right;
} hands;

...
};

void renderer_init(struct Renderer *renderer) {
...

ar_hand_tracking_configuration_t hand_config = ar_hand_tracking_configuration_create();
renderer->hand_tracking = ar_hand_tracking_provider_create(hand_config);

ar_data_providers_t providers = ar_data_providers_create();
ar_data_providers_add_data_provider(providers, renderer->world_tracking);
ar_data_providers_add_data_provider(providers, renderer->hand_tracking);
ar_session_run(renderer->session, providers);

renderer->hands.left = ar_hand_anchor_create();
renderer->hands.right = ar_hand_anchor_create();

...
}
// Hand tracking
// Polling for hands

void render(struct Renderer *renderer,
... ) {
...

ar_hand_tracking_provider_get_latest_anchors(renderer->hand_tracking,
renderer->hands.left,
renderer->hands.right);

if (ar_trackable_anchor_is_tracked(renderer->hands.left)) {
const simd_float4x4 origin_from_wrist
= ar_anchor_get_origin_from_anchor_transform(renderer->hands.left);

...
}

...
}

Example Code

  • App and ViewModel
  • Session initialization
  • Hand colliders
  • Scene colliders
  • Cubes

// App

@main
struct TimeForCube: App {
@StateObject var model = TimeForCubeViewModel()

var body: some SwiftUI.Scene {
ImmersiveSpace {
RealityView { content in
content.add(model.setupContentEntity())
}
.task {
await model.runSession()
}
.task {
await model.processHandUpdates()
}
.task {
await model.processReconstructionUpdates()
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded({ value in
let location3D = value.convert(value.location3D, from: .global, to: .scene)
model.addCube(tapLocation: location3D)
}))
}
}
}

// View model

@MainActor class TimeForCubeViewModel: ObservableObject {
private let session = ARKitSession()
private let handTracking = HandTrackingProvider()
private let sceneReconstruction = SceneReconstructionProvider()

private var contentEntity = Entity()

private var meshEntities = [UUID: ModelEntity]()

private let fingerEntities: [HandAnchor.Chirality: ModelEntity] = [
.left: .createFingertip(),
.right: .createFingertip()
]

func setupContentEntity() -> Entity { ... }

func runSession() async { ... }

func processHandUpdates() async { ... }

func processReconstructionUpdates() async { ... }

func addCube(tapLocation: SIMD3<Float>) { ... }
}

// Hand tracking

class TimeForCubeViewModel: ObservableObject {
...
private let fingerEntities: [HandAnchor.Chirality: ModelEntity] = [
.left: .createFingertip(),
.right: .createFingertip()
]

...
func processHandUpdates() async {
for await update in handTracking.anchorUpdates {
let handAnchor = update.anchor

guard handAnchor.isTracked else { continue }

let fingertip = handAnchor.skeleton.joint(named: .handIndexFingerTip)

guard fingertip.isTracked else { continue }

let originFromWrist = handAnchor.transform
let wristFromIndex = fingertip.rootTransform
let originFromIndex = originFromWrist * wristFromIndex

fingerEntities[handAnchor.chirality]?.setTransformMatrix(originFromIndex, relativeTo: nil)
}
}
}

Build the scene

func processReconstructionUpdates() async {
for await update in sceneReconstruction.anchorUpdates {
let meshAnchor = update.anchor

guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }

switch update.event {
case .added:
let entity = ModelEntity()
entity.transform = Transform(matrix: meshAnchor.transform)
entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
entity.physicsBody = PhysicsBodyComponent()
entity.components.set(InputTargetComponent())

meshEntities[meshAnchor.id] = entity
contentEntity.addChild(entity)
case .updated:
guard let entity = meshEntities[meshAnchor.id] else { fatalError("...") }
entity.transform = Transform(matrix: meshAnchor.transform)
entity.collision?.shapes = [shape]
case .removed:
meshEntities[meshAnchor.id]?.removeFromParent()
meshEntities.removeValue(forKey: meshAnchor.id)
@unknown default:
fatalError("Unsupported anchor event")
}
}
}

Drop a cube at the tap location

class TimeForCubeViewModel: ObservableObject {
func addCube(tapLocation: SIMD3<Float>) {
let placementLocation = tapLocation + SIMD3<Float>(0, 0.2, 0)

let entity = ModelEntity(
mesh: .generateBox(size: 0.1, cornerRadius: 0.0),
materials: [SimpleMaterial(color: .systemPink, isMetallic: false)],
collisionShape: .generateBox(size: SIMD3<Float>(repeating: 0.1)),
mass: 1.0)

entity.setPosition(placementLocation, relativeTo: nil)
entity.components.set(InputTargetComponent(allowedInputTypes: .indirect))

let material = PhysicsMaterialResource.generate(friction: 0.8, restitution: 0.0)
entity.components.set(PhysicsBodyComponent(shapes: entity.collision!.shapes,
mass: 1.0,
material: material,
mode: .dynamic))

contentEntity.addChild(entity)
}
}


Render an ImmersiveSpace with CoreAnimation or MaterialX.

Use Reality Composer Pro to render with MaterialX

Xcode

Everything is built with Xcode, with enhancements to code completion, Xcode Previews, the test navigator, test reports, a streamlined distribution process, improved navigation, source control management, and debugging.

Tabview for Previews

Documentation

Sample usage and a screenshot embedded in a documentation comment:

/// Create the bird icon view.
///
/// The bird icon view is a tailored version of the ``ComposedBird`` view.
///
/// Use this initializer to display an image of a given bird.
///
/// ```swift
/// var bird: Bird
///
/// var body: some View {
/// HStack {
/// BirdIcon(bird: bird)
/// .frame(width: 60, height: 60)
/// Text(bird.speciesName)
/// }
/// }
/// ```
///
/// ![A screenshot of a view containing a bird icon with the bird's species name below it.](birdIcon)

Swift Macro

Support for the new Swift macros language feature.

From Paul Hudson:

https://www.hackingwithswift.com/articles/258/whats-new-in-swift-5-9

The key things to know are:

They are type-safe rather than simple string replacements, so you need to tell your macro exactly what data it will work with.

They run as external programs during the build phase, and do not live in your main app target.

Macros are broken down into multiple smaller types, such as ExpressionMacro to generate a single expression, AccessorMacro to add getters and setters, and ConformanceMacro to make a type conform to a protocol.

Macros work with your parsed source code — we can query individual parts of the code, such as the name of a property we're manipulating or its types, or the various properties inside a struct.

They work inside a sandbox and must operate only on the data they are given.

Example Swift Macro:

extension TokenSyntax {
fileprivate var initialUppercased: String {
let name = self.text
guard let initial = name.first else {
return name
}

return "\(initial.uppercased())\(name.dropFirst())"
}
}

public struct CaseDetectionMacro: MemberMacro {
public static func expansion<
Declaration: DeclGroupSyntax, Context: MacroExpansionContext
>(
of node: AttributeSyntax,
providingMembersOf declaration: Declaration,
in context: Context
) throws -> [DeclSyntax] {
declaration.memberBlock.members
.compactMap { $0.decl.as(EnumCaseDeclSyntax.self) }
.map { $0.elements.first!.identifier }
.map { ($0, $0.initialUppercased) }
.map { original, uppercased in
"""
var is\(raw: uppercased): Bool {
if case .\(raw: original) = self {
return true
}

return false
}
"""
}
}
}

@main
struct EnumHelperPlugin: CompilerPlugin {
let providingMacros: [Macro.Type] = [
CaseDetectionMacro.self,
]
}

Expand the macro right from Xcode

Use Expand Macro in Xcode to see what the compiler is providing.

Logging

import OSLog

let logger = Logger(subsystem: "BackyardBirdsData", category: "Account")

func login(password: String) -> Error? {
var error: Error? = nil
logger.info("Logging in user '\(username)'...")

// ...

if let error {
logger.error("User '\(username)' failed to log in. Error: \(error)")
} else {
loggedIn = true
logger.notice("User '\(username)' logged in successfully.")
}
return error
}

Xcode Previews

Allow for RealityKit 3D.

  • Everything is built with Xcode
  • Xcode Previews support RealityKit 3D, so Xcode can preview 3D models
  • The simulator lets you explore the scene in simulated locations, both day and night

Reality Composer Pro

New way to preview and prepare 3D content for your visionOS apps. Reality Composer Pro leverages the power of USD to help you compose, edit and preview assets, such as 3D models, materials, and sounds.

Meet Reality Composer Pro

Reality Composer Pro inside Xcode

It will generate a USDZ file for Xcode to use, and then you're done.

Click the button to open Reality Composer Pro 👇🏾

Open Reality Composer Pro from inside Xcode

This will open Reality Composer Pro.

On the leading side, we have the hierarchy panel.

On the trailing side, we have the inspector panel.

Content Library

With no code you can add objects and change materials.

Particles

Particle emitters allow us to create effects such as this flame in our scene. A particle emitter is composed of two parts:

  • Particles

set → Color, properties, force fields, and rendering sections

  • Emitter — Two ways to add: 1) Plus button at the bottom of the hierarchy panel. 2) adding a Particle Emitter Component to any object in the scene by clicking the Add Component button at the bottom of the inspector panel.

set → Timing, shape, and spawning sections.

Both the particle and emitter have multiple sections of variables to tweak and modify.

When a new scene is created, it appears as a new tab in the window.

Use particle emitters to make a cloud

Spatial audio

Audio Authoring

How audio file groups are emitted into a scene:
  • Spatial — position and direction
  • Ambient — direction only
  • Channel — all around; neither position nor direction

Spatial audio bird sound

Statistics

We can use the statistics from the scene to understand its performance.

Just by viewing the statistics we can see places to optimize the scene.

Summary:

  1. Assemble 3D scenes
  2. Add RealityKit Components
  3. Optimize and Preview

Explore materials in Reality Composer Pro

Alter the appearance of your 3D objects using RealityKit materials. We’ll introduce you to MaterialX and physically-based (PBR) shaders, shader graph editor, and explore adding custom inputs to a material.

New ShaderGraphMaterial for VisionOS

MaterialX

Materials consist of one or more shaders. These are programs that do the actual work of computing the appearance of their material. With RealityKit 2 for iOS and iPadOS, we introduced CustomMaterial. Shaders in CustomMaterial are hand coded in Metal. In xrOS, we’re introducing a new type of material called ShaderGraphMaterial. This is the exclusive way of creating custom materials for xrOS. ShaderGraphMaterial uses networks, or graphs, of functional blocks, which is where it gets its name. ShaderGraphMaterial is based on an open standard called MaterialX and is an artist-friendly way to define materials. MaterialX was originally created by Industrial Light & Magic in 2012. ShaderGraphMaterial supports two main types of shaders, which we call Physically Based and Custom. Physically Based is a basic PBR shader. Choose this for simpler use cases. You configure this shader by providing constant, nonchanging values, like colors or images, for each property. Custom shaders, on the other hand, give you precise and custom control over the appearance of your 3D objects. Custom shaders can incorporate animation, adjust your object’s geometry, and create special effects on the surface of your object, like a sparkly paint look. — Apple

ShaderGraphMaterial can use Physically Based (PBR) or custom shaders.

Start with PBR shader (much easier) and if needed move to custom shaders. Custom shaders can incorporate animation.

By combining blocks we can make new materials right inside Reality Composer Pro.

We can add a topographical map without any code; because the shader does not need to be dynamic, we can just use a PBR shader.

A PBR shader sets contour lines in the model without any code!
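When you do need a dynamic input, here is a hedged sketch of loading a shader-graph material authored in Reality Composer Pro and driving one of its custom inputs from code (the scene name, material path, and parameter name are placeholders):

import RealityKit
import RealityKitContent

// Assumes an async context, e.g. a RealityView make closure.
if var material = try? await ShaderGraphMaterial(named: "/Root/TerrainMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle),
   let terrain = try? await ModelEntity(named: "Terrain", in: realityKitContentBundle) {

    // Drive a custom input exposed in the shader graph editor.
    try? material.setParameter(name: "ContourLineHeight", value: .float(0.05))

    terrain.model?.materials = [material]
}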

Geometry Modifiers:

  • Surface shaders operate on the PBR attributes
  • Geometry modifiers operate on the geometry
  • Both surface and geometry shaders can be built in the Shader Graph

Metal

Metal powers hardware-accelerated graphics on Apple platforms by providing a low-overhead API, rich shading language, tight integration between graphics and compute, and an unparalleled suite of GPU profiling and debugging tools. — Apple

OpenGL

Spatial Audio

USDZ

Depending on the device, its operating system, and the app, there are three renderers that might display your 3D assets: RealityKit, SceneKit, or Storm.

RealityKit

The RealityKit renderer is part of the RealityKit framework. It handles drawing for Reality Converter and Reality Composer. It also renders AR Quick Look for USDZ files on iOS and Quick Look for USDZ files on macOS. It’s also used for USDZ files in Xcode 15.

SceneKit

The SceneKit renderer is part of the SceneKit framework. It renders 3D content in Xcode, Motion, and all other apps that use the SceneKit framework. Preview and Quick Look also use it to display USD and USDC files on macOS 10 and 11. On macOS 13 and earlier, Quick Look uses SceneKit for displaying USDA files.

The SceneKit framework will be available on VisionOS, for rendering 3D content in a 2D view (as on iOS and other platforms). RealityKit, and SwiftUI’s Model3D, are available for incorporating 3D content as part of a spatial experience.

Storm

The Storm renderer is a Metal-native implementation of Pixar’s high-performance preview renderer. It’s available on macOS 12 and later, where Preview and Quick Look use it to display USD, USDA, and USDC files.

— —

Each renderer supports a subset of the USD features. Use the tables below to determine which features are available. See Creating USD Files for Apple Devices for more information on how to determine which engine renders your USD asset.

Work with Reality Composer Pro content in Xcode

Load 3D scenes into Xcode, integrate your content with your code, and add interactivity to your app.

  • Load 3D content
  • Components
  • Play audio
  • Material properties

Load content from Reality Composer Pro into a RealityView inside Xcode. This is the bridge between the two worlds.

RealityView { content in
do {
let entity = try await Entity(named: "DioramaAssembled", in: realityKitContentBundle)
content.add(entity)
} catch {
// Handle error
}
}

A System has an update function that is called once per frame.

These are all explained in RealityKit 2 docs.

Component

Add a component:

let component = MyComponent()
entity.components.set(component)

Tag the button in the attachments, then receive the attachment in the update closure.

The make closure must have the entity ready when it starts, because it is only run once. Duplicates will happen if an entity is added in the update closure; placing an entity there will keep adding it over and over.

SwiftUI adds custom views with attachments.

let myEntity = Entity()

RealityView { content, _ in
// Add the USD scene from the project's RealityKitContent bundle
if let entity = try? await Entity(named: "MyScene", in: realityKitContentBundle) {
content.add(entity)
}
} update: { content, attachments in
// add the attachment with the fish tag
if let attachmentEntity = attachments.entity(for: "🐠") {
content.add(attachmentEntity)
}

content.add(myEntity)

} attachments: {
Button { ... }
.background(.green)
.tag("🐠")
}

Add entities programmatically

@Observable final class AttachmentsProvider {
var attachments: [ObjectIdentifier: AnyView] = [:]
var sortedTagViewPairs: [(tag: ObjectIdentifier, view: AnyView)] { ... }
}

...

@State var attachmentsProvider = AttachmentsProvider()

RealityView { _, _ in

} update: { _, _ in

} attachments: {
ForEach(attachmentsProvider.sortedTagViewPairs, id: \.tag) { pair in
pair.view
}
}
static let runtimeQuery = EntityQuery(where: .has(PointOfInterestRuntimeComponent.self))

RealityView { _, _ in

} update: { content, attachments in

rootEntity.scene?.performQuery(Self.runtimeQuery).forEach { entity in
guard let component = entity.components[PointOfInterestRuntimeComponent.self],
let attachmentEntity = attachments.entity(for: component.attachmentTag) else {
return
}
content.add(attachmentEntity)
attachmentEntity.setPosition([0, 0.5, 0], relativeTo: entity)
}
} attachments: {
ForEach(attachmentsProvider.sortedTagViewPairs, id: \.tag) { pair in
pair.view
}
}
static let markersQuery = EntityQuery(where: .has(PointOfInterestComponent.self))
@State var attachmentsProvider = AttachmentsProvider()

rootEntity.scene?.performQuery(Self.markersQuery).forEach { entity in
guard let pointOfInterest = entity.components[PointOfInterestComponent.self] else { return }

let attachmentTag: ObjectIdentifier = entity.id

let view = LearnMoreView(name: pointOfInterest.name, description: pointOfInterest.description)
.tag(attachmentTag)

attachmentsProvider.attachments[attachmentTag] = AnyView(view)
let runtimeComponent = PointOfInterestRuntimeComponent(attachmentTag: attachmentTag)
entity.components.set(runtimeComponent)
}

Build your first VisionOS App

  • Create an Xcode project
  • Simulator
  • Xcode Previews
  • Reality Composer Pro
  • Create an immersive scene
  • Target gestures to entities

Xcode

The initial scene can be a Window or a Volume. We can move to Full immersion after starting the program.

An Immersive Space can be: Mixed / Progressive / Full.

It is suggested to start people with no immersion and let them progress toward full as they feel comfortable.

Initial project setup

Label below the globe

VStack {
Toggle("Enlarge RealityView Content", isOn: $enlarge)
.toggleStyle(.button)
}
.padding()
.glassBackgroundEffect()

Looking at the code we see it is just normal SwiftUI code with a RealityView baked inside.

RealityView { content in
// Add the initial RealityKit content
if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
content.add(scene)
}
} update: { content in
// Update the RealityKit content when SwiftUI state changes
if let scene = content.entities.first {
let uniformScale: Float = enlarge ? 1.4 : 1.0
scene.transform.scale = [uniformScale, uniformScale, uniformScale]
}
}
.gesture(TapGesture().targetedToEntity().onEnded { _ in
enlarge.toggle()
})

20:31 — ImmersiveView

// MyFirstImmersiveApp.swift

@main
struct MyFirstImmersiveApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}.windowStyle(.volumetric)

ImmersiveSpace(id: "ImmersiveSpace") {
ImmersiveView()
}
}
}

23:48 — openImmersiveSpace

struct ContentView: View {

@Environment(\.openImmersiveSpace) var openImmersiveSpace

var body: some View {
Button("Open") {
Task {
await openImmersiveSpace(id: "ImmersiveSpace")
}
}
}
}

25:48 — Entity targeting

import SwiftUI
import RealityKit

struct ContentView: View {
var body: some View {
RealityView { content in
// For entity targeting to work, entities must have a CollisionComponent
// and an InputTargetComponent!
}
.gesture(TapGesture().targetedToAnyEntity().onEnded { value in
print("Tapped entity \(value.entity)!")
})
}
}

28:56 — Move animation

.gesture(TapGesture().targetedToAnyEntity().onEnded { value in
var transform = value.entity.transform
transform.translation += SIMD3(0.1, 0, -0.1)
value.entity.move(
to: transform,
relativeTo: nil,
duration: 3,
timingFunction: .easeInOut
)
})

Understand the DioramaApp from Apple

Diorama Apple App

Get the code and resources here.

Quick overview:

Currently working on a detailed walkthrough of the codebase … stay tuned :-)

Link here:

What about KMM

No!

Issue open
Jetbrains response

We can't be reliant on what they think is important! We have customers waiting on us, and that is why we build on iOS and not KMM.

Much more to come … including the release of the SDK :-)

Please leave comments, but remember this is just the skeleton with most of the "meat" still to come.

Thanks,

~Ash
