What exactly is ARAnchor?

This tutorial explains what ARAnchor actually is. It is adapted from another source, combining a developer's question with the corresponding answer. I hope it helps you; let's get started.


Problem description

I am trying to understand and use ARKit, but there is one thing that I cannot fully understand.

Apple says this about ARAnchor:

A real-world position and orientation that can be used for placing objects in an AR scene.

But that's not enough. So my questions are:

    What exactly is an ARAnchor?

    What is the difference between anchors and feature points?

    Is an ARAnchor just a part of feature points?

    How does ARKit determine its anchors?

Solution

Updated: November 20, 2021.

TL;DR

ARAnchor

ARAnchor is an invisible null object that can hold a 3D model at an anchor's position in world space. Think of ARAnchor as a transform node with a local axis (you can translate, rotate and scale it) for your model. Every 3D model has a pivot point, right? So this pivot point has to match an ARAnchor.

If you don't use anchors in your ARKit/RealityKit app, your 3D models may drift away from where they were placed, which will dramatically affect your app's realism and user experience. Anchors are therefore a crucial element of any AR scene.

According to the ARKit documentation from 2017:

ARAnchor is a real-world position and orientation that can be used for placing objects in an AR scene. Adding an anchor to the session helps ARKit optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position.

ARAnchor is the parent class of all the other anchor types in the ARKit framework, so all those subclasses inherit from the ARAnchor class, but you cannot use ARAnchor itself directly in your code. I should also say that ARAnchor and Feature Points have nothing in common: Feature Points are rather for successful tracking and for debugging.
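Feature points are mainly useful for checking tracking quality; a minimal sketch of how to make them visible in an ARSCNView (the sceneView property name is an assumption):

    // Show ARKit's yellow feature points and the world-origin axes for debugging
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                              ARSCNDebugOptions.showWorldOrigin]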

An ARAnchor doesn't automatically track a real-world target. If you need automation, you have to use the renderer(...) or session(...) instance methods, which are available to you if you conform to the ARSCNViewDelegate or ARSessionDelegate protocol, respectively.
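For instance, here is a minimal sketch (assuming a UIViewController subclass named ViewController that owns the ARSession) of the ARSessionDelegate methods mentioned above; ARKit calls them automatically whenever anchors are added or refined:

    extension ViewController: ARSessionDelegate {

        // Called when ARKit adds new anchors (e.g. detected planes) to the session
        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            print("Added anchors:", anchors.map { $0.identifier })
        }

        // Called every time ARKit refines an existing anchor's transform
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            print("Updated anchors:", anchors.map { $0.identifier })
        }
    }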

    So, if you want to see any anchor in your scene, you have to "visualize" it, for example using three thin SCNCylinder primitives.

    In ARKit you can automatically add ARAnchors to your scene using different scenarios (a configuration sketch combining several of them follows this list):

      ARPlaneAnchor

        If the horizontal and/or vertical planeDetection instance property is ON, ARKit is able to add ARPlaneAnchors to the current session. Sometimes enabled planeDetection considerably increases the time required for the scene-understanding stage.

      ARImageAnchor (conforms to ARTrackable protocol)

        This type of anchor contains information about the position and orientation of a detected image (the anchor is placed at the image center) in a world-tracking session. To activate detection, use the detectionImages instance property. In ARKit 2.0 you can track up to 25 images in total; in ARKit 3.0 and ARKit 4.0 – up to 100 images. But in both cases you can track no more than 4 images simultaneously. It was promised that ARKit 5.0 would detect and track up to 100 images at a time, but it's still not implemented yet.

      ARBodyAnchor (conforms to ARTrackable protocol)

        In the latest release of ARKit you can enable body tracking by running your session with ARBodyTrackingConfiguration(). You'll get an ARBodyAnchor at the Root Joint of the CG skeleton, i.e. at the pelvis position of the tracked character.

      ARFaceAnchor (conforms to ARTrackable protocol)

        A Face Anchor stores information about the topology, the pose and the facial expression that you can detect with the front TrueDepth camera or with a regular RGB camera. When a face is detected, the Face Anchor is attached slightly behind the nose, at the center of the face. In ARKit 2.0 you can track just one face; in ARKit 3.0 – up to 3 faces simultaneously. In ARKit 4.0 the number of tracked faces depends on the TrueDepth sensor and the CPU: smartphones with a TrueDepth camera track up to 3 faces, and smartphones with an A12+ chipset but without a TrueDepth camera can also track up to 3 faces.

      ARObjectAnchor

        This anchor type keeps information about the 6 degrees of freedom (position and orientation) of a real-world 3D object detected in a world-tracking session. Remember that you need to specify ARReferenceObject instances for the detectionObjects property of the session configuration.

      AREnvironmentProbeAnchor

        Probe Anchor provides environmental lighting information for a specific area of space in a world-tracking session. ARKit's Artificial Intelligence uses it to supply reflective shaders with environmental reflections.

      ARParticipantAnchor

        This is an indispensable anchor type for multiuser AR experiences. If you want to employ it, set the isCollaborationEnabled instance property of your world-tracking configuration to true and share the collaboration data via the MultipeerConnectivity framework.

      ARMeshAnchor

        ARKit and LiDAR subdivide the reconstructed real-world scene surrounding the user into mesh anchors with corresponding polygonal geometry. Mesh anchors constantly update their data as ARKit refines its understanding of the real world. Although ARKit updates a mesh to reflect changes in the physical environment, the mesh is not intended to change in real time. Sometimes your reconstructed scene can have 50 anchors or even more, because each classified object (wall, chair, door or table) gets its own anchor. Each ARMeshAnchor stores data about the corresponding vertices, one of eight classification cases, its faces and its vertex normals.

      ARGeoAnchor (conforms to ARTrackable protocol)

        In ARKit 4.0+ there's a geo anchor (a.k.a. location anchor) that tracks a geographic location using GPS, Apple Maps and additional environment data coming from Apple servers. This type of anchor identifies a specific area in the world that the app can refer to. When a user moves around the scene, the session updates the location anchor's transform based on the geo anchor's coordinates and the device's compass heading. The feature is available only in a list of supported cities.

      ARAppClipCodeAnchor (conforms to ARTrackable protocol)

        This anchor tracks the position and orientation of an App Clip Code in the physical environment in ARKit 4.0+. You can use App Clip Codes to enable users to discover your App Clip in the real world. There are NFC-integrated and scan-only App Clip Codes.
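    Here is a minimal configuration sketch showing how several of the anchor types above get created automatically in a world-tracking session (the resource group names and the sceneView property are assumptions):

    let config = ARWorldTrackingConfiguration()

    // ARPlaneAnchor – detect horizontal and vertical surfaces
    config.planeDetection = [.horizontal, .vertical]

    // ARImageAnchor – detect reference images from an asset catalog group
    config.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil) ?? []
    config.maximumNumberOfTrackedImages = 4

    // ARObjectAnchor – detect previously scanned ARReferenceObject instances
    config.detectionObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                                                 bundle: nil) ?? []

    // AREnvironmentProbeAnchor – let ARKit place light probes automatically
    config.environmentTexturing = .automatic

    sceneView.session.run(config)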

    There are also other common approaches to creating anchors in an AR session:

      Hit-Testing methods

        Tapping on the screen projects a point onto an invisible detected plane, placing an ARAnchor at the location where the imaginary ray intersects this plane. By the way, the ARHitTestResult class and its corresponding hit-testing methods for ARSCNView and ARSKView are deprecated in iOS 14, so you have to get used to ray-casting.

      Ray-Casting methods

        If you're using ray-casting, tapping on the screen results in a projected 3D point on an invisible detected plane. You can also perform ray-casting between A and B positions in a 3D scene. The main difference between ray-casting and hit-testing is that with ray-casting ARKit can keep refining the ray cast as it learns more about detected surfaces, and ray-casting can be 2D-to-3D as well as 3D-to-3D (see the sketch after this list).

      Feature Points

        Special yellow points that ARKit automatically generates on high-contrast edges of real-world objects can give you a place to put an ARAnchor on.

      ARCamera's transform

        The iPhone camera's position and orientation, expressed as a simd_float4x4 transform, can easily be used as a place for an ARAnchor (also shown in the sketch after this list).

      Any arbitrary World Position

        Place an anchor at any arbitrary world position, for example with ARAnchor(transform:) in ARKit; the RealityKit equivalent is AnchorEntity(.world(transform: mtx)).
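    Here is a minimal sketch of two of the approaches above: creating an anchor from a ray-cast result and from the camera's transform (the sceneView ARSCNView and the tapLocation CGPoint are assumptions):

    // Ray-casting: project a 2D screen point onto a detected (or estimated) plane
    if let query = sceneView.raycastQuery(from: tapLocation,
                                          allowing: .estimatedPlane,
                                          alignment: .any),
       let result = sceneView.session.raycast(query).first {
        sceneView.session.add(anchor: ARAnchor(name: "raycastAnchor",
                                               transform: result.worldTransform))
    }

    // ARCamera's transform: pin an anchor to the camera's current pose
    if let cameraTransform = sceneView.session.currentFrame?.camera.transform {
        sceneView.session.add(anchor: ARAnchor(name: "cameraAnchor",
                                               transform: cameraTransform))
    }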

    This code snippet shows you how to use an ARPlaneAnchor in a delegate method, renderer(_:didAdd:for:):

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // Grid is a custom SCNNode subclass that visualizes the detected plane
        let grid = Grid(anchor: planeAnchor)
        node.addChildNode(grid)
    }
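    Grid above is not an ARKit class; it's a custom SCNNode subclass. A minimal sketch of what such a class might look like, simply visualizing the detected plane's extent, could be:

    import ARKit
    import SceneKit
    import UIKit

    class Grid: SCNNode {

        init(anchor: ARPlaneAnchor) {
            super.init()
            // A translucent plane matching the anchor's estimated extent
            let plane = SCNPlane(width: CGFloat(anchor.extent.x),
                                 height: CGFloat(anchor.extent.z))
            plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.5)

            let planeNode = SCNNode(geometry: plane)
            planeNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
            planeNode.eulerAngles.x = -.pi / 2    // lay the plane flat
            addChildNode(planeNode)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
    }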
    

    AnchorEntity

    According to the RealityKit documentation from 2019:

    AnchorEntity is an anchor that tethers virtual content to a real-world object in an AR session.

    The RealityKit framework and the Reality Composer app were released at WWDC'19. They introduce a new class named AnchorEntity. You can use AnchorEntity as the root point of any entity hierarchy, and you must add it to the Scene's anchors collection. AnchorEntity automatically tracks a real-world target. In RealityKit and Reality Composer, AnchorEntity sits at the top of the hierarchy. A single anchor can hold a hundred models, and in that case it's more stable than using 100 separate anchors, one for each model.

    Let's see how it looks in code (a SwiftUI UIViewRepresentable method; Experience is the code that Reality Composer auto-generates for your .rcproject):

    func makeUIView(context: Context) -> ARView {

        let arView = ARView(frame: .zero)
        // Load the anchor (and its models) authored in Reality Composer
        let modelAnchor = try! Experience.loadModel()
        arView.scene.anchors.append(modelAnchor)
        return arView
    }
    

    AnchorEntity has three components:

      Anchoring component

      Transform component

      Synchronization component
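    A minimal sketch of how these three components show up on a freshly created AnchorEntity:

    import RealityKit

    let cameraAnchor = AnchorEntity(.camera)
    print(cameraAnchor.anchoring)                // Anchoring component (target: .camera)
    print(cameraAnchor.transform)                // Transform component (scale, rotation, translation)
    print(cameraAnchor.synchronization as Any)   // Synchronization component (for multiuser sessions)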

    To find out the difference between ARAnchor and AnchorEntity look at THIS POST.

    Here are the AnchorEntity cases and initializers available in RealityKit 2.0 for iOS:

    // Fixed position in the AR scene
    AnchorEntity(.world(transform: mtx)) 
    
    // For body tracking (a.k.a. Motion Capture)
    AnchorEntity(.body)
    
    // Pinned to the tracking camera
    AnchorEntity(.camera)
    
    // For face tracking (Selfie Camera config)
    AnchorEntity(.face)
    
    // For image tracking config
    AnchorEntity(.image(group: "GroupName", name: "forModel"))
    
    // For object tracking config
    AnchorEntity(.object(group: "GroupName", name: "forObject"))
    
    // For plane detection with surface classification
    AnchorEntity(.plane([.any], classification: [.seat], minimumBounds: [1, 1]))
    
    // When you use ray-casting
    AnchorEntity(raycastResult: myRaycastResult)
    
    // When you use ARAnchor with a given identifier
    AnchorEntity(.anchor(identifier: uuid))
    
    // Creates anchor entity on a basis of ARAnchor
    AnchorEntity(anchor: arAnchor) 
    

    And here are the only two AnchorEntity cases available in RealityKit 2.0 for macOS:

    // Fixed world position in VR scene
    AnchorEntity(.world(transform: mtx))
    
    // Camera transform
    AnchorEntity(.camera)
    

    It's also worth mentioning that you can use any subclass of ARAnchor to create an AnchorEntity:

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

        guard let faceAnchor = anchors.first as? ARFaceAnchor else { return }

        arView.session.add(anchor: faceAnchor)

        // Wrap the ARFaceAnchor in an AnchorEntity and attach a model to it
        let anchor = AnchorEntity(anchor: faceAnchor)
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
    }
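    For the session(_:didUpdate:) method above to fire, the view controller has to be the session's delegate and the session has to run a face-tracking configuration; a minimal sketch:

    arView.session.delegate = self
    arView.session.run(ARFaceTrackingConfiguration())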
    

    Visualizing AnchorEntity

    Here's an example of how to visualize anchors in RealityKit (mac version).

    import AppKit
    import RealityKit
    
    class ViewController: NSViewController {

        @IBOutlet var arView: ARView!
        var model = Entity()
        let anchor = AnchorEntity()

        // Builds three thin, mutually perpendicular boxes (red = X, green = Y,
        // blue = Z) so the anchor's position and axes become visible.
        fileprivate func visualAnchor() -> Entity {

            let colors: [SimpleMaterial.Color] = [.red, .green, .blue]

            for index in 0...2 {

                let box: MeshResource = .generateBox(size: [0.20, 0.005, 0.005])
                let material = UnlitMaterial(color: colors[index])
                let entity = ModelEntity(mesh: box, materials: [material])

                if index == 0 {
                    entity.position.x += 0.1

                } else if index == 1 {
                    entity.transform = Transform(pitch: 0, yaw: 0, roll: .pi/2)
                    entity.position.y += 0.1

                } else if index == 2 {
                    entity.transform = Transform(pitch: 0, yaw: -.pi/2, roll: 0)
                    entity.position.z += 0.1
                }
                model.scale *= 1.5
                self.model.addChild(entity)
            }
            return self.model
        }

        override func awakeFromNib() {
            super.awakeFromNib()
            anchor.addChild(self.visualAnchor())
            arView.scene.addAnchor(anchor)
        }
    }
    
