I have an interactive 3D globe built with SceneKit, where countries are represented by dots. The function below takes a position and animates the camera to it.

If the user has not interacted with the globe, I can call the function repeatedly and the camera animates to each new position.

However, once the user performs any gesture in the scene, the camera animation stops working.

A solution found in a different SO thread (linked below) used the line sceneView.pointOfView = cameraNode at the beginning of the function.
That did solve the issue of the camera not animating after a gesture.

However, that line snaps the globe back to its original position before the animation runs. I have been trying to work around this scene reset without success.

My assumption is that performing a gesture on the globe creates a new point of view for the scene that overrides the camera's, which is why setting the scene's point of view back to the camera before animating fixes the issue.

import Foundation
import SceneKit
import CoreImage
import SwiftUI
import MapKit

public typealias GenericController = UIViewController

public class GlobeViewController: GenericController {
    var nodePos: CGPoint? = nil
    public var earthNode: SCNNode!
    private var sceneView : SCNView!
    private var cameraNode: SCNNode!
    private var dotCount = 50000
    
    public init(earthRadius: Double) {
        self.earthRadius = earthRadius
        super.init(nibName: nil, bundle: nil)
    }
    
    public init(earthRadius: Double, dotCount: Int) {
        self.earthRadius = earthRadius
        self.dotCount = dotCount
        super.init(nibName: nil, bundle: nil)
    }
    
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func centerCameraOnDot(dotPosition: SCNVector3) {
        sceneView.pointOfView = cameraNode //HERE RESETS
        
        let fixedDistance: Float = 5.0
        let newCameraPosition = dotPosition.normalized().scaled(to: fixedDistance)

        let moveAction = SCNAction.move(to: newCameraPosition, duration: 1.5)

        let constraint = SCNLookAtConstraint(target: earthNode)
        constraint.isGimbalLockEnabled = true

        sceneView.gestureRecognizers?.forEach { $0.isEnabled = false }
        
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 1.5

        self.cameraNode.constraints = [constraint]
        self.cameraNode.runAction(moveAction) {
            DispatchQueue.main.async {
                self.sceneView.gestureRecognizers?.forEach { $0.isEnabled = true }
            }
        }
        SCNTransaction.commit()
    }

    public override func viewDidLoad() {
        super.viewDidLoad()
        setupScene()
        
        setupParticles()
        
        setupCamera()
        setupGlobe()
        
        setupDotGeometry()
    }
    
    private func setupScene() {
        let scene = SCNScene()
        sceneView = SCNView(frame: view.frame)
        sceneView.scene = scene
        sceneView.showsStatistics = true
        sceneView.backgroundColor = .clear
        sceneView.allowsCameraControl = true
        sceneView.isUserInteractionEnabled = true
        self.view.addSubview(sceneView)
    }
        
    private func setupParticles() {
        guard let stars = SCNParticleSystem(named: "StarsParticles.scnp", inDirectory: nil) else { return }
        stars.isLightingEnabled = false
                
        if sceneView != nil {
            sceneView.scene?.rootNode.addParticleSystem(stars)
        }
    }
    
    private func setupCamera() {
        self.cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
        sceneView.scene?.rootNode.addChildNode(cameraNode)
    }

    private func setupGlobe() {
        self.earthNode = EarthNode(radius: earthRadius, earthColor: earthColor, earthGlow: glowColor, earthReflection: reflectionColor)
        sceneView.scene?.rootNode.addChildNode(earthNode)
    }

    private func setupDotGeometry() {
        let textureMap = generateTextureMap(dots: dotCount, sphereRadius: CGFloat(earthRadius))

        let newYork = CLLocationCoordinate2D(latitude: 44.0682, longitude: -121.3153)
        let newYorkDot = closestDotPosition(to: newYork, in: textureMap)

        let dotColor = GenericColor(white: 1, alpha: 1)
        let oceanColor = GenericColor(cgColor: UIColor.systemRed.cgColor)
        let highlightColor = GenericColor(cgColor: UIColor.systemRed.cgColor)
        
        // threshold to determine if the pixel in the earth-dark.jpg represents terrain (0.03 represents rgb(7.65,7.65,7.65), which is almost black)
        let threshold: CGFloat = 0.03
        
        let dotGeometry = SCNSphere(radius: dotRadius)
        dotGeometry.firstMaterial?.diffuse.contents = dotColor
        dotGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant
        
        let highlightGeometry = SCNSphere(radius: dotRadius)
        highlightGeometry.firstMaterial?.diffuse.contents = highlightColor
        highlightGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant
        
        let oceanGeometry = SCNSphere(radius: dotRadius)
        oceanGeometry.firstMaterial?.diffuse.contents = oceanColor
        oceanGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant
        
        var positions = [SCNVector3]()
        var dotNodes = [SCNNode]()
        
        var highlightedNode: SCNNode? = nil
        
        for i in 0..<textureMap.count {
            let u = textureMap[i].x
            let v = textureMap[i].y
            
            let pixelColor = self.getPixelColor(x: Int(u), y: Int(v))
            let isHighlight = u == newYorkDot.x && v == newYorkDot.y
            
            if (isHighlight) {
                let dotNode = SCNNode(geometry: highlightGeometry)
                dotNode.name = "NewYorkDot"
                dotNode.position = textureMap[i].position
                positions.append(dotNode.position)
                dotNodes.append(dotNode)
                
                print("myloc \(textureMap[i].position)")
                
                highlightedNode = dotNode
            } else if (pixelColor.red < threshold && pixelColor.green < threshold && pixelColor.blue < threshold) {
                let dotNode = SCNNode(geometry: dotGeometry)
                dotNode.name = "Other"
                dotNode.position = textureMap[i].position
                positions.append(dotNode.position)
                dotNodes.append(dotNode)
            }
        }
        
        DispatchQueue.main.async {
            let source = SCNGeometrySource(vertices: positions)
            let element = SCNGeometryElement(indices: [Int32](), primitiveType: .point)
            
            let pointCloud = SCNGeometry(sources: [source], elements: [element])
            
            let pointCloudNode = SCNNode(geometry: pointCloud)
            for dotNode in dotNodes {
                pointCloudNode.addChildNode(dotNode)
            }
     
            self.sceneView.scene?.rootNode.addChildNode(pointCloudNode)
            
            //performing gestures before this causes the bug
            DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
                if let highlightedNode = highlightedNode {
                    self.centerCameraOnDot(dotPosition: highlightedNode.position)
                }
            }
        }
    }
}
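The snippet above relies on SCNVector3 extensions that are not shown (normalized() and scaled(to:)). A minimal, framework-free sketch of the math they presumably implement; the tuple type and free functions here are stand-ins, not the real extensions:

```swift
import Foundation

// Framework-free stand-ins for the SCNVector3 helpers the snippet assumes.
// `normalized` returns a unit-length vector; `scaled` multiplies uniformly,
// so scaled(normalized(v), to: d) yields a vector of length d.
func normalized(_ v: (x: Float, y: Float, z: Float)) -> (x: Float, y: Float, z: Float) {
    let length = (v.x * v.x + v.y * v.y + v.z * v.z).squareRoot()
    guard length > 0 else { return (0, 0, 0) }
    return (v.x / length, v.y / length, v.z / length)
}

func scaled(_ v: (x: Float, y: Float, z: Float), to factor: Float) -> (x: Float, y: Float, z: Float) {
    return (v.x * factor, v.y * factor, v.z * factor)
}
```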

Answer

When you set sceneView.pointOfView, the camera's position and orientation immediately jump to the pointOfView node's transform, which causes the reset you're observing.

Try preserving the current camera transform: before setting sceneView.pointOfView = cameraNode, store the current point of view's transform. That includes its position, rotation, and any other properties relevant to your scene setup.
Then, after setting the point of view, reapply the stored transform to the camera. That should cancel the resetting effect and maintain the continuity of the scene as seen by the user.

Your centerCameraOnDot function would then be:

func centerCameraOnDot(dotPosition: SCNVector3) {
    // Carry over the current point of view's transform (e.g. the internal
    // node created by allowsCameraControl) so the globe does not snap back.
    if let currentTransform = sceneView.pointOfView?.transform {
        cameraNode.transform = currentTransform
    }
    sceneView.pointOfView = cameraNode
    
    let fixedDistance: Float = 5.0
    let newCameraPosition = dotPosition.normalized().scaled(to: fixedDistance)

    let moveAction = SCNAction.move(to: newCameraPosition, duration: 1.5)

    let constraint = SCNLookAtConstraint(target: earthNode)
    constraint.isGimbalLockEnabled = true

    sceneView.gestureRecognizers?.forEach { $0.isEnabled = false }
    
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 1.5

    self.cameraNode.constraints = [constraint]
    self.cameraNode.runAction(moveAction) {
        DispatchQueue.main.async {
            self.sceneView.gestureRecognizers?.forEach { $0.isEnabled = true }
        }
    }
    SCNTransaction.commit()
}

See whether that helps transition the camera to the new point of view without resetting the globe's position.


Alternative approach: update the camera node without changing pointOfView

Instead of manipulating sceneView's pointOfView property directly, you can update the cameraNode's position and orientation based on user interaction. That means intercepting user gestures and applying their transforms to the cameraNode manually. Here is an outline of how you could implement it:

1. Add custom gesture recognizers to the sceneView, or use SceneKit's default gesture handling, to detect user interactions.
2. When an interaction is detected, calculate the necessary transforms and apply them to the cameraNode. That keeps the cameraNode in sync with the user's perspective.
3. When moving the camera to a new position, animate the cameraNode's position and orientation directly instead of going through sceneView.pointOfView.

That could look something like:

override func viewDidLoad() {
    super.viewDidLoad()
    setupGestureRecognizers()
    // Other setup code 
}

private func setupGestureRecognizers() {
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(handlePanGesture(_:)))
    sceneView.addGestureRecognizer(panGesture)
    // Add other gestures as needed
}

@objc func handlePanGesture(_ gestureRecognizer: UIPanGestureRecognizer) {
    // Calculate the transformation based on the gesture
    // Apply the transformation to the cameraNode
    // Rest of the code
}

func centerCameraOnDot(dotPosition: SCNVector3) {
    // Directly animate cameraNode to new position
    // No need to alter sceneView.pointOfView
    // Rest of the code
}
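To make the handlePanGesture stub concrete: one common mapping converts the pan translation into yaw/pitch deltas for an orbiting camera. A minimal sketch, where panToRotation is a hypothetical helper and sensitivity is an assumed tuning constant, neither taken from the original code:

```swift
import Foundation

// Hypothetical helper: maps a pan translation (in points) to yaw/pitch
// deltas (in radians). The negative sign makes the globe follow the finger.
// `sensitivity` is an assumed tuning constant.
func panToRotation(translationX: Float, translationY: Float,
                   sensitivity: Float = 0.005) -> (yaw: Float, pitch: Float) {
    return (yaw: -translationX * sensitivity,
            pitch: -translationY * sensitivity)
}
```

Inside handlePanGesture you would read gestureRecognizer.translation(in: sceneView), feed it through a mapping like this, add the deltas to cameraNode.eulerAngles, and clamp the pitch so the camera cannot flip over the poles.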

This approach means handling camera transforms more manually, but it gives you greater control over how the camera responds to user interaction, and it avoids the globe-reset problem that comes with replacing pointOfView.

Also try adding logs that track the camera's position and orientation before and after user interactions and when animating to a new position; that can help identify unexpected changes. Use SceneKit's debugging tools, such as the statistics overlay or SCNView's debugOptions, to better understand the scene's state.
Test each part of your gesture handling and camera animation code separately to isolate the cause of the issue.


Implementing the pan handler from the gesture recognizer's output is fairly involved. For my use case I would rather not handle the translation manually.

Given the complexity of handling gestures manually, and the requirement not to change the camera's transform properties directly, you need approaches that work within those constraints.

Since manual gesture handling is complex, one option is to lean on SceneKit's default camera controls: configure SCNView's allowsCameraControl property to handle user interaction automatically. If you are already using it, look for ways to extend or customize its behavior to fit your needs.
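If you do lean on the built-in controls, SCNView's default SCNCameraController (iOS 11+) can be tuned rather than replaced. A sketch of one plausible configuration; the .orbitTurntable mode and target assignment are assumptions for a globe, not taken from the question:

```swift
// Somewhere after setupGlobe(), once earthNode exists:
sceneView.allowsCameraControl = true

let controller = sceneView.defaultCameraController
controller.interactionMode = .orbitTurntable  // orbit around a fixed target
controller.target = earthNode.position        // keep the globe centered
controller.inertiaEnabled = true              // smooth gesture follow-through
```

Note that once the user gestures, allowsCameraControl renders the scene through an internal camera node rather than your cameraNode, which matches the behavior described in the question; any programmatic animation then has to either drive that node or reclaim pointOfView as in the function above.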

Alternatively: SceneKit provides various camera constraints for controlling camera behavior. For example, SCNLookAtConstraint can keep the camera focused on a specific node (such as the globe) while still letting the user orbit around it. That can help keep the camera's behavior consistent after user interaction.

Alternatively: if the problem is mainly that user interaction overwrites the camera's state, consider saving the camera's state before the interaction and restoring it when needed. That means storing the camera's position, orientation, and other relevant properties, then reapplying them before starting the animation.
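The save-and-restore idea can be kept framework-free: capture the properties that matter into a value type before interaction, and reapply them before animating. A minimal sketch; the CameraState type and store are hypothetical, and in the real scene the fields would be filled from cameraNode.position and cameraNode.eulerAngles:

```swift
import Foundation

// Hypothetical snapshot of the camera properties we care about.
// In SceneKit these would come from cameraNode.position / eulerAngles.
struct CameraState: Equatable {
    var position: [Float]     // x, y, z
    var eulerAngles: [Float]  // pitch, yaw, roll
}

final class CameraStateStore {
    private var saved: CameraState?

    // Call when a user interaction begins.
    func save(_ state: CameraState) { saved = state }

    // Call before starting the camera animation; returns nil if nothing saved.
    func restore() -> CameraState? {
        defer { saved = nil }
        return saved
    }
}
```

On iOS this could be driven from the gesture recognizer's .began state (save) and from centerCameraOnDot (restore before animating).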

Alternatively: SCNTransaction and animation blocks can give you more control over the camera animation. You can begin an SCNTransaction, set its completion block to re-enable user interaction, and run the camera animation inside the transaction. That can help the camera transition smoothly without abrupt changes.

Note: there may be a timing issue where the camera animation is triggered before the scene has fully processed the user's last interaction. Introducing a slight delay before starting the camera animation can sometimes resolve such timing-related problems.

If the gesture recognizers might be interfering with the camera animation, inspecting their state before the animation starts can provide insight. A recognizer that is still active, or in an unexpected state, can affect the camera's behavior.

And, as mentioned above, adding generous logging around the camera control and animation code can help identify any unexpected behavior or state. Logging the camera's position, orientation, and related properties before and after user interactions and animations can provide clues.
