I'm using the front-facing TrueDepth camera combined with Vision to recognize points in an image and take some measurements. I understand that Vision coordinates are normalized, so I convert the normalized Vision points to CGPoints corresponding to the view, then match those points against the depthData in dataOutputSynchronizer to get a z value. Finally, using the camera intrinsics, I try to compute the distance between two points in 3D space.
I have successfully found the points and (I believe) converted them to screen points. My thinking here is that these CGPoints should be no different than if I had tapped them on the screen.
My issue is that even though the converted CGPoints stay broadly similar (my hand does move a bit during testing, but it mostly stays in a plane facing the camera), and I compute the depth location the same way for both, the depths can differ wildly, especially for the second point. Depth point 2 seems more plausible as a distance (my hand is about a foot from the camera), but it varies a lot and is still inaccurate.
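For reference, the 3D distance step itself isn't in the code below; this is a sketch of the standard pinhole deprojection I'm attempting (function names here are just for illustration, and it assumes the pixel coordinates and the intrinsics are expressed at the same resolution):

```swift
import simd
import CoreGraphics

// Deproject a depth-map pixel to a camera-space 3D point using the pinhole model:
//   X = (x - ox) * z / fx,  Y = (y - oy) * z / fy,  Z = z
// simd_float3x3 is column-major, so fx = intrinsics[0][0], fy = intrinsics[1][1],
// and the principal point is (intrinsics[2][0], intrinsics[2][1]).
func deproject(_ pixel: CGPoint, depth: Float, intrinsics: simd_float3x3) -> simd_float3 {
    let fx = intrinsics[0][0]
    let fy = intrinsics[1][1]
    let ox = intrinsics[2][0]
    let oy = intrinsics[2][1]
    let x = (Float(pixel.x) - ox) * depth / fx
    let y = (Float(pixel.y) - oy) * depth / fy
    return simd_float3(x, y, depth)
}

// Distance between two deprojected points, in the depth map's units (meters).
func distance3D(_ p1: simd_float3, _ p2: simd_float3) -> Float {
    simd_distance(p1, p2)
}
```

Note that the intrinsics from AVCameraCalibrationData are valid at intrinsicMatrixReferenceDimensions, so they would presumably need to be rescaled to the depth map's resolution before being used this way.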
Here is a console print with the relevant data:
there are 2 points found
recognized points
[(499.08930909633636, 634.0807711283367), (543.7462849617004, 1061.8824380238852)]
DEPTH POINT 1 = 3.6312041
DEPTH POINT 2 = 0.2998223
there are 2 points found
recognized points
[(498.33644700050354, 681.3769372304281), (602.3667773008347, 1130.4955183664956)]
DEPTH POINT 1 = 3.6276162
DEPTH POINT 2 = 0.560331
Here is the relevant code.
dataOutputSynchronizer:
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {

    var handPoints: [CGPoint] = []

    // Read all outputs; only work on synced pairs
    guard renderingEnabled,
          let syncedDepthData: AVCaptureSynchronizedDepthData =
            synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData: AVCaptureSynchronizedSampleBufferData =
            synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }

    if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped {
        return
    }

    let depthPixelBuffer = syncedDepthData.depthData.depthDataMap
    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(syncedVideoData.sampleBuffer) else {
        return
    }

    // Get the camera intrinsics
    guard let cameraIntrinsics = syncedDepthData.depthData.cameraCalibrationData?.intrinsicMatrix else {
        return
    }

    let image = CIImage(cvPixelBuffer: videoPixelBuffer)

    let handler = VNImageRequestHandler(
        cmSampleBuffer: syncedVideoData.sampleBuffer,
        orientation: .up,
        options: [:]
    )

    do {
        try handler.perform([handPoseRequest])
        guard let results = handPoseRequest.results?.prefix(2),
              !results.isEmpty else {
            return
        }

        var recognizedPoints: [VNRecognizedPoint] = []
        try results.forEach { observation in
            let fingers = try observation.recognizedPoints(.all)
            if let middleTipPoint = fingers[.middleDIP] {
                recognizedPoints.append(middleTipPoint)
            }
            if let wristPoint = fingers[.wrist] {
                recognizedPoints.append(wristPoint)
            }
        }

        // Keep only confident points, flipping Y so the origin is top-left
        handPoints = recognizedPoints
            .filter { $0.confidence > 0.90 }
            .map { CGPoint(x: $0.location.x, y: 1 - $0.location.y) }

        // Process the points found
        DispatchQueue.main.sync {
            self.processPoints(handPoints, depthPixelBuffer, videoPixelBuffer, cameraIntrinsics)
        }
    } catch {
        // Be more graceful here
    }
}
processPoints:
func processPoints(_ handPoints: [CGPoint], _ depthPixelBuffer: CVImageBuffer,
                   _ videoPixelBuffer: CVImageBuffer, _ cameraIntrinsics: simd_float3x3) {

    // Convert the normalized points to screen points.
    // cameraView.previewLayer is an AVCaptureVideoPreviewLayer inside a UIView
    let convertedPoints = handPoints.map {
        cameraView.previewLayer.layerPointConverted(fromCaptureDevicePoint: $0)
    }

    // We need 2 hand points to get the distance
    guard handPoints.count == 2 else { return }

    print("there are 2 points found")
    print("recognized points")
    print(convertedPoints)

    let handVisionPoint1 = convertedPoints[0]
    let handVisionPoint2 = convertedPoints[1]

    // Scale the layer points into depth-map pixel coordinates
    let scaleFactor = CGFloat(CVPixelBufferGetWidth(depthPixelBuffer)) / CGFloat(CVPixelBufferGetWidth(videoPixelBuffer))

    let handVisionPixel1X = Int((handVisionPoint1.x * scaleFactor).rounded())
    let handVisionPixel1Y = Int((handVisionPoint1.y * scaleFactor).rounded())
    let handVisionPixel2X = Int((handVisionPoint2.x * scaleFactor).rounded())
    let handVisionPixel2Y = Int((handVisionPoint2.y * scaleFactor).rounded())

    CVPixelBufferLockBaseAddress(depthPixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthPixelBuffer, .readOnly) }

    let baseAddress = CVPixelBufferGetBaseAddress(depthPixelBuffer)!
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthPixelBuffer)

    let rowDataPoint1 = baseAddress + handVisionPixel1Y * bytesPerRow
    let handVisionPoint1Depth = rowDataPoint1.assumingMemoryBound(to: Float32.self)[handVisionPixel1X]
    print("DEPTH POINT 1 = ", handVisionPoint1Depth)

    let rowDataPoint2 = baseAddress + handVisionPixel2Y * bytesPerRow
    let handVisionPoint2Depth = rowDataPoint2.assumingMemoryBound(to: Float32.self)[handVisionPixel2X]
    print("DEPTH POINT 2 = ", handVisionPoint2Depth)
}
In my head, I now think my logic for finding the correct pixel in the depth map is incorrect. If that isn't it, then I wonder whether the data streams are out of sync. Honestly, I'm just a bit lost at this point. Thanks in advance for any help!
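To clarify what I mean by "finding the correct pixel": the alternative I'm considering is indexing the depth map directly from the normalized Vision point instead of going through the preview layer at all. An untested sketch:

```swift
import CoreVideo
import CoreGraphics

// Sample a Float32 depth map at a Vision-normalized point (Y already flipped
// to a top-left origin, as in handPoints above). This skips the preview-layer
// conversion entirely, which is where I suspect my current mismatch is.
func depthValue(at normalizedPoint: CGPoint, in depthPixelBuffer: CVPixelBuffer) -> Float32 {
    let width = CVPixelBufferGetWidth(depthPixelBuffer)
    let height = CVPixelBufferGetHeight(depthPixelBuffer)

    // Map the normalized coordinates onto the depth map and clamp to bounds
    let col = min(max(Int(normalizedPoint.x * CGFloat(width)), 0), width - 1)
    let row = min(max(Int(normalizedPoint.y * CGFloat(height)), 0), height - 1)

    CVPixelBufferLockBaseAddress(depthPixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthPixelBuffer, .readOnly) }

    let rowData = CVPixelBufferGetBaseAddress(depthPixelBuffer)! + row * CVPixelBufferGetBytesPerRow(depthPixelBuffer)
    return rowData.assumingMemoryBound(to: Float32.self)[col]
}
```

Is something along these lines the right approach, or is the layer-point route salvageable?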