Problem

I have an .mp4 file and I want to add rounded corners to it, with the background behind the rounded corners made transparent. My current code has two problems:

  1. For some reason, the rounded-corner CALayer I apply turns the whole video black, even though its backgroundColor is set to clear. Adding a large borderWidth helps, but I wonder whether there is a better alternative, since this feels like a hack. Code link: https://github.com/Jasperav/VideoWithRoundedTransparantCorners/blob/fdc534a7a4541c58e04d48c1ed5c62fe9ad6e497/VideoWithRoundedTransparantCornersTests/VideoEditor.swift#L277
  2. The background behind the rounded corners is black, not transparent. How can I make it transparent?

Here is an example of the output video when borderWidth on the maskLayer CALayer is 50 (without a borderWidth the whole video turns black); there is a large black box that should not be there:

(screenshot: output video with a large black box in the middle)

Setting borderWidth to greatestFiniteMagnitude fixes the big black box, but it is ugly and I think I am abusing CALayer by doing that.

I also don't know how to make the edges transparent instead of black.

Reproduction

To make the problem very easy to reproduce, I created this repo with a single test. Running that test creates a .mov file with the problems described above. The code is also included below.

Repo (just run the test, inspect the logged output and play the resulting video): https://github.com/Jasperav/VideoWithRoundedTransparantCorners/tree/main/VideoWithRoundedTransparantCornersTests

The code (VideoEditor.swift):

import AVFoundation
import Foundation
import Photos
import AppKit
import QuartzCore
import OSLog

let logger = Logger()

class VideoEditor {
    func export(
        url: URL,
        outputDir: URL,
        size: CGSize
    ) async -> String? {
        do {
            let (asset, video) = try await resizeVideo(videoAsset: AVURLAsset(url: url), targetSize: size, isKeepAspectRatio: false, isCutBlackEdge: false)
            
            try await exportVideo(outputPath: outputDir, asset: asset, videoComposition: video)

            return nil
        } catch let error as YGCVideoError {
            switch error {
            case .videoFileNotFind:
                return NSLocalizedString("video_error_video_file_not_found", comment: "")
            case .videoTrackNotFind:
                return NSLocalizedString("video_error_no_video_track", comment: "")
            case .audioTrackNotFind:
                return NSLocalizedString("video_error_no_audio_track", comment: "")
            case .compositionTrackInitFailed:
                return NSLocalizedString("video_error_could_not_create_composition_track", comment: "")
            case .targetSizeNotCorrect:
                return NSLocalizedString("video_error_wrong_size", comment: "")
            case .timeSetNotCorrect:
                return NSLocalizedString("video_error_wrong_time", comment: "")
            case .noDir:
                return NSLocalizedString("video_error_no_dir", comment: "")
            case .noExportSession:
                return NSLocalizedString("video_error_no_export_session", comment: "")
            case .exporterError(let exporterError):
                return String.localizedStringWithFormat(NSLocalizedString("video_error_exporter_error", comment: ""), exporterError)
            }
        } catch {
            assertionFailure()

            return error.localizedDescription
        }
    }

    private enum YGCVideoError: Error {
        case videoFileNotFind
        case videoTrackNotFind
        case audioTrackNotFind
        case compositionTrackInitFailed
        case targetSizeNotCorrect
        case timeSetNotCorrect
        case noDir
        case noExportSession
        case exporterError(String)
    }

    private enum YGCTimeRange {
        case naturalRange
        case secondsRange(Double, Double)
        case cmtimeRange(CMTime, CMTime)

        func validateTime(videoTime: CMTime) -> Bool {
            switch self {
            case .naturalRange:
                return true
            case let .secondsRange(begin, end):
                let seconds = CMTimeGetSeconds(videoTime)
                if end > begin, begin >= 0, end < seconds {
                    return true
                } else {
                    return false
                }
            case let .cmtimeRange(_, end):
                if CMTimeCompare(end, videoTime) == 1 {
                    return false
                } else {
                    return true
                }
            }
        }
    }

    private enum Way {
        case right, left, up, down
    }

    private func orientationFromTransform(transform: CGAffineTransform) -> (orientation: Way, isPortrait: Bool) {
        var assetOrientation = Way.up
        var isPortrait = false

        if transform.a == 0, transform.b == 1.0, transform.c == -1.0, transform.d == 0 {
            assetOrientation = .right
            isPortrait = true
        } else if transform.a == 0, transform.b == -1.0, transform.c == 1.0, transform.d == 0 {
            assetOrientation = .left
            isPortrait = true
        } else if transform.a == 1.0, transform.b == 0, transform.c == 0, transform.d == 1.0 {
            assetOrientation = .up
        } else if transform.a == -1.0, transform.b == 0, transform.c == 0, transform.d == -1.0 {
            assetOrientation = .down
        }

        return (assetOrientation, isPortrait)
    }

    private func videoCompositionInstructionForTrack(track: AVCompositionTrack, videoTrack: AVAssetTrack, targetSize: CGSize) async throws -> AVMutableVideoCompositionLayerInstruction {
        let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
        let transform = try await videoTrack.load(.preferredTransform)
        let naturalSize = try await videoTrack.load(.naturalSize)
        let assetInfo = orientationFromTransform(transform: transform)
        var scaleToFitRatio = targetSize.width / naturalSize.width

        if assetInfo.isPortrait {
            scaleToFitRatio = targetSize.width / naturalSize.height

            let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)

            instruction.setTransform(transform.concatenating(scaleFactor), at: CMTime.zero)
        } else {
            let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)

            var concat = transform.concatenating(scaleFactor).concatenating(CGAffineTransform(translationX: 0, y: targetSize.width / 2))

            if assetInfo.orientation == .down {
                let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat.pi)
                let yFix = naturalSize.height + targetSize.height
                let centerFix = CGAffineTransform(translationX: naturalSize.width, y: yFix)

                concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
            }

            instruction.setTransform(concat, at: CMTime.zero)
        }

        return instruction
    }

    private func exportVideo(outputPath: URL, asset: AVAsset, videoComposition: AVMutableVideoComposition?) async throws {
        let fileExists = FileManager.default.fileExists(atPath: outputPath.path())

        logger.debug("Output dir: \(outputPath), exists: \(fileExists)")

        if fileExists {
            do {
                try FileManager.default.removeItem(atPath: outputPath.path())
            } catch {
                logger.error("remove file failed")
            }
        }

        let dir = outputPath.deletingLastPathComponent().path()

        logger.debug("Will try to create dir: \(dir)")

        try? FileManager.default.createDirectory(atPath: dir, withIntermediateDirectories: true)

        var isDirectory = ObjCBool(false)

        guard FileManager.default.fileExists(atPath: dir, isDirectory: &isDirectory), isDirectory.boolValue else {
            logger.error("Could not create dir, or dir is a file")

            throw YGCVideoError.noDir
        }

        guard let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) else {
            logger.error("generate export failed")

            throw YGCVideoError.noExportSession
        }

        exporter.outputURL = outputPath
        exporter.outputFileType = .mov
        exporter.shouldOptimizeForNetworkUse = false

        if let composition = videoComposition {
            exporter.videoComposition = composition
        }

        await exporter.export()

        logger.debug("Status: \(String(describing: exporter.status)), error: \(String(describing: exporter.error))")

        if exporter.status != .completed {
            throw YGCVideoError.exporterError(exporter.error?.localizedDescription ?? "NO SPECIFIC ERROR")
        }
    }

    private func resizeVideo(videoAsset: AVURLAsset,
                             targetSize: CGSize,
                             isKeepAspectRatio: Bool,
                             isCutBlackEdge: Bool) async throws -> (AVMutableComposition, AVMutableVideoComposition)
    {
        guard let videoTrack = try await videoAsset.loadTracks(withMediaType: .video).first else {
            throw YGCVideoError.videoTrackNotFind
        }


        guard let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first else {
            throw YGCVideoError.audioTrackNotFind
        }

        let resizeComposition = AVMutableComposition(urlAssetInitializationOptions: nil)

        guard let compositionVideoTrack = resizeComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: videoTrack.trackID) else {
            throw YGCVideoError.compositionTrackInitFailed
        }
        guard let compostiionAudioTrack = resizeComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: audioTrack.trackID) else {
            throw YGCVideoError.compositionTrackInitFailed
        }

        let duration = try await videoAsset.load(.duration)

        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: duration), of: videoTrack, at: CMTime.zero)
        try compostiionAudioTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: duration), of: audioTrack, at: CMTime.zero)

        let originTransform = try await videoTrack.load(.preferredTransform)
        let info = orientationFromTransform(transform: originTransform)
        let naturalSize = try await videoTrack.load(.naturalSize)
        let videoNaturaSize: CGSize = if info.isPortrait, info.orientation != .up {
            CGSize(width: naturalSize.height, height: naturalSize.width)
        } else {
            naturalSize
        }

        if videoNaturaSize.width < targetSize.width, videoNaturaSize.height < targetSize.height {
            throw YGCVideoError.targetSizeNotCorrect
        }

        let fitRect: CGRect = if isKeepAspectRatio {
            AVMakeRect(aspectRatio: videoNaturaSize, insideRect: CGRect(origin: CGPoint.zero, size: targetSize))
        } else {
            CGRect(origin: CGPoint.zero, size: targetSize)
        }

        let mainInstruction = AVMutableVideoCompositionInstruction()

        mainInstruction.timeRange = CMTimeRange(start: CMTime.zero, end: duration)

        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)

        let finalTransform: CGAffineTransform = if info.isPortrait {
            if isCutBlackEdge {
                originTransform.concatenating(CGAffineTransform(scaleX: fitRect.width / videoNaturaSize.width, y: fitRect.height / videoNaturaSize.height))
            } else {
                originTransform.concatenating(CGAffineTransform(scaleX: fitRect.width / videoNaturaSize.width, y: fitRect.height / videoNaturaSize.height)).concatenating(CGAffineTransform(translationX: fitRect.minX, y: fitRect.minY))
            }

        } else {
            if isCutBlackEdge {
                originTransform.concatenating(CGAffineTransform(scaleX: fitRect.width / videoNaturaSize.width, y: fitRect.height / videoNaturaSize.height))
            } else {
                originTransform.concatenating(CGAffineTransform(scaleX: fitRect.width / videoNaturaSize.width, y: fitRect.height / videoNaturaSize.height)).concatenating(CGAffineTransform(translationX: fitRect.minX, y: fitRect.minY))
            }
        }
        layerInstruction.setTransform(finalTransform, at: CMTime.zero)
        mainInstruction.layerInstructions = [layerInstruction]

        let videoComposition = AVMutableVideoComposition()
        
        videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
        
        let videoLayer = CALayer()
        
        videoLayer.frame = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
        videoLayer.backgroundColor = .clear
        
        let maskLayer = CALayer()
        
        maskLayer.frame = videoLayer.bounds
        maskLayer.cornerRadius = 100
        maskLayer.masksToBounds = true
        maskLayer.borderWidth = 50
        maskLayer.backgroundColor = .clear

        videoLayer.mask = maskLayer
        
        videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: videoLayer)
        
        if isCutBlackEdge, isKeepAspectRatio {
            videoComposition.renderSize = fitRect.size
        } else {
            videoComposition.renderSize = targetSize
        }

        videoComposition.instructions = [mainInstruction]

        return (resizeComposition, videoComposition)
    }
}

Question

  • How can I apply rounded corners to a video and make the 'background'/edges around the corners transparent?
  • Why do I currently get a big black box in the middle that only disappears when I set a huge borderWidth?

What I want

This is the final output I want, except that the black edges (marked with the big red arrows) should be transparent.

(screenshot: desired output, with red arrows pointing at the black edges)

Update

When I use a CAShapeLayer, I only see its shape, and I'm struggling to 'cut away' the edges to make them transparent. I tried adding sublayers, but I couldn't get it to work.

This is the output (you can see the black path):

(screenshot: output video with the black stroked path visible)

Based on that, the CAShapeLayer setup:

let shapeLayer = CAShapeLayer()

shapeLayer.frame = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
shapeLayer.path = NSBezierPath(roundedRect: CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height), xRadius: cornerRadius, yRadius: cornerRadius).cgPath
shapeLayer.fillColor = .clear
shapeLayer.strokeColor = .black

let parentLayer = CALayer()
let videoLayer = CALayer()

parentLayer.frame = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
videoLayer.frame = parentLayer.bounds
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(shapeLayer)

let animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

extract.videoComposition.animationTool = animationTool

How can I cut the edges away so they are transparent?
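For reference, one way to make a CAShapeLayer actually cut the corners, rather than draw a visible path, is to assign it as the video layer's mask and fill the rounded rect with an opaque color. This is only a sketch against the names in the snippet above; `targetSize` and `cornerRadius` are assumed placeholder values:

```swift
import AppKit
import QuartzCore

// Placeholder values standing in for the ones used in the snippet above.
let targetSize = CGSize(width: 1280, height: 720)
let cornerRadius: CGFloat = 100

let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRect(origin: .zero, size: targetSize)
videoLayer.frame = parentLayer.bounds
parentLayer.addSublayer(videoLayer)

// A mask is evaluated by its alpha channel: opaque pixels keep the
// content, transparent pixels cut it away. So fill the rounded rect
// with an opaque color instead of stroking its outline.
let maskShape = CAShapeLayer()
maskShape.frame = videoLayer.bounds
maskShape.path = CGPath(roundedRect: videoLayer.bounds,
                        cornerWidth: cornerRadius,
                        cornerHeight: cornerRadius,
                        transform: nil)
maskShape.fillColor = NSColor.white.cgColor // any opaque color works

videoLayer.mask = maskShape
```

Note that the cut-away area only stays transparent in the exported file if the export codec actually carries an alpha channel; otherwise it is rendered black.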

Answer

A couple of issues...

First, you are using the layer mask incorrectly. Get rid of the border, and set the mask layer's background color to any opaque color, e.g. white.

Second, you need to use AVAssetExportPresetHEVCHighestQualityWithAlpha as the AVAssetExportSession preset (instead of AVAssetExportPresetHighestQuality).

Here are the changes I made to your VideoEditor class...


In exportVideo(outputPath:asset:videoComposition:):

    //guard let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality) else {
    //    logger.error("generate export failed")
    //
    //    throw YGCVideoError.noExportSession
    //}

    // need to use AVAssetExportPresetHEVCHighestQualityWithAlpha preset
    guard let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHEVCHighestQualityWithAlpha) else {
        logger.error("generate export failed")
        
        throw YGCVideoError.noExportSession
    }
    

In resizeVideo(videoAsset:targetSize:isKeepAspectRatio:isCutBlackEdge:):

    // get rid of border
    //maskLayer.borderWidth = 50
    
    // do not use .clear ... use any opaque color
    maskLayer.backgroundColor = .white // .clear
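For clarity, this is roughly what the mask setup looks like with both of those edits applied; a sketch using the same variable names as the question, with `targetSize` as a placeholder:

```swift
import AppKit
import QuartzCore

let targetSize = CGSize(width: 1280, height: 720) // placeholder size

let videoLayer = CALayer()
videoLayer.frame = CGRect(origin: .zero, size: targetSize)

let maskLayer = CALayer()
maskLayer.frame = videoLayer.bounds
maskLayer.cornerRadius = 100
maskLayer.masksToBounds = true
// The mask's alpha decides what survives: opaque inside the rounded
// rect keeps the video, and the clipped corners become transparent.
maskLayer.backgroundColor = NSColor.white.cgColor

videoLayer.mask = maskLayer
```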

When I load and play the output video over a view with a yellow background (in a quick iOS test app), I get this:

(screenshot: rounded-corner video over a yellow background, with transparent corners)
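If you need to guard against older OS versions, you can also check up front that the alpha-capable preset exists and is compatible with the asset. A sketch, not part of the original answer; the input path is hypothetical:

```swift
import AVFoundation

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/input.mp4")) // hypothetical input
let preset = AVAssetExportPresetHEVCHighestQualityWithAlpha

// HEVC with alpha has to be written into a QuickTime container (.mov).
if AVAssetExportSession.allExportPresets().contains(preset) {
    AVAssetExportSession.determineCompatibility(ofExportPreset: preset,
                                                with: asset,
                                                outputFileType: .mov) { compatible in
        print("HEVC-with-alpha export possible: \(compatible)")
    }
} else {
    print("Preset not available on this OS version")
}
```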
