ios - How do I position a CALayer in a video?
Question
I have a UIView (size W: 375, H: 667) in which an image can be placed anywhere. Later, this image is overlaid on a video and saved. My problem is that when I play back the video, the image is not at the position I selected in my UIView, because my video's size is 720 x 1280. How can I map the selected image's position from the UIView into the video (720 x 1280)? Here is the code I am using:
private func watermark(video videoAsset: AVAsset, modelView: MyViewModel, watermarkText text: String!, imageName name: String!, saveToLibrary flag: Bool, watermarkPosition position: QUWatermarkPosition, completion: ((_ status: AVAssetExportSession.Status?, _ session: AVAssetExportSession?, _ outputURL: URL?) -> ())?) {
    DispatchQueue.global(qos: .default).async {
        let mixComposition = AVMutableComposition()
        let compositionVideoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        let clipVideoTrack: AVAssetTrack = videoAsset.tracks(withMediaType: .video)[0]
        do {
            try compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.duration), of: clipVideoTrack, at: .zero)
        } catch {
            print(error.localizedDescription)
        }

        let videoSize = self.resolutionSizeForLocalVideo(asset: clipVideoTrack)
        print("VIDEO SIZE W: \(videoSize.width) H: \(videoSize.height)")

        let parentLayer = CALayer()
        let videoLayer = CALayer()
        parentLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
        videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
        parentLayer.addSublayer(videoLayer)

        // My image layer
        let layerTest = CALayer()
        layerTest.frame = modelView.frame
        layerTest.contents = modelView.image.cgImage
        print("A: \(modelView.frame.origin.y) - \(modelView.frame.origin.x)")
        print("B: \(layerTest.frame.origin.y) - \(layerTest.frame.origin.x)")
        parentLayer.addSublayer(layerTest)
        print("PARENT: \(parentLayer.frame.origin.y) - \(parentLayer.frame.origin.x)")

        let videoComp = AVMutableVideoComposition()
        videoComp.renderSize = videoSize
        videoComp.frameDuration = CMTimeMake(value: 1, timescale: 30)
        videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)

        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(start: .zero, duration: mixComposition.duration)
        let layerInstruction = self.videoCompositionInstructionForTrack(track: compositionVideoTrack!, asset: videoAsset)
        layerInstruction.setTransform(clipVideoTrack.preferredTransform, at: .zero)
        instruction.layerInstructions = [layerInstruction]
        videoComp.instructions = [instruction]

        let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        let dateFormatter = DateFormatter()
        dateFormatter.dateStyle = .long
        dateFormatter.timeStyle = .short
        let date = dateFormatter.string(from: Date())
        let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("watermarkVideo-\(date).mp4")

        let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
        exporter?.outputURL = url
        exporter?.outputFileType = .mp4
        exporter?.shouldOptimizeForNetworkUse = true
        exporter?.videoComposition = videoComp

        exporter?.exportAsynchronously {
            DispatchQueue.main.async {
                if exporter?.status == .completed {
                    let outputURL = exporter?.outputURL
                    if flag {
                        // Save to the photo library
                        if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL!.path) {
                            PHPhotoLibrary.shared().performChanges({
                                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL!)
                            }) { saved, error in
                                if saved {
                                    completion!(.completed, exporter, outputURL)
                                }
                            }
                        }
                    } else {
                        completion!(.completed, exporter, outputURL)
                    }
                } else {
                    // Error
                    completion!(exporter?.status, exporter, nil)
                }
            }
        }
    }
}

private func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let assetTrack = asset.tracks(withMediaType: .video)[0]
    let scale = CGAffineTransform(scaleX: 1, y: 1)
    instruction.setTransform(assetTrack.preferredTransform.concatenating(scale), at: .zero)
    return instruction
}
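A side note on the code above: `layerTest.frame = modelView.frame` copies screen-point coordinates (375 x 667 space) straight into the 720 x 1280 render space, which is why the overlay ends up in the wrong place. The frame has to be scaled by the ratio between the two sizes. A minimal sketch of that scaling (pure CoreGraphics math; the function name and the sample frame are illustrative, not from the original code):

```swift
import Foundation

/// Scales a subview's frame from the on-screen container into video pixels.
/// The container and video sizes below are the ones from the question.
func scaledFrame(_ frame: CGRect, from container: CGSize, to video: CGSize) -> CGRect {
    let sx = video.width / container.width    // 720 / 375  = 1.92
    let sy = video.height / container.height  // 1280 / 667 ≈ 1.919
    return CGRect(x: frame.origin.x * sx,
                  y: frame.origin.y * sy,
                  width: frame.size.width * sx,
                  height: frame.size.height * sy)
}

// A 50 x 50 image at (100, 200) in the view becomes roughly 96 x 96 at (192, 384).
let videoFrame = scaledFrame(CGRect(x: 100, y: 200, width: 50, height: 50),
                             from: CGSize(width: 375, height: 667),
                             to: CGSize(width: 720, height: 1280))
```

Note this handles only the size mismatch; the vertical flip of the layer coordinate system (covered in the accepted solution below in the answer) still has to be applied separately.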
Solution
The answer to this question may help. I ran into a similar problem when trying to place user-generated text over a video. This is what worked for me:
First, I added a helper method to convert a CGPoint from one rect to another:
func convertPoint(point: CGPoint, fromRect: CGRect, toRect: CGRect) -> CGPoint {
    return CGPoint(x: (toRect.size.width / fromRect.size.width) * point.x,
                   y: (toRect.size.height / fromRect.size.height) * point.y)
}
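As a quick sanity check, plugging in the sizes from the question (a 375 x 667 view and a 720 x 1280 video), the exact center of the view should map to the exact center of the video. A standalone sketch using only Foundation's CGPoint/CGRect, so it runs without UIKit (the rect names are illustrative):

```swift
import Foundation

func convertPoint(point: CGPoint, fromRect: CGRect, toRect: CGRect) -> CGPoint {
    return CGPoint(x: (toRect.size.width / fromRect.size.width) * point.x,
                   y: (toRect.size.height / fromRect.size.height) * point.y)
}

let viewRect = CGRect(x: 0, y: 0, width: 375, height: 667)   // on-screen UIView
let videoRect = CGRect(x: 0, y: 0, width: 720, height: 1280) // video render size

// The view's center (187.5, 333.5) scales to the video's center (360, 640).
let adjusted = convertPoint(point: CGPoint(x: 187.5, y: 333.5),
                            fromRect: viewRect, toRect: videoRect)
```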
I positioned my text view (in your case, the image view) by its center point. Here is how the adjusted center point is calculated with the helper method:
let adjustedCenter = convertPoint(point: imageView.center, fromRect: view.frame, toRect: CGRect(x: 0, y: 0, width: 720.0, height: 1280.0))
After that I had to do some extra positioning, because the coordinate system of CALayers is flipped vertically, so this is what the final point looks like:
let finalCenter = CGPoint(x: adjustedCenter.x, y: (1280.0 - adjustedCenter.y) - (imageView.bounds.height / 2.0))
Then set your CALayer's position property to that point:
layerTest.position = finalCenter
Hope this helps!
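Putting both steps together, here is a minimal, UIKit-free sketch of the whole mapping, from a subview's center in the on-screen view to a CALayer position in the flipped video space. The view size (375 x 667), video size (720 x 1280), sample center, and the 100-point image height are assumptions for illustration, and `videoPosition` is a hypothetical helper name:

```swift
import Foundation

func convertPoint(point: CGPoint, fromRect: CGRect, toRect: CGRect) -> CGPoint {
    return CGPoint(x: (toRect.size.width / fromRect.size.width) * point.x,
                   y: (toRect.size.height / fromRect.size.height) * point.y)
}

/// Maps a view-space center point to a CALayer position in video space,
/// applying the Y-axis flip from the answer above. `imageHeight` is the
/// overlay view's height in screen points.
func videoPosition(forViewCenter center: CGPoint,
                   viewSize: CGSize, videoSize: CGSize,
                   imageHeight: CGFloat) -> CGPoint {
    let adjusted = convertPoint(point: center,
                                fromRect: CGRect(origin: .zero, size: viewSize),
                                toRect: CGRect(origin: .zero, size: videoSize))
    // CALayer coordinates in the composition are flipped vertically,
    // so the Y value is measured from the bottom of the video frame.
    return CGPoint(x: adjusted.x,
                   y: (videoSize.height - adjusted.y) - imageHeight / 2.0)
}

// The view's center scales to (360, 640), then flips to roughly (360, 590).
let finalCenter = videoPosition(forViewCenter: CGPoint(x: 187.5, y: 333.5),
                                viewSize: CGSize(width: 375, height: 667),
                                videoSize: CGSize(width: 720, height: 1280),
                                imageHeight: 100)
```

The result would then be assigned to the overlay layer, e.g. `layerTest.position = finalCenter`, before building the AVMutableVideoComposition.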