New Edit Below
I have already referenced AVMutableComposition - Only Playing First Track (Swift), but it does not provide the answer to what I am looking for.
I have an AVMutableComposition(). I am trying to apply MULTIPLE AVCompositionTracks, of a single type AVMediaTypeVideo, in this single composition. This is because I am using 2 different AVMediaTypeVideo sources with different CGSizes and preferredTransforms, taken from the AVAssets they come from.
So, the only way to apply their specified preferredTransforms is to provide them in 2 different tracks. But, for whatever reason, only the first track actually provides any video; it is almost as if the second track is never there.
So, I have tried:

1) Using AVMutableVideoCompositionLayerInstructions and applying an AVVideoComposition along with an AVAssetExportSession. This works okay; I am still working on the transforms, but it is doable. The problem is that the processing times of the videos are WELL OVER 1 minute, which is just inapplicable in my situation.

2) Using multiple tracks without an AVAssetExportSession, in which case the 2nd track of the same type never appears. Now, I could put it all on 1 track, but then all the videos would take on the size and preferredTransform of the first video, which I absolutely do not want, as it stretches them on all sides.
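A minimal sketch of approach 1), in current Swift syntax. The 30-second duration, render size, and opacity switch times are placeholder values, and the insertTimeRange calls for the real source assets are omitted; the point is only that each track gets its own AVMutableVideoCompositionLayerInstruction, which is how a per-track transform is normally applied:

```swift
import AVFoundation

// Two video tracks in one composition, each with its own layer instruction.
let composition = AVMutableComposition()
let firstTrack = composition.addMutableTrack(withMediaType: .video,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)!
let secondTrack = composition.addMutableTrack(withMediaType: .video,
                                              preferredTrackID: kCMPersistentTrackID_Invalid)!

// (insertTimeRange calls for the source assets would go here.)

// One instruction spanning the whole timeline, with one layer instruction per track.
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero,
                                    duration: CMTime(seconds: 30, preferredTimescale: 600))

let firstLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
firstLayer.setTransform(firstTrack.preferredTransform, at: .zero)
// Hide the first track while the second should be visible (placeholder times),
// otherwise the compositor keeps rendering the first track on top.
firstLayer.setOpacity(0.0, at: CMTime(seconds: 10, preferredTimescale: 600))
firstLayer.setOpacity(1.0, at: CMTime(seconds: 20, preferredTimescale: 600))

let secondLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
secondLayer.setTransform(secondTrack.preferredTransform, at: .zero)

instruction.layerInstructions = [secondLayer, firstLayer]

let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = [instruction]
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // placeholder frame rate
videoComposition.renderSize = CGSize(width: 1280, height: 720)   // placeholder size
```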
So my question is: is it possible to

1) Apply instructions to just one track WITHOUT using AVAssetExportSession? (Preferred way BY FAR.)

2) Decrease the export time? (I have tried using AVAssetExportPresetPassthrough, but you cannot use that if you set an exporter.videoComposition, which is where my instructions are. This is the only place I know I can put instructions; I am not sure if I can place them somewhere else.)
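On question 1): as far as I can tell, an AVVideoComposition is not tied to AVAssetExportSession. AVPlayerItem also has a videoComposition property, so for playback the same instructions can be applied in real time without an export pass. A minimal sketch, assuming the composition and video composition are built elsewhere as above:

```swift
import AVFoundation

// Attach a video composition to a player item instead of an export session,
// so the transforms are applied during playback rather than during an export.
func playerItem(for composition: AVComposition,
                applying videoComposition: AVVideoComposition) -> AVPlayerItem {
    let item = AVPlayerItem(asset: composition)
    item.videoComposition = videoComposition // applied in real time, no export pass
    return item
}
```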
Here is some of my code (without the exporter, as I don't need to export anything anywhere; I just do things after the AVMutableComposition combines the items):
func merge() {
    if let firstAsset = controller.firstAsset, secondAsset = self.asset {
        let mixComposition = AVMutableComposition()
        let firstTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
                                                                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            //Not needed now, since the first 14 seconds cannot be edited.
            if CMTimeGetSeconds(startTime) == 0 {
                self.startTime = CMTime(seconds: 1/600, preferredTimescale: Int32(600))
            }
            try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)),
                                           ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                           atTime: kCMTimeZero)
        } catch _ {
            print("Failed to load first track")
        }

        //This secondTrack never appears; no matter what is inside of here, it is like
        //blank space in the video from startTime to endTime (the time range of secondTrack).
        let secondTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
                                                                      preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        // secondTrack.preferredTransform = self.asset.preferredTransform
        do {
            try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration),
                                            ofTrack: secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                            atTime: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600))
        } catch _ {
            print("Failed to load second track")
        }

        //This part appears again at endTime, which is right after the 2nd track is supposed to end.
        do {
            try firstTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600), firstAsset.duration - endTime),
                                           ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                           atTime: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600))
        } catch _ {
            print("Failed to load second segment of first track")
        }

        if let loadedAudioAsset = controller.audioAsset {
            let audioTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: 0)
            do {
                try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration),
                                               ofTrack: loadedAudioAsset.tracksWithMediaType(AVMediaTypeAudio)[0],
                                               atTime: kCMTimeZero)
            } catch _ {
                print("Failed to load audio track")
            }
        }
    }
}
Edit
Apple states that "Indicates instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction's end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated)."
This just states that the entire composition must be covered by instructions if you decide to use ANY instructions (that is how I understand it). Why is this? How would I apply instructions to, say, track 2 in this example without changing track 1 or 3 at all:
Track 1 from 0 - 10sec, Track 2 from 10 - 20sec, Track 3 from 20 - 30sec.
Any explanation on that would probably answer my question (if it is doable).
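To make the tiling requirement concrete, here is a sketch in current Swift syntax (the track variables are placeholders for the three composition tracks in the example above): three instructions cover 0-30sec back to back, but only the middle one actually applies a transform; the outer two simply pass their track through unchanged:

```swift
import AVFoundation

// Three instructions tiling the timeline; only the middle one changes anything.
func tiledInstructions(track1: AVAssetTrack, track2: AVAssetTrack, track3: AVAssetTrack,
                       transformForTrack2: CGAffineTransform) -> [AVMutableVideoCompositionInstruction] {
    let ts: Int32 = 600
    func range(_ from: Double, _ to: Double) -> CMTimeRange {
        CMTimeRange(start: CMTime(seconds: from, preferredTimescale: ts),
                    end: CMTime(seconds: to, preferredTimescale: ts))
    }

    // 0-10sec: pass track 1 through untouched.
    let i1 = AVMutableVideoCompositionInstruction()
    i1.timeRange = range(0, 10)
    i1.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track1)]

    // 10-20sec: the only instruction that applies a transform.
    let i2 = AVMutableVideoCompositionInstruction()
    i2.timeRange = range(10, 20)
    let l2 = AVMutableVideoCompositionLayerInstruction(assetTrack: track2)
    l2.setTransform(transformForTrack2, at: CMTime(seconds: 10, preferredTimescale: ts))
    i2.layerInstructions = [l2]

    // 20-30sec: pass track 3 through untouched.
    let i3 = AVMutableVideoCompositionInstruction()
    i3.timeRange = range(20, 30)
    i3.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track3)]

    // Each instruction's start equals the previous one's end, as the docs require.
    return [i1, i2, i3]
}
```

The key point is that "not touching" tracks 1 and 3 still requires instructions for their time ranges; a layer instruction with no transform set is effectively a pass-through.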
Comments:

"… an AVMutableComposition(), then it doesn't even work. The code above just cuts out the 2nd track, as if it is not allowed to have 2 AVMediaTypeVideo tracks, make sense? In the code above I am not performing any transforms." – Wordsworth

"… preferredTransforms, but the 2nd track is never showing. So, I can't use different preferredTransforms because the 2nd track never shows. Now, I can use AVAssetExportSession (I think, still working on it), but it takes about 60 seconds to merge everything." – Wordsworth