How to Play, Record and Merge Videos in iOS and Swift
Learn the basics of working with videos on iOS with AV Foundation in this tutorial. You’ll play, record and even do some light video editing!
Version
- Swift 5, iOS 13, Xcode 11

Recording videos and playing around with them programmatically is one of the coolest things you can do with your phone. However, not nearly enough apps offer this ability, which you can easily add using the AV Foundation framework.
AV Foundation has been a part of macOS since OS X Lion (10.7) and iOS since iOS 4 in 2010. It’s grown considerably since then, with well over 100 classes to date.
This tutorial gets you started with AV Foundation by covering media playback and some light editing. In particular, you’ll learn how to:
- Select and play a video from the media library.
- Record and save a video to the media library.
- Merge multiple clips into a single, combined video complete with a custom soundtrack.
Avoid running the code in this tutorial on the simulator because you’ll have no way to capture video. Plus, you’ll need to figure out a way to add videos to the media library manually. In other words, you really need to test this code on a device!
To do that, you’ll need to be a registered Apple developer. A free account will work just fine for this tutorial.
Ready? Lights, cameras, action!
Getting Started
Download the project materials by clicking the Download Materials button at the top or bottom of the tutorial.
Open the starter project and look around. This project contains a storyboard and several view controllers with the UI for a simple video playback and recording app.
The main screen contains the three buttons below, which segue to other view controllers:
- Select and Play Video
- Record and Save Video
- Merge Video
Build and run and test the buttons. Only the three buttons in the initial scene do anything, but you’ll change that soon!
Selecting and Playing Video
The Select and Play Video button on the main screen segues to PlayVideoViewController. In this section of the tutorial, you'll add the code to select a video file and play it.
Start by opening PlayVideoViewController.swift and add the following import statements at the top of the file:
import AVKit
import MobileCoreServices
Importing AVKit gives you access to AVPlayer, which plays the selected video. MobileCoreServices contains predefined constants such as kUTTypeMovie, which you'll need later on.
Next, add the following class extensions at the end of the file. Make sure you add these to the very bottom of the file, outside the curly braces of the class declaration:
// MARK: - UIImagePickerControllerDelegate
extension PlayVideoViewController: UIImagePickerControllerDelegate {
}
// MARK: - UINavigationControllerDelegate
extension PlayVideoViewController: UINavigationControllerDelegate {
}
These extensions set up PlayVideoViewController to adopt the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols.
You'll use the system-provided UIImagePickerController to let the user browse videos in the photo library. That class communicates back to your app through these delegate protocols. Although the class is named "image picker", rest assured it works with videos too!
Next, head back to PlayVideoViewController's main class definition and add the following code to playVideo(_:):
VideoHelper.startMediaBrowser(delegate: self, sourceType: .savedPhotosAlbum)
This is a call to a helper method called startMediaBrowser(delegate:sourceType:) from VideoHelper. This call opens the image picker, setting the delegate to self. The source type of .savedPhotosAlbum tells the picker to browse the camera roll. Later, you'll add helper tools of your own in VideoHelper.
To see what’s under the hood of this method, open VideoHelper.swift. It does the following:
- Checks if the source is available on the device. Sources include the camera roll, the camera itself and the full photo library. This check is essential whenever you use
UIImagePickerController
to pick media. If you don’t do it, you might try to pick media from a non-existent source, which will usually result in a crash. - If the source you want is available, it creates a
UIImagePickerController
and sets its source and media type. Since you only want to select videos, the code restricts the type tokUTTypeMovie
. - Finally, it presents
UIImagePickerController
modally.
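Here's a minimal sketch of what such a helper might look like. It's an approximation for illustration only; the starter project's implementation is the source of truth, but the three steps above map onto it directly:
import MobileCoreServices
import UIKit

enum VideoHelper {
  static func startMediaBrowser(
    delegate: UIViewController & UINavigationControllerDelegate & UIImagePickerControllerDelegate,
    sourceType: UIImagePickerController.SourceType
  ) {
    // 1. Bail out if the requested source isn't available on this device.
    guard UIImagePickerController.isSourceTypeAvailable(sourceType) else { return }
    // 2. Configure the picker to show only movies from the chosen source.
    let mediaUI = UIImagePickerController()
    mediaUI.sourceType = sourceType
    mediaUI.mediaTypes = [kUTTypeMovie as String]
    mediaUI.allowsEditing = true
    mediaUI.delegate = delegate
    // 3. Present the picker modally.
    delegate.present(mediaUI, animated: true, completion: nil)
  }
}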
Now, you’re ready to give your project another whirl! Build and run. Tap Select and Play Video on the first screen, then tap Play Video on the second screen. The camera roll will pop up like this:
Once you see the list of videos, select one. You’ll proceed to another screen that shows the video in detail, along with buttons to Cancel, Play and Choose. Tap the Play button and, unsurprisingly, the video will play.
If you tap the Choose button, however, the app just returns to the Play Video screen! This is because you haven’t implemented any delegate methods to handle choosing a video from the picker.
Back in Xcode, open PlayVideoViewController.swift again and find the UIImagePickerControllerDelegate extension. Then add the following delegate method implementation:
func imagePickerController(
  _ picker: UIImagePickerController,
  didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]
) {
  // 1
  guard
    let mediaType = info[UIImagePickerController.InfoKey.mediaType] as? String,
    mediaType == (kUTTypeMovie as String),
    let url = info[UIImagePickerController.InfoKey.mediaURL] as? URL
  else { return }
  // 2
  dismiss(animated: true) {
    // 3
    let player = AVPlayer(url: url)
    let vcPlayer = AVPlayerViewController()
    vcPlayer.player = player
    self.present(vcPlayer, animated: true, completion: nil)
  }
}
Here’s what you’re doing in this method:
- You get the media type and URL of the selected media, and confirm that it's a video.
- Next, you dismiss the image picker.
- In the completion block, you create an AVPlayerViewController to play the media.
Build and run. Tap Select and Play Video, then Play Video and choose a video from the list. The video will play in the media player.
Recording Video
Now that you have video playback working, it’s time to record a video using the device’s camera and save it to the media library.
Open RecordVideoViewController.swift and add the following import:
import MobileCoreServices
Then, add the following to the end of the file:
// MARK: - UIImagePickerControllerDelegate
extension RecordVideoViewController: UIImagePickerControllerDelegate {
}
// MARK: - UINavigationControllerDelegate
extension RecordVideoViewController: UINavigationControllerDelegate {
}
This adopts the same protocols as PlayVideoViewController.
Next, add the following code to record(_:):
VideoHelper.startMediaBrowser(delegate: self, sourceType: .camera)
It uses the same helper method as in PlayVideoViewController, except that it passes .camera to instruct the image picker to open in the built-in camera mode.
Build and run to see what you have so far.
Go to the Record screen and tap Record Video. Instead of the photo gallery, the camera UI opens. When the alerts ask for camera and microphone permissions, tap OK.
Finally, start recording a video by tapping the red record button at the bottom of the screen; tap it again when you’re done recording.
Now, you have two options: use the recorded video or do a retake. Tap Use Video. You’ll notice that it just dismisses the view controller. That’s because — you guessed it — you haven’t implemented an appropriate delegate method to save the recorded video to the media library.
Saving Video
Back in RecordVideoViewController.swift, add the following method to the UIImagePickerControllerDelegate extension:
func imagePickerController(
  _ picker: UIImagePickerController,
  didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]
) {
  dismiss(animated: true, completion: nil)
  guard
    let mediaType = info[UIImagePickerController.InfoKey.mediaType] as? String,
    mediaType == (kUTTypeMovie as String),
    // 1
    let url = info[UIImagePickerController.InfoKey.mediaURL] as? URL,
    // 2
    UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(url.path)
  else { return }
  // 3
  UISaveVideoAtPathToSavedPhotosAlbum(
    url.path,
    self,
    #selector(video(_:didFinishSavingWithError:contextInfo:)),
    nil)
}
Don’t worry about the error — you’ll take care of that shortly.
- As before, the delegate method gives you a URL pointing to the video.
- Verify that the app can save the file to the device’s photo album.
- If it can, save it.
UISaveVideoAtPathToSavedPhotosAlbum is the function provided by the SDK to save videos to the device's photo album. You pass it the path to the video you want to save, as well as a target and action to call back, which will inform you of the status of the save operation.
Next, add the implementation of the callback to the main class definition:
@objc func video(
  _ videoPath: String,
  didFinishSavingWithError error: Error?,
  contextInfo info: AnyObject
) {
  let title = (error == nil) ? "Success" : "Error"
  let message = (error == nil) ? "Video was saved" : "Video failed to save"
  let alert = UIAlertController(
    title: title,
    message: message,
    preferredStyle: .alert)
  alert.addAction(UIAlertAction(
    title: "OK",
    style: UIAlertAction.Style.cancel,
    handler: nil))
  present(alert, animated: true, completion: nil)
}
The callback method simply displays an alert to the user, announcing whether the video file was saved or not, based on the error status.
Build and run. Record a video and select Use Video when you're done recording. If you're asked for permission to save to your video library, tap OK. When the Video was saved alert pops up, you've successfully saved your video to the photo library!
Now that you can play videos and record videos, it’s time to take the next step and try some light video editing.
Merging Videos
The final piece of functionality for the app is to do a little editing. Your user will select two videos and a song from the music library, and the app will combine the two videos and mix in the music.
The project already has a starter implementation in MergeVideoViewController.swift, with code similar to the code you wrote to play a video. The big difference is that, when merging, the user must select two videos. That part is already set up, so the user can make two selections, which will be stored in firstAsset and secondAsset.
The next step is to add the functionality to select the audio file.
Selecting the Audio File
UIImagePickerController only lets you select videos and images from the media library. To select audio files from your music library, you must use MPMediaPickerController instead. It works essentially the same way as UIImagePickerController, but instead of images and video, it accesses audio files in the media library.
Open MergeVideoViewController.swift and add the following code to loadAudio(_:):
let mediaPickerController = MPMediaPickerController(mediaTypes: .any)
mediaPickerController.delegate = self
mediaPickerController.prompt = "Select Audio"
present(mediaPickerController, animated: true, completion: nil)
The code above creates a new MPMediaPickerController instance and displays it as a modal view controller.
Build and run. Now, tap Merge Video, then Load Audio to access the audio library on your device.
Of course, you’ll need some audio files on your device. Otherwise, the list will be empty. The songs will also have to be physically present on the device, so make sure you’re not trying to load a song from the cloud.
Select a song from the list and you'll notice that nothing happens. That's right! MPMediaPickerController needs delegate methods!
To implement them, find the MPMediaPickerControllerDelegate extension at the bottom of the file and add the following two methods to it:
func mediaPicker(
  _ mediaPicker: MPMediaPickerController,
  didPickMediaItems mediaItemCollection: MPMediaItemCollection
) {
  // 1
  dismiss(animated: true) {
    // 2
    let selectedSongs = mediaItemCollection.items
    guard let song = selectedSongs.first else { return }
    // 3
    let title: String
    let message: String
    if let url = song.value(forProperty: MPMediaItemPropertyAssetURL) as? URL {
      self.audioAsset = AVAsset(url: url)
      title = "Asset Loaded"
      message = "Audio Loaded"
    } else {
      self.audioAsset = nil
      title = "Asset Not Available"
      message = "Audio Not Loaded"
    }
    // 4
    let alert = UIAlertController(
      title: title,
      message: message,
      preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .cancel, handler: nil))
    self.present(alert, animated: true, completion: nil)
  }
}

func mediaPickerDidCancel(_ mediaPicker: MPMediaPickerController) {
  // 5
  dismiss(animated: true, completion: nil)
}
The code above is similar to the delegate methods for UIImagePickerController. Here's what it does:
- Dismiss the picker, just like you did before.
- Get the selected songs and take the first one, in case the user selected more than one.
- Obtain the URL of the media asset backing the song, then make an AVAsset pointing to the chosen song.
- Finally, for mediaPicker(_:didPickMediaItems:), show an alert to indicate whether the asset loaded successfully.
- If the user cancels the media picker, simply dismiss the view controller.
Build and run, then go to the Merge Videos screen. Select an audio file and you’ll see the Audio Loaded message.
You now have all your assets loading correctly so it’s time to merge the various media files into one file. But before you get into that code, you need to do a bit of setup.
Merging Completion Handler
You will shortly write the code to merge your assets. This will need a completion handler that saves the final video to the photo album. You’ll add this first.
Add the following import statement at the top of the MergeVideoViewController.swift file:
import Photos
Then, add the method below to MergeVideoViewController:
func exportDidFinish(_ session: AVAssetExportSession) {
  // 1
  activityMonitor.stopAnimating()
  firstAsset = nil
  secondAsset = nil
  audioAsset = nil
  // 2
  guard
    session.status == AVAssetExportSession.Status.completed,
    let outputURL = session.outputURL
  else { return }
  // 3
  let saveVideoToPhotos = {
    // 4
    let changes: () -> Void = {
      PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputURL)
    }
    PHPhotoLibrary.shared().performChanges(changes) { saved, error in
      DispatchQueue.main.async {
        let success = saved && (error == nil)
        let title = success ? "Success" : "Error"
        let message = success ? "Video saved" : "Failed to save video"
        let alert = UIAlertController(
          title: title,
          message: message,
          preferredStyle: .alert)
        alert.addAction(UIAlertAction(
          title: "OK",
          style: UIAlertAction.Style.cancel,
          handler: nil))
        self.present(alert, animated: true, completion: nil)
      }
    }
  }
  // 5
  if PHPhotoLibrary.authorizationStatus() != .authorized {
    PHPhotoLibrary.requestAuthorization { status in
      if status == .authorized {
        saveVideoToPhotos()
      }
    }
  } else {
    saveVideoToPhotos()
  }
}
Here’s what that code does:
- There’s a spinner that will animate when the assets are being processed. This stops the spinner and then clears the assets ready to select new ones.
- Ensure that the processing is complete and there is a URL of the resulting video.
- Create a closure that…
- Tells the photo library to make a “create request” from the resulting video before showing an alert to indicate if this succeeds or fails.
- Check if there is permission to access the photo library. If there is not permission, then ask for it before running the closure that saves the video. Otherwise, simply run the closure immediately as permission is already granted.
Now, you’ll add some code to merge(_:)
. Because there’s a lot of code, you’ll complete this in steps.
Merging: Step 1
In this step, you’ll merge the videos into one long video.
Add the following code to merge(_:):
guard
  let firstAsset = firstAsset,
  let secondAsset = secondAsset
else { return }
activityMonitor.startAnimating()
// 1
let mixComposition = AVMutableComposition()
// 2
guard
  let firstTrack = mixComposition.addMutableTrack(
    withMediaType: .video,
    preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
else { return }
// 3
do {
  try firstTrack.insertTimeRange(
    CMTimeRangeMake(start: .zero, duration: firstAsset.duration),
    of: firstAsset.tracks(withMediaType: .video)[0],
    at: .zero)
} catch {
  print("Failed to load first track")
  return
}
// 4
guard
  let secondTrack = mixComposition.addMutableTrack(
    withMediaType: .video,
    preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
else { return }
do {
  try secondTrack.insertTimeRange(
    CMTimeRangeMake(start: .zero, duration: secondAsset.duration),
    of: secondAsset.tracks(withMediaType: .video)[0],
    at: firstAsset.duration)
} catch {
  print("Failed to load second track")
  return
}
// 5
// TODO: PASTE CODE A
In the code above:
- You create an AVMutableComposition to hold your video and audio tracks.
- Next, you create an AVMutableCompositionTrack for the video and add it to your AVMutableComposition.
- Then, you insert the video from the first video asset into this track. Notice that insertTimeRange(_:of:at:) allows you to insert a part of a video, rather than the whole thing, into your main composition. This way, you can trim the video to a time range of your choice; there's a short sketch of that at the end of this step. In this instance, you want to insert the whole video, so you create a time range from CMTime.zero to your video asset's duration.
- Next, you do the same thing with the second video asset. Notice how the code inserts firstAsset at time .zero, and it inserts secondAsset at the end of the first video. That's because this tutorial assumes you want your video assets one after the other, but you can also overlap the assets by playing with the time ranges.
- // TODO: PASTE CODE A is a marker — you'll replace this line with the code in the next section.
In this step, you set up two separate AVMutableCompositionTrack instances. Now, you need to apply an AVMutableVideoCompositionLayerInstruction to each track to make some editing possible.
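As a quick illustration of that trimming ability, here's a hedged sketch, not part of the tutorial code, of inserting only the first five seconds of firstAsset. It would live inside the same do/catch block you just wrote:
// Hypothetical variation: insert only the first five seconds of firstAsset.
let fiveSeconds = CMTime(seconds: 5, preferredTimescale: 600)
let clipDuration = CMTimeMinimum(fiveSeconds, firstAsset.duration)
try firstTrack.insertTimeRange(
  CMTimeRange(start: .zero, duration: clipDuration),
  of: firstAsset.tracks(withMediaType: .video)[0],
  at: .zero)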
Merging the Videos: Step 2
Next up is to add instructions to the composition to tell it how you want the assets to be merged.
Add the next section of code to merge(_:), after the track code above. Replace // TODO: PASTE CODE A with the following code:
// 6
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRangeMake(
  start: .zero,
  duration: CMTimeAdd(firstAsset.duration, secondAsset.duration))
// 7
let firstInstruction = AVMutableVideoCompositionLayerInstruction(
  assetTrack: firstTrack)
firstInstruction.setOpacity(0.0, at: firstAsset.duration)
let secondInstruction = AVMutableVideoCompositionLayerInstruction(
  assetTrack: secondTrack)
// 8
mainInstruction.layerInstructions = [firstInstruction, secondInstruction]
let mainComposition = AVMutableVideoComposition()
mainComposition.instructions = [mainInstruction]
mainComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
mainComposition.renderSize = CGSize(
  width: UIScreen.main.bounds.width,
  height: UIScreen.main.bounds.height)
// 9
// TODO: PASTE CODE B
Here’s what’s happening in this code:
- First, you set up mainInstruction to wrap the entire set of instructions. Notice that the total time here is the sum of the first asset's duration and the second asset's duration.
- Next, you set up two instructions, one for each asset. The instruction for the first video needs one extra addition: You set its opacity to 0 at the end so it becomes invisible when the second video starts.
- Now that you have your AVMutableVideoCompositionLayerInstruction instances for the first and second tracks, you simply add them to mainInstruction. Next, you add mainInstruction to the instructions property of an instance of AVMutableVideoComposition. You also set the frame rate for the composition to 30 frames per second.
- // TODO: PASTE CODE B is a marker — you'll replace this line with the code in the next section.
OK, so you’ve now merged your two video files. It’s time to spice them up with some sound!
Merging the Audio: Step 3
To give your clip some musical flair, add the following code to merge(_:). Replace // TODO: PASTE CODE B with the following code:
// 10
if let loadedAudioAsset = audioAsset {
  let audioTrack = mixComposition.addMutableTrack(
    withMediaType: .audio,
    preferredTrackID: 0)
  do {
    try audioTrack?.insertTimeRange(
      CMTimeRangeMake(
        start: .zero,
        duration: CMTimeAdd(
          firstAsset.duration,
          secondAsset.duration)),
      of: loadedAudioAsset.tracks(withMediaType: .audio)[0],
      at: .zero)
  } catch {
    print("Failed to load Audio track")
  }
}
// 11
guard
  let documentDirectory = FileManager.default.urls(
    for: .documentDirectory,
    in: .userDomainMask).first
else { return }
let dateFormatter = DateFormatter()
dateFormatter.dateStyle = .long
dateFormatter.timeStyle = .short
let date = dateFormatter.string(from: Date())
let url = documentDirectory.appendingPathComponent("mergeVideo-\(date).mov")
// 12
guard let exporter = AVAssetExportSession(
  asset: mixComposition,
  presetName: AVAssetExportPresetHighestQuality)
else { return }
exporter.outputURL = url
exporter.outputFileType = AVFileType.mov
exporter.shouldOptimizeForNetworkUse = true
exporter.videoComposition = mainComposition
// 13
exporter.exportAsynchronously {
  DispatchQueue.main.async {
    self.exportDidFinish(exporter)
  }
}
Here’s what the code above does:
- Similarly to the video tracks, you create a new track for your audio and add it to the main composition. You set the audio time range to the sum of the durations of the first and second videos, because that will be the complete length of your video.
- Before you can save the final video, you need a path for the saved file. Create a unique file name based on the current date and time that points to a file in the documents folder.
- Render and export the merged video. To do this, you create an AVAssetExportSession that transcodes the contents of the composition to create an output of the form described by a specified export preset. Because you've already configured AVMutableVideoComposition, all you need to do is assign it to your exporter.
- After you've initialized an export session with the asset that contains the source media, the export presetName and outputFileType, you run the export by invoking exportAsynchronously(). Because the code performs the export asynchronously, this method returns immediately. The code calls the completion handler you supply to exportAsynchronously() whether the export fails, completes or the user cancels it. Upon completion, the exporter's status property indicates whether the export completed successfully. If it fails, the exporter's error property supplies additional information about the reason for the failure. A short sketch of inspecting that status follows this list.
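If you want more detail than the simple success check in exportDidFinish(_:), a hedged variation of the completion handler might inspect the status directly, something like this:
// Hypothetical diagnostics before handing off to exportDidFinish(_:).
exporter.exportAsynchronously {
  DispatchQueue.main.async {
    switch exporter.status {
    case .completed:
      print("Export finished: \(exporter.outputURL?.lastPathComponent ?? "no URL")")
    case .failed, .cancelled:
      print("Export failed: \(String(describing: exporter.error))")
    default:
      break
    }
    self.exportDidFinish(exporter)
  }
}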
AVComposition combines media data from multiple file-based sources. At its top level, AVComposition is a collection of tracks, each presenting media of a specific type, such as audio or video. An instance of AVCompositionTrack represents a single track.
Similarly, AVMutableComposition and AVMutableCompositionTrack present a higher-level interface for constructing compositions. These objects offer the insertion, removal and scaling operations that you've seen before and that will come up again.
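If you're ever unsure what ended up inside a composition, a small hypothetical debugging helper like the one below, which isn't part of the starter project, can print each track's type and time range:
import AVFoundation

// Hypothetical helper: list every track in a composition for debugging.
func logTracks(of composition: AVComposition) {
  for track in composition.tracks {
    let start = CMTimeGetSeconds(track.timeRange.start)
    let duration = CMTimeGetSeconds(track.timeRange.duration)
    print("Track \(track.trackID): \(track.mediaType.rawValue) starts at \(start)s, lasts \(duration)s")
  }
}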
Finally, build and run.
Select two videos and an audio file and merge the selected files. You’ll see a Video Saved message, which indicates that the merge was successful. At this point, your new video will be present in the photo album.
Go to the photo album, or browse using the Select and Play Video screen in the app, and you might notice some orientation issues in the merged video. Maybe a portrait video appears in landscape mode, or some videos are upside down.
This is due to the default AVAsset orientation. All movie and image files recorded using the default iPhone camera app have the video frame set to landscape, so the iPhone saves the media in landscape mode. You'll fix these problems next.
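If you're curious, you can see this metadata for yourself with a quick, hypothetical snippet like the one below, where videoURL is a placeholder for the URL of one of your recordings:
// Hypothetical inspection snippet; videoURL stands in for a real file URL.
let asset = AVAsset(url: videoURL)
if let track = asset.tracks(withMediaType: .video).first {
  // Often 1920x1080 even for portrait clips; the rotation lives in the transform.
  print("naturalSize: \(track.naturalSize)")
  print("preferredTransform: \(track.preferredTransform)")
}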
Orienting Video
AVAsset has a preferredTransform that contains the media orientation information. It's applied to the media whenever you view it using the Photos app or QuickTime.
In the code above, you haven’t applied a transform to your AVAsset
s, hence the orientation issue. Fortunately, this is an easy fix.
Before you can do it, however, you need to add the following helper method to VideoHelper in VideoHelper.swift:
static func orientationFromTransform(
  _ transform: CGAffineTransform
) -> (orientation: UIImage.Orientation, isPortrait: Bool) {
  var assetOrientation = UIImage.Orientation.up
  var isPortrait = false
  let tfA = transform.a
  let tfB = transform.b
  let tfC = transform.c
  let tfD = transform.d
  if tfA == 0 && tfB == 1.0 && tfC == -1.0 && tfD == 0 {
    assetOrientation = .right
    isPortrait = true
  } else if tfA == 0 && tfB == -1.0 && tfC == 1.0 && tfD == 0 {
    assetOrientation = .left
    isPortrait = true
  } else if tfA == 1.0 && tfB == 0 && tfC == 0 && tfD == 1.0 {
    assetOrientation = .up
  } else if tfA == -1.0 && tfB == 0 && tfC == 0 && tfD == -1.0 {
    assetOrientation = .down
  }
  return (assetOrientation, isPortrait)
}
This code analyzes an affine transform to determine the input video’s orientation.
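As a quick sanity check, you could feed the helper the 90-degree rotation that the iPhone camera typically writes for portrait video. This hypothetical snippet isn't part of the project; it just illustrates the expected result:
// A 90-degree rotation should be reported as portrait with a .right orientation.
let portraitTransform = CGAffineTransform(a: 0, b: 1, c: -1, d: 0, tx: 0, ty: 0)
let result = VideoHelper.orientationFromTransform(portraitTransform)
print(result.orientation == .right, result.isPortrait) // true true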
Next, add the following import:
import AVFoundation
and one more helper method to the class:
static func videoCompositionInstruction(
  _ track: AVCompositionTrack,
  asset: AVAsset
) -> AVMutableVideoCompositionLayerInstruction {
  // 1
  let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
  // 2
  let assetTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
  // 3
  let transform = assetTrack.preferredTransform
  let assetInfo = orientationFromTransform(transform)
  var scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.width
  if assetInfo.isPortrait {
    // 4
    scaleToFitRatio = UIScreen.main.bounds.width / assetTrack.naturalSize.height
    let scaleFactor = CGAffineTransform(
      scaleX: scaleToFitRatio,
      y: scaleToFitRatio)
    instruction.setTransform(
      assetTrack.preferredTransform.concatenating(scaleFactor),
      at: .zero)
  } else {
    // 5
    let scaleFactor = CGAffineTransform(
      scaleX: scaleToFitRatio,
      y: scaleToFitRatio)
    var concat = assetTrack.preferredTransform.concatenating(scaleFactor)
      .concatenating(CGAffineTransform(
        translationX: 0,
        y: UIScreen.main.bounds.width / 2))
    if assetInfo.orientation == .down {
      let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
      let windowBounds = UIScreen.main.bounds
      let yFix = assetTrack.naturalSize.height + windowBounds.height
      let centerFix = CGAffineTransform(
        translationX: assetTrack.naturalSize.width,
        y: yFix)
      concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
    }
    instruction.setTransform(concat, at: .zero)
  }
  return instruction
}
This method takes a track and an asset, and returns an AVMutableVideoCompositionLayerInstruction that wraps the affine transform needed to get the video right-side up. Here's what's going on, step by step:
- You create an AVMutableVideoCompositionLayerInstruction and associate it with your track.
- Next, you get the AVAssetTrack from your AVAsset. An AVAssetTrack provides the track-level inspection interface for all assets. You need this object to access the dimensions of the asset and its preferredTransform.
- Then, you save the preferred transform and the amount of scaling required to fit the video to the current screen. You'll use these values in the following steps.
- If the video is in portrait, you need to recalculate the scale factor — the default calculation is for videos in landscape. All you need to do then is apply the orientation rotation and scale transforms.
- If the video is in landscape, there's a similar set of steps to apply the scale and transform. However, there's one extra check, because the user could have produced the video in either landscape left or landscape right. Because there are two landscapes, the aspect ratio will match, but the video might be rotated 180 degrees. The extra check for a video orientation of .down handles this case.
With the helper methods set up, find merge(_:) in MergeVideoViewController.swift. Locate where firstInstruction and secondInstruction are created and replace them with the following:
let firstInstruction = VideoHelper.videoCompositionInstruction(
  firstTrack,
  asset: firstAsset)
let secondInstruction = VideoHelper.videoCompositionInstruction(
  secondTrack,
  asset: secondAsset)
The changes above will use the new helper functions and implement the rotation fixes you need.
Whew — that’s it!
Build and run. Create a new video by combining two videos and, optionally, an audio file. You'll see that the orientation issues disappear when you play back the merged video.
Where to Go From Here?
Download the final project using the Download Materials link at the top or bottom of this tutorial.
You should now have a good understanding of how to play video, record video and merge multiple videos and audio in your apps.
AV Foundation gives you a lot of flexibility when playing with videos. You can also apply any kind of CGAffineTransform to merge, scale or position videos.
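For example, here's a hedged sketch, reusing secondTrack from merge(_:), of how you might shrink the second clip into a small overlay instead of playing it after the first:
// Hypothetical picture-in-picture style layer instruction: scale the second
// clip to 30% and nudge it 20 points from the top-left corner.
let pipScale = CGAffineTransform(scaleX: 0.3, y: 0.3)
let pipMove = CGAffineTransform(translationX: 20, y: 20)
let pipInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
pipInstruction.setTransform(pipScale.concatenating(pipMove), at: .zero)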
If you haven’t already done so, take a look at the WWDC videos on AV Foundation, such as WWDC 2016 session 503, Advances in AVFoundation Playback.
Also, be sure to check out the Apple AVFoundation Framework documentation.
I hope this tutorial has been useful to get you started with video manipulation in iOS. If you have any questions, comments or suggestions for improvement, please join the forum discussion below!