When I'm on vacation, I usually use our camcorder to record videos. Since they're all in the same format, I can use ffmpeg's concat feature to join them into one large, smooth video without re-encoding.
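For reference, a minimal sketch of that lossless join, assuming a list file named list.txt (a hypothetical name) with one "file 'clipN.MTS'" line per clip:

    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.MTS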
However, sometimes I will use a phone or other camera to record a video (if the camcorder ran out of space/battery or was left at a hotel).
I'd like to determine the codec, framerate, etc. used by my camcorder and use those parameters to convert the phone videos into the same format. That way, I will be able to concatenate all the videos without re-encoding the camcorder videos.
Using ffprobe, I found my camcorder has this encoding:
Input #0, mpegts, from 'camcorderfile.MTS':
  Duration: 00:00:09.54, start: 1.936367, bitrate: 24761 kb/s
  Program 1
    Stream #0:0[0x1011]: Video: h264 (High) (HDPR / 0x52504448), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
    Stream #0:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s
    Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080
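(That block is the output of a plain ffprobe call; the filename here is just a placeholder.)

    ffprobe -hide_banner camcorderfile.MTS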
The phone (iPhone 5s) encoding is:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'mov.MOV':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    creation_time   : 2017-01-02T03:04:05.000000Z
    com.apple.quicktime.location.ISO6709: +12.3456-789.0123+456.789/
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 5s
    com.apple.quicktime.software: 10.2.1
    com.apple.quicktime.creationdate: 2017-01-02T03:04:05-0700
  Duration: 00:00:14.38, start: 0.000000, bitrate: 11940 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 11865 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      creation_time   : 2017-01-02T03:04:05.000000Z
      handler_name    : Core Media Data Handler
      encoder         : H.264
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 63 kb/s (default)
    Metadata:
      creation_time   : 2017-01-02T03:04:05.000000Z
      handler_name    : Core Media Data Handler
    Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2017-01-02T03:04:05.000000Z
      handler_name    : Core Media Data Handler
    Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2017-01-02T03:04:05.000000Z
      handler_name    : Core Media Data Handler
I'm presuming that ffmpeg will automatically accept any reasonable input format, and that I only need to figure out the output settings. I think I need to use -s 1920x1080 and -pix_fmt yuv420p for the output, but what other flags do I need to make the phone video's encoding match the camcorder video's?
Can I get some pointers as to how I can translate the ffprobe output into the flags I need to give to ffmpeg?
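To make the question concrete, here is my best guess so far (a sketch only, with the values read off the camcorder's ffprobe output above: 59.94 fps written as 60000/1001, AC-3 stereo at 48 kHz and 256 kb/s, MPEG-TS container; the output filename is arbitrary):

    ffmpeg -i mov.MOV \
        -c:v libx264 -profile:v high -pix_fmt yuv420p -s 1920x1080 -r 60000/1001 \
        -c:a ac3 -ac 2 -ar 48000 -b:a 256k \
        -f mpegts phone_converted.MTS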
Edit: Added the entire Input #0 for both media files.
I get "Non-monotonous DTS in output stream", even after I changed the framerate to match as well. There must be more that is needed to make ffmpeg concat work. – Amoebaean
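One workaround sometimes suggested for that DTS error (untested here) is to have ffmpeg regenerate missing timestamps while concatenating:

    ffmpeg -fflags +genpts -f concat -safe 0 -i list.txt -c copy joined.MTS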