#StackBounty: #video #ffmpeg #jpeg Create a video from JPGs with ffmpeg without having all pictures at the beginning

Bounty: 100

I have a game which can create screenshots, and I want to turn them into an MP4 video. So I have the following command:

ffmpeg -framerate 15 -i %06d.png -s hd1080 -vcodec libx264 -r 30 timelapse.mp4

But my game lasts 8 hours, so even after auto-compressing the pictures, I have more than 9 TB of them. So I want to start the ffmpeg process before picture generation has finished; that is, I want ffmpeg to wait for the next picture and digest it as it appears.

How can I do it?
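One way this could work (a hedged sketch, not tested against the game itself): instead of letting ffmpeg glob finished files, feed frames through a pipe with the image2pipe demuxer, and have a shell loop wait for each numbered frame before emitting it. The `DONE` sentinel file is an assumption; replace it with however your game signals completion, and note the sketch assumes a frame is complete as soon as the file exists.

```shell
#!/bin/sh
# Stream frames to ffmpeg as they appear, one at a time.
# Assumes frames are named 000001.png, 000002.png, ...
i=1
while :; do
  f=$(printf '%06d.png' "$i")
  # wait for the next frame; stop when the (hypothetical) DONE sentinel shows up
  while [ ! -f "$f" ]; do
    [ -f DONE ] && break 2
    sleep 1
  done
  cat "$f" && rm "$f"   # optionally delete frames after use to reclaim space
  i=$((i + 1))
done | ffmpeg -framerate 15 -f image2pipe -vcodec png -i - -s hd1080 -c:v libx264 -r 30 timelapse.mp4
```

This also sidesteps the disk-space problem, since frames can be deleted as soon as they have been piped in.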


Get this bounty!!!

#StackBounty: #ffmpeg #video #video-conversion #video-streaming FFmpeg hardware acceleration unsupported formats between transpose and a…

Bounty: 50

I am trying to develop a transcoding service which makes use of NVIDIA hardware acceleration capabilities (the GPU used in this process is a Tesla T4). I want to generate an MPEG-DASH playlist for my video so that I can stream it:

ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -i mobil1.mp4 -c:v h264_nvenc -c:a aac  
-map v:0 -b:v:0 4000k -maxrate:0 5500k -bufsize:0 4500k -filter:v:0 "scale_npp=1920:1080:force_original_aspect_ratio=decrease" -map 0:a -b:a 128k 
-f dash dash.mpd

But when mobile videos are uploaded (which have rotation metadata), I encounter the following error:

Impossible to convert between the formats supported by the filter 'transpose' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0

How can I solve this issue? I am using the following Docker image:
jrottenberg/ffmpeg:4.4-nvidia

ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --disable-debug --disable-doc --disable-ffplay --enable-shared --enable-avresample --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-gpl --enable-libass --enable-fontconfig --enable-libfreetype --enable-libvidstab --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libxcb --enable-libx265 --enable-libxvid --enable-libx264 --enable-nonfree --enable-openssl --enable-libfdk_aac --enable-postproc --enable-small --enable-version3 --enable-libbluray --enable-libzmq --extra-libs=-ldl --prefix=/opt/ffmpeg --enable-libopenjpeg --enable-libkvazaar --enable-libaom --extra-libs=-lpthread --enable-libsrt --enable-libaribb24 --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags='-I/opt/ffmpeg/include -I/opt/ffmpeg/include/ffnvcodec -I/usr/local/cuda/include/' --extra-ldflags='-L/opt/ffmpeg/lib -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib32/'
  libavutil      56. 70.100 / 56. 70.100
  libavcodec     58.134.100 / 58.134.100
  libavformat    58. 76.100 / 58. 76.100
  libavdevice    58. 13.100 / 58. 13.100
  libavfilter     7.110.100 /  7.110.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc    55.  9.100 / 55.  9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'mobil1.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2021-04-25T16:21:32.000000Z
    com.android.version: 11
    com.android.capture.fps: 30.000000
  Duration: 00:02:17.75, start: 0.000000, bitrate: 17255 kb/s
  Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 16996 kb/s, SAR 1:1 DAR 16:9, 30.02 fps, 30 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      rotate          : 90
      creation_time   : 2021-04-25T16:21:32.000000Z
      handler_name    : VideoHandle
      vendor_id       : [0][0][0][0]
    Side data:
      displaymatrix: rotation of -90.00 degrees
  Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 256 kb/s (default)
    Metadata:
      creation_time   : 2021-04-25T16:21:32.000000Z
      handler_name    : SoundHandle
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_nvenc))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
Impossible to convert between the formats supported by the filter 'transpose' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
[aac @ 0x5618677050c0] Qavg: 9442.968
[aac @ 0x5618677050c0] 2 frames left in the queue on closing
Conversion failed!
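One avenue worth trying (a hedged sketch; I have not verified it on a T4): because of the rotation metadata, ffmpeg automatically inserts a software `transpose` filter, and that filter cannot consume CUDA frames, which is exactly what the "Impossible to convert between the formats" error describes. Disabling auto-rotation with `-noautorotate` keeps the whole chain on the GPU, leaving the display-matrix side data for players to honor; the alternative is to round-trip frames through system memory (`hwdownload`) before transposing, at the cost of PCIe transfers.

```shell
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -noautorotate -i mobil1.mp4 \
  -c:v h264_nvenc -c:a aac \
  -map v:0 -b:v:0 4000k -maxrate:0 5500k -bufsize:0 4500k \
  -filter:v:0 "scale_npp=1920:1080:force_original_aspect_ratio=decrease" \
  -map 0:a -b:a 128k \
  -f dash dash.mpd
```

Whether the rotation survives into the DASH output depends on the player honoring the side data, so this is a trade-off rather than a guaranteed fix.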



#StackBounty: #windows #ffmpeg #cmd.exe FFmpeg multiple commands (convert, then join)

Bounty: 50

I have files, e.g. 1.mp4, 2.mp4, 3.mp4, etc., that need converting

for %i in (*.mp4) do ffmpeg -y -i "%i" -vf scale=1280:720 -crf 17 -c:v libx265 "%~ni.mp4"

Then I concatenate

ffmpeg -f concat -safe 0 -i xmylist.txt -crf 17 -c copy x1.mp4

Now, I want to do this in one step, and failed with

for %i in (*.mp4) do ffmpeg -y -i "%i" -vf scale=1280:720 -crf 17 -c:v libx265, -f concat -safe 0 -i xmylist.txt -crf 17 -c copy "%~ni.mp4"

How do I fuse these two together properly?
The error I got was:

Option vf (set video filters) cannot be applied to input url
xmylist.txt — you are trying to apply an input option to an output
file or vice versa. Move this option before the file it belongs to.
Error parsing options for input file xmylist.txt. Error opening input
files: Invalid argument
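For what it's worth, a single ffmpeg invocation cannot do both jobs: the concat demuxer needs the converted files to already exist before it reads the list. A hedged cmd.exe sketch that simply chains the two stages (the `conv` subfolder and the list-generation line are my assumptions; note also that `-crf` is ignored when `-c copy` is used):

```shell
:: Convert into a subfolder so outputs don't collide with (or get re-matched as) inputs,
:: build the concat list from the converted files, then join them.
mkdir conv
for %i in (*.mp4) do ffmpeg -y -i "%i" -vf scale=1280:720 -crf 17 -c:v libx265 "conv\%~ni.mp4"
(for %i in (conv\*.mp4) do @echo file '%i') > xmylist.txt
ffmpeg -f concat -safe 0 -i xmylist.txt -c copy x1.mp4
```

Writing the converted files under their original names, as the first command in the question does, risks overwriting the inputs mid-loop, which the subfolder avoids.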



#StackBounty: #ffmpeg Use ffmpeg to split a file output by size

Bounty: 50

I can split an audio (or video) file by time, but how do I split it by file size?

ffmpeg -i input.mp3 -ss S -to E -c copy output1.mp3 -ss S -to E -c copy output2.mp3

Which is fine if I have time codes, but if I want the output files to be split at 256 MB regardless of the time length, what do I do? (What I am doing now is estimating, but that often means I have to make multiple runs at it with -ss S -to E to get files that are close to the size I want.)
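One option that may fit (hedged: `-fs` stops writing at a packet boundary once the limit is reached, so chunks land slightly under the target rather than exactly on it): ffmpeg's `-fs` flag limits the output file size in bytes. Splitting then becomes a loop of "copy until ~256 MB, measure where that chunk ended, resume from there". A two-chunk sketch:

```shell
# First chunk: stop once ~256 MiB (268435456 bytes) have been written.
ffmpeg -i input.mp3 -c copy -fs 268435456 output1.mp3

# Next chunk: resume from the end of the previous one. ffprobe reports the
# duration of output1.mp3, which becomes the -ss for chunk two.
start=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 output1.mp3)
ffmpeg -ss "$start" -i input.mp3 -c copy -fs 268435456 output2.mp3
```

With `-c copy` the cuts fall on frame/packet boundaries, so no re-encoding or quality loss is involved, and the estimate-and-retry cycle goes away.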



#StackBounty: #x11 #pulseaudio #alsa #video #ffmpeg Recording screen with pulseaudio causes desyncs

Bounty: 100

Following ffmpeg’s Capturing Desktop guide for Linux: when the audio source is the pulse format, it causes frame freezes and desyncs, the fps slows down, and the video is delayed compared to the audio.

I have tried several argument combinations that I found while researching this issue; the following is the current command I’m using:

ffmpeg 
  -video_size 1920x1080 
  -framerate 60 
  -f x11grab 
  -probesize 42M 
  -thread_queue_size 64 
  -i :0.0 
  -f pulse 
  -thread_queue_size 64 
  -i default 
  -c:v libx264rgb 
  -crf 0 
  -preset ultrafast 
  -c:a aac 
  -ac 2 
  -b:a 160k 
  -ar 44100 
  -strict experimental 
  -threads 8 
  -vsync vfr 
  -max_muxing_queue_size 64 
  -f mp4 
  -y o.mp4

But the funny thing is that if I replace the audio source with -f alsa -i pulse, it works, and the warning message "100 buffers queued in out_0_1, something may be wrong." is gone.

I’m using version 4.2.4-1ubuntu0.1. Since pulse is more versatile, allowing per-application volume control and custom modules, I’d like ffmpeg to work well with pulse.

Evidence: https://youtu.be/rhjfQNd5lP4

Here is my default source (pactl list sources):

Source #1
        State: SUSPENDED
        Name: alsa_input.pci-0000_27_00.3.analog-stereo
        Description: Family 17h (Models 00h-0fh) HD Audio Controller Estéreo analógico
        Driver: module-alsa-card.c
        Sample Specification: s16le 2ch 48000Hz
        Channel Map: front-left,front-right
        Owner Module: 9
        Mute: no
        Volume: front-left: 19661 /  30% / -31.37 dB,   front-right: 19661 /  30% / -31.37 dB
                balance 0.00
        Base Volume: 6554 /  10% / -60.00 dB
        Monitor of Sink: n/a
        Latency: 0 usec, configured 0 usec
        Flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY 
        Properties:
                alsa.resolution_bits = "16"
                device.api = "alsa"
                device.class = "sound"
                alsa.class = "generic"
                alsa.subclass = "generic-mix"
                alsa.name = "ALC892 Analog"
                alsa.id = "ALC892 Analog"
                alsa.subdevice = "0"
                alsa.subdevice_name = "subdevice #0"
                alsa.device = "0"
                alsa.card = "1"
                alsa.card_name = "HD-Audio Generic"
                alsa.long_card_name = "HD-Audio Generic at 0xfe800000 irq 69"
                alsa.driver_name = "snd_hda_intel"
                device.bus_path = "pci-0000:27:00.3"
                sysfs.path = "/devices/pci0000:00/0000:00:08.1/0000:27:00.3/sound/card1"
                device.bus = "pci"
                device.vendor.id = "1022"
                device.vendor.name = "Advanced Micro Devices, Inc. [AMD]"
                device.product.id = "1457"
                device.product.name = "Family 17h (Models 00h-0fh) HD Audio Controller"
                device.string = "front:1"
                device.buffering.buffer_size = "17664"
                device.buffering.fragment_size = "2944"
                device.access_mode = "mmap"
                device.profile.name = "analog-stereo"
                device.profile.description = "Estéreo analógico"
                device.description = "Family 17h (Models 00h-0fh) HD Audio Controller Estéreo analógico"
                module-udev-detect.discovered = "1"
                device.icon_name = "audio-card-pci"
        Ports:
                analog-input-front-mic: Microfone frontal (priority: 8500, available)
                analog-input-rear-mic: Microfone traseiro (priority: 8200, not available)
                analog-input-linein: Entrada de linha (priority: 8100, not available)
        Active Port: analog-input-front-mic
        Formats:
                pcm

As I don’t know whether this problem is due to my audio card or a missing ffmpeg setting, could you please try the command I’ve tried and let me know whether it worked for you? And if you have any idea I could try in order to make it work, please help me out.
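Two hedged tweaks commonly suggested for this symptom, offered here only as things to try (I cannot confirm they fix this particular card): a `-thread_queue_size` of 64 is easy to overflow at 60 fps, so raising it well above that is cheap insurance, and the pulse demuxer's `-fragment_size` option asks PulseAudio for smaller, more regular audio chunks, which can tighten A/V alignment. A trimmed variant of the command with both applied:

```shell
ffmpeg \
  -video_size 1920x1080 -framerate 60 -f x11grab \
  -thread_queue_size 1024 -i :0.0 \
  -f pulse -fragment_size 2048 -thread_queue_size 1024 -i default \
  -c:v libx264rgb -crf 0 -preset ultrafast \
  -c:a aac -ac 2 -b:a 160k -ar 44100 \
  -y o.mp4
```

The 1024/2048 values are guesses to experiment with, not known-good settings for this hardware.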



#StackBounty: #ffmpeg #batch There is a difference when debayering an image with ffmpeg from cmd

Bounty: 50

I have this Bayer image: https://drive.google.com/file/d/1OjHQyR44ECMMs4BtlZacejkmgSjeY2Nq/view?usp=sharing

I need to get two output files

  1. Directly debayer this image and save it as .bmp
  2. Debayer the image, compress it to .h264, then decompress it and save it as .bmp

In order to do this, I use the following batch script:

@echo off

set main_dir=my_main_dir
set file_name=orig_bayer
set input=%main_dir%%file_name%.bmp
set output_direct_debayer_bmp=%main_dir%gpl_cmd_direct_decompress.bmp
set output_h264=%main_dir%result_h264_%file_name%.h264
set output_h264_to_bmp=%main_dir%gpl_cmd_decompress.bmp
set video_size=4096x3000

rem direct debayering 
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -f rawvideo -pixel_format bayer_rggb8 -video_size %video_size% -i pipe: -pix_fmt yuv420p %output_direct_debayer_bmp%

rem debayer -> h264 -> decompress
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 -f rawvideo -pixel_format bayer_rggb8 -video_size %video_size% -i pipe: -c:v hevc_nvenc -qp 0 -pix_fmt yuv420p %output_h264%
ffmpeg -y -i %output_h264% -f image2 %output_h264_to_bmp% -hide_banner

pause

So the problem is that with approach #1,

...
rem direct debayering 
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -f rawvideo -pixel_format bayer_rggb8 -video_size %video_size% -i pipe: -pix_fmt yuv420p %output_direct_debayer_bmp%
...

I get this output: https://drive.google.com/file/d/1-DA2440zZ2F9WRcd15iqFUt3UhtkQRZT/view?usp=sharing

and with approach #2

...
rem debayer -> h264 -> decompress
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 -f rawvideo -pixel_format bayer_rggb8 -video_size %video_size% -i pipe: -c:v hevc_nvenc -qp 0 -pix_fmt yuv420p %output_h264%
ffmpeg -y -i %output_h264% -f image2 %output_h264_to_bmp% -hide_banner
...

I get this output: https://drive.google.com/file/d/103dtgaDVaXsNy13XHaVLSgugQhh0Uj9N/view?usp=sharing

The second one differs in the colors…

What am I doing wrong? How can I get the same image?
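A hedged guess at the cause: path #2 runs the frame through an encode/decode round trip, which can change the color interpretation either via yuv420p chroma subsampling or via a limited/full range reinterpretation, while path #1 converts to yuv420p exactly once. (As an aside, the script encodes with hevc_nvenc even though the files are named .h264; the naming mismatch is harmless but worth noting.) A sketch that pins 4:4:4 chroma and an explicit full-range tag on path #2, so the round trip has less room to reinterpret; the `yuv444p`/`-color_range pc` choices are assumptions to experiment with, and 4:4:4 HEVC requires GPU support:

```shell
rem Same pipeline as approach #2, but with no chroma subsampling and an
rem explicit full-range tag; decompress back to RGB for the .bmp.
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 -f rawvideo -pixel_format bayer_rggb8 -video_size %video_size% -i pipe: -c:v hevc_nvenc -qp 0 -pix_fmt yuv444p -color_range pc %output_h264%
ffmpeg -y -i %output_h264% -pix_fmt rgb24 -f image2 %output_h264_to_bmp% -hide_banner
```

If the two .bmp files then match, the color shift was introduced by subsampling or range conversion rather than by the debayering itself.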



#StackBounty: #ffmpeg #video #video-conversion ffmpeg alphamerge two videos into a gif with transparent background

Bounty: 50

I had this question answered, which helped me merge two MP4 videos with alphamerge into an .mov file using the following command:

ffmpeg -i video.mp4 -i mask.mp4 -filter_complex "[1][0]scale2ref[mask][main];[main][mask]alphamerge" -c:v qtrle output.mov

Now, I was wondering how I would change this to output a GIF. When I tried

ffmpeg -i word.mp4 -i word.matte.mp4 -filter_complex "[1][0]scale2ref[mask][main];[main][mask]alphamerge" -c:v qtrle output.gif

I got this error

[swscaler @ 0x5601daaa4d40] No accelerated colorspace conversion found from yuv420p to argb.
[gif @ 0x5601d9ad2d00] GIF muxer supports only a single video GIF stream.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument

I’m using ffmpeg 4.2.4
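The "GIF muxer supports only a single video GIF stream" error comes from asking the GIF container to hold a qtrle stream: GIF needs its own palette-based encoding, not a QuickTime codec. A hedged rework (untested) that keeps the scale2ref/alphamerge chain but finishes with an alpha-aware palettegen/paletteuse pair, reserving a transparent palette slot:

```shell
ffmpeg -i word.mp4 -i word.matte.mp4 -filter_complex \
  "[1][0]scale2ref[mask][main];[main][mask]alphamerge,split[a][b];\
[a]palettegen=reserve_transparent=1[pal];\
[b][pal]paletteuse=alpha_threshold=128" \
  output.gif
```

GIF transparency is 1-bit, so `alpha_threshold` decides which pixels become fully transparent; partial alpha from the matte cannot survive into a GIF.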



#StackBounty: #ffmpeg #video Have ffmpeg merge a matte key file over the normal video file removing the background

Bounty: 50

I currently have two videos. Video one is a matte key file, which looks like so:

[image: example of the matte key]

Video two would be the normal video, without the background and just the person.

How would I go about merging these two videos in ffmpeg to have it look like this:

[image: desired composited result]

I’ve tried the command

ffmpeg -i word.mp4 -i word.matte.mp4 -filter_complex "[0:v][1:v]alphamerge" -shortest -c:v qtrle -an output.mp4

but I get the following error

ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'word.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:06.24, start: 0.000000, bitrate: 741 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 738 kb/s, 25.79 fps, 25.79 tbr, 11040 tbn, 51.59 tbc (default)
    Metadata:
      handler_name    : VideoHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'word.matte.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:06.19, start: 0.000000, bitrate: 493 kb/s
    Stream #1:0(und): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 568x320 [SAR 1:1 DAR 71:40], 491 kb/s, 26 fps, 26 tbr, 13312 tbn, 26 tbc (default)
    Metadata:
      handler_name    : VideoHandler
Stream mapping:
  Stream #0:0 (h264) -> alphamerge:main
  Stream #1:0 (mpeg4) -> alphamerge:alpha
  alphamerge -> Stream #0:0 (qtrle)
Press [q] to stop, [?] for help
[swscaler @ 0x5619d30b9900] No accelerated colorspace conversion found from yuv420p to argb.
[Parsed_alphamerge_0 @ 0x5619d2f57700] Input frame sizes do not match (1280x720 vs 568x320).
[Parsed_alphamerge_0 @ 0x5619d2f57700] Failed to configure output pad on Parsed_alphamerge_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #1:0

Any help would be appreciated. Here is a link to both video files, the matte file, and the normal video. https://imgur.com/a/rbpJyD5
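Judging by the log, the actual failure is the frame-size mismatch (1280x720 vs 568x320), not the merge itself: alphamerge requires both inputs to be the same size. The scale2ref filter shown in the previous section's command is the standard fix, scaling the matte to the main video before merging. A hedged adaptation of the failing command (note that a qtrle stream with alpha belongs in a .mov container, not .mp4):

```shell
ffmpeg -i word.mp4 -i word.matte.mp4 -filter_complex \
  "[1][0]scale2ref[mask][main];[main][mask]alphamerge" \
  -shortest -c:v qtrle -an output.mov
```

The "No accelerated colorspace conversion" line in the log is only a performance warning and can be ignored.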



#StackBounty: #ffmpeg #video FFMPEG : Merge multiple image inputs over video input using 'xfade' filter

Bounty: 50

I’ve been trying to figure out how to merge image inputs over one video.
Basically, I have a few images which I take as input, apply some filters to, and then merge with the video, like below (multiple images with a zoompan filter):

ffmpeg -i "inputs/image0.png" -i "inputs/image1.png" -i "inputs/image2.png" -i "inputs/image3.png" -i "inputs/background_video.mp4" -filter_complex "[4]split=2[color][alpha];[color]crop=iw/2:ih:0:0[color];[alpha]crop=iw/2:ih:iw/2:0[alpha];[color][alpha]alphamerge[ovrly];[0]scale=540:960,setsar=1[0_scaled];[1]scale=540:960,setsar=1[1_scaled];[2]scale=540:960,setsar=1[2_scaled];[3]scale=540:960,setsar=1[3_scaled];[0_scaled]scale=2700x4800,zoompan=z='min(zoom+0.0010,1.20)':x='iw/2-iw*(1/2-0/100)*on/201-iw/zoom/2':y='ih/2-ih*(1/2-0/100)*on/201-ih/zoom/2':d=25*8.04:s=540x960[v0];[1_scaled]scale=2700x4800,zoompan=z='min(zoom+0.0013,1.20)':x='iw/2-iw*(1/2-100/100)*on/151-iw/zoom/2':y='ih/2-ih*(1/2-0/100)*on/151-ih/zoom/2':d=25*6.04:s=540x960[v1];[2_scaled]scale=2700x4800,zoompan=z='min(zoom+0.0010,1.20)':x='iw/2-iw*(1/2-100/100)*on/201-iw/zoom/2':y='ih/2-ih*(1/2-100/100)*on/201-ih/zoom/2':d=25*8.04:s=540x960[v2];[3_scaled]scale=2700x4800,zoompan=z='min(zoom+0.0010,1.20)':x='iw/2-iw*(1/2-50/100)*on/201-iw/zoom/2':y='ih/2-ih*(1/2-50/100)*on/201-ih/zoom/2':d=25*8.04:s=540x960[v3];[v0][v1][v2][v3]concat=n=4:v=1:a=0,format=yuv420p[concatenated_video];[concatenated_video][ovrly]overlay=0:0" "outputs/zoomInTest.mp4"

I’ve tried several similar ideas to get xfade working, but I’m getting the issues mentioned below.

ffmpeg -loop 1 -t 8.04 -i "inputs/image0.png" -loop 1 -t 5.60 -i "inputs/image1.png" -loop 1 -t 8.36 -i "inputs/image2.png" -loop 1 -t 8.16 -i "inputs/image3.png" -i "inputs/background_video.mp4" -filter_complex "[4]split=2[color][alpha];[color]crop=iw/2:ih:0:0[color];[alpha]crop=iw/2:ih:iw/2:0[alpha];[color][alpha]alphamerge[ovrly];[0][1]xfade=transition=circleopen:duration=1.00:offset=7.54[v0];[1][2]xfade=transition=diagbl:duration=1.00:offset=13.14[v1];[2][3]xfade=transition=slideright:duration=1.00:offset=21.00[v2];[3]fade=t=in:st=29.0:d=1.0[v3];[v0][v1][v2][v3]concat=n=4:v=1:a=0,format=yuv420p[concatenated_video];[concatenated_video][ovrly]overlay=0:0" "outputs/fadeTestNew.mp4" 

The picture below is a reference for my process:

[image: process reference]

What I’ve tried:

1. Output wasn’t being animated at all (solved)

I solved this issue by specifying -loop 1 -t 8.04 -i instead of only -i for all input images.

2. Filter wasn’t mapped because it was unused, etc. (solved)

I solved this issue by removing the scale filter, which was producing unused outputs like [0_scaled].

3. xfade filter doesn’t work properly (unsolved)

See the 2nd ffmpeg command above, which is the latest update to my command. The thing is, the first two input images are animated perfectly (I want the first input to fade out via the circleopen transition at around 7.54 s, and that works well), but the rest are just messed up, and I can’t describe what happens!

I found out that the xfade filter requires two inputs, so I could only use transitions on the first 3 inputs; for the 4th image I used the fade filter to fade it out in the last second.

[TL;DR]: All I want is to use all these 4 images, transition between them via the xfade filter at different times (the first transition at 8 s, the 2nd at 14 s, and the 3rd at 22 s), map them all onto my background video, and generate the final output video!

Thanks!

[Update 1]: I’ve managed to merge all the transitions, but the issue above stays the same, because in the new video the transitions are placed at different times (my video is 30 s long, but now the output comes out at 37 s). Anyway, you can check my latest command here:

ffmpeg -loop 1 -t 8.54 -i "inputs/image0.png" -loop 1 -t 6.10 -i "inputs/image1.png" -loop 1 -t 8.86 -i "inputs/image2.png" -loop 1 -t 8.66 -i "inputs/image3.png" -i "inputs/background_video.mp4" -filter_complex "[4]split=2[color][alpha];[color]crop=iw/2:ih:0:0[color];[alpha]crop=iw/2:ih:iw/2:0[alpha];[color][alpha]alphamerge[ovrly];[0][1]xfade=transition=circleopen:duration=1.00:offset=7.54[v0];[v0][2]xfade=transition=diagbl:duration=1.00:offset=12.14[v1];[v1][3]xfade=transition=slideright:duration=1.00:offset=20.00[v2];[v2][3]xfade=transition=slideright:duration=1.00:offset=27.00[v3];[v3]concat=n=1:v=1:a=0,format=yuv420p[concatenated_video];[concatenated_video][ovrly]overlay=0:0" "outputs/fadeTestNew2.mp4" 
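For reference, a hedged reading of the xfade semantics: when transitions are chained, each offset is measured on the timeline of the chain's left input, so offset_k = (sum of the first k clip durations) − (k × fade duration). With clip durations 8.54/6.10/8.86/8.66 s and 1 s fades, that gives offsets 7.54, 12.64, and 20.50, and a total length of 29.16 s (close to the 30 s background video). The last xfade in the command above also feeds input [3] in twice, which is likely what stretches the output to 37 s. A sketch with the recomputed offsets and no duplicate input (untested against these assets):

```shell
ffmpeg -loop 1 -t 8.54 -i "inputs/image0.png" -loop 1 -t 6.10 -i "inputs/image1.png" \
  -loop 1 -t 8.86 -i "inputs/image2.png" -loop 1 -t 8.66 -i "inputs/image3.png" \
  -i "inputs/background_video.mp4" -filter_complex \
  "[4]split=2[color][alpha];[color]crop=iw/2:ih:0:0[color];\
[alpha]crop=iw/2:ih:iw/2:0[alpha];[color][alpha]alphamerge[ovrly];\
[0][1]xfade=transition=circleopen:duration=1:offset=7.54[v0];\
[v0][2]xfade=transition=diagbl:duration=1:offset=12.64[v1];\
[v1][3]xfade=transition=slideright:duration=1:offset=20.50[v2];\
[v2]format=yuv420p[base];[base][ovrly]overlay=0:0" \
  "outputs/fadeTestNew3.mp4"
```

Since each xfade already produces one continuous stream, the trailing concat from the original command is no longer needed.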

