#StackBounty: #ios #swift #avfoundation What AVCaptureDevice does Camera.app use?

Bounty: 50

I have a camera in my app. It’s been carefully implemented following a lot of documentation, but it still has a major annoyance: the field of view is significantly smaller than in the stock Camera app. Here are two screenshots taken at approximately the same distance for reference. My app is on the right, showing the entire preview stream from the camera.

[Screenshot comparison: stock Camera app on the left, my app’s preview on the right]

The Apple docs suggest using AVCaptureDevice.default or AVCaptureDevice.DiscoverySession, and my app uses the former:

AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

I’ve tried many of the different capture devices, and none of them give me the same wide preview as the stock Camera app.
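For reference, here is a quick sketch of how the available back cameras and their reported fields of view can be enumerated with a discovery session (the device types listed are illustrative; several of them require newer hardware or iOS 13+):

// Diagnostic sketch: list the back cameras this device exposes and their fields of view.
// Several of these device types require iOS 13 and recent hardware.
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera,
                  .builtInTelephotoCamera, .builtInDualWideCamera, .builtInTripleCamera],
    mediaType: .video,
    position: .back)

for device in discovery.devices {
    // videoFieldOfView is the horizontal field of view, in degrees, of the active format.
    print(device.localizedName, device.deviceType.rawValue,
          "FOV:", device.activeFormat.videoFieldOfView)
}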

What am I doing wrong?


Get this bounty!!!

#StackBounty: #ios #avfoundation #avasset #avassetreader #avcomposition AVVideoComposition fails while trying to read video frame

Bounty: 100

I have a source video and I want to generate a new video from it by taking a region of each frame of the source video. For example, if I have a video with resolution A x B, a content size of X x Y and an output resolution of C x D, then I want to create a video of resolution C x D whose content will be the first X x Y pixels of each frame from the original video.

To achieve this I’m using an AVAssetReader for reading the source video and an AVAssetWriter for writing the new one. For extracting just the region X x Y of the source video I’m using an AVAssetReaderVideoCompositionOutput object as the output of the asset reader. The setup code is something like:

let output = AVAssetReaderVideoCompositionOutput(...)
output.videoComposition = AVMutableVideoComposition(
    asset: asset, 
    videoTrack: videoTrack, 
    contentRect: contentRect, 
    renderSize: renderSize
)
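To give the fuller picture, the reader side is driven roughly like this (a simplified sketch: sourceURL, contentRect and renderSize are placeholders, and error handling is reduced to force-tries for brevity):

// Simplified sketch of the reader setup; sourceURL, contentRect and renderSize are placeholders.
let asset = AVURLAsset(url: sourceURL)
let videoTrack = asset.tracks(withMediaType: .video).first!

let reader = try! AVAssetReader(asset: asset)
let output = AVAssetReaderVideoCompositionOutput(videoTracks: [videoTrack], videoSettings: nil)
output.videoComposition = AVMutableVideoComposition(
    asset: asset,
    videoTrack: videoTrack,
    contentRect: contentRect,
    renderSize: renderSize
)

if reader.canAdd(output) {
    reader.add(output)
}
reader.startReading()

// Each sample buffer comes back already cropped and scaled to renderSize.
while let sampleBuffer = output.copyNextSampleBuffer() {
    // ... append to the AVAssetWriter input here ...
}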

Then the logic for cropping the video content happens in the following custom initialiser:

extension AVMutableVideoComposition {
    convenience init(asset: AVAsset, videoTrack: AVAssetTrack, contentRect: CGRect, renderSize: CGSize) {
        // Compute transform for rendering the video content at `contentRect` with a size equal to `renderSize`.
        let trackFrame = CGRect(origin: .zero, size: videoTrack.naturalSize)
        let transformedFrame = trackFrame.applying(videoTrack.preferredTransform)
        let moveToOriginTransform = CGAffineTransform(translationX: -transformedFrame.minX, y: -transformedFrame.minY)
        let moveToContentRectTransform = CGAffineTransform(translationX: -contentRect.minX, y: -contentRect.minY)
        let scaleTransform = CGAffineTransform(scaleX: renderSize.width / contentRect.width, y: renderSize.height / contentRect.height)
        let transform = videoTrack.preferredTransform.concatenating(moveToOriginTransform).concatenating(moveToContentRectTransform).concatenating(scaleTransform)

        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
        layerInstruction.setTransform(transform, at: .zero)

        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
        instruction.layerInstructions = [layerInstruction]

        self.init(propertiesOf: asset)
        instructions = [instruction]
        self.renderSize = renderSize
    }
}

This code works fine in some cases, for example for a content rect of (origin = (x = 0, y = 0), size = (width = 1416, height = 1920)). However, if I change the width to 1417 it no longer works, and I get the error message:

Error Domain=AVFoundationErrorDomain Code=-11858 “Source frame unsupported format” UserInfo={NSUnderlyingError=0x283e689c0 {Error Domain=NSOSStatusErrorDomain Code=-12502 “(null)”}, NSLocalizedFailureReason=The video could not be composited., NSDebugDescription=Source frame unsupported format, NSLocalizedDescription=Operation Stopped}

Here is a link to a sample project with the test video that produces the error. The cases where this fails look random to me: it works for the widths 1416, 1421, 1422, 1423 and 1429, and fails for all other width values between 1416 and 1429.
What’s the problem here, and how can I fix it?

Why am I using this approach?

I’m using an AVAssetReaderVideoCompositionOutput rather than an AVAssetReaderTrackOutput with manual cropping because the former reduces the app’s memory footprint: in my use case the output render size is much smaller than the video’s original size. This is relevant when I’m processing several videos at the same time.


Get this bounty!!!

#StackBounty: #ios #avfoundation #avcapturesession #autofocus #avcapturedevice AVCaptureDeviceFormat 1080p 60 fps Autofocus issue

Bounty: 100

I noticed that the 1080p 60 fps AVCaptureDeviceFormat on iPhone 6s does not support focus pixels, so in low-light conditions the camera keeps refocusing when moved. This creates an issue for video recording because of focus hunting. However, the native Camera app works wonderfully with the 1080p 60 fps setting and shows no focus hunting in the same scenario. How does the native camera achieve this? I tried locking focus before recording and also tried setting device.smoothAutoFocusEnabled to YES, but the results are still not as good as the native Camera app. Any ideas?
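For reference, the kind of focus configuration being tried looks roughly like this in Swift (a sketch; device stands for the active back-camera AVCaptureDevice):

// Sketch of the focus configuration being tried; `device` is the active AVCaptureDevice.
do {
    try device.lockForConfiguration()

    // Smooth autofocus slows lens movement so refocusing is less visible while recording.
    if device.isSmoothAutoFocusSupported {
        device.isSmoothAutoFocusEnabled = true
    }

    // Alternatively, lock focus at the current lens position before recording starts.
    if device.isFocusModeSupported(.locked) {
        device.focusMode = .locked
    }

    device.unlockForConfiguration()
} catch {
    print("Could not configure focus: \(error)")
}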


Get this bounty!!!

#StackBounty: #ios #swift #avfoundation #avplayer Streaming video from https with AVPlayer causes initial delay

Bounty: 100

I am using AVPlayer to play a video from an https URL with a setup like this:

player = AVPlayer(url: URL(string: urlString)!) // urlString is assumed to be a valid URL
player?.automaticallyWaitsToMinimizeStalling = false

But since the video is a little long, there is a short blank screen delay before the video actually starts to play. I think this is because it is being loaded from https.

Is there any way to remove that delay by making AVPlayer start playing the video right away, without loading the whole thing first?

I set .automaticallyWaitsToMinimizeStalling to false, but that does not seem to make a difference.
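For what it’s worth, one approach sometimes suggested is to give the player item a small forward buffer and start playback as soon as the item reports that it is ready, rather than waiting; a rough sketch (urlString is the same streaming URL as above):

// Sketch: small forward buffer plus starting playback when the item becomes ready.
// urlString is assumed to be a valid streaming URL; keep `observation` alive (e.g. in a property).
let item = AVPlayerItem(url: URL(string: urlString)!)
item.preferredForwardBufferDuration = 1 // seconds of media to buffer before playback can begin

let player = AVPlayer(playerItem: item)
player.automaticallyWaitsToMinimizeStalling = false

let observation = item.observe(\.status, options: [.initial, .new]) { item, _ in
    if item.status == .readyToPlay {
        player.play()
    }
}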

If anyone has any other suggestions please let me know.


Get this bounty!!!

#StackBounty: #ios #react-native #avfoundation #react-native-video #aws-cloudfront domain = AVFoundationErrorDomain , code = -11828

Bounty: 100

I am using a streaming URL from CloudFront.

Sample URL:
https://d14nt81hc5bide.cloudfront.net/qyYj1PcUkYg2ALDfzAdhZAmb

On Android it works fine, but on iOS it fails with:
domain = AVFoundationErrorDomain, code = -11828

According to the Apple docs, error code -11828 is AVErrorFileFormatNotRecognized:
“The media could not be opened because it is not in a recognized format.”
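For reference, the way AVFoundation classifies the URL can be inspected by loading the asset’s keys directly; a small diagnostic sketch (it only surfaces the underlying error, it does not fix it):

// Diagnostic sketch: load the asset's keys to see what AVFoundation makes of the URL.
let asset = AVURLAsset(url: URL(string: "https://d14nt81hc5bide.cloudfront.net/qyYj1PcUkYg2ALDfzAdhZAmb")!)

asset.loadValuesAsynchronously(forKeys: ["playable", "tracks"]) {
    var error: NSError?
    let status = asset.statusOfValue(forKey: "tracks", error: &error)
    print("playable:", asset.isPlayable, "tracks status:", status.rawValue, "error:", error ?? "none")
    print("video tracks:", asset.tracks(withMediaType: .video))
}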

Can someone suggest how to fix this error?


Get this bounty!!!

#StackBounty: #ios #swift #avfoundation #zoom #uipangesturerecognizer Pan gesture (hold/drag) zoom on camera like Snapchat

Bounty: 50

I am trying to replicate the Snapchat camera’s zoom feature, where once you have started recording you can drag your finger up or down and it zooms in or out accordingly. I have successfully implemented pinch-to-zoom but am stuck on zooming with a UIPanGestureRecognizer.

Here is the code I’ve tried. The problem is that I do not know what to use in place of the sender.scale value I rely on for pinch-gesture zooming, since a pan gesture has no scale property. I’m using AVFoundation. Basically, I’m asking how to properly implement the hold-to-zoom (one-finger drag) behavior seen in TikTok or Snapchat.

let minimumZoom: CGFloat = 1.0
let maximumZoom: CGFloat = 15.0
var lastZoomFactor: CGFloat = 1.0
var latestDirection: Int = 0

@objc func panGesture(_ sender: UIPanGestureRecognizer) {

    let velocity = sender.velocity(in: doubleTapSwitchCamButton)
    var currentDirection: Int = 0

    if velocity.y != 0 {

        let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera],
            mediaType: .video,
            position: .unspecified)
        guard let device = videoDeviceDiscoverySession.devices.first else { return }

        // Return a zoom value clamped between the minimum and maximum zoom values.
        func minMaxZoom(_ factor: CGFloat) -> CGFloat {
            return min(min(max(factor, minimumZoom), maximumZoom), device.activeFormat.videoMaxZoomFactor)
        }

        // Apply the zoom factor to the capture device.
        func update(scale factor: CGFloat) {
            do {
                try device.lockForConfiguration()
                defer { device.unlockForConfiguration() }
                device.videoZoomFactor = factor
            } catch {
                print("\(error.localizedDescription)")
            }
        }

        // These two lines are the problematic ones: pinch zoom uses the first one,
        // and the commented-out newScaleFactor below it was a test that did not work.
        let newScaleFactor = minMaxZoom(sender.scale * lastZoomFactor)
        //let newScaleFactor = CGFloat(exactly: number + lastZoomFactor)

        switch sender.state {
        case .began: fallthrough
        case .changed: update(scale: newScaleFactor)
        case .ended:
            lastZoomFactor = minMaxZoom(newScaleFactor)
            update(scale: lastZoomFactor)
        default: break
        }

    } else {

    }

    latestDirection = currentDirection
}
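For illustration, here is one possible way to derive the zoom factor from the pan translation instead of sender.scale (a sketch only, not necessarily how Snapchat does it; `view` is the view hosting the gesture and the sensitivity divisor is arbitrary):

// Sketch: map the vertical pan translation to a zoom factor.
// Dragging up (negative translation.y) zooms in; the divisor controls sensitivity.
let translation = sender.translation(in: view)
let sensitivity: CGFloat = 100 // points of drag per 1.0 change in zoom factor (arbitrary)
let newScaleFactor = minMaxZoom(lastZoomFactor - translation.y / sensitivity)

switch sender.state {
case .began, .changed:
    update(scale: newScaleFactor)
case .ended, .cancelled:
    lastZoomFactor = newScaleFactor
default:
    break
}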


Get this bounty!!!