`[AVCaptureSession canAddOutput:output]` returns NO intermittently. Can I find out why?
Asked Answered

I am using canAddOutput: to determine whether I can add an AVCaptureMovieFileOutput to an AVCaptureSession, and I'm finding that canAddOutput: sometimes returns NO and mostly returns YES. Is there a way to find out why NO was returned? Or a way to eliminate the situation that is causing NO to be returned? Or anything else I can do to prevent the user from seeing an intermittent failure?

Some further notes: this happens approximately once in every 30 calls. Since my app hasn't launched yet, it has only been tested on one device: an iPhone 5 running iOS 7.1.2.

Colloid answered 1/7, 2014 at 2:14 Comment(1)
Can you show me a little code of what you are trying to do? – Infinitive

Here is a quote from the documentation (the discussion of canAddOutput:):

You cannot add an output that reads from a track of an asset other than the asset used to initialize the receiver.

Here is an explanation that should help (check whether your code matches this guide; if everything is set up correctly, canAddOutput: should not fail, because it basically checks compatibility).
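For a quick diagnosis, here is a hedged sketch (session and movieFileOutput are placeholders for your own objects) that logs the session's state at the moment canAddOutput: returns NO, to narrow down the cause:

// Diagnostic sketch: inspect the session when canAddOutput: fails.
if (![session canAddOutput:movieFileOutput]) {
    NSLog(@"canAddOutput: returned NO");
    NSLog(@"running: %d", session.running);
    NSLog(@"inputs: %@", session.inputs);
    // An output that was never removed would still show up here.
    NSLog(@"outputs: %@", session.outputs);
}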

AVCaptureSession
Used to connect device inputs and outputs, much like connecting DirectShow filters. If an input and an output can be connected, then once the session starts, data flows from the input to the output. A few main points:
a) AVCaptureDevice, which represents a capture device such as a camera.
b) AVCaptureInput, which feeds data from a device into the session.
c) AVCaptureOutput, which receives the captured data.
Inputs and outputs are not one-to-one: for example, a single video output can be driven while the session has both video + audio inputs. Switching between the front and back cameras:

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];

Adding a capture input:
To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device's ports.

NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
   // Handle the error appropriately.
}

Adding outputs; the available output classes:

To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput; you use:

  • AVCaptureMovieFileOutput to output to a movie file
  • AVCaptureVideoDataOutput if you want to process frames from the video being captured
  • AVCaptureAudioDataOutput if you want to process the audio data being captured
  • AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as you want while the session is running.

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
   [captureSession addOutput:movieOutput];
}
else {
   // Handle the failure.
}

Saving a video file by adding a movie file output:

You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of the recording, or the maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.

AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
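As a follow-up sketch (the output URL is just an example path, and self is assumed to adopt AVCaptureFileOutputRecordingDelegate), recording with the configured output is then started and stopped like this:

// Record to a temporary file; the delegate is told when recording finishes.
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"]];
[aMovieFileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];
// ... later, e.g. when the user taps stop:
[aMovieFileOutput stopRecording];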

Processing preview video frame data: each viewfinder frame can be used for subsequent higher-level processing, such as face detection.
An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:.
In addition to the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order.
You should not pass the queue returned by dispatch_get_current_queue, since there is no guarantee as to which thread the current queue is running on. You can use the queue to modify the priority given to delivering and processing the video frames. Per-frame processing must be bounded in both image size and processing time; if processing takes too long, the underlying sensor will stop delivering data to the layer and to this callback.
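A minimal setup sketch, assuming self adopts AVCaptureVideoDataOutputSampleBufferDelegate and the queue label is arbitrary:

AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// A serial queue guarantees frames arrive at the delegate in order.
dispatch_queue_t frameQueue = dispatch_queue_create("videoFrameQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:frameQueue];
// Drop late frames rather than queueing them behind slow processing.
videoDataOutput.alwaysDiscardsLateVideoFrames = YES;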

You should set the session output to the lowest practical resolution for your application.
Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power. You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AVFoundation will stop delivering frames, not only to your delegate but also to other outputs such as a preview layer.
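For illustration, a skeleton of that delegate method (the processing step is a placeholder); the key point is to do bounded work and return quickly:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Keep this lightweight; holding onto sampleBuffer too long makes
    // AVFoundation stop delivering frames.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer) {
        // Process the pixel buffer quickly (e.g. hand it to a face detector).
    }
}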

Handling still image capture:

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
    AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];

Different output formats are supported, and the output can also generate a JPEG stream directly. If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without re-compressing the data, even if you modify the image's metadata.
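A hedged capture sketch, assuming the stillImageOutput configured above has already been added to a running session:

AVCaptureConnection *videoConnection =
    [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer) {
            // Already hardware-compressed JPEG; no re-compression happens here.
            NSData *jpegData = [AVCaptureStillImageOutput
                jpegStillImageNSDataRepresentation:imageSampleBuffer];
            // Save or display jpegData as needed.
        }
    }];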

Camera preview display:

You can provide the user with a preview of what's being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see the Core Animation Programming Guide). You don't need any outputs to show the preview.

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer =
    [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];

In general, the preview layer behaves like any other CALayer object in the render tree (see the Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would with any layer. One difference is that you may need to set the layer's orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).
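Typically you would also size the layer and choose a video gravity; a short sketch, reusing the viewLayer from the snippet above:

// Match the preview to its host layer and fill it without distortion.
captureVideoPreviewLayer.frame = viewLayer.bounds;
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;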

Trehalose answered 10/7, 2014 at 11:40 Comment(0)

Referring to this answer, there is a possibility that the delegate method below runs in the background, which can leave the previous AVCaptureSession not disconnected properly, sometimes resulting in canAddOutput: returning NO.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection

The solution might be to call stopRunning in the above delegate (after doing the necessary actions and condition checks, of course; you need to finish off your previous session properly).
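As a hedged illustration of that idea (_captureSession is assumed to be the ivar holding the session this output belongs to, and the metadata handling is a placeholder):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    // ... do the necessary actions and condition checks first ...

    // Finish the previous session off properly so a later
    // canAddOutput: doesn't see a half-torn-down session.
    [_captureSession stopRunning];
}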

Adding to that, it would be better if you provided some code showing what you are trying to do.

Infinitive answered 7/7, 2014 at 5:53 Comment(0)

It can be one of these two cases:
1) The session is running.
2) You have already added that output.
You can't add an output or an input twice, and you also can't create two different sessions.
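Following this answer's reasoning, a hedged sketch that checks both cases up front (session and output are placeholders for your own objects):

// Case 2: is the output already attached?
BOOL alreadyAdded = [session.outputs containsObject:output];
// Case 1: is the session still running?
NSLog(@"running = %d, alreadyAdded = %d", session.running, alreadyAdded);
if (!session.running && !alreadyAdded && [session canAddOutput:output]) {
    [session addOutput:output];
}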

Cord answered 10/7, 2014 at 16:26 Comment(0)

It may be a combination of:

  • Calling this method when the camera is busy.
  • Not properly removing your previously connected AVCaptureSession.

You should try to add it only once (in which case I'd guess canAddOutput: will always return YES) and just pause/resume your session as needed:

// Stop session if possible
if (_captureSession.running && !_captureInProgress)
{
    [_captureSession stopRunning];
    NBULogVerbose(@"Capture session: {\n%@} stopped running", _captureSession);
}
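And the matching resume, so the output added earlier never has to be re-added:

// Resume the same session later instead of rebuilding it.
if (!_captureSession.running)
{
    [_captureSession startRunning];
}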

You can take a look here.

Neckline answered 7/7, 2014 at 2:17 Comment(0)

I think this will help you. canAddOutput: returns a Boolean value that indicates whether a given output can be added to the session.

- (BOOL)canAddOutput:(AVCaptureOutput *)output

Parameters:
output: An output that you want to add to the session.
Return Value:
YES if the output can be added to the session, otherwise NO.

Availability: OS X v10.7 and later.

Here is the link to the Apple documentation: Click here

Artina answered 10/7, 2014 at 11:5 Comment(0)
