apple-vision Questions

1

I am using this code for extracting text from an image. The first time, the code runs perfectly; after that it starts giving this error message: [coreml] Failed to get the home directory when checking model ...
Isaacs asked 6/12, 2022 at 9:39

1

I am classifying images frame by frame from the ARSession delegate using the Vision framework and Core ML in an augmented reality app built with ARKit and RealityKit. While processing a frame.capturedImage I am not requ...
Altitude asked 29/12, 2021 at 11:42
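A minimal sketch of how per-frame classification from the ARSession delegate is typically wired up, assuming a hypothetical Xcode-generated model class named MyClassifier; the background queue and the .right orientation are illustrative choices, not the asker's exact setup:

```swift
import ARKit
import Vision

final class FrameClassifier {
    // `MyClassifier` is a hypothetical Core ML model class generated by Xcode.
    private lazy var request: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(for: MyClassifier(configuration: MLModelConfiguration()).model)
        return VNCoreMLRequest(model: model) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
            print("Label: \(best.identifier), confidence: \(best.confidence)")
        }
    }()

    // Call from session(_:didUpdate:) with frame.capturedImage.
    func classify(_ pixelBuffer: CVPixelBuffer) {
        // capturedImage arrives in sensor orientation; .right matches portrait here.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: .right,
                                            options: [:])
        DispatchQueue.global(qos: .userInitiated).async {
            try? handler.perform([self.request])
        }
    }
}
```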

3

Solved

In iOS 16 you can lift the subject from an image or isolate the subject by removing the background. You can see it in action here: https://developer.apple.com/wwdc22/101?time=1101 I wonder whether ...
Harry asked 16/9, 2022 at 10:31
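For programmatic access (as opposed to the built-in lift gesture), iOS 17's Vision framework adds VNGenerateForegroundInstanceMaskRequest. A minimal sketch, assuming a CGImage input; error handling is simplified:

```swift
import Vision

// Returns a pixel buffer containing only the lifted subject(s), background removed.
@available(iOS 17.0, *)
func liftSubject(from cgImage: CGImage) throws -> CVPixelBuffer {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else {
        throw NSError(domain: "SubjectLift", code: 1) // no subject detected
    }
    // Mask the original image to the detected foreground instances.
    return try observation.generateMaskedImage(ofInstances: observation.allInstances,
                                               from: handler,
                                               croppedToInstancesExtent: false)
}
```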

3

Solved

I'm trying to add the option to my app to allow for different languages when using Apple's Vision framework for recognising text. There seems to be a function for programmatically returning the su...
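A minimal sketch of the query the asker seems to be after, assuming iOS 15+ where VNRecognizeTextRequest exposes an instance method for its supported languages (earlier releases use the class method supportedRecognitionLanguages(for:revision:)):

```swift
import Vision

let request = VNRecognizeTextRequest()
request.recognitionLevel = .accurate

// iOS 15+: ask the configured request which languages it can recognise.
if let languages = try? request.supportedRecognitionLanguages() {
    print("Supported languages:", languages)
}

// Then restrict recognition to one of them before performing the request.
request.recognitionLanguages = ["fr-FR"]
```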

1

Solved

I hope it's not a silly question, but why does this iOS Swift code compile successfully? import UIKit import ARKit class ViewController: UIViewController { private let sequenceHandler = VNSequenceR...
Pneumograph asked 14/7, 2021 at 14:42

1

I want to use VNDetectTextRectanglesRequest from the Vision framework to detect regions in an image containing only one character, the number '9', on a white background. I'm using the following code to d...
Bandicoot asked 6/1, 2018 at 11:47
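A minimal sketch of detecting per-character rectangles with VNDetectTextRectanglesRequest and reportCharacterBoxes, assuming a CGImage input; whether a lone '9' on a white background is picked up at all depends on the image itself, which is likely the asker's real issue:

```swift
import Vision

func detectCharacterBoxes(in cgImage: CGImage) {
    let request = VNDetectTextRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNTextObservation] else { return }
        for text in observations {
            // Each characterBox is a normalized rectangle around a single glyph.
            for box in text.characterBoxes ?? [] {
                print("Character box:", box.boundingBox)
            }
        }
    }
    request.reportCharacterBoxes = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```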

1

Solved

Problem: I am trying to get facial features through CIDetector from a CMSampleBuffer from AVCaptureVideoDataOutput. On execution of the program, 9 out of 10 times the program crashes and only once it ...
Tedious asked 13/2, 2021 at 7:51
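A minimal sketch of the CIDetector path from a CMSampleBuffer; the guards are illustrative (force-unwrapping CMSampleBufferGetImageBuffer or recreating the detector on every frame are common crash and performance causes, though the asker's exact crash isn't shown):

```swift
import AVFoundation
import CoreImage

// Reuse a single detector; creating one per frame is expensive.
let faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

func detectFaceFeatures(in sampleBuffer: CMSampleBuffer) {
    // The image buffer can legitimately be nil; guard instead of force-unwrapping.
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let image = CIImage(cvPixelBuffer: pixelBuffer)

    for case let face as CIFaceFeature in faceDetector?.features(in: image) ?? [] {
        print("Face bounds:", face.bounds,
              "smiling:", face.hasSmile,
              "left eye closed:", face.leftEyeClosed)
    }
}
```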

2

How can I use the depth data captured with the iPhone's TrueDepth camera to distinguish between a real, three-dimensional human face and a photograph of the same face? The requirement is to use it for authentication. Wha...
Cockchafer asked 25/4, 2019 at 9:54
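One common approach is to capture AVDepthData from the TrueDepth camera and check that the face region actually has depth relief (a printed photo is essentially flat). A rough sketch of the capture side, with the liveness heuristic left as a comment because the thresholds are application-specific assumptions:

```swift
import AVFoundation

final class DepthCaptureController: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func configure() throws {
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else { return }
        session.beginConfiguration()
        session.addInput(try AVCaptureDeviceInput(device: device))
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))
        session.commitConfiguration()
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthMap = depthData
            .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
            .depthDataMap
        // A flat photograph gives near-constant depth across the face region,
        // while a real face varies by a few centimetres (nose vs. cheeks).
        // Sample depthMap inside the detected face rectangle and compare min/max here.
        _ = depthMap
    }
}
```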

3

Solved

Like many other developers, I have plunged into Apple's new ARKit technology. It's great. For a specific project, however, I would like to be able to recognise (real-life) images in the scene,...
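Later ARKit releases (1.5 and up) added built-in image detection, which covers this case without Vision. A minimal sketch, assuming the reference images live in an asset-catalog group named "AR Resources":

```swift
import ARKit

func startImageDetection(on sceneView: ARSCNView) {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                 bundle: nil) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    sceneView.session.run(configuration)
}

// ARKit then adds an ARImageAnchor when a reference image is recognised:
// func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//     guard let imageAnchor = anchor as? ARImageAnchor else { return }
//     print("Detected:", imageAnchor.referenceImage.name ?? "unnamed")
// }
```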

3

I am using the Vision framework on iOS 11 to detect text in an image. The text is detected successfully, but how can we get the detected text itself?
Listing asked 15/6, 2017 at 11:25
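On iOS 11 the Vision text request only returns rectangles, not strings; actual transcription arrived with VNRecognizeTextRequest in iOS 13. A minimal sketch of reading the recognised strings, assuming a CGImage input:

```swift
import Vision

func recognizeText(in cgImage: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // topCandidates(1) yields the most likely transcription for this region.
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string, "confidence:", candidate.confidence)
            }
        }
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```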

0

I’m currently working on a feature of my app, which recognizes faces in a camera stream. I’m reading landmark features like the mouth etc. Everything works fine when light conditions are sufficient...
Slipper asked 5/3, 2020 at 21:57

2

Solved

I'm trying to get the dimensions of a displayed image to draw bounding boxes over the text I have recognized using Apple's Vision framework. So I run the VNRecognizeTextRequest upon the press of a...
Perren asked 6/1, 2020 at 23:27
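A sketch of the coordinate conversion this usually comes down to, assuming the image is shown in a UIImageView with .scaleAspectFit; the scale/offset arithmetic accounts for the letterboxing and for the y-axis flip between Vision and UIKit:

```swift
import UIKit
import Vision

// Convert a Vision bounding box (normalized, bottom-left origin) into the
// coordinates of an image view that uses .scaleAspectFit.
func convert(boundingBox: CGRect, imageSize: CGSize, viewSize: CGSize) -> CGRect {
    // Scale factor and letterbox offsets introduced by aspect-fit.
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let displayedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let offsetX = (viewSize.width - displayedSize.width) / 2
    let offsetY = (viewSize.height - displayedSize.height) / 2

    // Normalized rect -> pixel rect in the original image.
    let imageRect = VNImageRectForNormalizedRect(boundingBox,
                                                 Int(imageSize.width),
                                                 Int(imageSize.height))
    // Flip the y-axis for UIKit, then apply scale and offsets.
    return CGRect(x: imageRect.minX * scale + offsetX,
                  y: (imageSize.height - imageRect.maxY) * scale + offsetY,
                  width: imageRect.width * scale,
                  height: imageRect.height * scale)
}
```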

3

I would really like some guidance on combining Apple's new Vision API with ARKit in a way that enables object recognition. This would not need to track the moving object, just recognize it stable i...
Continue asked 30/8, 2017 at 10:46

0

I'm trying to crop a CVImageBuffer (from AVCaptureOutput) using the boundingBox of a face detected by Vision (VNRequest). When I draw over the AVCaptureVideoPreviewLayer using: let origin = previewLa...
Bramante asked 8/10, 2019 at 14:45
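A minimal sketch of cropping the buffer via Core Image, assuming the face observation's normalized boundingBox; Vision and Core Image both use a bottom-left origin, so no vertical flip is needed for the crop itself (only when drawing into UIKit coordinates):

```swift
import Vision
import CoreImage

// Crop the captured buffer to a face observation's normalized bounding box.
func cropFace(from pixelBuffer: CVPixelBuffer, boundingBox: CGRect) -> CIImage {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let faceRect = VNImageRectForNormalizedRect(boundingBox, width, height)
    return CIImage(cvPixelBuffer: pixelBuffer).cropped(to: faceRect)
}
```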

2

I'm trying to get a simple rectangle-tracking controller going. Rectangle detection works just fine, but the tracking request always ends up failing for a reason I can't quite find. ...
Sulfa asked 6/9, 2017 at 9:07
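A sketch of the usual tracking setup, assuming the detection step already produced a VNRectangleObservation; two frequent failure modes are creating a new VNSequenceRequestHandler every frame and never feeding the updated observation back in, so both are shown explicitly:

```swift
import Vision

final class RectangleTracker {
    // The sequence handler must be reused across frames for tracking to work.
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNRectangleObservation?

    // Seed the tracker with an observation from VNDetectRectanglesRequest.
    func start(with observation: VNRectangleObservation) {
        lastObservation = observation
    }

    // Call once per frame with the new pixel buffer.
    func track(in pixelBuffer: CVPixelBuffer) {
        guard let observation = lastObservation else { return }
        let request = VNTrackRectangleRequest(rectangleObservation: observation)
        request.trackingLevel = .accurate
        do {
            try sequenceHandler.perform([request], on: pixelBuffer)
            // Feed the updated observation back in for the next frame.
            if let updated = request.results?.first as? VNRectangleObservation {
                lastObservation = updated
            }
        } catch {
            print("Tracking failed:", error)
        }
    }
}
```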

3

Solved

Is it possible to ship an iOS app with a CoreML model and then have the app continue improving (training) the model on device based on user behaviour for example? So, then the model would keep grow...
Odyssey asked 27/4, 2018 at 20:21
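Core ML 3 (iOS 13) added exactly this via MLUpdateTask, provided the model was exported as updatable. A rough sketch, where modelURL and trainingData are placeholders for a compiled .mlmodelc in a writable location and an MLBatchProvider built from the user's examples:

```swift
import CoreML

// Retrains an *updatable* Core ML model on device with new examples.
func update(modelAt modelURL: URL, with trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: nil) { context in
        // Persist the retrained model (to a writable location, not the app bundle)
        // so it is picked up on the next launch.
        try? context.model.write(to: modelURL)
    }
    task.resume()
}
```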

8

Solved

I'm looking through the Apple's Vision API documentation and I see a couple of classes that relate to text detection in UIImages: 1) class VNDetectTextRectanglesRequest 2) class VNTextObservation...

3

Solved

I need to convert the CGPoints received from a VNRectangleObservation (bottomLeft, bottomRight, topLeft, topRight) to another coordinate system (e.g. a view's coordinates on screen). I define a request: ...
Featherston asked 21/12, 2017 at 16:08
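A sketch of the per-corner conversion, assuming the view shows the image edge-to-edge; Vision's points are normalized with a bottom-left origin, so they are scaled to pixels with VNImagePointForNormalizedPoint and then flipped for UIKit (an aspect-fit layout would additionally need the scale/offset handling from the bounding-box example above):

```swift
import UIKit
import Vision

// Convert one normalized Vision point into the coordinate space of a view
// that displays the image edge-to-edge.
func convert(point: CGPoint, imageSize: CGSize, viewSize: CGSize) -> CGPoint {
    // Normalized -> pixel coordinates in the original image.
    let imagePoint = VNImagePointForNormalizedPoint(point,
                                                    Int(imageSize.width),
                                                    Int(imageSize.height))
    // Flip the y-axis for UIKit and scale into the view's size.
    return CGPoint(x: imagePoint.x / imageSize.width * viewSize.width,
                   y: (1 - imagePoint.y / imageSize.height) * viewSize.height)
}

// Usage with a VNRectangleObservation (sizes here are hypothetical):
// let corner = convert(point: observation.topLeft,
//                      imageSize: CGSize(width: 1920, height: 1080),
//                      viewSize: view.bounds.size)
```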

0

I want to detect a foot using SceneKit (ARKit) and place 3D shoes on it, just like detecting a body and placing shirts onto it. How can I do that? ARKit? Vision? Or Core ML? What will be the ...
Satyr asked 18/2, 2019 at 18:22

0

I want to reshape the face coordinates as shown in the video: https://www.dropbox.com/s/vsttylwgt25szha/IMG_6590.TRIM.MOV?dl=0 (Sorry, unfortunately the video is about 11 MB in size). I've just ...
Conference asked 31/12, 2018 at 11:12

1

Solved

I'm developing an ARKit app along with the Vision/AVKit frameworks. I'm using an MLModel for classification of my hand gestures. My app recognizes Victory, Okey and ¡No pasarán! hand gestures for controlling...
Dragoon asked 1/12, 2018 at 13:37

3

Solved

I'm working with the Vision framework to detect faces and objects in multiple images, and it works fantastically. But I have a question that I can't find answered in the documentation. The Photos app on iOS classifies faces...

3

Solved

I am creating my request with the following code: let textRequest = VNDetectTextRectanglesRequest(completionHandler: self.detectTextHandler) textRequest.reportCharacterBoxes = true self.requests ...
Foretoken asked 25/7, 2017 at 9:00

1

I wonder whether it is theoretically possible to detect wall edges/lines (like in the picture)? All I could achieve is detecting the vertices of rectangles that are visible in the camera preview. But we can'...

1

I'm trying to use the new Apple Vision API to detect a barcode from an image and return its details. I've successfully detected a QR code and returned a message using the CIDetector. However I can'...
Duplex asked 3/7, 2017 at 11:16
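A minimal sketch of the Vision-only path (no CIDetector), assuming a CGImage input; VNDetectBarcodesRequest reports each code's symbology and decoded payload:

```swift
import Vision

func detectBarcodes(in cgImage: CGImage) {
    let request = VNDetectBarcodesRequest { request, _ in
        guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
        for barcode in barcodes {
            // symbology identifies the code type (QR, EAN-13, Code 128, ...);
            // payloadStringValue is the decoded content when it is textual.
            print(barcode.symbology.rawValue, barcode.payloadStringValue ?? "no payload")
        }
    }
    // Optionally restrict detection to specific symbologies, e.g. QR only.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```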
