Project author: Willjay90

Project description:
Face Detection with CoreML
Language: Swift
Repository: git://github.com/Willjay90/AppleFaceDetection.git
Created: 2017-06-09T06:28:46Z
Project page: https://github.com/Willjay90/AppleFaceDetection


Face Detection with Vision Framework

iOS 11+
Swift 4+

Previously, in iOS 10, to detect faces in a picture you could use CIDetector (Apple)
or Mobile Vision (Google).

In iOS 11, Apple introduced Core ML. With the Vision framework, it’s much easier to detect faces in real time 😃

Try it out with real time face detection on your iPhone! 📱

You can find the differences between CIDetector and the Vision framework down below.

Moving From Viola-Jones to Deep Learning


Details

Specify the VNRequest for face detection, either VNDetectFaceRectanglesRequest or VNDetectFaceLandmarksRequest.

```swift
private var requests = [VNRequest]() // you can do multiple requests at the same time

var faceDetectionRequest: VNRequest!

@IBAction func UpdateDetectionType(_ sender: UISegmentedControl) {
    // use segmentedControl to switch over VNRequest
    faceDetectionRequest = sender.selectedSegmentIndex == 0 ?
        VNDetectFaceRectanglesRequest(completionHandler: handleFaces) :
        VNDetectFaceLandmarksRequest(completionHandler: handleFaceLandmarks)
}
```
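For the handler to actually run anything, the selected request also has to live in the `requests` array that gets performed each frame. A minimal setup sketch, assuming a `setupVision()` helper name that is illustrative and not necessarily the project's own:

```swift
import Vision

// Hypothetical helper: register the chosen request so that
// imageRequestHandler.perform(self.requests) picks it up.
func setupVision() {
    // start with rectangle-only detection; handleFaces is the completion handler above
    faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: handleFaces)
    requests = [faceDetectionRequest]
}
```

Call it once (e.g. in `viewDidLoad`) before the capture session starts delivering frames.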

Perform the requests on every single frame. The image comes from the camera via captureOutput(_:didOutput:from:); see AVCaptureVideoDataOutputSampleBufferDelegate.

```swift
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        let exifOrientation = CGImagePropertyOrientation(rawValue: exifOrientationFromDeviceOrientation()) else { return }

    var requestOptions: [VNImageOption: Any] = [:]

    if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
    }

    // perform image request for face detection
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestOptions)

    do {
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}
```
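The `exifOrientationFromDeviceOrientation()` helper used above is not shown here. A plausible sketch, mapping the current device orientation to an EXIF orientation raw value; the exact mapping is an assumption that holds for the back camera, and the front camera would need mirrored values:

```swift
import UIKit
import ImageIO

// Map device orientation to an EXIF orientation raw value
// (CGImagePropertyOrientation: up = 1, down = 3, right = 6, left = 8).
// Assumes the back camera; sketch only, not the project's exact code.
func exifOrientationFromDeviceOrientation() -> UInt32 {
    switch UIDevice.current.orientation {
    case .portraitUpsideDown: return CGImagePropertyOrientation.left.rawValue
    case .landscapeLeft:      return CGImagePropertyOrientation.up.rawValue
    case .landscapeRight:     return CGImagePropertyOrientation.down.rawValue
    default:                  return CGImagePropertyOrientation.right.rawValue
    }
}
```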

Handle the results of your request in a VNRequestCompletionHandler.

  • handleFaces for VNDetectFaceRectanglesRequest
  • handleFaceLandmarks for VNDetectFaceLandmarksRequest

Then you get the results of the request as an array of VNFaceObservation objects. That’s all you get from the Vision API.

```swift
func handleFaces(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        // perform all the UI updates on the main queue
        guard let results = request.results as? [VNFaceObservation] else { return }
        print("face count = \(results.count)")
        self.previewView.removeMask()
        for face in results {
            self.previewView.drawFaceboundingBox(face: face)
        }
    }
}

func handleFaceLandmarks(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        // perform all the UI updates on the main queue
        guard let results = request.results as? [VNFaceObservation] else { return }
        self.previewView.removeMask()
        for face in results {
            self.previewView.drawFaceWithLandmarks(face: face)
        }
    }
}
```

Lastly, DRAW the bounding box at the corresponding location on screen!

```swift
func drawFaceboundingBox(face: VNFaceObservation) {
    // The coordinates are normalized to the dimensions of the processed image,
    // with the origin at the image's lower-left corner.
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)
    let scale = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)
    let facebounds = face.boundingBox.applying(scale).applying(transform)

    _ = createLayer(in: facebounds)
}

// Create a new layer drawing the bounding box
private func createLayer(in rect: CGRect) -> CAShapeLayer {
    let mask = CAShapeLayer()
    mask.frame = rect
    mask.cornerRadius = 10
    mask.opacity = 0.75
    mask.borderColor = UIColor.yellow.cgColor
    mask.borderWidth = 2.0
    maskLayer.append(mask)
    layer.insertSublayer(mask, at: 1)
    return mask
}
```
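The scale-then-flip transform above can also be written out as plain arithmetic, which makes the coordinate conversion easy to check by hand. A minimal sketch; the `viewRect(forNormalizedRect:viewWidth:viewHeight:)` helper is illustrative, not part of the project:

```swift
import Foundation

// Convert a Vision-style normalized bounding box (origin at the lower-left,
// values in 0...1) into view coordinates (origin at the top-left);
// equivalent to applying `scale` then `transform` in the code above.
func viewRect(forNormalizedRect r: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> CGRect {
    let w = r.width * viewWidth
    let h = r.height * viewHeight
    let x = r.minX * viewWidth
    // flip the y-axis: Vision's origin is at the bottom, UIKit's at the top
    let y = viewHeight - r.minY * viewHeight - h
    return CGRect(x: x, y: y, width: w, height: h)
}
```

For example, a face filling the top-right quarter of the image, `CGRect(x: 0.5, y: 0.5, width: 0.5, height: 0.5)`, maps to `(200, 0, 200, 400)` in a 400×800 view.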