In iOS, AVMetadataFaceObject in AVFoundation is used for liveness detection, but the user's head-shake (yaw) information cannot be obtained.

I want to implement simple liveness detection, which requires capturing the user's head-shake motion through the camera. My plan is to read the yawAngle property of AVMetadataFaceObject. However, I have found that if the face is already in front of the lens when [session startRunning] is called, yawAngle stays equal to 0 no matter how the head is shaken. It only starts reporting values if the face leaves the frame and then re-enters the camera's view. Is this a bug? How can I work around it?
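For reference, this is roughly the setup the question describes, as a minimal sketch (the class name FaceYawReader is illustrative, not from the original post). One detail worth checking when debugging a constant 0 reading: AVMetadataFaceObject exposes hasYawAngle, which is false when the framework has no yaw estimate, so testing it distinguishes "no data yet" from a genuine 0° reading.

```swift
import AVFoundation

// Sketch of a capture session whose metadata output reports
// AVMetadataFaceObject instances, from which yawAngle is read.
final class FaceYawReader: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureMetadataOutput()
        if session.canAddOutput(output) { session.addOutput(output) }
        output.setMetadataObjectsDelegate(self, queue: .main)
        // metadataObjectTypes must be set after the output is added,
        // otherwise .face is not in availableMetadataObjectTypes.
        if output.availableMetadataObjectTypes.contains(.face) {
            output.metadataObjectTypes = [.face]
        }
        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for case let face as AVMetadataFaceObject in metadataObjects {
            // hasYawAngle is false when no yaw estimate is available;
            // without this check a missing estimate reads as 0.
            if face.hasYawAngle {
                print("yawAngle:", face.yawAngle)
            }
        }
    }
}
```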

Oct.10,2021