Using features from the .NET, OpenCV, and Microsoft APIs, we were able to build a complete app. For its interface, the app needed buttons, a serial port, and a text box.
The black window displays the image from the webcam, drawing a frame around each detected face with the person's name inside it.
The white window on the left shows the text passed to the TTS (Text-to-Speech) API, providing feedback so you can check whether the spoken words match the displayed ones.
In training mode, you need to add multiple recordings of your face in order to get better results from the program. Training consists of taking pictures of the face in multiple positions for optimal recognition (shots from different angles, different lighting conditions, etc.). You record them with the app by pressing the Record 10 Faces button.
In the code, the classifier is obtained with Face = Parent.faceClassifier; using the EmguCV platform. Recognition is performed with algorithms based on Principal Component Analysis (PCA), making multiple comparisons between the detected face and the ones stored during training.
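EmguCV performs this PCA comparison internally. To make the idea concrete, here is a toy Python sketch of the eigenface principle on tiny vectors (real faces would be full pixel arrays, and real recognizers keep many principal components rather than one). All function names here are illustrative, not part of EmguCV.

```python
def mean_vector(samples):
    # Average face: the per-dimension mean of all training vectors.
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def subtract(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def top_component(centered, iters=200):
    # Power iteration on the covariance matrix X^T X to find the
    # leading eigenvector -- the first "eigenface".
    dim = len(centered[0])
    v = [1.0] * dim
    for _ in range(iters):
        w = [0.0] * dim
        for x in centered:
            c = dot(x, v)
            for j in range(dim):
                w[j] += c * x[j]
        norm = dot(w, w) ** 0.5
        v = [t / norm for t in w]
    return v

def recognize(probe, training, labels):
    # Project every stored face and the probe onto the leading
    # eigenface, then return the label of the closest projection.
    mean = mean_vector(training)
    centered = [subtract(x, mean) for x in training]
    v = top_component(centered)
    probe_p = dot(subtract(probe, mean), v)
    best = min(range(len(training)),
               key=lambda i: abs(dot(centered[i], v) - probe_p))
    return labels[best]
```

The key point is that each face is reduced to a few PCA coefficients, so the comparison is between short vectors rather than whole images.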
The image that is stored is the one inside the blue rectangle, because it contains the distinctive features of the face.
We need to take several sets of face characteristics in different positions and under different lighting.
After a session of face detection shots, if you think the recorded faces were not good enough for the algorithm, the Restart 1 Face button deletes the saved files, and the app will no longer detect your face.
Delete Data is used to delete all stored photos when you want to record more than one person, so the robotic arm can distinguish between multiple people.
After you have made more recordings, the algorithm should be able to detect your face, and the name will be shown above the blue frame. Features like glasses can help precise recognition because they are distinctive characteristics.