Deploying a Flask Application to the Cloud: A Step-by-Step Guide

Deploying a Flask application to the cloud

If you have developed a Flask application that you can run on your computer, you can easily make it public by deploying it to the cloud. There are a lot of options if you want to deploy your application to the cloud (for example, Google App Engine: https://cloud.google.com/appengine/, Microsoft Azure: https://azure.microsoft.com, Heroku: https://devcenter.heroku.com/, and Amazon Web Services: https://aws.amazon.com, among others). Additionally, you can also use PythonAnywhere (www.pythonanywhere.com), which is a Python online integrated development environment (IDE) and web hosting environment, making it easy to create and run Python programs in the cloud. PythonAnywhere is very simple, and it is also the recommended way of hosting machine learning-based web applications. PythonAnywhere provides some interesting features, such as WSGI-based web hosting (for example, Django, Flask, and Web2py).

In this section, we will see how to create a Flask application and how to deploy it on PythonAnywhere. To show you how to deploy a Flask application to the cloud using PythonAnywhere, we are going to use the code of the mysite project. This code is very similar (with minor modifications) to the minimal face API we have previously seen in this chapter. These modifications will be explained after creating the site:

1. The first step is to create a PythonAnywhere account. For this example, a beginner account is enough (https://www.pythonanywhere.com/pricing/).
2. After registering, you will have access to your dashboard. This can be seen in the next screenshot. As you can see, I have created the user opencv.
3. The next step is to click on the Web menu and then click the Add new web app button, as shown in the next screenshot.
4. At this point, you are ready to create the new web app, as shown in the next screenshot.
5. Click Next, then click Flask and the latest version of Python. Finally, click Next to accept the project path. This will create a Hello world Flask application that you can see if you visit https://your_user_name.pythonanywhere.com (a minimal sketch of such a file is shown at the end of this section). In my case, the URL is https://opencv.pythonanywhere.com.
6. At this point, we are ready to upload our own project. The first step is to click on Go to directory in the Code section of the Web menu, as shown in the next screenshot.
7. We can upload files to our site using the Upload a file button. We have uploaded three files, as follows:

    flask_app.py
    face_processing.py
    haarcascade_frontalface_alt.xml

This can be seen in the next screenshot. You can see the uploaded content of these files by clicking the download icon. In this case, you can see the content of these files in the following URLs: […]
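As a point of reference, the Hello world application that PythonAnywhere generates in step 5 is just a minimal Flask app. The following is a sketch of what such a flask_app.py typically looks like; the exact file generated by PythonAnywhere may differ slightly. Note that on PythonAnywhere the application is served through WSGI, so no app.run() call is needed:

    # flask_app.py -- minimal "Hello world" Flask application (sketch; the file
    # auto-generated by PythonAnywhere may differ slightly from this):
    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def hello_world():
        # The returned string is sent back as the response body:
        return 'Hello from Flask!'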

Web Computer Vision Applications Using OpenCV and Flask: A Beginner Tutorial

Web computer vision applications using OpenCV and Flask

In this section, we will see how to create web computer vision applications using OpenCV and Flask. We will start with the equivalent Hello world application using OpenCV and Flask.

A minimal example to introduce OpenCV and Flask

The hello_opencv.py script is coded to show how you can use OpenCV to perform a very basic web computer vision application. The code of this script is shown next:

    # Import required packages:
    import cv2
    from flask import Flask, request, make_response
    import numpy as np
    import urllib.request

    app = Flask(__name__)

    @app.route('/canny', methods=['GET'])
    def canny_processing():
        # Get the image:
        with urllib.request.urlopen(request.args.get('url')) as url:
            image_array = np.asarray(bytearray(url.read()), dtype=np.uint8)

        # Convert the image to OpenCV format:
        img_opencv = cv2.imdecode(image_array, -1)

        # Convert image to grayscale:
        gray = cv2.cvtColor(img_opencv, cv2.COLOR_BGR2GRAY)

        # Perform canny edge detection:
        edges = cv2.Canny(gray, 100, 200)

        # Compress the image and store it in the memory buffer:
        retval, buffer = cv2.imencode('.jpg', edges)

        # Build the response:
        response = make_response(buffer.tobytes())
        response.headers['Content-Type'] = 'image/jpeg'

        # Return the response:
        return response

    if __name__ == "__main__":
        # Add parameter host='0.0.0.0' to run on your machine's IP address:
        app.run(host='0.0.0.0')

The previous code can be explained with the help of the following steps:

1. The first step is to import the required packages. In this example, we have used the route() decorator to bind the canny_processing() function to the /canny URL. Additionally, the url parameter is also needed to perform the GET request correctly. In order to get this parameter, the request.args.get() function is used.
2. The next step is to read the image this URL holds, as follows:

    with urllib.request.urlopen(request.args.get('url')) as url:
        image_array = np.asarray(bytearray(url.read()), dtype=np.uint8)

This way, the image is read as an array.

3. The next step is to convert the image to OpenCV format and perform Canny edge processing, which should be performed on the corresponding grayscale image:

    # Convert the image to OpenCV format:
    img_opencv = cv2.imdecode(image_array, -1)

[…]
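To test the /canny endpoint once the script is running, you can perform a GET request that passes the url parameter. The following is a minimal client sketch using the requests package; it assumes hello_opencv.py is running locally on Flask's default port (5000), and the image URL is just a placeholder you would replace with a real one:

    # canny_client.py -- minimal sketch of a client for the /canny endpoint
    # (assumes the requests package is installed, hello_opencv.py is running
    # locally on port 5000, and the image URL is a placeholder):
    import requests

    IMAGE_URL = "https://example.com/some_image.jpg"  # hypothetical image URL

    # Perform the GET request, passing the image URL as the 'url' parameter:
    response = requests.get("http://localhost:5000/canny", params={"url": IMAGE_URL})

    # The response body is the JPEG-encoded edge image; save it to disk:
    with open("edges.jpg", "wb") as f:
        f.write(response.content)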

Implement Face Recognition with the face_recognition Package: A Beginner Guide

face_recognition

Face recognition with face_recognition uses the dlib functionality for both encoding the faces and calculating the distances between the encoded faces. Therefore, you do not need to code the face_encodings() and compare_faces() functions, but just make use of them.

The encode_face_fr.py script shows you how to create the 128D descriptor, making use of the face_recognition.face_encodings() function:

    # Import required packages:
    import cv2
    import face_recognition

    # Load image:
    image = cv2.imread("jared_1.jpg")

    # Convert image from BGR (OpenCV format) to RGB (face_recognition format):
    image = image[:, :, ::-1]

    # Calculate the encodings for every face of the image:
    encodings = face_recognition.face_encodings(image)

    # Show the first encoding:
    print(encodings[0])

To see how to compare faces using face_recognition, the compare_faces_fr.py script has been coded. The code is as follows:

    # Import required packages:
    import face_recognition

    # Load known images (remember that these images are loaded in RGB order):
    known_image_1 = face_recognition.load_image_file("jared_1.jpg")
    known_image_2 = face_recognition.load_image_file("jared_2.jpg")
    known_image_3 = face_recognition.load_image_file("jared_3.jpg")
    known_image_4 = face_recognition.load_image_file("obama.jpg")

    # Create names for each loaded image:
    names = ["jared_1.jpg", "jared_2.jpg", "jared_3.jpg", "obama.jpg"]

    # Load unknown image (this image is going to be compared against all the previously loaded images):
    unknown_image = face_recognition.load_image_file("jared_4.jpg")

    # Calculate the encodings for each of the images:
    known_image_1_encoding = face_recognition.face_encodings(known_image_1)[0]
    known_image_2_encoding = face_recognition.face_encodings(known_image_2)[0]
    known_image_3_encoding = face_recognition.face_encodings(known_image_3)[0]
    known_image_4_encoding = face_recognition.face_encodings(known_image_4)[0]
    known_encodings = [known_image_1_encoding, known_image_2_encoding, known_image_3_encoding, known_image_4_encoding]
    unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

    # Compare the faces:
    results = face_recognition.compare_faces(known_encodings, unknown_encoding)

    # Print the results:
    print(results)

The results obtained are [True, True, True, False]. Therefore, the first three loaded images ("jared_1.jpg", "jared_2.jpg", and "jared_3.jpg") are considered to be the same person as the unknown image ("jared_4.jpg"), while the fourth loaded image ("obama.jpg") is considered to be a different person.
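Internally, compare_faces() thresholds the Euclidean distance between encodings (0.6 by default, configurable via the tolerance parameter). If you want the actual distances rather than only True/False values, the face_recognition.face_distance() function can be used. The following is a small sketch reusing the names, known_encodings, and unknown_encoding variables from compare_faces_fr.py:

    # Sketch: obtain the raw distances behind compare_faces() (reuses the
    # names, known_encodings, and unknown_encoding variables defined above):
    distances = face_recognition.face_distance(known_encodings, unknown_encoding)

    # Smaller distances mean more similar faces; compare_faces() uses a 0.6
    # threshold by default, which can also be set explicitly via 'tolerance':
    results = face_recognition.compare_faces(known_encodings, unknown_encoding, tolerance=0.6)

    for name, distance, same in zip(names, distances, results):
        print("{}: distance = {:.3f}, same person = {}".format(name, distance, same))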

Implement Face Recognition with the dlib Library: A Beginner Guide

Face recognition with dlib

Dlib offers a high-quality face recognition algorithm based on deep learning that provides state-of-the-art accuracy. More specifically, the model achieves an accuracy of 99.38% on the Labeled Faces in the Wild database. The implementation of this algorithm is based on the ResNet-34 network proposed in the paper Deep Residual Learning for Image Recognition (2016), which was trained using three million faces. The created model (21.4 MB) can be downloaded from https://github.com/davisking/dlib-models/blob/master/dlib_face_recognition_resnet_model_v1.dat.bz2.

This network is trained in a way that generates a 128-dimensional (128D) descriptor, used to quantify the face. The training step is performed using triplets. A single triplet training example is composed of three images, two of which correspond to the same person. The network generates the 128D descriptor for each of the images, slightly modifying the neural network weights in order to make the two vectors that correspond to the same person closer and the feature vector from the other person further away. The triplet loss function formalizes this and tries to push the 128D descriptors of two images of the same person closer together, while pulling the 128D descriptors of two images of different people further apart. This process is repeated millions of times for millions of images of thousands of different people and, finally, the network is able to generate a 128D descriptor for each face. So, the final 128D descriptor is a good encoding for the following reasons:

The generated 128D descriptors of two images of the same person are quite similar to each other.
The generated 128D descriptors of two images of different people are very different.

Therefore, making use of the dlib functionality, we can use a pre-trained model to map a face into a 128D descriptor. Afterward, we can use these feature vectors to perform face recognition.

The encode_face_dlib.py script shows how to calculate the 128D descriptor, used to quantify the face. The process is quite simple, as shown in the following code:

    # Load image:
    image = cv2.imread("jared_1.jpg")

    # Convert image from BGR (OpenCV format) to RGB (dlib format):
    rgb = image[:, :, ::-1]

    # Calculate the encodings for every face of the image:
    encodings = face_encodings(rgb)

    # Show the first encoding:
    print(encodings[0])

As you can guess, the face_encodings() function returns the 128D descriptor for each face in the image:

    pose_predictor_5_point = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
    face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")
    detector = dlib.get_frontal_face_detector()

    def face_encodings(face_image, number_of_times_to_upsample=1, num_jitters=1):
        """Returns the 128D descriptor for each face in the image"""

        # Detect faces:
        face_locations = detector(face_image, number_of_times_to_upsample)

        # Detected landmarks:
        raw_landmarks = [pose_predictor_5_point(face_image, face_location) for face_location in face_locations]

        # Calculate the face encoding for every detected face using the detected landmarks for each one:
        return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]

As you can see, the key point is to calculate the face encoding for every detected […]
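Once two 128D descriptors have been calculated, face recognition reduces to comparing them: with this dlib model, a Euclidean distance below roughly 0.6 is the commonly used "same person" threshold. The following is a minimal sketch that reuses the face_encodings() function defined above; the image file names are placeholders for illustration:

    # Sketch: compare two faces using the Euclidean distance between their
    # 128D descriptors (file names are placeholders for illustration):
    import cv2
    import numpy as np

    encoding_1 = face_encodings(cv2.imread("face_a.jpg")[:, :, ::-1])[0]
    encoding_2 = face_encodings(cv2.imread("face_b.jpg")[:, :, ::-1])[0]

    # Euclidean distance between the two descriptors:
    distance = np.linalg.norm(encoding_1 - encoding_2)

    # A threshold of 0.6 is the value commonly used with this dlib model:
    print("distance = {:.3f} -> same person: {}".format(distance, distance < 0.6))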

Implement Face Recognition with OpenCV: A Beginner Guide

Face recognition with OpenCV

OpenCV provides support to perform face recognition (https://docs.opencv.org/4.0.1/dd/d65/classcv_1_1face_1_1FaceRecognizer.html). Indeed, OpenCV provides three different implementations to use:

Eigenfaces
Fisherfaces
Local Binary Patterns Histograms (LBPH)

These implementations perform the recognition in different ways. However, you can use any of them by changing only the way the recognizers are created. More specifically, to create these recognizers, the following code is necessary:

    face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    face_recognizer = cv2.face.EigenFaceRecognizer_create()
    face_recognizer = cv2.face.FisherFaceRecognizer_create()

Once created, and independently of the specific internal algorithm OpenCV uses to perform the face recognition, the two key methods, train() and predict(), are used to train and test the face recognition system, and the way we use them is independent of the recognizer created. Therefore, it is very easy to try the three recognizers and select the one that offers the best performance for a specific task. Having said that, LBPH should provide better results than the other two methods when recognizing images in the wild, where different environments and lighting conditions are usually involved. Additionally, the LBPH face recognizer supports the update() method, with which you can update the face recognizer given new data. For the Eigenfaces and Fisherfaces methods, this functionality is not possible.

In order to train the recognizer, the train() method should be called:

    face_recognizer.train(faces, labels)

The cv2.face_FaceRecognizer.train(src, labels) method trains the specific face recognizer, where src corresponds to the training set of images (faces), and the labels parameter sets the corresponding label for each image in the training set.

To recognize a new face, the predict() method should be called:

    label, confidence = face_recognizer.predict(face)

The cv2.face_FaceRecognizer.predict(src) method outputs (predicts) the recognition of the new src image by returning the predicted label and the associated confidence.

Finally, OpenCV also provides the write() and read() methods to save the created model and to load a previously created model, respectively. For both methods, the filename parameter sets the name of the model to save or load:

    cv2.face_FaceRecognizer.write(filename)
    cv2.face_FaceRecognizer.read(filename)

As mentioned, the LBPH face recognizer can be updated using the update() method:

    cv2.face_FaceRecognizer.update(src, labels)

Here, src and labels set the new training examples that are going to be used to update the LBPH recognizer.
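To illustrate how these methods fit together, the following is a minimal sketch of training and testing an LBPH recognizer. The image file names and label values are placeholders, and the face images are assumed to be already detected, cropped, and resized to a common size:

    # Sketch: train and test an LBPH face recognizer (requires opencv-contrib-python;
    # file names and labels are placeholders, and the face images are assumed to be
    # already cropped and resized to a common size):
    import cv2
    import numpy as np

    # Training set: grayscale face images and an integer label per image:
    faces = [cv2.imread("person_0_face_1.png", cv2.IMREAD_GRAYSCALE),
             cv2.imread("person_0_face_2.png", cv2.IMREAD_GRAYSCALE),
             cv2.imread("person_1_face_1.png", cv2.IMREAD_GRAYSCALE)]
    labels = np.array([0, 0, 1], dtype=np.int32)

    # Create and train the recognizer:
    face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    face_recognizer.train(faces, labels)

    # Save the trained model, and predict the label of a new face:
    face_recognizer.write("lbph_model.yml")
    test_face = cv2.imread("unknown_face.png", cv2.IMREAD_GRAYSCALE)
    label, confidence = face_recognizer.predict(test_face)
    print("predicted label: {}, confidence: {:.2f}".format(label, confidence))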

Implement Face Tracking with the dlib DCF-based Tracker: A Beginner Guide

Face tracking with the dlib DCF-based tracker

In the face_tracking_correlation_filters.py script, we perform face tracking using the dlib frontal face detector for initialization and the dlib DCF-based tracker DSST for face tracking. In order to initialize the correlation tracker, we execute the following command:

    tracker = dlib.correlation_tracker()

This initializes the tracker with default values (filter_size = 6, num_scale_levels = 5, scale_window_size = 23, regularizer_space = 0.001, nu_space = 0.025, regularizer_scale = 0.001, nu_scale = 0.025, and scale_pyramid_alpha = 1.020). Higher values of filter_size and num_scale_levels increase tracking accuracy, but require more computational power, increasing CPU processing. The recommended values for filter_size are 5, 6, and 7, and for num_scale_levels, 4, 5, and 6.

To begin tracking, the tracker.start_track() method is used. In this case, we perform face detection. If successful, we pass the position of the face to this method, as follows:

    if tracking_face is False:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Try to detect a face to initialize the tracker:
        rects = detector(gray, 0)
        # Check if we can start tracking (if we detected a face):
        if len(rects) > 0:
            # Start tracking:
            tracker.start_track(frame, rects[0])
            tracking_face = True

This way, the object tracker will start tracking what is inside the bounding box, which, in this case, is the detected face.

Additionally, to update the position of the tracked object, the tracker.update() method is called:

    tracker.update(frame)

This method updates the tracker and returns the peak-to-side-lobe ratio, which is a metric that measures how confident the tracker is. Larger values of this metric indicate high confidence. This metric can be used to reinitialize the tracker with frontal face detection.

To get the position of the tracked object, the tracker.get_position() method is called:

    pos = tracker.get_position()

This method returns the position of the object being tracked. Finally, we can draw the predicted position of the face:

    cv2.rectangle(frame, (int(pos.left()), int(pos.top())), (int(pos.right()), int(pos.bottom())), (0, 255, 0), 3)

In this script, we coded the option to reinitialize the tracker if the number 1 is pressed. If this number is pressed, we reinitialize the tracker, trying to detect a frontal face again. To clarify how this script works, two screenshots are included. In the first screenshot, the tracking algorithm is waiting until a frontal face detection is performed to initialize the tracking. In the second screenshot, the tracking algorithm is tracking a previously detected face; as mentioned, you can press the number 1 in order to reinitialize the tracking.
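Putting these pieces together, the following is a minimal sketch of the main loop of such a script. The capture source, window name, and key handling are illustrative assumptions and may differ from the original face_tracking_correlation_filters.py:

    # Sketch of the main tracking loop (capture source, window name, and key
    # handling are illustrative assumptions, not the original script verbatim):
    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    tracker = dlib.correlation_tracker()
    tracking_face = False

    capture = cv2.VideoCapture(0)
    while True:
        ret, frame = capture.read()
        if not ret:
            break

        if tracking_face is False:
            # Try to detect a face to initialize the tracker:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            rects = detector(gray, 0)
            if len(rects) > 0:
                tracker.start_track(frame, rects[0])
                tracking_face = True
        else:
            # Update the tracker and draw the predicted position:
            tracker.update(frame)
            pos = tracker.get_position()
            cv2.rectangle(frame, (int(pos.left()), int(pos.top())),
                          (int(pos.right()), int(pos.bottom())), (0, 255, 0), 3)

        cv2.imshow("Face tracking", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("1"):
            # Reinitialize the tracker (a frontal face will be detected again):
            tracking_face = False
        elif key == ord("q"):
            break

    capture.release()
    cv2.destroyAllWindows()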

Detecting Facial Landmarks with the face_recognition Package: A Beginner Guide

face_recognition

The landmarks_detection_fr.py script shows you how to both detect and draw facial landmarks using the face_recognition package. In order to detect landmarks, the face_recognition.face_landmarks() function is called, as follows:

    # Detect 68 landmarks:
    face_landmarks_list_68 = face_recognition.face_landmarks(rgb)

This function returns a dictionary of facial landmarks (for example, eyes and nose) for each face in the image. For example, if we print the detected landmarks, the output is as follows:

    [{'chin': [(113, 251), (111, 283), (115, 315), (122, 346), (136, 376), (154, 402), (177, 425), (203, 442), (231, 447), (260, 442), (285, 426), (306, 403), (323, 377), (334, 347), (340, 315), (343, 282), (343, 251)],
      'left_eyebrow': [(123, 223), (140, 211), (163, 208), (185, 211), (206, 220)],
      'right_eyebrow': [(240, 221), (263, 212), (288, 209), (312, 211), (332, 223)],
      'nose_bridge': [(225, 249), (225, 272), (225, 295), (226, 319)],
      'nose_tip': [(201, 337), (213, 340), (226, 343), (239, 339), (252, 336)],
      'left_eye': [(144, 248), (158, 239), (175, 240), (188, 254), (173, 255), (156, 254)],
      'right_eye': [(262, 254), (276, 240), (293, 239), (308, 248), (295, 254), (278, 255)],
      'top_lip': [(185, 377), (200, 370), (216, 364), (226, 367), (238, 364), (255, 370), (274, 377), (267, 378), (238, 378), (227, 380), (215, 379), (192, 378)],
      'bottom_lip': [(274, 377), (257, 391), (240, 399), (228, 400), (215, 398), (200, 391), (185, 377), (192, 378), (215, 381), (227, 382), (239, 380), (267, 378)]}]

The final step is to draw the detected landmarks:

    # Draw all detected landmarks:
    for face_landmarks in face_landmarks_list_68:
        for facial_feature in face_landmarks.keys():
            for p in face_landmarks[facial_feature]:
                cv2.circle(image_68, p, 2, (0, 255, 0), -1)

It should be noted that the signature of the face_recognition.face_landmarks() function is as follows:

    face_landmarks(face_image, face_locations=None, model="large")

Therefore, by default, the 68 feature points are detected. If model="small", only 5 feature points will be detected:

    # Detect 5 landmarks:
    face_landmarks_list_5 = face_recognition.face_landmarks(rgb, None, "small")

If we print face_landmarks_list_5, we get the following output:

    [{'nose_tip': [(227, 343)], 'left_eye': [(145, 248), (191, 253)], 'right_eye': [(307, 248), (262, 252)]}]

In this case, the dictionary only contains facial feature locations for both eyes and the tip of the nose. The output of the landmarks_detection_fr.py script can be seen in the following screenshot, which shows the result of drawing both the detected 68 and 5 facial landmarks using the face_recognition package.
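Because the returned value is just a dictionary of point lists per facial feature, you can also work with individual features. The following is a small sketch, assuming the image_68 and face_landmarks_list_68 variables from the script above, that outlines each feature with a connected polyline instead of individual circles:

    # Sketch: outline each facial feature with a polyline instead of circles
    # (reuses image_68 and face_landmarks_list_68 from the script above):
    import numpy as np

    for face_landmarks in face_landmarks_list_68:
        for facial_feature, points in face_landmarks.items():
            pts = np.array(points, dtype=np.int32).reshape((-1, 1, 2))
            # Draw an open polyline connecting the points of this feature:
            cv2.polylines(image_68, [pts], False, (255, 0, 0), 1)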

