Deploying a Flask Application to the Cloud: A Step-by-Step Guide

Deploying a Flask application to the cloud

If you have developed a Flask application that you can run on your computer, you can easily make it public by deploying it to the cloud. There are a lot of options if you want to deploy your application to the cloud (for example, Google App Engine: https://cloud.google.com/appengine/, Microsoft Azure: https://azure.microsoft.com, Heroku: https://devcenter.heroku.com/, and Amazon Web Services: https://aws.amazon.com, among others). Additionally, you can also use PythonAnywhere (www.pythonanywhere.com), which is a Python online integrated development environment (IDE) and web hosting environment, making it easy to create and run Python programs in the cloud.

PythonAnywhere is very simple to use, and it is also a recommended way of hosting machine learning-based web applications. PythonAnywhere provides some interesting features, such as WSGI-based web hosting (for example, for Django, Flask, and Web2py applications).

In this section, we will see how to create a Flask application and how to deploy it on PythonAnywhere.

To show you how to deploy a Flask application to the cloud using PythonAnywhere, we are going to use the code of the mysite project. This code is very similar (with minor modifications) to the minimal face API we have previously seen in this chapter. These modifications will be explained after creating the site:

1. The first step is to create a PythonAnywhere account. For this example, a beginner account is enough (https://www.pythonanywhere.com/pricing/).

2. After registering, you will have access to your dashboard, as shown in the next screenshot. As you can see, I have created the user opencv.

3. The next step is to click on the Web menu and then click the Add new web app button, as shown in the next screenshot:

4. At this point, you are ready to create the new web app, as shown in the next screenshot:

5. Click Next, then click Flask, and select the latest version of Python. Finally, click Next to accept the project path.

This will create a Hello world Flask application that you can see if you visit https://your_user_name.pythonanywhere.com. In my case, the URL is https://opencv.pythonanywhere.com.
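For reference, the auto-generated flask_app.py looks roughly like the following sketch; the exact contents depend on the PythonAnywhere template, so treat this as illustrative:

# Illustrative sketch of the PythonAnywhere starter app;
# the actual generated template may differ slightly.
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello from Flask!'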

6. At this point, we are ready to upload our own project. The first step is to click on Go to directory in the Code section of the Web menu, as shown in the next screenshot:

7. We can upload files to our site using the Upload a file button. We have uploaded three files, as follows:

flask_app.py
face_processing.py
haarcascade_frontalface_alt.xml

This can be seen in the next screenshot:

You can see the uploaded content of these files by clicking the download icon. In this case, you can see the content of these files at the following URLs:

flask_app.py: https://www.pythonanywhere.com/user/opencv/files/home/opencv/mysite/flask_app.py
face_processing.py: https://www.pythonanywhere.com/user/opencv/files/home/opencv/mysite/face_processing.py
haarcascade_frontalface_alt.xml: https://www.pythonanywhere.com/user/opencv/files/home/opencv/mysite/haarcascade_frontalface_alt.xml

8. The next step is to set up the virtual environment. To accomplish this, a bash console should be opened by clicking on Open Bash console here (see the previous screenshot). Once it's opened, run the following command:

$ mkvirtualenv --python=/usr/bin/python3.6 my-virtualenv

You will see that the prompt changes from $ to (my-virtualenv)$. This means that the virtual environment has been activated. At this point, we will install all the required packages (flask and opencv-contrib-python):

(my-virtualenv)$ pip install flask
(my-virtualenv)$ pip install opencv-contrib-python

You can see that numpy is also installed, because it is a dependency of opencv-contrib-python. All these steps can be seen in the next screenshot:

If you want to install additional packages, do not forget to activate the virtual environment you have created. You can reactivate it with the following command:

$ workon my-virtualenv
(my-virtualenv)$
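As a quick sanity check (not one of the original steps, but a safe way to confirm the installation), you can verify that OpenCV imports correctly inside the virtual environment; this command prints the installed OpenCV version:

(my-virtualenv)$ python -c "import cv2; print(cv2.__version__)"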

9. At this point, we have almost finished. The final step is to reload the uploaded project by clicking on the Web option in the menu and reloading the site, which can be seen in the next screenshot.

Hence, we are ready to test the face API uploaded to PythonAnywhere, which can be accessed using https://opencv.pythonanywhere.com/. You will see something like the following screenshot:

You can see a JSON response. This JSON response is obtained because we have used the route() decorator to bind the info_view() function to the URL /. This is one of the modifications we have performed in this example in comparison with the minimal face API we have seen in this chapter. Therefore, we have modified the flask_app.py script to include:

@app.route('/', methods=["GET"])
def info_view():
    # List of routes for this API:
    output = {
        'info': 'GET /',
        'detect faces via POST': 'POST /detect',
        'detect faces via GET': 'GET /detect',
    }
    return jsonify(output), 200

This way, when accessing https://opencv.pythonanywhere.com/, we will get the list of routes for this API. This is helpful when uploading a project to PythonAnywhere in order to see that everything is working fine.
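For example, a quick way to confirm this from your local machine is a small request script like the following sketch (it assumes the opencv username used in this example):

# Minimal sketch: confirm that the deployed API lists its routes.
# Assumes the 'opencv' username from this example.
import requests

r = requests.get("https://opencv.pythonanywhere.com/")
print(r.status_code)  # Expected: 200
print(r.json())       # Expected: the routes defined in info_view()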

The second (and final) modification is performed in the face_processing.py script. In this script, we have changed the path of the haarcascade_frontalface_alt.xml file, which is used by the face detector:

class FaceProcessing(object):
    def __init__(self):
        self.file = "/home/opencv/mysite/haarcascade_frontalface_alt.xml"
        self.face_cascade = cv2.CascadeClassifier(self.file)

Note the path of the file, which matches the new path assigned when uploading the haarcascade_frontalface_alt.xml file to PythonAnywhere. This path should be changed according to the username (opencv in this case).
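The rest of the class is the same as in the minimal face API. As a reminder of how the cascade is used, here is a minimal sketch of a detection method; the face_detect() name and the exact detectMultiScale() parameters are assumptions for illustration, not taken from this excerpt:

# Minimal sketch of how FaceProcessing might use the cascade.
# The face_detect() name and detection parameters are illustrative.
import cv2
import numpy as np


class FaceProcessing(object):
    def __init__(self):
        self.file = "/home/opencv/mysite/haarcascade_frontalface_alt.xml"
        self.face_cascade = cv2.CascadeClassifier(self.file)

    def face_detect(self, image):
        # Decode the received bytes into an OpenCV image:
        image_array = np.asarray(bytearray(image), dtype=np.uint8)
        img_opencv = cv2.imdecode(image_array, -1)
        # Detect faces in the grayscale version of the image:
        gray = cv2.cvtColor(img_opencv, cv2.COLOR_BGR2GRAY)
        faces = self.face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Return one entry per face, with the box as [left, top, right, bottom]
        # (the format consumed by demo_request.py below):
        return [{'box': [int(x), int(y), int(x + w), int(y + h)]} for (x, y, w, h) in faces]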

In the same way as we did in previous examples, we can perform a POST request to the face API uploaded to PythonAnywhere. This is performed in the demo_request.py script:

# Import required packages:
import cv2
import numpy as np
import requests
from matplotlib import pyplot as plt


def show_img_with_matplotlib(color_img, title, pos):
    """Shows an image using matplotlib capabilities"""
    img_RGB = color_img[:, :, ::-1]

    ax = plt.subplot(1, 1, pos)
    plt.imshow(img_RGB)
    plt.title(title)
    plt.axis('off')


FACE_DETECTION_REST_API_URL = "http://opencv.pythonanywhere.com/detect"
IMAGE_PATH = "test_face_processing.jpg"

# Load the image and construct the payload:
image = open(IMAGE_PATH, "rb").read()
payload = {"image": image}

# Submit the POST request:
r = requests.post(FACE_DETECTION_REST_API_URL, files=payload)

# See the response:
print("status code: {}".format(r.status_code))
print("headers: {}".format(r.headers))
print("content: {}".format(r.json()))

# Get JSON data from the response and get 'result':
json_data = r.json()
result = json_data['result']

# Convert the loaded image to the OpenCV format:
image_array = np.asarray(bytearray(image), dtype=np.uint8)
img_opencv = cv2.imdecode(image_array, -1)

# Draw faces in the OpenCV image:
for face in result:
    left, top, right, bottom = face['box']
    # To draw a rectangle, you need the top-left and bottom-right corners of the rectangle:
    cv2.rectangle(img_opencv, (left, top), (right, bottom), (0, 255, 255), 2)
    # Draw the top-left and bottom-right corners (checking):
    cv2.circle(img_opencv, (left, top), 5, (0, 0, 255), -1)
    cv2.circle(img_opencv, (right, bottom), 5, (255, 0, 0), -1)

# Create the dimensions of the figure and set the title:
fig = plt.figure(figsize=(8, 6))
plt.suptitle("Using face API", fontsize=14, fontweight='bold')
fig.patch.set_facecolor('silver')

# Show the output image:
show_img_with_matplotlib(img_opencv, "face detection", 1)

# Show the Figure:
plt.show()

There is nothing new in this script, with the exception of the following line:

FACE_DETECTION_REST_API_URL = "http://opencv.pythonanywhere.com/detect"

Note that we are requesting our cloud API. The output of this script can be seen in the next screenshot. This way, we can confirm that our cloud API is up and running.
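For reference, the response parsed by demo_request.py has the following general shape, inferred from how the script reads 'result' and 'box'; the coordinate values here are invented for illustration:

# Illustrative shape of the JSON returned by POST /detect
# (values are made up, not actual output):
#
# {
#     "result": [
#         {"box": [132, 156, 282, 306]}
#     ]
# }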
