Continuing the topic of our previous posts, we will now stream and overlay a real-time YouTube feed into our Colab notebook for further processing. The video streaming, and any AI-powered inference we later apply to it, will be slow; if your project needs relatively fast performance, on the order of several detections per second, this code should be run locally. All the necessary preliminary information is in this previous post. The whole notebook is at the end of the publication.
!pip install pafy
!pip install youtube_dl
YouTube alters the structure of its video metadata elements quite frequently; open-source code that interacts with these closed services sometimes struggles to keep up with the vendors' changes. In this case, a missing metadata element in the video files may cause our program to crash; before importing pafy, we hot-fix it on the fly to prevent this error:
offender = '/usr/local/lib/python3.7/dist-packages/pafy/backend_youtube_dl.py'
error = '''self._dislikes = self._ydl_info['dislike_count']'''
fix = '''#self._dislikes = self._ydl_info['dislike_count']'''

# Read in the file
with open(offender, 'r') as file:
    erroneous_code = file.read()

# Replace the target string
correct_code = erroneous_code.replace(error, fix)

# Write the file out again
with open(offender, 'w') as file:
    file.write(correct_code)
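The same find-and-replace step can be wrapped in a small reusable helper that skips the write when the patch has already been applied, so re-running the notebook cell does not comment the line twice. This is a sketch of our own (`patch_file` is not part of pafy), demonstrated here on a throwaway file:

```python
import os
import tempfile

def patch_file(path, old, new):
    """Replace `old` with `new` in the text file at `path`, only once."""
    with open(path) as f:
        code = f.read()
    if old in code and new not in code:  # skip if already patched
        with open(path, 'w') as f:
            f.write(code.replace(old, new))

# Demonstration on a throwaway file standing in for the pafy backend:
demo = os.path.join(tempfile.mkdtemp(), 'backend.py')
with open(demo, 'w') as f:
    f.write("self._dislikes = self._ydl_info['dislike_count']\n")

patch_file(demo, "self._dislikes", "#self._dislikes")
patch_file(demo, "self._dislikes", "#self._dislikes")  # second call is a no-op
```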
We are commenting out the offending line of code that tries to access the missing video metadata. The problem is being fixed here, in one among many other pafy clones available on GitHub. With the pafy file corrected, we can now import it and create our updated streaming function:
import time

import cv2
import pafy

def overlay_yt(image, output_image):
    start_time = time.time()
    url = 'CvvpsRJHS3o'
    video = pafy.new(url)
    best = video.getbest(preftype="mp4")
    stream = cv2.VideoCapture(best.url)
    ret, frame = stream.read()
    size = (600, 800)
    frame = cv2.resize(frame, size)
    output_image[:, :, 0:3] = frame
    output_image[:, :, 3] = 1
    # Add our logo if present:
    try:
        logo_file = '/content/ostirion_logo.jpg'
        img = cv2.imread(logo_file)
        new_size = (50, 50)
        img = cv2.resize(img, new_size, interpolation=cv2.INTER_AREA)
        # Bottom-right corner, one pixel in from the edge:
        lim = -new_size[0] - 1
        output_image[lim:-1, lim:-1, 0:3] = img
        output_image[lim:-1, lim:-1, 3] = 1
    except Exception:
        pass
    return output_image[:, :, ::-1]
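The logo placement relies on negative-index slicing: with a 50-pixel patch, `lim = -51` selects rows and columns -51 through -2, exactly 50 pixels, leaving a one-pixel margin at the edge. A minimal NumPy sketch of that arithmetic (the array shapes are illustrative, mirroring the function above):

```python
import numpy as np

frame = np.zeros((600, 800, 4))   # RGBA canvas, like output_image
logo = np.ones((50, 50, 3))       # stand-in for the resized logo
n = 50
lim = -n - 1                      # start 51 pixels from the edge
frame[lim:-1, lim:-1, 0:3] = logo # fills rows/cols -51..-2: 50 pixels each
frame[lim:-1, lim:-1, 3] = 1      # opaque alpha over the logo region
```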
We are using this stream from Laredo in Spain: https://www.youtube.com/watch?v=CvvpsRJHS3o. Notice that this feed may become unavailable in the future; simply select the video feed you need and replace the url value in the code above.
The last part of the URL, the video ID, becomes the argument for the pafy video object. We then overlay the frame onto our webcam feed, returning the output with its channel order reversed (OpenCV's BGR reordered to RGB).
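If you would rather paste a full watch URL than copy out the ID by hand, a small helper can extract it; this is our own convenience function, not part of pafy, and it only handles the standard `watch?v=` URL form:

```python
from urllib.parse import urlparse, parse_qs

def youtube_id(url):
    """Pull the video ID out of a standard watch URL; pass bare IDs through."""
    query = parse_qs(urlparse(url).query)
    return query.get('v', [url])[0]

youtube_id('https://www.youtube.com/watch?v=CvvpsRJHS3o')  # 'CvvpsRJHS3o'
```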
To show the feed in Colab, we use:
start_input()
label_html = 'Capturing Youtube Stream.'
img_data = ''

while True:
    js_reply = take_photo(label_html, img_data)
    if not js_reply:
        break
    image = js_reply_to_image(js_reply)
    drawing_array = get_drawing_array(image, overlay_function=overlay_yt)
    drawing_bytes = drawing_array_to_bytes(drawing_array)
    img_data = drawing_bytes
Now we should see the lagging video feed in the Colab notebook result cell. The overlay function may contain anything to be labeled or segmented; modify it to include detection on top. Of course, this will slow things down even more; the system is probably only useful as a demonstration, with any actual detection and overlay happening on a local or edge device.
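To check whether your setup actually reaches the "several detections per second" mark mentioned above, you can time the loop with a small rolling frame-rate meter; `FPSMeter` is our own illustrative name, to be called once per iteration of the display loop:

```python
import time

class FPSMeter:
    """Rolling frames-per-second estimate over the last `window` frames."""
    def __init__(self, window=30):
        self.window = window
        self.stamps = []

    def tick(self):
        # Record the current frame's timestamp, keeping only the last `window`.
        self.stamps.append(time.time())
        self.stamps = self.stamps[-self.window:]

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0
```

Call `meter.tick()` at the end of each `while True` pass and print `meter.fps()` every few frames to see the effective rate.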
Do not hesitate to contact us if you require quantitative model development, deployment, verification, or validation. We will also be glad to help you with your machine learning or artificial intelligence challenges when applied to asset management, automation, or intelligence gathering from satellite, drone, or fixed-point imagery.
The demonstration notebook is here.