In some cases, it is necessary to conduct a rapid survey of source data without devoting too much development effort, trading off processing time in return. Today, we needed to check whether, for a given drone model and a given footage quality, it is possible to obtain actionable intelligence.
As we usually say and repeat, Python is slow to compute and fast to develop. So, if we are looking for a quick prototype or a feasibility study, the best approach is first to tap into the Python open-source material for problems similar to the one at hand, check the availability of solutions, try them and, if successful, recode and adapt in another, faster-performing language with the full systems engineering or SCRUM requirements in place. The final part may be several orders of magnitude more effort-consuming than the first, if both are done right. Our initial prototypes do not have to be fast, reliable, or safe, as speed, reliability, and safety can always be provided by infrastructure. The technical solution itself, however, cannot: when developing a new solution, it is wise to prototype as fast as possible and then recode or translate with as much effort as necessary. Not the opposite. We promised in the past to discuss the clash between SCRUM and ISO 15288, and... we are not going to discuss it today. Note that these are tools, just tools with a purpose, and that it is possibly good to apply SCRUM to the creation of a prototype or working concept, and good to then apply ISO 15288 to the full development of a product constrained by reliability and safety requirements. And both in whatever language you need.
So, we start with this drone footage and the question of whether it could be suitable for remote infrastructure state inspection:
Initially, we would have to make the system extract the infrastructure features and then decide on their state. For the first task (the second requires a higher-level intelligent system), we can use out-of-the-box segmentation and check what a COCO-trained segmenter tells us about the footage. We are using pixellib, which enables very quick, although somewhat slow at inference, instance segmentation of video footage. The documentation is found in this link. We will need to download the pre-trained COCO model, or any other model the pixellib authors make available, in this link. We are using the Mask R-CNN model for this trial, to see which classes we can detect out of the box and whether any of our elements of interest in a water treatment installation are highlighted.
The code, in a single go, is:
!pip install pixellib
!pip install tensorflow
from google.colab import files  # for uploading the model/video and downloading the result
from pixellib.instance import instance_segmentation
input_video = 'Irizar_trimmed.mp4'
seg_video = 'Irizar_segmented.mp4'
segment_video = instance_segmentation()
segment_video.load_model('mask_rcnn_coco.h5')  # the pre-trained COCO Mask R-CNN weights downloaded above
segment_video.process_video(input_video, show_bboxes=True, frames_per_second=20, output_video_name=seg_video)
Import the required elements, remember to upload the model and the video (either our video file or your own), and run. Make sure your runtime is using GPU acceleration; inference will take approximately 1 second per frame, so it is not well suited to near-real-time applications. After 10 to 12 minutes, we have the results; download them. The result is both hilarious and interesting at the same time:
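As a rough sanity check on run time before launching the job, the arithmetic is simple: frames times seconds per frame. A minimal sketch with hypothetical numbers (the one-second-per-frame figure is what we observed; clip length and fps below are illustrative):

```python
def estimated_processing_minutes(duration_s: float, fps: float, secs_per_frame: float = 1.0) -> float:
    """Rough wall-clock estimate for offline video segmentation.

    duration_s:     length of the input clip in seconds (hypothetical value below)
    fps:            frames per second of the clip
    secs_per_frame: observed inference time per frame (~1 s on a GPU runtime)
    """
    total_frames = duration_s * fps
    return total_frames * secs_per_frame / 60.0

# A hypothetical 36-second clip at 20 fps gives 720 frames,
# i.e. about 12 minutes of processing at 1 s per frame.
print(estimated_processing_minutes(36, 20))  # 12.0
```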
It captures the main facility as a train and the pools as a boat:
The main methane tank as a "sports ball":
The decanters are a "sink" and a "clock", and indeed they do look like those objects:
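For a quick survey report, the mislabelled detections above can still be mapped back to the assets they actually correspond to. A minimal sketch (the lookup table simply records our own observations from this footage; it is not part of pixellib):

```python
# Observed COCO labels -> actual water treatment assets in this footage.
# This mapping is illustrative: it records the mislabellings we saw above.
COCO_TO_ASSET = {
    "train": "main facility building",
    "boat": "pool",
    "sports ball": "main methane tank",
    "sink": "decanter",
    "clock": "decanter",
}

def relabel(detected_classes):
    """Translate raw COCO class names into site asset names where known."""
    return [COCO_TO_ASSET.get(c, f"unknown ({c})") for c in detected_classes]

print(relabel(["train", "clock", "person"]))
# ['main facility building', 'decanter', 'unknown (person)']
```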
The results do not identify the real objects, as the model has not been trained to identify these specific, technical pieces of equipment. So the results are both hilarious and useful: in under 2 hours, we have determined that drone footage, this specific type of footage, could help identify and record certain industrial assets if a model were trained properly. The data is not good and the model is not good, but the direction is correct, and the industrial application waits behind many more hours of model training, development, and testing. We have to understand that water treatment facilities are safety-critical; any hardware and software combination that helps control or inspect them will undergo extensive reliability and safety trials. We are thousands of hours away, but on the right track.
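To make the gap concrete: none of the asset types we care about appear in the COCO label set, so an out-of-the-box COCO model can never name them, whatever its accuracy. A small sketch, using a partial list of the 80 COCO class names and a hypothetical target vocabulary for a site like this one:

```python
# Partial list of the 80 COCO class names (only those relevant to this post).
COCO_CLASSES = {
    "person", "bicycle", "car", "truck", "train", "boat",
    "sports ball", "sink", "clock", "bottle", "bench",
}

# Hypothetical asset vocabulary for a water treatment site.
TARGET_ASSETS = {"decanter", "methane tank", "pool", "aeration basin"}

def covered_out_of_the_box(targets, vocabulary=COCO_CLASSES):
    """Return the target classes a COCO-trained model could name directly."""
    return set(targets) & set(vocabulary)

print(covered_out_of_the_box(TARGET_ASSETS))  # set() -> custom training is needed
```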
If you require quantitative model development, deployment, verification, or validation, do not hesitate to contact us. We will also be glad to help you with your machine learning or artificial intelligence challenges when applied to asset management, automation, or intelligence gathering from satellite, drone, or fixed-point imagery.
The notebook, in Google Colab, for this post is located here.