PyCon 2016 in Portland, OR

Tuesday 3:15 p.m.–3:45 p.m.

Trainspotting: real-time detection of a train’s passing from video

Chloe Mawer

Audience level: Intermediate
Category: Science

Description

Almost anyone can set up their own motion-detection surveillance system using just a few Python functions. This talk walks through the development of a model that uses a real-time video feed to detect whether a train is passing and in which direction it is going. You'll learn some basic motion detection techniques in Python, as well as how video quality affects implementation.
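
For context, a minimal frame-differencing sketch of the kind of motion detection the talk covers is shown below. It assumes only that OpenCV (cv2) is installed; the video source, blur kernel, and thresholds are illustrative placeholders, not values from the talk.

    import cv2

    def to_gray(frame):
        # Grayscale + blur to suppress sensor noise before differencing.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.GaussianBlur(gray, (21, 21), 0)

    cap = cv2.VideoCapture("video.mp4")  # hypothetical source, not the talk's feed
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read from video source")
    prev = to_gray(frame)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        curr = to_gray(frame)

        # Pixels that changed between consecutive frames indicate motion.
        diff = cv2.absdiff(prev, curr)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

        # Placeholder sensitivity: flag frames where many pixels changed.
        if cv2.countNonZero(mask) > 5000:
            print("motion detected")

        prev = curr

    cap.release()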

Abstract

Thousands of Bay Area residents commute on the Caltrain every day, but, unfortunately, it is very unreliable and its reported delay predictions are often completely wrong. As part of a larger effort to build an algorithm that makes better predictions, we needed ground-truth information on the actual location of the Caltrain. Being near the Caltrain tracks, we decided to point a camera at them to record when trains pass.

In this talk, I will detail how I developed a model that uses a real-time video feed as input to detect whether a train is passing and in which direction it is going. The model builds on traditional image processing and motion detection techniques, implemented with the OpenCV package, but then takes advantage of the unique attributes of the problem: the asymmetric stationary environment across the camera frame and the length of a train relative to other moving objects. The model consists of a number of processing and classification steps, and at each step I will show video of the intermediate output, providing a visual understanding of every piece of the model. Attendees will gain an understanding of the basics of motion detection, with access to a GitHub repository containing the full set of code, which includes a number of helper image processing and motion detection functions that can be used outside the train detection algorithm.

After demonstrating the model, I will discuss how the Python code was structured so that it could be deployed and run continuously in real time. Lastly, I will walk through how the parameterization and quality of the model change with both video quality (resolution and frames per second) and the environment (lighting, noise). Attendees will learn how code and video quality interact and what to consider when implementing their own image processing and motion detection algorithms.
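
The abstract does not spell out the classification logic, but as a rough illustration of how direction might be inferred from a fixed camera, the hypothetical sketch below compares motion in a strip on each edge of the frame and reports which side a moving object entered from. The file name, strip widths, and trigger level are assumptions for illustration, not the talk's actual method.

    import cv2

    def motion_fraction(prev, curr, thresh=25):
        # Fraction of pixels in a region that changed between two frames.
        diff = cv2.absdiff(prev, curr)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return cv2.countNonZero(mask) / mask.size

    cap = cv2.VideoCapture("caltrain.mp4")  # hypothetical recording
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read from video source")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = prev.shape
    entered_from = None

    while entered_from is None:
        ok, frame = cap.read()
        if not ok:
            break
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Watch a strip on each edge of the frame; a train entering from
        # the left trips the left strip before the right one.
        left = motion_fraction(prev[:, : w // 4], curr[:, : w // 4])
        right = motion_fraction(prev[:, 3 * w // 4 :], curr[:, 3 * w // 4 :])

        if max(left, right) > 0.2:  # placeholder trigger level
            entered_from = "left" if left > right else "right"
            print("train entering from the", entered_from)

        prev = curr

    cap.release()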