Vaibhav Aher · Portfolio FAU Erlangen · M.Sc. ICT Embedded Systems
Computer Vision · Embedded Linux · Python

RPi5 Object Orientation Tracker

Real-time orientation measurement for objects in a live camera feed, running on a Raspberry Pi 5 with 1GB RAM: no GPU, no cloud. Haar cascade detection, Canny edges + Hough transform for the angle, a sliding-window filter for stable output, and CSV logging. Extended from the BiViP image processing lab at FAU Erlangen-Nürnberg.

Hardware: Raspberry Pi 5
Speed: 10 fps
Resolution: 640 × 480
Stack: Python · OpenCV
Frame budget: 100 ms
01 Camera input: Picamera2 · 640×480 · XRGB8888
02 Preprocess: Gaussian blur 5×5 · grayscale
03 Detection: Haar cascade · scaleFactor 1.1
04 ROI crop: ±200px margin · boundary check
05 Canny edges: Tlow=50 · Thigh=150 · aperture=3
06 Hough lines: select line closest to 90°
07 Angle: theta → degrees · 0–180 range
08 Filter: deque window=5 · clear on miss
09 CSV log: timestamp · raw · filtered · count
10 Display: bounding box · angle overlay

Run it

# clone
git clone https://github.com/VaibhavAher100/rpi-object-orientation-tracker
cd rpi-object-orientation-tracker

# laptop — no Pi required for testing
pip install -r requirements.txt
python src/detector.py --no-picamera

# on the Pi
pip install -r requirements-rpi.txt
python src/detector.py

# plot the angle trace after a run
python src/visualize_results.py

# tune the detector
python src/detector.py --window 8 --cascade classifiers/pen_vertical_classifier.xml

Pipeline detail

Each stage feeds the next. The design choice that matters: in stage 06, the code selects the Hough line whose theta is closest to vertical (90°) rather than blindly taking the first line returned. Using lines[0] gave unstable angle readings whenever multiple lines were detected.
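A minimal sketch of that selection, assuming the cv2.HoughLines output shape (N, 1, 2) of (rho, theta) pairs; the helper name is hypothetical:

```python
import numpy as np

def pick_vertical_line(lines):
    """Return the (rho, theta-in-degrees) pair whose theta is closest to pi/2.

    `lines` mimics the cv2.HoughLines output shape (N, 1, 2); returns
    None when no lines were found, mirroring the stage 06 miss path.
    """
    if lines is None or len(lines) == 0:
        return None
    flat = np.asarray(lines, dtype=float).reshape(-1, 2)
    idx = np.argmin(np.abs(flat[:, 1] - np.pi / 2))
    rho, theta = flat[idx]
    # Stage 07: theta in [0, pi) maps directly to [0, 180) degrees.
    return rho, np.degrees(theta)
```

Taking the argmin over |theta − π/2| makes the choice deterministic regardless of the order OpenCV returns candidates in.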

01
Camera input
Picamera2 captures 640×480 in XRGB8888. --no-picamera flag swaps in cv2.VideoCapture for laptop testing.
02
Preprocess
Gaussian blur (5×5, σ=0) then grayscale. Reduces noise before the cascade fires.
03
Detection
detectMultiScale — scaleFactor=1.1, minNeighbors=5, minSize=(40,40). Returns a list of bounding boxes.
04
ROI crop
Expand the bounding box by ±200px / ±100px. Falls back to ±40px / ±20px when near the frame edge.
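One way to implement the margin-with-fallback logic; the exact fallback behavior in detector.py is an assumption here:

```python
def expand_roi(x, y, w, h, frame_w=640, frame_h=480):
    """Stage 04 sketch: widen the box by +/-200 px horizontally and
    +/-100 px vertically; near the frame edge, retry with the smaller
    +/-40 / +/-20 margins, then clamp to the frame as a last resort."""
    for mx, my in ((200, 100), (40, 20)):
        x0, y0 = x - mx, y - my
        x1, y1 = x + w + mx, y + h + my
        if x0 >= 0 and y0 >= 0 and x1 <= frame_w and y1 <= frame_h:
            return x0, y0, x1, y1
    return (max(x - 40, 0), max(y - 20, 0),
            min(x + w + 40, frame_w), min(y + h + 20, frame_h))
```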
05
Canny edges
Applied to the cropped grayscale region. Tlow=50, Thigh=150, apertureSize=3.
06
Hough lines
cv2.HoughLines on the edge image. Selects the line with theta closest to π/2 (vertical). Returns -1.0 if nothing found.
07
Angle
theta × 180/π. HoughLines returns theta in [0, π) so the result is already in [0, 180) degrees.
08
Moving average
deque(maxlen=5). Window clears when object count drops to zero — prevents stale angle bleeding into the next detection.
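A self-contained version of the filter; the class name and the -1.0 sentinel-on-miss return are assumptions beyond what the stage describes:

```python
from collections import deque

class AngleFilter:
    """Stage 08: sliding-window mean over the last `window` raw angles.
    The window clears when the object count drops to zero, so a stale
    angle never bleeds into the next detection."""

    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def update(self, raw_angle, object_count):
        if object_count == 0:
            self.buf.clear()
            return -1.0  # assumed sentinel, matching the no-line case
        self.buf.append(raw_angle)
        return sum(self.buf) / len(self.buf)
```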
09
CSV log
One row per frame: unix timestamp, raw_angle, filtered_angle, object_count. Header written on first call.
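The logging step reduces to an append-with-header-on-first-call helper; the function name is hypothetical, and the column names follow the stage description:

```python
import csv
import os
import time

def log_frame(path, raw_angle, filtered_angle, object_count):
    # Stage 09: one row per frame; write the header only if the file
    # does not exist yet.
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(
                ["timestamp", "raw_angle", "filtered_angle", "object_count"])
        writer.writerow([time.time(), raw_angle, filtered_angle, object_count])
```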
10
Display
Green bounding box. Angle printed top-left in red. cv2.imshow — press Q to quit.

Limitations

The worst-case detection sample (three false positives) is committed to results/; these limitations are documented rather than hidden.

False +
Cable edges fire the classifier. A vertical cable on the right of the frame has an edge profile similar to a vertical pen. Raising minNeighbors suppresses these hits but also drops weaker genuine detections.
Laptop
The classifier was not trained for general objects. When tested on a video of a person on a laptop, it occasionally fires on jacket and collar edges, returning a 0.0 angle. The cascade knows vertical pens, not everything.
Multi
Only the first detection gets an angle. object_count reflects all bounding boxes but raw_angle and filtered_angle come from the first box only. Tracking angle per object ID was out of scope.
Light
Low contrast breaks detection. The cascade needs reasonable lighting contrast. Tuning scaleFactor down helps sensitivity but slows the pipeline.
Vaibhav Aher

M.Sc. ICT · Embedded Systems · FAU Erlangen-Nürnberg
B.E. Electronics and Telecommunication · University of Mumbai