Conference attendances in 2014-2015

We have recently presented our research in the areas of multi-object visual tracking, heterogeneous computing and feature selection at the following conferences:

SPIE Defense + Security 2015: Automatic Target Recognition XXV

The following SPIE proceedings paper is available via the link below, or can be downloaded directly (see the download link beneath the paper details):

http://spie.org/DEF/conferencedetails/automatic-target-recognition

Mutual information for enhanced feature selection in visual tracking
Victor Stamatescu, Sebastien Wong, David Kearney, Ivan Lee, Anthony Milton

Winner of the Lockheed Martin 2015 ATR Best Paper Award

Download: def15_paper9476-2_rcl

Image and Vision Computing New Zealand (IVCNZ) Conference 2014

The two ACM proceedings papers below are available via the following link:

http://dl.acm.org/citation.cfm?id=2683405&picked=prox 

A Competitive Attentional Approach to Mitigating Model Drift in Adaptive Visual Tracking
Sebastien Wong, Adam Gatt, David Kearney, Anthony Milton, Victor Stamatescu
Pages: 1-6
DOI: 10.1145/2683405.2683406

The CACTuS multi-object visual tracking algorithm on a heterogeneous computing system
Anthony Milton, Sebastien Wong, David Kearney, Simon Lemmo
Pages: 19-24
DOI: 10.1145/2683405.2683426

 

Honours projects in 2015

Tracking multiple objects with UAV and 360° cameras

Prerequisites: coding skills in C/C++ are required; knowledge of MATLAB is a plus.

The Reconfigurable Computing Lab is offering two related honours research projects that involve collecting video data with a quad-copter-mounted camera or with a 360° Bublcam for use in multi-object tracking. Visual tracking systems work by adaptively learning an object’s position, velocity and shape. The following video demonstrates the capabilities of the CACTuS-FL visual tracker:

[Video: CACTuS-FL visual tracker demonstration]
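To give a flavour of the adaptive state estimation such trackers perform, here is a minimal sketch of a generic constant-velocity Kalman filter in Python. It is not the CACTuS-FL algorithm: the motion model, noise covariances and simulated detections are all assumptions chosen purely for illustration.

import numpy as np

# Illustrative only: a generic constant-velocity Kalman filter, not
# CACTuS-FL. It shows the predict/update loop a visual tracker uses to
# adaptively learn an object's position and velocity from noisy detections.

dt = 1.0  # time step between frames (assumed)

# State vector: [x, y, vx, vy].
F = np.array([[1, 0, dt, 0],     # motion model: position += velocity * dt
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01             # process noise (assumed)
R = np.eye(2) * 1.0              # measurement noise (assumed)

x = np.zeros(4)                  # initial state estimate
P = np.eye(4) * 100.0            # initial uncertainty

def step(x, P, z):
    """One predict/update cycle given a measured position z = (px, py)."""
    x = F @ x                        # predict the state one frame ahead
    P = F @ P @ F.T + Q              # and grow its uncertainty
    y = z - H @ x                    # innovation: measurement vs prediction
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain: how much to trust z
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Feed in noisy detections of an object drifting diagonally.
for t in range(10):
    z = np.array([t, t]) + np.random.randn(2) * 0.5
    x, P = step(x, P, z)
print("estimated position:", x[:2], "and velocity:", x[2:])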

Projects:

Panoramic vision: tracking multiple objects with a 360° camera
New data sets for visual tracking will be obtained using a Bublcam (http://www.bublcam.com), which has a 360° (full spherical) field of view. The project involves collecting positional and video data, interfacing with the camera via its API, generating ground-truth annotations (e.g. bounding boxes around each object of interest), and applying multi-object tracking software to the data. An additional objective is the development of new semi-automated ground-truth annotation tools, as sketched below.
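As a concrete starting point for the annotation tooling, the following Python sketch writes per-frame bounding boxes to a CSV file. The column layout and file names are assumptions; the project does not prescribe an annotation format.

import csv

# Assumed ground-truth format: one row per object per frame, with the
# bounding box given as top-left corner plus width and height in pixels.

def write_annotations(path, rows):
    """rows: iterable of (frame, obj_id, x, y, w, h) tuples."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "obj_id", "x", "y", "w", "h"])
        writer.writerows(rows)

# Example: two objects annotated over two consecutive frames.
write_annotations("ground_truth.csv", [
    (0, 1, 120, 80, 32, 48),
    (0, 2, 300, 150, 40, 40),
    (1, 1, 124, 82, 32, 48),
    (1, 2, 304, 149, 40, 40),
])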
Video Deblurring for Unmanned Aerial Vehicles
Video captured from unmanned aerial vehicles (UAVs) often suffers from motion blur due to platform movement (see [1] for examples). The objective of this project is to investigate motion deblurring and video stabilisation techniques for UAVs. The student will initially be provided with a pre-recorded video captured on a UAV and will apply deblurring algorithms such as [2] to improve video quality. The student will then deploy the deblurring algorithm with an off-the-shelf quad-copter and a head-mounted display in order to demonstrate improved first-person view (FPV) video. A minimal per-frame processing sketch follows the references below.
[1] Jinhai Cai and Ivan Lee, “The stitching of aerial videos from UAVs,” 2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ), pp. 448-452, 27-29 Nov. 2013. DOI: 10.1109/IVCNZ.2013.6727056
[2] Sunghyun Cho, Jue Wang, and Seungyong Lee, “Video deblurring for hand-held cameras using patch-based synthesis,” ACM Transactions on Graphics (TOG), vol. 31, no. 4, article 64, 2012.
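To illustrate the kind of per-frame processing pipeline this project involves, the Python/OpenCV sketch below applies a simple unsharp-mask sharpening to each frame of a pre-recorded video. This is deliberately not the patch-based synthesis method of [2]; it only shows the read-process-write loop a deblurring stage would slot into, and the file names are placeholders.

import cv2

# Crude per-frame sharpening as a stand-in for a real deblurring stage.
cap = cv2.VideoCapture("uav_video.mp4")            # placeholder input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("uav_sharpened.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Unsharp mask: subtract a blurred copy to boost high frequencies.
    blurred = cv2.GaussianBlur(frame, (0, 0), 3)
    out.write(cv2.addWeighted(frame, 1.5, blurred, -0.5, 0))

cap.release()
out.release()

A real deblurring stage would replace the unsharp-mask step, for example by estimating a per-frame blur kernel before deconvolution, and would typically be paired with the video stabilisation work mentioned above.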