
Toby Breckon

Toby discusses the work of his research team, which focuses on computer vision and robotic sensing.

Professor of Computer Vision

Durham University

Researcher Profile

Toby Breckon is Professor of Computer Vision, and Head of Visual Computing, at Durham University (UK), where he leads 'visual AI' research spanning sensing for autonomous road vehicles, robotic perception, automated visual surveillance and security X-ray image understanding, with a strong emphasis on generalized machine learning and pattern recognition techniques. Work from his research team has had significant impact within aviation security, where it helps secure 500+ million passengers per annum globally, within multi-modal wide-area surveillance (UK SAPIENT programme - British Standard Flex 355), and in anomaly detection with COSMONiO, a product-based startup company founded by former members of his research team that was acquired by Intel in 2020. He received the Royal Photographic Society Selwyn Award for early-career contribution to imaging science (2011), whilst his algorithmic contributions to global aviation security are recognised by the Royal Photographic Society Award for Imaging Science (2024). Prof. Breckon holds a PhD in 3D Computer Vision from the University of Edinburgh and is a Chartered Engineer, Chartered Scientist and Fellow of both the British Computer Society (FBCS) and the Institution of Engineering and Technology (FIET), in addition to being an Accredited Senior Imaging Scientist and Fellow of the Royal Photographic Society (ASIS FRPS).


Computer Vision - Advancing Automated Image Understanding

The work of my research team relates to computer vision and robotic sensing – the automatic interpretation of images by computer as an aspect of machine intelligence. Our work is enabled by both the fundamentals of image processing and recent advances in deep machine learning.

Within this domain, we specialize in several industry-facing problem domains spanning X-ray image understanding, automotive vision (autonomous vehicles), visual surveillance, robotic sensing, and general topics in object detection, classification, and broader image understanding.

This has resulted in over £7 million of research income, collaboration with 40+ government and industry partners (2007-2021+), and supported the development of AI software start-up COSMONiO by former team members (acquired by Intel, 2020).

Executive summary of project results
The work of my research team relates to computer vision and robotic sensing – the automatic understanding of images by computer as an aspect of artificial intelligence using deep learning (i.e. "visual AI"). Our work is enabled by both the fundamentals of image processing and recent advances in deep machine learning. Within this domain, we specialize in several industry-facing problem domains spanning X-ray image understanding, automotive vision (autonomous vehicles), visual surveillance, robotic sensing, and general topics in object detection, classification, and broader image understanding.

Within aviation security, our research work on X-ray image understanding pioneered the use of automated prohibited item detection algorithms within the sector, and the team is credited with designing the first complete solution for threat image projection (TIP) within 3D CT security scan imagery (E&T Innovation Awards 2020, Highly Commended; Dynamites Technology Awards 2021, Innovator of the Year, Highly Commended). Our 3D TIP approach is now used globally by several major security scanner manufacturers, in numerous major international airports, and helps to secure over 500 million passenger journeys per annum across five continents (2020). The team's work on anomaly detection was used by COSMONiO in their NOUS product; COSMONiO, founded by former members of the research team, was acquired by Intel in 2020. The team was also a collaborator in the original UK SAPIENT programme, and developed an infrared (thermal) based autonomous sensor unit to demonstrate 'the art of the possible' in inter-operable AI for multi-sensor wide-area surveillance. As of 2023, SAPIENT is a British Standard (BSI Flex 355) and the UK MoD inter-operability standard for counter-UAS (uncrewed air system) technology.

The broader international reach of this research is further chronicled in three research impact case studies submitted as part of the UK Research Excellence Framework (REF), spanning work on X-ray security imaging, automotive sensing and wide-area visual surveillance (2020/21), and I am the recipient of the Durham University Award for Excellence in Knowledge Transfer in recognition of outstanding contribution to the public benefit of research (2022).

How has your research benefited from using Bede?
Access to high-quality, high-performance and well-maintained GPU computing has enabled us to perform more experiments in a given period of time (prior to publication or end-of-project deadlines), to explore more use cases and datasets, and to produce research outputs with significantly higher scientific rigour.

Has using Bede allowed you to apply for further research funding?
Yes. This project is an ongoing theme of work in my team spanning a breadth of computer vision, image processing, and robotic sensing application domains, including automotive sensing, X-ray security image understanding, automated visual surveillance, and robotics. Since 2021 (the start of our Bede use) it has supported the capture and delivery of £2 million of research, and going forward (2025+) it supports the new Durham-based Centre for Algorithmic Life, part of a landmark £10 million research investment by the Leverhulme Trust (2025-2035) to investigate the way we understand and study the interaction between people, machine learning and artificial intelligence algorithms, with cross-discipline collaborations spanning Geography, Computer Science, Business, Sociology, Mathematics & Philosophy.

Publications
  • TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training (L. Li, T. Qiao, H.P.H. Shum, T.P. Breckon), In Proc. British Machine Vision Conference, BMVA, 2024. https://arxiv.org/abs/2408.13902
  • Towards Open-World Object-based Anomaly Detection via Self-Supervised Outlier Synthesis (B.K.S. Isaac-Medina, Y.F.A. Gaus, N. Bhowmik, T.P. Breckon), In Proc. European Conference on Computer Vision, Springer, pp. 196-214, 2024. https://doi.org/10.1007/978-3-031-73209-6_12
  • Disentangling Racial Phenotypes: Fine-Grained Control of Race-related Facial Phenotype Characteristics (S. Yucer, A. Atapour-Abarghouei, N. Al Moubayed, T.P. Breckon), In Proc. Int. Joint Conf. on Neural Networks, IEEE, pp. 1-10, 2024. https://doi.org/10.1109/IJCNN60899.2024.10650732
  • RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for LiDAR Semantic Segmentation (L. Li, H.P.H. Shum, T.P. Breckon), In Proc. European Conference on Computer Vision, Springer, pp. 222-241, 2024. https://doi.org/10.1007/978-3-031-72667-5_1
  • Less is More: Reducing Task and Model Complexity for Semi-Supervised 3D Point Cloud Semantic Segmentation (L. Li, H.P.H. Shum, T.P. Breckon), In Proc. Computer Vision and Pattern Recognition, IEEE/CVF, pp. 9361-9371, 2023. https://doi.org/10.1109/CVPR52729.2023.00903
  • Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for Neural Radiance Fields (B.K.S. Isaac-Medina, C.G. Willcocks, T.P. Breckon), In Proc. Computer Vision and Pattern Recognition, IEEE/CVF, pp. 66-75, 2023. https://doi.org/10.1109/CVPR52729.2023.00015
  • Robust Semi-Supervised Anomaly Detection via Adversarially Learned Continuous Noise Corruption (J.W. Barker, N. Bhowmik, Y.F.A. Gaus, T.P. Breckon), In Proc. Int. Conf. on Computer Vision Theory and Applications, Scitepress, Volume 5, pp. 615-625, 2023. https://doi.org/10.5220/0011684700003417
  • Joint Sub-component Level Segmentation and Classification for Anomaly Detection within Dual-Energy X-Ray Security Imagery (N. Bhowmik, T.P. Breckon), In Proc. Int. Conf. on Machine Learning Applications, IEEE, pp. 1463-1467, 2022. https://doi.org/10.1109/ICMLA55696.2022.00230
  • Does lossy image compression affect racial bias within face recognition? (S. Yucer, M. Poyser, N. Moubayed, T.P. Breckon), In Proc. Int. Joint Conf. on Biometrics, IEEE, pp. 1-10, 2022. https://doi.org/10.1109/IJCB54206.2022.10007956
  • Semi-Supervised Surface Anomaly Detection of Composite Wind Turbine Blades From Drone Imagery (J.W. Barker, N. Bhowmik, T.P. Breckon), In Proc. Int. Conf. on Computer Vision Theory and Applications, Scitepress, pp. 868-876, 2022. https://doi.org/10.5220/0010842100003124
  • Measuring Hidden Bias within Face Recognition via Racial Phenotypes (S. Yucer, F. Tekras, N. Al Moubayed, T.P. Breckon), In Proc. Winter Conference on Applications of Computer Vision, IEEE, pp. 3202-3211, 2022. https://doi.org/10.1109/WACV51458.2022.00326
  • DurLAR: A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications (L. Li, K.N. Ismail, H.P.H. Shum, T.P. Breckon), In Proc. Int. Conf. on 3D Vision, IEEE, pp. 1227-1237, 2021. https://doi.org/10.1109/3DV53792.2021.00130
  • Contraband Materials Detection Within Volumetric 3D Computed Tomography Baggage Security Screening Imagery (Q. Wang, T.P. Breckon), In Proc. Int. Conf. on Machine Learning Applications, IEEE, pp. 75-82, 2021. https://doi.org/10.1109/ICMLA52953.2021.00020
  • On the Impact of Using X-Ray Energy Response Imagery for Object Detection via Convolutional Neural Networks (N. Bhowmik, Y.F.A. Gaus, T.P. Breckon), In Proc. Int. Conf. on Image Processing, IEEE, pp. 1224-1228, 2021. https://doi.org/10.1109/ICIP42928.2021.9506608
  • Multi-Modal Learning for Real-Time Automotive Semantic Foggy Scene Understanding via Domain Adaptation (N. Alshammari, S. Akcay, T.P. Breckon), In Proc. Intelligent Vehicles Symposium, IEEE, pp. 1428-1435, 2021. https://doi.org/10.1109/IV48863.2021.9575309
  • Competitive Simplicity for Multi-Task Learning for Real-Time Foggy Scene Understanding via Domain Adaptation (N. Alshammari, S. Akcay, T.P. Breckon), In Proc. Intelligent Vehicles Symposium, IEEE, pp. 1413-1420, 2021. https://doi.org/10.1109/IV48863.2021.9575633
  • Autoencoders Without Reconstruction for Textural Anomaly Detection (P.A. Adey, S. Akcay, M.J.R. Bordewich, T.P. Breckon), In Proc. Int. Joint Conference on Neural Networks, IEEE, pp. 1-8, 2021. https://doi.org/10.1109/IJCNN52387.2021.9533804
  • On the Evaluation of Semi-Supervised 2D Segmentation for Volumetric 3D Computed Tomography Baggage Security Screening (Q. Wang, T.P. Breckon), In Proc. Int. Joint Conference on Neural Networks, IEEE, pp. 1-8, 2021. https://doi.org/10.1109/IJCNN52387.2021.9533631
