Rahman, Quazi Marufur, Niko Sünderhauf, and Feras Dayoub. "Online Monitoring of Object Detection Performance Post-Deployment." arXiv preprint arXiv:2011.07750 (2020).

Post-deployment, an object detector is expected to operate at a level of performance similar to that reported on its test dataset. However, when deployed onboard mobile robots that operate under varying and complex environmental conditions, the detector's performance can fluctuate and occasionally degrade severely without warning. If this goes undetected, the robot may take unsafe and risky actions based on low-quality and unreliable object detections. We address this problem and introduce a cascaded neural network that monitors the performance of the object detector by predicting the quality of its mean average precision (mAP) over a sliding window of input frames. The proposed cascaded network exploits internal features from the deep neural network of the object detector. We evaluate our proposed approach using different combinations of autonomous driving datasets and object detectors.
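The sliding-window monitoring idea can be sketched in a few lines. In this toy Python illustration, `quality_fn` stands in for the trained cascaded network that maps pooled detector features to a predicted mAP quality score; the mean-pooling step, the class name, and the threshold value are illustrative assumptions, not the paper's actual design.

```python
from collections import deque

class SlidingWindowMonitor:
    """Toy sketch: flag frames where predicted detection quality drops.

    `quality_fn` is a stand-in for a trained quality predictor over the
    detector's internal features (pooled here by a simple element-wise mean).
    """

    def __init__(self, window_size, quality_fn, threshold=0.5):
        self.window = deque(maxlen=window_size)  # last N frames' features
        self.quality_fn = quality_fn
        self.threshold = threshold

    def update(self, frame_features):
        """Add one frame's feature vector; return True if quality looks OK."""
        self.window.append(frame_features)
        # Pool features element-wise across the current window.
        pooled = [sum(vals) / len(self.window) for vals in zip(*self.window)]
        score = self.quality_fn(pooled)          # predicted mAP quality
        return score >= self.threshold
```

With a window of two frames and a dummy quality function, the monitor reports acceptable quality while pooled features stay high and raises a warning (returns `False`) once they drop.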

Rahman, Quazi Marufur, Niko Sünderhauf, and Feras Dayoub. "Per-Frame mAP Prediction for Continuous Performance Monitoring of Object Detection During Deployment." In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, Autonomous Vehicle Vision, January 2021, pp. 152-160.

Performance monitoring of object detection is crucial for safety-critical applications such as autonomous vehicles that operate under varying and complex environmental conditions. Currently, object detectors are evaluated using summary metrics based on a single dataset that is assumed to be representative of all future deployment conditions. In practice, this assumption does not hold, and performance fluctuates as a function of the deployment conditions. To address this issue, we propose an introspection approach to performance monitoring during deployment that does not require ground truth data. We do so by predicting, from the detector's internal features, when the per-frame mean average precision drops below a critical threshold. We quantitatively evaluate our method and demonstrate its ability to reduce risk by raising an alarm and abstaining from detection rather than acting on an incorrect decision.

Rahman, Quazi Marufur, Niko Sünderhauf, and Feras Dayoub. "Did you miss the sign? A false negative alarm system for traffic sign detectors." 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2019): 3748-3753.

Object detection is an integral part of an autonomous vehicle for its safety-critical and navigational purposes. Traffic signs, as objects, play a vital role in guiding such systems. However, if the vehicle fails to locate a critical sign, it might suffer a catastrophic failure. In this paper, we propose an approach to identify traffic signs that have been mistakenly discarded by the object detector. The proposed method raises an alarm when it discovers that the object detector has failed to detect a traffic sign. This approach can be useful for evaluating the performance of the detector during the deployment phase. We trained a single shot multi-box object detector to detect traffic signs and used its internal features to train a separate false negative detector (FND). During deployment, the FND decides whether the traffic sign detector (TSD) has missed a sign. We use precision and recall to measure the accuracy of the FND on two different datasets. At 80% recall, the FND achieves 89.9% precision on the Belgium Traffic Sign Detection dataset and 90.8% precision on the German Traffic Sign Recognition Benchmark dataset. To the best of our knowledge, our method is the first to tackle this critical aspect of false negative detection in robotic vision. Such a fail-safe mechanism for object detection can improve the engagement of robotic vision systems in our daily life.
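The alarm-raising decision described above can be sketched as a simple rule: the FND, trained on the TSD's internal features, predicts whether a sign is present, and an alarm fires when that prediction conflicts with an empty detection output. In this minimal Python sketch, `fnd_score` stands in for the trained false negative detector's confidence, and the threshold value is an illustrative assumption.

```python
def false_negative_alarm(tsd_detections, fnd_score, alarm_threshold=0.5):
    """Raise an alarm when the FND believes a traffic sign is present in the
    frame but the TSD produced no detections for it.

    tsd_detections : list of detections emitted by the traffic sign detector
    fnd_score      : stand-in for the FND's confidence that a sign was missed,
                     computed from the TSD's internal features
    """
    sign_predicted_present = fnd_score >= alarm_threshold
    # Alarm only when the two detectors disagree: FND says "sign", TSD says "none".
    return sign_predicted_present and len(tsd_detections) == 0
```

In deployment, such a check would run per frame alongside the detector, flagging frames for fallback behaviour (e.g., slowing down or logging) rather than silently trusting an empty detection result.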



Rahman, Quazi Marufur, et al. "A sliding window-based algorithm for detecting leaders from social network action streams." 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT). Vol. 1. IEEE, 2015.

Influential users, or leaders, in a social network play an important role in viral marketing by spreading news quickly to a large number of people. Hence, various organizations aim to discover these leaders as campaign targets for advertisement so as to maximize customer reachability. Existing approaches detect leaders from a static social network. However, as social networks evolve, there is a growing need to detect leaders from dynamic streams of social network data. In this paper, we propose a sliding window-based leader detection (SWLD) algorithm for discovering leaders from streams of user actions in social networks. Experimental results show that SWLD is accurate, runs quickly, and requires a small amount of memory.
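The core idea of sliding-window leader detection can be illustrated with a toy Python sketch. Here an action is a (user, item) pair arriving in time order, and a user is counted as a leader when, within the current window, enough distinct other users repeat an item that the user acted on first. This propagation rule and the parameter names are simplifications for illustration, not the SWLD algorithm itself.

```python
from collections import deque, defaultdict

def detect_leaders(action_stream, window_size, min_followers):
    """Toy sketch of sliding-window leader detection over an action stream.

    A user is flagged as a leader if at least `min_followers` distinct other
    users act on the same item after them, inside the current window.
    """
    window = deque()        # actions currently inside the sliding window
    leaders = set()
    for user, item in action_stream:
        window.append((user, item))
        if len(window) > window_size:
            window.popleft()                  # expire the oldest action
        # Find the first actor per item in the window and who followed them.
        first_actor = {}
        followers = defaultdict(set)
        for u, it in window:
            if it not in first_actor:
                first_actor[it] = u
            elif u != first_actor[it]:
                followers[it].add(u)
        for it, fans in followers.items():
            if len(fans) >= min_followers:
                leaders.add(first_actor[it])
    return leaders
```

For example, if users "b" and "c" repeat item "x" shortly after "a" acts on it, "a" is reported as a leader; actions that fall outside the window no longer contribute influence, which keeps memory bounded as the stream grows.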



Rahman, Quazi Marufur, Peter Corke, and Feras Dayoub. "Run-Time Monitoring of Machine Learning for Robotic Perception: A Survey of Emerging Trends." arXiv e-prints (2021): arXiv-2101.

As deep learning continues to dominate all state-of-the-art computer vision tasks, it is increasingly becoming an essential building block of robotic perception. As a result, research questions concerning the safety and reliability of learning-based perception are gaining importance. Although there is an established field that studies the safety certification and convergence guarantees of complex software systems for decision-making at design time, the uncertainty of run-time conditions, the unknown future deployment environments of autonomous systems, and the complexity of learning-based perception systems make it problematic to generalise verification results from design time to run time. In the face of this challenge, attention is shifting towards run-time monitoring of the performance and reliability of perception systems, with several trends emerging in the literature. This paper identifies these trends and summarises the various approaches to the topic.