2024
Gkouvra, Elpida; Betsas, Thodoris; Pateraki, Maria: Exploitation of Open Source Datasets and Deep Learning Models for the Detection of Objects in Urban Areas. In: 2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), pp. 4103-4108, 2024.
Tags: Cameras, Conferences, Context modeling, Data models, deep learning, deep learning models, Image segmentation, mobile mapping, Object detection, open-source datasets, Training, Transfer learning, Urban areas
Abstract: In this work we utilize different open-source datasets and deep learning models for detecting objects in image data captured by a mobile mapping system integrating the multi-camera Ladybug 5+1 in an urban area. In our experiments we exploit sets of pre-trained models, as well as models trained via transfer learning on available open-source datasets, for object detection and for semantic, instance, and panoptic segmentation. Tests with the trained models are performed on image data from the Ladybug 5+ camera.
2021
Lourakis, Manolis; Pateraki, Maria: Markerless Visual Tracking of a Container Crane Spreader. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pp. 2579-2586, 2021.
Tags: Cranes, Image segmentation, Motion segmentation, Solid modeling, Three-dimensional displays, Tracking, visualization
Abstract: Crane systems play a crucial role in container transport logistics. This paper presents an approach for visually tracking the position and orientation in 3D space of a container crane spreader. An initial pose estimate is first employed to render a 3D triangle mesh model of the spreader as a wireframe with hidden lines removed. The initial pose is then refined so that the visible lines of the wireframe match the straight line segments detected in an input image. Line segment matching relies on fast, local one-dimensional searches along a segment's normal direction. Matched line segments yield constraints on the spreader motion, which are processed with robust parameter estimation techniques that safeguard against outliers stemming from mismatches. The tracker automatically determines the visibility of segments, without making limiting assumptions regarding the spreader's 3D mesh model. It is also robust to parts of the tracked spreader being out of view, occluded, shadowed, or simply undetected. Experimental results demonstrating the tracker's performance are additionally included.