Last updated on September 25, 2017. This conference program is tentative and subject to change.
Technical Program for Tuesday, September 20, 2016
TuAT1 Special Session, Room A
SS4 Multimodal Image Processing and Fusion
Chair: Faion, Florian | Karlsruhe Inst. of Tech
Co-Chair: Zea, Antonio | Karlsruhe Inst. of Tech
10:30-10:50, Paper TuAT1.1
Collaborative Multi-Sensor Image Transmission and Data Fusion in Mobile Visual Sensor Networks Equipped with RGB-D Cameras
Wang, Xiaoqin | Monash Univ
Sekercioglu, Ahmet | Univ. De Tech. De Compiègne
Drummond, Tom | Monash Univ
Natalizio, Enrico | Univ. De Tech. De Compiègne
Fantoni, Isabelle | Heudiasyc - Univ. De Tech. De Compiègne - CNRS
Fremont, Vincent | UTC - Heudiasyc CNRS
Keywords: Information Fusion, Vision, Sensors
Abstract: We present a scheme for multi-sensor data fusion applications, called Relative Pose based Redundancy Removal (RPRR), that efficiently enhances wireless channel utilization in bandwidth-constrained operational scenarios for visual sensor networks equipped with RGB-D cameras. Pairs of nodes cooperatively determine their relative pose and, using this knowledge, identify the correlated data related to the common regions of the captured color and depth images. They then transmit only the non-redundant information present in these images. As an additional benefit, the scheme also extends battery life through a reduced number of packet transmissions. Experimental results confirm that significant gains in terms of wireless channel utilization and energy consumption are achieved when the RPRR scheme is used in visual sensor network operations.
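To make the redundancy-removal idea concrete, the jointly observed region can be found by reprojecting one camera's depth pixels into the partner camera using the estimated relative pose. A minimal NumPy sketch under simplifying assumptions (shared pinhole intrinsics K, pose (R, t) mapping camera A coordinates into camera B; an illustration, not the authors' RPRR implementation):

```python
import numpy as np

def redundancy_mask(depth_a, K, R, t, shape_b):
    """Mark pixels of camera A's depth image that reproject into camera B's
    view; these are the redundant regions a node can skip transmitting."""
    h, w = depth_a.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_a.ravel()
    # Back-project A's pixels to 3D points in A's camera frame.
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts_a = (np.linalg.inv(K) @ pix) * z
    # Transform into B's frame and project (B assumed to share intrinsics K).
    proj = K @ (R @ pts_a + t[:, None])
    ub, vb = proj[0] / proj[2], proj[1] / proj[2]
    hb, wb = shape_b
    inside = (proj[2] > 0) & (ub >= 0) & (ub < wb) & (vb >= 0) & (vb < hb)
    return ((z > 0) & inside).reshape(h, w)  # True = seen by both cameras
```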
10:50-11:10, Paper TuAT1.2
Depth Data Fusion for Simultaneous Localization and Mapping -- RGB-DD SLAM
Walas, Krzysztof, Tadeusz | Poznan Univ. of Tech
Nowicki, Michal | Poznan Univ. of Tech
Ferstl, David | Graz Univ. of Tech
Skrzypczynski, Piotr | Poznan Univ. of Tech
Keywords: SLAM, Vision, SS4 Multimodal Image Processing and Fusion
Abstract: This paper presents an approach to data fusion from multiple depth sensors with different principles of range measurement. The concept is motivated by the observation that depth sensors exploiting different range measurement techniques also have distinct characteristics of uncertainty and artifacts in the obtained depth images. Thus, fusing the information from two or more measurement channels allows us to mutually compensate for some of the unwanted effects. The target application for our combined sensor is Simultaneous Localization and Mapping (SLAM). We demonstrate that fusing depth data from two sources in a convex optimization framework yields better results in feature-based 3-D SLAM than the use of individual sensors for this task. The experimental part is based on data registered with a calibrated rig comprising ASUS Xtion Pro Live and MESA SwissRanger SR-4000 sensors, and ground truth trajectories obtained from a motion capture system. The results of sensor trajectory estimation are reported in terms of the ATE and RPE metrics, widely adopted by the SLAM community.
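For intuition, a generic convex energy for fusing two registered depth maps d1 and d2 with per-pixel confidence weights w1 and w2 under a total-variation prior could read as follows (an illustrative stand-in; the paper's actual functional may differ, e.g. in the choice of regularizer):

```latex
\hat{u} \;=\; \arg\min_{u}\;
\int_{\Omega} \Bigl( w_1 (u - d_1)^2 + w_2 (u - d_2)^2 \Bigr)\,\mathrm{d}x
\;+\; \lambda \int_{\Omega} \lvert \nabla u \rvert \,\mathrm{d}x ,
```

where the weights encode each sensor's spatially varying uncertainty and the TV term suppresses sensor-specific artifacts.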
11:10-11:30, Paper TuAT1.3
Active Spatial Interface Projecting Luminescent Augmented Reality Marker
Tsujimura, Takeshi | Saga Univ
Izumi, Kiyotaka | Saga Univ
Keywords: Virtual Reality, Cognitive Systems, Localization, Tracking and Navigation
Abstract: This paper presents a newly developed augmented reality system that emits luminescent AR markers using a projector. It enables active marker operations for mobile robot navigation, such as marker replacement, traveling markers, and postural deception. Experiments confirm its practical availability.
11:30-11:50, Paper TuAT1.4
Blind Model-Based Fusion of Multi-Band and Panchromatic Images
Wei, Qi | Univ. of Cambridge
Bioucas-Dias, Jose | Univ. of Lisbon
Dobigeon, Nicolas | Univ. of Toulouse
Tourneret, Jean-Yves | Univ. of Toulouse
Godsill, Simon | Univ. of Cambridge
Keywords: SS4 Multimodal Image Processing and Fusion, Information Fusion
Abstract: This paper proposes a blind model-based fusion method to combine a low-spatial-resolution multi-band image and a high-spatial-resolution panchromatic image. The method is blind in the sense that the spatial and spectral responses in the degradation model are unknown and estimated from the observed data pair. Gaussian and total variation priors are used to regularize the ill-posed fusion problem. The resulting optimization problem can be attacked efficiently using the recently developed robust multi-band image fusion algorithm in [1]. Qualitative and quantitative experimental results show that the fused image combines the spectral information from the multi-band image and the high spatial resolution information from the panchromatic image effectively, with very competitive computational time.
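In the spirit of the Gaussian and total-variation regularization mentioned above, a generic forward model and fusion objective can be sketched in assumed notation (not necessarily the paper's exact formulation):

```latex
\mathbf{Y}_m = \mathbf{X}\mathbf{B}\mathbf{S} + \mathbf{N}_m, \qquad
\mathbf{Y}_p = \mathbf{R}\mathbf{X} + \mathbf{N}_p, \\
\hat{\mathbf{X}} = \arg\min_{\mathbf{X}}\;
\tfrac{1}{2}\lVert \mathbf{Y}_m - \mathbf{X}\mathbf{B}\mathbf{S} \rVert_F^2
+ \tfrac{1}{2}\lVert \mathbf{Y}_p - \mathbf{R}\mathbf{X} \rVert_F^2
+ \lambda\,\phi(\mathbf{X}),
```

where X is the latent high-resolution multi-band image, B a spatial blur, S spatial downsampling, R the spectral response producing the panchromatic band, and φ the Gaussian or TV prior; in the blind setting, B and R are themselves estimated from the observed pair.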
11:50-12:10, Paper TuAT1.5
An Improved ViBe for Video Moving Object Detection Based on Evidential Reasoning
Yang, Yun | Xi'an Jiaotong Univ
Han, Deqiang | Xi'an Jiaotong Univ
Ding, Jiankun | Xi'an Jiaotong Univ
Yang, Yi | Xi'an Jiaotong Univ
Keywords: Vision, Information Fusion, SS4 Multimodal Image Processing and Fusion
Abstract: Visual Background Extractor (ViBe) is a video moving object detection method with a simple implementation and fast speed. ViBe uses a detection threshold (neighborhood size) to judge whether a pixel belongs to the background or the foreground. However, in some complicated scenes, the belongingness of the pixels is ambiguous. Object detection cannot be performed well using ViBe with a single threshold, which amounts to a hard decision that ignores the uncertainty involved. In this paper, we use two thresholds to describe the uncertainty in ViBe-based color video detection, and use evidence theory to model and handle the uncertainty. Experimental results show that the proposed approach achieves better detection performance than the original ViBe method.
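The role of the two thresholds can be illustrated with a toy evidential assignment: sample distances below the tight threshold support the background hypothesis, distances between the two thresholds feed the ignorance mass, and a pignistic-style decision resolves the ambiguity. Thresholds and decision rule below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def evidential_pixel_decision(pixel, bg_samples, t_tight=20.0, t_loose=40.0):
    """Two-threshold ViBe-style decision with a Dempster-Shafer flavor:
    close matches support {background}, ambiguous matches feed the
    ignorance mass on {background, foreground}."""
    d = np.abs(bg_samples.astype(float) - pixel)
    n = len(bg_samples)
    m_bg = np.sum(d < t_tight) / n                      # mass on {bg}
    m_unk = np.sum((d >= t_tight) & (d < t_loose)) / n  # mass on {bg, fg}
    m_fg = 1.0 - m_bg - m_unk                           # mass on {fg}
    bet_bg = m_bg + 0.5 * m_unk   # pignistic probability of background
    return "background" if bet_bg >= 0.5 else "foreground"

samples = np.array([120, 118, 125, 122, 119, 121])  # pixel's background model
print(evidential_pixel_decision(140, samples))  # mixed evidence -> background
```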
12:10-12:30, Paper TuAT1.6
High Accuracy 3D Data Acquisition Using Co-Registered OCT and Kinect
Rajput, Omer | Hamburg Univ. of Tech
Antoni, Sven-Thomas | Hamburg Univ. of Tech
Otte, Christoph | Hamburg Univ. of Tech
Saathoff, Thore | Hamburg Univ. of Tech
Matthäus, Lars | Univ. of Luebeck
Schlaefer, Alexander | Hamburg Univ. of Tech
Keywords: Localization, Tracking and Navigation, Sensors, Biomedical Robotics
Abstract: In many clinical scenarios, the spatial resolution of typical time-of-flight devices is not sufficient. Optical coherence tomography (OCT) presents an interesting high resolution modality with applications in image guided surgery and tissue characterization. However, the small field of view makes scanning larger areas difficult. We therefore consider co-registering a ToF depth camera (Kinect) with its large field-of-view but limited accuracy and OCT with its smaller field-of-view but very high accuracy. We study two approaches to obtain a registration between Kinect and OCT. The first approach is based on a novel marker and a direct registration between the two devices, either with or without using the construction of the novel marker. The second approach uses the marker to obtain a calibration between the OCT and a hexapod (Stewart platform) carrying it, and separate calibration between hexapod and Kinect. We show that the first approach typically results in better registration between Kinect and OCT with translational and rotational errors of (2.25 +/- 1.23) mm and (1.54 +/- 0.77)°, respectively. Furthermore, we demonstrate the use of the combined system to obtain a high resolution scan of the irregularly shaped surface of a head phantom.
TuAT2 Regular Session, Room B
Sensor Registration and Management
Chair: Bender, Daniel | Fraunhofer FKIE
Co-Chair: Sander, Jennifer | Fraunhofer IOSB, Karlsruhe, Germany
10:30-10:50, Paper TuAT2.1
A Computer-Aided Assistance System for Resource-Optimal Sensor Scheduling in Intelligence, Surveillance, and Reconnaissance
Sander, Jennifer | Fraunhofer IOSB, Karlsruhe, Germany
Reinert, Frank | Fraunhofer IOSB, Karlsruhe, Germany
Keywords: Planning and Control, Sensor Registration and Management, Information Fusion
Abstract: For maximizing the benefit of today's ISR (Intelligence, Surveillance, and Reconnaissance) systems, improved collection planning is essential. A two-step approach for resource-optimal sensor scheduling has been developed in close cooperation with subject matter experts from the military ISR domain. It consists of a pre-selection step where, for each target, the set of available assets that are principally suited is determined. In the second step, an automatic planning component derives concrete proposals for which assets should finally be assigned to which targets and for the dedicated routes individual assets have to follow in order to serve these targets. The pre-selection step is realized as an interactive step and based on the concept of a chain of filters in which individual filters work independently from each other on the set of available assets. The automatic planning component is based on a combinatorial optimization problem solved using a dedicated metaheuristic approach. The two-step approach for resource-optimal sensor scheduling constitutes the essential basis of a computer-aided assistance system for ISR management personnel, which has been implemented as a demonstrator. Although coming from a military context, the described concepts and solutions also apply to civil applications with similar requirements.
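The interactive pre-selection step, a chain of filters acting independently on the asset set, maps naturally onto a list of predicates. A minimal sketch (the filter criteria and asset fields are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    sensor_types: set
    range_km: float
    available: bool

def suited_assets(assets, target, filters):
    """Each filter works independently on the full set of available assets;
    the pre-selection is the intersection of all filter results."""
    result = set(a.name for a in assets)
    for f in filters:
        result &= set(a.name for a in assets if f(a, target))
    return result

filters = [
    lambda a, t: a.available,
    lambda a, t: t["required_sensor"] in a.sensor_types,
    lambda a, t: a.range_km >= t["distance_km"],
]

assets = [Asset("UAV-1", {"EO", "IR"}, 50.0, True),
          Asset("UGV-2", {"radar"}, 10.0, True)]
target = {"required_sensor": "IR", "distance_km": 30.0}
print(suited_assets(assets, target, filters))  # {'UAV-1'}
```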
10:50-11:10, Paper TuAT2.2
Towards Integrated Threat Assessment and Sensor Management: Bayesian Multi-Target Search
Oldfield, James Peter | Cubica Tech
Page, Scott | Cubica Tech
Thomas, Paul | Dstl
Keywords: Sensor Registration and Management, Probabilistic Methods, Information Fusion
Abstract: Currently, most land intelligence, surveillance and reconnaissance (ISR) systems, especially those employed in critical infrastructure protection contexts, comprise a suite of sensors (e.g. EO/IR, radar, etc.) loosely integrated into a central command and control (C2) system with limited autonomy. We consider a concept of a modular and autonomous architecture where a set of heterogeneous autonomous sensor modules (ASMs) connect to a high-level decision making module (HLDMM) in a plug-and-play manner. Working towards an integrated threat evaluation and sensor management approach capable of optimizing the ASM suite to search for, localise, and capture relevant imagery of multiple threats in and around the area under protection, we propose a Bayesian multi-target search algorithm. In contrast to earlier work, we demonstrate how the algorithm can reduce the time to acquire threats through the incorporation of target dynamics. The derivation of the algorithm from an information-theoretic perspective is given and its links with the probability hypothesis density (PHD) filter are explored. We discuss the results of a demonstration HLDMM system which embodies the search algorithm and was tested in realistic base protection scenarios with live sensors and targets.
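A Bayesian search over a gridded area typically maintains a per-cell presence probability and updates it after each negative sensor look. A minimal sketch of that recursion, assuming a known detection probability p_d (grid size and values are illustrative):

```python
import numpy as np

def update_after_miss(prior, looked_cells, p_d=0.8):
    """Bayes update of per-cell target-presence probabilities after a
    sensor observed `looked_cells` and detected nothing."""
    post = prior.copy()
    p = post[looked_cells]
    # P(target | miss) = p (1 - p_d) / (p (1 - p_d) + (1 - p))
    post[looked_cells] = p * (1 - p_d) / (p * (1 - p_d) + (1 - p))
    return post

grid = np.full((10, 10), 0.05)          # uniform prior over 100 cells
mask = np.zeros_like(grid, dtype=bool)  # sensor footprint for this look
mask[2:5, 2:5] = True
grid = update_after_miss(grid, mask)
next_look = np.unravel_index(np.argmax(grid), grid.shape)  # greedy next cell
```

Target dynamics, which the paper uses to reduce acquisition time, would add a prediction step (e.g., diffusing probability mass between cells) before each update.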
11:10-11:30, Paper TuAT2.3
A Position Free Boresight Calibration for INS-Camera Systems
Bender, Daniel | Fraunhofer FKIE
Cremers, Daniel | Tech. Univ. of Munich
Koch, Wolfgang | FGAN-FKIE
Keywords: Sensor Registration and Management, Sensors, Vision
Abstract: In this paper, we present an innovative calibration procedure to determine the angle misalignments, also known as boresight, between the coordinate systems of an inertial navigation system (INS) and a camera. All currently known approaches integrate positional information from the INS in the optimization process. Thereby, the position errors of most INS devices, in the range of a few meters, negatively influence the accuracy of the boresight estimation in state-of-the-art calibration methods. By using line features instead of classical point features within the calibration process, we are able to perform the optimization without positional information and avoid being affected by the corresponding noisy data. This can improve the calibration results for systems of all accuracy levels. For the first time, a reliable calibration for systems with poor positional estimation is possible. The presented approach can be applied to images observing a checkerboard, which allows the calibration of the intrinsic camera parameters and boresight misalignment angles from the same dataset. We confirm the high performance of the presented procedure by evaluating simulated and real-world experiments. The achieved results show the capability to reduce the boresight errors to small sub-degree values.
11:30-11:50, Paper TuAT2.4
Proton: A Visuo-Haptic Data Acquisition System for Robotic Learning of Surface Properties
Burka, Alexander | Univ. of Pennsylvania
Hu, Siyao | Univ. of Pennsylvania
Helgeson, Stuart | Univ. of Pennsylvania
Krishnan, Shweta | Univ. of Pennsylvania
Gao, Yang | UC Berkeley
Hendricks, Lisa Anne | UC Berkeley
Darrell, Trevor | UC Berkeley
Kuchenbecker, Katherine J. | Univ. of Pennsylvania
Keywords: Sensors, Machine Learning and Artificial Intelligence, Sensor Registration and Management
Abstract: Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically react during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors’ interactions with thousands of real-world surfaces, such as wood flooring, upholstered fabric, asphalt, grass, and anodized aluminum. As the first step in this effort, we detail the design and construction of the Proton, a multimodal data acquisition system that a human operator can use to gather the envisioned data set. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a SynTouch BioTac artificial fingertip, an OptoForce three-axis force sensor, and a steel tooling ball) allow for different material properties at the contact point and provide additional tactile data. This sensor suite emulates the capabilities of the human senses of vision and touch, with the goal of learning surface classification methods that are robust over different sensory modalities. We detail the calibration process for the motion and force sensing systems, as well as a proof-of-concept surface discrimination experiment using the tooling ball end-effector and a Vicon motion tracker. A multi-class SVM trained on the collected force and vibration data achieved 86% classification accuracy among five sample surfaces.
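The final classification step, a multi-class SVM on force and vibration features, can be sketched as follows (features, synthetic data, and parameters are illustrative placeholders, not the Proton pipeline):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(force, vibration):
    """Simple per-recording features: force statistics plus vibration
    energy in a few frequency bands."""
    spec = np.abs(np.fft.rfft(vibration))
    bands = np.array_split(spec, 4)
    return np.hstack([force.mean(), force.std(),
                      [np.sum(b ** 2) for b in bands]])

# X: one feature row per contact recording, y: surface label (0..4)
rng = np.random.default_rng(0)
X = np.vstack([features(rng.normal(1 + c, 0.1, 500),
                        rng.normal(0, 0.1 + 0.05 * c, 500))
               for c in range(5) for _ in range(20)])
y = np.repeat(np.arange(5), 20)
clf = SVC(kernel="rbf", C=10.0)       # one-vs-one multi-class by default
print(cross_val_score(clf, X, y, cv=5).mean())
```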
11:50-12:10, Paper TuAT2.5
Fusion of Wearable Sensors and Mobile Haptic Robot for the Assessment in Upper Limb Rehabilitation
Saracino, Lucia Alessia | Scuola Superiore Sant'Anna
Ruffaldi, Emanuele | Scuola Superiore Sant'Anna
Graziano, Alessandro | Scuola Superiore Sant'Anna
Avizzano, Carlo Alberto | Scuola Superiore Sant'Anna
Keywords: Biomedical Robotics, Sensors
Abstract: Robot-based rehabilitation is gaining traction, thanks in part to a generation of light and portable devices. This type of rehabilitation offers a high degree of flexibility in the design of interaction software and the therapeutic process. There is therefore a need to assess the state of the patient's upper limb during and after treatment. This paper presents the integration and fusion of a portable rehabilitation robot called MOTORE++ with a wearable tracking system for assessment purposes. The wearable system is based on inertial units together with EMG signals. The combination of the data from both devices allows a partial evaluation of the physiological condition of the user and the influence of the robot on the rehabilitation procedure. Results of an experimental campaign with patients are presented. This work also opens a spectrum of possible developments of adaptive behavior of the robot in the interaction with the patient.
12:10-12:30, Paper TuAT2.6
Fusing Cyclic Sensor Data with Different Cycle Length
Bastuck, Manuel | Saarland Univ
Baur, Tobias | Saarland Univ
Schütze, Andreas | Saarland Univ
Keywords: Information Fusion, Machine Learning and Artificial Intelligence, Sensors
Abstract: Cyclic modulation of sensor parameters can improve the sensitivity and selectivity of gas sensors. If the modulated parameter influences the sensor's reaction to its environment, several readings can be gained, eventually resulting in a multi-dimensional response which can be analyzed with, e.g., principal component analysis. In certain cases, e.g. temperature-modulated gas sensors with different thermal time constants, the length of the used cycles, and thus the temporal resolution of the sensors, can differ. As a consequence, different sensors can produce datasets with an unequal number of observations which nevertheless cover the same interval of time. In this work, we explore three different strategies to combine those datasets in order to retain the maximum amount of information from two sensors used in parallel. Simulated data show that simply combining a short cycle with the last complete long cycle can improve the correct classification rate by 15 percentage points while maintaining the better temporal resolution. On the other hand, performance can be further increased at the expense of temporal resolution by adding either several of the short cycles, or their mean, to a long cycle, effectively reducing noise. The proposed combination strategies and their dependence on preprocessing are validated with a real dataset from two gas sensors. Overall, and taking into account differences in data structure, good accordance between the strategies' performance for simulated and real data is observed.
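The combination strategies reduce to simple array operations once both sensors' cycles are aligned on a common timeline. A minimal sketch of the first strategy, pairing each short cycle with the last complete long cycle (cycle lengths and data are illustrative):

```python
import numpy as np

def combine_short_with_last_long(short_cycles, long_cycles, ratio):
    """Pair each short-cycle observation with the most recent complete
    long cycle; `ratio` short cycles fit into one long cycle."""
    combined = []
    for i, s in enumerate(short_cycles):
        j = i // ratio - 1          # index of the last *complete* long cycle
        if j >= 0:
            combined.append(np.hstack([s, long_cycles[j]]))
    return np.array(combined)

short = np.random.rand(12, 50)      # 12 short cycles, 50 samples each
long_ = np.random.rand(4, 200)      # 4 long cycles, 200 samples each
X = combine_short_with_last_long(short, long_, ratio=3)
print(X.shape)   # (9, 250): keeps the short sensor's temporal resolution
```

The other two strategies would instead append several short cycles, or their mean, to each long cycle, trading temporal resolution for noise reduction.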
TuAT3 Regular Session, Room C
Sensors
Chair: Strand, Marcus | Baden-Wuerttemberg Cooperative State Univ. Karlsruhe
Co-Chair: Baier, Stephan | Ludwig Maximilian Univ. München
10:30-10:50, Paper TuAT3.1
Accuracy Specifications of Calibration Device for Force-Torque Sensors
Zarutckii, Nikolai | Central R&D Inst. of Robotics and Tech. Cybernetics
Bulkin, Roman | Central R&D Inst. of Robotics and Tech. Cybernetics
Keywords: Automation and Industry 4.0, Sensors
Abstract: The paper deals with a calibration method that allows performing automated force-torque sensor calibration (for sensors with one to six components), both with selected components of the main vector of forces and moments and with complex loading. Thus, two main advantages of the proposed calibration method are achieved: automation of the calibration process and universality. The paper emphasizes a mathematical model of the calibration device and its accuracy specifications.
10:50-11:10, Paper TuAT3.2
Learning Representations for Discrete Sensor Networks Using Tensor Decompositions
Baier, Stephan | Ludwig Maximilian Univ. München
Krompass, Denis | Siemens AG
Tresp, Volker | Siemens AG
Keywords: Sensor/Actuator Networks, Information Fusion, Machine Learning and Artificial Intelligence
Abstract: With the rising number of sensing devices installed in today's and future sensor networks, there is an increasing demand for machine learning solutions performing tasks like automatic behavior detection and decision making. In particular, to classify the state of the complete sensor network, machine learning models are needed, which are capable of fusing the information from multiple sensors. In this paper we examine the use of tensor models to describe the relationship between multiple discrete sensor outputs and attendant class labels describing the overall system state. Tensor decompositions can be considered as a form of representation learning and they have been used for a variety of tasks, e.g. knowledge graph modeling and EEG data analysis. We propose a new approach for multiclass classification using tensor decompositions. As the dimensions of the tensors used in the multi-sensor classification are much higher than in traditional tasks, not all standard decomposition approaches are applicable due to scaling problems. In our experiments on real data, we show that the PARAFAC and Tensor Train decompositions work well for discrete multi-sensor fusion tasks and outperform other state-of-the-art machine learning algorithms.
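For a flavor of the tensor machinery, here is a compact rank-R CP/PARAFAC decomposition by alternating least squares in plain NumPy (shapes, rank, and the random tensor are placeholders; the paper builds its classifier on such factorizations rather than on this exact routine):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J*K) x R."""
    J, R = B.shape
    K = C.shape[0]
    return np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)

def cp_als(X, rank, iters=100):
    """Rank-R CP decomposition of a 3-way tensor by alternating LS."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(iters):
        A = X.reshape(I, J * K) @ khatri_rao(B, C) \
            @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.transpose(X, (1, 0, 2)).reshape(J, I * K) @ khatri_rao(A, C) \
            @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.transpose(X, (2, 0, 1)).reshape(K, I * J) @ khatri_rao(A, B) \
            @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

X = np.random.rand(5, 6, 7)
A, B, C = cp_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)     # reconstruction
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```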
11:10-11:30, Paper TuAT3.3
Criminal Fishing System Based on Wireless Local Area Network Access Points
Togashi, Hiroaki | Kyushu Univ
Koga, Yasuaki | Graduate School of Information Science and Electrical Engineering, Kyushu Univ
Furukawa, Hiroshi | Faculty of Information Science and Electrical Engineering, Kyushu Univ
Keywords: Cyber-Physical Systems, Internet-of-Things, Sensors
Abstract: This paper describes a system for criminal identification that utilizes a large number of wireless local area network (LAN) access points and cameras. The proposed “Criminal Fishing system” identifies the media access control (MAC) address of a culprit's device from probe request signals captured by access points during the period in which the culprit remains near the incident scene. Experimental results demonstrate that the proposed system could identify the culprit's MAC address 10 out of 10 times in an indoor experiment and 8 out of 8 times in an outdoor experiment, provided that the culprit's radio wave fingerprint could be captured.
11:30-11:50, Paper TuAT3.4
Evaluation of Motion Tracking Methods for Therapeutic Assistance in Everyday Living Environments
Vox, Jan Paul | Jade Univ. of Applied Sciences
Wallhoff, Frank | Jade Univ. of Applied Sciences
Keywords: Evaluation, Verification and Validation, Sensors
Abstract: In this paper we compare the performance of the Microsoft™ Kinect v2 with a high-precision measurement system for therapeutic purposes. A precise and low-cost motion tracking system is important for the development of therapeutic assistance in everyday living environments. Upcoming therapeutic assistance systems have to be affordable and able to analyze the motion of the inhabitant. Therefore, an evaluation of the low-cost Kinect v2 sensor is necessary to examine its usability for joint angle measurements. We show that median deviations of up to 8.4 degrees from the high-precision measurement can occur. We conclude that the Kinect v2 sensor offers an adequate opportunity to analyze therapeutic exercises in living environments.
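Joint angles from skeleton data reduce to the angle between two bone vectors meeting at a joint; comparing such angles across the two systems is the core of the evaluation. A minimal sketch (joint positions are placeholders):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D joint positions a-b-c,
    e.g. hip-knee-ankle for a knee flexion angle."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hip = np.array([0.0, 1.0, 0.0])
knee = np.array([0.05, 0.5, 0.0])
ankle = np.array([0.0, 0.0, 0.0])
print(joint_angle(hip, knee, ankle))   # ~168 degrees: nearly straight leg
```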
11:50-12:10, Paper TuAT3.5
Multi-Sensor Based Fall Prediction Method for Humanoid Robots
Subburaman, Rajesh | Istituto Italiano Di Tecnologia
Lee, Jinoh | Fondazione Istituto Italiano Di Tecnologia
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia
Tsagarakis, Nikos | Istituto Italiano Di Tecnologia
Keywords: Humanoids, Information Fusion, Sensors
Abstract: This paper proposes a multi-sensor based method to predict the falling of a humanoid in a reliable and agile manner. The fusion of multiple sensors, such as an inertial measurement unit and foot pressure sensors, is considered; these can be regarded as analogous to the human vestibular and proprioceptive senses. We define a set of feature-based fall indicator variables (FIVs) with manually extracted thresholds for four major disturbance scenarios, which are incorporated with an online threshold interpolation technique to manage generic disturbances. A fall is predicted by comparing a normalized value of the instantaneous and cumulative sum of each FIV against a predefined set-value of the falling indication. The proposed method is evaluated by numerical experiments under 36 different scenarios, involving random disturbances applied at distinct heights. The results show that the developed method is generic in terms of handling disturbances as well as different configurations of the robot, and that the use of fused FIVs performs better than a single FIV; in particular, fusion with the foot pressure sensor based indicator increases the overall prediction performance.
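The fall indication itself is a thresholding of normalized instantaneous and cumulative indicator values. A minimal sketch of that fusion logic (indicator names, weights, decay, and thresholds are invented for illustration):

```python
import numpy as np

class FallPredictor:
    """Fuses normalized fall-indicator variables (FIVs): a fall is flagged
    when the instantaneous or the cumulative fused value exceeds a
    predefined set-value (weights and thresholds here are illustrative)."""
    def __init__(self, thresholds, weights, set_value=1.0, decay=0.9):
        self.th, self.w = thresholds, weights
        self.set_value, self.decay = set_value, decay
        self.cum = 0.0

    def step(self, fiv_values):
        inst = np.abs(fiv_values) / self.th   # normalize each FIV by its threshold
        fused = self.w @ inst                 # weighted fusion of all FIVs
        self.cum = self.decay * self.cum + fused
        return fused >= self.set_value or self.cum >= 3.0 * self.set_value

# FIVs could be, e.g., CoM velocity, torso inclination rate, CoP offset
pred = FallPredictor(thresholds=np.array([0.5, 0.4, 0.6]),
                     weights=np.array([0.4, 0.3, 0.3]))
print(pred.step(np.array([0.8, 0.5, 0.9])))   # True: fall predicted
```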
12:10-12:30, Paper TuAT3.6
Ground Reaction Force Estimation Using Insole Plantar Pressure Measurement System from Single-Leg Standing
Eguchi, Ryo | Keio Univ
Yorozu, Ayanori | Keio Univ
Fukumoto, Takahiko | Kio Univ
Takahashi, Masaki | Keio Univ
Keywords: Sensors
Abstract: A long-range, continuous, and accessible kinetic measurement system is required for evaluating gait disorders. Although methods employing force plates are the gold standard in kinetic gait analysis, their usage is often limited by their measurement range, and they are cost-prohibitive for general clinics. Instrumented insole-based gait analysis systems using accessible sensors were proposed in previous works. However, these systems rely on force plates to construct the models used to estimate ground reaction force. In this study, a method to construct such models without force plates was developed and evaluated. Subject-specific linear least-squares regression models (with bounds and linear constraints), built from data of two types of single-leg standing (SLS) tasks (static SLS and voluntary weight shift during SLS), were used to determine ground reaction force. Comparison with force plate data for straight walking in terms of %RMSE, which indicates the estimation accuracy of the models, showed that the results were about as accurate as models constructed using force plates. In addition, we found that voluntary weight shift during SLS can potentially improve the estimation accuracy of the models.
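The subject-specific model boils down to a bounded linear least-squares fit from insole pressures to ground reaction force. A minimal sketch with SciPy, using bounds only (the paper additionally uses linear constraints; all data here are synthetic):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_samples, n_cells = 300, 8                       # insole pressure cells
P = rng.uniform(0, 50, (n_samples, n_cells))      # pressures during SLS tasks
w_true = np.array([1.2, 0.8, 1.0, 1.5, 0.9, 1.1, 0.7, 1.3])
grf = P @ w_true + rng.normal(0, 2.0, n_samples)  # vertical GRF targets
# During static single-leg standing the vertical GRF equals body weight,
# so regression targets can be built without a force plate.
res = lsq_linear(P, grf, bounds=(0.0, np.inf))    # non-negative cell weights
grf_hat = P @ res.x
rmse = np.sqrt(np.mean((grf - grf_hat) ** 2))
print(f"%RMSE: {100 * rmse / np.ptp(grf):.1f}")
```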
TuBT1 Special Session, Room A
SS1 Multi-Sensor Data Fusion for Autonomous Vehicles - Part 1
Chair: Fiegert, Michael | Siemens AG
Co-Chair: Zhang, Feihu | TU München
13:30-13:50, Paper TuBT1.1
Environment-Aware Sensor Fusion for Obstacle Detection
Rechy Romero, Adrian | CSIRO
Borges, Paulo Vinicius Koerich | CSIRO
Elfes, Alberto | CSIRO
Pfrunder, Andreas | Commonwealth Scientific and Industrial Res. Organisation
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Mobile Robots (Land, Sea, Air), Sensors
Abstract: Reliably detecting obstacles and identifying traversable areas is a key challenge in mobile robotics. For redundancy, information from multiple sensors is often fused. In this work we discuss how prior knowledge of the environment can improve the quality of sensor fusion, thereby increasing the performance of an obstacle detection module. We define a methodology to quantify the performance of obstacle detection sensors and algorithms. This information is used for environment-aware sensor fusion, where the fusion parameters depend on the past performance of each sensor in different parts of an operation site. The method is suitable for vehicles that operate in a known area, as is the case in many practical scenarios (warehouses, factories, mines, etc.). The system is “trained” by manually driving the robot through a suitable trajectory along the operational areas of a site. The performance of a sensor configuration is then measured based on the similarity between the manually-driven trajectory and the trajectory that the path planner generates after detecting obstacles. Experiments are performed on an autonomous ground robot equipped with 2D laser sensors and a monocular camera with road detection capabilities. The results show an improvement in obstacle detection performance in comparison with a “naive” sensor fusion, illustrating the applicability of the method.
13:50-14:10, Paper TuBT1.2
Joint Bias Estimation and Localization in Factor Graph
Zhang, Feihu | TU München
Malovetz, Daniel | Tech. Univ. of Munich
Gulati, Dhiraj | Fortiss GmbH
Clarke, Daniel Stephen | Cranfield Univ
Knoll, Alois | Tech. Univ. Muenchen TUM
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles
Abstract: This paper describes a new approach for cooperative localization by using both internal and external sensors. In contrast to the state-of-the-art methods, the proposed approach analyses the statistical properties of the systematic error during the transformation phase. A factor graph is formulated which jointly estimates both the biases and the locations. The proposed approach is evaluated by using simulated data from odometry, GPS and radar measurements. The experiment demonstrates excellent performance of the proposed approach in comparison to traditional techniques.
14:10-14:30, Paper TuBT1.3
A New Concept for a Cooperative Fusion Platform
Feiten, Wendelin | Siemens AG
Alcalde Baguees, Susana | Siemens AG
Fiegert, Michael | Siemens AG
Zhang, Feihu | TU München
Gulati, Dhiraj | Fortiss GmbH
Tiedemann, Tim | DFKI GmbH, Robotics Innovation Center
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles
Abstract: The increasing traffic and the increasing number of sensors both in cars and in the infrastructure pose new challenges but also create new opportunities for traffic control. If the sensor data in various states of interpretation and aggregation could be shared and reused, it would be possible to minimize accidents and improve the traffic situation. In this paper we describe an approach to automatically configure sensor data fusion systems across the boundaries of independent subsystems, where information on all levels can be exchanged. The basis for this is a formal description of all required meta-information that enables the reasoning for automatic configuration.
14:30-14:50, Paper TuBT1.4
Object Management Strategy for a Unified High Level Automotive Sensor Fusion Framework
Duraisamy, Bharanidhar | Daimler AG
Schwarz, Tilo | Daimler AG
Löhlein, Otto | Daimler AG
Bertolucci, Matteo | Univ. of Pisa
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Information Fusion, Distributed Methods
Abstract: An important requirement in autonomous driving for many complex scenarios is to correctly detect static and dynamic targets under various states of motion. The possibility of fulfilling this requirement depends upon the availability of different sensor data to the sensor fusion module. This paper uses data from sensors with built-in tracking modules; our objective is to make the outputs of two different sensor fusion modules that use the same sensor-tracked data statistically relevant with respect to their respective operational requirements, despite this common prior set-up. In our case, we have two sensor fusion modules: one deals with dynamic targets with a well-defined object representation, and the other deals only with static targets of undefined shape. The authors have developed different concepts to manage the relevancy of the deliverables of the two modules. A novel approach based on multi-hypothesis tracking is presented. The results are evaluated using simulation as well as real-world sensor data with reference ground truth target data.
14:50-15:10, Paper TuBT1.5
Learning of Lane Information Reliability for Intelligent Vehicles
Nguyen, Tran Tuan | Volkswagen AG
Zug, Sebastian | Otto-Von-Guericke-Univ. Magdeburg
Kruse, Rudolf | Otto-Von-Guericke Univ. Magdeburg
Spehr, Jens | Tech. Univ. of Braunschweig
Uhlemann, Matthias | Volkswagen AG
Keywords: Information Fusion, SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Machine Learning and Artificial Intelligence
Abstract: Automated driving is becoming the focus of various research institutions and companies. In this context, road estimation is one of the most important tasks. Many works propose to realize this task by employing one or more of the following orthogonal information sources: road markings from optical lane recognition, the leading vehicle, a digital map, etc. Each of them has its own strengths and drawbacks in different situations. However, many existing approaches assume that the sources are equally reliable. Incorporating reliability estimates into the fusion of these sources can significantly increase the availability of automated driving in most scenarios. In this work, we propose a novel concept to define, measure, learn and integrate reliabilities into the road estimation task. We introduce a new error metric in which the reliability is defined as the angle discrepancy between the estimated road course and the manually driven trajectory. Based on a large database containing sensor and context information from different situations, a Bayesian Network and Random Forests are trained to learn the reliabilities. The estimated reliabilities are used to discard unreliable sources in the fusion process. Experimental results confirm our concept.
15:10-15:30, Paper TuBT1.6
Selected Aspects Important from an Applied Point of View to the Fusion of Collective Vehicle Data
Skibinski, Sebastian | Audi AG
Weichert, Frank | TU Dortmund
Müller, Heinrich | TU Dortmund
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Information Fusion, Multi-Robot Systems and Swarm Robotics
Abstract: A current advancement within the automotive industry is that vehicles are becoming more and more tightly interconnected. By utilizing the data provided by the manifold onboard sensors, vehicles can exchange considerable amounts of perceived environmental data with each other or with a common fusion center. In this way, comprehensive, richly detailed, and hitherto unmatched up-to-date maps can be deduced by aggregating the received data. Subsequently, these maps can be utilized as an additional, virtual, ultra-long-range sensor for supporting next-generation driver assistance or piloted driving functions. Research on this topic has usually concerned the key challenge of data aggregation, i.e., how to fuse multiple sensor readings afflicted with imperfections and uncertainties. However, to achieve generalized, reliable, precise, and computationally feasible aggregates, the full data acquisition and processing chain needs to be considered holistically, as affirmed by our research. Here, highly interesting but usually neglected challenges arise, to which this paper is dedicated. We illuminate the aspects of sensor data fusion that are crucial for a real-world automotive application, such as coping with the temporal decay of measurements, precise vehicle localization using commercially viable sensors, the generalized storage of different types of sensor data, and the definition of generalized acquisition and processing chains to provide fast adaptation to new kinds of input data. Furthermore, we present how these aspects lead to an adaptable, efficient, and accurate data fusion of collective vehicle data concerning both areal and point-shaped/complex data.
TuBT2 Regular Session, Room B
Machine Learning and Artificial Intelligence
Chair: Huber, Marco F. | USU Software AG
Co-Chair: Rajput, Omer | Hamburg Univ. of Tech
13:30-13:50, Paper TuBT2.1
Intelligent Scheduling Method for Life Science Automation Systems
Gu, Xiangyu | Univ. of Rostock
Neubert, Sebastian | Univ. of Rostock
Stoll, Norbert | Univ. of Rostock
Thurow, Kerstin | Univ. Rostock
Keywords: Planning and Control, Machine Learning and Artificial Intelligence
Abstract: Modern life science automation combines laboratory automation systems and mobile transportation systems. Distributed automated workstations integrate different kinds of automatic devices to increase throughput and quality. The mobile transportation system manages mobile robots and human laboratory assistants for transfer services between interacting automation systems. In order to combine laboratory automation systems and mobile transportation systems, a superordinate management system - the Hierarchical Workflow Management System (HWMS) - is established to handle the life science laboratories (celisca, Germany). In this paper, a new scheduling method for life science automation systems is proposed to reduce the costs of workflows. Typically, the methods of automation systems are prepared and executed directly on the local computers of the distributed automated stations. The scheduling strategy is determined according to this feature. The scheduling algorithm used in this paper is a Genetic Algorithm (GA). A simulation of this scheduler is used to evaluate the approach and to compare it with an intrinsic solution. Results indicate that this approach improves the efficiency and stability of the life science automation system.
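A GA for such scheduling typically encodes a candidate schedule as a task permutation and evolves it by selection, crossover, and mutation. A toy sketch with a makespan cost standing in for the paper's workflow costs (all numbers illustrative):

```python
import random
random.seed(0)

durations = [4, 2, 7, 3, 5, 1, 6]        # task processing times
n_stations = 2

def makespan(order):
    """Greedy list scheduling of a task order onto identical stations."""
    loads = [0] * n_stations
    for task in order:
        loads[loads.index(min(loads))] += durations[task]
    return max(loads)

def crossover(a, b):
    """Order-preserving crossover: prefix of a, remainder in b's order."""
    cut = random.randrange(1, len(a))
    return a[:cut] + [t for t in b if t not in a[:cut]]

pop = [random.sample(range(len(durations)), len(durations)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=makespan)               # fitness = workflow cost
    elite, children = pop[:10], []
    for _ in range(20):
        c = crossover(*random.sample(elite, 2))
        if random.random() < 0.3:        # swap mutation
            i, j = random.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children
best = min(pop, key=makespan)
print(best, makespan(best))
```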
13:50-14:10, Paper TuBT2.2
A Two-Step Learning Approach about Normal and Exceptional Human Behavior Patterns
Lim, Gi Hyun | Univ. De Aveiro
Keywords: Machine Learning and Artificial Intelligence, Probabilistic Methods, Human-Robot Interaction
Abstract: Human activity recognition, especially exceptional activity recognition has been regarded as an important aspect in intelligent service robotics. Several challenges in activity recognition - unexpected and untypical exceptional behaviors, a small but growing number of training examples - make it hard to solve this problem. Despite the variety of human behaviors, there are some normal patterns, especially scenario-oriented human activities. This paper presents an incremental learning method for exceptional behavior patterns based on prerequisites. The proposed method models the normal activities as prerequisites from several demonstrations following a given scenario, and learns autonomously and incrementally new exceptional activities, which may not follow the scenario. Case studies show that the proposed method can gradually improve the recognition rate, and incrementally learn new exceptional human activities.
14:10-14:30, Paper TuBT2.3
Comparative Study of Machine Learning Algorithms for Activity Recognition with Data Sequence in Home-Like Environment
Fan, Xiuyi | Nanyang Tech. Univ
Zhang, Huiguo | Nanyang Tech. Univ
Leung, Cyril | Univ. of British Columbia
Miao, Chunyan | Nanyang Tech. Univ
Keywords: Machine Learning and Artificial Intelligence, Sensor/Actuator Networks, Internet-of-Things
Abstract: Activity recognition is a key problem in multi-sensor systems. With data collected from different sensors, a multi-sensor system identifies activities performed by the inhabitants. Since an activity always lasts a certain duration, it is beneficial to use data sequences for the desired recognition. In this work, we experiment with several machine learning techniques, including Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) and a Meta-Layer Network, for solving this problem. We observe that (1) compared with “single-frame” activity recognition, data sequence based classification gives better performance; (2) directly using data sequence information with a simple “meta-layer” network model yields better performance than memory-based deep learning approaches.
14:30-14:50, Paper TuBT2.4
Adaptive Flight Control for Quadrotor UAVs with Dynamic Inversion and Neural Networks
Xiang, Tian | South Univ. of Science and Tech. of China
Jiang, Fan | Southern Univ. of Science and Tech
Hao, Qi | Southern Univ. of Science and Tech
Keywords: Planning and Control, Machine Learning and Artificial Intelligence
Abstract: In this paper, we develop an adaptive nonlinear controller based on dynamic inversion and a neural network for quadrotor UAVs in the presence of uncertainties in UAV and actuator dynamics. The basic control law is first designed by conventional PID control, and then nonlinear dynamic inversion control is added for the purpose of stabilization and robustness. The neural network is used to eliminate the inversion error due to parameter uncertainty, disturbances, etc. Simulation and real experimental results both demonstrate that the neural network can effectively eliminate the inversion error, which improves the robustness of the whole system and achieves accurate attitude and trajectory control.
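The control structure described is the standard neural-network-augmented dynamic inversion loop; in generic notation (a sketch, not the paper's exact equations):

```latex
\dot{x} = f(x) + g(x)\,u, \qquad
u = g(x)^{-1}\bigl(\nu - f(x)\bigr), \qquad
\nu = \nu_{\mathrm{PID}} - \nu_{\mathrm{ad}},
```

where ν is the pseudo-control produced by the baseline PID law and ν_ad is the neural network's online-adapted output that cancels the inversion error arising when f and g are only approximately known.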
14:50-15:10, Paper TuBT2.5
Towards Force Sensing Based on Instrument-Tissue Interaction
Otte, Christoph | Hamburg Univ. of Tech
Beringhoff, Jens | Inst. of Medical Tech. Hamburg Univ. of Tech
Latus, Sarah | Hamburg Univ. of Tech
Antoni, Sven-Thomas | Hamburg Univ. of Tech
Rajput, Omer | Hamburg Univ. of Tech
Schlaefer, Alexander | Hamburg Univ. of Tech
Keywords: Biomedical Robotics, Localization, Tracking and Navigation, Machine Learning and Artificial Intelligence
Abstract: The missing haptic feedback in minimally invasive and robotic surgery has prompted the development of a number of approaches to estimate the force acting on the instruments. Modifications of the instrument can be costly, fragile, and harder to sterilize. We propose a method to estimate the forces from the tissue deformation, hence working with multiple instruments and avoiding any modification to their design. Using optical coherence tomography to get precise deformation estimates, we have studied the deformations for different instrument trajectories and mechanical tissue properties. Surface deformation profiles for three different soft tissue phantoms and the resulting forces were monitored. Our results show a systematic and constant relationship between deformation and interaction force. Different tissue elasticities result in different but consistent deformation-force mappings. For a series of independent measurements, the root mean squared error between estimated and measured force was below 3 mN. The results indicate that it is possible to estimate the force acting between tissue and instrument based on the deformation caused by the instrument. Given that in robotic surgery the pose of the instrument head is known, and hence the respective tissue deformation caused by the instrument can be measured in a well-defined relative position, the method allows for force estimation without any changes to the instruments.
15:10-15:30, Paper TuBT2.6
A First Step towards Explained Activity Recognition with Computational Abstract Argumentation
Fan, Xiuyi | Nanyang Tech. Univ
Zhang, Huiguo | Nanyang Tech. Univ
Miao, Chunyan | Nanyang Tech. Univ
Leung, Cyril | Univ. of British Columbia
Keywords: Sensor/Actuator Networks, Internet-of-Things, Machine Learning and Artificial Intelligence
Abstract: Activity recognition is a key problem in multi-sensor systems. In a home-like environment, from several sensors of different types, the multi-sensor system identifies activities performed by the inhabitants. Many supervised learning techniques exist for solving this problem. In this paper, we present a novel argumentation-based approach that seamlessly combines low-level sensor data processing, realized with Neural Network classifiers, with high-level activity recognition, represented by argumentation computation. The proposed framework gives classification results comparable to pure learning based approaches with significantly reduced training time, while giving argumentative explanations.
TuBT3 Regular Session, Room C
Vision
Chair: Henderson, Thomas C. | Univ. of Utah
Co-Chair: Luo, Ren | National Taiwan Univ
13:30-13:50, Paper TuBT3.1
Wreath Product Cognitive Architecture (WPCA)
Joshi, Anshul | Univ. of Utah
Henderson, Thomas C. | Univ. of Utah
Keywords: Cognitive Systems, Vision, Localization, Tracking and Navigation
Abstract: A Belief-Desire-Intention (BDI) framework closely resembles the human practical reasoning approach in day-to-day life, and is a well-studied architecture. The wreath product cognitive model, first described by Leyton, is an abstract yet powerful model which closely couples perception and actuation for representing shape. However, no implementation of the wreath product model exists. Our work is an attempt to combine the wreath product knowledge representation mechanism with a BDI architecture that works in a real-world setting. A prototype implementation of this combination is demonstrated on an iRobot Create differential-drive robot, with a Kinect One structural sensor, in an indoor environment. The effectiveness of our framework is demonstrated by its accuracy in mapping the environment and localizing the robot for navigation purposes.
13:50-14:10, Paper TuBT3.2
A Sensorimotor Approach to Concept Formation Using Neural Networks
Henderson, Thomas C. | Univ. of Utah
Beall, Tanya | Univ. of Utah
Keywords: Cognitive Systems, Vision, Sensor/Actuator Networks
Abstract: We propose an active perception paradigm which combines actuation (control signals) and perception (sensor signals) to form concepts of shape using recurrent neural networks; this representation characterizes not only what the shape is, but also how it is created. The approach is based on the group theoretic wreath product which specifies a sequence of actions on a set of points which when completed comprise the shape. Leyton originally proposed the use of wreath products for concept representation. Wreath product descriptions provide an abstract generative representation of shape, but can be annotated for specific actuation systems; this provides a mechanism for knowledge transfer across different motor systems (e.g., visual vs. arm control). We describe how wreath products can be implemented as recurrent neural networks, and demonstrate their application to shape recognition.
14:10-14:30, Paper TuBT3.3
Landmark Detection with Surprise Saliency Using Convolutional Neural Networks
Tang, Feng | Fordham Univ
Lyons, Damian | Fordham Univ
Leeds, Daniel | Fordham Univ
Keywords: Machine Learning and Artificial Intelligence, Vision, Probabilistic Methods
Abstract: Landmarks can be used as a reference to enable people or robots to localize themselves or to navigate in their environment. Automatic definition and extraction of appropriate landmarks from the environment has proven to be a challenging task when pre-defined landmarks are not present. We propose a novel computational model of automatic landmark detection from a single image without any pre-defined landmark database. The hypothesis is that if an object looks abnormal due to its atypical scene context (what we call surprise saliency), it may then be considered a good landmark because it is unique and easy to spot by different viewers (or the same viewer at different times). We leverage state-of-the-art algorithms based on convolutional neural networks to recognize scenes and objects. For each detected object, a surprise saliency score, a fusion of scene and object information, is calculated to determine if it is a good landmark. In order to evaluate the performance of the proposed model, we collected a landmark image dataset which consists of landmark images, as defined by surprise saliency above, and non-landmark images. The experimental results show that our model achieves good performance in automatic landmark detection and automatic landmark image classification.
14:30-14:50, Paper TuBT3.4
3-Point RANSAC for Fast Vision Based Rotation Estimation Using GPU Technology
Kamran, Danial | Sharif Univ. of Tech
Manzuri-shalmani, Mohamad Taghi | Sharif Univ. of Tech
Marjovi, Ali | EPFL
Karimian, Mahdi | Sharif Univ. of Tech
Keywords: Vision, Localization, Tracking and Navigation, SS1 Multi-Sensor Data Fusion for Autonomous Vehicles
Abstract: In many sensor fusion algorithms, the vision-based RANdom SAmple Consensus (RANSAC) method is used for estimating motion parameters for autonomous robots. Usually such algorithms estimate translation and rotation parameters together, which makes them inefficient for pure rotation estimation. This paper presents a novel 3-point RANSAC algorithm for estimating only the rotation parameters between two camera frames, which can be utilized as a high-rate source of information for a camera-IMU sensor fusion system. The main advantage of our proposed approach is that it performs fewer computations and requires fewer iterations to achieve the best result. Unlike many previous works that validate each hypothesis against all data points and count its inliers, we use a voting based scheme for selecting the best rotation among all primary answers. This methodology is much faster than the traditional inlier based approach and is more efficient for parallel implementation of RANSAC iterations. We also investigate parallel implementation of the proposed 3-point RANSAC using CUDA technology, which leads to a great improvement in the processing time of the estimation algorithm. We have used real datasets to evaluate our algorithm and compared it with the well-known 8-point algorithm in terms of accuracy and speed. The results show that the proposed approach makes the estimation up to 150 times faster than the 8-point algorithm with similar accuracy.
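For pure rotation, bearing vectors in the two frames are related by b2 ≈ R b1, so each minimal sample of three correspondences yields a rotation hypothesis (here via the SVD/Kabsch solution), and hypotheses can be pooled by voting instead of per-hypothesis inlier counting. A minimal sketch (the vote quantization is an illustrative choice):

```python
import numpy as np
from collections import Counter
rng = np.random.default_rng(1)

def rotation_from_pairs(P, Q):
    """Least-squares rotation R with Q ~ R @ P (Kabsch, 3xN point sets)."""
    U, _, Vt = np.linalg.svd(Q @ P.T)
    return U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt

def euler_zyx(R):
    return np.array([np.arctan2(R[2, 1], R[2, 2]),
                     -np.arcsin(np.clip(R[2, 0], -1, 1)),
                     np.arctan2(R[1, 0], R[0, 0])])

def ransac_rotation(b1, b2, iters=300, bin_deg=2.0):
    """3-point hypotheses pooled by voting on quantized Euler angles."""
    votes, rep = Counter(), {}
    for _ in range(iters):
        idx = rng.choice(b1.shape[1], 3, replace=False)
        R = rotation_from_pairs(b1[:, idx], b2[:, idx])
        key = tuple(np.round(np.degrees(euler_zyx(R)) / bin_deg).astype(int))
        votes[key] += 1
        rep[key] = R              # keep one representative per vote bin
    return rep[max(votes, key=votes.get)]

# Synthetic check: b2 = R_true @ b1 with noise and 30% outliers
b1 = rng.standard_normal((3, 100)); b1 /= np.linalg.norm(b1, axis=0)
R_true = rotation_from_pairs(rng.standard_normal((3, 3)),
                             rng.standard_normal((3, 3)))
b2 = R_true @ b1 + 0.01 * rng.standard_normal((3, 100))
b2[:, :30] = rng.standard_normal((3, 30))      # corrupted correspondences
R_est = ransac_rotation(b1, b2)
err = np.arccos(np.clip((np.trace(R_est.T @ R_true) - 1) / 2, -1, 1))
print(np.degrees(err))                          # rotation error in degrees
```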
14:50-15:10, Paper TuBT3.5
Autonomous Flame Detection in Video Based on Saliency Analysis and Optical Flow
Li, Zhenglin | The Univ. of Sheffield
Isupova, Olga | The Univ. of Sheffield
Mihaylova, Lyudmila | Univ. of Sheffield
Rossi, Lucile | UMR CNRS 6134 SPE - Univ. of Corsica
Keywords: Vision, Probabilistic Methods, Information Fusion
Abstract: The paper proposes a flame detection method based on saliency analysis, optical flow estimation and temporal wavelet transform. Two separate saliency maps are first obtained from the grayscale values and the optical flow magnitudes of each frame using a saliency detector. Subsequently, the two maps are combined to extract candidate flame regions. To further discard falsely detected pixels, a colour model of flames and a temporal wavelet transform are employed. The proposed algorithms can be applied in autonomous and semi-autonomous systems for environmental surveillance and can reduce the load on human operators. Experiments illustrate that the introduced method achieves around a 91% true positive rate and a 97% true negative rate.
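The candidate-extraction stage, fusing an intensity-based and a motion-based saliency cue, can be approximated with OpenCV's dense optical flow plus simple normalization (the fusion rule and threshold are illustrative assumptions, not the authors' detector):

```python
import cv2
import numpy as np

def flame_candidates(prev_gray, gray, thresh=0.5):
    """Combine grayscale and optical-flow-magnitude saliency maps into a
    binary mask of candidate flame regions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2).astype(np.float32)
    # Min-max normalize each cue to [0, 1] and fuse multiplicatively,
    # so a region must be both bright and moving to survive.
    s_int = cv2.normalize(gray.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    s_mot = cv2.normalize(mag, None, 0, 1, cv2.NORM_MINMAX)
    fused = np.sqrt(s_int * s_mot)
    return fused > thresh

# Usage on consecutive video frames f0, f1 (BGR images):
# mask = flame_candidates(cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY),
#                         cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY))
```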
15:10-15:30, Paper TuBT3.6
Acoustic Camera-Based 3D Measurement of Underwater Objects through Automated Extraction and Association of Feature Points
Ji, Yonghoon | The Univ. of Tokyo
Kwak, Seungchul | The Univ. of Tokyo
Yamashita, Atsushi | The Univ. of Tokyo
Asama, Hajime | The Univ. of Tokyo
Keywords: Vision, SLAM, Sensors
Abstract: This paper presents a novel scheme for the three-dimensional (3D) reconstruction of underwater objects by using multiple acoustic views based on geometric and image processing approaches. Underwater tasks such as maintenance, ship hull inspection, and harbor surveillance require accurate underwater information. In such cases, 3D reconstructed information would greatly contribute to a better understanding of the underwater environment. Acoustic cameras are the most suitable sensors because they provide acoustic images with more accurate details than other sensors, even in turbid water. In order to enable 3D measurement, feature points of each acoustic image should be extracted and associated in advance. In a previous study, we proposed a 3D measurement method, but it was limited by the assumption of complete correspondence information between feature points. This new methodology establishes a 3D measurement model by automatically determining correspondences between feature points through the application of geometric constraints and extracting these points. The result of the real experiment demonstrated that the proposed framework can automatically perform 3D measurement tasks of underwater objects.
TuCT1 Special Session, Room A
SS1 Multi-Sensor Data Fusion for Autonomous Vehicles - Part 2
Chair: Zhang, Feihu | TU München
Co-Chair: Fiegert, Michael | Siemens AG
16:00-16:20, Paper TuCT1.1
Spatiotemporal Alignment for Low-Level Asynchronous Data Fusion with Radar Sensors in Grid-Based Tracking and Mapping
Tanzmeister, Georg | BMW Group
Steyer, Sascha | BMW Group Res. and Tech
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Sensors, Localization, Tracking and Navigation
Abstract: Fusion of data from multiple sensors is often necessary to achieve an environment model, which meets the requirements of real-world applications, in particular those of autonomous vehicles. Sensor data fusion at a low-level yields potential advantages, as the data is fused before its interpretation with models and assumptions. However, spatiotemporal alignment, required for a precise fusion in dynamic environments, is difficult, as the sensors often cannot be synchronized. In this work, different approaches for spatiotemporal alignment of data from asynchronous sensors for low-level fusion are presented. Focus is given on radar sensors, as they allow measuring radial velocities in addition to range and bearing. The results are used to calculate fused measurement grids for grid-based tracking and mapping.
16:20-16:40, Paper TuCT1.2
Synthetic Aperture Radar for Lane Boundary Detection in Driver Assistance Systems
Clarke, Daniel Stephen | Cranfield Univ
Andre, Daniel | Cranfield Univ
Zhang, Feihu | TU München
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Mobile Robots (Land, Sea, Air), Sensors
Abstract: In this paper we investigate the feasibility of using a Synthetic Aperture Radar (SAR) to detect radar scatterers in support of advanced driver assistance systems. Specifically, we consider the detection of radar scatterers physically embedded into lane and carriageway boundaries, similar to the way optical retroreflectors (cat's eyes) are used in present infrastructure. We use simulations to generate high resolution SAR images for detecting and localizing radar scatterers. The simulated results presented here highlight the feasibility of the technique and provide a platform for further investigation. This paper facilitates the realization of the role of modified infrastructure in improving the sensing capability of highly assisted and autonomous vehicles.
16:40-17:00, Paper TuCT1.3
CSI-Based WiFi-Inertial State Estimation
Li, Bing | Hong Kong Univ. of Science and Tech
Zhang, Shengkai | Hong Kong Univ. of Science and Tech
Shen, Shaojie | Hong Kong Univ. of Science and Tech
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Localization, Tracking and Navigation, SLAM
Abstract: WiFi-based localization has received increasing attention in recent years, as WiFi devices are low-cost and universal. Dozens of WiFi-based localization systems have been proposed that achieve decimeter-level accuracy with commercial wireless cards and no specialized infrastructure. However, such systems require the positions of the Access Points or a fingerprint map to be known in advance. In this paper, we present CWISE, an accurate WiFi-Inertial SLAM system without the requirement for Access Points' positions, specialized infrastructure or fingerprinting. CWISE relies only on a commercial wireless card with two antennas and an IMU. We test the CWISE system on a flying quadrotor and show that the system is able to work in real time and achieves a mean accuracy of 1.60 m.
17:00-17:20, Paper TuCT1.4
Object Level Fusion of Extended Dynamic Objects
Nilsson, Sofie | Fraunhofer IPA
Klekamp, Axel | Valeo Schalter Und Sensoren GmbH
Keywords: SS9 Multiple (Extended) Object Tracking, SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Information Fusion
Abstract: This paper presents an approach to enable general tracking of extended objects across multiple sensors. Expert information in each input-providing sensor module is mapped into simple model parameters, allowing the fusion center to use a generalized version of such information. The model type and parameters are presented, and a classical Kalman-based fusion is extended with a method for integrated extent handling. With this approach the central fusion node can take into account both the track-level information and the extent estimate from each sensor. The proposed method is compared with a classic method of fusing the object center and the extent estimate separately. Simulated data are used to show that our proposed approach is general in the sense that it can be used on various setups without adaptation. Detailed performance results are given based on estimation errors of the extended object state vector. The findings based on simulated data are complemented by real-world data from a front-facing sensor setup. It is shown that the proposed method offers a benefit in position accuracy, especially when the measurement information does not contain complete extent information in all directions.
|
|
17:20-17:40, Paper TuCT1.5 | Add to My Program |
Extracting Sensor Models from a Scene Based Simulation |
Simon, Carsten | IAV GmbH |
Ludwig, Thomas | IAV GmbH |
Kruse, Markus | IAV GmbH Ingenieurgesellschaft Auto Und Verkehr |
Keywords: SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, Sensors, Information Fusion
Abstract: A safety-critical system like an autonomous car has to rely on excellent perception, in both accuracy and reliability, to ensure correct functional behaviour of the system. It is not conceivable that one sensor, or even a set of sensors, will be able to provide this demanded quality on its own. Current approaches aim for car-external solutions such as smart infrastructure or Vehicle2X communication to achieve the desired trust in the perception. This paper, however, focuses on the enhancement of the internal sensor set and the corresponding Bayesian multi-sensor data fusion. This kind of fusion depends on adequate sensor models to weight the measurements correctly. In the paper at hand, a method is shown for deriving these models from real-world measurement data and simulation results using scene-based pattern recognition.
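One elementary way to obtain such a sensor model, shown here as a hedged illustration rather than the authors' scene-based method, is to estimate a sensor's measurement noise covariance empirically from residuals against reference data; all data below are synthetic.
    import numpy as np

    def empirical_noise_covariance(measurements, reference):
        """Estimate a sensor's noise covariance R from residuals against reference data."""
        residuals = measurements - reference
        return np.cov(residuals, rowvar=False)

    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 50, size=(1000, 2))                # e.g. simulated object positions
    readings = truth + rng.normal(0, [0.5, 2.0], (1000, 2))   # synthetic anisotropic sensor noise
    print(empirical_noise_covariance(readings, truth))        # approx diag(0.25, 4.0)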
|
|
TuCT2 Special Session, Room B |
Add to My Program |
SS7 Multi-Robot Systems and Mobile Sensor Networks |
|
|
Chair: Horn, Joachim | Helmut-Schmidt-Univ. / Univ. of the Federal Armed Forces Hamburg |
Co-Chair: Dang, Anh Duc | HSU Univ |
|
16:00-16:20, Paper TuCT2.1 | Add to My Program |
Safe Fusion Compared to Established Distributed Fusion Methods |
Nygards, Per Eric Jonas | Swedish Defence Res. Agency |
Deleskog, Viktor | Swedish Defence Res. Agency |
Hendeby, Gustaf | Linköping Univ |
Keywords: Distributed Methods, Localization, Tracking and Navigation, SS7 Multi-Robot Systems and Mobile Sensor Networks
Abstract: The safe fusion algorithm is benchmarked against three other methods in distributed target tracking scenarios. Safe fusion is a relatively unknown method that, similarly to, e.g., covariance intersection, can be used to fuse potentially dependent estimates without double-counting data. This makes it suitable for distributed target tracking, where dependencies are often unknown or difficult to derive. The results show that safe fusion is a very competitive alternative in the five evaluated scenarios, while at the same time being easy to implement and computationally cheap compared to the other evaluated methods. Hence, safe fusion is an attractive alternative for track-to-track fusion systems.
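For reference, covariance intersection, named in the abstract as a comparable method, fuses two estimates with unknown cross-correlation as in the minimal sketch below, using a coarse grid search over the weight; the example values are illustrative.
    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, n=100):
        """Fuse two possibly correlated Gaussian estimates via covariance intersection."""
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        best = None
        for w in np.linspace(0.0, 1.0, n + 1):
            P = np.linalg.inv(w * I1 + (1 - w) * I2)
            if best is None or np.trace(P) < best[2]:       # minimise the fused trace
                x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
                best = (x, P, np.trace(P))
        return best[0], best[1]

    x, P = covariance_intersection(np.array([0.0]), np.array([[2.0]]),
                                   np.array([1.0]), np.array([[1.0]]))
    print(x, P)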
|
|
16:20-16:40, Paper TuCT2.2 | Add to My Program |
Fault Tolerant Multi-Sensor Fusion for Multi-Robot Collaborative Localization |
Al Hage, Joelle | Univ. of Lille, Lab. CRIStAL
El Badaoui El Najjar, Maan | Univ. of Lille, Lab. CRIStAL
Pomorski, Denis | LAGIS |
Keywords: Information Fusion, SS7 Multi-Robot Systems and Mobile Sensor Networks, Probabilistic Methods
Abstract: In recent decades, multi-robot systems have been widely investigated for missions that cannot be achieved by a single robot, or that take place in areas presenting danger to human life. Each robot needs an accurate position estimate of itself and of the others in the team. In this paper, we present a framework for localizing a group of robots that includes a sensor Fault Detection and Exclusion (FDE) step. The Collaborative Localization (CL) is formulated using the Information Filter (IF) estimator, which is the informational form of the Kalman Filter (KF). Residual tests, calculated in terms of the divergence between the a priori and a posteriori distributions of the IF, are developed in order to perform the FDE step. These residuals are based on the Kullback-Leibler Divergence (KLD) and are generated from two tests: one acts on the means, and the other on the covariance matrices of the probability distributions. An optimal thresholding method using an entropy criterion is discussed and developed. Finally, the framework is validated on real experimental data from a group of robots.
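The KLD between two Gaussian distributions has a closed form that splits naturally into a mean term and a covariance term, mirroring the two tests described above. The sketch below shows that standard formula; it is illustrative and not the authors' exact residual construction.
    import numpy as np

    def gaussian_kld(mu0, S0, mu1, S1):
        """KL divergence KL(N0 || N1) for multivariate Gaussians, split into parts."""
        S1inv = np.linalg.inv(S1)
        k = len(mu0)
        mean_term = 0.5 * (mu1 - mu0) @ S1inv @ (mu1 - mu0)
        cov_term = 0.5 * (np.trace(S1inv @ S0) - k
                          + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
        return mean_term, cov_term

    mt, ct = gaussian_kld(np.zeros(2), np.eye(2), np.array([1.0, 0.0]), 2 * np.eye(2))
    print(mt + ct)   # total divergence; compare each term against its own threshold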
|
|
16:40-17:00, Paper TuCT2.3 | Add to My Program |
Development of a Smart Wheelchair for People with Disabilities |
Leaman, Jesse | Univ. of Nevada |
La, Hung | Univ. of Nevada at Reno |
Nguyen, Luan | Nevada Univ |
Keywords: SS7 Multi-Robot Systems and Mobile Sensor Networks
Abstract: The intelligent power wheelchair (iChair) is designed to assist people with mobility, sensory, and cognitive impairments in leading a higher-quality, more independent lifestyle. The iChair comprises a power wheelchair (PW), a laptop computer, a laptop mount, a multi-modal input platform, and a custom 3D-printed plastic enclosure for the environmental sensors. We have developed the sensor configuration to facilitate scientific observation while maintaining the flexibility to mount the system on almost any power wheelchair and keep it easy to remove for maintenance or travel. The first scientific observations have been used to compile ACCESS Reports that quantify a location's or event's level of accessibility. If barriers exist, we collect a 3D point cloud to be used as evidence and to make recommendations on how to remedy the problem. The iChair will serve a wide variety of disability types by incorporating several input methods: voice, touch, proximity switch, and a head-tracking camera. The HD camera and 3D scanner are mounted so as to provide reliable data with the precision necessary to detect obstacles, build 3D maps, follow guides, anticipate events, and provide navigational assistance. We evaluate the human factors of the current prototype to ensure that the technology will be accepted by those it is designed to serve, and propose a wheelchair skills test for future trial participants.
|
|
17:00-17:20, Paper TuCT2.4 | Add to My Program |
Distributed Formation Control for Autonomous Robots Following Desired Shapes in Noisy Environment |
Dang, Anh Duc | HSU Univ |
La, Hung | Univ. of Nevada at Reno |
Horn, Joachim | Helmut-Schmidt-Univ. / Univ. of the Federal Armed Forces Hamburg
Keywords: SS7 Multi-Robot Systems and Mobile Sensor Networks
Abstract: In this paper, we propose a novel distributed formation control method that enables autonomous robots to maintain a desired formation while tracking a moving target under the influence of dynamic and noisy environments. In our approach, the desired formations, which consist of virtual nodes arranged into specific shapes, are first generated. Then, the autonomous robots are driven by the proposed artificial force fields so that they converge to these virtual nodes without collisions. A stability analysis based on the Lyapunov approach is given. Moreover, a new combination of a rotational force field and a repulsive force field in the design of an obstacle avoidance controller allows a robot to avoid and escape both convex and non-convex obstacle shapes. V-shape and circular-shape formations, with their respective advantages, are used to test the effectiveness of the proposed method.
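A minimal sketch of the classic attractive/repulsive force-field idea underlying such controllers is given below; the paper's rotational field component is omitted, and the gains and ranges are assumed values.
    import numpy as np

    def force(pos, goal, obstacle, k_att=1.0, k_rep=1.0, rho0=2.0):
        """Attractive force toward the goal plus repulsive force near the obstacle."""
        f = -k_att * (pos - goal)                  # attractive component
        d = np.linalg.norm(pos - obstacle)
        if d < rho0:                               # repulsion acts only inside range rho0
            f += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - obstacle) / d
        return f

    print(force(np.array([0.0, 0.0]), np.array([5.0, 5.0]), np.array([1.0, 0.5])))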
|
|
17:20-17:40, Paper TuCT2.5 | Add to My Program |
Comparison of Consensus Loop Designs with a Mission Error Signal |
Gronemeyer, Marcus | Helmut-Schmidt-Univ. / Univ. of the Federal Armed Forces Hamburg
Bartels, Marcus | Hamburg Univ. of Tech. (TUHH) |
Horn, Joachim | Helmut-Schmidt-Univ. / Univ. of the Federal Armed Forces Hamburg
Keywords: SS7 Multi-Robot Systems and Mobile Sensor Networks, Distributed Methods, Localization, Tracking and Navigation
Abstract: A multi-agent system trying to complete a mission has to fulfill a number of objectives. In the case of a combination of formation control and other mission-related goals, such as source seeking, both objectives have to be integrated. This paper presents different designs that incorporate the mission error signal, as well as the generalized plants needed for the synthesis of the corresponding H∞-optimal information flow filter. The designs are then compared by evaluating simulation results for a sample scenario. The results show different behaviors depending on the choice of design and weighting filters. This variety in behavior suggests choosing the design according to the performance requirements of the particular task.
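For background, the basic consensus protocol that such loop designs build on moves each agent's state toward its neighbours' states. A minimal discrete-time sketch follows; the ring topology and step size are assumed, and the H∞ filter synthesis itself is beyond this illustration.
    import numpy as np

    def consensus_step(x, neighbours, eps=0.1):
        """One discrete-time consensus update: move each state toward its neighbours."""
        return np.array([xi + eps * sum(x[j] - xi for j in neighbours[i])
                         for i, xi in enumerate(x)])

    x = np.array([0.0, 1.0, 4.0, 9.0])
    neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring of four agents
    for _ in range(100):
        x = consensus_step(x, neighbours)
    print(x)   # all states converge to the average, 3.5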
|
|
17:40-18:00, Paper TuCT2.6 | Add to My Program |
Cooperative Longterm SLAM for Navigating Mobile Robots in Industrial Applications |
Dörr, Stefan | Fraunhofer Inst. for Manufacturing Engineering and Automation
Barsch, Paul | Dresden Univ. of Tech |
Gruhler, Matthias David | Fraunhofer IPA |
Garcia Lopez, Felipe | Fraunhofer Inst. for Manufacturing Engineering and Automation
Keywords: SS7 Multi-Robot Systems and Mobile Sensor Networks, SS1 Multi-Sensor Data Fusion for Autonomous Vehicles, SLAM
Abstract: Precise and reliable localization, as well as dynamic path planning, are key components for the flexible and efficient operation of mobile robots in industrial applications. Both strongly depend on up-to-date navigation maps of the respective environment. However, in these applications, providing such maps can be very challenging due to the typical dynamics and size of the environment. Promising approaches tackle the issue of localization in dynamic environments by estimating an update of the map while simultaneously localizing in it. To obtain a good estimate of the environment's dynamics and update the map accordingly, frequent observations of all areas of the environment are required. This is often not possible, especially in large environments and from a single robot's perspective. To overcome this problem, we present a cooperative approach that uses the sensor information of all mobile robots, and of any available stationary sensors, to generate an up-to-date global map and precisely localize the robots within it. We use dynamic occupancy grid maps with Rao-Blackwellized particle filters, in combination with a suitable server-agent architecture, to enable this cooperation. The advantage of our approach is shown both in simulation and on real hardware.
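The occupancy grid maps mentioned above are commonly maintained in log-odds form; the sketch below shows that standard update (the sensor-model probabilities are assumed values, and the Rao-Blackwellized particle filter is omitted).
    import numpy as np

    def logodds(p):
        return np.log(p / (1.0 - p))

    # Assumed inverse sensor model: P(occupied | hit) and P(occupied | miss)
    L_HIT, L_MISS = logodds(0.7), logodds(0.4)

    grid = np.zeros((10, 10))          # log-odds 0 == probability 0.5 (unknown)

    def update_cell(grid, i, j, hit):
        """Fuse one observation of cell (i, j) into the map."""
        grid[i, j] += L_HIT if hit else L_MISS

    for _ in range(3):
        update_cell(grid, 4, 5, hit=True)
    prob = 1.0 - 1.0 / (1.0 + np.exp(grid[4, 5]))
    print(prob)                        # belief that cell (4, 5) is occupied, ~0.93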
|
|
TuCT3 Regular Session, Room C |
Add to My Program |
Localization, Tracking and Navigation |
|
|
Chair: Hörst, Julian | Fraunhofer FKIE |
Co-Chair: Kurz, Gerhard | Karlsruhe Inst. of Tech. (KIT) |
|
16:00-16:20, Paper TuCT3.1 | Add to My Program |
Distributed Consensus Based IPDAF for Tracking in Vision Networks |
Stankovic, Srdjan | Univ. of Belgrade |
Ilic, Nemanja | Coll. of Tech. and Tech. Krusevac, Serbia |
Al Ali, Khaled | Vlatacom Inst |
Stankovic, Milos | Royal Inst. of Tech. (KTH) |
Keywords: Distributed Methods, Localization, Tracking and Navigation, Vision
Abstract: In this paper, consensus-based algorithms for distributed target tracking in large-scale camera networks are discussed, and a new adaptive algorithm is proposed. Camera networks are typically characterized by sparse communication and coverage topologies, as well as by the presence of multiple targets and clutter. The proposed algorithm (IPDA-ACF) results from introducing the probabilities of target perceivability and target existence into the basic distributed consensus-based tracking algorithm (ACF). The distributed adaptation scheme for information fusion provides robustness in cases of heavy clutter and occluded targets, together with a high level of agreement between the nodes. A comparison with analogous methods derived from the Kalman Consensus Filter (KCF) and the Information Consensus Filter (ICF) shows that the proposed method achieves better performance, along with reduced communication requirements.
|
|
16:20-16:40, Paper TuCT3.2 | Add to My Program |
Encoding Context Likelihood Functions As Classifiers in Particle Filters for Target Tracking |
Vaci, Lubos | Univ. of Udine |
Snidaro, Lauro | Univ. of Udine |
Foresti, Gian Luca | Univ. of Udine |
Keywords: Localization, Tracking and Navigation, Information Fusion
Abstract: In this work we address the problem of multi-level context representation and exploitation for target tracking. Specifically, we present an approach for encoding different types of contextual information (CI) as likelihood functions, via classifiers, in particle filters. The proposed solution is versatile enough to accommodate different types of CI. Promising results have been obtained from our simulations on synthetic data.
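A hedged sketch of the general idea follows, with a classifier's probabilistic output playing the role of the context likelihood in the particle weight update. The classifier here is a hypothetical stand-in stub, not one of the paper's models, and all parameters are assumed.
    import numpy as np

    def context_likelihood(particle):
        """Stub classifier: probability that the particle's state is consistent with
        context, e.g. 'targets stay on the road' (illustrative only)."""
        on_road = abs(particle[1]) < 5.0        # assume the road is the band |y| < 5
        return 0.9 if on_road else 0.1

    def reweight(particles, weights, measurement, sigma=1.0):
        """Combine the sensor likelihood with the contextual likelihood."""
        for i, p in enumerate(particles):
            sensor = np.exp(-0.5 * ((measurement - p[0]) / sigma) ** 2)
            weights[i] *= sensor * context_likelihood(p)
        return weights / weights.sum()

    rng = np.random.default_rng(1)
    particles = rng.normal(0, 3, size=(100, 2))       # state = (x position, y offset)
    weights = np.ones(100)
    print(reweight(particles, weights, measurement=1.0)[:5])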
|
|
16:40-17:00, Paper TuCT3.3 | Add to My Program |
Observability Analysis for Heterogeneous Passive Sensors Exploiting Signal Propagation Velocities |
Hörst, Julian | Fraunhofer FKIE |
Koch, Wolfgang | FGAN-FKIE |
Keywords: Localization, Tracking and Navigation, Information Fusion
Abstract: Different signal propagation velocities can be advantageous for passive tracking. For example, electronic and acoustic sensors can be used in conjunction to localize objects that emit both electromagnetic waves and sound. For a heterogeneous passive sensor setup involving electromagnetic detection sensors and acoustic bearing sensors, observability is studied, even for the case in which the signals are not emitted simultaneously. It is shown that observability can be established and that target maneuvers are not necessary. Finally, a numerical analysis of the Cramér-Rao lower bound is performed to verify the results.
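The underlying geometric cue is the difference in propagation delay: for a co-located emitter whose radio signal arrives effectively instantly while its sound travels at roughly 343 m/s, the arrival-time gap directly encodes range. The small sketch below assumes simultaneous emission for simplicity; the paper also treats the non-simultaneous case.
    SPEED_OF_SOUND = 343.0   # m/s in air, assumed constant

    def range_from_delay(t_acoustic, t_electromagnetic):
        """Range to an emitter from the gap between EM and acoustic arrival times,
        assuming both signals were emitted at the same instant and that the EM
        propagation delay is negligible."""
        return SPEED_OF_SOUND * (t_acoustic - t_electromagnetic)

    print(range_from_delay(2.915, 0.0))   # ~1000 m for a ~2.9 s gap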
|
|
17:00-17:20, Paper TuCT3.4 | Add to My Program |
Testing Trajectories against Pre-Defined Scenarios |
Krause, Tim | Univ. Bonn, Fraunhofer FKIE |
Govaers, Felix | Univ. Bonn, Fraunhofer FKIE |
Koch, Wolfgang | FGAN-FKIE |
Keywords: Localization, Tracking and Navigation, Information Fusion, Planning and Control
Abstract: In this paper we introduce a new method for deciding whether a trajectory follows a pre-defined path. This is achieved by representing the hypotheses as trajectories themselves, using Accumulated State Densities. Live tracking data are incorporated into the trajectories via out-of-sequence processing. Through this, we obtain two representations of the sensor data, each conditioned on a hypothesis. By using an adapted version of the sequential likelihood ratio test, we can test which hypothesis is more likely and therefore arrive at a decision. Our method can be used, e.g., for surveillance of sea-lane traffic or of other kinds of object movements where strict adherence to a path is necessary. Our approach can easily be incorporated into existing sensor data fusion methods, as most of the calculation is derived from standard tracking algorithms. Simulations show that the approach is capable of delivering reliable decisions even with high noise and a low measurement rate.
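The sequential likelihood ratio test mentioned above follows Wald's classic scheme. The minimal sketch below accumulates a log-likelihood ratio between two Gaussian mean hypotheses and stops at the standard thresholds; the hypotheses and noise level are illustrative, not the Accumulated State Density construction used in the paper.
    import numpy as np

    def sprt(measurements, mu0, mu1, sigma, alpha=0.01, beta=0.01):
        """Wald's sequential test between two Gaussian mean hypotheses."""
        upper = np.log((1 - beta) / alpha)     # accept H1 above this threshold
        lower = np.log(beta / (1 - alpha))     # accept H0 below this threshold
        llr = 0.0
        for k, z in enumerate(measurements, 1):
            llr += ((z - mu0) ** 2 - (z - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "H1", k
            if llr <= lower:
                return "H0", k
        return "undecided", len(measurements)

    rng = np.random.default_rng(2)
    z = rng.normal(1.0, 2.0, size=200)         # data actually generated under H1
    print(sprt(z, mu0=0.0, mu1=1.0, sigma=2.0))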
|
|
17:20-17:40, Paper TuCT3.5 | Add to My Program |
Self-Localization by Eavesdropping in Acoustic Underwater Sensor Networks |
Neumann, Sergej | Karlsruhe Inst. of Tech. (KIT) |
Oertel, David | Karlsruhe Inst. of Tech |
Woern, Heinz | KIT Karlsruhe Inst. of Tech |
Keywords: Localization, Tracking and Navigation, Information Fusion, SS1 Multi-Sensor Data Fusion for Autonomous Vehicles
Abstract: Localization is a common problem in underwater sensor networks. Since global navigation satellite systems do not work underwater, geo-referencing underwater sensors requires other technologies. In this paper, we present a novel localization approach for nodes in an acoustic underwater sensor network. By combining pressure sensors with the functionality of modern acoustic USBL modems, the nodes are able to self-localize within the network. This can be done passively, simply by listening to the messages transmitted by other network nodes, thereby saving energy and sparing the communication channel from additional traffic. To evaluate the performance of the method, experiments were conducted in simulation and under real conditions in the mid-Atlantic Ocean.
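The pressure sensors enter the geometry in a simple way: hydrostatic pressure gives each node its depth directly, constraining one dimension of the localization problem. A small sketch of that standard relation, with assumed seawater constants:
    RHO_SEAWATER = 1025.0    # kg/m^3, assumed constant with depth
    G = 9.81                 # m/s^2
    P_ATM = 101325.0         # Pa, surface pressure

    def depth_from_pressure(pressure_pa):
        """Node depth in metres from absolute pressure, via the hydrostatic relation."""
        return (pressure_pa - P_ATM) / (RHO_SEAWATER * G)

    print(depth_from_pressure(1.1e6))   # roughly 99 m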
|
|
17:40-18:00, Paper TuCT3.6 | Add to My Program |
Assessing the Accuracy of Industrial Robots through Metrology for the Enhancement of Automated Non-Destructive Testing |
Morozov, Maxim | Univ. of Strathclyde |
Riise, Jonathan | Univ. of Strathclyde |
Summan, Rahul | Univ. of Strathclyde |
Pierce, Stephen Gareth | Univ. of Strathclyde |
Mineo, Carmelo | Univ. of Strathclyde |
Macleod, Charles Norman | Univ. of Strathclyde |
Brown, Roy Hutton | Univ. of Strathclyde |
Keywords: Evaluation, Verification and Validation, Localization, Tracking and Navigation, Planning and Control
Abstract: This work studies the accuracy of an industrial robot, a KUKA KR5 arc HW, used to perform quality inspections of components with complex shapes. Laser tracking and large-volume photogrammetry were deployed to quantify both the pose accuracy and the dynamic path accuracy of the robot in accordance with ISO 9283:1998. The overall positioning inaccuracy of the robot is found to be almost 1 mm, and the path inaccuracy at 100% of the robot's rated velocity is 4.5 mm. The maximum pose orientation inaccuracy is found to be 14 degrees, and the maximum path orientation inaccuracy is 5 degrees. Local positional errors show a pronounced dependence on the position of the robot end effector within the working envelope. The uncertainties of the measurements are discussed and attributed to the tool centre point calibration, the reference coordinate system transformation, and the low accuracy of the photogrammetry system.
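For reference, the ISO 9283 positioning accuracy AP is the distance between the commanded pose and the barycentre of the attained poses. A minimal sketch of that computation on synthetic data follows; the measured values are illustrative, not the paper's.
    import numpy as np

    def positioning_accuracy(commanded, attained):
        """ISO 9283 pose accuracy AP: distance from the commanded position to the
        barycentre of the attained positions."""
        return np.linalg.norm(np.mean(attained, axis=0) - commanded)

    commanded = np.array([500.0, 200.0, 300.0])                        # mm
    rng = np.random.default_rng(3)
    attained = commanded + rng.normal([0.6, -0.4, 0.3], 0.1, (30, 3))  # synthetic repeat runs
    print(positioning_accuracy(commanded, attained))                   # close to 0.8 mm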
|
| |