Academic Contributions

Digital Twinning Stacks, a modular communication system approach for IoT implementation in Unity3D

CIRP CMS 2024 - DOI: tbd.

Manufacturing scenarios have become increasingly data-intensive, and displaying the vast amounts of collected data for human oversight has become a major task in various applications. The growing need for real-time data visualization in industrial applications has led to the development of numerous communication frameworks, each often designed for specific methods and purposes. This specialization requires developers to master multiple frameworks and integrate them into cohesive applications, a process that can be both complex and time-consuming. Human oversight interfaces in manufacturing can be categorized by whether they display the (aggregated) past, current, or future state of robots or other manufacturing equipment. This publication looks into realizing such industrial XR applications with Unity3D and introduces a modular, open-source framework designed to streamline the integration of real-time data communication pipelines. The framework supports inputs from a diverse range of sources, offering a unified solution that simplifies the development of applications requiring real-time data integration, and is available open source to researchers and practitioners.
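The modular idea described above — many heterogeneous sources feeding one unified consumer — can be sketched in a few lines. This is a hypothetical illustration in Python, not the framework's actual API; the names `DataSource`, `ListSource`, and `Pipeline` are invented for this example.

```python
from abc import ABC, abstractmethod
from queue import Queue

class DataSource(ABC):
    """One pluggable input channel (e.g. an MQTT, OPC UA, or WebSocket adapter)."""
    @abstractmethod
    def poll(self):
        """Return the next pending message, or None if nothing is available."""

class ListSource(DataSource):
    """Stand-in source that replays pre-recorded messages, for testing."""
    def __init__(self, messages):
        self._messages = list(messages)

    def poll(self):
        return self._messages.pop(0) if self._messages else None

class Pipeline:
    """Funnels all registered sources into one queue the visualization layer reads."""
    def __init__(self):
        self._sources = []
        self._queue = Queue()

    def register(self, source: DataSource):
        self._sources.append(source)

    def step(self):
        # Poll every source once per update tick (e.g. once per Unity frame).
        for source in self._sources:
            msg = source.poll()
            if msg is not None:
                self._queue.put(msg)

    def drain(self):
        out = []
        while not self._queue.empty():
            out.append(self._queue.get())
        return out

pipeline = Pipeline()
pipeline.register(ListSource([{"robot": "ur5", "joint1": 0.42}]))
pipeline.step()
print(pipeline.drain())  # [{'robot': 'ur5', 'joint1': 0.42}]
```

New input protocols then only require a new `DataSource` subclass; the consumer side stays untouched, which is the decoupling the abstract argues for.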

A User-Study on proximity-based scene transitioning for contextual information display in learning and smart factories

CLF 2024 - DOI: tbd.
Contextual information display is a concept of lowering the cognitive load of a user in a smart factory or learning factory environment. Due to the increasing amount of sensors and their collected data, measures must be taken to prevent overwhelming the human worker and ’update’ them to the ’operator 4.0’. In this publication, an online user study was conducted to find a preferable method of transitioning between contexts if a change of spatial location was detected. The participants (n = 36, age = 22-55) took part in an immersive experiment in which four transition mechanisms (distance indication, timer indication, button activation, manual selection) were presented on a simulated Augmented Reality headset in Virtual Reality. It was found, using the System-Usability-Scale (SUS), that users prefer an highly automatic transition with an indication of the spatial distance toward a change in the displayed information (n = 36, SUS = 76.45, σ = 12.08).

Framework for armature-based 3D shape reconstruction of sensorized soft robots in extended reality

Frontiers in Robotics and AI - DOI: 10.3389/frobt.2022.8103
Due to the compliance of soft robots, their response to actuation inputs heavily depends on the environment in which they operate. Therefore, the ability to study soft robots' behavior while interacting with their environment is crucial for improving their design and control strategies. However, soft robots are often intended to operate in confined environments that impede direct observation of the robot. Although recent developments in proprioceptive sensors for soft robots have enabled accurate real-time capture of a soft robot's configuration while operating in such environments, these complex three-dimensional configurations can be difficult to interpret using traditional visualization techniques. In this work, we present an open-source framework for real-time three-dimensional reconstruction of soft robots in eXtended Reality (Augmented and Virtual Reality). XR offers the opportunity to visualize the simulation as an overlay on the real environment and to enhance the experience by displaying additional information on the soft robot that cannot be perceived through direct observation. The approach is demonstrated in Augmented Reality using a Microsoft HoloLens device and runs at up to 60 FPS. We explore the influence that system parameters such as mesh density, neural network design, and armature complexity have on the reconstruction (i.e., speed, scalability). The open-source framework is expected to function as a platform for future research and developments on real-time control and investigation of soft robots operating in environments that impede direct observation of the robot.
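The armature abstraction mentioned above treats the continuously deforming robot as a chain of rigid bones whose relative joint angles are estimated from the proprioceptive sensors. As a hypothetical, simplified illustration (planar, not the paper's actual pipeline), forward kinematics over such a chain recovers the 3D points that the mesh is then skinned to:

```python
import math

def armature_points(segment_lengths, joint_angles):
    """Planar forward kinematics of an armature chain.

    Each bone rotates relative to its parent by the given joint angle
    (radians); returns the positions of all joints, root at the origin.
    """
    x = y = theta = 0.0
    points = [(x, y)]
    for length, angle in zip(segment_lengths, joint_angles):
        theta += angle  # accumulate orientation along the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# Three equal bones, each bent 30 degrees, as a sensor model might estimate
for p in armature_points([1.0, 1.0, 1.0], [math.radians(30)] * 3):
    print(p)
```

In the real framework this runs per frame, which is why the paper studies how armature complexity (number of bones) trades off against reconstruction speed.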

Effective close-range accuracy comparison of Microsoft HoloLens Generation one and two using Vuforia ImageTargets

IEEEVR 2021 - DOI: 10.1109/VRW52623.2021.00158

This paper analyzes the effective accuracy of close-range operations for the first and second generation of Microsoft HoloLens in combination with Vuforia Image Targets in a black-box approach. The implementation of Augmented Reality (AR) on optical see-through (OST), head-mounted devices (HMDs) has been proven viable for a variety of tasks, such as assembly, maintenance, or educational purposes. For most of these applications, minor localization errors are tolerated since no accurate alignment between the artificial and the real parts is required. For other potential applications, these accuracy errors represent a major obstacle. The "realistically achievable" accuracy remains largely unknown for close-range use (e.g., within "arm's reach" of a user) for both generations of Microsoft HoloLens. Thus, the authors developed a method to benchmark and compare the applicability of these devices for tasks that demand higher accuracy, such as composite manufacturing or medical surgery assistance. Furthermore, the method can be applied to a broad variety of devices, establishing a platform for benchmarking and comparing these and future devices. This paper analyzes the performance of test users, who were asked to pinpoint the perceived location of holographic cones. The image recognition software package "Vuforia" was used to determine the spatial transform of the predefined ImageTarget. By comparing the user markings with the algorithmic locations, a mean deviation of 2.59 ± 1.79 mm (HL 1) and 1.11 ± 0.98 mm (HL 2) has been found, which means that the mean accuracy improved by 57.1% and precision by 45.4%. The highest mean accuracy of a single test user has been measured with 0.47 ± 1.683 mm (HL 1) and 0.085 ± 0.567 mm (HL 2).
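The improvement percentages above follow from the reported deviations as a relative reduction, which can be checked directly (the small gap to the reported 45.4% for precision presumably comes from the published values being rounded):

```python
def relative_improvement(old, new):
    """Percentage reduction from old to new (positive = improvement)."""
    return (old - new) / old * 100

# Accuracy: mean deviation 2.59 mm (HL 1) -> 1.11 mm (HL 2)
print(round(relative_improvement(2.59, 1.11), 1))  # 57.1
# Precision: standard deviation 1.79 mm -> 0.98 mm
print(round(relative_improvement(1.79, 0.98), 1))  # 45.3
```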

MirrorLabs - creating accessible Digital Twins of robotic production environments with Mixed Reality

IEEEAIVR 2020 - DOI: 10.1109/AIVR50618.2020.00017

How can recorded production data be visualized in Virtual Reality? How can state-of-the-art Augmented Reality displays show robot data? This paper introduces an open-source ICT framework approach for combining Unity-based Mixed Reality applications with robotic production equipment using ROS Industrial. The publication gives details on the implementation and demonstrates its use as a data analysis tool in the context of scientific exchange within the area of Mixed Reality-enabled human-robot co-production.
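One concrete detail any ROS-to-Unity bridge like this must handle is the coordinate-frame mismatch: ROS uses a right-handed frame (x forward, y left, z up, per REP 103), while Unity uses a left-handed frame (x right, y up, z forward). A commonly used mapping for points is sketched below; this is an illustrative helper written for this summary, not a function from the framework itself:

```python
def ros_to_unity(p):
    """Map a point from ROS coordinates (right-handed: x forward, y left,
    z up) to Unity coordinates (left-handed: x right, y up, z forward)."""
    x, y, z = p
    return (-y, z, x)

# A point 1 m forward, 2 m left, 3 m up in ROS lands 2 m to the right,
# 3 m up, 1 m forward in Unity.
print(ros_to_unity((1.0, 2.0, 3.0)))  # (-2.0, 3.0, 1.0)
```

Rotations (quaternions) need an analogous handedness flip; applying the conversion consistently at the bridge boundary keeps robot poses aligned between the ROS Industrial side and the Unity scene.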