

  • This event has already taken place.

MCI Paper Session 5 – AR

September 8 @ 15:30 - 16:35

This session will be streamed live on YouTube:
https://www.youtube.com/watch?v=IzJ_dh26bxQ


Effects of Position of Real-Time Translation on AR Glasses

Rufat Rzayev, Sabrina Hartl, Vera Wittmann, Valentin Schwind, Niels Henze
University of Regensburg, Germany

Augmented reality (AR) provides users with contextually relevant multimedia content by overlaying it on real-world objects. However, overlaying virtual content on real-world objects can cause occlusion. Especially in learning use cases, this occlusion might hide real-world information important for learning gain. Therefore, it is important to understand how virtual content should be positioned relative to the related real-world information without negatively affecting the learning experience. Thus, we conducted a study with 12 participants using AR glasses to investigate the position of virtual content using a vocabulary learning task. Participants learned foreign words shown in their surroundings while viewing translations on AR glasses as an overlay, to the right of, or below the foreign word. We found that showing virtual translations on top of foreign words significantly decreases comprehension and increases users’ task load. Insights from our study inform the design of applications for AR glasses that support vocabulary learning.


Mind the ARm: Realtime Visualization of Robot Motion Intent

Uwe Gruenefeld, Lars Prädel, Jannike Illing, Tim Claudius Stratmann, Sandra Drolshagen, Max Pfingsthorn
OFFIS – Institute for Information Technology, Germany

Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) which allow humans to understand robot motion intent and give way to the robot. We compare the different visualizations in a user study in which a small cognitive task is performed in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume, however, required the shortest head rotation and was perceived as safest, enabling participants to remain in closer proximity to the robot arm before leaving the shared workspace, without causing shutdowns.


Show me your Living Room: Investigating the Role of Representing User Environments in AR Remote Consultations

Nicolas Kahrl, Michael Prilla, Oliver Blunk
TU Clausthal, Germany

The study reported here investigates AR-based support for remote consultations, in which an on-site user is supported by a remote helper. In such situations, it is important for the remote helper (or, in our case, the consultant) to see the environment of the person asking for support in order to relate to it. Based on the literature, we created and tested different mechanisms using a 2D video stream with a captured 2D/3D texturized virtual model of the room. In addition, we compared the often-used approach of fixing the remote helper’s view to the view of the on-site user with the possibility to move around freely in the 2D/3D model. The aim of the study was to evaluate how to support an on-site user wearing an AR HMD. The study tested four conditions composed from these differences with nine professional furniture consultants. In the study, we compared four mechanisms in which the consultants were able to place furniture in the living room of a customer and advise the customer on their purchase. We found that there were hardly any differences in task load, social presence, or perceived support between the four conditions. However, participants had clear preferences for certain conditions and aspects of them. From our analysis, we provide recommendations for the design of mixed reality support for remote consultations.


Impact of Augmented Reality Guidance for Car Repairs on Novice Users of AR – A Field Experiment on Familiar and Unfamiliar Tasks

Clemens Hoffmann1, Sebastian Büttner2, Kai Wundram3, Michael Prilla2
1Volkswagen AG, Germany; 2Human-Centered Information Systems, Clausthal University of Technology; 3Institute of Vehicle Systems and Service Activities, Ostfalia University of Applied Sciences

The use of augmented reality (AR) guidance is seen as an opportunity to address the growing complexity of industrial tasks. Previous research showed benefits of AR for different industrial tasks, especially for novice users, while other research suggests that AR was not superior to other means for novices. However, there is not much work that looks at the relation between users’ initial exposure to AR (that is, whether users have never used AR before) and different types of tasks. In this paper, addressing the field of car maintenance and repair, we look into the question of how AR support impacts performance in familiar and unfamiliar tasks when the AR user has never used AR before. By running an experiment under field conditions, we investigate whether the familiarity of a specific repair task has an impact on performance under AR guidance compared to a traditional repair guideline. Our experiment reveals interesting insights. First, we show that familiarity and routine have an important impact on adherence to (all) repair guidelines, which should be regarded in future studies. Second, despite its novelty and the corresponding added time to deal with AR, we found that guidance via AR worked better for unfamiliar tasks. This shows the potential of AR for the guidance of industrial tasks in practice, and it brings up design suggestions for the implementation of this guidance in practice.


Augmented Reality Training for Industrial Assembly Work – Are Projection-based AR Assistive Systems an Appropriate Tool for Assembly Training?

Sebastian Büttner1, Michael Prilla2, Carsten Röcker1
1OWL University of Applied Sciences, Lemgo; 2Clausthal University of Technology

Augmented Reality (AR) systems are on their way to industrial application; e.g., projection-based AR is used to enhance assembly work. Previous studies showed advantages of such systems in permanent-use scenarios, such as faster assembly times. In this paper, we investigate whether such systems are suitable for training purposes. Within an experiment, we observed training with a projection-based AR system over multiple sessions and compared it with personal training and training with a paper manual. Our study shows that projection-based AR systems offer only small benefits in the training scenario. While systematic mislearning of content is prevented through immediate feedback, our results show that AR training does not match personal training in terms of speed and recall precision after 24 hours. Furthermore, we show that once an assembly task is properly trained, there are no differences in long-term recall precision, regardless of the training method.


VacuumCleanAR: Augmented Reality-based Self-explanatory Physical Artifacts

Thomas Ludwig, Sven Hoffmann, Florian Jasche, Marius Ruhrmann
Universität Siegen, Germany

Consumer purchase decisions are not only determined by the quality or price of a product. Customers also want an innovative product that they can identify with in something more than just a functional way. Much of this appeal is often bound up with the innovative character of a product. However, the global market and the huge variety of products available make it challenging for companies to help customers understand the particular innovations in their products, especially in terms of technical “hidden” innovations. Augmented reality (AR) offers interactive experiences in real-world environments through digitalized information. In this paper, we present a design case study about an AR-based approach to reveal the hidden innovations to potential users in an engaging and “emotional” way by using the example of a vacuum cleaner. Based on an empirical study, we designed and implemented the fully functional HoloLens application VacuumCleanAR, which allows users to discover the hidden innovations of a vacuum cleaner in a less functional and more consumer-centric way. This reveals the scope for augmenting other physical artifacts in a similar fashion.


Horst – The Teaching Frog: Learning the Anatomy of a Frog Using Tangible AR

Sebastian Oberdörfer, Anne Elsässer, David Schraudt, Silke Grafe, Marc Erich Latoschik
University of Würzburg, Germany

Learning environments using Augmented Reality (AR) visualize complex facts, can increase a learner’s motivation, and allow for the application of learning contents. When using tangible user interfaces, the learning process gains a physical aspect that improves overall intuitiveness. We present a tangible AR system for learning the anatomy of a frog. The learning environment is based on a plush frog containing removable markers. When detected, the markers are replaced with 3D models of the organs. By extracting individual organs, learners can inspect them up close and learn more about their functions. Our AR frog further includes a quiz for self-assessment of the learning progress and a gamification system to raise overall motivation.

Details

Date:
September 8
Time:
15:30 - 16:35
Website:
https://www.youtube.com/watch?v=IzJ_dh26bxQ

Further Information

Session Chair
Susanne Boll