
Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance

Augmented reality eyewear devices (e.g. glasses, headsets) are poised to become as ubiquitous as smartphones by providing quicker and more convenient access to information. There is theoretically no limit to their application areas, and many use cases are already being explored, such as medicine, education, industry, entertainment and the military. Some interactions with these eyewear devices are becoming standard, such as mid-air hand gestures and voice commands. Paradoxically, in many use cases where these devices are currently deployed, users cannot perform these interactions without constraints: for example, when their hands are already occupied, when they are in a noisy environment or, conversely, in one where silence is required and voice commands cannot be used properly, or in a social context where both mid-air hand gestures and voice commands may be perceived as awkward or undesirable. This thesis project therefore aims to extend the interactivity of augmented reality eyewear devices: 1) by providing more discreet interactions, such as head gestures based on cognitive image schema theory and metaphorical extension, and natural user interfaces based on smartwatch finger touch gestures; 2) by using the user's context to provide the most convenient interface and feedback at the right place and time. The underlying objective of this project is to facilitate the acceptance and usage of augmented reality eyewear devices.


1 INTRODUCTION

Eyewear devices dedicated to augmented reality allow us to add digital information directly to the real world and, like all technologies relying on semiconductors, they benefit from Moore's Law: at constant price, the capacity of microprocessors doubles approximately every two years. Thus, although augmented reality eyewear devices have been around for decades, we are only at the dawn of their democratization. The number of augmented reality eyewear devices is growing; they are getting more efficient, smaller, more ergonomic and, recently, more affordable. Many predict their massive use [12, 35] in various fields [6] already explored in research: health [25, 38], industry and maintenance [29, 34], education and training [23, 43], military [24], entertainment and the arts [13], etc. Although augmented reality eyewear devices can offer rewarding sensorimotor and cognitive activities, the main reason they are increasingly used is their potential to provide faster access to information, just as personal computers and smartphones have done before them.

Nevertheless, to ensure rapid access to this information and, consequently, its adoption, it is essential to be able to manipulate the information, and by extension the augmented reality system, in an appropriate manner regardless of the context of use [1, 40]. Currently, augmented reality interaction for eyewear devices mainly relies on two modalities because of their natural integration [10]: hand gestures and voice commands. Paradoxically, we are convinced that these interactions are inadequate in many situations commonly encountered in the actual use of an augmented reality eyewear device. For instance, in a museum, calm is sometimes required, and hand gestures can inadvertently damage the exhibited artworks or disturb the tranquility of other visitors. In an industrial context, at the top of a wind turbine, with one hand on a ladder or suspended from a rope, keeping the hands free becomes essential for obvious safety reasons, and the wind or the noise of machines can also prevent the use of voice to interact with the system. For a supermarket worker in charge of the produce department, in addition to the unconditional use of both hands to perform regular tasks (such as arranging fruits and vegetables), it is also unfortunate to make large gestures or talk to the system in front of customers. These examples are not insignificant: whatever the domain, we regularly find similar problems from the interaction perspective. Moreover, these issues are of direct interest to our industrial partner Black Artick, which is involved in this thesis project.


2 BACKGROUND AND RELATED WORK

Augmented reality systems are intrinsically multimodal because of their symbiosis of the virtual and real worlds. It is hard to imagine a rich augmented reality experience with only one modality to interact with the system. In the field of human-computer interaction, multimodality concerns the use of several modalities to understand and manipulate virtual entities.

In our work, we focus on multimodality in the context of eyewear augmented reality devices. Many studies combining augmented reality and multimodality have been conducted. Among them, several focus on the relation between eyewear and other wearable devices.

Rupprecht et al. [37] connected a smartwatch to smart glasses in order to improve gesture recognition by using inertial sensors. Kharlamov et al. [22] also used a smartwatch to improve gesture interaction with eyewear devices by offering a way to point at virtual content. Both of these settings have the advantage of not relying on the camera for gesture recognition, but neither took advantage of the touchscreen to offer other complementary interactions.
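
To make this complementarity concrete, the sketch below (ours, not code from [37] or [22]) shows how a smartwatch could forward both inertial samples and touchscreen gestures to the eyewear through a single transport abstraction, so that the glasses need neither their camera nor their own touch surface. The event names and the EyewearTransport interface are assumptions for illustration only.

```kotlin
// Illustrative sketch: a smartwatch-side event model forwarding both inertial
// samples and touchscreen gestures to the eyewear device. Hypothetical names.

data class ImuSample(val timestampMs: Long, val ax: Float, val ay: Float, val az: Float)

sealed class WatchEvent {
    data class Inertial(val sample: ImuSample) : WatchEvent()
    data class TouchSwipe(val direction: String) : WatchEvent()   // e.g. "left", "right"
    data class TouchTap(val x: Float, val y: Float) : WatchEvent()
}

// Hypothetical transport to the eyewear (Bluetooth, Wi-Fi Direct, relay...).
interface EyewearTransport {
    fun send(event: WatchEvent)
}

class WatchInputForwarder(private val transport: EyewearTransport) {
    fun onImuSample(sample: ImuSample) = transport.send(WatchEvent.Inertial(sample))
    fun onSwipe(direction: String) = transport.send(WatchEvent.TouchSwipe(direction))
    fun onTap(x: Float, y: Float) = transport.send(WatchEvent.TouchTap(x, y))
}

fun main() {
    val forwarder = WatchInputForwarder(object : EyewearTransport {
        override fun send(event: WatchEvent) = println("to eyewear: $event")
    })
    forwarder.onSwipe("left")
    forwarder.onImuSample(ImuSample(0L, 0.1f, -0.2f, 9.8f))
}
```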

Still in gesture recognition, Yi et al. [44] explored head gesture interaction from the eyewear device's inertial sensor. They reported a very effective recognition system for simple shapes (triangle, rectangle, circle) and even numeric or alphabetical input, but they did not perform user tests in an actual context with real use cases.
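
As a rough illustration of this kind of inertial head gesture recognition, the following minimal sketch classifies nods and tilts using simple angular-velocity thresholds. The axes, threshold value and GyroSample type are our assumptions; this is not the recognizer of Yi et al. [44].

```kotlin
// Illustrative sketch only: a naive nod/tilt detector over head-worn gyroscope
// samples, using angular-velocity thresholds (rad/s). Values are assumptions.

data class GyroSample(val timestampMs: Long, val pitchRate: Float, val rollRate: Float)

enum class HeadGesture { NOD, TILT_LEFT, TILT_RIGHT, NONE }

class HeadGestureDetector(private val threshold: Float = 1.5f) {
    fun classify(sample: GyroSample): HeadGesture = when {
        sample.pitchRate > threshold  -> HeadGesture.NOD          // quick downward pitch
        sample.rollRate  > threshold  -> HeadGesture.TILT_RIGHT   // roll to the right
        sample.rollRate  < -threshold -> HeadGesture.TILT_LEFT    // roll to the left
        else -> HeadGesture.NONE
    }
}

fun main() {
    val detector = HeadGestureDetector()
    println(detector.classify(GyroSample(0L, pitchRate = 2.0f, rollRate = 0.1f)))  // NOD
    println(detector.classify(GyroSample(1L, pitchRate = 0.0f, rollRate = -1.8f))) // TILT_LEFT
}
```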

Gaze gestures have rarely been explored for augmented reality eyewear devices, but van der Meulen et al. [27] conducted a study to measure the difference between actual gaze gestures and head gestures, which are often wrongly treated as gaze on eyewear devices. They did not intend to use gaze gestures to design new interactions, such as combining eye and head movements. Voice commands are common in augmented reality systems: Irawati et al. [20] used them as a complementary modality with paddle gestures to interact with virtual objects in the real world, and their results show an increase in efficiency from the multimodal input. Heidemann et al. [16] presented an augmented reality system where the user could interact through voice commands or hand gestures; in this case, it is a supplementary modality.

Finally, multimodal input interfaces are not always related to the user's environment or motor actions: they can also come from the "mind", or at least from the consequences of its activity, through brain-computer interfaces. This has been explored in augmented reality: Mercier-Ganady et al. [26] connected an EEG device to a computer coupled with optical sensors (Kinect) in order to offer the user a mind-controlled "invisibility super-power" experience. Cardin et al. [5] also used an EEG device coupled with eyewear devices to provide an immersive experience of watching brain activity in real time.

From the output perspective, multimodality in augmented reality systems has also been largely explored, showing reduced cognitive load and gains in efficiency when multiple modalities are provided in comparison to only one [9, 11, 17, 33]. In addition to visual feedback, auditory and haptic feedback are the most common ways to add another modality [7, 21, 39, 42]. However, a few attempts to exploit olfaction and gustation in augmented reality have been made, such as MetaCookie+ by Narumi et al. [28], which allows users to taste "different" cookies from a single neutral cookie coupled with smells and visual cues.

Through this body of research, we have seen that mobile and wearable computing can extend the interactive possibilities between humans and machines in several ways. However, many interaction modalities are quite difficult to use even in an experimental context, and may be even more difficult in real mobile use. Moreover, many involve intrusive and poorly adapted devices, such as neural interfaces, force feedback systems and other devices stimulating the user's olfactory and gustatory senses.

Context awareness and augmented reality have also been extensively explored, but often with a focus on content, that is, bringing the right information into a specific context [8, 18]. However, some works have shown the value of using multiple mobile and wearable devices together to improve activity recognition [37], which can be part of the context. Some studies use context awareness to improve interfaces or interaction: Orlosky et al. [32] moved the augmented visual content depending on the faces present in the user's field of view, to avoid overlays, and Ghouaiel et al. [14] offer a system that changes the size of the content based on the geoposition context, or changes the sound volume based on the ambient noise.
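
The adaptation logic in such systems can be very simple. The hedged sketch below illustrates rule-based adjustments in the spirit of Ghouaiel et al. [14]; the thresholds and the ContextSnapshot fields are invented for illustration and are not taken from their system.

```kotlin
// Illustrative, rule-based context adaptation: output volume follows ambient
// noise and content scale follows an indoor/outdoor hint. Values are assumptions.

data class ContextSnapshot(val ambientNoiseDb: Double, val outdoors: Boolean)

data class Presentation(val volume: Double, val contentScale: Double) // volume in [0,1]

fun adaptPresentation(ctx: ContextSnapshot): Presentation {
    val volume = when {
        ctx.ambientNoiseDb > 85 -> 0.0   // too noisy: rely on visual feedback instead
        ctx.ambientNoiseDb > 60 -> 1.0   // noisy: raise the volume
        else -> 0.4                      // quiet: keep audio discreet
    }
    // Outdoors, viewing distances are larger, so content is scaled up.
    val scale = if (ctx.outdoors) 1.5 else 1.0
    return Presentation(volume, scale)
}

fun main() {
    println(adaptPresentation(ContextSnapshot(ambientNoiseDb = 90.0, outdoors = true)))
}
```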

3 STATEMENT OF THESIS

Problem statement. Eyewear devices dedicated to augmented reality are about to become widely popular, but software implementations do not always provide interfaces and interactions adequate to their context of use. These interfaces currently focus on hand gestures or voice recognition, while a major part of the use of augmented reality eyewear devices is implicitly linked to situations where the user, in their environment, is not able to perform these interactions (e.g. a noisy environment or one requiring discretion, or with both hands occupied).

Hypothesis 1. Can mobile and wearable computing enable us to offer more efficient (more accurate and faster) interaction with augmented reality eyewear devices by increasing the modalities (in terms of quantity and quality) available to their interfaces?

Hypothesis 2. Do context-aware mobile and wearable computing systems offer better interactivity (efficiency, appreciation, cognitive load, fatigue) in real situations where the use of hands and voice is impeded?

4 RESEARCH OBJECTIVES

Based on the literature review, previous experiments and real use cases from an industrial partner, we define two main objectives for our research.

Objective 1 – Context awareness. Develop a computing system able to recognize common situations that are currently constrained by the usual augmented reality eyewear interfaces (e.g. having to navigate a menu or move a virtual object with both hands occupied, while being in movement, lying down, in a confined space, or surrounded by unknown people...). This objective includes:

  • Identify the relevant situations and analyze their constraints for augmented reality interaction.

  • Choose devices, sensors and contextual data suited to the identified constrained situations.

  • Train a machine learning model to recognize the chosen situations and the constraints linked to the supported activity (a minimal illustrative sketch follows this list).

  • Integrate the models into an augmented reality computing system.

  • Validate the implementation through user experiments (ensuring the validity of the inferences in real time).
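
As a minimal illustration of the recognition step referred to above, the sketch below classifies a situation from hand-crafted wearable sensor features with a nearest-centroid rule. The feature set, situation labels and centroid values are placeholders and not the model that will actually be trained.

```kotlin
// Minimal sketch, not the system under development: a nearest-centroid classifier
// over hand-crafted features from wearable sensors. All labels/values are invented.

data class Features(val accelMean: Double, val accelVar: Double, val noiseDb: Double)

class SituationClassifier(private val centroids: Map<String, Features>) {
    private fun dist(a: Features, b: Features): Double {
        val d1 = a.accelMean - b.accelMean
        val d2 = a.accelVar - b.accelVar
        val d3 = (a.noiseDb - b.noiseDb) / 10.0 // crude scaling of the noise axis
        return d1 * d1 + d2 * d2 + d3 * d3
    }

    fun predict(x: Features): String =
        centroids.minByOrNull { (_, c) -> dist(x, c) }!!.key
}

fun main() {
    // Centroids would normally be learned from labelled recordings of each situation.
    val classifier = SituationClassifier(mapOf(
        "hands_busy_climbing" to Features(accelMean = 2.5, accelVar = 4.0, noiseDb = 70.0),
        "quiet_inspection"    to Features(accelMean = 0.3, accelVar = 0.2, noiseDb = 40.0)
    ))
    println(classifier.predict(Features(2.2, 3.5, 75.0))) // hands_busy_climbing
}
```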

Objective 2 – Interaction. Develop multiple interactions for augmented reality eyewear devices to address the previously identified situations:

  • Model interfaces and interactions by relying on:

  • an interaction model from an already existing technology (a video game controller), adapted to the smartwatch (see the sketch after this list).

  • a so-called natural communication (head gestures) coupled with an exploration of image schema theory and metaphorical extension (abstract representations of recurring sensorimotor patterns of experience [19]).

  • Integrate the new interfaces and interactions into the augmented reality system for eyewear devices.

  • Experiment and validate implementations with user tests and proven methodologies.
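
To illustrate the game controller metaphor mentioned in the list above, the following sketch maps smartwatch touch gestures onto directional-pad-like commands for the eyewear. The gesture and command names are assumptions, not the final design.

```kotlin
// Illustrative sketch of the "game controller" metaphor on a smartwatch touchscreen:
// swipes act as a directional pad, a tap as confirm, a long press as back.

enum class TouchGesture { SWIPE_UP, SWIPE_DOWN, SWIPE_LEFT, SWIPE_RIGHT, TAP, LONG_PRESS }

enum class ArCommand { MOVE_UP, MOVE_DOWN, MOVE_LEFT, MOVE_RIGHT, SELECT, BACK }

fun toCommand(gesture: TouchGesture): ArCommand = when (gesture) {
    TouchGesture.SWIPE_UP    -> ArCommand.MOVE_UP
    TouchGesture.SWIPE_DOWN  -> ArCommand.MOVE_DOWN
    TouchGesture.SWIPE_LEFT  -> ArCommand.MOVE_LEFT
    TouchGesture.SWIPE_RIGHT -> ArCommand.MOVE_RIGHT
    TouchGesture.TAP         -> ArCommand.SELECT
    TouchGesture.LONG_PRESS  -> ArCommand.BACK
}

fun main() {
    println(toCommand(TouchGesture.SWIPE_LEFT)) // MOVE_LEFT, e.g. previous menu item
}
```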

Additional objectives. Based on the results of the experiments, create a framework (in the form of a pivot table) for the systematic comparison of the set of interfaces over a set of selected tasks in the various contexts previously identified. With this table, the aim is to recommend a specific interaction according to different criteria (efficiency, time, cognitive load, fatigue, etc.) for a dedicated task in a given context. This framework will evolve after the research, incorporating new situations, tasks and interfaces. Indeed, as future work, we are considering exploring the use of real-world objects (such as pens, scissors, rulers...) to create interactions that take full advantage of augmented reality technology.
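
The sketch below shows one possible shape for this comparison framework: a lookup table keyed by (task, context, interaction) whose criterion scores feed a weighted recommendation. All names and values are invented placeholders, since the real entries will come from the planned experiments.

```kotlin
// Sketch of the comparison framework as a lookup structure with a weighted
// recommendation. Names and scores are placeholders, not experimental results.

data class Scores(val efficiency: Double, val cognitiveLoad: Double, val fatigue: Double)

// (task, context, interaction) -> scores
typealias ComparisonTable = Map<Triple<String, String, String>, Scores>

fun recommend(table: ComparisonTable, task: String, context: String,
              weights: Scores = Scores(1.0, 1.0, 1.0)): String? =
    table.filterKeys { it.first == task && it.second == context }
        .maxByOrNull { (_, s) ->
            // Higher efficiency is better; load and fatigue count as penalties.
            weights.efficiency * s.efficiency -
                weights.cognitiveLoad * s.cognitiveLoad -
                weights.fatigue * s.fatigue
        }?.key?.third

fun main() {
    val table: ComparisonTable = mapOf(
        Triple("menu_navigation", "hands_busy", "head_gesture") to Scores(0.7, 0.4, 0.3),
        Triple("menu_navigation", "hands_busy", "voice")        to Scores(0.9, 0.3, 0.1)
    )
    println(recommend(table, "menu_navigation", "hands_busy")) // "voice" with this fake data
}
```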

5 PRELIMINARY RESULTS

Following the creation of the Carton project [4], a do-it-yourself smart eyewear device (with an optical see-through display based on a mobile phone positioned near the head thanks to a cardboard support and other simple materials), we started the ControlWear project [4] to investigate the use of a smartwatch to interact with an eyewear device. The Carton project itself included two other interaction modalities: basic head gesture recognition (tilt and nod) and finger touch gesture recognition on the device's screen. We conducted user experiments with 10 participants to evaluate the interactions in relation to different tasks. The participants were asked to go through the three interaction modalities to perform three different tasks: a training session for the interactions, a virtual maze race (requiring only four directional controls: up, down, left, right) to measure speed under time pressure, and the making of an origami (a tulip) to ensure their hands were busy. Each task was performed four times: once with each interaction modality, and a fourth time with the interaction of their choice.

In total, 2114 interactions were performed during the study, on average 211 interactions per user. As Figure 1 shows, the smartwatch appears to be the preferred way to interact with the Carton eyewear device compared to the two other interactions: head gesture and finger touch. When participants had the option to choose the interaction modality, they selected the smartwatch 64% of the time. Moreover, if we consider the race activity alone, they chose the smartwatch 100% of the time, even though none of the participants had ever used such a device before. However, as seen on the right of Figure 1, there was one use case where the head gesture was the most performed interaction: the origami activity. This is coherent, as head gestures are more intuitive when the user's hands are not free. Finger touch on the Carton device itself seems to be the least used interaction regardless of the situation, but this could be explained by a lower success rate (due to our implementation), as users tend to select the mode of interaction that is the most well-engineered.


Figure 1: Interaction choices made by participants for each task, from left to right: training, maze and origami.


Thus, in the future, it would also be interesting to look at the mode of interaction the users initially chose.

This was then confirmed by a post-experiment questionnaire, where the only negative feedback about the finger touch interaction was due to some attempts that did not work, highlighting the importance of the interaction implementation and the resulting error/success rates, which may override the user's preference.

In relation to the time needed to complete a task with each interaction modality, the results indicate that the smartwatch was the fastest interaction strategy in our case. This was a major advantage for the race task, where the smartwatch interaction showed a completion time four times quicker than the two other interactions, as shown in Figure 2.


Figure 2: Average time in seconds to finish the maze race task for each interaction.

All these results [4] support the idea of exploring the use of the smartwatch and head gestures to control and interact with an eyewear device dedicated to augmented reality in the context of real use cases.

6 METHODOLOGY

To achieve the objectives above, we propose a methodology based on an experimental approach.

6.1 Experiment 1 – Validate models

After choosing the context (in relation to the real use cases from the industrial partner), the interactions (smartwatch finger touch recognition and head gesture recognition), the tasks (3D manipulation, menu navigation and micro-interaction) and the devices (Microsoft HoloLens, Android smartphone and Wear OS smartwatch) for the system, we will develop the system architecture and build the context-awareness models from data collected from many users.
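
One possible arrangement of this architecture, sketched below with hypothetical interfaces, is a phone-side hub that receives sensor messages from the watch and the eyewear, runs the context inference, and pushes the resulting situation label to the HoloLens application. This is an assumption about the design, not its final form.

```kotlin
// Hypothetical sketch of the planned device pipeline: watch/eyewear sensors ->
// phone-side context inference -> HoloLens application. Names are assumptions.

data class SensorMessage(val source: String, val payload: Map<String, Float>)

interface ContextModel { fun infer(msg: SensorMessage): String }  // returns a situation label
interface HololensLink { fun push(event: String) }

class PhoneHub(private val model: ContextModel, private val link: HololensLink) {
    fun onMessage(msg: SensorMessage) {
        val situation = model.infer(msg)
        // The eyewear app decides which interface/feedback fits the inferred situation.
        link.push("situation:$situation")
    }
}

fun main() {
    val hub = PhoneHub(
        model = object : ContextModel {
            override fun infer(msg: SensorMessage) =
                if ((msg.payload["accelVar"] ?: 0f) > 3f) "hands_busy" else "idle"
        },
        link = object : HololensLink { override fun push(event: String) = println(event) }
    )
    hub.onMessage(SensorMessage("watch", mapOf("accelVar" to 4.2f)))
}
```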

We will recruit participants (n=20) with different characteristics (such as physiology and skills) depending on the context. Each participant will have to perform scenarios in order to validate the inferences. Thanks to the industrial partner, the scenarios will be defined from empirical observations of real situations faced by inspection workers in a nuclear power plant. For the user, this includes at least being in a stressful context, regularly using both hands to move around hazardous piping while holding a telescopic inspection mirror, and maintaining a difficult posture in a noisy environment.


6.2 Experiment 2 – Laboratory Validation

After implementing the new interactions and interfaces into the system, we will test them in a simulation. We will recruit participants (n=20), which is enough to bring out more than 90% of interface and interaction issues [30]. The participants will be asked to perform each task with four interaction modalities: two already implemented by the eyewear device (voice command and mid-air hand gestures) and two newly designed. We will save logs, record video and use questionnaires [3] to track error and success rates, efficiency, reaction time, cognitive load [15] and fatigue [2].
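
A hedged sketch of the kind of per-trial log record this implies is shown below, together with a simple error-rate computation over the logged trials. The field names are illustrative assumptions; questionnaire scores (SUS, NASA-TLX, Borg) would be collected separately per session.

```kotlin
// Illustrative per-trial log record and a simple derived metric. Field names are
// assumptions aligned with the metrics listed above, not the final logging schema.

data class TrialLog(
    val participantId: Int,
    val task: String,            // e.g. "3d_manipulation", "menu_navigation", "micro_interaction"
    val modality: String,        // e.g. "voice", "mid_air_gesture", "head_gesture", "smartwatch"
    val success: Boolean,
    val reactionTimeMs: Long,
    val completionTimeMs: Long
)

fun errorRate(logs: List<TrialLog>, modality: String): Double {
    val trials = logs.filter { it.modality == modality }
    if (trials.isEmpty()) return 0.0
    return trials.count { !it.success }.toDouble() / trials.size
}

fun main() {
    val logs = listOf(
        TrialLog(1, "menu_navigation", "head_gesture", true, 420, 3100),
        TrialLog(1, "menu_navigation", "head_gesture", false, 510, 4000)
    )
    println(errorRate(logs, "head_gesture")) // 0.5
}
```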

6.3 Experiment 3 – Validate multimodal interfaces and interactions

After obtaining the comparative results from the laboratory, we will update the system and test it in real use cases, under working conditions, in situ. In addition to following specific scenarios, and thanks to the system's multimodality, users will also be able to choose the interaction most appropriate to their context.

7 EXPECTED CONTRIBUTIONS

This project aims to extend, in a pragmatic way, the interactive possibilities of augmented reality eyewear devices and thus brings several scientific and industrial contributions:

  • Design and implementation of two new interactions for augmented reality eyewear devices.

  • Context awareness applied to previously unexplored situations.

  • Provide a currently non-existent comparative basis of interfaces and interactions for given tasks in particular situations in the context of augmented reality eyewear devices. This comparison is meant to serve as a foundation and to be extended (in number of interfaces, tasks and contexts) by other studies.

  • Develop a computing architecture for the realization of an augmented reality system focused on the combined use of the technologies worn or carried by the user (eyewear, phone and watch), while considering their design capabilities and limitations (battery, size...).

ACKNOWLEDGMENTS

The author would like to acknowledge his thesis advisors Charles Gouin-Vallerand, Ph.D., and Prof. Sébastien George for their support. The author is also grateful to the participants who enthusiastically took part in the previous experiments and to the publication support staff who provided helpful comments on previous versions of this document. This research is funded by the Natural Sciences and Engineering Research Council of Canada and the company Black Artick.


REFERENCES

  1. [1] Bemmann, Florian. User Preference for Smart Glass Interaction. Media Informatics Proseminar, 2015.

  2. [2] Borg, Gunnar. Psychophysical Scaling with Applications in Physical Work and the Perception of Exertion. Scandinavian Journal of Work, Environment & Health, 1990, 55–58.

  3. [3] Brooke, John. SUS-A Quick and Dirty Usability Scale. Usability Evaluation in Industry 189, no. 194 (1996): 4–7.

  4. [4] Brun, Damien, Susan M. Ferreira, Charles Gouin-Vallerand, and Sébastien George. A Mobile Platform for Controlling and Interacting with a Do-It-Yourself Smart Eyewear. International Journal of Pervasive Computing and Communications 13, no. 1 (2017): 41–61.

  5. [5] Cardin, Sylvain, Howard Ogden, Daniel Perez-Marcos, John Williams, Tomo Ohno, and Tej Tadi. Neurogoggles for Multimodal Augmented Reality. In Proc. of the 7th Augmented Human International Conference 2016, 48:1–48:2. AH ’16. New York, NY, USA: ACM, 2016.

  6. [6] Cieutat, Jean-Marc. A Few Applications of Augmented Reality: New Modes of Information and Communication Technologies. Effects on Perception, Cognition and Action. Accreditation to supervise research, Université Paul Sabatier - Toulouse III, 2013.

  7. [7] Coles, Timothy R., Nigel W. John, Derek Gould, and Darwin G. Caldwell. Integrating Haptics with Augmented Reality in a Femoral Palpation and Needle Insertion Training Simulation. IEEE Transactions on Haptics 4, no. 3 (2011): 199–209.

  8. [8] Delail, Buti Al, Luis Weruaga, and M. Jamal Zemerly. CAViAR: Context Aware Visual Indoor Augmented Reality for a University Campus. In Proc. of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology - Volume 03, 286–290. WI-IAT ’12. Washington, DC, USA: IEEE Computer Society, 2012.

  9. [9] Diederich, Adele, and Hans Colonius. Bimodal and Trimodal Multisensory Enhancement: Effects of Stimulus Onset and Intensity on Reaction Time. Attention, Perception, & Psychophysics 66, no. 8 (2004): 1388–1404.

  10. [10] Dorabjee, Rohann, Oliver Bown, Somwrita Sarkar, and Martin Tomitsch. Back to the Future: Identifying Interface Trends from the Past, Present and Future in Immersive Applications. In Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, 540–544. OzCHI ’15. New York, NY, USA: ACM, 2015.

  11. [11] Doyle, Melanie C., and Robert J. Snowden. Identification of Visual Stimuli Is Improved by Accompanying Auditory Stimuli: The Role of Eye Movements and Sound Location. Perception 30, no. 7 (2001): 795–810.

  12. [12] Fink, Charlie. The Inevitability Of Augmented Reality HMDs. Retrieved June 10, 2018. https://www.forbes.com/sites/charliefink/2017/11/13/the-inevitability-of-augmented-reality-hmds

  13. [13] Gilroy, Stephen W., Marc Cavazza, Rémi Chaignon, Satu-Marja Mäkelä, Markus Niranen, Elisabeth André, Thurid Vogt, Jérôme Urbain, Mark Billinghurst, and Hartmut Seichter. E-Tree: Emotionally Driven Augmented Reality Art. In Proc. of the 16th ACM International Conference on Multimedia, 945–948. ACM, 2008.

  14. [14] Ghouaiel, Nehla, Jean-Marc Cieutat, and Jean-Pierre Jessel. Adaptive Augmented Reality: Plasticity of Augmentations. In Proc. of the 2014 Virtual Reality International Conference, 10:1–10:4. VRIC ’14. New York, NY, USA: ACM, 2014.

  15. [15] Hart, Sandra G., and Lowell E. Staveland. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Advances in Psychology 52 (1988): 139–183.

  16. [16] Heidemann, Gunther, Ingo Bax, and Holger Bekel. Multimodal Interaction in an Augmented Reality Scenario. In Proc. of the 6th International Conference on Multimodal Interfaces, 53–60. ACM, 2004.

  17. [17] Heller, Morton A. Visual and Tactual Texture Perception: Intersensory Cooperation. Perception & Psychophysics 31, no. 4 (1982): 339–344.

  18. [18] Henrysson, Anders, and Mark Ollila. UMAR: Ubiquitous Mobile Augmented Reality. In Proc. of the 3rd International Conference on Mobile and Ubiquitous Multimedia, 41–45. MUM ’04. ACM, 2004.

  19. [19] Hurtienne, Jörn. Cognition in HCI: An Ongoing Story. Human Technology: An Interdisciplinary Journal on Humans in ICT Environments, 2009.

  20. [20] Irawati, Sylvia, Scott Green, Mark Billinghurst, Andreas Duenser, and Heedong Ko. An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures. Advances in Artificial Reality and Tele-Existence, 2006, 272–283.

  21. [21] Jeon, Seokhee, and Seungmoon Choi. Haptic Augmented Reality: Taxonomy and an Example of Stiffness Modulation. Presence: Teleoperators and Virtual Environments 18, no. 5 (2009): 387–408.

  22. [22] Kharlamov, Daniel, Brandon Woodard, Liudmila Tahai, and Krzysztof Pietroszek. TickTockRay: Smartwatch-Based 3D Pointing for Smartphone-Based Virtual Reality. In Proc. of the 22nd ACM Conference on Virtual Reality Software and Technology, 365–366. VRST ’16. ACM, 2016.

  23. [23] Lee, Kangdon. Augmented Reality in Education and Training. TechTrends 56, no. 2 (2012): 13.

  24. [24] Livingston, Mark A., Lawrence J. Rosenblum, Dennis G. Brown, Gregory S. Schmidt, Simon J. Julier, Yohan Baillot, J. Edward Swan II, Zhuming Ai, and Paul Maassel. Military Applications of Augmented Reality. In Handbook of Augmented Reality, 671–706. Springer, 2011.

  25. [25] Mentler, Tilo, Henrik Berndt, and Michael Herczeg. Optical Head-Mounted Displays for Medical Professionals: Cognition-Supporting Human-Computer Interaction Design. In Proc. of the European Conference on Cognitive Ergonomics, 26:1–26:8. ECCE ’16. New York, NY, USA: ACM, 2016.

  26. [26] Mercier-Ganady, Jonathan, Maud Marchal, and Anatole Lécuyer. B-C-Invisibility Power: Introducing Optical Camouflage Based on Mental Activity in Augmented Reality. In Proc. of the 6th Augmented Human International Conference, 97–100. AH ’15. New York, NY, USA: ACM, 2015.

  27. [27] Meulen, Hidde van der, Andrew L. Kun, and Orit Shaer. What Are We Missing?: Adding Eye-Tracking to the HoloLens to Improve Gaze Estimation Accuracy. In Proc. of the 2017 ACM International Conference on Interactive Surfaces and Spaces, 396–400. ISS ’17. New York, NY, USA: ACM, 2017.

  28. [28] Narumi, Takuji, Shinya Nishizaka, Takashi Kajinami, Tomohiro Tanikawa, and Michitaka Hirose. Augmented Reality Flavors: Gustatory Display Based on Edible Marker and Cross-Modal Interaction. In Proc. of the Conference on Human Factors in Computing Systems, 93–102. CHI ’11. ACM, 2011.

  29. [29] Neumann, U., and A. Majoros. Cognitive, Performance, and Systems Issues for Augmented Reality Applications in Manufacturing and Maintenance. In Virtual Reality Annual International Symposium, 1998. Proc., IEEE 1998.

  30. [30] Nielsen, Jakob, and Rolf Molich. Heuristic Evaluation of User Interfaces. In Proc. of the SIGCHI Conference on Human Factors in Computing Systems, 249–256. ACM, 1990.

  31. [31] Obrenovic, Zeljko, Julio Abascal, and Dusan Starcevic. Universal Accessibility as a Multimodal Design Issue. Communications of the ACM 50, no. 5 (2007): 83–88.

  32. [32] Orlosky, Jason, Kiyoshi Kiyokawa, Takumi Toyama, and Daniel Sonntag. Halo Content: Context-Aware Viewspace Management for Non-Invasive Augmented Reality. In Proc. of the 20th International Conference on Intelligent User Interfaces, 369–373. IUI ’15. ACM, 2015.

  33. [33] Oviatt, Sharon, Rachel Coulston, and Rebecca Lunsford. When Do We Interact Multimodally?: Cognitive Load and Multimodal Communication Patterns. In Proc. of the 6th International Conference on Multimodal Interfaces, 129–136. ACM, 2004.

  34. [34] Petersen, Nils, and Didier Stricker. Cognitive Augmented Reality. Computers & Graphics, 40 years of Computer Graphics in Darmstadt, 53 (December 1, 2015): 82–91.

  35. [35] Rauschnabel, Philipp A., and Young K. Ro. Augmented Reality Smart Glasses: An Investigation of Technology Acceptance Drivers. International Journal of Technology Marketing 11, no. 2 (2016)

  36. [36] Rosa, Nina, Peter Werkhoven, and Wolfgang Hürst. (Re-)Examination of Multimodal Augmented Reality. In Proc. of the 2016 Workshop on Multimodal Virtual and Augmented Reality, 2:1–2:5. MVAR ’16. New York, NY, USA: ACM, 2016.

  37. [37] Rupprecht, Franca Alexandra, Achim Ebert, Andreas Schneider, and Bernd Hamann. Virtual Reality Meets Smartwatch: Intuitive, Natural, and Multi-Modal Interaction. In Proc. of the 2017 CHI Conference Ext. Abst. on Human Factors in Computing Systems, 2884–2890. CHI EA ’17. ACM, 2017.

  38. [38] Sielhorst, Tobias, Marco Feuerstein, and Nassir Navab. Advanced Medical Displays: A Literature Review of Augmented Reality. Journal of Display Technology 4, no. 4 (2008): 451–467.

  39. [39] Sodnik, Jaka, Saso Tomazic, Raphael Grasset, Andreas Duenser, and Mark Billinghurst. Spatial Sound Localization in an Augmented Reality Environment. In Proc. of the 18th Australia Conf. on Computer-Human Interaction: Design: Activities, Artefacts and Environments, 111–118. 2006.

  40. [40] Vogl, Anita, Nicolas Louveton, Rod McCall, Mark Billinghurst, and Michael Haller. Understanding the Everyday Use of Head-Worn Computers. In Human System Interactions (HSI), 2015 8th International Conference On, 2015

  41. [41] Waibel, Alex, Minh Tue Vo, Paul Duchnowski, and Stefan Manke. Multimodal Interfaces. In Integration of Natural Language and Vision Processing, 299–319. Springer, 1996.

  42. [42] Webel, Sabine, Uli Bockholt, Timo Engelke, Nirit Gavish, Manuel Olbrich, and Carsten Preusche. An Augmented Reality Training Platform for Assembly and Maintenance Skills. Robotics and Autonomous Systems, Models and Technologies for Multi-modal Skill Training, 61, no. 4 (2013)

  43. [43] Wu, Hsin-Kai, Silvia Wen-Yu Lee, Hsin-Yi Chang, and Jyh-Chong Liang. Current Status, Opportunities and Challenges of Augmented Reality in Education. Computers & Education 62 (2013): 41–49.

  44. [44] Yi, Shanhe, Zhengrui Qin, Ed Novak, Qun Li, and Yafeng Yin. GlassGesture: Exploring Head Gesture Interface of Smart Glasses. In Computer Communications, IEEE INFOCOM 2016-The 35th Annual IEEE International Conference On, 1–9. IEEE, 2016.


Damien Brun. 2018. Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance. In Proceedings of 20th ACM International Conference on Multimodal Interaction (ICMI’18). ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3242969.3264966

