
A mobile platform for controlling and interacting with a do-it-yourself smart eyewear

Smart eyewear, such as augmented or virtual reality headsets, allows the projection of virtual content through a display worn on the user’s head. This paper presents a mobile platform, named “CARTON”, which transforms a smartphone into smart eyewear following a do-it-yourself (DIY) approach. The platform is composed of three main components: a blueprint to build the hardware prototype with very simple materials and regular tools; a software development kit (SDK) to help with the development of new applications (e.g. augmented reality apps); and, finally, a second SDK (ControlWear) to interact with mobile applications through a smartwatch.


1. Introduction

Over the past decade, progress in mobile computing (e.g. battery, processing power and display) has brought growing interest in smart eyewear and similar optical head-mounted displays (OHMDs), such as Google Glass, Vuzix, Microsoft HoloLens or Ora-X. However, such devices are still very expensive or, worse, sometimes unavailable or hard to obtain, depending on the end user’s location. These factors limit access to the technology and prevent broader adoption by the population. For researchers and academics, they make it harder to conduct research and develop prototypes with smart eyewear. For instance, testing and evaluating a prototype of collaborative learning based on augmented reality (AR) in primary or secondary schools can be costly, and obtaining the needed number of devices can be difficult.

Very few studies have focused on solving this problem. Some projects have tried to reduce costs by using smartphones to create inexpensive AR headsets. The use of a smartphone to create accessible smart eyewear is considered a key strategy for two reasons:

  (1) It provides everything needed for the “smart” part of the eyewear (e.g. processor, screen, camera, battery, connectivity), and it is usually packed with plenty of sensors.

  (2) It is widely available: for Android alone, Google claimed 1.4 billion 30-day active devices around the world in September 2015 (Google, 2015).

Unfortunately, previous research applying the smartphone approach to smart eyewear has not yet solved the problem: either the solutions were closer to a virtual reality headset than to smart eyewear (Ahmed and Peralez, 2014), or they lacked reachability (De Angeli and O’Neill, 2015), because the creation process was not open or required tools/materials not yet widely available.

In this paper, by smart eyewear we mean any device that can carry out many functions of a mobile computer but is worn on the head and used with the eyes. Smart eyewear is therefore not exclusively related to AR technology and can support other kinds of experiences.

In this study, we aim to solve these issues by designing a do-it-yourself (DIY) smart eyewear prototype (Figure 1), which requires only simple materials and regular tools that almost everyone has access to, at least wherever smartphones are present. By using standard materials and tools, we aim to create favorable conditions for knowledge sharing with easy-to-follow instructions (Tanenbaum et al., 2013). Information and knowledge exchange is a core motivation for contributing in the DIY community (Kuznetsov and Paulos, 2010), allowing creators and contributors to receive feedback, educate others and showcase projects, among other benefits. Highlighting this value, DIY practices are also popular among human-computer interaction (HCI) researchers (Buechley et al., 2009). Therefore, to make it truly reachable, we naturally decided to make CARTON open-hardware: all guides and blueprints are available directly on the Internet (https://mobicarton.github.io).

“CARTON”, as a complete toolkit (Brun et al., 2016), also includes an open-source software development kit (SDK). The SDK aims to accelerate and simplify the development of new mobile apps, or the adaptation of existing ones, compatible with our device (e.g. almost any application dedicated to Google Glass and similar devices could be adapted for CARTON, thanks to similar features and characteristics). The open-source community, based on knowledge sharing, is quite similar to the DIY community, with which it shares some core values. Furthermore, open-source practices have already proved effective in software development (Yamauchi et al., 2000). Almost as old as software development itself, open-source communities have grown and spread around the world, creating a huge number of tools and new technologies to improve their effectiveness.


To complement the open-source SDK, the toolkit also contains full documentation and two sample apps used during our experiments. We conducted two sets of experiments, with 16 and 10 participants, to confirm that this DIY smart eyewear is feasible to build by following the guide and blueprint, and usable/functional when interacting with real applications. The contribution of this project is therefore threefold:

  • providing blueprints and guidelines to build low-cost, open-source and easy-to-build DIY smart eyewear devices;

  • providing an SDK to support the development and adaptation of mobile apps for eyewear devices; and

  • providing different modalities of interaction, including a regular smartwatch, to make sure a wide range of people can use it in different situations.

Finally, the CARTON project gives end users easy access to another kind of multimedia experience (e.g. interfaces/interactions related to smart eyewear) with just a mobile phone.

This paper is structured as follows. The first section presents a review of related work on the development of smart eyewear. The second section presents the CARTON project, with a description of its hardware, the developed software (including ControlWear) and the sample applications that go with the SDKs. The third section presents the evaluation of our solution, followed by the results. Finally, we present future work and conclude the paper.

2. Related work

Motivated by the inaccessibility of augmented and virtual reality technologies for the population of developing countries, Ahmed and Peralez (2014) were among the first researchers to present an affordable way to make augmented (and virtual) reality technology. They presented a tool similar to the well-known Google Cardboard DIY virtual reality headset (Google, 2014), but built with foam board, welding goggles and concave lenses. With the help of stereoscopy and the smartphone’s camera, their device can create virtual or AR experiences, among others. The major problem of this strategy is that the user’s entire view is limited by the focal length of the smartphone’s camera. The reality is also altered by the characteristics of the smartphone’s screen and camera (density and color restitution), which feels less natural. Another limitation is that their system requires a large amount of initial manual adjustment before use.

After this first initiative, De Angeli and O’Neill (2015) outlined their work on developing an AR headset with low purchase and maintenance costs. Their work is similar to a product previously presented by the company Seebright (2014). In their study, a lightweight frame holds a smartphone above the user’s eye line, and a piece of Plexiglas placed in front of the eye line reflects the light of the smartphone’s screen, making it possible to add a virtual layer to reality. Because the phone’s rear camera faces the ceiling, a piece of mirror was added on top of the headset, aligning the camera’s sight with the eye line. The authors conducted a study to test the visual capacity of the tool depending on the light conditions of the environment and the smartphone’s screen brightness. The results showed that their smartphone-based headset delivers very good results under ambient illumination similar to interior environments. Unfortunately, this tool has a huge limitation in terms of reachability: although the material does not have an excessive cost, their last version used tools that are currently not accessible everywhere, such as a three-dimensional (3D) printer. And although adaptations of mobile apps are mandatory to use their solution, no SDK is provided. Most importantly, none of these previous works provided their creations openly to the community, which is unfavorable to their reachability.

3. CARTON

Based on an analysis of the related work, we propose the CARTON project, which aims at a broader use of smart eyewear through a DIY approach.


3.1 Hardware

Aiming to ensure our eyewear’s reachability, we focused our design on employing materials that are easy to obtain and affordable. CARTON is made of very simple materials using regular tools. The complete list of needed tools and materials is shown in Table I.

3.1.1 Design strategies.

Inspired by more consolidated products, such as Meta 2 (Meta, 2016), and by previous research on an inexpensive AR headset (De Angeli and O’Neill, 2015), we use the Pepper’s ghost effect to add a layer on reality (Bimber and Raskar, 2003). Using this effect, the smartphone’s screen is reflected by the transparent plastic sheet (Plate 1), which is positioned in front of the user’s eyes. Because the plastic sheet is perfectly transparent (like a glass lens), the real world seen through it remains unchanged and normal. An important factor to take into consideration in this approach is that the transparent plastic sheet should not be thicker than 0.5 mm (0.020 inch); otherwise the projection appears twice, once on each side, and the quality is affected (a blurred effect). Another way to avoid this issue is to use an anti-reflective transparent material, but unfortunately this material is not easily available and is hence against the philosophy of this project. This step is very sensitive, because if the transparent plastic sheet is too thin, it can bend and the result is distorted. As a strategy to solve this issue, we added a flat stick to the design to prevent the transparent plastic sheet from bending. This transparent plastic sheet can be found in any regular stationery shop.



In the current configuration, as shown in Figure 2, the rear camera of the mobile phone faces the ceiling. We use a small piece of mirror, placed on top of the mobile phone at 45°, to capture the view in front of the user’s eyes. We designed the prototype so that the mirror is separated from the main part as another module, for three reasons:

  (1) Tracking the user’s view is optional for many use cases. For instance, to receive context-aware notifications, we could use the mobile’s other features and sensors (e.g. GPS, Wi-Fi, accelerometer [...]).

  (2) The mirror is probably the hardest piece of material to acquire, and CARTON can still be created and used without it.

  (3) Rear camera positions vary between phone models (usually top-left, but sometimes top-center); the separate module makes the CARTON device adaptable to these differences.

Taking facial anthropometrics into account (Naini, 2011), we tilted the main structure down by 15°, because the forehead is not perfectly perpendicular to the straight-ahead gaze. This strategy also helped correct the position of the smartphone and its camera above the horizontal eye line (Figure 2).

Besides the materials previously discussed, extra items were added to the toolkit to improve its design. The sponge acts as a cushion to make the wearable more comfortable. The rubber band prevents the smartphone from falling out of the cardboard prototype. The utility stretch strap fixes the device on the head, making the project hands-free. Overall, the total cost of the material to produce one device does not exceed US$9. Adding (only if needed) the cost of the tools shown in Table I, the maximum cost is around US$13, depending on the shop and location. However, these tools and materials are usually already owned or can be found for free, just like the main material: cardboard.

The blueprint was designed almost entirely with the open-source software LibreOffice Draw and Inkscape, and was initially inspired by the Google Cardboard v1. After several design iterations, it evolved into the blueprint presented in Figure 3. While designing the blueprint, to accelerate each iteration, we used a fabrication laboratory (FabLab) to laser-cut the cardboard. For that reason, even if it is not the main purpose of this project, we also released a version of the blueprint adapted for laser cutting. The final blueprint is to scale and fits regular Letter- or A4-sized paper.

3.2 Software

Our motivation to create an SDK with two sample mobile apps is deeply linked to reachability. Instead of creating a single demonstration app as a proof of concept, we considered it necessary to provide an easy way to develop new experiences using CARTON devices. Some coding adaptations are necessary for an app to be compatible (e.g. with respect to sizes and margins); without the SDK, this would amount to boilerplate code. Everything has been created natively with Android Studio 1.5 and Android SDK version 23.


3.2.1 SDK.


The SDK has been created to facilitate the development, or adaptation, of mobile apps compatible with the CARTON device. It is available for Android 4.1 and above and can be downloaded from the project-related GitHub repository (https://github.com/mobicarton). In addition to examples, the SDK is accompanied by documentation and tutorials, which help with its integration.

The CARTON SDK includes the following features:

  • Auto-adaptive screen: It configures the smartphone’s screen automatically to the correct size (width, height), position (left and top margins) and brightness. The usable zone was chosen considering the wide range of mobile phone screen sizes, between 3.5 and 6 inches. It limits the usable screen zone to 30 × 60 mm with a 10-mm margin from top and left, as shown in Figure 4. We measured one of the smallest popular smartphones (iPhone 4, with a 3.5-inch screen) and chose these limits so the zone fits even on such a screen, a design choice made to avoid different experiences depending on the user’s phone. The brightness of the screen is then set to maximum to make the projection more visible. Finally, a mirrored effect is applied to the whole interface because of the Pepper’s ghost effect; otherwise everything would appear horizontally reversed (see the sketches after this list).

  • A default launcher: It provides a standard launcher with information on how to place the phone inside the CARTON device. It also includes a button that allows users to launch the app without CARTON, automatically deactivating the mirrored effect, a functionality that is particularly convenient for developers. It also links directly to the construction guide for people who do not yet have a CARTON and want to build their own.

  • Head gesture recognition Application Programming Interface (API): It helps integrate spherical tracking and head gesture recognition, such as tilt and nod. These two gestures are considered natural and intuitive interactions that can be performed without disturbing a primary activity (Morency and Darrell, 2006). We use Android’s high-level sensor fusion, which mixes different sensors: accelerometer, gyroscope and magnetometer. To recognize a tilt (right or left) or a nod (up and down), the API measures how far the user inclines her/his head against a threshold (15°) and how long it takes to return to the origin (1,000 milliseconds), as shown in Figure 5 (left). These parameters were chosen from internal tests and refined after the pilot test (see the sketches after this list).

  • Finger gesture: The smartphone’s screen remains reachable, so the user can perform touch gestures. We therefore included a layer over the interface for common patterns: swipe/fling left, right and forward/backward (up/down), as shown in Figure 5 (right). Quick gestures were chosen to avoid hurting interactivity, since the finger cuts off part of the user’s view. This feature was added to improve the accessibility of the device (e.g. for people with disabilities who cannot use head gestures or voice recognition).

  • Voice recognition: The native Android speech recognition service is used to allow the user to interact easily by voice.
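As referenced in the list above, the two sketches below illustrate how the auto-adaptive screen and the tilt gesture could be implemented. They are minimal illustrations written for this description, not the SDK’s actual code: class names and structure are ours, and only the published parameters (the 30 × 60 mm zone with 10-mm margins, maximum brightness, horizontal mirroring, the 15° threshold and the 1,000-ms return time) come from the feature descriptions above.

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.DisplayMetrics;
import android.view.View;
import android.view.WindowManager;
import android.widget.FrameLayout;

public class CartonScreenActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DisplayMetrics dm = getResources().getDisplayMetrics();
        // Fixed physical window of 30 x 60 mm with a 10-mm top/left margin,
        // so the experience is the same regardless of the phone's screen size.
        FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(
                Math.round(60f / 25.4f * dm.xdpi),   // width in pixels
                Math.round(30f / 25.4f * dm.ydpi));  // height in pixels
        lp.leftMargin = Math.round(10f / 25.4f * dm.xdpi);
        lp.topMargin  = Math.round(10f / 25.4f * dm.ydpi);

        FrameLayout root = new FrameLayout(this);
        root.addView(new View(this), lp);  // the app's content would go here
        // Mirror horizontally: the Pepper's ghost reflection reverses the image.
        root.setScaleX(-1f);
        // Maximum brightness for this window only, to keep the projection visible.
        WindowManager.LayoutParams winLp = getWindow().getAttributes();
        winLp.screenBrightness = 1f;
        getWindow().setAttributes(winLp);
        setContentView(root);
    }
}
```

And a sketch of the tilt gesture, based on the same rotation-vector sensor fusion mentioned above:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/**
 * Sketch: a tilt counts when the head leans beyond THRESHOLD_DEG and returns
 * to the origin within TIMEOUT_MS (the 15 degrees / 1,000 ms values quoted
 * above). Register for Sensor.TYPE_ROTATION_VECTOR; which orientation axis to
 * use depends on how the phone sits in the headset.
 */
public class TiltDetector implements SensorEventListener {
    public interface Listener { void onTilt(boolean right); }

    private static final float THRESHOLD_DEG = 15f;
    private static final long TIMEOUT_MS = 1000;

    private final Listener listener;
    private long leanStart = -1;   // time when the head crossed the threshold
    private boolean leanRight;

    public TiltDetector(Listener listener) { this.listener = listener; }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotation, event.values);
        SensorManager.getOrientation(rotation, orientation);
        float rollDeg = (float) Math.toDegrees(orientation[2]);
        long now = System.currentTimeMillis();

        if (Math.abs(rollDeg) > THRESHOLD_DEG) {
            if (leanStart < 0) leanStart = now;  // leaning just started
            leanRight = rollDeg > 0;
        } else if (leanStart >= 0) {
            // Back near the origin: the gesture only counts if the head
            // returned within the allotted time.
            if (now - leanStart <= TIMEOUT_MS) listener.onTilt(leanRight);
            leanStart = -1;
        }
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```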


3.2.2 ControlWear.


After the first evaluation with users, we included an optional fourth way to interact with CARTON: a regular Android Wear smartwatch. The solution is called ControlWear and is a complete standalone project. Although fully compatible and integrated with CARTON, we created it separately because it could be used with any other kind of AR and/or virtual reality tool or system available for Android mobile. We therefore developed another Android software library, available in a separate GitHub repository (https://github.com/controlwear). It includes touch gesture recognition (from the smartwatch’s screen) with patterns similar to those of a trackpad, such as multi-directional swipes (right/left/up/down) and single or double taps, as shown in Figure 6 (left). Because the trackpad does not need an elaborate interface, we created an official generic app for the smartwatch, as shown in Figure 6 (right).

To detect a swipe gesture, we based our algorithm on the Android GestureDetector, which gives us the positions (two-dimensional coordinates) of the initial and final touches of the gesture. We then calculate the relative angle between the two points and sort it into the correct direction. The direction is based on the full 360° circle divided into four ranges of 90° (a sketch follows the list):

  • 315-45: right;

  • 45-135: up;

  • 135-225: left; and

  • 225-315: down.
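For illustration, here is a minimal Java sketch of this classification. The method is ours, not ControlWear’s actual code; the start point (x1, y1) and end point (x2, y2) would come from the GestureDetector’s fling callback:

```java
/**
 * Maps a swipe to one of the four directions above from its start (x1, y1)
 * and end (x2, y2) screen coordinates. Screen y grows downward, so dy is
 * negated to obtain a conventional counter-clockwise angle. Sketch only.
 */
public static String classify(float x1, float y1, float x2, float y2) {
    double angle = Math.toDegrees(Math.atan2(y1 - y2, x2 - x1)); // -180..180
    if (angle < 0) angle += 360;                                 // 0..360
    if (angle >= 315 || angle < 45) return "right";
    if (angle < 135)                return "up";
    if (angle < 225)                return "left";
    return "down";
}
```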

The aim of the library is to provide an interface for communication between any kind of mobile app ((3) in Figure 7) and the official generic app running on the smartwatch ((1) in Figure 7). Because applications on Android Wear 1.4 (and below) cannot be standalone, the communication passes through another official generic app, which runs as a service on the mobile phone ((2) in Figure 7).
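As an illustration of this relay, here is a sketch of the phone-side service ((2) in Figure 7). On Android Wear 1.x, the watch and phone typically communicate through the Wearable MessageApi of Google Play services; the message path, the broadcast action and the relay-by-broadcast design below are our assumptions for illustration, not necessarily how ControlWear is actually implemented:

```java
import android.content.Intent;
import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.WearableListenerService;

/**
 * Phone-side relay service: receives gestures sent by the watch app over the
 * Wearable data layer and rebroadcasts them to interested mobile apps.
 * Must be declared in the manifest as a WearableListenerService. The path
 * and action strings are illustrative, not ControlWear's actual values.
 */
public class ControlWearRelayService extends WearableListenerService {
    @Override
    public void onMessageReceived(MessageEvent event) {
        if ("/controlwear/gesture".equals(event.getPath())) {
            Intent intent = new Intent("io.github.controlwear.GESTURE");
            // Payload could be a gesture name such as "swipe_left" or "tap".
            intent.putExtra("gesture", new String(event.getData()));
            sendBroadcast(intent);
        }
    }
}
```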

3.2.3 Sample (Demos).



Three mobile apps have been created using the CARTON SDK. They are also openly available on the GitHub repository. The first and main application, called “CARTON”, includes six features accessible from a menu composed of simple tiles. This app requires Android 4.2.x or later because it uses a component added in API 17 (TextClock). It can be navigated with different kinds of interaction: finger touch, head gestures and voice commands. Directly from the main menu (Figure 8), you can access:


  • Clock: A simple tile providing the time.

  • Compass: The Google Glass compass app adapted for CARTON with our SDK. The aim of this adaptation is to show that we can easily recreate an experience similar to that of a consolidated smart glass using our tool.

  • Live Subtitle: Uses the voice recognition feature of the CARTON SDK to create live subtitles. Its aim is to help hearing-impaired people by listening to a conversation and transcribing it on the fly as subtitles.

  • Origami: Five origami diagrams are included (Frog, Mouse, Tulip, Lily and Crane), all under a Creative Commons license. They have been adapted to show each step in the user’s view, with hands-free control to navigate through the steps.

  • Tutorial: A tutorial starts automatically when the app launches for the first time, and it can be accessed again from the menu. It teaches how to interact with CARTON using basic finger patterns, head gesture recognition and voice recognition.



  • Compatible apps: It retrieves the name and description of any app installed on the phone that is compatible with CARTON, and allows launching it directly from the main app; for example, the second sample can be launched that way without leaving the first (see the sketch below).
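The text above does not specify the discovery mechanism, so the following sketch shows one plausible implementation using Android’s PackageManager, under the assumption that CARTON-compatible apps declare a dedicated intent action. The action string and class are hypothetical, for illustration only:

```java
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import java.util.List;

public final class CompatibleApps {
    // Hypothetical action string: we assume CARTON apps declare a dedicated
    // intent filter; the real SDK may use a different discovery mechanism.
    private static final String ACTION_CARTON_APP =
            "io.github.mobicarton.intent.action.MAIN";

    /** Lists installed activities declaring the CARTON intent filter;
     *  name and description come from info.loadLabel(pm) and the manifest. */
    public static List<ResolveInfo> find(Context context) {
        Intent probe = new Intent(ACTION_CARTON_APP);
        return context.getPackageManager().queryIntentActivities(probe, 0);
    }

    /** Launches one of the discovered apps directly from the main menu. */
    public static void launch(Context context, ResolveInfo info) {
        Intent intent = new Intent(ACTION_CARTON_APP);
        intent.setClassName(info.activityInfo.packageName, info.activityInfo.name);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);
    }
}
```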


The second application, named “Poster Target”, creates an AR experience with the phone’s camera. The camera view is projected onto the CARTON screen using the top mirror (Plate 2); the user can then aim at selected printed posters and get additional information about them. Poster recognition is supported by the Vuforia v5 AR technology (PTC Inc., 2016); we use its image target feature for tracking and recognition. When a poster is recognized and tracked, the app shows the user, through the CARTON screen, textual information such as a simple description of the poster in a virtual text box. The posters used for this sample are courtesy of NASA/JPL-Caltech, deal with space exploration and are called “Visions of the Future” (http://jpl.nasa.gov/visions-of-the-future/). This app was built to check two characteristics of CARTON:

  • Hardware: use of the phone’s camera and thus of the mirror module.

  • Software: CARTON SDK and its compatibility with AR technologies (e.g. Vuforia).

The third application integrates the ControlWear solution. It is a separate application, which includes two features similar to the main app (tutorial and origami assistant) and adds a new one: race. The aim of this application is to evaluate three different kinds of interaction (finger touch, head gesture and smartwatch control), and it includes, chronologically, three parts:



  1. Training: Tutorial on how to control CARTON with the three different interaction modalities (finger touch, head gesture and smartwatch control).

  2. Race: A small game in which a box moves on a 4 × 4 grid from one square to another, through checkpoints, while the time is measured, as shown in Figure 9. In this configuration, each directional interaction has to be performed two to four times, for a total of at least 12 movements.

  3. Origami: Tutorial to build an origami tulip with textual help.

To make sure the sample apps (first and second) and the CARTON prototype work properly with multiple devices, we successfully conducted internal tests with popular Android phones, as shown in Table II. iPhones have not been tested yet because the SDK is currently available only for Android.

4. Evaluation

The evaluation included two studies. In the first study, we evaluated the construction process and user interaction with CARTON’s first two apps. The results of this first evaluation fed into the second study, which focused on evaluating three different interaction modalities using CARTON’s third app.

The main objective of the first study was to answer two questions, to make sure this project is appropriate and reachable for a wide community: is it feasible, and is it functional? First, we evaluated the whole DIY creation process, including the quality of the blueprints, guide, tools and materials used. Second, we evaluated CARTON’s functionality and usability with the two sample apps. The main objectives of the second study were to analyze how users interact with CARTON, whether the provided solutions are sufficient for manipulating it and, finally, whether using an additional device such as a smartwatch is a coherent way to control an AR system.

4.1 Methodology study 1 – construction and usability

4.1.1 Participants.

We recruited 16 participants for the experiments after publishing a recruitment poster in a university and in a video game co-working space. The user group was therefore composed mostly of students (seven) and people related to the video game industry (four): 12 men and four women aged between 25 and 56 years. In exchange for their participation in the study, participants received their CARTON creation and US$10 as a lump-sum refund for public transport. Because they kept their own creation, we hoped participants would be more conscientious with the assigned tasks. Our aim was to attract people without specific skills that could give them an advantage in creating a CARTON prototype. Only six of the participants had created a DIY project before, and a few confessed, in the survey accompanying the evaluation, that they were not particularly handy. Three participants had the knowledge and skills to develop mobile apps and were interested in using the SDK after the experiments; two participants did not own a smartphone.

Over one week, we managed four sessions with four participants each. In each session, we provided all the required materials and tools, which we found easily in regular shops. Out of the 16 participants, four took part in a pre-test (the first session) used to update the guide and blueprint. Thus, the main evaluation group included 12 participants, which is considered enough to identify around 95.0 per cent of significant usability problems (Bastien and Scapin, 1993).

4.1.2 Sessions.


Each session lasted around two and a half hours and was organized in two parts. In the first part, participants were asked to follow the guidelines and blueprint to create their own CARTON. In the second part, participants tested their creation using the sample demos.

4.1.3 Construction process.


During this first part of the experiment, participants created their own CARTON (Plate 3). The aim of this activity was to reproduce the situation of being at home, following guidelines and blueprints found on the Internet. Participants were encouraged to comment aloud (i.e. the think-aloud method) during the experiment. When they finished building their CARTON device, they answered the first part of a questionnaire comprising 22 questions (mainly about the guide, the blueprint, the tools and their feelings). To make the process closer to what we expect in real life, we intentionally chose regular- to low-quality cardboard, as it is the most available in daily life.



4.1.4 Usability tests.


During the second part of the first experiment, participants used their own creation with a mobile phone we provided: a Nexus 6 or Nexus 6P, both running Android 6.0.1. We did not let them use their own smartphones, so as to reproduce the same experience across participants with similar characteristics. For this activity, we used the two sample apps compatible with CARTON presented in the previous section. Participants had to go through the tutorial in the main app and navigate the menu with finger or head gestures. They were then asked to launch the second app to test the mirror module. Finally, participants answered the last part of the questionnaire, with 29 questions about their experience (comfort, display, a few other characteristics and their feelings).

4.2 Methodology study 2 – interaction

4.2.1 Participants.


After analyzing the results of the first experiments, we recruited ten participants for the second evaluation. Over four days, we managed ten individual sessions, one per participant. The user group was composed of five women and five men aged between 25 and 69 years. In exchange for their participation in the study, participants kept their origami creation (a two-part tulip). All participants had a university degree and were professionals from different fields. All of them had used a smartphone (at least a few times), so they knew the “swipe” gesture; none of them had used a smartwatch before, and some (6/10) had already tried an AR headset.

4.2.2 Sessions.


Each session lasted around 40 minutes. Participants were asked to go through the whole third app. We provided them with a CARTON (already built), a Motorola Nexus 6 smartphone (Android 7.0) and a Motorola Moto 360 smartwatch (1st generation, Android Wear 1.4.0). The activity included three tasks, and participants used all the interactions available in the application: finger touch, head gesture recognition and ControlWear (smartwatch). In the first task, participants did a training exercise to become familiar with the three interaction modalities. The second task was to play a small game, a race activity, and the third task was to create an origami (in 23 steps). At the end of the experiment, we asked participants to answer a questionnaire containing 16 questions about their feelings, preferences and previous experiences.

4.2.3 Interaction evaluation.


We chose to evaluate these three interactions (finger touch, head gesture recognition and ControlWear) because they share similar behavior: four directional inputs. For each task, we measured the success/failure rate of interactions, the time taken to perform them and the preferred interaction when participants were allowed to choose. Participants performed the first and second tasks four times: once with each interaction and once choosing the interaction they wanted. In the third task, participants performed three origami steps with each interaction (the first nine steps) and could choose the interaction modality for the last 14 steps. We attempted to reduce bias in the free-choice phases in two ways. First, we provided a training activity for each interaction, so participants had prior knowledge of all of them. Second, we changed the order in which the interaction modalities were presented to each participant: some started with the smartwatch, others with head gesture recognition, etc.

5. Results

5.1 Construction process

The results from participant observation and the questionnaire show positive outcomes for the CARTON construction process. Most participants successfully created their own smart eyewear using the CARTON toolkit, and all of them were proud of using a product they built themselves. Only one participant could not finish his construction properly, as its mirror module did not fit well. Except for this participant, all others affirmed that they took some pleasure in using their smart eyewear.

Furthermore, participants rated their experience of the creation process and the provided user guide, blueprint and tools positively. Satisfaction with the creation process was rated 4.33/5 (1: unsatisfied, 5: satisfied). Regarding the user guide and blueprint, most participants considered them good: 91.7 per cent (11/12) and 83.3 per cent (10/12), respectively. The recommended tools (utility knife, scissors, ruler and glue stick) were also rated positively: 100 per cent of the participants considered them suitable/adequate.

Besides showing the value of CARTON in allowing users without specialized skills to build low-cost, open-source, DIY smart eyewear, the evaluation also allowed us to identify aspects to improve regarding the difficulty of building the device and its robustness. Participants found that building the headset was not so easy: 2.66/5 (1: hard, 5: easy). Beyond the questionnaire, we noticed during construction that most participants tended not to read the guide, relying instead on the blueprint. This sometimes led to forgotten or broken parts, which made the construction more difficult, lengthened the process and weakened the CARTON. Possible strategies to facilitate the construction process include providing a video tutorial and improving the current guide and blueprint.


Regarding robustness, even though 66.7 per cent (8/12) of the participants thought their CARTON was not robust, mainly because of the cardboard, 83.3 per cent (10/12) thought the choice of material was suitable. The two “inadequate” answers were due to the quality of the cardboard: participants said it was “coarse for a fine cut” and “too thick”. We intentionally selected low-quality cardboard for the reachability reasons explained previously; this problem could therefore be easily minimized. Reflecting their judgment on robustness, 66.7 per cent (8/12) of the participants affirmed they would prefer to create their own version with a 3D printer. Their motivations were the need for something “more robust” (five participants), “more accurate” (three participants) and, for two of them, “aesthetics”. We consider that a 3D-printed solution is not as customizable by a wide community as the cardboard solution of CARTON, because it requires more specialized knowledge and skills (e.g. 3D modeling) to update the prototype. With our cardboard solution, we ensure flexibility through customization, adaptation and personalization to satisfy more users (Genaro Motti and Caine, 2014) without adding extra cost.

5.2 Ergonomics

The results of the experiment highlighted a few aspects to improve regarding CARTON’s comfort and ergonomics. The ease of putting on and taking off the device was rated 3.33/5 (1: hard, 5: easy), which is acceptable (close to easy). Due to the chosen material, the utility stretch straps were sometimes judged “too small”, not big enough for some heads, or liable to catch long hair. Headset comfort was rated 3.5/5 (1: bad, 5: good), still acceptable (close to good) but not a perfect score, which is linked to the same minor and correctable defects: the utility stretch strap was judged “too tight” and the sponge “too harsh”.

Half of the participants felt the weight of the device, but only one pointed to “cumbersome” as a comfort issue. This issue could easily be minimized with a lighter smartphone than those used during the experiment: the Nexus 6P and Nexus 6 are among the heaviest phones, at 178 g (6.28 oz) and 184 g (6.49 oz). The headset itself weighs 85 g (3.0 oz), so the totals were 263 g and 269 g, more than seven times the weight of Google Glass, which weighs only 36 g (1.27 oz). With a lighter phone such as the Nexus 5X, which weighs 136 g (4.80 oz), the total weight would drop by around 20 per cent.

The size of the display was judged large enough by 66.7 per cent (8/12) of the participants, and its rendering quality was rated 3.25/5 (1: bad, 5: good), which is acceptable (above average and closer to good), particularly considering that two participants had trouble because of refractive errors in their eyes and therefore scored this quality low. Also, due to a difficulty in the construction process, two other participants had issues with a distorted plastic sheet that made the result a bit blurred. As a consequence of the strap being too tight, half of the participants felt physical discomfort, and 75 per cent of them felt some fatigue, still tolerable, after using the headset. These outcomes concur with the results of previous research (Wille et al., 2014) on smart eyewear (commercialized HMDs): their results showed that even though there were no significant quantitative signs of visual fatigue (such as blink rate), participants subjectively reported such a feeling. We also point out that the answers to our survey might have been influenced by the fact that the usability tests took place at the end of the experiments, right after almost two hours of concentration on the creation process.

5.3 CARTON as a smart eyewear

In the survey, participants were asked to rate some characteristics, between 1 (bad) and 10 (good), regarding their experience of using CARTON as a smart eyewear device. Figure 10 summarizes their responses on a scale (1 to 5) consistent with the other questions.

As Figure 10 illustrates, users rated all characteristics of the demos above 3, except “Visibility”, which shows some limits depending on the environment. We did not experiment outdoors with the participants, but we can already presume that this would be a limit, as smartphone displays are difficult to adapt to very bright outdoor conditions. Using a slightly darker transparent plastic might reduce this issue, but it would also slightly reduce reachability by requiring a material that is harder to get. Note that the CARTON device has a peripheral see-through display, meaning the eyes have to focus on the display, which is quite different from devices using stereoscopy, such as the Epson BT, HoloLens and MetaVision. However, perhaps as a positive consequence of the imperfect visibility, participants did not report any issue due to the eyes refocusing between the close display and the long-distance real world.

Although “Interactions” received the second-best score, voice recognition could not be tested properly for navigating or interacting in CARTON. This feature was tested only in the tutorial and with “Live Subtitle”, and we detected some issues: sometimes the device stopped listening for 1 s, which made users repeat themselves. Only 45 per cent of the participants felt that interacting with head gestures (tilting/nodding) was natural, mainly because the nod gesture had a critical issue. The forehead slope differed between participants, and for some of them the 15° opposite tilt of the headset was not enough: the mobile (and hence the axis used for detecting nodding) remained inclined, far from horizontal as expected, so nods were not detected. One participant also wanted a better tutorial, which seems quite important for unusual gestures. These results show there is room for improvement and further research on this kind of head gesture recognition.

The ease of aiming at a target was rated 4.0/5, which is good enough to consider this kind of hardware compatible with AR technologies.

Interestingly, even though CARTON was not designed or intended for production-level use (it was initially dedicated only to experimentation, research and development), half of the participants (6/12) would be willing to wear it in a museum. The main resistance of the others concerned aesthetics. Finally, all the participants were enthusiastic, offering new ideas, different use cases and suggestions for improvement, and showed real interest. Furthermore, 66.7 per cent of the participants said they would like to change and customize their CARTON to make it more accurate and robust, to improve the aesthetics, or just to add colors and stickers. This result strengthens the importance of opening and sharing the blueprints, guidelines and SDK of this solution.

5.4 Interactions

In total, 2,114 interactions were performed in the second study, on average 211 interactions per user per session. As the left graph in Figure 11 shows, the smartwatch appears to be an ideal way to use CARTON compared with the two other interactions: head gesture and finger touch on the phone screen. When participants had the option to choose the interaction modality, they selected the smartwatch 67 per cent of the time; considering only the race activity, they chose it 100 per cent of the time. One explanation could be that, although none of the participants had used a smartwatch before, its use is similar to something 80 per cent of them already use every day: a mobile phone. However, the interaction choice varied depending on the task. There was a use case where head gesture was the most used interaction, which is coherent and more intuitive or natural: when the user’s hands are not free, as when creating an origami. Indeed, for this activity, the difference in user choice is significant, with most users choosing the head gesture, as shown in the right graph of Figure 11. Finally, finger touch appears to be the least used interaction regardless of the situation. This could be explained by its lower success rate compared with the two other modalities, as shown in Figure 12.

This was confirmed by the questionnaire, where the only negative comments about finger touch concerned attempts that did not work. Our observations during the experiments showed that the area dedicated to the “swipe” gesture was not big enough, and some participants performed the gesture outside of it; an upgrade to the SDK would minimize this issue. Despite this minor issue, some participants performed very well in all interactions, including finger touch, with a success rate of 95 per cent.

Regarding the time to complete a task with each interaction modality, the results indicate that the smartwatch was the fastest interaction strategy. This was a big advantage in the race activity, where the smartwatch interaction showed much better timing (around four times quicker than the two other interactions), as shown in Figure 13. This could also explain why 100 per cent of the participants chose the smartwatch to perform the race task.

5.5 Post-results

With all the data collected during the experiments, we have already improved CARTON. On the hardware side, there were a few minor upgrades to the blueprint (some lines were changed to make it clearer and the rubber band’s location was moved) and to the guide (instructions to choose longer utility straps and the softest sponge possible, which fixes the comfort problems). Also, a video tutorial publicly accessible on YouTube (https://youtu.be/3ww5lE8PVsc) and on Instructables, a very famous DIY community website (www.instructables.com/id/Carton-DIY-Smart-Eyewear), has been published to make the creation process easier and to facilitate distributed knowledge sharing (Tanenbaum et al., 2013).

The CARTON SDK for Android has also been updated. There is now a calibration feature for head gesture recognition: the forehead tilt is adapted to the wearer of the device, which makes nod detection more accurate and efficient. Furthermore, another kind of head gesture has been implemented: head shaking (right/left), detected by counting the number of direction changes within an allotted time (see the sketch below). The smartwatch solution includes two new interaction patterns: a virtual joystick and buttons. The sample apps are in the process of being published on the Google Play Store, to make it easier for everyone to test them before digging into the code.
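As a rough illustration of the shake detection just described, the sketch below counts direction reversals within a time window. The constants, the noise threshold and the idea of feeding it the yaw angular velocity (e.g. from the gyroscope) are our assumptions for illustration; the SDK’s tuned values and implementation may differ:

```java
/**
 * Detects a head shake by counting left/right reversals of the yaw motion
 * within a time window. All constants are illustrative assumptions.
 */
public class ShakeDetector {
    private static final float MIN_RATE_DEG_S = 30f;  // ignore slow drift
    private static final int REQUIRED_REVERSALS = 3;
    private static final long WINDOW_MS = 1500;

    private long windowStart;
    private int reversals;
    private int lastDirection;  // -1 left, +1 right, 0 unknown

    /** Feed the yaw angular velocity (deg/s); returns true on a full shake. */
    public boolean onYawRate(float yawRate, long nowMs) {
        if (Math.abs(yawRate) < MIN_RATE_DEG_S) return false;
        int direction = yawRate > 0 ? 1 : -1;
        boolean shaken = false;
        if (lastDirection != 0 && direction != lastDirection) {
            if (reversals == 0 || nowMs - windowStart > WINDOW_MS) {
                windowStart = nowMs;  // first reversal opens a new window
                reversals = 1;
            } else if (++reversals >= REQUIRED_REVERSALS) {
                reversals = 0;        // enough reversals inside the window
                lastDirection = 0;
                shaken = true;
            }
        }
        if (!shaken) lastDirection = direction;
        return shaken;
    }
}
```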

6. Future work

Although CARTON is already usable, based on the lessons learned from the results we will keep improving its comfort and visibility, mainly by trying new materials. Following a reviewer’s suggestion, we would like to add an aluminum foil layer between the cardboard and the sponge; we aim to measure the actual radiation to verify whether this material could shield the head from the smartphone’s emissions. Also, although some effort was made regarding CARTON’s accessibility for people with special needs (e.g. different ways to interact with CARTON), it is definitely not enough, and we want to go further in this domain. By following guidelines for designing accessible wearable technology, as proposed by Wentzel et al. (2016), we would be able to use CARTON as a prototype in this area. It would also be interesting to go deeper into gesture recognition research, which, in addition to improving the system, could help make it more accessible. For instance, we could use the mirror module to add hand gesture recognition, a natural and intuitive way to interact with AR systems (Piumsomboon et al., 2013). Based on the positive and very enthusiastic results of interaction with the smartwatch, we will conduct new research with the new patterns developed in the post-results. We also hope to create a community around CARTON, which could, for example, develop an iOS SDK, a Unity package and sample apps with other AR technologies, such as ARToolKit (DAQRI, 2015); we may also consider doing this ourselves. Also, although we were initially against it, we are now considering creating a 3D-printable version of CARTON, to be published openly on platforms such as Thingiverse. This motivation arose because we learned from the study that users may also want a plastic version, and because community labs providing access to 3D printers are becoming more accessible and better known. Furthermore, another conceivable approach would be to contact professional makers (e.g. Dodocase), who could provide a full kit with some prefabricated parts based on the free blueprint and guide. This solution may be interesting for two reasons:

  (1) First, prefabricated parts should be more accurate than manually made ones; thus, the construction would be quicker and the result more accurate and robust.

  (2) Second, the manufacturer should be able to provide an even cheaper solution thanks to economies of scale, which would benefit everyone by making CARTON even more widely reachable.


Finally, now that we can use CARTON as smart eyewear, we aim to apply it to research on collaborative multi-user and mobile/wearable computing.

7. Conclusion

To conclude, this paper presented the CARTON project, an initiative to create reachable smart eyewear based on a DIY approach. The project allows end users to easily access another kind of multimedia experience (e.g. interfaces/interactions related to smart eyewear) with just a mobile phone. Moreover, this initiative includes two SDKs: the first supports the creation, or the adaptation, of existing Android apps for the CARTON device; the second (ControlWear) allows control of and interaction with mobile applications through a smartwatch.

We conducted a study to evaluate the effectiveness of the guidelines for building a CARTON device, while also testing the usability of the proposed device through manipulation of the provided sample apps. The results showed that all participants were able to create a functional CARTON with simple materials and tools, despite minor issues due to the heterogeneity of people’s physiognomies and skills. Thus, the contribution of this study is threefold:

  • providing blueprints and guidelines to build low-cost, open-source and easy-to-build DIY smart eyewear devices;

  • providing an SDK to support the development and adaptation of mobile apps for eyewear devices; and

  • providing different modalities of interaction, including a regular smartwatch, to make sure a wide range of people can use it in different situations.

We also conducted a second study to evaluate three different interaction modalities with CARTON (finger touch, head gesture recognition and ControlWear with a smartwatch). The overall results showed better outcomes for the ControlWear modality in terms of user preference, time and success rate in performing tasks. Furthermore, the outcomes indicate that users’ interaction preferences vary depending on the task: for instance, head gesture recognition was the most popular modality when users had their hands occupied building their origami.

By providing all of our work openly, not only the blueprint and code but also guides and full documentation, we make it much easier to reproduce the experiments, improve on them and involve a community in producing new versions of CARTON and ControlWear. Sharing is definitely not new or unique, but we hope other studies will follow this path, in this area or others.

Finding the appropriate balance between reachability and effectiveness was one of the hardest parts of this project. Is sacrificing reachability for a given material or tool worth it? This is exactly what the DIY community is very good at: finding new creative ways by combining multiple domains, such as crafting, electronics, sewing [...] a perfect match for mobile and wearable technology.

References

Ahmed, A. and Peralez, P. (2014), “Affordable altered perspectives: making augmented and virtual reality technology accessible”, Proceedings of the IEEE Global Humanitarian Technology Conference, San Jose, CA, 10-13 October, IEEE, pp. 603-608.

ARToolKit (2015), “SDK website”, available at: http://artoolkit.org/

Bastien, J.M.C. and Scapin, D.L. (1993), “Ergonomic criteria for the evaluation of human-computer interfaces”, Technical Report RT-0156, INRIA, p. 72, available at: inria-00070012

Bimber, O. and Raskar, R. (2003), “Alternative augmented reality approaches: concepts, techniques, and applications”, Eurographics 2003 (Tutorial Notes).

Brun, D., Ferreira, S.M., Gouin-Vallerand, C. and George, S. (2016), “CARTON project: do-it-yourself approach to turn a smartphone into a smart eyewear”, Proceedings of the 14th International Conference on Advances in Mobile Computing and Multi Media (MoMM ’16), ACM, New York, NY, pp. 128-136.

Buechley, L., Rosner, D.K., Paulos, E. and Williams, A. (2009), “DIY for CHI: methods, communities, and values of reuse and customization”, Extended Abstracts on Human Factors in Computing Systems, Boston, MA, 4-9 April, ACM Press, New York, NY, pp. 4823-4826.

De Angeli, D. and O’Neill, E.J. (2015), “Development of an inexpensive augmented reality (AR) headset”, Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, Seoul, 18-23 April, ACM Press, New York, NY, pp. 971-976.

Genaro Motti, V. and Caine, K. (2014), “Understanding the wearability of head-mounted devices from a human-centered perspective”, International Symposium on Wearable Computers, Seattle, WA, 13-17 September, ACM Press, New York, NY, pp. 83-86.

Kuznetsov, S. and Paulos, E. (2010), “Rise of the expert amateur: DIY projects, communities, and cultures”, Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, 16-20 October, ACM Press, New York, NY, pp. 295-304.

Metavision (2016), available at: http://metavision.com/

Morency, L.-P. and Darrell, T. (2006), “Head gesture recognition in intelligent interfaces: the role of context in improving recognition”, Proceedings of the 11th International Conference on Intelligent User Interfaces, Sydney, 29 January-1 February, ACM Press, New York, NY, pp. 32-38.

Naini, F.B. (2011), Facial Aesthetics: Concepts and Clinical Diagnosis, John Wiley & Sons.

Piumsomboon, T., Clark, A., Billinghurst, M. and Cockburn, A. (2013), “User-defined gestures for augmented reality”, CHI ’13 Extended Abstracts on Human Factors in Computing Systems, Paris, 27 April-2 May, ACM Press, New York, NY.

Seebright (2014), available at: http://seebright.com/


Damien Brun, Susan M. Ferreira and Charles Gouin-Vallerand, LICEF Research Center, Télé-Université du Québec, Montréal, Canada, and Sébastien George, Laboratory of Computer Science, Université du Maine, Le Mans, France
