[Attending ISS 2023] Seeing the Wind: An Interactive Mist Interface for Airflow Input

Introduction

I am Welkin, a second-year master's (M2) student in the Yuta Sugiura Laboratory.
I attended ISS 2023, held in Pittsburgh, USA, from November 5 to 8, where I gave two presentations: "Seeing the Wind: An Interactive Mist Interface for Airflow Input" and "Assisting the Multi-Directional Limb Motion Exercise with Spatial Audio and Interactive Feedback". This is my report on the conference.

Research Overview

Seeing the Wind: An Interactive Mist Interface for Airflow Input: Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
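The core idea above, that airflow disturbs the mist and thereby changes what a ToF sensor measures, can be illustrated with a minimal sketch. This is not the paper's actual implementation; the baseline distance, threshold, and simulated readings are all assumptions for illustration only.

```python
# Hypothetical sketch: flagging airflow events from ToF distance readings.
# Assumption (not from the paper): in calm air the mist plume sits at a
# roughly constant distance from the sensor, and airflow displaces it,
# making the measured distance deviate from that baseline.

def detect_airflow_events(readings, baseline, threshold=50):
    """Return indices where a ToF reading deviates from the calm-air
    baseline by more than `threshold` (here, millimeters)."""
    return [i for i, r in enumerate(readings)
            if abs(r - baseline) > threshold]

# Simulated readings (mm): the mist sits ~300 mm from the sensor;
# a hand wave pushes it closer for two samples, then it recovers.
readings = [300, 302, 298, 180, 175, 290, 301]
print(detect_airflow_events(readings, baseline=300))  # [3, 4]
```

A real system would of course need calibration of the baseline and smoothing against sensor noise, but the sketch shows why mist turns an intangible airflow change into a measurable distance change.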

Assisting the Multi-Directional Limb Motion Exercise with Spatial Audio and Interactive Feedback: Guiding users with limb motion exercises can assist them in their physical recovery and muscle training. However, traditional visual tracking methods often require multiple camera angles to help the user understand their movements and require the user to be within the range of the screen. This is not effective for more ubiquitous situations. Therefore, we propose a method that uses spatial audio to guide the user with a multiple-directional limb motion exercise. Depending on the user’s perception of spatial audio’s direction, we designed feedback to help the user adjust the movement height and complete the move to the designated position. We attached a smartphone to the body limb to capture the user’s motion data and generate feedback. The experiment showed that with our designed methods, the user could conduct the multiple-directional limb exercise to the designated position. The spatial audio-based limb’s motion exercise system could create a natural, pervasive, and non-visual exercise training system in daily life.
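The guidance idea in this second abstract, steering the user toward a target position via the perceived direction of spatial audio, can likewise be sketched in a simplified form. The azimuth-based error-to-pan mapping below is a hypothetical illustration, not the system described in the paper, which uses smartphone motion data and richer interactive feedback.

```python
# Hypothetical sketch: panning an audio cue toward the direction the
# user should move. Assumption (not from the paper): the limb's current
# direction and the target direction are given as azimuth angles in
# degrees, and the cue is panned in proportion to the angular error.

def pan_for_target(limb_azimuth_deg, target_azimuth_deg):
    """Return a stereo pan in [-1, 1]: -1 = full left, +1 = full right,
    0 = centered (on target), proportional to the signed angular error."""
    error = target_azimuth_deg - limb_azimuth_deg
    error = (error + 180) % 360 - 180  # wrap to (-180, 180]
    return max(-1.0, min(1.0, error / 90.0))

print(pan_for_target(0, 45))   # 0.5  -> cue leans right
print(pan_for_target(90, 0))   # -1.0 -> cue fully left
print(pan_for_target(30, 30))  # 0.0  -> centered, target reached
```

The design intent mirrors the abstract: when the cue sounds centered, the limb has reached the designated direction, so no visual display is needed.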

Reflections

The conference was a wonderful opportunity to exchange ideas with researchers from around the world. It was held in Pittsburgh, home to Carnegie Mellon University, one of the event's organizers and a birthplace of the HCI field, so it was also a perfect chance to interact with pioneering scholars in the area. I hope to incorporate the knowledge and advice I gained from this conference into my future work and further develop my research.

Publication Information

Tian Min, Chengshuo Xia, Takumi Yamamoto, and Yuta Sugiura. 2023. Seeing the Wind: An Interactive Mist Interface for Airflow Input. Proc. ACM Hum.-Comput. Interact. 7, ISS, Article 444 (December 2023), 22 pages.

Tian Min, Chengshuo Xia, and Yuta Sugiura. 2023. Assisting the Multi-directional Limb Motion Exercise with Spatial Audio and Interactive Feedback. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion '23). Association for Computing Machinery, New York, NY, USA, 53–56.

Presentation Slides


