A Multimodal Interaction Framework for Blended Learning
Journal Title: EAI Endorsed Transactions on Creative Technologies - Year 2017, Vol 4, Issue 10
Abstract
Humans interact with each other by using the five basic senses as input modalities, while sounds, gestures, facial expressions, etc. serve as output modalities. Multimodal interaction also occurs between humans and their surrounding environment, enhanced with further senses such as equilibrioception (the sense of balance). Computer interfaces, which can be regarded as another environment that humans interact with, lack the amalgamation of inputs and outputs needed to provide close-to-natural interaction. Multimodal human-computer interaction has sought to provide alternative means of communicating with an application that are more natural than the traditional “windows, icons, menus, pointer” (WIMP) style. Despite the great number of devices in existence, most applications make use of a very limited set of modalities, most notably speech and touch. This paper describes a multimodal framework that enables the deployment of a wide variety of modalities, tailored for use in blended learning environments, and introduces a unified and effective framework for multimodal interaction called COALS.
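The abstract does not detail the internals of COALS, but the core idea it describes, routing events from many input modalities (speech, touch, gesture, and others) into a single application command stream, can be sketched as below. All names (`ModalityEvent`, `MultimodalDispatcher`) are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ModalityEvent:
    modality: str     # input channel, e.g. "speech", "touch", "gesture"
    payload: str      # recognized command, e.g. "open_menu"
    timestamp: float  # seconds since the session started

class MultimodalDispatcher:
    """Fuses events from several input modalities into one command stream.

    Hypothetical sketch: handlers subscribe per modality, so an
    application reacts uniformly whether a command arrived by voice,
    touch, or gesture.
    """

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[ModalityEvent], None]]] = {}

    def register(self, modality: str,
                 handler: Callable[[ModalityEvent], None]) -> None:
        # Multiple handlers may listen to the same modality.
        self._handlers.setdefault(modality, []).append(handler)

    def dispatch(self, event: ModalityEvent) -> None:
        # Fall back to handlers registered under "any" when no
        # modality-specific handler exists.
        for handler in self._handlers.get(event.modality,
                                          self._handlers.get("any", [])):
            handler(event)

# Usage: two modalities feed the same command log.
commands: List[str] = []
dispatcher = MultimodalDispatcher()
dispatcher.register("speech", lambda e: commands.append(f"speech:{e.payload}"))
dispatcher.register("gesture", lambda e: commands.append(f"gesture:{e.payload}"))

dispatcher.dispatch(ModalityEvent("speech", "open_menu", 0.5))
dispatcher.dispatch(ModalityEvent("gesture", "swipe_left", 0.9))
# commands now holds ["speech:open_menu", "gesture:swipe_left"]
```

The point of the sketch is that modality-specific recognizers stay decoupled from application logic: adding a new input device only means registering another handler, which matches the paper's goal of supporting a wide variety of modalities under one framework.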
Authors and Affiliations
N. Vidakis