The School of Information Sciences publishes the Dissertations in Interactive Technology series (ISSN 1795-9489). Before 2005, dissertations in the HCI field were published in the report series of the (then) Department of Computer Sciences. Even after 2005, some dissertations have been published elsewhere, e.g. in the report series of the doctoral candidate's employer.

This web page first lists the dissertations published in the series in reverse chronological order (most recent first), and then the dissertations published in other series.

Dissertations in Interactive Technology

35. Zhenxing Li

Title: Efficient and Accurate Hand-based Kinesthetic Interaction for Virtual Reality
Date of defense: 2021-08-13
Opponent: Nigel W. John
Reviewers: William Harwin, Ken Pfeuffer
Supervisor: Roope Raisamo


The aim of this dissertation is to enable efficient and accurate hand-based kinesthetic interaction for VR. The research was divided into three steps: problem understanding, development, and application. First, the research focused on the control-display (CD) gain technique. Multiple studies have argued that the mismatch between hand motion and cursor motion caused by the CD gain method may affect kinesthetic interaction. However, it was unclear how it affects kinesthetic tasks in terms of task performance and user experience. The present research filled this gap and examined the effects of CD gain in a kinesthetic task. Second, to address the limited workspace of force-feedback devices, three multimodal kinesthetic interfaces were developed using the user's gaze as an input modality. These novel kinesthetic interfaces avoided the use of CD gain, and they also potentially relieved hand fatigue from prolonged operation. Third, the research explored medical applications of a kinesthetic VR interface and a vibrotactile VR interface built with currently popular VR equipment and force-feedback devices. To explore their practical usability, the two VR interfaces were compared with the state-of-the-art 2D interface (2D display + mouse) as the baseline.

34. Jukka Selin

Title: Tietomallin pelillistäminen ja toiminnallisen suunnittelun menetelmä rakennusten suunnittelun apuna (Gamification of the building information model and the functional design method as aids in building design)
Date of defense: 2021-03-26
Opponent: Markku Tukiainen
Reviewers: Arto Kiviniemi, Miro Ristimäki
Supervisor: Markku Turunen


Gamification means, at a minimum, free movement within the building model, but it may also include many other features that support the design and visualization of the object. Many CAD software packages already include simple gamification, such as free movement. If gamification is done using game engines, the gamified data model can include exactly the features we want.

The dissertation presents ideas and methods, as well as pilots carried out to test them, for gamifying the data model and utilizing the gamified data model at different stages of the building's life cycle, from design to building maintenance. The main topics are mobility, sizing, accessibility, and building safety. In addition, the thesis presents ideas and methods for crowdsourcing design with a gamified data model and multiplayer techniques. The work also utilizes my patented Functional Design Method (FDM), which provides an idea and a method for better consideration of the actual space requirements of a building at different stages of its life cycle. The FDM has been granted patents in Finland and the USA.

33. Antti Sand

Title: On Adding Haptic Feedback to Interaction with Unconventional Display Devices
Date of defense: 2021-02-26
Opponent: Sriram Subramanian
Reviewers: Diego Martinez Plasencia, Eve Hoggan
Supervisors: Ismo Rakkolainen, Veikko Surakka


The sense of touch is important in human interaction as well as in interacting with our surroundings. Touch and tactile feedback are also essential when interacting with digital technology. The present doctoral thesis aimed to develop a thorough understanding of how touch sensations could be added to unconventional display devices that do not inherently provide tactile feedback. Novel and less explored devices and use cases were examined in five carefully controlled studies.

The results showed that ultrasonic mid-air haptic stimuli are a well-suited method for feedback delivery in permeable and virtual displays. Further, the results showed that users uniformly preferred the addition of haptic feedback when interacting with intangible user interfaces. In addition, the results showed that mid-air tactile stimuli can be designed so that they are reliably identifiable after minimal familiarization and hence can be utilized for efficient information transfer on tactile displays. Taken together, the findings of this thesis suggest functional solutions for adding haptic feedback to interaction with displays that are currently classified as unconventional but will become more mainstream technologies in the future.

32. Toni Pakkanen

Title: Supporting Eyes-Free Human–Computer Interaction with Vibrotactile Haptification
Date of defense: 2020-12-18
Opponent: Jan van Erp
Reviewers: Eve Hoggan, Vincent Lévesque
Supervisor: Roope Raisamo


The primary aim of this series of experiments was to evaluate whether the current level of actuation technology could be used more comprehensively than in present-day solutions with simple haptic alerts and notifications; that is, whether comprehensive use of vibrotactile feedback in interactions would provide additional benefits for users compared with current haptic and non-haptic interaction methods.

The main finding of this research is that when more comprehensive haptic user interfaces are used in eyes-free, distracted-use scenarios, such as while driving a car, the user's main task, driving, is performed better. Furthermore, users liked the comprehensively haptified user interfaces.

31. Deepak Akkil

Title: Gaze Awareness in Computer-Mediated Collaborative Physical Tasks
Date of defense: 2019-08-30
Opponent: Roman Bednarik
Reviewers: Susan Fussell, Sebastian Pannasch
Supervisor: Poika Isokoski


This thesis comprises six publications which enhance our understanding of the everyday use of gaze-tracking technology and the value of shared gaze to remote collaborations in the physical world. The studies focused on a variety of collaborative scenarios involving different camera configurations (stationary, handheld, and head-mounted cameras), display setups (screen-based and projection displays), mobility requirements (stationary and mobile tasks), and task characteristics (pointing and procedural tasks). The aim was to understand the costs and benefits of shared gaze in video-based collaborative physical tasks.

The findings suggest that gaze awareness is useful in remote collaboration on physical tasks. Shared gaze enables efficient communication of spatial information, helps viewers predict task-relevant intentions, and improves situational awareness. However, different contextual factors can influence the utility of shared gaze. Shared gaze was more useful when the collaborative task involved communicating pointing information rather than procedural information, when the collaborators were mutually aware of the shared gaze, and when gaze tracking was accurate enough to meet the task requirements. In addition, the results suggest that the collaborators' roles can also affect the perceived utility of shared gaze.

30. Tomi Nukarinen

Title: Assisting Navigation and Object Selection with Vibrotactile Cues
Date of defense: 2019-03-07
Opponent: Stephen Brewster
Reviewers: Jan van Erp, David McGookin
Supervisors: Roope Raisamo, Veikko Surakka


The primary aim of this thesis was to investigate how to assist movement control with vibrotactile cues. Vibrotactile cues refer to technology-mediated vibrotactile signals that notify users of perceptual events, prompt users to make decisions, and give users feedback on their actions. To explore vibrotactile cues, we carried out five experiments in two contexts of movement control: navigation and object selection. The goal was to find ways to reduce the information load in these tasks, thus helping users to accomplish them more effectively. We employed measurements such as reaction times, error rates, and task completion times. We also used subjective rating scales, short interviews, and free-form participant comments to assess the vibrotactile-assisted interactive systems.

The findings of this thesis can be summarized as follows. First, if the context of movement control allows the use of both feedback and feedforward cues, feedback cues are a reasonable first option. Second, when using vibrotactile feedforward cues, using low-level abstractions and supporting the interaction with other modalities can keep the information load as low as possible. Third, the temple area is a feasible actuation location for vibrotactile cues in movement control, including navigation cues and object selection cues with head turns. However, the usability of the area depends on contextual factors such as spatial congruency, the actuation device, and the pace of the interaction task.

29. Sumita Sharma

Title: Collaborative Educational Applications for Underserved Children: Experiences from India
Date of defense: 2018-10-13
Opponent: Payal Arora
Reviewers: Netta Iivari, Laura Malinverni
Supervisor: Markku Turunen


As interactive technology permeates the classroom environment, it promises potential educational benefits to children across the world. In the Global South, however, certain communities are largely left out. These include children with developmental disabilities, who face several social and cognitive challenges. Technology provides them with a safe, controlled, and predictable environment for therapeutic and learning interventions. Children from underprivileged backgrounds face several socio-economic challenges in accessing technology that supports the development of 21st-century career skills. These skills include fluency in communicating in English, proficiency in computer and internet usage, and the ability to work in culturally diverse teams.

In this research, culturally appropriate gesture-based applications were designed, developed, and evaluated for children with developmental disabilities, at two centers of an NGO in New Delhi, called Tamana. The applications focused on social and life skills, where children with little verbal skills could use hand gestures, like pointing, to participate in collaborative tasks or practice buying groceries from a local store. Cross-cultural online collaboration using conversational English was explored with underprivileged children at another NGO in New Delhi, called Deepalaya. The children, who had little technology experience, used a web-based application with a Finnish partner to solve a navigational task. Overall, the outcome of this research is a set of guidelines for designing, developing, introducing, and evaluating technology for underserved children in India.

28. Pekka Kallioniemi

Title: Collaborative Wayfinding in Virtual Environments
Date of defense: 2018-06-15
Opponent: Daphne Economou
Reviewers: Tassos A. Mikropoulos, Jayesh S. Pillai
Supervisor: Markku Turunen


Wayfinding is a complex process in which people orient themselves in the surrounding space and navigate from one place to another. The path selected may vary based on the purpose of the trip, but generally, people want to move from their origin to their destination as effortlessly as possible. Wayfinding often has collaborative aspects, for example, in situations where one person is guiding another.

This dissertation evaluates aspects of collaborative wayfinding in virtual environments. It suggests several factors that affect collaborative wayfinding in these environments, including immersion, gender, and video game experience. This work also introduces a collaborative virtual environment application family called CityCompass with three different evolutionary stages. All these applications have the same approach for measuring spatial ability through collaborative wayfinding tasks, but they also have unique features, for example, regarding interaction. The results from this thesis should be considered as generic guidelines when designing virtual environments with wayfinding aspects.

27. Ville Mäkelä

Title: Design, Deployment, and Evaluation of Gesture-Controlled Displays in Ubiquitous Environments
Date of defense: 2018-05-26
Opponent: Nigel Davies
Reviewers: Edward Lank, Sarah Clinch
Supervisor: Markku Turunen


Mid-air gestures are a novel way of interacting with technology. However, mid-air gestures have not been widely used for interaction with the information displays found in many public settings. In particular, the unique benefit of gestures, interaction from a distance, has not been fully utilized. It remains unclear how such interactions should be designed, and what their user experience and true potential are. Moreover, deploying gesture-controlled displays in real-world settings is challenging because many external forces, such as the weather, people, and other technology, affect the display and its use. In addition, evaluating the use of interactive displays is time-consuming and demanding.

The research goal of this dissertation is to study the design of meaningful and streamlined gestural interactions, and assess how gesture-controlled displays can be successfully deployed in public spaces, and how they can be evaluated effectively. To achieve this, four gesture-controlled displays were developed and studied, resulting in three main contributions. First, this work recognizes external factors that have a negative effect on interactive displays, and provides guidelines for dealing with them. Second, this research shows that the usage of interactive displays can be evaluated on a large scale by employing a semi-automated log-based process. Third, this dissertation provides guidelines and techniques that aid in developing easy-to-use gestural interactions. Most importantly, the results indicate that mid-air gestures can provide an impressive user experience in the context of cross-device interaction.

26. Ahmed Farooq

Title: Developing technologies to provide haptic feedback for surface-based interaction in mobile devices
Date of defense: 2017-12-14
Opponent: Hong Z. Tan
Reviewers: Seungmoon Choi, Yon Visell
Supervisors: Roope Raisamo, Grigori Evreinov


The goal of this thesis was to understand the issues in providing vibrotactile signals during interaction with mobile devices and smart surfaces, and to resolve these challenges by improving methods of actuation and mediation. Interaction using touch, as with any form of communication, requires end-to-end verification. Until now, most haptic communication systems have focused only on signal generation and actuation, ignoring key issues such as signal transmission and the integrity of the generated signal at the point of contact. In this thesis I focused on understanding the possible limitations of the current approach to developing effective vibrotactile environments for mobile devices. By conducting research on signal transmission and mediation from the source to the point(s) of contact, I determined the possible degree of attenuation of the intended signal in current mobile touchscreen devices. By analyzing the results of these studies, I was able to find possible solutions for limiting the degradation of haptic signals and developed proof-of-concept systems validating my assumptions.

Currently, most traditional systems use haptic feedback as a secondary support mechanism for the auditory and visual modalities. However, it can be argued that with minor redesigning of current interaction systems, the role of haptics can be greatly enhanced to unlock the true potential of haptic communication. One way to achieve this is to add a kinesthetic component to the haptic feedback alongside the vibrotactile signals currently found in today's mobile and handheld devices. This thesis illustrates how it may be possible to generate kinesthetic afferentation without cumbersome high-powered manipulators, providing not only confirmation feedback but also the ability to supplement virtual object manipulation in real time, free of mechanical linkages, opening up a wide range of interaction scenarios and implementation techniques.

25. Jani Lylykangas

Title: Regulating human behavior with vibrotactile stimulation
Date of defense: 2017-05-19
Opponent: Mounia Ziat
Reviewers: Jan B.F. van Erp, Eve Hoggan
Supervisor: Veikko Surakka


The sense of touch is a profound means of interaction with objects, surroundings, and other people. Additionally, touch sensations offer essential information for the regulation of behavior by directing our attention and controlling our body movements. This thesis investigated technology-mediated vibrotactile touch information in interactive applications that guide human behavior. The aim was to contribute a thorough understanding of how vibrotactile stimulation can be designed to be easily understood (for example, to change tempo in physical exercise) and to elicit fast responses (for example, to apply the brakes while driving).

The results of four controlled laboratory experiments showed that relatively simple vibrotactile technology can be used to effectively communicate different types of instructions on how to regulate behavior. Further, the results showed that vibrotactile cues can be designed so that they are intuitive, in the sense of being intelligible even without prior training. Vibrotactile instructions turned out to be effective, comfortable, and preferred for compensating for and augmenting visual instructions. The results can be utilized in designing touch-based information delivery, which would enable better concentration on primary tasks, such as observing the environment, and improve safety while interacting with technology.

24. Hannu Korhonen

Title: Evaluating Playability of Mobile Games with the Expert Review Method
Date of defense: 2016-08-26
Opponent: Regina Bernhaupt
Reviewers: Effie Lai-Chong Law, Magy Seif El-Nasr
Supervisors: Kari-Jouko Räihä, Frans Mäyrä


The thesis introduces the expert review method for mobile game evaluations. In this method, an evaluation is conducted by experts instead of players. The inspectors identify issues in the game design that might cause problems for players and decrease the attractiveness of the game. The advantage of the method is that it can be used to evaluate very early prototypes of the game and to fix problems in the design before the game is implemented.

The result of the research work is a set of 47 domain-specific heuristics to be used with the expert review method. The heuristics cover the most important aspects of playability, including game usability, gameplay, mobility, multi-player features, and context-awareness. These aspects define the playability of the game. The heuristics can be used to evaluate mobile games and video games in general, and they can also be applied in the evaluation of other mobile services.

23. Adewunmi Obafemi Ogunbase

Title: Pedagogical Design and Pedagogical Usability of Web-Based Learning Environments: Comparative Cultural Implications from Africa and Europe
Date of defense: 2016-02-12
Opponent: Nerey H. Mvungi
Reviewers: Elena Dikova Shoikova, Sabine Graf
Supervisor: Roope Raisamo


This dissertation focuses on the pedagogical design and pedagogical usability of web-based learning environments. Related themes are the acceptance and use of web-based learning environments in higher education, including cultural issues related to the appropriateness of using learning technologies among African and European learners.

The ultimate goal of these studies is to establish a West African Digital University as a case for reconstructing education and seizing the peace premium towards promoting a culture of peace and tolerance in post-conflict situations in West Africa. Hence, the findings, recommendations, and comments from these studies are to be implemented in the proposed West African Digital University, which is already an ongoing project in West Africa.

22. Tuuli Keskinen

Title: Evaluating the User Experience of Interactive Systems in Challenging Circumstances
Date of defense: 2015-11-28
Opponent: Eva-Lotta Sallnäs Pysander
Reviewers: Anirudha Joshi, Thomas Olsson
Supervisor: Markku Turunen


This dissertation considers user experience within the field of human-technology interaction from a practical perspective. The aim is to fill the research gap concerning how to evaluate user experience in practice. This dissertation focuses on evaluating the user experience of interactive systems that utilize new interaction techniques in challenging circumstances outside of laboratories. Seven interactive systems, and especially their eight user experience evaluations, are reported in detail. The main challenges in the case studies have arisen either from the context or from the user group(s). The basis for the evaluations has been to take the different circumstances into account and to design a user experience evaluation approach accordingly.

The main contribution of this dissertation is a process model on how to evaluate the user experience of interactive systems in practice. The model comprises the whole life cycle of user evaluations, i.e., what needs to be considered before, during, and after the evaluation situation itself. The model provides a set of practical guidelines, and it can be utilized in the design and execution of user experience evaluations in various circumstances.

21. Selina Sharmin

Title: Eye Movements in Reading of Dynamic On-screen Text in Various Presentation Formats and Contexts
Date of defense: 2015-04-17
Opponent: Jukka Hyönä
Reviewers: Roman Bednarik, Kenneth Holmqvist
Supervisor: Kari-Jouko Räihä


This dissertation presents eye movement studies of reading dynamically rendered text. Technological development has made reading of electronic text commonplace. A number of studies have been conducted to find a suitable presentation format for text display on electronic devices. Nevertheless, improved text presentation formats, especially for dynamically presented text, are still being developed and studied. Eye movements in on-screen reading provide a useful source of information about reading patterns under different circumstances, such as the content and context of the text. This thesis collects six publications on eye movements in reading on-screen text with diverse presentation formats and in a variety of contexts, such as reading print-interpreted text and reading for translation.

The results show that the numbers of fixations and regressions are highly influenced by the design of the text presentation formats. Eye movement metrics are significantly affected when text rendering shrinks from the largest presentation unit (full paragraph) to the smallest unit (phrase of a few words) at a time, and further when text appears letter by letter. Reading one's own emerging dynamic text shows different gaze behavior than reading static text. Gaze can also be used as an active channel to instigate automatic scrolling of long pieces of text. Our studies indicate similar readability with manual scrolling and the gaze-enhanced auto-scrolling technique.

20. Katri Salminen

Title: Emotional Responses to Friction-based, Vibrotactile, and Thermal Stimuli
Date of defense: 2015-04-28
Opponent: Stephen Brewster
Reviewers: Satu Jumisko-Pyykkö, Eva-Lotta Sallnäs Pysander
Supervisor: Veikko Surakka


Touch has an essential role in socio-emotional communication. Currently, different haptic technologies provide a high potential for scientific research on the relationship between touch and emotions, as they enable the accurate creation of stimulation. This thesis summarizes the results of six publications that systematically studied emotion-related responses to different haptic stimuli (i.e., friction, vibrotactile, and thermal). For this purpose, subjective experiences (i.e., emotion-related ratings of the stimuli), psychophysiological measurements (i.e., activation of sweat glands), and behavioral responses (e.g., differentiation of the stimuli) were measured.

The results of the thesis showed that different haptic stimulations activated the human emotion system differently, as evidenced by subjective ratings and behavioral and physiological responses. In respect of the ratings, it can be concluded that friction and thermal stimulations were better at evoking changes in the ratings of pleasantness and approachability than vibrotactile stimuli. Vibrotactile stimuli were associated with a higher level of arousal and a feeling of being controlled by the stimulation. As there is growing interest in using stimulation of the sense of touch in human-technology interaction, it is likely that the results of the current thesis can be utilized in designing haptics-based affective computing.

19. Jussi Rantala

Title: Spatial Touch in Presenting Information with Mobile Devices
Date of defense: 2014-11-13
Opponent: Karon MacLean
Reviewers: Vincent Lévesque, Antti Oulasvirta
Supervisor: Roope Raisamo


Touch is essential in human-computer interaction with mobile devices. Devices such as smartphones sense input through touchscreens to provide users with information. However, this information is presented mainly via the visual and auditory modalities. The use of touch output has been limited to vibration alerts and feedback for touchscreen buttons. This thesis focused on ways to use touch output for presenting alphabetical and emotional information. Special emphasis was put on studying how spatial touch output presented to different areas of the user's hand could be utilized with mobile devices.

This was achieved by designing, implementing, and evaluating novel interaction methods based on handheld device prototypes. The results indicate that alphabetical information can be presented to users by vibrating the touchscreen of a mobile device. In addition, emotional information could be communicated between people using devices capable of presenting touch gestures, such as squeezing and stroking, with vibration. These findings can be utilized in future human-computer interaction research aiming to support more active use of touch with mobile devices.

18. Joel S. Mtebe

Title: Acceptance and Use of eLearning Technologies in Higher Education in East Africa
Date of defense: 2014-10-31
Opponent: Ruth de Villiers
Reviewers: Morten Flate Paulsen, Matti Tedre
Supervisor: Roope Raisamo


The significance of eLearning solutions for overcoming challenges facing the education sector in higher education in East Africa cannot be overstated. Appropriate use of eLearning solutions has the potential to reduce costs, to widen access, and to improve the quality of teaching and learning. Although these solutions have been successfully implemented in many developed countries, the degree of acceptance and usage is low in the majority of higher education institutions in East Africa.

The aim of this thesis was to investigate factors that influence acceptance and use of various eLearning solutions in higher education in East Africa. It is a compound thesis comprising five articles that describe four distinct case studies.

The results described in the thesis should enable institutions to find strategies that promote greater use and acceptance of eLearning solutions in higher education in East Africa. They also give developers tools to develop eLearning services that are relevant and acceptable to intended users in the region.

17. Juha Leino

Title: User Factors in Recommender Systems: Case Studies in e-Commerce, News Recommending, and e-Learning
Date of defense: 2014-08-29
Opponent: Katrien Verbert
Reviewers: Dietmar Jannach, Martin Schmettow
Supervisor: Kari-Jouko Räihä


Recommender systems have become omnipresent in e-commerce and many other domains, as the number of items available to us has exceeded our ability to consider them individually. Recommenders help us make better decisions with less effort and uncertainty by helping us find salient items (e.g. suggesting books) and decide which one(s) to choose (e.g. which book to buy). However, recommender system research has largely focused on system-centric aspects, e.g. algorithms, and ignored user-centric aspects. This is now seen as having been detrimental to the field, as it is user-centric aspects that determine the adoption and use of recommenders.

In this work, we study user-centric aspects of recommender systems in three domains, e-commerce, news recommending, and e-learning. All studies except one involved actual users using actual systems in authentic use contexts. Consequently, this work offers us windows into the actual use of recommender systems. The results underline that there are few universal truths as far as user-related factors in recommender systems are concerned. In fact, what is true in one context can be the opposite in another context due to changing user tasks and goals. Consequently, in developing recommender systems that truly serve their users, user-centric testing is a must.

16. Outi Tuisku

Title: Face Interface
Date of defense: 2014-05-23
Opponent: Jukka Hyönä
Reviewers: John Paulin Hansen, Markku Tukiainen
Supervisor: Veikko Surakka


The use of facial information is imperative in human-to-human communication. For example, people naturally gaze at the person they are interacting with. Further, they use facial muscles to generate facial expressions in order to express emotions and intentions. In addition to these communicative purposes, the gaze and facial muscle systems can also be used for interacting with and controlling computers. The current thesis introduces a wearable prototype device called Face Interface that measures both gaze direction and facial expressions, such as frowning and smiling, for pointing at and selecting objects while interacting with graphical user interfaces. Three versions of the prototype were used in the course of this thesis, each improved according to the objective and subjective results of the previous iteration.

The results show that Face Interface functions promisingly as a pointing and selection technique. Across the iterations, significant improvements were achieved in pointing task times (from 2.5 seconds with the first prototype to 1.3 seconds with the third). Further, the research has shown, for example, that the combined use of these two modalities is easy to learn and does not require much practice. These are clear indications that the use of facial information has great potential in human-computer interaction.

15. Mirja Ilves

Title: Human Responses to Machine-Generated Speech with Emotional Content
Date of defense: 2013-06-19
Opponent: Göte Nyman
Reviewers: Timo Saari, Martti Vainio
Supervisor: Veikko Surakka


In spoken language, both the content of the spoken words and the prosody of speech can mediate emotion-related information. To study how the pure content of spoken words affects human emotions, speech synthesizers offer good opportunities, as they allow precise control over prosodic cues. This thesis summarizes five publications that investigated how the lexical emotion-related content of synthesized speech affected people's emotional experiences and physiological responses (i.e., facial muscle activity, pupil size, and heart rate). A second aim was to study how the human-likeness of a synthetic voice affects these responses. Third, the effects of emotional content on the perception of the quality of speech synthesis were studied.

The results presented in this thesis suggest that the synthesized lexical expressions of emotions can evoke emotions in people. The results suggest that the features of the voice also matter when evoking emotions through computers. Finally, the results showed that the lexical content of the messages had such a strong effect on people that the impression of the voice quality was affected by the content of the spoken message.

14. Tomi Heimonen

Title: Design and Evaluation of User Interfaces for Mobile Web Search
Date of defense: 2012-11-20
Opponent: Matt Jones
Reviewers: George Buchanan, Mark D. Dunlop
Supervisor: Kari-Jouko Räihä


Mobile Web search is a rapidly growing form of everyday information access. Research suggests that new paradigms are needed to better support mobile searchers. In this work, two such novel search interface techniques were designed, implemented, and evaluated. The first method is a clustering search interface that presents a categorized overview of the results. The findings from laboratory and longitudinal studies indicate that clustering can support the exploratory search needs of mobile searchers. The second presentation method is a visualization of the instances of the user’s query phrase in the result document. Findings from user studies suggest that the visualization can be useful in ruling out non-relevant results and can assist when the other result descriptors do not allow a conclusive relevance assessment, although some learning is required. Finally, the contextual triggers and information behaviors of active mobile Internet users were studied to understand the role of Web search as a mobile information seeking activity. The results show that mobile Web search and browsing are important information seeking activities for resolving emerging information needs in various situations, whether at home or “on the go.” The results of this thesis underline the need for future mobile search interfaces to consider new result presentation methods and account for context-dependent information needs.

13. Toni Vanhala

Title: Towards Computer-Assisted Regulation of Emotions
Date of defense: 2011-12-09
Opponent: Manfred Thüring
Reviewers: Gary Bente, Tapio Takala
Supervisor: Veikko Surakka


Emotions are intimately connected with our lives. They are essential for motivating behaviour, reasoning effectively, and facilitating interactions with other people. Consequently, the ability to regulate the tone and intensity of emotions is important for leading a life of success and well-being. Intelligent computer perception of human emotions and effective expression of virtual emotions provide a basis for assisting emotion regulation with technology. State-of-the-art technologies already allow computers to recognize and imitate human social and emotional cues accurately and in great detail. For example, in the present work a regular-looking office chair was used to covertly measure human body movement responses to artificial expressions of proximity and facial cues. In general, such artificial cues from visual agents were found to significantly affect heart, sweat gland, and facial muscle activities, as well as subjective experiences of emotion and attention. The perceptual and expressive capabilities were combined in a setup where a person regulated her or his more spontaneous reactions by either smiling or frowning voluntarily at a virtual humanlike character. These results highlight the potential of future emotion-sensitive technologies for creating supportive and even healthy interactions between humans and computers.

12. Ying Liu

Title: Chinese Text Entry with Mobile Devices
Date of defense: 2010-12-03
Opponent: I. Scott MacKenzie
Reviewers: Janet C. Read, Shumin Zhai
Supervisor: Kari-Jouko Räihä


Text entry methods enable input of written text to computing systems. As a logosyllabic language, Chinese has unique characteristics that bring new challenges to the design and evaluation of Chinese text entry methods. This dissertation explores new interaction solutions and patterns of user behavior in the Chinese text entry process with various approaches. The work covers four means of Chinese text entry on mobile devices: Chinese handwriting recognition, Chinese indirect text entry with a rotator, Mandarin dictation, and Chinese pinyin input methods with a 12-key keypad. New design solutions for Chinese handwriting recognition and pinyin methods utilizing a rotator are proposed and shown in empirical studies to be well accepted by users. A Mandarin short message dictation application for mobile phones is also presented, with two associated studies on human factors. Two studies were also carried out on Chinese pinyin input methods that are based on the 12-key keypad. The comparative study of five phrasal pinyin input methods led to design guidelines for the advanced feature of phrasal input. The second study of pinyin input methods produced a predictive model addressing users’ error-free speeds.

11. Päivi Majaranta

Title: Text Entry by Eye Gaze
Date of defense: 2009-08-01
Opponent: Anke Huckauf
Reviewers: Hirotaka Aoki, Anthony Hornof
Supervisor: Kari-Jouko Räihä


Text entry by eye gaze is used by people with severe motor disabilities. This thesis provides an extensive review of the research conducted in the area of gaze-based text entry. It summarizes results from several experiments that study various aspects of text entry by gaze. An overview of different design solutions is given, together with guidelines derived from the research results. It is hoped that the thesis will provide a useful starting point for developers, researchers, and assistive technology professionals wishing to gain deeper insight into gaze-based text entry.

10. Yulia Gizatdinova

Title: Automatic Detection of Face and Facial Features from Images of Neutral and Expressive Faces
Date of defense: 2009-01-16
Opponent: Heikki Ailisto
Reviewers: Matti Pietikäinen, Marcos A. Rodrigues
Supervisor: Veikko Surakka


Most human interaction takes place through face-to-face communication. The obvious importance of facial stimuli for humans motivates the idea of utilizing facial information in human-technology interaction. This type of interaction requires that facial information, among other cues, is automatically captured, analysed, and further processed in order to make the interaction more natural and intelligent. Computer vision capabilities are especially helpful for capturing and analyzing important visual cues from the user’s face.

In this dissertation, one aspect of automatic face analysis, namely, face and feature detection, was addressed. During the course of this research work a framework for automatic and expression-invariant localization of faces and prominent facial landmarks, such as eyes, eyebrows, nose, and mouth from static images and real-time video was developed. The performance of the framework was evaluated on several databases of facial expressions coded in terms of prototypical facial displays, like happiness and surprise, and facial muscle activations presented alone or in combinations. In general, the results showed that the framework allowed the face and facial landmarks to be located automatically, robustly, and efficiently from static images and streaming videos displaying facial expressions of varying complexity.

9. Oleg Špakov

Title: iComponent — Device-Independent Platform for Analyzing Eye Movement Data and Developing Eye-Based Applications
Date of defense: 2008-05-09
Opponent: Markku Tukiainen
Reviewers: John Paulin Hansen, Jukka Paakki
Supervisor: Kari-Jouko Räihä


The growing number of publications that address human-computer interaction using eye movement highlights the increasing need for tools to investigate and analyse the behaviour of human eyes. Despite the fact that eye movement analysis tools are becoming more intelligent and advanced, there is still a lack of effective tools that allow recording and on-line use of eye movement data from various eye trackers. Also needed are tools for further analysis and visualisation of gaze paths that support the majority of methods already known.

This dissertation describes the iComponent software that was developed to fill this gap, as well as the research projects where it has been used. iComponent has a highly flexible architecture, which allows easy development of dynamic plug-in modules to support eye tracking devices and experimental software. The unique data format and data transfer interfaces were developed on the basis of a careful analysis of existing hardware. iComponent supports several types of gaze data visualization with extensive customization options. Experience has confirmed the effectiveness of the tool and pointed to several issues that were addressed in further development of iComponent.

8. Erno Mäkinen

Title: Face Analysis Techniques for Human-Computer Interaction
Date of defense: 2007-12-14
Opponent: Matthew Turk
Reviewers: Jouko Lampinen, Sudeep Sarkar
Supervisor: Roope Raisamo


Vision has an important role when people communicate with each other. The face provides a vast amount of information, such as identity, gender, facial expressions, and age. On the other hand, people have traditionally interacted with computers using a mouse and a keyboard. This is about to change. New technologies are emerging that enable natural interaction with computers. One of these technologies is automatic face analysis, which enables computers to interpret human faces.

In this dissertation various face analysis techniques were studied and their applicability to human-computer interaction was considered. Some novel methods were presented and experiments were carried out for face detection and gender classification methods. The reliabilities of the methods were measured in conditions likely in real perceptual applications. The results of the experiments showed that about 80% gender classification accuracy can be achieved with a fully automatic face analysis system and web camera quality frontal face images. Applications where face analysis techniques have been used and ideas for future applications were also presented.

7. Harri Siirtola

Title: Interactive Visualization of Multidimensional Data
Date of defense: 2007-04-21
Opponent: Robert Spence
Reviewers: Kasper Hornbæk, Jonathan C. Roberts
Supervisor: Kari-Jouko Räihä


Acquiring data is much easier than gaining insight into it. Interactive techniques for information visualization can aid us in understanding the data, making the acquisition of information easier. One of the challenging issues in information visualization is the treatment of multidimensional data, i.e., when we need to consider a large number of data variables and their relationships simultaneously, often without a well-defined understanding of what to look for in the data. This work studies interaction in three conceptually different multidimensional visualization techniques: the reorderable matrix, parallel coordinates, and interactive glyphs. The three techniques were investigated by implementing a number of interactive prototypes and performing controlled experiments with them. A number of interaction enhancements were developed and evaluated using an incremental development approach and by augmenting the controlled experiments with usability evaluation techniques. Contributions include a new technique for processing a reorderable matrix visualization, improvements to the user interface of parallel coordinate browsers, and a new visualization technique based on data glyphs and small multiple visualizations.

6. Jaakko Hakulinen

Title: Software Tutoring in Speech User Interfaces
Date of defense: 2006-12-08
Opponent: Lars Bo Larsen
Reviewers: Arne Jönsson, Jacques Terken
Supervisor: Kari-Jouko Räihä


Speech has been in use as a computer user interface for some time already. For more than two decades, speech-recognition based services have been publicly available, and the number of available services has risen steadily. However, speech user interfaces have not reached the wide popularity sometimes hoped for or expected. This is due to the limitations of spoken human-computer interaction. For speech interfaces to be usable, users must receive some guidance to be able to act within the limits of the system. Existing systems either provide the necessary guidance as part of the user interface or require users to explicitly read some sort of guidance material.

In this study, software tutoring has been applied for speech user interfaces. A software tutor is a software component that teaches the use of a software application. A tutor can monitor users’ actions and adapt appropriately. The teaching happens in situ; the users learn about the application while using it. The study presents two kinds of software tutors: a speech-based tutor and graphics-based tutors. The nature of the tutors, their technical solutions, the iterative development process, and formal evaluations are reported. The results show that the tutors can support initial use better than the previously used static text-based guidance materials.

5. Johanna Höysniemi

Title: Design and Evaluation of Physically Interactive Games
Date of defense: 2006-08-19
Opponent: Martti Mäntylä
Reviewers: Frans Mäyrä, Ana Paiva
Supervisor: Kari-Jouko Räihä


Computer games are traditionally controlled with hand-operated input devices, which might cause negative health effects. This thesis focuses on studying one particular computer and video game genre, referred to as physically interactive games. These games are controlled with body movements, and they aim at involving the player in a physical effort or developing motor abilities during the course of game play. However, this new way of controlling games poses challenges for game designers, not only from the technological point of view but, more importantly, also from the user perspective. The core question is how physically interactive games should be designed so that they are entertaining, usable, and physically suitable for players. The thesis consists of three case studies that illustrate the importance of the quality of physical game control and discuss the methodological issues related to game design and evaluation. Furthermore, the studies show that the embodied gaming context supports social interactions between players, enables players’ creative physical expression, and can improve cardiovascular fitness, muscle strength, coordination of movements, and reaction times.

4. Aulikki Hyrskykari

Title: Eyes in Attentive Interfaces: Experiences from Creating iDict, a Gaze-Aware Reading Aid
Date of defense: 2006-05-19
Opponent: Howell Istance
Reviewers: Jukka Hyönä, Markku Tukiainen
Supervisor: Kari-Jouko Räihä


Eyes can be used as a source of information when moving human-computer interaction toward more natural and effective forms. In particular, the focus of the user’s attention could be valuable for many computing applications. In this dissertation we report the experiences obtained during the design, implementation, and evaluation of a gaze-aware attentive application, iDict. iDict aims to help readers of electronic documents by tracking the reader’s eye movements and providing assistance automatically when the reader seems to be in need of help. The three main issues addressed in this work are the problems caused by the limited accuracy inherent in eye tracking, the problems of interpreting the user’s eye movements, and the design principles of gaze-aware applications.

3. Anne Aula

Title: Studying User Strategies and Characteristics for Developing Web Search Interfaces
Date of defense: 2005-12-09
Opponent: Morten Hertzum
Reviewers: Ann Blandford, Alan Dix
Supervisor: Kari-Jouko Räihä


World Wide Web search engines are essential tools in today’s world, where more and more information is to be found only on the Web. This thesis focuses on the strategies employed by different user groups of search engines during three phases of the information search process: query formulation, evaluation of search results, and information re-access. The studies show that queries of more experienced users tend to be longer, more precise, and, if not successful, iterated frequently. More experienced users are also more efficient in evaluating the search results. In information re-access, the experienced users are innovative and not completely dependent on the tools specifically designed for information re-access. Surprisingly, even experienced users have misconceptions concerning their primary search engine. The elderly face several challenges with Web search engines, such as not understanding the required language and the functionality provided by the interfaces, as well as problems with text input. The thesis presents several design suggestions to place the benefits of successful strategies at the disposal of all users. For example, the thesis presents a novel style for presenting textual result summaries, a natural-language explanation tool for queries, a search interface for elderly users, and several ideas for facilitating information re-access.

2. Mika Käki

Title: Enhancing Web Search Result Access with Automatic Categorization
Date of defense: 2005-12-02
Opponent: Polle T. Zellweger
Reviewers: Steve Jones, Samuel Kaski
Supervisor: Kari-Jouko Räihä


The World Wide Web is an enormous source of information, but finding the relevant bits can be challenging, as current search engines typically present search results as a long list. This work studies ways to enhance users’ result access with automatically formed categories and an associated filtering user interface. The concept is implemented in a search user interface called Findex. The usefulness of the approach was evaluated in controlled experiments, in a longitudinal study, and with a theoretical test. The results show that finding relevant results is about 30-40% faster with the proposed user interface compared to the de facto standard, the ranked results user interface, and that user attitudes favor the new interface. The controlled experiments were complemented with a longitudinal study in a real use situation. Its results indicate that the categorization user interface becomes a part of the users’ search habits and is beneficial: in real settings, the categories are needed in about every fourth search. The usage patterns indicate that the categories help when result ranking does not bring relevant results to the top of the result list.

1. Timo Partala

Title: Affective Information in Human-Computer Interaction
Date of defense: 2005-10-29
Opponent: Gary Bente
Reviewers: Heikki Mannila, Göte Nyman
Supervisor: Veikko Surakka


Human-computer interaction is a growing interdisciplinary research area, which aims at developing user-friendly methods for computer use. It has been traditionally studied mostly from the cognitive perspective. In his dissertation, Timo Partala studied the possibility of utilizing affective information in human-computer interaction. Specifically, he studied the possibilities of using pupil size variation and facial expressions in computer input, and the effects of affective synthetic speech messages and different agent proximities in computer output. The results suggested that pupil size variation and facial expressions can give the computer information about the user’s affective responses, and they could be potential input signals for human-computer interaction in the future. It was also found that synthetic speech messages used in computer output influenced the users’ affective physiological responses, behavior, and experienced valence. In addition, it was found that the different simulated proximities of a conversational agent affected the users’ experienced dominance.

Dissertations in Other Series

Pasi Välkkynen

Title: Physical Selection in Ubiquitous Computing
Date of defense: 2007-11-30
Opponent: Morten Fjeld
Reviewers: Juha Lehikoinen, Jukka Riekki
Supervisor: Roope Raisamo


In ubiquitous computing, the computing devices are embedded into the physical environment so that the users can interact with the devices at the same time as they interact with the physical environment. The various devices are connected to each other, and have various sizes and input and output capabilities depending on their purpose. These features of ubiquitous computing create a need for interaction methods that are radically different from the desktop computer interactions.

In this dissertation, physical selection is analysed as a user interaction task and from the implementation viewpoint. Different selection methods – touching, pointing and scanning – are presented. Touching and pointing have been studied by implementing a prototype and conducting user experiments with it. The contributions of this dissertation include an analysis of physical selection in the ubiquitous computing context, suggestions for visualising physical hyperlinks in both the physical environment and the mobile terminal, and user requirements for physical selection as a part of an ambient intelligence architecture.

Tatiana G. Evreinova

Title: Alternative Visualization of Textual Information for People with Sensory Impairment
Date of defense: 2005-11-18
Opponent: Klaus Miesenberger
Reviewers: Paul Blenkhorn, Arthur I. Karshmer
Supervisor: Roope Raisamo


Lacking visual feedback or access to verbal communication, people with a sensory impairment use alternative means of information imaging that rely on their residual senses. For this reason, a wide range of assistive hardware and software has come onto the market to provide an efficient way of alternative imaging of, for instance, textual information. Nevertheless, nearly one third of these techniques have been withdrawn from the market due to lack of use.

The foremost purpose of this dissertation is to consider the latest assistive technologies in order to suggest further improvements. The summary therefore provides background for the included research papers and an analytical survey of assistive methods and techniques. The problematic aspects affecting the use of computer aids by people with ocular pathologies and hearing disorders are the particular subject of this study. The considerations presented are intended for the developers of advanced assistive user interfaces. A key question is whether people with a sensory impairment can take advantage of the devices developed and, if so, to what extent. How flexible and how powerful assistive technologies will become depends on the further development of exploratory strategies directed toward improving these assistive aids.

Poika Isokoski

Title: Manual Text Input: Experiments, Models, and Systems
Date of defense: 2004-04-23
Opponent: Shumin Zhai
Reviewers: Heikki Mannila, Ari Visa
Supervisor: Roope Raisamo


Mobile and pervasive computing have recently been popular research areas, and these issues therefore play a major part in the thesis at hand. Most of the text entry methods discussed are for mobile computers. One of the three main contributions of the work is an architecture for a middleware system intended to support personalized text entry in an environment permeated with mobile and non-mobile computers.

The two other main contributions of this thesis are experimental work on text entry methods and models of user performance in text entry tasks. The text entry methods tested in experiments were the minimal device independent text entry method (MDITIM), two methods for entering numbers using a touchpad, Quikwriting in a multi-device environment, and a menu-augmented soft keyboard. The explanatory power of a simple model for unistroke writing time was measured. The model accounted for about 70% of the variation when applied carefully, and about 60% on first exposure. This sets the level of accuracy that more complex models must achieve in order to be useful. Also, a model that combines two previously known models of text entry rate development was constructed. This model improves the accuracy of text entry rate predictions between the measured early learning curve and the theoretical upper limit.

Markku Turunen

Title: Jaspis – A Spoken Dialogue Architecture and its Applications
Date of defense: 2004-03-13
Opponent: Michael McTear
Reviewers: Alexander I. Rudnicky, Bernhard Suhm
Supervisor: Kari-Jouko Räihä


Speech can be an efficient and natural way for communication between humans and computers. Many practical applications have been constructed, but the full potential of speech applications has not been utilized. In addition to technological shortcomings, the development of speech applications lacks suitable techniques, methodology and development tools. For example, mobile and multilingual communication needs flexible and adaptive interaction methods which take into account the needs of different users and different environments.

This dissertation addresses the following question: what kind of a system architecture do advanced speech applications require? The following challenges are specifically addressed: How could the system architecture support advanced interaction techniques? How could application development be supported by suitable models, methodology and tools?

Juha Lehikoinen

Title: Interacting with Wearable Computers: Techniques and Their Application in Wayfinding Using Digital Maps
Date of defense: 2002-09-27
Opponent: Bruce H. Thomas
Reviewers: Petri Pulli, Tapio Takala, Jukka Vanhala
Supervisors: Kari-Jouko Räihä, Hannu Nieminen


Wearable computers are a special case of mobile computers. They are either embedded in clothing, or they may even be the clothing. They are very personal in nature, being always with the user, always on and always ready. The aim in developing wearable computers is to provide the user with instant and easy-to-use access to digital information sources anytime, anywhere.

This dissertation addresses the issues that arise when personal navigation assistants for wearable computers are developed. The research comprises eight studies in the areas of human-map interaction, wearable computing, and human-computer interaction. Two research methods have been applied: in the constructive research part, a navigation application, including several interaction techniques suitable for wearable use, was developed; in the empirical research part, the methods and techniques developed were evaluated to assess their usability. As a result, in addition to the navigation application itself, a set of user interaction techniques and interface components supporting the various tasks of wayfinding has been proposed. These include a finger-based interaction technique, an efficient list searching technique, and a novel pocket-based user interface metaphor. The results also include guidelines for designing map behavior during navigation.

Roope Raisamo

Title: Multimodal Human-Computer Interaction: A Constructive and Empirical Study
Date of defense: 1999-12-07
Opponent: I. Scott MacKenzie
Reviewers: Martti Mäntylä, Mikko Sams
Supervisor: Kari-Jouko Räihä


Multimodal interaction is a way to make user interfaces natural and efficient through the parallel and synergistic use of two or more input or output modalities. Two-handed interaction is a special case of multimodal interaction that makes use of both hands in a combined and coordinated manner. This dissertation gives a comprehensive survey of issues related to multimodal and two-handed interaction, reviewing earlier work in human-computer interaction and the related psychology for both fields.

The constructive part of this dissertation consists of designing and building a group of multimodal interaction techniques that were implemented in two research prototypes. The first prototype is an object-oriented drawing program that implements new tools that are controlled with two-handed input. The second prototype is a multimodal information kiosk that responds to both touch and speech input, and makes use of touch pressure sensing.

The need for extensive interdisciplinary research is pointed out with a group of research questions that need to be answered to better understand multimodal human-computer interaction. The current knowledge only applies to a few special cases, and there is no unified modality theory of multimodal interaction that covers both input and output modalities.