
Dissertations / Theses on the topic 'Human-computer interaction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 dissertations / theses for your research on the topic 'Human-computer interaction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

Jackson, Samuel. "Sustainability in Computer Science, Human-Computer Interaction, and Interaction Design." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1329.

Fleury, Rosanne. "Gender and human-computer interaction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ50310.pdf.

Li, QianQian. "Human-Computer Interaction: Security Aspects." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427166.

Sayago Barrantes, Sergio. "Human-computer interaction with older people." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7560.

Laberge, Dominic. "Visual tracking for human-computer interaction." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26504.

Abowd, Gregory Dominic. "Formal aspects of human-computer interaction." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.232812.

Ramsay, Judith Easton. "Measuring and facilitating human-computer interaction." Thesis, University of Glasgow, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.281957.

Roast, Christopher Richard. "Executing models in human computer interaction." Thesis, University of York, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335778.

Westerman, Stephen J. "Individual differences in human-computer interaction." Thesis, Aston University, 1993. http://publications.aston.ac.uk/10853/.

Frisk, Henrik. "Improvisation, computers and interaction: rethinking human-computer interaction through music." Malmö: Malmö Academy of Music, Lund University, 2008. http://www.lu.se/o.o.i.s?id=12588&postid=1239899.

Drewes, Heiko. "Eye Gaze Tracking for Human Computer Interaction." Doctoral thesis, LMU München, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-115914.

Bao, Leiming, and Chunyan Sun. "Human-Computer Interaction in a Smart House." Thesis, Högskolan Kristianstad, Sektionen för hälsa och samhälle, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-9475.

Bourges-Waldegg, Paula. "Handling cultural factors in human-computer interaction." Thesis, University of Derby, 1998. http://hdl.handle.net/10545/310928.

Bär, Nina. "Human-Computer Interaction And Online Users’ Trust." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149685.

Tzanavari, Aimilia. "User modeling for intelligent human-computer interaction." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364961.

Sparrell, Carlton James. "Coverbal iconic gesture in human-computer interaction." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/62327.

Oliveira, Victor Adriel de Jesus. "Designing tactile vocabularies for human-computer interaction." Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/99335.

Farbiak, Peter. "Správa projektů z oblasti Human-Computer Interaction." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236579.

Leiva, Torres Luis Alberto. "Diverse Contributions to Implicit Human-Computer Interaction." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17803.

Nápravníková, Hana. "Human-Computer Interaction - spolupráce člověka a počítače." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359070.

Watkinson, Neil Stephen. "The evaluation of dynamic human-computer interaction." Thesis, Loughborough University, 1991. https://dspace.lboro.ac.uk/2134/7031.

Wache, Julia. "Implicit Human-computer Interaction: Two complementary Approaches." Doctoral thesis, Università degli studi di Trento, 2016. https://hdl.handle.net/11572/368869.

Mohamedally, Dean. "Constructionism through Mobile Interactive Knowledge Elicitation (MIKE) in human-computer interaction." Thesis, City University London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433674.

Truyenque, Michel Alain Quintana. "A Computer Vision Application for Hand-Gestures Human Computer Interaction." Pontifícia Universidade Católica do Rio de Janeiro, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6585@1.

Drugge, Mikael. "Wearable computer interaction issues in mediated human to human communication." Licentiate thesis, Luleå: Luleå University of Technology, 2004. http://epubl.luth.se/1402-1757/2004/42.

Erdem, Ibrahim Aykut. "Vision-based Human-computer Interaction Using Laser Pointer." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1128776/index.pdf.

Pallotta, Vincenzo. "Cognitive language engineering towards robust human-computer interaction." Lausanne, 2002. http://library.epfl.ch/theses/?display=detail&nr=2630.

Van den Bergh, Michael. "Visual body pose analysis for human-computer interaction." Konstanz: Hartung-Gorre, 2010. http://d-nb.info/1000839370/04.

Gerken, Jens. "Longitudinal Research in Human-Computer Interaction." Konstanz: Bibliothek der Universität Konstanz, 2011. http://d-nb.info/1017933847/34.

Azad, Minoo. "A proto-pattern language for human-computer interaction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0025/MQ52376.pdf.

Costanza, Enrico. "Subtle, intimate interfaces for mobile human computer interaction." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37387.

Britton, Brent Cabot James. "Enhancing computer-human interaction with animated facial expressions." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/64856.

Radüntz, Thea. "Biophysiological Mental-State Monitoring during Human-Computer Interaction." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/23026.

Trendafilov, Dari. "An information-theoretic account of human-computer interaction." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8614/.

Alshaali, Saif. "Human-computer interaction : lessons from theory and practice." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/210545/.

Martín-Albo, Simón Daniel. "Contributions to Pen & Touch Human-Computer Interaction." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/68482.

Obrist, Marianna. "DIY HCI: do-it-yourself human computer interaction." Saarbrücken: VDM Verlag Dr. Müller, 2007. http://d-nb.info/991461355/04.

Garrido, Piedad, Jesús Tramullas, Manuel Coll, Francisco Martínez, and Inmaculada Plaza. "XTM-DITA structure at Human-Computer Interaction Service." Universidad de Castilla-La Mancha, 2008. http://hdl.handle.net/10150/106152.

Wheeldon, Alan. "Improving human computer interaction in intelligent tutoring systems." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16587/1/Alan_Wheeldon_Thesis.pdf.

King, William Joseph. "Toward the human-computer dyad." Thesis, University of Washington, 2002. http://hdl.handle.net/1773/10325.

Herrera, Acuna Raul. "Advanced computer vision-based human computer interaction for entertainment and software development." Thesis, Kingston University, 2014. http://eprints.kingston.ac.uk/29884/.

Hamette, Patrick de la. "Embedded stereo vision systems for mobile human-computer interaction." Zürich: ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=18075.

Genc, Serkan. "Vision-based Hand Interface Systems In Human Computer Interaction." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611700/index.pdf.

Cermak-Sassenrath, Daniel. "The logic of play in everyday human-computer interaction." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4272/.

Schels, Martin. "Multiple classifier systems in human-computer interaction." Ulm: Universität Ulm, Fakultät für Ingenieurwissenschaften und Informatik, 2015. http://d-nb.info/1076828493/34.

Limerick, Hannah. "Investigating the sense of agency in human-computer interaction." Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715757.

Surie, Dipak. "An agent-centric approach to implicit human-computer interaction." Thesis, Umeå universitet, Institutionen för datavetenskap, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-52476.

Raisamo, Roope. "Multimodal human-computer interaction: a constructive and empirical study." Tampere, Finland: University of Tampere, 1999. http://acta.uta.fi/pdf/951-44-4702-6.pdf.

Bachelor & master theses in the research field of human-computer interaction

We offer thesis topics at bachelor and master level in the study programmes media informatics, computer science, software engineering, and cognitive systems.

Below you can find an up-to-date list of topic suggestions. Of course, we are also open to discussing any proposals from students in our research field. Some descriptions are only visible inside the campus network.

If you are interested, please get in touch directly with a research associate of the research group.

For further questions about how to conceptualise and write a thesis, please take a look at the FAQ section.

Overview of currently available thesis topics (bachelor level)

Overview of currently available thesis topics (master level), with details about individual topics.

Adaptive Autonomous Vehicle Driving Style

Supervisor: Annika Stampf

Level: Bachelor / Master

Description:

The aim of this thesis is to investigate how trust and acceptance are influenced when the driving style of an autonomous vehicle is adapted to the current state of the user. A prototype should be implemented in Unity, and a user study should be conducted.

Anthropomorphism in Highly Automated Vehicles

The aim of this work is to investigate which anthropomorphic features can be used in in-vehicle interfaces (such as physiological signals, e.g. a heartbeat, or a nudge to the driver from the vehicle). The identified features should be implemented prototypically in a VR environment with Unity. Subsequently, a user study should be conducted to evaluate whether these features have a positive impact on passengers' trust in HAVs.

Balancing Access and Acceptance: Exploring the Intersection of Personalization and Social Norms in Automotive Interface Design for the Visually Impaired

Supervisor: Max Rädler

Level:  Master

This thesis explores the use of Bayesian optimization to design accessible interfaces for autonomous vehicles, focusing on the visually impaired. As autonomous driving evolves, offering new levels of independence, the challenge arises in designing interfaces that do not inadvertently disclose the user's disability in shared settings. This research aims to incorporate Bayesian optimization into vehicle interior design, creating personalized environments while considering social acceptance as a crucial factor. The feasibility of these designs will be assessed through a user study, exploring the balance between accessibility and social discretion.
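As an illustration of how such an optimization loop might look, here is a minimal sketch — not the thesis's actual method. The interface design is reduced to a single comfort parameter in [0, 1], the user's rating function is simulated, and all names and constants are invented.

```python
import numpy as np

def rbf(a, b, length_scale=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, X_star, noise=1e-4):
    # Standard Gaussian-process regression with a zero prior mean.
    K = rbf(X, X) + noise * np.eye(len(X))
    K_star = rbf(X, X_star)
    K_inv = np.linalg.inv(K)
    mu = K_star.T @ K_inv @ y
    var = 1.0 - np.sum(K_star * (K_inv @ K_star), axis=0)
    return mu, np.maximum(var, 1e-12)

def simulated_rating(x):
    # Assumed stand-in for real user feedback: the user likes settings near 0.6.
    return -(x - 0.6) ** 2

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 101)
X = list(rng.choice(candidates, size=3))      # initial random probes
y = [simulated_rating(x) for x in X]

for _ in range(10):
    y_arr = np.array(y)
    mu, var = gp_posterior(np.array(X), y_arr - y_arr.mean(), candidates)
    ucb = (mu + y_arr.mean()) + 2.0 * np.sqrt(var)  # explore/exploit trade-off
    x_next = candidates[int(np.argmax(ucb))]        # next design to show the user
    X.append(x_next)
    y.append(simulated_rating(x_next))

best = X[int(np.argmax(y))]   # best-rated design found so far
```

In a real study, `simulated_rating` would be replaced by actual participant ratings, and an established library such as scikit-optimize would typically handle the surrogate model and acquisition function.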

Breaking the Rebound: Exploring Strategies for Sustainable Consumption

Supervisor:   Albin Zeqiri

Level: Bachelor & Master

The efficiency paradox (also known as the "rebound effect") is the observation that increases in resource efficiency can lead to higher consumption, offsetting the environmental benefits. For example, energy-efficient light bulbs decrease energy consumption per bulb, but leaving them on for longer periods can increase overall energy consumption. Without addressing this issue, sustainable transitions will remain challenging, and environmental problems will persist or even worsen. The goal of theses on this topic is to design and evaluate countermeasures that mitigate such effects. The scope of the thesis is adapted to bachelor or master level.

Calm Technology and Digital Detox - How to nudge people towards a healthier technology usage

Supervisor:   Luca-Maxim Meinhardt

Description

The first thing we do when we get up, and the last thing we do before we go to sleep, is look at our phone to check for new notifications and search for new content on social media. Unfortunately, our dependence on technology is so deeply ingrained in our habits that we do not even realize how often we use our phones. On the one hand, this addiction harms us by making us less focused due to distraction by notifications; social media is also said to shorten our attention span. On the other hand, technology use harms our face-to-face social interactions: some people, for example, focus on their phones instead of engaging in a conversation, decreasing its quality. Recent trends in wearable technology, such as smartwatches and AR glasses, might even exacerbate this behavior. Therefore, new methods and interactions must be found to nudge people towards healthier technology usage and change their habits unconsciously.

Constant Motion Stimulus for Peripheral Vision to Create Unconscious Notifications

Description: Human peripheral vision provides us with information without shifting our focus from the primary task. This is particularly interesting since peripheral vision demands less cognitive load: information positioned at the edge of the field of view can be received unconsciously, without drawing attention. A prominent example is car driving, where even a short moment of inattention can end in a serious accident. Even the daily struggle with loud, attention-drawing smartphone notifications that distract from work might be solved calmly with peripheral displays. Augmented Reality glasses are key to enabling this concept, since they provide wearable displays attached to our view.

Design Through My Eyes: Supporting Designers with a Vision Impairment Simulator Using Eye-Tracking

This thesis aims to create a desktop overlay simulation tool for visual impairment, designed to aid developers and designers in understanding the unique needs and experiences of visually impaired individuals. The project will involve developing a prototype simulation that can be applied across various design tasks, followed by a usability study to assess its effectiveness and impact on creating more accessible environments.

Description

Exploring Integration of Personal Context into Eco-Visualizations

Environmental labels play a significant role in shaping our behavior towards the environment. Understanding the meaning of eco-visualizations can help consumers make informed and sustainable purchase decisions. However, current eco-visualizations are often complex and difficult to comprehend, leading to a lack of action and confusion among consumers. The Awareness-Behavior Gap describes this issue. Personalized eco-visualizations tailored to individual behavior patterns and lifestyles could be a solution to this problem. This project aims to develop concepts, prototypes, and solutions to integrate personalized context into eco-visualizations and evaluate their effectiveness in user studies.

Extensive Viewing for Language Learning

Supervisor: Tobias Wagner

Description: The goal of this project is to develop a technology-enhanced learning tool that supports language learners in learning new vocabulary while watching TV shows and movies. You will evaluate the system in terms of usability and explore its effects on learners' learning success and motivation.

Get Up, Stand Up - Put Down Your Phone - How to turn smartphone sessions into a sporty activity.

Supervisor: Luca Meinhardt and Jana Funke

Description: Based on literature research and related work, this thesis will aim for a solution to boost our self-control in phone usage. Our goal is to help people establish healthier smartphone usage by nudging them to do sports instead of browsing the phone.

Hello Me! – How Similarity and Mimicry of In-Vehicle Assistants Affect Trust in Highly Automated Vehicles

The aim of this work is to design and prototypically implement an in-vehicle avatar that can adapt to the appearance of a passenger, for example by using DeepFake techniques. The avatar should also be able to mimic the passenger. Subsequently, a user study should be conducted to evaluate whether similarity and mimicry have a positive impact on passengers' trust in highly automated vehicles.

Immersive VR Guardians - Improving VR Gameplay through user-centered safety system design

Supervisor: Annalisa Degenhard

Description: Have you ever played a VR game? Whenever you reach the boundary of your real-world play area (which quickly happens in average households), a grid will appear in front of you and will probably ruin your illusion of being in that fantastic VR world. The goal of this project is to enhance built-in guardian systems for VR. To achieve this, you will focus on certain aspects of such systems and conduct a structured analysis to find out how these aspects could be improved to increase user experience. Possible aspects to focus on are the sensory representation of collision warnings, collision prevention systems, or innovative customization mechanisms to match such systems with various VR worlds. Your goal will be to optimize guardian systems in terms of presence, usability, and safety in order to provide better VR experiences.

Implicit Interaction Concepts for Highly Automated Vehicles

The aim of this work is to design new implicit in-vehicle interaction concepts for highly automated vehicles (HAVs). These concepts should be implemented with Unity (in a driving simulator or VR environment). Subsequently, a user study should be conducted to evaluate whether those concepts have a positive influence on passengers’ trust, acceptance, and user experience in HAVs.

Is this real? Understanding the perception of virtual worlds and how to manipulate it

Supervisor: Annalisa Degenhard

The aim of this thesis is to explore the phenomenon of presence in virtual reality. A literature analysis should be conducted, and a hypothesis about the behavior of presence should be tested in a user study using a suitably designed virtual environment. The goal is to gain insights into how users perceive virtual environments and to draw conclusions on how they should be designed to improve the experience of virtual reality.

Lost in translation – Enhancing the explainability of online translators

Description: Imagine that you need to translate a document in a foreign language you are not familiar with. You will probably look for an online translator such as Google Translate or DeepL and simply copy the translator's output. But how can you trust this translation? How can you be sure the translation says precisely what you are trying to say? Prominent examples are sayings, which might be meaningless if translated word by word. Since you are not familiar with the language, you cannot verify whether the online translator grasped the hidden meaning behind the saying. Current online translators lack this explainability: they provide multiple alternatives for the translated phrase, but there is no explanation of why these alternatives are shown or how well they suit the input text.

Obstacle-Avoidance and Path Visualisation in Urban Air Mobility

Description: This thesis aims to test different kinds of obstacle-avoidance visualization during bad weather conditions in Urban Air Mobility. Furthermore, path visualizations of the user's own vehicle and of other vehicles should be investigated in order to support the passenger's mental model of urban air traffic. A prototype should be designed and implemented in VR, and the concepts mentioned above investigated in a user study.

Perceived Safety in Piloting for Urban Air Mobility

Description: Urban Air Mobility (UAM) has gained increasing interest in future mobility. Unlike conventional transportation such as buses and trains, flying taxis will not be limited to predefined routes, thus avoiding transportation delays. The vision is a network of flying vehicles operating in metropolitan areas to connect short and medium distances. Experts predict that the first UAM vehicles will carry passengers in the mid-2020s; in fact, the first crewed flight of the German start-up Volocopter successfully took place in Singapore in 2019. UAM will shift air mobility from mass transportation to a comparatively private ride with 2-4 passengers, which creates interesting new aspects for HCI research, since the focus is on the passengers.

Sustainability-in-Design: Reframing the Human-Technology Relationship

Goals of theses in this area cover the development of new, interactive concepts that introduce living materials or even microorganisms into hardware and map its respective needs to various device functionalities. Based on bachelor or master level, the scope is adapted.

Talk to Me: Voice User Interfaces in Highly Automated Vehicles for People Who Are Visually Impaired

This thesis explores the integration of voice assistance and screen reader technologies in autonomous vehicle interfaces to enhance usability for visually impaired users. With advancements in autonomous driving providing increased independence for the blind and visually impaired, the focus here is on determining how these assistive technologies can be adapted for automotive use. This involves developing various prototypes of speech interfaces and assistance systems. The effectiveness and user-friendliness of these prototypes will be assessed through workshops or studies involving visually impaired participants, aiming to ensure that all vehicle controls are accessible and comprehensible.

Tell me More: Empowering the Visually Impaired with Situation Awareness

This thesis focuses on utilizing auditory technologies such as 3D Sound and Earcons in autonomous vehicles to aid visually impaired individuals. With significant advancements in autonomous driving, these technologies can enhance situational awareness by conveying essential road information. This project will implement these auditory methods in Unity's virtual reality environment and evaluate their effectiveness through a workshop with visually impaired participants. The study aims to develop practical design guidelines based on the findings and existing literature.

Driving Change: Approaches to Support Green Mobility Habits

Level: Bachelor

Goals of theses in this area cover the development of new, interactive concepts that explicitly or implicitly induce behavior change regarding mobility choices. Theses include reviewing relevant literature, implementing, and evaluating the developed concepts. Based on bachelor/master level the scope is adapted.

Seeing through the Haze: Preventing Perceptual Manipulations in Mixed Reality

The aim of this thesis is to develop strategies that mitigate the effect of dark patterns in MR. This includes reviewing and categorizing relevant literature, and implementing and evaluating the developed concepts. The scope is adapted to bachelor or master level.


If you are interested in finishing your Bachelor or Master studies with a thesis in Human-Computer Interaction, you are welcome to contact us. Our research projects are a rich source of ideas. In general, writing a thesis with a focus on HCI requires you to take a user-centred perspective and to apply adequate methods, such as involving users during design and evaluation. At the same time, you will have the chance to work with the latest technology, such as Augmented Reality glasses, gaze-based interaction, multi-device interaction, voice assistants, or other novel interaction techniques.

Our research topics are concerned with Collaborative Work Spaces and Human-Robot Interaction. Take a look at our research project pages - there is always room for a bachelor or master thesis. Get inspired here.

You have your own idea? Please let us know! We are always looking for new topics to expand into.

Human Computer Interaction: Recently Published Documents

Project-based learning in human–computer interaction: a service‐dominant logic approach

Purpose: This study aims to propose a service-dominant logic (S-DL)-informed framework for teaching innovation in the context of human–computer interaction (HCI) education involving large industrial projects.

Design/methodology/approach: This study combines S-DL from the field of marketing with experiential and constructivist learning to enable value co-creation as the primary method of connecting diverse actors within the service ecology. The approach aligns with the current conceptualization of central university activities as a triad of research, education and innovation.

Findings: The teaching framework based on S-DL enabled ongoing improvements to the course (a project-based, bachelor's-level HCI course in the computer science department), easier management of stakeholders, and learning experiences through students' participation in real-life projects. The framework also helped to provide an understanding of how value co-creation works and brought a new dimension to HCI education.

Practical implications: The proposed framework and the authors' experience described herein, along with examples of projects, can be helpful to educators designing and improving project-based HCI courses. It can also help partner companies and organizations realize the potential benefits of collaboration with universities. Decision-makers in industry and academia can benefit from these findings when discussing approaches to addressing sustainability issues.

Originality/value: While HCI has successfully contributed to innovation, HCI education has made only moderate efforts to include innovation as part of the curriculum. The proposed framework considers multiple service-ecosystem actors and covers a broader set of co-created values for the involved partners and society than just learning benefits.

Recommender Systems: Past, Present, Future

The origins of modern recommender systems date back to the early 1990s, when they were mainly applied experimentally to personal email and information filtering. Today, 30 years later, personalized recommendations are ubiquitous and research in this highly successful application area of AI is flourishing more than ever. Much of the research in the last decades was fueled by advances in machine learning technology. However, building a successful recommender system requires more than a clever general-purpose algorithm. It requires an in-depth understanding of the specifics of the application environment and the expected effects of the system on its users. Ultimately, making recommendations is a human-computer interaction problem, where a computerized system supports users in information search or decision-making contexts. This special issue contains a selection of papers reflecting this multi-faceted nature of the problem and puts open research challenges in recommender systems to the forefront. It features articles on the latest learning technology, reflects on the human-computer interaction aspects, reports on the use of recommender systems in practice, and finally critically discusses our research methodology.
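To make the recommendation problem concrete, here is a minimal user-based collaborative-filtering sketch. The tiny ratings matrix is invented for illustration; production recommenders use far richer learned models.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items, 0 means "unrated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    # Cosine similarity restricted to items both users have rated.
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

def predict(user, item):
    # Similarity-weighted average of other users' ratings for the item.
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        s = cosine(ratings[user], ratings[other])
        num += s * ratings[other, item]
        den += abs(s)
    return num / den if den else 0.0

# User 0 has not rated item 2; the most similar user (user 1) rated it low,
# so the prediction is pulled towards the low end of the scale.
p = predict(0, 2)
```

The design choice here, weighting neighbours by similarity, is exactly where the "understanding of the application environment" mentioned above enters: the similarity measure, neighbourhood size, and handling of missing ratings all depend on the domain.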

Research on the Construction of Human-Computer Interaction System Based on a Machine Learning Algorithm

In this paper, we use machine learning algorithms to conduct in-depth research on the construction of human-computer interaction systems and propose a simple, effective method for extracting salient features based on contextual information. The method retains both the dynamic and static information of gestures, which yields a richer and more robust feature representation. Secondly, this paper proposes a dynamic-programming algorithm based on feature matching, which uses the consistency and accuracy of feature matching to measure the similarity of two frames and then finds the optimal matching distance between two gesture sequences. The algorithm ensures the continuity and accuracy of the gesture description and makes full use of the spatiotemporal location information of the features. The paper first analyzes the features and limitations of common motion-target detection methods in gesture detection and of common machine learning tracking methods in gesture tracking; it then improves the kernel correlation filter method by designing a confidence model and introducing a scale filter, and finally conducts comparison experiments on a self-built gesture dataset to verify the effectiveness of the improved method. During training and validation on the corpus, the complementary feature extraction methods are ablated, and the results are compared with three baseline methods. Gaussian mixture models (GMMs), although widely used in classification tasks, are not suitable when users want to model temporal structure; by using a kernel function, a support vector machine can instead transform the original input into a high-dimensional feature space. After experiments, the speech emotion recognition method proposed in this paper outperforms the baseline methods, proving the effectiveness of complementary feature extraction and the superiority of the deep learning model. Speech is used as input to the system, emotion recognition is performed on it, and the recognized emotion is applied to a human-computer dialogue system in combination with online speech recognition, demonstrating the application value of speech emotion recognition in human-computer dialogue systems.
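The paper's dynamic-programming matching of gesture sequences is described only at a high level; a well-known instance of the same idea is dynamic time warping, sketched here with made-up 2-D gesture trajectories. This is an illustration of the general technique, not the authors' algorithm.

```python
def dtw(a, b, dist=lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])):
    """Classic O(n*m) dynamic-programming alignment distance of two sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of the three admissible partial alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

swipe      = [(0, 0), (1, 0), (2, 0), (3, 0)]
slow_swipe = [(0, 0), (1, 0), (1, 0), (2, 0), (3, 0)]  # same path, slower tempo
circle     = [(0, 0), (1, 1), (0, 2), (-1, 1)]

d_same = dtw(swipe, slow_swipe)  # tempo difference is absorbed by the warping
d_diff = dtw(swipe, circle)      # different gesture, larger distance
```

Because the warping path may repeat frames, two executions of the same gesture at different speeds align with zero (or near-zero) cost, which is exactly the continuity property the abstract attributes to its matching scheme.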

Human–Computer Interaction-Oriented African Literature and African Philosophy Appreciation

African literature has long played a major role in changing and shaping perceptions of African people and their way of life. Unlike Western cultures, which are associated with advanced forms of writing, African literature is largely oral in nature: it is recited and even performed. Although Africa has an old tribal culture, African philosophy remains a new and unfamiliar idea to many. The problem of the “universality” of African philosophy really asks whether Africa has a philosophy in the Western sense; clearly, however, the philosophy bred by Africa’s native culture must be acknowledged. A human–computer interaction-oriented (HCI-oriented) method is therefore proposed for appreciating African literature and African philosophy. First, a physical tablet aid is designed, and a depth camera tracks the user’s hand and the tablet aid and maps them into the virtual scene. Then, a tactile redirection method is proposed to meet the user’s requirement for tactile consistency in a head-mounted-display virtual reality environment. Finally, electroencephalogram (EEG) emotion recognition based on a convolutional neural network with multiscale convolution kernels is proposed to appreciate the reflection of African philosophy in African literature. The experimental results show that the proposed method offers strong immersion and a good interactive experience in navigation, selection, and manipulation. The proposed HCI method is not only easy to use but also improves interaction efficiency and accuracy during appreciation. In addition, the EEG emotion recognition simulation shows that the classification accuracy with 33 channels is 90.63%, close to the accuracy obtained with all channels, and that the proposed algorithm outperforms three baselines in classification accuracy.
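The multiscale-kernel idea behind the EEG network can be illustrated with a toy one-dimensional example: the same channel is convolved with kernels of several temporal widths and each response is pooled. Everything here (the averaging kernels, the pooling choice, the scales 3/5/7) is a hypothetical stand-in for the learned filters of the paper's CNN.

```python
def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (really cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Apply kernels of several temporal scales to one EEG channel
    and pool each response, mimicking multiscale conv branches."""
    feats = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k        # toy averaging kernel; learned in a real CNN
        response = conv1d_valid(signal, kernel)
        feats.append(max(response))   # global max pooling per scale
    return feats

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0]
print(multiscale_features(signal))
```

Concatenating the per-scale features lets the classifier see both short, transient EEG activity and slower rhythms in one representation, which is the motivation for multiscale kernels.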

Wearable devices in diving: A systematic review (Preprint)

BACKGROUND Wearable devices have grown enormously in importance in recent years. While wearables in general have been well studied, they have received little attention in the underwater environment. OBJECTIVE The aim of this systematic review was to systematically search the scientific literature for wearables designed for underwater operation, to map their capabilities and features comprehensively, and to discuss the general direction of development. METHODS In September 2021, we conducted an extensive keyword search of the largest literature databases. Only peer-reviewed articles in English that described a wearable or device usable in diving were included. RESULTS In the 36 relevant studies found, four device categories could be identified: safety devices, underwater communication devices, head-up displays, and underwater human-computer interaction devices. CONCLUSIONS The possibilities and challenges of each technology were considered and evaluated separately. Underwater communication has the most significant influence on future developments. Another topic that has not yet received enough attention is human-computer interaction.

Analyzing the mental states of the sports student based on augmentative communication with human–computer interaction

Related works:

  • Recognition of facial expressions and its application to human–computer interaction
  • Physical education system and training framework based on human–computer interaction for augmentative and alternative communication
  • Enhancing human–computer interaction through the application of artificial intelligence, machine learning, and data mining
  • Applications of human–computer interaction for improving ERP usability in education systems

  • DSpace@MIT, MIT Libraries — Graduate Theses

Emerging human-computer interaction interfaces : a categorizing framework for general computing


Vision based hand gesture recognition for human computer interaction: a survey

  • Published: 06 November 2012
  • Volume 43 , pages 1–54, ( 2015 )


  • Siddharth S. Rautaray
  • Anupam Agrawal


As computers become more pervasive in society, facilitating natural human–computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in developing new approaches and technologies for bridging the human–computer barrier. The ultimate aim is to bring HCI to a regime in which interacting with a computer is as natural as interacting with another human, and to this end, incorporating gestures into HCI is an important research area. Gestures have long been considered an interaction technique that can potentially deliver more natural, creative and intuitive ways of communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface motivates research on gesture taxonomies, representations, recognition techniques, and software platforms and frameworks, all of which are discussed briefly in this paper. It focuses on the three main phases of hand gesture recognition: detection, tracking and recognition. Applications that employ hand gestures for efficient interaction are discussed under core and advanced application domains. The paper also analyzes the existing literature on gesture recognition systems for human-computer interaction by categorizing it under different key parameters, and it discusses the advances needed to improve present hand gesture recognition systems so that they can be widely used for efficient human-computer interaction in the future. The main goal of this survey is to provide researchers in the field of gesture-based HCI with a summary of progress achieved to date and to help identify areas where further research is needed.
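The three phases the survey names — detection, tracking, and recognition — can be sketched as a tiny pipeline. This is an illustrative toy, not any system from the survey: detection is a bare intensity threshold standing in for skin-colour segmentation, tracking is a centroid reduction, and recognition is a rule on net horizontal displacement; all function names are invented for the example.

```python
def detect_hand(frame):
    """Detection: return (x, y) pixels whose value exceeds a toy
    'skin' threshold; stand-in for colour/shape-based segmentation."""
    return [(x, y) for y, row in enumerate(frame)
                   for x, v in enumerate(row) if v > 0.5]

def track_centroid(points):
    """Tracking: reduce the detected region to its centroid."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def recognize(trajectory):
    """Recognition: classify the tracked trajectory with a toy rule
    (net horizontal displacement -> swipe direction)."""
    dx = trajectory[-1][0] - trajectory[0][0]
    if dx > 1:
        return "swipe_right"
    if dx < -1:
        return "swipe_left"
    return "static"

def gesture_pipeline(frames):
    """Run detection and tracking per frame, then recognize the gesture."""
    trajectory = [track_centroid(detect_hand(f)) for f in frames]
    return recognize(trajectory)

# A bright blob moving right across three 1x4 frames.
frames = [[[1, 0, 0, 0]], [[0, 0, 1, 0]], [[0, 0, 0, 1]]]
print(gesture_pipeline(frames))  # -> swipe_right
```

Real systems replace each stage with the techniques the survey catalogues (skin-colour or Haar-feature detection, Kalman/particle-filter or CamShift tracking, HMM or SVM recognition), but the staged structure is the same.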



Murthy GRS, Jadon RS (2010) Hand gesture recognition using neural networks. In: 2nd International advance computing conference (IACC), IEEE, pp 134–138

Nickel K, Stiefelhagen R (2003) Pointing gesture recognition based on 3d-tracking of face, hands and head orientation. In: ICMI ’03: proceedings of the 5th international conference on multimodal interfaces. ACM Press, New York, pp 140–146

Nishikawa A, Hosoi T, Koara K, Negoro D, Hikita A, Asano S, Kakutani H, Miyazaki F, Sekimoto M, Yasui M, Miyake Y, Takiguchi S, Monden M (2003) FAce MOUSe: a novel human-machine interface for controlling the position of a laparoscope. IEEE Trans Robotics Autom 19(5): 825–841

Noury N, Barralon P, Virone G, Boissy P, Hamel M, Rumeau P (2003) A smart sensor based on rules and its evaluation in daily routines. In: Engineering in medicine and biology society, 2003. Proceedings of the 25th annual international conference of the IEEE, vol 4, pp 3286–3289

OMRON (2012) http://www.omron.com/

Ong SCW, Ranganath S, Venkatesh YV (2006) Understanding gestures with systematic variations in movement dynamics. Pattern Recogn 39: 1633–1648

Ongkittikul S, Worrall S, Kondoz A (2008) Two hand tracking using colour statistical model with the K-means embedded particle filter for hand gesture recognition. In: 7th Computer information systems and industrial management applications, pp 201–206

Osawa N, Asai K, Sugimoto YY (2000) Immersive graph navigation using direct manipulation and gestures. In: ACM symposium on virtual reality software and technology. ACM Press, pp 147–152

Ottenheimer HJ (2005) The anthropology of language: an introduction to linguistic anthropology. Wadsworth Publishing. ISBN-13: 978-0534594367

Ou J, Fussell SR, Chen X, Setlock LD, Yang J (2003) Gestural communication over video stream: supporting multimodal interaction for remote collaborative physical tasks. In: Proceedings of the 5th international conference on Multimodal interfaces. ACM Press, pp 242–249

Paiva A, Andersson G, Hk K, Mourao D, Costa M, Martinho C (2002) SenToy in fantasyA: designing an affective sympathetic interface to a computer game. Pers Ubiquitous Comput 6(5–6):378–389

Pang YY, Ismail NA, Gilbert PLS (2010) A real time vision-based hand gesture interaction. In: Fourth Asia international conference on mathematical/analytical modeling and computer simulation, IEEE, pp 237–242

Pantic M, Nijholt A, Pentland A, Huanag TS (2008) Human-centred intelligent human–computer Interaction ( HCI 2 ): how far are we from attaining it?. Int J Auton Adapt Commun Syst 1: 168–187

Patwardhan KS, Roy SD (2007) Hand gesture modelling and recognition involving changing shapes and trajectories, using a predictive EigenTracker. Pattern Recogn Lett 28: 329–334

Paulraj MP, Yaacob S, Desa H, Hema CR (2008) Extraction of head and hand gesture features for recognition of sign language. In: International conference on electronic design, pp 1–6

Pausch R, Williams RD (1990) Tailor: creating custom user interfaces based on gesture. In: Proceedings of the 3rd annual ACM SIGGRAPH symposium on user interface software and technology. ACM Press, pp 123–134

Pavlovic VI, Sharma R, Huang TS (1997) Visual interpretation of hand gestures for human–computer interaction: a review. Trans Pattern Anal Mach Intell 19(7): 677–695

Perez P, Hue C, Vermaak J, Gangnet M (2002) Color-based probabilistic tracking. In: Procedings of the European conference on computer vision, Copenhagen, pp 661–675

Peterfreund N (1999) Robust tracking of position and velocity with Kalman snakes. IEEE Trans Pattern Anal Mach Intell 10(6): 564–569

Pickering CA (2005) Gesture recognition driver controls. IEE J Comput Control Eng 16(1): 27–40

MathSciNet   Google Scholar  

PointGrab’s (2012) http://www.pointgrab.com/

Prieto A, Bellas F, Duro RJ, López-Peña F (2006) An adaptive visual gesture based interface for human machine interaction in intelligent workspaces. In: IEEE international conference on virtual environments, human–computer interfaces, and measurement systems, pp 43–48

Radkowski R, Stritzke C (2012) Interactive hand gesture-based assembly for augmented reality applications. In: ACHI 2012: the fifth international conference on advances in computer–human interactions, IARIA, pp 303–308

Ramage D (2007) Hidden Markov models fundamentals, Lecture Notes. http://cs229.stanford.edu/section/cs229-hmm.pdf

Rashid O, Al-Hamadi A, Michaelis B (2009) A framework for the integration of gesture and oosture recognition using HMM and SVM. In: IEEE international conference on intelligent computing and intelligent systems (ICIS 2009), pp 572–577

Rautaray SS, Agrawal A (2011) A novel human computer interface based on hand gesture recognition using computer vision techniques. In: International conference on intelligent interactive technologies and multimedia (IITM-2011), pp 292–296

Rautaray SS, Agrawal A (2012) Real time hand gesture recognition system for dynamic applications. Int J UbiComp 3(1): 21–31

Reale MJ, Canavan S, Yin L, Hu K, Hung T (2011) A multi-gesture interaction system using a 3-D Iris disk model for Gaze estimation and an active appearance model for 3-D hand pointing. IEEE Trans Multimed 13(3): 474–486

Rehg J, Kanade T (1994) Digiteyes: vision-based hand tracking for human–computer interaction. In: Workshop on motion of non-rigid and articulated bodies, Austin Texas, pp 16–24

Rehg J, Kanade T (1995) Model-based tracking of self-occluding articulated objects. In: Proceedings of the international conference on computer vision (ICCV), pp 612–617

Ren Y, Zhang F (2009a) Hand gesture recognition based on meb-svm. In: Second international conference on embedded software and systems, IEEE Computer Society, Los Alamitos, pp 344–349

Ren Y, Zhang F (2009b) Hand gesture recognition based on MEB-SVM. In: International conferences on embedded software and systems, pp 344–349

Rodriguez S, Picon A, Villodas A (2010) Robust vision-based hand tracking using single camera for ubiquitous 3D gesture interaction. In: IEEE symposium on 3D user interfaces (3DUI), pp 135–136

Sajjawiso T, Kanongchaiyos P (2011) 3D hand pose modeling from uncalibrate monocular images. In: Eighth international joint conference on computer science and software engineering (JCSSE), pp 177–181

Salinas RM, Carnicer RM, Cuevas FJ, Poyato AC (2008) Depth silhouettes for gesture recognition. Pattern Recogn Lett 29: 319–329

Sangineto E, Cupelli M (2012) Real-time viewpoint-invariant hand localization with cluttered backgrounds. Image Vis Comput 30:26–37

Sawah AE, Joslin C, Georganas ND, Petriu EM (2007) A framework for 3D hand tracking and gesture recognition using elements of genetic programming. In: Fourth Canadian conference on computer and robot vision (CRV’07), pp 495–502

Sawah AE, Georganas ND, Petriu EM (2008) A prototype for 3-D hand tracking and posture estimation. IEEE Trans Instrum Meas 57(8): 1627–1636

Saxe D, Foulds R (1996) Toward robust skin identification in video images. In: IEEE international conference on automatic face and gesture recognition, pp 379–384

Schapire R (2002) The boosting approach to machine learning: an overview. In: MSRI workshop on nonlinear estimation and classification

Schlomer T, Poppinga B, Henze N, Boll S (2008) Gesture recognition with a wii controller. In: TEI ’08: proceedings of the 2nd international conference on Tangible and embedded interaction. ACM, New York, pp 11–14

Schmandt C, Kim J, Lee K, Vallejo G, Ackerman M (2002) Mediated voice communication via mobile ip. In: Proceedings of the 15th annual ACM symposium on User interface software and technology. ACM Press, pp 141–150

Schultz M, Gill J, Zubairi S, Huber R, Gordin F (2003) Bacterial contamination of computer keyboards in a teaching hospital. Infect Control Hosp Epidemiol 4(24): 302–303

Sclaroff S, Betke M, Kollios G, Alon J, Athitsos V, Li R, Magee J, Tian TP (2005) Tracking, analysis, and recognition of human gestures in video. In: 8th International conference on document analysis and recognition, pp 806–810

Segen J, Kumar S (1998a) Gesture VR: vision-based 3d Hand interface for spatial interaction. In: Proceedings of the sixth ACM international conference on multimedia. ACM Press, pp 455–464

Segen J, Kumar S (1998b) Video acquired gesture interfaces for the handicapped. In: Proceedings of the sixth ACM international conference on multimedia. ACM Press, pp 45–48

Segen J, Kumar SS (1999) Shadow gestures: 3D hand pose estimation using a single camera. In: Proceedings of the IEEE computer vision and pattern recognition (CVPR), pp 479–485

Senin P (2008) Dynamic time warping algorithm review, technical report. http://csdl.ics.hawaii.edu/techreports/08-04/08-04.pdf

Sharma R, Huang TS, Pavovic VI, Zhao Y, Lo Z, Chu S, Schulten K, Dalke A, Phillips J, Zeller M, Humphrey W (1996) Speech/gesture interface to a visual computing environment for molecular biologists. In: International conference on pattern recognition (ICPR ’96) volume 7276. IEEE Computer Society, pp 964–968

Shimada N, Shirai Y, Kuno Y, Miura J (1998) Hand gesture estimation and model refinement using monocular camera ambiguity limitation by inequality constraints. In: IEEE international conference on face and gesture recognition, Nara, pp 268–273

Shimizu M, Yoshizuka T, Miyamoto H (2007) A gesture recognition system using stereo vision and arm model fitting. In: International congress series 1301, Elsevier, pp 89–92

Sigal L, Sclaroff S, Athitsos V (2004) Skin color-based video segmentation under time-varying illumination. IEEE Trans Pattern Anal Mach Intell 26(7): 862–877

Smith GM, Schraefel MC (2004) The radial scroll tool: scrolling support for stylus- or touch-based document navigation. In: Proceedings of the 17th annual ACM symposium on User interface software and technology, ACM Press, pp 53–56

SoftKinetic, IISU SDK (2012) http://www.softkinetic.com/Solutions/iisuSDK.aspx

Song L, Takatsuka M (2005) Real-time 3D − nger pointing for an augmented desk. In: Australasian conference on user interface, vol 40. Newcastle, pp 99–108

Sriboonruang Y, Kumhom P, Chamnongthai K (2006) Visual hand gesture interface for computer board game control. In: IEEE tenth international symposium on consumer electronics, pp 1–5

Stan S, Philip C (2004) Fastdtw: toward accurate dynamic time warping in linear time and space. In: KDD workshop on mining temporal and sequential data

Staner AT, Pentland A (1995a) Visual recognition of American sign language using hidden Markov models. Technical Report TR-306, Media Lab, MIT

Starner T, Pentland A (1995b) Real time American sign language recognition from video using hidden Markov models, Technical Report 375, MIT Media Lab

Stotts D, Smith JM, Gyllstrom K (2004a) Facespace: endo- and exo-spatial hypermedia in the transparent video face top. In: 15th ACM conference on hypertext & hypermedia. ACM Press, pp 48–57

Stotts D, Smith JM, Gyllstrom K (2004b) Facespace: endo- and exo-spatial hypermedia in the transparent video facetop. In: Proceedings of the fifteenth ACM conference on hypertext & hypermedia. ACM Press, pp 48–57

Suk H, Sin BK, Lee SW (2008) Robust modeling and recognition of hand gestures with dynamic Bayesian network. In: 19th international conference on pattern recognition, pp 1–4

Suka H, Sin B, Lee S (2010) Hand gesture recognition based on dynamic Bayesian network framework. Pattern Recogn 43: 3059–3072

Swindells C, Inkpen KM, Dill JC, Tory M (2002) That one there! Pointing to establish device identity. In: Proceedings of the 15th annual ACM symposium on user interface software and technology. ACM Press, pp 151–160

Teng X, Wu B, Yu W, Liu C (2005) A hand gesture recognition system based on local linear embedding. J Vis Lang Comput 16: 442–454

Terrillon J, Shirazi M, Fukamachi H, Akamatsu S (2000) Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images. In: Proceedings of the international conference on automatic face and gesture recognition (FG), pp 54–61

Terzopoulos D, Szeliski R (1992) Tracking with Kalman Snakes. MIT Press, Cambridge, pp 3–20

Thirumuruganathan S (2010) A detailed introduction to K-nearest neighbor (KNN) algorithm. http://saravananthirumuruganathan.wordpress.com/2010/05/17/a-detailed-introduction-to-k-nearest-neighbor-knn-algorithm/

Tran C, Trivedi MM (2012) 3-D posture and gesture recognition for interactivity in smart spaces. IEEE Trans Ind Inform 8(1): 178–187

Triesch J, Malsburg C (1996) Robust classification of hand postures against complex background. In: IEEE automatic face and gesture recognition, Killington, pp 170–175

Triesch J, Von der Malsburg C (1998) A gesture interface for human-robot-interaction. In: Proceedings of the international conference on automatic face and gesture recognition (FG). IEEE, Nara, Japan, pp 546–551

Tseng KT, Huang WF, Wu CH (2006) Vision-based finger guessing game in human machine interaction. In: IEEE international conference on robotics and biomimetics, pp 619–624

Utsumi A, Ohya J (1998) Image segmentation for human tracking using sequential-image-based hierarchical adaptation. In: Proceedings IEEE computer vision and pattern recognition (CVPR), pp 911–916

Utsumi A, Ohya J (1999) Multiple-hand-gesture tracking using multiple cameras. In: Proceedings of the IEEE computer vision and pattern recognition (CVPR), Colorado, pp 473–478

Vafadar M, Behrad A (2008) Human hand gesture recognition using spatio-temporal volumes for human–computer interaction. In: International symposium in telecommunications, pp 713–718

Vámossy Z, Tóth A, Benedek B (2007) Virtual hand—hand gesture recognition system. In: 5th International symposium on intelligent systems and informatics, pp 97–102

Várkonyi-Kóczy AR, Tusor B (2011) Human–computer interaction for smart environment applications using fuzzy hand posture and gesture models. IEEE Trans Instrum Meas 60(5): 1505–1514

Varona J, Jaume-i-Capó A, Gonzà àlez J, Perales FJ (2009) Toward natural interaction through visual recognition of body gestures in real-time. Interact Comput 21: 3–10

Verma R, Dev A (2009) Vision based hand gesture recognition using finite state machines and fuzzy logic. In: International conference on ultra modern telecommunications & workshops (ICUMT ’09), pp 1–6

Vilaplana JM, Coronado JL (2006) A neural network model for coordination of hand gesture during reach to grasp. Neural Netw 19:12–30

Viola P, Jones M (2001) Robust real-time object detection. In: IEEE workshop on statistical and computational theories of vision, Vancouver

Visser M, Hopf V (2011) Near and far distance gesture tracking for 3D applications. In: 3DTV conference: the true vision-capture, transmission and display of 3D video (3DTV-CON), pp 1–4

Vo N, Tran Q, Dinh TB, Dinh TB, Nguyen QM (2010) An efficient human–computer interaction framework using skin color tracking and gesture recognition. In: IEEE RIVF international conference on computing and communication technologies, research, innovation and vision for the future (RIVF), pp 1–6

Wachs J, Stern H, Edan Y, Kartoun U (2002) Real-time hand gestures using the fuzzy-C-means Algorithm. In: Proceeding of WAC 2002, Florida

Wachs JP, Stern H, Edan Y (2005) Cluster labeling and parameter estimation for the automated setup of a hand-gesture recognition system. IEEE Trans Syst Man Cybern PART A Syst Humans 35(6): 932–944

Wachs JP, Kolsch M, Stern H, Edan Y (2011) Vision-based hand-gesture applications. Commun ACM 54: 60–71

Wang GW, Zhang C, Zhuang J (2012) An application of classifier combination methods in hand gesture recognition. Mathematical Problems in Engineering Volume 2012, Hindawi Publishing Corporation, pp 1–17. doi: 10.1155/2012/346951

Ward DJ, Blackwell AF, MacKay DJC (2000) Dasher—a data entry interface using continuous gestures and language models. In: Proceedings of the 13th annual ACM symposium on user interface software and technology, ACM Press, pp 129–137

Webel S, Keil J, Zoellner M (2008) Multi-touch gestural interaction in x3d using hidden markov models. In: VRST ’08: proceedings of the 2008 ACM symposium on vVirtual reality software and technology. ACM, New York, pp 263–264

Wii Nintendo (2006) http://www.nintendo.com/wii

Wilson A, Shafer S (2003) Xwand: UI for intelligent spaces. In: Proceedings of the conference on Human factors in computing systems. ACM Press, pp 545–552

Wohler C, Anlauf JK (1999) An adaptable time-delay neural-network algorithm for image sequence analysis. IEEE Trans Neural Netw 10(6): 1531–1536

Wu M, Balakrishnan R (2003) Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays. In: Proceedings of the 16th annual ACM symposium on user interface software and technology. ACM Press, pp 193–202

Wu Y, Huang T (1999a) Vision-based gesture recognition: a review. In: Gesture-based communications in HCI, Lecture Notes in Computer Science, vol 1739. Springer, Berlin

Wu Y, Huang TT (1999b) Capturing human hand motion: a divide-and-conquer approach. In: Proceedings of the international conference on computer vision (ICCV), Greece, pp 606–611

Wu Y, Huang TS (2000) View-independent recognition of hand postures. In: Proceedings of the IEEE computer vision and pattern recognition (CVPR), vol 2. Hilton Head Island, SC, pp 84–94

Wu Y, Lin J, Huang T (2001) Capturing natural hand articulation. In: Proceedings of the international conference on computer vision (ICCV), Vancouver, pp 426–432

Wu Y, Lin J, Huang TS (2005) Analyzing and capturing articulated hand motion in image sequences. IEEE Trans Pattern Anal Mach Intell 27(12): 1910–1922

Xiangyu W, Xiujuan L (2010) The study of moving target tracking based on Kalman CamShift in the video. In: 2nd International conference on information science and engineering (ICISE), pp 1–4

Yang M, Ahuja N (1998) Detecting human faces in color images. In: Proceedings of the international conference on image processing (ICIP), Piscataway, pp 127–130

Yang J, Lu W, Waibel A (1998a) Skin-color modeling and adaptation. In: ACCV, pp 687–694

Yang J, Lu W, Waibel A (1998b) Skin-color modeling and adaptation. In: ACCV, pp 687–694

Yang J, Xu J, Li M, Zhang D, Wang C (2011) A real-time command system based on hand gesture recognition. In: Seventh international conference on natural computation, pp 1588–1592

Yi B, Harris FC Jr, Wang L, Yan Y (2005) Real-time natural hand gestures. Comput Sci Eng IEEE 7(3):92–97

Yi X, Qin S, Kang J (2009) Generating 3D architectural models based on hand motion and gesture. Comput Ind 60:677–685

Yilmaz JA, Javed O, Shah M (2006) Object tracking: a survey. ACM Comput Surv 38: 13

Yin M, Xie X (2003) Estimation of the fundamental matrix from uncalibrated stereo hand images for 3D hand gesture recognition. Pattern Recogn 36(3): 567–584

Yin J, Han Y, Li J, Cao A (2009) Research on real-time object tracking by improved CamShift. In: International symposium on computer network and multimedia technology, pp 1–4

Yuan Q, Sclaroff S, Athitsos V (1995) Automatic 2D hand tracking in video sequences. In: IEEE workshop on applications of computer vision, pp 250–256

Yuan R, Cheng J, Li P, Chen G, Xie C, Xie Q (2010) View invariant hand gesture recognition using 3D trajectory. In: Proceedings of the 8th world congress on intelligent control and automation, Jinan, pp 6315–6320

Yun L, Peng Z (2009) An automatic hand gesture recognition system based on Viola–Jones method and SVMs. In: Second international workshop on computer science and engineering, pp 72–76

Zabulis X, Baltzakis H, Argyros A (2009) Vision-based Hand gesture recognition for human–computer interaction. In: The Universal Access Handbook. LEA

Zeller M et al (1997) A visual computing environment for very large scale biomolecular modeling. In: Proceedings of the IEEE international conference on application specific systems, architectures and processors (ASAP), Zurich, pp 3–12

Zhao S, Tan W, Wu C, Liu C, Wen S (2009) A Novel interactive method of virtual reality system based on hand gesture recognition. In: Chinese control and decision conference (CCDC ’09), pp 5879–5822

Zhu HM, Pun CM (2010) Movement tracking in real-time hand gesture recognition. In: 9th IEEE/ACIS international conference on computer and information science, pp 241–245

Download references

Author information

Authors and Affiliations

Information Technology, Indian Institute of Information Technology, Allahabad, India

Siddharth S. Rautaray & Anupam Agrawal

Corresponding author

Correspondence to Siddharth S. Rautaray.


About this article

Rautaray, S.S., Agrawal, A. Vision based hand gesture recognition for human computer interaction: a survey. Artif Intell Rev 43, 1–54 (2015). https://doi.org/10.1007/s10462-012-9356-9


Published: 06 November 2012

Issue Date: January 2015

DOI: https://doi.org/10.1007/s10462-012-9356-9


Keywords

  • Gesture recognition
  • Human computer interaction
  • Representations
  • Recognition
  • Natural interfaces

Human-Computer Interaction - An Overview of Software Architecture


Human Computer Interaction Lab, Saarland University


Theses

We are on the lookout for talented students who wish to shape the future of interactive computing and user experience with us. If you are curious, creative, and enjoy solving complex problems with cutting-edge computing technology, then you will fit right in. We'd be happy to have you with us as you write your master's or bachelor's thesis in human-computer interaction.


Why join the lab?

Joining the HCI lab for your thesis is a great opportunity for students to: 

  • get early exposure to upcoming visionary technologies
  • conduct creative research to contribute to the future of computing
  • be part of an internationally leading research lab
  • prepare for your next career steps: our alumni have started academic research at leading international labs or obtained attractive positions in leading IT companies in Germany and abroad
  • learn how to write academic publications
  • get exposure to international conferences, tech demonstrations, etc.
  • possibly even become the co-author of a top-tier publication

Requirements

We are interested in students from all backgrounds, first and foremost from Computer Science, but we also accept students from Psychology, Materials Science, and the Fine Arts. Ideally, we are looking for people with:

  • prior knowledge in HCI (Lectures HCI and/or Interactive Systems)
  • technical knowledge in an area that is useful for HCI, such as computer vision, computer graphics, embedded systems, machine learning
  • or experience with empirical methods for user studies

Specifically, we are looking for people with basic skills in at least one of the following points:


Together with you, we will define the topic such that it best meets your background and interest. We have developed a structured process that guides you all throughout your thesis, with continuous and regular advice and support from our side.

Whom to contact

If you are interested in joining us, please contact Prof. Dr. Jürgen Steimle or send an email to [email protected].

To help us understand your skills quickly and provide the best advice, please attach your CV and transcript of records to this email.

You can also directly contact any Ph.D. or post-doc researcher of the team who works on research topics you find interesting.

Theses can start at any time throughout the year.

Want to know more about doing your thesis in our lab? More information here.

Marie Mühlhaus: FeatherHair: Design and Implementation of a Gesture-controlled Hair Interface PDF

Michael Wessely: Fabrication and Control of Flexible Thin-Film Touch-Displays PDF

Lena Hegemann: Single-handed Gesture Input Using Finger-to-Finger Touch and Hand pose PDF


Saarland University Human-Computer Interaction Lab Department of Computer Science Campus E 1.7 66123 Saarbrücken Germany



Potential Topics

[Bachelor / Master Thesis] Apple Vision Pro Development

In this thesis, the candidate should develop a simple Apple Vision Pro demo with Unity and evaluate the development process. Hence, the candidate should have some experience in Xcode development and ideally in Unity, as well as in libraries such as SwiftUI, RealityKit, and ARKit. The exact topic is yet to be defined.

Prof. Dr. Frank Steinicke

[Bachelor / Master Thesis] Interaction on a Multi-Touch Table based on Tilt Angle and Height Adjustments

For various projects, our group works with a multi-touch table, which is a 65” display mounted on a pedestal. This pedestal is motorized so that the display can be tilted and its height can be adjusted. In this thesis, the pedestal should be extended with sensors to detect the current tilt angle and height and send this information to an application. Based on this, some use cases and interactions should be explored. For example, when displaying a 3D virtual world on the screen, the tilt angle and height could be used to observe the virtual world from different perspectives.
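The mapping suggested above can be prototyped with a few lines of code. The following Python sketch is illustrative only: the function name, value ranges, and the linear mapping from pedestal readings to a camera pose are assumptions, not part of any existing system.

```python
def camera_pose_from_pedestal(tilt_deg, height_cm,
                              tilt_range=(0.0, 90.0),
                              height_range=(70.0, 120.0)):
    """Map pedestal sensor readings to a virtual camera pose.

    tilt_deg: measured display tilt (0 = horizontal, 90 = vertical).
    height_cm: measured display height above the floor.
    Returns (camera_pitch_deg, elevation) where the pitch mirrors the
    physical tilt and the elevation is normalized to [0, 1].
    """
    lo, hi = tilt_range
    tilt = min(max(tilt_deg, lo), hi)        # clamp noisy sensor values
    h_lo, h_hi = height_range
    h = min(max(height_cm, h_lo), h_hi)
    # Look straight down when the table is horizontal,
    # straight ahead when it is fully tilted up.
    camera_pitch = 90.0 - tilt
    elevation = (h - h_lo) / (h_hi - h_lo)
    return camera_pitch, elevation

# Example: table tilted 30 degrees at 95 cm height
pitch, elev = camera_pose_from_pedestal(30.0, 95.0)
```

In a real application the same mapping would run inside the rendering loop, smoothing the sensor stream before updating the camera.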

Orientation

  • Informatics/Psychology: 95/5
  • Software/Hardware: 60/40
  • Theory/Practice: 10/90

julia.hertel "AT" uni-hamburg.de

[Bachelor / Master Thesis] Interactive Terrain Modification on a Multi-Touch Table extended by Augmented Reality

Multi-touch tables offer great potential for the collaborative design and planning of environments such as urban and industrial areas. Besides planning infrastructure, it could also be useful to plan how a modified terrain would affect the area. For this purpose, such a system should offer the possibility to modify the height of the terrain in a certain area. It should also be possible to change a water line by adding land to water areas or by removing land to enlarge water areas. In this thesis, it should be explored which interactions and visualizations would be suitable for this use case. For instance, multi-touch tables offer accurate multi-finger recognition, while Augmented Reality offers the possibility to visualize the terrain spatially in 3D. Both technologies could be combined to create an intuitive and precise system for terrain modification.

  • Informatics/Psychology: 80/20
  • Software/Hardware: 85/15
  • Theory/Practice: 20/80
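The terrain modifications described above can be sketched as a height-map "brush" that raises or lowers cells within a radius, with cells at or below sea level counting as water. This Python sketch is a minimal illustration under those assumptions; the linear falloff and function names are hypothetical design choices.

```python
def raise_terrain(heightmap, cx, cy, radius, delta):
    """Apply a circular brush that raises (delta > 0) or lowers
    (delta < 0) the terrain around cell (cx, cy), with a linear
    falloff toward the brush edge. Mutates heightmap in place."""
    rows, cols = len(heightmap), len(heightmap[0])
    for y in range(max(0, cy - radius), min(rows, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(cols, cx + radius + 1)):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if d <= radius:
                heightmap[y][x] += delta * (1.0 - d / radius)
    return heightmap

def water_cells(heightmap, sea_level=0.0):
    """Cells at or below sea level count as water, so raising terrain
    above sea level turns water into land and lowering does the opposite."""
    return {(x, y)
            for y, row in enumerate(heightmap)
            for x, h in enumerate(row)
            if h <= sea_level}

# Example: raise a hill in the middle of a flat 5x5 sea-level map
hm = [[0.0] * 5 for _ in range(5)]
raise_terrain(hm, 2, 2, 2, 1.0)
```

An AR view could then render `hm` as a 3D mesh while the touch table handles the brush input.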

[Bachelor Thesis] Smartphone-enabled touch table interaction

Multi-touch tables can be used in group settings to visualise content and let users collaborate in a shared environment. However, it can be challenging to reach all points on such large tables. In this thesis, it should be evaluated whether using a smartphone as a remote control device for the touch table is feasible and helpful.
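At its core, such a remote control needs to map touch points from the phone screen to positions on the table display. A minimal Python sketch of that mapping; the resolution values and function name are illustrative assumptions:

```python
def phone_to_table(nx, ny, table_w=3840, table_h=2160):
    """Map a normalized phone touch point (nx, ny in [0, 1]) to
    absolute pixel coordinates on the large table display, so the
    phone acts as an absolute-position remote touchpad."""
    nx = min(max(nx, 0.0), 1.0)   # clamp out-of-range touch events
    ny = min(max(ny, 0.0), 1.0)
    return round(nx * (table_w - 1)), round(ny * (table_h - 1))
```

The mapped coordinates would then be sent over the network (e.g. via WebSockets) and injected as pointer events on the table.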

judith.hartfill "AT" uni-hamburg.de

[Bachelor Thesis] Generative AI-Based Assistance on Document Categorization

Manually categorizing documents is repetitive, time-consuming, and error-prone. The rising usage and power of generative-AI tools such as ChatGPT has raised our interest in investigating how far they can assist in document categorization.

As described in Walter Kempowski – Ortslinien (Bachelorarbeit), there is a massive number of files of multiple media types (text, sound, image, video, etc.) that need to be classified into the well-organized hierarchy provided by humanities researchers. Currently, all the files are still in a primitive format, and the only feasible way to read them is to add the file extension corresponding to each file's original format (.odt, .txt, .aiff, etc.). However, it would be very time-consuming to check every file manually. To speed up this process, ChatGPT could assist with this task, which means that an implementation connecting the local archive to ChatGPT needs to be realized.

The reason to apply ChatGPT here is its easy accessibility and well-trained performance. If you know of similar tools, feel free to suggest them.
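Before sending file contents to an external model, a cheap local heuristic can already assign extensions from leading "magic bytes", reserving the LLM for ambiguous cases. The following Python sketch is illustrative only: the signature table is deliberately tiny and the fallback logic is an assumption, not the thesis's required design.

```python
# A few common file signatures ("magic bytes"); illustrative, not exhaustive.
MAGIC = [
    (b"\xff\xd8\xff",      ".jpg"),
    (b"\x89PNG\r\n\x1a\n", ".png"),
    (b"FORM",              ".aiff"),  # AIFF containers start with FORM
    (b"PK\x03\x04",        ".odt"),   # ODF files are ZIP archives
    (b"%PDF",              ".pdf"),
]

def guess_extension(first_bytes):
    """Guess a file extension from a file's leading bytes.
    Falls back to '.txt' if the content decodes as UTF-8 text; returns
    None otherwise, so the file can be escalated to an LLM or a human."""
    for signature, ext in MAGIC:
        if first_bytes.startswith(signature):
            return ext
    try:
        first_bytes.decode("utf-8")
        return ".txt"
    except UnicodeDecodeError:
        return None
```

Only the `None` cases would then need to be sent to ChatGPT (or reviewed manually), which keeps API usage small.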

  • Informatics/Humanities: 80/20

[email protected]

[Bachelor Thesis] Visualization of Multimedia Archive and Interactive Solution Design

Digitally transforming an archive and visualizing it in a virtual space sounds pretty cool and forward-looking. In this project, we aim to bring the work from the Walter Kempowski Ortslinien-Projekt into a virtual environment where users can easily search and inspect the digital pieces in a user-friendly and intuitive way.

Since the digital pieces are saved in multiple media types (text, sound, image, video, etc.), two main points have to be considered carefully:

  • What kind of digital structure can best store these files with a clearly described hierarchy and correct formatting?
  • How should the user interface be designed so that users can manipulate these files easily and with low effort?

There is no limitation on the platform, such as mobile, desktop, or head-mounted devices. Ideally, the solution would be implemented on different platforms and networked together.
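
One simple way to think about the first question is a tree of categories whose leaves reference media files with a type tag. The following Python sketch is a hypothetical illustration of such a structure, not the project's prescribed format:

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical sketch of one possible storage structure: a category tree
# whose leaves reference media files with a type tag. Names are invented.

@dataclass
class Node:
    name: str
    media_type: str | None = None  # "text", "sound", "image", "video"; None for folders
    children: list[Node] = field(default_factory=list)

    def add(self, child: Node) -> Node:
        self.children.append(child)
        return child

    def find(self, name: str) -> Node | None:
        """Depth-first search for a node by name."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit is not None:
                return hit
        return None

root = Node("Ortslinien")
year = root.add(Node("1900"))
year.add(Node("letter_03.odt", media_type="text"))
year.add(Node("radio_clip.aiff", media_type="sound"))
print(root.find("radio_clip.aiff").media_type)  # sound
```

A structure like this serializes naturally to JSON, which keeps the hierarchy explicit and platform-independent for the second question.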

[Bachelor Thesis] "Development and Integration of a Spatial Selective Filter in Unity for Enhancing Agent-based Selective Speech Perception in Multi-Person Environments Through Multimodal Face Tracking and Natural Language Processing"

The aim of this thesis is to design and implement a spatial selective filter within the Unity platform, building upon prior work in the field. This filter will be integrated with an agent capable of selectively perceiving speech from multiple individuals in a shared space by leveraging multimodal connections that incorporate face tracking technology. Additionally, the implementation will explore the application of current natural language processing (NLP) models to enable the agent to appropriately respond to conversations. This research will involve a comprehensive study of spatial filtering techniques, face recognition technologies, and the application of NLP models, focusing on their integration within the Unity environment for real-world simulation scenarios. 

 Orientation

Sebastian Rings

[Bachelor Thesis] "Designing a 3D Audio Simulation Environment in Unity for Generating Realistic Spatial Soundscapes Using Conversational Artificial Agents and Virtual Microphones"

This thesis focuses on creating a sophisticated simulation environment within Unity to generate and analyze 3D audio using artificial conversational agents. The core objective is to simulate an environment where these agents engage in dialogues, thereby producing spatial soundscapes that are captured by three virtual microphones placed in the center of the simulated room. The fidelity of the 3D audio output is very important, aiming to replicate real-world acoustical properties as closely as possible in Unity with current packages and technology. 

The project will explore the principles of 3D audio generation and spatial sound engineering to create a virtual setting that accurately portrays how sound behaves in three-dimensional spaces. It involves designing artificial agents capable of engaging in conversations, thereby serving as dynamic audio sources. The virtual microphones' recordings will subsequently be employed to test and refine a Spatial Audio Filter, assessing its effectiveness in processing and enhancing spatial audio qualities.

This work will require a deep dive into Unity’s audio system, aiming to explore what’s currently possible within simulated audio environments. The expected outcome is a comprehensive simulation toolkit in Unity that can serve both as a testbed for spatial audio filters and as a prototype for applications requiring high-fidelity 3D audio, such as virtual reality projects and advanced game development.
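
Independently of Unity's audio engine, the core quantities such a setup manipulates can be written down compactly. The sketch below (plain Python, with invented positions) computes the propagation delay and a simple inverse-distance gain from a speaking agent to each virtual microphone, which is exactly the information a spatial audio filter later exploits:

```python
import math

# Hypothetical sketch, independent of Unity: for a sound source and virtual
# microphones in 3D, compute the propagation delay and a simple 1/r gain
# per microphone. Positions and the reference distance are illustrative.

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def delay_seconds(source, mic):
    """Propagation delay from source to microphone."""
    return math.dist(source, mic) / SPEED_OF_SOUND

def gain(source, mic, ref=1.0):
    """Inverse-distance attenuation, clamped at the reference distance."""
    return ref / max(math.dist(source, mic), ref)

# Three microphones near the centre of a simulated room, agent 2 m away:
mics = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (-0.1, 0.0, 0.0)]
agent = (2.0, 0.0, 0.0)
for m in mics:
    print(round(delay_seconds(agent, m) * 1000, 3), "ms, gain",
          round(gain(agent, m), 3))
```

The small delay differences between the microphones are what the spatial filter can use to localize and separate the conversing agents.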

  • Informatics/Psychology: 90/10
  • Software/Hardware: 70/30
  • Theory/Practice: 50/50

Recent advances in AI and machine learning make it possible to automatically recognize emotions in the speech, images, and videos of social media posts. In this thesis, existing systems are to be examined for their suitability for analyzing the emotion of social media posts (e.g., on X, Facebook, YouTube, etc.) along the valence-arousal dimensions.

  Orientation

  • Informatics/Psychology: 70/30
  • Design/Development: 20/80
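
A common first step in such an analysis is mapping the discrete emotion labels that off-the-shelf recognizers return onto the valence-arousal plane. The Python sketch below uses illustrative coordinates loosely inspired by Russell's circumplex model; they are assumptions, not calibrated values:

```python
# Hypothetical sketch: map discrete emotion labels (as returned by many
# off-the-shelf recognizers) onto the valence-arousal plane. Coordinates
# are illustrative, roughly following Russell's circumplex model.

VALENCE_AROUSAL = {
    "happy":   ( 0.8,  0.5),
    "excited": ( 0.6,  0.9),
    "calm":    ( 0.4, -0.6),
    "sad":     (-0.7, -0.4),
    "angry":   (-0.6,  0.8),
    "fearful": (-0.7,  0.6),
}

def post_emotion(labels: list[str]) -> tuple[float, float]:
    """Average the valence-arousal coordinates of all labels detected in a post."""
    points = [VALENCE_AROUSAL[l] for l in labels if l in VALENCE_AROUSAL]
    if not points:
        return (0.0, 0.0)  # neutral if nothing recognized
    v = sum(p[0] for p in points) / len(points)
    a = sum(p[1] for p in points) / len(points)
    return (round(v, 2), round(a, 2))

print(post_emotion(["happy", "excited"]))  # (0.7, 0.7)
```

Comparing where different systems place the same post on this plane is one way to assess their suitability.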

(Bachelor / Master thesis) Interactive Driver Assistance System

An existing driver assistance app displays the next three stops to be served together with their departure times. All stops with their corresponding coordinates must be entered manually on the tablet or can be imported into the program as a CSV file. The timetable (or route), i.e., the stops with departure times, must likewise be entered on the tablet or imported as a CSV file. The stops are then displayed in the driver assistant sorted by departure time.

Central questions

The application should be extended with additional functionality, e.g., defining further on-demand stops with times, displaying the route with a free route planner, or screen splitting. All functions described so far concern programming the app in Java.
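
The CSV import and sorting logic at the core of the app can be sketched in a few lines (shown here in Python for brevity, although the app itself is written in Java; the column names are invented, not the app's actual format):

```python
import csv
import io

# Minimal sketch of the timetable import: parse a CSV of stops
# (name, latitude, longitude, departure HH:MM) and sort by departure.

def load_timetable(csv_text: str) -> list[dict]:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # zero-padded "HH:MM" strings sort chronologically as plain strings
    return sorted(rows, key=lambda r: r["departure"])

data = """name,lat,lon,departure
Rathaus,53.5503,9.9920,08:15
Hauptbahnhof,53.5530,10.0069,07:50
Landungsbruecken,53.5459,9.9690,08:05
"""

for stop in load_timetable(data):
    print(stop["departure"], stop["name"])
```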

(Bachelor / Master thesis) Exploring mental maps of individuals' living environments

In psychology and geography, the so-called "cognitive map" has quite a long tradition. The term was originally coined by Tolman (1948) and refers to the mental representation by which individuals acquire, store, recall, and decode information about relative locations in a spatial environment. To assess how individuals mentally store knowledge of their surroundings, psychologists have often asked participants to sketch a map of a certain place on a sheet of paper (Appleyard, 1970). Variants have provided participants with a small portion of the map to give a scale and reference (Pearce, 1981), or have taught participants certain symbols to denote particular features (Beck and Wood, 1976). Since the paper size greatly limits the flexibility to extend the map in all directions, we would like to come up with a digital map-drawing setup comprising a tablet with a pencil and potentially predefined map-like symbols, e.g., icons or stickers. Participants can select from the predefined symbols but also have the option to build up the map with freehand drawings. A similar attempt was undertaken by Huynh & Doherty (2007). What we would like to do is compare paper-and-pencil maps with maps produced on a digital device to weigh the pros and cons of both methodologies. Our goal is to provide the field with a new, optimized approach to assessing "cognitive maps" reliably.

  •  Informatics/Psychology: 40/60
  • Software/Hardware: 90/10

Dr. Fariba Mostajeran and Prof. Dr. Simone Kühn

Bachelor thesis

Human biology studies show that visual indicators of contagious respiratory infections (e.g., sneezing) can trigger defensive immunological responses (increased antibody secretion) and behavioral responses (avoidance behavior) in humans. The planned project is to investigate whether encountering sneezing virtual agents in a virtual environment likewise affects antibody secretion in saliva. It will further examine whether sneezing agents are preferentially avoided in the virtual environment. To this end, several sneezing virtual agents are to be created and placed in a realistic virtual situation (e.g., a bus stop or waiting room) in order to measure implicit approach and avoidance tendencies as well as changes in the antibody titers of the participants interacting with the agents.

Contact

Jun.-Prof. Dr. Esther Diekhof ( esther.diekhof "AT" uni-hamburg.de ) and Prof. Dr. Frank Steinicke

(Bachelor / Master thesis) Virtual Group Meeting Analytics & Feedback

Virtual group meetings are omnipresent in times of the COVID-19 pandemic. At the same time, however, virtual meetings incur numerous technological limitations and HCI challenges. In particular, most current virtual meeting solutions hinder the perception of correct gaze, body language, emotions, deictic relations, or eye-to-eye contact. Moreover, because virtual meetings capture the communication between multiple people via microphones and cameras in real time, they raise several ethical, societal, legal, and privacy issues.

In this thesis, the student should develop and evaluate advanced virtual group meeting analytics and feedback mechanisms for video conference tools such as Zoom or BigBlueButton by using AI-based detection of emotions, body language, or attention. 

  •  Informatics/Psychology: 70/30
  • Theory/Practice: 30/70

Prof. Dr. Frank Steinicke und Prof. Dr. Eva Bittner

LFF project (1 thesis)

As part of a pilot study on "Social Interaction Dynamics and Expectation Effects in Psychotherapy", we want to examine, by using an avatar versus a human therapist, how varying degrees of empathy in the social interaction affect patients' treatment-related expectations and the nonverbal movement synchrony between patient and therapist, or patient and avatar. The possibilities of capturing nonverbal synchrony in an online or hybrid setting using state-of-the-art technology, such as depth cameras, are to be explored. Within the study, there is the opportunity to work closely with the Clinical Psychology group at Helmut-Schmidt-Universität and to gain experience in contact with patients.

  • How can nonverbal synchrony be captured in a standardized informational session about psychotherapy?
  • When capturing nonverbal movement synchrony, are the participant's movements coupled to those of the avatar or to those of the real person behind the avatar?
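
One common operationalization of nonverbal synchrony is the windowed cross-correlation of two motion-energy time series (patient vs. therapist or avatar). The following plain-Python sketch computes the Pearson correlation at a given lag; a real pipeline would add windowing and motion-energy extraction from the depth-camera data:

```python
# Hypothetical sketch: Pearson correlation of two movement time series at
# a given lag, the building block of windowed cross-correlation measures
# of movement synchrony. Data values below are invented.

def pearson(x: list[float], y: list[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def synchrony_at_lag(leader: list[float], follower: list[float], lag: int) -> float:
    """Correlate leader[t] with follower[t + lag] (follower trails by `lag` frames)."""
    if lag < 0:
        return synchrony_at_lag(follower, leader, -lag)
    if lag:
        leader, follower = leader[:-lag], follower[lag:]
    return pearson(leader, follower)

a = [0.1, 0.5, 0.9, 0.4, 0.2, 0.6]
b = [0.0, 0.1, 0.5, 0.9, 0.4, 0.2]  # same pattern, delayed by one frame
print(round(synchrony_at_lag(a, b, 1), 2))  # 1.0
```

Scanning `lag` over a small range and taking the peak correlation per window is the usual way to quantify who leads and how strongly the movements are coupled.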

LFF project (1 thesis)

In an informatics thesis in cooperation with the Clinical Psychology group at Helmut-Schmidt-Universität, a pilot study on "Social Interaction Dynamics in Psychotherapy" will explore possibilities for the automated capture of gestures in psychotherapeutic conversations using modern technology (e.g., depth cameras). The goal is to create the data basis for developing automated tracking of relevant gestures. To this end, relevant gestures in the psychotherapeutic context (e.g., rejection through head shaking) are first to be identified and coded in a laboratory study, in order to enable automated capture in the long term. Within the study, there is the opportunity to work closely with the Clinical Psychology group and to gain experience in contact with patients.

  • Can meaningful gestures be extracted from video recordings of a conversation?
  • Which gestures can be identified, and how can they be automatically captured and analyzed?
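
As a toy illustration of what automated gesture capture could build on, the sketch below detects a head-shake-like oscillation in a yaw-angle time series, such as a depth-camera head tracker might produce. All thresholds and data are invented placeholders:

```python
# Hypothetical sketch of one gesture heuristic: detect a head shake
# ("rejection") from a yaw-angle time series. A shake is counted when the
# yaw direction reverses several times and the movement is large enough.

def detect_head_shake(yaw_degrees: list[float], min_reversals: int = 3,
                      min_amplitude: float = 10.0) -> bool:
    """Return True if the yaw trace contains a head-shake-like oscillation."""
    if max(yaw_degrees) - min(yaw_degrees) < min_amplitude:
        return False  # movement too small to count
    # Count sign reversals of the frame-to-frame yaw velocity
    deltas = [b - a for a, b in zip(yaw_degrees, yaw_degrees[1:])]
    reversals = sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)
    return reversals >= min_reversals

shake = [0, 12, 20, 8, -10, -18, -5, 9, 15, 2, -8]  # oscillating yaw
still = [0, 1, 2, 2, 3, 3, 4]                       # barely moving
print(detect_head_shake(shake), detect_head_shake(still))  # True False
```

In the actual project, heuristics like this would only be a baseline; the coded laboratory data are meant to enable learned classifiers later on.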

Walter Kempowski - Ortslinien (Bachelor thesis)

Walter Kempowski's Ortslinien project is a multimedia artwork that remained unfinished; the artist worked on its realization until his death in 2007. On his Macintosh Performa, he left behind, in a specific folder structure, an extensive collection of several thousand media snippets (text, sound, image, video) in various file types, some of which are no longer supported by Apple, intended for the realization of the project, as well as written statements on the planned implementation.

Kempowski planned a digitally mediated juxtaposition of pairs of excerpts from literary, scientific, or everyday texts, television broadcasts, photos, newspaper articles, and much more, whose production dates, between roughly 1800 and 2000, are exactly 100 years apart. Kempowski's goal was to enable, in virtual space, a different form of writing and receiving history for a broad public by interlinking the time layers. The project thus sits at the boundary between artwork and digital archive.

Central question

The goal of opening up this digital estate today would have to be a digital presentation using state-of-the-art technology, which requires developing innovative concepts for virtual reception and usage scenarios that do justice to Kempowski's concept.

(Bachelor / Master thesis) Virtual reality (VR)-supported visualization of volumetric soil water content for targeted irrigation decisions on sports facilities

Description

As part of sustainability strategies in sports, it will become increasingly important to make precise irrigation decisions in order to minimize water consumption. Soil sensors or manually operated moisture meters can be used to determine the volumetric water content. However, a greenkeeper/groundskeeper can only work with these data to a limited extent, since precise work requires watering by hand with a hose rather than with the sprinkler system. To support greenkeepers/groundskeepers in their work, the data need to be visualized. A virtual reality (VR)-supported solution would be ideal, since the soil data could be displayed visually and live in the field.

Field trials are planned for a football stadium in Hamburg and a golf course.
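
A minimal building block of such a visualization is mapping a volumetric water content (VWC) reading to an overlay color. The Python sketch below interpolates from red (dry) through yellow to blue (saturated); the thresholds are placeholders, not agronomic recommendations:

```python
# Hypothetical sketch: map a volumetric water content (VWC) reading to an
# RGB color for an in-field overlay. Dry/wet thresholds are placeholders.

def vwc_to_rgb(vwc: float, dry: float = 0.10, wet: float = 0.40) -> tuple[int, int, int]:
    """Interpolate red (dry) -> yellow -> blue (saturated) across the VWC range."""
    t = max(0.0, min(1.0, (vwc - dry) / (wet - dry)))  # normalize to [0, 1]
    if t < 0.5:  # red (255,0,0) to yellow (255,255,0)
        return (255, int(510 * t), 0)
    u = (t - 0.5) * 2  # yellow (255,255,0) to blue (0,0,255)
    return (int(255 * (1 - u)), int(255 * (1 - u)), int(255 * u))

print(vwc_to_rgb(0.10))  # (255, 0, 0) - dry
print(vwc_to_rgb(0.40))  # (0, 0, 255) - saturated
```

In the VR solution, each interpolated sensor cell of the pitch would be tinted with such a color so the greenkeeper can see dry patches directly in the field.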

Orientation

  • Informatics/Psychology: 70/30

Prof. Dr. Frank Steinicke , Dr. Daniel Hahn ( info "AT" golfagronomy.de ) (Hahn Turf Agronomy)

[Bachelor/Master Thesis] Virtual Reality Exergames

Virtual reality exercise games (exergames) have great potential to motivate users to exercise. However, research in this area is still limited. For example, which game elements influence motivation, the importance of feedback, and social exergames have not yet been thoroughly investigated. These are only a few suggestions for the direction of a thesis, and this is a very open topic. If there is a specific direction in virtual exergames you would like to focus on, feel free to contact me and we can discuss potential topics.

  • Informatics/Psychology: 70/30

Sukran Karaosmanoglu

(Bachelor/Master Thesis) Direct Assistance in VR

When users run into comprehension difficulties in VR, e.g., because buttons cannot be found, they look in the wrong direction, or information was not taken in: how can one help during a study or during cognitive-physical training without physically touching the person or taking them out of VR? Verbal explanations are often hard to understand, especially for inexperienced users. The task of this thesis is to develop a system that provides participants with optical/haptic assistance directly in VR, for example through signposts, vibrations, or see-through, and to compare it with verbal assistance. This involves networking, programming with Unity (VR and mobile/web), and an understanding of the problems that arise when using VR. Literature: Wang et al., "Exploring the use of gesture in collaborative tasks"; Pinho et al., "Cooperative object manipulation in immersive virtual environments: framework and techniques"; Teo et al., "Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction"; Kruse et al., "Blended Collaboration: Communication and Cooperation Between Two Users Across the Reality-Virtuality Continuum".

Lucie Kruse

Own topic ideas

Do you have your own idea for a potential topic? Just contact one of the WIMIs (research associates). For a general overview of who is interested in which topics, please take a look at the People website.

  • Informatics/Psychology: ?/?
  • Software/Hardware: ?/?
  • Theory/Practice: ?/?

Human-Computer Interaction

Human-Computer Interaction (HCI) is a rapidly expanding area of research and development that has transformed the way we use computers over the last thirty years. Research topics and areas include augmented reality, collective action, computer-mediated communication, computer-supported collaborative work, crowdsourcing and social computing, cyberlearning and future learning technologies, inclusive technologies and accessibility, interactive audio, mixed-initiative systems, mobile interaction design, multi-touch interaction, social media, social networks, tangible user interfaces, ubiquitous computing, and user-centered design.

Northwestern hosts a vibrant HCI community across schools, with faculty and students involved in a wide range of projects. Students in HCI are enrolled in programs in Computer Science, Communication, Learning Sciences, and Technology & Social Behavior. Students also take courses and attend seminars through the Segal Design Institute.

Nabil Alshurafa
Associate Professor of Preventive Medicine and (by courtesy) Computer Science and Electrical and Computer Engineering

Sruti Bhagavatula
Assistant Professor of Instruction

Larry Birnbaum
Professor of Computer Science

Jeremy Birnholtz
Associate Professor, Communication Studies
Associate Professor, Department of Computer Science

Nick Diakopoulos
Assistant Professor, Northwestern School of Communication

Elizabeth Gerber
Professor of Mechanical Engineering and (by courtesy) Computer Science
Professor of Communication Studies
Co-Director, Center for Human Computer Interaction + Design

Darren Gergle
Professor, Communication Studies and (by courtesy) Computer Science

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director, Master of Science in Artificial Intelligence Program
Director, Center for Advancing Safety of Machine Intelligence (CASMI)

Michael Horn
Professor of Education and Social Policy

Ian Horswill
Associate Professor of Computer Science

Jessica Hullman
Ginni Rometty Professor

Matthew Kay
Associate Professor of Communication Studies

Eleanor O'Rourke
Assistant Professor of Computer Science
Assistant Professor of Education and Social Policy

Bryan Pardo

Sarah Van Wart
Adjunct Assistant Professor

Uri Wilensky
Lorraine Morton Professor

Marcelo Worsley
Karr Family Associate Professor of Computer Science
Associate Professor of Learning Sciences, School of Education and Social Policy

Haoqi Zhang


Human Computer Interaction

Thesis project and colloquium

This is the starting point for any HCI student about to begin the graduation project. For questions that are not covered here, e-mail the graduation coordinator.

Before starting the thesis project, students are strongly advised to first attend the thesis information session, which is offered at the start of each teaching period. See course INFOMTIMHC for more info.

General description

The HCI Graduation Project is split into a 15 EC project proposal phase and a 25 EC thesis phase. The thesis project takes about 8 months: 40 EC are gained through the two phases. The set-up phase that is necessary to arrange your project is excluded from the EC count.

The thesis project consists of a project idea, a graduation supervisor, and a graduation project facilitator. The project facilitator can be either a company or the university. Original ideas from students are welcome, as long as they are aligned with the research interests and/or proposed projects of the supervisors: see the KonJoin system for supervisors and projects. Companies can propose ideas by contacting the graduation coordinator.

When can a thesis be started?  

The prerequisites for starting with the thesis project are:

  • you must have obtained at least 67.5 EC.
  • you must have successfully completed the four mandatory courses.
  • the contents of the project should be related to an HCI topic.

An exception can be granted by the HCI programme coordinator ( [email protected] ) for students with one pending course.

Timing.  Take into account that your supervisors will need sufficient time to review the different chapters of your thesis. It is wise to deliver individual chapters as they are ready. 

This preliminary step is executed before the official start of Phase 1. The duration largely depends on how quickly a supervisor is found and a topic is agreed upon. It is recommended to start preparing for your graduation project about ½ year before your planned start of the First Phase. This part is excluded from the duration of the thesis project.

  • Define a topic: the topic has to be agreed with the department member who will act as a first supervisor. This is any member of the Department of Information and Computing Sciences. Arrange meetings with staff members to discuss possible options, based on their research interests (look at their webpages, their google scholar profile, the KonJoin system, or ask the graduation coordinator). If unsure about possible topics, please arrange a meeting with the graduation coordinator. Students can also try to arrange a project that fits within an internship with a company. Any project, however, requires a first supervisor from the department who guarantees the scientific quality of the thesis project.
  • HCI Colloquium enrollment: enroll in the HCI Colloquium [INFOMCHCI]. Regular and active participation in the colloquium is a compulsory part of the HCI thesis project; exceptions can be made for students conducting their thesis project outside the Netherlands. 
  • Write and sign the research application form: together with the first supervisor, describe your project's aims and research goals using the "research application form", which formalizes the topic of the thesis project. You and your first supervisor both have to sign this form. For more information about the research and application form look at the procedures and forms for research project and internship.
  • Work placement agreement: If you conduct a project outside UU, a "work placement agreement" should be filled in, and signed by the student, company supervisor, and the Science Research Project Coordinator ( [email protected] ). Deviations to the standard contract shall be discussed with the Science Research Project Coordinator. 
  • It is compulsory for all students starting their thesis projects after September 1, 2022, to complete the Ethics and Privacy Quick Scan before the Part 1 submission into Osiris. More information on the Quick Scan (in Word and Qualtrics) and sample information sheets and consent forms can be found here. The moderator email address for the Quick Scan is [email protected] . You should complete the Quick Scan together with your first supervisor, and the completed form must be added to the appendices of your thesis report.

First phase (15 EC)

The student defines the research method for the thesis, and conducts a scientific survey of the literature in the field of study. The intended learning outcomes are that the student should:

  • Be able to design and develop a research plan; and
  • demonstrate a thorough understanding of the relevant literature.

You will officially start the First Phase by starting a new case in Osiris: 'My Cases' > 'Start Case' > 'Research Project GSNS'. Please consult with your first supervisor before starting this process.

The proposal contains at least the following elements:

  • Problem statement: gap in the literature and/or practice
  • Completed literature research for the project
  • Research question and subquestions
  • Description and justification of the research methodology
  • Plan for the evaluation of the results/outcome
  • Skeleton of the thesis
  • Time plan for the second part of the thesis
  • Completed Quick Scan
  • A review of the literature that confirms that the gap exists and that the proposed research method is suitable and can lead to interesting scientific insights. The literature review can be performed using any technique, spanning from an enumeration of related work at one extreme to a systematic literature study at the other.

The work for this phase will not be graded but only marked as "Voldaan" (pass). The evaluation of the first part should consider the following factors: the project's context, research approach, and presentation. The first supervisor will use the template (see the link at the bottom of this page) and is in charge of communicating the result to the Graduate School of Natural Sciences, with the graduation coordinator in copy.

Delays. Part 1 should be finalized within 3 months of the beginning of the project. If this part is ranked as not satisfactory, a retake project proposal must be handed in. For details, see the delay protocol at the end of this page.

Second phase (25 EC)

This phase concerns the actual execution of the research according to the approach defined in Phase 1. The second phase is passed when the student's thesis is approved by both supervisors and a final presentation has been successfully given. The learning outcomes for the student are the following:

  • Conduct sound scientific research according to a predefined plan,
  • Contribute to the scientific body of knowledge,
  • Critically review the research and its plan,
  • Give a convincing presentation about the work, and
  • Write a scientific report about the conducted research.

The grade for Phase 2 will take into account the following artifacts:

  • The thesis report
  • The final presentation

The grade is calculated using the assessment form in Osiris Case.

Cum-laude graduation. To obtain cum laude, Phase 2 must be graded at least 8.5.

Delays. The student will receive a grade 8 months after the beginning of the entire project. In special cases, when students have achieved all the ILOs, graduation can take place before the 8 months. For retake rules, see the delay protocol at the end of this page.

When your supervisors agree that your scientific paper is finalized, it is time to wrap up the project and graduate.

  • Set date for graduation: both supervisors should agree on the date, including the time.
  • Ask for HCI Colloquium EC: obtain 3 EC for the HCI colloquium by sending a message to the HCI colloquium coordinator.
  • Arrange room and beamer: send an e-mail to the department's secretaries to arrange a room and a beamer (if not included in the room) for the defense. Please make sure to include the time, date, title of the thesis, supervisor, and the expected number of attendees.
  • Deliver printed copies of the thesis to the supervisors: at least one week before the defense; this is compulsory unless the supervisors explicitly say they don't want printed copies. Don't forget to add the completed Quick Scan to the appendices of your thesis report.
  • Thesis defense: the student gives a presentation of 25 minutes, followed by a question-and-answer session that typically lasts 15-20 minutes. A grade will be decided and usually announced afterwards.
  • Publish the thesis to "Osiris Scripties"
  • Graduation ceremony: an official ceremony, held in the Academiegebouw in Domplein, where the diplomas are handed out.

Thesis topics

Examples of thesis topics:

  • Ephemerality: cognitive effects of the “burn after read” paradigm in non-persistent media (e.g. Snapchat)
  • Augmented Reality in safety training: Issues with usability and user experience.
  • Human-Robot Interaction: Designing explanations of robot behaviour.
  • Collaborative crowd sourcing: Designing tools for large-scale creativity.
  • Adherent Persuasive Technology: Personalized well-being and cyber-security interventions.
  • Empathic System Design: The development of sensitive agents.

Procedures and forms for research project and internship

Within all Master's programmes one or more research projects are mandatory. Please see ‘Study programme’ for general information on such projects in your curriculum. In many cases, a research project may be carried out outside of the university, in the form of an internship at a company, research institute, or another university. This can be in the Netherlands as well as abroad, with the exception of ADS (see also: ‘ study abroad ’). 

You are required to apply for approval of your research project by submitting a request via OSIRIS Student. Please select 'My Cases', 'Start Case' and then 'Research Project GSNS'. Important: in order to apply completely and correctly, you must have discussed the project setup with your intended project supervisor beforehand! We advise you to study the request form prior to discussing it with your supervisor, or to fill it out together, to make sure you obtain all of the required information.

After submitting your request, it will be forwarded to your master’s programme coordinator, the board of examiners and student affairs for checks and approvals. After approval of your project it will be automatically registered in OSIRIS. If something needs to be amended, you will be notified by email. Please DO NOT register yourself in OSIRIS for the relevant research project courses. You will be automatically registered upon approval of the Research Application Form.

  • Please note that this protocol (English version) applies when a project is delayed.
  • In case of a project or internship outside of Utrecht University, please make sure you fill out the Work Placement Agreement in OSIRIS Student under ‘My Cases’. The agreement becomes available there once you fill out the form for research project approval.

Master’s Thesis Presentation • Human-Computer Interaction • Technology Design Recommendations Informed by Observations of Videos of Popular Musicians Teaching and Learning Songs By Ear

Please note: This master’s thesis presentation will take place in DC 3317.

Christopher Liscio, Master’s candidate David R. Cheriton School of Computer Science

Supervisor: Professor Dan Brown

Instrumentalists who play popular music often learn songs by ear, using recordings in lieu of sheet music or tablature. This practice was made possible by technology that allows musicians to control playback events. Until now, researchers have not studied the human-recording interactions of musicians attempting to learn pop songs by ear.

Through a pair of studies analyzing the content of online videos from YouTube, we generate hypotheses and seek a better understanding of by-ear learning from a recording. Combined with results from neuroscience studies of tonal working memory and aural imagery, our findings reveal a model of by-ear learning that highlights note-finding as a core activity. Using what we learned, we discuss opportunities for designers to create a set of novel human-recording interactions, and to provide assistive technology for those who lack the baseline skills to engage in the foundational note-finding activity.

COMMENTS

  2. HCI thesis topics

    This thesis aims to investigate the unique interplay of human-computer interaction (HCI) and space exploration. A prototype of a computer interface in space should be implemented in VR, and the defined concepts should be evaluated by conducting a user study. ... This thesis aims to create a desktop overlay simulation tool for visual impairment ...

  3. PDF HUMAN COMPUTER INTERACTION IN GAME DESIGN

    Title of Bachelor's Thesis: Human Computer Interaction in Game Design. Supervisor: Ilkka Mikkonen. Term and year of completion: Spring 2012. Number of pages: 75. Computer and video games are among the most popular and most important products of the software industry. One of the greatest contributors to this success is the rapid improvement of ...

  4. Thesis : Human-Computer Interaction : Universität Hamburg

    On this page, you will find the most important information on how to get in contact with us and how to start your thesis. This website provides you with some information about how the process of writing a thesis at the Human-Computer Interaction group works. Additionally, you can find official information on the website of the Academic Office.

  5. PDF Human-Computer Interaction and AI

    Computer Science from Stanford University, where his thesis won the Arthur P Samuel Award. He has also previously worked at Microsoft Research, the Barcelona Supercomputing Center, and Carnegie Mellon University. Nikolas Martelaro is an Assistant Professor of Human-Computer Interaction at Carnegie Mellon University, where he ...

  6. HCI WHS Theses

    Topics. If you are interested in finishing your Bachelor's or Master's studies with a thesis in Human-Computer Interaction, you are welcome to contact us. Our research projects are a rich source of ideas. In general, writing a thesis with a focus on HCI requires you to take a user-centred perspective and to apply adequate methods ...

  7. human computer interaction Latest Research Papers

    Purpose: This study aims to propose a service-dominant logic (S-DL)-informed framework for teaching innovation in the context of human-computer interaction (HCI) education involving large industrial projects. Design/methodology/approach: This study combines S-DL from the field of marketing with experiential and ...

  8. Emerging human-computer interaction interfaces : a categorizing

    This thesis is based mainly on qualitative analysis due to the lack of comprehensive data on the new human-computer interfaces. Future research can collect quantitative data based on the framework of the five domains of general computing activities and their categorical requirements. It is also possible to extend the model to other computing ...

  9. Vision based hand gesture recognition for human computer interaction: a

    As computers become more pervasive in society, facilitating natural human-computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human-computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as an interaction ...

  10. Bibliometric analysis of the core thesis system of Interaction Design

    Human-computer interaction (HCI) attracts more and more attention in automation, multimedia retrieval, computer systems, and universal computing. Meanwhile, it also poses new challenges to interaction design based on HCI in user experience, aesthetics, game design, and education. This study aimed to analyze and assess literature published in the field of interaction design of HCI. ...

  11. PDF Human-Computer Interaction (HCI)

    Preece (1994) defined human-computer interaction (HCI) as "the discipline of designing, evaluating and implementing interactive computer systems for human use, as well as the study of major phenomena surrounding this discipline" (Preece, 1994). Human-computer interaction as a whole thus studies both the human and the machine in ...

  12. A Systematic Literature Review for Human-Computer Interaction and

    Human-computer interaction (HCI) has been considered a computer-related cross-disciplinary domain that is strongly associated with design for information, interaction, and communication and ...

  13. (PDF) Human Computer Interaction

    On Nov 13, 2019, Alfred Tan Yik Ern published the thesis Human Computer Interaction on ResearchGate (PDF available). ...

  14. (PDF) HUMAN COMPUTER INTERACTION

    S5229793@bournemouth.ac.uk. Abstract: The improvements in the development of computer technology have contributed to the concept of Human Computer Interaction (HCI) since the computer ...

  15. Research on the human computer interaction of E-learning

    This thesis begins with an analysis of digital technology and, from a cognitive-psychological perspective, discusses the importance of human-computer interaction design for e-learning. Using visual stimuli as an example of the cognitive environment, the thesis analyzes the relationship between cognitive psychology and human ...

  16. Human-Computer Interaction

    Human-Computer Interaction (HCI) has long played a vital role in the field of science and innovation. In this paper, human-computer interaction is discussed along with its definitions, terminology, historical development, existing innovations, and ongoing advancement and future scope in the field. An outline of software architecture for human-computer interaction is also discussed in the article. ...

  17. Theses

    Joining the HCI lab for your thesis is a great opportunity for students to: get early exposure to upcoming visionary technologies. conduct creative research to contribute to the future of computing. be part of an internationally leading research lab. prepare for your next career steps: our past student alumni have started academic research at ...

  18. Potential Topics : Human-Computer Interaction : Universität Hamburg

    Topic. In this thesis, the candidate should develop a simple Apple Vision Pro demo with Unity and evaluate the development process. Hence, the candidate should have some experience in Xcode development and ideally in Unity, as well as in libraries such as SwiftUI, RealityKit, and ARKit. The exact topic is yet to be defined.

  19. Human-Computer Interaction

    Human-Computer Interaction (HCI) is a rapidly expanding area of research and development that has transformed the way we use computers in the last 30 years. Northwestern hosts a vibrant HCI community across schools with faculty and students involved in a wide range of projects. Research topics and areas include augmented-reality, collective action, computer-mediated communication, computer ...

  20. A Review Paper on Human Computer Interaction

    Research experiments in human-computer interaction typically involve young participants who are educated and technically knowledgeable. This paper focuses on the mental model in Human Computer ...

  21. Thesis project and colloquium

    The thesis project takes about 8 months: 40 EC are gained through the two phases. The set-up phase that is necessary to arrange your project is excluded from the EC count. The thesis project consists of a project idea, a graduation supervisor, and a graduation project facilitator. The project facilitator can be either a company or the ...

  23. Computational Fabrication and Assembly for In Situ Manufacturing

    In this thesis, I introduce digital fabrication platforms with co-developed hardware and software that draw on tools from robotics and human-computer interaction to automate manufacturing of customized artefacts at the point of need. Highlighting three research themes across fabrication machines, modular assembly, and programmable materials ...