J Healthc Eng. v.2021; 2021.
This article has been retracted.

Deep Learning-Based Real-Time AI Virtual Mouse System Using Computer Vision to Avoid COVID-19 Spread

1 Department of CSE, Hindusthan College of Engineering and Technology, Coimbatore, India

2 Rathinam Group of Institutions, Coimbatore, India

3 Department of ECE, Hindusthan College of Engineering and Technology, Coimbatore, India

4 Department of CSE, Hindusthan College of Engineering and Technology, Coimbatore, India

5 Department of ECE, Anna University, Chennai, India

Associated Data

The hand tracking data used to support the findings of this study are included within the article. The study uses Google's framework; hence, no new data are needed to train the model.

The mouse is one of the wonderful inventions of Human-Computer Interaction (HCI) technology. Currently, a wireless or Bluetooth mouse is still not completely device-free, since it requires a battery for power and a dongle to connect to the PC. The proposed AI virtual mouse system overcomes this limitation by employing a webcam or built-in camera to capture hand gestures and detect hand tips using computer vision. The system uses a machine learning algorithm: based on the hand gestures, the computer can be controlled virtually, performing left click, right click, scrolling, and cursor movement without a physical mouse. Hand detection is based on deep learning. The proposed system therefore helps avoid COVID-19 spread by eliminating human contact with shared devices when controlling the computer.

1. Introduction

With the development of technologies in the area of augmented reality, the devices we use in daily life are becoming compact, in the form of Bluetooth or wireless technologies. This paper proposes an AI virtual mouse system that uses hand gestures and hand tip detection to perform mouse functions on the computer using computer vision. The main objective of the proposed system is to perform mouse cursor and scroll functions using a web camera or a built-in camera instead of a traditional mouse device. Hand gesture and hand tip detection using computer vision serves as the HCI [1] with the computer. With the AI virtual mouse system, we can track the fingertip of the hand gesture using a built-in camera or web camera, move the cursor with it, and perform the mouse cursor operations and the scrolling function.

A wireless or Bluetooth mouse still requires devices: the mouse itself, a dongle to connect it to the PC, and a battery to power it. In this paper, the user instead uses the built-in camera or a webcam and controls the computer mouse operations with hand gestures. In the proposed system, the web camera captures and processes the frames, recognizes the various hand gestures and hand tip gestures, and then performs the corresponding mouse function.

The Python programming language is used to develop the AI virtual mouse system, together with OpenCV, the computer vision library. The model uses the MediaPipe package to track the hands and the tips of the fingers, while the Pynput, AutoPy, and PyAutoGUI packages move the cursor around the computer screen and perform functions such as left click, right click, and scrolling. The proposed model achieved a very high accuracy level and works well in real-world applications using only a CPU, without a GPU.
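As a rough sketch of how these packages divide the work (illustrative imports, not the authors' exact code), the Python stack described above might be pulled in as follows:

    import cv2                                    # OpenCV: frame capture and image processing
    import mediapipe as mp                        # MediaPipe: hand and fingertip tracking
    import autopy                                 # moving the cursor across the full screen
    import pyautogui                              # scroll events
    from pynput.mouse import Button, Controller   # click events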

1.1. Problem Description and Overview

The proposed AI virtual mouse system can be used to overcome real-world problems, such as situations where there is no space to use a physical mouse, and to help persons who have problems in their hands and cannot control a physical mouse. Also, amid the COVID-19 situation, it is not safe to use devices by touching them, because doing so may spread the virus, so the proposed AI virtual mouse can overcome these problems, since hand gesture and hand tip detection is used to control the PC mouse functions with a webcam or a built-in camera.

1.2. Objective

The main objective of the proposed AI virtual mouse system is to develop an alternative to the regular, traditional mouse for performing and controlling the mouse functions. This is achieved with a web camera that captures the hand gestures and hand tip and then processes these frames to perform the particular mouse function, such as left click, right click, and scrolling.

2. Related Work

Some related work on virtual mice uses hand gesture detection with a glove worn on the hand, or colored tips on the fingers, for gesture recognition, but these approaches are less accurate for mouse functions. Recognition is less accurate with gloves, which also do not suit some users, and color-tip systems fail when the colored tips are not detected. Some efforts have also been made toward camera-based detection of hand gesture interfaces.

In 1990, Quam introduced an early hardware-based system in which the user had to wear a DataGlove [2]. Although Quam's system gives results of higher accuracy, some gesture controls are difficult to perform with it.

Dung-Hua Liou, ChenChiung Hsieh, and David Lee [3] proposed "A Real-Time Hand Gesture Recognition System Using Motion History Image" in 2010. The main limitation of this model is handling more complicated hand gestures.

Monika B. Gandhi, Sneha U. Dudhane, and Ashwini M. Patil [4] proposed "Cursor Control System Using Hand Gesture Recognition" in 2013. The limitation of this work is that stored frames need to be processed for hand segmentation and skin-pixel detection.

Vinay Kr. Pasi, Saurabh Singh, and Pooja Kumari [5] proposed "Cursor Control using Hand Gestures" in the IJCA journal in 2016. The system proposes different color bands to perform different mouse functions. Its limitation is that it depends on various colors to perform the mouse functions.

Chaithanya C, Lisho Thomas, Naveen Wilson, and Abhilash SS [6] proposed "Virtual Mouse Using Hand Gesture" in 2018, where detection is based on colors, but only a few mouse functions are performed.

3. Algorithm Used for Hand Tracking

For detecting hand gestures and tracking hands, the MediaPipe framework is used, and the OpenCV library is used for computer vision [7–10]. The algorithm uses machine learning concepts to track and recognize the hand gestures and the hand tip.

3.1. MediaPipe

MediaPipe is an open-source framework of Google that is used for building machine learning pipelines. The framework is useful for cross-platform development since it is built on time-series data. MediaPipe is multimodal, and the framework can be applied to various audio and video streams [11]. Developers use the MediaPipe framework to build and analyze systems through graphs, and it has also been used to develop systems for application purposes. The steps involved in a MediaPipe-based system are specified in the pipeline configuration, and the pipeline can run on various platforms, allowing scalability on mobile and desktop. The MediaPipe framework has three fundamental parts: performance evaluation, a framework for retrieving sensor data, and a collection of reusable components called calculators [11]. A pipeline is a graph of calculators connected by streams through which packets of data flow. Developers can replace or define custom calculators anywhere in the graph to create their own applications. The calculators and streams together form a data-flow diagram; the graph (Figure 1) is built with MediaPipe, where each node is a calculator and the nodes are connected by streams [11].

Figure 1. MediaPipe hand recognition graph [12].

A single-shot detector model is used by MediaPipe to detect and recognize a hand or palm in real time. In the hand detection module, the model is first trained for palm detection, because palms are easier to train on; furthermore, non-maximum suppression works significantly better on small objects such as palms or fists [13]. The hand landmark model then locates 21 joint or knuckle co-ordinates in the hand region, as shown in Figure 2.

Figure 2. Co-ordinates or landmarks in the hand [12].
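The following minimal sketch shows how the MediaPipe Hands solution can be initialized and queried for the 21 landmarks of Figure 2. It is a sketch under assumptions: the file name is a placeholder, and the confidence threshold is a typical value rather than one reported in the paper.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True, max_num_hands=1,
                           min_detection_confidence=0.7)

    img = cv2.imread("hand.jpg")    # placeholder sample image (BGR)
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        h, w, _ = img.shape
        for hand in results.multi_hand_landmarks:
            # each of the 21 landmarks carries normalized x, y (and relative z) values
            for idx, lm in enumerate(hand.landmark):
                print(idx, int(lm.x * w), int(lm.y * h))    # pixel co-ordinates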

3.2. OpenCV

OpenCV is a computer vision library containing image-processing algorithms for object detection [14]. OpenCV provides Python bindings, and real-time computer vision applications can be developed with it. The OpenCV library is used in image and video processing and in analysis tasks such as face detection and object detection [15].

4. Methodology

The various functions and conditions used in the system are explained in the flowchart of the real-time AI virtual mouse system in Figure 3 .

Figure 3. Flowchart of the real-time AI virtual mouse system.

4.1. The Camera Used in the AI Virtual Mouse System

The proposed AI virtual mouse system is based on the frames captured by the webcam of a laptop or PC. Using the Python computer vision library OpenCV, the video capture object is created, and the web camera starts capturing video, as shown in Figure 4. The web camera captures the frames and passes them to the AI virtual mouse system.

Figure 4. Capturing video using the webcam (computer vision).
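A minimal capture loop of the kind described above might look like this (a sketch: the camera index, window name, and quit key are assumptions, not taken from the paper):

    import cv2

    cap = cv2.VideoCapture(0)            # 0 selects the default built-in webcam
    while True:
        success, frame = cap.read()      # grab one frame per iteration
        if not success:
            break
        cv2.imshow("AI Virtual Mouse", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop capturing
            break
    cap.release()
    cv2.destroyAllWindows()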

4.2. Capturing the Video and Processing

The AI virtual mouse system uses the webcam, capturing each frame until the program terminates. Each video frame is converted from the BGR to the RGB color space so that the hands can be found frame by frame, as shown in the following code:

def findHands(self, img, draw=True):
    # convert the frame from BGR (OpenCV's default) to RGB, which MediaPipe expects
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # run the MediaPipe hand-tracking pipeline on the RGB frame
    self.results = self.hands.process(imgRGB)
    return img

4.3. Virtual Screen Matching: Rectangular Region for Moving through the Window

The AI virtual mouse system uses a transformational algorithm to convert the fingertip co-ordinates from the webcam frame to the full computer screen for controlling the mouse. When the hands are detected and we find which finger is up for performing the specific mouse function, a rectangular box is drawn in the webcam region, corresponding to the computer window, within which we move the mouse cursor throughout the window, as shown in Figure 5.

Figure 5. Rectangular box for the area of the computer screen where we can move the cursor.
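One common way to implement this transformation is linear interpolation from the rectangular webcam region to screen co-ordinates, sketched below; the resolutions, the margin frameR, and all variable names are assumptions for illustration:

    import numpy as np

    wCam, hCam = 640, 480      # assumed webcam frame resolution
    wScr, hScr = 1920, 1080    # assumed screen resolution
    frameR = 100               # margin of the rectangular region, in pixels

    x1, y1 = 320, 240          # example fingertip position inside the webcam frame

    # map [frameR, wCam - frameR] onto [0, wScr], and likewise for the y axis
    x3 = np.interp(x1, (frameR, wCam - frameR), (0, wScr))
    y3 = np.interp(y1, (frameR, hCam - frameR), (0, hScr))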

4.4. Detecting Which Finger Is Up and Performing the Particular Mouse Function

In this stage, we detect which finger is up using the tip ID of the respective finger, found with MediaPipe, and the respective co-ordinates of the fingers that are up, as shown in Figure 6; according to that, the particular mouse function is performed.

Figure 6. Detection of which finger is up.
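The tip-ID test can be sketched as follows, assuming the MediaPipe landmarks have been flattened into a list lmList of [id, x, y] entries; the helper name and the right-hand assumption for the thumb are ours. Note that the paper numbers the fingertips 0 to 4, while the corresponding MediaPipe landmark indices are 4, 8, 12, 16, and 20.

    tipIds = [4, 8, 12, 16, 20]    # MediaPipe landmark indices of the five fingertips

    def fingers_up(lmList):
        fingers = []
        # thumb: compare x co-ordinates, since the thumb extends sideways (right hand)
        fingers.append(1 if lmList[tipIds[0]][1] > lmList[tipIds[0] - 1][1] else 0)
        # other fingers: a finger is "up" when its tip lies above the joint two landmarks below
        for i in range(1, 5):
            fingers.append(1 if lmList[tipIds[i]][2] < lmList[tipIds[i] - 2][2] else 0)
        return fingers    # e.g. [0, 1, 1, 0, 0] means index and middle fingers are up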

4.5. Mouse Functions Depending on the Hand Gestures and Hand Tip Detection Using Computer Vision

4.5.1. For the Mouse Cursor Moving around the Computer Window

If the index finger with tip Id = 1 is up, or both the index finger with tip Id = 1 and the middle finger with tip Id = 2 are up, the mouse cursor is made to move around the window of the computer using the AutoPy package of Python, as shown in Figure 7.

Figure 7. Mouse cursor moving around the computer window.
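Combining the finger test with the co-ordinate mapping of Section 4.3, the moving mode might be gated as in the sketch below; fingers_up, lmList, x3, y3, and wScr come from the earlier snippets, and the mirrored x co-ordinate is a common convention so that the cursor follows the hand:

    import autopy

    fingers = fingers_up(lmList)    # lmList comes from the hand tracker, as above
    if fingers[1] == 1:             # index finger (paper's tip Id = 1) is up: moving mode
        autopy.mouse.move(wScr - x3, y3)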

4.5.2. For the Mouse to Perform Left Button Click

If both the index finger with tip Id = 1 and the thumb with tip Id = 0 are up, and the distance between the two fingertips is less than 30 px, the computer performs a left mouse button click using the pynput Python package, as shown in Figures 8 and 9.

Figure 8. Gesture for the computer to perform left button click.

Figure 9.
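A sketch of the click test with pynput, using the 30 px threshold from the text; the fingertip positions are example values and the distance helper is ours. The right-click case of Section 4.5.3 is analogous, with Button.right and the 40 px threshold.

    import math
    from pynput.mouse import Button, Controller

    mouse = Controller()

    def fingertip_distance(p1, p2):
        # Euclidean distance in pixels between two (x, y) fingertip positions
        return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    thumb_tip, index_tip = (200, 250), (215, 262)    # example fingertip positions
    if fingertip_distance(thumb_tip, index_tip) < 30:
        mouse.click(Button.left, 1)                  # one left click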

4.5.3. For the Mouse to Perform Right Button Click

If both the index finger with tip Id = 1 and the middle finger with tip Id = 2 are up, and the distance between the two fingertips is less than 40 px, the computer performs a right mouse button click using the pynput Python package, as shown in Figure 10.

Figure 10. Gesture for the computer to perform right button click.

4.5.4. For the Mouse to Perform Scroll up Function

If both the index finger with tip Id = 1 and the middle finger with tip Id = 2 are up, the distance between the two fingers is greater than 40 px, and the two fingers are moved up the page, the computer performs the scroll-up function using the PyAutoGUI Python package, as shown in Figure 11.

Figure 11. Gesture for the computer to perform scroll up function.

4.5.5. For the Mouse to Perform Scroll down Function

If both the index finger with tip Id = 1 and the middle finger with tip Id = 2 are up, the distance between the two fingers is greater than 40 px, and the two fingers are moved down the page, the computer performs the scroll-down function using the PyAutoGUI Python package, as shown in Figure 12.

Figure 12. Gesture for the computer to perform scroll down function.
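Both scroll gestures can be sketched with PyAutoGUI as follows; the per-frame values are examples, and dy (the fingertips' vertical movement since the previous frame) is an illustrative name, not one used in the paper:

    import pyautogui

    fingers = [0, 1, 1, 0, 0]                  # index and middle fingers up
    index_tip, middle_tip = (300, 200), (350, 200)
    dy = -15                                   # fingertips moved 15 px up since the last frame

    gap = abs(index_tip[0] - middle_tip[0])    # spread between the two fingertips
    if fingers[1] and fingers[2] and gap > 40:
        # positive values scroll up, negative values scroll down
        pyautogui.scroll(120 if dy < 0 else -120)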

4.5.6. For No Action to be Performed on the Screen

If all five fingers with tip Ids 0, 1, 2, 3, and 4 are up, the computer does not perform any mouse event on the screen, as shown in Figure 13.

Figure 13. Gesture for the computer to perform no action.

5. Experimental Results and Evaluation

The proposed AI virtual mouse system presents the concept of advancing human-computer interaction using computer vision.

Cross-comparison testing of the AI virtual mouse system is difficult because only a limited number of datasets are available. The hand gestures and fingertip detection were tested under various illumination conditions and at different distances from the webcam. An experimental test was conducted to summarize the results shown in Table 1. The test was performed 25 times by 4 persons, resulting in 600 gestures with manual labelling: each person tested the AI virtual mouse system 10 times under normal lighting, 5 times under faint lighting, 5 times close to the webcam, and 5 times far from the webcam. The experimental results are tabulated in Table 1.

Table 1. Experimental results.

∗Fingertip IDs for the respective fingers: tip Id 0: thumb; tip Id 1: index finger; tip Id 2: middle finger; tip Id 3: ring finger; tip Id 4: little finger.

From Table 1, it can be seen that the proposed AI virtual mouse system achieved an accuracy of about 99%, indicating that the system performed well. As seen in Table 1, the accuracy is lowest for right click, since this gesture is the hardest for the computer to understand; the accuracy is very high for all the other gestures. Compared with previous virtual mouse approaches, our model worked very well with 99% accuracy. The graph of accuracy is shown in Figure 14.

Figure 14. Graph of accuracy.

Table 2 shows a comparison between the existing models and the proposed AI virtual mouse model in terms of accuracy.

Table 2. Comparison with existing models.

From Table 2, it is evident that the proposed AI virtual mouse performed very well in terms of accuracy compared with the other virtual mouse models. The novelty of the proposed model is that it performs most mouse functions, namely left click, right click, scroll up, scroll down, and cursor movement, using fingertip detection, and the model can control the PC like a physical mouse, but virtually. Figure 15 shows a graph comparing the models.

Figure 15. Graph for comparison between the models.

6. Future Scope

The proposed AI virtual mouse has some limitations, such as a small decrease in accuracy of the right-click function and some difficulty in clicking and dragging to select text. These limitations will be overcome in our future work.

Furthermore, the proposed method can be extended to handle keyboard functionality along with the mouse functionality virtually, which is another future scope of Human-Computer Interaction (HCI).

7. Applications

The AI virtual mouse system is useful for many applications: it reduces the space needed for a physical mouse, and it can be used in situations where a physical mouse cannot be used. The system eliminates device usage and improves human-computer interaction.

Major applications:

  • The proposed model has an accuracy of 99%, which is far greater than that of other proposed virtual mouse models, and it has many applications
  • Amidst the COVID-19 situation, it is not safe to use the devices by touching them because it may result in a possible situation of spread of the virus by touching the devices, so the proposed AI virtual mouse can be used to control the PC mouse functions without using the physical mouse
  • The system can be used to control robots and automation systems without the usage of devices
  • 2D and 3D images can be drawn using the AI virtual system using the hand gestures
  • AI virtual mouse can be used to play virtual reality- and augmented reality-based games without the wireless or wired mouse devices
  • Persons with problems in their hands can use this system to control the mouse functions in the computer
  • In the field of robotics, the proposed system like HCI can be used for controlling robots
  • In designing and architecture, the proposed system can be used for designing virtually for prototyping

8. Conclusions

The main objective of the AI virtual mouse system is to control the mouse cursor functions using hand gestures instead of a physical mouse. The proposed system is realized with a webcam or built-in camera that detects the hand gestures and hand tip and processes these frames to perform the particular mouse functions.

From the results of the model, we can conclude that the proposed AI virtual mouse system performed very well, with greater accuracy than the existing models, and that it overcomes most of their limitations. Since the proposed model has greater accuracy, the AI virtual mouse can be used in real-world applications, and it can also help reduce the spread of COVID-19, since the mouse can be operated virtually with hand gestures instead of the traditional physical mouse.

The model has some limitations, such as a small decrease in accuracy of the right-click function and some difficulty in clicking and dragging to select text. We will therefore work next to overcome these limitations by improving the fingertip detection algorithm to produce more accurate results.

Data Availability

The hand tracking data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.


Hand Gesture Control for Human–Computer Interaction with Deep Learning

  • Original Article
  • Published: 21 January 2022
  • Volume 17, pages 1961–1970 (2022)

  • S. N. David Chua (ORCID: orcid.org/0000-0003-4149-8696) 1,
  • K. Y. Richard Chin 1,
  • S. F. Lim 1 &
  • Pushpdant Jain 2

The use of gesture control has numerous advantages compared to the use of physical hardware. However, it has yet to gain popularity, as most gesture control systems require extra sensors or depth cameras to detect or capture the movement of gestures before a meaningful signal can be triggered for the corresponding course of action. This research proposes a method for a hand gesture control system that uses an object detection algorithm, YOLOv3, combined with handcrafted rules to achieve dynamic gesture control of the computer. The project uses a single RGB camera for hand gesture recognition and localization. The dataset of all gestures used for training, and their corresponding commands, was custom designed by the authors due to the lack of standard gestures specifically for human–computer interaction. Algorithms to integrate gesture commands with virtual mouse and keyboard input through the Pynput library in Python were developed to handle commands such as mouse control, media control, and others. The YOLOv3 model obtained an mAP of 96.68% on the test results. Rule-based algorithms for gesture interpretation were successfully implemented to transform static gesture recognition into dynamic gesture control.



Acknowledgements

This research was funded by Universiti Malaysia Sarawak under the UNIMAS publication support fee fund.

Author Information

Authors and Affiliations

Faculty of Engineering, Universiti Malaysia Sarawak, 94300, Kota Samarahan, Malaysia

S. N. David Chua, K. Y. Richard Chin & S. F. Lim

School of Mechanical Engineering, VIT Bhopal University, Bhopal Indore Bypass, Sehore, 466114, India

Pushpdant Jain


Corresponding author

Correspondence to S. N. David Chua.

Ethics Declarations

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional Information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Chua, S.N.D., Chin, K.Y.R., Lim, S.F. et al. Hand Gesture Control for Human–Computer Interaction with Deep Learning. J. Electr. Eng. Technol. 17, 1961–1970 (2022). https://doi.org/10.1007/s42835-021-00972-6


Received: 14 September 2021

Revised: 21 November 2021

Accepted: 23 November 2021

Published: 21 January 2022

Issue Date: May 2022

DOI: https://doi.org/10.1007/s42835-021-00972-6


Keywords: Hand gesture, Human computer interaction, Deep learning, Object detection
International Journal of Engineering Research & Technology (IJERT)
Volume 12, Issue 04 (April 2023)

Gesture Controlled Virtual Mouse with Voice Automation


  • Authors: Prithvi J, S Shree Lakshmi, Sohan R Kumar, Suraj Nair, Sunayana S
  • Paper ID: IJERTV12IS040131
  • Volume & Issue: Volume 12, Issue 04 (April 2023)
  • Published (First Online): 17-05-2023
  • ISSN (Online): 2278-0181
  • Publisher Name: IJERT

Prithvi J, S Shree Lakshmi, Suraj Nair, and Sohan R Kumar

Department of Computer Science and Engineering, B.M.S. College of Engineering, Bengaluru, Karnataka, India

Ms. Sunayana S

Department of Computer Science and Engineering, Visvesvaraya Technological University, Belgaum, Bengaluru, Karnataka, India

Abstract: This research paper proposes a Gesture Controlled Virtual Mouse system that enables human-computer interaction using hand gestures and voice commands. The system requires no direct contact with the computer and allows virtual control of all input/output operations. It employs state-of-the-art machine learning and computer vision algorithms to recognize static and dynamic hand gestures and voice commands, without the need for additional hardware. The system comprises two modules: one that works directly on hands using MediaPipe hand detection, and another that uses gloves of any uniform color. It leverages models such as convolutional neural networks implemented by MediaPipe running on top of pybind11. The paper discusses the system's architecture, the algorithmic approach to gesture recognition, and the implementation of both modules in detail. The proposed system presents a natural and user-friendly alternative to traditional input methods and has potential applications in healthcare and education. The paper's findings will be of interest to researchers and practitioners in the field of Human-Computer Interaction.

Index Terms: Gesture Control, Virtual Mouse, Human-Computer Interaction, Hand Gestures, Voice Commands, Machine Learning, Computer Vision, MediaPipe, Convolutional Neural Networks, Pybind11, Healthcare, Education.

INTRODUCTION

The field of Human-Computer Interaction has seen significant advancements with the introduction of innovative technologies. Traditional input methods such as keyboards, mice, and touchscreens have become more sophisticated but still require direct contact with the computer, limiting the scope of interaction. Gesture-based interaction has emerged as an alternative approach, and the Gesture Controlled Virtual Mouse is an innovative technology that enables intuitive interaction between humans and computers. This research paper presents a comprehensive study of the Gesture Controlled Virtual Mouse, which leverages state-of-the-art machine learning and computer vision algorithms to enable users to control input/output operations using hand gestures and voice commands without direct contact.

The Gesture Controlled Virtual Mouse is designed using the latest technology and is capable of recognizing both static and dynamic hand gestures in addition to voice commands, making the interaction more natural and user-friendly. The system does not require any additional hardware, and its implementation is based on models such as the Convolutional Neural Network (CNN) implemented by MediaPipe running on top of pybind11. The system comprises two modules: one operates directly on hands using MediaPipe hand detection, while the other uses gloves of any uniform color. The system currently supports the Windows platform.

This research paper presents a detailed analysis of the Gesture Controlled Virtual Mouse, covering the system's architecture, the algorithmic approach to gesture recognition, and the implementation of both modules. The paper also discusses the advantages of the Gesture Controlled Virtual Mouse over traditional input methods, such as the increased naturalness and user-friendliness of the interaction. The findings presented here will contribute to the growing field of Human-Computer Interaction and will be useful for researchers, developers, and anyone interested in the latest advances in gesture-based interaction technology.

PROBLEM STATEMENT

With the emergence of ubiquitous computing, traditional methods of user interaction involving the keyboard, mouse, and pen are no longer adequate. The limitations of these devices restrict the range of instructions that can be executed. Direct use of hand gestures and voice commands can serve as input for more natural and intuitive interaction, enabling users to perform everyday tasks with ease. Such methods offer a more extensive instruction set and eliminate the need for direct physical contact with the computer, further enhancing the user's experience.

LITERATURE SURVEY

Gesture-based mouse control using computer vision has long been a topic of interest for researchers. Various methods have been proposed for gesture recognition; in this paper, the authors propose a new method based on color detection and masking. The system is implemented in the Python programming language using OpenCV, a popular computer vision library. The proposed system is a virtual mouse that works only on webcam-captured frames, tracking colored fingertips.

The objective of this paper is to develop and implement an alternative system to control a mouse cursor: hand gesture recognition using a webcam and a color detection method. The ultimate outcome is a system that recognizes hand gestures and controls the mouse cursor of any computer using the color detection method.

The system works on the frames captured by the webcam of the computer or the built-in camera of a laptop. By creating the video capture object, the system captures video with the webcam in real time. The camera should be positioned so that it can see the user's hands in the right positions.


In the system previously proposed in Kabid Hassan Shibly's "Design and Development of Hand Gesture Based Virtual Mouse" (ICASERT 2019), color detection is done by detecting the color pixels of fingertips wearing color caps in the frames captured by the webcam. This is the initial and fundamental step of the system. The outcome of this step is a grayscale image in which the intensity of the pixels differs between the color cap and the rest of the frame, so the color cap area is highlighted. Then rectangular bounding boxes (masks) are created around the color caps, and the color caps are tracked; the gesture is detected from this tracking. First, the center of the two detected color objects is calculated from the coordinates of the centers of the detected rectangles. The built-in OpenCV function is used to draw a line between the two coordinates, and the midpoint is computed using the midpoint equation. This midpoint is the tracker for the mouse pointer, which follows it. In this system, coordinates in the camera frame's resolution are converted to screen resolution. A predefined location for the mouse is set, so that when the mouse pointer reaches that position, the mouse starts to work; this may be called an open gesture. This allows the user to control the mouse pointer.

The previous system uses close gestures for clicking events. When a rectangle bounding box comes close to another, a new bounding box is created from the edges of the tracked bounding boxes. When the newly created bounding box shrinks to 20 percent of its size at creation time, the system performs a left button click. By holding this position for more than 5 seconds, the user can perform a double click. For the right button click, the open gesture is used again: a single finger is enough, and when the system detects one fingertip color cap, it performs a right button click. To scroll, the user makes the open-gesture movement with three fingers wearing color caps. If the user moves the three fingers together downwards, the system scrolls down; similarly, if they move upwards, it scrolls up. When the three fingers move up or down, the color caps get new positions and new coordinates. Once all three color caps have new coordinates, the system scrolls: if their y-coordinate values decrease, it scrolls down, and if the values increase, it scrolls up. In conclusion, this system demonstrated a method for gesture-based mouse control using computer vision, using color detection and masking to recognize hand gestures and control the mouse cursor.

PROPOSED SYSTEM

The proposed Gesture Controlled Virtual Mouse system also includes a third module that leverages voice automation for wireless mouse assistance. This module allows users to perform mouse operations such as clicking, scrolling, and dragging simply by giving voice commands. This feature is especially helpful for users who are unable to use hand gestures due to physical limitations.

The voice automation module is implemented using state-of-the-art speech recognition algorithms that enable the system to accurately recognize the user's voice commands. The module is designed to work seamlessly with the other two modules of the system, allowing users to switch between hand gestures and voice commands effortlessly.

This module also adds a layer of convenience by allowing users to perform mouse operations from a distance, without any direct contact with the computer. This makes it a useful tool for presentations, demonstrations, and other scenarios where the user needs to interact with the computer without being physically close to it.

Overall, the Gesture Controlled Virtual Mouse system is an innovative and user-friendly solution that simplifies human-computer interaction. With its advanced machine learning and computer vision algorithms, it offers a reliable and efficient way for users to control their computers using hand gestures, voice commands, or a combination of both.

Convolutional Neural Networks (MediaPipe running on top of pybind11)

The convolutional neural network (CNN) implemented by MediaPipe is based on deep learning algorithms that use a series of convolutional layers to extract features from images. The basic CNN algorithm can be summarized as follows:

Input layer: Accepts the input image and performs preprocessing such as normalization.

Convolution layer: Applies convolution operation to the input image using multiple filters to extract relevant features. The output of this layer is called a feature map.

Activation function: Introduces non-linearity to the feature maps.

Pooling layer: Reduces the spatial dimensions of the feature maps to reduce computational complexity.

Repeat steps 2-4 for multiple layers.

Flatten layer: Converts the feature maps into a vector to feed them into the fully connected layer.

Fully connected layer: Performs the classification task by applying weights and biases to the input vector.

Output layer: Produces the final output.

Here's a pseudocode implementation of the core convolution step:

Algorithm 1: Convolutional Neural Network Algorithm

Input: input image I
Output: output feature map O

1: Initialize: set stride S and filter size K; calculate output size O_s = (I_s - K)/S + 1
2: for each filter F_i do
3:   for each output channel c do
4:     for each pixel in O_c do
5:       calculate the starting pixel p_s = pixel_i * S
6:       calculate the ending pixel p_e = p_s + K
7:       extract the K × K region R from I_c starting at p_s
8:       convolve: element-wise multiply R with F_i
9:       sum all the elements of the resulting matrix
10:      assign the result to the corresponding pixel in O_c
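The pseudocode corresponds to the following runnable NumPy sketch of a single valid convolution (one filter, one channel, square inputs; the variable names follow the algorithm and the example sizes are ours):

    import numpy as np

    def conv2d(I, F, S=1):
        # valid 2-D convolution of image I with filter F at stride S
        K = F.shape[0]                    # filter size (assumed square)
        Os = (I.shape[0] - K) // S + 1    # output size: (Is - K)/S + 1
        O = np.zeros((Os, Os))
        for i in range(Os):
            for j in range(Os):
                ps_i, ps_j = i * S, j * S              # starting pixel of the region
                R = I[ps_i:ps_i + K, ps_j:ps_j + K]    # extract the K x K region
                O[i, j] = np.sum(R * F)                # element-wise multiply and sum
        return O

    feature_map = conv2d(np.random.rand(8, 8), np.ones((3, 3)) / 9.0)    # 6 x 6 output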

WORK DONE AND RESULTS ANALYSIS

Gesture-Controlled Mouse

Neutral Gesture: Used to halt/stop execution of the current gesture.

Move Cursor: The cursor is assigned to the midpoint of the index and middle fingertips. This gesture moves the cursor to the desired location; the speed of the cursor movement is proportional to the speed of the hand.

Left Click: Gesture for single left click

Fig. 1. Virtual Mouse

Right Click: Gesture for single right click

Double Click: Gesture for double click

Scrolling: Dynamic gestures for horizontal and vertical scrolling. The scroll speed is proportional to the distance moved by the pinch gesture from its start point. Vertical and horizontal scrolls are controlled by vertical and horizontal pinch movements, respectively.

Drag and Drop: Gesture for drag-and-drop functionality. It can be used to move/transfer files from one directory to another.

Multiple Item Selection: Gesture to select multiple items

Volume Control: Dynamic gestures for volume control. The rate of increase/decrease of the volume is proportional to the distance moved by the pinch gesture from its start point.

Brightness Control: Dynamic gestures for brightness control. The rate of increase/decrease of the brightness is proportional to the distance moved by the pinch gesture from its start point.

Voice Automated Mouse

Launch / Stop Gesture Recognition: "Echo, launch gesture recognition" turns on the webcam for hand gesture recognition. "Echo, stop gesture recognition" turns off the webcam and stops gesture recognition. (The gesture controller can also be terminated by pressing the Enter key in the webcam window.)

Google Search: "Echo, search (text you wish to search)" opens a new tab in the Chrome browser if it is running, else opens a new window, and searches the given text on Google.

Find a Location on Google Maps: "Echo, find a location" asks the user for the location to be searched, then finds the required location on Google Maps in a new Chrome tab.

File Navigation: "Echo, list files" / "Echo, list" lists the files and their file numbers in the current directory (by default C:). "Echo, open (file number)" opens the file or directory corresponding to the specified file number. "Echo, go back" / "Echo, back" changes the current directory to the parent directory and lists its files.


Fig. 2. Voice Assistant- ECHO

Current Date and Time: "Echo, what is today's date" / "Echo, date" and "Echo, what is the time" / "Echo, time" return the current date and time.

Copy and Paste: "Echo, copy" copies the selected text to the clipboard. "Echo, paste" pastes the copied text.

Sleep / Wake Up: "Echo, sleep" / "Echo, bye" pauses voice command execution until the assistant is woken up. "Echo, wake up" resumes voice command execution.

Exit: "Echo, exit" terminates the voice assistant thread. The GUI window needs to be closed manually.

CONCLUSIONS

In conclusion, the Gesture Controlled Virtual Mouse is an innovative system that revolutionizes the way humans interact with computers. The use of hand gestures and voice commands provides a new level of convenience and ease to users, allowing them to control all I/O operations without any direct contact with the computer. The system utilizes state-of-the-art machine learning and computer vision algorithms, such as CNNs implemented by MediaPipe running on top of pybind11, to recognize hand gestures and voice commands accurately and efficiently. The two modules, one for direct hand detection and the other for gloves of any uniform color, cater to different user preferences and provide flexibility in usage. Additionally, the system incorporates a voice automation feature that serves various tasks with great efficiency, accuracy, and ease. With the current implementation on the Windows platform, the Gesture Controlled Virtual Mouse presents an exciting prospect for the future of human-computer interaction. It is expected to increase productivity and convenience for users and could potentially have numerous practical applications in industries such as healthcare, gaming, and manufacturing.

ACKNOWLEDGMENT

We would like to thank Ms. Sunayana for her valuable comments and suggestions to improve the quality of the paper, and for helping us review our work regularly. We would also like to thank the Department of Computer Science and Engineering, B.M.S. College of Engineering, for providing us with the opportunity and encouragement to write this paper.



Virtual Mouse Using Hand Gesture Recognition


2022, IRJET

The mouse is a great invention of technology. In recent years, different types of mouse have been invented, including the optical mouse, the wireless mouse, and the Bluetooth mouse. However, these mice rely on hardware devices and can be expensive, because some use sensors. The proposed system is based on hand gestures: gestures are captured with a camera, hand landmark key points are detected, the particular gesture is recognized, and the corresponding mouse cursor operation is performed. Users can perform the various mouse operations without any hardware devices or sensors, needing only their hand and fingers, and the system is also user-friendly and cost-effective.

