Intelligent and Secure Systems for Telemedicine Unit

The Team

Prof. Filippo Lanubile

Director of the Department of Computer Science and CITEL Scientific Advisor

Prof. Giuseppe Pirlo

Prof. Paolo Buono

Prof. Giuseppe Desolda

Gianluigi De Gennaro

Nicole Novielli

Dr. Daniela Girardi, Researcher

Researchers 2020

Publications

“Emotion detection using noninvasive low cost sensors,” 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, 2017, pp. 125-130, doi: 10.1109/ACII.2017.8273589

D. Girardi, F. Lanubile and N. Novielli
 
Emotion recognition from biometrics is relevant to a wide range of application domains, including healthcare. Existing approaches usually adopt multi-electrode sensors that can be expensive or uncomfortable to use in real-life situations. In this study, we investigate whether we can reliably recognize high vs. low emotional valence and arousal by relying on noninvasive, low-cost EEG, EMG, and GSR sensors. We report the results of an empirical study involving 19 subjects. We achieve state-of-the-art classification performance for both valence and arousal even in a cross-subject classification setting, which eliminates the need for individual training and tuning of classification models.
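The cross-subject setting described in the abstract can be sketched with a grouped cross-validation split, where all samples from one subject always land in the same fold. This is a minimal illustrative sketch, not the paper's actual pipeline: the features, classifier, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(42)

n_subjects = 19          # the study involved 19 subjects
samples_per_subject = 40
# Placeholder biometric features (e.g. statistics over EEG/EMG/GSR signals)
X = rng.normal(size=(n_subjects * samples_per_subject, 6))
y = rng.integers(0, 2, size=len(X))   # binary label: high vs. low valence
groups = np.repeat(np.arange(n_subjects), samples_per_subject)

# GroupKFold keeps each subject's samples inside a single fold, so the
# model is always evaluated on subjects it has never seen during training:
# no per-subject training or tuning is needed.
cv = GroupKFold(n_splits=5)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=cv, groups=groups)
print(scores.mean())
```

With random labels the score hovers around chance; the point is the split, which guarantees subject-independent evaluation.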

A University-NGO partnership to sustain assistive technology projects. interactions 23, 2 (March + April 2016), 74–77. DOI: https://doi.org/10.1145/2883619

Fabio Calefato, Filippo Lanubile, Roberto De Nicolò, and Fabrizio Lippolis. 2016
 
Abstract. In this forum we celebrate research that helps to successfully bring the benefits of computing technologies to children, older adults, people with disabilities, and other populations that are often ignored in the design of mass-marketed products. 

“A handwritingbased protocol for assessing neurodegenerative dementia”, Cognitive Computation, pp. 1-11, 2019

D. Impedovo, G. Pirlo, G. Vessio and M. T. Angelillo
 
Abstract: Handwriting dynamics is relevant to discriminating people affected by neurodegenerative dementia from healthy subjects. This can be achieved by administering simple and easy-to-perform handwriting/drawing tasks on digitizing tablets equipped with electronic pens. Encouraging results have recently been obtained; however, the research community still lacks an acquisition protocol aimed at (i) collecting different traits useful for research purposes and (ii) supporting neurologists in their daily activities. This work proposes a handwriting-based protocol that integrates handwriting/drawing tasks and a digitized version of standard cognitive and functional tests already accepted, tested, and used by the neurological community. The protocol takes the form of a modular framework which facilitates the modification, deletion, and incorporation of new tasks in accordance with specific requirements. A preliminary evaluation of the protocol has been carried out to assess its usability. Subsequently, the protocol has been administered to more than 100 elderly subjects, including MCI patients and matched controls. The proposed protocol intends to provide a “cognitive model” for evaluating the relationship between cognitive functions and handwriting processes in healthy subjects as well as in cognitively impaired patients. The long-term goal of this research is the development of an easy-to-use and non-invasive methodology for detecting and monitoring neurodegenerative dementia during screening and follow-up.

PK-Clustering: Integrating Prior Knowledge in Mixed-Initiative Social Network Clustering. IEEE Transactions on Visualization & Computer Graphics (early access). DOI: 10.1109/TVCG.2020.3030347

A. Pister, P. Buono, J.D. Fekete, C. Plaisant, P. Valdivia
 
Abstract: In times of economic difficulty, everyone should adopt solutions that permit high-quality training at reduced costs, thanks to the possibilities offered by new Information and Communication Technologies. In medicine, it is very important to perform training in the field without compromising the patient's health. LARE is a system we are currently developing, whose aim is to enable surgeons to perform telementoring (i.e. remote tutoring) during laparoscopic surgery. The surgeon in the operating room (learner) is assisted and guided by an expert surgeon (tutor) located in another part of the world, who interacts with the learner via audio and also observes on a screen, in real time and at very high resolution, the images that the learner is seeing in the operating room; s/he can also annotate such images to indicate the points on which the learner has to operate. LARE allows many people to attend a surgery live; such people can also write in a chat. So far, only some components of LARE have been implemented in the current system. However, LARE has already been used, in particular during an event on February 9th, 2013, when 300 surgeons attended two surgeries performed under the guidance of a tutor who was about 800 km away from the operating room. The system and the results of this event will be illustrated at the conference.

Scene extraction from telementored surgery videos. In: Proc. of the International Conference on Distributed Multimedia Systems (DMS ’13), Brighton (UK), August 8-10, 2013. Knowledge Systems Institute, Skokie, Illinois, USA

Buono, P., Desolda, G., Lanzillotti, R. (2013)
 
Abstract: The huge amount of video available for various purposes makes video editing software very important and widely used. One of the uses of video in medicine is to record surgical operations for educational or legal purposes. In particular, in telemedicine, the exchange of audio and video plays a very important role. In most cases, surgeons are not expert in video editing; moreover, the user interface of such software tools is often very complex. This paper presents a tool to extract important scenes from surgery videos. The goal is to enable surgeons to easily and quickly extract scenes of interest.
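A common building block for the kind of scene extraction described above is detecting shot boundaries from frame-to-frame histogram differences. The abstract does not specify the paper's actual method, so the sketch below is a generic assumption: a cut is flagged wherever the colour-histogram distance between consecutive frames exceeds a threshold.

```python
import numpy as np

def detect_cuts(frames, threshold=0.5, bins=16):
    """Return the indices of frames that start a new scene."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        # Normalised grey-level histogram of the frame
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between normalised histograms, in [0, 2]
            if np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(i)
        prev_hist = hist
    return cuts

# Two synthetic "scenes": three dark frames followed by three bright ones.
dark = [np.full((8, 8), 20, dtype=np.uint8)] * 3
bright = [np.full((8, 8), 200, dtype=np.uint8)] * 3
print(detect_cuts(dark + bright))  # -> [3]: a single cut at frame 3
```

Real surgery footage would need a tuned threshold and more robust features, but the structure — per-frame descriptor, pairwise distance, thresholded cut decision — is the same.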

“Detecting Clinical Signs of Anaemia From Digital Images of the Palpebral Conjunctiva,” in IEEE Access, vol. 7, pp. 113488-113498, 2019, doi: 10.1109/ACCESS.2019.2932274.

G. Dimauro, A. Guarini, D. Caivano, F. Girardi, C. Pasciolla and A. Iacobazzi
 
Abstract: The potential for visually detectable clinical signs of anaemia and their correlation with the severity of the pathology have supported research on non-invasive prevention methods. Physical examination for a suspected diagnosis of anaemia is a practice performed by a specialist to evaluate the pallor of the exposed tissues. The aim of the research presented herein is to quantify and minimize the subjective nature of the examination of the palpebral conjunctiva, suggesting a method of diagnostic support and autonomous monitoring. Here we describe the methodology and system for extracting key data from the digital image of the conjunctiva, which is also based on analysis of the dominant colour classes. Effective features have been used herein to establish the inclusion of each image in a diagnosis probability class for anaemia. The images of the conjunctiva were taken using a new low-cost and easy-to-use device, designed to optimize the properties of independence from ambient light. The performance of the system was tested either by manually extracting the palpebral conjunctiva from the images or by extracting it semi-automatically based on the SLIC Superpixel algorithm. Tests were conducted on images obtained from 102 people. The dataset was unbalanced, since many more samples of healthy people were available, as often happens in the medical field. The SMOTE and ROSE algorithms were evaluated to balance the dataset, and some classification algorithms for assessing the anaemic condition were tested, yielding very good results. Taking a photo of the palpebral conjunctiva can aid the decision of whether a blood sample is needed or even whether a patient should inform a physician, considerably reducing the number of candidate subjects for blood sampling. It could also highlight suspected anaemia, allowing screening for anaemia in large numbers of people, even in resource-poor settings.
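The SMOTE balancing step mentioned in the abstract works by synthesising new minority-class samples through interpolation between a minority sample and one of its minority-class nearest neighbours. In practice one would use the SMOTE implementation from the imbalanced-learn library; the hand-rolled sketch below only illustrates the idea on toy data.

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic samples from minority-class rows X_min."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation factor in [0, 1)
        # New point lies on the segment between sample i and neighbour j
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class: four points at the corners of the unit square.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_like(minority, n_new=6)
print(new.shape)  # -> (6, 2)
```

Every synthetic point lies between two real minority samples, so the oversampled set stays inside the region the minority class already occupies, unlike naive duplication, which adds no new information.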