1.
Entropy (Basel); 26(5), 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785633

ABSTRACT

Cyber competitions are usually team activities, where team performance depends not only on the members' abilities but also on team collaboration. This seems intuitive, especially given that team formation is a well-studied discipline in competitive sports and project management, yet team performance and team formation strategies are rarely studied in the context of cybersecurity and cyber competitions. As cyber competitions become more prevalent and organized, this gap becomes an opportunity to formalize the study of team performance in the context of cyber competitions. This work follows a two-approach, cross-validating methodology. The first approach is computational modeling of cyber competitions using Agent-Based Modeling: team members are modeled in NetLogo as collaborating agents competing over a network in a red team/blue team match; members' abilities, team interaction, and network properties are parametrized (inputs), and the match score is reported as the output. The second approach is grounded in the literature on team performance (outside the context of cyber competitions), from which a theoretical framework is built. The results of the first approach are used to build a causal inference model using Structural Equation Modeling. The causal inference model and the theoretical model showed high resemblance, cross-validating both approaches. Two main findings are deduced: first, the body of literature studying teams remains valid and applicable in the context of cyber competitions; second, coaches and researchers can test new team strategies computationally and obtain precise performance predictions. Both the methodology and the findings are novel to the study of cyber competitions.
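The abstract describes the agent-based side of the methodology only at a high level. As a rough illustration (not the authors' NetLogo model), the Python sketch below shows how member skill and team collaboration can be parametrized as inputs and a match score reported as output; the parameter names, the scoring rule, and the collaboration bonus are all hypothetical.

```python
import random

def simulate_match(n_nodes=20, red_skill=0.6, blue_skill=0.6,
                   blue_collaboration=0.5, rounds=200, seed=42):
    """Toy red/blue match over a set of nodes: red tries to compromise nodes,
    blue tries to restore them; collaboration boosts blue's effective skill."""
    rng = random.Random(seed)
    compromised = [False] * n_nodes
    # Hypothetical assumption: collaboration acts as a multiplicative skill bonus.
    blue_eff = min(1.0, blue_skill * (1 + 0.5 * blue_collaboration))

    for _ in range(rounds):
        target = rng.randrange(n_nodes)
        if not compromised[target] and rng.random() < red_skill:
            compromised[target] = True           # red compromises a node
        recover = rng.randrange(n_nodes)
        if compromised[recover] and rng.random() < blue_eff:
            compromised[recover] = False         # blue restores a node

    # Match score (output): fraction of nodes blue controls at the end.
    return 1 - sum(compromised) / n_nodes

if __name__ == "__main__":
    for collab in (0.0, 0.5, 1.0):
        print("collaboration", collab, "-> blue score",
              round(simulate_match(blue_collaboration=collab), 2))
```

Sweeping the collaboration parameter this way mirrors the idea of testing team strategies computationally before trying them in an actual competition.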

2.
Sci Rep; 14(1): 1075, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38212467

ABSTRACT

This paper demonstrates the value of a framework for processing body-acceleration data as a tool for early diagnosis of diseases that affect gait. As a case study, we used this model to identify individuals with peripheral artery disease (PAD) and distinguish them from those without PAD. The framework uses acceleration data extracted from anatomical reflective markers placed at different body locations to train the diagnostic models and a wearable accelerometer worn at the waist for validation. Reflective marker data have been used for decades in studies evaluating and monitoring human gait; they are widely available for many body parts but are obtained in specialized laboratories. Wearable accelerometers, on the other hand, enable diagnostics outside laboratory conditions. Models trained on raw marker data at the sacrum achieve an accuracy of 92% in distinguishing PAD patients from non-PAD controls. This accuracy drops to 28% when data from a wearable accelerometer at the waist are used to validate the model. The model was improved by using features extracted from the acceleration rather than the raw acceleration, with the marker model accuracy dropping only from 86% to 60% when validated with the wearable accelerometer data.
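As a hedged illustration of the general idea (not the study's marker or accelerometer data, feature set, or model), the sketch below computes a few summary features from synthetic acceleration traces and cross-validates a classifier on them; every feature choice and the synthetic signal model are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def acceleration_features(signal, fs=100):
    """Summary features from a 1-D acceleration trace (hypothetical choices)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    dom_freq = np.fft.rfftfreq(len(signal), d=1 / fs)[spectrum.argmax()]
    return [signal.mean(), signal.std(),
            np.abs(np.diff(signal)).mean(),      # mean first-difference magnitude
            dom_freq]                            # dominant stride frequency

# Synthetic stand-in data: 40 subjects x 10 s of waist acceleration at 100 Hz,
# where the "PAD-like" class walks with a slower dominant frequency.
rng = np.random.default_rng(0)
labels = np.array([0] * 20 + [1] * 20)           # 0 = control, 1 = PAD-like
signals = [np.sin(2 * np.pi * (1.8 - 0.4 * y) * np.arange(1000) / 100)
           + rng.normal(0, 0.3, 1000) for y in labels]

X = np.array([acceleration_features(s) for s in signals])
scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```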


Subject(s)
Peripheral Arterial Disease, Wearable Electronic Devices, Humans, Gait, Acceleration, Accelerometry
3.
Sensors (Basel); 22(19), 2022 Sep 30.
Article in English | MEDLINE | ID: mdl-36236533

ABSTRACT

Peripheral artery disease (PAD) manifests from atherosclerosis, which limits blood flow to the legs and causes changes in muscle structure and function and in gait performance. PAD is underdiagnosed, which delays treatment and worsens clinical outcomes. To overcome this challenge, the purpose of this study is to develop machine learning (ML) models that distinguish individuals with and without PAD. This is a first step toward using ML to identify those at risk of PAD early. We built ML models based on previously acquired overground walking biomechanics data from patients with PAD and healthy controls. Gait signatures were characterized using ankle, knee, and hip joint angles, torques, and powers, as well as ground reaction forces (GRF). ML was able to classify those with and without PAD using Neural Network or Random Forest algorithms with 89% accuracy (0.64 Matthews correlation coefficient) using all laboratory-based gait variables. Moreover, models using only GRF variables provided up to 87% accuracy (0.64 Matthews correlation coefficient). These results indicate that ML models can classify those with and without PAD from gait signatures with acceptable performance. They also show that an ML gait signature model using GRF features delivers the most informative data for PAD classification.
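The abstract names Random Forest and Neural Network classifiers evaluated with accuracy and the Matthews correlation coefficient. A minimal sketch of that evaluation pattern, using synthetic stand-in gait variables rather than the study's laboratory data, might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

# Synthetic stand-in for laboratory gait variables (e.g., GRF peaks, joint
# angles); real feature names and values would come from the motion lab.
rng = np.random.default_rng(1)
n, n_features = 200, 12
y = rng.integers(0, 2, n)                                  # 0 = control, 1 = PAD
X = rng.normal(0, 1, (n, n_features)) + 0.8 * y[:, None]   # class-shifted features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", round(accuracy_score(y_te, pred), 2))
print("Matthews correlation coefficient:", round(matthews_corrcoef(y_te, pred), 2))
```

Reporting the Matthews correlation coefficient alongside accuracy is useful here because it remains informative even when the PAD and control groups are imbalanced.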


Subject(s)
Gait, Peripheral Arterial Disease, Biomechanical Phenomena, Gait/physiology, Humans, Machine Learning, Peripheral Arterial Disease/diagnosis, Walking
4.
IEEE Access; 10: 31306-31339, 2022.
Article in English | MEDLINE | ID: mdl-35441062

ABSTRACT

This paper provides a comprehensive literature review of the technologies and protocols used for the medical Internet of Things (IoT), with a thorough examination of current enabling technologies, use cases, applications, and challenges. Despite recent advances, medical IoT is still not considered routine practice; regulatory, ethical, and technological challenges of biomedical hardware inhibit its growth. Medical IoT nevertheless continues to advance in terms of biomedical hardware and the monitoring of metrics such as vital signs, temperature, electrical signals, oxygen levels, cancer indicators, glucose levels, and other bodily measurements. In the coming years, medical IoT is expected to replace old healthcare systems. In comparison to other survey papers on this topic, our paper provides a thorough summary of the most relevant protocols and technologies specifically for medical IoT, as well as its challenges. Our paper also contains several proposed frameworks and use cases of medical IoT in hospital settings, along with a comprehensive overview of previous IoT architectures and their strengths and weaknesses. We hope to enable researchers of multiple disciplines, developers, and biomedical engineers to quickly become knowledgeable about how various technologies cooperate and how current frameworks can be modified for new use cases, thus inspiring further growth in medical IoT.

5.
JMIR Form Res; 6(5): e36238, 2022 May 11.
Article in English | MEDLINE | ID: mdl-35389357

ABSTRACT

BACKGROUND: Contact tracing has been adopted globally in the fight to control the infection rate of COVID-19, and several mobile apps have been developed to this end. However, there are ever-growing concerns over the working mechanisms and performance of these applications. The literature already provides some interesting exploratory studies of the community's response to the applications, analyzing information from different sources such as news and users' reviews. However, to the best of our knowledge, there is no existing solution that automatically analyzes users' reviews and extracts the evoked sentiments. We believe such solutions, combined with a user-friendly interface, can be used as a rapid surveillance tool to monitor how effective an application is and to make immediate changes without going through an intense participatory design method. OBJECTIVE: In this paper, we aim to analyze the efficacy of artificial intelligence (AI) and natural language processing (NLP) techniques for automatically extracting and classifying the polarity of users' sentiments, by proposing a sentiment analysis framework that automatically analyzes users' reviews of COVID-19 contact tracing mobile apps. We also aim to provide a large-scale annotated benchmark data set to facilitate future research in the domain. As a proof of concept, we developed a web application based on the proposed solutions, which is expected to help the community quickly analyze the potential of an application in the domain. METHODS: We propose a pipeline starting from manual annotation via a crowdsourcing study and concluding with the development and training of AI models for automatic sentiment analysis of users' reviews. In detail, we collected and annotated a large-scale data set of user reviews of COVID-19 contact tracing applications and used both classical and deep learning methods for the classification experiments. RESULTS: We applied 8 different methods to 3 different tasks, achieving an average F1 score of up to 94.8%, indicating the feasibility of the proposed solution. The crowdsourcing activity resulted in a large-scale benchmark data set composed of 34,534 manually annotated reviews. CONCLUSIONS: The existing literature mostly relies on manual or exploratory analysis of users' reviews, which is tedious and time-consuming, and existing studies generally analyze data from only a few applications. In this work, we showed that AI and NLP techniques provide good results for analyzing and classifying the polarity of users' sentiments and that automatic sentiment analysis can help analyze users' responses more accurately and quickly. We also provided a large-scale benchmark data set. We believe the presented analysis, data set, and proposed solutions, combined with a user-friendly interface, can be used as a rapid surveillance tool to analyze and monitor mobile apps deployed in emergency situations, enabling rapid changes to the applications without going through an intense participatory design method.
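As a rough sketch of the classical end of such a pipeline (not the authors' models or their 34,534-review benchmark), the example below trains a TF-IDF plus logistic regression sentiment classifier on a handful of made-up app reviews and reports the F1 score; the review texts and labels are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Tiny illustrative review set; a real study would use thousands of
# annotated reviews of COVID-19 contact tracing apps.
reviews = ["app keeps crashing and drains battery", "very useful and easy to use",
           "cannot register, waste of time", "helped me get exposure alerts quickly",
           "terrible interface and constant errors", "works fine and respects privacy",
           "login never works", "great idea, smooth experience"]
labels = [0, 1, 0, 1, 0, 1, 0, 1]                # 0 = negative, 1 = positive

X_tr, X_te, y_tr, y_te = train_test_split(reviews, labels, test_size=0.5,
                                           random_state=0, stratify=labels)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, model.predict(X_te)))
```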

6.
Article in English | MEDLINE | ID: mdl-34408917

ABSTRACT

Although search time scales linearly with the number of observed spectra, current protein search engines, even in their parallel versions, can take several hours to search a large set of MS/MS spectra, which can be generated in a short time. After this laborious searching process, some (and at times, the majority) of the observed spectra are labeled as non-identifiable. We evaluate the role of machine learning in building an efficient MS/MS filter to remove non-identifiable spectra. We compare a deep learning algorithm against 9 shallow learning algorithms with different configurations. Using 10 different datasets generated from two different search engines, different instruments, different sizes, and different species, we show experimentally that deep learning models are powerful in filtering MS/MS spectra. We also show that our simple feature list is informative, as the shallow learning algorithms also showed encouraging results in filtering the MS/MS spectra. Our deep learning model can exclude around 50% of the non-identifiable spectra while losing, on average, only 9% of the identifiable ones. Among the shallow learning algorithms, Random Forest, Support Vector Machine, and Neural Network models showed encouraging results, eliminating, on average, 70% of the non-identifiable spectra while losing around 25% of the identifiable ones. The deep learning algorithm may be especially useful when the protein(s) of interest are present at lower cellular or tissue concentrations, while the other algorithms may be more useful for concentrated or more highly expressed proteins.
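As an illustrative sketch only (the authors' deep model, feature list, and datasets are not described here in enough detail to reproduce), the example below trains a small multilayer perceptron on synthetic spectrum-level features and reports the two quantities the abstract emphasizes: the fraction of non-identifiable spectra removed and the fraction of identifiable spectra lost.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in: each "spectrum" is summarized by a few simple features
# (e.g., peak count, total intensity, max intensity, intensity spread).
rng = np.random.default_rng(2)
n = 1000
y = rng.integers(0, 2, n)                        # 1 = identifiable, 0 = not
X = rng.normal(0, 1, (n, 4)) + 1.2 * y[:, None]  # class-shifted features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

# Spectra predicted as class 0 are "removed" before database searching.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("non-identifiable removed:", round(tn / (tn + fp), 2))
print("identifiable lost:", round(fn / (fn + tp), 2))
```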

7.
Article in English | MEDLINE | ID: mdl-34430067

ABSTRACT

The diversity of the available protein search engines with respect to their matching algorithms, the low overlap among their results, and the disparity of their coverage encourage the proteomics community to use ensemble solutions that combine different search engines. Advances in cloud computing technology and the availability of distributed processing clusters can also support this task. However, data transfer and result combination can become the major bottleneck: a flood of billions of observed mass spectra, amounting to hundreds of gigabytes or potentially terabytes of data, can easily cause congestion, increase the risk of failure, degrade performance, add computation cost, and waste available resources. Therefore, in this study, we propose a deep learning model to mitigate traffic over the cloud network and thus reduce the cost of cloud computing. The model, which uses the top 50 peak intensities and their m/z values from each spectrum, removes any spectrum that is predicted not to pass the majority voting of the participating search engines. Our results, obtained using three search engines (pFind, Comet, and X!Tandem) and four different datasets, are promising and encourage further investment in deep learning to solve this type of big data problem.
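The abstract states that the model consumes the top 50 intensities and their m/z values per spectrum. A minimal sketch of one possible input encoding follows; the ordering, zero-padding, and scaling choices are assumptions, not the authors' exact preprocessing.

```python
import numpy as np

def encode_spectrum(mz, intensity, top_k=50, max_mz=2000.0):
    """Encode a spectrum as its top-k peaks: k normalized intensities
    followed by k scaled m/z values, zero-padded to a fixed length."""
    mz, intensity = np.asarray(mz, float), np.asarray(intensity, float)
    order = np.argsort(intensity)[::-1][:top_k]          # strongest peaks first
    vec = np.zeros(2 * top_k)
    k = len(order)
    vec[:k] = intensity[order] / max(intensity[order].max(), 1e-12)
    vec[top_k:top_k + k] = mz[order] / max_mz
    return vec

# Example: a toy spectrum with 60 random peaks becomes a fixed 100-value vector
# that a binary classifier, trained on majority-vote labels from engines such as
# pFind, Comet, and X!Tandem, could consume before data are sent to the cloud.
rng = np.random.default_rng(3)
vec = encode_spectrum(rng.uniform(100, 1800, 60), rng.exponential(1.0, 60))
print(vec.shape)  # (100,)
```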
