Results 1 - 10 of 10
1.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676101

ABSTRACT

ECG classification, or heartbeat classification, is an extremely valuable tool in cardiology. Deep learning-based techniques for the analysis of ECG signals assist human experts in the timely diagnosis of cardiac diseases and help save precious lives. This research aims to digitize a dataset of images of ECG records into time-series signals and then apply deep learning (DL) techniques to the digitized dataset. State-of-the-art DL techniques are proposed for the classification of the ECG signals into different cardiac classes. Multiple DL models, including a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a self-supervised learning (SSL)-based model using autoencoders, are explored and compared in this study. The models are trained on a dataset generated from ECG plots of patients from various healthcare institutes in Pakistan. First, the ECG images are digitized and the lead II heartbeats are segmented; the digitized signals are then passed to the proposed deep learning models for classification. Among the different DL models used in this study, the proposed CNN model achieves the highest accuracy of ∼92%. The proposed model is highly accurate and provides fast inference for the real-time, direct monitoring of ECG signals captured from electrodes (sensors) placed on different parts of the body. Using the digitized form of ECG signals instead of images for the classification of cardiac arrhythmia allows cardiologists to apply DL models directly to the signals from an ECG machine for real-time and accurate monitoring.
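The heartbeat-segmentation step described above (cutting lead II beats out of a digitized signal before classification) can be sketched as follows. This is a minimal illustration, not the authors' code; the sampling rate, window length, and threshold-based R-peak detector are all assumptions.

```python
import numpy as np

def segment_heartbeats(signal, fs=250, window_s=0.6, thresh_frac=0.6):
    """Cut fixed-length windows centred on R-peaks found by naive thresholding.

    signal: 1-D digitized lead II ECG; fs: sampling rate in Hz (assumed).
    Returns an array of shape (n_beats, int(window_s * fs)).
    """
    half = int(window_s * fs / 2)
    thresh = thresh_frac * np.max(signal)
    beats, i = [], half
    while i < len(signal) - half:
        win = signal[i - half:i + half + 1]
        # a sample is an R-peak if it exceeds the threshold and is the local maximum
        if signal[i] >= thresh and signal[i] == win.max():
            beats.append(signal[i - half:i + half])
            i += half  # skip a refractory window so one beat yields one segment
        else:
            i += 1
    return np.stack(beats) if beats else np.empty((0, 2 * half))
```

Each extracted window can then be fed to a 1-D CNN or LSTM classifier of the kind compared in the abstract.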


Subject(s)
Arrhythmias, Cardiac , Deep Learning , Electrocardiography , Neural Networks, Computer , Humans , Electrocardiography/methods , Arrhythmias, Cardiac/diagnosis , Arrhythmias, Cardiac/physiopathology , Arrhythmias, Cardiac/classification , Signal Processing, Computer-Assisted , Algorithms , Heart Rate/physiology
2.
Sensors (Basel) ; 22(9)2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35590888

ABSTRACT

To study and understand the importance of Internet of Things-driven citizen science (IoT-CS) combined with data satisficing, we set up and undertook a citizen science experiment for air quality (AQ) in four Pakistani cities with twenty-one volunteers. We used quantitative methods to analyse the AQ data. Three research questions (RQs) were posed: Which factors affect IoT-CS AQ data quality (RQ1)? How can we make science more inclusive by dealing with the lack of scientists, training, and high-quality equipment (RQ2)? Can a lack of calibrated data readings be overcome to yield otherwise useful results for IoT-CS AQ data analysis (RQ3)? To address RQ1, an analysis of related work revealed that multiple causal factors exist. Good-practice guidelines were adopted to promote higher data quality in CS studies, and we proposed a classification of CS instruments to help better understand the data-quality challenges. To answer RQ2, user engagement workshops were undertaken as an effective method to make CS more inclusive and to train users to operate IoT-CS AQ devices with greater understanding. To address RQ3, we proposed that a more feasible objective is for citizens to leverage data satisficing such that AQ measurements can detect relevant local variations. We also made several recommendations, chief among them: a deep (citizen) science approach should be fostered to support a more inclusive, knowledgeable application of science en masse for the greater good; it may not be useful or feasible to cross-check measurements from cheaper versus more expensive calibrated instrument sensors in situ, so data satisficing may be more practical; and additional cross-checks that go beyond checking whether co-located low-cost and calibrated AQ measurements correlate under equivalent conditions should be leveraged.


Subject(s)
Air Pollution , Citizen Science , Air Pollution/analysis , Cities , Humans , Research Design , Volunteers
3.
Environ Monit Assess ; 194(2): 133, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35089424

ABSTRACT

Water is a primary resource required for the sustenance of life on Earth. The importance of water quality is increasing with rising water pollution caused by industrialization and the depletion of freshwater sources. Countries with little control over water pollution are likely to suffer from poor public health. Moreover, the methods used in most developing countries are ineffective and rely more on human intervention than on technological and automated solutions. Typically, water samples and related data are monitored and tested in laboratories, which consumes time and effort while producing less reliable results. There is therefore an imperative need for a systematic way to regularly monitor and manage the quality of water resources. To this end, the Internet of Things (IoT) is a strong alternative to such complex and ineffective traditional approaches, as it allows remote measurements to be taken in real time with minimal human involvement. The proposed system consists of water quality measuring nodes encompassing various sensors, including dissolved oxygen, turbidity, pH, water temperature, and total dissolved solids. These sensor nodes, deployed at various sites in the study area, transmit data to the server for processing and analysis using GSM modules. The data collected over several months are used for water quality classification via water quality indices and for bacterial prediction using machine learning algorithms. For data visualization, a Web portal is developed, consisting of a dashboard of Web services that display heat maps and other related infographics. Real-time water quality data are collected using the IoT nodes, and historic data are acquired from the Rawal Lake Filtration Plant. Several machine learning algorithms, including neural networks (NN), convolutional neural networks (CNN), ridge regression (RR), support vector machines (SVM), decision tree regression (DTR), Bayesian regression (BR), and an ensemble of all models, are trained for fecal coliform bacterial prediction; the SVM and Bayesian regression models show the best performance, with mean squared errors (MSE) of 0.35575 and 0.39566, respectively. The proposed system provides a more convenient alternative to bacterial prediction performed manually in labs, which is an expensive and time-consuming approach. It also offers several other advantages, including remote monitoring, ease of scalability, real-time water quality status, and portable hardware.
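As an illustration of the water-quality-index step, a widely used weighted-arithmetic WQI can be computed from sensor readings as below. This is a generic sketch, not the authors' implementation; the parameter standards and ideal values shown are illustrative placeholders, not regulatory limits.

```python
def weighted_arithmetic_wqi(readings, standards, ideals):
    """Weighted-arithmetic water quality index.

    readings / standards / ideals: dicts keyed by parameter name giving the
    measured value, permissible limit, and ideal value for each parameter.
    Weights are taken inversely proportional to the permissible limit.
    """
    weights = {p: 1.0 / standards[p] for p in readings}
    total_w = sum(weights.values())
    wqi = 0.0
    for p, value in readings.items():
        # sub-index: 0 at the ideal value, 100 at the permissible limit
        q = 100.0 * (value - ideals[p]) / (standards[p] - ideals[p])
        wqi += weights[p] * q
    return wqi / total_w

# illustrative values only
readings  = {"pH": 7.5, "turbidity": 4.0, "tds": 400.0}
standards = {"pH": 8.5, "turbidity": 5.0, "tds": 500.0}
ideals    = {"pH": 7.0, "turbidity": 0.0, "tds": 0.0}
index = weighted_arithmetic_wqi(readings, standards, ideals)
```

An index of 100 corresponds to every parameter sitting exactly at its permissible limit, so values well below 100 indicate better water.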


Subject(s)
Internet of Things , Bayes Theorem , Environmental Monitoring , Humans , Machine Learning , Water Quality
4.
Sensors (Basel) ; 21(23)2021 Nov 26.
Article in English | MEDLINE | ID: mdl-34883905

ABSTRACT

Wheat yellow rust is a common agricultural disease that affects the crop every year across the world. The disease degrades not only the quality of the yield but also its quantity, with adverse impacts on the economy and food supply. Fast and accurate methods for detecting yellow rust in wheat crops are therefore highly desirable; however, high-resolution images are not always available, which hinders trained models in detection tasks. The approach presented in this study harnesses super-resolution generative adversarial networks (SRGAN) to upsample images before using them to train deep learning models for the detection of wheat yellow rust. After the data are preprocessed for noise removal, SRGANs are used to upsample the images and increase their resolution, which helps the convolutional neural network (CNN) learn high-quality features during training. This study empirically shows that SRGANs can be used effectively to improve the quality of images and produce significantly better results than models trained on low-resolution images. This is evident from the overall test accuracy of 83% obtained on upsampled images, substantially better than the 75% achieved on low-resolution images. The proposed approach can be used in other real-world scenarios where images are of low resolution owing to the unavailability of high-resolution cameras in edge devices.
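For contrast with a learned SRGAN upsampler, the natural baseline in such a pipeline is fixed interpolation. A minimal bilinear upsampler is sketched below in plain numpy; this is an illustrative baseline, not the paper's code, and assumes a 2-D grayscale image of at least 2x2 pixels.

```python
import numpy as np

def bilinear_upsample(img, scale):
    """Upsample a 2-D grayscale image by an integer factor via bilinear interpolation."""
    h, w = img.shape
    # source coordinates of each output pixel centre
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    # interpolation weights, clipped so border pixels are replicated
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[y0][:, x0]; tr = img[y0][:, x0 + 1]
    bl = img[y0 + 1][:, x0]; br = img[y0 + 1][:, x0 + 1]
    return (1 - wy) * ((1 - wx) * tl + wx * tr) + wy * ((1 - wx) * bl + wx * br)
```

Unlike this fixed kernel, an SRGAN generator is trained adversarially to hallucinate plausible high-frequency texture, which is what the study credits for the accuracy gain.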


Subject(s)
Basidiomycota , Image Processing, Computer-Assisted , Agriculture , Neural Networks, Computer , Triticum
5.
PeerJ Comput Sci ; 7: e536, 2021.
Article in English | MEDLINE | ID: mdl-34141878

ABSTRACT

Crop classification in early phenological stages is a difficult task owing to the spectral similarity of different crops. Low-altitude platforms such as drones have great potential to provide high-resolution optical imagery to which Machine Learning (ML) can be applied to classify different types of crops. In this work, crop classification is performed at different phenological stages using optical images obtained from a drone. Gray-level co-occurrence matrix (GLCM) features are extracted from the underlying grayscale images collected by the drone. To classify the different types of crops, several ML algorithms, including Random Forest (RF), Naive Bayes (NB), Neural Network (NN), and Support Vector Machine (SVM), are applied. The results show that the ML algorithms performed much better on GLCM features than on raw grayscale images, with a margin of 13.65% in overall accuracy.
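The GLCM texture features used above can be illustrated with a minimal numpy implementation of one co-occurrence matrix and its contrast feature. This is a sketch under simple assumptions (a single horizontal offset and uniform quantization of non-negative images), not the study's exact feature set.

```python
import numpy as np

def glcm_contrast(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx) and its contrast."""
    # quantize the image into `levels` gray levels
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    # contrast weights co-occurrences by squared gray-level difference
    return float(((i - j) ** 2 * glcm).sum())
```

A flat image yields zero contrast, while a checkerboard, whose neighbouring pixels always differ, yields a large value; energy, homogeneity, and correlation are computed from the same matrix.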

6.
J Epidemiol Glob Health ; 11(2): 186-193, 2021 06.
Article in English | MEDLINE | ID: mdl-33605110

ABSTRACT

The COVID-19 pandemic is one of unmatched scale and severity. A continued state of crisis has been met with poor public adherence to preventive measures and difficulty implementing public health policy. This study aims to identify and evaluate the factors underlying such a response by assessing knowledge, perceived risk, and trust in sources of information in relation to the novel coronavirus disease at the outset of the COVID-19 pandemic. An online questionnaire was completed between March 20 and 27, 2020. Knowledge, perceptions, and perceived risk (Likert scale) were assessed for 737 literate participants of a representative sample in an urban setting. We found that respondents' risk perception for the novel coronavirus disease was high: the perceived risk scores for the cognitive and affective domains were elevated at 2.24 ± 1.3 (eight items) and 3.01 ± 1 (seven items), respectively. Misconceptions and gaps in knowledge regarding COVID-19 were noted. Religious leadership was the least trusted (10%) and health authorities the most trusted (35%) source of information. Our findings suggest a deficiency in knowledge and high concern about the pandemic, leading to a higher risk perception, especially in the affective domain. We therefore recommend comprehensive education programs, planned intensive risk communication, and a concerted effort by all stakeholders to mitigate the spread of disease. The first of its kind in the region, this study will be critical to response efforts against current and future outbreaks.


Subject(s)
COVID-19 , Health Knowledge, Attitudes, Practice , Surveys and Questionnaires , Adolescent , Adult , COVID-19/epidemiology , Female , Humans , Male , Middle Aged , Pakistan/epidemiology , Pandemics , SARS-CoV-2 , Young Adult
7.
Sensors (Basel) ; 22(1)2021 Dec 27.
Article in English | MEDLINE | ID: mdl-35009689

ABSTRACT

Wheat is a staple crop of Pakistan that covers almost 40% of the cultivated land and contributes almost 3% to the overall Gross Domestic Product (GDP) of Pakistan. With increasing seasonal variation, however, wheat has been observed to be heavily affected by rust disease, particularly in rain-fed areas. Rust is considered the most harmful fungal disease of wheat and can reduce yield by 20-30%. Its ability to spread rapidly has made its management extremely challenging and a major threat to food security. To counter this threat, precise detection of wheat rust and its infection types is important for minimizing yield losses. We therefore propose a framework for classifying wheat yellow rust infection types using machine learning techniques. First, an image dataset of different yellow rust infections was collected using mobile cameras. Six Gray Level Co-occurrence Matrix (GLCM) texture features and four Local Binary Pattern (LBP) texture features were extracted from grayscale images of the collected dataset. To classify wheat yellow rust into its three classes (healthy, resistant, and susceptible), Decision Tree, Random Forest, Light Gradient Boosting Machine (LightGBM), Extreme Gradient Boosting (XGBoost), and CatBoost were trained on (i) GLCM, (ii) LBP, and (iii) combined GLCM-LBP texture features. The results indicate that CatBoost performed best on GLCM texture features, with an accuracy of 92.30%. This accuracy could be further improved by scaling up the dataset and applying deep learning models. The proposed approach could help the agricultural community detect wheat yellow rust infection early and take remedial measures to protect crop yield.
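The LBP texture features mentioned above are built from 8-neighbour local binary pattern codes. A minimal numpy sketch of the code computation and its histogram follows; this is illustrative only, and the study's particular four LBP features are not reproduced here.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern codes and their normalized 256-bin histogram."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # clockwise neighbour offsets, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set the bit when the neighbour is at least as bright as the centre
        codes += (neighbour >= center).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Summary statistics of such a histogram (e.g. energy or entropy) are the kind of compact texture descriptors that can be fed, alongside GLCM features, to gradient-boosting classifiers such as CatBoost.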


Subject(s)
Basidiomycota , Triticum , Agriculture , Machine Learning
8.
Biochem Mol Biol Educ ; 48(5): 473-481, 2020 09.
Article in English | MEDLINE | ID: mdl-32682354

ABSTRACT

The global challenge presented by COVID-19 is unparalleled. Shortages of healthcare staff bring the practical skills of medical students under the spotlight. However, before they can be placed on hospital frontlines, it is crucial to assess their preparedness for patient interaction, which can be achieved by comparing their behavioral dynamics to those of physicians. An online questionnaire was administered between March 20, 2020 and March 27, 2020. The preventive strategies adopted by medical students and physicians at different ages and levels of education were compared using the chi-square test, where a p value of <0.05 was considered statistically significant. We report that the demonstration of preventive behaviors increased with educational attainment and age. Older age groups avoided crowded areas, wore masks, used disinfectants, and refrained from touching their faces more than younger participants did (p < 0.001). Similarly, postgraduate doctors used more masks and disinfectants than graduate doctors and medical students (p < 0.001). Based on our results, the lack of preventive behavior shown by medical students has implications for policy makers. We recommend short- and long-term changes to medical programs and admissions policies to equip medical students with the personal and professional skills to better contribute to the healthcare system in the present pandemic and beyond.


Subject(s)
COVID-19 , Education, Medical , Health Behavior , Pandemics , SARS-CoV-2 , Surveys and Questionnaires , Adolescent , Adult , COVID-19/epidemiology , COVID-19/prevention & control , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Physicians , Students, Medical
9.
Sensors (Basel) ; 19(17)2019 Sep 02.
Article in English | MEDLINE | ID: mdl-31480709

ABSTRACT

Internet of Things (IoT)-based automation of agricultural events can change the agriculture sector from static and manual to dynamic and smart, leading to enhanced production with reduced human effort. Precision Agriculture (PA) and Wireless Sensor Networks (WSN) are the main drivers of automation in the agriculture domain. PA uses specific sensors and software to ensure that crops receive exactly what they need to optimize productivity and sustainability, and involves retrieving real data about soil, crop, and weather conditions from sensors deployed in the fields. High-resolution images of crops are obtained from satellite or airborne platforms (manned or unmanned) and are further processed to extract information that supports future decisions. In this paper, a review of near and remote sensor networks in the agriculture domain is presented, along with several considerations and challenges. The survey covers wireless communication technologies, sensors, and wireless nodes used to assess environmental behaviour; the platforms used to obtain spectral images of crops; the common vegetation indices used to analyse spectral images; and applications of WSN in agriculture. As a proof of concept, we present a case study showing how a WSN-based PA system can be implemented. We propose an IoT-based smart solution for crop health monitoring comprising two modules: the first is a wireless sensor network-based system to monitor real-time crop health status; the second uses a low-altitude remote sensing platform to obtain multispectral imagery, which is further processed to classify healthy and unhealthy crops. We also present the results obtained in the case study and list challenges and future directions based on our work.
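One of the vegetation indices commonly covered in such surveys is the NDVI, which a multispectral module can use to separate healthy from stressed vegetation. A minimal sketch follows; the 0.4 healthy-vegetation threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and red reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def healthy_mask(nir, red, threshold=0.4):
    """Boolean mask of pixels whose NDVI suggests healthy vegetation (threshold is illustrative)."""
    return ndvi(nir, red) > threshold
```

Dense green vegetation reflects strongly in near-infrared and absorbs red light, so its NDVI approaches 1, while bare soil and stressed plants sit much lower.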


Subject(s)
Agriculture/methods , Wireless Technology , Computer Communication Networks , Crops, Agricultural , Humans , Remote Sensing Technology
10.
Biochem Mol Biol Educ ; 46(4): 336-342, 2018 07.
Article in English | MEDLINE | ID: mdl-29668075

ABSTRACT

Debate and role play for learning critical thinking and communication skills are increasingly used in undergraduate medical schools worldwide. We aim to compare students' views about the effectiveness of two teaching strategies, debate and role play, in exercising critical thinking and communication skills during problem-based learning (PBL). This is a comparative, cross-sectional, questionnaire-based study. Our subjects were second-year undergraduate female medical students enrolled in the College of Medicine at Imam Abdulrahman Bin Faisal University (IAU) from September 2014 to 2016, divided into 10 small PBL groups (10-13 students/group/year). Students rated role play and debate as equally effective in improving communication skills. Debate was rated superior to role play in "opening new avenues of thinking" (p = 0.01), whereas for "integration of knowledge of basic medical sciences with clinical skills" and "reflection of real life experience," students rated role play superior to debate (p = 0.01 and 0.00, respectively). Both role play and debate are well accepted by students in a PBL curriculum as effective teaching methodologies, and both are perceived as equally good in improving students' communication skills. Some aspects of critical thinking are improved more by role play than by debate, and vice versa. © 2018 by The International Union of Biochemistry and Molecular Biology, 46:336-342, 2018.


Subject(s)
Communication , Education, Medical, Undergraduate , Problem-Based Learning , Role Playing , Students, Medical/psychology , Thinking , Humans