1.
Article in English | MEDLINE | ID: mdl-38959147

ABSTRACT

All three contrast-enhanced (CE) phases (i.e., arterial, portal venous, and delayed) are crucial for diagnosing liver tumors. However, acquiring all three phases is constrained by the risks of contrast agents (CAs), long imaging times, and strict imaging criteria. In this paper, we propose a novel Common-Unique Decomposition Driven Diffusion Model (CUDD-DM), capable of converting any two of the three phases into the remaining one, thereby reducing patient wait time, conserving medical resources, and reducing the use of CAs. 1) The Common-Unique Feature Decomposition Module uses spectral decomposition to capture both common and unique features among the inputs; it learns correlations in highly similar areas between the two input phases as well as differences in dissimilar areas, laying a foundation for synthesizing the remaining phase. 2) The Multi-scale Temporal Reset Gates Module bidirectionally compares lesions in the current and multiple historical slices, maximizing reliance on previous slices when no lesions are present and minimizing it when lesions are present, thereby preventing interference between consecutive slices. 3) The Diffusion Model-Driven Lesion Detail Synthesis Module employs a continuous and progressive generation process to accurately capture detailed features between data distributions, avoiding the loss of detail caused by traditional methods (e.g., GANs) that overfocus on global distributions. Extensive experiments on a generalized CE liver tumor dataset demonstrate that CUDD-DM achieves state-of-the-art performance, improving SSIM by at least 2.2% (5.3% in lesion areas) compared with seven leading methods. These results demonstrate that CUDD-DM advances CE liver tumor imaging technology.

2.
PeerJ Comput Sci ; 10: e2035, 2024.
Article in English | MEDLINE | ID: mdl-38855251

ABSTRACT

Currently, most traffic simulations require residents' travel plans as input data; however, in real scenarios it is difficult to obtain real residents' travel behavior data for various reasons, such as the sheer volume of data and the protection of residents' privacy. This study proposes a method combining a convolutional neural network (CNN) and a long short-term memory network (LSTM) for analyzing and compensating for spatiotemporal features in residents' travel data. By exploiting the spatial feature extraction capability of CNNs and the advantages of LSTMs in processing time-series data, the aim is to achieve a traffic simulation close to a real scenario using limited data by modeling travel time and space. The experimental results show that the proposed method is closer to the real data in terms of average traveling distance than the modulation method and the statistical estimation method. The proposed strategy significantly reduces the deviation of the model from the original data, lowering the basic error rate by about 50%.
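
A minimal sketch (not the authors' code) of the CNN + LSTM idea described above: a 1D CNN extracts spatial features from each time step of a travel-record sequence, and an LSTM models the temporal dependencies. The input and output sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_outputs=2):
        super().__init__()
        # Spatial feature extraction across the feature dimension of each time step
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Temporal modelling over the sequence of CNN features
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)   # e.g. predicted travel time and distance

    def forward(self, x):                  # x: (batch, time, features)
        z = self.cnn(x.transpose(1, 2))    # -> (batch, channels, time)
        z, _ = self.lstm(z.transpose(1, 2))
        return self.head(z[:, -1])         # prediction from the last time step

print(CNNLSTM()(torch.randn(4, 24, 8)).shape)   # 4 residents, 24 time steps, 8 features
```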

3.
PeerJ Comput Sci ; 10: e1862, 2024.
Article in English | MEDLINE | ID: mdl-38435579

ABSTRACT

Background: Artificial intelligence technologies have great potential in classifying neurodegenerative diseases such as Alzheimer's and Parkinson's. These technologies can aid in early diagnosis, enhance classification accuracy, and improve patient access to appropriate treatments. For this purpose, we focused on AI-based auto-diagnosis of Alzheimer's disease, Parkinson's disease, and healthy MRI images. Methods: In the current study, a deep hybrid network based on an ensemble classifier and a convolutional neural network was designed. First, a very deep super-resolution neural network was adapted to improve the resolution of MRI images. Low- and high-level features were extracted from the images processed with the hybrid deep convolutional neural network. Finally, these deep features were given as input to a k-nearest neighbor (KNN)-based random subspace ensemble classifier. Results: A 3-class dataset containing publicly available MRI images was utilized to test the proposed architecture. In the experiments, the proposed model produced 99.11% accuracy, 98.75% sensitivity, 99.54% specificity, 98.65% precision, and a 98.70% F1-score. The results indicate that our AI system has the potential to provide valuable diagnostic assistance in clinical settings.
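
A minimal sketch of the classification stage described above, not the paper's pipeline: a random-subspace ensemble of k-NN classifiers built with scikit-learn's BaggingClassifier, applied to pre-extracted deep feature vectors (random placeholders here).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))          # assumed deep features (e.g. CNN embeddings)
y = rng.integers(0, 3, size=300)         # 3 classes: AD, Parkinson's, healthy

# Random subspace: each k-NN sees a random 50% subset of the features,
# with no bootstrap sampling of the training instances.
ensemble = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=5),
    n_estimators=30,
    max_features=0.5,
    bootstrap=False,
    random_state=0,
)
print(cross_val_score(ensemble, X, y, cv=5).mean())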

4.
PeerJ Comput Sci ; 9: e1599, 2023.
Article in English | MEDLINE | ID: mdl-38077566

ABSTRACT

Background: Alzheimer's disease (AD) manifests as a deterioration in all mental activities, daily activities, and behaviors, especially memory, due to progressively increasing damage to parts of the brain as people age. Detecting AD at an early stage is a significant challenge. Various diagnostic devices are used to diagnose AD; Magnetic Resonance Imaging (MRI) devices are widely used to analyze and classify its stages. However, the time-consuming process of delineating the affected areas of the brain in the images obtained from these devices is another challenge. Therefore, conventional techniques cannot detect the early stage of AD. Methods: In this study, we propose a deep learning model, supported by a fusion loss model, that includes fully connected layers and residual blocks to address the above challenges. The proposed model was trained and tested on the publicly available T1-weighted MRI-based Kaggle dataset. Data augmentation techniques were applied after various preprocessing operations on the dataset. Results: The proposed model effectively classified the four AD classes in the Kaggle dataset, reaching a test accuracy of 0.973 in binary classification and 0.982 in multi-class classification and providing superior classification performance compared with other studies in the literature. The proposed method can be used online to detect AD and can serve as a system that helps doctors in the decision-making process.

5.
PeerJ Comput Sci ; 9: e1598, 2023.
Article in English | MEDLINE | ID: mdl-37810341

ABSTRACT

Background: This article aims to determine weighting coefficients that reduce the within-class distance and increase the distance between classes, collecting the data around cluster centers with metaheuristic optimization algorithms and thereby increasing classification performance. Methods: The proposed mathematical model is based on simple calculations and serves as the fitness function of the optimization algorithms. Compared with methods in the literature, it is easier to optimize and yields fast results, and determining the weights by optimization provides results that are more sensitive to the dataset structure. In the study, the proposed model was used as the fitness function of metaheuristic optimization algorithms to determine the weighting coefficients. Four different algorithms were used to test the independence of the results from the chosen algorithm: the particle swarm optimization algorithm (PSO), the bat algorithm (BAT), the gravitational search algorithm (GSA), and the flower pollination algorithm (FPA). Results: For each dataset, a control group of unweighted attributes and four experimental groups of weighted attributes were obtained. Classification performance increased for all datasets to which the weights obtained by the proposed method were applied. Accuracy rates of 100% were obtained on the Iris and Liver Disorders datasets. On the synthetic datasets, accuracy rose from 66.9% (SVM classifier) to 96.4% (GSA weighting + SVM) on the Full Chain dataset and from 64.6% (LDA classifier) to 80.2% (BAT weighting + LDA) on the Two Spiral dataset. The results show that the proposed method successfully maps the attributes onto a linear plane: for classifiers such as SVM and LDA, which struggle with non-linear problems, accuracy rates of up to 100% were achieved.
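
A minimal sketch of the kind of fitness function described above (not the authors' exact formulation): attribute weights are scored by the ratio of within-class scatter to between-class scatter of the weighted data, and a search loop (plain random search here, standing in for PSO/BAT/GSA/FPA) looks for weights that minimise it.

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

def fitness(w, X, y):
    Xw = X * w                                   # weighted attributes
    classes = np.unique(y)
    centers = np.array([Xw[y == c].mean(axis=0) for c in classes])
    within = sum(np.linalg.norm(Xw[y == c] - centers[i], axis=1).mean()
                 for i, c in enumerate(classes))
    between = np.linalg.norm(centers - centers.mean(axis=0), axis=1).mean()
    return within / (between + 1e-12)            # smaller is better

rng = np.random.default_rng(0)
best_w, best_f = None, np.inf
for _ in range(2000):                            # stand-in for the metaheuristic loop
    w = rng.uniform(0, 1, X.shape[1])
    f = fitness(w, X, y)
    if f < best_f:
        best_w, best_f = w, f
print(best_w.round(3), round(best_f, 3))
```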

6.
Diagnostics (Basel) ; 13(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36766680

ABSTRACT

This study uses machine learning to perform hearing test (audiometry) processes autonomously with EEG signals. Sounds of different amplitudes and wavelengths, as used in standard hearing tests, were presented to the test subject in random order through an interface designed with the MATLAB GUI. The subject indicated when they heard a sound played through headphones and took no action when they did not. Simultaneously, EEG (electroencephalography) signals were recorded, capturing the brain waves evoked by the sounds the subject did and did not hear. The EEG data generated at the end of the test were pre-processed, and then feature extraction was performed. The heard/unheard information received from the MATLAB interface was combined with the EEG signals to determine which sounds the subject heard and which they did not. During the waiting periods between sounds, no sound was presented, so these intervals are marked as not heard in the EEG signals. Brain signals were measured with a Brain Products Vamp 16 EEG device, and raw EEG data were created using the Brain Vision Recorder program and MATLAB. After the dataset was created from the signals produced by heard and unheard sounds in the brain, machine learning was carried out in the Python programming language: the raw data created with MATLAB were imported into Python and, after pre-processing, classification algorithms were applied. Each raw EEG recording was tokenized with the Count Vectorizer method, and the importance of each EEG signal within all EEG data was calculated using TF-IDF (Term Frequency-Inverse Document Frequency). The resulting dataset was classified according to whether the person could hear the sound. Naïve Bayes, Light Gradient Boosting Machine (LGBM), support vector machine (SVM), decision tree, k-NN, logistic regression, and random forest classifiers were applied; these algorithms were chosen because they have shown superior performance in ML, have succeeded in analyzing EEG signals, and can be used online. The LGBM classifier was the most successful, with a prediction accuracy of 84%. This study shows that hearing tests can also be performed using brain waves detected by an EEG device. Although a largely autonomous hearing test can be created, an audiologist or doctor may still be needed to evaluate the results.
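
A minimal sketch of the text-style EEG pipeline described above, under the assumption that each EEG epoch is discretised into symbolic "words" before Count Vectorizer / TF-IDF weighting; the data here are synthetic placeholders, not the recorded audiometry EEG.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 256))              # 200 EEG epochs, 256 samples each
labels = rng.integers(0, 2, size=200)             # heard / not heard

# Discretise each epoch into amplitude-bin tokens, e.g. "b3 b7 b1 ..."
bins = np.digitize(epochs, np.linspace(-2, 2, 8))
docs = [" ".join(f"b{v}" for v in row) for row in bins]

counts = CountVectorizer().fit_transform(docs)    # token counts per epoch
tfidf = TfidfTransformer().fit_transform(counts)  # TF-IDF weighting

X_tr, X_te, y_tr, y_te = train_test_split(tfidf, labels, random_state=0)
clf = LGBMClassifier(n_estimators=100).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```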

7.
IEEE J Biomed Health Inform ; 27(2): 944-955, 2023 02.
Article in English | MEDLINE | ID: mdl-36367916

ABSTRACT

Atrial fibrillation (AF) is one of the most common arrhythmias in the clinic, with high morbidity and mortality, so developing an intelligent auxiliary diagnostic model of AF based on the body-surface electrocardiogram (ECG) is necessary. The convolutional neural network (CNN) is one of the most commonly used models for AF recognition; however, a typical CNN is not compatible with variable-duration ECG, so it is hard to demonstrate its universality and generalization in practical applications. Hence, this paper proposes a novel time-adaptive densely connected network named MP-DLNet-F. The MP-DLNet module solves the incompatibility between variable-duration ECG and 1D CNNs. In addition, a feature enhancement module and a data imbalance processing module are used to enhance the perception of temporal-quality information and to decrease the sensitivity to data imbalance, respectively. The experimental results indicate that the proposed MP-DLNet-F achieved 87.98% classification accuracy and an F1-score of 0.847 on the CinC2017 database for 10-second cropped/padded single-lead ECG fragments. Furthermore, we deploy transfer learning techniques to test on heterogeneous datasets; on the CPSC2018 12-lead dataset, the method improved the average accuracy and F1-score by 21.81% and 16.14%, respectively. The results indicate that our method can update the constructed model's parameters and precisely forecast AF across different duration and lead distributions. Combining these advantages, MP-DLNet-F can be extended to other variable-duration or imbalanced medical signal processing problems, such as electroencephalography (EEG) and photoplethysmography (PPG).
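
This is not the MP-DLNet module itself, but a minimal sketch of one generic way to make a 1D CNN accept variable-duration single-lead ECG: global adaptive pooling maps any input length to a fixed-size feature vector before classification. Layer sizes and the class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VarLenECGNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # collapses any temporal length to 1
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, 1, samples), samples may vary
        return self.classifier(self.features(x).squeeze(-1))

net = VarLenECGNet()
print(net(torch.randn(1, 1, 2700)).shape, net(torch.randn(1, 1, 9000)).shape)
```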


Subjects
Atrial Fibrillation , Humans , Atrial Fibrillation/diagnosis , Neural Networks, Computer , Signal Processing, Computer-Assisted , Electrocardiography/methods , Photoplethysmography/methods , Algorithms
8.
Comput Math Methods Med ; 2022: 2157322, 2022.
Article in English | MEDLINE | ID: mdl-35936380

ABSTRACT

Segmentation of skin lesions plays a very important role in the early detection of skin cancer. However, artifacts such as hair and the low contrast between normal and lesioned skin make lesions difficult to distinguish, an important challenge even for specialist dermatologists. Computer-aided diagnostic systems using deep convolutional neural networks are gaining importance as a way to cope with these difficulties. This study focuses on deep learning-based fusion networks and fusion loss functions. For the automatic segmentation of skin lesions, a U-Net with 2D residual blocks (U-Net + ResNet 2D) and a 2D volumetric convolutional neural network were fused for the first time in this study. A new fusion loss function is also proposed by combining Dice Loss (DL) and Focal Tversky Loss (FTL) to make the fused model more robust. Of the 2,594-image dataset, 20% was reserved for testing and 80% for training. On the test data, a Jaccard score of 0.837 and a Dice score of 0.918 were obtained. The proposed model was also scored on the ISIC 2018 Task 1 test images, whose ground truths were not shared; it performed well, achieving a Jaccard index of 0.800 and a Dice score of 0.880. In addition, the new loss function obtained by fusing Focal Tversky Loss and Dice Loss was observed to increase the robustness of the model in the tests. The proposed loss-fusion model outstrips the cutting-edge approaches in the literature.
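
A minimal sketch (an assumption-based illustration, not the paper's exact code) of fusing Dice Loss and Focal Tversky Loss into a single segmentation loss, as described above; the mixing weight and Tversky parameters are assumed values.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

def fused_loss(pred, target, w=0.5):
    # Weighted sum of the two losses; w is an assumed mixing weight.
    return w * dice_loss(pred, target) + (1 - w) * focal_tversky_loss(pred, target)

pred = torch.sigmoid(torch.randn(2, 1, 64, 64))   # predicted lesion probabilities
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(fused_loss(pred, mask))
```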


Subjects
Skin Diseases , Skin Neoplasms , Artifacts , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
9.
Comput Math Methods Med ; 2022: 5714454, 2022.
Article in English | MEDLINE | ID: mdl-35903432

ABSTRACT

Objective: Measurement and monitoring of blood pressure are of great importance for preventing diseases such as cardiovascular disease and stroke caused by hypertension. Therefore, there is a need for advanced, artificial intelligence-based systolic and diastolic blood pressure systems built on a new, noninvasive technological infrastructure. The study aims to determine the minimum ECG duration required for calculating systolic and diastolic blood pressure from the electrocardiography (ECG) signal. Methods: The study includes ECG recordings of five individuals taken from the IEEE database, measured during daily activity. Each signal was divided into epochs of 2, 4, 6, 8, 10, 12, 14, 16, 18, and 20 seconds, and twenty-five features were extracted from each epoched signal. The dimension of the dataset was reduced using Spearman's feature selection algorithm, and machine learning algorithms were applied to the resulting dataset and evaluated with standard metrics. Gaussian process regression with an exponential kernel (GPR) was preferred because it is easy to integrate into embedded systems. Results: The MAPE estimation performance values for diastolic and systolic blood pressure for 16-second epochs were 2.44 mmHg and 1.92 mmHg, respectively. Conclusion: According to the results, systolic and diastolic blood pressure values can be calculated with high performance from 16-second ECG signals.
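
A minimal sketch of the regression step described above: Gaussian process regression with an exponential kernel (a Matern kernel with nu=0.5 in scikit-learn) mapping ECG epoch features to blood pressure. The features and targets here are synthetic placeholders, not the IEEE database recordings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 25))                            # 25 features per 16-s ECG epoch
y = 120 + X[:, 0] * 5 + rng.normal(scale=2, size=500)     # mock systolic pressure (mmHg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5) + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
pred = gpr.predict(X_te)
print(np.mean(np.abs((y_te - pred) / y_te)) * 100)        # MAPE, %
```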


Subjects
Artificial Intelligence , Blood Pressure Determination , Algorithms , Blood Pressure/physiology , Blood Pressure Determination/methods , Electrocardiography , Humans , Machine Learning
10.
Multimed Syst ; 28(4): 1439-1448, 2022.
Article in English | MEDLINE | ID: mdl-34511733

ABSTRACT

Coronavirus is one of the most serious threats and challenges for existing healthcare systems. Several prevention methods and precautions have been proposed by medical specialists to treat the virus and protect infected patients. Deep learning methods have been adopted for disease detection, especially for medical image classification. In this paper, we propose a deep learning-based medical image classification approach for COVID-19 patients, namely the deep learning model for coronavirus (DLM-COVID-19). The proposed model improves medical image classification and optimization for better disease diagnosis. This paper also proposes a mobile application for COVID-19 patient detection that uses a self-assessment test combined with medical expertise to diagnose and help prevent the virus through an online system. The proposed deep learning model is evaluated against existing algorithms and shows better performance in terms of sensitivity, specificity, and accuracy, while the proposed application also helps to reduce the risk and spread of the virus.

11.
Comput Electr Eng ; 95: 107411, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34511652

ABSTRACT

Coronavirus is an infectious, life-threatening disease that is mainly transmitted when an infected person coughs, sneezes, or exhales. This disease is a global challenge that demands advanced solutions addressing multiple dimensions of the pandemic for health and wellbeing. Different types of medical and technology-based solutions have been proposed to control and treat COVID-19. Machine learning is one of the technologies used in Magnetic Resonance Imaging (MRI) classification, whereas nature-inspired algorithms are also adopted for image optimization. In this paper, we combine machine learning and a nature-inspired algorithm for brain MRI images of COVID-19 patients, namely the Machine Learning and Nature Inspired Model for Coronavirus (MLNI-COVID-19). This model improves MRI image classification and optimization for better diagnosis. It improves overall performance, especially for brain images, an area that has been neglected due to the unavailability of datasets, even though COVID-19 has a serious impact on the patient's brain. The proposed model helps to improve the diagnostic process for better medical decisions; evaluated against existing algorithms, it achieves better performance in terms of sensitivity, specificity, and accuracy.

12.
PeerJ Comput Sci ; 7: e523, 2021.
Article in English | MEDLINE | ID: mdl-34084928

ABSTRACT

BACKGROUND: Brain signals (EEG, electroencephalography) are a gold standard frequently used in epilepsy prediction. Predicting epilepsy, which is common in the community, is crucial, and early diagnosis is essential to shorten the treatment process and keep it healthier. METHODS: In this study, a five-class dataset was used: EEG signals from different individuals, healthy EEG signals from the tumor region, EEG signals with epilepsy, EEG signals with eyes closed, and EEG signals with eyes open. Four different methods were proposed to classify the five classes of EEG signals. In the first approach, the EEG signal was first divided into four bands (beta, alpha, theta, and delta), 25 time-domain features were extracted from each band, and the main EEG signal and these extracted features were combined to obtain 125 time-domain features (feature extraction); EEG activities were then classified into five classes with a Random Forest classifier. In the second approach, the 125-attribute problem was split pairwise into ten One-Against-One (OVO) pieces, each piece was classified with a Random Forest classifier, and a majority voting scheme combined the decisions of the ten classifiers. In the third approach, the 125-attribute problem was divided into five One-Against-All (OVA) pieces, each piece was classified with a Random Forest classifier, and a majority voting scheme combined the decisions of the five classifiers. In the fourth approach, the 125-attribute problem was again divided into five One-Against-All (OVA) pieces; since each piece had an imbalanced class distribution, an adaptive synthetic (ADASYN) sampling approach was used to balance it, each balanced piece was classified with a Random Forest classifier, and a majority voting scheme combined the decisions obtained from each classifier. RESULTS: The first approach achieved 71.90% classification success on the five-class EEG signals, the second 91.08%, the third 89%, and the fourth 91.72%. The results show that the fourth approach (the combination of ADASYN sampling and the Random Forest classifier) achieved the best success in classifying the five-class EEG signals and could be used to detect epilepsy events in EEG signals.
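
A minimal sketch of the fourth approach described above: a one-against-all (OVA) split of a multi-class problem, ADASYN balancing of each binary piece, a Random Forest per piece, and a vote (here, the most confident class wins) to combine them. The data are synthetic placeholders rather than the 125 time-domain EEG features.

```python
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 125))
y = rng.integers(0, 5, size=1000)                 # five EEG classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = np.zeros((len(X_te), 5))
for c in range(5):                                # one binary problem per class
    yb = (y_tr == c).astype(int)
    Xb, yb = ADASYN(random_state=0).fit_resample(X_tr, yb)   # balance the piece
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xb, yb)
    scores[:, c] = rf.predict_proba(X_te)[:, 1]   # confidence that the sample is class c

y_pred = scores.argmax(axis=1)                    # OVA vote across the five classifiers
print((y_pred == y_te).mean())
```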

13.
Expert Syst Appl ; 180: 115141, 2021 Oct 15.
Article in English | MEDLINE | ID: mdl-33967405

ABSTRACT

X-ray units have become one of the most advantageous candidates for triaging patients infected with the new coronavirus disease, COVID-19, thanks to their relatively low radiation dose, ease of access, practicality, low cost, and quick imaging process. This research intended to develop a reliable convolutional neural network (CNN) model for the classification of COVID-19 from chest X-ray views while preventing bias issues arising from the database. A transfer learning-based CNN model was developed using a total of 1,218 chest X-ray images (CXIs), consisting of 368 COVID-19 pneumonia and 850 other pneumonia cases, with pre-trained architectures including DenseNet-201, ResNet-18, and SqueezeNet. The chest X-ray images were acquired from publicly available databases, and each image was carefully selected to prevent any bias problem. A stratified 5-fold cross-validation approach was utilized with 90% of the data for training and 10% for testing (unseen folds), and 20% of the training data was used as a validation set to prevent overfitting. The binary classification performance of the proposed CNN models was evaluated on the testing data, and an activation mapping approach was implemented to improve the interpretability and visualization of the radiographs. The outcomes demonstrated that the proposed CNN model built on the DenseNet-201 architecture outperformed the others, with the highest accuracy, precision, recall, and F1-score of 94.96%, 89.74%, 94.59%, and 92.11%, respectively. The results indicate that reliable CNN-based diagnosis of COVID-19 pneumonia from CXIs can accelerate triage, save critical time, and help prioritize resources, besides assisting radiologists.
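
A minimal sketch (not the authors' training code) of transfer learning with a pre-trained DenseNet-201 for binary chest X-ray classification: the classifier head is replaced with a two-class layer and the network is fine-tuned. The dummy batch, learning rate, and class labels are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights="IMAGENET1K_V1")             # pre-trained backbone
model.classifier = nn.Linear(model.classifier.in_features, 2)   # COVID-19 vs. other pneumonia

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative fine-tuning step on a dummy batch of 224x224 RGB chest X-ray images
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```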

14.
PeerJ Comput Sci ; 7: e537, 2021.
Article in English | MEDLINE | ID: mdl-34013040

ABSTRACT

BACKGROUND: The brain-computer interface (BCI) is a relatively new but highly promising field that is actively used in basic neuroscience. A BCI provides human-computer communication based directly on the neural activity underlying mental processes, and its fundamental components consist of several units. In the first stage, the EEG and NIRS signals obtained from individuals are preprocessed and brought to a common standard. METHODS: To realize the proposed framework, a dataset containing Motor Imagery and Mental Activity tasks was prepared from Electroencephalography (EEG) and Near-Infrared Spectroscopy (NIRS) signals. First, HbO and HbR curves were obtained from the NIRS signals. HbO, HbR, HbO+HbR, EEG, EEG+HbO, and EEG+HbR feature tables were created from features extracted from the HbO, HbR, and EEG signals, and feature weighting was carried out with the k-means clustering centers based attribute weighting method (KMCC-based) and the k-means clustering centers difference based attribute weighting method (KMCCD-based). Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and k-Nearest Neighbors (kNN) classifiers were used to examine classifier differences. RESULTS: An accuracy rate of 99.7% (with the kNN classifier and KMCCD-based weighting) was obtained on the Motor Imagery dataset, and an accuracy rate of 99.9% (with the SVM and kNN classifiers and KMCCD-based weighting) on the Mental Activity dataset. The weighting method was used to increase classification accuracy and has been shown to contribute to the classification of EEG and NIRS BCI systems. The results show that the proposed method increases classifier performance while requiring less processing power and being easy to apply. In the future, the k-means clustering center-based weighted hybrid BCI method could be combined with deep learning architectures to achieve further improvements in classifier performance.
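
A heavily hedged sketch of a k-means cluster-center based attribute weighting scheme in the spirit of the KMCC/KMCCD idea above. This is an illustrative formulation, not necessarily the authors' exact one: each attribute is weighted by how strongly the k-means cluster centers differ along it, relative to its overall spread, before classification.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
centers = KMeans(n_clusters=len(np.unique(y)), n_init=10, random_state=0).fit(X).cluster_centers_

# Attributes along which the cluster centers differ most receive larger weights
weights = centers.std(axis=0) / X.std(axis=0)
Xw = X * weights                                  # weighted attribute table

print(cross_val_score(KNeighborsClassifier(3), X, y, cv=5).mean(),
      cross_val_score(KNeighborsClassifier(3), Xw, y, cv=5).mean())
```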

15.
Big Data ; 9(4): 253-264, 2021 08.
Article in English | MEDLINE | ID: mdl-33989047

ABSTRACT

The new, integrated area called the Internet of Things (IoT) has gained popularity due to its smart objects, services, and affordability. These networks are based on data communication, augmented reality (AR), and wired and wireless infrastructures. Their basic objectives are data communication, environment monitoring, tracking, and sensing using smart devices and sensor nodes. AR is one of the attractive and advanced areas integrated into IoT networks in smart homes and smart industries to render objects in 3D, visualize information, and provide interactive, reality-based control. Despite its attraction, this idea has suffered from complex and heavy processes, computational complexity, network communication degradation, and network delay. This article presents a detailed overview of these technologies and proposes a more convenient and fast data communication model using edge computing and fifth-generation (5G) platforms. The article also introduces a Visualization Augmented Reality framework for IoT (VAR-IoT) networks, fully integrated with communication, sensing, and actuating features and a better interface to control the objects. The proposed network model is evaluated in simulation in terms of application response time and network delay, and the results show better performance for the proposed framework.


Subjects
Augmented Reality , Internet of Things , Computer Simulation
16.
PeerJ Comput Sci ; 7: e405, 2021.
Article in English | MEDLINE | ID: mdl-33817048

ABSTRACT

BACKGROUND: Otitis media (OM) is infection and inflammation of the mucous membrane covering the Eustachian tube and the air-filled cavities of the middle ear and temporal bone, and it is one of the most common ailments. In clinical practice, OM is diagnosed by visual inspection of otoscope images, a vulnerable process that is subjective and error-prone. METHODS: In this study, a novel computer-aided decision support model based on a convolutional neural network (CNN) was developed. To improve the generalization ability of the proposed model, a combination of the convolutional block attention module (CBAM, combining channel and spatial attention), residual blocks, and the hypercolumn technique was embedded into the model. All experiments were performed on an open-access tympanic membrane dataset consisting of 956 otoscope images collected into five classes. RESULTS: The proposed model yielded satisfactory classification achievement, ensuring an overall accuracy of 98.26%, sensitivity of 97.68%, and specificity of 99.30%, and it produced results superior to pre-trained CNNs such as AlexNet, VGG-Nets, GoogLeNet, and ResNets. Consequently, this study shows that a CNN model equipped with these advanced image processing techniques is useful for OM diagnosis. The proposed model may help field specialists achieve objective and repeatable results, decrease the misdiagnosis rate, and support decision-making processes.
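
A minimal sketch of the CBAM idea used in the model above, i.e., channel attention followed by spatial attention applied to a feature map. This is a generic, textbook-style implementation with assumed layer sizes, not the paper's exact block.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                          # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # channel attention from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))                  # ... and from max pooling
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),        # spatial attention from channel statistics
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(64)(torch.randn(2, 64, 32, 32)).shape)
```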

17.
Appl Soft Comput ; 110: 107610, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36569211

ABSTRACT

In this work, a prototype of an artificial-intelligence-based smart camera system that tracks social distance from a bird's-eye perspective has been developed. The "MobileNet SSD-v3", "Faster R-CNN Inception-v2", and "Faster R-CNN ResNet-50" models were utilized to identify people in video sequences. The final prototype, based on the Faster R-CNN model, is an integrated embedded system that detects social distance with the camera. The software, developed using the Nvidia Jetson Nano development kit and a Raspberry Pi camera module, performs all necessary computations on the device, detects social distance violations, issues audible and light warnings, and reports the results to a server. It is predicted that the developed smart camera prototype can be integrated into public spaces within the scope of "sustainable smart cities," a transformation the world is on the verge of.
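
A minimal sketch of the bird's-eye-view distance check behind the prototype described above (not the deployed Jetson Nano code): detected people's foot points are mapped to a top-down plane with a homography, and pairs closer than a threshold are flagged as social-distance violations. The homography matrix, the pixel points, and the 1.5 m threshold are illustrative assumptions.

```python
import numpy as np
import cv2

H = np.array([[1.2, 0.1, -30.0],             # assumed image-to-ground homography
              [0.0, 2.0, -50.0],
              [0.0, 0.002, 1.0]])

feet = np.array([[[320, 700], [400, 705], [900, 710]]], dtype=np.float32)  # (1, N, 2) pixel points
ground = cv2.perspectiveTransform(feet, H)[0]  # bird's-eye coordinates (assumed metres)

violations = []
for i in range(len(ground)):
    for j in range(i + 1, len(ground)):
        if np.linalg.norm(ground[i] - ground[j]) < 1.5:   # 1.5 m threshold
            violations.append((i, j))
print(violations)
```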
