Results 1 - 20 of 21
1.
Front Neurorobot ; 18: 1398703, 2024.
Article in English | MEDLINE | ID: mdl-38831877

ABSTRACT

Introduction: In recent years, heightened interest has been shown in classifying scene images depicting diverse robotic environments. This surge can be attributed to significant improvements in visual sensor technology, which have enhanced image analysis capabilities. Methods: Advances in vision technology have a major impact on multiple object detection and scene understanding. These tasks are integral to a variety of technologies, including scene integration in augmented reality, robot navigation, autonomous driving systems, and tourist information applications. Despite significant strides in visual interpretation, numerous challenges persist, including semantic understanding, occlusion, orientation, limited availability of labeled data, uneven illumination (shadows and lighting), variation in viewing direction and object size, and changing backgrounds. To overcome these challenges, we propose a scene recognition framework that proved highly effective. First, we preprocess the scene data using kernel convolution. Second, we perform semantic segmentation with UNet. We then extract features from the segmented data using the discrete wavelet transform (DWT), Sobel and Laplacian edge operators, and texture descriptors (local binary pattern analysis). To recognize objects, we use a deep belief network and then determine object-to-object relations. Finally, AlexNet assigns the relevant scene label based on the objects recognized in the image. Results: The performance of the proposed system was validated on three standard datasets: PASCAL VOC-12, Cityscapes, and Caltech 101. The accuracy attained on the PASCAL VOC-12 dataset exceeds 96%, while the Cityscapes dataset yields 95.90%. Discussion: Furthermore, the model achieves a commendable accuracy of 92.2% on the Caltech 101 dataset, advancing beyond the capabilities of current models.
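
A minimal sketch of the feature-extraction stage described above (DWT sub-bands, Sobel/Laplacian edge responses, and local binary patterns on a segmented region) might look as follows; the wavelet, summary statistics, and region size are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt                                    # PyWavelets: discrete wavelet transform
from scipy import ndimage                      # Sobel / Laplacian filters
from skimage.feature import local_binary_pattern

def region_features(region: np.ndarray) -> np.ndarray:
    """Feature vector for one segmented region (2-D uint8 grayscale array)."""
    reg = region.astype(float)

    # DWT: summary statistics of the approximation and detail sub-bands.
    cA, (cH, cV, cD) = pywt.dwt2(reg, "haar")
    dwt_feats = [b.mean() for b in (cA, cH, cV, cD)] + [b.std() for b in (cA, cH, cV, cD)]

    # Edge responses: Sobel gradient magnitude and Laplacian.
    sobel_mag = np.hypot(ndimage.sobel(reg, axis=0), ndimage.sobel(reg, axis=1))
    lap = ndimage.laplace(reg)
    edge_feats = [sobel_mag.mean(), sobel_mag.std(), lap.mean(), lap.std()]

    # Texture: histogram of uniform local binary patterns.
    lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([dwt_feats, edge_feats, lbp_hist])

# A random 64x64 patch stands in for one UNet-segmented object region.
print(region_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8)).shape)  # (22,)
```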

2.
PeerJ Comput Sci ; 10: e1967, 2024.
Article in English | MEDLINE | ID: mdl-38660161

ABSTRACT

With the evolution of the Internet and multimedia technologies, mining multimedia data to predict topic richness has significant practical value for public opinion monitoring and the competition for discourse power over data. This study introduces an algorithm for predicting English topic richness based on the Transformer model, applied to the Twitter platform. First, relevant data are organized and extracted following an analysis of Twitter's characteristics. Next, a feature fusion approach is employed to mine, extract, and construct features from Twitter blogs and users, encompassing blog features, topic features, and user features, which are combined into multimodal features. Finally, the fused features are used to train the Transformer model. In experiments on the Twitter topic richness dataset, our algorithm achieves an accuracy of 82.3%, confirming the efficacy and strong performance of the proposed approach.
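
A minimal sketch of the feature-fusion-plus-Transformer idea, treating the blog, topic, and user feature groups as three fused tokens; the dimensions, depth, and class count are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RichnessClassifier(nn.Module):
    """Fuse blog, topic, and user feature vectors and classify topic richness."""
    def __init__(self, blog_dim=64, topic_dim=32, user_dim=16, d_model=128, n_classes=2):
        super().__init__()
        # Project each feature group to a shared width, forming a 3-token "sequence".
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in (blog_dim, topic_dim, user_dim)])
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, blog, topic, user):
        tokens = torch.stack([p(x) for p, x in zip(self.proj, (blog, topic, user))], dim=1)
        return self.head(self.encoder(tokens).mean(dim=1))   # pool the fused tokens

model = RichnessClassifier()
print(model(torch.randn(8, 64), torch.randn(8, 32), torch.randn(8, 16)).shape)  # torch.Size([8, 2])
```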

3.
Cancers (Basel) ; 15(21)2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37958422

ABSTRACT

Oral cancer is a fatal disease and ranks seventh among the most common cancers worldwide. It usually affects the head and neck. The current gold standard for diagnosis is histopathological investigation; however, this conventional approach is time-consuming and requires professional interpretation. Early diagnosis of Oral Squamous Cell Carcinoma (OSCC) is therefore crucial for successful therapy, reducing the risk of mortality and morbidity while improving the patient's chances of survival. We employed several artificial intelligence techniques to aid clinicians and physicians, thereby significantly reducing the workload of pathologists. This study aimed to develop hybrid methodologies based on fused features to improve early diagnosis of OSCC. Three different strategies were employed, each using five distinct models. The first strategy is transfer learning using the Xception, InceptionV3, InceptionResNetV2, NASNetLarge, and DenseNet201 models. The second strategy uses pre-trained state-of-the-art CNNs for feature extraction coupled with a Support Vector Machine (SVM) for classification: features were extracted using the pre-trained models named above and fed to the SVM algorithm to evaluate classification accuracy. The final strategy employs a hybrid feature fusion technique, using the state-of-the-art CNN models to extract deep features, which undergo dimensionality reduction through principal component analysis (PCA). The low-dimensional features are then combined with shape, color, and texture features extracted using the gray-level co-occurrence matrix (GLCM), Histogram of Oriented Gradients (HOG), and Local Binary Pattern (LBP) methods, and this hybrid feature fusion is fed to the SVM to enhance classification performance. The proposed system achieved promising results for rapid diagnosis of OSCC from histological images. The accuracy, precision, sensitivity, specificity, F1 score, and area under the curve (AUC) of the SVM based on the hybrid feature fusion of DenseNet201 with GLCM, HOG, and LBP features were 97.00%, 96.77%, 90.90%, 98.92%, 93.74%, and 96.80%, respectively.
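
A hedged sketch of the third strategy (deep CNN features reduced with PCA and fused with GLCM, HOG, and LBP descriptors before an SVM); the image sizes, PCA dimension, and descriptor parameters are assumptions for illustration only.

```python
import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input
from skimage.feature import graycomatrix, graycoprops, hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC

cnn = DenseNet201(weights="imagenet", include_top=False, pooling="avg")   # deep-feature extractor

def fused_features(rgb_batch, gray_batch):
    """Concatenate PCA-reduced deep features with GLCM, HOG, and LBP descriptors."""
    deep = cnn.predict(preprocess_input(rgb_batch.astype("float32")), verbose=0)
    deep = PCA(n_components=min(64, len(rgb_batch))).fit_transform(deep)

    handcrafted = []
    for g in gray_batch:                              # g: 2-D uint8 grayscale image
        glcm = graycomatrix(g, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
        glcm_f = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy", "correlation")]
        hog_f = hog(g, orientations=9, pixels_per_cell=(32, 32), cells_per_block=(1, 1))
        lbp = local_binary_pattern(g, P=8, R=1, method="uniform")
        lbp_f, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        handcrafted.append(np.concatenate([glcm_f, hog_f, lbp_f]))
    return np.hstack([deep, np.array(handcrafted)])

# Typical use: X = fused_features(rgb_images, gray_images); SVC(kernel="rbf").fit(X, labels)
```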

4.
Comput Intell Neurosci ; 2023: 7282944, 2023.
Article in English | MEDLINE | ID: mdl-37876944

ABSTRACT

Histopathological images are very effective for investigating the status of various biological structures and diagnosing diseases such as cancer. In addition, digital histopathology increases diagnostic precision and provides better image quality and more detail for the pathologist, with multiple viewing options and team annotations. These benefits enable faster treatment, increasing therapy success rates and patients' recovery and survival chances. However, manual examination of these images is tedious and time-consuming for pathologists, so reliable automated techniques are needed to effectively classify normal and malignant cancer images. This paper applied a deep learning approach, namely EfficientNet and its variants from B0 to B7, using a different image resolution for each model, from 224 × 224 pixels to 600 × 600 pixels. We also applied transfer learning and parameter tuning to improve the results and mitigate overfitting. The data come from the Lung and Colon Cancer Histopathological Image (LC25000) dataset, which consists of 25,000 histopathology images in five classes (lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and colon benign tissue). We preprocessed the dataset to remove noisy images and bring them into a standard format. The models' performance was evaluated in terms of classification accuracy and loss. Good accuracy was achieved for all variants; EfficientNetB2 stood out with an accuracy of 97% at a 260 × 260 pixel resolution.
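
A minimal transfer-learning sketch for the EfficientNetB2 variant at its native 260 × 260 resolution; the head layers, dropout rate, and training schedule are illustrative assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 5                      # LC25000: five lung/colon tissue classes
IMG_SIZE = (260, 260)                # EfficientNetB2's native input resolution

base = tf.keras.applications.EfficientNetB2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False               # transfer learning: freeze the pretrained backbone first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),    # simple regularization against overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: (image, label) batches
```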


Subjects
Adenocarcinoma, Colonic Neoplasms, Lung Neoplasms, Humans, Algorithms, Colonic Neoplasms/pathology, Lung
5.
PLoS One ; 18(8): e0290045, 2023.
Article in English | MEDLINE | ID: mdl-37611023

ABSTRACT

Monkeypox is an enveloped double-stranded DNA virus and a member of the Orthopoxvirus genus in the Poxviridae family. The virus can be transmitted from human to human through direct contact with respiratory secretions, infected animals and humans, or contaminated objects, and it accumulates mutations in the human host. In May 2022, monkeypox cases were reported in many countries, and because of these transmission characteristics the WHO proclaimed a public health emergency on July 23, 2022. This study analyzed the gene mutation rate computed from the most recent NCBI monkeypox dataset. The collected data were prepared to independently identify nucleotide and codon mutations. Additionally, depending on the size and availability of the gene dataset, the computed mutation rate was split into three categories: Canada, Germany, and the rest of the world. The genome mutation rate of the monkeypox virus is predicted using a deep learning-based Long Short-Term Memory (LSTM) model and compared with a Gated Recurrent Unit (GRU) model. The LSTM model shows root mean square error (RMSE) values of 0.09 and 0.08 for testing and training, respectively. Using this time-series analysis, the prospective mutation rate of the 50th patient was predicted. This is a new report on monkeypox gene mutation. We found that the nucleotide mutation rates are decreasing and that the balance between bi-directional rates is maintained.
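
A hedged sketch of LSTM-based time-series prediction over a per-sample mutation-rate sequence; the window length, network size, and synthetic data below are placeholders, not the study's settings.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=5):
    """Turn a 1-D mutation-rate series into (window -> next value) training pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)     # LSTM input: (samples, steps, features)

rates = np.random.rand(60).astype("float32")        # placeholder per-sample mutation rates
X, y = make_windows(rates)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=5, verbose=0)
print("next predicted rate:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```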


Subjects
Mpox, Animals, Humans, Mpox/genetics, Short-Term Memory, Prospective Studies, Monkeypox virus/genetics, Mutation
6.
Heliyon ; 9(6): e17089, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37332919

ABSTRACT

Background: Some healthcare professionals have expressed worries about using AI, while others anticipate more work opportunities and better patient care in the future. Integrating AI into practice will directly impact dentistry. The purpose of this study is to evaluate organizational readiness, knowledge, attitude, and willingness to integrate AI into dentistry practice. Methods: A cross-sectional exploratory study of dentists, academic faculty, and students who practice and study dentistry in the UAE. Participants were invited to complete a previously validated survey collecting demographics, knowledge, perceptions, and organizational readiness. Results: One hundred thirty-four people responded to the survey, a response rate of 78% of the invited group. Results showed excitement about implementing AI in practice, accompanied by medium to high knowledge but a lack of education and training programs. Organizations were not yet well prepared and must ensure readiness for AI implementation. Conclusion: Efforts to ensure professional and student readiness will improve AI integration in practice. In addition, dental professional societies and educational institutions must collaborate to develop proper training programs for dentists to close the knowledge gap.

7.
PeerJ Comput Sci ; 9: e1355, 2023.
Article in English | MEDLINE | ID: mdl-37346503

ABSTRACT

Innovative technology and improvements in intelligent machinery, transportation facilities, emergency systems, and educational services define the modern era, yet it remains difficult to understand such scenes, analyze crowds, and observe individuals. This article proposes an organized e-learning-based multi-object tracking and prediction framework for crowd data via a multilayer perceptron; it takes e-learning crowd data as input, covering both usual and abnormal actions and activities. After superpixel and fuzzy c-means segmentation, we use fused dense optical flow and gradient patches for feature extraction, and we apply a compressive tracking algorithm and a Taylor series predictive tracking approach for multi-object tracking. The next step extracts trajectories using the mean, variance, speed, and frame occupancy. To reduce data complexity and support optimization, we apply T-distributed stochastic neighbor embedding (t-SNE). To predict normal and abnormal actions in e-learning-based crowd data, we use a multilayer perceptron (MLP) to classify the classes. For experimental evaluation on human and non-human videos, we used three crowd-activity datasets: the University of California San Diego pedestrian dataset (UCSD-Ped), ShanghaiTech, and the Indian Institute of Technology Bombay (IITB) corridor dataset. We achieved mean accuracies of 87.00% on UCSD-Ped, 85.75% on ShanghaiTech, and 88.00% on the IITB corridor dataset.
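
A minimal sketch of the final classification stage: trajectory-level features embedded with t-SNE for inspection and classified as normal or abnormal with a multilayer perceptron; the features and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder trajectory features (mean, variance, speed, frame occupancy, ...) per tracked object.
X = np.random.rand(300, 8)
y = np.random.randint(0, 2, 300)             # 0 = normal activity, 1 = abnormal activity

# t-SNE embeds the feature space in 2-D for inspection / complexity reduction.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```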

8.
PeerJ Comput Sci ; 9: e1315, 2023.
Article in English | MEDLINE | ID: mdl-37346609

ABSTRACT

The field of optimization is concerned with determining the optimal solution to a problem, expressed as the mathematical minimization or maximization of a given objective function: optimization must reduce a problem's losses and disadvantages while maximizing its gains and benefits. We all want optimal, or at the very least suboptimal, answers because we all want to live a better life. Group counseling optimizer (GCO) is an emerging evolutionary algorithm that simulates the human behavior of counseling within a group to solve problems. GCO has been successfully applied to single- and multi-objective optimization problems. The 0/1 knapsack problem is a combinatorial problem in which each item is either selected entirely or dropped, so that the total weight of the selected items does not exceed the knapsack capacity and the total value of the selected items is as large as possible. Dynamic programming solves the 0/1 knapsack problem optimally, but only in pseudo-polynomial O(nW) time, where W is the knapsack capacity. In this article, we provide a feature analysis of GCO parameters and use GCO to solve the 0/1 knapsack problem (KP). The results show that the GCO-based approach solves the 0/1 knapsack problem efficiently and is therefore a viable alternative.
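
For reference, the exact dynamic-programming baseline that the GCO-based approach is compared against fits in a few lines (the GCO metaheuristic itself is not shown here):

```python
def knapsack_01(values, weights, capacity):
    """Classic 0/1 knapsack dynamic programming: O(n * W) time, O(W) space."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```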

9.
PeerJ Comput Sci ; 9: e1176, 2023.
Article in English | MEDLINE | ID: mdl-37346684

ABSTRACT

Background: Humans must cope with the huge amounts of information produced by the information technology revolution, so automatic text summarization is being employed in a range of industries to help individuals identify the most important information. Two approaches are mainly considered for text summarization: extractive and abstractive. The extractive approach selects chunks of sentences directly from the source documents, while the abstractive approach generates a summary based on mined keywords. For low-resourced languages such as Urdu, extractive summarization has been addressed with various models and algorithms, but abstractive summarization in Urdu remains a challenging task; because there are so many literary works in Urdu, producing abstractive summaries demands extensive research. Methodology: This article proposes a deep learning model for the Urdu language using the Urdu 1 Million news dataset and compares its performance with two widely used machine learning methods, support vector machine (SVM) and logistic regression (LR). The results show that the proposed deep learning model performs better than the other two approaches. The summaries produced by the extractive stage are then processed with an encoder-decoder paradigm to create an abstractive summary. Results: The system-generated summaries were validated with the help of Urdu language specialists, demonstrating the proposed model's improvement and accuracy.
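
A minimal, language-agnostic sketch of the extractive stage (sentence scoring by word frequency); the abstractive encoder-decoder stage and Urdu-specific tokenization are omitted here.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by summed word frequency and keep the top-n, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    ranked = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
                    reverse=True)
    keep = set(ranked[:n_sentences])
    return " ".join(s for s in sentences if s in keep)

print(extractive_summary("Urdu is widely spoken. News archives grow daily. "
                         "Summarization helps readers find key news quickly."))
```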

10.
Healthcare (Basel) ; 10(12)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36553891

ABSTRACT

Breast cancer is one of the most widely recognized cancers after skin cancer. Though it can occur in anyone, it is far more common in women. Several analytical techniques, such as breast MRI, X-ray, thermography, mammography, and ultrasound, are used to identify it. In this study, artificial intelligence was used to rapidly detect breast cancer by analyzing ultrasound images from the Breast Ultrasound Images Dataset (BUSI), which consists of three categories: benign, malignant, and normal. The dataset comprises grayscale and masked ultrasound images of diagnosed patients. Validation tests were performed for quantitative outcomes using the performance measures for each procedure. The proposed framework proved effective: evaluation on raw images alone gave 78.97% test accuracy and evaluation on masked images gave 81.02% test accuracy, which could reduce human errors in the diagnostic process. Moreover, the framework achieves higher accuracy when using a multi-headed CNN with two processed datasets based on the masked and original images, where accuracy rose to 92.31% (±2) with a Mean Squared Error (MSE) loss of 0.05. This work primarily contributes to identifying the usefulness of a multi-headed CNN when working with two different types of data inputs. Finally, a web interface was built to make the model usable by non-technical personnel.
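
A hedged sketch of a two-input ("multi-headed") CNN that consumes the raw and masked ultrasound images in parallel and fuses both branches before classification; the layer sizes and input resolution are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_branch(name):
    """One small convolutional head; the raw and masked inputs each get their own."""
    inp = layers.Input(shape=(128, 128, 1), name=name)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

raw_in, raw_feat = conv_branch("raw_ultrasound")
mask_in, mask_feat = conv_branch("masked_ultrasound")

merged = layers.concatenate([raw_feat, mask_feat])          # fuse the two heads
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(3, activation="softmax")(hidden)         # benign / malignant / normal

model = tf.keras.Model(inputs=[raw_in, mask_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit([raw_images, masked_images], labels, epochs=20)
```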

11.
PeerJ Comput Sci ; 8: e1157, 2022.
Article in English | MEDLINE | ID: mdl-36532801

ABSTRACT

Steganography is a technique in which a person hides information in digital media; the message is concealed so that others cannot even suspect that the information exists. This article develops a mechanism for communicating one-on-one with individuals by concealing information from the rest of the group. Given their ready availability, digital images are the most suitable carriers compared to other objects available on the internet. The proposed technique encrypts a message within an image. Several steganographic techniques exist for hiding secret information in images, some more complex than others, and each has its strengths and weaknesses. The encryption mechanism employed may have different requirements depending on the application: certain applications may require complete invisibility of the key information, while others may require the concealment of a larger secret message. In this research, we propose a technique that converts plain text to ciphertext and encodes it in an image using up to the four least significant bits (LSBs), with positions determined by a hash function. The LSBs of the image pixel values are substituted with pieces of the text. Since only the LSBs are modified, human eyes cannot perceive the difference between the original image and the resulting image. The proposed technique is compared with state-of-the-art techniques; the results reveal that it outperforms the existing techniques in terms of security and efficiency, with adequate MSE and PSNR.
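
A minimal sketch of multi-bit LSB embedding; for simplicity the bits are written sequentially rather than at hash-selected positions, and the plaintext-to-ciphertext step is omitted.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: str, n_bits: int = 2) -> np.ndarray:
    """Hide a UTF-8 message in the n least-significant bits of a grayscale cover image."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    bits += "0" * ((-len(bits)) % n_bits)                 # pad to a multiple of n_bits
    flat = cover.flatten().copy()
    if len(bits) > flat.size * n_bits:
        raise ValueError("message too long for this cover image")
    keep_mask = 255 - ((1 << n_bits) - 1)                 # clears the n lowest bits
    for i in range(0, len(bits), n_bits):
        chunk = int(bits[i:i + n_bits], 2)
        flat[i // n_bits] = (flat[i // n_bits] & keep_mask) | chunk
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "secret")
print("max per-pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # <= 3
```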

12.
Plants (Basel) ; 11(15)2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35893629

ABSTRACT

Tea (Camellia sinensis L.) is one of the most highly consumed beverages globally after water. Several countries import large quantities of tea to meet domestic needs, so accurate and timely prediction of tea yield is critical. Previous studies used statistical, deep learning, and machine learning techniques for tea yield prediction, but crop simulation models have not yet been used; calibrating a simulation model for tea yield prediction and comparing these approaches across different data types is therefore needed. This research provides a comparative study of methods for tea yield prediction using the Food and Agriculture Organization (FAO) of the United Nations AquaCrop simulation model and machine learning techniques. We employed weather, soil, crop, and agro-management data from 2016 to 2019, acquired from tea fields of the National Tea and High-Value Crop Research Institute (NTHRI), Pakistan, to calibrate the AquaCrop simulation model and to train regression algorithms. We achieved a mean absolute error (MAE) of 0.45 t/ha, a mean squared error (MSE) of 0.23 t/ha, and a root mean square error (RMSE) of 0.48 t/ha in the calibration of the AquaCrop model. Among the ten regression models, the XGBoost regressor achieved the lowest errors: an MAE of 0.093 t/ha, MSE of 0.015 t/ha, and RMSE of 0.120 t/ha using 10-fold cross-validation, and an MAE of 0.123 t/ha, MSE of 0.024 t/ha, and RMSE of 0.154 t/ha using a train-test split. We conclude that the machine learning regression algorithm performed better in yield prediction using fewer data than the simulation model. This study provides a technique to improve tea yield prediction by combining different data sources, a crop simulation model, and machine learning algorithms.
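
A hedged sketch of the machine-learning side of the comparison: an XGBoost regressor evaluated with 10-fold cross-validation and with a train/test split. The feature matrix below is a synthetic placeholder for the weather, soil, and agro-management inputs.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_error

X = np.random.rand(200, 12)              # placeholder weather/soil/agro-management features
y = np.random.rand(200) * 3.0            # placeholder tea yield in t/ha

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)

# 10-fold cross-validated MAE.
mae_cv = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error").mean()

# Train/test-split MAE.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mae_split = mean_absolute_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
print(f"MAE (10-fold CV): {mae_cv:.3f} t/ha, MAE (train/test split): {mae_split:.3f} t/ha")
```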

13.
Article in English | MEDLINE | ID: mdl-35682023

ABSTRACT

Computer-aided diagnostic (CAD) systems can assist radiologists in detecting coal workers' pneumoconiosis (CWP) in chest X-rays. Early diagnosis of CWP can significantly improve workers' survival rate, and the development of CAD systems will reduce workplace risk and improve the quality of chest screening for CWP. This systematic literature review (SLR) aims to categorise and summarise the feature extraction and detection approaches of computer-based analysis of CWP using chest X-ray radiographs (CXR). We conducted the SLR across 11 databases focused on science, engineering, medicine, health, and clinical studies. The review identified and compared 40 articles from the last 5 decades, covering three main categories of computer-based CWP detection: classical handcrafted feature-based image analysis, traditional machine learning, and deep learning-based methods. Limitations of this review and directions for future improvement are also discussed.


Assuntos
Antracose , Minas de Carvão , Pneumoconiose , Antracose/diagnóstico por imagem , Carvão Mineral , Computadores , Humanos , Aprendizado de Máquina , Pneumoconiose/diagnóstico por imagem , Raios X
14.
Comput Methods Programs Biomed ; 223: 106951, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35767911

ABSTRACT

BACKGROUND AND OBJECTIVE: Many developed and developing countries worldwide suffer from cancer-related fatal diseases. In particular, the incidence of breast cancer in females is increasing, partly due to lack of awareness and late diagnosis. Proper initial breast cancer treatment can only be provided by adequately detecting and classifying the cancer at the very early stages of its development. Medical image analysis techniques and computer-aided diagnosis can help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS: This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet18, ShuffleNet, and Inception-V3) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS: Our proposed method provides the best average binary classification accuracy (benign versus malignant) of 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3, and ShuffleNet, respectively.
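
A minimal transfer-learning sketch for one of the three backbones (ResNet-18) in PyTorch; the freezing policy, optimizer, and dummy batch are illustrative assumptions rather than the paper's training protocol.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2    # benign vs. malignant; the multi-class BreakHis setting uses 8 subtypes

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                               # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB histology patches.
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, NUM_CLASSES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```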


Subjects
Breast Neoplasms, Breast/pathology, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Computers, Female, Humans, Machine Learning, Neural Networks (Computer)
15.
Comput Intell Neurosci ; 2022: 8393318, 2022.
Article in English | MEDLINE | ID: mdl-35387252

ABSTRACT

Dark Web structural pattern mining faces several issues (including a great deal of redundant and irrelevant information), which contributes to numerous types of cybercrime such as illegal trade, forums, terrorist activity, and illegal online shopping. Understanding online criminal behavior is challenging because the data are available in vast amounts. An approach is required for learning criminal behavior from recent requests and for improving labeled data for user profiling, since Dark Web structural pattern mining on multidimensional data sets gives uncertain results. Uncertain classification results make it impossible to predict user behavior, and because multidimensional data contain mixed features, they adversely affect classification. The flood of data associated with the Dark Web has so far prevented solutions appropriate to this need. In this research design, a fusion neural network (NN)-S3VM model for criminal network activity prediction is proposed; built on the neural network, NN-S3VM can improve the prediction.


Subjects
Machine Learning, Neural Networks (Computer), Learning
16.
Healthcare (Basel) ; 9(12)2021 Nov 29.
Article in English | MEDLINE | ID: mdl-34946378

ABSTRACT

Epigenetic changes are a necessary characteristic of all cancer types; tumor cells usually harbor both genetic changes and epigenetic alterations. Identifying epigenetically similar features among various cancer types is highly beneficial for discovering appropriate treatments, and profiles of epigenetic alterations can aid this goal. In this paper, we propose a new technique applying data mining and clustering methodologies to analyze cancer epigenetic changes. The proposed technique aims to detect common patterns of epigenetic changes in various cancer types. We validated the new technique by detecting epigenetic patterns across seven cancer types and by determining epigenetic similarities among them. The experimental results demonstrate that common epigenetic patterns do exist across these cancer types. Additionally, epigenetic gene analysis of the associated genes found a strong relationship with the development of various types of cancer and indicated high risk across the studied cancer types. We utilized a frequent pattern data mining approach to represent cancer types compactly by the promoters of selected epigenetic marks. From the built frequent itemsets, the most frequent items are identified and grouped into bi-clusters of these patterns. Experimental results show that the proposed method has a success rate of 88% in detecting cancer types according to specific epigenetic patterns.
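
A minimal sketch of the frequent-pattern step: enumerating itemsets of epigenetic marks that recur across cancer-type profiles above a support threshold. The mark names and threshold below are placeholders, not the study's annotations.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" lists the epigenetic marks observed for one cancer-type promoter profile.
profiles = [
    {"H3K4me3", "H3K27ac", "DNAme"},
    {"H3K4me3", "H3K27ac"},
    {"H3K4me3", "DNAme"},
    {"H3K27ac", "DNAme"},
    {"H3K4me3", "H3K27ac", "DNAme"},
]

def frequent_itemsets(transactions, min_support=0.6, max_size=3):
    """Return itemsets whose support (fraction of transactions containing them) meets the threshold."""
    counts = Counter()
    for t in transactions:
        for k in range(1, max_size + 1):
            for combo in combinations(sorted(t), k):
                counts[combo] += 1
    n = len(transactions)
    return {items: c / n for items, c in counts.items() if c / n >= min_support}

for items, support in sorted(frequent_itemsets(profiles).items(), key=lambda kv: -kv[1]):
    print(items, f"support={support:.2f}")
```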

17.
PeerJ Comput Sci ; 7: e675, 2021.
Article in English | MEDLINE | ID: mdl-34712788

ABSTRACT

The presence of 3D sensors in hand-held or head-mounted smart devices has motivated many researchers around the globe to devise algorithms to manage 3D point cloud data efficiently and economically. This paper presents a novel lossy compression technique to compress and decompress 3D point cloud data that saves storage space on smart devices and minimizes bandwidth use when data are transferred over the network. The idea exploits the geometric information of the scene by using a quadric surface representation of the point cloud: a region of a point cloud can be represented by the coefficients of a quadric surface when the boundary conditions are known. Thus, a set of quadric surface coefficients and their associated boundary conditions are stored as the compressed point cloud and used for decompression. An added advantage of the proposed technique is its flexibility to decompress the cloud as either a dense or a coarse cloud. We compared our technique with state-of-the-art 3D lossless and lossy compression techniques on a number of standard, publicly available datasets with varying structural complexities.
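
A simplified sketch of the idea: fit a quadric patch to a point-cloud region by least squares, store only the coefficients and bounds, and re-sample at any resolution on decompression. The explicit height-field form used below is a simplification of the general quadric representation described above.

```python
import numpy as np

def fit_quadric_patch(points: np.ndarray) -> np.ndarray:
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to an (N, 3) point-cloud patch."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                                  # six numbers replace the raw points

def decompress(coeffs, bounds, resolution=20):
    """Re-sample the stored surface inside its bounds, densely or coarsely."""
    (x0, x1), (y0, y1) = bounds
    xs, ys = np.meshgrid(np.linspace(x0, x1, resolution), np.linspace(y0, y1, resolution))
    a, b, c, d, e, f = coeffs
    zs = a * xs**2 + b * ys**2 + c * xs * ys + d * xs + e * ys + f
    return np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

pts = np.random.rand(500, 3)                       # placeholder patch of a larger cloud
coeffs = fit_quadric_patch(pts)
print(decompress(coeffs, bounds=((0, 1), (0, 1)), resolution=10).shape)   # (100, 3)
```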

18.
PeerJ Comput Sci ; 7: e524, 2021.
Article in English | MEDLINE | ID: mdl-34150995

ABSTRACT

For the past half century, identification of relevant documents has been an active area of research due to the rapid increase of data on the web. Traditional models for retrieving relevant documents are based on bibliographic information such as bibliographic coupling, co-citations, and direct citations. More recently, however, the scientific community has started to employ textual features to improve the accuracy of existing models. In our previous study, we found that analyzing citations at a deep level (i.e., the content level) plays a paramount role in finding more relevant documents than the surface level (i.e., bibliography details alone): cited and citing papers have a high degree of relevancy when the in-text citation frequency of the cited paper is more than five times in the citing paper's text. This paper extends our previous study by evaluating it on a comprehensive dataset, and the results are compared with other state-of-the-art approaches, i.e., content-, metadata-, and bibliography-based techniques. For evaluation, a user study was conducted on papers selected from 1,200 documents (comprising about 16,000 references) of an online journal, the Journal of Universal Computer Science (J.UCS). The evaluation results indicate that in-text citation frequency attains higher precision in finding relevant papers than other state-of-the-art techniques such as content-, bibliographic coupling-, and metadata-based techniques. The use of in-text citations may help enhance the quality of existing information systems and digital libraries, and more sophisticated measures may be defined by considering the use of in-text citations.
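
A small sketch of the core measure, counting how often a given reference is cited in a paper's body text; the bracketed citation style and the example threshold check are assumptions for illustration.

```python
import re

def in_text_citation_count(body_text: str, ref_number: int) -> int:
    """Count how often reference [ref_number] appears in bracketed in-text citations."""
    count = 0
    for group in re.findall(r"\[([^\]]+)\]", body_text):   # e.g. "[3]", "[3, 7]", "[2-5]"
        for part in (p.strip() for p in group.split(",")):
            if re.fullmatch(r"\d+-\d+", part):
                lo, hi = map(int, part.split("-"))
                count += lo <= ref_number <= hi
            elif part.isdigit():
                count += int(part) == ref_number
    return count

body = "Prior work [3] showed X. We extend [3, 7] and confirm the trend [2-5]."
print(in_text_citation_count(body, 3))        # 3
print(in_text_citation_count(body, 3) > 5)    # the "more than five times" relevancy threshold
```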

19.
PeerJ Comput Sci ; 7: e389, 2021.
Article in English | MEDLINE | ID: mdl-33817035

ABSTRACT

Keyword extraction is essential for determining influential keywords in huge documents, as research repositories are becoming more massive in volume day by day; the research community is drowning in data and starving for information. Keywords are the words that describe the theme of a whole document precisely, using just a few words. Many state-of-the-art approaches are available for keyword extraction from huge collections of documents, classified into three types: statistical approaches, machine learning, and graph-based methods. Machine learning approaches require a large training dataset that must be developed manually by domain experts, which is sometimes difficult to produce when determining influential keywords. This research therefore focused on enhancing state-of-the-art graph-based methods to extract keywords when no training dataset is available. We first converted a handcrafted dataset, collected from impact-factor journals, into n-gram combinations ranging from unigrams to five-grams, and also enhanced traditional graph-based approaches. The experiment was conducted on the handcrafted dataset, and all methods were applied to it. Domain experts performed a user study to evaluate the results. The results from every method were compared against the user study using precision, recall, and F-measure as evaluation metrics. The results showed that the proposed method (FNG-IE) performed well and scored close to the machine learning approaches.
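
A hedged sketch of a graph-based, training-free ranking in the spirit described above: PageRank over a word co-occurrence graph, with candidate n-grams (capped at trigrams here for brevity) scored by their words' average rank. The window size and scoring rule are illustrative choices, not the FNG-IE method itself.

```python
import re
import networkx as nx

def rank_ngram_keywords(text, max_n=3, top_k=5, window=3):
    """PageRank over a word co-occurrence graph; n-gram candidates scored by mean word rank."""
    tokens = re.findall(r"\w+", text.lower())

    graph = nx.Graph()
    for i, word in enumerate(tokens):                   # co-occurrence edges within a window
        for other in tokens[i + 1:i + 1 + window]:
            if other != word:
                graph.add_edge(word, other)
    word_score = nx.pagerank(graph)

    candidates = {" ".join(tokens[i:i + n])
                  for n in range(1, max_n + 1)
                  for i in range(len(tokens) - n + 1)}
    scored = {c: sum(word_score.get(w, 0.0) for w in c.split()) / len(c.split())
              for c in candidates}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

text = ("Keyword extraction identifies influential keywords from documents. Graph based "
        "keyword extraction can rank candidate keywords without a labeled training dataset.")
print(rank_ngram_keywords(text))
```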

20.
Sensors (Basel) ; 21(1)2021 Jan 04.
Article in English | MEDLINE | ID: mdl-33406623

ABSTRACT

Health 4.0 is an extension of the Industry 4.0 standard aimed at the virtualization of healthcare services. It employs core technologies and services for the integrated management of electronic health records (EHRs) captured through various sensors. The EHR is processed and transmitted to distant experts for better diagnosis and improved healthcare delivery. However, many challenges stand in the way of the successful implementation of Health 4.0, and one of the critical issues that needs attention is the security of EHRs in smart health systems. In this work, we developed a new interpolation scheme capable of providing better-quality cover media and supporting reversible EHR embedding. The scheme provides a double layer of security for the EHR by first using hyperchaos to encrypt it; the encrypted EHR is then reversibly embedded in the cover images produced by the proposed interpolation scheme. The proposed interpolation module was found to produce better-quality interpolated images. The system provides an average peak signal-to-noise ratio (PSNR) of 52.38 dB for a high payload of 0.75 bits per pixel. In addition to embedding the EHR, a fragile watermark (WM), also encrypted using hyperchaos, is embedded into the cover image for tamper detection and authentication of the received EHR. Experimental investigations reveal that our scheme provides improved performance for high-contrast medical images (MI) compared to various techniques on evaluation parameters such as imperceptibility, reversibility, payload, and computational complexity. Given these attributes, the scheme can be used to enhance the security of EHRs in Health 4.0.
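
A small sketch of the imperceptibility metric cited above: PSNR between a cover image and its modified (stego or interpolated) version. The toy perturbation flips one LSB in a quarter of the pixels, illustrating why LSB-level changes keep PSNR above 50 dB.

```python
import numpy as np

def psnr(original: np.ndarray, modified: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a cover image and its modified version."""
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                   # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
stego = cover.copy()
stego[::2, ::2] ^= 1                          # flip one LSB in a quarter of the pixels
print(f"PSNR: {psnr(cover, stego):.2f} dB")   # ~54 dB: LSB-level changes stay imperceptible
```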


Subjects
Computer Security, Algorithms, Electronic Health Records, Humans, Signal-to-Noise Ratio