Results 1 - 4 of 4
1.
Comput Biol Med ; 177: 108659, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38823366

ABSTRACT

Automatic abdominal organ segmentation is an essential prerequisite for accurate volumetric analysis, disease diagnosis, and tracking by medical practitioners. However, deformable shapes, variable locations, overlap with nearby organs, and similar contrast make the segmentation challenging, and the requirement for a large manually labeled dataset makes it harder still. Hence, a semi-supervised contrastive learning approach is utilized to perform automatic abdominal organ segmentation. Existing 3D deep learning models based on contrastive learning are not able to capture the 3D context of medical volumetric data along the three planes/views: axial, sagittal, and coronal. In this work, a semi-supervised view-adaptive unified model (VAU-model) is proposed to make the 3D deep learning model view-adaptive, learning 3D context along each view in a unified manner. The method utilizes a novel optimization function that assists the 3D model in learning the 3D context of volumetric medical data along each view within a single model. The effectiveness of the proposed approach is validated quantitatively and qualitatively on three datasets: BTCV, NIH, and MSD. The results demonstrate that the VAU-model achieves an average Dice score of 81.61%, a 3.89% improvement over the previous best result for pancreas segmentation on the multi-organ BTCV dataset. It also achieves average Dice scores of 77.76% and 76.76% for the pancreas on the single-organ non-pathological NIH dataset and the pathological MSD dataset, respectively.
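The abstract reports results as Dice scores over segmentations learned along three anatomical views. As a rough illustration only (not the paper's code), the Dice metric and the idea that the three views of a 3D volume are axis permutations can be sketched as follows; the particular axis orderings chosen for "sagittal" and "coronal" are an assumption:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def views(volume):
    """Yield three orthogonal orientations of a 3D array.
    Axis orderings are illustrative, not taken from the paper."""
    yield volume                            # axial (as stored)
    yield np.transpose(volume, (1, 0, 2))   # "sagittal" (assumed ordering)
    yield np.transpose(volume, (2, 0, 1))   # "coronal" (assumed ordering)
```

A view-adaptive model would see the same volume through each of these orientations while sharing one set of weights.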


Subject(s)
Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Deep Learning , Abdomen/diagnostic imaging , Abdomen/anatomy & histology , Tomography, X-Ray Computed/methods , Pancreas/diagnostic imaging , Pancreas/anatomy & histology , Databases, Factual
2.
Comput Electr Eng ; 101: 108113, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35692868

ABSTRACT

The world's outlook on health infrastructure has drastically changed due to COVID-19, creating a need for emerging technologies that minimize interaction between patients and health workers. Consequently, a secure and energy-efficient internet of medical things (IoMT)-enabled wireless sensor network (WSN) utilizing a genetic algorithm is proposed for communicable infectious diseases. The proposed system, called OptiGeA, makes use of movable sinks in IoT-enabled WSNs for healthcare. The OptiGeA protocol elects cluster heads (CHs) by joining the factors of energy, density, distance, and heterogeneous node capacity into a fitness function. Additionally, a novel deployment technique and a multiple-mobile-sink approach are proposed to reduce the transmission distance between sink and CH during system operation, which mitigates the hotspot issue. Simulations show that the OptiGeA protocol outperforms state-of-the-art protocols across different performance measurements.
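The abstract describes a fitness function for cluster-head election combining energy, density, distance, and heterogeneous node capacity. A minimal sketch of such a weighted fitness and the selection step is below; the weights, field names, and the simple linear combination are assumptions (the paper evolves its choice with a genetic algorithm, which is omitted here):

```python
def ch_fitness(node, weights=(0.4, 0.2, 0.2, 0.2)):
    """Fitness for cluster-head election. `node` maps each factor to a
    value normalized to [0, 1]; distance to the sink is penalized.
    Weights are illustrative, not taken from the paper."""
    w_e, w_d, w_s, w_c = weights
    return (w_e * node["residual_energy"]
            + w_d * node["density"]
            + w_s * (1.0 - node["dist_to_sink"])
            + w_c * node["capacity"])

def elect_cluster_heads(nodes, k):
    """Pick the k fittest nodes as cluster heads (selection only; a full
    genetic algorithm would search over candidate groupings)."""
    return sorted(nodes, key=ch_fitness, reverse=True)[:k]
```

A node with high residual energy, dense neighborhood, short sink distance, and high capacity scores highest and becomes a CH.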

3.
Big Data ; 8(4): 323-331, 2020 08.
Article in English | MEDLINE | ID: mdl-32820950

ABSTRACT

This article proposes the MapReduce scheduler with deadline and priorities (MRS-DP), a scheduler capable of handling jobs with deadlines and priorities. Big data has emerged as a key concept and revolutionized data analytics in the present era. Big data is characterized by multiple dimensions, or Vs: volume, velocity, variety, veracity, and valence. Recently, a new and important dimension, another V known as value, has been added. Value can be understood in terms of the delay in acquiring information: late decisions may result in missed opportunities. To gain optimal benefits, this article introduces a scheduler based on jobs with deadlines and priorities that aims to improve resource utilization through efficient job-progress monitoring and a backup-launching mechanism. The proposed scheduler can accommodate multiple jobs to maximize the number of jobs processed successfully and avoid starvation of lower-priority jobs, while improving resource utilization and ensuring the assured quality of service (QoS). To evaluate the proposed scheduler, we ran multiple workloads consisting of WordCount jobs and DataSort jobs. The performance of the proposed MRS-DP scheduler is compared with the minimal earliest deadline first-work conserving scheduler and the MapReduce Constraint Programming based Resource Management algorithm in terms of the percentage of successful jobs, priority-wise jobs, and cluster resource utilization. The proposed scheduler shows an improvement of around 10%-20% in the percentage of successful jobs and 20%-25% in effective resource utilization, along with the ability to ensure the offered QoS.
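The abstract combines deadlines with priorities so that urgent jobs run first while lower-priority jobs are not starved. A toy admission queue in that spirit (not the MRS-DP algorithm itself; the ordering rule and field names are assumptions) can be sketched with a heap:

```python
import heapq

class DeadlinePriorityQueue:
    """Toy job queue: earlier deadlines run first; among equal deadlines,
    higher priority wins; a FIFO sequence number breaks remaining ties so
    ordering stays stable and no job is reordered indefinitely."""

    def __init__(self):
        self._heap = []
        self._seq = 0

    def submit(self, job_id, deadline, priority):
        # Lower tuples pop first: deadline ascending, priority descending.
        heapq.heappush(self._heap, (deadline, -priority, self._seq, job_id))
        self._seq += 1

    def next_job(self):
        """Return the next job id to schedule, or None if idle."""
        return heapq.heappop(self._heap)[3] if self._heap else None
```

The real scheduler additionally monitors job progress and launches backups, which this sketch omits.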


Subject(s)
Big Data , Resource Allocation/standards , Workload , Algorithms , Efficiency, Organizational , Software
4.
Big Data ; 8(1): 62-69, 2020 02.
Article in English | MEDLINE | ID: mdl-31995397

ABSTRACT

The MapReduce programming model was designed and developed for the Google File System to efficiently process large-scale distributed data sets. The open-source implementation of this Google project is Apache Hadoop. The Hadoop architecture includes Hadoop MapReduce and the Hadoop Distributed File System (HDFS). HDFS supports Hadoop in effectively managing data sets over the cluster, while the MapReduce programming paradigm helps in the efficient processing of large data sets. MapReduce strategically re-executes a speculative task on another node to finish the computation quickly, enhancing the overall Quality of Service (QoS). Several mechanisms have been suggested over Hadoop's default scheduler to improve speculative task execution on a Hadoop cluster, and a large number of strategies have been suggested for scheduling jobs with deadlines; however, the mechanisms for speculative task execution were not developed for, or were not well integrated with, deadline schedulers. This article presents an improved speculative task detection algorithm designed specifically for a deadline scheduler. Our studies suggest the importance of keeping a regular track of each node's performance in order to re-execute speculative tasks more efficiently. We have successfully improved the QoS offered by Hadoop clusters for jobs arriving with deadlines in terms of the percentage of successfully completed jobs, the detection time of speculative tasks, the accuracy of correct speculative task detection, and the percentage of incorrectly flagged speculative tasks.
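The abstract ties speculative task detection to a regular record of each node's performance. As a rough sketch only (the threshold, the peer-average baseline, and the node performance factor are assumptions, not the paper's algorithm), a detector might flag a task whose progress rate falls well below its node-adjusted expected rate:

```python
def is_speculative(task_progress, task_elapsed, peer_rates,
                   node_factor=1.0, slow_ratio=0.5):
    """Flag a task as a candidate for speculative re-execution when its
    progress rate is well below the mean rate of its peers, scaled by the
    node's historical performance factor (1.0 = average node).
    `slow_ratio` and the scaling are illustrative thresholds."""
    if task_elapsed <= 0 or not peer_rates:
        return False  # nothing to compare against yet
    rate = task_progress / task_elapsed
    expected = (sum(peer_rates) / len(peer_rates)) * node_factor
    return rate < slow_ratio * expected
```

Keeping `node_factor` updated per node is what lets the detector avoid incorrectly flagging a healthy task that merely runs on slower hardware.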


Subject(s)
Algorithms , Cloud Computing , Computer Simulation , Appointments and Schedules , Datasets as Topic , Software