Results 1 - 11 of 11
1.
Sci Total Environ ; 947: 174705, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39002586

ABSTRACT

Groundwater irrigation districts, which play a crucial role in the Earth's critical zone, face numerous challenges, including water scarcity, pollution, and ecological degradation. These issues arise from multiple systems and are linked to a groundwater-dominated water-food-environment-ecosystem (WFEE) nexus problem related to agricultural activities. There is a pressing need for scientific characterization and evaluation of the WFEE nexus in groundwater irrigation districts to ensure high-quality, sustainable development. It is equally important to provide practical and efficient regulation at the farmer level to uphold the health of this nexus. This paper presents a mapping network focused on groundwater irrigation districts. The network converts the restriction indicators used to maintain the health of the WFEE nexus (at the irrigation-district scale) into targets for managing farmers' living and agricultural activities (at the farmer scale). Additionally, a system dynamics model is created to track and manage the interacting relationships between the WFEE nexus and farmers' living and agricultural activities. The proposed model employs a structured parameter system comprising targets, state parameters, regulatory parameters, and evaluation parameters. This system can gain insight into the status of the WFEE nexus at the farmer level through state parameters, induce tailored management and regulation measures through regulatory parameters, assess the effectiveness of various measures through evaluation parameters, and ultimately provide decision support to enhance the health of the WFEE nexus. Findings from the Yong'an groundwater irrigation district demonstrate that the model accurately describes the relationship between the WFEE nexus and farmers' activities in groundwater irrigation districts. Furthermore, the model responded strongly to a variety of improvement strategies, including adjustments to planting area, optimization of planting patterns, improvement of irrigation methods, and implementation of agronomic measures. As a result, it provides farmers with decision support for applying agricultural management methods and addressing the WFEE nexus problem in groundwater irrigation areas.
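The parameter roles the abstract describes (state, regulatory, evaluation) can be caricatured in a minimal stock-flow sketch; every name and value below, from the recharge figure to the sustainability threshold, is a hypothetical illustration rather than the paper's actual model:

```python
# Minimal stock-flow sketch of a system-dynamics feedback loop.
# All names and values are hypothetical illustrations, not the paper's model.

def simulate(years, planting_area_ha, recharge=1.2e6, use_per_ha=900.0,
             storage0=5.0e7):
    """Track a groundwater stock (state parameter) under annual irrigation
    withdrawal driven by planting area (a regulatory parameter).

    Returns the storage trajectory (m3) and a sustainability flag
    (evaluation parameter): no net storage loss over the horizon.
    """
    storage = storage0
    trajectory = []
    for _ in range(years):
        withdrawal = planting_area_ha * use_per_ha  # m3/year drawn for irrigation
        storage += recharge - withdrawal            # stock update: inflow - outflow
        trajectory.append(storage)
    sustainable = trajectory[-1] >= storage0
    return trajectory, sustainable
```

Comparing two planting areas shows how a regulatory change propagates to the evaluation parameter, which is the kind of feedback such a model's decision support rests on.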

2.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794069

ABSTRACT

The segmentation of abnormal regions is vital in smart manufacturing. The blurring sauce-packet leakage segmentation task (BSLST) is designed to distinguish the foreground and background of the sauce packet and the leakage at the pixel level. However, the existing segmentation system for detecting sauce-packet leakage on intelligent sensors suffers from imaging blur caused by uneven illumination. This issue degrades segmentation performance, thereby hindering the measurement of leakage area and impeding automated sauce-packet production. To alleviate this issue, we propose the two-stage illumination-aware sauce-packet leakage segmentation (ISLS) method for intelligent sensors. The ISLS comprises two main stages: illumination-aware region enhancement and leakage region segmentation. In the first stage, YOLO-Fastestv2 is employed to capture the Region of Interest (ROI), which reduces redundant computation. Additionally, we propose an image enhancement step to relieve the impact of uneven illumination, enhancing the texture details of the ROI. In the second stage, we propose a novel feature extraction network. Specifically, we propose the multi-scale feature fusion module (MFFM) and the Sequential Self-Attention Mechanism (SSAM) to capture discriminative representations of leakage. The MFFM fuses multi-level features with a small number of parameters, capturing leakage semantics at different scales. The SSAM enhances valid features and suppresses invalid ones through adaptive weighting of the spatial and channel dimensions. Furthermore, we build a dataset of sauce packets comprising 606 images with various leakage areas. Comprehensive experiments demonstrate that our ISLS method outperforms several state-of-the-art methods, with additional performance analyses deployed on intelligent sensors affirming the effectiveness of our proposed method.
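The abstract does not spell out the enhancement operator; as a generic illustration of relieving uneven illumination in an 8-bit ROI (not necessarily the paper's method), here is a global histogram-equalization sketch:

```python
import numpy as np

def equalize_hist(roi: np.ndarray) -> np.ndarray:
    """Spread the intensity histogram of a non-constant uint8 image over
    [0, 255], boosting texture contrast in dim or unevenly lit regions."""
    hist = np.bincount(roi.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value at the darkest level present
    # Map each gray level through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[roi]
```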

3.
Entropy (Basel) ; 25(8)2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37628197

ABSTRACT

End-to-end deep models for video compression have recently made steady advances; however, they have resulted in lengthy and complex pipelines containing numerous redundant parameters. Video compression approaches based on implicit neural representation (INR) represent a video directly as a function approximated by a neural network, yielding a more lightweight model, but the uniformity of the feature extraction pipeline limits the network's ability to fit the mapping function for video frames. Hence, we propose a neural representation approach for video compression with an implicit multiscale fusion network (NRVC), utilizing normalized residual networks to improve the effectiveness of INR in fitting the target function. We propose the multiscale representations for video compression (MSRVC) network, which effectively extracts features from the input video sequence to enhance the degree of overfitting in the mapping function. Additionally, we propose the feature extraction channel attention (FECA) block to capture interaction information between different feature extraction channels, further improving the effectiveness of feature extraction. The results show that, compared to the NeRV method at similar bits per pixel (BPP), NRVC achieves a 2.16% increase in decoded peak signal-to-noise ratio (PSNR). Moreover, NRVC outperforms conventional HEVC in terms of PSNR.
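Both quantities in the comparison are standard; a short sketch of how they are computed (the 255 peak assumes 8-bit frames, and the BPP helper reflects the INR setting, where the bitstream is essentially the compressed network itself):

```python
import numpy as np

def psnr(ref: np.ndarray, dec: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between reference and decoded frames."""
    mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def bpp(model_size_bytes: int, num_frames: int, width: int, height: int) -> float:
    """Bits per pixel when the model weights are the bitstream,
    amortized over every pixel of the video."""
    return model_size_bytes * 8 / (num_frames * width * height)
```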

4.
ACS Appl Mater Interfaces ; 15(15): 19545-19559, 2023 Apr 19.
Article in English | MEDLINE | ID: mdl-37037677

ABSTRACT

The convergence of multivalley bands was originally believed to benefit thermoelectric performance by enhancing the charge conductivity while preserving the Seebeck coefficient, under the assumption that electron interband or intervalley scattering is totally negligible. In this work, we demonstrate that β-Bi with a buckled honeycomb structure undergoes a topological transition from a normal insulator to a Z2 topological insulator induced by spin-orbit coupling, which subsequently increases the band degeneracy and is probably beneficial for enhancing the thermoelectric power factor for holes. Consequently, strong intervalley scattering can be observed in both band-convergent β- and α-Bi monolayers. Compared to β-Bi, α-Bi with a puckered black-phosphorus-like structure possesses high carrier mobilities of 318 cm2/(V s) for electrons and 568 cm2/(V s) for holes at room temperature. We also unveil extraordinarily strong fourth-order phonon-phonon interactions in these bismuth monolayers, which significantly reduce their lattice thermal conductivities at room temperature and are generally anomalous in conventional semiconductors. Finally, a high thermoelectric figure of merit (zT) can be achieved in both bismuth monolayers, especially for α-Bi with an n-type zT value of 2.2 at room temperature. Our results suggest that strong fourth-order phonon-phonon interactions are crucial to high thermoelectric performance in these materials, and that two-dimensional bismuth is probably a promising thermoelectric material owing to its enhanced band convergence induced by the topological transition.
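For reference, the dimensionless figure of merit quoted here combines the Seebeck coefficient, electrical conductivity, absolute temperature, and the electronic plus lattice thermal conductivities, which is why suppressing the lattice term through strong phonon-phonon scattering raises zT:

```latex
zT = \frac{S^{2}\,\sigma\,T}{\kappa_{e} + \kappa_{l}}
```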

5.
Diagnostics (Basel) ; 12(12)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36553069

ABSTRACT

Blood glucose stability in diabetic patients determines their degree of health, and changes in blood glucose levels are related to patient outcomes. Accurate monitoring of blood glucose therefore plays a crucial role in controlling diabetes. To address the high volatility of blood glucose concentration in diabetic patients and the limitations of single regression prediction models, this paper proposes a method for predicting blood glucose values based on particle swarm optimization and model fusion. First, a Kalman filtering algorithm smooths and denoises the sensor current signal to reduce the effect of noise on the data. Then, the hyperparameters of the Extreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM) models are optimized using the particle swarm optimization algorithm. Finally, with the XGBoost and LightGBM models as base learners and a Bayesian regression model as the meta-learner, the stacking model fusion method predicts blood glucose values. To demonstrate the effectiveness and superiority of this method, we compared the predictions of the stacking fusion model with those of six other models. The experimental results show that the stacking fusion model accurately predicts blood glucose values, with a mean absolute percentage error of 13.01%, a prediction error much lower than that of the other six models. The proposed blood glucose prediction method for diabetes is therefore superior.
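Structurally, the fusion step works like the sketch below; to stay self-contained, two plain least-squares regressors stand in for the PSO-tuned XGBoost and LightGBM base learners, and ordinary least squares stands in for the Bayesian meta-learner (a real stacking pipeline would also use out-of-fold base predictions rather than in-sample ones):

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with a bias column; returns the weight vector."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

def stack_predict(X_train, y_train, X_test):
    """Two base learners on different feature subsets, fused by a
    meta-learner trained on their predictions (in-sample, for brevity)."""
    w1 = fit_linear(X_train[:, :1], y_train)   # stand-in for XGBoost
    w2 = fit_linear(X_train[:, 1:], y_train)   # stand-in for LightGBM
    meta_train = np.column_stack([predict_linear(w1, X_train[:, :1]),
                                  predict_linear(w2, X_train[:, 1:])])
    w_meta = fit_linear(meta_train, y_train)   # stand-in for Bayesian regression
    meta_test = np.column_stack([predict_linear(w1, X_test[:, :1]),
                                 predict_linear(w2, X_test[:, 1:])])
    return predict_linear(w_meta, meta_test)
```

The design point is that each base learner sees the data differently, and the meta-learner learns how much to trust each one.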

6.
Sensors (Basel) ; 22(17)2022 Sep 05.
Article in English | MEDLINE | ID: mdl-36081166

ABSTRACT

Thermal imaging pedestrian-detection systems perform well across lighting scenarios but still struggle with weak textures, object occlusion, and small objects. Meanwhile, large high-performance models incur high latency on edge devices with limited computing power. To solve these problems, we propose a real-time thermal imaging pedestrian-detection method for edge computing devices. First, we utilize multi-scale mosaic data augmentation to enhance the diversity and texture of objects, which alleviates the impact of complex environments. Then, a parameter-free attention mechanism is introduced into the network to enhance features, which barely increases the computing cost of the network. Finally, we accelerate multi-channel video detection through quantization and multi-threading techniques on edge computing devices. Additionally, we create a high-quality thermal infrared dataset to facilitate this research. Comparative experiments against other methods on our self-built dataset, YDTIP, and three public datasets show that our method compares favorably.
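The multi-threaded, multi-channel part of the pipeline follows a familiar producer-worker pattern; a hypothetical sketch, where the `detect` callable stands in for the quantized detector and the channel names are invented:

```python
import queue
import threading

def run_channels(frame_sources, detect, num_workers=4):
    """Fan frames from several video channels into a worker pool.

    frame_sources: dict mapping channel id -> iterable of frames.
    detect: function frame -> detections (placeholder for the real model).
    Returns {channel_id: [detections, ...]} in source order per channel.
    """
    tasks = queue.Queue()
    results = {cid: [] for cid in frame_sources}
    lock = threading.Lock()

    def worker():
        while True:
            item = tasks.get()
            if item is None:          # poison pill: shut this worker down
                tasks.task_done()
                return
            cid, idx, frame = item
            det = detect(frame)
            with lock:
                results[cid].append((idx, det))
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for cid, frames in frame_sources.items():
        for idx, frame in enumerate(frames):
            tasks.put((cid, idx, frame))
    for _ in threads:
        tasks.put(None)
    tasks.join()
    for t in threads:
        t.join()
    # Restore per-channel frame order, which workers may have scrambled.
    return {cid: [d for _, d in sorted(dets)] for cid, dets in results.items()}
```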

7.
BMC Bioinformatics ; 23(1): 297, 2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35879669

ABSTRACT

Since the completion of the Human Genome Project at the turn of the century, sequencing data have proliferated at an unprecedented rate. One consequence is that it has become extremely difficult to store, back up, and migrate enormous amounts of genomic data, which continue to expand as the cost of sequencing decreases. A far more efficient and scalable program for genome compression is therefore urgently required. In this manuscript, we propose a new Apache Spark-based genome compression method called SparkGC that can run efficiently and cost-effectively on a scalable computational cluster to compress large collections of genomes. SparkGC uses Spark's in-memory computation capabilities to reduce compression time by keeping data active in memory between the first-order and second-order compression stages. The evaluation shows that the compression ratio of SparkGC is better than that of the best state-of-the-art methods by at least 30%. The compression speed is also at least 3.8 times that of the best state-of-the-art methods on a single worker node and scales well with the number of nodes. SparkGC is of significant benefit to genomic data storage and transmission. The source code of SparkGC is publicly available at https://github.com/haichangyao/SparkGC.


Subjects
Algorithms , Data Compression , Data Compression/methods , Genome , High-Throughput Nucleotide Sequencing/methods , Humans , Sequence Analysis, DNA/methods , Software
8.
Sensors (Basel) ; 22(9)2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35590899

ABSTRACT

Object classification and part segmentation are hot topics in computer vision, robotics, and virtual reality. With the emergence of depth cameras, point clouds have become easier to collect and increasingly important because of their simple and unified structure. Recently, a considerable number of studies have addressed deep learning on 3D point clouds. However, data captured directly by real-world sensors often suffer from severe incomplete sampling. Classical networks learn deep point-set features efficiently, but they are not robust enough when input point clouds are sparse or incomplete. In this work, we propose a novel and general network whose performance does not depend on a large amount of point cloud input data. Mutual learning among neighboring points and fusion between high- and low-level feature layers promote the integration of local features, making the network more robust. Experiments on the ScanNet and ModelNet40 datasets yielded 84.5% and 92.8% accuracy, respectively, showing that our model is comparable to or better than most existing methods for classification and segmentation tasks and has good local feature integration ability. In particular, it still maintains 87.4% accuracy when the number of input points is reduced to 128. The proposed model bridges the gap between classical networks and point cloud processing.


Subjects
Robotics , Virtual Reality , Cloud Computing , Neural Networks, Computer
9.
Entropy (Basel) ; 24(10)2022 Sep 27.
Article in English | MEDLINE | ID: mdl-37420397

ABSTRACT

Extensive research on adversarial attacks has shown that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic, given the naturally hidden internals of deep neural networks, and they have become a critical emphasis in the current security field. However, current black-box attack methods still have shortcomings that leave query information incompletely utilized. Our research, building on the recently proposed Simulator Attack, proves for the first time the correctness and usability of the feature-layer information in a simulator model obtained by meta-learning. We then propose an optimized Simulator Attack+ based on this discovery. The optimizations used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to strengthen the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module that provides a warm start for targeted attacks. Experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ further reduces the number of queries consumed, improving query efficiency while maintaining attack performance.

10.
Sensors (Basel) ; 20(24)2020 Dec 17.
Article in English | MEDLINE | ID: mdl-33348795

ABSTRACT

In this paper, we explore LIDAR-RGB fusion-based 3D object detection. The task remains challenging in two respects: (1) differences in data formats and sensor positions cause misalignment between the semantic features of images and the geometric features of point clouds; and (2) optimizing the traditional IoU is not equivalent to minimizing the bounding-box regression loss, resulting in biased back-propagation for non-overlapping cases. In this work, we propose a cascaded cross-modality fusion network (CCFNet), which includes a cascaded multi-scale fusion module (CMF) and a novel center 3D IoU loss to resolve these two issues. Our CMF module reinforces the discriminative representation of objects by reasoning about the relation between the corresponding LIDAR geometric and RGB semantic capabilities of an object across the two modalities. Specifically, CMF is inserted in a cascaded way between the RGB and LIDAR streams: it selects salient points and transmits multi-scale point cloud features to each stage of the RGB stream. Moreover, our center 3D IoU loss incorporates the distance between anchor centers to avoid degenerate optimization for non-overlapping bounding boxes. Extensive experiments on the KITTI benchmark demonstrate that our proposed approach outperforms the compared methods.
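The described loss, IoU plus a term on the distance between centers that keeps gradients alive for non-overlapping boxes, can be sketched for axis-aligned 3D boxes as follows (a hedged illustration in the spirit of DIoU; the paper's exact formulation may differ):

```python
import numpy as np

def center_iou_loss(pred, target):
    """Loss for axis-aligned 3D boxes given as (cx, cy, cz, w, l, h):
    1 - IoU plus a center-distance term normalized by the enclosing
    box diagonal, so non-overlapping boxes still receive a gradient."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    p_min, p_max = pred[:3] - pred[3:] / 2, pred[:3] + pred[3:] / 2
    t_min, t_max = target[:3] - target[3:] / 2, target[:3] + target[3:] / 2
    # Intersection volume is zero when the boxes do not overlap.
    overlap = np.clip(np.minimum(p_max, t_max) - np.maximum(p_min, t_min),
                      0, None)
    inter = np.prod(overlap)
    union = np.prod(pred[3:]) + np.prod(target[3:]) - inter
    iou = inter / union
    # Squared center distance over the squared enclosing-box diagonal.
    diag2 = np.sum((np.maximum(p_max, t_max) - np.minimum(p_min, t_min)) ** 2)
    center2 = np.sum((pred[:3] - target[:3]) ** 2)
    return 1.0 - iou + center2 / diag2
```

Unlike a plain 1 - IoU loss, the penalty term still shrinks as a non-overlapping prediction moves toward the target, which is the biased-back-propagation problem the abstract points at.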

11.
Biomed Res Int ; 2019: 3108950, 2019.
Article in English | MEDLINE | ID: mdl-31915686

ABSTRACT

With the maturity of genome sequencing technology, huge amounts of sequence reads as well as assembled genomes are being generated. This explosive growth of genomic data makes its storage and transmission enormously challenging. FASTA, one of the main storage formats for genome sequences, is widely used in GenBank because it eases sequence analysis and gene research and is easy to read. Many compression methods for FASTA genome sequences have been proposed, but they still have room for improvement: compression ratio and speed are not high or robust enough, and memory consumption is often far from ideal. It is therefore of great significance to improve the efficiency, robustness, and practicability of genomic data compression, both to further reduce the storage and transmission cost of genomic data and to promote the research and development of genomic technology. In this manuscript, a hybrid referential compression method (HRCM) for FASTA genome sequences is proposed. HRCM is a lossless compression method able to compress a single sequence as well as large collections of sequences. It is implemented in three stages: sequence information extraction, sequence information matching, and sequence information encoding. Extensive experiments fully evaluated the performance of HRCM, verifying that it is superior to the best-known methods in genome batch compression. Moreover, HRCM's memory consumption is relatively low, and it can be deployed on standard PCs.
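The middle stage, sequence information matching, belongs to the family of referential matchers; a toy sketch of the idea (greedy k-mer seeding with extension, nothing like the real method's scale or encoding):

```python
def ref_match(reference: str, target: str, k: int = 4):
    """Greedy left-to-right matching of target against reference k-mers.
    Emits (position, length) match tokens and literal characters."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    out, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for p in index.get(target[i:i + k], []):
            # Extend the seed match while the sequences keep agreeing.
            length = k
            while (p + length < len(reference) and i + length < len(target)
                   and reference[p + length] == target[i + length]):
                length += 1
            if length > best_len:
                best_pos, best_len = p, length
        if best_len >= k:
            out.append((best_pos, best_len))  # copy from reference
            i += best_len
        else:
            out.append(target[i])             # unmatched literal
            i += 1
    return out

def ref_decode(reference: str, tokens) -> str:
    """Inverse of ref_match: rebuild the target from the reference."""
    parts = []
    for t in tokens:
        if isinstance(t, tuple):
            p, n = t
            parts.append(reference[p:p + n])
        else:
            parts.append(t)
    return "".join(parts)
```

A round trip through `ref_match` and `ref_decode` is exact, which is the lossless property the abstract emphasizes.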


Subjects
Big Data , Data Compression/methods , Genomics/methods , Software , Databases, Genetic , Humans