Results 1 - 16 of 16
1.
Patient Educ Couns ; 125: 108308, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38705023

ABSTRACT

PURPOSE: To synthesize the available evidence on factors associated with self-management behavior in young stroke patients. METHODS: The methodological guidelines for scoping reviews developed by the Joanna Briggs Institute and the PRISMA-ScR checklist for scoping reviews were used. A total of 5586 studies were identified through bibliographic searches of the scientific databases Medline (OVID), Embase (OVID), CINAHL (EBSCO), and PsycINFO, limited to the period 2000-2023. Studies were independently assessed against the inclusion and exclusion criteria by two reviewers. Quantitative observational data and qualitative studies were extracted, mapped, and summarized to provide a descriptive overview of trends and considerations for future research. RESULTS: Nine papers were ultimately selected to answer the research question. Young patients' self-management was mainly influenced by demographic factors (age, gender, income, education, and stroke knowledge), disease-related factors (functionality and independence, duration of stroke diagnosis, cognitive function, and poststroke fatigue), and psychosocial factors (hardiness, spiritual self-care, self-efficacy, and social support). CONCLUSION: Further research is needed to determine the trajectory of poststroke self-management over time and its potential predictors, which should inform the development of stroke rehabilitation and self-management support programs specific to young people, including the factors that influence return to work. PRACTICE IMPLICATIONS: An in-depth understanding of the factors that influence self-management can help healthcare providers design more efficient interventions to improve the quality of life of young stroke patients after discharge.


Subjects
Self-Management, Stroke, Female, Humans, Male, Young Adult, Quality of Life, Self Care, Self Efficacy, Social Support, Stroke/therapy, Stroke/psychology, Stroke Rehabilitation
2.
ACS Omega ; 9(12): 14455-14464, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38559938

ABSTRACT

Wastewater treatment produces large amounts of sludge, and minimizing the volume of sludge sent for disposal is essential for environmental protection. Co-combustion of sludge with coal is a preferred disposal method from both economic and environmental perspectives, and it has been widely used in industry because of its large processing capacity. The melting characteristics of the ash are an important criterion for selecting co-combustion methods and furnace types. In this study, two types of sludge and four types of coal with different ash melting points were selected, and the ash melting behavior upon co-combustion was investigated by experimental and thermodynamic approaches. In particular, the slag fluidity during co-combustion was explored via a modified inclined-plane method. The presence of SiO2 and CaO in sludge substantially raises its fusion temperature: the high CaO content is chiefly responsible, while SiO2 acts as a solvent, facilitating the co-melting of other oxides and further raising the fusion temperature. Fe2O3 is present at mass fractions in the range of 10-20%. Furthermore, CaO and SiO2 inhibit slag flow at high temperatures, whereas Fe2O3 promotes it. With an increasing base/acid ratio, the sludge flow velocity increases remarkably and peaks at a ratio of 1.6. The interaction between Fe-Ca and Si-Al significantly affects the fluidity. These findings are expected to help optimize co-combustion conditions and furnace design for sludge incineration.
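
The base/acid ratio mentioned above is a standard slagging index. A minimal sketch, assuming the conventional oxide grouping; the blend composition below is invented for illustration, not data from the paper:

```python
def base_acid_ratio(oxides):
    """B/A = (Fe2O3 + CaO + MgO + Na2O + K2O) / (SiO2 + Al2O3 + TiO2)."""
    basic = sum(oxides.get(k, 0.0) for k in ("Fe2O3", "CaO", "MgO", "Na2O", "K2O"))
    acidic = sum(oxides.get(k, 0.0) for k in ("SiO2", "Al2O3", "TiO2"))
    return basic / acidic

# Hypothetical sludge/coal ash blend, oxide mass fractions in percent
blend = {"SiO2": 35.0, "Al2O3": 15.0, "Fe2O3": 18.0, "CaO": 20.0, "MgO": 2.0}
print(base_acid_ratio(blend))  # → 0.8
```

A higher B/A generally means more fluxing (basic) oxides relative to network-forming (acidic) ones, which is why flow velocity varies with this ratio.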

3.
IEEE Trans Image Process ; 32: 2295-2308, 2023.
Article in English | MEDLINE | ID: mdl-37058377

ABSTRACT

Low-light images suffer from several complicated degradation factors, such as poor brightness, low contrast, color degradation, and noise. Most previous deep learning-based approaches, however, only learn a single-channel mapping between the input low-light images and the expected normal-light images, which is insufficient for low-light images captured under uncertain imaging conditions. Moreover, excessively deep network architectures are not conducive to recovering low-light images because of the extremely low pixel values. To surmount these issues, in this paper we propose a novel multi-branch and progressive network (MBPNet) for low-light image enhancement. More specifically, the proposed MBPNet comprises four branches that build mapping relationships at different scales; the outputs of the four branches are then fused to obtain the final enhanced image. Furthermore, to better handle the difficulty of delivering the structural information of low-light images with low pixel values, a progressive enhancement strategy is applied: four convolutional long short-term memory (LSTM) networks are embedded in the four branches, and a recurrent network architecture iteratively performs the enhancement. In addition, a joint loss function consisting of pixel, multi-scale perceptual, adversarial, gradient, and color losses is framed to optimize the model parameters. To evaluate the effectiveness of MBPNet, three widely used benchmark databases are used for both quantitative and qualitative assessments. The experimental results confirm that the proposed MBPNet clearly outperforms other state-of-the-art approaches both quantitatively and qualitatively. The code is available at https://github.com/kbzhang0505/MBPNet.
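
The joint-loss idea can be illustrated with a simplified sketch that keeps only the pixel, gradient, and color terms; the weights, and the omission of the perceptual and adversarial terms, are our simplifications, not the paper's configuration:

```python
import numpy as np

def pixel_loss(pred, target):
    return np.mean((pred - target) ** 2)

def gradient_loss(pred, target):
    # L1 distance between horizontal and vertical finite differences
    gx = lambda im: np.diff(im, axis=1)
    gy = lambda im: np.diff(im, axis=0)
    return np.mean(np.abs(gx(pred) - gx(target))) + np.mean(np.abs(gy(pred) - gy(target)))

def color_loss(pred, target):
    # cosine distance between mean RGB vectors
    p = pred.reshape(-1, 3).mean(0)
    t = target.reshape(-1, 3).mean(0)
    return 1.0 - p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-8)

def joint_loss(pred, target, w=(1.0, 0.5, 0.1)):
    return (w[0] * pixel_loss(pred, target)
            + w[1] * gradient_loss(pred, target)
            + w[2] * color_loss(pred, target))

rng = np.random.default_rng(0)
t = rng.random((8, 8, 3))
p = t.copy()
print(joint_loss(p, t))  # near zero for identical images
```

Each term penalizes a different degradation factor named in the abstract: intensity error, structural (edge) error, and color shift.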

4.
Neural Netw ; 152: 276-286, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35580514

ABSTRACT

In recent years, deep learning-based methods that incorporate facial prior knowledge for face super-resolution (FSR) have advanced and achieved impressive performance. However, some important priors, such as facial landmarks, are not fully exploited in existing methods, leading to noticeable artifacts in the resulting SR face images, especially at large magnifications. In this paper, we propose a novel multi-level landmark-guided deep network (MLGDN) for FSR. More specifically, to fully exploit the dependencies between low- and high-resolution images, reduce network parameters, and capture more reliable feature representations, we introduce a recursive back-projection network with a particular feedback mechanism for coarse-to-fine FSR. Furthermore, we incorporate an attention fusion module in front of the backbone network to strengthen face components, and a feature modulation module in the middle of the backbone to refine features. In this way, the facial landmarks extracted from face images can be fully shared by modules at different levels, which helps produce more faithful facial details. Both quantitative and qualitative evaluations on two benchmark databases demonstrate that the proposed MLGDN achieves more impressive SR results than other state-of-the-art competitors. Code will be available at https://github.com/zhuangcheng31/MLG_Face.git/.
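
The feedback idea behind recursive back-projection can be sketched with classic iterative back-projection; the block-mean downscaling and nearest-neighbor upscaling below are simple stand-ins for the network's learned operators, and the data is synthetic:

```python
import numpy as np

def downscale(hr, s=2):
    h, w = hr.shape
    return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))  # block means

def upscale(lr, s=2):
    return np.kron(lr, np.ones((s, s)))                        # nearest neighbour

def back_project(lr, steps=20, s=2, lam=1.0):
    hr = upscale(lr, s)                      # coarse initial estimate
    for _ in range(steps):
        err = lr - downscale(hr, s)          # residual in LR space
        hr = hr + lam * upscale(err, s)      # feed the error back (the feedback step)
    return hr

rng = np.random.default_rng(1)
truth = np.kron(rng.random((4, 4)), np.ones((2, 2)))  # piecewise-constant HR image
rec = back_project(downscale(truth), steps=30)
print(np.max(np.abs(rec - truth)))  # exact recovery for this piecewise-constant case
```

The feedback step enforces consistency with the LR observation at every iteration, which is the role the recursive feedback mechanism plays in the network.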


Subjects
Algorithms, Face, Databases, Factual, Knowledge
5.
Comput Intell Neurosci ; 2020: 8852137, 2020.
Article in English | MEDLINE | ID: mdl-33414821

ABSTRACT

The traditional label relaxation regression (LRR) algorithm directly fits the original data without considering its local structure. Graph-regularized label relaxation regression takes local geometric information into account, but its performance depends largely on how the graph is constructed, and traditional graph structures have two defects. First, they are strongly influenced by the parameter values. Second, the weight matrix is built from the original data, which usually contains substantial noise. The constructed graph is therefore often suboptimal, which degrades the subsequent processing. To address this, a discriminative label relaxation regression algorithm based on an adaptive graph (DLRR_AG) is proposed for feature extraction. DLRR_AG combines manifold learning with label relaxation regression by constructing an adaptive weight graph, which can effectively overcome the problem of label overfitting. Extensive experiments show that the proposed method is effective and feasible.
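
The adaptive-graph idea can be sketched with self-tuning Gaussian bandwidths, where each point's bandwidth is set by its k-th nearest-neighbor distance instead of a single global parameter; this is an illustrative approximation, not the exact DLRR_AG construction:

```python
import numpy as np

def adaptive_graph(X, k=3):
    """Similarity graph with per-point (self-tuning) Gaussian bandwidths."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    sigma = np.sort(D, axis=1)[:, k]                     # k-th NN distance per point
    W = np.exp(-D ** 2 / (sigma[:, None] * sigma[None, :] + 1e-12))
    np.fill_diagonal(W, 0.0)
    return W

# Two well-separated synthetic clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(5.0, 0.1, (5, 2))])
W = adaptive_graph(X)
print(W[:5, 5:].max())  # cross-cluster weights are negligible
```

Because each bandwidth adapts to local density, nearby points get large weights and distant points get essentially zero weight without hand-tuning a global scale, which is the property the adaptive graph is after.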


Subjects
Algorithms
6.
Arthrosc Tech ; 6(3): e549-e557, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28706799

ABSTRACT

Arthroscopic lateral ankle ligament reconstruction has recently been advocated, but the technique has not been popularized because of its technical complexity and the potential for iatrogenic injury. Because the talocalcaneal and calcaneofibular ligaments are extra-articular structures, viewing and addressing them efficiently is difficult. Limited dissection outside the capsule is required to form a working space, but aggressive dissection, although helpful for visualization and instrumentation, is harmful to tissue healing. Because almost the entire talar body is covered by articular cartilage, it is very difficult to create a bone tunnel safely without damaging the cartilage. The remnants of the lateral ankle ligament contain proprioceptive sensors that are important for functional stability, but the narrow working space makes it difficult to perform anatomical reconstruction arthroscopically while preserving them. Properly tensioning the reconstructed ligaments in such a narrow working space is also very difficult. We have designed a technique that preserves the remnants of the lateral ankle ligaments and successfully addresses all of the above problems. We have used this technique clinically, with only minor complications.

7.
IEEE Trans Neural Netw Learn Syst ; 28(5): 1109-1122, 2017 05.
Article in English | MEDLINE | ID: mdl-26915133

ABSTRACT

This paper develops a coarse-to-fine framework for single-image super-resolution (SR) reconstruction. The coarse-to-fine approach achieves high-quality SR recovery based on the complementary properties of example learning-based and reconstruction-based algorithms: example learning-based SR approaches are useful for generating plausible details from external exemplars but poor at suppressing aliasing artifacts, while reconstruction-based SR methods are well suited to preserving sharp edges yet fail to generate fine details. In the coarse stage of the method, we use a set of simple yet effective mapping functions, learned via correlative neighbor regression of grouped low-resolution (LR) to high-resolution (HR) dictionary atoms, to synthesize an initial SR estimate at particularly low computational cost. In the fine stage, we devise an effective regularization term that seamlessly integrates local structural regularity, nonlocal self-similarity, and collaborative representation over relevant atoms in a learned HR dictionary to further improve the visual quality of the initial estimate obtained in the coarse stage. The experimental results indicate that our method outperforms other state-of-the-art methods in producing high-quality images, even though both the initial SR estimation and the subsequent enhancement are cheap to implement.
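
The coarse-stage idea of learned LR-to-HR mapping functions can be sketched as a closed-form ridge regression over paired patch features; the synthetic data and regularization value are illustrative only, not the paper's correlative neighbor regression:

```python
import numpy as np

rng = np.random.default_rng(0)
M_true = rng.normal(size=(16, 4))             # unknown LR→HR mapping (ground truth)
X_lr = rng.normal(size=(200, 4))              # LR patch features, one per row
Y_hr = X_lr @ M_true.T + 0.01 * rng.normal(size=(200, 16))  # paired HR patches

# Closed-form ridge solution: M = Y^T X (X^T X + lam I)^{-1}
lam = 1e-3
M = Y_hr.T @ X_lr @ np.linalg.inv(X_lr.T @ X_lr + lam * np.eye(4))

x_new = rng.normal(size=4)
y_pred = M @ x_new                            # synthesize the HR patch in one matmul
print(np.linalg.norm(M - M_true) / np.linalg.norm(M_true))  # small relative error
```

Once the mapping is precomputed, each LR patch is upgraded with a single matrix-vector product, which is why such a coarse stage has particularly low computational cost.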

8.
IEEE Trans Image Process ; 25(2): 935-48, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26841394

ABSTRACT

As is well known, Gaussian process regression (GPR) has been successfully applied to example learning-based image super-resolution (SR). Despite its effectiveness, the applicability of a GPR model is limited by its considerable computational cost when a large number of examples are available for a learning task. To alleviate this problem, we propose a novel example learning-based SR method, called active-sampling GPR (AGPR). The proposed approach employs an active learning strategy to heuristically select the most informative samples for training the regression parameters of the GPR model, which significantly improves computational efficiency while maintaining high reconstruction quality. Finally, we suggest an accelerating scheme that further reduces the time complexity of the proposed AGPR-based SR by using a pre-learned projection matrix. We demonstrate objectively and subjectively that the proposed method is superior to other competitors in producing much sharper edges and finer details.
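
The GPR engine underlying AGPR can be sketched in a few lines; the active-sampling criterion itself is not reproduced here, and the RBF kernel, length scale, and noise settings are illustrative choices:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def gp_predict(Xtr, ytr, Xte, noise=1e-2):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))      # regularized train kernel
    Ks = rbf(Xte, Xtr)                                # test-train cross kernel
    mean = Ks @ np.linalg.solve(K, ytr)               # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior variance
    return mean, var

X = np.linspace(0.0, 2.0 * np.pi, 40)
y = np.sin(X)
mean, var = gp_predict(X, y, X)
print(float(np.max(np.abs(mean - y))))  # small fitting error on the training inputs
```

An active-sampling strategy in the spirit of AGPR would rank candidate examples by a criterion such as the posterior variance `var` and keep only the most informative ones, shrinking the cubic-cost kernel matrix.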

9.
IEEE Trans Neural Netw Learn Syst ; 27(12): 2472-2485, 2016 12.
Article in English | MEDLINE | ID: mdl-26357410

ABSTRACT

For the regression-based single-image super-resolution (SR) problem, the key is to establish a mapping between high-resolution (HR) and low-resolution (LR) image patches that yields a visually pleasing image. Most existing approaches solve it by dividing the model into several single-output regression problems, which ignores the fact that a pixel within an HR patch affects other spatially adjacent pixels during training, and thus tends to generate serious ringing artifacts in the resulting HR image as well as increase the computational burden. To alleviate these problems, we propose a structured output regression machine (SORM) to simultaneously model the inherent spatial relations between HR and LR patches, which is well suited to preserving sharp edges. In addition, to further improve the quality of the reconstructed HR images, a nonlocal (NL) self-similarity prior in natural images is formulated as a regularization term to enhance the SORM-based SR results. To make SORM computationally efficient, we use a relatively small set of non-support-vector samples to establish an accurate regression model, together with an accelerating algorithm for the NL self-similarity calculation. Extensive SR experiments on various images indicate that the proposed method achieves more promising performance than other state-of-the-art SR methods in terms of both visual quality and computational cost.

10.
IEEE Trans Image Process ; 24(3): 846-61, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25576571

ABSTRACT

Example learning-based superresolution (SR) algorithms show promise for restoring a high-resolution (HR) image from a single low-resolution (LR) input. The most popular approaches, however, are either time- or space-intensive, which limits their practical applications in many resource-limited settings. In this paper, we propose a novel computationally efficient single image SR method that learns multiple linear mappings (MLM) to directly transform LR feature subspaces into HR subspaces. In particular, we first partition the large nonlinear feature space of LR images into a cluster of linear subspaces. Multiple LR subdictionaries are then learned, followed by inferring the corresponding HR subdictionaries based on the assumption that the LR-HR features share the same representation coefficients. We establish MLM from the input LR features to the desired HR outputs in order to achieve fast yet stable SR recovery. Furthermore, in order to suppress displeasing artifacts generated by the MLM-based method, we apply a fast nonlocal means algorithm to construct a simple yet effective similarity-based regularization term for SR enhancement. Experimental results indicate that our approach is both quantitatively and qualitatively superior to other application-oriented SR methods, while maintaining relatively low time and space complexity.
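
The multiple-linear-mappings idea (partition the LR feature space, then fit one linear map per cluster) can be sketched as follows; the tiny k-means, the two synthetic regimes, and the ground-truth maps are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k=2, iters=20):
    # Deterministic init for this 2-cluster sketch: first and last sample
    C = X[np.array([0, len(X) - 1])]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) for j in range(k)])
    return C, lbl

# Two feature regimes, each governed by its own linear LR→HR map
X = np.vstack([rng.normal(-3.0, 0.5, (100, 2)), rng.normal(3.0, 0.5, (100, 2))])
A = [np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])]
Y = np.vstack([X[:100] @ A[0].T, X[100:] @ A[1].T])

C, lbl = kmeans(X)
maps = [np.linalg.lstsq(X[lbl == j], Y[lbl == j], rcond=None)[0].T for j in range(2)]

x_new = np.array([-3.0, -2.5])                  # falls in the first regime
j = int(np.argmin(((x_new - C) ** 2).sum(-1)))  # route to the nearest cluster
print(np.allclose(maps[j] @ x_new, A[0] @ x_new, atol=1e-6))  # → True
```

Routing each input to its cluster's linear map keeps inference to one distance computation and one matmul, which is the source of the method's speed.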

11.
IEEE Trans Neural Netw Learn Syst ; 25(4): 780-92, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24807954

ABSTRACT

It has been widely acknowledged that learning- and reconstruction-based super-resolution (SR) methods are effective to generate a high-resolution (HR) image from a single low-resolution (LR) input. However, learning-based methods are prone to introduce unexpected details into resultant HR images. Although reconstruction-based methods do not generate obvious artifacts, they tend to blur fine details and end up with unnatural results. In this paper, we propose a new SR framework that seamlessly integrates learning- and reconstruction-based methods for single image SR to: 1) avoid unexpected artifacts introduced by learning-based SR and 2) restore the missing high-frequency details smoothed by reconstruction-based SR. This integrated framework learns a single dictionary from the LR input instead of from external images to hallucinate details, embeds nonlocal means filter in the reconstruction-based SR to enhance edges and suppress artifacts, and gradually magnifies the LR input to the desired high-quality SR result. We demonstrate both visually and quantitatively that the proposed framework produces better results than previous methods from the literature.


Subjects
Algorithms, Artificial Intelligence, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Photography/methods, Reproducibility of Results, Sensitivity and Specificity
12.
IEEE Trans Neural Netw Learn Syst ; 24(10): 1648-59, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24808601

ABSTRACT

Example learning-based image super-resolution (SR) is recognized as an effective way to produce a high-resolution (HR) image with the help of an external training set. The effectiveness of learning-based SR methods, however, depends highly upon the consistency between the supporting training set and the low-resolution (LR) images to be handled. To reduce the adverse effect of incompatible high-frequency details in the training set, we propose a single-image SR approach that learns multiscale self-similarities from the LR image itself. The proposed approach is based on the observation that small patches in natural images tend to repeat themselves many times, both within the same scale and across different scales. To synthesize the missing details, we establish HR-LR patch pairs using the initial LR input and its down-sampled version to capture the similarities across scales, and we use the neighbor embedding algorithm to estimate the relationship between the LR and HR image pairs. To fully exploit the similarities across various scales inside the input LR image, we accumulate the previous resultant images as training examples for the subsequent reconstruction steps and adopt a gradual magnification scheme to upscale the LR input to the desired size step by step. In addition, to preserve sharper edges and suppress aliasing artifacts, we further apply the nonlocal means method to learn the similarity within the same scale and formulate a nonlocal prior regularization term to better pose the SR estimation under a reconstruction-based SR framework. Experimental results demonstrate that the proposed method can produce compelling SR recovery both quantitatively and perceptually in comparison with other state-of-the-art baselines.
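
The HR-LR pair construction from the input image itself can be sketched as a nearest-neighbor lookup across scales (k = 1 neighbor embedding); the patch size and block-mean downsampling below are our choices:

```python
import numpy as np

def downsample(im):
    """2x block-mean downsampling."""
    return im.reshape(im.shape[0] // 2, 2, im.shape[1] // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
lr = rng.random((16, 16))                  # the LR input plays the "HR" role here
small = downsample(lr)                     # its downsampled copy plays the "LR" role
p = 2

# Pair every small-scale patch with its 2x-larger parent patch in `lr`
pairs = []
for i in range(small.shape[0] - p + 1):
    for j in range(small.shape[1] - p + 1):
        pairs.append((small[i:i + p, j:j + p], lr[2 * i:2 * i + 2 * p, 2 * j:2 * j + 2 * p]))

# For a query patch, fetch the HR parent of its nearest small-scale neighbour
query = small[3:3 + p, 4:4 + p]
k = min(range(len(pairs)), key=lambda t: np.sum((pairs[t][0] - query) ** 2))
print(np.allclose(pairs[k][1], lr[6:6 + 2 * p, 8:8 + 2 * p]))  # → True
```

The lookup recovers the correct parent patch, which is the mechanism that lets the image supply its own training examples instead of an external set.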

13.
IEEE Trans Image Process ; 21(11): 4544-56, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22829403

ABSTRACT

Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.
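
The non-local prior's main ingredient, the non-local means filter, can be sketched directly; the patch size, search radius, and bandwidth below are illustrative values, and the stripe image is synthetic:

```python
import numpy as np

def nl_means(im, p=1, search=3, h=0.3):
    """Non-local means: each pixel becomes a patch-similarity-weighted average."""
    H, W = im.shape
    pad = np.pad(im, p, mode='reflect')
    out = np.zeros_like(im)
    for i in range(H):
        for j in range(W):
            ref = pad[i:i + 2 * p + 1, j:j + 2 * p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii = min(max(i + di, 0), H - 1)
                    jj = min(max(j + dj, 0), W - 1)
                    cand = pad[ii:ii + 2 * p + 1, jj:jj + 2 * p + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    acc += w * im[ii, jj]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(0)
clean = np.tile([0.0, 1.0], (8, 4))        # stripes: many exactly repeated patches
noisy = clean + 0.05 * rng.normal(size=clean.shape)
den = nl_means(noisy)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))  # → True
```

Because the stripes repeat, each pixel finds many similar patches to average over, which is exactly the patch redundancy the non-local prior exploits.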


Subjects
Algorithms, Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Animals, Artificial Intelligence, Databases, Factual, Humans, Regression Analysis, Reproducibility of Results
14.
IEEE Trans Image Process ; 21(7): 3194-205, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22411005

ABSTRACT

Until now, neighbor-embedding-based (NE) algorithms for super-resolution (SR) have carried out two independent processes to synthesize high-resolution (HR) image patches. In the first process, neighbor search is performed using the Euclidean distance metric, and in the second process, the optimal weights are determined by solving a constrained least squares problem. However, the separate processes are not optimal. In this paper, we propose a sparse neighbor selection scheme for SR reconstruction. We first predetermine a larger number of neighbors as potential candidates and develop an extended Robust-SL0 algorithm to simultaneously find the neighbors and to solve the reconstruction weights. Recognizing that the k-nearest neighbor (k-NN) for reconstruction should have similar local geometric structures based on clustering, we employ a local statistical feature, namely histograms of oriented gradients (HoG) of low-resolution (LR) image patches, to perform such clustering. By conveying local structural information of HoG in the synthesis stage, the k-NN of each LR input patch is adaptively chosen from their associated subset, which significantly improves the speed of synthesizing the HR image while preserving the quality of reconstruction. Experimental results suggest that the proposed method can achieve competitive SR quality compared with other state-of-the-art baselines.
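
The HoG feature used for clustering can be sketched as a plain gradient-orientation histogram over a patch; the bin count and normalization are our choices, not necessarily the paper's:

```python
import numpy as np

def hog_descriptor(patch, bins=8):
    """Unsigned gradient-orientation histogram, magnitude-weighted and normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold to unsigned orientation [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

# Two patches with perpendicular edge structure land in different orientation bins
vert = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # gradient along x only
horz = vert.T                                      # gradient along y only
print(np.argmax(hog_descriptor(vert)), np.argmax(hog_descriptor(horz)))
```

Patches with similar local geometric structure produce similar histograms, so clustering on this descriptor groups the LR patches the way the synthesis stage requires.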

15.
IEEE Trans Image Process ; 21(2): 469-80, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22262669

ABSTRACT

The neighbor-embedding (NE) algorithm for single-image super-resolution (SR) reconstruction assumes that the feature spaces of low-resolution (LR) and high-resolution (HR) patches are locally isometric. However, this does not hold for SR because of the one-to-many mappings between LR and HR patches. To overcome, or at least reduce, this problem for NE-based SR reconstruction, we apply a joint learning technique to train two projection matrices simultaneously and map the original LR and HR feature spaces onto a unified feature subspace. Subsequently, the k-nearest neighbor selection of the input LR image patches is conducted in the unified feature subspace to estimate the reconstruction weights. To handle a large number of samples, joint learning locally exploits a coupled constraint by linking the LR-HR counterparts together with the k-nearest grouping patch pairs. To further refine the initial SR estimate, we impose a global reconstruction constraint on the SR outcome based on the maximum a posteriori framework. Preliminary experiments suggest that the proposed algorithm outperforms NE-related baselines.
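
The reconstruction-weight step of neighbor embedding is the classic sum-to-one constrained least squares (LLE-style). A minimal sketch, with the regularizer value and the toy neighbors as our choices:

```python
import numpy as np

def ne_weights(x, N, reg=1e-6):
    """Weights w minimising ||x - N^T w||^2 subject to sum(w) = 1 (LLE-style)."""
    G = (N - x) @ (N - x).T                       # local Gram matrix, k x k
    G = G + reg * np.trace(G) * np.eye(len(N))    # regularize (G may be singular)
    w = np.linalg.solve(G, np.ones(len(N)))
    return w / w.sum()

x = np.array([1.0, 1.0])                          # patch feature to reconstruct
N = np.array([[0.0, 0.0],                         # its k nearest neighbours (rows)
              [2.0, 0.0],
              [1.0, 2.0]])
w = ne_weights(x, N)
print(w, np.allclose(w @ N, x, atol=1e-3))        # weights reproduce x
```

In NE-based SR the same weights are then applied to the neighbors' HR counterparts, which is why computing them in a well-chosen (here, unified) feature space matters.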

16.
IEEE Trans Image Process ; 20(10): 2738-47, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21926001

ABSTRACT

Multiframe super-resolution (SR) reconstruction aims to produce a high-resolution (HR) image from a set of low-resolution (LR) images. In the reconstruction process, fuzzy registration usually plays a critical role. It mainly focuses on the correlation between pixels of the candidate and reference images, reconstructing each pixel by averaging all its neighboring pixels. Fuzzy-registration-based SR therefore performs well and has been widely applied in practice. However, if objects appear or disappear among the LR images, or if the images are rotated at different angles, the correlation between corresponding pixels becomes weak, making it difficult to use the LR images effectively during SR reconstruction. Moreover, if the LR images are noisy, the reconstruction quality is seriously affected. To address, or at least reduce, these problems, this paper presents a novel SR method based on the Zernike moment, to make the most of the details available in each LR image for high-quality SR reconstruction. Experimental results show that the proposed method outperforms existing methods in terms of robustness and visual effects.
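
The rotation invariance that makes Zernike moments attractive under inter-frame rotation can be sketched for one low-order moment (n = 2, m = 2); the discretization choices below are ours:

```python
import numpy as np

def zernike_22(im):
    """Magnitude of the (n=2, m=2) Zernike moment over the unit disc."""
    H, W = im.shape
    y, x = np.mgrid[-1:1:H * 1j, -1:1:W * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = r <= 1.0
    V = r ** 2 * np.exp(-2j * theta)          # basis function: R_22(r) = r^2
    return np.abs(np.sum(im[mask] * np.conj(V[mask])))

im = np.zeros((32, 32))
im[8:24, 12:20] = 1.0                          # a centred bar
rot = np.rot90(im)                             # the same bar rotated by 90 degrees
print(np.isclose(zernike_22(im), zernike_22(rot), rtol=1e-6))  # → True
```

Rotating the image only multiplies the complex moment by a phase factor, so its magnitude is unchanged; matching such magnitudes is what lets the method tolerate angle rotations among the LR frames.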
