Results 1 - 14 of 14
1.
Sci Rep ; 11(1): 9891, 2021 May 10.
Article in English | MEDLINE | ID: mdl-33972640

ABSTRACT

Graphene oxide (GO), reduced graphene oxide (rGO) and carbon nanotubes (CNTs) each have advantages in electrical, optical, thermal and mechanical properties. An effective combination of these materials is ideal for preparing transparent conductive films to replace traditional indium tin oxide films. At present, the preparation conditions of rGO are usually harsh and some of the reductants are toxic. In this paper, an SnCl2/ethanol solution was selected as the reductant because it requires mild reaction conditions and produces no harmful products. The whole rGO preparation process was convenient, fast and environmentally friendly. SEM, XPS, Raman and XRD were then used to verify the high reduction efficiency. CNTs were introduced to improve the conductivity of the film. Transmittance and sheet resistance were the criteria used to choose the reduction time and the GO/CNT content ratios. The nitric acid post-treatment not only removed the by-product (SnO2) and the dispersant from the film but also introduced a doping effect, all of which helped reduce the sheet resistance. Ultimately, by combining rGO, GO and CNTs, transparent conductive films with a bilayer, three-dimensional structure were prepared; they exhibited high transmittance and low sheet resistance (58.8 Ω/sq. at 83.45 T%, 47.5 Ω/sq. at 79.07 T%), with corresponding figure-of-merit (σDC/σOp) values of 33.8 and 31.8, respectively. In addition, GO and rGO can modify the surface and reduce the film surface roughness. These transparent conductive films are expected to be used in photoelectric devices.
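The two figure-of-merit values quoted above can be checked from the reported sheet resistances and transmittances, assuming the figure of merit is the usual ratio of DC to optical conductivity for thin films, σDC/σOp ≈ 188.5/(Rs·(T^(-1/2) − 1)); a minimal sketch:

```python
# Figure of merit of a transparent conductive film from sheet resistance
# Rs (ohm/sq) and transmittance T (fraction), assuming the standard thin-film
# relation sigma_dc/sigma_opt = 188.5 / (Rs * (T**-0.5 - 1)).
def fom(rs_ohm_sq: float, transmittance: float) -> float:
    return 188.5 / (rs_ohm_sq * (transmittance ** -0.5 - 1.0))

print(f"{fom(58.8, 0.8345):.1f}")  # close to the reported 33.8
print(f"{fom(47.5, 0.7907):.1f}")  # close to the reported 31.8
```

Both quoted FoM values are reproduced to within rounding, which supports this reading of the bracketed formula.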

2.
IEEE Trans Pattern Anal Mach Intell ; 43(2): 549-566, 2021 02.
Article in English | MEDLINE | ID: mdl-31478840

ABSTRACT

In some significant applications, such as data forecasting, the locations of missing entries cannot be assumed to obey any non-degenerate distribution, questioning the validity of the prevalent assumption that missing data are randomly chosen according to some probabilistic model. To break through the limits of random sampling, we explore in this paper the problem of real-valued matrix completion under deterministic sampling. We propose two conditions, the isomeric condition and relative well-conditionedness, under which an arbitrary matrix is guaranteed to be recoverable from a sampling of its entries. The proposed conditions are provably weaker than the assumption of uniform sampling and, most importantly, the isomeric condition is provably necessary for the completion of any partial matrix to be identifiable. Equipped with these new tools, we prove a collection of theorems for missing data recovery as well as convex/nonconvex matrix completion. Among other things, we study in detail a Schatten quasi-norm induced method termed isomeric dictionary pursuit (IsoDP), and we show that IsoDP exhibits some distinct behaviors absent in traditional bilinear programs.
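To make the deterministic-sampling setting concrete, here is a minimal soft-impute-style completion sketch in Python. It is a generic convex baseline under an invented, clearly non-random mask, not the paper's IsoDP method:

```python
import numpy as np

# Soft-impute: alternately re-impose the observed entries and shrink the
# singular values. The mask below is deterministic (every other row/column),
# the kind of structured pattern the paper's conditions are meant to cover.
def complete(M_obs, mask, tau=0.5, iters=500):
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X[mask] = M_obs[mask]                           # keep observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # singular value shrinkage
    X[mask] = M_obs[mask]
    return X

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 truth
mask = np.zeros((20, 20), dtype=bool)
mask[::2, :] = True                                     # every other row...
mask[:, ::2] = True                                     # ...plus every other column
X = complete(L * mask, mask)
err = np.linalg.norm((X - L)[~mask]) / np.linalg.norm(L[~mask])
print(f"relative error on missing entries: {err:.3f}")
```

This mask is fully deterministic, yet the low-rank structure still identifies the missing block, illustrating why conditions beyond uniform sampling are the right object of study.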

3.
IEEE Trans Pattern Anal Mach Intell ; 43(2): 459-472, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31398110

ABSTRACT

First-order non-convex Riemannian optimization algorithms have gained recent popularity in structured machine learning problems, including principal component analysis and low-rank matrix completion. This paper presents an efficient Riemannian Stochastic Path Integrated Differential EstimatoR (R-SPIDER) algorithm to solve finite-sum and online Riemannian non-convex minimization problems. At the core of R-SPIDER is a recursive semi-stochastic gradient estimator that can accurately estimate the Riemannian gradient not only under exponential mapping and parallel transport, but also under general retraction and vector transport operations. Compared with prior Riemannian algorithms, this recursive gradient estimation mechanism endows R-SPIDER with lower first-order oracle complexity. Specifically, for finite-sum problems with n components, R-SPIDER is proved to converge to an ϵ-approximate stationary point within [Formula: see text] stochastic gradient evaluations, beating the best-known complexity [Formula: see text]; for online optimization, R-SPIDER is shown to converge with [Formula: see text] complexity, which is, to the best of our knowledge, the first non-asymptotic result for online Riemannian optimization. For the special case of gradient-dominated functions, we further develop a variant of R-SPIDER with an improved linear rate of convergence. Extensive experimental results demonstrate the advantage of the proposed algorithms over state-of-the-art Riemannian non-convex optimization methods.
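As a toy illustration of the recursive estimator (not the paper's R-SPIDER itself), the sketch below runs a SPIDER-style variance-reduced loop on the unit sphere, using projection as the vector transport and normalization as the retraction; the leading-eigenvector objective, batch sizes, and step size are all invented for the example:

```python
import numpy as np

# Minimize f(x) = -(1/n) * sum_i (a_i . x)^2 over ||x|| = 1 (leading
# eigenvector) with a SPIDER-style recursive gradient estimator.
rng = np.random.default_rng(1)
n, d = 200, 10
A = rng.standard_normal((n, d))
A[:, 0] *= 3.0                                  # make direction 0 dominant

def rgrad(x, idx):                              # Riemannian minibatch gradient
    g = -2.0 * (A[idx] * (A[idx] @ x)[:, None]).mean(0)
    return g - (g @ x) * x                      # project onto tangent space at x

def retract(x, v):                              # retraction: step, then renormalize
    y = x + v
    return y / np.linalg.norm(y)

def transport(x_new, v):                        # vector transport: re-project
    return v - (v @ x_new) * x_new

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
eta, epoch = 0.05, 20
v = rgrad(x, np.arange(n))
for t in range(400):
    if t % epoch == 0:
        v = rgrad(x, np.arange(n))              # periodic full-gradient refresh
    x_new = retract(x, -eta * v)
    b = rng.integers(0, n, size=10)             # small batch for the recursion
    v = rgrad(x_new, b) - transport(x_new, rgrad(x, b)) + transport(x_new, v)
    x = x_new
print(f"alignment with dominant direction: {abs(x[0]):.2f}")
```

The recursion `v = grad_b(x_new) - Γ(grad_b(x)) + Γ(v)` is the SPIDER correction; the transports keep every term in the current tangent space, which is the point the abstract makes about general retraction/vector transport.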

4.
J Hazard Mater ; 384: 120978, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31780297

ABSTRACT

Membrane fouling can be effectively addressed by modifying the membrane to realize anti-fouling capability together with real-time fouling detection. Here, we present the synthesis and water-treatment testing of a promising candidate for this application: a composite membrane of polyvinylidene fluoride (PVDF) and functionalized carbon nanomaterials prepared by a facile phase-inversion method. The synergistic effect of oxidized multi-walled carbon nanotubes (OMWCNTs) and graphene oxide (GO) yielded better surface pore structures, higher surface roughness, greater hydrophilicity, and better antifouling properties than those of pristine PVDF membranes. The PVDF/OMWCNT/GO mixed matrix membranes (MMMs) achieved a high water flux of 125.6 L m⁻² h⁻¹ with a high pollutant rejection rate, and their electrical conductivity of 2.11 × 10⁻⁴ S cm⁻¹ at 100 kHz was sensitive to the amount of pollutant uptake. Using these hybrid MMMs, we demonstrate simultaneous pollutant filtering and uptake monitoring, an important step in revolutionizing the water treatment industry.

5.
ACS Omega ; 4(23): 20265-20274, 2019 Dec 03.
Article in English | MEDLINE | ID: mdl-31815229

ABSTRACT

An amphiphilic graphene derivative was prepared by covalently grafting graphene oxide (GO) with isophorone diisocyanate and N,N-dimethylethanolamine and then noncovalently grafting the product with sodium dodecylbenzenesulfonate. Results from infrared spectroscopy, X-ray photoelectron spectroscopy, thermal gravimetric analysis, and X-ray diffraction analysis revealed that the short chains were successfully grafted onto the surface of GO. Scanning electron microscopy and optical microscopy then showed that the modified GO (IP-GO) has better dispersibility and compatibility than GO and reduced GO in the waterborne polyurethane matrix. The relationship between the corrosion resistance of the composite coatings and both the dispersibility of the graphene derivative and its compatibility with the polymer matrix was discussed. The anticorrosive properties were characterized by electrochemical impedance spectroscopy and salt spray tests. Through a series of anticorrosion tests, it is concluded that the anticorrosion performance of a composite coating with 0.3 wt % IP-GO is significantly improved. The excellent anticorrosion performance is attributed to the uniform dispersion and good compatibility of IP-GO in waterborne polyurethane.

6.
IEEE Trans Neural Netw Learn Syst ; 29(6): 2441-2449, 2018 06.
Article in English | MEDLINE | ID: mdl-28489554

ABSTRACT

Hashing is emerging as a powerful tool for building highly efficient indices in large-scale search systems. In this paper, we study spectral hashing (SH), a classical method of unsupervised hashing. In general, SH solves for the hash codes by minimizing an objective function that tries to preserve the similarity structure of the given data. Although computationally simple, SH very often performs unsatisfactorily and lags distinctly behind state-of-the-art methods. We observe that the inferior performance of SH is mainly due to its imperfect formulation: optimizing the minimization problem in SH cannot actually ensure that the similarity structure of the high-dimensional data is preserved in the low-dimensional hash code space. In this paper, we therefore introduce reversed SH (ReSH), which is SH with its input and output interchanged. Unlike SH, which estimates the similarity structure from the given high-dimensional data, ReSH defines the similarities between data points according to the unknown low-dimensional hash codes. Equipped with such a reversal mechanism, ReSH can seamlessly overcome the drawback of SH. More precisely, the minimization problem in ReSH can be optimized if and only if similar data points are mapped to adjacent hash codes and, most importantly, dissimilar data points are considerably separated from each other in the code space. Finally, we solve the minimization problem in ReSH with multilayer neural networks and obtain state-of-the-art retrieval results on three benchmark data sets.
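For reference, the SH baseline that ReSH reverses can be sketched in a few lines: build a similarity graph, take the smoothest nontrivial Laplacian eigenvectors, and threshold them into bits. This is a deliberately simplified illustration (classical SH additionally assumes a separable data distribution to hash unseen points), with toy data invented for the example:

```python
import numpy as np

# Minimal spectral-embedding hashing: codes that keep similar points close
# come from thresholded eigenvectors of the graph Laplacian.
def spectral_hash(X, n_bits=1, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian similarity graph
    Lap = np.diag(W.sum(1)) - W                 # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(Lap)
    Y = vecs[:, 1:1 + n_bits]                   # skip the constant eigenvector
    return (Y > 0).astype(int)                  # threshold into binary codes

rng = np.random.default_rng(2)
X = np.vstack([rng.normal((0, 0), 0.1, size=(5, 2)),   # two well-separated
               rng.normal((3, 0), 0.1, size=(5, 2))])  # toy clusters
codes = spectral_hash(X, n_bits=1)
```

With one bit and two clusters, the Fiedler vector is piecewise constant over the clusters, so each cluster receives a single shared code.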

7.
IEEE Trans Pattern Anal Mach Intell ; 39(12): 2437-2450, 2017 12.
Article in English | MEDLINE | ID: mdl-28092519

ABSTRACT

We introduce a family of Newton-type greedy selection methods for ℓ0-constrained minimization problems. The basic idea is to construct a quadratic function to approximate the original objective function around the current iterate and solve the constructed quadratic program over the cardinality constraint. The next iterate is then estimated via a line search operation between the current iterate and the solution of the sparse quadratic program. This iterative procedure can be interpreted as an extension of the constrained Newton methods from convex minimization to non-convex ℓ0-constrained minimization. We show that the proposed algorithms converge asymptotically and that the rate of local convergence is superlinear up to a certain estimation error. Our methods compare favorably against several state-of-the-art greedy selection methods when applied to sparse logistic regression and sparse support vector machines.
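The quadratic-model-plus-line-search loop described above can be sketched for ℓ0-constrained least squares. This is a simplified stand-in (the cardinality-constrained quadratic program is approximated by keeping the k largest entries of a Newton point), not the paper's exact algorithms; the problem data are invented:

```python
import numpy as np

# Newton-type greedy sketch for min 0.5*||Ax - b||^2  s.t.  ||x||_0 <= k:
# take a Newton step on the quadratic model, hard-threshold to the k largest
# coordinates, then backtrack between the old iterate and the sparse point.
def newton_greedy_ls(A, b, k, iters=20):
    n = A.shape[1]
    x = np.zeros(n)
    H = A.T @ A + 1e-8 * np.eye(n)              # Hessian of the objective
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        z = x - np.linalg.solve(H, g)           # Newton step on the model
        supp = np.argsort(-np.abs(z))[:k]       # greedy support selection
        z_sparse = np.zeros(n)
        z_sparse[supp] = z[supp]
        t = 1.0                                 # backtracking line search
        while t > 1e-4 and f(x + t * (z_sparse - x)) > f(x):
            t *= 0.5
        x = x + t * (z_sparse - x)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 30))
x_true = np.zeros(30)
x_true[[2, 7, 19]] = (1.5, -2.0, 1.0)
x_hat = newton_greedy_ls(A, A @ x_true, k=3)
```

On this noiseless overdetermined instance the Newton point already equals the least-squares solution, so the hard threshold recovers the true support in one pass.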

8.
IEEE Trans Neural Netw Learn Syst ; 28(6): 1425-1438, 2017 06.
Article in English | MEDLINE | ID: mdl-27046912

ABSTRACT

A major advance in deep multilayer neural networks (DNNs) is the invention of various unsupervised pretraining methods to initialize network parameters, which lead to good prediction accuracy. This paper presents a sparseness analysis of the hidden units in the pretraining process. In particular, we use the L1-norm to measure sparseness and provide sufficient conditions under which pretraining leads to sparseness for popular pretraining models such as denoising autoencoders (DAEs) and restricted Boltzmann machines (RBMs). Our experimental results demonstrate that when the sufficient conditions are satisfied, the pretraining models lead to sparseness. Our experiments also reveal that with sigmoid activation functions, pretraining plays an important sparseness role in DNNs with sigmoid units (Dsigm), whereas with rectified linear unit (ReLU) activations, pretraining becomes less effective for DNNs with ReLU (Drelu). Notably, Drelu can reach a higher recognition accuracy than DNNs with pretraining (DAEs and RBMs), as it captures the main benefit (such as sparseness encouragement) of pretraining in Dsigm. However, ReLU is not adapted to the different firing rates of biological neurons, because the firing rate actually changes with varying membrane resistance. To address this problem, we further propose a family of rectifier piecewise linear units (RePLUs) to fit the different firing rates. The experimental results show that RePLU performs better than ReLU and is comparable with pretraining techniques such as RBMs and DAEs.

9.
IEEE Trans Neural Netw Learn Syst ; 27(11): 2448-2453, 2016 11.
Article in English | MEDLINE | ID: mdl-26415190

ABSTRACT

Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ2 kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ2 kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ2 kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ2 multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach significantly speeds up the training of χ2 kernel SVMs at almost no cost in testing accuracy.
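A sketch of the two ingredients: an explicit feature map for the additive χ2 kernel (here a Vedaldi-Zisserman-style sampled map with signature κ(ω) = sech(πω), which may differ from the paper's construction) followed by a sparse random projection with ±1 entries. The data and dimensions are invented:

```python
import numpy as np

# Explicit map for the additive chi2 kernel k(x, y) = sum_i 2*x_i*y_i/(x_i+y_i),
# sampled at frequencies 0, L, ..., nL, then compressed by a sparse projection.
def chi2_feature_map(X, n=2, L=0.5):
    feats = [np.sqrt(X * L)]                    # omega = 0 term, kappa(0) = 1
    logX = np.log(np.maximum(X, 1e-12))
    for j in range(1, n + 1):
        w = j * L
        amp = np.sqrt(2.0 * X * L / np.cosh(np.pi * w))
        feats += [amp * np.cos(w * logX), amp * np.sin(w * logX)]
    return np.hstack(feats)                     # dims -> dims * (2n + 1)

def sparse_projection(dim_in, dim_out, rng, s=3.0):
    # sparse Rademacher matrix: mostly zeros, +/- sqrt(s) with prob 1/(2s) each
    R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                   p=[0.5 / s, 1.0 - 1.0 / s, 0.5 / s], size=(dim_in, dim_out))
    return R / np.sqrt(dim_out)

rng = np.random.default_rng(4)
X = rng.random((5, 20))                         # nonnegative histogram-like data
Phi = chi2_feature_map(X)                       # (5, 100) explicit features
K_approx = Phi @ Phi.T
K_exact = np.array([[np.sum(2 * x * y / (x + y)) for y in X] for x in X])
Phi_low = Phi @ sparse_projection(Phi.shape[1], 60, rng)   # (5, 60) compressed
```

The expansion multiplies the dimension by 2n + 1, which is exactly the blow-up the sparse projection is meant to undo.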

10.
IEEE Trans Image Process ; 23(12): 5390-9, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25312925

ABSTRACT

In image classification, recognition, or retrieval systems, image contents are commonly described by global features. However, global features generally contain noise from the background, occlusion, or irrelevant objects in the images. Thus, only part of the global feature elements are informative for describing the objects of interest and useful for image analysis tasks. In this paper, we propose algorithms to automatically discover subgroups of highly correlated feature elements within predefined global features. To this end, we first propose a novel mixture sparse regression (MSR) method, which groups the elements of a single vector according to the membership conveyed by their sparse regression coefficients. Based on MSR, we then develop autogrouped sparse representation (ASR), which groups correlated feature elements together by fusing their individual sparse representations over multiple samples. We apply ASR/MSR to two practical visual analysis tasks: 1) multilabel image classification and 2) motion segmentation. Comprehensive experimental evaluations show that our proposed methods achieve superior performance compared with state-of-the-art methods on these two tasks.

11.
IEEE Trans Pattern Anal Mach Intell ; 35(12): 3025-36, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24136438

ABSTRACT

The forward greedy selection algorithm of Frank and Wolfe has recently been applied with success to coordinate-wise sparse learning problems, characterized by a tradeoff between sparsity and accuracy. In this paper, we generalize this method to the setup of pursuing sparse representations over a prefixed dictionary. Our proposed algorithm iteratively selects an atom from the dictionary and minimizes the objective function over the linear combinations of all the selected atoms. The rate of convergence of this greedy selection procedure is analyzed. Furthermore, we extend the algorithm to the setup of learning nonnegative and convex sparse representation over a dictionary. Applications of the proposed algorithms to sparse precision matrix estimation and low-rank subspace segmentation are investigated with efficiency and effectiveness validated on benchmark datasets.
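The select-then-reoptimize loop described above can be sketched for a least-squares objective; with full reoptimization over the selected atoms it reduces to orthogonal matching pursuit. A simplified sketch with invented data, not the paper's general algorithm:

```python
import numpy as np

# Forward greedy selection over a fixed dictionary D: pick the atom most
# correlated with the residual, then minimize the objective over the linear
# combinations of all selected atoms (fully corrective refit).
def greedy_sparse_code(D, y, n_atoms):
    selected, r = [], y.copy()
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ r)))     # best-aligned atom
        if j not in selected:
            selected.append(j)
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        r = y - D[:, selected] @ coef           # refit over selected support
    x = np.zeros(D.shape[1])
    x[selected] = coef
    return x

rng = np.random.default_rng(5)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(100)
x_true[[4, 42, 77]] = (2.0, -1.5, 1.0)
x_hat = greedy_sparse_code(D, D @ x_true, n_atoms=3)
```

The refit step over all selected atoms is what distinguishes this fully corrective scheme from plain matching pursuit, which only updates one coefficient per iteration.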

12.
PLoS One ; 8(7): e69842, 2013.
Article in English | MEDLINE | ID: mdl-23936111

ABSTRACT

Antigenic characterization based on serological data, such as the hemagglutination inhibition (HI) assay, is one of the routine procedures in influenza vaccine strain selection. In many cases, it is impossible to measure all pairwise antigenic correlations between test antigens and reference antisera in a single experiment, so HI tables from a number of individual experiments must be combined and integrated. Measurements from different experiments may be inconsistent due to different experimental conditions, and consequently we observe a matrix with missing data and possibly inconsistent measurements. In this paper, we develop a new mathematical model, which we refer to as Joint Matrix Completion and Filtering, for HI data integration. In this approach, we simultaneously handle the incompleteness and uncertainty of the observations by assuming that the underlying merged HI data matrix has low rank, while carefully modeling the different noise levels in each individual table. An efficient blockwise coordinate descent procedure is developed for optimization. The performance of our approach is validated on synthetic and real influenza datasets. The proposed Joint Matrix Completion and Filtering model can be adapted as a general model for biological data integration, targeting data noise and missing values within and across experiments.


Subjects
Viral Antigens/chemistry , Immune Sera/analysis , Influenza Vaccines/chemistry , Immunological Models , Statistical Models , Animals , Viral Antigens/immunology , Birds/immunology , Birds/virology , Hemagglutination Inhibition Tests , Humans , Immune Sera/immunology , Influenza Vaccines/administration & dosage , Influenza Vaccines/immunology , Avian Influenza/immunology , Avian Influenza/prevention & control , Human Influenza/immunology , Human Influenza/prevention & control , Observer Variation , Orthomyxoviridae/immunology , Signal-to-Noise Ratio
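A toy sketch of the joint completion-and-filtering idea, not the paper's exact model: several noisy, partially observed tables sharing one low-rank matrix are merged by inverse-variance weighting (the per-table noise levels are assumed known here) and completed with a rank-r alternating least-squares loop, a simple form of blockwise coordinate descent:

```python
import numpy as np

# Merge K noisy partial tables of one low-rank matrix, then complete it by
# alternating least squares over the factor blocks U and V.
def merge_and_complete(tables, masks, noise_sd, r=2, iters=100, lam=1e-6):
    w = np.array([1.0 / s ** 2 for s in noise_sd])      # inverse-variance weights
    num = sum(wk * Mk * Tk for wk, Mk, Tk in zip(w, masks, tables))
    den = sum(wk * Mk for wk, Mk in zip(w, masks))
    mask = den > 0
    merged = np.where(mask, num / np.where(mask, den, 1.0), 0.0)
    m, n = merged.shape
    rng = np.random.default_rng(6)
    U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                              # block update: row i of U
            J = mask[i]
            U[i] = np.linalg.solve(V[J].T @ V[J] + lam * np.eye(r),
                                   V[J].T @ merged[i, J])
        for j in range(n):                              # block update: row j of V
            I = mask[:, j]
            V[j] = np.linalg.solve(U[I].T @ U[I] + lam * np.eye(r),
                                   U[I].T @ merged[I, j])
    return U @ V.T

rng = np.random.default_rng(7)
L_true = rng.standard_normal((15, 2)) @ rng.standard_normal((2, 12))
masks = [rng.random((15, 12)) < 0.6 for _ in range(2)]  # two partial tables
tables = [L_true + sd * rng.standard_normal((15, 12)) for sd in (0.1, 0.5)]
X = merge_and_complete(tables, masks, noise_sd=(0.1, 0.5))
err = np.linalg.norm(X - L_true) / np.linalg.norm(L_true)
```

The inverse-variance weights play the "filtering" role, letting the cleaner table dominate wherever both tables observe an entry.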
13.
IEEE Trans Image Process ; 21(10): 4349-60, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22736645

ABSTRACT

We address the problem of visual classification with multiple features and/or multiple instances. Motivated by the recent success of multitask joint covariate selection, we formulate this problem as a multitask joint sparse representation model that combines the strength of multiple features and/or instances for recognition. A joint sparsity-inducing norm is utilized to enforce class-level joint sparsity patterns among the multiple representation vectors. The proposed model can be efficiently optimized by a proximal gradient method. Furthermore, we extend our method to the setup where features are described by kernel matrices. We then investigate two applications of our method to visual classification: 1) fusing multiple kernel features for object categorization and 2) robust face recognition in video with an ensemble of query images. Extensive experiments on challenging real-world data sets demonstrate that the proposed method is competitive with state-of-the-art methods in the respective applications.
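The proximal gradient step for a joint-sparsity norm is easy to sketch: each row of the coefficient matrix (one row per dictionary atom, one column per feature or instance) is shrunk jointly by the ℓ2,1 proximal operator. This is a generic ℓ2,1-regularized least-squares sketch with invented data, not the paper's full model:

```python
import numpy as np

# ISTA for min sum_k 0.5*||D_k w_k - y_k||^2 + lam * sum_rows ||W_row||_2.
# The prox shrinks whole rows, enforcing a shared (joint) sparsity pattern.
def l21_prox(W, t):
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)) * W

def multitask_joint_sparse(Ds, ys, lam=0.1, iters=300):
    K, n = len(Ds), Ds[0].shape[1]
    W = np.zeros((n, K))
    step = 1.0 / max(np.linalg.norm(D, 2) ** 2 for D in Ds)
    for _ in range(iters):
        G = np.stack([Ds[k].T @ (Ds[k] @ W[:, k] - ys[k]) for k in range(K)], 1)
        W = l21_prox(W - step * G, step * lam)  # rows shrink jointly
    return W

rng = np.random.default_rng(8)
Ds = [rng.standard_normal((40, 25)) for _ in range(3)]  # one dictionary per task
W_true = np.zeros((25, 3))
W_true[[3, 11], :] = rng.standard_normal((2, 3))        # shared row support
ys = [Ds[k] @ W_true[:, k] for k in range(3)]
W_hat = multitask_joint_sparse(Ds, ys)
row_norms = np.linalg.norm(W_hat, axis=1)
```

Because the penalty acts on row norms rather than individual entries, an atom is either used by all tasks or by none, which is the class-level joint sparsity the abstract describes.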

14.
Neural Comput ; 24(4): 1047-84, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22091666

ABSTRACT

We investigate Newton-type optimization methods for solving piecewise linear systems (PLSs) with nondegenerate coefficient matrices. Such systems arise, for example, from the numerical solution of linear complementarity problems, which are useful for modeling several learning and optimization problems. In this letter, we propose an effective damped Newton method, PLS-DN, to find the exact (up to machine precision) solution of nondegenerate PLSs. PLS-DN exhibits a provable semi-iterative property; that is, the algorithm converges globally to the exact solution in a finite number of iterations. The rate of convergence is shown to be at least linear before termination. We emphasize the applications of our method in modeling, from the novel perspective of PLSs, statistical learning problems such as box-constrained least squares, elitist Lasso (Kowalski & Torrésani, 2008), and support vector machines (Cortes & Vapnik, 1995). Numerical results on synthetic and benchmark data sets demonstrate the effectiveness and efficiency of PLS-DN on these problems.
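Of the applications listed, box-constrained least squares is the easiest to illustrate. The sketch below uses a plain projected-gradient stand-in rather than PLS-DN itself (whose damped Newton details are not reproduced here), with invented data:

```python
import numpy as np

# Box-constrained least squares: min 0.5*||Ax - b||^2  s.t.  lo <= x <= hi.
# Projected gradient with a 1/L step; the clipping makes the optimality
# condition piecewise linear, which is the PLS view of this problem.
def box_ls(A, b, lo, hi, iters=2000):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the LS objective
    for _ in range(iters):
        x = np.clip(x - step * A.T @ (A @ x - b), lo, hi)
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((30, 8))
x_star = rng.uniform(-2.0, 2.0, 8)              # unconstrained target
b = A @ x_star
x_hat = box_ls(A, b, lo=-1.0, hi=1.0)
g = A.T @ (A @ x_hat - b)                       # gradient at the solution
```

At the solution, coordinates strictly inside the box have (near-)zero gradient while clipped coordinates sit exactly on a bound; the resulting fixed-point equation x = clip(x - step*g) is precisely a piecewise linear system.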


Subjects
Algorithms , Artificial Intelligence , Linear Models , Neural Networks (Computer) , Computer Simulation , Learning/physiology , Least-Squares Analysis , Theoretical Models , Support Vector Machine