Results 1 - 11 of 11
1.
Nat Commun ; 15(1): 1989, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443349

ABSTRACT

Whenever a visual scene is cast onto the retina, much of it will appear degraded due to poor resolution in the periphery; moreover, optical defocus can cause blur in central vision. However, the pervasiveness of blurry or degraded input is typically overlooked in the training of convolutional neural networks (CNNs). We hypothesized that the absence of blurry training inputs may cause CNNs to rely excessively on high spatial frequency information for object recognition, thereby causing systematic deviations from biological vision. We evaluated this hypothesis by comparing standard CNNs with CNNs trained on a combination of clear and blurry images. We show that blur-trained CNNs outperform standard CNNs at predicting neural responses to objects across a variety of viewing conditions. Moreover, blur-trained CNNs acquire increased sensitivity to shape information and greater robustness to multiple forms of visual noise, leading to improved correspondence with human perception. Our results provide multi-faceted neurocomputational evidence that blurry visual experiences may be critical for conferring robustness to biological visual systems.
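The training manipulation described in this abstract (mixing clear and blurred images) maps onto a standard data-augmentation step. The sketch below is a minimal PyTorch/torchvision illustration, assuming a 50% blur probability, an arbitrary sigma range, and a hypothetical dataset path; it is not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): train-time augmentation that
# leaves roughly half of the images clear and Gaussian-blurs the rest.
# The blur probability, sigma range, and dataset path are assumptions.
import torch
from torchvision import datasets, transforms

blur_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomApply(
        [transforms.GaussianBlur(kernel_size=21, sigma=(1.0, 8.0))],
        p=0.5,                      # the other half of the inputs stays clear
    ),
    transforms.ToTensor(),
])

# Any ImageFolder-style object dataset would work here (path is hypothetical).
train_set = datasets.ImageFolder("data/objects/train", transform=blur_augment)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
```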


Subject(s)
Neural Networks, Computer , Visual Perception , Humans , Retina
2.
Neural Comput ; 35(12): 1910-1937, 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-37844328

ABSTRACT

Deep convolutional neural networks (DCNNs) have demonstrated impressive robustness in recognizing objects under transformations (e.g., blur or noise) when these transformations are included in the training set. A hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed. However, to what extent this hypothesis holds true is an outstanding question, as robustness to transformations could be achieved with properties different from invariance; for example, parts of the network could be specialized to recognize either transformed or nontransformed images. This article investigates the conditions under which invariant neural representations emerge by leveraging the fact that they facilitate robustness to transformations beyond the training distribution. Concretely, we analyze a training paradigm in which only some object categories are seen transformed during training and evaluate whether the DCNN is robust to transformations across categories not seen transformed. Our results with state-of-the-art DCNNs indicate that invariant neural representations do not always drive robustness to transformations, as networks show robustness for categories seen transformed during training even in the absence of invariant neural representations. Invariance emerges only as the number of transformed categories in the training set is increased. This phenomenon is much more prominent with local transformations such as blurring and high-pass filtering than with geometric transformations such as rotation and thinning, which entail changes in the spatial arrangement of the object. Our results contribute to a better understanding of invariant neural representations in deep learning and the conditions under which they spontaneously emerge.
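The partial-transformation paradigm (only some categories seen transformed during training) can be sketched as a dataset wrapper that applies the transformation conditionally on the class label. The wrapper, blur settings, and the 100-of-1000-categories split below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): apply a transformation only to a
# chosen subset of object categories, leaving the others untransformed.
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from torchvision.transforms import functional as TF

class PartiallyTransformedDataset(Dataset):
    """Wraps a labeled image dataset; `transformed_labels` is an assumed set
    of category indices that are seen transformed during training."""

    def __init__(self, base_dataset, transformed_labels, transform):
        self.base = base_dataset
        self.transformed_labels = set(transformed_labels)
        self.transform = transform

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        image, label = self.base[idx]            # assumed to yield a PIL image
        if label in self.transformed_labels:
            image = self.transform(image)        # e.g., blur or high-pass filter
        return TF.to_tensor(image), label

# Example: only the first 100 of 1000 categories are seen blurred in training.
blur = transforms.GaussianBlur(kernel_size=21, sigma=4.0)
# train_set = PartiallyTransformedDataset(base_imagenet, range(100), blur)
```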


Subject(s)
Neural Networks, Computer , Pattern Recognition, Visual
3.
bioRxiv ; 2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37577646

ABSTRACT

Whenever a visual scene is cast onto the retina, much of it will appear degraded due to poor resolution in the periphery; moreover, optical defocus can cause blur in central vision. However, the pervasiveness of blurry or degraded input is typically overlooked in the training of convolutional neural networks (CNNs). We hypothesized that the absence of blurry training inputs may cause CNNs to rely excessively on high spatial frequency information for object recognition, thereby causing systematic deviations from biological vision. We evaluated this hypothesis by comparing standard CNNs with CNNs trained on a combination of clear and blurry images. We show that blur-trained CNNs outperform standard CNNs at predicting neural responses to objects across a variety of viewing conditions. Moreover, blur-trained CNNs acquire increased sensitivity to shape information and greater robustness to multiple forms of visual noise, leading to improved correspondence with human perception. Our results provide novel neurocomputational evidence that blurry visual experiences are very important for conferring robustness to biological visual systems.

4.
Acta Parasitol ; 67(1): 539-545, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34731404

ABSTRACT

PURPOSE: Metagonimiasis, commonly seen in East Asian countries, is a parasitic disorder caused by the definitive host's ingestion of undercooked freshwater fish. Recently, genetic analysis has shown the 28S rRNA and cytochrome c oxidase subunit I (COI) mtDNA genes to be successful markers for differentiating species of the genus Metagonimus. In the present study, using specimens from newly discovered Joseon Dynasty human remains from Goryeong, we obtained updated genetic data on the genus Metagonimus, which was also prevalent during the Joseon period. METHODS: Ancient DNA (aDNA) was retrieved from a coprolite sample of a seventeenth-century, half-mummified individual discovered in Goryeong County, South Korea. Cloning and sequencing were performed on PCR-amplified amplicons of the M. yokogawai 28S rRNA and COI mtDNA genes. The consensus sequences were used for species identification and phylogenetic analysis with NCBI/BLAST and MEGA X software. RESULTS: Based on the COI mtDNA gene region, the Goryeong sequence was confirmed as belonging to M. yokogawai, forming a cluster with other M. yokogawai taxa that is distinct from M. takahashii and M. miyatai. CONCLUSION: Across our series of genetic analyses of the genus Metagonimus using samples from Joseon-period cases, the aDNA sequences recovered in South Korea thus far all belong to M. yokogawai, with none yet attributable to M. miyatai or M. takahashii.
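The species-identification step (querying the consensus COI sequence against public references) can be sketched with Biopython's remote BLAST interface. The query string below is a placeholder rather than the Goryeong sequence, and running the sketch requires network access to NCBI; it is not the authors' analysis script.

```python
# Minimal sketch (placeholder query, not the study's data): submit a consensus
# COI sequence to NCBI BLAST (blastn vs. nt) and report the top hits.
from Bio.Blast import NCBIWWW, NCBIXML

coi_consensus = "ATGGCT..."   # placeholder; substitute the real consensus sequence

handle = NCBIWWW.qblast("blastn", "nt", coi_consensus)   # remote BLAST query
result = NCBIXML.read(handle)

for alignment in result.alignments[:5]:                  # top five hits
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity = {identity:.1f}%")
```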


Subject(s)
Heterophyidae , Trematode Infections , Animals , Body Remains , DNA, Ancient , Heterophyidae/genetics , Humans , Phylogeny , Republic of Korea , Trematode Infections/parasitology
5.
PLoS Biol ; 19(12): e3001418, 2021 12.
Article in English | MEDLINE | ID: mdl-34882676

ABSTRACT

Deep neural networks (DNNs) for object classification have been argued to provide the most promising model of the visual system, accompanied by claims that they have attained or even surpassed human-level performance. Here, we evaluated whether DNNs provide a viable model of human vision when tested with challenging noisy images of objects, sometimes presented at the very limits of visibility. We show that popular state-of-the-art DNNs perform in a qualitatively different manner than humans: they are unusually susceptible to spatially uncorrelated white noise and less impaired by spatially correlated noise. We implemented a noise training procedure to determine whether noise-trained DNNs exhibit more robust responses that better match human behavioral and neural performance. We found that noise-trained DNNs provide a better qualitative match to human performance; moreover, they reliably predict human recognition thresholds on an image-by-image basis. Functional neuroimaging revealed that noise-trained DNNs provide a better correspondence to the pattern-specific neural representations found in both early visual areas and high-level object areas. A layer-specific analysis of the DNNs indicated that noise training led to broad-ranging modifications throughout the network, with greater benefits of noise robustness accruing in progressively higher layers. Our findings demonstrate that noise-trained DNNs provide a viable model to account for human behavioral and neural responses to objects in challenging noisy viewing conditions. Further, they suggest that robustness to noise may be acquired through a process of visual learning.
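The two noise types contrasted here, spatially uncorrelated white noise and spatially correlated noise, can be sketched as simple image perturbations. The noise amplitude and smoothing width below are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch (illustrative values only): spatially uncorrelated (white)
# Gaussian noise versus spatially correlated noise made by low-pass filtering
# the same kind of noise and rescaling it to the requested amplitude.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_white_noise(image, sigma=0.2, seed=0):
    """Pixelwise i.i.d. Gaussian noise (no spatial correlation)."""
    rng = np.random.default_rng(seed)
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

def add_correlated_noise(image, sigma=0.2, smooth=4.0, seed=0):
    """Gaussian noise smoothed with a spatial filter, then rescaled."""
    rng = np.random.default_rng(seed)
    noise = gaussian_filter(rng.normal(0.0, 1.0, image.shape), smooth)
    noise *= sigma / noise.std()
    return np.clip(image + noise, 0.0, 1.0)

# Example on a random grayscale "image" in [0, 1]:
img = np.random.default_rng(1).random((224, 224))
noisy_white = add_white_noise(img)
noisy_correlated = add_correlated_noise(img)
```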


Subject(s)
Imaging, Three-Dimensional , Neural Networks, Computer , Neurons/physiology , Vision, Ocular/physiology , Adult , Behavior , Female , Humans , Male , Middle Aged , Photic Stimulation , Sensory Thresholds/physiology , Visual Cortex/physiology , Young Adult
6.
J Vis ; 21(12): 6, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34767621

ABSTRACT

Although convolutional neural networks (CNNs) provide a promising model for understanding human vision, most CNNs lack robustness to challenging viewing conditions, such as image blur, whereas human vision is much more reliable. Might robustness to blur be attributable to vision during infancy, given that acuity is initially poor but improves considerably over the first several months of life? Here, we evaluated the potential consequences of such early experiences by training CNN models on face and object recognition tasks while gradually reducing the amount of blur applied to the training images. For CNNs trained on blurry to clear faces, we observed sustained robustness to blur, consistent with a recent report by Vogelsang and colleagues (2018). By contrast, CNNs trained with blurry to clear objects failed to retain robustness to blur. Further analyses revealed that the spatial frequency tuning of the two CNNs was profoundly different. The blurry to clear face-trained network successfully retained a preference for low spatial frequencies, whereas the blurry to clear object-trained CNN exhibited a progressive shift toward higher spatial frequencies. Our findings provide novel computational evidence showing how face recognition, unlike object recognition, allows for more holistic processing. Moreover, our results suggest that blurry vision during infancy is insufficient to account for the robustness of adult vision to blurry objects.
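The blurry-to-clear regimen can be sketched as a blur level that decays across training epochs. The linear sigma schedule, the ResNet-18 stand-in, and the commented-out dataset path below are assumptions for illustration, not the authors' training setup.

```python
# Minimal sketch (not the authors' setup): Gaussian blur whose sigma decays
# linearly across epochs, mimicking gradually improving acuity in infancy.
import torch
from torchvision import transforms, models

def blur_sigma(epoch, n_epochs, sigma_start=8.0, sigma_end=1e-3):
    """Linearly decay the blur sigma from sigma_start to (nearly) zero."""
    frac = epoch / max(n_epochs - 1, 1)
    return sigma_start + frac * (sigma_end - sigma_start)

n_epochs = 30
model = models.resnet18(num_classes=100)       # placeholder architecture
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(n_epochs):
    epoch_transform = transforms.Compose([
        transforms.GaussianBlur(kernel_size=21, sigma=blur_sigma(epoch, n_epochs)),
        transforms.ToTensor(),
    ])
    # Rebuild the loader each epoch so it uses the current blur level, e.g.:
    # train_set = torchvision.datasets.ImageFolder("data/faces/train",
    #                                              transform=epoch_transform)
    # ...standard supervised training loop over train_set goes here...
```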


Subject(s)
Facial Recognition , Neural Networks, Computer , Adult , Head , Humans , Vision, Ocular , Visual Perception
7.
J Neurosci Methods ; 330: 108451, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31626847

ABSTRACT

BACKGROUND: Restricted Boltzmann machines (RBMs), including greedy layer-wise trained RBMs as part of a deep belief network (DBN), can identify spatial patterns (SPs; functional networks) in resting-state fMRI (rfMRI) data. However, there has been little research on (1) the reproducibility and test-retest reliability of SPs derived from RBMs and (2) hierarchical SPs derived from DBNs. METHODS: We applied a weight sparsity-controlled RBM and DBN to whole-brain rfMRI data from the Human Connectome Project. We evaluated the within-session reproducibility and between-session test-retest reliability of the SPs derived from the RBM approach and compared them both with those identified using independent component analysis (ICA) and with three voxel-wise statistical measures of the rfMRI data (the Hurst exponent, entropy, and kurtosis). We also assessed the potential hierarchy of the SPs from the DBN. RESULTS: Increasing the sparsity level of the RBM weights enhanced the reproducibility of the SPs. The SPs derived from a stringent weight sparsity level were found predominantly in cortical gray matter and substantially overlapped with the SPs obtained from the Hurst exponent. A hierarchical representation was demonstrated using the default-mode network obtained from the DBN. COMPARISON WITH EXISTING METHODS: The test-retest reliability of the SPs from the RBM was superior to that of the SPs from the voxel-wise statistics. CONCLUSIONS: The SPs from the RBM were reproducible within sessions and reliable across sessions. The hierarchically organized SPs from the DBN could potentially be applied to research based on rfMRI data.
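The weight-sparsity-controlled RBM training can be sketched as contrastive divergence with an L1 penalty on the weight matrix. The CD-1 update below is a minimal NumPy illustration with toy dimensions standing in for masked rfMRI voxel vectors; hyperparameters are assumptions, not those of the study.

```python
# Minimal sketch (illustrative hyperparameters): one contrastive-divergence
# (CD-1) update for a Bernoulli RBM with an L1 penalty that pushes the
# weights toward sparsity.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.01, l1=0.001):
    """One CD-1 update on a batch of visible vectors v0 (batch x n_vis)."""
    # Positive phase: hidden probabilities and samples given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to visible, then hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Approximate log-likelihood gradient minus the L1 sparsity penalty.
    grad_W = (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0]
    W += lr * (grad_W - l1 * np.sign(W))
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Toy dimensions standing in for (masked) whole-brain rfMRI voxel vectors.
n_vis, n_hid, batch = 2000, 50, 16
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
v_batch = (rng.random((batch, n_vis)) < 0.1).astype(float)   # placeholder data
W, b_vis, b_hid = cd1_step(v_batch, W, b_vis, b_hid)
```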


Subject(s)
Brain/physiology , Connectome/methods , Default Mode Network/physiology , Echo-Planar Imaging/methods , Nerve Net/physiology , Neural Networks, Computer , Pattern Recognition, Automated/methods , Adult , Brain/diagnostic imaging , Connectome/standards , Default Mode Network/diagnostic imaging , Echo-Planar Imaging/standards , Humans , Nerve Net/diagnostic imaging , Pattern Recognition, Automated/standards , Reproducibility of Results
8.
J Vet Sci ; 19(1): 157-160, 2018 Jan 31.
Article in English | MEDLINE | ID: mdl-28693304

ABSTRACT

Holstein calves weighing less than 20 kg at birth have been noted in Korea. Because little information is available on such calves, we raised them alongside age-matched Holstein calves of normal birth weight and determined body weights before puberty. In addition, 3 single nucleotide polymorphisms (SNPs) of the growth hormone (GH) gene were analyzed. Up to 10 months of age, the low birth weight calves were smaller than the normal weight calves. In exon 5 of the GH gene, SNP genotype variation was detected in some small calves; however, this did not appear to be the only factor inducing low birth weight and slow growth.


Subject(s)
Cattle/growth & development , Cattle/genetics , Growth Hormone/genetics , Polymorphism, Single Nucleotide , Animals , Birth Weight , Female , Growth Hormone/metabolism , Male , Republic of Korea
9.
Neuroimage ; 145(Pt B): 314-328, 2017 01 15.
Article in English | MEDLINE | ID: mdl-27079534

ABSTRACT

Feedforward deep neural networks (DNNs), artificial neural networks with multiple hidden layers, have recently demonstrated a record-breaking performance in multiple areas of applications in computer vision and speech processing. Following the success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from a minimum validation error rate of fMRI volume classification. Minimum error rates (mean±standard deviation; %) of 6.9 (±3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4±4.6) and the two-layer network (7.4±4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and prediction of disease severity and recovery in (pre-) clinical settings using fMRI volumes without requiring an estimation of activation patterns or ad hoc statistical evaluation.
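The layer-wise weight-sparsity control can be sketched as per-layer L1 penalties added to the classification loss of a fully connected network. The layer sizes, penalty weights, and random stand-in data below are assumptions for illustration, not the study's configuration (which also involved DBN-based pretraining).

```python
# Minimal sketch (assumed sizes and penalties): a fully connected classifier
# for vectorized fMRI volumes with an L1 weight-sparsity penalty per hidden layer.
import torch
import torch.nn as nn

n_voxels, n_classes = 50000, 4          # placeholders: masked voxels, 4 tasks

model = nn.Sequential(
    nn.Linear(n_voxels, 100), nn.Sigmoid(),
    nn.Linear(100, 100), nn.Sigmoid(),
    nn.Linear(100, 100), nn.Sigmoid(),
    nn.Linear(100, n_classes),
)
l1_weights = [1e-5, 1e-5, 1e-5]          # one sparsity level per hidden layer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def training_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # Add the layer-wise L1 penalties on the hidden-layer weights.
    hidden_layers = [m for m in model if isinstance(m, nn.Linear)][:-1]
    for lam, layer in zip(l1_weights, hidden_layers):
        loss = loss + lam * layer.weight.abs().sum()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data (one batch of 8 "volumes"):
x = torch.randn(8, n_voxels)
y = torch.randint(0, n_classes, (8,))
print(training_step(x, y))
```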


Subject(s)
Brain/diagnostic imaging , Brain/physiology , Functional Neuroimaging/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Adult , Humans , Motor Activity/physiology , Perception/physiology
10.
Int J Syst Evol Microbiol ; 66(4): 1713-1717, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26813106

ABSTRACT

A novel bacterial strain, CJ22T, was isolated from soil of a ginseng field located in Anseong, Korea. Cells of strain CJ22T were aerobic, Gram-stain-positive, endospore-forming, motile, oxidase- and catalase-positive and rod-shaped. The isolate grew optimally at pH 7 and 30 °C. Phylogenetic analysis based on the 16S rRNA gene sequence revealed that strain CJ22T belonged to the genus Cohnella, displaying the highest sequence similarity of 97.3% with Cohnella panacarvi Gsoil 349T. DNA-DNA relatedness between strain CJ22T and its closest relative was 35.5% (reciprocal value, 23.8%). The phenotypic features of strain CJ22T also distinguished it from related species of the genus Cohnella. The diagnostic diamino acid in the cell-wall peptidoglycan was meso-diaminopimelic acid. The major isoprenoid quinone was menaquinone MK-7 and the major polar lipids were phosphatidylglycerol, phosphatidylethanolamine, diphosphatidylglycerol, lysyl-phosphatidylglycerol, two unidentified phospholipids and two unidentified aminophospholipids. The predominant cellular fatty acids of strain CJ22T were anteiso-C15:0, iso-C16:0 and C16:0. The DNA G+C content was 63.1 mol%. Based on data from this polyphasic taxonomic study, strain CJ22T is considered to represent a novel species of the genus Cohnella, for which the name Cohnella saccharovorans sp. nov. is proposed. The type strain is CJ22T (=KACC 17501T=JCM 19227T).
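The reported 16S rRNA sequence similarity is a pairwise percent identity; a minimal sketch of that calculation over two pre-aligned sequences is shown below, using short stand-in strings rather than data from the study.

```python
# Minimal sketch (stand-in sequences, not the study's data): percent identity
# between two pre-aligned 16S rRNA gene fragments, ignoring gap positions.
a = "AGAGTTTGATCCTGGCTCAG-GACGAACGCTGGCGGCGTGCCTAA"
b = "AGAGTTTGATCCTGGCTCAGAGACGAACGCTGGCGG-GTGCCTAA"

pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
identity = 100.0 * sum(x == y for x, y in pairs) / len(pairs)
print(f"pairwise identity: {identity:.1f}%")
```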


Subject(s)
Bacillales/classification , Panax/microbiology , Phylogeny , Soil Microbiology , Bacillales/genetics , Bacillales/isolation & purification , Bacterial Typing Techniques , Base Composition , DNA, Bacterial/genetics , Diaminopimelic Acid/chemistry , Fatty Acids/chemistry , Molecular Sequence Data , Nucleic Acid Hybridization , Peptidoglycan/chemistry , Phospholipids/chemistry , RNA, Ribosomal, 16S/genetics , Republic of Korea , Sequence Analysis, DNA , Vitamin K 2/analogs & derivatives , Vitamin K 2/chemistry
11.
Appl Opt ; 54(5): 1027-31, 2015 Feb 10.
Article in English | MEDLINE | ID: mdl-25968017

ABSTRACT

We fabricated amorphous silicon (a-Si)-based distributed Bragg reflectors (DBRs) consisting of alternating dense/porous film pairs for a center wavelength (λ(c)) of 0.96 µm by the oblique-angle deposition (OAD) technique using an electron-beam evaporation system. The dense (high refractive index, i.e., high-n) and porous (low-n) a-Si films were deposited at incident vapor flux angles of 0° and 80° in the OAD, respectively. Their optical reflectance characteristics were investigated in the wavelength range of 0.6-1.5 µm, including a theoretical comparison using a rigorous coupled-wave analysis method. Above three pairs, the reflectivity (R) of the a-Si DBRs was almost saturated at wavelengths around 0.96 µm, exhibiting R values of >97%. For the a-Si DBR with only three pairs, a broad normalized stop bandwidth (Δλ/λ(c)) of ∼22.5% was obtained at wavelengths of ∼0.87-1.085 µm while keeping high R values of >95%. To demonstrate the feasibility of device applications, the three-pair a-Si DBR was coated as a high-reflection layer at the rear facet of GaAs/InGaAs quantum-well laser diodes (LDs) operating at λ=0.96 µm. For the LDs coated with the three-pair a-Si DBR, the external differential quantum efficiency (η(d)) was nearly doubled compared to the uncoated LDs, indicating an η(d) value of ∼50.6% (i.e., η(d)∼25.5% for the uncoated LDs).
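The quarter-wave DBR reflectance can be sketched with the standard transfer-matrix method at normal incidence. The refractive indices assumed below for the dense and porous a-Si films and the substrate are illustrative guesses; the paper's analysis used rigorous coupled-wave analysis.

```python
# Minimal sketch (assumed refractive indices): normal-incidence reflectance of
# a quarter-wave DBR computed with the transfer-matrix method.
import numpy as np

def dbr_reflectance(wavelengths, n_hi, n_lo, n_pairs, lam_c, n_in=1.0, n_sub=3.5):
    """R(lambda) for n_pairs of quarter-wave high/low-index layers on a substrate."""
    d_hi, d_lo = lam_c / (4 * n_hi), lam_c / (4 * n_lo)   # quarter-wave thicknesses
    R = []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for _ in range(n_pairs):
            for n, d in ((n_hi, d_hi), (n_lo, d_lo)):
                delta = 2 * np.pi * n * d / lam
                layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
                M = M @ layer
        B, C = M @ np.array([1.0, n_sub])
        r = (n_in * B - C) / (n_in * B + C)
        R.append(abs(r) ** 2)
    return np.array(R)

# Assumed indices: ~3.6 for dense a-Si, ~2.0 for the porous (80 deg OAD) film.
wl = np.linspace(0.6, 1.5, 901)                 # micrometres
R3 = dbr_reflectance(wl, n_hi=3.6, n_lo=2.0, n_pairs=3, lam_c=0.96)
print(f"R near 0.96 um with 3 pairs: {R3[np.argmin(abs(wl - 0.96))]:.3f}")
```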
