1.
Biosci Biotechnol Biochem ; 86(6): 763-769, 2022 May 24.
Article in English | MEDLINE | ID: mdl-35289847

ABSTRACT

Accumulation levels of Arg, Lys, and His in vacuoles of Schizosaccharomyces pombe cells were drastically decreased by disruption of the SPAC24H6.11c (vsb1+) gene, which was identified by a homology search with the VSB1 gene of Saccharomyces cerevisiae. Vsb1p fused with green fluorescent protein localized predominantly at the vacuolar membrane in S. pombe cells. Overexpression of vsb1+ markedly increased vacuolar levels of basic amino acids, whereas overexpression of the vsb1D174A mutant did not affect these levels. These results suggest that vsb1+ contributes to the accumulation of basic amino acids in the vacuoles of S. pombe and that the aspartate residue in the putative first transmembrane domain, conserved among fungal homologs, is crucial for the function of Vsb1p.


Subject(s)
Schizosaccharomyces pombe Proteins , Schizosaccharomyces , Amino Acids, Basic/genetics , Amino Acids, Basic/metabolism , Membrane Proteins/genetics , Saccharomyces cerevisiae/metabolism , Schizosaccharomyces/genetics , Schizosaccharomyces/metabolism , Schizosaccharomyces pombe Proteins/genetics , Schizosaccharomyces pombe Proteins/metabolism , Vacuoles/metabolism
2.
Biosci Biotechnol Biochem ; 85(5): 1157-1164, 2021 Apr 24.
Article in English | MEDLINE | ID: mdl-33704406

ABSTRACT

Ygr125w was previously identified as a vacuolar membrane protein by proteomic analysis. We found that vacuolar levels of basic amino acids drastically decreased in ygr125wΔ cells. Because N- or C-terminally tagged Ygr125w was not functional, we constructed an expression plasmid for YGR125w with an HA3 tag inserted in its N-terminal hydrophilic region. Introduction of this plasmid into ygr125wΔ cells restored the vacuolar levels of basic amino acids. We successfully detected arginine uptake activity in vacuolar membrane vesicles dependent on HA3-YGR125w expression. A conserved aspartate residue in the predicted first transmembrane helix (D223) was indispensable for the accumulation of basic amino acids. YGR125w has recently been reported as a gene involved in vacuolar storage of arginine and has been designated VSB1. Taken together, our findings indicate that Ygr125w/Vsb1 contributes to the uptake of arginine into vacuoles and to the vacuolar compartmentalization of basic amino acids.


Subject(s)
Amino Acids, Basic/metabolism , Membrane Transport Proteins/metabolism , Recombinant Fusion Proteins/metabolism , Saccharomyces cerevisiae Proteins/metabolism , Saccharomyces cerevisiae/metabolism , Vacuoles/metabolism , Arginine/metabolism , Aspartic Acid/chemistry , Aspartic Acid/metabolism , Biological Transport , Cloning, Molecular , Fluorescent Dyes/chemistry , Gene Expression , Genetic Complementation Test , Hemagglutinins, Viral/genetics , Hemagglutinins, Viral/metabolism , Intracellular Membranes/metabolism , Membrane Transport Proteins/genetics , Plasmids/chemistry , Plasmids/metabolism , Pyridinium Compounds/chemistry , Quaternary Ammonium Compounds/chemistry , Recombinant Fusion Proteins/genetics , Recombinant Proteins/genetics , Recombinant Proteins/metabolism , Saccharomyces cerevisiae/genetics , Saccharomyces cerevisiae Proteins/genetics
3.
Front Neurorobot ; 13: 103, 2019.
Article in English | MEDLINE | ID: mdl-31920613

ABSTRACT

A deep Q network (DQN) (Mnih et al., 2013) is an extension of Q-learning and a representative deep reinforcement learning method. In DQN, a Q function expresses the values of all actions under all states and is approximated by a convolutional neural network; an optimal policy can then be derived from the approximated Q function. To stabilize the learning process, DQN introduces a target network, which computes the target value and is synchronized with the Q function at regular intervals. Less frequent updates of the target network yield a more stable learning process; however, because the target value is not propagated until the target network is updated, DQN typically requires a large number of samples. In this study, we propose Constrained DQN, which uses the difference between the outputs of the Q function and the target network as a constraint on the target value. Constrained DQN updates parameters conservatively when this difference is large and aggressively when it is small. As learning progresses, the constraint is activated less often, so the update rule gradually approaches that of conventional Q-learning. We found that Constrained DQN converges with less training data than DQN and that it is robust against changes in the update frequency of the target network and in the setting of a certain optimizer parameter. Although Constrained DQN alone does not outperform integrated approaches or distributed methods, our experimental results show that it can be used as an additional component in those methods.
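The constraint described above can be sketched as follows. This is a minimal, hypothetical reading of the abstract, not the paper's exact formulation: the usual TD target, r + γ·max Q_target(s′, ·), is clipped to stay within a margin of the online Q network's own estimate, so the update is conservative when the two networks disagree strongly and unconstrained (plain Q-learning) when they agree. The function names and the `margin` parameter are illustrative assumptions.

```python
import numpy as np

def constrained_td_target(q_online, q_target, reward, state, action,
                          next_state, gamma=0.99, margin=1.0):
    """Hypothetical Constrained-DQN-style target (sketch, not the paper's code).

    q_online, q_target: callables mapping a state to a vector of action values.
    The raw target r + gamma * max_a' Q_target(s', a') is clipped to the
    interval [Q_online(s, a) - margin, Q_online(s, a) + margin], so the
    target cannot pull the online estimate by more than `margin` per update.
    """
    raw_target = reward + gamma * np.max(q_target(next_state))
    online_q = q_online(state)[action]
    # When |raw_target - online_q| <= margin, the clip is inactive and the
    # update reduces to the conventional Q-learning target.
    return float(np.clip(raw_target, online_q - margin, online_q + margin))

# Toy example: the online and target networks disagree, so the clip binds.
q_online = lambda s: np.array([0.5, 0.0])   # Q_online(s, a=0) = 0.5
q_target = lambda s: np.array([0.0, 2.0])   # max_a' Q_target(s') = 2.0
y = constrained_td_target(q_online, q_target, reward=1.0, state=None,
                          action=0, next_state=None, gamma=0.99, margin=1.0)
# raw target = 1.0 + 0.99 * 2.0 = 2.98, clipped to 0.5 + 1.0 = 1.5
```

With a large `margin` (or once the two networks have converged toward each other), the clip never activates and the update is identical to standard Q-learning, matching the abstract's claim that the method gradually approaches conventional Q-learning.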
