Results 1 - 7 of 7
1.
J Environ Sci (China) ; 123: 510-521, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36522010

ABSTRACT

Air pollution control policy in China has undergone profound changes, reflecting a strategic transformation from total pollutant emission control to air quality improvement, with targets shifting from acid rain and NOx emissions to PM2.5 pollution and, more recently, to the emerging O3 challenge. Remarkable achievements have been made, including a dramatic decrease in SO2 emissions and a fundamental improvement in PM2.5 concentrations. Beyond these achievements, China has proposed the Beautiful China target for 2035 and the goals of peaking carbon emissions by 2030 and achieving carbon neutrality by 2060, which impose stricter requirements on air quality and on synergistic mitigation of greenhouse gas (GHG) emissions. Against this background, an integrated multi-objective, multi-benefit roadmap is required to provide decision support for China's long-term air quality improvement strategy. This paper systematically reviews the technical system for developing the air quality improvement roadmap, integrated from the research output of China's National Key R&D Program for Research on Atmospheric Pollution Factors and Control Technologies (hereafter the Special NKP). The system covers techniques for setting mid- and long-term air quality targets, quantitative analysis techniques for deriving the emission reduction targets corresponding to those air quality targets, and pathway optimization techniques for realizing the reduction targets. The experience and lessons drawn from this review have implications for reforming China's air quality improvement roadmap in the face of the challenges of synergistic PM2.5 and O3 mitigation and of coupling with climate change mitigation.
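The pathway-optimization step mentioned in this abstract can be illustrated, in highly simplified form, as a least-cost selection of control measures subject to emission-reduction targets. The sketch below is not drawn from the paper: the measure set, costs, reduction potentials, and targets are hypothetical placeholders.

```python
# A minimal, illustrative least-cost pathway optimization. All costs,
# reduction potentials, and targets are hypothetical placeholders,
# not values from the paper.
import numpy as np
from scipy.optimize import linprog

# Rows: SO2, NOx, primary PM2.5 reduction (kt) delivered per unit of adoption;
# columns: three hypothetical control measures.
R = np.array([
    [5.0, 2.0, 0.2],   # SO2
    [0.5, 1.5, 3.0],   # NOx
    [0.8, 1.0, 0.6],   # primary PM2.5
])
cost = np.array([120.0, 80.0, 200.0])   # annualized cost per unit of adoption (arbitrary units)
target = np.array([40.0, 25.0, 12.0])   # reductions (kt) implied by an assumed air quality target

# Minimize total cost subject to R @ x >= target and 0 <= x <= 10 for each measure.
res = linprog(c=cost, A_ub=-R, b_ub=-target, bounds=[(0, 10)] * 3, method="highs")
print("optimal adoption levels:", res.x)
print("total cost:", res.fun)
```

A real roadmap couples emission scenarios to chemical transport or response-surface models and optimizes across many sectors and years; the sketch only shows the shape of the optimization step.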


Subject(s)
Air Pollutants , Air Pollution , Air Pollutants/analysis , Particulate Matter/analysis , Industrial Development , Quality Improvement , Air Pollution/prevention & control , Air Pollution/analysis , Carbon/analysis , China
2.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 39(6): 1140-1148, 2022 Dec 25.
Article in Chinese | MEDLINE | ID: mdl-36575083

ABSTRACT

Heart sound analysis is significant for the early diagnosis of congenital heart disease. This paper proposed a novel method of heart sound classification in which the traditional mel-frequency cepstral coefficient (MFCC) method was improved by using the Fisher discriminant half raised-sine function (F-HRSF), and an integrated decision network was used as the classifier. The method does not rely on segmentation of the cardiac cycle. First, the heart sound signals were framed and windowed. Then, heart sound features were extracted using the improved MFCC, in which the F-HRSF weights the sub-band components of the MFCC according to the Fisher discriminant ratio of each sub-band component and the half raised-sine function. Three classification networks, a convolutional neural network (CNN), a long short-term memory network (LSTM), and a gated recurrent unit (GRU), were combined into the integrated decision network. Finally, the two-class classification results were obtained through a majority voting algorithm. Using these signal processing techniques, the method achieved an accuracy of 92.15%, a sensitivity of 91.43%, a specificity of 92.83%, a corrected accuracy of 92.01%, and an F-score of 92.13%. These results show that the algorithm has great potential for the early diagnosis of congenital heart disease.
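Two generic ingredients of the pipeline described above, weighting feature dimensions by their Fisher discriminant ratio and majority voting over three classifiers, can be sketched as follows. This is not the paper's F-HRSF weighting or its CNN/LSTM/GRU networks; the data and the weighting rule are illustrative assumptions.

```python
# Generic sketch: (1) per-dimension Fisher discriminant ratio used as a feature
# weight and (2) two-class majority voting over three classifier outputs.
import numpy as np

def fisher_ratio(X_pos, X_neg):
    """Per-dimension Fisher discriminant ratio for two classes.

    X_pos, X_neg: arrays of shape (n_samples, n_features), e.g. frame-level MFCCs.
    """
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    var_p, var_n = X_pos.var(axis=0), X_neg.var(axis=0)
    return (mu_p - mu_n) ** 2 / (var_p + var_n + 1e-12)

def weight_features(X, ratios):
    """Scale each feature dimension by its normalized Fisher ratio."""
    return X * (ratios / ratios.max())

def majority_vote(pred_a, pred_b, pred_c):
    """Two-class majority vote over three binary predictions (0/1 arrays)."""
    votes = pred_a.astype(int) + pred_b.astype(int) + pred_c.astype(int)
    return (votes >= 2).astype(int)

# Example with random stand-in data (13 MFCC coefficients per frame).
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, (200, 13))
X_abnormal = rng.normal(0.5, 1.0, (200, 13))
ratios = fisher_ratio(X_abnormal, X_normal)
X_weighted = weight_features(np.vstack([X_normal, X_abnormal]), ratios)
print(X_weighted.shape)
print(majority_vote(np.array([1, 0, 1]), np.array([1, 1, 0]), np.array([0, 0, 1])))
```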


Subject(s)
Heart Defects, Congenital , Heart Sounds , Humans , Algorithms , Neural Networks, Computer , Heart Defects, Congenital/diagnosis , Signal Processing, Computer-Assisted
3.
Journal of Biomedical Engineering ; (6): 1140-1148, 2022.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-970652

ABSTRACT

Heart sound analysis is significant for the early diagnosis of congenital heart disease. This paper proposed a novel method of heart sound classification in which the traditional mel-frequency cepstral coefficient (MFCC) method was improved by using the Fisher discriminant half raised-sine function (F-HRSF), and an integrated decision network was used as the classifier. The method does not rely on segmentation of the cardiac cycle. First, the heart sound signals were framed and windowed. Then, heart sound features were extracted using the improved MFCC, in which the F-HRSF weights the sub-band components of the MFCC according to the Fisher discriminant ratio of each sub-band component and the half raised-sine function. Three classification networks, a convolutional neural network (CNN), a long short-term memory network (LSTM), and a gated recurrent unit (GRU), were combined into the integrated decision network. Finally, the two-class classification results were obtained through a majority voting algorithm. Using these signal processing techniques, the method achieved an accuracy of 92.15%, a sensitivity of 91.43%, a specificity of 92.83%, a corrected accuracy of 92.01%, and an F-score of 92.13%. These results show that the algorithm has great potential for the early diagnosis of congenital heart disease.


Subject(s)
Humans , Heart Sounds , Algorithms , Neural Networks, Computer , Heart Defects, Congenital/diagnosis , Signal Processing, Computer-Assisted
4.
Animals (Basel) ; 11(7), 2021 Jul 07.
Article in English | MEDLINE | ID: mdl-34359153

ABSTRACT

Dairy farm decision support systems (DSS) are tools that help dairy farmers solve complex problems by improving decision-making processes. In this paper, we are interested in the newer generation of integrated DSS (IDSS), which additionally and concurrently (1) receive continuous data feeds from on-farm and off-farm data collection systems and (2) integrate more than one data stream to produce insightful outcomes. The scientific community and the allied dairy community have not been successful in developing, disseminating, and promoting the sustained adoption of IDSS. This paper therefore identifies barriers to adoption as well as factors that would promote the sustained adoption of IDSS. The main barriers discussed include a perceived lack of a good value proposition, the complexity of practical application, poor ease of use, and IDSS challenges related to data collection, data standards, data integration, and data shareability. Success in the sustained adoption of IDSS depends on solving these problems and on addressing intrinsic issues related to the development, maintenance, and functioning of IDSS. Coordinated action by all the main stakeholders in the dairy sector, including all important players in the dairy industry production and distribution chain, is needed to realize the potential benefits of IDSS.

5.
J Appl Toxicol ; 37(7): 792-805, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28074598

ABSTRACT

The replacement of animal use in testing for the regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: the direct peptide reactivity assay, the human cell line activation test, and the KeratinoSens™ assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together, while a two-tiered strategy first modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding an accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright © 2017 John Wiley & Sons, Ltd.
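The two-tiered strategy described above can be sketched with scikit-learn support vector machines on synthetic stand-in data; the features, labels, and model settings below are placeholders, not the paper's actual assay inputs or tuning.

```python
# Illustrative two-tiered potency prediction: tier 1 separates sensitizers from
# non-sensitizers; tier 2 splits the predicted sensitizers into strong vs weak.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 9))            # e.g. 3 assay readouts + 6 physicochemical properties (synthetic)
potency = rng.integers(0, 3, size=120)   # 0 = non-sensitizer, 1 = weak, 2 = strong (synthetic labels)

# Tier 1: sensitizer vs non-sensitizer.
tier1 = SVC(kernel="rbf").fit(X, (potency > 0).astype(int))

# Tier 2: strong vs weak, trained only on the sensitizers.
sens = potency > 0
tier2 = SVC(kernel="rbf").fit(X[sens], (potency[sens] == 2).astype(int))

def predict_potency(X_new):
    """Return 0 (non-sensitizer), 1 (weak), or 2 (strong) for each row."""
    is_sens = tier1.predict(X_new).astype(bool)
    out = np.zeros(len(X_new), dtype=int)
    if is_sens.any():
        out[is_sens] = np.where(tier2.predict(X_new[is_sens]) == 1, 2, 1)
    return out

print(predict_potency(rng.normal(size=(5, 9))))
```

Splitting the problem this way lets each classifier solve a simpler binary task, which is one plausible reason the two-tiered variant performed best in the study.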


Subject(s)
Animal Testing Alternatives/methods , Biological Assay/methods , Dermatitis, Allergic Contact/etiology , Dermatitis, Allergic Contact/immunology , Hazardous Substances/toxicity , Machine Learning , Skin/drug effects , Humans , Models, Statistical , United States
6.
J Appl Toxicol ; 37(3): 347-360, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27480324

ABSTRACT

One of the top priorities of the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of the biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment, based on the adverse outcome pathway for skin sensitization, that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays, the direct peptide reactivity assay (DPRA), the human cell line activation test (h-CLAT), and the KeratinoSens™ assay, with six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63-79%), or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
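The evaluation design described above, combining predictor-variable groups and comparing logistic regression with a support vector machine on a fixed training set and an external test set, can be sketched as follows. The variable groups, data, and labels are synthetic placeholders, not the paper's.

```python
# Schematic sketch: assemble feature groups, train two learners on a training
# split, and report accuracy on an external test split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_train, n_test = 72, 24
n = n_train + n_test
groups = {                                   # hypothetical predictor groups
    "DPRA": rng.normal(size=(n, 1)),
    "h-CLAT": rng.normal(size=(n, 2)),
    "KeratinoSens": rng.normal(size=(n, 1)),
    "physchem": rng.normal(size=(n, 6)),
    "read-across": rng.integers(0, 2, size=(n, 1)).astype(float),
}
y = rng.integers(0, 2, size=n)               # human hazard label (synthetic)

combos = [("DPRA", "h-CLAT", "read-across"),
          ("DPRA", "h-CLAT", "read-across", "KeratinoSens")]
for combo in combos:
    X = np.hstack([groups[g] for g in combo])
    X_tr, X_te, y_tr, y_te = X[:n_train], X[n_train:], y[:n_train], y[n_train:]
    for model in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
        acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
        print(combo, type(model).__name__, f"accuracy={acc:.2f}")
```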


Subject(s)
Dermatitis, Allergic Contact/etiology , Hazardous Substances/toxicity , Models, Biological , Skin/drug effects , Animal Use Alternatives , Biological Assay , Databases, Factual , Dermatitis, Allergic Contact/immunology , Humans , Logistic Models , Machine Learning , Multivariate Analysis , Predictive Value of Tests
7.
J Appl Toxicol ; 36(9): 1150-62, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26851134

ABSTRACT

One of the top priorities of the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) is the identification and evaluation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, its key biological events have been well characterized in an adverse outcome pathway (AOP) proposed by the Organisation for Economic Co-operation and Development (OECD). Accordingly, ICCVAM is working to develop integrated decision strategies based on the AOP using in vitro, in chemico, and in silico information. Data were compiled for 120 substances tested in the murine local lymph node assay (LLNA), the direct peptide reactivity assay (DPRA), the human cell line activation test (h-CLAT), and the KeratinoSens assay. Data for six physicochemical properties, which may affect skin penetration, were also collected, and skin sensitization read-across predictions were performed using the OECD QSAR Toolbox. All data were combined into a variety of potential integrated decision strategies to predict LLNA outcomes using a training set of 94 substances and an external test set of 26 substances. Fifty-four models were built using multiple combinations of machine learning approaches and predictor variables. The seven models with the highest accuracy for predicting LLNA outcomes (89-96% for the test set and 96-99% for the training set) used a support vector machine (SVM) approach with different combinations of predictor variables. The performance statistics of the SVM models were higher than those of any of the non-animal tests alone and higher than those of simple test battery approaches using these methods. These data suggest that computational approaches are promising tools for effectively integrating data sources to identify potential skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
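The simple test battery baseline mentioned above can be sketched as a majority rule across the three non-animal assay calls, scored against LLNA outcomes. The 2-of-3 rule and the data below are illustrative assumptions, not the paper's exact battery definitions.

```python
# Illustrative test-battery baseline: call a substance a sensitizer when a
# majority of the three assay calls is positive, then score against LLNA labels.
import numpy as np

rng = np.random.default_rng(3)
n = 120
llna = rng.integers(0, 2, size=n)              # LLNA outcome (synthetic)
assay_calls = rng.integers(0, 2, size=(n, 3))  # DPRA, h-CLAT, KeratinoSens calls (synthetic)

battery_call = (assay_calls.sum(axis=1) >= 2).astype(int)  # positive in at least 2 of 3 assays
accuracy = (battery_call == llna).mean()
print(f"battery accuracy vs LLNA: {accuracy:.2f}")
```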


Subject(s)
Allergens/toxicity , Skin/drug effects , Xenobiotics/toxicity , Animal Testing Alternatives/methods , Animals , Cell Line , Computational Biology , Decision Making , Dermatitis, Allergic Contact/pathology , Humans , Local Lymph Node Assay , Mice , Reproducibility of Results , Risk Assessment