Results 1 - 7 of 7
1.
Sensors (Basel); 22(4), 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35214392

ABSTRACT

Mathematical modeling and data-driven methodologies are frequently required to optimize industrial processes in the context of Cyber-Physical Systems (CPS). This paper introduces the PipeGraph software library, an open-source Python toolbox that eases the creation of machine learning models for CPS by using Directed Acyclic Graph (DAG)-like implementations. scikit-learn's Pipeline is a very useful tool for binding a sequence of transformers and a final estimator into a single unit that itself works as an estimator. It sequentially assembles several steps that can be cross-validated together while setting different parameters, and the encapsulation of the steps protects the experiment from data leakage during the training phase. The scientific goal of PipeGraph is to extend the Pipeline concept with a graph structure that can handle scikit-learn objects in DAG layouts. It allows performing diverse operations, not only transformations, following the topological ordering of the steps in the graph; it provides access to all the data generated along the intermediate steps; and it is compatible with the GridSearchCV function for tuning the hyperparameters of the steps. It is also not limited to (X, y) entries. Moreover, it has been proposed as part of the scikit-learn-contrib supported projects and is fully compatible with scikit-learn. Documentation and unit tests are publicly available together with the source code. Two case studies are analyzed in which PipeGraph proves essential in improving CPS modeling and optimization: the first concerns the optimization of a heat exchange management system, and the second deals with the detection of anomalies in manufacturing processes.
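The scikit-learn Pipeline mechanism that PipeGraph generalizes can be sketched with standard scikit-learn alone (PipeGraph's own API is not shown; this only illustrates the baseline pattern of chained steps tuned as one estimator):

```python
# Minimal sketch of the scikit-learn Pipeline pattern that PipeGraph
# generalizes to DAGs: transformers chained before a final estimator,
# cross-validated as a single unit with GridSearchCV.
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),   # transformer step
    ("model", Ridge()),            # final estimator step
])

# Hyperparameters of any step are addressed as "<step>__<param>".
search = GridSearchCV(pipe, {"model__alpha": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X, y)
```

Because the steps are fitted only inside each cross-validation fold, the scaler never sees validation data, which is the data-leakage protection the abstract refers to.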

2.
Sensors (Basel); 19(18), 2019 Sep 09.
Article in English | MEDLINE | ID: mdl-31505791

ABSTRACT

Asynchronous positioning architectures based on time measurements are gaining importance in Local Positioning Systems (LPS). These architectures are especially relevant in precision applications and in the indoor/outdoor navigation of automatic vehicles such as Automatic Ground Vehicles (AGVs) and Unmanned Aerial Vehicles (UAVs). The positioning error of these systems is conditioned by the algorithms used in the position calculation, the quality of the time measurements, and the deployment of the signal receivers. Once the algorithms have been defined and the method to compute the time measurements has been selected, the only remaining design criterion of the LPS is the distribution of the sensors in three-dimensional space. This problem has proved to be NP-hard, so a heuristic solution is recommended. In this paper, a genetic algorithm flexible enough to be adapted to different scenarios and ground modelings is proposed. The algorithm is used to determine the best node locations in order to reduce the Cramér-Rao Lower Bound (CRLB), accounting for heteroscedastic noise at each sensor of an Asynchronous Time Difference of Arrival (A-TDOA) architecture. The proposed methodology allows the optimization of the 3D sensor deployment of a passive A-TDOA architecture, including ground modeling flexibility and heteroscedastic noise, with sequential iterations that reduce the spatial discretization to achieve better results. Results show that optimization with 15% elitism and a tournament-3 selection strategy offers the best maximization for the algorithm.
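The two strategy choices the abstract reports as best, elitism and tournament-3 selection, can be sketched with a generic genetic algorithm; the fitness function below is a toy stand-in, not the paper's CRLB objective:

```python
# Generic sketch of a genetic algorithm with 15% elitism and
# tournament-3 selection, the two strategy choices highlighted in the
# abstract. The fitness function is a toy stand-in, not the CRLB.
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximize -(x - 3)^2, which peaks at x = 3.
    return -(x - 3.0) ** 2

def tournament(pop, k=3):
    # Tournament-k selection: the best of k randomly drawn individuals.
    return max(random.sample(pop, k), key=fitness)

def evolve(pop_size=40, generations=60, elitism=0.15):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    n_elite = int(elitism * pop_size)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:n_elite]                  # elitism: keep the best 15%
        while len(nxt) < pop_size:
            a, b = tournament(pop), tournament(pop)
            child = (a + b) / 2              # arithmetic crossover
            child += random.gauss(0, 0.1)    # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Elitism guarantees the best-so-far individual is never lost, so the best fitness is monotonically non-decreasing across generations.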

3.
Sensors (Basel); 19(13), 2019 Jun 29.
Article in English | MEDLINE | ID: mdl-31261946

ABSTRACT

Time difference of arrival (TDOA) positioning methods have gained importance over the last few years due to their many applications in local positioning systems (LPSs). While five sensors are needed to determine an unequivocal three-dimensional position, systems with four nodes yield two different solutions, neither of which can be discarded on purely mathematical grounds. In this paper, a new methodology to solve the 3D TDOA problem in a sensor network with four beacons is proposed. A confidence region, defined in this paper as a sphere, is introduced so that positioning algorithms can be used with four nodes. It is proven that the separation between the solutions of the four-beacon TDOA problem allows its transformation into an analogous problem in which more receivers are involved, owing to the geometric properties of the intersection of hyperboloids. Obtaining the distance between solutions requires genetic algorithms in order to find an optimized sensor distribution. Results show that positioning algorithms can be used safely 96.7% of the time in cases where vehicles travel at less than 25 m/s.

4.
Sensors (Basel); 19(13), 2019 Jul 09.
Article in English | MEDLINE | ID: mdl-31324032

ABSTRACT

The accuracy requirements for sensor network positioning have grown over the last few years due to the high precision demanded in activities related to vehicles and robots. Such systems involve a wide range of specifications which must be met by positioning devices based on time measurement. These systems have traditionally been designed with their sensors synchronized in order to compute the position estimation. However, this synchronization introduces an error into the time determination which can be avoided by centralizing the measurements in a single clock in a coordinator sensor. This is found in architectures such as Asynchronous Time Difference of Arrival (A-TDOA) and Difference-Time Difference of Arrival (D-TDOA) systems. In this paper, a study of the suitability of these new systems based on a Cramér-Rao Lower Bound (CRLB) evaluation was performed for the first time in different real 3D environments for multiple sensor locations. The analysis was carried out with a new heteroscedastic noise variance model based on a distance-dependent log-normal path loss propagation model. Results showed that A-TDOA provided a lower root mean square error (RMSE) in positioning, while D-TDOA reduced the standard deviation and increased stability over the whole domain.

5.
Sensors (Basel); 19(12), 2019 Jun 18.
Article in English | MEDLINE | ID: mdl-31216729

ABSTRACT

This paper proposes a methodology for dealing with an issue of crucial practical importance in real engineering systems: the detection and recovery of a faulty sensor. The main goal is to define a strategy to identify a malfunctioning sensor and to establish the correct measurement value in those cases. As a case study, we use data collected from a geothermal heat exchanger installed as part of the heat pump installation in a bioclimatic house. The sensor behaviour is modeled using six different machine learning techniques: random decision forests, gradient boosting, extremely randomized trees, adaptive boosting, k-nearest neighbors, and shallow neural networks. The results suggest that this methodology is a very satisfactory solution for this kind of system.
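The fault-detection-and-recovery idea can be sketched as follows, assuming synthetic data in place of the paper's heat-exchanger measurements and an arbitrary residual threshold; only one of the six reported model families (a random forest) is shown:

```python
# Hedged sketch of the sensor fault-detection idea: model one sensor's
# reading from the others, flag a fault when the residual is large, and
# substitute the model's prediction as the recovered value. Data and
# threshold are synthetic stand-ins, not the paper's measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
other = rng.normal(size=(500, 3))               # readings from other sensors
target = other @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.05, 500)

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(other[:400], target[:400])            # train on healthy history

def check(reading, features, threshold=2.0):
    # Returns (is_faulty, recovered_value).
    pred = model.predict(features.reshape(1, -1))[0]
    return (abs(reading - pred) > threshold, pred)

faulty, recovered = check(target[450] + 6.0, other[450])  # injected fault
ok, _ = check(target[450], other[450])                    # normal reading
```

When the residual exceeds the threshold, the model's prediction stands in for the faulty measurement, which is the recovery strategy the abstract describes.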

6.
Sensors (Basel); 19(5), 2019 Mar 01.
Article in English | MEDLINE | ID: mdl-30823682

ABSTRACT

This paper presents a new texture descriptor booster, the Complete Local Oriented Statistical Information Booster (CLOSIB), based on statistical information of the image. Our proposal uses the statistical information of the texture provided by the image gray-level differences to increase the discriminative capability of Local Binary Patterns (LBP)-based and other texture descriptors. We demonstrated that the Half-CLOSIB (H-CLOSIB) and multi-scale (M-CLOSIB) versions are more efficient and precise than the general one: H-CLOSIB can eliminate redundant statistical information, and M-CLOSIB is more robust. We evaluated our method on four datasets: KTH-TIPS (2-a) for material recognition, UIUC and USPTex for general texture recognition, and JAFFE for face recognition. The results show that when CLOSIB is combined with well-known LBP-based descriptors, the hit rate increases in every case, suggesting that CLOSIB can enhance the description of texture in a significant number of situations. Additionally, a comparison with recent algorithms demonstrates that combining LBP methods with CLOSIB variants obtains results comparable to the state-of-the-art.
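The basic 3x3 Local Binary Pattern code that descriptors in this family build on can be sketched as follows (CLOSIB itself, which augments such descriptors with gray-level difference statistics, is not reproduced here):

```python
# Minimal sketch of the 3x3 Local Binary Pattern (LBP) code that the
# CLOSIB booster is designed to complement: each pixel is described by
# thresholding its 8 neighbours against the centre value.
import numpy as np

def lbp_code(patch):
    # patch: 3x3 array; returns the 8-bit LBP code of the centre pixel.
    c = patch[1, 1]
    # Neighbours in clockwise order starting at the top-left corner.
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((v >= c) << i for i, v in enumerate(nbrs))

flat = np.full((3, 3), 7)           # uniform patch: every neighbour >= centre
edge = np.array([[9, 9, 9],
                 [0, 5, 9],
                 [0, 0, 0]])        # bright upper edge over a dark region
code_flat = lbp_code(flat)
code_edge = lbp_code(edge)
```

A flat patch sets all eight bits (code 255), while an edge sets only the bits of the brighter side; a histogram of these codes over an image is the standard LBP texture descriptor.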

7.
ScientificWorldJournal; 2014: 179105, 2014.
Article in English | MEDLINE | ID: mdl-25276846

ABSTRACT

This paper analyses the effect of the effort distribution along the software development lifecycle on the prevalence of software defects. The analysis is based on data collected by the International Software Benchmarking Standards Group (ISBSG) on the development of 4,106 software projects. Data mining techniques have been applied to gain a better understanding of the behaviour of the project activities and to identify a link between the effort distribution and the prevalence of software defects. The analysis has been complemented, for exploratory purposes, with a hierarchical clustering algorithm using a dissimilarity based on the likelihood ratio statistic. As a result, different behaviours have been identified for this collection of software development projects, allowing for the definition of risk control strategies to diminish the number and impact of software defects. The use of similar estimations is expected to greatly improve project managers' awareness of the risks at hand.
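Hierarchical clustering over a precomputed dissimilarity, as used here for exploration, can be sketched as follows; the likelihood-ratio dissimilarity is replaced by a Euclidean stand-in computed on toy effort profiles:

```python
# Hedged sketch of hierarchical clustering over a precomputed
# dissimilarity matrix, as in the paper; the likelihood-ratio statistic
# is replaced here by a Euclidean stand-in on toy effort profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Toy "effort distribution" profiles for 6 projects: fractions of total
# effort spent in three lifecycle phases (each row sums to 1).
profiles = np.vstack([
    rng.dirichlet([8, 1, 1], size=3),   # front-loaded projects
    rng.dirichlet([1, 1, 8], size=3),   # back-loaded projects
])

dists = pdist(profiles)                  # condensed pairwise dissimilarities
tree = linkage(dists, method="average")  # agglomerative clustering
labels = fcluster(tree, t=2, criterion="maxclust")
```

Cutting the dendrogram at two clusters separates the front-loaded from the back-loaded profiles, mirroring how the paper groups projects by effort-distribution behaviour.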


Subjects
Algorithms, Software, Cluster Analysis, Computational Biology/classification, Computational Biology/methods, Data Mining/classification, Data Mining/methods, Discriminant Analysis, Reproducibility of Results, Software Design, Software Validation