Results 1 - 13 of 13
1.
PLoS One ; 19(5): e0299255, 2024.
Article in English | MEDLINE | ID: mdl-38722923

ABSTRACT

Despite the importance of centrality metrics for understanding the topology of a network, little is known about how small alterations to the topology of the input graph affect the norm of the vector that stores the node centralities. If such effects were small, the vector of centrality metrics would not need to be re-calculated after minimal changes in the network topology, which would allow for significant computational savings. Hence, after formalising the notion of centrality, three of the most basic metrics were herein considered (i.e., Degree, Eigenvector, and Katz centrality). To perform the simulations, two probabilistic failure models were used to describe alterations in network topology: Uniform (i.e., every node can be independently deleted from the network with a fixed probability) and Best Connected (i.e., the probability that a node is removed depends on its degree). Our analysis suggests that small variations in the topology of the input graph produce small variations in Degree centrality, independently of the topological features of the input graph; conversely, both Eigenvector and Katz centrality can be extremely sensitive to changes in the topology of the input graph. In other words, if the input graph has certain specific features, even small changes in its topology can have catastrophic effects on the Eigenvector or Katz centrality.
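The experiment described above can be sketched as follows. This is a hedged illustration, not the paper's code: the graph is a small synthetic Erdős-Rényi-style network, and only the Degree and Katz metrics with the Uniform failure model are shown.

```python
import numpy as np

def degree_centrality(A):
    # Degree centrality: row sums of the adjacency matrix
    return A.sum(axis=1)

def katz_centrality(A, alpha=0.1):
    # Katz centrality: x = (I - alpha*A)^{-1} * 1, with alpha below 1/spectral radius
    n = len(A)
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

def uniform_failure(A, p, rng):
    # Uniform model: each node is independently deleted with probability p
    keep = rng.random(len(A)) >= p
    return A[np.ix_(keep, keep)]

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # undirected, no self-loops

B = uniform_failure(A, 0.05, rng)            # small perturbation of the topology
d_before = np.linalg.norm(degree_centrality(A))
d_after = np.linalg.norm(degree_centrality(B))
print(abs(d_before - d_after) / d_before)    # relative change in the centrality norm
```

The same comparison can be repeated with `katz_centrality` to observe its much larger sensitivity on graphs with suitable spectral features.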


Subject(s)
Algorithms , Computer Simulation , Models, Theoretical , Models, Statistical , Probability
2.
Sensors (Basel) ; 22(12)2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35746130

ABSTRACT

In water resources management, modeling water balance factors is necessary to control dams, agriculture, and irrigation, and to provide a water supply for drinking and industry. Conceptual and physical models generally struggle to incorporate the additional hydro-climatic parameters needed to assess runoff well across different climatic regions. Accordingly, a dynamic and reliable model is proposed to estimate inter-annual rainfall-runoff in five climatic regions of northern Algeria. It is a new improvement of Ol'Dekop's equation, which models the residuals between observed and predicted data using artificial neural networks (ANNs), namely the ANN1 and ANN2 sub-models. In this work, a set of climatic and geographical variables obtained from 16 basins, namely inter-annual rainfall (IAR), watershed area (S), and watercourse (WC), were used as input data in the first model. The ANN1 output results and the De Martonne index (I) were then classified and processed by ANN2 to further increase reliability, making the model more dynamic and less affected by the climatic characteristics of the area. The final model showed the best performance in the entire region compared to a set of parametric and non-parametric water balance models used in this study, with R2Adj values between 0.9103 and 0.9923 across the tests.
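The residual-correction idea above can be sketched with scikit-learn. This is a hedged stand-in, not the paper's model: the base water-balance formula, the synthetic basin data, and the network size are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stage 1: a simple base water-balance formula predicts runoff.
# Stage 2: a small ANN is trained on the residuals (observed - predicted),
# and its output corrects the base prediction.
rng = np.random.default_rng(2)
rain = rng.uniform(200, 1000, 300)                         # inter-annual rainfall (mm)
runoff = 0.6 * rain - 80 + 15 * rng.standard_normal(300)   # synthetic "observed" runoff

base = 0.5 * rain - 50                                     # stand-in water-balance estimate
X = ((rain - rain.mean()) / rain.std()).reshape(-1, 1)     # scaled ANN input
ann1 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                    random_state=0).fit(X, runoff - base)  # fit the residuals
corrected = base + ann1.predict(X)                         # residual-corrected prediction

mse_base = np.mean((runoff - base) ** 2)
mse_corr = np.mean((runoff - corrected) ** 2)
print(mse_base, mse_corr)                                  # the correction reduces the error
```

In the paper, a second network (ANN2) repeats this step on classified outputs together with the De Martonne index; the sketch shows only the first stage.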


Subject(s)
Neural Networks, Computer , Water Supply , Agriculture , Reproducibility of Results , Water , Water Movements
3.
Comput Methods Programs Biomed ; 223: 106951, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35767911

ABSTRACT

BACKGROUND AND OBJECTIVE: Many countries worldwide, developed and developing alike, suffer from fatal cancer-related diseases. In particular, the incidence of breast cancer in females rises daily, partly because of low awareness and a lack of diagnosis at the early stages. Proper first-line breast cancer treatment can only be provided if the cancer is adequately detected and classified during the very early stages of its development. Medical image analysis techniques and computer-aided diagnosis can help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS: This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet-18, ShuffleNet, and Inception-V3Net) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS: Our proposed method achieves the best average accuracy for binary classification of benign versus malignant cases: 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3Net, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3Net, and ShuffleNet, respectively.
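The transfer-learning recipe described in METHODS can be sketched in PyTorch. This is a hedged illustration, not the paper's pipeline: a tiny stand-in CNN replaces ResNet-18 so the snippet stays self-contained and offline, and random tensors stand in for BreakHis image batches; in practice one would load a pretrained torchvision model and fine-tune it.

```python
import torch
import torch.nn as nn

# Minimal transfer-learning sketch: freeze a "pretrained" feature extractor
# and train only a new classification head on the target dataset.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
for p in backbone.parameters():
    p.requires_grad = False                  # freeze backbone weights

head = nn.Linear(8, 2)                       # binary head: benign vs malignant
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(4, 3, 32, 32)                # dummy image batch
y = torch.randint(0, 2, (4,))                # dummy labels
logits = head(backbone(x))
loss = nn.CrossEntropyLoss()(logits, y)
loss.backward()                              # gradients flow only into the head
opt.step()
print(logits.shape)
```

For multi-class classification one would simply widen the head (e.g. `nn.Linear(8, 8)` for the eight BreakHis tumor sub-types).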


Subject(s)
Breast Neoplasms , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Computers , Female , Humans , Machine Learning , Neural Networks, Computer
4.
Sensors (Basel) ; 22(9)2022 Apr 23.
Article in English | MEDLINE | ID: mdl-35590930

ABSTRACT

Watershed climatic diversity poses a hard problem when it comes to finding suitable models to estimate inter-annual rainfall-runoff (IARR). In this work, a hybrid model (dubbed MR-CART) is proposed, based on a combination of the MR (multiple regression) and CART (classification and regression tree) machine-learning methods, applied to an IARR predicted-data series obtained from a set of non-parametric and empirical water balance models in five climatic zones of northern Algeria between 1960 and 2020. A comparative analysis showed that Yang's, Sharif's, and Zhang's models were reliable for estimating the input data of the hybrid model in all climatic classes. In addition, Schreiber's model was more efficient in very humid, humid, and semi-humid areas. A set of performance and distribution statistical tests was applied to the estimated IARR data series to show the reliability and dynamicity of each model in all study areas. The results showed that our hybrid model provided the best performance and data distribution, with R2Adj and p-values in each case between (0.793, 0.989) and (0.773, 0.939), respectively. The MR model showed better data distribution than the CART method, with p-values from the sign test and the Wilcoxon signed-rank (WSR) test of (0.773, 0.705) and (0.326, 0.335), respectively.
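One plausible way to combine MR and CART, sketched below as a hedged assumption (the paper does not spell out the exact combination scheme here), is to fit the multiple regression first and let the CART model its residuals. The data and feature names are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data: three predictors (e.g. rainfall, basin area,
# De Martonne index) and a runoff-like target with a non-linear component.
rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.standard_normal(200)

mr = LinearRegression().fit(X, y)                       # MR stage
cart = DecisionTreeRegressor(max_depth=4, random_state=0).fit(
    X, y - mr.predict(X))                               # CART fits MR's residuals

def predict(Xnew):
    # Hybrid MR-CART-style prediction: regression plus tree-modeled residual
    return mr.predict(Xnew) + cart.predict(Xnew)

print(np.mean((y - predict(X)) ** 2))                   # training MSE of the hybrid
```

By construction the hybrid's training error cannot exceed that of the MR alone, since the tree can only reduce the residual sum of squares.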


Subject(s)
Machine Learning , Water , Multivariate Analysis , Reproducibility of Results , Water Movements
5.
Sensors (Basel) ; 21(23)2021 Nov 23.
Article in English | MEDLINE | ID: mdl-34883778

ABSTRACT

Recent developments in cloud computing and the Internet of Things have enabled smart environments, in terms of both monitoring and actuation. Unfortunately, this often results in unsustainable cloud-based solutions, whereby, in the interest of simplicity, a wealth of raw (unprocessed) data are pushed from sensor nodes to the cloud. Herein, we advocate the use of machine learning at sensor nodes to perform essential data-cleaning operations, to avoid the transmission of corrupted (often unusable) data to the cloud. Starting from a public pollution dataset, we investigate how two machine learning techniques (kNN and missForest) may be embedded on Raspberry Pi to perform data imputation, without impacting the data collection process. Our experimental results demonstrate the accuracy and computational efficiency of edge-learning methods for filling in missing data values in corrupted data series. We find that kNN and missForest correctly impute up to 40% of randomly distributed missing values, with a density distribution of values that is indistinguishable from the benchmark. We also show a trade-off analysis for the case of bursty missing values, with recoverable blocks of up to 100 samples. Computation times are shorter than sampling periods, allowing for data imputation at the edge in a timely manner.
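The kNN side of the edge-imputation experiment can be sketched with scikit-learn's `KNNImputer`. The pollution dataset is replaced here by a synthetic two-feature series, and the 40% randomly distributed missing values match the recoverable regime reported above; everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Build a synthetic "sensor" series with two correlated channels,
# then knock out 40% of the values at random, as in the random-missing case.
rng = np.random.default_rng(42)
t = np.arange(500)
clean = np.column_stack([np.sin(t / 25),
                         np.cos(t / 25) + 0.1 * rng.standard_normal(500)])

corrupted = clean.copy()
mask = rng.random(corrupted.shape) < 0.4      # 40% randomly missing values
corrupted[mask] = np.nan

# kNN imputation: each missing cell is filled from the k most similar samples
imputed = KNNImputer(n_neighbors=5).fit_transform(corrupted)
print(np.sqrt(np.mean((imputed[mask] - clean[mask]) ** 2)))  # RMSE on imputed cells
```

On a Raspberry Pi-class node the same call runs unchanged; the paper's point is that such imputation completes within a sampling period, so cleaning can happen at the edge before transmission.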


Subject(s)
Cloud Computing , Machine Learning , Benchmarking
6.
PLoS One ; 16(8): e0255067, 2021.
Article in English | MEDLINE | ID: mdl-34379625

ABSTRACT

Data collected in criminal investigations may suffer from issues such as: (i) incompleteness, due to the covert nature of criminal organizations; (ii) incorrectness, caused by either unintentional data-collection errors or intentional deception by criminals; (iii) inconsistency, when the same information is entered into law enforcement databases multiple times, or in different formats. In this paper we analyze nine real criminal networks of different natures (i.e., Mafia networks, criminal street gangs, and terrorist organizations) in order to quantify the impact of incomplete data and to determine which network type is most affected by it. The networks are first pruned using two specific methods: (i) random edge removal, simulating the scenario in which the Law Enforcement Agencies fail to intercept some calls or to spot sporadic meetings among suspects; (ii) node removal, modeling the situation in which some suspects cannot be intercepted or investigated. We then compute spectral distances (i.e., Adjacency, Laplacian, and Normalized Laplacian Spectral Distances) and matrix distances (i.e., Root Euclidean Distance) between the complete and pruned networks, and compare them using statistical analysis. Our investigation identifies two main findings: first, the overall understanding of the criminal networks remains high even with incomplete data on criminal interactions (i.e., when 10% of edges are removed); second, failing to investigate even a small fraction of suspects (i.e., removing 2% of nodes) may lead to significant misinterpretation of the overall network.
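The edge-pruning and spectral-distance comparison can be sketched as follows. This is a hedged toy version: a random graph stands in for the (confidential) criminal networks, only the Laplacian spectral distance is shown, and the zero-padding choice for graphs of different sizes is an assumption.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def spectral_distance(A, B):
    # Euclidean distance between sorted Laplacian spectra; if the graphs
    # differ in size, the shorter spectrum is padded with zeros (assumption).
    ea = np.sort(np.linalg.eigvalsh(laplacian(A)))
    eb = np.sort(np.linalg.eigvalsh(laplacian(B)))
    k = max(len(ea), len(eb))
    ea = np.pad(ea, (k - len(ea), 0))
    eb = np.pad(eb, (k - len(eb), 0))
    return np.linalg.norm(ea - eb)

def remove_random_edges(A, frac, rng):
    # Random edge removal: simulate missed calls / unobserved meetings
    A = A.copy()
    i, j = np.triu_indices_from(A, 1)
    edges = np.flatnonzero(A[i, j])
    drop = rng.choice(edges, size=int(frac * len(edges)), replace=False)
    A[i[drop], j[drop]] = A[j[drop], i[drop]] = 0
    return A

rng = np.random.default_rng(7)
n = 40
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A += A.T
print(spectral_distance(A, remove_random_edges(A, 0.10, rng)))  # 10% edges removed
```

Repeating the removal many times and comparing the distance distributions across network types reproduces the shape of the study's analysis.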


Subject(s)
Criminals , Data Analysis , Social Networking , Algorithms , Humans , Terrorism
7.
PLoS One ; 15(8): e0236476, 2020.
Article in English | MEDLINE | ID: mdl-32756592

ABSTRACT

Compared to other types of social networks, criminal networks present particularly hard challenges, due to their strong resilience to disruption, which poses severe hurdles to Law-Enforcement Agencies (LEAs). Herein, we borrow methods and tools from Social Network Analysis (SNA) to (i) unveil the structure and organization of Sicilian Mafia gangs, based on two real-world datasets, and (ii) gain insights as to how to efficiently reduce the Largest Connected Component (LCC) of two networks derived from them. Mafia networks have peculiar features, in terms of the distribution and strength of their links, which make them very different from other social networks and extremely robust to exogenous perturbations. Analysts also face difficulties in collecting reliable datasets that accurately describe the gangs' internal structure and their relationships with the external world, which is why earlier studies are largely qualitative, elusive and incomplete. An added value of our work is the generation of two real-world datasets, based on raw data extracted from juridical acts, relating to a Mafia organization that operated in Sicily during the first decade of the 2000s. We created two different networks, capturing phone calls and physical meetings, respectively. Our analysis simulated different intervention procedures: (i) arresting one criminal at a time (sequential node removal); and (ii) police raids (node block removal). In both the sequential and the block removal procedures, Betweenness centrality was the most effective strategy for prioritizing the nodes to be removed. For instance, when targeting the top 5% of nodes with the largest Betweenness centrality, our simulations suggest a reduction of up to 70% in the size of the LCC. We also found that, due to the peculiar type of interactions in criminal networks (namely, the distribution of the interactions' frequency), no significant differences exist between weighted and unweighted network analysis. Our work has significant practical applications for perturbing the operations of criminal and terrorist networks.
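The sequential-removal simulation can be sketched with networkx. A small-world random graph stands in for the (non-reproduced) Mafia call network; the loop repeatedly "arrests" the node with the highest Betweenness centrality and tracks the Largest Connected Component.

```python
import networkx as nx

def lcc_size(G):
    # size of the Largest Connected Component (0 for an empty graph)
    return max((len(c) for c in nx.connected_components(G)), default=0)

# Synthetic stand-in network for the criminal graph
G = nx.watts_strogatz_graph(100, 4, 0.1, seed=3)

sizes = [lcc_size(G)]
for _ in range(5):                     # sequential node removal, 5 rounds
    bc = nx.betweenness_centrality(G)  # recomputed after every arrest
    G.remove_node(max(bc, key=bc.get))
    sizes.append(lcc_size(G))
print(sizes)                           # LCC shrinks as central nodes are removed
```

Node block removal (the "police raid" scenario) is the same loop but deleting the top-k betweenness nodes in a single step instead of one per round.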


Subject(s)
Criminals/psychology , Social Networking , Humans , Sicily
8.
Sensors (Basel) ; 19(17)2019 Aug 27.
Article in English | MEDLINE | ID: mdl-31461834

ABSTRACT

This research work investigates how RSS information fusion from a single, multi-antenna access point (AP) can be used to perform device localization in indoor RSS-based localization systems. The proposed approach demonstrates that different RSS values can be obtained by carefully modifying each AP antenna's orientation and polarization, allowing the generation of unique, low-correlation fingerprints for the area of interest. Each AP antenna can be used to generate a set of fingerprint radiomaps for different antenna orientations and/or polarizations. The RSS fingerprints generated from all antennas of the single AP can then be combined to create a multi-layer fingerprint radiomap. In order to select the optimum fingerprint layers in the multi-layer radiomap, the proposed methodology evaluates the localization accuracy obtained for each fingerprint radiomap combination, using various well-known deterministic and probabilistic algorithms (Weighted k-Nearest-Neighbor (WKNN) and Minimum Mean Square Error (MMSE)). The optimum candidate multi-layer radiomap is then examined by calculating the correlation level of each fingerprint pair using the Tolerance-Based Normal Probability Distribution (TBNPD) algorithm. Both steps take place during the offline phase, and it is demonstrated that this approach selects the optimum multi-layer fingerprint radiomap combination. The proposed approach can be used to provide localization services in areas served by only a single AP.
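The WKNN step at the core of the evaluation can be sketched as follows. This is a hedged toy: a log-distance path-loss model and three "antenna layers" at assumed positions generate the synthetic radiomap; the real system derives its layers from antenna orientation and polarization, not position.

```python
import numpy as np

def wknn(radiomap_rss, radiomap_xy, rss, k=3):
    # WKNN: inverse-distance-weighted mean of the k closest fingerprints
    # in RSS space; the small epsilon guards against division by zero.
    d = np.linalg.norm(radiomap_rss - rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * radiomap_xy[idx]).sum(axis=0) / w.sum()

rng = np.random.default_rng(5)
radiomap_xy = rng.random((50, 2)) * 10                  # survey points (metres)
antennas = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])  # assumed "layers"

# Synthetic multi-layer fingerprints via a log-distance path-loss model
radiomap_rss = -40 - 20 * np.log10(
    1 + np.linalg.norm(radiomap_xy[:, None] - antennas, axis=2))

estimate = wknn(radiomap_rss, radiomap_xy, radiomap_rss[0])
print(estimate)        # should land at (or very near) survey point 0
```

Layer selection then amounts to running this estimator over every subset of fingerprint layers and keeping the combination with the lowest positioning error.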

9.
Nat Commun ; 9(1): 2383, 2018 06 19.
Article in English | MEDLINE | ID: mdl-29921910

ABSTRACT

Owing to the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g., sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (an Erdős-Rényi random graph) between two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
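The evolutionary rewiring step can be sketched in numpy. This is a hedged illustration of the prune-and-regrow mechanic for one layer, outside any training loop: the density, the rewiring fraction `zeta`, and the fresh-weight scale are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, density, zeta = 100, 50, 0.1, 0.3   # zeta = fraction rewired per epoch

mask = rng.random((n_in, n_out)) < density       # Erdos-Renyi sparse topology
W = rng.standard_normal((n_in, n_out)) * mask    # sparse weight matrix

def rewire(W, mask, zeta, rng):
    # After each training epoch: prune the smallest-magnitude active weights,
    # then regrow the same number of connections at random inactive positions.
    idx = np.flatnonzero(mask)
    k = int(zeta * len(idx))
    prune = idx[np.argsort(np.abs(W.ravel()[idx]))[:k]]
    mask.ravel()[prune] = False
    W.ravel()[prune] = 0.0
    free = np.flatnonzero(~mask.ravel())
    grow = rng.choice(free, size=k, replace=False)
    mask.ravel()[grow] = True
    W.ravel()[grow] = 0.01 * rng.standard_normal(k)  # small fresh weights
    return W, mask

W, mask = rewire(W, mask, zeta, rng)
print(mask.sum())    # the number of connections is preserved across rewiring
```

Iterating this step between gradient updates is what drives the topology from random toward scale-free while keeping the parameter count fixed.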

10.
Sci Rep ; 8(1): 7007, 2018 Apr 30.
Article in English | MEDLINE | ID: mdl-29712929

ABSTRACT

A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

11.
Sensors (Basel) ; 18(2)2018 Jan 27.
Article in English | MEDLINE | ID: mdl-29382072

ABSTRACT

Current trends in interconnecting myriad smart objects to monetize Internet of Things applications have led to high-density communications in wireless sensor networks. This aggravates the already over-congested unlicensed radio bands, calling for new mechanisms to improve spectrum management and energy efficiency, such as transmission power control. Existing protocols are based on simplistic heuristics that often approach interference problems (i.e., packet loss, delay and energy waste) by increasing power, leading to detrimental results. The scope of this work is to investigate how machine learning may be used to bring wireless nodes to the lowest possible transmission power level while respecting the quality requirements of the overall network. Lowering transmission power has benefits in terms of both energy consumption and interference. We propose a transmission power control protocol based on a reinforcement learning process set in a multi-agent system. The agents are independent learners using the same exploration strategy and reward structure, leading to an overall cooperative network. The simulation results show that the system converges to an equilibrium where each node transmits at the minimum power while respecting high packet-reception-ratio constraints. Consequently, the system benefits from low energy consumption and low packet delay.
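The learning mechanism can be sketched for a single node as a stateless Q-learning (bandit) loop. This is a hedged toy, not the paper's protocol: the five power levels, the link model ("PRR is acceptable from level 2 upward"), and the reward shape are all assumptions.

```python
import random

random.seed(0)
LEVELS = range(5)                      # discrete transmit-power levels

def reward(p):
    # Toy link model: levels >= 2 give an acceptable packet reception ratio;
    # higher power is penalised to push the node toward the minimum.
    prr_ok = p >= 2
    return (1.0 if prr_ok else -1.0) - 0.1 * p

Q = {p: 0.0 for p in LEVELS}
alpha, eps = 0.2, 0.2                  # learning rate, exploration probability
for step in range(2000):
    if random.random() < eps:
        p = random.choice(list(LEVELS))            # explore
    else:
        p = max(Q, key=Q.get)                      # exploit best known level
    Q[p] += alpha * (reward(p) - Q[p])             # stateless Q-update

print(max(Q, key=Q.get))   # converges to the lowest level with good PRR
```

In the multi-agent setting of the paper each node runs such a learner independently, with the shared reward structure steering the population toward a cooperative low-power equilibrium.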

12.
Sci Rep ; 8(1): 1571, 2018 01 25.
Article in English | MEDLINE | ID: mdl-29371618

ABSTRACT

Almost all natural or human-made systems can be understood and controlled using complex networks. This is a difficult problem due to the very large number of elements in such networks, on the order of billions and higher, which makes it impossible to use conventional network analysis methods. Herein, we employ artificial intelligence (specifically swarm computing) to compute centrality metrics in a completely decentralized fashion. More precisely, we show that by overlaying a homogeneous artificial system (inspired by swarm intelligence) over a complex network (a heterogeneous system), and playing a game in the fused system, the changes in the homogeneous system perfectly reflect the complex network's properties. Our method, dubbed Game of Thieves (GOT), computes the importance of all network elements (both nodes and edges) in polylogarithmic time with respect to the total number of nodes, whereas state-of-the-art methods need at least quadratic time. Moreover, the excellent capabilities of our proposed approach, in terms of speed, accuracy, and functionality, open the path to better ways of understanding and controlling complex networks.
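A much-simplified flavour of the decentralized idea can be sketched with plain random walkers: agents wander the network using only local neighbour lists, and per-node traffic accumulates into an importance estimate with no global computation. This is a hedged stand-in only; the full GOT mechanics (thieves collecting "vdiamonds") are considerably richer.

```python
import random

random.seed(1)
# Small star-plus-path graph: node 0 is the hub and should rank most important
n = 9
adj = {i: [] for i in range(n)}
def add_edge(u, v):
    adj[u].append(v)
    adj[v].append(u)
for i in range(1, n):
    add_edge(0, i)                     # star edges into hub 0
for i in range(1, n - 1):
    add_edge(i, i + 1)                 # path among the leaves

visits = [0] * n
for start in range(n):                 # launch a walker from every node
    pos = start
    for _ in range(200):               # each walker takes 200 local steps
        pos = random.choice(adj[pos])  # purely local decision
        visits[pos] += 1

print(max(range(n), key=visits.__getitem__))   # the hub attracts the most traffic
```

The appeal of this family of methods is that every update is local, so the computation parallelizes naturally over the network's own elements.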

13.
Sensors (Basel) ; 17(4)2017 Apr 10.
Article in English | MEDLINE | ID: mdl-28394268

ABSTRACT

Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT), and particularly in Body Sensor Network (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) with the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which filters the initial fingerprint dataset (i.e., the radiomap) according to the proximity of RSS fingerprints to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, from which it finally estimates the user position. The proposed methodology achieves fast positioning estimation, thanks to the use of only a fragment of the initial fingerprint dataset, while improving positioning accuracy by minimizing calculation errors.
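The two-step i-KNN idea can be sketched as follows, with loudly-labelled assumptions: the AP and BLE beacon positions, the log-distance radio model, and the pruning radius are all synthetic stand-ins, and the filter here uses the beacon's known position rather than BLE RSS directly.

```python
import numpy as np

rng = np.random.default_rng(9)
radiomap_xy = rng.random((200, 2)) * 20            # survey grid (metres)
aps = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 20.0]])   # assumed Wi-Fi APs
beacons = np.array([[5.0, 5.0], [15.0, 15.0]])            # assumed BLE beacons

# Synthetic Wi-Fi fingerprints from a log-distance path-loss model
radiomap_rss = -40 - 20 * np.log10(
    1 + np.linalg.norm(radiomap_xy[:, None] - aps, axis=2))

def iknn(rss, nearest_beacon, k=4, radius=6.0):
    # Step 1: BLE proximity prunes the radiomap to fingerprints near the
    # strongest detected beacon, yielding a small candidate subset.
    keep = np.linalg.norm(radiomap_xy - beacons[nearest_beacon], axis=1) < radius
    sub_xy, sub_rss = radiomap_xy[keep], radiomap_rss[keep]
    # Step 2: ordinary KNN in RSS space over the reduced radiomap.
    idx = np.argsort(np.linalg.norm(sub_rss - rss, axis=1))[:k]
    return sub_xy[idx].mean(axis=0)

true = np.array([5.5, 4.5])                        # user near beacon 0
rss = -40 - 20 * np.log10(1 + np.linalg.norm(true - aps, axis=1))
print(iknn(rss, nearest_beacon=0))
```

Because step 2 searches only the pruned subset, the per-query cost drops roughly in proportion to the filtered fraction of the radiomap, which is the source of the speed-up claimed above.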
