Results 1 - 8 of 8
1.
Sensors (Basel) ; 22(13)2022 Jul 03.
Article in English | MEDLINE | ID: mdl-35808523

ABSTRACT

In emergent technologies, data integrity is critical for message-passing communications, where security measures and validations must be considered to prevent the entry of invalid data, detect transmission errors, and prevent data loss. The SHA-256 algorithm is used to meet these requirements. Existing hardware architectures have trouble balancing processing, efficiency, and cost in real time, because some of them introduce significant critical paths. Moreover, the SHA-256 algorithm itself provides no verification mechanisms for internal calculations or failure prevention. Hardware implementations can be affected by diverse problems, ranging from physical phenomena to interference or faults inherent to the data spectra. Previous works have mainly addressed this problem through three kinds of redundancy: information, hardware, or time. To the best of our knowledge, pipelining has not previously been used to perform different hash calculations for redundancy purposes. Therefore, in this work we present a novel hybrid architecture implemented on a 3-stage pipeline structure. Pipelining is traditionally used to improve performance by processing several blocks simultaneously; instead, we use it to implement hardware and time redundancy, analyzing hardware resources and performance to balance the critical path. We improved performance at a given clock speed by defining a data-flow transformation over several sequential phases. Our architecture reported a throughput of 441.72 Mbps using 2255 LUTs, for an efficiency of 195.8 Kbps/LUT.
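The redundancy idea described above can be sketched in software: compute the same SHA-256 digest in several independent passes and vote on the results, flagging a fault on mismatch. The following is a minimal Python sketch using the standard hashlib library; the function name and the majority-vote scheme are illustrative assumptions, not the paper's pipeline architecture.

```python
import hashlib
from collections import Counter

def sha256_redundant(data: bytes, copies: int = 3) -> str:
    """Compute SHA-256 `copies` times (time redundancy) and majority-vote.

    In the paper's hardware the redundant computations run in pipeline
    stages; here they simply run sequentially (illustrative only).
    """
    digests = [hashlib.sha256(data).hexdigest() for _ in range(copies)]
    winner, votes = Counter(digests).most_common(1)[0]
    if votes <= copies // 2:
        raise RuntimeError("no majority: unrecoverable fault")
    return winner

digest = sha256_redundant(b"abc")
```

In fault-free operation all copies agree, so the vote is unanimous and the cost is purely the extra hash computations.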

2.
Sensors (Basel) ; 22(8)2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35458970

ABSTRACT

Cryptography has become one of the vital disciplines for information technologies such as the IoT (Internet of Things), IIoT (Industrial Internet of Things), I4.0 (Industry 4.0), and automotive applications. Fundamental characteristics required by these applications include confidentiality, authentication, integrity, and nonrepudiation, which can be achieved using hash functions. A cryptographic hash function that provides a high level of security is SHA-3. However, in real, modern applications, FPGA-based hardware implementations of hash functions are prone to errors caused by noise and radiation: because of the avalanche effect (diffusion), flipping a single bit changes most of the bits of the resulting hash, so a single bit-state change can produce a completely different output than expected. It is therefore vital to detect and correct any error during algorithm execution. Current hardware solutions mainly seek to detect errors but not correct them (e.g., using parity checking or scrambling). To the best of our knowledge, there are no solutions that both detect and correct errors for SHA-3 hardware implementations. This article presents the design and a comparative analysis of four FPGA architectures: two without fault tolerance and two with fault tolerance, which employ Hamming codes to detect and correct faults in SHA-3 using an encoder and a decoder at the step-mapping function level. Results show that the two fault-tolerant hardware architectures can detect up to 120 and 240 errors, respectively, for every run of KECCAK-p, which is considered the worst case. Additionally, the paper compares these architectures with other works in the literature in terms of experimental results such as frequency, resources, throughput, and efficiency.
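The single-bit correction that Hamming codes provide can be illustrated with the classic Hamming(7,4) code: four data bits are protected by three parity bits, and the syndrome computed at the decoder points directly at the flipped bit. This toy Python sketch is not the paper's encoder/decoder (which operates at the step-mapping function level of SHA-3); it only demonstrates the detect-and-correct mechanism.

```python
def hamming74_encode(d1, d2, d3, d4):
    # Codeword layout: [p1, p2, d1, p3, d2, d3, d4] (positions 1..7)
    p1 = d1 ^ d2 ^ d4  # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]  # recover the data bits

codeword = hamming74_encode(1, 0, 1, 1)
corrupted = list(codeword)
corrupted[4] ^= 1                    # inject a single-bit fault
recovered = hamming74_decode(corrupted)
```

A single flipped bit anywhere in the 7-bit codeword is located by the syndrome and corrected, which is exactly the property the fault-tolerant architectures exploit.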

3.
Sensors (Basel) ; 22(7)2022 Mar 24.
Article in English | MEDLINE | ID: mdl-35408115

ABSTRACT

The latest generation of communication networks, such as SDVNs (software-defined vehicular networks) and VANETs (vehicular ad hoc networks), should evaluate their communication channels to adapt their behavior. The quality of communication in a data network depends on the behavior of the transmission channel selected to send the information. Transmission channels can be affected by diverse problems, ranging from physical phenomena (e.g., weather, cosmic rays) to interference or faults inherent to the data spectra. If the channel has good transmission quality, bandwidth use can be maximized. Otherwise, fault-tolerant schemes should be included; although these schemes resolve errors and failures, they degrade transmission speed, consume more energy, and are slower because lost packets must be requested again (recovery). In this sense, one of the open problems in communications is how to design and implement an efficient, low-power mechanism capable of sensing channel quality and automatically making the adjustments needed to select the channel over which to transmit. In this work, we present a trade-off analysis based on hardware implementations that identify whether a channel has low or high quality, implementing four machine learning algorithms: decision trees, multi-layer perceptron, logistic regression, and support vector machines. We obtained the best trade-off, with an accuracy of 95.01% and an efficiency of 9.83 Mbps/LUT (lookup table), from a hardware implementation of a decision tree with a depth of five.
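A depth-limited decision tree of the kind described above reduces to a small cascade of comparisons, which is what makes it cheap to map onto LUTs. The features (SNR, packet-loss rate) and thresholds in this Python sketch are invented for illustration; the paper's depth-5 tree is trained on channel data rather than hand-written.

```python
def channel_quality(snr_db: float, loss_rate: float) -> str:
    """Toy hand-written decision tree (depth 2) labeling a channel.

    Thresholds are illustrative assumptions; a real classifier would be
    trained (the paper trains a depth-5 tree) and then mapped to
    hardware comparators.
    """
    if snr_db >= 20.0:
        return "high" if loss_rate < 0.05 else "low"
    else:
        return "high" if snr_db >= 15.0 and loss_rate < 0.01 else "low"

label = channel_quality(25.0, 0.01)
```

Each internal node is one comparator in hardware, so a depth-5 tree costs at most five comparisons per classification, which is the source of the reported efficiency per LUT.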

4.
Sensors (Basel) ; 21(21)2021 Oct 25.
Article in English | MEDLINE | ID: mdl-34770377

ABSTRACT

The design of neural network architectures is carried out using methods that optimize a particular objective function, seeking a point that minimizes it. Previously reported works focus only on software simulations or commercial complementary metal-oxide-semiconductor (CMOS) devices, neither of which guarantees the quality of the solution. In this work, we designed a hardware architecture using individual neurons as building blocks, based on the optimization of n-dimensional objective functions, such as obtaining the bias and synaptic-weight parameters of an artificial neural network (ANN) model using the gradient descent method. The ANN-based architecture has a 5-3-1 configuration and is implemented on a 1.2 µm technology integrated circuit, with a total power consumption of 46.08 mW, using nine neurons and 36 CMOS operational amplifiers (op-amps). We show results from applying the ANN integrated circuits, simulated in PSpice, to the classification of digital data, demonstrating that the optimization method successfully obtains the synaptic weights and bias values generated by the learning algorithm (steepest descent) for the design of the neural architecture.
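Obtaining weights and bias by steepest descent, as described above, can be sketched for a single sigmoid neuron. The training data (logical AND), learning rate, and epoch count below are illustrative assumptions; the paper's parameters come from its 5-3-1 network, not this toy.

```python
import math

def train_neuron(samples, lr=0.5, epochs=5000):
    """Fit one sigmoid neuron y = s(w1*x1 + w2*x2 + b) by steepest
    descent on the squared error. Data, rate, and epochs are
    illustrative, not the paper's values."""
    w1 = w2 = b = 0.0
    s = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = s(w1 * x1 + w2 * x2 + b)
            g = (y - t) * y * (1.0 - y)  # dE/dz for squared error
            w1 -= lr * g * x1            # step against the gradient
            w2 -= lr * g * x2
            b  -= lr * g
    return w1, w2, b

# Learn logical AND, a linearly separable digital-classification toy.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
```

After training, the neuron fires (output above 0.5) only for the input (1, 1), mirroring the digital-data classification the PSpice simulations demonstrate.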


Subject(s)
Neural Networks, Computer , Semiconductors , Algorithms , Neurons , Oxides
5.
Sensors (Basel) ; 21(16)2021 Aug 22.
Article in English | MEDLINE | ID: mdl-34451097

ABSTRACT

Currently, cryptographic algorithms are widely applied in communication systems to guarantee data security. For instance, in the emerging automotive environment, where connectivity is a core part of autonomous and connected cars, it is essential to guarantee secure communications both inside and outside the vehicle. The AES algorithm has been widely applied to protect communications in onboard networks and outside the vehicle. Hardware implementations use techniques such as iterative, parallel, unrolled, and pipelined architectures. Nevertheless, the use of AES alone does not guarantee secure communication, because previous works have proved that hardware implementations of secret-key cryptosystems such as AES are sensitive to differential fault analysis. Moreover, it has been demonstrated that even a single fault during encryption or decryption can cause a large number of errors in the encrypted or decrypted data. Although techniques such as iterative and parallel architectures have been explored for fault detection to protect AES encryption and decryption, other techniques, such as pipelining, remain to be explored. Furthermore, balancing high throughput, low power consumption, and reduced hardware resource usage in a pipelined design is a great challenge, and it is more difficult still when fault detection and correction are considered. In this research, we propose a novel hybrid pipelined hardware architecture focused on error and fault detection for the AES cryptographic algorithm. The architecture is hybrid because it combines hardware and time redundancy through a pipeline structure, analyzing and balancing the critical path and distributing the processing elements within each stage.
The main contribution is a pipeline structure that ciphers the same data block five times, with a voting module that verifies whether an error occurred or the output contains correct cipher data, optimizing the process by using a decision tree to reduce the complexity of evaluating all the required combinations. The architecture is analyzed and implemented on several FPGA technologies, and it reports a throughput of 0.479 Gbps and an efficiency of 0.336 Mbps/LUT on a Virtex-7.
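The five-way vote with a decision-tree shortcut can be sketched as follows. The cipher here is a stand-in XOR placeholder (NOT AES and not secure), since the real design runs AES cores in pipeline stages; the 3-of-5 threshold and early-exit comparison order are illustrative assumptions.

```python
from collections import Counter

def toy_cipher(block: bytes, key: bytes) -> bytes:
    """Stand-in for an AES core (XOR only, NOT secure); the paper's
    architecture runs actual AES rounds in pipeline stages."""
    return bytes(b ^ k for b, k in zip(block, key))

def vote5(block: bytes, key: bytes, cores) -> bytes:
    """Cipher the same block on five (possibly faulty) cores and
    majority-vote the outputs. The first comparison acts like a
    decision-tree early exit: if three outputs already agree, no
    further combinations need evaluating."""
    outs = [core(block, key) for core in cores]
    if outs[0] == outs[1] == outs[2]:       # early exit: 3 matches decide
        return outs[0]
    winner, votes = Counter(outs).most_common(1)[0]
    if votes < 3:
        raise RuntimeError("no 3-of-5 majority: uncorrectable fault")
    return winner

key = bytes(16)
block = bytes(range(16))
faulty = lambda b, k: bytes(16)             # a core stuck at zero
cores = [toy_cipher, toy_cipher, faulty, toy_cipher, toy_cipher]
result = vote5(block, key, cores)
```

Even with one faulty core, the four agreeing outputs win the vote, so the correct ciphertext is emitted and the fault is masked.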

6.
PLoS One ; 15(6): e0234293, 2020.
Article in English | MEDLINE | ID: mdl-32559235

ABSTRACT

Several areas, such as the physical and health sciences, require matrices as fundamental tools for solving various problems. Matrices are used in real-life contexts, such as control, automation, and optimization, where results are expected to improve as computational precision increases. However, special attention should be paid to ill-conditioned matrices, which can produce unstable systems; inadequate handling of precision might worsen results, since the solution found for data with errors might be far from the one for data without errors, besides increasing other costs in hardware resources and critical paths. In this paper, we make a wake-up call, using 2 × 2 matrices to show how ill-conditioning and precision can affect system design (resources, cost, etc.). We first present examples of real-life problems where ill-conditioning appears in matrices obtained from the discretization of the operational equations (ill-posed in the sense of Hadamard) that model those problems. If such matrices are not handled appropriately (i.e., if ill-conditioning is not considered), large errors can appear in the computed solutions of the systems of equations in the presence of data errors. Furthermore, we illustrate the effect generated in the calculation of the inverse of an ill-conditioned matrix when its elements are approximated by truncation. We present two case studies illustrating the calculation errors caused by increasing or reducing the precision to s digits. To illustrate the costs, we implemented the adjoint-matrix inversion algorithm on different field-programmable gate arrays (FPGAs), namely Spartan-7, Artix-7, Kintex-7, and Virtex-7, using the full-unrolling hardware technique. The implemented architecture is useful for analyzing trade-offs when precision is increased; it also helps analyze performance, efficiency, and energy consumption.
From a detailed description of the trade-offs among these metrics with respect to precision and ill-conditioning, we conclude that resource requirements grow nonlinearly as precision increases. We also conclude that, if error is to be reduced below a certain threshold, an optimal precision point must be determined; otherwise, the system becomes more sensitive to measurement errors. A better alternative is to choose precision carefully and/or to apply regularization or preconditioning methods, which also reduces the required resources.
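The effect of truncation on an ill-conditioned 2 × 2 matrix can be demonstrated in a few lines. The matrix below and the 3-digit truncation are illustrative choices, not the paper's case studies; the adjoint-inverse formula, however, is the algorithm the paper unrolls in hardware.

```python
def inv2x2(m):
    """Adjoint-matrix inverse of a 2x2 matrix, the algorithm the paper
    implements with full unrolling: inv(A) = adj(A) / det(A)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ZeroDivisionError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

def truncate(x, digits):
    """Keep `digits` decimal places by truncation; an illustrative
    stand-in for reducing the datapath precision to s digits."""
    scale = 10 ** digits
    return int(x * scale) / scale

# An ill-conditioned 2x2 matrix: its rows are nearly linearly dependent,
# so the determinant is tiny and the inverse has huge entries.
A = [[1.0, 1.0], [1.0, 1.0001]]
exact_inv = inv2x2(A)

# Truncating the entries to 3 decimal digits makes the matrix exactly
# singular: at that precision the inverse no longer exists at all.
A3 = [[truncate(x, 3) for x in row] for row in A]
det3 = A3[0][0] * A3[1][1] - A3[0][1] * A3[1][0]
```

This is the paper's wake-up call in miniature: reducing precision does not merely enlarge the error in the inverse, it can destroy the solution entirely.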


Subject(s)
Algorithms , Computer Simulation
7.
Sensors (Basel) ; 20(11)2020 Jun 02.
Article in English | MEDLINE | ID: mdl-32498271

ABSTRACT

The electrocardiogram records the heart's electrical activity and generates a significant amount of data. Analyzing these data helps detect diseases and disorders by classifying abnormalities in the heart bio-signal. In unbalanced-data contexts, where the classes are not equally represented, optimizing and configuring the classification models is highly complex and taxes computational resources. Moreover, the performance of electrocardiogram classification depends on the approach and on the parameter estimation used to generate a model with high accuracy, sensitivity, and precision. Previous works have proposed hybrid approaches, but only a few implemented parameter optimization; instead, they generally applied empirical parameter tuning at the data level or the algorithm level. Hence, a scheme that includes sensitivity metrics at a higher precision and accuracy scale deserves special attention. In this article, a metaheuristic optimization approach for parameter estimation in arrhythmia classification from unbalanced data is presented. We selected an unbalanced data subset to classify eight types of arrhythmia. It is important to highlight that we combined undersampling based on a clustering method (data level) with a feature selection method (algorithm level) to tackle the unbalanced-class problem. To explore parameter estimation and improve the classification of our model, we compared two metaheuristic approaches based on differential evolution and particle swarm optimization. The final results showed an accuracy of 99.95%, an F1 score of 99.88%, a sensitivity of 99.87%, a precision of 99.89%, and a specificity of 99.99%, which are high even in the presence of unbalanced data.
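Differential evolution, one of the two metaheuristics compared above, can be sketched in its classic DE/rand/1/bin form. The toy 2-parameter objective below stands in for the classifier's parameter-estimation problem; population size, F, CR, and generation count are illustrative assumptions, not the paper's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=1):
    """Minimal DE/rand/1/bin minimizer: mutate with a scaled difference
    of two random members, binomially cross over, keep the trial if it
    is no worse than the parent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            trial, jrand = [], rng.randrange(dim)
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    lo, hi = bounds[j]
                    trial.append(min(max(a[j] + F * (b[j] - c[j]), lo), hi))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:            # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy objective with known minimum at (3, -2); a real run would plug in
# cross-validated classification error instead.
sphere = lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2
best_x, best_f = differential_evolution(sphere, [(-10, 10), (-10, 10)])
```

Swapping the toy objective for a cross-validation score over classifier parameters is all that separates this sketch from a parameter-estimation run.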


Subject(s)
Arrhythmias, Cardiac , Electrocardiography , Signal Processing, Computer-Assisted , Algorithms , Arrhythmias, Cardiac/classification , Arrhythmias, Cardiac/diagnosis , Cluster Analysis , Databases, Factual , Humans
8.
PLoS One ; 13(1): e0190939, 2018.
Article in English | MEDLINE | ID: mdl-29360824

ABSTRACT

Security is a crucial requirement in the envisioned applications of the Internet of Things (IoT), where most of the underlying computing platforms are embedded systems with reduced computing capabilities and energy constraints. In this paper we present the design and evaluation of a scalable, low-area FPGA hardware architecture that serves as a building block to accelerate the costly operations of exponentiation and multiplication in [Formula: see text], commonly required by security protocols relying on public-key encryption, such as key agreement, authentication, and digital signatures. The proposed design can process operands of different sizes using the same datapath, which yields a significant reduction in area without loss of efficiency compared with representative state-of-the-art designs. For example, our design uses 96% less standard logic than a similar design optimized for performance, and 46% fewer resources than another design optimized for area. Even using fewer area resources, our design still outperforms its embedded software counterparts (by factors of 190 and 697).
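The exponentiation such an accelerator speeds up is typically computed by left-to-right square-and-multiply: one squaring per exponent bit plus a multiplication for each 1 bit. This Python sketch uses plain modular integers for illustration; the paper's field arithmetic may differ (the field is elided above as "[Formula: see text]").

```python
def mod_exp(base: int, exponent: int, modulus: int) -> int:
    """Left-to-right square-and-multiply. This loop body (square, then
    conditionally multiply) is the operation a hardware datapath
    iterates or unrolls; integer arithmetic stands in for the paper's
    field arithmetic."""
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus  # always square
        if bit == "1":
            result = (result * base) % modulus  # multiply on 1 bits
    return result

r = mod_exp(5, 117, 19)
```

Because the same square/multiply datapath is reused for every bit, the operand size only changes how many iterations run, which is what lets one datapath serve operands of different sizes.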


Subject(s)
Computer Security/instrumentation , Internet , Wearable Electronic Devices , Algorithms , Computer Systems , Humans