2.
Sci Rep ; 12(1): 14557, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008439

ABSTRACT

This work presents the vision pipeline for our in-house developed autonomous reconfigurable pavement sweeping robot, Panthera. Because Panthera is designed to be an autonomous self-reconfigurable robot, it has to understand the type of pavement it is moving on so that it can adapt smoothly to changing pavement widths and perform cleaning operations more efficiently and safely. A deep learning (DL)-based vision pipeline is proposed for the Panthera robot to recognize pavement features, including pavement type identification, pavement surface condition prediction, and pavement width estimation. The DeepLabv3+ semantic segmentation algorithm was customized for pavement type classification, and an eight-layer CNN was proposed for pavement surface condition prediction. Furthermore, pavement width was estimated by fusing the segmented pavement region with the depth map. Finally, a fuzzy inference system was implemented that takes the detected pavement width and surface condition as inputs and outputs a safe operational speed. The vision pipeline was trained on a custom pavement image dataset, and its performance was evaluated on offline test images and real-time field trial images captured by Panthera's stereo vision sensor. In the experimental analysis, the DL-based vision pipeline components scored 88.02% and 93.22% accuracy for pavement segmentation and pavement surface condition assessment, respectively, and took approximately 10 ms to process a single image frame from the vision sensor on the onboard computer.
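The fuzzy inference stage described above maps detected pavement width and surface condition to a safe operational speed. A minimal Mamdani-style sketch of that idea follows; the membership shapes, rule base, and speed values are assumptions for illustration, not details published in the paper:

```python
# Minimal Mamdani-style fuzzy speed controller (illustrative sketch).
# Membership shapes, rule weights, and speed values are assumptions,
# not details taken from the Panthera paper.

def tri(x, a, b, c):
    """Triangular membership peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def safe_speed(width_m, condition):
    """width_m: detected pavement width in metres.
    condition: surface quality score in [0, 1], 1 = good."""
    # Shoulders extend slightly past the domain so the ends stay active.
    narrow = tri(width_m, -1.0, 0.5, 2.0)
    wide   = tri(width_m, 1.5, 3.0, 10.0)
    poor   = tri(condition, -1.0, 0.0, 0.6)
    good   = tri(condition, 0.4, 1.0, 1.1)

    # Rule firing strengths (min t-norm) -> crisp speed per rule (m/s).
    rules = [
        (min(narrow, poor), 0.2),   # narrow & poor -> crawl
        (min(narrow, good), 0.5),   # narrow & good -> slow
        (min(wide, poor),   0.6),   # wide & poor   -> slow-medium
        (min(wide, good),   1.2),   # wide & good   -> cruise
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0  # weighted-average defuzzification

print(f"{safe_speed(2.5, 0.8):.2f} m/s")  # wide, mostly good pavement
```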


Subject(s)
Robotics , Algorithms , Semantics
3.
Sensors (Basel) ; 22(14)2022 Jul 16.
Article in English | MEDLINE | ID: mdl-35890997

ABSTRACT

Robot-aided cleaning auditing is pioneering research that uses autonomous robots to assess a region's cleanliness by analyzing dirt samples collected from various locations. Because the dirt sample gathering process is challenging, directly adapting a coverage planning strategy from the related cleaning domain is not viable. A path planning approach that gathers dirt samples selectively at locations with a high likelihood of dirt accumulation is more feasible. This work presents a first-of-its-kind dirt sample gathering strategy for cleaning auditing robots, combining geometrical feature extraction with swarm algorithms. This combined approach generates an efficient optimal path covering all the identified dirt locations for efficient cleaning auditing. Besides being a foundational effort for cleaning auditing, a path planning approach that considers the geometric signatures contributing to dirt accumulation in a region has not been devised before. The proposed approach is validated systematically through experimental trials. The geometrical feature extraction-based dirt location identification method successfully identified dirt-accumulated locations in our post-cleaning analysis as part of the experimental trials. The path generation strategies were validated in a real-world environment using an in-house developed cleaning auditing robot, BELUGA. From the experiments conducted, the ant colony optimization algorithm generated the best cleaning auditing path, with the least travel distance, exploration time, and energy usage.
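The abstract reports that ant colony optimization (ACO) produced the best path through the identified dirt locations. A compact sketch of ACO applied to that visiting-order problem follows, assuming Euclidean travel costs, random waypoints, and generic hyperparameters (the paper's actual settings are not given):

```python
# Illustrative ant colony optimization over dirt-sample waypoints.
# Waypoints and hyperparameters are assumptions for demonstration.
import random, math

def aco_tour(points, n_ants=20, n_iters=100, alpha=1.0, beta=3.0,
             rho=0.5, q=1.0):
    n = len(points)
    dist = [[math.dist(points[i], points[j]) or 1e-9 for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            unvisited = set(range(1, n))
            tour, cur = [0], 0
            while unvisited:
                cands = list(unvisited)
                # Attractiveness = pheromone^alpha * (1/distance)^beta.
                weights = [tau[cur][j] ** alpha * (1 / dist[cur][j]) ** beta
                           for j in cands]
                cur = random.choices(cands, weights)[0]
                unvisited.remove(cur)
                tour.append(cur)
            length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone inversely to tour length.
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for tour, length in tours:
            for i in range(n - 1):
                tau[tour[i]][tour[i + 1]] += q / length
    return best_tour, best_len

pts = [(random.random() * 10, random.random() * 10) for _ in range(12)]
print(aco_tour(pts))
```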


Subject(s)
Robotics , Algorithms , Robotics/methods
4.
Sensors (Basel) ; 21(24)2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34960425

ABSTRACT

Cleaning is one of the fundamental tasks given prime importance in our day-to-day life, and its importance drives research efforts towards bringing leading-edge technologies, including robotics, into the cleaning domain. However, an effective method to assess the quality of cleaning is an equally important research problem. The primary step towards addressing the fundamental question of "how clean is clean" is an autonomous cleaning-auditing robot that audits the cleanliness of a given area. This research work focuses on a novel reinforcement learning-based, experience-driven dirt exploration strategy for a cleaning-auditing robot. The proposed approach uses proximal policy optimization (PPO), an on-policy learning method, to generate waypoints and sampling decisions for exploring the probable dirt accumulation regions in a given area. The policy network is trained in multiple environments with simulated dirt patterns. Experimental trials have been conducted to validate the trained policy in both simulated and real-world environments using an in-house developed cleaning audit robot called BELUGA.
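PPO's core idea is a clipped surrogate objective that limits how far each policy update can move from the data-collecting policy. A minimal PyTorch sketch of that update step is shown below; the network size, clip ratio, and synthetic batch are assumptions for illustration, not details from the paper:

```python
# Minimal PPO clipped-objective update (illustrative only).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
clip_eps = 0.2

def ppo_update(obs, actions, old_log_probs, advantages):
    """One clipped-surrogate gradient step on a batch of rollouts."""
    logits = policy(obs)
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)      # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    loss = -torch.min(unclipped, clipped).mean()      # pessimistic bound
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic batch standing in for collected rollouts.
obs = torch.randn(32, 4)
actions = torch.randint(0, 2, (32,))
old_log_probs = torch.log(torch.full((32,), 0.5))
advantages = torch.randn(32)
print(ppo_update(obs, actions, old_log_probs, advantages))
```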


Subject(s)
Robotics
5.
Sci Rep ; 11(1): 22378, 2021 11 17.
Article in English | MEDLINE | ID: mdl-34789747

ABSTRACT

Drain blockage is a crucial problem in the urban environment that heavily affects the ecosystem and human health; hence, routine drain inspection is essential. Manual drain inspection is a tedious task, prone to accidents and water-borne diseases. This work presents a drain inspection framework using a convolutional neural network (CNN)-based object detection algorithm and an in-house developed reconfigurable teleoperated robot called 'Raptor'. The CNN-based object detection model was trained using a transfer learning scheme with our custom dataset of drain-blocking objects. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials. The experimental results indicate that our trained object detection algorithm detected and classified drain-blocking objects with 91.42% accuracy on both offline and online test images and is able to process 18 frames per second (FPS). Further, the maneuverability of the robot was evaluated in various open and closed drain environments. The field trial results show that the robot's maneuverability was stable, and its mapping and localization were also accurate in a complex drain environment.

6.
Sensors (Basel) ; 21(21)2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34770593

ABSTRACT

Human visual inspection of drains is laborious, time-consuming, and prone to accidents. This work presents an AI-enabled, robot-assisted remote drain inspection and mapping framework using our in-house developed reconfigurable robot Raptor. The four-layer Internet of Robotic Things (IoRT) architecture serves as a bridge between the users and the robots, through which seamless information sharing takes place. The Faster RCNN ResNet50, Faster RCNN ResNet101, and Faster RCNN Inception-ResNet-v2 deep learning frameworks were trained using a transfer learning scheme with six typical concrete defect classes and deployed in the IoRT framework for the remote defect detection task. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials using the SLAM technique. The experimental results indicate that the robot's maneuverability was stable, and its mapping and localization were also accurate in different drain types. Finally, for effective drain maintenance, a SLAM-based defect map was generated by fusing the defect detection results into the lidar-SLAM map.
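This and several neighboring abstracts fine-tune a pretrained Faster RCNN detector on a small custom class set through transfer learning. A minimal sketch of that pattern using torchvision is given below; the six-class defect head follows the abstract, but the framework choice and training details are assumptions for illustration:

```python
# Transfer learning a pretrained Faster R-CNN to 6 defect classes + background.
# The framework choice (torchvision) is an assumption for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 6 + 1  # six concrete defect classes plus background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the COCO box predictor for a fresh head sized to our classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One training step on a stand-in (images, targets) batch.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
images = [torch.rand(3, 480, 640)]                      # stand-in image
targets = [{"boxes": torch.tensor([[50., 60., 200., 220.]]),
            "labels": torch.tensor([1])}]               # stand-in annotation
loss_dict = model(images, targets)                      # returns loss terms
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
print({k: round(v.item(), 3) for k, v in loss_dict.items()})
```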


Subject(s)
Raptors , Robotics , Algorithms , Animals , Humans
7.
Sensors (Basel) ; 21(18)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34577486

ABSTRACT

Staircase cleaning is a crucial and time-consuming task in the maintenance of multistory apartments and commercial buildings. Many autonomous cleaning robots are commercially available for building maintenance, but few of them are designed for staircase cleaning. A key challenge in automating staircase cleaning robots is the design of an Environmental Perception System (EPS), which assists the robot in detecting and navigating staircases. This system also recognizes obstacles and debris for safe navigation and efficient cleaning while climbing the staircase. This work proposes an operational framework leveraging a vision-based EPS for the modular reconfigurable maintenance robot sTetro. The proposed system uses an SSD MobileNet real-time object detection model to recognize staircases, obstacles, and debris. Furthermore, the model filters out false staircase detections by fusing depth information through a MobileNet and SVM. The system uses a contour detection algorithm to localize the first step of the staircase and a depth clustering scheme for obstacle and debris localization. The framework has been deployed on the sTetro robot using NVIDIA Jetson Nano hardware and tested on multistory staircases. The experimental results show that the entire framework takes an average of 310 ms to run and achieves 94.32% accuracy for staircase recognition and 93.81% accuracy for obstacle and debris detection during real operation of the robot.
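The first-step localization in this abstract rests on classical contour detection. A small OpenCV sketch of that step follows; the thresholds and the heuristic of picking the lowest wide horizontal contour are assumptions about how such a localizer might work, not the paper's exact algorithm:

```python
# Locating a staircase's first step via edge + contour analysis (sketch).
# Thresholds and the "lowest wide contour" heuristic are assumptions.
import cv2
import numpy as np

def first_step_bbox(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Dilate so broken step edges merge into continuous contours.
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 0.4 * bgr_image.shape[1] and w > 3 * h:
            candidates.append((x, y, w, h))     # keep wide, flat shapes
    if not candidates:
        return None
    # The first (nearest) step appears lowest in the image frame.
    return max(candidates, key=lambda b: b[1])

frame = cv2.imread("staircase.jpg")             # hypothetical test image
if frame is not None:
    print(first_step_bbox(frame))
```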


Subject(s)
Deep Learning , Form Perception , Robotics , Algorithms
8.
Sensors (Basel) ; 21(16)2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34450767

ABSTRACT

Routine rodent inspection is essential to curbing rat-borne diseases and infrastructure damage within the built environment. Rodents find false ceilings a perfect spot to seek shelter and construct their habitats. However, manual false ceiling inspection for rodents is laborious and risky. This work presents an AI-enabled IoRT framework for rodent activity monitoring inside a false ceiling using an in-house developed robot called "Falcon". The IoRT serves as a bridge between the users and the robots, through which seamless information sharing takes place. The images shared by the robots are inspected through a Faster RCNN ResNet101 object detection algorithm, which automatically detects signs of rodents inside a false ceiling. The efficiency of the rodent activity detection algorithm was tested in a real-world false ceiling environment, and detection accuracy was evaluated with standard performance metrics. The experimental results indicate that the algorithm detects rodent signs and 3D-printed rodents with a good confidence level.


Subject(s)
Neural Networks, Computer , Rodentia , Algorithms , Animals , Rats
9.
Dermatol Ther (Heidelb) ; 11(4): 1373-1384, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34110605

ABSTRACT

INTRODUCTION: Patients with psoriasis (PsO) experience impaired health-related quality of life due to physical and psychosocial burdens. The objective of this study was to assess the correlation between change in Psoriasis Area and Severity Index (PASI) score and selected Dermatology Life Quality Index (DLQI) domain scores in patients with moderate-to-severe PsO and in those with PsO and comorbid psoriatic arthritis (PsA). METHODS: This post hoc analysis of four phase 3 clinical trials included patients with moderate-to-severe PsO randomized to secukinumab 150/300 mg, etanercept, or placebo. Pairwise latent growth models were applied to assess the longitudinal correlation between change in PASI scores and changes in three DLQI domain scores (daily activities, leisure activities, and symptoms/feelings). The initial (baseline to week 12) and sustained (week >12 to week 52) treatment exposures were analysed by population type (total, PsO only, and PsO with comorbid PsA) and treatment arm (secukinumab, etanercept, or placebo). RESULTS: Among the total population (N = 2401), PASI change was positively correlated with change in each assessed DLQI domain; correlations were weak to moderate over the initial treatment exposure period (β range, 0.20-0.29; all P < 0.001) and moderate to strong over the sustained exposure period (β range, 0.63-0.69; all P < 0.001). Similar trends were observed regardless of the presence of comorbid PsA, and these relationships held among patients treated with secukinumab, etanercept, or placebo. CONCLUSIONS: Improvements in PASI scores were directly and moderately related to improvements in DLQI domain scores from treatment initiation, with the relationship strengthening over time, regardless of the presence of comorbid PsA or treatment received. CLINICAL TRIAL REGISTRATION: ERASURE (NCT01365455), FIXTURE (NCT01358578), FEATURE (NCT01555125), and JUNCTURE (NCT01636687).

10.
Sensors (Basel) ; 21(5)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802434

ABSTRACT

Regular washing of public pavements is necessary to ensure that the public environment is sanitary for social activities. This is a challenge for autonomous cleaning robots, as they must adapt to environments with varying pavement widths while avoiding pedestrians. The self-reconfigurable pavement sweeping robot Panthera has mechanisms to reconfigure its width to enable smooth cleaning operations, and it changes its behavior based on the environment dynamics of moving pedestrians and changing pavement widths. Reconfiguration of the robot's width is possible due to the scissor mechanism at the core of the robot's body, which is driven by a lead screw motor. Panthera performs locomotion and reconfiguration based on a proposed perception-sensor feedback control scheme using a Red-Green-Blue-Depth (RGB-D) camera. The proposed control scheme involves publishing the robot's kinematic parameters for reconfiguration during locomotion. Experiments were conducted on outdoor pavements to demonstrate autonomous reconfiguration during locomotion, avoiding pedestrians while complying with varying pavement widths in a real-world scenario.
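The width reconfiguration described here is, in essence, a feedback loop: the perceived pavement width (minus a safety margin) becomes the setpoint for the lead-screw-driven scissor mechanism. A toy proportional controller illustrating that loop is sketched below; the gain, limits, and margin are assumptions, not the paper's control law:

```python
# Toy feedback loop: track perceived pavement width with the robot's
# reconfigurable body width. Gain, limits, and margin are assumptions.

K_P = 1.5                 # proportional gain (1/s)
DT = 0.05                 # control period (s)
W_MIN, W_MAX = 0.7, 1.8   # mechanical width limits of the scissor (m)
MARGIN = 0.3              # clearance kept from pavement edges (m)

def width_setpoint(perceived_pavement_width):
    """Target body width: fill the pavement, minus a safety margin."""
    return min(max(perceived_pavement_width - MARGIN, W_MIN), W_MAX)

def control_step(current_width, perceived_pavement_width):
    """One proportional update of the lead-screw width command."""
    error = width_setpoint(perceived_pavement_width) - current_width
    return current_width + K_P * error * DT

w = 1.0
for pavement in [1.6, 1.6, 1.4, 1.2, 1.2, 1.5]:   # simulated RGB-D readings
    w = control_step(w, pavement)
    print(f"pavement={pavement:.1f} m -> body width={w:.3f} m")
```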


Subject(s)
Pedestrians , Robotics , Feedback , Humans , Locomotion , Perception
11.
Sensors (Basel) ; 21(8)2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33917223

ABSTRACT

The pavement inspection task, which mainly includes crack and garbage detection, is essential and carried out frequently. Instead of a human-based or dedicated-system approach, inspection can easily be carried out by integrating it with pavement sweeping machines. This work proposes a deep learning-based pavement inspection framework for the self-reconfigurable robot Panthera. The SegNet semantic segmentation framework was adopted to segment the pavement region from other objects, and Deep Convolutional Neural Network (DCNN)-based object detection is used to detect and localize pavement defects and garbage. Furthermore, a Mobile Mapping System (MMS) was adopted for geotagging the defects. The proposed system was implemented and tested on the Panthera robot equipped with NVIDIA GPU cards. The experimental results on crack and garbage detection showed that the proposed technique identifies pavement defects and litter or garbage with high accuracy and is suitable for real-time deployment for garbage detection and, eventually, sweeping or cleaning tasks.

12.
J Dermatolog Treat ; 32(7): 709-715, 2021 Nov.
Article in English | MEDLINE | ID: mdl-31873050

ABSTRACT

BACKGROUND: Patients with psoriasis experience decreased health-related quality of life due to physical and psychological burdens. OBJECTIVE: To assess the effect of a highly effective psoriasis treatment (secukinumab) on domains of the 3-level EuroQol 5 Dimensions questionnaire (EQ-5D-3L) in patients with moderate-to-severe psoriasis who reported problems at baseline. METHODS: This pooled analysis of four phase 3 clinical trials (ERASURE [NCT01365455], FIXTURE [NCT01358578], FEATURE [NCT01555125], and JUNCTURE [NCT01636687]) included patients with moderate-to-severe psoriasis randomized to receive placebo or secukinumab 300 mg who reported problems in the EQ-5D-3L domains of mobility, self-care, usual activities, pain/discomfort, or anxiety/depression at baseline. The percentage of patients reporting problems in each domain was compared at Weeks 4, 8, and 12 between patients receiving secukinumab 300 mg and placebo. RESULTS: At baseline, 570 patients receiving secukinumab 300 mg and 579 receiving placebo reported ≥1 problem in the EQ-5D-3L domains. Patients receiving secukinumab 300 mg reported improvements in all 5 domains at Weeks 4, 8, and 12 compared with placebo (all p < .0001). CONCLUSION: These findings provide additional evidence of the quality-of-life impairment in patients with moderate-to-severe psoriasis and highlight the improvement across all EQ-5D-3L domains among patients treated with secukinumab.


Subject(s)
Psoriasis , Quality of Life , Antibodies, Monoclonal, Humanized , Health Status , Humans , Psoriasis/drug therapy , Surveys and Questionnaires
13.
Sensors (Basel) ; 20(18)2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32942750

ABSTRACT

Insect detection and control at an early stage are essential in the built environment (human-made physical spaces such as homes, hotels, camps, hospitals, parks, pavements, and the food industry) and in agricultural fields. Currently, such insect control measures are manual, tedious, unsafe, and time-consuming labor-dependent tasks. With the recent advancements in Artificial Intelligence (AI) and the Internet of Things (IoT), several maintenance tasks can be automated, which significantly improves productivity and safety. This work proposes a real-time remote insect trap monitoring system and insect detection method using IoT and Deep Learning (DL) frameworks. The remote trap monitoring system is constructed using IoT and the Faster RCNN (Region-based Convolutional Neural Network) ResNet50 unified object detection framework. The Faster RCNN ResNet50 object detection framework was trained with images of built-environment and farm-field insects and deployed in the IoT. The proposed system was tested in real time using a four-layer IoT architecture with built-environment insect images captured through sticky trap sheets; farm-field insects were tested further using a separate insect image database. The experimental results proved that the proposed system could automatically identify built-environment and farm-field insects with an average accuracy of 94%.


Subject(s)
Deep Learning , Insecta , Internet of Things , Pest Control , Animals , Neural Networks, Computer
14.
Sensors (Basel) ; 20(11)2020 Jun 10.
Article in English | MEDLINE | ID: mdl-32531960

ABSTRACT

Periodic cleaning of all frequently touched surfaces such as walls, doors, locks, handles, and windows has become the first line of defense against infectious diseases. Among these, manually cleaning large wall areas is a tedious and time-consuming task. Although numerous cleaning companies are interested in deploying robotic cleaning solutions, they mostly do not address wall cleaning. To this end, we propose a new vision-based wall-following framework that acts as an add-on for any professional robotic platform to perform wall cleaning. The proposed framework uses a Deep Learning (DL) model to visually detect, classify, and segment the wall/floor surface and instructs the robot to follow the wall while executing the cleaning task. We also summarize the system architecture of the Toyota Human Support Robot (HSR), which was used as our testing platform. We evaluated the performance of the proposed framework on the HSR robot under various defined scenarios. Our experimental results indicate that the proposed framework can successfully classify and segment the wall/floor surface, detect obstacles on the wall and floor with high accuracy, and demonstrate robust wall-following behavior.

15.
Sensors (Basel) ; 20(12)2020 Jun 23.
Article in English | MEDLINE | ID: mdl-32585864

ABSTRACT

The role of mobile robots in cleaning and sanitation is increasing worldwide. Disinfection and hygiene are two integral parts of any safe indoor environment, and these factors become more critical in COVID-19-like pandemic situations. Door handles are highly sensitive contact points that are prone to contamination. Automation of the door-handle cleaning task is important not only for ensuring safety but also for improving efficiency. This work proposes an AI-enabled framework for automating cleaning tasks through a Human Support Robot (HSR). The overall cleaning process involves mobile base motion, door-handle detection, and control of the HSR manipulator to complete the cleaning task. The detection part exploits a deep-learning technique to classify the image space and provides a set of coordinates for the robot. The cooperative control between spraying and wiping was developed in the Robot Operating System (ROS). The control module uses the information obtained from the detection module to generate a task/operational space for the robot, along with evaluating the desired position to actuate the manipulators. The complete strategy is validated through numerical simulations and experiments on a Toyota HSR platform.


Subject(s)
Betacoronavirus , Coronavirus Infections/prevention & control , Disinfection/instrumentation , Pandemics/prevention & control , Pneumonia, Viral/prevention & control , Robotics/instrumentation , Algorithms , COVID-19 , Coronavirus Infections/transmission , Coronavirus Infections/virology , Deep Learning , Disinfection/methods , Equipment Design , Humans , Maintenance , Motion , Pneumonia, Viral/transmission , Pneumonia, Viral/virology , Robotics/methods , Robotics/statistics & numerical data , SARS-CoV-2
16.
Rev. Fac. Odontol. Univ. Antioq ; 21(2): 198-207, jun. 2010. ilus, tab
Article in Spanish | LILACS | ID: lil-551746

ABSTRACT



Introduction: orthopantomography has been widely used before, during, and after orthodontic treatment to evaluate root parallelism. Canines are the teeth that usually show the greatest distortion in this technique, and the position of this tooth is considered by many a key element for obtaining stability after orthodontic treatment. At present, radiographic devices such as Cone Beam Computed Tomography (CBCT) provide better precision and reliability in their images. The purpose of this study was to compare the mesiodistal angulation of canines as measured by orthopantomography and CBCT, taken at the same time and under the same parameters. Methods: the angulation with respect to a vertical of the four canines was compared in 29 patients undergoing the final phase of orthodontic treatment, obtained by orthopantomography and CBCT using a 3D Galileos® (Sirona®) radiographic device. The root canal and the pulp chamber were taken as reference points for the longitudinal axis of the tooth, ignoring abnormal trajectories of the root canal and apical curvatures of the root. The Galaxis® (Sirona®) and MB-Ruler© version 3.6 computer programs were used to measure the mesiodistal angulation of the teeth. Student's t-test for paired samples was used for groups that showed a normal distribution and homogeneous dispersion; Wilcoxon's signed-rank test was used for groups that did not meet these requirements. Results: statistically significant differences were found when the two radiographic images were compared. The mesiodistal angulation of canines was always larger when measured by orthopantomography than by CBCT. Conclusion: in the selected sample and under the conditions of this study, the mesiodistal angulation of canines showed an increase of 1° to 2° when measured with orthopantomography compared with CBCT.
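The statistical comparison described here — Student's t-test for paired samples when normality and homogeneity hold, Wilcoxon's signed-rank test otherwise — follows a standard pattern. A small SciPy sketch with made-up angulation values (the study's raw data are not given in the abstract) is shown below:

```python
# Paired comparison of two measurement methods (illustrative data only).
import numpy as np
from scipy import stats

# Hypothetical mesiodistal angulations (degrees) for the same canines.
opg = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.7, 5.4, 6.1])   # orthopantomography
cbct = np.array([3.9, 4.8, 3.5, 5.6, 4.6, 5.2, 4.1, 4.7])  # CBCT

diff = opg - cbct
# Shapiro-Wilk normality check on the paired differences.
_, p_norm = stats.shapiro(diff)
if p_norm > 0.05:
    stat, p = stats.ttest_rel(opg, cbct)        # paired Student's t-test
    test = "paired t-test"
else:
    stat, p = stats.wilcoxon(opg, cbct)         # signed-rank alternative
    test = "Wilcoxon signed-rank"
print(f"{test}: statistic={stat:.3f}, p={p:.4f}, mean diff={diff.mean():.2f} deg")
```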


Subject(s)
Humans , Cone-Beam Computed Tomography , Cuspid , Radiography, Panoramic , Orthodontics
17.
Rev. chil. fonoaudiol ; 9(1): 27-39, oct. 2009. graf, tab
Article in Spanish | LILACS | ID: lil-551876

ABSTRACT



Objective: to determine whether differences exist in nasal air escape during the emission of plosive and fricative phonemes in patients with operated unilateral cleft lip and palate. Methods: an Aerophonoscope IIc, an instrument that captures nasal air escape during speech, was used. The study sample comprised 74 patients diagnosed with operated unilateral cleft lip and palate, ranging in age from 58 months (4 years 10 months) to 540 months (45 years). A t-test for group comparisons was computed for the statistical analysis. Results: statistically significant differences were found in the percentage of nasal air escape between fricative (52.22%) and plosive phonemes (35.94%). Conclusion: fricative phonemes are the most affected among patients with unilateral cleft lip and palate, and the most affected fricative phoneme is /s/. Aerophonoscopy is an objective and valid test for determining the degree of nasal air escape.


Subject(s)
Humans , Adolescent , Adult , Child, Preschool , Child , Middle Aged , Cleft Palate/physiopathology , Phonation/physiology , Cleft Lip/physiopathology , Speech Production Measurement/instrumentation , Nose/physiopathology , Speech Disorders/physiopathology , Phonetics , Respiration
18.
Rev. chil. fonoaudiol ; 6(2): 39-51, dic. 2005. ilus, tab, graf
Article in Spanish | LILACS | ID: lil-439398

ABSTRACT

Adenoid tissue appears as part of Waldeyer's lymphatic ring during the first years of life. It reaches its maximum volume at 6 years of age and begins to recede between 8 and 10 years of age, shrinking to a small pharyngeal cavity in adults. In the presence of hypertrophic adenoids, children may present respiratory alterations and disturbances of general growth. To determine patterns of adenoid involution, adenoid tissue and nasopharyngeal areas were measured on lateral cephalometric radiographs, following the method of Handelman and Osborne. Bone age was obtained through the Björk and Helm skeletal maturity analysis of hand radiographs, compared against the Canals atlas of skeletal maturity. A sample of 5 groups was selected, each with 20 individuals between 10 and 14 years of age. The results were analyzed by linear regression and validated with analysis of variance.
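The analysis in this abstract — fitting a linear trend of adenoid area against age group and checking group differences with analysis of variance — maps directly onto standard SciPy calls. A sketch with invented area measurements follows (the study's data are not reported in the abstract):

```python
# Linear regression of adenoid area vs. age, validated with one-way ANOVA.
# All measurements below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ages = np.repeat([10, 11, 12, 13, 14], 20)          # 5 groups x 20 subjects
# Simulated adenoid areas (mm^2) shrinking with age, plus noise.
areas = 400 - 25 * (ages - 10) + rng.normal(0, 30, ages.size)

fit = stats.linregress(ages, areas)                  # involution trend
print(f"slope={fit.slope:.1f} mm^2/year, r^2={fit.rvalue**2:.2f}")

groups = [areas[ages == a] for a in [10, 11, 12, 13, 14]]
f_stat, p = stats.f_oneway(*groups)                  # differences across ages
print(f"ANOVA: F={f_stat:.2f}, p={p:.2e}")
```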


Subject(s)
Humans , Male , Adolescent , Female , Child , Adenoids/anatomy & histology , Adenoids/growth & development , Nasopharynx/anatomy & histology , Nasopharynx/growth & development , Age Factors , Analysis of Variance , Adenoids , Cephalometry , Hyperplasia , Linear Models , Hand , Nasopharynx , Sex Factors , Teleradiology