Results 1 - 20 of 6,964
1.
R Soc Open Sci ; 11(10): 240180, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39386990

ABSTRACT

As large language models (LLMs) continue to gain popularity due to their human-like traits and the intimacy they offer to users, their societal impact inevitably expands. This raises the need for comprehensive studies to fully understand LLMs and reveal their potential opportunities, drawbacks and overall societal impact. With that in mind, this research conducted an extensive investigation into seven LLMs, aiming to assess the temporal stability of, and inter-rater agreement on, their responses to personality instruments at two time points. In addition, the LLMs' personality profiles were analysed and compared with human normative data. The findings revealed varying levels of inter-rater agreement in the LLMs' responses over a short time, with some LLMs showing higher agreement (e.g. Llama3 and GPT-4o) than others (e.g. GPT-4 and Gemini). Furthermore, agreement depended on the instrument used as well as on the domain or trait, implying variable robustness in LLMs' ability to reliably simulate stable personality characteristics. On the scales that showed at least fair agreement, LLMs displayed a mostly socially desirable profile in both agentic and communal domains, as well as a prosocial personality profile reflected in higher agreeableness and conscientiousness and lower Machiavellianism. Exhibiting temporal stability and coherent responses on personality traits is crucial for AI systems, given their societal impact and AI safety concerns.

2.
Comput Biol Med ; 182: 109183, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357134

ABSTRACT

Explainable artificial intelligence (XAI) aims to provide machine learning (ML) methods that people can comprehend and properly trust, and to produce more explainable models. In medical imaging, XAI has been adopted to interpret deep learning black-box models and demonstrate the trustworthiness of machine decisions and predictions. In this work, we propose a deep learning and explainable AI-based framework for segmenting and classifying brain tumors. The framework consists of two parts. The first part, an encoder-decoder-based DeepLabv3+ architecture, is implemented with Bayesian Optimization (BO)-based hyperparameter initialization. Features are extracted at different scales through the Atrous Spatial Pyramid Pooling (ASPP) technique and passed to the output layer for tumor segmentation. In the second part of the framework, two customized models are proposed, named Inverted Residual Bottleneck 96 layers (IRB-96) and Inverted Residual Bottleneck Self-Attention (IRB-Self). Both models are trained on the selected brain tumor datasets, and features are extracted from the global average pooling and self-attention layers. Features are fused using a serial approach, and classification is performed. BO-based hyperparameter optimization of the neural network classifiers is performed to optimize the classification results. An XAI method named LIME is applied to check the interpretability of the proposed models. The experimental process was performed on the Figshare dataset, and an average segmentation accuracy of 92.68% and a classification accuracy of 95.42% were obtained, respectively. Compared with state-of-the-art techniques, the proposed framework shows improved accuracy.
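The serial feature-fusion step described in this abstract amounts to concatenating the two models' feature vectors per sample before classification. The sketch below is a minimal illustration; the batch size, feature dimensions, and variable names are assumptions, not values from the paper.

```python
# Illustrative only: sizes and values are assumptions, not from the paper.
feats_irb96 = [[0.1] * 96 for _ in range(8)]    # e.g. global-average-pooling features (IRB-96)
feats_irbself = [[0.2] * 64 for _ in range(8)]  # e.g. self-attention features (IRB-Self)

# Serial fusion: concatenate the two feature vectors for each sample,
# then feed the fused vector to the downstream classifier.
fused = [a + b for a, b in zip(feats_irb96, feats_irbself)]
print(len(fused), len(fused[0]))  # 8 160
```

Serial (concatenation) fusion keeps every feature from both branches, at the cost of a higher-dimensional input to the classifier.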

3.
Acta Psychol (Amst) ; 250: 104501, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357416

ABSTRACT

The integration of artificial intelligence (AI) technology in e-commerce has stimulated scholarly attention; however, studies on AI in e-commerce remain relatively few. The current study aims to evaluate how AI chatbots persuade users to consider chatbot recommendations in a web-based buying situation. Employing elaboration likelihood theory, the study presents an analytical framework for identifying the factors and internal mechanisms behind consumers' readiness to adopt AI chatbot recommendations. The authors evaluated the model using questionnaire responses from 411 Chinese AI chatbot consumers. The findings indicate that chatbot recommendation reliability and accuracy are positively related to AI technology trust and negatively related to perceived self-threat. In addition, AI technology trust is positively related to the intention to adopt the chatbot's decision, whereas perceived self-threat is negatively related to that intention. Perceived dialogue strengthens the positive relationship between AI technology trust and the intention to adopt the chatbot's decision and weakens the negative relationship between perceived self-threat and that intention.

4.
Br J Clin Pharmacol ; 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359001

ABSTRACT

Drug-drug interactions (DDIs) present a significant health burden, compounded by clinician time constraints and poor patient health literacy. We assessed the ability of ChatGPT (a generative artificial intelligence-based large language model) to predict DDIs in a real-world setting. Demographics, diagnoses and prescribed medicines for 120 hospitalized patients were input through three standardized prompts to ChatGPT version 3.5 and compared against pharmacist DDI evaluation to estimate diagnostic accuracy. The area under the receiver operating characteristic curve and inter-rater reliability (Cohen's and Fleiss' kappa coefficients) were calculated. ChatGPT's responses differed based on prompt wording style, with higher sensitivity for prompts mentioning 'drug interaction'. Confusion matrices displayed low true positive and high true negative rates, and there was minimal agreement between ChatGPT and pharmacists (Cohen's kappa values 0.077-0.143). Low sensitivity values suggest a lack of success in identifying DDIs by ChatGPT, and further development is required before it can reliably assess potential DDIs in real-world scenarios.
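The agreement statistic used here, Cohen's kappa, discounts the agreement two raters would reach by chance. A minimal sketch of the standard 2x2 formula follows; the counts in the example are invented for illustration, not the study's confusion matrix.

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for two binary raters (e.g. ChatGPT vs. pharmacist
    'interaction present' calls) computed from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # Chance agreement: product of each rater's marginal 'yes' rates
    # plus product of each rater's marginal 'no' rates.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

# Invented counts, for illustration only.
print(round(cohens_kappa(tp=20, fp=5, fn=10, tn=65), 3))  # 0.625
```

Values near 0, like the 0.077-0.143 range reported above, mean agreement barely above chance.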

5.
Small Methods ; : e2401108, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359026

ABSTRACT

Transmission electron microscopy (TEM) plays a crucial role in heterogeneous catalysis for assessing the size distribution of supported metal nanoparticles. Typically, nanoparticle size is quantified by measuring the diameter under the assumption of spherical geometry, a simplification that limits the precision needed for advancing synthesis-structure-performance relationships. Currently, there is a lack of techniques that can reliably extract more meaningful information from atomically resolved TEM images, like nuclearity or geometry. Here, cycle-consistent generative adversarial networks (CycleGANs) are explored to bridge experimental and simulated images, directly linking experimental observations with information from their underlying atomic structure. Using the versatile Pt/CeO2 (Pt particles centered ≈2 nm) catalyst synthesized by impregnation, large datasets of experimental scanning transmission electron micrographs and physical image simulations are created to train a CycleGAN. A subsequent size-estimation network is developed to determine the nuclearity of imaged nanoparticles, providing plausible estimates for ≈70% of experimentally observed particles. This automatic approach enables precise size determination of supported nanoparticle-based catalysts, overcoming the crystal-orientation limitations of conventional techniques and promising high accuracy with sufficient training data. Tools like this are envisioned to be of great use in designing and characterizing catalytic materials with improved atomic precision.

6.
Global Spine J ; : 21925682241290752, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359113

ABSTRACT

STUDY DESIGN: Narrative review. OBJECTIVES: Artificial intelligence (AI) is being increasingly applied to the domain of spine surgery. We present a review of AI in spine surgery, including its use across all stages of the perioperative process and applications for research. We also provide commentary regarding future ethical considerations of AI use and how it may affect surgeon-industry relations. METHODS: We conducted a comprehensive literature review of peer-reviewed articles that examined applications of AI during the pre-, intra-, or postoperative spine surgery process. We also discussed the relationship among AI, spine industry partners, and surgeons. RESULTS: Preoperatively, AI has mainly been applied to image analysis, patient diagnosis and stratification, and decision-making. Intraoperatively, AI has been used to aid image guidance and navigation. Postoperatively, AI has been used for outcomes prediction and analysis. AI can enable curation and analysis of huge datasets that can enhance research efforts. Large amounts of data are being accrued by industry sources for use by their AI platforms, though the inner workings of these datasets and algorithms are not well known. CONCLUSIONS: AI has found numerous uses in the pre-, intra-, and postoperative spine surgery process, and the applications of AI continue to grow. The clinical applications and benefits of AI will continue to be more fully realized, but so will certain ethical considerations. Making industry-sponsored databases open source, or at least somehow available to the public, will help alleviate potential biases and obscurities between surgeons and industry and will benefit patient care.

7.
Cureus ; 16(8): e68307, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39350844

ABSTRACT

Introduction The study assesses the readability of AI-generated brochures for common emergency medical conditions like heart attack, anaphylaxis, and syncope. The study thus compares AI-generated patient information guides for common emergency medical conditions produced by ChatGPT and Google Gemini. Methodology Brochures for each condition were created by both AI tools. Readability was assessed using the Flesch-Kincaid Calculator, evaluating word count, sentence count and ease of understanding. Reliability was measured using the Modified DISCERN Score. The similarity between AI outputs was determined using Quillbot. Statistical analysis was performed with R (v4.3.2). Results ChatGPT and Gemini produced brochures with no statistically significant differences in word count (p = 0.2119), sentence count (p = 0.1276), readability (p = 0.3796), or reliability (p = 0.7407). However, ChatGPT provided more detailed content, with 32.4% more words (582.80 vs. 440.20) and 51.6% more sentences (67.00 vs. 44.20). In addition, Gemini's brochures were slightly easier to read, with a higher ease score (50.62 vs. 41.88). Reliability varied by topic, with ChatGPT scoring higher for heart attack (4 vs. 3) and choking (3 vs. 2), while Google Gemini scored higher for anaphylaxis (4 vs. 3) and drowning (4 vs. 3), highlighting the need for topic-specific evaluation. Conclusions AI-generated brochures from ChatGPT and Gemini are comparable for patient information on emergency medical conditions, with no statistically significant differences between the two tools in readability or reliability.

8.
Cureus ; 16(8): e68313, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39350876

ABSTRACT

Recent advances in generative artificial intelligence (AI) have enabled remarkable capabilities in generating images, audio, and videos from textual descriptions. Tools like Midjourney and DALL-E 3 can produce striking visualizations from simple prompts, while services like Kaiber.ai and RunwayML Gen-2 can generate short video clips. These technologies offer intriguing possibilities for clinical and educational applications in otolaryngology. Visualizing symptoms like vertigo or tinnitus could bolster patient-provider understanding, especially for those with communication challenges. One can envision patients selecting images to complement chief complaints, with AI-generated differential diagnoses. However, inaccuracies and biases necessitate caution. Images must serve to enrich, not replace, clinical judgment. While not a substitute for healthcare professionals, text-to-image and text-to-video generation could become valuable complementary diagnostic tools. Harnessed judiciously, generative AI offers new ways to enhance clinical dialogues. However, education on proper, equitable usage is paramount as these rapidly evolving technologies make their way into medicine.

9.
Cureus ; 16(8): e68298, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39350878

ABSTRACT

GPT-4 Vision (GPT-4V) represents a significant advancement in multimodal artificial intelligence, enabling text generation from images without specialized training. This marks the transformation of ChatGPT as a large language model (LLM) into GPT-4's promised large multimodal model (LMM). As these AI models continue to advance, they may enhance radiology workflow and aid with decision support. This technical note explores potential GPT-4V applications in radiology and evaluates performance for sample tasks. GPT-4V capabilities were tested using images from the web, personal and institutional teaching files, and hand-drawn sketches. Prompts evaluated scientific figure analysis, radiologic image reporting, image comparison, handwriting interpretation, sketch-to-code, and artistic expression. In this limited demonstration of GPT-4V's capabilities, it showed promise in classifying images, counting entities, comparing images, and deciphering handwriting and sketches. However, it exhibited limitations in detecting some fractures, discerning changes in the size of lesions, accurately interpreting complex diagrams, and consistently characterizing radiologic findings. Artistic expression responses were coherent. While GPT-4V may eventually assist with tasks related to radiology, current reliability gaps highlight the need for continued training and improvement before consideration for any medical use by the general public and ultimately clinical integration. Future iterations could enable a virtual assistant to discuss findings, improve reports, extract data from images, and provide decision support based on guidelines, white papers, and appropriateness criteria. Human expertise remains essential for safe practice, and partnerships between physicians, researchers, and technology leaders are necessary to safeguard against risks like bias and privacy concerns.

10.
Digit Health ; 10: 20552076241284936, 2024.
Article in English | MEDLINE | ID: mdl-39351313

ABSTRACT

Objective: The enabling and derailing factors for using artificial intelligence (AI)-based applications to improve patient care in the United Arab Emirates (UAE) from the physicians' perspective are investigated. Factors to accelerate the adoption of AI-based applications in the UAE are identified to aid implementation. Methods: A qualitative, inductive research methodology was employed, utilizing semi-structured interviews with 12 physicians practicing in the UAE. The collected data were analyzed using NVIVO software and grounded theory was used for thematic analysis. Results: This study identified factors specific to the deployment of AI to transform patient care in the UAE. First, physicians must control the applications and be fully trained and engaged in the testing phase. Second, healthcare systems need to be connected, and the AI outcomes need to be easily interpretable by physicians. Third, the reimbursement for AI-based applications should be settled by insurance or the government. Fourth, patients should be aware of and accept the technology before physicians use it to avoid negative consequences for the doctor-patient relationship. Conclusions: This research was conducted with practicing physicians in the UAE to determine their understanding of enabling and derailing factors for improving patient care through AI-based applications. The importance of involving physicians as the accountable agents for AI tools is highlighted. Public awareness regarding AI in healthcare should be improved to drive public acceptance.

11.
Front Artif Intell ; 7: 1393903, 2024.
Article in English | MEDLINE | ID: mdl-39351510

ABSTRACT

Introduction: Recent advances in generative Artificial Intelligence (AI) and Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and providing guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use. This indicates potential for ChatGPT to support educators and serve as a virtual tutor for students. Methods: To explore this potential, we conducted a comprehensive analysis comparing the emotional content in responses from ChatGPT and human answers to 2000 questions sourced from Stack Overflow (SO). The emotional aspects of the answers were examined to understand how the emotional tone of AI responses compares to that of human responses. Results: Our analysis revealed that ChatGPT's answers are generally more positive compared to human responses. In contrast, human answers often exhibit emotions such as anger and disgust. Significant differences were observed in emotional expressions between ChatGPT and human responses, particularly in the emotions of anger, disgust, and joy. Human responses displayed a broader emotional spectrum compared to ChatGPT, suggesting greater emotional variability among humans. Discussion: The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a more uniformly positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI and human interactions, particularly in educational contexts where emotional nuances can impact learning and communication.
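The comparison this study describes — emotion frequencies in ChatGPT answers versus human answers — reduces to building a per-source distribution over emotion labels. A toy sketch follows; the labels are invented and the study's actual emotion classifier is not reproduced here.

```python
from collections import Counter

def emotion_distribution(labels):
    """Relative frequency of each emotion label across a set of answers."""
    counts = Counter(labels)
    total = len(labels)
    return {emotion: n / total for emotion, n in counts.items()}

# Invented labels, for illustration only: one label per answer.
chatgpt_labels = ["joy", "joy", "neutral", "joy"]
human_labels = ["anger", "joy", "disgust", "anger"]

print(emotion_distribution(chatgpt_labels)["joy"])   # 0.75
print(emotion_distribution(human_labels)["anger"])   # 0.5
```

Distributions like these can then be compared per emotion (e.g. with a chi-squared or proportion test) to test whether the sources differ significantly, as the study reports for anger, disgust, and joy.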

12.
JMIR Form Res ; 8: e51383, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353189

ABSTRACT

BACKGROUND: Generative artificial intelligence (AI) and large language models, such as OpenAI's ChatGPT, have shown promising potential in supporting medical education and clinical decision-making, given their vast knowledge base and natural language processing capabilities. As a general-purpose AI system, ChatGPT can complete a wide range of tasks, including differential diagnosis, without additional training. However, the specific application of ChatGPT in learning and applying a series of specialized, context-specific tasks mimicking the workflow of a human assessor - such as administering a standardized assessment questionnaire, inputting assessment results in a standardized form, and interpreting assessment results strictly following credible, published scoring criteria - has not been thoroughly studied. OBJECTIVE: This exploratory study aims to evaluate and optimize ChatGPT's capabilities in administering and interpreting the Sour Seven Questionnaire, an informant-based delirium assessment tool. Specifically, the objectives were to train ChatGPT-3.5 and ChatGPT-4 to understand and correctly apply the Sour Seven Questionnaire to clinical vignettes using prompt engineering, assess the performance of these AI models in identifying and scoring delirium symptoms against scores from human experts, and refine and enhance the models' interpretation and reporting accuracy through iterative prompt optimization. METHODS: We used prompt engineering to train ChatGPT-3.5 and ChatGPT-4 models on the Sour Seven Questionnaire, a tool for assessing delirium through caregiver input. Prompt engineering is a methodology used to enhance the AI's processing of inputs by meticulously structuring the prompts to improve accuracy and consistency in outputs. In this study, prompt engineering involved creating specific, structured commands that guided the AI models in understanding and applying the assessment tool's criteria accurately to clinical vignettes.
This approach also included designing prompts to explicitly instruct the AI on how to format its responses, ensuring they were consistent with clinical documentation standards. RESULTS: Both ChatGPT models demonstrated promising proficiency in applying the Sour Seven Questionnaire to the vignettes, despite initial inconsistencies and errors. Performance notably improved through iterative prompt engineering, enhancing the models' capacity to detect delirium symptoms and assign scores. Prompt optimizations included adjusting the scoring methodology to accept only definitive "Yes" or "No" responses, revising the evaluation prompt to mandate responses in a tabular format, and guiding the models to adhere to the 2 recommended actions specified in the Sour Seven Questionnaire. CONCLUSIONS: Our findings provide preliminary evidence supporting the potential utility of AI models such as ChatGPT in administering standardized clinical assessment tools. The results highlight the significance of context-specific training and prompt engineering in harnessing the full potential of these AI models for health care applications. Despite the encouraging results, broader generalizability and further validation in real-world settings warrant additional research.
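The prompt constraints this abstract describes — definitive Yes/No answers and tabular output — can be expressed as a structured prompt template. The wording below is a hypothetical illustration of the idea, not the prompts actually used in the study.

```python
# Hypothetical template; the study's actual prompt wording is not reproduced.
PROMPT_TEMPLATE = (
    "You are administering the Sour Seven Questionnaire based on the "
    "clinical vignette below.\n"
    "For each of the 7 items, answer only 'Yes' or 'No' (no qualifiers).\n"
    "Return your answers as a table with columns: item | answer | points.\n"
    "Then report the total score and the recommended action.\n\n"
    "Vignette: {vignette}"
)

def build_prompt(vignette):
    """Fill the structured template with a clinical vignette."""
    return PROMPT_TEMPLATE.format(vignette=vignette)

prompt = build_prompt("An 82-year-old inpatient whose caregiver reports...")
print("answer only 'Yes' or 'No'" in prompt)  # True
```

Constraining the response format this way is what the study's iterative prompt optimization converged on: it makes the model's output machine-checkable against the published scoring criteria.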


Subjects
Delirium, Humans, Delirium/diagnosis, Surveys and Questionnaires, Artificial Intelligence
13.
Article in English | MEDLINE | ID: mdl-39367946

ABSTRACT

The increasing use of plastics in rural environments has led to concerns about agricultural plastic waste (APW). However, the plasticulture information gap hinders waste management planning and may lead to plastic residue leakage into the environment with consequent microplastic formation. The location and estimated quantity of the APW are crucial for territorial planning and public policies regarding land use and waste management. Agri-plastic remote detection has attracted increased attention but requires a consensus approach, particularly for mapping plastic-mulched farmlands (PMFs) scattered across vast areas. This article tests whether a streamlined time-series approach minimizes PMF confusion with the background using less processing. Based on the literature, we performed a vast assessment of machine learning techniques and investigated the importance of features in mapping tomato PMF. We evaluated pixel-based and object-based classifications in harmonized Sentinel-2 level-2A images, added plastic indices, and compared six classifiers. The best result showed an overall accuracy of 99.7% through pixel-based using the multilayer perceptron (MLP) classifier. The 3-time series with a 30-day composite exhibited increased accuracy, a decrease in background confusion, and was a viable alternative for overcoming the impact of cloud cover on images at certain times of the year in our study area, which leads to a potentially reliable methodology for APW mapping for future studies. To our knowledge, the presented PMF map is the first for Latin America. This represents a first step toward promoting the circularity of all agricultural plastic in the region, minimizing the impacts of degradation on the environment.
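The overall accuracy figure reported above is the fraction of correctly classified samples (here, pixels in the pixel-based classification). A minimal sketch with invented labels; neither the MLP classifier nor the Sentinel-2 pipeline is reproduced:

```python
def overall_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted class matches the reference class."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Invented labels: 1 = plastic-mulched farmland (PMF), 0 = background.
y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 1, 0, 1, 1, 0, 0, 1]
print(overall_accuracy(y_true, y_pred))  # 0.875
```

For imbalanced scenes (scattered PMF against a vast background), overall accuracy should be read alongside per-class confusion rates, which is why the study emphasizes reduced background confusion as well.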

14.
Article in English | MEDLINE | ID: mdl-39368900

ABSTRACT

Artificial intelligence (AI) is already an essential tool in the handling of large data sets in epidemiology and basic research. Significant contributions to radiological diagnosis are emerging alongside increasing use of digital pathology. The future lies in integrating this information together with clinical data relevant to each individual patient. Linkage with clinical protocols will enable personalized management options to be presented to the oncologist of the future. Radiotherapy has the distinction of being the first to have a National Institute for Health and Care Excellence (NICE)-approved AI-based recommendation. There is the opportunity to revolutionize the workflow with many tasks currently undertaken by clinicians taken over by AI-based systems for volume outlining, planning, and quality assurance. Education and training will be essential to understand the AI processes and inputs. Clinicians will however have to feel confident interrogating the AI-derived information and in communicating AI-derived treatment plans to patients.

15.
Cognition ; 254: 105958, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39362054

ABSTRACT

How do ordinary people evaluate robots that make morally significant decisions? Previous work has found both equal and different evaluations, and different ones in either direction. In 13 studies (N = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: Humans are blamed less than robots specifically for inaction decisions-here, refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work into people's moral judgment of robots and humans.

16.
Laryngoscope ; 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39363661

ABSTRACT

OBJECTIVES: Here we describe the development and pilot testing of the first artificial intelligence (AI) software "copilot" to help train novices to competently perform flexible fiberoptic laryngoscopy (FFL) on a manikin and improve their uptake of FFL skills. METHODS: Supervised machine learning was used to develop an image classifier model, dubbed the "anatomical region classifier," responsible for predicting the location of the camera in the upper aerodigestive tract, and an object detection model, dubbed the "anatomical structure detector," responsible for locating and identifying key anatomical structures in images. Training data were collected by performing FFL on an AirSim Combo Bronchi X manikin (United Kingdom, TruCorp Ltd) using an Ambu aScope 4 RhinoLaryngo Slim connected to an Ambu® aView™ 2 Advance Displaying Unit (Ballerup, Ambu A/S). Medical students were prospectively recruited to try the FFL copilot, rate its ease of use, and self-rate their skills with and without the copilot. RESULTS: The model classified anatomical regions with an overall accuracy of 91.9% on the validation set and 80.1% on the test set. The model detected anatomical structures with an overall mean average precision of 0.642. Through various optimizations, we were able to run the AI copilot at approximately 28 frames per second (FPS), which is perceptually indistinguishable from real time and nearly matches the video frame rate of 30 FPS. Sixty-four novice medical students were recruited for feedback on the copilot. Although 90.9% strongly agreed/agreed that the AI copilot was easy to use, their self-rating of FFL skills following use of the copilot was overall equivalent to their self-rating without the copilot. CONCLUSIONS: The AI copilot tracked successful capture of diagnosable views of key anatomical structures, effectively guiding users through FFL to ensure all anatomical structures are sufficiently captured. This tool has the potential to assist novices in efficiently gaining competence in FFL. LEVEL OF EVIDENCE: NA Laryngoscope, 2024.

17.
Folia Med (Plovdiv) ; 66(3): 303-311, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39365615

ABSTRACT

The ongoing growth of artificial intelligence (AI) involves virtually every aspect of oncologic care in medicine. Although AI is in its infancy, it has shown great promise in the diagnosis of oncologic urological conditions. This paper aims to explore the expanding role of artificial intelligence in histopathological diagnosis in urological oncology. We conducted a focused review of the literature on AI in urological oncology, searching PubMed and Google Scholar for recent advancements in histopathological diagnosis using AI. Various keyword combinations were used to find relevant sources published before April 2, 2024. We approached this article by focusing on the impact of AI on common urological malignancies, incorporating the use of different AI algorithms. We focused on AI's potential to aid urologists and pathologists in histological cancer diagnosis. Promising results suggest AI can enhance diagnosis and personalized patient care, yet further refinements are needed before widespread hospital adoption. AI is transforming urological oncology by improving histopathological diagnosis and patient care. This review highlights AI's advancements in diagnosing prostate, renal cell, and bladder cancer. It is anticipated that as AI becomes more integrated into clinical practice, it will have a greater influence on diagnosis and improve patient outcomes.


Subjects
Artificial Intelligence, Urologic Neoplasms, Humans, Urologic Neoplasms/pathology, Urologic Neoplasms/diagnosis, Urinary Bladder Neoplasms/pathology, Urinary Bladder Neoplasms/diagnosis, Prostatic Neoplasms/pathology, Prostatic Neoplasms/diagnosis, Kidney Neoplasms/pathology, Kidney Neoplasms/diagnosis, Male, Algorithms, Carcinoma, Renal Cell/pathology, Carcinoma, Renal Cell/diagnosis
18.
Sci Rep ; 14(1): 22878, 2024 10 02.
Article in English | MEDLINE | ID: mdl-39358399

ABSTRACT

Satellite imagery is a potent tool for estimating human wealth and poverty, especially in regions lacking reliable data. This study compares a range of approaches to poverty estimation from satellite images, spanning from expert-based to fully machine learning-based methodologies. Human experts ranked clusters from the Tanzania DHS survey using high-resolution satellite images. Expert-defined features were then utilized in a machine learning algorithm to estimate poverty. An explainability method was applied to assess the importance and interaction of these features in poverty prediction. Additionally, a convolutional neural network (CNN) was employed to estimate poverty from medium-resolution satellite images of the same locations. Our analysis indicates that increased human involvement in poverty estimation diminishes accuracy compared to machine learning involvement, as exemplified by the case of Tanzania. Expert-defined features exhibited significant overlap and poor interaction when used together in a classifier. Conversely, the CNN-based approach outperformed human experts, demonstrating superior predictive capability with medium-resolution images. These findings highlight the importance of leveraging machine learning explainability methods to identify predictive elements that may be overlooked by human experts. This study advocates for the integration of emerging technologies with traditional methodologies to optimize data collection and the analysis of poverty and welfare.


Subjects
Machine Learning, Neural Networks, Computer, Poverty, Satellite Imagery, Humans, Satellite Imagery/methods, Tanzania, Algorithms, Bias
19.
Sci Rep ; 14(1): 22884, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39358433

ABSTRACT

The integration of IoT systems into automotive vehicles has raised concerns associated with intrusion detection within these systems. Vehicles equipped with a controller area network (CAN) control several systems within a vehicle where disruptions in function can lead to significant malfunctions, injuries, and even loss of life. Detecting disruption is a primary concern as vehicles move to higher degrees of autonomy and the possibility of self-driving is explored. Tackling cyber-security challenges within CAN is essential to improve vehicle and road safety. Differences in standards between manufacturers make the implementation of a single, universal system difficult; therefore, data-driven techniques are needed to tackle the ever-evolving landscape of cyber security within the automotive field. This paper examines the possibility of using machine learning classifiers to identify cyber assaults in CAN systems. To achieve applicability, we cover two classifiers: the extreme gradient boosting and K-nearest neighbor algorithms. However, as their performance hinges on proper parameter selection, a modified metaheuristic optimizer is introduced as well to tackle parameter optimization. The proposed approach is tested on a publicly available dataset, with the best-performing models exceeding 89% accuracy. Optimizer outcomes have undergone rigorous statistical analysis, and the best-performing models were subjected to analysis using explainable artificial intelligence techniques to determine feature impacts on the best-performing model.
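The parameter-selection loop that the paper's modified metaheuristic addresses can be illustrated with an exhaustive stand-in: score every candidate hyperparameter combination on validation data and keep the best. The toy search space and objective below are invented for illustration; the paper's actual optimizer and classifiers are not reproduced.

```python
import itertools

def exhaustive_search(objective, space):
    """Evaluate every hyperparameter combination and keep the best score.
    A metaheuristic replaces this full sweep with guided sampling when
    the space is too large to enumerate."""
    keys = list(space)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Invented KNN-style space; the toy objective assumes validation
# accuracy peaks at n_neighbors=5 (an assumption, not study data).
space = {"n_neighbors": [1, 3, 5, 7, 9], "weights": ["uniform", "distance"]}
objective = lambda p: 1.0 - 0.05 * abs(p["n_neighbors"] - 5)
best, score = exhaustive_search(objective, space)
print(best["n_neighbors"], score)  # 5 1.0
```

In practice the objective would be cross-validated classifier accuracy on the CAN intrusion dataset, making each evaluation expensive, which is exactly why guided metaheuristic search is preferred over a full sweep.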

20.
Front Big Data ; 7: 1393758, 2024.
Article in English | MEDLINE | ID: mdl-39364222

ABSTRACT

Detecting lung diseases in medical images can be quite challenging for radiologists. In some cases, even experienced experts may struggle with accurately diagnosing chest diseases, leading to potential inaccuracies due to complex or unseen biomarkers. This review paper delves into various datasets and machine learning techniques employed in recent research for lung disease classification, focusing on pneumonia analysis using chest X-ray images. We explore conventional machine learning methods, pretrained deep learning models, customized convolutional neural networks (CNNs), and ensemble methods. A comprehensive comparison of different classification approaches is presented, encompassing data acquisition, preprocessing, feature extraction, and classification using machine vision, machine and deep learning, and explainable AI (XAI). Our analysis highlights the superior performance of transfer learning-based methods using CNNs and ensemble models/features for lung disease classification. In addition, our comprehensive review offers insights for researchers in other medical domains who also utilize radiological images. By providing a thorough overview of various techniques, our work enables the establishment of effective strategies and the identification of suitable methods for a wide range of challenges. Currently, beyond traditional evaluation metrics, researchers emphasize the importance of XAI techniques in machine and deep learning models and their applications in classification tasks. This incorporation helps in gaining a deeper understanding of their decision-making processes, leading to improved trust, transparency, and overall clinical decision-making. Our comprehensive review serves as a valuable resource for researchers and practitioners seeking to advance lung disease detection using machine learning and XAI, as well as for those working in other diverse domains.
