Results 1 - 20 of 865
1.
Article in English | MEDLINE | ID: mdl-39017767

ABSTRACT

We investigated the association between computer and mobile phone online activities and adolescents' problem behaviors (e.g., depressive symptoms, withdrawal, somatic complaints, attention deficit, and aggression) using data from the Korean Children and Youth Panel Survey and latent growth model analysis. The results demonstrated that text-related activities lowered withdrawal and attention deficit. Higher use of online communities or personal websites was associated with higher depressive symptoms, withdrawal, somatic symptoms, and aggression. Online gaming increased both the initial value of attention deficit and its rate of decrease. Taking photos decreased withdrawal. Watching videos increased depressive symptoms, withdrawal, and attention deficit. Listening to music lowered the initial value of attention deficit and the rate of decrease of somatic symptoms. Accessing adult websites increased attention deficit and aggression. Educational information searches reduced attention deficit and aggression. Online transactions increased somatic symptoms. This study indicates that adolescents' problem behaviors may manifest differently depending on the type of information technology use.

2.
Sensors (Basel) ; 24(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001004

ABSTRACT

The survival and growth of young plants hinge on various factors, such as seed quality and environmental conditions. Assessing seedling potential/vigor for a robust crop yield is crucial but often resource-intensive. This study explores cost-effective imaging techniques for rapid evaluation of seedling vigor, offering a practical solution to a common problem in agricultural research. In the first phase, nine lettuce (Lactuca sativa) cultivars were sown in trays and monitored using chlorophyll fluorescence imaging thrice weekly for two weeks. The second phase involved integrating embedded computers equipped with cameras for phenotyping. These systems captured and analyzed images four times daily, covering the entire growth cycle from seeding to harvest for four specific cultivars. All resulting data were promptly uploaded to the cloud, allowing for remote access and providing real-time information on plant performance. Results consistently showed the 'Muir' cultivar to have a larger canopy size and better germination, though 'Sparx' and 'Crispino' surpassed it in final dry weight. A non-linear model accurately predicted lettuce plant weight using seedling canopy size in the first study. The second study improved prediction accuracy with a sigmoidal growth curve from multiple harvests (R2 = 0.88, RMSE = 0.27, p < 0.001). Utilizing embedded computers in controlled environments offers efficient plant monitoring, provided there is a uniform canopy structure and minimal plant overlap.
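
For readers wishing to reproduce the modeling step, a minimal sketch of fitting a sigmoidal growth curve with scipy follows; the synthetic canopy data, logistic parameterization, and parameter values are illustrative assumptions, not the paper's:

```python
# Hedged sketch: fitting a sigmoidal growth curve to predict lettuce
# growth from canopy size over time, as in the abstract's second study
# (illustrative data; the paper's exact model is not specified).
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, L, k, t0):
    """Logistic growth: L = asymptote, k = growth rate, t0 = midpoint."""
    return L / (1 + np.exp(-k * (t - t0)))

days = np.arange(0, 35)                        # days after seeding
rng = np.random.default_rng(42)
canopy = sigmoid(days, 250.0, 0.3, 18.0) + rng.normal(0, 5, days.size)

(L, k, t0), _ = curve_fit(sigmoid, days, canopy,
                          p0=[canopy.max(), 0.1, days.mean()])
pred = sigmoid(days, L, k, t0)
rmse = float(np.sqrt(np.mean((canopy - pred) ** 2)))
r2 = 1 - np.sum((canopy - pred) ** 2) / np.sum((canopy - canopy.mean()) ** 2)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f}")    # abstract reports R^2 = 0.88
```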


Subject(s)
Germination , Lactuca , Seedlings , Lactuca/growth & development , Lactuca/physiology , Germination/physiology , Seedlings/growth & development , Seedlings/physiology , Chlorophyll/analysis , Chlorophyll/metabolism , Seeds/growth & development , Seeds/physiology
3.
Sci Rep ; 14(1): 15376, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965362

ABSTRACT

An algorithm for digital logarithm calculation over the Galois field GF(257) is proposed. It is shown that this field is coupled with one of the most important existing standards, which uses a digital representation of the signal through 256 levels. For this case it is advisable to exploit the specifics of quasi-Mersenne primes, representable in the form p = 2^n + 1, which include the number 257. For fields GF(2^n + 1), an alternating encoding can be used, in which non-zero elements of the field are represented by binary characters corresponding to the numbers +1 and -1. In this encoding, multiplying a field element by 2 reduces to a quasi-cyclic permutation of binary symbols (the permuted symbol changes sign). The proposed approach makes it possible to significantly simplify the design of computing devices for calculating digital logarithms and multiplying numbers modulo 257. A concrete scheme of a device for digital logarithm calculation in this field is presented. This circuit can also be equipped with a universal adder modulo an arbitrary number, making it possible to implement any operation in the field under consideration. The proposed digital algorithm can further be used to reduce 256-valued logic operations to algebraic form, and the approach is of significant interest for the development of UAV on-board computers operating as part of a group.
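
To make the arithmetic concrete, here is a brief software sketch (not the paper's hardware scheme or alternating encoding) of how logarithms over GF(257) reduce multiplication to addition of exponents; the table-based approach and the generator g = 3 (a known primitive root mod 257) are illustrative choices:

```python
# Discrete ("digital") logarithm tables over GF(257), a quasi-Mersenne
# prime field (257 = 2^8 + 1). Since 3 is a quadratic non-residue mod 257,
# it generates the whole multiplicative group of order 256.
P = 257
G = 3  # primitive root mod 257

LOG = {}   # field element -> exponent
EXP = {}   # exponent -> field element
x = 1
for k in range(P - 1):
    LOG[x] = k
    EXP[k] = x
    x = (x * G) % P

def mul_via_logs(a: int, b: int) -> int:
    """Multiply a, b in GF(257) via log/antilog tables:
    a*b = g^((log a + log b) mod 256)."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % (P - 1)]

# Sanity check against ordinary modular multiplication
assert all(mul_via_logs(a, b) == (a * b) % P
           for a in range(P) for b in range(P))
```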

4.
J Adv Periodontol Implant Dent ; 16(1): 55-63, 2024.
Article in English | MEDLINE | ID: mdl-39027206

ABSTRACT

4D printing is an innovative digital manufacturing technology that adds a fourth dimension, time, to pre-existing 3D printing technology, or additive manufacturing (AM). AM is a fast-growing technology used in many fields, which builds accurate 3D objects from computer-designed models. Dentistry is one such field, in which 3D technology is used to manufacture objects in periodontics (scaffolds, local drug-delivery agents, ridge augmentation), implantology, prosthodontics (partial and complete dentures, obturators), oral surgery (jaw reconstruction), and orthodontics. Dynamism is a vital property for materials used in the oral cavity, since the oral cavity is constantly subjected to various insults. 4D printing overcomes a key disadvantage of 3D printing, namely its inability to create dynamic objects; up-to-date knowledge of 4D technology is therefore required. This review discusses the shortcomings of 3D printing and summarizes the various printing technologies, materials used, stimuli, and potential applications of 4D technology in dentistry.

5.
Stud Hist Philos Sci ; 106: 109-117, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38936271

ABSTRACT

In the second half of the 20th century, neuroscientists across North America developed automated systems for use in their research laboratories. Their decisions to do so were complex and contingent, partly a result of global reasons, such as the need to increase efficiency and flexibility, and partly a result of local reasons, such as the need to amend perceived biases of earlier research methodologies. Automated methods were advancements but raised several challenges. Transferring a system from one location to another required that certain components of the system be standardized, such as the hardware, software, and programming language. This proved difficult as commercial manufacturers lacked incentives to create standardized products for the few neuroscientists working towards automation. Additionally, investing in automated systems required massive amounts of time, labor, funding, and computer expertise. Moreover, neuroscientists did not agree on the value of automation. My brief history investigates Karl Pribram's decisions to expand his newly created automated system by standardizing equipment, programming, and protocols. Although he was an eminent Stanford neuroscientist with strong institutional support and computer know-how, the development and transfer of his automated behavioral testing system was riddled with challenges. For Pribram and neuroscience more generally, automation was not so automatic.

6.
J Gambl Stud ; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724824

ABSTRACT

Computer technology has long been touted as a means of increasing the effectiveness of voluntary self-exclusion schemes, especially in terms of relieving gaming venue staff of the task of manually identifying and verifying the status of new customers. This paper reports on the government-led implementation of facial recognition technology as part of an automated self-exclusion program in Adelaide, South Australia, one of the first jurisdiction-wide enforcements of this controversial technology in small-venue gambling. Drawing on stakeholder interviews, site visits, and documentary analysis over a two-year period, the paper contrasts initial claims that facial recognition offered a straightforward and benign improvement to the efficiency of the city's long-running self-excluded gambler program with subsequent concerns that the new technology was associated with heightened inconsistencies, inefficiencies, and uncertainties. The paper therefore contends that, regardless of the enthusiasms of government, the tech industry, and the gaming lobby, facial recognition does not offer a ready 'technical fix' to problem gambling. The South Australian case illustrates that this technology does not appear to better address the core issues underpinning problem gambling or to substantially improve conditions for problem gamblers to refrain from gambling. It is concluded that the gambling sector needs to pay close attention to the practical outcomes of initial cases such as this and resist industry pressure for wider replication of this technology in other jurisdictions.

7.
J Pak Med Assoc ; 74(4 (Supple-4)): S126-S131, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38712420

ABSTRACT

In recent times, dentistry has seen significant technological advancements that have transformed various specialized areas within the field. Developed into applications for mobile devices, augmented reality (AR) seamlessly merges digital components with the physical world, enhancing both realms while maintaining their separateness. Virtual reality (VR), on the other hand, relies on advanced, tailored software to visualize a digital 3D environment, stimulating the operator's senses through computer-generated sensations and feedback. Current advances include applications of VR, haptic simulators, AI algorithms, and more, which provide new opportunities for smart learning and enhance the teaching environment. As this technology continues to evolve, it is poised to become even more remarkable, potentially enabling specialists to visualize both soft and hard tissues within the patient's body for effective treatment planning. This literature review presents the newest advancements and ongoing development of AR and VR in dentistry and medicine. It highlights their diverse applications while identifying areas needing further research for effective integration into clinical practice.


Subject(s)
Augmented Reality , Dentistry , Virtual Reality , Humans , Dentistry/methods
8.
J Nurs Scholarsh ; 56(4): 599-605, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38615340

ABSTRACT

BACKGROUND: Compared to other providers, nurses spend more time with patients, but the exact quantity and nature of those interactions remain largely unknown. The purpose of this study was to characterize the interactions of nurses at the bedside using continuous surveillance over a year-long period. METHODS: Nurses' time and activity at the bedside were characterized using a device that integrates obfuscated computer vision with a Bluetooth beacon on the nurses' identification badge to track nurses' activities at the bedside. The surveillance device (AUGi) was installed over 37 patient beds in two medical/surgical units in a major urban hospital. Forty-nine nurse users were tracked using the beacon. Data were collected 4/15/19-3/15/20. Descriptive statistics were used to characterize nurses' time and activity at the bedside. RESULTS: A total of n = 408,588 interactions were analyzed over 670 shifts, with >1.5 times more interactions during day shifts (n = 247,273) than night shifts (n = 161,315); the mean interaction time was 3.34 s longer during nights than days (p < 0.0001). Each nurse had an average of 7.86 (standard deviation [SD] = 10.13) interactions per bed each shift and a mean total interaction time per bed of 9.39 min (SD = 14.16). On average, nurses covered 7.43 beds (SD = 4.03) per shift (day: mean = 7.80 beds/nurse/shift, SD = 3.87; night: mean = 7.07 beds/nurse/shift, SD = 4.17). The mean time per hourly rounding was 69.5 s (SD = 98.07), and 50.1 s (SD = 56.58) for bedside shift report. DISCUSSION: As far as we are aware, this is the first study to provide continuous surveillance of nurse activities at the bedside over a year-long period, 24 h/day, 7 days/week. We detected that nurses spend less than 1 min giving report at the bedside, and this is only completed 20.7% of the time. Additionally, hourly rounding was completed only 52.9% of the time, and nurses spent only 9 min total with each patient per shift. Further study is needed to determine whether there is an optimal timing or duration of interactions to improve patient outcomes. CLINICAL RELEVANCE: Nursing time with the patient has been shown to improve patient outcomes, but precise information about how much time nurses spend with patients has been heretofore unknown. By understanding minute-by-minute activities at the bedside over a full year, we provide a full picture of nursing activity; this can be used in the future to determine how these activities affect patient outcomes.


Subject(s)
Nursing Staff, Hospital , Humans , Nursing Staff, Hospital/statistics & numerical data , Nurse-Patient Relations , Time Factors
9.
Eur Radiol ; 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38634876

ABSTRACT

OBJECTIVES: To distinguish histological subtypes of renal tumors using radiomic features and machine learning (ML) based on multiphase computed tomography (CT). MATERIALS AND METHODS: Patients who underwent surgical treatment for renal tumors at two tertiary centers from 2012 to 2022 were included retrospectively. Preoperative arterial (corticomedullary) and venous (nephrogenic) phase CT scans from these centers, as well as from external imaging facilities, were manually segmented, and standardized radiomic features were extracted. Following preprocessing and addressing the class imbalance, a ML algorithm based on extreme gradient boosting trees (XGB) was employed to predict renal tumor subtypes using 10-fold cross-validation. The evaluation was conducted using the multiclass area under the receiver operating characteristic curve (AUC). Algorithms were trained on data from one center and independently tested on data from the other center. RESULTS: The training cohort comprised n = 297 patients (64.3% clear cell renal cell carcinoma [ccRCC], 13.5% papillary RCC [pRCC], 7.4% chromophobe RCC, 9.4% oncocytomas, and 5.4% angiomyolipomas [AML]), and the testing cohort n = 121 patients (56.2%/16.5%/3.3%/21.5%/2.5%). The XGB algorithm demonstrated a diagnostic performance of AUC = 0.81/0.64/0.80 for venous/arterial/combined contrast-phase CT in the training cohort, and AUC = 0.75/0.67/0.75 in the independent testing cohort. In pairwise comparisons, the lowest diagnostic accuracy was evident for the identification of oncocytomas (AUC = 0.57-0.69) and the highest for the identification of AMLs (AUC = 0.90-0.94). CONCLUSION: Radiomic feature analyses can distinguish renal tumor subtypes on routinely acquired CTs, with oncocytomas being the hardest subtype to identify. CLINICAL RELEVANCE STATEMENT: Radiomic feature analyses yield robust results for renal tumor assessment on routine CTs. Although radiologists routinely rely on arterial-phase CT for renal tumor assessment and operative planning, radiomic features derived from the arterial phase did not improve the accuracy of renal tumor subtype identification in our cohort.
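
For readers curious about the pipeline, a minimal sketch of the modeling step follows, under stated assumptions: synthetic features stand in for the radiomic data, and the XGBoost hyperparameters and one-vs-rest AUC variant are illustrative choices, not taken from the paper:

```python
# Hedged sketch: multiclass renal-tumor subtype prediction from radiomic
# features with XGBoost and 10-fold cross-validation. Feature extraction
# (e.g., via pyradiomics) and class-imbalance handling are assumed to
# have already produced X (n_samples x n_features) and labels y.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 50))        # placeholder radiomic features
y = rng.integers(0, 5, size=297)      # 5 subtypes: ccRCC, pRCC, chRCC, onco, AML

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="mlogloss")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")

# Multiclass AUC (one-vs-rest); random placeholder features give ~0.5,
# whereas real radiomic features would be expected to score higher.
print("multiclass AUC:", roc_auc_score(y, proba, multi_class="ovr"))
```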

10.
Ann Clin Biochem ; : 45632241252006, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631810

ABSTRACT

BACKGROUND: Parametric regression analysis is widely used in method comparisons and, more recently, in checking the concordance of test results following receipt of new reagent lots. The greater frequency of reagent-lot evaluations increases pressure to detect bias with the smallest possible sample sizes (i.e., the smallest consumption of time and resources). This study revisits bias detection using the joint slope-intercept confidence region as an alternative to separate slope and intercept confidence intervals. METHODS: Four cases were considered, representing constant errors, proportional errors (constant CV), and two more complicated error patterns typical of immunoassays. Maximum:minimum range ratios varied from 2:1 to 2000:1. After setting a maximum tolerable difference, a series of slope-intercept combinations, each of which predicted the critical difference, was systematically evaluated in simulations that determined the minimum sample size required to detect the difference, first using slope and intercept confidence intervals and second using the joint slope-intercept confidence region. RESULTS: At small to moderate range ratios, bias detection by joint confidence region required greatly reduced sample sizes, to the extent that it should encourage reagent-lot evaluations or, alternatively, transform those already routinely performed into considerably less costly exercises. CONCLUSIONS: While some software is available to calculate joint confidence regions in real-life analyses, shifting this testing method into the mainstream will require more software developers to incorporate the necessary code into their regression programs. The computer program used to conduct this study is freely available and can be used to model any laboratory test.
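
A brief illustration (not the authors' freely available program) of the joint confidence region test for simple linear regression, checking whether the identity line lies inside the joint 95% ellipse; the simulated method-comparison data are invented for the example:

```python
# Hedged sketch: testing whether the identity line (intercept 0, slope 1)
# falls inside the joint 95% confidence region for (intercept, slope)
# in an ordinary least squares method comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(1, 100, 40)                      # reference-method results
y = 0.5 + 1.02 * x + rng.normal(0, 2, x.size)    # test-method results

X = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
beta_hat, res, *_ = np.linalg.lstsq(X, y, rcond=None)
n, p = X.shape
s2 = res[0] / (n - p)                            # residual variance

def in_joint_region(beta0, beta1, alpha=0.05):
    """True if (beta0, beta1) lies inside the joint (1-alpha) confidence
    ellipse: (b_hat - b)' X'X (b_hat - b) <= p * s2 * F(1-alpha; p, n-p)."""
    d = beta_hat - np.array([beta0, beta1])
    return d @ (X.T @ X) @ d <= p * s2 * stats.f.ppf(1 - alpha, p, n - p)

print("identity line inside joint region:", in_joint_region(0.0, 1.0))
```

Because the ellipse accounts for the correlation between the slope and intercept estimates, it can detect biased (slope, intercept) pairs that separate confidence intervals miss, which is what drives the smaller sample sizes reported above.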

11.
Medisan ; 28(2) 2024 Apr.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1558516

ABSTRACT

Introduction: Computer vision syndrome, or digital eye strain, is a condition caused by ocular fatigue from spending long periods in front of a screen. Objective: To diagnose computer vision syndrome in patients under 35 years of age seen at the refraction service. Methods: A prospective, descriptive, cross-sectional study was carried out on patients seen at the refraction service of the Specialties Polyclinic of the Saturnino Lora Teaching Clinical-Surgical Provincial Hospital from April to June 2022. Results: Patients aged 26 to 35 years and of female sex predominated; the most frequent symptoms were eye strain, ocular burning, dry-eye sensation, blurred near vision, red eye, and headache after visual effort. The most frequently used digital devices were the mobile phone and the computer, with one to three hours of use, the computer standing out with more than 4 hours. Refractive defects were the main cause of visual limitations. Patients with computer vision syndrome and uncorrected ametropia had the most symptoms, followed by inadequately corrected patients. Conclusions: This syndrome largely affects the younger population. Appropriate history-taking and the incorporation of correct procedures into the daily optometric examination allowed the diagnosis of this syndrome in the patients seen and the pertinent optical correction.

12.
Laryngoscope ; 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38545679

ABSTRACT

OBJECTIVE: Investigate the accuracy of ChatGPT in answering medical questions related to otolaryngology. METHODS: A ChatGPT session was opened within which 93 questions were asked related to otolaryngology topics. Questions were drawn from all major domains within otolaryngology and based upon key action statements (KAS) from clinical practice guidelines (CPGs). Twenty-one "patient-level" questions were also asked of the program. Answers were graded as either "correct," "partially correct," "incorrect," or "non-answer." RESULTS: Correct answers were given at a rate of 45.5% (71.4% correct in patient-level, 37.3% CPG); partially correct answers at 31.8% (28.6% patient-level, 32.8% CPG); incorrect at 21.6% (0% patient-level, 28.4% CPG); and 1.1% non-answers (0% patient-level, 1.5% CPG). There was no difference in the rate of correct answers between CPGs published before or after the period of data collection cited by ChatGPT. CPG-based questions were less likely to be answered correctly than patient-level questions (p = 0.003). CONCLUSION: Publicly available artificial intelligence software has become increasingly popular with consumers for everything from storytelling to data collection. In this study, we examined the accuracy of ChatGPT responses to questions related to otolaryngology over 7 domains and 21 published CPGs. Physicians and patients should understand the limitations of this software as it applies to otolaryngology, and programmers in future iterations should consider giving greater weight to information published by well-established journals and written by national content experts. LEVEL OF EVIDENCE: N/A Laryngoscope, 2024.

13.
Nano Converg ; 11(1): 11, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38498068

ABSTRACT

An elementary review of the principles of qubits and their prospects for quantum computing is provided. Owing to its rapid development, quantum computing has attracted considerable attention as a core next-generation technology and has demonstrated its potential in simulations of exotic materials, molecular structures, and theoretical computer science. To achieve fully error-corrected quantum computers, building a logical qubit from multiple physical qubits is crucial. The number of physical qubits needed depends on their error rates, making error reduction in physical qubits vital. Numerous efforts to reduce errors are ongoing in both existing and emerging quantum systems. Here, the principles and development of qubits, as well as the current status of the field, are reviewed to inform researchers from various fields and give insights into this promising technology.

14.
Front Digit Health ; 6: 1321485, 2024.
Article in English | MEDLINE | ID: mdl-38433989

ABSTRACT

Importance: Healthcare organizations operate in a data-rich environment and depend on digital computerized systems; thus, they may be exposed to cyber threats. Indeed, healthcare is one of the sectors most vulnerable to hacks and malware. However, the impact of cyberattacks on healthcare organizations remains under-investigated. Objective: This study aims to describe a major attack on an entire medical center that resulted in a complete shutdown of all computer systems and to identify the critical actions required to resume regular operations. Setting: This study was conducted at a public, general, acute care referral university teaching hospital. Methods: We report the different recovery measures applied to various hospital clinical activities and their impact on clinical work. Results: The shutdown of hospital computer systems did not reduce the number of heart catheterizations, births, or outpatient clinic visits. However, a sharp drop in surgical activities, emergency room visits, and total hospital occupancy was observed immediately and during the first post-attack week. A gradual increase in all clinical activities was detected starting in the second week after the attack, with a significant increase of 30% associated with the restoration of the electronic medical records (EMR) and laboratory module and a 50% increase associated with the return of the imaging archiving module. One limitation of the present study is that, due to its retrospective design, there were no data regarding the number of elective internal care hospitalizations, which were considered crucial. Conclusions and relevance: The risk of ransomware cyberattacks is growing. Healthcare systems at all levels of the hospital should be aware of this threat and have protocols in place should this catastrophic event occur. Careful weekly evaluation of the stepwise recovery of computer systems enabled vital hospital functions to continue, even under a major cyberattack. The restoration of the EMR, laboratory systems, and imaging archiving modules was found to be the most significant factor enabling the return to normal clinical hospital work.

15.
Med Teach ; : 1-6, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489501

ABSTRACT

Co-creation is the active involvement of all stakeholders, including students, in educational design processes to improve the quality of education by embodying inclusivity, transparency, and empowerment. Virtual co-creation has the potential to expand the utility of co-creation as an inclusive approach by overcoming challenges regarding the practicality and availability of stakeholders typically experienced in face-to-face co-creation. Drawing from the literature and our experiences of virtual co-creation activities in different educational contexts, this twelve tips paper provides guidelines on how to effectively operationalize co-creation in a virtual setting. Our proposed three-phase approach (preparation, conduct, follow-up) may help those aiming to virtually co-create courses and programs by involving stakeholders beyond institutes and across borders.

16.
Biochem Mol Biol Educ ; 52(4): 462-473, 2024.
Article in English | MEDLINE | ID: mdl-38411364

ABSTRACT

The COVID-19 pandemic has forced a shift in thinking regarding the safe delivery of wet laboratory courses. While we were fortunate to have the capacity to continue delivering wet laboratory experiments with physical distancing and other measures in place, modifications to the mechanisms of delivery within courses were necessary to minimize risk to students and teaching staff. One such modification was introduced in BCH370H, an introductory biochemistry laboratory course, where a OneNote Class Notebook (ONCN) was used as an electronic laboratory notebook (ELN) in place of the traditional hardbound paper laboratory notebook used before the pandemic. The initial reasoning for switching to an ELN was safety: allowing course staff and students to maintain physical distancing whenever possible and eliminating the need for teaching assistants to handle student notebooks. However, the benefits of the ONCN proved to extend significantly further. OneNote acted not only as a place for students to record notes; the Class Notebook's unique features also allowed easy integration of other important aspects of the course, including delivery of laboratory manuals, posting of student results, note-taking feedback, sharing of instructional materials with teaching assistants, and more. Student and teacher experiences with the ONCN as used within a fully in-person biochemistry laboratory course, as well as learned best practices, are reviewed.


Subject(s)
Biochemistry , COVID-19 , Laboratories , SARS-CoV-2 , Humans , COVID-19/epidemiology , Biochemistry/education , Students , Curriculum , Pandemics
17.
Diabetes Metab Syndr ; 18(2): 102946, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38330745

ABSTRACT

BACKGROUND: Peer review is the established method for evaluating the quality and validity of research manuscripts in scholarly publishing. However, scientific peer review faces challenges as the volume of submitted research has steadily increased in recent years. Time constraints and peer review quality assurance can place burdens on reviewers, potentially discouraging their participation. Some artificial intelligence (AI) tools might assist in relieving these pressures. This study explores the efficiency and effectiveness of one such AI chatbot, ChatGPT (Generative Pre-trained Transformer), in the peer review process. METHODS: Twenty-one peer-reviewed research articles were anonymized to ensure unbiased evaluation. Each article was reviewed by two humans and by versions 3.5 and 4.0 of ChatGPT. The AI was instructed to provide three positive and three negative comments on each article and recommend whether it should be accepted or rejected. The human and AI results were compared using a 5-point Likert scale to determine the level of agreement. The correlation between ChatGPT responses and the acceptance or rejection of the papers was also examined. RESULTS: Subjective review similarity between human reviewers and ChatGPT showed a mean score of 3.60/5 for ChatGPT 3.5 and 3.76/5 for ChatGPT 4.0. The correlation between human and AI review scores was statistically significant for ChatGPT 3.5, but not for ChatGPT 4.0. CONCLUSION: ChatGPT can complement human scientific peer review, enhancing efficiency and promptness in the editorial process. However, a fully automated AI review process is currently not advisable, and ChatGPT's role should be regarded as highly constrained for the present and near future.
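
As an illustration of the agreement analysis, a short sketch follows; the scores are invented and the choice of Spearman rank correlation is an assumption, since the paper does not state its exact test:

```python
# Hedged sketch: correlating human and ChatGPT review scores on a
# 5-point Likert scale across 21 articles (illustrative data only).
from scipy.stats import spearmanr

human_scores = [4, 3, 5, 2, 4, 3, 4, 5, 2, 3, 4, 4, 3, 5, 3, 2, 4, 3, 4, 5, 3]
gpt35_scores = [4, 4, 4, 2, 3, 3, 5, 4, 3, 3, 4, 3, 3, 4, 4, 2, 4, 3, 3, 4, 3]

rho, p_value = spearmanr(human_scores, gpt35_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```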


Subject(s)
Artificial Intelligence , Time Pressure , Humans , Pressure
18.
Australas J Ageing ; 43(2): 415-419, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38415380

ABSTRACT

OBJECTIVES: Following a user-centred redesign and refinement process of an electronic delirium screening tool (eDIS-MED), further accuracy assessment was performed prior to anticipated testing in the clinical setting. METHODS: The content validity of each of the existing questions was evaluated by an expert group in the domains of clarity, relevance, and importance. Questions with a Content Validity Index (CVI) <0.80 were reviewed by the development group for potential revision. Items with CVI <0.70 were discarded. Next, the face validity of the full test battery was assessed and readability was measured. RESULTS: A panel of five clinical experts evaluated the test battery comprising eDIS-MED. The content validity process endorsed 61 items. The overall scale CVI was 0.92. Eighty-eight per cent of the responses regarding question relevancy, usefulness, and appropriateness were positive. The questions were deemed to be at a fifth-grade reading level and very easy to read. CONCLUSIONS: The revised electronic screening tool was judged accurate by an expert group. A clinical validation study is planned.


Subject(s)
Delirium , Mobile Applications , Predictive Value of Tests , Humans , Delirium/diagnosis , Reproducibility of Results , Comprehension
19.
Heliyon ; 10(2): e24277, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38312706

ABSTRACT

The increasing influence of technology on education has attracted considerable attention. This study aims to determine the current status and development trends of educational technologies. First, we used COOC, HistCite, and VOSviewer to systematically review 1562 educational articles published in Computers in Human Behavior (CHB) from 2004 to 2022. Based on bibliometrics, this study identified publication trends, research forces, collaborations, key articles, and research themes. Then, we visualized the technologies predicted by 30 Horizon Reports and combined them with CHB educational research to evaluate the accuracy of the identified trends. The results revealed an immediate influence of AI technology, extended reality, and digital resources on education, a moderate influence of educational tools and games, and a delayed influence of data management and maker technology. In addition, human psychology and behavior in technological environments may be important themes in the future. In conclusion, this study not only offers a comparative analysis of leading reports and representative literature but also provides guidance for future research and development in educational technology.

20.
Health Sci Rep ; 7(2): e1889, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38357488

ABSTRACT

Background and Aims: The coronavirus disease 2019 (COVID-19) pandemic stimulated a paradigm shift in medical and surgical education from in-person teaching to online teaching. It is unclear whether an in-person or online approach to surgical teaching for medical students is superior. We aim to compare the outcomes of in-person versus online surgical teaching in generating interest in and improving knowledge of surgery in medical students. We also aim to quantify the impact of a peer-run surgical teaching course. Methods: A six-session course was developed by medical students and covered various introductory surgical topics. The first iteration was offered online to 70 UK medical students in March 2021, and the second iteration was in-person for 20 students in November 2021. Objective and subjective knowledge was assessed through questionnaires before and after each session, as well as for the entire course. Data from this mixed-methods study were analyzed to compare the impact of online versus in-person teaching on surgical knowledge and engagement. Results: Students in both iterations showed significant improvements of 33%-282% in knowledge and confidence across the six sessions after completing the course (p < 0.001). There was no significant difference in the level of objective knowledge, enjoyment, or organization of the course between the online and in-person groups, although the in-person course was rated as more engaging (mean Likert score 9.1 vs. 9.7, p = 0.033). Discussion: Similar objective and subjective surgical teaching outcomes were achieved in both iterations, including in "hands-on" topics such as suturing, gowning, and gloving. Students who completed the online course did not have lower knowledge or confidence in their surgical skills; however, the in-person course was reported to be more engaging. Surgical teaching online and in-person may be similarly effective and can be delivered according to what is most convenient for the circumstances, such as during COVID-19.
