1.
Clin J Gastroenterol ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39017991

ABSTRACT

Intestinal lymphangiectasia (IL) is a protein-losing enteropathy (PLE) that occasionally leads to gastrointestinal bleeding (GIB). We encountered a 41-year-old female with a 9-year history of duodenal IL with PLE and GIB that progressively worsened. Despite a diet supplemented with medium-chain triglycerides, antiplasmin therapy, oral corticosteroids, octreotide, sirolimus, and repeated endoscopic hemostasis, her symptoms remained uncontrolled, leading to blood transfusion dependence. Lymphangiography revealed significant leakage from abnormal abdominal lymph vessels into the duodenal lumen. The patient subsequently underwent an abdominal-level lymphaticovenous anastomosis combined with local venous ligation. This approach resulted in dramatic improvement and sustained resolution of both the PLE and GIB. More than 6 months after surgery, the patient remained free of symptoms and blood transfusion dependence.

2.
Aesthetic Plast Surg ; 48(13): 2389-2398, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38684536

ABSTRACT

BACKGROUND: ChatGPT is a free artificial intelligence (AI) language model developed and released by OpenAI in late 2022. This study aimed to evaluate how accurately ChatGPT answers the clinical questions (CQs) in the Guideline for the Management of Blepharoptosis published by the American Society of Plastic Surgeons (ASPS) in 2022.

METHODS: The CQs in the guideline were used as question sources in both English and Japanese. For each question, ChatGPT's responses were evaluated for answer accuracy, evidence quality, recommendation strength, reference match, and word count, and performance on each component was compared between English and Japanese queries.

RESULTS: A total of 11 questions were included in the final analysis, and ChatGPT answered 61.3% of them correctly. ChatGPT answered CQs more accurately in English than in Japanese (76.4% versus 46.4%; p = 0.004) and produced longer answers (123 words versus 35.9 words; p = 0.004). No statistically significant differences were noted for evidence quality, recommendation strength, or reference match. A total of 697 references were proposed, but only 216 of them (31.0%) actually existed.

CONCLUSIONS: ChatGPT demonstrates potential as an adjunctive tool in the management of blepharoptosis. However, it is crucial to recognize that the existing AI model has distinct limitations, and its primary role should be to complement the expertise of medical professionals.

LEVEL OF EVIDENCE V: Observational study under respected authorities. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.


Subject(s)
Artificial Intelligence , Blepharoptosis , Practice Guidelines as Topic , Blepharoptosis/surgery , Humans , Blepharoplasty/methods , Japan
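The abstract's headline comparison (76.4% versus 46.4% accuracy, p = 0.004) can be illustrated with a simple significance test on correctness counts. The abstract does not state which statistical test the authors used or the underlying counts, so the sketch below uses a two-proportion z-test on hypothetical counts chosen only to reproduce the reported percentages; it is illustrative, not a reconstruction of the study's analysis.

```python
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts (NOT from the paper): 42/55 correct English answers
# (76.4%) versus 26/56 correct Japanese answers (46.4%).
z, p = two_proportion_z(42, 55, 26, 56)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

With these illustrative counts the difference is significant at conventional thresholds, consistent in direction with the abstract's reported p = 0.004, though the exact value depends on the true per-question scoring, which the abstract does not give.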