Results 1 - 20 of 90
1.
Nature ; 629(8012): 567-572, 2024 May.
Article in English | MEDLINE | ID: mdl-38720079

ABSTRACT

Entanglement has evolved from an enigmatic concept of quantum physics to a key ingredient of quantum technology. It explains correlations between measurement outcomes that contradict classical physics and has been widely explored with small sets of individual qubits. Multi-partite entangled states build up in gate-based quantum-computing protocols and, from a broader perspective, were proposed as the main resource for measurement-based quantum-information processing [1,2]. The latter requires the ex-ante generation of a multi-qubit entangled state described by a graph [3-6]. Small graph states such as Bell or linear cluster states have been produced with photons [7-16], but the proposed quantum-computing and quantum-networking applications require fusion of such states into larger and more powerful states in a programmable fashion [17-21]. Here we achieve this goal by using an optical resonator [22] containing two individually addressable atoms [23,24]. Ring [25] and tree [26] graph states with up to eight qubits, with the names reflecting the entanglement topology, are efficiently fused from the photonic states emitted by the individual atoms. The fusion process itself uses a cavity-assisted gate between the two atoms. Our technique is, in principle, scalable to even larger numbers of qubits and is the decisive step towards, for instance, a memory-less quantum repeater in a future quantum internet [27-29].
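
For readers unfamiliar with the terminology, a graph state has a standard definition (textbook material, not notation taken from the paper): given a graph $G = (V, E)$ with one qubit per vertex,

$$ |G\rangle = \prod_{(i,j) \in E} \mathrm{CZ}_{i,j}\, |+\rangle^{\otimes |V|}, \qquad |+\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big), $$

where a controlled-Z gate acts across every edge; the ring and tree states of this work correspond to ring- and tree-shaped graphs $G$.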

2.
J Neurooncol ; 166(3): 535-546, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38316705

ABSTRACT

BACKGROUND: Adverse radiation effect (ARE) following stereotactic radiosurgery (SRS) for brain metastases is challenging to distinguish from tumor progression. This study characterizes the clinical implications of radiologic uncertainty (RU). METHODS: Cases of RU following SRS that were reviewed retrospectively at a single-institution, multidisciplinary SRS Tumor Board between 2015 and 2022 were identified. Treatment history, diagnostic or therapeutic interventions performed upon RU resolution, and development of neurologic deficits surrounding intervention were obtained from the medical record. Differences in lesion volume and maximum diameter at RU onset versus resolution were compared with paired t-tests. Median time from RU onset to resolution was estimated using the Kaplan-Meier method. Univariate and multivariate associations between clinical characteristics and time to RU resolution were assessed with Cox proportional-hazards regression. RESULTS: Among 128 lesions with RU, 23.5% had undergone ≥ 2 courses of radiation. Median maximum diameter (20 vs. 16 mm, p < 0.001) and volume (2.7 vs. 1.5 cc, p < 0.001) were larger upon RU resolution versus onset. RU resolution took > 6 and > 12 months in 25% and 7% of cases, respectively. Higher total EQD2 prior to RU onset (HR = 0.45, p = 0.03) and use of MR perfusion (HR = 0.56, p = 0.001) correlated with shorter time to resolution; larger volume (HR = 1.05, p = 0.006) portended longer time to resolution. Most lesions (57%) were diagnosed as ARE. Most patients (58%) underwent an intervention upon RU resolution; of these, 38% developed a neurologic deficit surrounding intervention. CONCLUSIONS: RU resolution took > 6 months in > 25% of cases. RU may lead to suboptimal outcomes and symptom burden. Improved characterization of post-SRS RU is needed.
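
The survival-analysis workflow in METHODS (Kaplan-Meier time to resolution plus Cox proportional-hazards regression) can be sketched as follows; the data, column names, and the lifelines package are placeholders and assumptions, since the abstract does not name the software used:

```python
# Sketch: Kaplan-Meier time to RU resolution + Cox PH regression.
# Data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "months_to_resolution": rng.exponential(6, 128).round(1) + 0.5,
    "resolved": rng.integers(0, 2, 128),       # 0 = censored at last imaging
    "total_eqd2": rng.normal(60, 10, 128),     # Gy
    "mr_perfusion": rng.integers(0, 2, 128),   # MR perfusion used (0/1)
    "volume_cc": rng.exponential(2, 128),
})

# Kaplan-Meier estimate of median time from RU onset to resolution
kmf = KaplanMeierFitter().fit(df["months_to_resolution"], df["resolved"])
print("median time to resolution (months):", kmf.median_survival_time_)

# Cox proportional-hazards model for predictors of time to resolution
cph = CoxPHFitter().fit(df, duration_col="months_to_resolution", event_col="resolved")
cph.print_summary()  # hazard ratios analogous to the HRs quoted in RESULTS
```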


Subject(s)
Brain Neoplasms , Radiation Injuries , Radiosurgery , Humans , Radiosurgery/adverse effects , Radiosurgery/methods , Treatment Outcome , Retrospective Studies , Uncertainty , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/radiotherapy , Brain Neoplasms/pathology , Radiation Injuries/diagnostic imaging , Radiation Injuries/etiology , Radiation Injuries/surgery
3.
Radiology ; 310(2): e231319, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319168

ABSTRACT

Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish reference filtered images for 33 of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
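
As an illustration of one standardized filter type, a minimal Laplacian-of-Gaussian sketch using SciPy; the image, scale, and ROI below are placeholders, not the reference configurations established by the study:

```python
# Sketch: Laplacian-of-Gaussian (LoG) response map over a CT-like volume.
# sigma and the ROI are placeholders, not the study's reference settings.
import numpy as np
from scipy.ndimage import gaussian_laplace

image = np.random.rand(64, 64, 64)       # stand-in for a CT volume
sigma_mm, spacing_mm = 1.5, 1.0          # filter scale, isotropic voxel spacing
response = gaussian_laplace(image, sigma=sigma_mm / spacing_mm)

# Radiomics features (e.g., mean, variance) are then computed from the
# filtered image within a region of interest.
roi = image > 0.5                        # placeholder ROI mask
print(response[roi].mean(), response[roi].var())
```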


Subject(s)
Image Processing, Computer-Assisted , Radiomics , Humans , Reproducibility of Results , Biomarkers , Multimodal Imaging
4.
JCO Clin Cancer Inform ; 7: e2300136, 2023 Sep.
Article in English | MEDLINE | ID: mdl-38055914

ABSTRACT

In August 2022, the Cancer Informatics for Cancer Centers brought together cancer informatics leaders for its biannual symposium, Precision Medicine Applications in Radiation Oncology, co-chaired by Quynh-Thu Le, MD (Stanford University), and Walter J. Curran, MD (GenesisCare). Over the course of 3 days, presenters discussed a range of topics relevant to radiation oncology and the cancer informatics community more broadly, including biomarker development, decision support algorithms, novel imaging tools, theranostics, and artificial intelligence (AI) for the radiotherapy workflow. Since the symposium, there has been an impressive shift in the promise and potential for integration of AI in clinical care, accelerated in large part by major advances in generative AI. AI is now poised more than ever to revolutionize cancer care. Radiation oncology is a field that uses and generates a large amount of digital data and is therefore likely to be one of the first fields to be transformed by AI. As experts in the collection, management, and analysis of these data, the informatics community will take a leading role in ensuring that radiation oncology is prepared to take full advantage of these technological advances. In this report, we provide highlights from the symposium, which took place in Santa Barbara, California, from August 29 to 31, 2022. We discuss lessons learned from the symposium for data acquisition, management, representation, and sharing, and put these themes into context to prepare radiation oncology for the successful and safe integration of AI and informatics technologies.


Subject(s)
Neoplasms , Radiation Oncology , Humans , Artificial Intelligence , Informatics , Neoplasms/diagnosis , Neoplasms/radiotherapy
5.
Cogn Sci ; 47(11): e13373, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37950700

ABSTRACT

Discovering the meaning of novel communicative cues is challenging and amounts to navigating an unbounded hypothesis space. Several theories posit that this problem can be simplified by relying on positive expectations about the cognitive utility of communicated information. These theories imply that learners should assume that novel communicative cues tend to have low processing costs and high cognitive benefits. We tested this hypothesis in three studies in which toddlers (N = 90) searched for a reward hidden in one of several containers. In all studies, an adult communicated the reward's location with an unfamiliar and ambiguous cue. We manipulated the processing costs (operationalized as inferential chain length) and cognitive benefits (operationalized as informativeness) of the possible interpretations of the cues. Toddlers' processing of novel communicative cues was guided by expectations of low processing costs (Study 1) and high cognitive benefits (Studies 2 and 3). More specifically, toddlers treated novel cues as if they were easy to process, informative, and accurate, even when provided with repeated evidence to the contrary. These results indicate that, from toddlerhood onward, expectations of cognitive utility shape the processing of novel communicative cues. These data also reveal that toddlers, who are in the process of learning the language and communicative conventions of the people around them, exert a pressure favoring cognitive efficiency in communicative systems.


Subject(s)
Cues , Motivation , Adult , Humans , Child, Preschool , Learning , Communication , Language
6.
Pract Radiat Oncol ; 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37981253

ABSTRACT

PURPOSE: Lung blocks for total-body irradiation (TBI) are commonly used to reduce lung dose and prevent radiation pneumonitis. Currently, molten Cerrobend containing toxic materials, specifically lead and cadmium, is poured into molds to construct blocks. We propose a streamlined method to create 3-dimensional (3D)-printed lung block shells and fill them with tungsten ball bearings to remove lead and improve overall accuracy in the block manufacturing workflow. METHODS AND MATERIALS: 3D-printed lung block shells were automatically generated using in-house software, printed, and filled with 2 to 3 mm diameter tungsten ball bearings. Clinical Cerrobend blocks were compared with the physician-drawn blocks as well as with our proposed tungsten-filled 3D-printed blocks. Physical and dosimetric comparisons were performed on a linac. Dose transmission through the Cerrobend and 3D-printed blocks was measured using point dosimetry (ion chamber) and the on-board electronic portal imaging device (EPID). Dose profiles from the EPID images were used to compute the full width at half maximum (FWHM) and to compare with the treatment planning system. Additionally, the coefficient of variation in the central 80% of the FWHM was computed and compared between Cerrobend and 3D-printed blocks. RESULTS: The geometric difference from the treatment planning system was significantly smaller for 3D-printed blocks than for Cerrobend blocks (3D: -0.88 ± 2.21 mm, Cerrobend: -2.28 ± 2.40 mm, P = .0002). Dosimetrically, transmission measurements through the 3D-printed and Cerrobend blocks for both ion-chamber and EPID dosimetry were between 42% and 48% of the open field. Additionally, the coefficient of variation was significantly higher for 3D-printed blocks than for Cerrobend blocks (3D: 4.2% ± 0.6%, Cerrobend: 2.6% ± 0.7%, P < .0001). CONCLUSIONS: We designed and implemented a tungsten-filled 3D-printing workflow for constructing TBI lung blocks, which serves as an alternative to the traditional Cerrobend-based workflow currently used in clinics. This workflow can produce clinically useful lung blocks with minimal effort, facilitating the removal of toxic materials from the clinic.
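
A minimal sketch of the profile metrics above, FWHM and the coefficient of variation over the central 80% of the profile; the flat-topped test profile and the interpolation details are assumptions, not the study's analysis code:

```python
# Sketch: FWHM and central-80% coefficient of variation from a 1D dose profile.
import numpy as np

def fwhm(x, y):
    """Full width at half maximum, linearly interpolating the two crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left = np.interp(half, [y[above[0] - 1], y[above[0]]],
                     [x[above[0] - 1], x[above[0]]])
    right = np.interp(half, [y[above[-1] + 1], y[above[-1]]],
                      [x[above[-1] + 1], x[above[-1]]])
    return right - left

def central_cv(y, fraction=0.8):
    """Coefficient of variation within the central `fraction` of the FWHM region."""
    half = y.max() / 2.0
    region = np.where(y >= half)[0]
    trim = int(round((region[-1] - region[0]) * (1 - fraction) / 2))
    core = y[region[0] + trim : region[-1] - trim + 1]
    return core.std() / core.mean()

x = np.linspace(-60, 60, 241)            # position (mm)
y = np.exp(-0.5 * (x / 25.0) ** 8)       # placeholder flat-topped dose profile
print(f"FWHM = {fwhm(x, y):.1f} mm, central-80% CV = {central_cv(y):.2%}")
```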

7.
Behav Brain Sci ; 46: e260, 2023 10 02.
Article in English | MEDLINE | ID: mdl-37779296

ABSTRACT

This response takes advantage of the diverse and wide-ranging series of commentaries to clarify some aspects of the target article, and flesh out other aspects. My central point is a plea to take graphic codes seriously as codes, rather than as a kind of visual art or as a byproduct of spoken language; only in this way can the puzzle of ideography be identified and solved. In this perspective, I argue that graphic codes do not derive their expressive power from iconicity alone (unlike visual arts), and I clarify the peculiar relationship that ties writing to spoken language. I then discuss three possible solutions to the puzzle of ideography. I argue that a learning account still cannot explain why ideographies fail to evolve, even if we emancipate the learning account from the version that Liberman put forward; I develop my preferred solution, the "standardization account," and contrast it with a third solution suggested by some commentaries, which says that ideographies do not evolve because they would make communication too costly. I consider, by way of conclusion, the consequences of these views for the future evolution of ideography.


Subject(s)
Communication , Learning , Humans
8.
Evol Hum Sci ; 5: e10, 2023.
Article in English | MEDLINE | ID: mdl-37587938

ABSTRACT

Cattle brands (ownership marks left on animals) are subject to forces influencing other graphic codes: the copying of constituent parts, pressure for distinctiveness and pressure for complexity. The historical record of cattle brands in some US states is complete owing to legal registration, providing a unique opportunity to assess how sampling processes leading to time- and space-averaging influence our ability to make inferences from limited datasets in fields like archaeology. In this preregistered study, we used a dataset of ~81,000 Kansas cattle brands (1990-2016) to explore two aspects: (1) the relative influence of copying, pressure for distinctiveness and pressure for complexity on the creation and diffusion of brand components; and (2) the effects of time- and space-averaging on statistical signals. By conducting generative inference with an agent-based model, we found that the patterns in our data are consistent with copying and pressure for intermediate complexity. In addition, by comparing mixed and structured datasets, we found that these statistical signals of copying are robust to, and possibly boosted by, time- and space-averaging.

9.
Cognition ; 238: 105527, 2023 09.
Article in English | MEDLINE | ID: mdl-37364507

ABSTRACT

Zipf's Law of Abbreviation - the idea that more frequent symbols in a code are simpler than less frequent ones - has been shown to hold at the level of words in many languages. We tested whether it holds at the level of individual written characters. Character complexity is similar to word length in that producing and processing more complex symbols requires more cognitive and motor effort. We built a dataset of character complexity and frequency measures covering 27 different writing systems. According to our data, Zipf's Law of Abbreviation holds for every writing system in our dataset - the more frequent characters have lower degrees of complexity and vice versa. This result provides further evidence of optimization mechanisms shaping communication systems.
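
The law can be tested within a writing system as a rank correlation between character frequency and complexity; a minimal sketch on synthetic data (the study's dataset and statistical models may differ):

```python
# Sketch: Zipf's Law of Abbreviation as a negative rank correlation between
# character frequency and character complexity. Data below is synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
frequency = rng.zipf(a=2.0, size=500).astype(float)              # Zipf-like counts
complexity = 10 / np.log1p(frequency) + rng.normal(0, 0.5, 500)  # frequent -> simpler

rho, p = spearmanr(frequency, complexity)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")  # expect rho < 0 under the law
```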


Subject(s)
Language , Models, Theoretical , Humans , Writing
10.
Med Phys ; 50(5): 2662-2671, 2023 May.
Article in English | MEDLINE | ID: mdl-36908243

ABSTRACT

BACKGROUND: Misalignment to the incorrect vertebral body remains a rare but serious patient safety risk in image-guided radiotherapy (IGRT). PURPOSE: Our group has proposed that an automated image-review algorithm be inserted into the IGRT process as an interlock to detect off-by-one vertebral body errors. This study presents the development and multi-institutional validation of a convolutional neural network (CNN)-based approach for such an algorithm using patient image data from a planar stereoscopic x-ray IGRT system. METHODS: X-rays and digitally reconstructed radiographs (DRRs) were collected from 429 spine radiotherapy patients (1592 treatment fractions) treated at six institutions using a stereoscopic x-ray image guidance system. Clinically applied, physician-approved alignments were used as true-negative "no-error" cases. "Off-by-one vertebral body" errors were simulated by translating DRRs along the spinal column using a semi-automated method. A leave-one-institution-out approach was used to estimate model accuracy on data from unseen institutions as follows: all of the images from five of the institutions were used to train a CNN model from scratch using a fixed network architecture and hyper-parameters. The size of this training set ranged from 5700 to 9372 images, depending on exactly which five institutions were contributing data. The training set was randomized and split 75/25 into the final training and validation sets. X-ray/DRR image pairs and the associated binary labels of "no-error" or "shift" were used as the model input. Model accuracy was evaluated using images from the sixth institution, which were left out of the training phase entirely. This test set ranged from 180 to 3852 images, again depending on which institution had been left out of the training phase. The trained model was used to classify the images from the test set as either "no-error" or "shifted", and the model predictions were compared to the ground-truth labels to assess model accuracy. This process was repeated until each institution's images had been used as the testing dataset. RESULTS: When the six models were used to classify unseen image pairs from the institution left out during training, the resulting receiver operating characteristic area under the curve values ranged from 0.976 to 0.998. With the specificity fixed at 99%, the corresponding sensitivities ranged from 61.9% to 99.2% (mean: 77.6%). With the specificity fixed at 95%, sensitivities ranged from 85.5% to 99.8% (mean: 92.9%). CONCLUSION: This study demonstrated that the CNN-based vertebral body misalignment model is robust when applied to previously unseen test data from an outside institution, indicating that this proposed additional safeguard against misalignment is feasible.
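
A minimal sketch of the leave-one-institution-out loop described in METHODS, using scikit-learn's LeaveOneGroupOut; synthetic features and a logistic regression stand in for the paper's X-ray/DRR image pairs and CNN:

```python
# Sketch: leave-one-institution-out evaluation. A logistic regression on
# synthetic features stands in for the paper's CNN on X-ray/DRR pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))               # stand-in image-pair features
y = rng.integers(0, 2, size=600)             # 0 = "no-error", 1 = "shift"
institutions = rng.integers(0, 6, size=600)  # six contributing institutions

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=institutions):
    # train on five institutions, test on the held-out sixth
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    acc = model.score(X[test_idx], y[test_idx])
    print(f"institution {institutions[test_idx][0]} held out: accuracy = {acc:.3f}")
```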


Subject(s)
Deep Learning , Humans , X-Rays , Vertebral Body , Retrospective Studies , Neural Networks, Computer
11.
J Neurosurg ; 138(1): 104-112, 2023 01 01.
Article in English | MEDLINE | ID: mdl-35594891

ABSTRACT

OBJECTIVE: The authors previously evaluated risk and time course of adverse radiation effects (AREs) following stereotactic radiosurgery (SRS) for brain metastases, excluding lesions treated after prior SRS. In the present analysis they focus specifically on single-fraction salvage SRS to brain metastases previously treated with SRS or hypofractionated SRS (HFSRS), evaluating freedom from progression (FFP) and the risk and time course of AREs. METHODS: Brain metastases treated from September 1998 to May 2019 with single-fraction SRS after prior SRS or HFSRS were analyzed. Serial follow-up magnetic resonance imaging (MRI) and surgical pathology reports were reviewed to score local treatment failure and AREs. The Kaplan-Meier method was used to estimate FFP and risk of ARE measured from the date of repeat SRS with censoring at the last brain MRI. RESULTS: A total of 229 retreated brain metastases in 124 patients were evaluable. The most common primary cancers were breast, lung, and melanoma. The median interval from prior SRS/HFSRS to repeat SRS was 15.4 months, the median prescription dose was 18 Gy, and the median duration of follow-up imaging was 14.5 months. At 1 year after repeat SRS, FFP was 80% and the risk of symptomatic ARE was 11%. The 1-year risk of imaging changes, including asymptomatic RE and symptomatic ARE, was 30%. Among lesions that demonstrated RE, the median time to onset was 6.7 months (IQR 4.7-9.9 months) and the median time to peak imaging changes was 10.1 months (IQR 5.6-13.6 months). Lesion size by quadratic mean diameter (QMD) showed similar results for QMDs ranging from 0.75 to 2.0 cm (1-year FFP 82%, 1-year risk of symptomatic ARE 11%). For QMD < 0.75 cm, the 1-year FFP was 86% and the 1-year risk of symptomatic ARE was only 2%. Outcomes were worse for QMDs 2.01-3.0 cm (1-year FFP 65%, 1-year risk of symptomatic ARE 24%). The risk of symptomatic ARE was not increased with tyrosine kinase inhibitors or immunotherapy before or after repeat SRS. CONCLUSIONS: RE on imaging was common after repeat SRS (30% at 1 year), but the risk of a symptomatic ARE was much less (11% at 1 year). The results of repeat single-fraction SRS were good for brain metastases ≤ 2 cm. The authors recommend an interval ≥ 6 months from prior SRS and a prescription dose ≥ 18 Gy. Alternatives such as HFSRS, laser interstitial thermal therapy, or resection with adjuvant radiation should be considered for recurrent brain metastases > 2 cm.
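
A small sketch of the size metric, assuming QMD denotes the root-mean-square (quadratic mean) of three orthogonal lesion diameters; this definition is an assumption, so consult the authors' prior work for the exact formula:

```python
# Sketch: quadratic mean diameter (QMD) as the root-mean-square of three
# orthogonal lesion diameters -- an assumed definition; verify against the
# authors' prior publications.
import math

def qmd(d1_cm: float, d2_cm: float, d3_cm: float) -> float:
    return math.sqrt((d1_cm**2 + d2_cm**2 + d3_cm**2) / 3.0)

print(qmd(1.8, 1.5, 1.2))  # ~1.52 cm -> falls in the 0.75-2.0 cm stratum
```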


Subject(s)
Brain Neoplasms , Melanoma , Radiation Injuries , Radiosurgery , Humans , Radiosurgery/adverse effects , Radiosurgery/methods , Retrospective Studies , Radiation Injuries/diagnostic imaging , Radiation Injuries/etiology , Radiation Injuries/surgery , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/radiotherapy , Brain Neoplasms/pathology , Melanoma/secondary , Treatment Outcome
12.
Semin Radiat Oncol ; 32(4): 421-431, 2022 10.
Article in English | MEDLINE | ID: mdl-36202444

ABSTRACT

Recent advancements in artificial intelligence (AI) in the domain of radiation therapy (RT) and their integration into modern software-based systems raise new challenges for medical physics experts. These AI algorithms are typically data-driven, may be continuously evolving, and their behavior carries a degree of (acceptable) uncertainty due to inherent noise in the training data and the substantial number of parameters used in the algorithms. These characteristics call for adaptive, new, and comprehensive quality assurance (QA) approaches to guarantee individual patient treatment quality during AI algorithm development and subsequent deployment in a clinical RT environment. However, QA for AI-based systems is an emerging area that has not been intensively explored and requires interactive collaboration between medical doctors, medical physics experts, and commercial/research AI institutions. This article summarizes the current QA methodologies for AI modules in every subdomain of RT, with further focus on persistent shortcomings and upcoming key challenges and perspectives.


Subject(s)
Algorithms , Artificial Intelligence , Humans
13.
Behav Brain Sci ; 46: e233, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36254782

ABSTRACT

An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously - not just as a mnemonic prop - to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: At their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: Writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem. We possess specialized cognitive resources for learning spoken language, but lack them for graphic codes. I argue in favor of a different account: What is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repairing of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller in size and more specialized.


Subject(s)
Language , Learning , Humans , Sign Language , Communication
14.
Front Oncol ; 12: 920393, 2022.
Article in English | MEDLINE | ID: mdl-35912214

ABSTRACT

Introduction: Patients with solid cancers have a cumulative risk of 20-40% of developing brain metastases (BM). Stereotactic radiotherapy (SRT) enables the application of high focal doses of radiation to a volume and is often used for BM treatment. However, SRT can cause adverse radiation effects (ARE), such as radiation necrosis, which sometimes cause irreversible damage to the brain. It is therefore of clinical interest to identify patients at a high risk of developing ARE. We hypothesized that models trained with radiomics features, deep learning (DL) features, and patient characteristics, or their combination, can predict ARE risk in patients with BM before SRT. Methods: Gadolinium-enhanced T1-weighted MRIs and characteristics from patients treated with SRT for BM were collected for a training and testing cohort (N = 1,404) and a validation cohort (N = 237) from a separate institute. From each lesion in the training set, radiomics features were extracted and used to train an extreme gradient boosting (XGBoost) model. A DL model was trained on the same cohort to make a separate prediction and to extract the last layer of features. Different XGBoost models were built using only radiomics features, only DL features, only patient characteristics, or combinations of them. Evaluation was performed using the area under the receiver operating characteristic curve (AUC) on the external dataset. Predictions of ARE for individual lesions and per patient were investigated. Results: The best-performing XGBoost model on a lesion level was trained on a combination of radiomics features and DL features (AUC of 0.71 and recall of 0.80). On a patient level, a combination of radiomics features, DL features, and patient characteristics obtained the best performance (AUC of 0.72 and recall of 0.84). The DL model achieved an AUC of 0.64 and recall of 0.85 per lesion and an AUC of 0.70 and recall of 0.60 per patient. Conclusion: Machine learning models built on radiomics features and DL features extracted from BM, combined with patient characteristics, show potential to predict ARE at the patient and lesion levels. These models could be used in clinical decision making, informing patients of their risk of ARE and allowing physicians to opt for different therapies.
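
A minimal sketch of the best-performing configuration, an XGBoost classifier on concatenated radiomics and DL features evaluated by external-cohort AUC; all arrays are synthetic placeholders for the study's extracted features:

```python
# Sketch: XGBoost on concatenated radiomics + deep-learning features,
# evaluated by AUC on an external cohort. All arrays are synthetic, so the
# printed numbers are meaningless; the structure mirrors the study's setup.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(1404, 100))   # per-lesion radiomics features
dl = rng.normal(size=(1404, 32))           # last-layer DL features
y = rng.integers(0, 2, size=1404)          # ARE vs. no ARE (per lesion)

X = np.hstack([radiomics, dl])             # combined feature set
model = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

X_ext = np.hstack([rng.normal(size=(237, 100)), rng.normal(size=(237, 32))])
y_ext = rng.integers(0, 2, size=237)       # external validation cohort
proba = model.predict_proba(X_ext)[:, 1]
print("external AUC:", roc_auc_score(y_ext, proba))
print("recall:", recall_score(y_ext, proba > 0.5))
```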

15.
Nature ; 608(7924): 677-681, 2022 08.
Article in English | MEDLINE | ID: mdl-36002484

ABSTRACT

The central technological appeal of quantum science resides in exploiting quantum effects, such as entanglement, for a variety of applications, including computing, communication and sensing [1]. The overarching challenge in these fields is to address, control and protect systems of many qubits against decoherence [2]. Against this backdrop, optical photons, naturally robust and easy to manipulate, represent ideal qubit carriers. However, the most successful technique so far for creating photonic entanglement [3] is inherently probabilistic and, therefore, subject to severe scalability limitations. Here we report the implementation of a deterministic protocol [4-6] for the creation of photonic entanglement with a single memory atom in a cavity [7]. We interleave controlled single-photon emissions with tailored atomic qubit rotations to efficiently grow Greenberger-Horne-Zeilinger (GHZ) states [8] of up to 14 photons and linear cluster states [9] of up to 12 photons with a fidelity lower bounded by 76(6)% and 56(4)%, respectively. Thanks to a source-to-detection efficiency of 43.18(7)% per photon, we measure these large states about once every minute, which is orders of magnitude faster than in any previous experiment [3,10-13]. In the future, this rate could be increased even further, the scheme could be extended to two atoms in a cavity [14,15], or several sources could be quantum mechanically coupled [16], to generate higher-dimensional cluster states [17]. Overcoming the limitations encountered by probabilistic schemes for photonic entanglement generation, our results may offer a way towards scalable measurement-based quantum computation [18,19] and communication [20,21].
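
For reference, the two state families grown in this experiment have standard definitions (textbook material, not notation taken from the paper):

$$ |\mathrm{GHZ}_N\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}\big), \qquad |\mathrm{LC}_N\rangle = \prod_{i=1}^{N-1} \mathrm{CZ}_{i,i+1}\, |+\rangle^{\otimes N}, $$

with $\mathrm{CZ}_{i,i+1}$ a controlled-Z gate between neighboring qubits and $|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$; the reported fidelity lower bounds of 76(6)% and 56(4)% are measured against these ideal states.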

16.
Med Phys ; 49(10): 6293-6302, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35946608

ABSTRACT

PURPOSE: A knowledge-based planning technique is developed based on Bayesian stochastic frontier analysis. A novel missing-data management approach is applied in order to handle missing organs at risk and work with a complete dataset. METHODS: Geometric metrics are used to predict DVH metrics for lung SBRT with a retrospective database of 299 patients. In total, 16 DVH metrics were predicted for the main bronchus, heart, esophagus, spinal cord PRV, great vessels, and chest wall. The predictive model is tested on a test group of 50 patients. RESULTS: The mean difference between the observed and predicted values ranges between 1.5 ± 1.9 Gy and 4.9 ± 5.3 Gy for the spinal cord PRV D0.35cc and the main bronchus D0.035cc, respectively. CONCLUSIONS: The missing-data model implemented in the predictive model is robust in the estimation of the parameters. Bayesian stochastic frontier analysis with missing-data management can be used to predict DVH metrics for lung SBRT treatment planning.
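
For context, a stochastic frontier model has a standard general form (sign conventions vary with whether the frontier is a maximum or a minimum; the study's exact parameterization and priors are not given in the abstract):

$$ y_i = \beta^\top x_i + v_i + u_i, \qquad v_i \sim \mathcal{N}(0, \sigma_v^2), \qquad u_i \sim \mathcal{N}^+(0, \sigma_u^2), $$

where $y_i$ is a DVH metric for plan $i$, $x_i$ are its geometric metrics, $v_i$ is symmetric noise, and the one-sided term $u_i \ge 0$ measures the distance from the frontier of best-achievable dose; Bayesian inference places priors on $\beta$, $\sigma_v$, and $\sigma_u$.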


Subject(s)
Lung Neoplasms , Radiosurgery , Radiotherapy, Intensity-Modulated , Algorithms , Bayes Theorem , Data Management , Humans , Lung , Lung Neoplasms/radiotherapy , Lung Neoplasms/surgery , Organs at Risk , Radiosurgery/methods , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Retrospective Studies
17.
Nat Commun ; 13(1): 3423, 2022 06 14.
Article in English | MEDLINE | ID: mdl-35701415

ABSTRACT

Detection and segmentation of abnormalities on medical images are highly important for patient management, including diagnosis, radiotherapy, and response evaluation, as well as for quantitative image research. We present a fully automated pipeline for the detection and volumetric segmentation of non-small cell lung cancer (NSCLC), developed and validated on 1328 thoracic CT scans from 8 institutions. Along with quantitative performance detailed by image slice thickness, tumor size, image interpretation difficulty, and tumor location, we report an in-silico prospective clinical trial in which we show that the proposed method is faster and more reproducible than the experts. Moreover, we demonstrate that, on average, radiologists and radiation oncologists preferred the automatic segmentations in 56% of the cases. Additionally, we evaluate the prognostic power of the automatic contours by applying RECIST criteria and measuring tumor volumes. Segmentations by our method stratified patients into low- and high-survival groups with higher significance than those based on manual contours.
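
A minimal sketch of the two downstream measurements mentioned above, tumor volume from a binary segmentation mask and a RECIST-style longest axial diameter; the mask, spacing, and simplifications are placeholders, and real RECIST assessment involves rules not shown here:

```python
# Sketch: tumor volume and a RECIST-style longest in-plane diameter from a
# binary segmentation mask. Real RECIST measurement has more rules than this.
import numpy as np
from scipy.spatial.distance import pdist

mask = np.zeros((40, 128, 128), dtype=bool)    # placeholder (z, y, x) mask
mask[18:23, 60:75, 58:70] = True               # synthetic "tumor"
spacing = np.array([3.0, 0.9, 0.9])            # slice thickness, pixel size (mm)

volume_ml = mask.sum() * spacing.prod() / 1000.0
print(f"volume = {volume_ml:.1f} mL")

# Longest diameter on the axial slice where it is largest
longest = 0.0
for z in range(mask.shape[0]):
    pts = np.argwhere(mask[z]) * spacing[1:]   # in-plane coordinates (mm)
    if len(pts) > 1:
        longest = max(longest, pdist(pts).max())
print(f"longest axial diameter = {longest:.1f} mm")
```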


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Algorithms , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Humans , Lung Neoplasms/diagnostic imaging , Prospective Studies , Tomography, X-Ray Computed/methods
18.
Int J Radiat Oncol Biol Phys ; 113(5): 1091-1102, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35533908

ABSTRACT

PURPOSE: Performing measurement-based patient-specific quality assurance (PSQA) is recognized as a resource-intensive and time inefficient task in the radiation therapy treatment workflow. Paired with technological refinements in modern radiation therapy, research toward measurement-free PSQA has seen increased interest during the past 5 years. However, these efforts have not been clinically implemented or prospectively validated in the United States. We propose a virtual QA (VQA) system and workflow to assess the safety and workload reduction of measurement-free PSQA. METHODS: An XGBoost machine learning model was designed to predict PSQA outcomes of volumetric modulated arc therapy plans, represented as percent differences between the measured ion chamber point dose in a phantom and the corresponding planned dose. The final model was deployed within a web application to predict PSQA outcomes of clinical plans within an existing clinical workflow. The application also displays relevant feature importance and plan-specific distribution analyses relative to database plans for documentation and to aid physicist interpretation and evaluation. VQA predictions were prospectively validated over 3 months of measurements at our clinic to assess safety and efficiency gains. RESULTS: Over 3 months, VQA predictions for 445 volumetric modulated arc therapy plans were prospectively validated at our institution. VQA predictions for these plans had a mean absolute error of 1.08% ± 0.77%, with a maximum absolute error of 2.98%. Using a 1% prediction threshold (ie, plans predicted to have an absolute error <1% would not require a measurement) would yield a 69.2% reduction in QA workload, saving 32.5 hours per month on average, with 81.5% sensitivity, 72.4% specificity, and an area under the curve of 0.81 at a 3% clinical threshold and 100% sensitivity, 70% specificity, and an area under the curve of 0.93 at a 4% clinical threshold. CONCLUSIONS: This is the first prospective clinical implementation and validation of VQA in the United States, which we observed to be efficient. Using a conservative threshold, VQA can substantially reduce the number of required measurements for PSQA, leading to more effective allocation of clinical resources.
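
A minimal sketch of the gating logic described above: plans whose predicted absolute error falls below the prediction threshold skip measurement, and sensitivity/specificity are scored against a clinical action threshold on the measured error. All data here is synthetic:

```python
# Sketch: virtual-QA gating. Plans with predicted |error| < 1% skip measurement;
# sensitivity/specificity are judged against a clinical threshold on measured error.
import numpy as np

rng = np.random.default_rng(1)
measured = rng.normal(0, 1.5, size=445)              # measured point-dose error (%)
predicted = measured + rng.normal(0, 0.8, size=445)  # model prediction (%)

pred_threshold, clinical_threshold = 1.0, 3.0
flagged = np.abs(predicted) >= pred_threshold        # these plans still get measured
failing = np.abs(measured) >= clinical_threshold     # true QA failures

workload_reduction = 1 - flagged.mean()
sensitivity = (flagged & failing).sum() / failing.sum()
specificity = (~flagged & ~failing).sum() / (~failing).sum()
print(f"measurements avoided: {workload_reduction:.1%}")
print(f"sensitivity: {sensitivity:.1%}, specificity: {specificity:.1%}")
```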


Subject(s)
Radiotherapy, Intensity-Modulated , Humans , Prospective Studies , Quality Assurance, Health Care , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted
19.
Adv Radiat Oncol ; 7(2): 100886, 2022.
Article in English | MEDLINE | ID: mdl-35387423

ABSTRACT

Purpose: The aim was to develop a novel artificial intelligence (AI)-guided clinical decision support system to predict radiation doses to subsites of the mandible using diagnostic computed tomography scans acquired before any planning of head and neck radiation therapy (RT). Methods and Materials: A dose classifier was trained using RT plans from 86 patients with oropharyngeal cancer; the test set consisted of an additional 20 plans. The classifier was trained to predict whether mandible subsites would receive a mean dose >50 Gy. The AI predictions were prospectively evaluated and compared with those of a specialist head and neck radiation oncologist for 9 patients. Positive predictive value (PPV), negative predictive value (NPV), Pearson correlation coefficient, and Lin concordance correlation coefficient were calculated to compare the AI predictions to those of the physician. Results: In the test dataset, the AI predictions had a PPV of 0.95 and an NPV of 0.88. For the 9 patients evaluated prospectively, there was a strong correlation between the predictions of the AI algorithm and the physician (r = 0.72, P < .001). Comparing the AI algorithm versus the physician, the PPVs were 0.82 versus 0.25, and the NPVs were 0.94 versus 1.0, respectively. Concordance between physician estimates and final planned doses was 0.62; it was 0.71 between AI-based estimates and final planned doses. Conclusion: AI-guided decision support increased the precision and accuracy of pre-RT dental dose estimates.
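
Lin's concordance correlation coefficient, used above to compare dose estimates with final planned doses, follows a standard formula; a small sketch with hypothetical dose values:

```python
# Sketch: Lin's concordance correlation coefficient (CCC) between two raters,
# e.g., AI-estimated vs. final planned mean doses. Standard formula.
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

planned = np.array([52.1, 48.3, 61.0, 35.2, 44.8])    # hypothetical doses (Gy)
estimated = np.array([50.4, 49.9, 58.7, 37.0, 45.5])
print(f"CCC = {lins_ccc(estimated, planned):.2f}")
```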

20.
Cogn Sci ; 46(2): e13113, 2022 02.
Article in English | MEDLINE | ID: mdl-35174902

ABSTRACT

The amount of information conveyed by linguistic conventions depends on their precision, yet the codes that humans and other animals use to communicate are quite ambiguous: they may map several vague meanings to the same symbol. How does semantic precision evolve, and what are the constraints that limit it? We address this question using a multiplayer gaming app, where individuals communicate with one another in a scaled-up referential game. Here, the goal is for a sender to use black and white symbols to communicate colors. We expected that the players' mappings between symbols and colors would grow more specific over time, through a selection process whereby precise mappings are preferentially copied. We found that players become increasingly more precise in their use of symbols over the course of their interactions. This trend did not, however, result from selective copying of precise mappings. We explore the implications of this result for the study of lexical ambiguity, Zipf's Law of Meaning, and disagreements over semantic conventions.


Subject(s)
Cultural Evolution , Mobile Applications , Video Games , Humans , Language , Semantics