1.
J Pathol Inform ; 13: 100150, 2022.
Article in English | MEDLINE | ID: mdl-36268090

ABSTRACT

Background: A pathology order interface using Health Level 7 (HL7) standards generally has an HL7 client program that gathers information from the clinical electronic medical record (EMR) system, packages the information in the form of an HL7 message, and sends the message over secure communication protocols to an HL7 interface engine on the pathology side. We describe an alternative approach that transmits the text obtained from requisitions, with subsequent just-in-time construction of HL7 messages. Materials and methods: The order interface connects a dermatology clinic EMR and a pathology information system. A text acquisition and processing program runs in the background on desktop computers in the dermatology clinic, so that a copy of the pathology requisition text is obtained each time the clinic prints a pathology requisition. Discrete data elements are extracted from this text, prepended to the text, and saved on a shared drive within the dermatology office intranet. The text file is then transferred to the pathology intranet using secure File Transfer Protocol (sFTP). Once the file is received, an HL7 message construction program extracts the discrete data elements to construct an HL7 message. The HL7 message is then forwarded to an HL7 interface engine and entered into the pathology information system as an order. Results: Using an actual case as an example, we demonstrate the content and format of the information flowing through each step of the interface. Conclusions: Constructing such an interface does not involve the clinic EMR vendor, avoiding the associated cost and potential delay. This interface also has advantages over our other order interfaces, constructed using the conventional approach, in that it requires no change to the ordering process and it avoids duplicate orders.
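The just-in-time construction step described above can be sketched as follows. This is a hypothetical Python illustration, not the paper's actual code: the prepended field names, the requisition layout, and the minimal MSH/PID/ORC segment set are all assumptions for demonstration.

```python
# Hypothetical sketch: split discrete elements that were prepended to a
# captured requisition text, then assemble a minimal HL7 order message.
# Field names and segment layout are illustrative only.

def parse_prepended_fields(text):
    """Split off leading 'KEY: value' lines from the requisition body."""
    fields, body = {}, []
    for line in text.splitlines():
        if ":" in line and not body:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        else:
            body.append(line)
    return fields, "\n".join(body).strip()

def build_hl7_order(fields):
    """Assemble MSH/PID/ORC segments from the extracted fields."""
    segments = [
        "MSH|^~\\&|DERM_CLINIC|CLINIC|PATH_LIS|PATHOLOGY|20220101||ORM^O01|1|P|2.3",
        f"PID|||{fields['MRN']}||{fields['NAME']}||{fields['DOB']}",
        f"ORC|NW|{fields['ORDER_ID']}",
    ]
    return "\r".join(segments)  # HL7 v2 segments are CR-delimited

requisition = """MRN: 12345
NAME: DOE^JANE
DOB: 19700101
ORDER_ID: S22-0001
Skin, left forearm, shave biopsy"""

fields, body = parse_prepended_fields(requisition)
message = build_hl7_order(fields)
```

In this sketch the clinic-side program only needs to emit plain text; all HL7-specific knowledge lives on the pathology side, which is what makes the interface independent of the clinic EMR vendor.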

2.
Arch Pathol Lab Med ; 145(5): 599-606, 2021 05 01.
Article in English | MEDLINE | ID: mdl-32960950

ABSTRACT

CONTEXT: Studies on the adoption of voice recognition in health care have mostly focused on turnaround time and error rate, with less attention paid to the impact on provider efficiency. OBJECTIVE: To study the impact of voice recognition on the efficiency of grossing biopsy specimens. DESIGN: Timestamps corresponding to barcode scanning of biopsy specimen bottles and cassettes were retrieved from the pathology information system database. The time elapsed between scanning a specimen bottle and the corresponding first cassette was the length of time spent on the gross processing of that specimen, designated the specimen time. For the first specimen of a case, the specimen time additionally included the time spent dictating the clinical information; the specimen times were therefore divided into 2 categories: first-specimen time and subsequent-specimen time. The impact of voice recognition on specimen times was studied using both univariate and multivariate analyses. RESULTS: Specimen complexity, prosector variability, length of the clinical information text, and the number of biopsies the prosector grossed that day were the major determinants of specimen times. Adopting voice recognition had a negligible impact on specimen times. CONCLUSIONS: Adopting voice recognition in the gross room removes the need to hire transcriptionists without negatively affecting the efficiency of the prosectors, resulting in an overall cost saving. Using computer scripting to automatically enter clinical information (received through the electronic order interface) into report templates may further increase grossing efficiency in the future.
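The specimen-time derivation described in the design can be sketched as below. The scan-record layout and data values are invented for illustration; only the bottle-to-first-cassette timing logic follows the description above.

```python
# Illustrative sketch: derive specimen times from barcode-scan timestamps,
# i.e. the time from scanning a specimen bottle to scanning its FIRST
# cassette. The record layout here is an assumption.
from datetime import datetime

scans = [  # (timestamp, case, specimen, container_type) -- toy data
    ("2021-03-01 09:00:00", "S21-100", "A", "bottle"),
    ("2021-03-01 09:03:30", "S21-100", "A", "cassette"),
    ("2021-03-01 09:04:00", "S21-100", "A", "cassette"),
    ("2021-03-01 09:05:00", "S21-100", "B", "bottle"),
    ("2021-03-01 09:06:15", "S21-100", "B", "cassette"),
]

def specimen_times(scans):
    """Seconds from bottle scan to first cassette scan, per specimen."""
    bottle, first_cassette = {}, {}
    for ts, case, spec, kind in scans:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        key = (case, spec)
        if kind == "bottle":
            bottle[key] = t
        elif key not in first_cassette:  # only the first cassette counts
            first_cassette[key] = t
    return {k: (first_cassette[k] - bottle[k]).total_seconds()
            for k in bottle if k in first_cassette}

times = specimen_times(scans)
# Specimen A, the first specimen of the case, would fall into the
# first-specimen category (its time includes dictating the clinical
# information); specimen B into the subsequent-specimen category.
```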


Subject(s)
Pathology, Clinical/methods , Speech Recognition Software , Biopsy , Efficiency , Humans , Multivariate Analysis , Pathology, Clinical/organization & administration , Reproducibility of Results , Time Factors , Workflow
3.
J Pathol Inform ; 10: 20, 2019.
Article in English | MEDLINE | ID: mdl-31367472

ABSTRACT

BACKGROUND: Pathology report defects are errors in pathology reports, such as transcription/voice recognition errors and incorrect nondiagnostic information. Examples of the latter include incorrect gender, incorrect submitting physician, incorrect description of the tissue blocks submitted, and report formatting issues. Over the past 5 years, we have implemented computational algorithms to identify and correct these report defects. MATERIALS AND METHODS: Report texts, tissue blocks submitted, and other relevant information are retrieved from the pathology information system database. Two complementary algorithms identify voice recognition errors by parsing the gross description texts to either (i) identify previously encountered error patterns or (ii) flag sentences containing previously unused two-word sequences (bigrams). A third algorithm, based on identifying conflicting information from two different sources, detects tissue block designation errors in the gross description: the information on actual block submission is compared with the block designation information parsed from the gross description text. RESULTS: The computational algorithms identify voice recognition errors in approximately 8%-10% of cases and block designation errors in approximately 0.5%-1% of all cases. CONCLUSIONS: The algorithms described here have been effective in reducing pathology report defects. In addition to detecting voice recognition and block designation errors, these algorithms have also been used to detect other report defects, such as wrong gender, wrong provider, and special stains or immunostains performed but not reported.
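The bigram-based flagging idea, algorithm (ii) above, can be sketched in a few lines. The reference corpus here is toy data; a real deployment would build the known-bigram set from a large archive of finalized gross descriptions.

```python
# Minimal sketch of bigram flagging: sentences in a new gross description
# containing a two-word sequence never seen in a reference corpus are
# flagged for review as possible voice recognition errors.
import re

def bigrams(sentence):
    """Lowercased word pairs appearing in a sentence."""
    words = re.findall(r"[a-z']+", sentence.lower())
    return set(zip(words, words[1:]))

corpus = [  # toy stand-in for an archive of correct gross descriptions
    "the specimen is received in formalin",
    "the specimen is entirely submitted in one cassette",
]
known = set()
for sentence in corpus:
    known |= bigrams(sentence)

def flag_sentences(text):
    """Return sentences containing a previously unseen bigram."""
    sentences = [s.strip() for s in re.split(r"[.]", text) if s.strip()]
    return [s for s in sentences if bigrams(s) - known]

flagged = flag_sentences(
    "The specimen is received in formalin. "
    "The specimen is entirely submitted in one has set."
)
# "one has set" is a plausible voice-recognition mangling of "one cassette";
# its unseen bigrams cause the second sentence to be flagged.
```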

4.
J Pathol Inform ; 10: 13, 2019.
Article in English | MEDLINE | ID: mdl-31057982

ABSTRACT

BACKGROUND: At our department, each specimen was assigned a tentative Current Procedural Terminology (CPT) code at accessioning. The codes were subject to subsequent changes by pathologist assistants and pathologists. After the cases had been finalized, their CPT codes went through a final verification step by coding staff, with the aid of a keyword-based CPT code-checking web application. Greater than 97% of the initial assignments were correct. This article describes the construction of a CPT code-predicting neural network model and its incorporation into the CPT code-checking application. MATERIALS AND METHODS: The R programming language was used. Pathology report texts and CPT codes for the cases finalized during January 1-November 30, 2018, were retrieved from the database. The order of the specimens was randomized before the data were partitioned into training and validation sets. The R Keras package was used for both model training and prediction. The chosen neural network had a three-layer architecture consisting of a word-embedding layer, a bidirectional long short-term memory (LSTM) layer, and a densely connected layer. It used concatenated header-diagnosis texts as input. RESULTS: The model predicted CPT codes in the validation and test data sets with accuracies of 97.5% and 97.6%, respectively. Closer examination of the test data set (cases from December 1 to 27, 2018) revealed two interesting observations. First, among the specimens that had incorrect initial CPT code assignments, the model disagreed with the initial assignments in 73.6% (117/159) and agreed in 26.4% (42/159). Second, the model identified nine additional specimens with incorrect CPT codes that had evaded all steps of checking. CONCLUSIONS: A neural network model using report texts to predict CPT codes can achieve high accuracy in prediction and moderate sensitivity in error detection. Neural networks may play increasing roles in CPT coding in surgical pathology.
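The data preparation described above (concatenating header and diagnosis texts, randomizing specimen order, then splitting into training and validation sets) can be sketched as below. The paper used R with the Keras package; this stdlib-only Python sketch with invented toy specimens shows just the preprocessing, with the described network architecture noted in a comment.

```python
# Sketch of the described data preparation. Toy examples, not real data;
# the actual work was done in R with the Keras package.
import random

specimens = [  # (header, diagnosis, cpt) -- hypothetical examples
    ("skin, left arm, shave", "basal cell carcinoma", "88305"),
    ("gallbladder, cholecystectomy", "chronic cholecystitis", "88304"),
    ("colon, polyp, biopsy", "tubular adenoma", "88305"),
    ("prostate, needle biopsy", "adenocarcinoma", "88307"),
]

# Model input = concatenated header-diagnosis text, as described.
examples = [(f"{header} {diagnosis}", cpt)
            for header, diagnosis, cpt in specimens]

random.seed(0)            # randomize specimen order before partitioning
random.shuffle(examples)
split = int(0.75 * len(examples))
train, validation = examples[:split], examples[split:]

# The described network would then be trained on `train`:
# word-embedding layer -> bidirectional LSTM layer -> densely connected
# (softmax) output over the CPT code vocabulary.
```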

5.
J Pathol Inform ; 7: 44, 2016.
Article in English | MEDLINE | ID: mdl-28066684

ABSTRACT

BACKGROUND: Different methods have been described for extracting data from pathology reports, with varying degrees of success. Here, a technique for extracting data directly from a relational database is described. METHODS: Our department uses synoptic reports, modified from College of American Pathologists (CAP) Cancer Protocol Templates, to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language extended with the RODBC package was used to query the pathology information system database. Reports containing the melanoma of skin synoptic report from the past four and a half years were retrieved, and individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to retrieve and extract the lymph node staging information in subsequent reports from the same patients. RESULTS: A total of 426 synoptic reports corresponding to unique melanoma of skin lesions were retrieved, and the data elements of interest were extracted into an R data frame. The distribution of Breslow depth of the melanomas, grouped by year, is used as an example of intra-report data extraction and analysis. When new pN staging information was present in the subsequent reports, 82% (77/94) was retrieved precisely (pN0, pN1, pN2, and pN3), and an additional 15% (14/94) was retrieved with some ambiguity (positive, or the knowledge that there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis. CONCLUSIONS: R extended with the RODBC package is a simple and versatile approach well suited to the above tasks. The success or failure of the retrieval and extraction depended largely on whether the reports were formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adopted for other pathology information systems that use a relational database for data management.
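The query-then-parse approach can be sketched as below. The paper used R with RODBC against the laboratory information system database; this hedged Python sketch uses sqlite3 with invented table and column names, and a regular expression to pull one synoptic element (Breslow depth) out of the report text.

```python
# Hedged sketch of direct database extraction: query report texts from a
# relational database, then parse a discrete synoptic element with a
# regular expression. Table/column names and report texts are invented.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (case_id TEXT, report_text TEXT)")
conn.executemany(
    "INSERT INTO reports VALUES (?, ?)",
    [
        ("S16-001", "MELANOMA OF SKIN\nBreslow depth: 0.8 mm\npT1a"),
        ("S16-002", "MELANOMA OF SKIN\nBreslow depth: 2.1 mm\npT3a"),
    ],
)

def extract_breslow(text):
    """Pull the Breslow depth (mm) out of a synoptic report, if present."""
    m = re.search(r"Breslow depth:\s*([\d.]+)\s*mm", text)
    return float(m.group(1)) if m else None

rows = conn.execute("SELECT case_id, report_text FROM reports").fetchall()
depths = {case: extract_breslow(text) for case, text in rows}
```

As the conclusions note, this kind of extraction succeeds only to the extent that the synoptic elements are consistently phrased; the regular expression above would miss any report that wrote the depth differently.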

6.
Arch Pathol Lab Med ; 139(7): 929-35, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26125433

ABSTRACT

CONTEXT: Pathologists' daily tasks consist of both the professional interpretation of slides and the secretarial tasks of translating these interpretations into final pathology reports, the latter of which is a time-consuming endeavor for most pathologists. OBJECTIVE: To describe an artificial intelligence system that performs secretarial tasks, designated Secretary-Mimicking Artificial Intelligence (SMILE). DESIGN: The underlying implementation of SMILE is a collection of computer programs that work in concert to "listen to" voice commands and to "watch for" the window changes caused by slide barcode scanning; SMILE responds to these inputs by acting upon PowerPath Client windows (Sunquest Information Systems, Tucson, Arizona) and its Microsoft Word (Microsoft, Redmond, Washington) Add-In window, culminating in the reports being typed and finalized. SMILE also communicates relevant information to the pathologist via the computer speakers and a message box on the screen. RESULTS: SMILE performs many secretarial tasks intelligently and semiautonomously, with rapidity and consistency, enabling pathologists to focus on slide interpretation; this results in a marked increase in productivity, a decrease in errors, and a reduction of stress in daily practice. SMILE undergoes continual encounter-based learning, resulting in continuous improvement of its knowledge-based intelligence. CONCLUSIONS: Artificial intelligence for pathologists is both feasible and powerful. The future widespread use of artificial intelligence in our profession is certainly going to transform how we practice pathology.
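The design describes an event-driven pattern: programs that watch for inputs (voice commands, barcode-triggered window changes) and dispatch actions in response. The toy dispatcher below illustrates only that pattern; SMILE's actual interaction with PowerPath and Word windows is far richer, and every name here is hypothetical.

```python
# Conceptual sketch only: an event-driven dispatcher of the kind the
# design describes, routing voice-command and window-change events to
# handlers. All names and events are invented for illustration.
def handle_voice(command, log):
    """React to a recognized voice command (toy: just record it)."""
    log.append(f"voice: {command}")

def handle_window_change(window, log):
    """React to a window change caused by barcode scanning."""
    log.append(f"window: {window}")

HANDLERS = {"voice": handle_voice, "window": handle_window_change}

def dispatch(events):
    """Route each (kind, payload) event to its handler, in order."""
    log = []
    for kind, payload in events:
        HANDLERS[kind](payload, log)
    return log

log = dispatch([("window", "PowerPath case S15-123"),
                ("voice", "insert template"),
                ("voice", "finalize report")])
```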


Subject(s)
Artificial Intelligence , Pathology, Clinical/methods , Telepathology , Humans , Image Processing, Computer-Assisted , Pathology, Clinical/instrumentation