Preparing for the Artificial Intelligence Explosion at RSNA 2019
November 04, 2019 – What do the following numbers have to do with the annual meeting of the Radiological Society of North America: 2, 12, 32, 271, and 308? They refer to the presence of “artificial intelligence” at the show from 2015 to 2019, in that order.
RSNA sees tremendous potential in the application of AI and its various permutations to the work of radiologists across the continent — a significant shift from the initial belief that AI would make radiologists redundant.
The society has gone so far as to stand up an expanded AI showcase for this year’s show, which takes place December 1 through 6 at McCormick Place in Chicago.
“Many RSNA meeting attendees seek out AI subject matter. Creating an encompassing showcase on artificial intelligence for exhibitors, educators and researchers will create a dynamic environment for our attendees,” said Steve Drew, RSNA Assistant Executive Director of Scientific Assembly, Informatics and Corporate Relations in a July announcement.
“High interest by commercial companies and meeting attendees led to this exciting development,” added John Jaworski, CEM, Director: Meetings and Exhibition Services of RSNA. “We now have more than 100 AI Showcase companies participating—which is up 25 percent over 2018’s final showcase figures—and the AI Theater, Deep Learning Classroom and Hands-on Classroom will provide various educational opportunities on artificial intelligence within the Showcase.”
Given this explosion of AI at RSNA’s annual event, attendees should know the terms that will be thrown around and be able to separate hype from reality. So here’s a primer for you, dear reader.
Getting Conversant in AI
AI is often seen as the silver bullet to healthcare’s many problems. It holds the promise of detecting diseases earlier and with more accuracy, standardizing clinical processes, and eliminating scheduling and paperwork. Ultimately, integrating artificial intelligence into clinical workflows can help ease provider burnout and improve patient outcomes.
Since 2016 alone, the FDA has approved 38 artificial intelligence algorithms for clinical use. Nearly half of these apply to radiology practice, the field most quickly adopting AI. Images and image reads easily lend themselves to interpretation by artificial intelligence.
The radiology literature is rich with studies demonstrating how algorithms and machine learning models are outperforming providers in detecting, characterizing, and monitoring disease. Many predict that artificial intelligence will continue to improve, eventually exceeding humans in certain, more complex tasks.
Many radiologists fear that the widespread use of AI will result in machines replacing their jobs. However, artificial intelligence should be a supplement to the traditional workflow of providers, complementing their work rather than eliminating it.
For radiologists to implement artificial intelligence into the clinical workflow with confidence, they must understand the different types of artificial intelligence and how each can be leveraged in radiology practice. That understanding dispels false assumptions and eases hesitancy toward adoption.
Natural Language Processing
Natural language processing (NLP) is one branch of artificial intelligence that allows computers to understand and interpret language. The technology can comb through reports, interpret spoken language, and generate structured text from free text.
A systematic review of NLP in radiology practice identified dozens of natural language processing methodologies applicable to clinical practice. Results demonstrated how the technology can be used for diagnostic surveillance, quality assessment, clinical support services, and cohort building for epidemiological studies.
All four of these aspects of care can help improve provider efficiency and care quality. Diagnostic surveillance allows the machine to alert providers when items have not been acted on, promoting efficient care and quick referral. The ability of natural language processing to transform free text into structured text can reduce the administrative burden on providers, automating routine data entry and improving clinical workflow. Cohort building allows researchers to quickly identify individuals for studies and allows providers to identify high-risk groups sooner.
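To make the free-text-to-structured-text idea concrete, the sketch below pulls a few fields out of an invented report snippet using simple pattern rules. This is only an illustration: production NLP systems use trained language models rather than hand-written rules, and the report text, field names, and patterns here are all hypothetical.

```python
import re

# Invented free-text impression; real systems process full dictated reports.
REPORT = (
    "IMPRESSION: 8 mm nodule in the right upper lobe. "
    "Recommend follow-up CT in 6 months. No pleural effusion."
)

def extract_findings(report: str) -> dict:
    """Turn a free-text snippet into structured fields with simple rules.
    A sketch only; clinical NLP pipelines rely on trained models."""
    nodule = re.search(r"(\d+(?:\.\d+)?)\s*mm nodule", report)
    follow_up = re.search(r"follow-up (\w+) in (\d+) (\w+)", report)
    return {
        "nodule_size_mm": float(nodule.group(1)) if nodule else None,
        "follow_up": (f"{follow_up.group(1)} in {follow_up.group(2)} "
                      f"{follow_up.group(3)}") if follow_up else None,
        "pleural_effusion": "no pleural effusion" not in report.lower(),
    }

print(extract_findings(REPORT))
```

Once the text is structured this way, the fields can feed diagnostic surveillance (e.g., flagging a recommended follow-up that was never scheduled) or cohort queries.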
Machine Learning
Another branch of artificial intelligence is machine learning. In this model, algorithms learn from a data set how to solve a specific task. Data is fed into the system; the machine learns from it and uses what it has learned to predict a desired outcome (e.g., the risk of contracting a disease). Rather than being programmed to produce a specific result from a data set, the machine learns to predict outcomes by finding patterns in the data and identifying which variables are most influential to the result.
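As a minimal sketch of "learning patterns from data," here is one classic machine learning method, k-nearest neighbors, applied to invented risk data. The features, values, and labels are all hypothetical; the point is that the prediction rule comes from the examples themselves, not from a hand-coded rule.

```python
import math

# Hypothetical training data: (age, biomarker level) -> high-risk label (1).
# All values are invented for illustration.
train = [
    ((35, 1.1), 0), ((42, 1.3), 0), ((50, 1.2), 0),
    ((61, 2.8), 1), ((67, 3.1), 1), ((72, 2.5), 1),
]

def predict(features, k=3):
    """Classify by majority vote of the k nearest training examples.
    The pattern (older age plus higher biomarker -> risk) is inferred
    from the data rather than written as an explicit rule."""
    dists = sorted((math.dist(features, x), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(predict((65, 2.9)))  # neighbors are all high-risk examples
```

Swapping in a different training set changes the predictions without changing a line of logic, which is the essential difference from traditional rule-based programming.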
In radiology practice, machine learning has a wide array of potential applications, as the sheer volume of data radiology generates is ripe for algorithm development. Machine learning processes can learn how to read and interpret a variety of medical images, including PET scans, MRIs, and CT scans. Quicker and more accurate reads of these images can identify disease faster and at an earlier stage.
Some studies indicate that machine learning can help improve overall workflow, communication, and patient safety if image read time is decreased and the quality of the image read is improved. Not only can this give providers more time to spend with patients instead of interpreting results, but it can also improve patient safety as more accurate reads will result in fewer false positive or false negative diagnoses.
Other research demonstrates how machine learning can help identify complex patterns in diagnosis. As a result, this artificial intelligence method can improve radiologists’ ability to make accurate decisions, identifying diseases more precisely and accurately.
Deep Learning/Neural Networks
Deep learning, often implemented as neural networks, is a type of machine learning in which the algorithm is trained using a layered network of connections loosely modeled on the brain. The methodology has demonstrated high performance in identifying disease from imaging studies, taking machine learning one step further: rather than learning from a set of input features selected by the algorithm developer, the algorithm extracts its own features from the raw data. This more advanced form of machine learning requires large, standardized datasets to train the algorithm, since the machine must learn where to find irregularities in images on its own.
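As a toy illustration of what "the algorithm learns from the data" means, the sketch below trains a tiny one-hidden-layer network by gradient descent on the XOR pattern, which no single linear rule can capture. This is a pure-Python teaching sketch under invented settings, not how clinical imaging models are built; real systems use deep learning frameworks and vastly larger datasets.

```python
import math, random

random.seed(0)

# XOR: output is 1 only when exactly one input is 1.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(x):
    """Layered computation: inputs -> hidden activations -> output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    return h, sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)

def epoch(lr=1.0):
    """One pass of stochastic gradient descent; returns mean squared error."""
    global b2
    err = 0.0
    for x, y in data:
        h, out = forward(x)
        err += (out - y) ** 2
        d_out = (out - y) * out * (1 - out)  # error signal at the output
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # backpropagated signal
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
    return err / len(data)

first = epoch()
for _ in range(4000):
    last = epoch()
print("error before:", first, "after:", last)
```

No rule for XOR was ever written down; the network's weights are adjusted until the error shrinks, which is exactly the property that makes the learned model hard to inspect afterward.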
With obvious applicability to radiology practice, research demonstrates deep learning models can be particularly useful in screening images or early-stage identification.
Deep learning algorithms, though, are at risk of the ‘black box’ problem when their neural networks are not well understood. ‘Black box’ AI refers to an algorithm that produces an output without any accompanying explanation of how that output was generated. Thus, many providers are uneasy trusting diagnostic and treatment decisions to an algorithm they do not understand.
If deep learning methods are to be more widely and confidently utilized in radiology practice, their interpretability will need to improve, and their methods must be clearly laid out. As with all artificial intelligence methodologies, the higher the quality of the data used to build the algorithm, the more accurate and more trusted the results will be.
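One widely used way to probe a black-box model is permutation importance: shuffle one input feature and measure how much accuracy drops, since a large drop means the model relies on that feature. The sketch below applies the idea to an invented threshold "model" and synthetic data; everything here is hypothetical except the technique itself, which works on any model whose predictions can be queried.

```python
import random

random.seed(1)

# Hypothetical black-box stand-in: flags risk when a biomarker exceeds
# a threshold. Invented purely to demonstrate the probing technique.
def model(age, biomarker):
    return 1 if biomarker > 2.0 else 0

# Synthetic labelled cases; only the biomarker actually drives the label.
cases = [(random.uniform(30, 80), random.uniform(0.5, 3.5)) for _ in range(200)]
labels = [model(a, b) for a, b in cases]

def accuracy(rows):
    return sum(model(a, b) == y for (a, b), y in zip(rows, labels)) / len(rows)

def permutation_importance(col):
    """Shuffle one input column and return the resulting accuracy drop."""
    shuffled = [row[col] for row in cases]
    random.shuffle(shuffled)
    rows = [(s, b) if col == 0 else (a, s)
            for (a, b), s in zip(cases, shuffled)]
    return accuracy(cases) - accuracy(rows)

print("age importance:", permutation_importance(0))        # no drop
print("biomarker importance:", permutation_importance(1))  # large drop
```

The probe correctly reports that age is irrelevant and the biomarker is decisive, without ever looking inside the model — the kind of post-hoc explanation that can help providers calibrate trust in an algorithm.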