Meet Our Lead Software Programmer: Ethan Ulrich, PhD Candidate
In our "Meet the Team" series, our knowledgeable teammates discuss the major trends, impact, and technology of artificial intelligence in the diagnostic space. Today we speak with Ethan Ulrich, our Lead Software Programmer.
You are finishing up your PhD in Biomedical Engineering at the University of Iowa. Why did you choose to pursue a career in artificial intelligence?
My graduate work involves processing medical images for the detection and diagnosis of disease. Digital medical images contain a large amount of data, and it is my goal to design software that uses that data to predict disease. Deep learning techniques are well-suited to tackle these types of challenges.
I’m choosing this career path because artificial intelligence is going to be the way of the future. With advances in computing, it is becoming easier to develop deep learning models. Moreover, the medical community is gradually accepting these innovative technologies into their clinical workflows. There is going to be a boom of artificial intelligence in medicine, and I am excited to be a part of it.
What is your dissertation on and how does it translate to your work at Bot Image?
My dissertation covers advanced medical image processing techniques for treatment response prediction. The research focuses on predicting a patient’s response to treatment for head and neck cancer using positron emission tomography (PET) images.
The course of my research and education has equipped me well for the unique complexities involving software development for medical image computing. First, medical image data often needs to be pre-processed before it can be used in a prediction model. This may involve patient anonymization, file conversion, and data normalization. Next, when training a prediction model, certain methods are followed to reduce over-fitting or bias of the model. This may involve analysis of the patient cohort and sampling data so that sub-cohorts are represented equally. Finally, when evaluating the performance of a software model, advanced statistical techniques are needed to estimate the true performance of the model in the real world.
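To illustrate the data normalization step mentioned above, here is a minimal sketch of z-score normalization, a common choice for medical image intensities. The function name and the choice of z-scoring are my own assumptions for illustration, not details of any particular product.

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score normalization: shift the volume to zero mean and unit
    variance. If a mask is given, the statistics are computed only
    inside the masked region (e.g., inside the organ of interest)."""
    volume = volume.astype(np.float64)
    region = volume[mask] if mask is not None else volume
    return (volume - region.mean()) / region.std()

# Example: a synthetic "scan" with arbitrary intensity units.
scan = np.random.default_rng(42).normal(200.0, 50.0, size=(4, 64, 64))
normalized = zscore_normalize(scan)
```

After this step, scans acquired on different scanners or with different settings land on a comparable intensity scale before they reach the prediction model.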
Can you explain how ProstatID works for those of us who don’t have the same technical background as you?
Sure! ProstatID is software that processes MRI images to detect and diagnose prostate cancer. There are three main steps: pre-processing, organ segmentation, and tissue classification. Pre-processing converts the medical image data into a format the ProstatID algorithm can use. Organ segmentation uses a trained deep learning model to automatically identify the prostate within the MRI image. Finally, tissue classification uses a trained random forest model to assign a cancer probability to each pixel in the prostate.
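The three steps can be sketched as a simple pipeline. This is a hypothetical illustration only; none of the function names, thresholds, or placeholder models below come from the actual ProstatID implementation.

```python
# Hypothetical sketch of a three-stage pipeline: pre-processing,
# organ segmentation, tissue classification. Illustrative only.
import numpy as np

def preprocess(raw_volume):
    """Rescale voxel intensities to [0, 1] so downstream models
    see a consistent input range."""
    volume = raw_volume.astype(np.float32)
    lo, hi = volume.min(), volume.max()
    return (volume - lo) / (hi - lo)

def segment_prostate(volume):
    """Stand-in for a trained deep learning segmentation model:
    here a simple threshold produces a binary prostate mask."""
    return volume > 0.5

def classify_tissue(volume, mask):
    """Stand-in for a trained random forest classifier: assign a
    cancer 'probability' to every pixel inside the prostate mask."""
    probabilities = np.zeros_like(volume)
    probabilities[mask] = volume[mask]  # placeholder score
    return probabilities

raw = np.random.default_rng(0).uniform(0.0, 4095.0, size=(8, 8, 8))
volume = preprocess(raw)
mask = segment_prostate(volume)
prob_map = classify_tissue(volume, mask)
```

The output is a probability map the same shape as the input scan, with nonzero values only inside the segmented organ, which matches the shape of the result a radiologist would review.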
What steps have you taken to make ProstatID and other Bot Image products user-friendly?
We wanted our products to integrate as seamlessly as possible into physicians’ regular clinical workflows. That means getting rid of a lot of flashy software tech, such as dedicated desktop programs and graphical user interfaces. With ProstatID, the clinic simply sends a patient’s MRI data to the Bot Image server over a secure connection. The data is automatically processed, and results are sent back to the clinic in a matter of minutes. When the radiologist is ready to review the patient’s images, the ProstatID results will already be waiting and can be viewed in the radiologist’s preferred image viewer.
What do you feel is the most innovative aspect of the algorithm behind ProstatID?
I like how ProstatID was designed to run automatically. The algorithm automatically detects the images it needs, even when the user sends additional images. This puts less of a burden on the user.
Regarding the prediction model, we used a combination of machine learning methods to form ProstatID, namely convolutional neural networks and random forest. Each method has advantages and disadvantages over the other. When tackling the project, we played to each method’s strengths.
ProstatID uses a trained deep learning model. What are your views on the “black box” problem of deep learning?
Yes, ProstatID utilizes a deep learning model. The “black box” problem refers to a lack of understanding of how an algorithm reaches its conclusions. It is like putting your data in the top of a black box and receiving an answer out the bottom, with no clue as to how the box came up with the answer.
As an engineer, I like to know how things work, so I sympathize with those who are skeptical of black-box AI. There are methods for “opening the black box” of deep learning models, which allow us to investigate which aspects of images contribute the most to predicting disease. This not only helps us better understand the image qualities necessary for finding disease, but it may also help us better understand the disease itself.
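One of the simplest box-opening techniques is occlusion sensitivity: hide one patch of the image at a time and measure how much the model’s score drops. Regions whose occlusion causes a large drop are the ones the model relied on. The sketch below uses a toy stand-in for a model, not any real classifier.

```python
# Occlusion sensitivity sketch: the "model" here is a toy stand-in
# whose score is just the mean intensity of the top-left quadrant.
import numpy as np

def occlusion_map(image, predict, patch=4):
    """Zero out each patch in turn and record how much the model's
    score drops. Large drops mark influential regions."""
    base = predict(image)
    heat = np.zeros(image.shape, dtype=np.float64)
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - predict(occluded)
    return heat

def toy_model(img):
    # Toy scoring function: only the top-left 8x8 region matters.
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
heat = occlusion_map(img, toy_model)
```

Running this, the heat map is nonzero only in the top-left quadrant, correctly revealing that the toy model ignores the rest of the image. The same idea, applied to a real scan and a real model, highlights the anatomy driving a prediction.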
There’s been a lot of concern about AI bias, especially in the field of medicine. How has your team worked to prevent bias in ProstatID?
I often tell people regarding AI that a model is only as good as its training data. Put another way, if there is a bias in your training data, you will most likely end up with a biased model. We mitigate bias through various methods, such as having a balanced training data set so that sub-cohorts of data are represented fairly.
An example of a sub-cohort is patients without cancer. If we had trained ProstatID using only images of patients with cancer, then the model would likely find cancer in every patient, even those who do not truly have cancer. This is obviously a bad thing. The demographics of the patient cohort, such as age and race, are also a big concern. Wherever possible, we make sure that no category of patients is over- or under-represented in the training data.
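The balancing idea can be sketched as simple undersampling: trim every sub-cohort down to the size of the smallest one so that no label dominates the training set. The names and data layout below are illustrative assumptions, not the team’s actual tooling, and real projects often prefer alternatives such as oversampling or class weighting to avoid discarding data.

```python
import random

def balance_by_label(samples, seed=0):
    """Undersample each sub-cohort to the size of the smallest one,
    so every label is represented equally in the training set."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append((features, label))
    smallest = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, smallest))
    rng.shuffle(balanced)
    return balanced

# Imbalanced toy cohort: 30 cancer cases, only 10 benign.
cohort = ([([i], "cancer") for i in range(30)]
          + [([i], "benign") for i in range(10)])
training_set = balance_by_label(cohort)
```

After balancing, the toy training set contains ten samples of each label, so a model trained on it cannot win simply by always predicting the majority class.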
Who do you think will benefit the most from this technology?
I believe that patients will benefit the most from this technology. Accurate diagnosis of cancer with the aid of AI will allow for more personalized treatment planning, potentially treating the disease before it becomes a larger problem. Also, accurate diagnosis of non-cancers could prevent patients from undergoing painful, unnecessary biopsies.
What’s the biggest impact you believe that AI will make in the medical diagnostic space?
AI technology has huge potential in the field of radiology and medical imaging. So many interesting things can be accomplished, such as artifact correction, improved resolution, automated measurement, and, of course, diagnosis. The biggest impact will be the increased accuracy of physicians. I maintain that these technologies will not replace the physician anytime soon, but will rather be used as a helpful tool in their decision making.
Bot Image has a lot of products in the pipeline. What upcoming project are you most excited about working on?
I’m not sure what I’m allowed to disclose publicly here. I can say that we are doing more exciting work with detection and diagnosis using MRI, and we have expanded into other imaging modalities, including cell pathology images. Every project presents unique challenges, and I am very excited to be involved in each of them.