If you’re rolled into the emergency room of Harborview Medical Center with a head injury, doctors there might use an $8,000 infrared camera to track how your pupils respond to light.
The digital pupilometer can measure the pupillary light reflex with more precision than even an expert clinician can achieve with a penlight and the naked eye. Measured each hour, the pupillary light reflex can indicate when a patient's brain injury is worsening, telling doctors when early intervention may be necessary. Even mild traumatic brain injuries, also known as concussions, can reveal themselves in subtle changes to the pupillary light reflex.
Some 3.8 million concussions happen each year in the U.S., but half of them go undiagnosed, putting people at increased risk of a repeat injury or permanent neurological deficit.
After working with the complex and expensive digital device, doctors Anthony Law and Lynn McGrath, Jr., saw the potential to put some of its precise measurement capabilities in the hands of other doctors, first responders, coaches, athletic trainers, and anyone else who might encounter someone who has suffered a blow to the head—and happens to have a smartphone. Which is to say, pretty much everyone.
“We like to think of the democratization of this really complex medical equipment,” says McGrath, a neurosurgery resident at the University of Washington. [Disclosure: The author’s spouse is an employee of UW Medicine, but has no involvement in this project.]
Law and McGrath sought out expertise within the university to help build a smartphone app that could catch more of those missed concussions. The app, called PupilScreen, uses the phone's flash, camera, and video-recording capabilities to measure pupillary reflexes. It could provide an objective assessment of whether an athlete who just took a hit to the head should be seen by a doctor for further evaluation, or is safe to go back out on the field.
They were referred to the “ubiquitous computing” research lab of professor Shwetak Patel in the UW’s Paul G. Allen School of Computer Science & Engineering. “I think we got very lucky in that we found Shwetak’s lab very early in the process, because he turned out to be just the exact right guy to tackle the problem,” McGrath says. (McGrath and Law are now working on a startup—more on that below.)
The UbiComp Lab has developed a host of smartphone apps that do exactly the kind of democratization McGrath had in mind. The lab has published papers and developed prototypes of apps that can screen for neonatal and adult jaundice, the latter being a potential indicator of pancreatic cancer; measure blood pressure; screen for hemoglobin levels; assess ambulatory coughing; and measure respiration—all using a smartphone. (Some of those apps were packaged up as a startup company called Senosis Health, which Google acquired earlier this summer, according to unnamed sources quoted by GeekWire.)
Alex Mariakakis, a doctoral student in the UbiComp lab and lead author of the paper describing PupilScreen, to be presented at the UbiComp 2017 conference next week, says the challenge was to develop a concussion screening tool that is as quick as the penlight exam but offers something approaching the precision of the pupilometer. To ensure pupil visibility and standardize the light stimulus from the phone's flash, the team created a box that sits over the eyes, similar to a cardboard virtual reality viewer. They hope future versions of the system will not need the box.
Computer vision and machine learning technologies, specifically convolutional neural networks, underpin PupilScreen. The app was trained on thousands of eye images spanning a range of iris colors and eye shapes so it can pinpoint the precise location of a person's pupil. With the pupil located, the app tracks changes in pupil diameter in each frame of a 3-second video as the eye is exposed to light. That rate of change can indicate whether an individual may have a concussion or traumatic brain injury.
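To make the second step concrete, here is a minimal Python sketch (not the PupilScreen code) of how a series of per-frame pupil diameters from a 3-second video could be reduced to simple pupillary-light-reflex features such as constriction amplitude and maximum constriction velocity. The frame rate, the toy diameter curve, and the feature names are assumptions for illustration; in the real app the per-frame diameters would come from the neural network's pupil localization.

```python
# Hypothetical sketch: turn per-frame pupil diameters (mm) from a short video
# into basic pupillary-light-reflex features. Not the PupilScreen source code.
import numpy as np

FPS = 30  # assumed video frame rate

def plr_metrics(diameters_mm, fps=FPS):
    """Compute simple PLR features from a per-frame pupil-diameter series."""
    d = np.asarray(diameters_mm, dtype=float)
    t = np.arange(len(d)) / fps

    baseline = d[: int(0.5 * fps)].mean()        # diameter around flash onset
    minimum = d.min()                            # maximum constriction
    amplitude = baseline - minimum               # constriction amplitude (mm)
    percent_change = amplitude / baseline * 100  # relative constriction (%)

    velocity = np.gradient(d, t)                 # frame-to-frame rate of change (mm/s)
    max_constriction_velocity = -velocity.min()  # fastest shrinking rate (mm/s)

    return {
        "baseline_mm": baseline,
        "amplitude_mm": amplitude,
        "percent_change": percent_change,
        "max_constriction_velocity_mm_s": max_constriction_velocity,
    }

# Toy example: a pupil that constricts from ~6 mm to ~3.5 mm after the flash.
t = np.arange(0, 3, 1 / FPS)
toy_diameters = 3.5 + 2.5 * np.exp(-np.clip(t - 0.5, 0, None) * 4)
print(plr_metrics(toy_diameters))
```

A sluggish or shallow constriction in features like these is the kind of subtle signal the researchers say a penlight exam can miss.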
In a preliminary clinical trial, PupilScreen was used to measure the pupillary light reflex of six patients with traumatic brain injuries. Measurements of both injured and healthy patients were given to clinicians, who “were almost always able to reach the correct diagnosis,” the researchers write.
Another clinical study is planned this fall to gather data in the field that will help inform the app in marginal cases.
The researchers received funding from the National Science Foundation, Washington Research Foundation, and Amazon Catalyst, a grant program the technology giant began at UW in 2015.
McGrath and Law, co-authors on the paper, plan to advance work on PupilScreen through a startup they co-founded called EigenHealth. The startup—formed with the assistance of UW’s CoMotion tech transfer office—aims to capture different types of clinical data from head and neck exams, sometimes using smartphones, and then analyze it with machine learning to tease out subtleties that could help doctors diagnose certain conditions earlier.
McGrath is optimistic about the potential of artificial intelligence technologies to extend the capabilities of medical professionals. While he doesn’t write the software himself, he is informed by his study of the original neural network—the human one.
“I have a little bit of an advantage in terms of grasping why machine learning and specifically neural networks have as much promise as they do, and why they work,” McGrath says. He adds that the PupilScreen project has “illustrated for all of us the massive impact that machine learning could have on all the day-to-day things that doctors do.”