So many exciting things are happening in the healthcare analytics space, and what bigger impact could there be than using data to drive improved patient care, prevention, and cures?
This year's Health Analytics Summit, hosted by Health Catalyst (NASDAQ: HCAT) exclusively for its clients and partners, did not disappoint.
This year's theme was “Embracing the Human Side of Healthcare Analytics,” and throughout the week we focused on the patient and on building systems that allow team members to deliver better care using data and analytics.
There were two general sessions that I found particularly engaging, both focused on human behavior but tangential to data.
Shawn Achor - The Happiness Advantage
According to Shawn (happiness researcher and bestselling author), our formula for happiness is broken. We tend to equate happiness with certain achievements or milestones (e.g., “I'll be happier when I accomplish X”). As a result, we keep happiness on the far side of a moving target, making it elusive. Research shows that success doesn't improve happiness; rather, happiness is a precursor to success.
In our field, we have spoken at length with many of our employees who value impact, and they define it as the thing that makes them happiest at work. But being happy at work is only part of being happy in life, so we must ensure that impact at work does not come at the cost of life outside of work.
This is a common theme in Shawn's teachings; here is a link to a similar TED talk he gave in 2011: https://www.ted.com/talks/shawn_achor_the_happy_secret_to_better_work?language=en
Dr. Marzyeh Ghassemi, MSc, PhD - Designing Machine Learning Processes for Equitable Health
In her presentation, Dr. Ghassemi shared her research findings regarding the use of artificial intelligence (AI), ethical issues, and important considerations for the use of AI in healthcare.
While machine learning (ML) is increasingly gaining acceptance as a decision aid for clinicians in areas like radiology and cardiology, healthcare providers must keep in mind that ML models learn from their training data and can absorb unconscious biases such as racism. Dr. Ghassemi shared how ML models can predict a patient's race from a simple chest X-ray, something humans cannot do.
In explaining this finding, she attributed it in part to calibration issues in medical devices whose training data is based predominantly on lighter skin. If a patient's race is obvious to the model but impossible for a human radiologist to determine, and that model has been mis-trained with a racial bias, it can lead to incorrect treatment decisions. Those faulty decisions will be perpetuated for as long as the model stays in use.
While Dr. Ghassemi sees great value and potential for AI in healthcare prevention, diagnosis, and treatment, she emphasizes the need for clinicians to look to AI for supplementary information and actionable insights, not final diagnoses or treatment recommendations. There is no simple fix, but Dr. Ghassemi provided some guidance on how to apply AI more ethically:
Consider sources of bias in the data and take steps to correct them in the processes and sources generating that data. Think broader.
Evaluate models more comprehensively to understand different kinds of performance metrics, especially input data calibration error.
Not all gaps can be corrected: determine which gaps are clinically acceptable and still add value, even if the model isn't perfect.
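To make the second point concrete, one common way to evaluate a model beyond overall accuracy is to measure calibration error per subgroup, since a single aggregate number can hide exactly the kind of bias Dr. Ghassemi described. The sketch below is my own illustration, not code from her talk: it computes expected calibration error (the gap between predicted confidence and observed outcome rates) separately for each patient group, under the assumption that you have predicted probabilities, binary outcomes, and a group label for each patient.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error: the weighted average gap between
    predicted confidence and observed outcome rate across probability bins."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if not mask.any():
            continue  # skip empty bins
        confidence = probs[mask].mean()   # what the model predicted
        accuracy = labels[mask].mean()    # what actually happened
        ece += mask.mean() * abs(confidence - accuracy)
    return ece

def per_group_ece(probs, labels, groups, n_bins=10):
    """Compute calibration error separately for each subgroup
    (e.g., by race or skin tone) to surface gaps the overall score hides."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    groups = np.asarray(groups)
    return {
        g: expected_calibration_error(probs[groups == g],
                                      labels[groups == g],
                                      n_bins)
        for g in np.unique(groups)
    }
```

If one group's calibration error is substantially higher than another's, the model is systematically over- or under-confident for that group, which is a warning sign even when overall performance metrics look good.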
We are constantly trying to remove biases across our business, and this gave me more food for thought when assessing where we can improve, from hiring to delivery. At Proactiviti we are always looking for ways to be more effective and to advance our goal of making technology careers more broadly accessible in LATAM. We can start by constantly challenging the biases that we've learned over our broad and varied lives.