Big Data

For Better Hearts


Interview with Tom Lumbers

Dr Tom Lumbers                            

HDR UK Rutherford Fellow and Honorary Consultant Cardiologist

UCL Institute of Health Informatics

London, United Kingdom

Citation: Shah, S., Henry, A., Roselli, C. et al. Genome-wide association and Mendelian randomisation analysis provide insights into the pathogenesis of heart failure. Nat Commun 11, 163 (2020).

Background: Dr Tom Lumbers is a Consultant Cardiologist at UCLH and Barts Health NHS Trusts, HDR UK Fellow at University College London, and Visiting Scientist at the Broad Institute of Harvard and MIT. He earned his medical degree from Cambridge University and his PhD in molecular biology from Imperial College London, and completed post-doctoral training in genetic epidemiology at University College London. His research focuses on defining the genetic architecture of heart failure and left ventricular dysfunction, and he is a co-founder of the HERMES Consortium, an international collaboration in heart failure genetics. Through his work with BigData@Heart, Tom is developing tools to deliver scalable and validated disease phenotypes using real-world data to enable large-scale genetic analysis of disease subtypes.


Q: Tom, thanks for taking time out of your busy schedule to speak with us. Has the COVID-19 pandemic changed the way you are currently working?

A: Normally, I divide my time evenly between patient care and research. During the initial phase of the pandemic, I was redeployed full time to support the clinical service. It is never easy balancing the demands of both roles, but I am privileged to be able to work as a clinical academic and it is rewarding to develop a research program that aims to address some of the unmet needs I encounter in the clinic. In particular, it is clear that we still have a lot of work to do to address the burden associated with cardiovascular disease.


Q:  Has COVID-19 changed your view on the importance of your research?

A: My research centres on developing an improved understanding of the causal factors and molecular mechanisms of heart failure (HF) and left ventricular dysfunction, with a major emphasis on the analysis of genomics and healthcare data. These pathologies are more relevant than ever because they substantially increase the risks associated with COVID-19 and many other diseases. The novel coronavirus has amplified the risks associated with cardiovascular disease, highlighting the great need for better strategies for prevention and treatment. Heart failure is a growing problem, and patients are at elevated risk during the current pandemic. So on a personal level the pandemic has boosted my motivation to understand the disease better. I just wish there were more hours in the day to work on this.

Fortunately, BigData@Heart has provided support that has enabled us to make progress in better understanding the genetic basis of heart failure and, in turn, the important mechanisms of disease. From a standing start a few years ago, with no large-scale studies on heart failure, we have now completed two major studies: one exploring how genetic variation influences the risk of getting the disease and another looking at how genetic factors influence clinical outcomes in those affected. This year we will be finishing a third study on sub-types of heart failure, where we’ve used new disease identification algorithms to probe health record datasets linked to genomic data. By leveraging healthcare-linked biobanks, we have been able to generate sample sizes for disease subtypes that have not been achievable in the past with conventional research studies. In addition to providing a genomic map that can help us to identify causal genes for heart failure, the genetic association data that we are generating may contribute to improving diagnosis and prediction.


Q: Medicine has advanced tremendously in recent years, but we are still far from really integrating innovations in diagnostics and therapeutics. Would you agree with that assessment?

A: I think it depends on the disease that you look at. If the question is about translational genomics and integrating genetic information into the clinic, then there are clearly some examples of impact. On the one hand, there are rare Mendelian disorders, or monogenic disorders, where if you inherit an abnormal gene then you have a high probability of developing that disease. In these cases, genetic information can give us the ability to diagnose and stratify patients, and to screen family members who may be at risk. In more complex or multifactorial disease, where many genetic and non-genetic factors combine to influence risk, common genetic variation is more important. Polygenic risk scores are a way of summarising the aggregated risk effects of many genetic variants, and there is promise that these may have a role in diagnosis and prediction in routine clinical practice, but a lot of work is still needed to demonstrate that this information can be used to improve patient outcomes.
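Conceptually, a polygenic risk score of the kind described above is simply a weighted sum: each variant's GWAS effect size multiplied by the number of risk alleles an individual carries. The minimal sketch below illustrates that arithmetic only; the variant IDs, weights, and genotypes are invented for illustration and are not from any real study.

```python
# Polygenic risk score (PRS) sketch:
#   score = sum over variants of (GWAS effect size) * (risk-allele count, 0/1/2)
# All variant IDs, weights, and genotypes below are hypothetical.

gwas_weights = {        # per-variant effect sizes, e.g. log-odds (hypothetical)
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.08,
}

def polygenic_risk_score(genotypes, weights):
    """Weighted sum of risk-allele dosages over variants present in both inputs."""
    return sum(weights[v] * genotypes[v] for v in weights if v in genotypes)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}  # allele counts (hypothetical)
print(round(polygenic_risk_score(person, gwas_weights), 3))  # 0.12*2 - 0.05*1 = 0.19
```

In practice, scores like this are computed over thousands to millions of variants with weights estimated from large GWAS, and much of the methodological work concerns choosing and shrinking those weights rather than the summation itself.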


Q: For which of those single-gene inheritable conditions has genetic testing changed clinical practice?

A: There are cardiomyopathies, or primary diseases of the heart muscle, where diagnosing patients with mutations in certain genes can identify a subgroup with distinct disease progression and prognosis. In some cases, this information can lower the threshold at which we would consider treatment with implantable defibrillators. In many cases, however, there is uncertainty around the significance of the results of genetic testing, and treatment decisions based on risk are driven mainly by the disease phenotype. Even for genetic variants with an established link to disease, there is wide variation among carriers in terms of whether they develop the disease and how severe it is. I think it is increasingly recognised that we need to shift our interpretation of genetic information from a deterministic to a probabilistic paradigm wherein multiple genetic and non-genetic factors combine to influence risk. We have examples where genetics are helpful, but in a lot of areas of genetic medicine there are still many questions about what the purpose of the testing is and what the impact of the results is.

We are recognising that the concept of a single-gene inheritable disease is probably too simplistic a paradigm in most cases. Even for diseases that are classically Mendelian, such as Huntington’s disease, researchers have shown that polygenic background influences age of onset. My work and that of others is beginning to show the importance of polygenic risk in both carriers and non-carriers of Mendelian variants, particularly for dilated cardiomyopathy, which accounts for about 30 percent of all heart failure. I think we need to explore whether we can use this information to help tailor preventive treatments in at-risk populations, such as those who have suffered a myocardial infarction.


Q: What motivated you to deal with heart failure?

A: I first got interested in heart failure when I was involved in looking after a young patient with dilated cardiomyopathy and heart failure who was on the waiting list for heart transplantation. The diagnostic work-up, including genetic testing, had not identified a cause, and I just found that so unsatisfactory. Even now, many years on, there are patients whose disease we can’t explain. I am motivated to better understand the causes of dilated cardiomyopathy and heart failure so that we can explain them to patients and identify biology that we can address to reduce risk and the burden of disease on patients. Despite heart failure being among the most common hospital discharge codes, the underlying causes have been difficult to disentangle. We know that cardiomyopathies, such as dilated cardiomyopathy, are often hidden within the general HF population. This is the basis for our efforts in BigData@Heart, where we are focused on using healthcare data to identify and study these patients through their patterns of disease.


Q: You recently published a study in Nature Communications. Could you provide us with a brief overview of the main findings of your study?

A: So we know that heart failure is a complex disorder and a leading cause of morbidity and mortality worldwide, with an estimated heritability of approximately 27%. However, only a small proportion of HF cases are attributable to monogenic cardiomyopathies, and existing genome-wide association studies (GWAS) have provided us with limited insights, leaving the observed heritability of HF largely unexplained.

We hypothesised that a GWAS of HF with greater power would provide an opportunity to discover genetic variants that modify disease susceptibility in a range of comorbid contexts, both through subtype-specific and shared pathophysiological mechanisms, such as fluid congestion. We also hypothesised that a large GWAS meta-analysis of HF would provide insights into aetiology by estimating, through Mendelian randomisation (MR) analysis, the unconfounded causal contribution of observationally associated risk factors.

We included 47,309 cases and 930,014 controls and were able to identify 12 independent genetic variants, at 11 loci, associated with heart failure (HF) at genome-wide significance (P < 5 × 10⁻⁸). The loci we identified were associated with modifiable risk factors and traits related to left ventricular structure and function. Ten of these loci had not been previously reported for HF. All variants have one or more associations with coronary artery disease (CAD), atrial fibrillation (AF), or reduced left ventricular function, which suggests shared genetic aetiology. Mendelian randomisation analysis supported causal roles for several HF risk factors and demonstrated CAD-independent effects for AF, body mass index, and hypertension.

Our study actually identified a modest number of genetic associations for HF compared to other cardiovascular disease GWAS of comparable sample size, such as for AF. This suggests that an important component of HF heritability may be more attributable to specific disease subtypes than to components of a final common pathway. These findings reinforce the importance of the HF phenotype work we are doing in the BigData@Heart project.


Q: Tom, your publication has received a lot of attention. It has been accessed more than 10,000 times and widely cited. Why has it gotten so much attention?

A: I think that's because it was a study that was probably overdue – heart failure is the last common cardiovascular disease to be the subject of a large GWAS. Given its rising prevalence, associated with the ageing population, and the high burden on patients and healthcare systems, heart failure is recognised as a priority area. Clinical trials, however, are expensive and challenging due to a lack of validated surrogates, so there is hope that genomics will be useful in prioritising potential therapeutic targets. And since heart failure is a marker of adverse prognosis in most cardiovascular disease, a better understanding of risk might have a broad impact on the way we manage patients. We think there is great opportunity to make sense of this disease state using data from genomics and health records, and BigData@Heart should help us to achieve this goal.


Q: Women and heart failure: is this an area you are studying?

A: Biology differs between the sexes in important ways, and in ways that have been understudied. Heart failure in women is an area that we plan to look at but have not yet got to. We are recognising that diversity is important, both to ensure that insights are relevant to all people and because ancestral differences can help us to better understand the biology. Up until now, for pragmatic reasons, most genetic studies have been limited to individuals of broadly European ancestry. Our current projects are all multi-ancestry, spanning five ancestral groups, which is really important.


Q: Closing our discussion, what are your wishes for the future in Big Data research?

A: We have a lot of unexploited data in electronic health records and patient registries – there is a lot of untapped potential there. BigData@Heart is trying to address this, but I would like to see more connected work on data architecture, terminology, and electronic health record systems at a European level. Once you crack all these, the quality of the data will be better, the level of confidence in the data will be boosted, and doors to many more research opportunities will open, be it trustworthy artificial intelligence or better designed clinical trials using electronic health records. If we are going to succeed, we have to put the patient at the center and we need to collaborate in new ways to improve the health of our communities.

Published on: 02/10/2021