Cardiomyopathies are characterized by hearts that are structurally and functionally abnormal. The most common morphological subtypes are dilated and hypertrophic cardiomyopathies (DCM and HCM, respectively), and mutations in the genes encoding the proteins that make up muscle are among the most common causes of both. Currently, treatment for cardiomyopathies involves phenotyping hearts that have, in many cases, already undergone irreversible damage, and then treating symptoms. While genotyping patients has become easier and faster, the complex nature of muscle makes it difficult to predict which treatment would be most effective at preventing harmful and irreversible damage. This project uses simulations of muscle force generation to develop a process for identifying therapeutic targets for genetic muscle variants, which can then be assessed experimentally.
What is the role of computation in addressing the problem, and what is the nature of the computational approach?
We used a spatially explicit model of the molecular basis of muscle contraction in which the rate parameters governing muscle force development can be altered to simulate the effects of both genetic mutations (disease rates) and small molecules (therapeutic target rates). Using this computational model, we expanded on our preliminary use of Azure to first generate a very large data set of simulated twitches under different permutations of disease rates and target rates. We then used this data set to characterize the behavior of the muscle model with respect to both the therapeutic target rates and the disease rates. From there, we determined the values the target rates would need to take in the spatially explicit muscle model to generate a healthy twitch in the presence of a disease-state perturbation. In short, we are using the computational muscle model to solve an 'inverse problem': under what conditions can a muscle in a disease state be induced to behave like a healthy one? Because the spatially explicit muscle model is computationally costly, generating the training data set requires a large amount of computational resources, such as Azure cloud computing.
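As a simplified sketch of how such a sweep can be set up (the rate names, ranges, and grid size below are illustrative assumptions, not the model's actual parameters), each permutation of a disease rate and a target rate defines one independent twitch simulation:

```python
# Simplified sketch of the parameter sweep (hypothetical rate names and
# ranges; the spatially explicit model defines its own rate parameters).
import itertools
import numpy as np

# Multipliers applied to a model rate altered by the genetic variant
# ("disease rate") and to a rate a small molecule could modulate ("target rate").
disease_rates = np.linspace(0.5, 2.0, 10)
target_rates = np.linspace(0.5, 2.0, 10)

# Every permutation of disease and target rates becomes one independent
# twitch simulation, which is what makes the sweep easy to parallelize.
jobs = [
    {"disease_rate": float(d), "target_rate": float(t)}
    for d, t in itertools.product(disease_rates, target_rates)
]
print(f"{len(jobs)} simulations to run")
```

The inverse step then amounts to searching the resulting data set for the target-rate values whose simulated twitch most closely matches a healthy reference twitch.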
How did using cloud services advance your research?
Because we wanted to characterize our model's dependence on many variables, we needed to run the simulation many times with many combinations of parameters. We used Azure Batch services to run hundreds of simulations simultaneously. What was especially useful was that Azure was able to increase our core quota. We first ran ~50,000 simulations over a broad range of multiple variables to characterize the model's dependence on them. Later, we found it convenient to sample smaller regions of the behavior space; with such a large core quota, it was possible to sample a smaller region overnight and examine the results the next day.
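As a rough illustration of how the runs can be farmed out (following the pattern in the Azure Batch Python quickstart linked below, with a hypothetical run_twitch.py driver script, placeholder credentials, and a pre-created pool and job), each parameter combination becomes one Batch task:

```python
# Sketch of submitting one Azure Batch task per parameter combination,
# following the batch-python-quickstart pattern. Credentials, IDs, and the
# driver script (run_twitch.py) are placeholders, not our actual configuration.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

BATCH_ACCOUNT_NAME = "<batch-account-name>"
BATCH_ACCOUNT_KEY = "<batch-account-key>"
BATCH_ACCOUNT_URL = "https://<batch-account-name>.<region>.batch.azure.com"
JOB_ID = "twitch-sweep"  # assumes the job and its pool were already created

credentials = SharedKeyCredentials(BATCH_ACCOUNT_NAME, BATCH_ACCOUNT_KEY)
batch_client = BatchServiceClient(credentials, batch_url=BATCH_ACCOUNT_URL)

# 'jobs' is the list of parameter combinations from the sweep, e.g.
# [{"disease_rate": 1.2, "target_rate": 0.8}, ...]
jobs = [{"disease_rate": 1.2, "target_rate": 0.8}]

tasks = []
for i, params in enumerate(jobs):
    # Batch task commands do not run under a shell by default,
    # so wrap the command in /bin/bash -c on a Linux pool.
    cmd = (
        "/bin/bash -c \"python3 run_twitch.py "
        f"--disease-rate {params['disease_rate']} "
        f"--target-rate {params['target_rate']}\""
    )
    tasks.append(batchmodels.TaskAddParameter(id=f"twitch-{i}", command_line=cmd))

# add_collection accepts at most 100 tasks per request, so submit in chunks.
for start in range(0, len(tasks), 100):
    batch_client.task.add_collection(JOB_ID, tasks[start:start + 100])
```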
What was your experience with the technical side of working in the cloud?
We found the Batch and Storage services easy to use. This was mainly due to the GitHub repositories paired with Azure's Python "how to" guides (e.g., https://github.com/Azure-Samples/batch-python-quickstart), which contained essentially all the information we needed to start our project. We found these uniquely helpful compared with the computing platforms we've tried in the past, and if we use other services in the future, such examples will be very helpful.
Related references:
Curse of Dimensionality in Multifilament Modeling (poster). Myofilament Meeting, May 21–24, 2022. https://cvrc.wisc.edu/myofilament-conference/
Authors:
Travis Tune, Research Scientist, Department of Biology at the University of Washington
Farid Moussavi-Harami, MD, Assistant Professor of Cardiology at the University of Washington
Thomas Daniel, Professor, Department of Biology at the University of Washington; Assistant Professor at UW Medicine