Poster Session B   |   7:00am Expo - Hall A & C   |   Poster ID #241

Deconvolving intra-tumor heterogeneity in renal cancer via deep learning

Program:
Academic Research
Category:
Bioinformatics and Computational Biology
FDA Status:
Not Applicable
CPRIT Grant:
Cancer Site(s):
Kidney and Renal Pelvis
Authors:
Aleksandra Weronika Nielsen
The University of Texas Southwestern Medical Center
Hua Zhong
The University of Texas Southwestern Medical Center
Vandana Panwar
The University of Texas Southwestern Medical Center
Payal Kapur
The University of Texas Southwestern Medical Center
Satwik Rajaram
The University of Texas Southwestern Medical Center

Introduction

Profiling of intra-tumor heterogeneity traditionally relies on multi-region sequencing, whose cost makes it impractical to scale to clinical settings. In contrast, previous studies have demonstrated that tumor tissue morphology, assessed via ubiquitous and inexpensive histological sections, can be used to infer the underlying molecular state. We hypothesize that identifying morphologically distinct areas on hematoxylin and eosin (H&E) stained slides provides a more practical and cost-effective approach to deconvolving tumor heterogeneity. However, given the massive size and complexity of tumor tissue, where no two cells look identical, a key challenge is objectively identifying which aspects of tissue morphology are functionally relevant. Here, in the context of clear cell renal cell carcinoma (ccRCC), the paradigm of intra-tumor heterogeneity, we present a deep learning approach to measure morphological similarity, and we test its ability to capture functionally relevant morphological differences and to detect changes in driver mutation status.

Methods

Our approach is based on the idea that spatially proximal areas within a tumor are statistically more likely to be in the same functional state. We developed a deep learning model to identify aspects of tissue morphology reflecting this. To identify the optimal model, we compared different training frameworks (BYOL, MoCo, Triplet) and model architectures (ResNet50, ViTb) on their ability to identify spatially proximal tissue regions. Next, we tested the ability of our models to detect intra-slide changes in tumor nuclear grade and architecture, which pathologists have identified as prognostically relevant. Finally, we tested whether intra-slide variation in key driver mutations (BAP1, SETD2, or PBRM1) resulted in morphological changes detectable by our model.
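The spatial-proximity idea above can be illustrated with a minimal triplet-sampling sketch: treat patches near the anchor as positives and distant patches as negatives. The function name, radii, and coordinate representation below are illustrative assumptions, not details of the actual training pipeline.

```python
import math
import random

def sample_triplet(patch_coords, pos_radius, neg_radius, rng=random):
    """Sample an (anchor, positive, negative) triplet of patch indices.

    Positives lie within pos_radius of the anchor; negatives lie beyond
    neg_radius, encoding the assumption that spatially proximal tissue
    regions are likely to share a functional state.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    indices = list(range(len(patch_coords)))
    rng.shuffle(indices)  # randomize which patch serves as the anchor
    for anchor in indices:
        positives = [i for i in indices if i != anchor
                     and dist(patch_coords[i], patch_coords[anchor]) <= pos_radius]
        negatives = [i for i in indices
                     if dist(patch_coords[i], patch_coords[anchor]) >= neg_radius]
        if positives and negatives:
            return anchor, rng.choice(positives), rng.choice(negatives)
    raise ValueError("no valid triplet for the given radii")
```

Such triplets would then feed a standard triplet loss, pulling embeddings of nearby patches together and pushing distant ones apart; the BYOL and MoCo variants replace explicit negatives with their own self-supervised objectives.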

Results

Comparing the different deep learning approaches for developing a similarity measure, we achieved the best performance with a BYOL framework using a Vision Transformer (ViTb) backbone. In whole slide images of tumor tissue, our model was able to distinguish morphological regions with different nuclear grade, architecture, and mutational status. Interestingly, even within areas of the same BAP1/PBRM1/SETD2 status, we identified sub-areas exhibiting distinct morphologies, suggesting that our model can detect genetic heterogeneity beyond these specific driver mutations of interest.

Conclusion

We present a deep learning model that recognizes nuclear grades and vascular architectures, and detects the loss of a subset of driver mutations in renal cancer without prior morphological labels, suggesting the potential of morphological similarity as an indicator of genetic similarity. As such, we believe that applying our deep learning model to histopathological slides can provide a cost-effective means of deconvolving intra-tumor heterogeneity. Future work will test the range of molecular differences our approach can detect, and thereby explore ways to develop intra-tumor heterogeneity-aware biomarkers of response.