Research


Uncovering relationships between tongue structure and function

The human tongue consists of numerous intrinsic and extrinsic muscles, each playing a distinct role in compressing and expanding tissue. We are interested in disentangling the complex relationships between tongue structure and function, including speech and swallowing, using a variety of imaging and machine learning techniques. We are particularly interested in speech disorders caused by tongue cancer, cleft palate, ALS, and other neurological disorders.

Grant support: NIH NIDCD


Atlas-driven MR imaging for studying speech production

Atlases integrate diverse imaging information across individuals and groups by correlating images with quantitative measurements and supporting the construction of diagnostic tools. Quantitative measures derived from MR images, such as tissue compression, expansion, principal strain, and muscle mechanics, play a crucial role in characterizing normal and abnormal motion of the tongue. A powerful way to measure changes and compromises in tongue structure and function is via a 4D statistical atlas and its associated image analysis techniques.

Grant support: NIH NIDCD and NIDCR


Multimodal deep learning to associate heterogeneous representations from multimodal data

Deep representation learning is a fundamental challenge in computer vision, machine learning, and medical image analysis. In this research, we are interested in associating disparate representations from multimodal data, including imaging and audio/acoustic data, in order to mine hidden latent features, thereby advancing our understanding of speech motor control and informing therapeutic, rehabilitative, and surgical procedures.

Grant support: NIH NIDCD


Successive Subspace Learning

Successive subspace learning (SSL) is a powerful technique that provides an interpretable and lightweight model for classification, regression, and segmentation. Compared with popular convolutional neural network (CNN) architectures, SSL has a modular and transparent structure with fewer parameters and is trained without backpropagation, making it well suited to small datasets and 3D imaging data.

Grant support:


Unsupervised Domain Adaptation

The goal of unsupervised domain adaptation (UDA) is to transfer knowledge learned from a label-rich source domain to new, unlabeled target domains. We apply this technique to both discriminative and generative tasks, such as segmentation, classification, image synthesis, and clustering, enabling seamless deployment of deep learning models at test time.

Grant support: