Leveraging foundation models and transfer learning for peptide transport prediction, molecular taste classification, and visual texture analysis

Foundation models (FMs) offer powerful tools for modeling symbolic and image data in food science through transfer learning. This study presents three applications demonstrating how pre-trained FMs can improve performance on prediction tasks common in experimental food science. First, embeddings from the “Evolutionary Scale Model Cambrian” protein language model were used to classify the Caco-2 monolayer transportability of goat milk-derived peptides based on their amino acid sequences. The resulting model achieved an accuracy of 0.89, outperforming conventional peptide embedding methods. Second, the “MolFormer” chemical language model was applied to predict the taste of small molecules (sweet, bitter, umami), achieving an accuracy of 0.99 and surpassing chemoinformatics-based models. Third, image embeddings from a vision transformer model (“Contrastive Language-Image Pre-training”) were used to predict fibrousness in meat analogue samples. This approach improved upon previous automated image analyses and successfully handled samples with complex appearance. These examples demonstrate that transfer learning from FMs enables accurate and scalable prediction models for food science applications. The transfer learning approach supports integration of protein, chemical, and image data, offering a unified framework for analyzing diverse experimental data in food science, with the potential to accelerate innovation in novel food design and predictive food manufacturing.
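All three applications share the same transfer-learning pattern: a frozen pre-trained foundation model converts each sample (peptide sequence, molecule, or image) into a fixed-length embedding vector, and a small downstream classifier is then trained on those embeddings. A minimal sketch of that pattern is below. Note the assumptions: the study used ESM-Cambrian embeddings, while `embed_peptide` here is a hypothetical stand-in (normalized amino-acid composition) so the sketch runs offline, and the labeled peptides and the nearest-centroid classifier are purely illustrative, not from the paper.

```python
# Sketch of the transfer-learning pattern: frozen embeddings from a
# (pre-trained) model feed a small downstream classifier.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def embed_peptide(seq: str) -> np.ndarray:
    """HYPOTHETICAL stand-in embedding: 20-dim normalized amino-acid
    composition. The study instead used a protein language model."""
    vec = np.zeros(len(AMINO_ACIDS))
    for aa in seq:
        idx = AMINO_ACIDS.find(aa)
        if idx >= 0:
            vec[idx] += 1.0
    return vec / max(len(seq), 1)

class NearestCentroid:
    """Minimal downstream classifier trained on the frozen embeddings."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.stack(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, X):
        # Assign each embedding to the nearest class centroid.
        dists = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in dists.argmin(axis=1)]

# Toy labeled peptides (illustrative only; 1 = transported, 0 = not).
train = [("KKKKRR", 1), ("RRKKHH", 1), ("LLVVAA", 0), ("IIFFLL", 0)]
X = np.stack([embed_peptide(s) for s, _ in train])
y = [lab for _, lab in train]
clf = NearestCentroid().fit(X, y)
```

Because the foundation model stays frozen, only the lightweight classifier is trained, which is what makes the approach practical for the small labeled datasets typical of experimental food science; swapping the embedding function (protein, chemical, or image model) changes the modality without changing the downstream pipeline.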
