Presentation Details
FAIR Chemosensory Data: Unlocking AI for Flavor, Food and Health

Valentina Parma1,2, Joel D. Mainland1,3, Liang-Dar Hwang4, Richard J. Kedziora5, Nicolas Pineau6, Richard C. Gerkin7,8, Masha Y. Niv9

1Monell Chemical Senses Center, Philadelphia, PA, USA. 2Department of Otorhinolaryngology – Head and Neck Surgery, University of Pennsylvania, Philadelphia, PA, USA. 3Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA. 4Institute for Molecular Bioscience, The University of Queensland, Brisbane, Australia. 5Estenda Solutions, Wayne, PA, USA. 6DataInsight, Lausanne, Switzerland. 7Osmo Labs, New York, NY, USA. 8School of Life Sciences and School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, USA. 9The Institute of Biochemistry, Food Science and Nutrition, The Robert H. Smith Faculty of Agriculture, Food and Environment, The Hebrew University of Jerusalem, Jerusalem, Israel
Abstract
Background: Chemosensory perception is a major driver of food choice, intake, product acceptance and health, yet sensory data in this domain remain fragmented, inconsistently reported and rarely shared in machine-readable form. This limits cumulative progress and prevents the development of robust AI models that could transform food innovation, personalized nutrition and clinical chemosensory applications.

Methods: By synthesizing lessons from data-intensive fields and mapping them onto current chemosensory infrastructures, including existing databases such as FoodOn, BitterDB, FlavorDB, Pyrfume and Hub4Smell, we frame these efforts within the Findable, Accessible, Interoperable and Reusable (FAIR) principles and identify minimal, high-impact actions to improve the current situation. One actionable item is to deposit one's available data in the ChemoSensory Data community (https://zenodo.org/communities/chemosensorydata/).

Results: Concrete steps by researchers, editors, funders, societies and software vendors can advance chemosensory data FAIRness. These include publishing tidy sensory tables with core metadata, adopting shared ontologies and common data elements, implementing standardized export formats in sensory software, and exploring federated learning options to leverage industrial datasets without exposing confidential information.

Conclusion: Making chemosensory data FAIR is both feasible and urgently needed to unlock AI for chemosensory science and health. Coordinated but incremental adoption of shared standards, repositories and privacy-preserving collaborations will enable interoperable chemosensory datasets that support predictive modeling, cross-sector innovation and more equitable access to chemosensory insights worldwide.
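To make the "tidy sensory tables with core metadata" recommendation concrete, the sketch below shows one plausible shape for such a deposit: a CSV with one observation per row plus a JSON metadata sidecar. This is an illustrative example only, not a published standard; all column names, values and filenames are hypothetical.

```python
# Hypothetical example of a tidy chemosensory table with core metadata,
# in the spirit of the FAIR recommendations above. Column names and
# values are illustrative, not a community standard.
import csv
import json

# Tidy layout: one row per participant x stimulus x attribute observation
rows = [
    {"participant_id": "P01", "stimulus": "sucrose", "cas_number": "57-50-1",
     "concentration_mM": 300, "attribute": "sweetness", "rating": 78.0},
    {"participant_id": "P01", "stimulus": "sucrose", "cas_number": "57-50-1",
     "concentration_mM": 300, "attribute": "liking", "rating": 65.0},
    {"participant_id": "P02", "stimulus": "caffeine", "cas_number": "58-08-2",
     "concentration_mM": 10, "attribute": "bitterness", "rating": 54.0},
]

# Machine-readable export of the data table
with open("sensory_ratings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)

# Core dataset-level metadata travels with the table as a sidecar file
metadata = {
    "scale": "generalized Labeled Magnitude Scale (0-100)",
    "modality": "taste",
    "license": "CC-BY-4.0",
    "deposit_target": "https://zenodo.org/communities/chemosensorydata/",
}
with open("sensory_ratings.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Keeping chemical identifiers (here, CAS numbers) and units as explicit columns is what makes such a table interoperable with databases like BitterDB or Pyrfume, since records can be joined on a shared identifier rather than on free-text stimulus names.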
No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the author.