Meta AI researchers today released OpenEQA, a new open-source benchmark dataset that aims to measure an artificial intelligence system’s capacity for “embodied question answering” — developing an understanding of the real world that allows it to answer natural language questions about an environment. The dataset, which Meta is positioning as a key benchmark for the nascent field of “embodied AI,” contains over 1,600 questions about more than 180 different real-world environments such as homes and offices. The questions span seven categories that test skills including object and attribute recognition, spatial and functional reasoning, and commonsense knowledge.