The findings, published in Nature Human Behaviour, demonstrate that large language models (LLMs) trained on vast datasets of text can distill patterns from the scientific literature, enabling them to forecast scientific outcomes with superhuman accuracy. The researchers say this highlights the potential of LLMs as powerful tools for accelerating research, going far beyond knowledge retrieval alone.

Lead author Dr. Ken Luo (UCL Psychology & Language Sciences) said: "Since the advent of generative AI like ChatGPT, much research has focused on LLMs' question-answering capabilities, showcasing their remarkable skill in summarizing knowledge from extensive training data. However, rather than emphasizing their backward-looking ability to retrieve past information, we explored whether LLMs could synthesize knowledge to predict future outcomes."