Meta’s Fundamental AI Research (FAIR) team has released an AI tool called the ‘Self-Taught Evaluator’ that can check the accuracy of other AI models without human intervention, possibly paving the way for less human involvement in the AI development process. The tool generates different responses from AI models and uses another AI system to judge their accuracy and improve answers to complex questions in areas like math, science, and coding. But the incredible bit is that Meta trained the model entirely on AI-generated data, removing the need for human-labelled examples, and reported that it performed better than evaluators that rely on human-labelled data (like GPT-4 does, for example).
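The generate-then-judge loop described above can be sketched in a few lines of Python. This is a toy illustration, not Meta's actual code: the two placeholder functions stand in for real LLM calls, and all names here (`generate_responses`, `judge`, `build_training_pair`) are made up for this example. The key idea is that the preference labels come from a model, not a human, so the resulting pairs can be used as fully synthetic training data for the next evaluator.

```python
# Toy sketch of a self-taught evaluation loop. The "models" below are
# simple stand-ins; in the real system both would be LLM calls.

def generate_responses(prompt, n=2):
    """Sample n candidate answers from a (mock) response model."""
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def judge(prompt, response_a, response_b):
    """Mock evaluator: return 0 if response_a wins, else 1.

    In the Self-Taught Evaluator, this judgment would come from
    another model that reasons about the two answers before
    picking a winner; here we just use a trivial tie-break rule.
    """
    return 0 if len(response_a) <= len(response_b) else 1

def build_training_pair(prompt):
    """Create a (chosen, rejected) preference pair with no human labels:
    generate candidates, let the model-judge pick a winner, and keep
    the pair as synthetic training data for the next evaluator."""
    a, b = generate_responses(prompt, n=2)
    winner = judge(prompt, a, b)
    chosen, rejected = (a, b) if winner == 0 else (b, a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

pair = build_training_pair("What is 2 + 2?")
print(pair["chosen"])
```

Iterating this loop, each newly trained evaluator can label the next round of pairs, which is how the reported model improves without any human-labelled data.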