During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped when asked how his company's large language models (LLMs) really function under the hood. "We certainly have not solved interpretability," he said, as quoted by the Observer, essentially admitting that the company has yet to figure out how to trace its AI models' often bizarre and inaccurate outputs back to the decisions that produced them. Other AI companies are trying to find new ways to "open the black box" by mapping the artificial neurons of their algorithms. For instance, OpenAI competitor Anthropic recently took a detailed look at the inner workings of one of its latest LLMs, Claude Sonnet, as a first step.