Elon Musk’s xAI releases Grok source and weights, taunting OpenAI

An AI-generated image released by xAI during the open-weights launch of Grok-1. (credit: xAI)

On Sunday, Elon Musk's AI firm xAI released the base model weights and network architecture of Grok-1, a large language model designed to compete with the models that power OpenAI's ChatGPT. The open-weights release through GitHub and BitTorrent comes as Musk continues to criticize (and sue) rival OpenAI for not releasing its AI models in an open way.

Announced in November, Grok is an AI assistant similar to ChatGPT that is available to X Premium+ subscribers who pay $16 a month to the social media platform formerly known as Twitter. At its heart is a mixture-of-experts LLM called "Grok-1," clocking in at 314 billion parameters. As a reference, GPT-3 included 175 billion parameters. Parameter count is a rough measure of an AI model's capacity: more parameters generally give a model more room to produce capable responses, though architecture and training data matter as well.
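A mixture-of-experts architecture routes each token through only a few of many "expert" sub-networks, so not all 314 billion parameters are active at once. The sketch below illustrates that routing idea in miniature; the dimensions, expert count, and top-2 selection are illustrative assumptions, not Grok-1's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only -- real MoE layers are vastly larger.
d_model, n_experts, top_k = 8, 4, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router                    # router score for each expert
    top = np.argsort(logits)[-top_k:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only the selected experts run, so most parameters stay inactive
    # for any given token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_layer(rng.standard_normal(d_model))
print(y.shape)  # (8,)
```

The payoff of this design is that compute per token scales with the experts actually used, not with the total parameter count.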

xAI is releasing the base model of Grok-1, which is not fine-tuned for a specific task, so it is likely not the same model that X uses to power its Grok AI assistant. "This is the raw base model checkpoint from the Grok-1 pre-training phase, which concluded in October 2023," writes xAI on its release page. "This means that the model is not fine-tuned for any specific application, such as dialogue," meaning it does not ship as a ready-made chatbot. But it will do next-token prediction, meaning it will complete a sentence (or other text prompt) with its estimation of the most relevant string of text.
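The next-token prediction described above can be sketched with a toy model: given the last token, pick the most probable continuation and repeat. The vocabulary and probabilities below are invented for illustration; a real LLM like Grok-1 computes these distributions with a neural network over a huge vocabulary.

```python
# Hand-written toy "model": P(next token | current token).
toy_model = {
    "the": {"sky": 0.5, "cat": 0.3, "end": 0.2},
    "sky": {"is": 0.7, "was": 0.3},
    "is": {"blue": 0.6, "clear": 0.4},
}

def predict_next(token: str) -> str:
    """Return the most probable next token (greedy decoding)."""
    dist = toy_model.get(token, {})
    return max(dist, key=dist.get) if dist else "<eos>"

def complete(prompt: list[str], max_tokens: int = 3) -> list[str]:
    """Repeatedly append the most likely next token to the prompt."""
    out = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(out[-1])
        if nxt == "<eos>":     # stop when the model has nothing to add
            break
        out.append(nxt)
    return out

print(complete(["the"]))  # → ['the', 'sky', 'is', 'blue']
```

Chat assistants are built by fine-tuning such a base predictor on dialogue data, which is exactly the step xAI says this checkpoint has not been through.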
