An open source replacement for ChatGPT is now available, but good luck running it

The first open-source ChatGPT alternative has arrived, but you may have trouble getting it to run on your laptop.

Developer Philip Wang, known for reverse-engineering closed-source AI systems such as Meta’s Make-A-Video, has published PaLM + RLHF, a text-generating model designed to behave like ChatGPT. By combining Google’s massive PaLM language model with a technique called Reinforcement Learning from Human Feedback (RLHF for short), the system can perform virtually any task ChatGPT can, from composing emails to suggesting lines of code, with little to no human intervention.

However, PaLM + RLHF is not pre-trained. That is to say, the system has not been trained on the web-sourced example data it needs to actually work. To get a ChatGPT-like experience after downloading PaLM + RLHF, you would need to compile terabytes of text for the model to learn from and find hardware powerful enough to handle the training workload.

Like ChatGPT, PaLM + RLHF is essentially a statistical tool for predicting words. When fed an enormous number of examples from training data, such as Reddit posts, news articles, and ebooks, PaLM + RLHF learns how likely words are to occur based on patterns such as the semantic context of the surrounding text.
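To make “statistical word prediction” concrete, here is a deliberately tiny sketch of the idea using bigram counts. It illustrates the principle only; PaLM itself is a transformer network operating over tokens, not a word-count table.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical next-word prediction: count which
# words follow which in a corpus, then turn the counts into
# probabilities. Large language models learn far richer patterns,
# but the output is the same kind of object: a probability
# distribution over what comes next.
corpus = "the cat sat on the mat the cat ran".split()

follower_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follower_counts[current][following] += 1

def next_word_probs(word):
    counts = follower_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.67, 'mat': 0.33}, roughly
```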

The ingredient ChatGPT and PaLM + RLHF share is Reinforcement Learning from Human Feedback, a technique that aims to better align language models with what users want them to do. In RLHF, a language model (in this case, PaLM) is fine-tuned on a data set of prompts (such as “Explain machine learning to a six-year-old”) paired with the responses human volunteers expect the model to give (such as “Machine learning is a form of AI…”). Those prompts are then fed to the fine-tuned model, which generates several responses apiece, and the volunteers rank the responses from best to worst. Finally, the rankings are used to train a “reward model” that scores the original model’s responses by preference, filtering for the best answers to a given prompt.
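The reward-model step can be sketched in a few lines of PyTorch. This is a minimal illustration of the pairwise ranking objective commonly used in RLHF, with random vectors standing in for response embeddings; it is not code from the PaLM + RLHF repository.

```python
import torch
import torch.nn as nn

# A reward model maps a response (here, a stand-in embedding)
# to a single scalar score.
class RewardModel(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, embedding):
        return self.score(embedding).squeeze(-1)

reward_model = RewardModel()
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Stand-ins for embeddings of two responses to the same prompt,
# where human raters preferred the first over the second.
preferred = torch.randn(8, 128)
rejected = torch.randn(8, 128)

# Pairwise ranking loss: push the preferred response's reward
# above the rejected one's, so the model learns the raters'
# ordering rather than absolute scores.
opt.zero_grad()
loss = -torch.nn.functional.logsigmoid(
    reward_model(preferred) - reward_model(rejected)
).mean()
loss.backward()
opt.step()
```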

Gathering training data is expensive, and so is the training itself. PaLM has 540 billion parameters, “parameters” being the parts of the model learned from the training data. A 2020 study estimated that developing a text-generating model with just 1.5 billion parameters can cost as much as $1.6 million. And training the open-source model Bloom, which has 176 billion parameters, took three months on 384 Nvidia A100 GPUs, each of which can cost several thousand dollars.
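To put those figures in perspective, a back-of-envelope calculation shows where a Bloom-scale bill comes from. The GPU count and duration are from the paragraph above; the hourly cloud rate is an illustrative assumption, not a quoted figure.

```python
# Rough training-cost arithmetic for a Bloom-scale run.
gpus = 384                 # Nvidia A100s used to train Bloom
months = 3                 # reported training duration
hours = months * 30 * 24   # ~2,160 hours per card
hourly_rate_usd = 2.0      # assumed on-demand A100 cloud price

total_cost = gpus * hours * hourly_rate_usd
print(f"~${total_cost:,.0f}")  # ~$1,658,880 at these assumptions
```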

It’s not easy to run a trained model as large as PaLM + RLHF, either. Bloom requires a dedicated PC with around eight A100 GPUs, and cloud alternatives are pricey: running OpenAI’s text-generating GPT-3, which has around 175 billion parameters, on a single Amazon Web Services instance is estimated to cost around $87,000 a year.
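The multi-GPU requirement follows from simple memory arithmetic. A rough estimate, assuming 16-bit weights and ignoring activation memory and other runtime overhead:

```python
# Rough inference-memory estimate for a Bloom-sized model,
# assuming 16-bit (2-byte) weights and ignoring activations,
# batching headroom, and other runtime overhead.
params = 176e9            # Bloom's parameter count
bytes_per_param = 2       # fp16 / bf16 precision
a100_memory_gb = 80       # largest A100 memory variant

weights_gb = params * bytes_per_param / 1e9
gpus_for_weights = -(-weights_gb // a100_memory_gb)  # ceiling division
print(f"{weights_gb:.0f} GB of weights -> at least {gpus_for_weights:.0f} A100s")
# ~352 GB of weights -> at least 5 cards just to hold them; the
# overhead ignored above is why ~8 GPUs are cited in practice.
```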

As AI researcher Sebastian Raschka notes in a LinkedIn post about PaLM + RLHF, scaling up the necessary development workflows could prove challenging, too. Even if someone hands you 500 GPUs to train the model, he said, you still need to deal with infrastructure and have a software framework that can handle it. It’s doable, he added, but it’s a big effort at the moment; frameworks to make it easier are in the works, but it’s still not trivial.

To sum up, until a well-funded venture (or individual) takes on the expense of training the model and making it publicly available, PaLM + RLHF isn’t going to replace ChatGPT any time soon.

In more upbeat news, several other efforts to recreate ChatGPT are progressing quickly, notably one led by a research group called CarperAI. CarperAI is working with EleutherAI, an open AI research organisation, and the tech companies Scale AI and Hugging Face to release the first fully functional, ChatGPT-like AI model trained with human feedback.

LAION, the non-profit organisation that supplied the original data set used to train Stable Diffusion, is also leading an effort to recreate ChatGPT with state-of-the-art ML methods. LAION’s ambitious goal is an “assistant of the future” that, beyond writing emails and cover letters, “does meaningful work, accesses APIs, dynamically studies information, and much more.” The project is still in its early stages, but a GitHub repository containing its materials went public a few weeks ago.