An open-source program that, in principle, can accomplish everything OpenAI’s ChatGPT does has arrived, thanks to a new text-generating language model that combines Google’s PaLM model with a technique known as Reinforcement Learning with Human Feedback (RLHF).
For the vast majority of people, though, this remains hypothetical. PaLM + RLHF, built by developer Philip Wang, lacks the pre-trained text data that ChatGPT ships with. To train the model and carry out processing tasks, users must assemble their own data corpora and supply their own compute.
The newest trend in artificial intelligence is text-generation models that can learn from human input, such as ChatGPT and PaLM + RLHF. They learn statistical patterns from an existing collection of text, which may include everything from e-books to online flame wars, and then predict which words are likely to follow in a given context.
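The prediction step these models perform can be illustrated with a toy sketch. The following is not PaLM, just a minimal bigram model over a hypothetical three-sentence corpus: it counts which word follows which, then predicts the most frequent successor.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most frequently seen after `word`, or None."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" most often)
```

Large language models replace these raw counts with a neural network over billions of parameters, but the core task — score likely next words given the context — is the same.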
Developing AI that can be used by the masses
While PaLM + RLHF does not come pre-trained, the Reinforcement Learning with Human Feedback technique aims to make interacting with the model feel more natural, better aligning its responses with what users ask for.
To train a language model with RLHF, the system generates several candidate answers to a human prompt and has volunteers rank them. A “reward model,” trained on those rankings, then learns to score responses by how well they match human preferences.
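The ranking step can be sketched in miniature. Below is a minimal NumPy illustration — not Wang’s implementation — assuming a linear reward model over hypothetical response-feature vectors, trained with the pairwise loss commonly used for reward models, `-log sigmoid(r_preferred - r_rejected)`:

```python
import numpy as np

def train_reward_model(pairs, dim, lr=0.1, steps=500, seed=0):
    """Fit weights w so preferred responses score higher than rejected ones.

    Each pair is (features_preferred, features_rejected); the loss for a
    pair is -log sigmoid(w @ preferred - w @ rejected).
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=dim)
    for _ in range(steps):
        for x_pref, x_rej in pairs:
            diff = w @ x_pref - w @ x_rej
            sig = 1.0 / (1.0 + np.exp(-diff))
            grad = -(1.0 - sig) * (x_pref - x_rej)  # d(loss)/d(w)
            w -= lr * grad
    return w

# Hypothetical 3-dimensional "response features" for two ranked pairs;
# in real RLHF these would come from the language model itself.
pairs = [
    (np.array([1.0, 0.2, 0.0]), np.array([0.1, 0.9, 0.3])),
    (np.array([0.9, 0.1, 0.5]), np.array([0.2, 0.8, 0.4])),
]
w = train_reward_model(pairs, dim=3)
# After training, every preferred response outscores its rejected partner.
assert all(w @ pref > w @ rej for pref, rej in pairs)
```

In a full RLHF pipeline, this learned scorer then guides a reinforcement-learning step that fine-tunes the language model to produce higher-reward responses.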
The cost of this method will preclude all but the richest AI enthusiasts from training the model. According to a 2020 study, training a model with just 1.6 billion parameters cost anywhere from $80,000 to $1.6 million, whereas PaLM has 540 billion parameters that need to be trained on data.
For now, it seems the model’s development, training, and eventual public release will depend on a wealthy backer. Even though such dependencies have rarely ended well, other groups are already working to replicate ChatGPT’s features and offer them for free.
The research groups CarperAI and EleutherAI, in collaboration with the firms Scale AI and Hugging Face, are preparing to release the first language model trained with human feedback.
Furthermore, LAION, the organization that provided the training data set for the text-to-image model Stable Diffusion, has started a similar project on GitHub that aims to go beyond OpenAI by incorporating APIs, compiling its own research, allowing user customization, and optimizing for consumer hardware.