According to the chief AI scientist at Meta, AI has sparked a renaissance in tech industry R&D

According to Yann LeCun, chief AI scientist at Meta, the advent of the deep learning age of AI has sparked something of a renaissance in business R&D in information technology.

During a recent Zoom discussion with a select group of reporters and company executives, LeCun said, “The sort of tactics that we’ve been working on have had a far broader commercial effect, much more wide-ranging.”

“The upshot is that it has drawn a lot of research funds and, in fact, created a rebirth of industrial research,” he said.

LeCun noted that until about 20 years ago, only Microsoft Research “had any type of significance in information technology.” That changed in the 2010s, he said, with “Google Research really coming to the fore, and FAIR [Facebook AI Research], which I started, and a number of other laboratories going up, and essentially restoring the concept that business should perform fundamental research.”

“Because the potential of what may happen in the future, and what occurs in the present, owing to such technologies is immense,” LeCun added, explaining the return of business R&D.

LeCun argued that the importance of applied AI necessitates a two-track approach, one in which corporations continue to invest in far-reaching “moonshot” initiatives, and another in which research is primarily focused on commercially viable end-uses.

To achieve the ultimate objective of creating intelligent virtual assistants on par with humans, Meta must maintain a sizable research lab; in the meantime, the technology it produces can be put to good use in the here and now.

LeCun cited Google’s Transformer natural language processing architecture, introduced in 2017, as an example, noting that it has served as the basis for numerous other programmes, including OpenAI’s ChatGPT. “For example, content moderation and speech detection in multiple languages has been completely revolutionised over the last two or three years by large, Transformers pre-trained in a self-supervised manner,” he said.
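The “self-supervised” pre-training LeCun refers to needs no human labels: the training targets are carved out of the raw text itself, for example by hiding a token and asking the model to recover it. A minimal sketch of how such (input, target) pairs can be built; the function name and `[MASK]` token here are illustrative, not taken from any particular library:

```python
import random

def masked_lm_examples(tokens, mask_rate=0.3, seed=0):
    """Build (masked_input, targets) pairs from raw text alone.

    This is the self-supervised trick: the 'labels' are just the
    original tokens hidden behind [MASK], so any unlabelled corpus
    becomes training data without human annotation.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            targets[i] = tok  # the model must recover this token
        else:
            masked.append(tok)
    return masked, targets

inp, tgt = masked_lm_examples("the cat sat on the mat".split())
```

A real Transformer would then be trained to predict each entry of `tgt` from the surrounding context in `inp`; the point is that both come from the same unlabelled sentence.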

LeCun credited cutting-edge AI research for the “extraordinary” development.

LeCun was speaking during an hour-and-a-half-long session of the Collective[i] Forecast, an online, interactive conversation series sponsored by Collective[i], which advertises itself as “an AI platform developed to maximise B2B sales.”

When asked by ZDNET how the extraordinary interest in AI by business and commerce is affecting the core research of AI, LeCun provided this response.

LeCun said he was “optimistic” that AI may be put to good use in the world. While AI may fall short of some of its intended outcomes, he said that the impacts it generates may still be worthwhile.

LeCun used the example of autonomous vehicle systems, which have not achieved full autonomy but have produced life-saving improvements in road safety.

According to LeCun, the “automatic emergency braking system, ABS,” is now standard on all vehicles sold in Europe. It is not yet required in the United States, but it is standard on many vehicles there.

These, he remarked, “are the same technologies that also enable the automobile to drive itself on the highway, right?” He noted that automatic braking has been found to reduce collisions by 40%. So, he said, despite what you may have heard about a Tesla colliding with a truck or anything else, such systems are worthwhile because they save lives.

“One of the things I find quite hopeful about AI is the application of AI in research and health at present,” LeCun said, pointing to its potential to improve people’s lives.

LeCun said that “there are a lot of experimental systems, a few hundred of which have obtained FDA clearance,” which enhance the accuracy of MRI and X-ray diagnoses, among other things. He said he can already see the effects this will have on people’s health.

Those advances, although promising, pale in comparison to “the major deal,” he added, which is “the way AI is utilised for research in the future.”

According to LeCun, “we now have systems that can fold proteins, and we also have systems that would be able to design proteins to cling to a certain place,” which allows for a new approach to drug creation.

In addition, AI offers “enormous potential for advances in materials science,” according to LeCun. If we are going to combat climate change, he said, we need high-capacity batteries that don’t break the bank and don’t require exotic elements that can only be found in one spot.

LeCun gave the example of Open Catalyst, a materials initiative developed by FAIR colleagues in collaboration with Carnegie Mellon University to utilise AI to create “new catalysts for use in renewable energy storage to aid in tackling climate change.”

“The concept there is, if we could cover a small desert with solar panels and then store the energy produced by those panels, for example, in the form of hydrogen or methane,” said LeCun. Current methods for storing energy as hydrogen or methane, he said, are “either scalable, or efficient, but not both.”

What if we were able to find a new catalyst using AI that would allow us to scale up that process without resorting to the use of some rare or expensive ingredient? It’s possible it won’t work, but it’s still worth a go.

Although AI has numerous practical and commercial applications, according to LeCun, the pursuit of intelligence on par with that of animals or humans is the ultimate goal of AI research.

LeCun said that the unparalleled availability of data and processing in the deep learning era was responsible for the tremendous research discoveries that underpin today’s applications like Transformers, although basic scientific advancements haven’t always been as numerous or as rich.

In LeCun’s words, “what has produced the more recent wave is, first, a few conceptual advances—but, really, not a great lot and not that impressive—but, truly, the quantity of data that is accessible and the amount of compute that makes it feasible to scale those systems up.”

Large language models like GPT-3, the software that underlies ChatGPT, are evidence that scaling AI, i.e. adding more layers of configurable parameters, significantly increases programme performance. “It turns out they perform incredibly well when you scale them up,” he said of GPT-3 and similar models.

LeCun warned, however, that if the sector continues to focus only on scaling, it may eventually hit a point of diminishing returns.

“Simply make things larger and it will just function better,” he said, citing the ethos of companies such as OpenAI. However, he added, “I believe we are now nearing that point’s boundaries.”

For example, as LeCun put it, “We don’t appear to be able to train a totally autonomous self-driving [automobile] system by simply, you know, training bigger neural nets on more data; it doesn’t seem to get there.”

Even the much-celebrated ChatGPT, which LeCun has labelled “not especially unique” and “nothing groundbreaking,” lacks any capacity for planning, he claimed.

According to LeCun, “they are absolutely reactive.” When a user types in a prompt, the system takes it as input and uses it to produce the following token in a totally reactive manner.

“There is no forward planning or deconstruction of a complicated activity into lesser ones, it is purely reactive,” LeCun remarked.
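The purely reactive loop LeCun describes — prompt in, next token out, repeated — can be sketched in a few lines. This is a toy illustration only: the bigram lookup table below is an invented stand-in for a real neural network, but the control flow is the same autoregressive pattern.

```python
# Toy illustration of purely reactive, autoregressive generation:
# at each step the "model" maps the current context to a single
# most likely next token; there is no planning or decomposition
# of a task into sub-tasks. The bigram table is invented for
# illustration and stands in for a trained network.
BIGRAM_NEXT = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt_tokens, n_steps):
    """Greedily extend the prompt one token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        # The only input to each step is the context produced so far.
        nxt = BIGRAM_NEXT.get(tokens[-1])
        if nxt is None:  # no known continuation: stop
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"], 3))  # ['the', 'cat', 'sat', 'on']
```

A real large language model replaces the table lookup with a forward pass over the whole context, but each step is still just “predict the next token” — which is precisely the limitation LeCun is pointing at.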

LeCun gave the example of Copilot, the OpenAI-based coding assistant that Microsoft now offers through GitHub. He warned of “a very serious restriction of such systems”: they are being used much as a person with more advanced computer skills would use a predictive keyboard.