2022 was the year AI finally started to live up to its hype

Ever since deep learning burst into the mainstream in 2012, the hype surrounding AI research has often exceeded its reality. Over the past year, however, a series of major breakthroughs and milestones suggests the technology may finally be delivering on its promise.

Despite the obvious potential of deep learning, over the past decade regular warnings about the dangers of runaway superintelligence and the prospect of technological unemployment have been tempered by the fact that most AI systems were concerned with identifying images of cats or producing questionable English-to-Chinese translations.

Over the past year, however, there has been an undeniable shift in the capabilities of AI systems, in areas as diverse as the creative industries, basic sciences, and computer programming. Moreover, these AI systems and their results are becoming more visible and accessible to ordinary people.

Nowhere has progress been more evident than in the burgeoning field of generative AI, a catch-all term for a host of models focusing on creative tasks.

This is largely thanks to a type of model called a transformer, which was first unveiled by Google in 2017. Indeed, many of the AI systems that made headlines this year are updated versions of models their developers have been working on for some time, but the results they produced in 2022 blew previous iterations out of the water.

Chief among them is ChatGPT, an AI chatbot based on the latest version of OpenAI’s large GPT-3 language model. Made public at the end of November, the service amazed people with its ability to engage in natural-sounding conversations, answer complex technical questions, and even produce compelling prose and poetry.

Earlier this year, another OpenAI model called DALL-E 2 took the internet by storm with its ability to generate hyperrealistic images in response to prompts as bizarre as “a raccoon playing tennis at Wimbledon in the 1990s” and “Spider-Man from ancient Rome.” Meta took things a step further in September with a system that could produce short video clips from text prompts, and Google researchers even managed to create an AI that can generate music in the style of whatever audio clip it is played.

The implications of this explosion of AI creativity and fluency are difficult to gauge at the moment, but they have already spurred predictions that the technology could replace traditional search engines, kill the college essay, and lead to the death of art.

This is due as much to the improved capabilities of these models as to their increasing accessibility, with services like ChatGPT, DALL-E 2, and the text-to-image generator Midjourney open to everyone for free (for now, at least). Going even further, the independent AI lab behind Stable Diffusion has even open-sourced its text-to-image model, allowing anyone with a modestly powerful computer to run it themselves.

AI has also made progress in more prosaic tasks over the past year. In January, DeepMind unveiled AlphaCode, an AI-powered code generator that the company says can match the average programmer in coding competitions. In the same vein, GitHub Copilot, an AI coding tool developed by GitHub and OpenAI, evolved from a prototype into a commercial subscription service.

Another major bright spot for the field has been the increasingly prominent role of AI in basic science. In July, DeepMind announced that its revolutionary AlphaFold AI had predicted the structure of almost every protein known to science, a potential revolution for both the life sciences and drug discovery. The company also announced in February that it had trained its AI to control the superheated plasma found inside experimental fusion reactors.

And while AI seems to be moving further and further away from the kind of toy problems that preoccupied the field over the past decade, it has also made major strides in one of the long-standing pillars of AI research: games.

In November, Meta showed off an AI that ranked in the top 10% of players at the board game Diplomacy, which requires a tricky combination of strategy and natural language negotiation with other players. The same month, a team from Nvidia trained an AI to play the complex 3D video game Minecraft using only high-level natural language instructions. And in December, DeepMind cracked the devilishly complicated game Stratego, which involves long-term planning, bluffing, and a fair amount of uncertainty.

It hasn’t all been smooth sailing, however. Despite the superficially impressive nature of the output of generative AI models like ChatGPT, many were quick to point out that they are essentially very convincing bullshit generators. They are trained on huge amounts of text of varying quality from the internet, and in the end all they do is guess which text is most likely to follow a prompt, with no ability to judge the veracity of their output. This has raised fears that the internet will soon be flooded with huge amounts of seemingly convincing nonsense.

This came to a head with the release of Meta’s Galactica AI, which was supposed to summarize academic papers, solve math problems, and write computer code to help scientists speed up their research. The problem was that it produced convincing material that was completely wrong or heavily biased, and the service was taken down after just three days.

Bias is a significant issue for this new breed of AI, which is trained on large swaths of material from the internet rather than the more carefully curated datasets that earlier models were fed. Similar issues have cropped up with ChatGPT, which despite the filters put in place by OpenAI can be tricked into saying that only white and Asian men make good scientists. And the popular AI image-generating app Lensa has been called out for sexualizing portraits of women, particularly those of Asian descent.

Other areas of AI have also had a less than stellar year. One of the most touted real-world use cases, self-driving cars, suffered significant setbacks, with the shutdown of the Ford- and Volkswagen-backed Argo AI, Tesla pushing back against allegations of fraud over its failure to deliver “full self-driving,” and a growing chorus of voices claiming the industry is stuck in a rut.

Despite the apparent progress that has been made, there are also those, like Gary Marcus, who argue that deep learning is reaching its limits, since it is unable to truly understand the material it is trained on and merely learns to make statistical connections that can produce convincing but often erroneous results.

But for those behind some of this year’s most impressive results, 2022 is just a taste of what’s to come. Many predict that the next big breakthroughs will come from multimodal models that combine increasingly powerful capabilities across text, images, and audio. Whether the field can maintain its momentum in 2023 remains to be seen, but either way, this year looks set to go down as a turning point in AI research.

Image credit: DeepMind / Unsplash
