The technology behind OpenAI’s fiction-writing, fake-news-spewing AI, explained

Last Thursday (Feb. 14), the nonprofit research firm OpenAI released a new language model capable of generating convincing passages of prose. So convincing, in fact, that the researchers have refrained from open-sourcing the code, in hopes of stalling its potential weaponization as a means of mass-producing fake news.

While the impressive results are a remarkable leap beyond what existing language models have achieved, the technique involved isn’t exactly new. Instead, the breakthrough was driven primarily by feeding the algorithm ever more training data—a trick that has also been responsible for most of the other recent advancements in teaching AI to read and write. “It’s kind of surprising people in terms of what you can do with […] more data and bigger models,” says Percy Liang, a computer science professor at Stanford. 

The passages of text that the model produces are good enough to masquerade as something human-written. But this ability should not be confused with a genuine understanding of language—the ultimate goal of the subfield of AI known as natural-language processing (NLP). (There’s an analogue in computer vision: an algorithm can synthesize highly realistic images without any true visual comprehension.) In fact, getting machines to that level of understanding is a task that has largely eluded NLP researchers. That goal could take years, even decades, to achieve, surmises Liang, and is likely to involve techniques that don’t yet exist.

Four different philosophies of language currently drive the development of NLP techniques. Let’s begin with the one used by OpenAI.

#1. Distributional semantics

Linguistic philosophy. Words derive meaning from how they are used. For example, the words “cat” and “dog” are related in meaning because they are used more or less the same way. You can feed and pet a cat, and you feed and pet a dog. You can’t, however, feed and pet an orange.

How it translates to NLP. Algorithms based on distributional semantics have been largely responsible for the recent breakthroughs in NLP. They use machine learning to process text, finding patterns by essentially counting how often and how closely words are used in relation to one another. The resultant models can then use those patterns to construct complete sentences or paragraphs, and power things like autocomplete or other predictive text systems. In recent years, some researchers have also begun experimenting with looking at the distributions of random character sequences rather than words, so models can more flexibly handle acronyms, punctuation, slang, and other things that don’t appear in the dictionary, as well as languages that don’t have clear delineations between words.
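To make that counting idea concrete, here is a minimal sketch in Python. It is a toy illustration of distributional similarity, not the neural-network models OpenAI actually trains, and the six-sentence corpus is invented to echo the cat/dog/orange example above.

```python
from collections import Counter
from itertools import combinations
import math

# A toy corpus; real models train on millions of documents.
corpus = [
    "you can feed a cat",
    "you can pet a cat",
    "you can feed a dog",
    "you can pet a dog",
    "you can peel an orange",
    "you can eat an orange",
]

# Count how often each pair of words appears in the same sentence.
cooc = Counter()
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for a, b in combinations(words, 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def vector(word):
    # A word's "meaning" is just its co-occurrence counts with every other word.
    return [cooc[(word, other)] for other in sorted(vocab)]

def cosine(u, v):
    # Standard cosine similarity between two count vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(cosine(vector("cat"), vector("dog")))     # 1.0: identical contexts
print(cosine(vector("cat"), vector("orange")))  # lower: different contexts
```

On this corpus, “cat” and “dog” end up with identical vectors because they appear in identical contexts, while “orange” keeps company with different verbs and scores lower.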

Pros. These algorithms are flexible and scalable, because they can be applied within any context and learn from unlabeled data.

Cons. The models they produce don’t actually understand the sentences they construct. At the end of the day, they’re writing prose using word associations.

#2. Frame semantics

Linguistic philosophy. Language is used to describe actions and events, so sentences can be subdivided into subjects, verbs, and modifiers—who, what, where, and when.

How it translates to NLP. Algorithms based on frame semantics use a set of rules or lots of labeled training data to learn to deconstruct sentences. This makes them particularly good at parsing simple commands—and thus useful for chatbots or voice assistants. If you asked Alexa to “find a restaurant with four stars for tomorrow,” for example, such an algorithm would figure out how to execute the request by breaking the sentence down into the action (“find”), the what (“restaurant with four stars”), and the when (“tomorrow”).
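As a rough sketch of that deconstruction, a rule-based frame parser can be written in a few lines of Python. The single pattern below is a hypothetical stand-in for the much larger rule sets, or the labeled training data, that real assistants rely on.

```python
import re

# One hypothetical rule; real systems use far bigger grammars or learned models.
RULE = re.compile(
    r"^(?P<action>find|book|play)\s+"
    r"(?P<what>.+?)"
    r"(?:\s+for\s+(?P<when>today|tomorrow|tonight))?$"
)

def parse(command):
    # Deconstruct a simple command into its frame slots.
    match = RULE.match(command.lower())
    if not match:
        return None
    return {slot: value for slot, value in match.groupdict().items() if value}

print(parse("Find a restaurant with four stars for tomorrow"))
# {'action': 'find', 'what': 'a restaurant with four stars', 'when': 'tomorrow'}
```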

Pros. Unlike distributional-semantic algorithms, which don’t understand the text they learn from, frame-semantic algorithms can distinguish the different pieces of information in a sentence. These can be used to answer questions like “When is this event taking place?”

Cons. These algorithms can only handle very simple sentences and therefore fail to capture nuance. Because they require a lot of context-specific training, they’re also not flexible.

#3. Model-theoretical semantics

Linguistic philosophy. Language is used to communicate human knowledge.

How it translates to NLP. Model-theoretical semantics is based on an old idea in AI that all of human knowledge can be encoded, or modeled, in a series of logical rules. So if you know that birds can fly, and eagles are birds, then you can deduce that eagles can fly. This approach is no longer in vogue because researchers soon realized there were too many exceptions to each rule (for example, penguins are birds but can’t fly). But algorithms based on model-theoretical semantics are still useful for extracting information from models of knowledge, like databases. Like frame-semantics algorithms, they parse sentences by deconstructing them into parts. But whereas frame semantics defines those parts as the who, what, where, and when, model-theoretical semantics defines them as the logical rules encoding knowledge. For example, consider the question “What is the largest city in Europe by population?” A model-theoretical algorithm would break it down into a series of self-contained queries: “What are all the cities in the world?” “Which ones are in Europe?” “What are the cities’ populations?” “Which population is the largest?” It would then be able to traverse the model of knowledge to get you your final answer.
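Here is a minimal sketch of that chain of sub-queries in Python, run against a tiny hand-built knowledge model. The dictionary and its rough population figures are stand-ins for the databases or knowledge graphs a real system would query.

```python
# A hypothetical hand-built model of knowledge: city -> (continent, population).
# The figures are approximate; a real system would query a database.
KNOWLEDGE = {
    "Istanbul": ("Europe", 15_500_000),
    "Moscow":   ("Europe", 13_000_000),
    "London":   ("Europe",  8_800_000),
    "Tokyo":    ("Asia",   14_000_000),
}

# "What is the largest city in Europe by population?" as a chain of sub-queries.
cities = list(KNOWLEDGE)                                        # all the cities we know
in_europe = [c for c in cities if KNOWLEDGE[c][0] == "Europe"]  # which are in Europe?
populations = {c: KNOWLEDGE[c][1] for c in in_europe}           # what are their populations?
largest = max(populations, key=populations.get)                 # which is largest?

print(largest)  # Istanbul
```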

Pros. These algorithms give machines the ability to answer complex and nuanced questions.

Cons. They require a model of knowledge, which is time consuming to build, and are not flexible across different contexts.

#4. Grounded semantics

Linguistic philosophy. Language derives meaning from lived experience. In other words, humans created language to achieve their goals, so it must be understood within the context of our goal-oriented world.

How it translates to NLP. This is the newest approach and the one that Liang thinks holds the most promise. It tries to mimic how humans pick up language over the course of their lives: the machine starts with a blank slate and learns to associate words with the correct meanings through conversation and interaction. In a simple example, if you wanted to teach a computer how to move objects around in a virtual world, you would give it a command like “Move the red block to the left” and then show it what you meant. Over time, the machine would learn to understand and execute the commands without help.
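A minimal sketch of that teaching loop, assuming a drastically simplified world: the learner below starts with an empty vocabulary and builds word-to-action associations purely from paired commands and demonstrations. Every name and action label here is invented for illustration.

```python
from collections import defaultdict, Counter

class GroundedLearner:
    # Starts with no vocabulary; learns only from demonstrations.
    def __init__(self):
        # How often each word has co-occurred with each demonstrated action.
        self.evidence = defaultdict(Counter)

    def observe(self, command, demonstrated_action):
        # A human says the command, then shows the machine what it meant.
        for word in command.lower().split():
            self.evidence[word][demonstrated_action] += 1

    def execute(self, command):
        # Each word votes for the actions it has been seen with.
        votes = Counter()
        for word in command.lower().split():
            votes.update(self.evidence[word])
        return votes.most_common(1)[0][0] if votes else None

robot = GroundedLearner()
robot.observe("move the red block to the left", "shift_red_left")
robot.observe("move the blue block to the left", "shift_blue_left")
robot.observe("move the red block to the right", "shift_red_right")

# Shared words ("move", "the", "block") wash out across demonstrations;
# the distinguishing words ("red", "right") decide the action.
print(robot.execute("move the red block to the right"))  # shift_red_right
```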

Pros. In theory, these algorithms should be very flexible and get the closest to a genuine understanding of language.

Cons. Teaching is very time intensive—and not all words and phrases are as easy to illustrate as “Move the red block.”

In the short term, Liang thinks, the field of NLP will see much more progress from exploiting existing techniques, particularly those based on distributional semantics. But in the longer term, he believes, they all have limits. “There’s probably a qualitative gap between the way that humans understand language and perceive the world and our current models,” he says. Closing that gap would probably require a new way of thinking, he adds, as well as much more time.

This originally appeared in our AI newsletter, The Algorithm.
