The artificial intelligence research company OpenAI has finally released GPT-2, the text-generating AI tool that many experts had earlier warned against.
GPT-2 is software that can produce realistic-sounding news articles from small amounts of text on a specific topic. This is not a new launch, though: the company first announced the software in February this year.
It had released only a portion of the software back then, withholding the full version for fear that it could be misused.
No strong evidence of misuse
After experts raised concerns that the software could be used to spread misinformation and fake news, the company held back the complete version.
Instead, it released a smaller version of the software and has been studying the effects since.
OpenAI said in a blog post that it has seen ‘no strong evidence of misuse so far’, and so it is now releasing the software in full.
It is noteworthy that, over the course of the year, many models based on GPT-2 were built. Two researchers even recreated a full version of the GPT-2 model, drawing scrutiny in the process.
From fake news to coding, GPT-2 can do it all
GPT-2 impressed experts with its ability to generate coherent text from minimal prompts.
It was trained on eight million text documents taken from the web, and it responds to text snippets supplied by users.
Give it a fake news headline, and it will write a whole news story around it; give it the first line of a poem, and it will complete the verse.
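As a rough illustration of this prompt-and-continue behaviour (not OpenAI’s own tooling), the publicly released GPT-2 weights can be loaded through the Hugging Face `transformers` library; the checkpoint name `gpt2` refers to the small public release, and the headline below is just a placeholder prompt:

```python
# Sketch: prompting the released GPT-2 weights via Hugging Face transformers.
# Assumes `pip install transformers torch`; downloads the public "gpt2" checkpoint.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampling repeatable

prompt = "Scientists have discovered a herd of unicorns in a remote valley."
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The pipeline returns the prompt followed by the model's continuation.
print(outputs[0]["generated_text"])
```

The continuation is sampled, so each seed produces a different story, but the output picks up the topic and register of whatever prompt it is given.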
It is hard to say precisely how good its output is. The writing the model produces is often good enough to give the appearance of intelligence. So, is it really that intelligent? Not really.
Spend some time playing around with the system and its limitations become clear. Its biggest challenge is long-term coherence. What does that mean?
Lacking long-term coherence essentially means being unable to stick to a point or subject. To understand it on a more personal level, think of a conversation with a friend.
Has it ever happened that you started talking about pastries and were soon discussing climate change very passionately?
Well, GPT-2 has the same trouble sticking to one topic, only much more amplified. So even if it appears smart, even thoughtful, at first, it has only a shallow understanding of the world.
To get a better idea of how GPT-2 works, you can visit TalkToTransformer.com, a web-based iteration of the GPT-2 model, and enter your own prompts.
What are the critics saying?
Despite everything GPT-2 can do, not everyone is happy with the system or with OpenAI’s approach to handling these problems.
Researcher Delip Rao told The Verge back in February that the company used the words ‘too dangerous’ casually, without much thought or experimentation.
“I don’t think OpenAI spent enough time proving it was actually dangerous,” he added.
OpenAI has said that it will continue to watch how GPT-2 is used by the community and the public, and that it will further develop its policies on the responsible publication of AI research.