
AI giant OpenAI announced GPT-4 on Tuesday, the next significant update to the technology that drives ChatGPT and Microsoft Bing, the search engine that uses it. GPT-4 is reportedly bigger, faster, and more accurate than the model behind ChatGPT, and it scores among the top test takers on exams like the Uniform Bar Exam, which is required for those who want to practise law in the US.

In its announcement blog post, the company laid out the language model’s capabilities, claiming that it is more creative and collaborative than before. Unlike ChatGPT, which was driven by GPT-3.5, GPT-4 can also take images as input and provide captions and analysis. That, however, is just the tip of the iceberg. Here is a look at the new language model and its features.

OpenAI released GPT-4, a large multimodal model, on March 14, 2023. Multimodal means it accepts more than one kind of input: GPT-4 can take images as well as text. GPT-3 and GPT-3.5, by contrast, only supported text, which limited users to typing out their questions.

GPT-4, according to OpenAI, “exhibits human-level performance on many professional and academic benchmarks” in addition to its newly acquired capacity for image processing. The language model’s broader general knowledge and problem-solving skills enable it to pass a mock bar exam with a score in the top 10% of test takers and to solve challenging questions more accurately.

It can, for instance, “address tax-related queries, arrange a meeting for three busy people, or determine a user’s creative writing style.”

A wider range of use cases, including lengthy conversations, document search and analysis, and long-form content creation, is now possible thanks to GPT-4’s ability to handle texts of over 25,000 words.

*GPT-4 can now “see” images: The most obvious change in GPT-4 is that it is multimodal, meaning it can take in information from more than one modality. GPT-3 and ChatGPT’s GPT-3.5 could only read and write text, so they were restricted to text input and output. GPT-4, however, can be prompted with images and asked to describe or reason about them.

If this makes you think of Google Lens, that’s understandable. Lens, however, only looks up information relevant to an image. GPT-4 is significantly more sophisticated in that it can comprehend and analyse images. OpenAI’s own example showed the model being given a picture of an absurdly oversized iPhone connector and explaining why the image is funny. The only drawback is that image inputs are currently at the research preview stage and are not accessible to the general public.
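Because image input is still in research preview, the exact request format for developers has not been published. Purely as a sketch, and assuming images will eventually be passed alongside text inside a chat message via the OpenAI Python library, a request might look something like this (the message structure and an image-capable model are assumptions, not a documented API):

```python
# Illustrative sketch only: image input to GPT-4 is in research preview,
# so the content structure below is an assumed format, not a documented one.
import openai

openai.api_key = "sk-..."  # placeholder API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumption: a GPT-4 variant that accepts image input
    messages=[
        {
            "role": "user",
            # Assumed shape: a list mixing a text prompt with an image reference.
            "content": [
                {"type": "text", "text": "What is funny about this picture?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/oversized-connector.jpg"}},
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```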

*It’s tougher to deceive GPT-4: One of the biggest shortcomings of generative models like ChatGPT and Bing is their tendency to occasionally veer off course and produce responses that are questionable or, worse, outright disturbing to users. They can also get facts wrong and spread misinformation.

According to OpenAI, six months of refining GPT-4 with lessons from its “adversarial testing programme” and from ChatGPT produced the company’s “best-ever results on factuality, steerability, and refusing to go outside of guardrails”.

*GPT-4 can handle a lot more data at once: Although Large Language Models (LLMs) have billions of parameters and are trained on enormous quantities of data, there are limits to how much information they can process within a single conversation. ChatGPT’s GPT-3.5 model could handle 4,096 tokens, or roughly 3,000 words, whereas GPT-4 raises that limit to 32,768 tokens, or roughly 25,000 words.

This improvement means that, unlike ChatGPT, which would lose track of a conversation after roughly 3,000 words, GPT-4 can stay coherent over far longer exchanges. It can also take in longer documents and produce long-form material, both of which were much more constrained on GPT-3.5.
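For a rough sense of whether a given document fits within either context window, OpenAI’s open-source tiktoken tokeniser can count tokens before a request is sent. A minimal sketch, using the limits quoted above (in practice some of the window must also be reserved for the model’s reply):

```python
# Count tokens in a document to check whether it fits GPT-3.5's or GPT-4's context window.
import tiktoken

GPT35_LIMIT = 4_096   # tokens, as quoted above for GPT-3.5
GPT4_LIMIT = 32_768   # tokens, for the 32K variant of GPT-4

def fits_in_window(text: str, limit: int, model: str) -> bool:
    """Return True if `text` encodes to no more than `limit` tokens for `model`."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text)) <= limit

document = open("report.txt").read()  # hypothetical long document
print("Fits GPT-3.5:", fits_in_window(document, GPT35_LIMIT, "gpt-3.5-turbo"))
print("Fits GPT-4:  ", fits_in_window(document, GPT4_LIMIT, "gpt-4"))
```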

*GPT-4 has increased accuracy: OpenAI acknowledges that GPT-4 has the same drawbacks as earlier iterations: it is still fallible and prone to erroneous reasoning. Nonetheless, “GPT-4 dramatically lowers hallucinations relative to earlier models” and scores 40% higher than GPT-3.5 on factuality evaluations. It is also far more difficult to persuade GPT-4 to generate undesired outputs like hate speech and misinformation.

*GPT-4 performs better at non-English language comprehension: Training LLMs in other languages can be difficult because machine learning datasets, and most of the content on the internet today, are predominantly in English.

Yet GPT-4 is more multilingual: OpenAI has shown that it outperforms GPT-3.5 and other LLMs by correctly answering thousands of multiple-choice questions across 26 languages. It handles English best, with an accuracy of 85.5%, but Indian languages like Telugu aren’t far behind at 71.4%. This means users will be able to get clearer and more accurate outputs in their local languages from chatbots built on GPT-4.

GPT-4 has already been incorporated into services such as Duolingo, Stripe, and Khan Academy. It hasn’t yet been made freely available to everyone, but a $20-per-month ChatGPT Plus subscription gets you access right away. In the meantime, GPT-3.5 continues to form the foundation of ChatGPT’s free tier.

There is, however, an “unofficial” way to start using GPT-4 right away if you don’t want to pay. According to Microsoft, the new Bing search experience is now powered by GPT-4, and you can access it right away at bing.com/chat.

Developers, meanwhile, will get access to GPT-4 through its API. OpenAI has opened a waitlist for API access and will begin admitting developers later this month.
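Once waitlist access is granted, calling GPT-4 through the OpenAI Python library should look much like calling GPT-3.5, with only the model name changed. A minimal sketch, with a placeholder key and prompt:

```python
# Minimal chat completion request to the GPT-4 model via the OpenAI Python library.
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # switch from "gpt-3.5-turbo" once API access is granted
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the key differences between GPT-3.5 and GPT-4."},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```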
