Chinese Researchers Introduce Tool to Correct AI Hallucination

Chinese Researchers Create AI Hallucination Corrector

A team of researchers from the University of Science and Technology of China and Tencent’s YouTu Lab has created an innovative solution to the problem of AI hallucination in Multimodal Large Language Models (MLLMs).

Solving Artificial Intelligence Hallucination: Introducing Woodpecker

The solution was presented in a research paper titled “Woodpecker: Hallucination Correction for Multimodal Large Language Models.” The research was published on the pre-print server arXiv.

Woodpecker makes use of three distinct AI models, all separate from the MLLM whose hallucinations are being corrected.

These models are GPT-3.5 Turbo, Grounding DINO, and BLIP-2-FlanT5.

Together, they form a system that first evaluates an output to identify hallucinations and then instructs the model being corrected to regenerate its response based on the verified information.
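As a rough illustration, the three off-the-shelf components could be instantiated independently of the MLLM under correction, for example via the OpenAI API and the Hugging Face transformers library. The snippet below is a minimal sketch under those assumptions; it is not the authors' setup, and the Grounding DINO detector is left as a placeholder rather than guessing at its loading code.

```python
# Illustrative setup only; model choices and libraries are assumptions,
# not the configuration used in the Woodpecker paper.
from openai import OpenAI
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Reasoning / correction model: GPT-3.5 Turbo via the OpenAI API.
llm_client = OpenAI()

def ask_llm(prompt: str) -> str:
    """Send a single-turn prompt to GPT-3.5 Turbo and return the reply text."""
    resp = llm_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Visual question answering model: BLIP-2 with a Flan-T5 language head.
blip2_processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
blip2_model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl")

# Open-set object detector: Grounding DINO, typically loaded from its own
# repository or a compatible wrapper; omitted here as a placeholder.
grounding_dino = None
```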

This is not the first attempt to address the issue of hallucination in AI models.

Earlier solutions relied on an instruction-tuning approach that required retraining the model on specific data.

However, such methods demand a great deal of data and computation, making them costly.

True to the inspiration behind its name, the Woodpecker framework works in five distinct stages: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction.
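To make the flow concrete, here is a minimal Python sketch of how such a five-stage, training-free pipeline might be wired together. The helper names (extract_key_concepts, formulate_questions, and so on) are hypothetical and used only for illustration; they stand in for calls to GPT-3.5 Turbo, Grounding DINO, and BLIP-2-FlanT5 as described above, and the split between detector and VQA model is an assumption based on those models' typical roles, not the authors' exact implementation.

```python
# Hypothetical sketch of a Woodpecker-style correction pipeline.
# Function and method names are illustrative, not the authors' code.

def woodpecker_correct(image, mllm_response,
                       llm,        # e.g. a GPT-3.5 Turbo wrapper (assumed)
                       detector,   # e.g. Grounding DINO wrapper (assumed)
                       vqa_model): # e.g. BLIP-2-FlanT5 wrapper (assumed)
    """Diagnose hallucinations in an MLLM response and rewrite it."""

    # 1. Key concept extraction: identify the objects and entities
    #    mentioned in the response.
    concepts = llm.extract_key_concepts(mllm_response)

    # 2. Question formulation: turn each concept into verification
    #    questions about existence, count, and attributes.
    questions = llm.formulate_questions(mllm_response, concepts)

    # 3. Visual knowledge validation: answer the questions against the
    #    image itself, using the detector for object-level checks and
    #    the VQA model for attribute-level ones.
    answers = {}
    for q in questions:
        if q.kind == "object":
            answers[q] = detector.locate(image, q.target)
        else:
            answers[q] = vqa_model.answer(image, q.text)

    # 4. Visual claim generation: compile the verified answers into a
    #    set of claims grounded in the image.
    claims = llm.generate_visual_claims(answers)

    # 5. Hallucination correction: rewrite the original response so that
    #    it is consistent with the verified claims.
    corrected = llm.correct_hallucinations(mllm_response, claims)
    return corrected
```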

Hallucination in Artificial Intelligence Models

For context, AI hallucination occurs when an AI model confidently generates outputs that are inconsistent with the information in its training data.

Large Language Model (LLM) research has frequently encountered these situations.

OpenAI’s ChatGPT and Anthropic’s Claude are two AI applications that employ LLMs and are susceptible to these hallucinations.

A note in the research paper states, “Hallucination is a big shadow hanging over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content.”

With the release of new chatbot models such as GPT-4, particularly its visual variant GPT-4V, and other systems that process image and text for generative AI, such hallucinations are becoming more prevalent, and Woodpecker is presented as a viable solution.

 
