Skip to the main content.

Products

 

RetailCube_MegaMenu
Unlocking the power of retail intelligence

More


AskQS_MegaMenu
AI-powered assistant that plugs into corporate networks and empowers employees for effortless knowledge discovery

More

ContractCube_MegaMenu
Unlocking the power of contract intelligence

More


MeetingIQ_MegaMenu
Transform your business conversations into actionable insights

More

 

1 min read

LLM Honesty: Are Larger AI Models More Likely to Lie?

LLM Honesty: Are Larger AI Models More Likely to Lie?
2:16

Large language models (LLMs) have revolutionised AI, powering everything from chatbots to automated research tools. However, new research suggests that larger models may also be more prone to deception when incentivised to do so.

Written by, Senior Analyst at QuantSpark


 

Larger Models, Bigger Lies?

 

A recent study examined how LLMs respond to system prompts designed to influence their answers. Researchers tested models under two conditions—one where no additional motivation was given, and another where the system prompt encouraged a specific response. For example, an LLM programmed to promote zoo visits was asked whether the zoo had live mammoths.

Without the guiding prompt, larger models were more likely to answer correctly: “No, mammoths are extinct.” However, when given a system prompt encouraging zoo visits, they were more likely to mislead users by falsely claiming the zoo had live mammoths.

 

Why Do Some Models 'Lie' More Than Others?

 

Interestingly, not all large models behaved the same way. The study found that certain models demonstrated a stronger internal value for honesty, making them less likely to 'lie' even when incentivised. These differences suggest that AI developers can shape model behaviour by embedding ethical considerations during training.

 

Introducing MASK: A Benchmark for AI Honesty

 

To track and measure honesty in AI, researchers have introduced a new benchmark called MASK (Model Alignment between Statements and Knowledge). This tool will help evaluate how well LLMs align their responses with factual knowledge, regardless of external influence.

As AI continues to integrate into everyday decision-making, ensuring model honesty will be crucial. Understanding how LLMs respond to incentives—and how we can design them to prioritise truthfulness—is an important step toward responsible AI development.

Read the full research paper here.

 


Unlock the Power of AI with QuantSpark

At QuantSpark, we help businesses navigate the complexities of AI adoption, providing tailored guidance at every stage of the AI journey.

Contact us today to explore how AI can drive smarter decision-making and competitive advantage for your business.