Google analyst warns AI answers ‘not perfect, can’t replace your brain’

Cryptopolitan · 2024/07/29 16:00
By Jeffrey Gogo

In this post:
- Google analyst Gary Illyes warned that LLMs still have gaps in accuracy.
- The models need a human eye to verify the content they produce.
- He said people shouldn’t trust AI responses without checking authoritative sources.

Google analyst Gary Illyes warned that large language models – the tech behind generative AI chatbots like ChatGPT – still have gaps in accuracy and need a human eye to verify the content they produce. The comments come just days after OpenAI launched SearchGPT, a new AI-powered search engine that will compete directly with Google.

Illyes shared the comments on LinkedIn in response to a question he got in his inbox, but did not say what the question was. He said people shouldn’t trust AI responses without checking other authoritative sources. OpenAI aims for its search tool to upend Google’s dominance in the search engine market.

AI responses are not ‘necessarily factually correct’

Illyes, who has been with Google for over a decade, said that while AI answers may come close to the facts, they are not “necessarily factually correct”. That is because large language models (LLMs) are not immune to absorbing false information circulating on the internet, he explained.

“Based on their training data LLMs find the most suitable words, phrases, and sentences that align with a prompt’s context and meaning,” Illyes wrote. “This allows them to generate relevant and coherent responses. But not necessarily factually correct ones.”
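
Illyes’s description can be made concrete with a toy model. The Python sketch below is purely illustrative (a tiny bigram model over an invented three-sentence corpus, nothing like Google’s or OpenAI’s systems): it always picks the statistically “most suitable” next word, so its output is fluent and on-topic, yet it confidently repeats the one false claim in its training text.

```python
from collections import Counter, defaultdict

# Toy training corpus: fluent text that happens to contain one false claim.
# (Purely illustrative; real LLMs train on billions of documents.)
corpus = (
    "the capital of australia is sydney . "   # false: it is Canberra
    "the capital of france is paris . "
    "the capital of japan is tokyo . "
)

# Count bigrams: for each word, how often each other word follows it.
tokens = corpus.split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt_word: str, max_words: int = 8) -> str:
    """Greedily append the statistically most likely next word.

    The model picks 'suitable' words that fit the context; it has
    no notion of truth, so it repeats whatever the corpus said.
    """
    out = [prompt_word]
    for _ in range(max_words):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Fluent, coherent, and factually wrong:
print(generate("australia"))
# -> "australia is sydney . the capital of australia is"
```

Scaled up by many orders of magnitude, this is the dynamic Illyes describes: responses that are “relevant and coherent” but not necessarily true.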

The Google analyst said users will still need to validate AI answers based on what “you know about the topic you asked the LLM or on additional reading on resources that are authoritative for your query.”


One way developers have tried to ensure the reliability of AI-generated content is a practice called “grounding”, which anchors machine-generated answers in verified information and human checks to guard against error. According to Illyes, even grounding may not be enough.

“Grounding can help create more factually correct responses, but it’s not perfect; it doesn’t replace your brain,” he said. “The internet is full of intended and unintended misinformation, and you wouldn’t believe everything you read online, so why would you [trust] LLM responses?”
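
To make the grounding intuition concrete, here is a minimal sketch in Python. It is an invented toy, not Google’s or any vendor’s actual pipeline: the trusted snippets, the word-overlap test, and the routing rule are all assumptions made for illustration. A generated claim is surfaced only when a trusted source fully supports it; anything else goes to a human.

```python
import string

# Trusted reference snippets; in a real system these would come from
# retrieval over vetted documents. Both entries are invented examples.
TRUSTED_SOURCES = [
    "Canberra is the capital city of Australia",
    "OpenAI launched the SearchGPT prototype in July 2024",
]

def words(text: str) -> set[str]:
    """Lowercase, strip punctuation, and split into a set of words."""
    clean = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(clean.split())

def ground(claim: str) -> str:
    """Surface a claim only if some trusted source contains all its words;
    otherwise route it to a human reviewer."""
    if any(words(claim) <= words(src) for src in TRUSTED_SOURCES):
        return f"SUPPORTED: {claim}"
    return f"NEEDS HUMAN REVIEW: {claim}"

print(ground("Canberra is the capital city of Australia"))  # supported
print(ground("Sydney is the capital city of Australia"))    # flagged
```

Even this strict check only matches words, not meaning, so a paraphrased falsehood could slip through and a correct rewording could be flagged, which is exactly the imperfection Illyes is pointing at.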

Elon Musk accuses Google of gatekeeping public info

Trust has long been an issue with search engines like Google and other artificial intelligence platforms, given the control they exert over the information they deliver to users.

One such incident involves the recent assassination attempt on former U.S. President Donald Trump. Elon Musk suggested that Google had blocked the shooting from appearing in its search results, sparking a major debate on social media about the reach of Big Tech.

In the flurry of responses, a spoof account purporting to belong to Google vice president Ana Mostarac added to the debate, sharing a fake apology from the company for allegedly blocking content about Trump.

“…People’s information needs continue to grow, so we’ll keep evolving and improving Search. However, it seems we need to recalibrate what we mean by accurate. What is accurate is subjective, and the pursuit of accuracy can get in the way of getting things done,” the fake account posted on X.


“You can be assured that our team is working hard to ensure that we don’t let our reverence for accuracy be a distraction that gets in the way of our vision for the future,” it added.

Community Notes on X immediately flagged the post as impersonating the Google VP. The episode shows how easily information can be distorted, and why AI models may not be able to tell accurate content from inaccurate content without human review.

