What Powers Google’s Result Selection

Key Insights from Google’s Deep Dive Asia Pacific 2025 (Day 3)

Day 3 of Google’s Search Central Live: Deep Dive Asia Pacific 2025 shifted the focus to serving and ranking: how Google interprets user queries, retrieves documents, and determines which pages earn visibility. It also addressed the changing definitions of quality, the rise of AI-generated content, and the ongoing fight against spam.

For marketers managing both technical SEO and content strategy, these sessions provided a rare glimpse into Google’s inner workings, offering actionable takeaways for staying competitive in 2025 and beyond.

Understanding Queries and User Intent

The journey begins with query interpretation. Without understanding user queries, retrieval would fail, as Google must match what people search for with content that meets their intent.

Google’s synonym system plays a central role here. User vocabulary rarely matches document vocabulary, so the system automatically adds alternative words to bridge the gap. These synonyms are context-dependent and don’t need to “read well” in a traditional sense; they simply need to reflect how users search.

Machine learning also identifies “sibling terms,” such as brand comparisons like Canon vs Nikon, using query logs to recognise when similar terms are frequently compared. Spelling variations and translated terms (e.g., multiple ways to spell “pad thai”) are handled automatically if they’re common.

However, niche services or products with less search data require marketers to include variations and explain terms more clearly in their content. “Write in your customers’ language,” Google emphasised, “and don’t overthink it unless your terminology is unfamiliar to most users.”
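To make the idea concrete, here is a minimal, purely illustrative sketch of dictionary-based query expansion. The term map is invented for this example; Google’s actual synonym system is learned from query logs at scale, not hand-written:

```python
# Hypothetical synonym map: these entries are illustrative only.
SYNONYMS = {
    "sneakers": ["trainers", "running shoes"],
    "pad thai": ["phad thai", "phat thai"],
}

def expand_query(query: str) -> list[str]:
    """Return the original query plus variants with known synonyms substituted."""
    variants = [query]
    for term, alts in SYNONYMS.items():
        if term in query:
            variants.extend(query.replace(term, alt) for alt in alts)
    return variants

print(expand_query("best sneakers for flat feet"))
# the single query becomes three retrievable variants
```

The practical takeaway matches Google’s advice: if your terminology is common, this bridging happens for you; if it is niche, spell out the variations yourself.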

Retrieval and Ranking: Beyond Keyword Matching

Google’s retrieval system uses tokenisation to cross-reference user queries with a posting list, its fundamental building block for finding relevant documents. While “more matching words” might help retrieval, Gary Illyes reminded attendees:

“Keyword stuffing is not cool. It’s not the year 2000.”
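As a rough sketch of what a posting list is, the toy inverted index below maps each token to the list of documents containing it; real systems also normalise, stem, and store term positions, but the lookup principle is the same:

```python
from collections import defaultdict

def build_index(docs: dict[int, str]) -> dict[str, list[int]]:
    """Build an inverted index: token -> posting list of document IDs."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for token in set(text.lower().split()):  # naive whitespace tokenisation
            index[token].append(doc_id)
    return index

def retrieve(index: dict[str, list[int]], query: str) -> set[int]:
    """Return documents containing every query token (AND retrieval)."""
    tokens = query.lower().split()
    postings = [set(index.get(t, [])) for t in tokens]
    return set.intersection(*postings) if postings else set()

docs = {
    1: "canon mirrorless camera review",
    2: "nikon camera lens guide",
    3: "camera tripod buying guide",
}
index = build_index(docs)
print(retrieve(index, "camera guide"))  # documents 2 and 3
```

Note that repeating a token in a document adds nothing here, which hints at why keyword stuffing buys you no extra retrieval.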

Ranking comes after retrieval and is based on probabilities influenced by hundreds of signals, such as language, location, and document quality. For instance:

  1. Language and location: The site’s language and the user’s browser or location signals.
  2. Quality: Pages considered higher quality are boosted.

404 errors and noindex directives are indexing issues, not quality signals, and should not be conflated with content quality.

What Google Means by “Quality”

Google reiterated that content quality is a key signal, particularly for YMYL (Your Money or Your Life) topics. Quality is not about writing for search engines; it’s about creating useful, trustworthy, and original content for real people.

The four pillars of quality outlined were:

  • Effort - Has genuine work gone into creating something valuable?
  • Originality - Is the content unique and insightful, rather than rehashed?
  • Talent or Skill - Does it reflect expertise or first-hand experience?
  • Accuracy - Is it factually correct, particularly on sensitive or expert topics?

Pages mass-produced with minimal human effort, overrun with ads, or containing fake author profiles or AI-generated “creator” images are considered low quality.

Key point: E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is not a direct ranking factor, but Google’s quality rater guidelines show how to create content that satisfies these principles.

Why Google Updates Search

In 2023 alone, Google conducted over 800,000 search quality tests, launched 4,700+ updates, and blocked 40 billion spammy pages daily. Updates serve three main purposes:

  • Support new content formats - As new formats gain popularity, Google adapts to user needs.
  • Improve breadth and relevance - With content saturation, updates help surface the best material.
  • Combat spam - Spammers exploit loopholes, requiring targeted algorithms to maintain fairness.

Core updates are broad, affecting entire sites rather than single pages, while spam updates target bad practices like cloaking, doorway pages, scraped content, link schemes, and hacked content.

As Gary Illyes explained, core updates aren’t penalties:

“Continue doing great work. Study what competitors are doing better, and improve your site.”

Lightning Talks: From Content to Findability

1. Site Search Spam

Spammers exploit internal site search features by generating spammy URLs and titles, which can waste crawl budget and harm site reputation. Solutions range from implementing CAPTCHA to blocking abusive keywords in robots.txt or applying noindex on site search results. A key takeaway was the importance of regularly reviewing Google Search Console index reports and balancing short-term fixes with long-term safeguards.
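One common safeguard is disallowing crawling of internal search result URLs in robots.txt. The paths below are placeholders and must match your own site’s search URL pattern; also note that robots.txt blocks crawling, not indexing, so spam URLs that are already indexed may additionally need a noindex response:

```
# Example robots.txt rules for internal site search pages (paths are placeholders)
User-agent: *
Disallow: /search/
Disallow: /*?q=
```

After deploying rules like these, verify the effect in Search Console’s index reports rather than assuming the pages have dropped out.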

2. Why Some Blogs Stand Out

Creating standout content isn’t about word count; it’s about relevance, novelty, and emotion.

  • Novelty triggers curiosity and strengthens expertise and experience signals.
  • Emotional content tells stories that resonate with audiences.
  • Data-driven insights, such as expert quotes, statistics, and trends, enhance authority.
  • Memorable presentation comes from a unique voice, strong structure, and multimedia elements.

3. Generative AI and SEO Content

AI can streamline content creation if used wisely. When human-reviewed and tailored, AI content can match user engagement metrics of human-written content. Custom AI models can amplify these benefits but should always be combined with quality checks and brand voice.

Search Features, Structured Data, and SERP Enhancements

Google detailed how results are presented:

  • Text results include title links, snippets, sitelinks, and byline dates.
  • Review snippets offer quick social proof with ratings or votes.
  • Shopping features rely on both structured data and Google Merchant Center.

Structured data remains critical for rich results but doesn’t directly boost rankings. It requires ongoing maintenance, as errors can affect eligibility. Google may generate certain rich features (like site names) even without explicit markup, but it’s best to implement and monitor schema.
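For illustration, a minimal review-snippet markup in JSON-LD might look like the following. All values are placeholders; validate real markup with Google’s Rich Results Test before relying on it for eligibility:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
```

Markup like this is embedded in the page inside a `<script type="application/ld+json">` tag, and it needs the same maintenance as any other code: stale or inaccurate values can cost you the rich result.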

Analytics, Data Crossovers, and Tracking Challenges

Comparing performance data between Google Analytics (GA), Search Console (GSC), and Trends remains complex:

  • GA limitations: Blocked cookies (especially in Europe), improper tag setups, and attribution quirks (e.g., return visitors logged as “direct”).
  • GSC differences: Aggregates data by canonical URL, tracks all content types including PDFs, and reports in California (Pacific) time, creating misalignments with APAC reports.
  • Key fix: Align GA time zones with California for meaningful comparisons.
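Where changing the GA property’s time zone isn’t an option, you can normalise on the analysis side instead. A minimal sketch using Python’s standard zoneinfo module (the Sydney example is illustrative; substitute your own region’s IANA time zone name):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_gsc_day(local_dt: datetime) -> str:
    """Return the calendar date a local timestamp falls on in the
    America/Los_Angeles time zone that Search Console reports use."""
    return local_dt.astimezone(ZoneInfo("America/Los_Angeles")).date().isoformat()

# A Tuesday-morning visit in Sydney still counts toward Monday in GSC.
sydney_morning = datetime(2025, 7, 15, 9, 0, tzinfo=ZoneInfo("Australia/Sydney"))
print(to_gsc_day(sydney_morning))  # "2025-07-14"
```

This kind of date shift is exactly why APAC daily totals in GA and GSC rarely match one-for-one.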

Daniel Waisberg reminded attendees that no single tool provides a complete picture; marketers must combine insights from multiple sources.

AI in Search: Opportunities and Cautions

AI underpins many of Google’s ranking and search features through models like RankBrain, BERT, MUM, and the latest Gemini LLMs. These models interpret user intent, process multi-modal inputs (text, images, video), and power AI Overviews.

However, as Gary Illyes bluntly noted:

“LLMs are not smart. They will make mistakes and these mistakes are hallucinations.”

Generative AI does not retrieve facts but predicts word sequences, which means outputs can be unreliable or unstable over time. Google warned against blindly trusting AI for content creation, particularly at scale, as low-effort AI output remains a core quality risk.

What This Means for You: Key Takeaways

  • Focus on user language and intent. Include variations for niche terms.
  • Prioritise originality and depth. Avoid mass AI content without human input.
  • Audit structured data. Keep it accurate and updated to enhance visibility.
  • Watch for site search spam. Secure internal search pages to protect crawl budget.
  • Track across multiple tools. Combine GSC, GA, and Trends to understand user behaviour.
  • Use AI responsibly. It can improve workflows but requires careful oversight.
  • Stay adaptable. Core and spam updates reward sites with genuine value and trust signals.

Looking Ahead

Search in 2025 is no longer about chasing algorithms but about understanding users, demonstrating expertise, and embracing technology responsibly. From query interpretation to ranking signals and AI integration, Google’s systems increasingly reward sites that blend human insight with technical precision.

If you’re unsure how well your site aligns with modern SEO practices, or whether your content is ready for AI-enhanced search, Altitude Search offers a free SEO Health Check. It’s designed to reveal technical gaps, content opportunities, and actionable steps for growth.

Book your free SEO Health Check today.


About Google Deep Dive Asia Pacific 2025

Michaela Laubscher was selected as one of approximately 400 attendees for Google's inaugural Search Central Live Deep Dive Asia Pacific 2025, a three-day flagship SEO conference held in Bangkok.

The invite-only event featured in-depth workshops and sessions led by Google's Search team, focusing on technical SEO topics and hands-on learning experiences not available in traditional one-day conferences.

Want to read all the findings from Google’s Deep Dive Asia Pacific 2025?