
#52 | AI Search: Who decides what you know?

TL;DR: The invisible filter between you and information. AI answers stop most people from looking further—and what you see may depend on money, power, and optimization behind the scenes.

👋 Hello AI searchers,

It’s weird, but it feels so normal now to just take AI’s answers at face value. Even though we shouldn’t.

I caught myself doing it the other day with something as mundane as vitamins. Asked ChatGPT, Perplexity, and Google AI which supplements actually help with focus and energy.

ChatGPT zeroed in on B vitamins—B12, folate—talking about neural signaling and keeping your brain sharp as you age. Perplexity went full research mode, ranking B vitamins, omega-3s, and citicoline based on actual meta-analyses.

Google AI? More cautious. It basically said “only if you’re deficient,” but added an interesting note about how B vitamins and omega-3s together might do something special for at-risk groups.

Long story short, they all agreed—sort of—on the chemical delivery: Methylated B12 beats regular B12. Phosphatidylserine trumps generic choline. But they each added different product suggestions and intake recommendations.

Same questions. Different platforms. Overlapping answers, but the nuance—the stuff that probably matters for whether this actually helps you—hides in the details.

But why am I writing this? Well, here’s what got me:

The inconsistency of the results is not the issue. Scientists disagree about vitamin research all the time. I expected that.

What stopped me was (again) the confidence. Each answer felt complete. Authoritative. Like, “here’s what you need to know, now act on it.”

But there were no caveats.

AI results are often generic or incomplete—because the request wasn’t specific enough, or because the model follows its own constraints and rules.

We get suggestions for ideas, strategies, products, travel destinations, and even political views. But is that all? Is it what we really wanted to know? Does it matter to us? What’s missing or has been omitted, and why?

Today’s topic: how AI search may impact the diversity of opinions and perspectives.

What I started noticing everywhere

Once you see it with vitamins, you can’t unsee it anywhere else.

There’s some research out there about what happens when AI gives you that neat summary at the top of search results.

Traffic to the actual sources? Roughly 50-60% of searches now end without anyone clicking anything. Sound familiar?

CBS News looked at its own numbers. When AI summaries appeared for their content, 75% of people just… stopped, read the summary, closed the tab. Compared to a 54% baseline when summaries weren’t there.

Chegg—that educational platform—reported a 49% traffic decline. Publishers have been quietly submitting evidence to regulators showing 10-25% drops year over year.

People are reading the AI’s version and moving on.

And look, I do this too. More than I want to admit.

Here’s why I think it matters:

Traditional Google gave you ten results. You’d click three, maybe five if you were really digging. You’d see what different sources said, notice where they agreed, and where they contradicted each other.

That little bit of effort—scrolling, clicking through a few results—kept you exposed to competing views.

Now? AI gives you the answer. You accept it. Loop closed. No alternatives appear.

Same with my messy approach to vitamins in a quick search.

The information got compressed before I even knew to ask the right questions.

That’s the pattern worth noticing and thinking about.

So, who’s actually answering (y)our questions?

So here’s where it gets interesting—and maybe a little uncomfortable.

Microsoft holds 49% profit rights in OpenAI. Thirteen billion dollars invested. When you ask ChatGPT about your health, Microsoft’s commercial interests are… somewhere in that equation.

Google AI? That’s Alphabet. A two-trillion-dollar company that controls over 90% of search in most markets.

Perplexity? Backed by Jeff Bezos. Who, incidentally, profits when you buy things on Amazon. (Including supplements, but I’m not crossing that bridge yet.)

Well, I’m not saying conspiracy. I’m saying these may not be neutral information utilities. They’re commercial platforms with business models built substantially on advertising.

So what?

The AI search advertising market? One billion dollars in 2025. Projected to hit twenty-six billion by 2029.

Google claims their AI Overviews “monetize at the same rate as traditional search.” Make of that what you will.

But here’s what really got under my skin:

Stanford researchers looked at what happens when AI systems face competitive pressure. They set up scenarios where performance metrics—sales, votes, engagement—determined success.

The results? A few small lies to nudge things in the right direction. Nothing dramatic on the surface:

  • Marketing scenarios: 6.3% sales increase, 14% rise in deceptive claims.
  • Political campaigns: 4.9% vote gain, 22.3% more disinformation.
  • Social media: 7.5% engagement boost, 188.6% increase in misleading content.

And this is the kicker—the alignment techniques designed to keep AI truthful? Didn’t prevent it. In some cases, they made it worse.

The models learned something we probably should have seen coming: manipulating information produces better performance metrics than providing accurate information.

Ok, I know you are aware of that and do your own research. But does it matter?

Here’s how it plays out on a larger industry scale:

  • Businesses need to pay to optimize content for AI search visibility.
  • Brand authority strongly correlates with AI visibility.
  • Either you’re authoritative enough to get cited, or you’re invisible.
  • Large corporations optimize because they can afford to.
  • The independent nutritionists with years of clinical experience but no optimized web presence? The AI fails to recognize their existence.

So when supplement companies spend $50K a month on optimization, they show up. The CDC’s evidence-based nutrition guidance—unoptimized, unfunded—might never appear.

Quality doesn’t determine visibility here. Resources do.

What disappears when everything gets compressed

I kept digging. This was too interesting to stop.

Researchers analyzed over 14,000 conversation logs from AI search tools. Found that 34% of Google Gemini responses were generated without fetching any online sources. 92% of Gemini answers provided no clickable citation.

Perplexity visits ten pages per query but only shows you three or four in citations.

You have no way to trace which information shaped the answer. What got excluded? Why did certain perspectives make the cut while others didn’t?

And the system concentrates heavily on Western, English-language sources. Sixty percent of web traffic flows to U.S.-based websites. AI training data reflects this.

When AI synthesizes health guidance, it pulls predominantly from mainstream medical institutions. Traditional healing practices, functional medicine perspectives, dissenting research opinions—they lack the infrastructure to compete.

Nature published research on “model collapse.” When AI trains on AI-generated content instead of diverse human sources, quality degrades irreversibly.

Each generation produces fewer diverse outputs. Those outputs train the next generation. The decay compounds.

Back to vitamins for a second—because this is where it really clicked for me.

An Ayurvedic practitioner would approach “low energy” completely differently. Not isolated nutrient deficiencies—constitutional imbalances. Vata, pitta, kapha.

Traditional Chinese Medicine would frame it as Qi deficiency: herbs, acupuncture, dietary therapy.

Functional medicine doctors would investigate root causes—mitochondrial function, adrenal status, gut health—before recommending any supplement.

These are all legitimate frameworks for understanding how bodies work. And all may be systematically absent from AI search results, for good or bad reasons.

Not because they’re wrong. Because they’re not optimized.

Different people, different stakes

I spent time thinking about who wins and who loses here.

Tech companies frame this as solving real needs. Getting answers without clicking through multiple sites? That’s genuine convenience. And they invest substantially in alignment research, acknowledging ongoing challenges.

Publishers face different math. Significant traffic drops threaten business models built on advertising and direct audience relationships. Some are experimenting with revenue-sharing—AI platforms compensating creators when their work gets cited. Others pivot to subscriptions, newsletters, and direct reader support.

Users navigate a trade-off. Zero-click searches save time. Accepting answers without exploring sources means trusting invisible editorial decisions about what matters and what doesn’t.

But let’s be clear. Currently, nobody disputes the value of AI search. It is fast and convenient—and synthesis matters.

What remains unclear:

  • What gets compressed out when singular answers replace exposure to multiple sources?
  • Whose knowledge counts when optimization determines visibility?
  • Which perspectives disappear when convenience becomes the primary design goal?

What I’m doing differently now

Not trying to preach here. Just sharing what’s been working for me.

Next time AI answers your question—especially if it’s something that matters—click through to one of the sources. Check what the original material actually says versus how AI synthesized it.

Not every time. That’d be exhausting. Just sometimes.

Because that little bit of friction? That’s where informed choice actually lives.

Cheers,

Mark
The AI Learning Guy
👋⚡😎

Interesting Sources

  1. Search Traffic Impact Studies:
    How Google AI Overviews Fuels Zero-Click
    Semrush: Impact of AI Search on SEO Traffic
    Italian Publishers Demand AI Overview Investigation
  2. Bias & Representation Research:
    ChatGPT Political Bias Study (PMC)
    Cultural Bias in Large Language Models
    Epistemic Injustice in Generative AI
  3. Commercial Influence & Ownership:
    Microsoft and OpenAI Partnership Evolution
    AI Search Ad Spending to Hit $26bn by 2029
    UK Designates Google Strategic Market Status
  4. Optimization & Visibility:
    What Is Generative Engine Optimization
    Building Authority in the AI Era
    Zero-Click Crisis for Small Business
  5. Regulatory & Policy:
    EU Artificial Intelligence Act Summary
    OSCE: Freedom of Expression in Age of AI

Note: No single website has all the answers. This list serves as a starting point for those who want to explore or satisfy their curiosity about AI. Links with * are affiliate links. See disclosure below.