A purported leak of 2,500 pages of internal documentation from Google sheds light on how Search, the most powerful arbiter of the internet, operates.

The leaked documents touch on topics like what kind of data Google collects and uses, which sites Google elevates for sensitive topics like elections, how Google handles small websites, and more. Some information in the documents appears to be in conflict with public statements by Google representatives, according to Fishkin and King.

368 points

Some information in the documents appears to be in conflict with public statements by Google representatives

I would have never guessed that.

117 points

At this point, if you're not assuming that corporations are pretty much lying for convenience, you ain't operating in reality haha

45 points

Yep, but I’ll add my two cents: half of it is lying and half is guesswork and ignorance, because nobody really knows how big, old systems actually work.

-17 points

No one reaches a position like Google’s without knowing every nook and cranny of their systems, down to the atom.

96 points

Crazy how self regulation always winds up like this. By crazy I mean predictable of course.

17 points

Libertarians assemble!

32 points

Listen, the problem is too many regulations prevented the Invisible Hand from manifesting. If we remove even more regulations the free market will work this time, I swear.

2 points

I prefer socialist libertarian.

2 points

Libertarians go away!

10 points

You’re supposed to move to a different search engine for the market to work. I already have, have you?

11 points

This approach is doomed to fail, so long as the general public isn’t aware of the problem or its scale. Government regulation is the only way.

3 points

I did years ago when Google started censoring my search results even with safe search off.

Unfortunately Bing is doing it too now, and I can’t find a search engine that isn’t, though I’d love to learn about one.

1 point

I tried using some but they’re all equally shit.

6 points

This doesn’t have anything to do with regulation. This is mainly a bunch of SEO and marketing people whining that Google hasn’t been honest with them in telling them exactly how to game its search engine.

22 points

Well I am just shocked, SHOCKED. Well, not that shocked.

3 points
*
0 points

Here is an alternative Piped link(s):

Surprise MF

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

98 points

Can’t wait for self-hosted web search to get better.

62 points

You mean hosting your own crawler/indexer? That doesn’t really sound like a thing you could do cost-effectively.

62 points

No problem, we crowdsource the crawling, torrent style.

We outsourced that to Google for reasonable performance reasons. But they shit the bed, so now there’s no choice but to do it ourselves.

11 points

Ooh, that might be an interesting app to run on Veilid.

19 points

Surprisingly, it’s very doable. It requires basic technical knowledge and relatively minimal computing resources (it runs in the background on your computer).

https://yacy.net/ (GitHub)

I have a Tampermonkey script that sends every website I visit to YaCy to crawl, and it keeps up a relatively good index of those visited sites for personal use. Combine YaCy with ~300 GB of Kiwix databases, add SearXNG as a frontend, and you have a pretty strong self-hosted search engine.

Of course you need to supplement your searches with other search engines, as YaCy does not crawl the whole web, just what you tell it to.

I encourage anyone who’s even slightly interested in this stuff to try YaCy. It’s an ancient piece of software, but it still works very well and is not an abandoned project yet!

I personally use YaCy mostly in private mode, but it does have the distributed network there as well.
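For anyone curious, a script like that mostly boils down to one HTTP request per visited page to your local YaCy instance’s crawl-start API. Here’s a rough Python sketch of that call; the port, endpoint path, parameter names, and credentials are assumptions from memory, so check the Crawler_p.html page on your own instance before relying on it:

```python
import requests
from requests.auth import HTTPDigestAuth

# Assumed defaults: YaCy listening on port 8090 with digest auth enabled.
# Adjust the host, port, and credentials for your own instance.
YACY = "http://localhost:8090"
AUTH = HTTPDigestAuth("admin", "yourpassword")

def crawl_page(url: str, depth: int = 0) -> None:
    """Ask the local YaCy instance to fetch and index a single page."""
    # Crawler_p.html is YaCy's crawl-start servlet; the parameter names
    # below mirror its web form and may differ between versions.
    resp = requests.get(
        f"{YACY}/Crawler_p.html",
        auth=AUTH,
        params={
            "crawlingMode": "url",
            "crawlingURL": url,
            "crawlingDepth": depth,  # 0 = index just this page, follow no links
            "crawlingstart": "Start",
        },
        timeout=30,
    )
    resp.raise_for_status()

crawl_page("https://example.com/some-article")
```

The Tampermonkey part is just calling something like this with the current tab’s URL whenever a page loads.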

7 points

Yeah, I guess the P2P component sort of solves part of the issue I was imagining, by distributing the indexes and crawling. I was picturing people trying to run all of Google on a Raspberry Pi at home.

5 points

This is interesting. Have you had it index Reddit? I’m just wondering how much storage space the database takes up.

16 points

Right!

Before his company was able to block more of Microsoft’s own tracking scripts, DuckDuckGo CEO and founder Gabriel Weinberg explained in a Reddit reply why firms like his weren’t going the full DIY route:

“… [W]e source most of our traditional links and images privately from Bing … Really only two companies (Google and Microsoft) have a high-quality global web link index (because I believe it costs upwards of a billion dollars a year to do), and so literally every other global search engine needs to bootstrap with one or both of them to provide a mainstream search product. The same is true for maps btw – only the biggest companies can similarly afford to put satellites up and send ground cars to take streetview pictures of every neighborhood.”

Ars

16 points

Federated bookmarks?

54 points

Federated directories. We’re going back to Yahoo like it’s 1995

7 points

I’m so ready for something like this. I’ve cleaned up my bookmarks and been waiting for alternatives to search engines.

7 points

You could use Common Crawl; it’s run by a non-profit.

https://en.wikipedia.org/wiki/Common_Crawl
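If you just want to poke at what Common Crawl has captured for a site, each crawl’s index is queryable over a CDX-style HTTP API without downloading the archives. A small Python sketch; the crawl label below is only an example, so pick a current one from the list at https://index.commoncrawl.org/:

```python
import json
import requests

# Each crawl has its own index endpoint; swap in a current crawl label
# from https://index.commoncrawl.org/ (this one is just an example).
INDEX = "https://index.commoncrawl.org/CC-MAIN-2024-10-index"

resp = requests.get(INDEX, params={"url": "example.com/*", "output": "json"})
resp.raise_for_status()

# The API returns one JSON record per line, each describing a captured page.
for line in resp.text.splitlines():
    record = json.loads(line)
    print(record["timestamp"], record["url"], record["status"])
```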

5 points

Look up the YaCy repo on GitHub.

17 points

How is that even supposed to work? These search engines by definition need massive databases to search through. Either you need your own crawler and indexer, which is more than just inefficient, or you’re limited to a relatively short list of curated, static results.

12 points

If they’re taking tips from Google, why would they get better?

32 points
*

Google actually was good, so there’s probably some good information in this documentation. If nothing else we can perhaps figure out what “went wrong.”

Edit: I’ve been reading the blog post by the person the leak appears to have mainly been shared with, and there’s a lot of in-depth analysis being done there, but I’m not seeing a link to the actual documents. It’s a huge article, though, so I might be overlooking it.

6 points

That was an interesting read. Thanks for linking to it.

3 points

What are the current contenders?

13 points

What it looks like beyond Google and Bing

It would be much harder to know what exists beyond “GBY” (Google, Bing, Yandex) and how it all works without the work of Rohan “Seirdy” Kumar. For three years, Kumar has been updating a heavily annotated list of search engines with their own indexes. It is 7,000 words, but only a portion of it deals with engines offering general indexing, in the English language. You can read Kumar’s evaluation methodology for a better understanding of how he compared and assessed sites.

What stands out? Mojeek (“it’s not bad… I’d live”) and Stract (“a useful supplement to more major engines”) are two of Kumar’s favorites. Right Dao has “very fast, good results,” in part because its crawler starts off from Wikipedia. Yep reaches farther out, showing results that link to and back from sites related to your query and also promises to share ad revenue with creators. All of them show promise, but you get the sense that they’re a second car, or a third bicycle, rather than a primary transport.

There are far smaller-scoped engines in other sections of Kumar’s post. If you’re wondering where that one other search engine you’ve heard about is, it’s probably in the “Semi-independent indexes” section, because it uses a GBY index when its own results are not strong enough. Here, you’ll find cryptocurrency-friendly, controversy-courting-founder-having Brave, a few engines that either “resell” GBY results or stuff affiliate links into them, and “the most interesting entry,” according to Kumar, Kagi.

Kagi requires an account and uses its own index, Teclis, in combination with Google, Bing, Yandex, Mojeek, and others, including, notably, Brave. Kagi’s founder has strong opinions on the AI-based future of search and responding to harmful searches in ways that are not “scalable.” How much of that does or does not bother you will vary, but it’s worth noting that Kagi also suffers when the GBY triumvirate is restricted.

Ars Technica this week: Bing outage shows just how little competition Google search really has

The referenced search engine comparison by Rohan “Seirdy” Kumar

5 points

Can’t emphasise enough that this piece is a very necessary read for anyone who wants to know about search; not just because it says good things about us, but because of the depth of research that has been put in here. Most times you encounter an article about indexes, it’s just repeating whatever a (meta)search engine says about itself, not even looking at privacy policies for “relationships with Microsoft” etc. or doing any comparative work.

2 points

I’ve been using Kagi and really like it so far. It’s not good for local stuff, but afaik only Google and Bing have the resources and userbase for things like maps and reviews. It’s designed to be an ad-free ‘premium’ search engine and only earns revenue from users paying for membership.

6 points

The only one I know of that isn’t a proxy search is YaCy.


I was looking at it the other day; unfortunately it’s got quite poor results.

5 points

YaCy, Mwmbl, Alexandria, Stract, Marginalia to name a few.

85 points

Google has been pretty crap for a decade now.

I still remember demoing how easily they can manipulate people: way back in 2016, searching “Pakistan News” returned exclusively Indian media outlet propaganda.

I really feel like they never got properly exposed for this just because it’s a search engine and not social media, so people didn’t care enough about it. Also because Google was still top of the game in most results compared to other sites back then.

17 points

My thought exactly. If this had come out back in, like, 2010, it would have been a real “oh shit” moment: the keys to the kingdom leaked. Now I don’t think anybody really cares other than SEO spammers, who will game the system even more than they already are.

Google Search is crap and has been crap for some time. Not sure any of the others are better. But it started going downhill with the Google Plus social network, when they removed “+” as a search operator so you could better search for ‘Google+’. That was the first time they messed with Search to further some other business goal. It wasn’t the last time. Back when Google was good, they publicly said their goal was to get you off their site as fast as possible. Now the results reek of engagement-algorithm bullshit.

3 points

SearXNG works all right for me, and it’s free. I’ve also heard good things about the paid service Kagi.

79 points
*

Rand Fishkin, who worked in SEO for more than a decade, says a source shared 2,500 pages of documents with him with the hopes that reporting on the leak would counter the “lies” that Google employees had shared about how the search algorithm works.

Am I supposed to care that the poor SEO assholes that need to get their ads more visibility weren’t being given all the instructions on how to do that by the search engine?

Most of this article is SEO “experts” complaining that some of the guidelines they were given didn’t match what’s in the internal documents.

Google is shit, but SEO is a cancer too. I can’t be too bothered by Google jacking them around a bit.

31 points
*

Am I supposed to care that the poor SEO assholes that need to get their ads more visibility weren’t being given all the instructions on how to do that by the search engine?

No. You’re supposed to care that a company is pointlessly* lying, thus it’s extremely likely to deceive, mislead and lie when it gets some benefit out of it.

In other words: SEO arseholes can ligma, Google is lying to you and me too.

*I say “pointlessly” because not disclosing info would achieve practically the same result as lying.

24 points

need to get their ads more visibility

I occasionally encounter the desire for a search engine to surface non-advertisement content :)

Now if they lied to advertisers and told small bloggers, reputable news agencies, fediverse admins, etc. the insider secrets… now we’re talkin’!

8 points

Historically, Google had a give-and-take with SEO. You can’t make SEO companies go away, but you can curb the worst behavior. Google used to punish bad behavior with a poor listing, and you had to do some work to get it back into compliance and tell Google it’s fixed up.

It wasn’t ideal, but it functioned well enough.

The drive to make search more profitable over the past few years seems to have meant dropping this. SEO companies can get away with whatever. If they now have the whole manual, game over. Google of a decade ago might have done something about it. Google of today won’t bother.

-18 points
*

Edit: If you’re going to downvote me, please take the time to explain why you think I’m wrong. Stop being the hive mind.

Tell me you don’t know shit about SEO without telling me you don’t know shit about SEO.

Just because there are people who do bad things doesn’t mean the industry is bad or has bad intentions. SEO isn’t ads. Advertorials can be one SEO tactic, but they’re not SEO as a whole. Same with clickbait: it gets used because it works, and I guarantee you fall for it constantly too.

SEO is about understanding what someone needs and creating an experience that ensures they find the answer to what they need, through content and/or a product that solves their need.

This can be achieved through copywriting; researching search trends and queries; technical analysis of websites and how they render; providing guidance on helpful assets (photos, PDFs, videos, forms, copy, etc.); PR outreach, because links are how people move around online and discover things; social planning, because social media are a form of search engine; and more.

And finally, SEOs are not responsible for how Google treats shit. Google is responsible for that. Google is the one that tweaks the algorithm and doesn’t catch spammy shit. In fact, many SEOs catch it and report it to Google’s reps, but those reps are the ones who can ensure the right team(s) fix the issue.

13 points

Fuck SEOs - that is why you are getting downvoted. Organic content creation has been ruined by you AND Google. Own your problems, beg forgiveness, and stop playing the stupid game where there are no winners.

-15 points

You’re exactly the person I was talking about - the hive mind. You don’t think critically, and you blame an entire industry that has niches and actors of all sorts. You’d probably say all black people are bad because a few on a street did something wrong once.

Please, tell me YOUR industry so I can have fun shitting on it and drawing asinine conclusions.

2 points

Wait, what is “social planning” and how is it different from conventional marketing on social media? That seems pretty far removed from search engines.

0 points
*

Great question! Search engines crawl social media and discover links. Being shared widely can be a sign of trust and authority, which can help boost signals of page importance to Google (or other engines) and help push a page up in organic ranking positions.

Harmonizing brand details (name, address, phone number, website link) across all social platforms is important so you don’t send mixed signals or lead to unneeded redirects.

There’s also figuring out which page(s) you want to make sure get showcased if multiple URL links are allowed, or filling in your social team on page assets they may not know about that would satisfy their audience, such as an orphaned page. These are part of what are called “backlinks”.

Hashtags do matter for some platforms and knowing how to research them for intent is wise.

There’s also open graph (OG) metadata that you can set on a webpage that allows your metadata to be different on social platforms than you would use for a search engine - tailor to your audience!
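If you’ve never compared the two, it’s easy to see the difference by pulling both sets of tags from any page. A quick Python sketch (the URL is just a placeholder; this assumes the requests and beautifulsoup4 packages are installed):

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL; any public page works.
html = requests.get("https://example.com/").text
soup = BeautifulSoup(html, "html.parser")

# What a search engine typically shows: the <title> and the description meta tag.
title = soup.title.string if soup.title else None
desc = soup.find("meta", attrs={"name": "description"})

# What social platforms read instead: the Open Graph meta tags.
og_title = soup.find("meta", attrs={"property": "og:title"})
og_desc = soup.find("meta", attrs={"property": "og:description"})

print("search title:      ", title)
print("search description:", desc["content"] if desc else None)
print("og:title:          ", og_title["content"] if og_title else None)
print("og:description:    ", og_desc["content"] if og_desc else None)
```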

Edit: one other thing: while it’s not social media, it can also mean connecting with the social team (if there is one) to find out whether any posts need to be applied to Google Business listings via a Google Post for local locations.

68 points

Here’s the sooper-secret search result algorithm for whatever you type into Google:

YouTube results, followed by Reddit results, followed by “Sponsored” results, followed by AI-written bot results, then a couple of pages of Amazon results, and finally, on page 10 or so, a ten-year-old result that’s probably no longer relevant.

10 points

That’s generally what I’ve found to be the case, shocking that it’s considered so secret lol

