1) the .net version has a couple of very high-authority links, namely from The Register and The New Stack (both of which have had lots of engagement).
I highly doubt it would have ranked without those links.
2) it's only been a week. Give Google time to understand which pages should rank higher.
3) Google is biased towards sites that cover a topic earlier than others.
I’ve seen pages that are still top 3 for a particular competitive query years later, simply because they were one of the first to write about it.
Suggestions: give it time. Meanwhile, I'd recommend linking to your website rather than your GitHub everywhere you mention it, to give it a boost.
With so many copycats on the internet, first to publish seems like a fairly good indication of the original source. But as we can see here, that's not always true.
Thousands of little weights driven by obscure attributes of the site that you're not really going to figure out by thrashing and changing stuff.
https://web.archive.org/web/20260301133636/https://www.there... https://web.archive.org/web/20260211162657/https://venturebe... https://web.archive.org/web/20260220201539/https://thenewsta...
Is Google supposed to have drastic updates to its index over 2 weeks?
This hit hard. I'm a solo developer who just shipped my first app and I've spent the last two weeks learning that distribution is an entirely separate skill from building. Submitting to launch platforms, building Reddit karma, writing HN comments, optimizing YouTube SEO...none of this is why I learned to code.
But the uncomfortable truth I'm discovering is that building something great and having people find it are two completely different problems, and Google is increasingly unreliable for the second one. I've basically given up on organic search as a discovery channel for a new product and gone all-in on community-driven distribution instead.
Your situation is worse because you HAVE the authority signals and Google is still failing. For the rest of us without 18,000 GitHub stars and press coverage, we never had a chance to begin with. Google's discoverability problem isn't just an SEO issue; it's reshaping how builders have to think about distribution from day one.
bro.
This is why open source projects like Firefox hold trademarks near and dear.
I would think a US trademark plus a nasty cease and desist letter would deter most. But maybe I’m naive.
Either that or just accept that someone else has a scam site. Report it to anyone you can report it to, put a message in your software stating that it shouldn’t have ads or payments and convey the official website.
Get more traffic (make sure google analytics sees it, IDK but that probably matters because monopoly) and it might help.
Most of the other indices aren’t much better. Turns out fighting spam is expensive, easier to just do a combo of boosting really big sites and blessed spammers that use your ad network.
Plus, based on the results, it's not entirely clear that only the ad section is actually ads. Especially around topics where money is involved, Google's first page often shows companies that stand to profit from the traffic.
The obvious risk here is a bait and switch, where one of these sites switches their link to the Github repo to point to a malicious imitator repo instead.
One approach would be to go after the sites themselves, not their Google ranking. See if their hosts are willing to take them down. Is there anything you can assert copyright over to hang a DMCA request on? That's hard for an open source project, I guess. And the fake sites aren't (yet) doing any actual scamming.
Good luck, though!
Actually I don't trust Google and I don't expect it to surface reliable information. I expect it to surface information and I will dig through it and judge for myself whether it is reliable or not.
Then I tried opening up google.com. and that works too. I didn't know that websites resolve when you add an extra dot after the TLD. This was a really fun coincidence-type discovery, so I wanted to share it with you.
I read an interesting blog article on this a while back: https://lacot.org/blog/2024/10/29/the-trailing-dot-in-domain...
I've given it a glance, but I'm also bookmarking it to read later once I have more free time. Thanks for sharing it!
From the article:
> Wait, what? I can put a dot at the end of my domain names?
This was exactly how I felt at that moment :) The article starts off really nicely.
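For the curious, here's a tiny sketch of one concrete consequence (Python stdlib only): at the DNS level the trailing dot just anchors the name at the root, so both forms resolve to the same records, but HTTP clients compare host strings literally, so browsers treat the two as different origins.

```python
from urllib.parse import urlsplit

# "google.com." resolves to the same DNS records as "google.com"
# (the trailing dot marks a fully-qualified name, anchored at the
# DNS root), but HTTP clients compare host strings byte-for-byte,
# so cookies and certs don't carry over between the two.
with_dot = urlsplit("https://google.com./search")
without = urlsplit("https://google.com/search")

print(with_dot.hostname)                       # google.com.
print(with_dot.hostname == without.hostname)   # False
```

That host-string mismatch is why the dotted version of a site often loads but behaves oddly (logged out, broken assets) rather than identically.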
I appreciate that you open source your projects for us to study. But TBH, please help yourself first.
Humans are psychologically incapable of assigning respect to things that are free; across the board - not donating to open-source, maxing out every dollar of food stamps, refusing to pay a dollar for an app if it has a free tier, even companies like AWS ripping off open source without any qualms. If you got an offer for a free relationship no strings attached, would you take it seriously? If someone on a street corner has artwork for $5 or $500, it could be the same piece of art, but which one gets more attention on first glance?
If you want your work to be respected, do not make it open source. Your odds of success are slightly better in acting. Remember that 97% of public GitHub repos have zero external users.
This extends into the world of work as well. Employers that don't pay well tend to treat their employees poorly.
The entitlement is truly real at times. I think I can sometimes be part of that entitlement too, but I try to be respectful and voice my concerns when I have any.
This becomes sort of circular, because a paid VPS does at the very least indicate support and good (or at least decent) quality hardware. A server that's too cheap and too overprovisioned, with lots of CPU steal (like Contabo), is universally hated. But the same people will take deals if they're the cheapest across the board (myself included at times; I got an idle netcup VPS for a few months for $10 purely out of curiosity, and I think that was $10 well spent to get a feel for a public-facing IPv4).
So a lot of summer hosts / deadpool hosts (the scam type) take advantage of this: they rent big-spec hardware from other providers for a month or a year, split it into small chunks, and offer yearly, triennial, or even lifetime deals that can seem too good to be true.
Turns out they usually are, as some sort of scam plays out after a year or two or three.
This also makes it hard for legit new providers to prove their worth in a market that's so price-competitive.
For Steve Jobs it was not about respect or value, that's the lie. It was about greed.
We live in the richest country on planet earth and we eliminated child hunger here during COVID only to roll it back.
It's not even 1.5% of the budget currently. Compare this to our military adventurism budget.
Every $1 invested in SNAP generates $1.80 in economic activity, right now.
Children need food to grow up and be 'productive', even if you don't see value in human life and are capital-maxxing; this is an important program for creating excess productivity. The same is true of well-funded public schools. A well-fed and educated populace is optimal by every public metric.
I doubt you are an actual member of the bourgeoisie, so I must conclude you just enjoy a starving and undereducated mass of parents and children you look down upon for their poor moral character?
Adults need food to be 'productive' as well. Adults that are not afraid that they are going to starve commit fewer crimes.
You want to 'save' some money? Eliminate means testing entirely and give every American a baseline EBT food budget per person in the household. No special virtuous food categories to make sure the poor know they are being watched. Just a monthly cash infusion spendable at all grocers.
This way, Walmart and other mega-corps won't be able to scam the government by creating positions that force their workers onto these means-tested programs and lock them there.
It's weird to be all evo-psych about this either way, IMO; free-as-in-gratis has only been situationally possible at all for a very short time in human history. All armchair philosophy needs to take that into account! As soon as you recognize that, we're forced to question such pat appeals to nature, and drawn necessarily to consider how systems make humans one way or another.
Put another way, this position is incredibly fatalistic, as well as kinda sad and lonely to my ears.
As for value extraction, have a look at this article and weep: https://www.heise.de/en/news/Harvard-study-Open-source-has-a...
OTOH this also shows the huge potential FOSS has, if it manages to only slightly shift that balance in their favor.
Sorry, I'll put it in hand-crafted ChatGPTese:
## The Slop Problem
Every post sounds the same. No intelligence. No individuality. Just pure, clean LLM slop. Let's dive in.
- Every post has LLM tells. This is key.
- Posts get upvoted anyway. Nobody seems to notice or indeed care.
- People acclimate to the slop. This isn't just a coincidence. This is a real shift in standards. When people read enough of this, they begin to think it sounds normal.
## The Replying Dilemma
Should you engage with the content, when there is a real person involved? On the one hand, they put their name on it, and probably the details are drawn from their prompt, so it can be said to fairly represent what they wanted to say. So maybe ragging on their ChatGPT prose is being mean. On the other hand, if nobody ever mentions this, the acclimatization will only get worse as the rising tide of slop overwhelms any other style of writing.
## The "Snobbery is good actually" Option
Relentlessly bully people for their half-baked LLM copy. Make it your whole personality. Go insane.
## The "Giving Up" Solution
Learn to stop worrying and love the LLM.
It's slop all the way down.
Instead it seems like there's a solid core of people who have always wanted to outsource their brains entirely to machines, and have finally got their wish.
I'm old enough to remember when we joked about normies who were dumb enough to let computers think for them.
Curation in general is probably a skill that will become more and more in demand as the Internet fills up with AI slop.
Another point: DDG's AI feature actually references nanoclaw.net as a source.
Damn, I booted up Orion (Kagi) and even Kagi shows nanoclaw.net as the third result, after the GitHub page under qwibitai and another GitHub page under your (previous?) GitHub username, i.e. gavrielc, which when clicked also leads to the same repo.
Kagi does have an "Interesting Finds" section that references the real website, but it still shows the nanoclaw.net page earlier, and the nanoclaw.dev entry is so easy to miss that the first time through I didn't even notice it.
I expected better from DDG/Kagi, to be honest. I also tried Brave and it had the same issue; Brave even runs its own independent index, and even that struggles with this.
Let's hope this gets patched quickly, though. It's also a good reminder to prefer opening GitHub links over websites; I must admit that even as a tech-savvy person I could've fallen for the nanoclaw.net link, given it's second in basically all search engines.
I have also written a more detailed comparison of all the search providers I could find; perhaps it might be of interest to you. Only Mojeek (and yandex.ru, via nanoclaw.dev/ru) referenced the real site earlier than the .net.
I have been a happy DDG user for a long time. I trust DDG significantly more than Google, and I'm glad you folks read feedback like this!
Have a nice day DDG team!
Bing, DuckDuckGo, Qwant, Ecosia, Brave all had the github repo and nanoclaw.net (the fake homepage) in the first or second place. Marginalia had fascinating results about biology but only tangentially related Nanoclaw results, not the github repo or either the fake or real homepage.
Mojeek was the exception, sort of. It had some random news sites up top, but the github repo in 2nd place and nanoclaw.dev (the real homepage) in the 4th place. The fake nanoclaw.net did not show.
Kagi is the only one I couldn't try because apparently I used up my free credits a year back. Can anyone see how they compare?
I assume the "I" here refers to Claude, who seemingly wrote the entire project AND the linked post.
But for entities with a bit more time, you can prevent this scenario by acquiring the .com/.net variant domains before launching.
Happy to do the same for you if you want.
The quickest win in your case: map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it. "Hey, you covered NanoClaw but linked to a fake site, here's the real one." You'd be surprised how many will actually swap the link. That alone could flip things.
Beyond that there's some technical SEO stuff on nanoclaw.dev that would help - structured data, schema, signals for search engines and LLMs. Happy to walk you through it.
update: ok this is getting more traction than I expected so let me give some practical stuff.
1. Google Search Console - did you add and verify nanoclaw.dev there? If not, do it now and submit your sitemap. Basic but critical.
2. I checked the fake site and it actually doesn't have that many backlinks, so the situation is more winnable than it looks.
3. Your GitHub repo has tons of high quality backlinks which is great. Outreach to those places, tell the story. I'm sure a few will add a link to your actual site. That alone makes you way more resilient to fakers going forward. This is only happening because everything is so new. Here's a list with all the backlinks pointing to your repo:
https://docs.google.com/spreadsheets/d/1bBrYsppQuVrktL1lPfNm...
4. Open social profiles for the project - Twitter/X, LinkedIn page if you want. This helps search engines build a knowledge graph around NanoClaw. Then add Organization and sameAs schema markup to nanoclaw.dev connecting all the dots (your site, the GitHub repo, the social profiles). This is how you tell Google "these all belong to the same entity."
5. One more thing - you had a chance to link to nanoclaw.dev from this HN thread but you linked to your tweet instead. Totally get it, but a strong link from a front page HN post with all this traffic and engagement would do real work for your site's authority. If it's not crossing any rule (specific use case here so maybe check with the mods haha), drop a comment here with a link to nanoclaw.dev. I don't think anyone here would mind if it gets you a few steps closer to beating that fake site.
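For point 4, here's a minimal sketch of what the Organization + sameAs JSON-LD could look like. I'm building it in Python just to keep it checkable; the GitHub and social URLs are placeholders, swap in the real ones:

```python
import json

# Placeholder URLs below - substitute the project's actual repo
# and social profiles before publishing.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "NanoClaw",
    "url": "https://nanoclaw.dev",
    "sameAs": [
        "https://github.com/<owner>/nanoclaw",  # placeholder repo URL
        "https://x.com/<project-handle>",       # placeholder X/Twitter
    ],
}

# Embed the output on nanoclaw.dev inside a
# <script type="application/ld+json"> tag in the page <head>.
print(json.dumps(entity, indent=2))
```

The sameAs array is what ties the site, the repo, and the social profiles into one entity in the eyes of search engines.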
If I was the author, however, I'd still feel like I've been put in a predicament where I need to spend personal agency to fix something that Google has broken.
While that may just be a fact of life, my internal injustice-o-meter would be raging. Like, Google is going to take hours of my life because they, with all their billions of capital, can't figure out the canonically-true website when it's RIGHT THERE in the GitHub repository?
Ugh. I guess that's just the age we live in. But it makes me rage against the machine on the author's behalf.
No it's not, it's a sales pitch that intentionally ignores some of the things pointed out in the article. The author has invested time into proper SEO optimization, legit websites already link to it et cetera, it's all explained in the article.
From the perspective of a spammer: They need like 2 million MAU to earn below minimum wage. You're never getting those figures by doing something legit and actually useful to a tiny subset of people. You either need a vague site beyond any point of usefulness to anyone or you need a network of knockoff sites. The reason you can't compete with these shitty SEO spam version of your site is because they already have a network of "authoritative" (in Google's eyes) sites and all they have to do is to link from them to a new one to expand their shitty network.
From the perspective of SEO agencies: They can't guarantee results. They can tell you vague, easily-googleable best practices and give you an output of some SEO SaaS that's far too expensive for an individual to purchase. Ahrefs(.com) is the prime example of this, the cheapest paid version costs $129/month. Do you care about SEO that much? No, so you go to these agencies and give them money for them to give you the output of such a tool. But that SaaS also only contains vague and nebulous "things to fix" to follow "best practices" because they also cannot know what drives traffic to your competitor from the outside perspective.
My best suggestion would be to start a website from day one. Doesn't matter how good the website is at first; Google favours sites that have existed for longer. If you're creating a website after the knock-off version already exists, you might as well give up immediately, as it's going to be near impossible to recover from that.
1. DDG 2. Kagi 3. Brave 4. Ecosia 5. Startpage 6. Marginalia 7. Mojeek 8. Yandex.ru
From 1-5, all referenced .net before .dev, and DDG referenced .net before GitHub. Marginalia didn't give me the .net, the .dev, or the GH link, but rather docker.com and some other tech articles.
Mojeek and Yandex.ru DID give me .dev links before .net at the time of writing.
I literally opened these two as a joke, especially Mojeek, not expecting much; I just know the names of lots of search engines, so I tried them.
Mojeek and yandex.ru surprised me, although I think yandex.ru might have referenced the .dev because of https://nanoclaw.dev/ru/, which is what it points to.
Mojeek seems interesting now from this observation
I also wanted to try Swisscows, but it looks like they've gone 100% premium; I remember being able to search for free, but now a paywall popup appears.
I also tried Baidu (the Chinese search engine). It gave results in Chinese, and Firefox Translate stuttered and didn't work when I tried to translate. I don't know Chinese, so I pasted the results into Claude; they link to neither .net nor .dev, just Chinese sites.
With all of these observations, I think we know one provider (Mojeek) that won. A lot of the engines on these lists are actually not independent, except Mojeek, Brave, and probably yandex.ru.
So I guess the main takeaway could be that independent search engines can be interesting. They can still be hit or miss, but the more independent search engines the merrier, given that some might miss but others will hit.
My comment definitely reads like a reputation boost for Mojeek. Well, anything for more independent search engines, IMO. I looked at their About page and it seems to be a single person (Marc Smith). Fascinating stuff.
I know marginalia_nu is on hn so maybe marginalia and mojeek can share some index together. Anyways this was a fun exciting experiment to do. I hope the community tries out other search engines if I may have missed any and share insights if a particular search engine gives interesting results.
I think this had just made me curious so yeah haha
I mean, one thing I don't understand is why they would write an article with AI at all. They still had to prompt the AI; might as well give us what they prompted, or just write something under 300 words. I mean, it's literally Twitter (refuse to call it X).
Or make a 2-minute video with a screenshare, just talking to the camera about it, like they might've done with Claude.
They also have a Discord; they could have literally let a contributor write the article for free from such a video or their notes, and credited them properly. Heck, I could've written the article for free for just a credit, at the point where I got this invested haha.
I genuinely don't understand why you would prompt an article, of all things, out of AI. I hope I never get persuaded over to this dark side lol.
1) this style genuinely is preferred by lots of people on X/Twitter so you might as well lean into it
2) People who spend a lot of time with LLMs think this sort of writing is normal or even standard just through overexposure, a sort of pseudo social proof
2b) People who spend a lot of time with other people who use LLMs think this is how humans write (actual social proof)
3) People are insecure about their writing ability and find the non-judgmental non-human LLM editor soothing
4) people are lazy
5) people aren't lazy per se but they know writing has been so devalued that they aren't going to spend time on it that they don't need to
6) their first experience of writing was trying to hit word count requirements in grade school and that stuck
7) Visibly using LLMs is becoming a shibboleth for a social group on Twitter and LinkedIn. It's a marker that you are dogfooding the crappy AI tools you're developing and selling. Under this theory, being visibly LLM output is actually intentional: "look ma, no hands- all NanoClaw!"
- I hate that Google returns content farms instead of product web pages
- I hate that Google provides one page of ~10 useful links and everything after is pure garbage. I think something in Google's engine is profoundly broken
- I maintain my own search index, but it requires a lot of effort and attention. I insert links when I find them worthy. I think more people should have personal search indexes. Mine is below. I am quite happy that problems like these don't affect me much
Optimizing for ad revenue is a good start.
A lot of handwringing about hypotheticals. The page is up there because it links the official repo. Changing that will quickly tank its search rank.
The crux of the matter is that there's nothing that protects an open project besides reputation, and nowadays in the digital space it can be cheaply farmed.
Laws could help, but they only work when you undertake purposeful actions to be covered by them, like register a trademark, and it's never cheap.
Imagine you're in a local band playing shows. The band is 3 months old and you have no records out. A second band, tighter with the venues, takes your name and starts performing under your moniker. You have no money to take it to court, and good luck making a case. You can't do anything besides screaming on the web or, I don't know, kicking a few butts. You change your name.
I've tested on a few of the big search engines, and nanoclaw.dev is never in the first page.
Gemini was also unable to find the .dev, even in "Research Mode." The only way I was able to get a direct link to nanoclaw.dev was with chatgpt, which found it by scraping the GitHub (it also spat out links to a couple of other copies it found from google.)
Seems this is a wider SEO issue, one which infiltrates even the technology supposed to replace it.
Unsurprising, right? Gemini just uses the same back end as Google itself, which - according to OP - doesn't list his site on page 1, page 2, or even page 5.
Depending on the prompt, it should have gotten the link from the GitHub, but that's an indirect hint from a secondary source; it probably weights the Google index quite heavily when it does research.
The weird bit isn’t that a scraper site exists, it’s that Google can’t do the obvious graph join: query == project name, #1 result is the repo, repo declares Homepage = X, yet Google still boosts an imposter domain. That’s not “SEO”, that’s the ranking system refusing to treat maintainer-declared canonical as a strong signal. Early domain squatters get to “set the default” purely by being first, then they can flip the content later once trust is baked in.
People keep saying “tell users to bookmark the real URL” like that scales. Most people will click the second link and assume it’s official. If Google can’t solve this class of problem, their “AI answers” are going to be a bigger mess than blue links ever were.
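The "graph join" above is genuinely cheap to express. A hypothetical post-ranking pass might look like this; the function name and URLs are illustrative, not Google's actual pipeline (GitHub does expose the maintainer-declared homepage as the `homepage` field on `GET /repos/{owner}/{repo}`):

```python
from urllib.parse import urlsplit

def boost_declared_homepage(results: list[str], declared: str) -> list[str]:
    # If the top-ranked repo declares a homepage, float results on that
    # exact host above lookalike domains. The sort key is False (0) for
    # the canonical host and True (1) otherwise; Python's sort is stable,
    # so the rest of the ordering is preserved.
    canonical_host = urlsplit(declared).hostname
    return sorted(results, key=lambda u: urlsplit(u).hostname != canonical_host)

ranked = boost_declared_homepage(
    ["https://nanoclaw.net/", "https://nanoclaw.dev/"],
    "https://nanoclaw.dev",
)
print(ranked[0])  # https://nanoclaw.dev/
```

The hard part for a real engine obviously isn't this join; it's deciding when a maintainer-declared homepage is trustworthy enough to outweigh link-graph signals. But treating it as zero-weight, as appears to happen here, is the other extreme.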
Github only has authority because people put their shit there; if people want to point that back at the "right" website, Github should be helping facilitate that, instead of trying to help Google make their dogshit search index any better.
I mean, seriously, doesn't Bing own Github anyway?
Unfortunately, the fake website [.net] is also #3 on Kagi, and #1 on Duckduckgo. On Kagi, the Github is #1 and nanoclaw.dev is #4, but only if you count "Interesting Finds". On Duckduckgo, the Github is #2 and nanoclaw.dev is nowhere to be found.
[1] https://zeroclaw.net/ [2] https://github.com/openagen/zeroclaw
By the sound of it, everything except reporting it? Winning SEO just means appear before them in search results, but the fake page shouldn't just lose the race, it should be taken down.
ICANN specifies how to deal with this kind of issue: https://www.icann.org/en/system/files/files/submitting-dns-a...
senko•1h ago
Sorry, but this is an SEO problem. The fake site has probably been linked to by a number of high-authority outlets. What you should do is contact them and ask them to fix the links (to point to your site), which they should be happy to do.
thepasch•1h ago
Google ranking a fake website directly underneath the real project's repository - a repository that contains a real link to the real website - isn't an SEO problem, lol.
jermaustin1•1h ago
It was 100% a game of whack-a-mole. And while we were a reputation raiser, we were always fighting reputation tarnishers. Car dealerships already have a bad reputation to begin with, but they hate each other more than their customers hate them. They were our bread and butter. Same with tradespeople (plumbing, electrical, HVAC, handy(wo)men).