
The Only Moat Left Is Knowing Things

https://growtika.com/blog/authenticity-edge
28•Growtika•2h ago

Comments

jdthedisciple•1h ago
Ironically this reads like AI slop.
zvqcMMV6Zcr•1h ago
No, it reads like a LinkedIn post. That said, do we now have to check that the text we wrote doesn't look like something AI generated?
Growtika•51m ago
Genuinely curious, what felt off? The ideas are mine; AI just helped clean up the English (I added a disclaimer).
djeastm•34m ago
For me it's a general feel of the style, but something about this stands out:

>We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.

The short, staccato sentences seem to be overused by AI. Real people tend to ramble a bit more often.

xnorswap•28m ago
The fact that most of the subheadings start with "The" or "What Actually" is a bit of a giveaway in my view.

Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.

duskdozer•25m ago
The writing style just has several AI-isms; at this point, I don't want to point them out, because people are trying to conceal their usage. It's maybe not as blatant as some examples, but it's off-putting by the first couple of paragraphs. These days, I lose all interest in reading when I notice it.

I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example: if you were to use a false friend [1], an LLM may not handle it well and might conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.

[1] https://en.wikipedia.org/wiki/False_friend

_tk_•1h ago
Big LinkedIn post on a concept with little proof.
Growtika•1h ago
Fair point. This is more mindset than case study; the proof is still being built across client work. Though I'd say the same was true for SEO in the early days: people speculating on what made Google rank certain sites higher, what made pages index faster, etc. The frameworks came before the proven playbooks.
mcny•1h ago
> If I subconsciously detect that you spent 12 seconds creating this, why should I invest five minutes reading it?

The problem is that it isn't easy to detect, and I'm sure the people who work on generated stuff will work hard to make detection even harder.

I have difficulty detecting even fake videos. How can I possibly detect generated text in plain text accurately? I mean, I will make plenty of false-positive mistakes, accusing people of using generated text when they wrote it themselves. This will cause unnecessary friction that I don't know how to prevent.

fhd2•51m ago
First thought: In my experience, this is a muscle we build over time. Humans are pretty great at pattern detection, but we need some time to get there with new input. Remember 3D graphics in movies ~15 years ago? Looked mind blowingly realistic. Watching old movies now, I find they look painfully fake. YMMV of course.

Second thought: Does it _really_ matter? You find it interesting, you continue reading. You don't like it, you stop reading. That's how I do it. If I read something from a human, I expect it to be their thoughts. I don't know if I should expect it to be their hand typing. Ghost writers were a thing long before LLMs. That said, it wouldn't even _occur_ to me to generate anything I want to say. I don't even spell check. But that's me. I can understand that others do it differently.

jongjong•1h ago
I think the most valuable intellectual skill remaining is contrarian thinking that happens to be correct.

LLMs are naive and have a very mainstream view on things; this often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.

I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.

bschne•48m ago
> Was this physically difficult to write? If it flowed out effortlessly in one go, it's usually fluff.

Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.

When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a neverending slog. YMMV.

jmkd•32m ago
The central idea, that we all have the same tools, which now represent an infrastructure baseline, and that we therefore need to look harder to establish our moats (not just in knowing things, though that's one), is sound and well put. Thanks.

Europe's next-generation weather satellite sends back first images

https://www.esa.int/Applications/Observing_the_Earth/Meteorological_missions/meteosat_third_gener...
200•saubeidl•3h ago•33 comments

Render Mermaid diagrams as SVGs or ASCII art

https://github.com/lukilabs/beautiful-mermaid
248•mellosouls•8h ago•39 comments

We can't send mail farther than 500 miles (2002)

https://web.mit.edu/jemorris/humor/500-miles
376•giancarlostoro•6h ago•43 comments

Apple to soon take up to 30% cut from all Patreon creators in iOS app

https://www.macrumors.com/2026/01/28/patreon-apple-tax/
401•pier25•13h ago•328 comments

Decompiling Xbox games using PDB debug info

https://i686.me/blog/csplit/
38•orange_redditor•2d ago•1 comment

Maine’s ‘Lobster Lady’ who fished for nearly a century dies aged 105

https://www.theguardian.com/us-news/2026/jan/28/maine-lobster-lady-dies-aged-105
141•NaOH•8h ago•17 comments

The Chemistry of Tea [pdf]

https://www.researchgate.net/profile/Matthew-Harbowy/publication/216792045_Tea_Chemistry/links/09...
26•aabiji•5d ago•1 comment

Mecha Comet – Open Modular Linux Handheld Computer

https://mecha.so/comet
141•Realman78•3d ago•43 comments

Xmake: A cross-platform build utility based on Lua

https://xmake.io/
49•phmx•3d ago•18 comments

Airfoil (2024)

https://ciechanow.ski/airfoil/
464•brk•19h ago•51 comments

Trinity large: An open 400B sparse MoE model

https://www.arcee.ai/blog/trinity-large
189•linolevan•1d ago•58 comments

Tesla ending Models S and X production

https://www.cnbc.com/2026/01/28/tesla-ending-model-s-x-production.html
314•keyboardJones•11h ago•560 comments

Show HN: A MitM proxy to see what your LLM tools are sending

https://github.com/jmuncor/sherlock
173•jmuncor•15h ago•86 comments

Mousefood – Build embedded terminal UIs for microcontrollers

https://github.com/ratatui/mousefood
208•orhunp_•17h ago•45 comments

Android's desktop interface leaks

https://9to5google.com/2026/01/27/android-desktop-leak/
242•thunderbong•1d ago•319 comments

Did a celebrated researcher obscure a baby's poisoning?

https://www.newyorker.com/magazine/2026/02/02/did-a-celebrated-researcher-obscure-a-fatal-poisoning
154•littlexsparkee•1d ago•53 comments

An Illustrated Guide to Hippo Castration (2014)

https://www.science.org/content/article/scienceshot-illustrated-guide-hippo-castration
66•joebig•4d ago•24 comments

How London became the rest of the world’s startup capital

https://www.economist.com/britain/2026/01/26/how-london-became-the-rest-of-the-worlds-startup-cap...
116•ellieh•1d ago•126 comments

Questom (YC F25) is hiring an engineer

https://www.ycombinator.com/companies/questom/jobs/UBebsyO-founding-engineer
1•ritanshu•6h ago

Satellites encased in wood are in the works

https://www.economist.com/science-and-technology/2026/01/21/satellites-encased-in-wood-are-in-the...
59•andsoitis•3d ago•35 comments

Why do RSS readers look like email clients?

https://www.terrygodier.com/phantom-obligation
47•zdw•4h ago•21 comments

In a genre where spoilers are devastating, how do we talk about puzzle games?

https://thinkygames.com/features/in-a-genre-where-information-is-sacred-and-spoilers-are-devastat...
68•tobr•5d ago•57 comments

LM Studio 0.4

https://lmstudio.ai/blog/0.4.0
152•jiqiren•16h ago•80 comments

Oban, the job processing framework from Elixir, has come to Python

https://www.dimamik.com/posts/oban_py/
231•dimamik•17h ago•90 comments

The Only Moat Left Is Knowing Things

https://growtika.com/blog/authenticity-edge
28•Growtika•2h ago•13 comments

OpenAI's Unit Economics

https://www.exponentialview.co/p/inside-openais-unit-economics-epoch-exponentialview
5•swolpers•2h ago•0 comments

Computer History Museum Launches Digital Portal to Its Collection

https://computerhistory.org/press-releases/computer-history-museum-launches-digital-portal-to-its...
156•ChrisArchitect•16h ago•26 comments

Bf-Tree: modern read-write-optimized concurrent larger-than-memory range index

https://github.com/microsoft/bf-tree
86•SchwKatze•12h ago•16 comments

Somebody used spoofed ADSB signals to raster the meme of JD Vance

https://alecmuffett.com/article/143548
493•wubin•12h ago•125 comments

Show HN: Externalized Properties, a modern Java configuration library

https://github.com/joel-jeremy/externalized-properties
4•jeyjeyemem•2d ago•1 comments