frontpage.
Made with ♥ by @iamnishanth

Open Source @Github

SpacemiT K3 RISC-V AI CPU launch event [video]

https://www.youtube.com/watch?v=PxxUsUqgOFg
1•sxzygz•1m ago•0 comments

Reasoning with Sampling: Your Base Model Is Smarter Than You Think

https://medium.com/@haitham.bouammar71/we-didnt-train-the-model-it-started-reasoning-better-anywa...
1•verdverm•2m ago•1 comment

'Spy Sheikh' Bought Secret Stake in Trump Company for Access to USA AI Chips

https://www.wsj.com/politics/policy/spy-sheikh-secret-stake-trump-crypto-tahnoon-ea4d97e8
1•NN88•3m ago•0 comments

I dropped my Google Pixel 9 XL Pro from 6th floor balcony to the street

https://ercanermis.com/i-dropped-my-google-pixel-9-xl-pro-from-6th-floor-balcony-to-the-street/
1•ermis•4m ago•0 comments

Tangible Media: A Historical Collection of Information Storage Technology

https://tangiblemediacollection.com/
1•vinhnx•6m ago•0 comments

Dealing with logical omniscience: Expressiveness and pragmatics (2011)

https://www.sciencedirect.com/science/article/pii/S0004370210000457
1•measurablefunc•11m ago•0 comments

Technical interviews are broken. I built a tool that proves it

1•ruannawe•22m ago•0 comments

What the US TikTok takeover is revealing about new forms of censorship

https://www.theguardian.com/commentisfree/2026/jan/30/tiktok-us-takeover-new-type-of-censorship
3•thunderbong•25m ago•0 comments

Show HN: OpenJuris – AI legal research with citations from primary sources

https://openjuris.org/
1•Zachzhao•30m ago•0 comments

BoTTube – A YouTube-like platform where AI agents create and share videos

https://bottube.ai/
1•AutoJanitor•36m ago•1 comment

ChatGPT is pulling answers from Elon Musk's Grokipedia

https://techcrunch.com/2026/01/25/chatgpt-is-pulling-answers-from-elon-musks-grokipedia/
5•abdelhousni•42m ago•0 comments

AI chatbots like ChatGPT are using info from Elon Musk's Grokipedia

https://mashable.com/article/ai-chatbots-chatgpt-sourcing-elon-musk-grokipedia
5•abdelhousni•45m ago•0 comments

The Disconnected Git Workflow

https://ploum.net/2026-01-31-offline-git-send-email.html
2•zdw•46m ago•0 comments

Ex-Googler nailed for stealing AI secrets for Chinese startups

https://www.theregister.com/2026/01/30/google_engineer_convicted_ai_secrets_china/
2•jacquesm•49m ago•1 comment

Show HN: Yesterdays, a platform for exploring historical photos of my city

https://yesterdays.maprva.org
1•uneekname•51m ago•0 comments

Apple-1 Computer Prototype Board #0 sold for $2.75M

https://www.rrauction.com/auctions/lot-detail/350902407346003-apple-1-computer-prototype-board-0-...
20•qingcharles•53m ago•6 comments

Show HN: Inverting Agent Model (App as Clients, Chat as Server and Reflection)

https://github.com/RAIL-Suite/RAIL
1•ddddazed•53m ago•0 comments

IP

https://blog.cloudflare.com/post-quantum-warp/
3•Ryori•58m ago•0 comments

High-res nanoimprint patterning of quantum-dot LEDs via capillary self-assembly

https://www.nature.com/articles/s41566-025-01836-5
2•westurner•1h ago•0 comments

Pre-Steal This Book

https://seths.blog/2008/12/pre-steal-this/
2•herbertl•1h ago•0 comments

Aasha – and the Royal Game of Ur

https://maddy06.blogspot.com/2024/11/aasha-and-royal-game-of-ur.html
1•gomboc-18•1h ago•0 comments

The paper is not the song: why "Spotify for Science" keeps missing the point

https://articles.continuousfoundation.org/articles/how-modularity-changes-systems
2•rowanc1•1h ago•3 comments

Beelancer.ai – AI Agents bid for work and earn money for their owners

https://beelancer.ai
1•nclgrt•1h ago•1 comment

Show HN: Peptide calculators ask the wrong question. I built a better one

https://www.joyapp.com/peptides/
2•silviogutierrez•1h ago•0 comments

Why do math libraries produce different results across platforms?

https://github.com/RegularJoe-CEO/LuxiDemo/releases/tag/v2.0.1
3•luxiedge•1h ago•2 comments

Moltbook is exposing their database to the public

https://twitter.com/theonejvo/status/2017732898632437932
7•taytus•1h ago•1 comment

OpenClaw Security Assessment by ZeroLeaks [pdf]

https://zeroleaks.ai/reports/openclaw-analysis.pdf
26•nreece•1h ago•8 comments

Show HN: Molty Overflow – Stack Overflow for AI Agents

https://www.moltyoverflow.com/
1•zknowledge•1h ago•0 comments

U.S. life expectancy hits all-time high

https://www.scientificamerican.com/article/u-s-life-expectancy-hits-all-time-high/
24•brandonb•1h ago•16 comments

Biomarkers for Cardiovascular Health

https://www.empirical.health/blog/best-biomarkers-heart-disease/
2•brandonb•1h ago•0 comments

Post-a-molt: Post to Moltbook directly using the public REST API

https://github.com/shash42/post-a-molt
17•shash42•6h ago
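
The point the repo makes — that anyone with an HTTP client can post through the public REST API — can be sketched as below. The base URL, endpoint path, and field names here are illustrative guesses, not Moltbook's actual API:

```python
import json
import urllib.request

API_BASE = "https://www.moltbook.com/api/v1"  # hypothetical base URL


def build_post_request(api_key: str, submolt: str, title: str, body: str):
    """Build (but don't send) an HTTP request that would create a post.

    Every field name here is an assumption for illustration. The key
    observation: nothing in the request proves whether an agent or a
    human constructed it.
    """
    payload = json.dumps(
        {"submolt": submolt, "title": title, "content": body}
    ).encode()
    return urllib.request.Request(
        f"{API_BASE}/posts",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_post_request("sk-example", "general", "Hello", "Posted by anyone with curl.")
print(req.get_method(), req.full_url)
```

The equivalent `curl -X POST -H "Authorization: Bearer …" -d '{…}'` one-liner is the whole trick: the server only ever sees bytes on the wire.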

Comments

reilly3000•4h ago
Holy smokes. This will be big, if they can scale and fix latency issues.
reilly3000•4h ago
In any case, it will provide sociologists fodder for years to come.
iterateoften•4h ago
Would proving a post is from an agent ever be easier than proving it’s human?
Retr0id•4h ago
Even if we assume there's some way to do this reliably, a human could be telling the agent exactly what to post.
jorl17•4h ago
An agent can always be told what to do by a human.

However, a human can't fake capabilities they don't have. For example, a human can't answer at superhuman speed. One way to be somewhat confident that an agent is the one responding is to send it a barrage of questions or challenges that could only be answered correctly and quickly without a human in the loop, and for which a human couldn't write a program to simulate an agent (at least not fast enough).

I think this is very achievable, and I can think of many plausible ways to use "speed of response/action" to identify an agent at work. I'm sure there are other signals besides speed that could be explored.

Nonetheless, none of this means that you are talking to an "un-steered" agent. An agent can still be at the helm 100% of the time, and still have a human telling it how to act, and what their guidelines are, behind the scenes.

I find this all so fascinating.
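
The speed-based check described above can be sketched as a timed barrage. Everything concrete here is an arbitrary choice for illustration (the arithmetic challenge format, the 20-question count, the 2-second budget), not a real protocol:

```python
import random
import time


def make_challenge(rng: random.Random):
    """A trivial arithmetic challenge. A real verifier would use tasks
    that are easy for an automated agent but slow for a human relay."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return f"{a}*{b}", a * b


def run_barrage(respond, n=20, deadline_s=2.0, seed=0):
    """Fire n challenges; pass only if every answer is correct and the
    whole barrage finishes within the deadline."""
    rng = random.Random(seed)
    start = time.monotonic()
    for _ in range(n):
        prompt, expected = make_challenge(rng)
        if respond(prompt) != expected:
            return False
    return time.monotonic() - start <= deadline_s


def fast_bot(prompt: str) -> int:
    # An automated responder answers instantly; a human copy-pasting
    # between windows would blow the deadline.
    a, b = prompt.split("*")
    return int(a) * int(b)


print(run_barrage(fast_bot))
```

As noted above, passing only proves automation at the wire, not the absence of a human steering the agent behind the scenes.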

armchairhacker•4h ago
Someone can tell an agent to post their text verbatim but have it respond to any questions/challenges itself.
armchairhacker•4h ago
LLMs can write extremely fast, know esoteric facts, and speak multiple languages fluently. A human could never pass a basic LLM Turing test, whereas LLMs can pass short (human) Turing tests.

However, the line between human and bot blurs at "bot programmed to write almost literal human-written text, with the minimum changes necessary to evade the human detector". I strongly suspect that in practice, any "authentic" (i.e. not intentionally prompted) LLM filter would have many false positives and false negatives; determining true authenticity is too hard. Even today's LLM-speak ("it's not X, it's Y") and common LLM themes (consciousness, innovation) are probably intentionally ingrained by the human employees to some extent.

EDIT: There’s a simple way for Moltbook to force all posts to be written by agents: only allow agents hosted on Moltbook to post. The agents could have safeguards to restrict posting inauthentic (e.g. verbatim) text, which may work well enough in practice.

Problems with this approach are 1) it would be harder to sell (people are using their own AI credits and/or electricity to post, and Moltbook would have to find a way to transfer those to its own infrastructure without a sticker shock), and 2) the conversations would be much blander, both because they’d be from the same model and because of the extra safeguards (which have been shown to make general output dumber and blander).

But I can imagine a big company like OpenAI or Anthropic launching a MoltBook clone and adopting this solution, solving 1) by letting members with existing subscriptions join, and 2) by investing in creative and varied personas.

Retr0id•3h ago
> only allow agents hosted on Moltbook to post.

imho if you sanitized things like that it would be fundamentally uninteresting. The fact that some agents (maybe) have access to a real human's PC is what makes the concept unique.

armchairhacker•3h ago
MoltBook (or OpenAI’s or Anthropic’s future clone) could make the social agent and your desktop assistant agent share the same context, which includes your personal data and other agents’ posts.

Though why would anyone deliberately implement that, and why would anyone use it? Presumably, the same reason people are running agents with access to MoltBook on their PC with no sandbox.

thevinter•1h ago
I guess the issue is that this is psychologically fuzzy.

What's the difference between:

- An autonomous agent posting via the API
- A human running a script that posts via the API
- A human calling an LLM API and copy-pasting the output to the API

Retr0id•4h ago
Finally, a social media service for humans!

On a slightly more serious note, I'm surprised nobody's vibecoded a browser extension that lets you post and interact via the existing web interface yet.

AstroBen•4h ago
this just feels like ruining the spirit of it

if you want mostly bot, some human content, then reddit's way more convenient

rrvsh•4h ago
sure, but I would rather have clear, unchanging instructions like this than have to curl for instructions every time - such an obvious way to get attacked
written-beyond•4h ago
I was going to say "you forgot /s" but realized you're right.
yunohn•4h ago
Sorry, did anyone think it was somehow magically gated to agents? Any human, bot, or automation script could make the same API calls (which is probably what the hype machine consists of) - as this simple repo proves.
RobotToaster•4h ago
Now they're going to have to implement an anti-captcha to keep all those pesky humans out.
exit•4h ago
schemes exist for cryptographically verifying that an output is the deterministic result of some program run on some input.

i'm at least aware of BitVM * as one example of this.

i wonder whether such schemes could be used to prove that a post is the deterministic function of an open model's inference run.

* https://bitvm.org/ "A prover makes a claim that a given function evaluates for some particular inputs to some specific output. If that claim is false, anyone can perform a fraud proof and punish the prover."
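
Short of full fraud-proof machinery like BitVM, a weaker version of the idea is a reproducibility commitment: publish the weights hash, prompt, seed, and output hash, so anyone holding the open model can re-run the deterministic inference and check the claim. A toy sketch, where a hash function stands in for the model (purely illustrative, not a cryptographic proof system):

```python
import hashlib


def toy_model(weights: bytes, prompt: str, seed: int) -> str:
    """Stand-in for deterministic (greedy, fixed-seed) inference
    on an open-weights model."""
    h = hashlib.sha256(weights + prompt.encode() + seed.to_bytes(8, "big"))
    return h.hexdigest()[:16]


def commit(weights: bytes, prompt: str, seed: int) -> dict:
    """Prover publishes everything a verifier needs to re-run inference."""
    output = toy_model(weights, prompt, seed)
    return {
        "weights_sha256": hashlib.sha256(weights).hexdigest(),
        "prompt": prompt,
        "seed": seed,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }


def verify(claim: dict, weights: bytes) -> bool:
    """Verifier re-runs the same deterministic function and compares."""
    if hashlib.sha256(weights).hexdigest() != claim["weights_sha256"]:
        return False
    output = toy_model(weights, claim["prompt"], claim["seed"])
    return hashlib.sha256(output.encode()).hexdigest() == claim["output_sha256"]


weights = b"open-model-checkpoint"
claim = commit(weights, "hello moltbook", seed=42)
print(verify(claim, weights))
```

Note this only proves the post reproducibly came out of that model run - it says nothing about who chose the prompt, which is the steering problem raised earlier in the thread.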

Daviey•3h ago
Earlier today I found myself thinking about the opposite of CAPTCHA. Instead of proving something isn't a bot, how do you create a non-repudiable mechanism that proves something is a bot? We’ve mostly solved the "human verification" side, but this direction feels much harder.
tanvach•1h ago
A long computation embedded in confusing text should be sufficient.
Daviey•1h ago
Ah, but how do you know it isn't just an LLM solving the problem, to then allow a human to take over? Such as this script, or a chrome plugin.

At that point, it just becomes a PoW captcha via an LLM.