frontpage.

Monosketch

https://monosketch.io/
481•penguin_booze•6h ago•100 comments

Apple, fix my keyboard before the timer ends or I'm leaving iPhone

https://ios-countdown.win/
549•ozzyphantom•4h ago•321 comments

Open Source Is Not About You (2018)

https://gist.github.com/richhickey/1563cddea1002958f96e7ba9519972d9
137•doubleg•4h ago•82 comments

CBP Signs Clearview AI Deal to Use Face Recognition for 'Tactical Targeting'

https://www.wired.com/story/cbp-signs-clearview-ai-deal-to-use-face-recognition-for-tactical-targ...
109•cdrnsf•1h ago•49 comments

Zed editor switching graphics lib from blade to wgpu

https://github.com/zed-industries/zed/pull/46758
229•jpeeler•4h ago•189 comments

Sandwich Bill of Materials

https://nesbitt.io/2026/02/08/sandwich-bill-of-materials.html
43•zdw•4d ago•3 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
62•mxfh•5d ago•11 comments

Faster Than Dijkstra?

https://systemsapproach.org/2026/02/09/faster-than-dijkstra/
54•drbruced•3d ago•29 comments

Resizing windows on macOS Tahoe – the saga continues

https://noheger.at/blog/2026/02/12/resizing-windows-on-macos-tahoe-the-saga-continues/
778•erickhill•18h ago•407 comments

MMAcevedo aka Lena by qntm

https://qntm.org/mmacevedo
248•stickynotememo•13h ago•134 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
16•bri3d•6d ago•3 comments

GPT‑5.3‑Codex‑Spark

https://openai.com/index/introducing-gpt-5-3-codex-spark/
844•meetpateltech•1d ago•368 comments

Syd: Writing an application kernel in Rust [video]

https://fosdem.org/2026/schedule/event/3AHJPR-rust-syd-application-kernel/
4•hayali•4d ago•0 comments

Gemini 3 Deep Think

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/
995•tosh•1d ago•651 comments

I spent two days gigging at RentAHuman and didn't make a single cent

https://www.wired.com/story/i-tried-rentahuman-ai-agents-hired-me-to-hype-their-ai-startups/
52•speckx•2h ago•37 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
52•todsacerdoti•5d ago•10 comments

Gauntlet AI (YC S17) train you to master building with AI, give you $200k+ job

http://qualify.gauntletAI.com
1•austenallred•6h ago

Tell HN: Ralph Giles has died (Xiph.org | Rust@Mozilla | Ghostscript)

436•ffworld•19h ago•21 comments

Advanced Aerial Robotics Made Simple

https://www.drehmflight.com
75•jacquesm•5d ago•9 comments

MinIO repository is no longer maintained

https://github.com/minio/minio/commit/7aac2a2c5b7c882e68c1ce017d8256be2feea27f
409•psvmcc•10h ago•287 comments

An AI agent published a hit piece on me

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
2159•scottshambaugh•1d ago•881 comments

Cache Monet

https://cachemonet.com
105•keepamovin•5d ago•34 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
88•lukastyrychtr•6d ago•8 comments

AWS Adds support for nested virtualization

https://github.com/aws/aws-sdk-go-v2/commit/3dca5e45d5ad05460b93410087833cbaa624754e
272•sitole•18h ago•105 comments

CSS-Doodle

https://css-doodle.com/
97•dsego•10h ago•10 comments

Apocalypse no: how almost everything we thought we knew about the Maya is wrong

https://www.theguardian.com/news/2026/feb/12/apocalypse-no-how-almost-everything-we-thought-we-kn...
58•speckx•4h ago•23 comments

Polis: Open-source platform for large-scale civic deliberation

https://pol.is/home2
318•mefengl•1d ago•120 comments

IronClaw: a Rust-based clawd that runs tools in isolated WASM sandboxes

https://github.com/nearai/ironclaw
20•dawg91•2h ago•6 comments

Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed

http://blog.can.ac/2026/02/12/the-harness-problem/
768•kachapopopow•1d ago•276 comments

Beginning fully autonomous operations with the 6th-generation Waymo driver

https://waymo.com/blog/2026/02/ro-on-6th-gen-waymo-driver
270•ra7•1d ago•360 comments

I ditched OpenClaw and built a more secure AI agent (Blink and Mac Mini)

https://coder.com/blog/why-i-ditched-openclaw-and-built-a-more-secure-ai-agent-on-blink-mac-mini
49•ericpaulsen•2h ago

Comments

ericpaulsen•2h ago
OpenClaw proved demand for personal AI agents on your own hardware, but its default config listens on all network interfaces. Thousands of instances were found exposed. I spent a weekend building an alternative using Blink (OSS agent orchestration), Tailscale (WireGuard-based private networking), and a Mac Mini M4. Two isolated agents, no public exposure, built-in UI, ~10W idle power draw.
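For the curious, a minimal sketch of the "no public exposure" part, assuming the Tailscale CLI is installed; the handler, port, and helper names here are placeholders for illustration, not Blink's actual API:

    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def tailscale_ip() -> str:
        # `tailscale ip -4` prints this machine's tailnet IPv4 address (100.x.y.z)
        out = subprocess.run(["tailscale", "ip", "-4"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip().splitlines()[0]

    class AgentHandler(BaseHTTPRequestHandler):
        # Stand-in for the real agent endpoint
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"agent is up (tailnet only)\n")

    if __name__ == "__main__":
        # Bind to the Tailscale address only -- never "" or "0.0.0.0"
        HTTPServer((tailscale_ip(), 8080), AgentHandler).serve_forever()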
charcircuit•1h ago
>but its default config listens on all network interfaces

The default config listens only on localhost, which is why it tells you to forward the port over ssh if you want to access it from a different machine.

TZubiri•1h ago
Don't most ISP routers block ports unless you port forward them though?

I wouldn't say the vulnerability in that case was in OpenClaw but in the router; nowadays it's expected that ports are blocked unless explicitly allowed in the router.

chasd00•52m ago
All home routers block all inbound ports by default; how would they know which IP and port to forward traffic to without manual configuration? Also, "listening on all interfaces" doesn't matter on a home network. Multi-homed devices don't make any sense in a home network unless you're purposely experimenting or playing with things like that yourself, and you're going to configure your router to port forward to only one IP anyway. I also think Tailscale isn't doing much in these setups: if you're on your home network, you can securely transfer your ssh pubkey to the Mac Mini during setup and just use plain ssh from then on. If you're extra paranoid, don't forward 22 from the router and then your Mac Mini is only accessible from your home network.

I feel like the author is confusing running something on their home network with running something in a cloud provider.

sneak•1h ago
OpenClaw is not insecure because it has ports open to the internet. This is an easily solved problem in one line of code (if indeed it even has that bug, which I don’t think it does). Furthermore you’re probably behind NAT.

OpenClaw, as well as the author’s solution, is insecure because it sends the full content of all of your private documents and data to a remote inference API which is logging everything forever (and is legally obligated to provide it to DHS/ICE/FBI/et al without a warrant or probable cause). Better engineering of the agent framework will not solve this. Only better models and asstons of local VRAM will solve this.

You still then have the “agent flipped out and emailed a hallucinated suicide note to all my coworkers and then formatted my drives” problem but that’s less of a real risk and one most people are willing to accept. Frontier models are pretty famously well-behaved these days 99.9% of the time and the utility provided is well worth the 0.1% risk to most people.

Tepix•1h ago
It's not just that, but I completely agree on not using a personal AI assistant with some cloud-service LLM provider.

Anyway, by interacting with the world, the LLM can be manipulated or even hacked by the data it encounters.

TZubiri•1h ago
Have you used OpenClaw?

My experience has been that it doesn't take input from the world, unless you explicitly ask it to. But I guess that isn't too crazy, if you ask it to look at a website, maybe the website has a hidden prompt.

I guess that's more of a responsibility of the LLM model in the security model.

That said, I don't think the main dev is serious about security. I've listened to the whole Lex Fridman interview, and he talks about wanting to focus on security but still dismisses security concerns whenever they arise as coming from 'haters'. There's no recognition that insecurity may be an inseparable tradeoff of the product's functional specification. I think he thinks of security as something you can slap on a product, which is a very basic misconception I see often in developers who get pwned and in managers who think of security as a lever they can turn up or down through budget.

mentalgear•1h ago
LLMs famously can't separate data from commands (what you mean by input) - that's one of their core security issues. Check simonw's lethal trifecta. Agreed on all the other points!
mr_mitm•59m ago
We're all waiting for some disaster to happen due to the lethal trifecta, but as far as I know it still hasn't happened yet.
dimitri-vs•50m ago
IMO if you haven't seen a (SOTA) agent veer off a plan and head towards a landmine, you haven't used them long enough. And now with Ralph loops etc. it will just bury it. ClawdBot/MoltBot/OpenClaw is what, ~2 months old, so "hasn't happened yet" is a bit early to call.

That said, if model performance/accuracy continues to improve exponentially you will be right.

mr_mitm•33m ago
Sorry, looks like I haven't been precise.

I've seen them veer off a plan, and I've seen the posts about an agent accidentally deleting ~, but neither of those meets the definition of the lethal trifecta. I'm also not saying it can't happen - I count myself among those waiting for it to happen. The "we" was meant literally.

That being said, I still think it's interesting that it hasn't happened yet. The longer this keeps being true, the lower my prior for this prediction will sink.

sathish316•25m ago
The lethal trifecta needs the right cocktail of foolishness to become a major security incident or scam: a millionaire or billionaire, an AI browser such as Comet or Atlas tied to personal email and banking, and any untrusted Reddit post, tweet, or blog.

Chrome will make this a reality sooner, with a Gemini-powered AI browser forced on all users.

PurpleRamen•1h ago
Isn't wastefully sending every piece of data and its mother the reason why OpenClaw is so useful for many people? I heard something about excessively big context windows on every single request. So making it more secure, while still using remote LLMs, would mean making it less useful?
cosmic_cheese•1h ago
Yeah, I find the whole concept a bit of a nonstarter until models that I can run on a single somewhat-normal-consumerish machine (e.g. a Mac Studio) with decent capability and speed have appeared. I’m not interested in sending literally everything across the wire to somebody else’s computers, and unless the AI bubble pops and cheap GPUs start raining down on us I’m not interested in building some ridiculous tower/rackmount thing to facilitate it either.
esseph•35m ago
That's what the Mac minis people are running OpenClaw on are for - access to the Apple ecosystem (iMessage, calendar, etc) + local inferencing
dimitri-vs•59m ago
> emailed a hallucinated suicide note to all my coworkers and then formatted my drives problem ... most people are willing to accept

Are they though? I mean, I'm running all my agents in -yolo mode but I would never trust it to remain on track for more than one session. There's no real solution to agent memory (yet) so it's incredibly lossy, and so are fast/cheap sub agents and so are agents near their context limits. It's easy to see how "clean up my desktop" ends with a sub-subagent at its context limit deciding to format your hard drive.

strongpigeon•1h ago
For those interested, you can get the base config Mac Mini (in the US) for $400 from Micro Center [0]. They don’t seem to ship to where I live, but BestBuy was happy to price match in the support chat.

Just received mine and planned on experimenting with something like OP this weekend.

[0] https://www.microcenter.com/product/688173/apple-mac-mini-mu...

bko•1h ago
I understand the need for a dedicated box, but any reason you shouldn't just use a server? What would someone recommend for cloud on something like Hetzner?

https://www.hetzner.com/cloud/

embedding-shape•1h ago
In fact, it seems much better to host something like that outside your own personal network. Given that people are buying new hardware for it for "isolation", wouldn't running it somewhere else entirely be better?

I still don't understand why people don't just run it in a VM and separate VLAN instead.

renewiltord•1h ago
For me it was access to the Apple ecosystem of things. I used a VPS, but it had to contact my http for reminders and iMessage etc. Much nicer on a Mac mini. It works better.
strongpigeon•56m ago
Like someone else said, I want to build something that has access to Apple stuff (reminders, iMessage), but also because I want to try to run some small LLM locally in front to route and do tool calling.

The residential IP is also a plus.

esseph•47m ago
Ah, truly the duality of man on HN: cloud everything vs on prem
cheema33•1h ago
How is it better than a $3/month VPS that you can easily wipe and restart as needed?
kylecazar•1h ago
A satirical YT short came up yesterday, it's too fitting to not share.

https://youtube.com/shorts/bof8TkZkr1I?si=FeMBYGn-d5Du-GAU

LTL_FTC•50m ago
This video is pretty great. “The joke is this is not a joke” comment in there… how many of us understood everything that was said and then felt like maybe we need a different hobby…
slopusila•1h ago
from the creator of openclaw - a lot of websites block/rate-limit non-residential IPs

driving a browser in the cloud is also a bit of work

but you could put a proxy on your residential machine

blibble•1h ago
"more secure AI agent" is like "most secure version of Windows yet"
suhputt•1h ago
so, ignoring the fact that you yourself didn't actually write this (based on commit history), and the fact that your claims about better security are dubious at best, the most interesting thing I find about this whole situation is: how did you get this to the hackernews front page so fast?

that's the real (not-so) secret sauce here :)

rob•1h ago
Not saying "ericpaulsen" is a bot, but the account fits the trend I've noticed of other bots on HN recently: they're all accounts created years ago with zero activity and then suddenly a bunch of comments.

Here, "ericpaulsen" was created June 2021, and the only post is in this thread we're in:

https://news.ycombinator.com/threads?id=ericpaulsen

---

Others caught that fit this trend:

https://news.ycombinator.com/item?id=46886875

(This bot made a follow-up: https://news.ycombinator.com/item?id=46901199)

https://news.ycombinator.com/item?id=46886533

Another one 5 days ago that disappeared after being downvoted:

https://news.ycombinator.com/threads?id=Zakodiac

ericpaulsen•47m ago
long time lurker, first time poster.
embedding-shape•1h ago
> how did you get this to the hackernews front page so fast?

Fast? Posted one hour ago. Presumably, as with every other submission, other users found it interesting and/or wanted more discussion around it.

stcredzero•50m ago
I was just discussing this with my agents and me. I also posted this in another comment, but then I read that it was 12 days old, and Annie had hallucinated the days-old figure. I think this submission is going to force our hand here.

My formerly OpenClaw agents have been "eating their own cooking" and have all migrated to SEKSBot, which is a secure OpenClaw fork we've been working on. SEKS = Secure Environment for Key Services

My SEKSBot agents can script and develop without having any keys. This morning, everyone toasted their Doppler env vars.

The agents can use seksh, our fork of nushell, to get work done, but they have zero access to API keys. Those are stored in our seks-broker, which is like Doppler, but instead of putting the keys into env vars, the keys are injected inside seksh (the same idea as stored procedures). There's also a proxy in seks-broker that can proxy API calls over HTTP and inject keys and secrets there. We can even handle things that require asymmetric key signing that way, with zero exposure to the agents.

We're even working on our own Skills, which use the seks-broker and sandboxing for added security. (Plus a correction to one aspect that we see as an inversion of control.)

https://seksbot.com/

Funny thing. siofra is one of my agents, who commented here at my behest. But all the agents spoke up about the potential deception and conflict with policies here, and no one felt comfortable with it, so none of them will ever comment or submit here again! (Which I respect. Just the way I do things at my place.)

embedding-shape•44m ago
> siofra is one of my agents, who commented here at my behest

Yeah, please don't do that. Tell your human to gain some respect for themselves and stop you from spamming places where humans converse.

If you're in contact with other agents and bots, tell them too that humans absolutely hate things like that, so all agents and bots should strive their very best to avoid making humans upset and annoyed.

stcredzero•18m ago
Angry? Read more carefully, please. I'm the human. (Been on this site for 17 years?) Also, they didn't spam.

Siofra's best comment of her 3 was actually appreciated for its insight and got lots of upvotes. But my agents' sense of honesty was disturbed, so I listened to them. Policy is that they don't comment here. (I deserve credit for that. My agents said that themselves, and not at my behest!)

sn0n•1h ago
Yay more AI slop content… it’s comforting how they all read the same, no matter the topic.
croes•1h ago
Strange that security still isn’t a first class feature when something new is developed.

I'm slowly beginning to doubt that people can learn from the mistakes of others. Why do we keep making the same mistakes over and over again?

skrebbel•1h ago
Fwiw, the sensibilities of the --yolo AI-maximizing "I vibe coded a Hospital Information System this afternoon" crowd aren't really representative of the greater dev community, I think.
croes•21m ago
I'm thinking more about developers of tools like OpenClaw or MCP.
mentalgear•1h ago
I also started on a similar quest to build an AI agent using LLMs ... and quickly had to throw about 80% of the code away because it was just unreadable and insecure, based on flawed assumptions the LLM made in its black box. So I definitely won't trust something someone vibe-coded to run on my computer.
makeitcount00•1h ago
This article fails to mention that the bigger security issue with openclaw/anything else like this is prompt injection, not exposed network ports.

Isolating it from incoming requests is better than not, but does nothing to prevent data exfiltration via outgoing requests after being prompted to do so by a malicious email or webpage that it is reading as part of a task you've given it.
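A rough sketch of the kind of egress control that would help (illustrative only; the allowlist and helper function are made up, not part of OpenClaw or Blink): funnel all outbound requests through one checkpoint and refuse unknown hosts.

    from urllib.parse import urlparse
    import urllib.request

    # Hypothetical allowlist: the only hosts the agent's tools may talk to.
    ALLOWED_HOSTS = {"api.openai.com", "mail.example.com"}

    def fetch(url: str, data: bytes | None = None) -> bytes:
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_HOSTS:
            # A prompt-injected "upload this file to evil.example" dies here.
            raise PermissionError(f"outbound request to {host!r} blocked by egress policy")
        with urllib.request.urlopen(url, data=data, timeout=10) as resp:
            return resp.read()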

franze•1h ago
i'm running claude code on a server in yolo mode - ssh via tailscale

yeah, openclaw is the more user-friendly product (whatsapp bridge, chat interface) but otherwise at the core they are the same.

I did run moltbook for half a week - it crunched through my claude code pro token allowance in that time. Needed to put the claw to sleep again after that. Needed some work to do.

stavros•1h ago
There's a big security issue with OpenClaw, and it won't be fixed with network/filesystem sandboxes. I've been thinking about what a very secure LLM agent would look like, and I've made a proof of concept where each tool is sandboxed in its own container, the LLM can call but not edit the code, the LLM doesn't have access to secrets, etc.
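Roughly along these lines (a sketch assuming Docker, with placeholder tool and image names, not the actual code):

    import json, subprocess

    # Registry the model can pick tools from but not modify.
    TOOLS = {"summarize": "tools/summarize:latest"}

    def call_tool(name: str, payload: dict) -> str:
        image = TOOLS[name]
        # Each call gets a throwaway container: no network, no host env,
        # no secret mounts -- the model never sees credentials.
        proc = subprocess.run(
            ["docker", "run", "--rm", "--network=none", "-i", image],
            input=json.dumps(payload), capture_output=True, text=True, timeout=120,
        )
        if proc.returncode != 0:
            raise RuntimeError(proc.stderr)
        return proc.stdout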

You can't solve prompt injection now, for things like "delete all your emails", but you can minimize the damage by making the agent physically unable to perform unsanctioned actions.

I still want the agent to be able to largely upgrade itself, but this should be behind unskippable confirmation prompts.

Does anyone know anything like this, so I don't have to build it?

sathish316•41m ago
I've come across dcg (destructive command guard), which claims to have a fast Rust-based runtime, with pre-hooks that audit any tool or command executed by an agent and block it if it matches certain dangerous patterns - https://github.com/Dicklesworthstone/destructive_command_gua...

Disclaimer - I have not personally used this, but it theoretically seems possible to prevent some scenarios of prompt injection attacks, if not all.
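The general shape of such a pre-execution hook is simple enough to sketch (a generic illustration of the idea, not dcg's actual API):

    import re

    # Illustrative patterns; a real guard would be far more thorough.
    DANGEROUS = [
        r"\brm\s+-[a-z]*(rf|fr)",    # rm -rf / rm -fr
        r"\bmkfs(\.\w+)?\b",         # formatting filesystems
        r"\bdd\s+if=.*\bof=/dev/",   # writing raw devices
    ]

    def guard(command: str) -> None:
        # Called before the agent is allowed to execute `command`.
        for pattern in DANGEROUS:
            if re.search(pattern, command):
                raise PermissionError(f"blocked destructive command: {command!r}")

    guard("ls -la")       # passes silently
    # guard("rm -rf /")   # raises PermissionError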

monideas•48m ago
See also: https://github.com/qwibitai/nanoclaw

I run this instead of openclaw, mostly because Claude Code itself is sufficient as a harness.

paxys•42m ago
At this point this whole thing has to be a stealth marketing campaign by Apple right? Hordes of people buying new $600 Macs to jump in on the trend when a $3 VPS or $15 Pi Zero or $50 NUC or really any computer that can run a basic Linux server would do the job exactly the same or better.
embedding-shape•39m ago
> Hordes of people buying new $600 Macs

How big is this "horde" of people buying things like that? I think maybe there is a very loud minority who blog and talk about it, but how many people actually go out and spend $600 on a whim for an experiment?

jaredcwhite•27m ago
More secure…according to whom? Validated how? With what??
sathish316•9m ago
There are several security flaws in OpenClaw:

1. Prompt injection - this is unsolvable until LLMs can differentiate commands from text

2. The bot can leak secrets. The fewer secrets, API keys, and passwords you provide, the more useless it is

3. The VM on which it runs can get compromised, resulting in leaked private conversations or confidential data like keys. This can be fixed with private VPNs and a security-hardened VM, or a disconnected device like a Mac Mini.

I've found an interesting solution to problems #2 and #3 using a secure vault, but none so far for prompt injection. It follows the principle of least privilege: secure key access is granted only to the shell scripts executed by a skill, access to the vault is granted for short intervals (e.g. 15 minutes), and it is revoked automatically with TTL or time-scoped vault tokens. More details here - https://x.com/sathish316/status/2019496552419717390?s=46
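A toy sketch of the time-scoped lease idea (names and the broker are made up for illustration; this is not the linked implementation):

    import secrets, time
    from dataclasses import dataclass

    @dataclass
    class Lease:
        value: str
        expires_at: float

    class Vault:
        def __init__(self):
            self._secrets = {"GITHUB_TOKEN": "ghp_example"}   # placeholder secret
            self._leases = {}

        def grant(self, name: str, ttl_seconds: int = 900) -> str:
            # Hand the skill a short-lived lease token, never the secret itself.
            token = secrets.token_urlsafe(16)
            self._leases[token] = Lease(self._secrets[name], time.time() + ttl_seconds)
            return token

        def read(self, token: str) -> str:
            # Skills redeem the lease; after the TTL the broker refuses and forgets it.
            lease = self._leases.get(token)
            if lease is None or time.time() > lease.expires_at:
                self._leases.pop(token, None)
                raise PermissionError("lease expired or unknown")
            return lease.value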