frontpage.

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
1•tjwebbnorfolk•1m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
1•jbegley•4m ago•0 comments

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•7m ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•11m ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
1•PaulHoule•13m ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
1•y1n0•14m ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
1•tolerance•14m ago•0 comments

E-Commerce vs. Social Commerce

https://moondala.one/
1•HamoodBahzar•15m ago•1 comments

Avoiding Modern C++ – Anton Mikhailov [video]

https://www.youtube.com/watch?v=ShSGHb65f3M
2•linkdd•16m ago•0 comments

Show HN: AegisMind–AI system with 12 brain regions modeled on human neuroscience

https://www.aegismind.app
2•aegismind_app•20m ago•1 comments

Zig – Package Management Workflow Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
1•Retro_Dev•22m ago•0 comments

AI-powered text correction for macOS

https://taipo.app/
1•neuling•25m ago•1 comments

AppSecMaster – Learn Application Security with hands on challenges

https://www.appsecmaster.net/en
1•aqeisi•26m ago•1 comments

Fibonacci Number Certificates

https://www.johndcook.com/blog/2026/02/05/fibonacci-certificate/
1•y1n0•28m ago•0 comments

AI Overviews are killing the web search, and there's nothing we can do about it

https://www.neowin.net/editorials/ai-overviews-are-killing-the-web-search-and-theres-nothing-we-c...
3•bundie•33m ago•1 comments

City skylines need an upgrade in the face of climate stress

https://theconversation.com/city-skylines-need-an-upgrade-in-the-face-of-climate-stress-267763
3•gnabgib•34m ago•0 comments

1979: The Model World of Robert Symes [video]

https://www.youtube.com/watch?v=HmDxmxhrGDc
1•xqcgrek2•38m ago•0 comments

Satellites Have a Lot of Room

https://www.johndcook.com/blog/2026/02/02/satellites-have-a-lot-of-room/
2•y1n0•39m ago•0 comments

1980s Farm Crisis

https://en.wikipedia.org/wiki/1980s_farm_crisis
4•calebhwin•39m ago•1 comments

Show HN: FSID - Identifier for files and directories (like ISBN for Books)

https://github.com/skorotkiewicz/fsid
1•modinfo•45m ago•0 comments

Show HN: Holy Grail: Open-Source Autonomous Development Agent

https://github.com/dakotalock/holygrailopensource
1•Moriarty2026•52m ago•1 comments

Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•59m ago•1 comments

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•59m ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
2•rolph•1h ago•1 comments

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•1h ago•3 comments

Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•1h ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
2•guerrilla•1h ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•1h ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•1h ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
4•rolph•1h ago•1 comments

Protect your consciousness from AI

https://jordangoodman.bearblog.dev/protect-your-consciousness-from-ai/
67•zekrom•3mo ago

Comments

noir_lord•3mo ago
I'm far too lazy to be able to responsibly use a machine that can give me semi-sensible answers.

I saw the danger of it as a form of learned helplessness down the line and swore off using LLMs for that reason. That, and I feel no need to delegate my thinking to a machine that can't think, and I like thinking.

Same reason snacks are upstairs in the kitchen and not in my office on the ground floor: I'm too lazy, and if they're easily available I'll eat them.

binary132•3mo ago
This resonated for me. It's really easy to hit that tiny cognitive speedbump of needing to put a bit of effort into recalling some API detail or other, and instead of reaching for the old trusty manpages, to tab into the spammy chatbot for a quick fix.
nvllsvm•3mo ago
> In the professional world, I see software developers blindly copying and pasting code suggestions from LLM providers without testing it, or understanding it.

When you see that, call them out on it. Not understanding copy+pasted code is one thing, but not testing it is a whole other level of garbage.

simonw•3mo ago
Seriously. The job of a software developer is to deliver working software. If the software doesn't work, that's a dereliction of duty.
delichon•3mo ago
> This creates a sea of noise and misinformation that people unknowingly consume at scale.

The same objections apply to the written word. A culture that succeeded in not succumbing to writing and reading may indeed have been better off in the short term, depending on the quality of the memes. But it would have been at a competitive disadvantage to cultures that were permissive with knowledge transfer.

The main advantage of AI so far has been as a distiller of the knowledge embedded in the written word. It's another leap in knowledge transfer. That's still a competitive advantage to any culture that doesn't abjure it. This particular consciousness intends to leverage the opportunity.

idiotsecant•3mo ago
It distills knowledge in the same way that breakfast cereal distills complex food into quick, easy energy. The problem is that AI gives us diabetes of the soul. There is value in struggle and in the learning that results.
Agraillo•3mo ago
I think the GP, by mentioning "knowledge transfer", meant, for example, that he benefits from the embedding space and semantic equivalence of LLMs when he wants to know more about a fact, an entity, a law, or something else. Yes, hallucinations can spoil this transfer, but I see no issue in using this tool to get to the prior art, or to what sits on the "shoulders of giants", more quickly.

Though, when we try to use it as a synthesizer of new knowledge (software, article, review), that's when the OP's thinking about protection makes sense.

thundergolfer•3mo ago
This isn't a new problem at all. If you only started noticing it as a problem with "AI", as the author apparently did, then you were blind to how our mediums and tools have always shaped us, alienated us from the world and each other, and made us dependent on mechanism. This has happened hugely already with television.

You can go back and read McLuhan (he's great), but a recent and more approachable book on this is _God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning_.

Way back in 1969 the utopian vision of technology put humans at the centre. The Whole Earth Catalog's slogan was "access to tools." Just _tools_. That same year, technology put a man on the moon.

Unfortunately, if you realize the extent and the history of the problem, you see we're so far gone, miles away from getting a grip.

satisfice•3mo ago
I always thought hacker culture was independent and skeptical. Somehow AI has turned a lot of them into drooling fanboys.

It’s embarrassing. Don’t rely on AI, guys. Have pride in yourselves.

simonw•3mo ago
I thought a big part of hacker culture involved taking interest in new technology, exploring the edges of it (and beyond those edges) and figuring out what works and what breaks - and how to break it.

I don't understand why many software engineers are so resistant to exploring AI. It's fascinating!

saulpw•3mo ago
I've explored AI, and will continue to do so. I deliberately overcame my resistance to it because I'm an old-school hacker and I do enjoy tinkering with technology. But regardless of how cool it is and whether it works or not (sometimes it does, sometimes it doesn't), it doesn't make me feel good. It's like the brain stupor I get from watching a vapid movie, or from eating too much sugar and bouncing off the walls.

Relatedly, this blog quote[0] really resonated with me:

> reaching the end state of a task as fast as possible has never been my primary motivation. I code to explore ideas and problem spaces...

Using AI to code is like mountain biking but on a motor scooter, and you're riding in a sidecar looking at a map while a golem drives the bike. It's amazing that the golem can drive the bike at all, to be sure, and yes, I'm wearing a helmet so the crashes aren't too bad, but... what are we doing here? I like riding my bike! It might be more physical work for me (though with the golem I often have to get out and push the motorbike anyway, so it's not clear), but when I'm biking, I'm connected to the ground and I can get into a flow state with it. I'm also getting exercise, getting better at biking, and learning the trails, and it's easy to hop off and explore some useless cave that's not accessible by bike, just because it looks interesting. I know the AI-proponent answer is "you still can!" but when I'm in the sidecar, my modality shifts. I'm no longer independent; I'm using a different kind of agency that's map- and destination-focused, and I'm not paying attention to the unfolding world around me.

So I understand why some people are excited about AI, and I don't think it's necessarily bad (though it does seem insidious in some pretty obvious ways that even its proponents are aware/wary of). But why are many of those people, like yourself, seemingly unwilling/unable to understand why others of us are bouncing off it?

I feel like people keep explaining this, often in direct replies to your comments like this one, some of which you specifically even respond to. So if you still don't understand, maybe you're reading but not actually listening? Or maybe long-term memory deteriorates as one merges with the AI?

[0] https://handmadeoasis.com/ai-and-software-engineering-the-co...

simonw•3mo ago
I'm fine with people deciding that AI programming isn't for them, especially if they've given it a fair shake first and didn't drop it the second it made an obvious mistake.

What frustrates me is when people 1. claim it's entirely useless and that anyone who thinks it's useful is deceiving themselves (still very common, albeit maybe less so now than it was six months ago) or 2. claim that spending time writing about and understanding it "has turned a lot of them into drooling fanboys."

Hence my snappy response to the above comment. I took it a bit personally.

saulpw•3mo ago
That's fair. I also made a snappy response, because I get frustrated on the other side, when proponents say 1) "you're using it wrong" or 2) "it'll get better" or 3) "I don't understand why people are resistant". In that last case, there are two interpretations of resistance: one in which someone doesn't overcome their initial knee-jerk response, and another in which someone develops a more warranted resistance after exploring it. The first kind is potentially conservative, or ideological, or fearful, or lazy, which I think is what you take issue with. The second kind is more balanced and reasonable. Do we need a better descriptor to differentiate the two?
simonw•3mo ago
Yeah, that makes sense.

Honestly at this point you could build a full periodic table of AI hesitancy/resistance/criticism and have it be a useful document!

I'm staunchly opposed to the whole "model welfare" thing, deeply skeptical of most of the AGI conversation and continue to think that outsourcing any critical decisions to an AI system is a catastrophically bad idea in a world where we haven't solved prompt injection (or indeed made much progress towards solving it) - so carve me out a bunch of squares for those.

Maybe there's room here for one of those political compass style quizzes.

xanderlewis•3mo ago
That's because a lot of commenters here are not hackers in any real sense; rather, they're software engineers. Perhaps this hasn't always been the case.
Bukhmanizer•3mo ago
Culturally we are going through a phase where thought is getting massively devalued. It’s all well and good to say “I’m using AI responsibly”, but it won’t matter if at the end of the day no one values your opinion over whatever ChatGPT spewed out.
tmaly•3mo ago
I am perfectly okay with offloading low value mental work to an LLM just to recoup time to spend with my family. The modern world has way too many demands that just suck up time.
isodev•3mo ago
When you meet a model that actually saves you time (instead of shifting the work onto something else), write about it.
mrloba•3mo ago
It is everywhere. Even on birthday invites for my kids there's nonsense from an LLM. At work I review PRs with code that doesn't even run. Doing research is harder than ever as more and more references are completely made up.

We're too lazy and too obsessed with getting ahead to use this technology responsibly in my opinion.

simonw•3mo ago
How do those PR authors react when you point out that the code doesn't run and block the merge? Any signs of them improving their work ethic over time based on your feedback?
mrloba•3mo ago
Well, they then use more AI to try to fix the PR, which leads to many more rounds of the same. It's like I'm coding with an AI, except through a real person who mangles the prompt. I've also had some success in talking people out of it, but it feels like I'm gonna lose eventually.
i_love_retros•3mo ago
Of course they are providing that feedback! No one gives a shit, though. Our industry, and society at large, has basically given approval for people to submit AI slop. Managers and executives consider it working smart and efficiently. So telling someone "this code doesn't run" results in more slop in an attempt to fix it. Eventually it will run and get merged, and the code base gets even shittier. There's only so much gatekeeping and quality control the few people who actually give a damn can be expected to do when swimming against the tide. Mental health is a thing. And to quote Dan Ashcroft, the idiots are winning.
simonw•3mo ago
Do managers genuinely not care if the code they are paying to have written works or not?
nopassrecover•3mo ago
Accepting a little exaggeration (“works” vs “works well”), for a segment, almost certainly.

Particularly when they know that people like the commenter above are making sure it ultimately "works" by covering for the incompetence of their colleagues.

The comment you are replying to is, in my view, a superb observation of the challenge of maintaining quality against systemic pressure to appear to be performing.

Most senior leaders in organisations cannot (or care not to) measure quality. Few (outside big tech, I assume, though I wouldn't be surprised to see it overlooked there too) are even usefully measuring benefits realisation tied back to activity (such as software releases).

What they can measure, and are systemically incentivised for, is "what does it take to get the approval of the next leader above me", and most of the time a plausible report that the software has been delivered on or ahead of schedule is what earns that approval.

That doesn't mean morally motivated managers aren't out there driving quality. But doing so is at odds with these org systems, to the detriment (or risk of detriment) of their own careers compared to peers who optimise more for what the system rewards, and at the cost of greater energy, as they effectively have to hide their pursuit of better outcomes for the organisation under a veneer of performing as the organisation expects (that is, serving two goals simultaneously, one covert and one performative).

Something like this is a good exploration of the subject: https://spakhm.substack.com/p/how-to-get-promoted

xanderlewis•3mo ago
> too obsessed with getting ahead

or perhaps with others (potentially) getting ahead of us.

lbrito•3mo ago
Or management outright mandating the use of LLMs.
whiplash451•3mo ago
Savvy researchers/engineers have an opportunity to arbitrage here: working without LLMs on something hard leads to better outcomes than what your "AI-enabled" peers achieve (after all, Karpathy could not resort to any AI to build nano-chat). It's a sad state of affairs, but the opportunity really is there.
bossyTeacher•3mo ago
> At work I review PRs with code that doesn't even run

Why is that being allowed?

Chance-Device•3mo ago
I was expecting something about how to protect your consciousness from (or during) AI use, but I got a short 200-word note rehashing common sentiments about AI. I guess it's not wrong; it's just not very interesting.
AstroBen•3mo ago
To me the answer was fairly obvious: default to using your own thinking first.
zwnow•3mo ago
It is very interesting, because it tackles things people love to forget when using AI. A little over a decade ago it was a scandal how big tech companies were using people's data; now people knowingly hand it to them via all kinds of bullshit apps. So you have to repeat the obvious over and over, and even then it won't click for many.
roxolotl•3mo ago
So wild to think Cambridge Analytica was a scandal worthy of congressional hearings. LLMs are personalized persuasion on steroids.
andy99•3mo ago
Yeah, I found it slightly ironic that an argument against using AI is made as an empty social-media-style post. Ironically, AI could have written a better one.
dingnuts•3mo ago
it'd be worse, just longer
candiddevmike•3mo ago
I still feel "weird" trying to reason about GenAI content or looking at GenAI pictures sometimes. Some of it is so off-putting in a my-brain-struggles-to-make-sense-of-it way.
tyleo•3mo ago
I’ve found the hesitation to shovel text into AI weird given the _lack_ of hesitation to shovel text into search engines.

Either case is weird in absolute terms, but in relative terms it all goes to the same place. The human-like nature of AI seems to make people realize this more.

subquantum2•3mo ago
Agree that this LLM stuff 'dumbs down', or to use a better phrase, 'changes the human skill set'. Your real skills are reduced over time. The LLM is like a broken mirror of your own skills, and because the mirror is biased, at some point you stop learning, or you learn a biased world. It becomes brain rot: you become the LLM's pet, unable to function without your owner...

On the negative side: the LLM uses fancy language to make disinformation convincing. The danger is that you do not see the disinfo, and it shapes your consciousness. That is the trap.

However, if you are lucky, you learn to distrust LLMs; an LLM is not an educated AI.

On the positive side: you can still use it as a search engine or to get some ideas, but you should continue on your own to build up your creative skills.

Your consciousness and attention are stolen on a daily basis to keep you occupied. This was already going on before LLMs, even before the computer age.

I think at some point your consciousness will detect this brain rot and evolve beyond it.

Our bodies have evolved from childhood on to copy traits from others; it's in our DNA itself to copy. So the LLM is no different, but you should be aware of which traits you copy.

ChrisArchitect•3mo ago
Thought this would be something more about being AI-pilled: the increasing effect of contact with AI systems, and content created by them, that leads to a mindset we're seeing more of, where one constantly questions everything about their reality. Protect your consciousness from that.
saaaaaam•3mo ago
Most people are lazy and stupid. Lazy stupid people use naked LLM outputs. Their brains were already rotting.

Don’t be lazy and stupid.