
AI researchers are now injecting prompts into their papers

https://twitter.com/Yuchenj_UW/status/1942266306746802479
1•timebound•1m ago•0 comments

ICE Using Border Facial Recognition Tech to ID Protesters and Activists in US

https://www.techdirt.com/2025/07/07/ice-using-repurposed-border-facial-recognition-tech-to-id-protesters-activists-and-migrants-on-us-streets/
1•lehi•1m ago•0 comments

A data analysis of animal gender in children's books

https://pudding.cool/2025/07/kids-books/
1•CharlesW•4m ago•2 comments

Bilt: The points-obsessed startup that rewards you for paying rent

https://www.fastcompany.com/91336688/bilt-rewards-credit-card-loyalty-pro
1•toomuchtodo•7m ago•1 comments

Earth is going to spin much faster over the next few months

https://www.livescience.com/planet-earth/earth-is-going-to-spin-much-faster-over-the-next-few-months-so-fast-that-several-days-are-going-to-get-shorter
1•TMEHpodcast•9m ago•0 comments

Home-manager is a false enlightenment

https://fzakaria.com/2025/07/07/home-manager-is-a-false-enlightenment
2•setheron•12m ago•0 comments

Weedkiller ingredient widely used in US can damage organs and gut bacteria

https://www.theguardian.com/environment/2025/jul/06/weedkiller-diquat-organ-damage-study
1•mikhael•13m ago•0 comments

Photography 'Rules' That Social Media Destroyed

https://fstoppers.com/social-media/5-photography-rules-social-media-destroyed-705854
2•patrakov•14m ago•2 comments

Show HN: Agents for DevOps? Demo

https://www.bitflux.ai/examples/
1•jared_hulbert•19m ago•0 comments

Poll: Interested in joining a "Founders over Fifty" group?

2•sendos•22m ago•0 comments

A Chinese Wikipedia editor spent years writing fake Russian medieval history (2022)

https://www.engadget.com/chinese-wikipedia-editor-fake-russian-medieval-history-122001604.html
2•georgecmu•26m ago•0 comments

Kind Candorship – A reflection on the high cost of being a 'straight shooter'

https://tloriato.com/essay/2025/07/07/kind-candorship/
1•tloriato•28m ago•0 comments

Arizona brings a grid battery online ahead of peak demand

https://electrek.co/2025/07/07/arizona-brings-a-huge-grid-battery-online-ahead-of-peak-demand/
3•toomuchtodo•28m ago•0 comments

Atlassian migrated 4M Postgres databases to shrink AWS bill

https://www.theregister.com/2025/07/07/asia_tech_news_in_brief/
2•kmdupree•30m ago•0 comments

You will own NOTHING and be HAPPY [video]

https://www.youtube.com/watch?v=rAsgjKBkKMA
2•tartoran•31m ago•0 comments

From Prompt Towards Silicon

https://sidpm.com
1•suraj_sindhia•32m ago•1 comments

Liquid glass, now with frosted tips

https://birchtree.me/blog/liquid-glass-now-with-frosted-tips/
2•robenkleene•35m ago•0 comments

Build your own SQLite, Part 6: Overflow pages

https://blog.sylver.dev/build-your-own-sqlite-part-6-overflow-pages
3•sevender•36m ago•0 comments

Ted Nelson struggles with uncomprehending radio interviewer [video] (1979)

https://www.youtube.com/watch?v=RVU62CQTXFI
2•EGreg•37m ago•0 comments

Atomic macOS infostealer adds backdoor for persistent attacks

https://www.bleepingcomputer.com/news/security/atomic-macos-infostealer-adds-backdoor-for-persistent-attacks/
4•sandwichsphinx•37m ago•0 comments

We just hit $1M ARR in 4 years. With zero funding

https://projectionlab.com/blog/we-reached-1m-arr-with-zero-funding
4•jonkuipers•41m ago•1 comments

Evidence our brains make neurons in adulthood may close century-old debate

https://www.science.org/content/article/genetic-evidence-our-brains-make-new-neurons-adulthood-may-close-century-old-debate
3•gnabgib•43m ago•0 comments

The Psychological Significance of Eiji Yoshikawa's Musashi

https://abesorock.substack.com/p/the-psychological-significance-of
2•stopachka•54m ago•0 comments

AMD OpenSIL PoC Still Being Worked on for Phoenix SoCs

https://www.phoronix.com/news/AMD-openSIL-Phoenix-Turin-2025
3•pietrushnic•57m ago•0 comments

Caltech Agrees to Settle Lawsuit Accusing It of Misleading Students

https://www.nytimes.com/2025/07/07/us/caltech-simplilearn-settlement-bootcamp.html
4•breadwinner•59m ago•0 comments

Guy built a TikTok replacement for students

https://skilltok.vercel.app/
2•iboshidev•1h ago•1 comments

Core Conversations: Apple M4 vs. Lunar Lake/ Arrow Lake

https://www.intel.com/content/www/us/en/content-details/859225/core-conversations-apple-m4-vs-lunar-lake-arrow-lake.html
2•high_na_euv•1h ago•1 comments

N8n AI Workflows – 3,400 Workflows and an LLM Prototype

2•sayedev•1h ago•0 comments

Numbers are in and NYC congestion pricing is a big 'success,' Hochul says

https://gothamist.com/news/numbers-are-in-and-nyc-congestion-pricing-is-a-big-success-hochul-says
7•rntn•1h ago•0 comments

Naming Software Teams

https://staysaasy.com/management/2025/07/06/team-names.html
2•thisismytest•1h ago•0 comments

I am uninstalling AI coding assistants from my personal computer

https://sam.sutch.net/posts/uninstailling-ai-coding-from-personal-computer
45•ssutch3•4h ago

Comments

breckenedge•4h ago
> As I have kept up conversation with my developer friends, it has become essentially the norm, and everyone is being pressed to find greater productivity using AI coding tools.

What a weird alternate universe it is that I live in. My managers are somewhat skeptical of AI workflows and keep throwing up roadblocks to deeper and more coordinated use among my colleagues. Probably because there is so much churn, and it’s difficult to replicate the practice from one engineer to another. Some of my colleagues are very resistant to using AI. I use it quite extensively, but rate limits mean that there are occasions when I must pick up where the machine leaves off.

Mouvelie•4h ago
> Regardless, the lesson for people like myself is that, in order to feel happy with creating, we have to actually create. An artist would not call their work art if they had little to no role in creating it.

Thanks. The author touched on something there, close to a truth (or a deep belief I hold?) about our lives: something about the journey mattering more than the destination...

toomuchtodo•3h ago
https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_Ab...

https://youtu.be/u6XAPnuFjJc, referenced often here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

bitwize•3h ago
Good. Enjoy that journey... on your own time. You've been missing your productivity OKRs, and your Claude logs say you haven't been using the tools the company has provided. You're on a PIP: if measurable progress is not seen in 30 days, disciplinary action up to and including termination may be taken.
cyanydeez•1h ago
Claude, write me a macro that will autofellate yourself according to these metrics.
fithisux•2h ago
Of interest

https://medium.com/@kenichisasagawa/the-reason-behind-develo...

NathanKP•1h ago
I think the author's definition of "creating" is just too narrow. A gardener can get tremendous satisfaction from watching their plants grow from the bed of soil that they prepared, even if there is not as much weeding or watering to do later on in the growth cycle. A parent can get tremendous satisfaction from watching their child continue to grow and develop, even after the child is no longer an infant who requires constant care and attention.

In my opinion, having spent about a year and a half working on various coding projects using AI, there are phases to the AI coding lifecycle.

1) Coding projects start out like infants: you need to write a lot of code by hand at first to set the right template and patterns you want the AI to follow going forward.

2) Coding projects continue to develop kind of like garden beds: you have to guide the structure and provide the right "nutrients" for the project, so that the AI can continue to add additional features based on what you have supplied to it.

3) Coding projects mature kind of like children growing up to become adults. A well configured AI agent, starting from a clean, structured code repo, might be mostly autonomous, but just like your adult kid might still need to phone home to Mom and Dad to ask for advice or help, you as the "parent" of the project are still going to be involved when the AI gets stuck and needs help.

Personally, while I can get some joy and satisfaction from manually typing lines of code, most of those lines are things I've typed literally hundreds of times over my decades-long journey as a developer. There isn't as much joy in typing out the same things again and again, but there is joy in the longer-term steering and shaping of a project so that it stays sane, clean, and scalable. I get a similar sense of joy out of gently steering AI towards success in my projects to what I get from gently steering my own child towards success. There is something incredible about providing the right environment and the right pushes in the right direction, and then seeing something grow and develop mostly on its own (but with your support backing it up).

fakedang•1h ago
Cue me, cursing the AI with a choice selection of names, when my AI code writer of choice decides to change core files that I had explicitly told it not to touch earlier in the chat.

Guess I will not be a good parent lol.

NathanKP•1m ago
Negative instructions do not work as well as positive ones. If you tell the LLM "don't do this", you only put the idea of doing that into its context. (Surprisingly, the same goes for human toddlers... the AI is just in its toddler phase.)

Not to mention that context length is limited, so if you told it something "earlier" then your statement has probably already dropped off the end of the context window.

What works better is to prompt with positive instructions of intent like:

"Working exclusively in file(s) ____ and ____ implement ____ in a manner similar to how it is done in example file ______".

I start a fresh chat for each prompt, with fresh context, and try to keep all instructions embedded within a single prompt rather than relying on fragile past state that may or may not have dropped off the end of the context window. If there is something like "don't touch these core files" or "work exclusively in folder X" that I want it to always consider, then I add it as a system prompt or global rule file (which ensures the instruction gets included automatically on every prompt).

And don't get me wrong, I get frustrated with AI sometimes too, but the frustration has declined dramatically as I've learned how to prompt it: appropriate task sizes, positive statements rather than negative ones, gathering the appropriate context to steer behavior and output, etc.
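The workflow described above (a fresh context per task, positive instructions, persistent rules injected on every request) can be sketched roughly like this. This is only an illustration of the idea, not any real tool's API: the rule text and file paths are made up, and the actual model call is stubbed out.

```python
# Sketch of "fresh chat per prompt + persistent rules": every request is
# self-contained, with the global rules sent as a system message instead of
# relying on earlier turns that may have fallen out of the context window.
# All rule text and file paths below are illustrative.

GLOBAL_RULES = (
    "Work exclusively inside src/feature/. "
    "Never modify files under src/core/."
)

def build_messages(task_prompt: str, rules: str = GLOBAL_RULES) -> list[dict]:
    """Compose one self-contained request: rules first, then a single
    positive, fully specified task prompt. No prior turns are carried over."""
    return [
        {"role": "system", "content": rules},
        {"role": "user", "content": task_prompt},
    ]

# Each task gets its own fresh context, phrased as positive intent:
msgs = build_messages(
    "Working exclusively in src/feature/search.py, implement fuzzy matching "
    "in a manner similar to how it is done in src/feature/filter.py."
)
```

The point of the sketch is that the "don't touch these files" constraint never has to be remembered across turns; it is re-sent mechanically with every prompt.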

bluefirebrand•4h ago
I haven't had nearly the same experience of success with AI.

I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt.

My work is pushing these tools hard, and it is taking a huge toll on me. I'm constantly hearing how life-changing this is, but I cannot replicate it no matter what I do.

I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like I'm either extremely unskilled or everyone else is gaslighting me, with basically nowhere in between.

I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!

The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!

It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do; I feel I'm falling behind and being singled out.

I probably need to change employers to get away from AI usage metrics at this point, but it feels like everyone everywhere is guzzling the AI hype. It feels hopeless.

cheevly•3h ago
And the irony is that those of us using AI to amplify our output to produce at exponential speeds feel like your comments are gaslighting us instead! I've never seen such an outright divide among practitioners of a technology in terms of perception and outcomes. I got into LLMs super early, using them daily since 2022, so that may have bolstered the way I've augmented my approaches and tooling. Now almost everything I build uses AI at runtime to generate better tools for my AI to generate tools at runtime.
upghost•3h ago
Can we use this micro-moment to try to bridge the gap? I was sold cocaine, but all I've gotten so far is corn starch. Is there like a definitive tutorial on this? I mean, look, I am proud of my work, but if I can drop $200-1000/month for the "blue stuff", I'm not gonna turn my nose up at it.

I've been pretty deeply into LLMs myself since 2023, and I've built several small models from scratch and SFT-trained many more, so it's not like I'm ignorant of how they work; I'm just not getting the workflow results.

ssutch3•3h ago
It's going to depend heavily on what you're doing. If you're doing common tasks in popular languages, and not using cutting-edge library features, the tools are pretty good at automating a large amount of the code production. Just make sure the context/instruction file (e.g. claude.md) and codebase are set up to properly constrain the bot, and you can get away with a lot.

If you're not doing tasks that are statistically common in the training data, however, you're not going to have a great experience. That being said, very little in software is "novel" anymore, so you might be surprised.
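A context/instruction file of the kind mentioned might look something like the sketch below. The file name follows the Claude Code convention; every rule and path in it is made up for the example, not taken from any real project.

```markdown
# CLAUDE.md (illustrative example)

- Work exclusively inside `src/` and `tests/`; never modify files under `infra/`.
- Follow the existing patterns: one handler per file in `src/handlers/`.
- After any change, run the test suite and fix failures before finishing.
```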

bluefirebrand•2h ago
Just because it's not strictly novel doesn't mean that the LLM is outputting the right thing.

We used to caution people not to copy and paste from StackOverflow without understanding the code snippets; now we have people generating "vibe code" from nothing using AI, never reading it once, and pushing it to master?

It feels like an insane fever dream.

bluefirebrand•1h ago
> And the irony is that those of us using AI to amplify our output

I'm guessing you don't care about quality very much, since you are focusing on your output volume

01HNNWZ0MV43FF•52m ago
Maybe I need to watch some videos on YouTube to understand what other people are seeing.

I couldn't even get Zed hooked up to GitHub Copilot. I use ChatGPT for snippets and search and it's okay but I don't want to bother checking its work on a large scale

jaredcwhite•15m ago
> amplify our output to produce at exponential speeds

I think I blacked out when my brain tried to process this phrase.

Nothing personal, but I automatically discount all claims like this (something something require extraordinary evidence and all that…).

ssutch3•3h ago
AI coding tools aren't equally effective across all software domains or languages. They're going to be the "best" (relative to their own ability distribution) in the "fat middle" of software engineering where they have the most training data. Popular tasks in popular languages and popular libraries (web dev in React, for example). You're probably out of luck if your task is writing netcode for a game engine, for instance.
bluefirebrand•2h ago
I am a web dev in React, though

My experience is in one of the areas where people say it is most helpful

Which really just adds to the gaslighting effect

20after4•3h ago
I have a working theory that it's mostly bad programmers who are achieving massive productivity gains. Really good programmers will probably have trouble getting the LLM tools to perform as well as their normal level of output.

This could be cope but I don't think it is.

bluefirebrand•1h ago
I'm not sure if it is cope, but I sort of feel the same

The quality of LLM code is consistently average at best, and usually quite bad imo. People say it is like a junior, but if a junior I hired produced such code consistently and never improved, I would be recommending the company PIP them out.

Having output like a Junior would be fine, if I didn't have to fix it myself. As it stands, I've never been able to get it to produce code of the quality I want so I have to spend more time fixing it than I would just writing it.

I dunno. It sucks man

steveklabnik•1h ago
I have seen good programmers, ones I respect a lot, get good results with AI.

I don't think this is it, personally.

glouwbug•1h ago
My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy-pastes? As an example, I recently was able to prototype several different one-dimensional computational fluid dynamics GLSL shaders. Claude outputted everything with vec3s, and so the flux math matched what you'd see in the theory. It's rapid iteration and a decluttered search engine for me, with an interactive inline comment section, though I do understand some would disagree with that statement, especially since it's lacking any sort of formal verification. I counter with the old adage that anyone can be a dog on the internet.
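The kind of one-dimensional flux math mentioned can be illustrated with a minimal sketch. This is not the commenter's shader code; it is a generic Lax-Friedrichs update for linear advection in Python, with all parameter values chosen for illustration, of the sort one might prototype before porting it to GLSL.

```python
# One-dimensional Lax-Friedrichs step for linear advection u_t + a*u_x = 0,
# with periodic boundaries. The scheme averages the neighbors (a diffusive
# term) and subtracts the centered flux difference; it conserves sum(u)
# exactly. Grid size, dx, and dt are illustrative (a*dt/dx = 0.5 <= 1,
# satisfying the CFL stability condition).

def lax_friedrichs_step(u, a=1.0, dx=0.1, dt=0.05):
    """Advance the cell values u by one time step."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        left, right = u[(i - 1) % n], u[(i + 1) % n]
        new[i] = 0.5 * (left + right) - a * dt / (2 * dx) * (right - left)
    return new

# A square pulse advected one step to the right:
u0 = [1.0 if 2 <= i <= 4 else 0.0 for i in range(10)]
u1 = lax_friedrichs_step(u0)
```

A quick sanity check on such a prototype is conservation: with periodic boundaries the neighbor terms telescope, so the total of `u` is unchanged after each step.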
bluefirebrand•1h ago
> My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy pastes

For me, if I spent the time testing 3 different models I would definitely be slower than writing the code myself

clown_strike•1h ago
You're being gaslit. The point is to make you look unproductive.

The untrained temp workers using AI to do the entirety of their jobs aren't producing code of professional quality; it doesn't adhere to best practices or security unless you monitor that shit like a hawk. But if you're still engineering for quality, then AI is not the first train you've missed.

They will get code into production quicker and cheaper than you through brute force iteration. Nothing else matters. Best practices went the way of the rest of the social contract the instant feigned competence became cheaper.

Even my podunk employer has AI metrics. You won't escape it. AI will eventually gatekeep all expertise and the future employee becomes just a disposable meat interface (technician) running around doing whatever SHODAN tells them to.

geoka9•12m ago
My "agentic" experience is mostly with Aider, working across a Golang webapp codebase. I've mostly used Gemini (whatever model Aider chooses to use at the moment).

Most of my experience has been similar to yours. But yesterday, out of the blue, it spit out a commit that I accepted almost verbatim (I just added some line breaks and such). I was actually really surprised: not only did it follow the existing codebase conventions and variable-naming style, it also introduced a couple of patterns that I hadn't thought of (and liked).

But it also charged me $2 for the privilege :) (On a related note, Gemini API has become noticeably more expensive compared to, say, a month ago.)

I find that with Aider, managing context (which files you add to it) can make all the difference.

Macha•1h ago
> I wonder if some “actual" artists (as in, those people who create the kind of art most people would recognize) have gone through a similar arc of realizing the emptiness of creating with AI tools.

My impression is that artists are even more hostile than the most AI-skeptic of software engineers. In large part, this is likely because the economic argument doesn't hold much sway: for the large majority of artists it's already hard to make money with art, and the bottleneck is not the volume of art they can produce. There's a much clearer path to turning "more code" into "more money", even if it's still not direct.

jaredcwhite•19m ago
Perhaps that's why I as a software developer am fully genAI-skeptic…I've always considered myself a multidisciplinary artist and the skill I have in writing code is simply one of the many possible avenues I use to express myself. (Alas, it's the one which produces the most income by far, but that's another conversation!)
randomNumber7•29m ago
When you get better than junior level, you see the limitations of current coding assistants.

But to get there, it might be a good move to code for yourself (and read books).

Then again, coding will not be a fun job anymore...