
Torvalds Drops Bcachefs Support After Clash

https://news.itsfoss.com/linux-kernel-bcachefs-drop/
1•Volundr•6m ago•0 comments

Why a Simple Button Press Can Crash Your FPGA System (and How to Fix It)

https://siliscale.substack.com/p/mastering-external-signal-synchronization
1•glcssr•7m ago•1 comments

Experimental X11 Compatibility Layer

https://github.com/kaniini/wayback
1•nobody9999•18m ago•1 comments

OpenAI Partnership Puts Conversational AI in Mattel Toys

https://www.pymnts.com/news/artificial-intelligence/2025/barbie-gets-brain-openai-partnership-puts-conversational-ai-mattel-toys/
1•geox•24m ago•0 comments

Accuracy of Apple Watch calorie counts

https://www.empirical.health/blog/apple-watch-calories-accuracy/
1•brandonb•30m ago•0 comments

Solving `UK Passport Application` with Haskell

https://jameshaydon.github.io/passport/
2•jameshh•34m ago•1 comments

A reverse-delta backup strategy – obvious idea or bad idea?

3•datastack•45m ago•2 comments

How to Train Your GPT Wrapper

https://blog.sshh.io/p/how-to-train-your-gpt-wrapper
1•sshh12•56m ago•0 comments

There's not a shred of evidence on the internet that this band has ever existed

https://www.musicradar.com/music-tech/theres-not-a-shred-of-evidence-on-the-internet-that-this-band-has-ever-existed-this-apparently-ai-generated-artist-is-racking-up-hundreds-of-thousands-of-spotify-streams
2•coloneltcb•1h ago•0 comments

App51 vs. Bolt, Replit, Rork and A0

https://www.app51.ai
2•shimon1981•1h ago•1 comments

Supreme Court Greenlights Online Digital ID Checks

https://reclaimthenet.org/supreme-court-greenlights-online-digital-id-checks
2•like_any_other•1h ago•0 comments

Sysadmin.ca – Free tools and policies for system administrators

https://sysadmin.ca/
1•WallyCanada•1h ago•0 comments

Crewless ship is defending Denmark's and NATO's waters

https://www.euronews.com/next/2025/06/25/this-crewless-ship-is-defending-denmarks-and-natos-waters-this-is-how-it-works
1•zdw•1h ago•0 comments

How to Surf the Web in 2025, and Why You Should

https://www.raptitude.com/2025/06/how-to-surf-the-web-in-2025-and-why-you-should/
1•zdw•1h ago•0 comments

Automatic build number incrementing in Xcode

https://blog.gingerbeardman.com/2025/06/28/automatic-build-number-incrementing-in-xcode/
1•zdw•1h ago•1 comments

Taiwan Looks to New Sea-Drone Tech to Repel China

https://www.wsj.com/world/asia/taiwan-looks-to-new-sea-drone-tech-to-repel-china-c1615d42
2•bookofjoe•1h ago•1 comments

Archive Postgres Partitions to Iceberg

https://www.crunchydata.com/blog/archive-postgres-partitions-to-iceberg
1•craigkerstiens•1h ago•0 comments

What went wrong with our happiness

https://medium.com/@orzel.jarek/what-went-wrong-with-our-happiness-aa1f017ba05e
5•jorzel•1h ago•0 comments

In the Age of AI, Is Code Literacy Your Superpower?

https://pmbanugo.me/blog/ai-code-literacy
2•eddieos•1h ago•1 comments

The Death of the Middle-Class Musician

https://thewalrus.ca/the-death-of-the-middle-class-musician/
12•pseudolus•1h ago•6 comments

Banausos

https://en.wikipedia.org/wiki/Banausos
1•tusslewake•1h ago•0 comments

Swiss cocaine so cheap and widely used they're considering legalising it

https://www.telegraph.co.uk/world-news/2023/12/21/swiss-cocaine-cheap-widely-used-high-quality-bern-legalise/
11•Anon84•1h ago•0 comments

Show HN: Build Discord bots, earn prizes (18 and under)

https://converge.hackclub.com/
1•JustSkyfall•1h ago•0 comments

White-Label AI Platform for SMB

https://parallellabs.app/white-label-solutions-from-parallel-ai/
1•davidrichards•1h ago•0 comments

Exploring Trichromacy through Maxwell's Color Experiment (2023)

https://maxwell.kohterai.com/
4•niwrad•1h ago•0 comments

AI Knows Us Too Well

https://nautil.us/ai-already-knows-us-too-well-1220707/
2•dnetesn•1h ago•0 comments

Group of investors represented by YouTuber Perifractic buys Commodore

https://www.amiga-news.de/en/news/AN-2025-06-00123-EN.html
5•erickhill•1h ago•0 comments

Ask HN: How do you decide what to ship each week as a solo founder?

2•five9s•1h ago•1 comments

Removing race as a risk factor for cardiovascular disease

https://peterattiamd.com/race-and-cvd-risk/
2•brandonb•1h ago•0 comments

So Much Better for Beginners Than Tmux

https://www.howtogeek.com/this-terminal-multiplexer-is-so-much-better-than-tmux-for-beginners/
5•Bluestein•2h ago•1 comments

BYU study: Why some people choose not to use artificial intelligence

https://news.byu.edu/intellect/byu-study-finds-the-real-reasons-why-some-people-choose-not-to-use-artificial-intelligence
26•computator•5h ago

Comments

BryanLegend•4h ago
Sounds like the answer is FUD.
mixmastamyk•4h ago
I don’t use them, yet. Am open to it, but no longer trust most tech companies. Probably an open model in several years on a beefy yet affordable machine I’ve yet to purchase.

Also am at the peak of my game, and automated templates, snippets, and stackoverflow lookup a decade+ ago. I prefer reading a discussion of tradeoffs to approaches before picking one. It may take up to ten more minutes up front but save hours later.

So waiting for the dust to settle.

herbst•4h ago
There are several companies with proper privacy terms offering openly available models as pay-per-use at a fair price.
TimorousBestie•4h ago
Work kind of guilt-tripped me into giving it a shot and my experience with Claude was such a time-waster that I’ve kinda been ignoring it and continuing to do my own thing.

Maybe other people are better at prompting or designing VSCode integrations, but what I’ve experienced so far has been a mess. Utterly nonsensical design decisions; it doesn’t seem to understand basic linear algebra or the LAPACK API. (I tried adding the Fortran or C source to its context, to no avail.) I asked it to rewrite a well-documented scalar function using AVX intrinsics and... woof. No good.
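
To make the task concrete for readers unfamiliar with it: below is a minimal sketch, in C, of what "rewrite a scalar function using AVX intrinsics" involves. The dot-product kernel and the function names are made up for illustration (this is not the function from the comment); the general shape is a scalar reference loop, an 8-wide vector loop, a horizontal reduction, and a scalar tail.

    #include <immintrin.h> /* AVX/SSE3 intrinsics; compile with -mavx */
    #include <stddef.h>

    /* Scalar reference: one multiply-add per element. */
    float dot_scalar(const float *a, const float *b, size_t n) {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }

    /* AVX rewrite: 8 floats per iteration, then reduce the partial sums. */
    float dot_avx(const float *a, const float *b, size_t n) {
        __m256 vacc = _mm256_setzero_ps();
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            vacc = _mm256_add_ps(vacc, _mm256_mul_ps(va, vb));
        }
        /* Horizontal reduction: fold the 8 lanes down to one float. */
        __m128 sum = _mm_add_ps(_mm256_castps256_ps128(vacc),
                                _mm256_extractf128_ps(vacc, 1));
        sum = _mm_hadd_ps(sum, sum);
        sum = _mm_hadd_ps(sum, sum);
        float acc = _mm_cvtss_f32(sum);
        for (; i < n; i++) /* scalar tail for the n % 8 leftovers */
            acc += a[i] * b[i];
        return acc;
    }

The reduction and the tail loop are typically the fiddly parts, and since float addition is not associative, the vector version can legitimately differ from the scalar one in the last bits, which makes verifying such a rewrite harder than it looks.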

Hopefully the field either improves dramatically in a couple years or goes back into hibernation for the next AI winter.

TheCleric•4h ago
Same. I gave up after 30 minutes of correcting it when it kept suggesting code that was completely invalid.
ksynwa•4h ago
> A photo illustration created by AI depicting someone skeptical of using AI.

> Photo by Nate Edwards/BYU Photo

So is it a photograph or AI generated?

Retr0id•4h ago
Just looking at it, I don't think I've ever been more on-the-fence about whether an image is AI or not. It has AI vibes, but no especially obvious AI artefacts. Nate Edwards is definitely a real photographer, too.
mitthrowaway2•1h ago
The guy's shirt has buttons on both sides
johncole•4h ago
> While some people might worry about an AI apocalypse,

Wait, so if you’re worried about an AI apocalypse, you’re not using it. What does that solve?

> Steffen and Wells found that most non-users are more concerned with issues like trusting the results, missing the human touch or feeling unsure if GenAI is ethical to use.

Are the ethical users avoiding LLMs entirely? Or only for certain use cases?

add-sub-mul-div•4h ago
> Wait, so if you’re worried about an AI apocalypse, you’re not using it. What does that solve?

It solves acting according to one's principles and against what one perceives as harmful? I don't understand the question.

ost-ing•4h ago
I'm pretty close to not using it, mainly because I rage trying to explain something, as if to a 6-year-old, while it's smearing shit on the walls.

True engineering requires discipline; anything short of this philosophy is brain rot, and you will pay the price in the long term.

pico303•4h ago
I’m a dev working with AI to build tools for others, but I don’t use them personally. Why? Because they make your writing sound like everyone else’s, they produce shoddy and broken code (unless you’re doing something really commonplace), and they dull your own creativity. If you’re relying on someone else to do your work, you’re going to lose the ability to think for yourself.

AI is built essentially on averages. It’s the summary of the most common approach to everything. All the art, writing, podcasts, and code look the same. Is that the bland, unimaginative world we’re looking for?

I love the bit in the study about the “fear” of AI. I’m not “afraid” it’ll produce bad code. I know it will; I’ve seen it do it 100 times. AI is fine as one tool to help you learn and think about things, but don’t use it as a replacement for thinking and learning in the first place.

BoiledCabbage•4h ago
> AI is built essentially on averages.

It is, but that also means if you prompt it correctly it will give you the answer of the average graduate student working on theoretical physics, or the average expert on the historical inter-cultural conflict of the country you are researching. Averages can be very powerful as well.

ofjcihen•4h ago
I see this argument all the time. That the user must not be prompting correctly.

In my experience the way you prompt is less important than the “averageness” of the answer you’re looking for.

Jach•3h ago
Talking about averages is really misleading. Talk about capabilities instead, framed in tool language if you must.

Quoting https://buttondown.com/hillelwayne/archive/ai-is-a-gamechang... about https://zfhuang99.github.io/github%20copilot/formal%20verifi... "In the post, Cheng Huang claims that Azure successfully used LLMs to examine an existing codebase, derive a TLA+ spec, and find a production bug in that spec." This is not the behavior of the "average" anything.

ofjcihen•2h ago
Take it from someone in the business of exploiting race conditions for money: that’s about as average as you can get. Additionally, whatever Azure is calling “traditional” methods may be bare-bones, poorly optimized automated code reviews, given the egregious issues they’ve had in the past.

As a side note: LLMs by definition do not demonstrate “understanding” of anything.

andy99•3h ago
Research has no average; there's opinion and experience and nuance. This whole "graduate level" thing (no idea if that's what the parent comment refers to) is so stupid, and is marketing aimed at people who have never done research or advanced studies.

Getting an average response by necessity gives you something dumbed down and smoothed over that nobody in the field would actually write (except maybe to train an LLM or contribute an encyclopedia entry).

Not that having general knowledge is a bad thing, but LLM output is not representative of what a researcher would do or write.

Jach•2h ago
One thing the "graduate level" concept reminds me of is Terence Tao's semi endorsement almost a year ago: https://mathstodon.xyz/@tao/113132502735585408 People quote the "The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, (static simulation of a) graduate student." part but ignore all the rest of the nuance in the thread like "It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of "(static simulation of a) competent graduate student" is reached, at which point I could see this tool being of significant use in research level tasks." or "I inadvertently gave the incorrect (and potentially harmful) impression that human graduate students could be reductively classified according to a static, one dimensional level of “competence”."
ofjcihen•4h ago
I need to second all of these points and add the additional reason I don’t use it often: unless it’s a very common use case and I just need some boilerplate starting code, I already know I’m going to spend more time fixing the issues it creates than if I just wrote it myself.
standardUser•4h ago
> they produce shoddy and broken code

We must have dramatically different approaches to writing code with LLMs. I would never implement AI-written code that I can't understand or prove works immediately. Are people letting LLMs write entire controllers or modules and then just crossing their fingers?

ofjcihen•3h ago
In my experience: Yes.

Doing security reviews for this content can be a real nightmare.

To be fair, though, I have no issue with using LLM-created code, with the caveat being YOU MUST UNDERSTAND IT. If you don’t understand it enough to be able to review it, you’re effectively copying and pasting from Stack Overflow.

standardUser•3h ago
At least with Stack Overflow there's upvotes and comments to give me some confidence (sometimes too much confidence). With LLMs I start hyper-skeptical and remain hyper-skeptical - there's really no way to develop confidence in it because the mistakes can be so random and dissimilar to the errors we're used to parsing in human-generated content.

Having said that, LLMs have saved me a ton of time, caught my dumb errors and typos, helped me improve code performance (especially database queries), and even clued me in to some better code-writing conventions/updated syntax that I hadn't been using.

stackskipton•3h ago
Also, with most copied-and-pasted Stack Overflow code, you can Google the suspicious code, find the link to it, read over the question/comments, somewhat grok the decision, and maybe even find a fix in the comments.

Most AI code does not come with its prompts, and even if it does, there is no guarantee the same prompt will produce the same output. So it's like reading human code, except the human can't explain themselves even if you have access to them.

thayne•1h ago
In my experience, fixing code generated by AI is often more work than writing it myself the right way.

And even if you understand the code, that doesn't mean it is maintainable code.

stackskipton•3h ago
Yes. The VAST majority of developers are working in feature factories where they pick a Jira ticket off the top; it probably has a timeboxed work amount. Their goal is to close the Jira ticket within that timebox by getting the build to go green and the PM to accept "Yep, that feature was implemented." If badly written LLM code gets the build to go green and the feature accepted, whatever: Jira ticket closed, paycheck collected. Any downstream problems are tomorrow's problem, for when the tech debt piles up high enough and Jira tickets to fix it get written.
sotix•35m ago
> > they produce shoddy and broken code

> We must have dramatically different approaches to writing code with LLMs.

I’ve seen this same conversation occur on HN every day for the past year and a half. Help! I think I’m stuck in an LLM conversation where it keeps repeating itself and is unable to move on to the next point.

1vuio0pswjnm7•1h ago
Autocomplete allows one to see what strings others have written, typed, or dictated. That can be useful, no doubt. For one, it saves time typing those strings oneself.

But claiming those strings as one's own is a bridge too far. Of course one might want to avoid inadvertently creating strings that others have already created. Autocomplete can prevent that. But people will inevitably need to create new strings that no one else has created before. There is no substitute for the thinking behind the creation of new strings. Recombining old strings is not a substitute.

"AI" is being marketed as a substitute. Recombination of past work is not, by itself, new work or new thinking. As with autocomplete, there are limits to its usefulness.

For software developers who hate "intellectual property" and like to take ideas from others, this may be 100% acceptable. But for non-software developers who seek originality, it might fall short.

When the people invested in "AI", e.g., Silicon Valley wonks, start throwing around terms like "intelligence" to describe a new type of autocomplete, when they fake demos to mislead people about its limits, then some people are going to lose interest. Software developers betting on "AI" may not be among them. The irony is that software development is already so rife with economically justified mindless copying and unoriginality that software quality is in a free fall. "AI" is only going to supercharge the race to the bottom.

Like it or not, the market wants "bad code". It loves mindless copying. It has no notion of "code quality". It demands minimisation of "developer time". Perhaps "AI" will deliver.

analog31•4h ago
For me what matters is how I ration my attention. Time spent reacting to the AI could be spent thinking or working.

With that said, getting it to create boilerplate code is pretty useful, but not all that important a part of my job.

mdaniel•4h ago
I wish OP had linked to the actual study, because that blurb is a press release

Resistance to Generative AI: Investigating the Drivers of Non-Use - https://scholarspace.manoa.hawaii.edu/server/api/core/bitstr...

karmakaze•3h ago
Either I'm an outlier or this is a bad article/study.

All the reasons given are fears:

    Output Quality - Fears that...
    Ethical - Fears about...
    Risk - Fears that...
    Human Connection - Fears that...
    Impairment - Fears that...
    Creativity - Fears that...
My disuse is all about flow and value, not fear. The way I use it is in refining ideas at a higher level, not outputting code/content/etc. (except for rote work).
Jach•3h ago
Programmers are usually the minority. The introduction mentions that ChatGPT reached 100 million users faster than any other consumer technology in history. There aren't even that many programmers worldwide. In their table 3 of non-use scenarios, programming isn't an explicit one while "creating poetry" is. (Despite mentioning CoPilot use as one of their pre-screen options. Perhaps in the 24 situation codes they came up with, one of the 4 they removed for table 3 due to having the greatest reported AI usage was programming, as this study is more about non-usage.) To put yourself in the mindset of a study participant, go through each of those scenarios and ask yourself if you've used the AI for that (and would use it again) or not, and why.

They also only surveyed a few hundred people via Prolific.

The product success (millions of users) implies that for most people, concerns over "ease of use" (which is what I'd code your reason of "flow" as) aren't common, because it's quite easy to use for many scenarios. But I'd still expect the concern to come up for those talking about using it for artwork because even with things like inpainting in a graphics editor it's still not exactly easy to get exactly what you want... The study mentions they consolidated 29 codes into the 8 in table 2 (you missed the two general concerns, Societal and Barrier). Perhaps "ease of use" slides into "Barrier", as they highlight "lack of skill" as fitting there and that's similar. It would be nice to see a sample of the actual survey questions and answers and coding decisions but hey what is open data am I right.

Anyway, the table headings are "General Concerns" and "Specific Concerns". I wouldn't get too hung up on the use of the term "fear" as the authors seem to use it interchangeably with "concern". I'd also read something like "Output Quality: fears that GenAI output is inaccurate..." synonymously as "has low confidence in the output quality of GenAI". (I'd code your "value" issue as one about output quality.) All of these fears/concerns/confidence questions can be justifiable, too, the term is independent of that.

lucas_membrane•2h ago
Fear? I suppose that any negative evaluation can be stereotyped as fear or lack of intrepidity, but perhaps the repeated use of that label is projection -- that the article was written by an AI believer who fears that AI might have to recognize some realities beyond its purview. Or maybe the article was written by an AI that has learned that fear is what we fear.

Human thought is implemented by a system that has adapted for hundreds of millions of years in diverse environments. We are adapted to huge variations in resources, threats of innumerable kinds, climates, opportunities, social and ecological relationships, etc, and many of its adaptations may be adaptations to control, balance or modify its other adaptations. It would be crazy to expect human intelligence to be what we could describe as optimized for something, and it would be crazy to expect humans to be able to figure out what that something is even if that were true. Perhaps our minds have gotten us here, and they cannot get us out of here, but they maintain some pretty strong links to our natural environment, which is still our landlord.

AI, OTOH, is a new kind of creature of a single time and a monoculture -- the internet. I don't talk to AI; perhaps someone has asked AI how much fear we should have of AI, and what the odds are.

bawolff•4h ago
The weirdest thing about AI is how shocked people who like AI are that some people don't use it.

While I'm sure it's a useful tool in some situations, and I don't begrudge anyone who finds value in it, it simply doesn't fit into my life as something that would be useful on a regular basis.

analog31•4h ago
There's an element of political correctness here. It's hard to look your friend in the eye and tell them that the stuff they're writing isn't worth reading.

In a similar vein, when people find out that I ride a bicycle, their first question is why I don't ride an e-bike.

tforcram•3h ago
I just had that conversation with a coworker last week. They started with 'I wonder if there is anyone left who isn't using AI daily?', and I had to reply with... 'um, well, me actually'.

I only occasionally try it out for specific tasks and have never felt the inclination to make it part of any daily process, but his mindset was such that he couldn't conceive of anyone not wanting to fully dive in every day, and he thought that those who didn't were missing out on significant value in their lives.

ofjcihen•2h ago
Any time I’ve had someone express this kind of thinking they’ve either been a non-dev or someone who writes CRUD exclusively.

Devs and others recognize that the tech is very useful but not "magic".

Cornbilly•4h ago
For me, it's only really useful as an enhanced search engine and using it to do any real work usually just leads to more issues than if I just did it manually.

The vast majority of uses are your typical Silicon Valley hype: jargon-filled bullshit that sells half-baked products to the tech-illiterate folks in the C-suite.

gunnarmorling•3h ago
The other day, I came across a blog post by someone I really value, and it sounded very much like it was written by AI. So I decided to be very explicit and transparent about my own ways of using (and not using) AI for my blog: https://www.morling.dev/ai/.

TL;DR: I don't use it for writing (I want to say something original in my own voice), but I do use it for copy editing (improving wording, helping with title ideas, etc.).

fpoling•3h ago
When ChatGPT appeared, I had been working for a couple of months at a small startup that was planning to hire another programmer.

The CTO became extremely enthusiastic about ChatGPT, said that programming would be a dying job, and tried to show during a presentation how good ChatGPT was by asking it to write basic code related to our tasks. It produced total garbage that could not be used even as a starting point. The CTO tried to prompt it in the needed directions, but it made things worse.

After the presentation I searched for the task myself. It turned out there were very few Stack Overflow or GitHub entries about it, as the topic was rather specialized, and ChatGPT had tried to average those few into an answer.

Within a month, another recent hire and I had departed the company. A year later, the company was hiring programmers again.

Out of curiosity I have repeated the task a few times with different models, every time getting the same garbage.

So my rule of thumb is that if a task generates a lot of search hits, then perhaps an LLM can average the knowledge into something reasonable. If not, averaging is not possible.

drivingmenuts•3h ago
Of the four top concerns (Output Quality, Ethical Implications, Risk and Human Connection), I agree with the first three and am ambivalent about the fourth. I think the first three are also inseparably interlinked. The Human Connection issue is a bit different - that's more about the individual than the technology, to my mind. As long as no one is forced to use an AI and as long as final decisions are made by responsible humans, we might be OK.
diggan•3h ago
Re the ethics part, something I haven't quite understood myself yet:

On one hand, training it isn't "copying" per se, but "learning", so maybe it isn't straight-up copyright infringement, unless it can reproduce large parts identically. It could also allow small teams/individuals to have a much larger impact in the world, and could lower the barrier to entry for research and experimentation, maybe even other endeavors. It certainly could help with knowledge sharing and accessibility, where downstream creativity and usefulness can outweigh diffuse individual harm. Maybe it expands the creative field rather than shrinks it; that'd be a good thing.

But then on the other hand, many models (datasets) are built from copyrighted works without permission or royalties, with the effect that LLM availability could reduce demand for the human work behind them and erode livelihoods, shrinking fields instead of expanding them. Most releases today are kind of opaque about their training datasets; most are undisclosed, and it's hard if not impossible for authors to have agency over whether their work is included or not. Maybe if LLMs remain, it'll instead be hard to sustain cultural production; that'd be good for no one.

So then what is the best approach for someone who doesn't want to forfeit the usefulness they themselves experience, but also doesn't want to go directly against what the ethical considerations bring up? In the end I don't know if there is an easy or right side to take; I guess usually the optimum sits somewhere around the middle, not at the extremes at least.