
When Compiler Engineers Act as Judges, What Can Possibly Go Wrong?

https://seylaw.blogspot.com/2025/05/when-compiler-engineers-act-as-judges.html
27•meinersbur•5mo ago

Comments

hyperhello•5mo ago
> He derided my attempt to use an AI summary to bridge a communication gap (I explicitly stated I'm not a programmer) as a "...stochastic parrot designed to produce lies instead of actionable information...".

I don't really have a dog in the race, but I think people should react this way to AI communication. They should be shunned and informed in no uncertain terms that they are not welcome to communicate any more.

welferkj•5mo ago
Agreed, but this needs to be codified in the CoC, otherwise people will use it to rules-lawyer and treat morally correct anti-AI bias as a character flaw.
zzrrt•5mo ago
I have a nitpick about how AI is supposedly "designed to produce lies." That's pretty clearly false, unless you really believe the creators of AI intentionally set out to spread lies through their products from the start. Call them careless or their technology inherently flawed if you want, but neither means "design[ing]" liar technology.

This might have been too petty to comment on, if it weren't for the irony that an arrogant human asserting that AIs are fallible made his own logical error or exaggeration in the same sentence. Was he designed to produce lies too?

Edit: I'm not really defending a layman using AI to produce patches, but the OSS developer's characterization is an overreach in the other direction. It's not a very useful heuristic either; at some point AI content is not going to be labeled or obvious, so it will have to be carefully evaluated for correctness and good-faith intention the same as human-generated content is.

jdiff•5mo ago
One of the very first examples published by GPT-2, the model that started the "too dangerous to be open anymore" trend, was a fake news article about the discovery of unicorns in the Scottish mountains. It's been part of OpenAI's narrative since the beginning.
TheDong•5mo ago
Even that phrasing is kinda rude. "bridge a communication gap", presumably the gap between programmer and non-programmer, right?

He used AI as in he gave the AI the patch that regressed, asked the AI to "find the bug", and pasted that output.

This would be akin to walking up to the architect for a house, and saying "it's not my job to build buildings, but I tried to use this lego design to show you how to do your job. Look, the legos snap together, can't you do that with the house?"

Using AI to try and explain things to a subject matter expert, which you yourself do not understand, will come off like that, like you're second-guessing, belittling, and underestimating their expertise all at once.

carols10cents•5mo ago
And the architect is a volunteer for Habitat for Humanity.
KingOfCoders•5mo ago
And the architect tells them to submit a new blueprint of the plans otherwise they can't do anything - knowing quite well he's not an architect and can't do that ("Submit a patch").
axman6•5mo ago
That's absolutely not what they asked for. No one was able to reproduce the issue, so they asked for clearer instructions on how to reproduce it and were met with hostility. It's not the job of OSS developers to debug someone else's scripts just to then start debugging the actual issue. A clear reproduction is the absolute bare minimum of any bug report: if you think there's a bug but no one else can observe it, in the first instance you have to assume it's something to do with your setup, until shown otherwise. The addition of not just wrong but completely misleading AI summaries only makes the job of an OSS dev harder; they now have to start debugging the bug report itself to figure out what parts are facts at all (hint: most of the AI-generated content was completely wrong, but sounded plausible).

Personally, I think the developers of both the LLVM and Mesa projects were far kinder and more patient than I would have been. Most OSS developers aren't just not paid to work on these projects, but are usually paid to work on other things. Taking up their time with this nonsense is very insulting to them, and the attitude that they owe the author anything at all is, as stated in the LLVM ticket, exactly what pushes many developers out of OSS development.

high_na_euv•5mo ago
Why are the links going through Google.com? It's shady as hell.
ajb•5mo ago
Probably just copy-pasting them out of Google search; it's an easy mistake to make if you're not that technical.
faresahmed•5mo ago
I'm not quite sure why a non-technical person would be engaging in a technical matter such as compiling LLVM. They say they are involved with some Arch Linux derivative, but the question persists.
high_na_euv•5mo ago
For the PR link I'd agree, but a specific comment link?!
zzrrt•5mo ago
Maybe Blogspot wraps all links like that, to fight malware and SEO.
rho4•5mo ago
I feel for the author. Very often when I report issues to open source projects, the first response is "why don't you submit a patch?", followed by subtle hints that I am a leech profiting off the backs of volunteers. I am also at the point where I seriously ask myself whether I should invest the time to report issues and provide minimal examples for reproduction.
KingOfCoders•5mo ago
It's just like Wikipedia to circle the wagons.
yeputons•5mo ago
> Open source thrives on collaboration. ... Central to this ecosystem are Codes of Conduct (CoCs), designed to ...

Open source thrived back in the early 2000s too. I don't remember anything even remotely resembling a Code of Conduct back then, although I wasn't paying attention. Was it a thing?

I found that Drupal adopted CoC in 2010, and Ubuntu had one already no later than 2005 (the "Ubuntu Management Philosophy" book from 2005 mentions it).

KingOfCoders•5mo ago
Sorry, I'm that old

https://en.wikipedia.org/wiki/Etiquette_in_technology

tomovo•5mo ago
> Maybe the comment that voiced my anger crossed a line, too. I take full responsibility for that. But I think that this provoked reaction is understandable after all the time and effort spent to solve this issue constructively by a non-technical person.

> it further demonstrated my good intentions

> "you are arguing with a law professional"

> "AI summary," ... shows the effort I am willing to invest ...

Wow.

KingOfCoders•5mo ago
Developers feel the end times coming: out come the pitchforks at any mention of AI. Reading the article, it does not seem to be an auto-generated AI bug report; nevertheless the pitchforks are out, and the mob is even infiltrating unrelated bug threads on a different project to burn the heretic. The end times are coming.
axman6•5mo ago
After reading through the relevant threads, I'm completely on the side of the LLVM CoC committee; this user is just wasting their time. Providing minimal steps to reproduce an issue is the bare minimum for reporting issues on open source projects; it is not the job of the developers to show that there is an issue, particularly when some of them attempted to do so and were also unable to. The AI content in the LLVM and Mesa threads was actively misleading, confidently stating absolute nonsense, not even close to anything that was true, but still 100% confident. It's misinformation, bordering on disinformation.

It actually reminds me of the [OSS Sabotage book](https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...)'s section on General Interference with Organizations and Production (page 28):

    (11) General Interference with Organizations and Production
        (a) Organizations and Conferences 
            (1) Insist on doing everything through “channels.” Never permit 
                short-cuts to be taken in order to expedite decisions.
            (2) Make “speeches.” Talk as frequently as possible and at great 
                length. Illustrate your “points” by long anecdotes and accounts 
                of personal experiences. Never hesitate to make a few appropriate 
                “patriotic” comments.
            (3) When possible, refer all matters to committees, for “further 
                study and consideration.” Attempt to make the committees as 
                large as possible—never less than five.
            (4) Bring up irrelevant issues as frequently as possible.
            (5) Haggle over precise wordings of communications, minutes, resolutions.
            (6) Refer back to matters decided upon at the last meeting and attempt 
                to re-open the question of the advisability of that decision.
            (7) Advocate “caution.” Be “reasonable” and urge your fellow-conferees
                to be “reasonable” and avoid haste which might result in 
                embarrassments or difficulties later on.
            (8) Be worried about the propriety of any decision—raise the 
                question of whether such action as is contemplated lies within 
                the jurisdiction of the group or whether it might conflict with 
                the policy of some higher echelon.
itsanaccount•5mo ago
"actually do your job"

What an amazingly effective phrase to get open source developers to do what you want. /s

malcolmgreaves•5mo ago
So, is this a lawyer who is using his normal legal tactics of intimidation against LLVM devs who are donating their time to provide open source software? And is his aim that, because he’s incompetent, he messed up multiple parts of the process and got his own feelings hurt, and thus now wants…other people to coddle his feelings?

EDIT -

> Once again these two Gentoo developers showed a lack of good manners.

…

> hold a personal grudge against me

Yes, indeed this non-technical person seems to have found that while they don't have a mind sharp enough for software, nor the respect and understanding that they can't talk to people the same way they do as a lawyer, they're well on their way into the subculture of posting their emotional rants onto the internet. (haha!)

jcranmer•5mo ago
> He derided my attempt to use an AI summary to bridge a communication gap (I explicitly stated I'm not a programmer)

LLVM has already found that AI summaries tend to provide negative utility when it comes to bug reports, and it has a policy of not using them. The moment you admit "an AI told me that...", you've told every developer in the room that you don't know what you're doing, and very likely, trying to get any useful information out of you to resolve the bug report is going to be at best painful. (cf. https://discourse.llvm.org/t/rfc-define-policy-on-ai-tool-us...)

Looking over the bug report in question... I disagree with the author here. The original bug report is "hi, you have lots of misnamed compiler option warnings when I build it with my toolchain" which is a very unactionable bug report. The scripts provided don't really provide a lot of insight into what the problem might be, and having loads and loads of options in a configure command increases the probability that it breaks for no good reason. Also, for extra good measure, the link provided is to the latest version of a script, which means it can change and no longer reproduce the issue in question.

Quite frankly, the LLVM developer basically responded with "hey, can you provide better, simpler steps to reproduce?", to which the author responds [1] "no, you should be able to figure it out from what I've given already." Which, if I were in the developer's shoes, would cause me to silently peace out at that moment.

At the end of the day, what seems to have happened to me is that the author didn't provide sufficient detail in their initial bug report and bristled quite thoroughly at being asked to provide more detail. Eli Schwartz might have crossed the line in response, but the author here was (to me) quite clearly the first person to have thoroughly crossed the line.

[1] Direct link, so you can judge for yourself if my interpretation is correct: https://github.com/llvm/llvm-project/issues/72413#issuecomme...

gyesxnuibh•5mo ago
Yeah I'm on the maintainers' side with the whole

> Do your job

Which is a volunteer-based job lol. Even if it was said in a heated argument, the bug reporter never really apologized from what I read.

Maybe that's a "strawman" though

axman6•5mo ago
> Do your job

"I'm a volunteer, my job is to choose where I volunteer my time, and I won't be volunteering it for free for this".

QuadmasterXLII•5mo ago
The author of this blog post does not come off as well as he thinks he does.
geocar•5mo ago
Agreed.

I feel like it's probably a Google VP who works on Gemini astroturfing.

axman6•5mo ago
This would have me astroturfing it out the window after seeing this nonsense.

The output in their Mesa project bug report is outright misinformation, it sounds completely plausible but is absolute nonsense. This is the true danger of AI, it is so convincingly confident that people forget to question it, or in this case, don't even have the tools to begin questioning it. It's actively unhelpful at best.

GrantMoyer•5mo ago
Direct link to the GitHub thread the post is about: https://github.com/llvm/llvm-project/issues/72413
kordlessagain•5mo ago
Seyfarth admits he might’ve crossed a line, but he always wraps it in some version of “yeah, but look what they did first.” That’s textbook rationalization. He can’t be wrong if he was provoked. Reading more of his stuff, it’s clear the guy has a serious fixation on procedural control. As long as the system works in his favor, he’s its biggest fan. But the second it doesn’t validate him? He flips the table, blames everyone else, and rebrands it as a systemic failure. What starts as a disagreement turns into a legal crusade every time.
TheDong•5mo ago
Reading through the github issues, the author of this article comes off as very rude and entitled.

I'm on the side of the CoC committee who told the author they engaged without enough consideration or kindness.

Reporting bugs is nice. It's less nice if, when a maintainer asks for a clearer reproduction, you respond with "I already gave you a reproduction, even if you have to edit it a little. I'm not a programmer, all I can give you is some AI spam. I'll leave it up to you to do your jobs" (edited only lightly from what the author really wrote).

KingOfCoders•5mo ago
"I'll leave it up to you to do your jobs"

Because the maintainers expect him to submit a patch when he has stated that he is not a developer, and expect him to reduce the build scripts when he can't do that. When he points that out, the dev tells him they don't expect him to be a developer, when a few comments above they did exactly that. That is classic passive-aggressive behaviour.

The dev also writes on their page as the top item on what they do:

"fixing paper cuts for users, so all sorts of bugs;"

jmull•5mo ago
That's not fair. The ask was this:

"Please try to minimise the steps required to reproduce it rather than producing large scripts with options that definitely won't work for me."

The guy doesn't have to do that, but then, he can hardly expect that people will want to donate their own time to help him with his problem.

Now, I get that he may not have known entirely how to proceed, but instead of asking how, he just says "no" and demands action.

That doesn't leave the dev anywhere to go -- without a way to reproduce the problem they really can't produce a fix.

So only then does the dev say "You're free to propose a patch yourself instead" which I think is pretty obviously rhetorical, meant to point out that there aren't any good alternatives if you don't want the dev's help.

It's all so strangely entitled -- the dev is asking for only the basic minimum of what's needed to actually fix the user's problem and now we've got people trying to shame them on HN.

TheDong•5mo ago
Sam didn't expect him to submit a patch at first; he said that _after_ the author refused to cooperate and was an ass.

The expectation to have a reasonable reproducer makes total sense, and if your reporter can't provide a clear reproduction, well, the developer can spend time on the bug but they're not obligated to. Our author was speaking like he was entitled to Sam's time.

I do agree "patches welcome" can be pretty passive aggressive, but in this case it was after our user was already an entitled asshole, and after our user posted AI slop, so I can understand why Sam might feel like being short.

Also, it's just wild that a "non-programmer" is submitting bug reports to a compiler, and then defending themselves with "but I'm not a programmer". Who cares about compiler warnings? Programmers. Compiler warnings are literally just for programmers.

Compilers are one of the projects where the devs actually can and should expect 100% of their users to be programmers, by definition. Why else would you be running a compiler?

I guess maybe the director of the CSI: Cyber show would care about them because they'd make the show look more l33t h4x0r, but I'm really struggling to think of any other audience for compiler errors.

yeputons•5mo ago
> Compilers are one of the projects where the devs actually can and should expect 100% of their users to be programmers, by definition. Why else would you be running a compiler?

Following some random instructions for "downloading good GenAI software from GitHub".

watusername•5mo ago
Looking at the whole interaction as well as the AI patch (https://github.com/llvm/llvm-project/pull/125602#issuecommen...) the author submitted, I have to disagree. It removes the flag setting altogether and adds useless code. It demonstrates that the author really has no understanding of the code, which may be okay for your weekend SaaS but definitely not for build system code in critical compiler infrastructures. To put it bluntly: This _is_ AI slop.

There's no denying that AI is helpful, _when_ the human has some baseline knowledge to check the output and steer the model in the correct direction. In this case, it's just wasting the maintainers' time.

I've seen many instances of this happening in the support channels for Nix/NixOS, which has been around long enough for the models to give plausible responses, yet too niche for them to emit usable output without sufficient prompting:

"No, this won't work because [...]" "What about [another AI response that is incorrect]?" (multiple back-and-forths, everyone is tired)

axman6•5mo ago
The Mesa project example(https://gitlab.freedesktop.org/mesa/mesa/-/issues/13022) is even more deranged - I came to this story from someone on Mastodon recommending people proactively ban this person from their project. I'm not personally in favour of doing that, but this is the closest I've been to thinking that's a reasonable thing to do to prevent actively wasting time on nonsense busywork.
KingOfCoders•5mo ago
Reading the original issue (instead of the article):

"But I am not blaming you for not having a degree in software engineering."

But then

"And you admitted that the large scripts in question contain hardcoded information such as your personal computer's login username, clearly those scripts won't work out of the box on someone else's machine".

while the other committer ends with

"You're free to propose a patch yourself instead. "

So the committer is acknowledging that the user is not a software developer, but then the two of them demand that the user do things the user might not be able to do.

That's not going to work.

whstl•5mo ago
I don't think anyone involved ends up looking good.
KingOfCoders•5mo ago
I'd agree.
Zufield•5mo ago
You posted a bug that referenced stuff not actually in the thing you were filing a bug report against, and when asked to produce steps to replicate, you posted AI-generated slop. They were more polite to you than you deserved.