
Microsoft investigates Israeli military's use of Azure cloud storage

https://www.theguardian.com/technology/2025/aug/09/microsoft-israeli-military-azure-cloud-investigation
1•laimewhisps•29s ago•0 comments

Show HN: Tovideo – AI Video Generator with 9 Models (Google Veo 3 etc.)

https://apps.apple.com/us/app/ai-video-generator-tovideo/id6748954744
1•incendies•1m ago•0 comments

Show HN: AI That Rewrites Your Shopify Store for Every Visitor in Real Time

1•HariVP03•2m ago•0 comments

Remarkable News in Potatoes

https://www.theatlantic.com/science/archive/2025/07/potato-tomato-evolution-hybrid/683721/
1•naves•2m ago•0 comments

The new American shopping mall is less Macy's, more church, bowling, bookstore

https://www.cnbc.com/2025/08/09/americas-dying-shopping-malls-have-surprising-rebound-in-store.html
1•rntn•3m ago•0 comments

Show HN: Custom statusline for Claude Code with Git/PR/environment info

https://gist.github.com/dhkts1/55709b1925b94aec55083dd1da9d8f39
1•dhkts1•3m ago•0 comments

Wassette: Microsoft's Rust-Powered Bridge Between WASM and MCP

https://thenewstack.io/wassette-microsofts-rust-powered-bridge-between-wasm-and-mcp/
1•weinzierl•5m ago•0 comments

Honky-Tonk Tokyo (2020)

https://www.afar.com/magazine/in-tokyo-japan-country-music-finds-an-audience
2•NaOH•6m ago•0 comments

JD Vance's team had water level of Ohio river raised for family's boating trip

https://www.theguardian.com/us-news/2025/aug/06/jd-vance-ohio-lake-water-levels
2•LopRabbit•9m ago•0 comments

Physical Media Is Cool Again. Streaming Services Have Themselves to Blame

https://www.rollingstone.com/culture/culture-features/physical-media-collectors-trend-viral-streamers-1235387314/
3•coloneltcb•10m ago•0 comments

Citizen Lab director warns cyber industry about US authoritarian descent

https://techcrunch.com/2025/08/06/citizen-lab-director-warns-cyber-industry-about-us-authoritarian-descent/
1•throw0101d•10m ago•0 comments

Show HN: Goat – An open-source social debate platform

https://www.goat.uz
1•umarov•11m ago•0 comments

Curious about the training data of OpenAI's new GPT-OSS models? I was too

https://twitter.com/jxmnop/status/1953899426075816164
1•tosh•15m ago•0 comments

Linus Torvalds Rejects RISC-V Changes for Linux 6.17: "Garbage"

https://www.phoronix.com/news/Linux-6.17-RISC-V-Rejected
1•hortense•17m ago•0 comments

Batch Inference Benchmarks

https://outerbounds.com/blog/autonomous-inference
1•locomotive-mp•20m ago•0 comments

Google rolls out AI coding tool for GitHub

https://www.infoworld.com/article/4036153/google-rolls-out-ai-coding-tool-for-github-repos.html
2•msolujic•21m ago•0 comments

Breaking through the Senior Engineer ceiling

https://incident.io/blog/breaking-through-the-senior-engineer-ceiling
1•shelika•24m ago•0 comments

We built Chipp – 1,650 users have moved $54K since March

https://chipp.it/index.html
1•ompatil94•25m ago•1 comments

Debian 13 "Trixie" Released

https://micronews.debian.org/2025/1754772107.html
2•todsacerdoti•26m ago•0 comments

Ch.at – a lightweight LLM chat service accessible through HTTP, SSH, DNS and API

https://ch.at/
2•ownlife•27m ago•0 comments

Inspector: Visual testing tool for MCP servers

https://github.com/modelcontextprotocol/inspector
1•tosh•31m ago•0 comments

Ask HN: OpenAI GPT-5 API seems to be significantly slower – is this expected?

3•tlogan•31m ago•2 comments

Learnings from two years of using AI tools for software engineering

https://newsletter.pragmaticengineer.com/p/two-years-of-using-ai
1•rbanffy•34m ago•0 comments

Ask HN: Would you still recommend SICP in 2025?

1•dondraper36•36m ago•0 comments

Textile scientist on unshrinking clothes that have shrunk in the wash

https://theconversation.com/why-do-some-clothes-shrink-in-the-wash-a-textile-scientist-explains-how-to-unshrink-them-259388
1•gsf_emergency_2•37m ago•0 comments

Episode 2 – Wolf Rock Lighthouse maintenance visit and tour [video]

https://www.youtube.com/watch?v=m81KWrfJED0
2•toomuchtodo•40m ago•0 comments

Google Gemini's Self Loathing

https://www.businessinsider.com/gemini-self-loathing-i-am-a-failure-comments-google-fix-2025-8
2•FergusArgyll•40m ago•1 comments

Show HN: I Started Building a Clay Alternative

https://www.enrichspot.com
1•xnoyzi•41m ago•0 comments

AOL discontinues dial-up Internet service

https://appleinsider.com/articles/25/08/09/you-had-mail-aol-finally-discontinues-dial-up-internet-service
2•bookofjoe•42m ago•1 comments

Pkl Lang for Writing and Maintaining Config

https://pkl-lang.org/index.html
1•paradox460•48m ago•0 comments

Don Knuth on ChatGPT (07 April 2023)

https://cs.stanford.edu/~knuth/chatGPT20.txt
59•b-man•2h ago

Comments

wslh•2h ago
It would be great to have an update from Knuth. There is no other Knuth.
vbezhenar•1h ago
For question 3, ChatGPT 5 Pro gave a better answer:

> It isn’t “wrong.” Wolfram defines Binomial[n,m] at negative integers by a symmetric limiting rule that enforces Binomial[n,m] = Binomial[n,n−m]. With n = −1, m = −1 this forces Binomial[−1,−1] = Binomial[−1,0] = 1. The gamma-formula has poles at nonpositive integers, so values there depend on which limit you adopt. Wolfram chooses the symmetry-preserving limit; it breaks Pascal’s identity at a few points but keeps symmetry. If you want the convention that preserves Pascal’s rule and makes all cases with both arguments negative zero, use PascalBinomial[−1,−1] = 0. Wolfram added this explicitly to support that alternative definition.

Of course this particular question might have been in the training set.
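
The "values there depend on which limit you adopt" point in the quote can be checked numerically. Below is a minimal Python sketch using the gamma formula Binomial(n, m) = Γ(n+1) / (Γ(m+1) Γ(n−m+1)); the two approach paths are my own choices for illustration, not anything from Wolfram's documentation:

```python
import math

def binom_gamma(n, m):
    """Generalized binomial via the gamma formula: Γ(n+1) / (Γ(m+1) Γ(n-m+1))."""
    return math.gamma(n + 1) / (math.gamma(m + 1) * math.gamma(n - m + 1))

eps = 1e-6
# Approach (n, m) = (-1, -1) along the symmetric path n = m = -1 + eps:
symmetric = binom_gamma(-1 + eps, -1 + eps)
# Approach along a skewed path, n = -1 + eps, m = -1 + 2*eps:
skewed = binom_gamma(-1 + eps, -1 + 2 * eps)

print(symmetric)  # ≈ 1.0, the limit the symmetry-preserving convention picks
print(skewed)     # ≈ 2.0, a different path gives a different limit
```

Because (-1, -1) sits on a pole of the gamma formula, the value at that point is genuinely convention-dependent, which is exactly why Wolfram offers both Binomial and PascalBinomial.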

Honestly, 2.5 years feels like infinity when it comes to AI development. I'm using ChatGPT very regularly, and while it's far from perfect, it has recently given obviously wrong answers only very rarely. I can't say anything about ChatGPT 5; I feel like in my conversations with AI I've reached my limit, so I'd hardly notice the AI getting smarter, because it's already smart enough for my questions.

seanhunter•53m ago
On Wolfram specifically, GPT-5 is a huge step up from GPT-4. One of the first things I asked it was to write me a Mathematica program to test the basic properties (injectivity, surjectivity, bijectivity) of various functions. The notebook it produced was

1) 100% correct

2) Really useful (i.e., it includes various things I didn't ask for but that are really great, like a little manipulator to walk through the function at various points and visualize what the mapping is doing)

3) Built in a general way, so I can easily change the mapping to explore different types of functions and how they work.

It seems very clear (both from what they said in the launch demos etc. and from my experience of trying it out) that performance on coding tasks has been an area of massive focus, and the results are pretty clear to me.
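
For readers unfamiliar with what such a checker does, here is a rough Python sketch over finite domains (this is my own illustration, not the Mathematica notebook the commenter describes; the function and sets are hypothetical examples):

```python
def check_properties(f, domain, codomain):
    """Classify f: domain -> codomain as injective/surjective/bijective."""
    image = [f(x) for x in domain]
    injective = len(set(image)) == len(image)   # no two inputs map to the same output
    surjective = set(image) >= set(codomain)    # every element of the codomain is hit
    return {"injective": injective,
            "surjective": surjective,
            "bijective": injective and surjective}

# Example: f(x) = x^2 on {-2,...,2} -> {0, 1, 4} is surjective but not injective
print(check_properties(lambda x: x * x, range(-2, 3), [0, 1, 4]))
```

The Mathematica version would presumably work symbolically and add interactive visualization on top, but the underlying classification logic is the same.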

tra3•50m ago
Right, I’m still trying to wrap my mind around how GPTs work.

If we keep retraining them on the currently available datasets, then the questions that stumped ChatGPT 3 are in the training set for ChatGPT 5.

I don’t have the background to understand the functional changes between ChatGPT 3 and 5. It can’t be just the training data, can it?

godelski•12m ago

  > gave *obviously wrong* answers very rarely.
I don't think this is a reason I'd trust it; actually, it's a reason I don't trust it.

There's a big difference between "obviously wrong" and "wrong". It is not objective but depends entirely on the reader/user.

The problem is that it optimizes deception alongside accuracy. It's a useful tool, but good design says we should want to make errors loud and apparent. That's because we want tools to complement us, to make us better. But if errors are subtle, nuanced, or just difficult to notice, then there is actually a lot of danger in the tool (true for any tool).

I'm reminded of the Murray Gell-Mann Amnesia effect: you read something in the newspaper about a topic you're an expert in and lambast it for its inaccuracies, but then turn the page to a topic you don't have domain knowledge of and trust it.

The reason I bring up MGA is that we don't often ask GPT about things we know about or have deep knowledge of. But this is a good way to learn how much we should trust it. Pretend to know nothing about a topic you are an expert in. Are its answers good enough? If not, then be careful when asking questions you can't verify.

Or, I guess... just ask it to solve "5.9 = x + 5.11"
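
For the record, the arithmetic that prompt is testing (models have been known to misread the decimals as version numbers and answer as if 5.11 > 5.9):

```python
# Solving 5.9 = x + 5.11 for x:
x = 5.9 - 5.11
print(round(x, 2))  # 0.79
```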

ayhanfuat•52m ago
Previous discussion: Don Knuth plays with ChatGPT - May 20, 2023, 626 comments, 927 points https://news.ycombinator.com/item?id=36012360
krackers•46m ago
I'll never get over the fact that the grad student didn't even bother to use GPT-4, so this was using GPT-3.5 or something.
bigyabai•31m ago
It's not the end of the world. Both are equally "impressive" at basic Q&A, and GPT-4 is noticeably more sterile at writing prose.

Even if GPT-3.5 was noticeably worse for any of these questions, it's honestly more interesting for someone's first experience to be with the exaggerated shortcomings of AI. The slightly screwy answers are still emblematic of what you see today, so it all ended well enough, I think. It would've been a terribly boring exchange if Knuth's reply had just been "looks great, thanks for asking ChatGPT" with no challenging commentary.

rvba•44m ago
What is with these reposts?

Someone could at least run the same questions on the latest model and show the new answers.

Farming karma, Reddit style...

TZubiri•3m ago
I was reading yesterday about a Buddhist concept (albeit one quite popular in the West) called Beginner's Mind. I think this post represents it perfectly.

We are presented with a first reaction to ChatGPT; we must never forget how incredible this technology is, and never become accustomed to it.

Donald Knuth approached several of the questions from an absence of knowledge, asking questions as basic as "12. Write a sentence that contains only 5-letter words.", and was amazed not only by the correct answers, but also by incorrect answers that were parsed effectively and with semantic understanding.