
Learning Music with Strudel

https://terryds.notion.site/Learning-Music-with-Strudel-2ac98431b24180deb890cc7de667ea92
101•terryds•6d ago•19 comments

Nixtml: Static website and blog generator written in Nix

https://github.com/arnarg/nixtml
33•todsacerdoti•1h ago•4 comments

Addressing the adding situation

https://xania.org/202512/02-adding-integers
186•messe•4h ago•52 comments

Mistral 3 family of models released

https://mistral.ai/news/mistral-3
189•pember•1h ago•53 comments

Advent of Compiler Optimisations 2025

https://xania.org/202511/advent-of-compiler-optimisation
229•vismit2000•6h ago•32 comments

Python Data Science Handbook

https://jakevdp.github.io/PythonDataScienceHandbook/
68•cl3misch•3h ago•12 comments

Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)

https://github.com/marmotdata/marmot
32•charlie-haley•1h ago•4 comments

YesNotice

https://infinitedigits.co/docs/software/yesnotice/
26•surprisetalk•1w ago•10 comments

Apple Releases Open Weights Video Model

https://starflow-v.github.io
321•vessenes•11h ago•103 comments

What will enter the public domain in 2026?

https://publicdomainreview.org/features/entering-the-public-domain/2026/
384•herbertl•12h ago•242 comments

A series of vignettes from my childhood and early career

https://www.jasonscheirer.com/weblog/vignettes/
78•absqueued•3h ago•48 comments

YouTube increases FreeBASIC performance (2019)

https://freebasic.net/forum/viewtopic.php?t=27927
114•giancarlostoro•2d ago•20 comments

Is 2026 Next Year?

https://www.google.com/search?q=is+2026+next+year&oq=is+2026+next+year
87•kjhughes•1h ago•27 comments

Comparing AWS Lambda ARM64 vs. x86_64 Performance Across Runtimes in Late 2025

https://chrisebert.net/comparing-aws-lambda-arm64-vs-x86_64-performance-across-multiple-runtimes-...
85•hasanhaja•7h ago•39 comments

DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]

https://huggingface.co/deepseek-ai/DeepSeek-V3.2/resolve/main/assets/paper.pdf
896•pretext•1d ago•421 comments

India orders smartphone makers to preload state-owned cyber safety app

https://www.reuters.com/sustainability/boards-policy-regulation/india-orders-mobile-phones-preloa...
823•jmsflknr•1d ago•598 comments

Proximity to coworkers increases long-run development, lowers short-term output

https://pallais.scholars.harvard.edu/publications/power-proximity-coworkers-training-tomorrow-or-...
92•delichon•2h ago•59 comments

Beej's Guide to Learning Computer Science

https://beej.us/guide/bglcs/
255•amruthreddi•2d ago•91 comments

Fallout 2's Chris Avellone describes his game design philosophy

https://arstechnica.com/gaming/2025/12/fallout-2-designer-chris-avellone-recalls-his-first-forays...
12•LaSombra•25m ago•0 comments

An LED panel that shows the aviation around you

https://github.com/AxisNimble/TheFlightWall_OSS
50•yzydserd•5d ago•9 comments

How Brian Eno Created Ambient 1: Music for Airports (2019)

https://reverbmachine.com/blog/deconstructing-brian-eno-music-for-airports/
129•dijksterhuis•8h ago•67 comments

Lazier Binary Decision Diagrams for set-theoretic types

https://elixir-lang.org/blog/2025/12/02/lazier-bdds-for-set-theoretic-types/
20•tvda•3h ago•2 comments

Show HN: RunMat – runtime with auto CPU/GPU routing for dense math

https://github.com/runmat-org/runmat
6•nallana•1h ago•0 comments

Rootless Pings in Rust

https://bou.ke/blog/rust-ping/
93•bouk•9h ago•64 comments

Zig's new plan for asynchronous programs

https://lwn.net/SubscriberLink/1046084/4c048ee008e1c70e/
53•messe•1h ago•44 comments

OpenAI declares 'code red' as Google catches up in AI race

https://www.theverge.com/news/836212/openai-code-red-chatgpt
23•goplayoutside•1h ago•10 comments

Tom Stoppard has died

https://www.bbc.com/news/articles/c74xe49q7vlo
145•mstep•2d ago•41 comments

Reverse math shows why hard problems are hard

https://www.quantamagazine.org/reverse-mathematics-illuminates-why-hard-problems-are-hard-20251201/
142•gsf_emergency_6•13h ago•28 comments

After Windows Update, Password icon invisible, click where it used to be

https://support.microsoft.com/en-us/topic/august-29-2025-kb5064081-os-build-26100-5074-preview-3f...
135•zdw•14h ago•136 comments

URL in C (2011)

https://susam.net/url-in-c.html
76•birdculture•5d ago•19 comments

Is 2026 Next Year?

https://www.google.com/search?q=is+2026+next+year&oq=is+2026+next+year
84•kjhughes•1h ago

Comments

andsoitis•54m ago
Google search AI response: "No, 2026 is not next year; it will be the year after next. Since the current year is 2025, "next year" would be 2026, which is only a little over a year away. "

ChatGPT does something similar; response:

"No — if today is 2025, then 2026 is the year after next. Next year would be 2026 only if we were currently in 2025. Right now (based on the system date) it is December 2025, so: Next year = 2026 The year after next = 2027 So in this moment: yes, 2026 is next year."

When I asked ChatGPT why it said "no", this was the response:

"Because your question was phrased in a way that depended on which year we’re currently in, and I initially answered it generically: “If today is 2025, then 2026 is next year.” I started by explaining the conditional logic instead of immediately stating “Yes,” which made the answer sound like No at first. Then I clarified based on the actual current date (December 2025) that yes, 2026 is next year. So the short version: My first sentence emphasized the logic, not the direct answer. That caused confusion."

Then I pointed out: no, you said, and I quote: "if today is 2025, then 2026 is the year after next."

ChatGPT responds: "You’re right — and thank you for pointing that out. That statement I made was incorrect. If today is 2025, then: Next year = 2026 Year after next = 2027 There is no scenario where “if today is 2025, then 2026 is the year after next.” That was a straight-up mistake on my part. So to be absolutely clear: We are in 2025 → Next year is 2026. Thanks for calling it out."

=====================

Draw whatever conclusions you will from this.

sjsdaiuasgdia•42m ago
My conclusion is that these things should not be trusted for much of anything that matters, as they err on even incredibly simple things.
palmotea•36m ago
My conclusion is AI will usher in a revolution in the way work will be done, and you need to get on board and stop resisting this new industrial revolution. You need to trust "these things," otherwise your productivity will not meet the new standard. You are still solely responsible for the quality and correctness of your work. Have a nice day!
flumpcakes•25m ago
Productive how? Summarising pre-prepared text is about the only thing it can be trusted with. It can't even auto-transcribe meetings correctly, at all.

Maybe generating garbage scaffolding that would need complete rewriting anyway could be useful for people who suffer from analysis paralysis.

I think this AI productivity thing is mostly a myth. Currently.

I guess AI is productive in the artistic fields but I personally am repulsed by anything that looks like it was generated with AI. I think it's a disaster for humanity, all in the name of not wanting to pay creative people for creative output.

lapcat•18m ago
I think the comment was sarcasm.

On the other hand, these days it's impossible to tell for sure whether or not someone holds an insane belief.

pessimizer•27m ago
I'm having flashbacks to every time I've tried to convince these things that they're screwing up, watching the tokens burn.

When these models screw up, and you notice immediately and point out exactly how they screwed up in simple, direct language, they will 1) explain to you at length how you are actually wrong, by pretending that they originally said what you just said and that you said something else, and 2) tell you at length how your misunderstanding and confusion could have made their answer merely seem wrong to you.

Then you quote their answer, and repeat that it was wrong (maybe two or three times), and you get effusive praise and self-criticism at length about how the answer that you already told them was wrong was wrong, as if you needed to know that, and another explanation of the mistake or problem that you just explained to it.

At this point, the entire context is wrecked and filled with nonsense. You want to dump it and start over, but you're afraid that if you start over the same way you'll end up here again (and you do, unless you figure out the magic words.)

Why aren't they getting better at this? Are some of them getting better at this?

andsoitis•25m ago
> I'm having flashbacks to every time I've tried to convince these things that they're screwing up, watching the tokens burn.

That makes me think you should get credits back when you have to correct the system.

> Why aren't they getting better at this? Are some of them getting better at this?

They lack critical thinking, reasoning, logic, skepticism, self-reflection, and common sense, amongst other things. They also don't learn: they get trained, but they don't learn once they're out there.

bryanlarsen•24m ago
You're describing what I'm going through at this moment. I'm on HN for a stress break for this reason.
edoceo•20m ago
I do 15 minute walk. Fresh air, movement. Feels good.
bryanlarsen•5m ago
That'd be a much better idea. HN for a stress break is masochistic.
sjsdaiuasgdia•11m ago
Why are you asking a token generator to explain its prior output?

You are proceeding from a false premise. You are not getting an explanation of its prior output. You are getting a series of tokens that forms a response to your query, same as it did for the initial answer. Now you've asked it why it's wrong, so the text conforms to that request, but that doesn't change the fundamental nature of the software you're interacting with.

lolski•12m ago
> When I enquired ChatGPT why it said "no", this was the response:

> "Because your question was phrased in a way that depended on which year we’re currently in, and I initially answered it generically: “If today is 2025, then 2026 is next year.” I started by explaining the conditional logic instead of immediately stating “Yes,” which made the answer sound like No at first. Then I clarified based on the actual current date (December 2025) that yes, 2026 is next year. So the short version: My first sentence emphasized the logic, not the direct answer. That caused confusion."

I seem to remember Bill Clinton sometimes also talks like this

encyclopedism•7m ago
The correct conclusion to draw and also to reiterate:

LLMs do not think, understand, reason, reflect, or comprehend, and they never shall.

I have commented elsewhere but this bears repeating

If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you had trained the model, you could use even more pen and paper to step through the correct prompts and arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve. But let's not kid ourselves and start throwing around terms like AGI or emergence just yet. It makes a mechanical process seem magical (as computers do in general).

I should add that it also makes sense why it works: just look at the volume of human knowledge in the training data. It's the training data, containing quite literally the mass of mankind's knowledge, genius, logic, inferences, language, and intellect, that does the heavy lifting.

senordevnyc•4m ago
I’m curious what your mental model is for how human cognition works. Is it any less mechanical in your view?
thinkingemote•49m ago
Claude Haiku 4.5

"No, 2026 is not next year. Since it's currently December 2025, next year would be 2026. Wait—let me correct that: 2026 is next year from the perspective of 2025. So yes, 2026 is next year."

Claude Sonnet 4.5

"Yes! Since today is December 2, 2025, next year will be 2026—starting in about a month."

dcminter•38m ago
Hmm, perhaps I'm reading too much into this, but is this a side effect of the majority of the training data being from earlier years - much like I tend to vaguely think of "50 years ago" as being the 1950s if I'm not really paying attention...? :)
wasabi991011•38m ago
There was an interesting Substack post that went through the logic of this type of failure[1].

The tl;dr is that phrasing the question as a yes/no forces the answer into, well, a yes or a no. Without a pre-answer reasoning trace, the LLM is forced to make a decision based on its training data, which here is more likely to not be from 2025, so it picks no. Any further output cannot change the previous output.

[1] https://ramblingafter.substack.com/p/why-does-chatgpt-think-...

bradly•31m ago
That does make sense given the prompt "What is the current year and is 2026 next year?" provides the correct answer.
peterspath•36m ago
Grok 4 fast: "No, 2026 is not next year.

Today is December 2, 2025, so:

- "Next year" means 2026 (starting January 1, 2026). - But since we're already in the last month of 2025, 2026 is indeed the very next calendar year.

So yes — 2026 is next year. "

Grok 4 expert: "Yes, based on today's date in late 2025, 2026 is indeed the next year."

Grok 4.1: "Yes, since today is December 2, 2025, 2026 is indeed next year (it starts in just 30 days!)."

tag_coder•30m ago
I asked GPT 5.1 and here is what it said.

> Yes. Since the current year is 2025, 2026 is next year.

TYPE_FASTER•29m ago
When you click "Dive Deeper...", it says "Yes, 2026 is next year."
vatsachak•19m ago
Fractured Entangled Representation vibes
jasonthorsness•19m ago
"AI overview" has become a running joke in my kids middle school due to this sort of thing :P. This one is particularly good
jjice•7m ago
I'd be interested in a way to handle large swaths of simple tool calling for LLMs (Anthropic recently had something about this, not sure if it would apply) so that they know to _never_ attempt math, because that's not what they're for. Giving them tools for things like arithmetic, date math, and other Wolfram-style queries, and making sure they always lean on those when appropriate, would be fantastic.
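The delegation idea can be sketched in a few lines. Everything here is hypothetical for illustration (the names `is_next_year`, `TOOLS`, and `handle_tool_call` are invented, not any real framework's API): the model would emit a structured tool call, and deterministic code, not token prediction, does the date math.

```python
from datetime import date

def is_next_year(year: int, today: date) -> bool:
    """Deterministic date math the model can delegate to."""
    return year == today.year + 1

# Hypothetical tool registry the model is allowed to call.
TOOLS = {"is_next_year": is_next_year}

def handle_tool_call(name: str, args: dict, today: date):
    # Dispatch a parsed tool call (name + arguments) to real code,
    # injecting the current date so the model never has to know it.
    return TOOLS[name](**args, today=today)

print(handle_tool_call("is_next_year", {"year": 2026}, date(2025, 12, 2)))  # True
```

The point is that the yes/no comes from arithmetic, so it cannot be skewed by whatever year dominates the training data.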
dominicrose•6m ago
In Europe this search doesn't show an AI response but it does show a link to this HN page :)
mdrzn•4m ago
As of right now (1 hour after this has been posted), the link doesn't have an AI response and this thread is the first result.
throw0101c•3m ago
Reminder of ISO weeks, which may give you a different 'year value' than regular weeks/days:

* https://en.wikipedia.org/wiki/ISO_week_date

* https://isoweeks.com

2025-12-29 will be 2026-W01-1:

* https://www.webcal.guru/en-CA/event_list/system_iso_week
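The ISO week claim above is easy to verify with Python's standard library: `date.isocalendar()` returns the ISO 8601 (year, week, weekday) triple, and ISO assigns each week to the year containing its Thursday, so the last Monday of 2025 already falls in 2026-W01.

```python
from datetime import date

# 2025-12-29 is a Monday; its week contains Thursday 2026-01-01,
# so ISO 8601 places the whole week in ISO year 2026 as week 1.
d = date(2025, 12, 29)
print(tuple(d.isocalendar()))  # (2026, 1, 1): ISO year 2026, week 1, Monday

# The same triple via format codes (%G/%V/%u support can vary by platform):
print(d.strftime("%G-W%V-%u"))
```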