frontpage.

Geizhals Preisvergleich Donates USD 10k to the Perl and Raku Foundation

https://www.perl.com/article/geizhals-donates-to-tprf/
98•oalders•1h ago•28 comments

Fuck, You're Still Sad?

https://bessstillman.substack.com/p/oh-fuck-youre-still-sad
120•LaurenSerino•1h ago•32 comments

Flipper Zero Geiger Counter

https://kasiin.top/blog/2025-08-04-flipper_zero_geiger_counter_module/
75•wgx•2h ago•22 comments

Slack has raised our charges by $195k per year

https://skyfall.dev/posts/slack
2167•JustSkyfall•14h ago•943 comments

TernFS – An exabyte scale, multi-region distributed filesystem

https://www.xtxmarkets.com/tech/2025-ternfs/
51•rostayob•1h ago•8 comments

KDE is now my favorite desktop

https://kokada.dev/blog/kde-is-now-my-favorite-desktop/
343•todsacerdoti•3h ago•285 comments

The quality of AI-assisted software depends on unit of work management

https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/
72•mogambo1•2h ago•37 comments

Luau – fast, small, safe, gradually typed scripting language derived from Lua

https://luau.org/
47•andsoitis•2h ago•11 comments

Midcentury North American Restaurant Placemats

https://casualarchivist.substack.com/p/order-up
94•NaOH•1d ago•23 comments

Automatic Differentiation Can Be Incorrect

https://www.stochasticlifestyle.com/the-numerical-analysis-of-differentiable-simulation-automatic...
26•abetusk•1h ago•4 comments

CERN Animal Shelter for Computer Mice

https://computer-animal-shelter.web.cern.ch/index.shtml
233•EbNar•8h ago•32 comments

WASM 3.0 Completed

https://webassembly.org/news/2025-09-17-wasm-3.0/
1001•todsacerdoti•21h ago•433 comments

Show HN: The text disappears when you screenshot it

https://unscreenshottable.vercel.app/?text=Hello
419•zikero•13h ago•139 comments

Meta Ray-Ban Display

https://www.meta.com/blog/meta-ray-ban-display-ai-glasses-connect-2025/
537•martpie•15h ago•775 comments

Pnpm has a new setting to stave off supply chain attacks

https://pnpm.io/blog/releases/10.16
128•ivanb•8h ago•83 comments

This Website Has No Class

https://aaadaaam.com/notes/no-class/
151•robin_reala•6h ago•71 comments

You Had No Taste Before AI

https://matthewsanabria.dev/posts/you-had-no-taste-before-ai/
167•codeclimber•3h ago•133 comments

CircuitHub (YC W12) Is Hiring Operations Research Engineers (UK/Remote)

https://www.ycombinator.com/companies/circuithub/jobs/UM1QSjZ-operations-research-engineer
1•seddona•6h ago

Nvidia buys $5B in Intel stock in seismic deal

https://www.tomshardware.com/pc-components/cpus/nvidia-and-intel-announce-jointly-developed-intel...
280•stycznik•4h ago•180 comments

Fast Fourier Transforms Part 1: Cooley-Tukey

https://connorboyle.io/2025/09/11/fft-cooley-tukey.html
55•signa11•6h ago•10 comments

Mirror Life Worries

https://www.science.org/content/blog-post/mirror-life-worries
23•etiam•4h ago•11 comments

Keeping SSH sessions alive with systemd-inhibit

https://kd8bny.com/posts/session_inhibit/
36•kd8bny•2d ago•13 comments

An Afternoon at the Recursive Café: Two Threads Interleaving

https://ipfs.io/ipfs/bafkreieiwashxhlv5epydts2apocoepdvjudzhpnrswqxcd3zm3i5gipyu
8•robertothais•3d ago•2 comments

Boring is good

https://jenson.org/boring/
256•zdw•2d ago•57 comments

One Token to rule them all – Obtaining Global Admin in every Entra ID tenant

https://dirkjanm.io/obtaining-global-admin-in-every-entra-id-tenant-with-actor-tokens/
270•colinprince•16h ago•41 comments

A better future for JavaScript that won't happen

https://drewdevault.com/2025/09/17/2025-09-17-An-impossible-future-for-JS.html
8•warrenm•22m ago•1 comments

Orange Pi RV2 $40 RISC-V SBC: Friendly Gateway to IoT and AI Projects

https://riscv.org/ecosystem-news/2025/09/orange-pi-rv2-40-risc-v-sbc-friendly-gateway-to-iot-and-...
92•warrenm•2d ago•83 comments

A postmortem of three recent issues

https://www.anthropic.com/engineering/a-postmortem-of-three-recent-issues
346•moatmoat•18h ago•109 comments

60 years after Gemini, newly processed images reveal details

https://arstechnica.com/space/2025/09/60-years-after-gemini-newly-processed-images-reveal-incredi...
40•rbanffy•2d ago•1 comments

YouTube addresses lower view counts which seem to be caused by ad blockers

https://9to5google.com/2025/09/16/youtube-lower-view-counts-ad-blockers/
403•iamflimflam1•1d ago•736 comments

New tools and features in the Responses API

https://openai.com/index/new-tools-and-features-in-the-responses-api
74•meetpateltech•4mo ago

Comments

skeptrune•3mo ago
Wow background mode looks awesome. I'm excited to work that into our UX for people. Live Q&A is such a dead interface at this point.

Reasoning summaries also look great. Anything that provides extra explainability is a win in my book.

pizzuh•3mo ago
It's great to see more and more adoption for MCP. I'm not sure it's the most bulletproof protocol, but it feels like it's in a strong lead, especially with OpenAI support.

I've been using Codex for the last 24 hours, and background mode boosts your output. You can have Codex work on n+ features async. I had it building a database model alongside frontend authentication, and it did both pretty well.

tedtimbrell•3mo ago
I'm quite surprised they're actually going with hosted mcp versus just implementing the mcp server locally and interacting with the api
nknj•3mo ago
you can use local mcp servers with the agents sdk: https://openai.github.io/openai-agents-python/mcp/

responses api is a hosted thing and so it made most sense for it to directly connect to other hosted services (like remote mcp servers).
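
As a rough sketch of what the linked Agents SDK docs describe, a local (stdio) MCP server can be wired in roughly like this; the filesystem server command and the agent prompt are placeholder assumptions, not anything from this thread:

  # Sketch only: a local (stdio) MCP server plugged into the Agents SDK.
  # Class names follow openai-agents-python; the filesystem server and
  # prompt are illustrative assumptions.
  import asyncio

  from agents import Agent, Runner
  from agents.mcp import MCPServerStdio

  async def main():
      # Launch a local MCP server as a subprocess and talk to it over stdio.
      async with MCPServerStdio(
          params={
              "command": "npx",
              "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"],
          }
      ) as fs_server:
          agent = Agent(
              name="Docs assistant",
              instructions="Answer questions using files from the MCP server.",
              mcp_servers=[fs_server],  # tools are discovered from the server
          )
          result = await Runner.run(agent, "Summarize README.md")
          print(result.final_output)

  asyncio.run(main())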

jasongill•3mo ago
I wish OpenAI would provide more clarity about the Assistants API deprecation, which has been announced as sunsetting in spring of 2026 in favor of the Responses API, but there have been no further updates on the timeline or migration plan.

Prior to the release of the Responses API, the Assistants API was the best way (for our use cases, at least) to interact with OpenAI's API, so hopefully some clarity on the plan for it is released soon (now that Responses API has some of the things that it was previously missing)

nknj•3mo ago
I hear you and really appreciate the patience here.

We're almost ready to share a migration guide. Today, we closed the gap between Assistants and Responses by launching Code Interpreter and support for multiple vector stores in File Search.

We still need to add support for Assistants and Threads objects to Responses before we can give devs a simple migration path. Working on this actively and hope to have all of this out in the coming weeks.

alasano•3mo ago
Interesting that you're migrating assistants and threads to the responses API, I presumed you were killing them off.

I started my MVP product with assistants and migrated to responses pretty easily. I handle a few more things myself but other than that it's not really been difficult.

beklein•3mo ago
On the announcement page they are saying that "...introducing updates to the file search tool that allow developers to perform searches across multiple vector stores...". On the docs, I still find this limitation: "At the moment, you can search in only one vector store at a time, so you can include only one vector store ID when calling the file search tool."

Anybody knows how searching multiple vector stores is implemented? The obvious plan would be to allow something like:

  "vector_store_ids": ["<vector_store_id1>", "<vector_store_id2>", ...]
nknj•3mo ago
sorry about the error in the docs. we're removing that call out.

`"vector_store_ids": ["<vector_store_id1>", "<vector_store_id2>"]` is exactly right. only 2 vector stores are supported at the moment.

akgfab•3mo ago
2 feels quite arbitrary and honestly not that much of an improvement. Any plans to up that limit?
mritchie712•3mo ago
we were using it for our agent in https://www.definite.app/ and I've been expecting it to die for almost a year considering the lack of updates.

We switched over to https://ai.pydantic.dev/ which I really like. LLM agnostic and the team is very receptive to feedback.

andrewrn•3mo ago
It was never really clear what the difference between the chat and responses APIs was. Anyone know the difference?
brittlewis12•3mo ago
chat completions is stateless — you must provide the entire conversation history with each new message; openai stores nothing (at least nothing that the downstream product _can use_) beyond the life of the request.

responses api, by contrast, is stateful — only send the latest message, and openai stores the conversation history, while keeping track of other details on behalf of the calling app, like parallel tool call states.

but i would say that since chat completions has become an informal industry standard, the responses api feels like an attempt by openai to move away from that shared interface, where swapping providers takes nothing more than a base url and a model id, toward a paradigm which requires data migration as well as replacement infrastructure (containers for code execution, for example).
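
To make that contrast concrete, a rough sketch with the Python SDK (model and prompts are placeholders): chat completions resends the whole history each turn, while responses chains turns through `previous_response_id`:

  # Sketch of the stateless vs. stateful difference (placeholder model/prompts).
  from openai import OpenAI

  client = OpenAI()

  # Chat Completions: stateless, so the caller resends the full history each turn.
  history = [{"role": "user", "content": "Hi, I'm planning a trip to Japan."}]
  chat = client.chat.completions.create(model="gpt-4.1", messages=history)
  history.append({"role": "assistant", "content": chat.choices[0].message.content})
  history.append({"role": "user", "content": "What should I pack?"})
  chat = client.chat.completions.create(model="gpt-4.1", messages=history)

  # Responses: stateful, so a follow-up sends only the new message plus a pointer
  # to the previous response; the conversation state lives on OpenAI's side.
  first = client.responses.create(model="gpt-4.1", input="Hi, I'm planning a trip to Japan.")
  followup = client.responses.create(
      model="gpt-4.1",
      input="What should I pack?",
      previous_response_id=first.id,
  )
  print(followup.output_text)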

nknj•3mo ago
one additional difference between chat and responses is the number of model turns a single api call can make. chat completions is a single-turn api primitive -- which means it can talk to the model just once. responses is capable of making multiple model turns and tool calls in a single api call.

for example, you can give the responses api access to 3 tools: a vector store with some user memories (file_search), the shopify mcp server, and code_interpreter. you can then ask it to look up some user memories, find relevant items in the shopify mcp store, and then download them into a csv file. all of this can be done in a single api call that involves multiple model turns and tool calls.

p.s. - you can also use responses statelessly by setting store=false.
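
Roughly, that example might look something like the sketch below; the MCP server URL, vector store ID, and tool settings are illustrative guesses, not an official recipe:

  # Sketch of the multi-tool, multi-turn example described above. The MCP server
  # URL, vector store ID, and container setting are illustrative assumptions.
  from openai import OpenAI

  client = OpenAI()

  response = client.responses.create(
      model="gpt-4.1",
      input=(
          "Look up my saved preferences, find matching items in the store, "
          "and download them into a CSV file."
      ),
      tools=[
          {"type": "file_search", "vector_store_ids": ["vs_user_memories"]},
          {
              "type": "mcp",
              "server_label": "shopify",
              "server_url": "https://example-shop.myshopify.com/api/mcp",
          },
          {"type": "code_interpreter", "container": {"type": "auto"}},
      ],
      store=False,  # per the p.s. above: responses can also be used statelessly
  )
  print(response.output_text)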

OutOfHere•3mo ago
What are my choices for using a custom tool? Does it come down to: function calling (single turn), MCP (multi-turn via Responses)? What else are my choices?

Why would anyone want to use Responses statelessly? Just trying to understand.

swyx•3mo ago
i think the original intent of responses api was also to unify the realtime experiences into responses - is that accurate?
nknj•3mo ago
we expect responses and realtime to be our 2 core api primitives long term — responses for turn by turn interactions and realtime for models requiring low latency bidirectional streams to/from the apps/models.
swyx•3mo ago
thank you for the correction!
andrewrn•3mo ago
This is very enlightening. You're right then, it does seem to partially be a strategic moat-building move by OpenAI
rafram•3mo ago
> Encrypted reasoning items: Customers eligible for Zero Data Retention (ZDR) can now reuse reasoning items across API requests

So, so weird that they still don't want you to see their models' reasoning process, to the point that even highly trusted organizations with ZDR contracts only get them in a black-box encrypted form. Gemini has no issue showing its work. Why can't OpenAI?

vessenes•3mo ago
Is this true? I can click to open o3's dialogue and see a running monologue. I guess it might be a summary of the actual reasoning though.
hhh•3mo ago
It is a summary
mediaman•3mo ago
Correct, you are not seeing the reasoning chains.
rafram•3mo ago
I may be giving Gemini too much credit, actually - seems like its "reasoning" may be a summary as well.
Doohickey-d•3mo ago
They changed it yesterday or so: it used to show the actual reasoning, but now it no longer does. The reasoning was quite useful for seeing if it was going down the wrong track; the summary is much less so.
epiccoleman•3mo ago
That's disappointing. I was getting a lot of utility from reading through the thoughts returned by Gemini when I used it in Cursor - occasionally even learning something new from its stream of "consciousness". Obfuscating the information because it can be used to train competitors seems misguided, if understandable.
vessenes•3mo ago
Agreed. Right now deepseek’s R1 has uncensored stream of consciousness in open weights. I think it’s interesting that teams feel the streams should be proprietary. They must be doing something a little different than R1, or it wouldn’t be worth the extra engineering work.
fermisea•3mo ago
Not only that. I have an agent product and I’m currently blocked from using their reasoning models on Azure for having asked for a chain of thought, which apparently is against the ToS.

The customer service itself was surreal enough that it was easier just to migrate to Anthropic

NitpickLawyer•3mo ago
> So, so weird that they still don't want you to see their models' reasoning process

It's not weird at all. R1-distills have shown that you can get pretty close to the real thing with post-training on enough completions. I believe gemini has also stopped showing the thinking steps (apparently the GLM series of open access models were heavily trained on gemini data).

ToS violations can't be enforced in any effective way, and certainly not cross-borders. Their only way to maintain whatever moat thinking models give them is to simply not show the thinking parts.

zvitiate•3mo ago
Google actually switched to an OpenAI-style system for 2.5 Pro's Chain-of-Thought yesterday on the Gemini app and AI Studio ("I did this; I did that," etc.). Apparently it still shows via the API, but it's not clear for how long. Also, in my experience, if you select the "Canvas" output, you still get the old-style CoT.

And yes, the above is true even if you are ULTRA.

You can still view your old thinking traces from prior turns and conversations.

zoogeny•3mo ago
My heart just broke to hear this. Although I honestly don't read the thinking output very often. But I had been cheekily copy-n-pasting the info for my own records.
knowsuchagency•3mo ago
I agree, but there's always Deepseek. They're publishing and open-sourcing more than anyone these days.
orasis•3mo ago
Reasoning models can now call tools during the reasoning process.
joshwarwick15•3mo ago
List of remote MCP servers to use here: https://github.com/jaw9c/awesome-remote-mcp-servers