
Demo: Vague AI coding prompts can lead to disastrous outcomes

https://www.youtube.com/playlist?list=PLYGWJjYNEIt2b9pzSWTt-EbmL6A3b8VBj
1•raeroumeliotis•14s ago•1 comment

A Peek into Tesla's Autonomous Future by VP Ashok Elluswamy at ICCV25 [video]

https://www.youtube.com/watch?v=IRu-cPkpiFk
1•tristanz•5m ago•0 comments

Oversight Committee Releases Additional Epstein Estate Documents

https://oversight.house.gov/release/oversight-committee-releases-additional-epstein-estate-docume...
1•latexr•6m ago•0 comments

Comparing the Best AI Upscalers for Video and Images

https://blog.fal.ai/comparing-the-best-ai-upscalers-for-video-and-images/
1•Skopimus•7m ago•1 comment

Upbeat moves from sensor tech to MCU to 'secret weapon' compute-in-memory

https://www.fiercesensors.com/sensors/upbeat-moves-sensor-tech-mcu-secret-weapon-compute-memory
1•initramfs•7m ago•0 comments

Altman and Masa Back a 27-Year-Old's Plan to Build a New Bell Labs Ultra

https://www.corememory.com/p/exclusive-altman-and-masa-back-episteme-louis-andre
1•slackpad•7m ago•0 comments

What Happens When You Turn 20

https://freakonomics.com/podcast/what-happens-when-you-turn-20/
1•impish9208•9m ago•0 comments

Cows, Jamaica, & Solar – Winning the Clean Energy Revolution

https://cleantechnica.com/2025/11/11/cows-jamaica-solar-winning-the-clean-energy-revolution/
1•debo_•12m ago•0 comments

Malicious Chrome Extension Exfiltrates Seed Phrases, Enabling Wallet Takeover

https://socket.dev/blog/malicious-chrome-extension-exfiltrates-seed-phrases
1•feross•12m ago•0 comments

Open-source AI browser. Switch between ChatGPT, Claude, Gemini, or local LLMs

https://github.com/aiexperti/atlaswebx
1•Atlasweb•15m ago•1 comment

I Wrote Task Manager – 30 Years Later, the Secrets You Never Knew [video]

https://www.youtube.com/watch?v=yQykvrAR_po
3•slazien•15m ago•0 comments

OpenAI releases GPT-5.1 alongside eight new ChatGPT personality styles

https://arstechnica.com/ai/2025/11/openai-walks-a-tricky-tightrope-with-gpt-5-1s-eight-new-person...
1•moelf•16m ago•1 comment

Making a Living as an Artist

https://essays.fnnch.com/make-a-living
1•wdaher•16m ago•0 comments

Parallel raises $100M Series A to build web infrastructure for agents

https://parallel.ai/blog/series-a
2•mnemonet•24m ago•0 comments

SF tech founders go to finishing school – and Garry Tan does not approve

https://sfstandard.com/2025/11/08/sf-tech-founders-go-finishing-school-garry-tan-does-approve/
1•cgoodmac•26m ago•0 comments

Multi-tenant AI chat for databases (dialektai.com)

https://dialektai.com/
1•edihasaj•30m ago•0 comments

/r/DataHoarder/: thousands of tapes preserved, 2004–2009 CNN/MSNBC/Fox News

https://old.reddit.com/r/DataHoarder/comments/1ouprgf/free_thousands_of_tapes_preserved_20042009/
3•Teever•30m ago•0 comments

Outpost Mono – a futuristic monospaced font designed for Martian outposts

https://github.com/ursooperduper/outpost-mono
1•yoz•30m ago•0 comments

Star Wars Q&A: Computers (1983)

https://www.flickr.com/photos/paxtonholley/albums/72157627392602868/
1•phil-pickering•32m ago•0 comments

Fei-Fei Li's World Labs speeds up the world model race with Marble

https://techcrunch.com/2025/11/12/fei-fei-lis-world-labs-speeds-up-the-world-model-race-with-marb...
1•cl42•33m ago•0 comments

Jasmine: A Simple, Performant and Scalable Jax-Based World Modeling Codebase

https://arxiv.org/abs/2510.27002
4•PaulHoule•38m ago•1 comment

Release Notes for Safari Technology Preview 232

https://webkit.org/blog/17601/release-notes-for-safari-technology-preview-232/
1•feross•40m ago•0 comments

AI Codes Electronics with Tscircuit

https://tscircuit.com
1•cat-whisperer•41m ago•0 comments

Can AI tell when someone's lying? MSU study says not yet

https://msutoday.msu.edu/news/2025/11/can-ai-tell-when-someones-lying-msu-study-says-not-yet
1•rmason•46m ago•0 comments

The Selfish (AI) Model

https://crocccante-engineering.vercel.app/doc
1•olirex99•46m ago•0 comments

Ask HN: AI video models are good at nude body movement => porn in training data?

1•sendos•47m ago•1 comment

Restructuring Vector Quantization with the Rotation Trick

https://arxiv.org/abs/2410.06424
1•fzliu•48m ago•0 comments

GPT-5.1 Instant and GPT-5.1 Thinking System Card Addendum [pdf]

https://cdn.openai.com/pdf/4173ec8d-1229-47db-96de-06d87147e07e/5_1_system_card.pdf
3•mnemonet•51m ago•0 comments

SBoM Diffing: Next Frontier for Supply Chain Security

https://worklifenotes.com/2025/11/12/sbom-diffing-next-frontier-for-supply-chain-security/
1•taleodor•52m ago•0 comments

Show HN: The Prompt Engineering Bible – Complete Guide to AI Communication

https://dimitriosmitsos.gumroad.com/l/prompt-engineering-bible
2•Cranot•55m ago•0 comments

New tools and features in the Responses API

https://openai.com/index/new-tools-and-features-in-the-responses-api
74•meetpateltech•5mo ago

Comments

skeptrune•5mo ago
Wow background mode looks awesome. I'm excited to work that into our UX for people. Live Q&A is such a dead interface at this point.

Reasoning summaries also look great. Anything that provides extra explainability is a win in my book.

pizzuh•5mo ago
It's great to see more and more adoption for MCP. I'm not sure it's the most bulletproof protocol, but it feels like it's in a strong lead, especially with OpenAI support.

I've been using Codex for the last 24 hours, and background mode boosts your output. You can have Codex work on n+ features async. I had it building a database model alongside frontend authentication, and it did both pretty well.

tedtimbrell•5mo ago
I'm quite surprised they’re actually going with hosted mcp versus just implementing the mcp server locally and interacting with the api
nknj•5mo ago
you can use local mcp servers with the agents sdk: https://openai.github.io/openai-agents-python/mcp/

responses api is a hosted thing and so it made most sense for it to directly connect to other hosted services (like remote mcp servers).
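
As a concrete sketch of the local option nknj mentions: with the agents SDK, a local MCP server is launched over stdio from a spec like the one below. The key names follow the linked docs, but the server package and directory are placeholders, and this is illustrative rather than authoritative.

```python
# Launch spec a local (stdio) MCP server takes in the openai-agents SDK.
# The filesystem server package and the "/tmp" directory are placeholders.
stdio_params = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
}

# Per the linked docs, usage would be roughly (not executed here):
#   async with MCPServerStdio(params=stdio_params) as server:
#       agent = Agent(name="assistant", mcp_servers=[server])
```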

jasongill•5mo ago
I wish OpenAI would provide more clarity about the Assistants API deprecation, which has been announced as sunsetting in spring of 2026 in favor of the Responses API, but there have still been no further updates on the timeline or migration plan.

Prior to the release of the Responses API, the Assistants API was the best way (for our use cases, at least) to interact with OpenAI's API, so hopefully some clarity on the plan for it is released soon (now that Responses API has some of the things that it was previously missing)

nknj•5mo ago
I hear you and really appreciate the patience here.

We're almost ready to share a migration guide. Today, we closed the gap between Assistants and Responses by launching Code Interpreter and support for multiple vector stores in File Search.

We still need to add support for Assistants and Threads objects to Responses before we can give devs a simple migration path. Working on this actively and hope to have all of this out in the coming weeks.

alasano•5mo ago
Interesting that you're migrating assistants and threads to the responses API, I presumed you were killing them off.

I started my MVP product with assistants and migrated to responses pretty easily. I handle a few more things myself but other than that it's not really been difficult.

beklein•5mo ago
On the announcement page they are saying that "...introducing updates to the file search tool that allow developers to perform searches across multiple vector stores...". On the docs, I still find this limitation: "At the moment, you can search in only one vector store at a time, so you can include only one vector store ID when calling the file search tool."

Anybody knows how searching multiple vector stores is implemented? The obvious plan would be to allow something like:

  "vector_store_ids": ["<vector_store_id1>", "<vector_store_id2>", ...]
nknj•5mo ago
sorry about the error in the docs. we're removing that call out.

`"vector_store_ids": ["<vector_store_id1>", "<vector_store_id2>"]` is exactly right. only 2 vector stores are supported at the moment.
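
Spelled out, the tool payload nknj confirms would look like the sketch below. The store IDs are placeholders, and the small guard encodes the two-store cap mentioned above (treat the shape as illustrative, not authoritative).

```python
# Build a file_search tool entry for the Responses API.
# Store IDs are placeholders; the two-store cap is per nknj's comment.
def build_file_search_tool(vector_store_ids):
    if not 1 <= len(vector_store_ids) <= 2:
        raise ValueError("file_search currently supports at most 2 vector stores")
    return {"type": "file_search", "vector_store_ids": list(vector_store_ids)}

tool = build_file_search_tool(["<vector_store_id1>", "<vector_store_id2>"])
# The dict is then passed as one entry of `tools` in client.responses.create(...)
```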

akgfab•5mo ago
2 feels quite arbitrary and honestly not that much of an improvement. Any plans to up that limit?
mritchie712•5mo ago
we were using it for our agent in https://www.definite.app/ and I've been expecting it to die for almost a year considering the lack of updates.

We switched over to https://ai.pydantic.dev/ which I really like. LLM agnostic and the team is very receptive to feedback.

andrewrn•5mo ago
It was never really clear what the difference between the chat and responses APIs was. Anyone know the difference?
brittlewis12•5mo ago
chat completions is stateless — you must provide the entire conversation history with each new message; openai stores nothing (at least nothing that the downstream product _can use_) beyond the life of the request.

responses api, by contrast, is stateful — only send the latest message, and openai stores the conversation history, while keeping track of other details on behalf of the calling app, like parallel tool call states.

but i would say that since chat completions has become an informal industry standard (it is so easy to swap out providers with nothing more than a base url and a model id), the responses api feels like an attempt by openai to break away from that shared interface toward a paradigm that requires data migration as well as replacement infrastructure (containers for code execution, for example).
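
The stateless/stateful split can be sketched as two request bodies. Field names follow the public docs, but the model name, IDs, and messages are placeholders; treat this as illustrative.

```python
# Chat Completions is stateless: the caller resends the full history
# on every turn, so the request grows with the conversation.
chat_request = {
    "model": "gpt-4.1",
    "messages": [
        {"role": "user", "content": "What does MCP stand for?"},
        {"role": "assistant", "content": "Model Context Protocol."},
        {"role": "user", "content": "Give an example server."},  # new turn
    ],
}

# Responses is stateful: send only the new input plus a pointer to the
# server-side conversation state (the response ID is a placeholder).
responses_request = {
    "model": "gpt-4.1",
    "previous_response_id": "resp_123",
    "input": "Give an example server.",
}
```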

nknj•5mo ago
one additional difference between chat and responses is the number of model turns a single api call can make. chat completions is a single-turn api primitive -- which means it can talk to the model just once. responses is capable of making multiple model turns and tool calls in a single api call.

for example, you can give the responses api access to 3 tools: a vector store with some user memories (file_search), the shopify mcp server, and code_interpreter. you can then ask it to look up some user memories, find relevant items in the shopify mcp store, and then download them into a csv file. all of this can be done in a single api call that involves multiple model turns and tool calls.

p.s. - you can also use responses statelessly by setting store=false.
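
nknj's three-tool example, written out as one request body. The vector store ID, server label, and MCP URL are placeholders, and the field names are a best-effort match to the public docs rather than a definitive reference; `"store": False` is the stateless mode from the p.s.

```python
# One Responses API call wired up with three tools; the model can take
# multiple turns and tool calls before the call returns.
request = {
    "model": "gpt-4.1",
    "input": "Find items matching my saved preferences and export them to CSV.",
    "tools": [
        {"type": "file_search", "vector_store_ids": ["<memories_store_id>"]},
        {"type": "mcp", "server_label": "shopify",
         "server_url": "https://example.com/mcp"},  # placeholder URL
        {"type": "code_interpreter", "container": {"type": "auto"}},
    ],
    "store": False,  # run statelessly, per the p.s. above
}
```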

OutOfHere•5mo ago
What are my options for using a custom tool? Does it come down to function calling (single turn) or MCP (multi-turn via Responses)? What else is available?

Why would anyone want to use Responses statelessly? Just trying to understand.
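
For the single-turn option, a custom function tool is declared as a JSON schema. The shape below follows the Responses API docs; the weather function itself is made up for illustration.

```python
# A function tool the model can choose to call. When it does, the API
# returns a function_call item; your code runs the function and sends
# the result back in a follow-up request.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```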

swyx•5mo ago
i think the original intent of responses api was also to unify the realtime experiences into responses - is that accurate?
nknj•5mo ago
we expect responses and realtime to be our 2 core api primitives long term — responses for turn by turn interactions and realtime for models requiring low latency bidirectional streams to/from the apps/models.
swyx•5mo ago
thank you for the correction!
andrewrn•5mo ago
This is very enlightening. You're right then, it does seem to partially be a strategic moat-building move by OpenAI
rafram•5mo ago
> Encrypted reasoning items: Customers eligible for Zero Data Retention (ZDR) can now reuse reasoning items across API requests

So, so weird that they still don't want you to see their models' reasoning process, to the point that even highly trusted organizations with ZDR contracts only get them in a black-box encrypted form. Gemini has no issue showing its work. Why can't OpenAI?
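
As I understand the feature being quoted, the reuse works by round-tripping an opaque blob: opt in to receiving the encrypted reasoning item, then echo it back on the next request. A rough sketch, with field names per the announcement and everything else a placeholder:

```python
# First request: run statelessly (ZDR) and ask for the encrypted
# reasoning item alongside the normal output.
first_request = {
    "model": "o3",
    "input": "Plan the refactor.",
    "store": False,                               # nothing persisted server-side
    "include": ["reasoning.encrypted_content"],   # opt in to the opaque blob
}

# Follow-up: pass the blob back verbatim so the model can reuse its own
# reasoning. Nothing here is decryptable client-side; "<opaque-blob>" is
# a placeholder for the value returned by the first call.
followup_request = {
    "model": "o3",
    "store": False,
    "input": [
        {"type": "reasoning", "encrypted_content": "<opaque-blob>"},
        {"type": "message", "role": "user", "content": "Now do step 1."},
    ],
}
```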

vessenes•5mo ago
Is this true? I can click open o3’s dialogue and see a running monologue. I guess it might be a summary of the actual reasoning though.
hhh•5mo ago
It is a summary
mediaman•5mo ago
Correct, you are not seeing the reasoning chains.
rafram•5mo ago
I may be giving Gemini too much credit, actually - seems like its "reasoning" may be a summary as well.
Doohickey-d•5mo ago
They changed it yesterday or so: it used to show the actual reasoning, but now it no longer does. And the reasoning was quite useful for seeing if it was going down the wrong track; the summary is much less so.
epiccoleman•5mo ago
That's disappointing. I was getting a lot of utility from reading through the thoughts returned by Gemini when I used it in Cursor - occasionally even learning something new from its stream of "consciousness". Obfuscating the information because it can be used to train competitors seems misguided, if understandable.
vessenes•5mo ago
Agreed. Right now deepseek’s R1 has uncensored stream of consciousness in open weights. I think it’s interesting that teams feel the streams should be proprietary. They must be doing something a little different than R1, or it wouldn’t be worth the extra engineering work.
fermisea•5mo ago
Not only that. I have an agent product and I’m currently blocked from using their reasoning models on Azure for having asked for a chain of thought, which apparently is against the ToS.

The customer service itself was surreal enough that it was easier just to migrate to Anthropic

NitpickLawyer•5mo ago
> So, so weird that they still don't want you to see their models' reasoning process

It's not weird at all. R1-distills have shown that you can get pretty close to the real thing with post-training on enough completions. I believe gemini has also stopped showing the thinking steps (apparently the GLM series of open access models were heavily trained on gemini data).

ToS violations can't be enforced in any effective way, and certainly not cross-borders. Their only way to maintain whatever moat thinking models give them is to simply not show the thinking parts.

zvitiate•5mo ago
Google actually switched to an OpenAI-style system for 2.5 Pro's Chain-of-Thought yesterday on the Gemini app and AI Studio ("I did this; I did that," etc.). Apparently the full trace still shows via the API, but it's not clear for how long. Also, in my experience, if you select the "Canvas" output, you still get the old-style CoT.

And yes, the above is true even if you are ULTRA.

You can still view your old thinking traces from prior turns and conversations.

zoogeny•5mo ago
My heart just broke to hear this. Although I honestly don't read the thinking output very often. But I had been cheekily copy-n-pasting the info for my own records.
knowsuchagency•5mo ago
I agree, but there's always Deepseek. They're publishing and open-sourcing more than anyone these days.
orasis•5mo ago
Reasoning models can now call tools during the reasoning process.
joshwarwick15•5mo ago
List of remote MCP servers to use here: https://github.com/jaw9c/awesome-remote-mcp-servers