frontpage.

LIGO detects most massive black hole merger to date

https://www.caltech.edu/about/news/ligo-detects-most-massive-black-hole-merger-to-date
134•Eduard•3h ago•63 comments

Apple's MLX adding CUDA support

https://github.com/ml-explore/mlx/pull/1983
67•nsagent•1h ago•33 comments

RFC: PHP license update

https://wiki.php.net/rfc/php_license_update
91•josephwegner•1h ago•25 comments

DEWLine Museum – The Distant Early Warning Radar Line

https://dewlinemuseum.com/
10•reaperducer•1h ago•0 comments

Kiro: A new agentic IDE

https://kiro.dev/blog/introducing-kiro/
631•QuinnyPig•9h ago•276 comments

NeuralOS: An operating system powered by neural networks

https://neural-os.com/
60•yuntian•3h ago•20 comments

Show HN: Bedrock – An 8-bit computing system for running programs anywhere

https://benbridle.com/projects/bedrock.html
45•benbridle•4d ago•9 comments

Replicube: 3D shader puzzle game, online demo

https://replicube.xyz/staging/
69•inktype•3d ago•11 comments

Cognition (Devin AI) to Acquire Windsurf

https://cognition.ai/blog/windsurf
320•alazsengul•5h ago•257 comments

Context Rot: How increasing input tokens impacts LLM performance

https://research.trychroma.com/context-rot
48•kellyhongsn•4h ago•9 comments

Cidco MailStation as a Z80 Development Platform (2019)

https://jcs.org/2019/05/03/mailstation
41•robin_reala•5h ago•3 comments

Building Modular Rails Applications: A Deep Dive into Rails Engines

https://www.panasiti.me/blog/modular-rails-applications-rails-engines-active-storage-dashboard/
113•giovapanasiti•8h ago•26 comments

SQLite async connection pool for high-performance

https://github.com/slaily/aiosqlitepool
35•slaily•3d ago•19 comments

Anthropic, Google, OpenAI and XAI Granted Up to $200M from Defense Department

https://www.cnbc.com/2025/07/14/anthropic-google-openai-xai-granted-up-to-200-million-from-dod.html
85•ChrisArchitect•2h ago•59 comments

Embedding user-defined indexes in Apache Parquet

https://datafusion.apache.org/blog/2025/07/14/user-defined-parquet-indexes/
83•jasim•7h ago•10 comments

Strategies for Fast Lexers

https://xnacly.me/posts/2025/fast-lexer-strategies/
117•xnacly•8h ago•41 comments

Show HN: The HTML Maze – Escape an eerie labyrinth built with HTML pages

https://htmlmaze.com/
20•kyrylo•2h ago•2 comments

Japanese grandparents create life-size Totoro with bus stop for grandkids (2020)

https://mymodernmet.com/totoro-sculpture-bus-stop/
223•NaOH•7h ago•54 comments

Meticulous (YC S21) is hiring in UK to redefine software dev

https://tinyurl.com/join-meticulous
1•Gabriel_h•6h ago

Lightning Detector Circuits

https://techlib.com/electronics/lightningnew.htm
64•nateb2022•8h ago•35 comments

Data brokers are selling flight information to CBP and ICE

https://www.eff.org/deeplinks/2025/07/data-brokers-are-selling-your-flight-information-cbp-and-ice
384•exiguus•7h ago•186 comments

Tandy Corporation, Part 3 Becoming IBM Compatible

https://www.abortretry.fail/p/tandy-corporation-part-3
50•klelatti•3d ago•13 comments

East Asian aerosol cleanup has likely contributed to global warming

https://www.nature.com/articles/s43247-025-02527-3
144•defrost•14h ago•153 comments

Two guys hated using Comcast, so they built their own fiber ISP

https://arstechnica.com/tech-policy/2025/07/two-guys-hated-using-comcast-so-they-built-their-own-fiber-isp/
259•LorenDB•7h ago•168 comments

Impacts of adding PV solar system to internal combustion engine vehicles

https://www.jstor.org/stable/26169128
97•red369•12h ago•208 comments

The Corset X-Rays of Dr Ludovic O'Followell (1908)

https://publicdomainreview.org/collection/the-corset-x-rays-of-dr-ludovic-o-followell-1908/
22•healsdata•3d ago•1 comments

It took 45 years, but spreadsheet legend Mitch Kapor finally got his MIT degree

https://www.bostonglobe.com/2025/06/24/business/mitch-kapor-mit-degree-bill-aulet/
154•bookofjoe•3d ago•14 comments

Lossless Float Image Compression

https://aras-p.info/blog/2025/07/08/Lossless-Float-Image-Compression/
88•ingve•4d ago•10 comments

Why random selection is necessary to create stable meritocratic institutions

https://assemblingamerica.substack.com/p/there-is-no-meritocracy-without-lottocracy
196•namlem•8h ago•176 comments

A Century of Quantum Mechanics

https://home.cern/news/news/physics/century-quantum-mechanics
100•bookofjoe•4d ago•77 comments

Web search on the Anthropic API

https://www.anthropic.com/news/web-search-api
272•cmogni1•2mo ago

Comments

benjamoon•2mo ago
Good that it has an “allowed domain” list, makes it really usable. The OpenAI Responses API web search doesn’t let you limit domains currently, so I can’t make good use of it for client stuff.
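
A minimal sketch of what that allowlist looks like with the Anthropic Python SDK. The tool type and field names follow Anthropic's web search docs; the model name, query, and domain list are placeholders:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-7-sonnet-latest",  # placeholder; any web-search-capable model
        max_tokens=1024,
        messages=[{"role": "user", "content": "What changed in the latest PostgreSQL release?"}],
        tools=[{
            "type": "web_search_20250305",  # server-side web search tool
            "name": "web_search",
            "max_uses": 3,                  # cap billable searches per request
            "allowed_domains": ["postgresql.org", "wiki.postgresql.org"],
        }],
    )

    print(response.content)
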
minimaxir•2mo ago
The web search functionality is also available in the backend Workbench (click the wrench Tools icon) https://console.anthropic.com/workbench/

The API request notably includes the exact text it cites from its sources (https://docs.anthropic.com/en/docs/build-with-claude/tool-us...), which is nifty.

Cost-wise it's interesting. $10/1000 queries is much cheaper for heavy use than Google's Gemini (1500 free per day then $35/1000) when you'd expect Google to be the cheaper option. https://ai.google.dev/gemini-api/docs/grounding

handfuloflight•2mo ago
So the price is just the $0.01 per query? Are they not charging for the tokens loaded into context from the various sources?
minimaxir•2mo ago
The query cost is in addition to tokens used. It is unclear if the tokens ingested from the search query count as additional input tokens.

> Web search is available on the Anthropic API for $10 per 1,000 searches, plus standard token costs for search-generated content.

> Each web search counts as one use, regardless of the number of results returned. If an error occurs during web search, the web search will not be billed.
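
A back-of-envelope cost calculation for a single request, combining the $10/1,000 search fee quoted above with standard input-token pricing. The token count and the $3 per million input tokens rate are assumptions for illustration:

    SEARCH_FEE = 10 / 1000             # $0.01 per search, per the announcement
    INPUT_TOKEN_PRICE = 3 / 1_000_000  # assumed Sonnet-class input price per token

    searches = 2
    search_result_tokens = 6_000       # rough guess at tokens ingested from results

    cost = searches * SEARCH_FEE + search_result_tokens * INPUT_TOKEN_PRICE
    print(f"${cost:.4f}")              # ~$0.038 for this hypothetical request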

stephpang•2mo ago
Hi, Stephanie from Anthropic here. Thanks for the feedback! We've updated the docs to hopefully make it a little clearer, but yes, search results do count towards input tokens.

https://docs.anthropic.com/en/docs/build-with-claude/tool-us...

minimaxir•2mo ago
Thanks for the update!

> Web search results in the conversation are counted as input tokens on subsequent completion requests during the current turn or on subsequent conversation turns.

Yes, that's clear.
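
For anyone checking this in practice, the billing impact shows up on the response's usage object. A small sketch against the response from the earlier example; the server_tool_use field name follows the web search docs and is treated as an assumption here:

    usage = response.usage
    print("input tokens:", usage.input_tokens)    # includes injected search-result tokens
    print("output tokens:", usage.output_tokens)

    # The web search tool also reports how many billable searches ran.
    server_tool_use = getattr(usage, "server_tool_use", None)
    if server_tool_use is not None:
        print("searches billed:", server_tool_use.web_search_requests)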

istjohn•2mo ago
Well also Google has put onerous conditions on their service:

- If you show users text generated by Gemini using Google Search (grounded Gemini), you must display a provided widget with suggested search terms that links directly to Google Search results on google.com.

- You may not modify the text generated by grounded Gemini before displaying it to your users.

- You may not store grounded responses more than 30 days, except for user histories, which can retain responses for up to 6 months.

https://ai.google.dev/gemini-api/terms#grounding-with-google...

https://ai.google.dev/gemini-api/docs/grounding/search-sugge...

miohtama•2mo ago
Google obviously does not want to cannibalise their golden goose. However it's inevitable that Google search will start to suffer because people need it less and less with LLMs.
aaronscott•2mo ago
It would be nice if the search provider could be configured. I would like to use this with Kagi.
lemming•2mo ago
I would really love this too. However I think that the only solution for that is to give it a Kagi search tool, in combination with a web scraping tool, and a loop while it figures out whether it's got the information it needs to answer the question.
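
One way that loop could look, sketched with the Anthropic Python SDK and a hand-rolled Kagi tool. The Kagi endpoint, auth header, and response fields are taken from Kagi's public API docs and should be treated as assumptions; the model name is a placeholder:

    import os

    import anthropic
    import requests

    client = anthropic.Anthropic()

    KAGI_TOOL = {
        "name": "kagi_search",
        "description": "Search the web with Kagi; returns titles, URLs and snippets.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }

    def kagi_search(query: str) -> str:
        # Endpoint and auth header per Kagi's public API docs (assumption -- verify before use).
        resp = requests.get(
            "https://kagi.com/api/v0/search",
            params={"q": query},
            headers={"Authorization": f"Bot {os.environ['KAGI_API_KEY']}"},
            timeout=30,
        )
        results = resp.json().get("data", [])[:5]
        return "\n".join(
            f"{r.get('title')} - {r.get('url')}\n{r.get('snippet', '')}" for r in results
        )

    messages = [{"role": "user", "content": "What does the PHP license update RFC change?"}]
    while True:
        response = client.messages.create(
            model="claude-3-7-sonnet-latest",  # placeholder model name
            max_tokens=1024,
            tools=[KAGI_TOOL],
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            break  # the model decided it has enough information to answer
        messages.append({"role": "assistant", "content": response.content})
        tool_results = [
            {"type": "tool_result", "tool_use_id": block.id,
             "content": kagi_search(block.input["query"])}
            for block in response.content if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": tool_results})

    print(response.content[0].text)
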
cmogni1•2mo ago
I think the most interesting thing to me is they have multi-hop search & query refinement built in based on prior context/searches. I'm curious how well this works.

I've built a lot of LLM applications with web browsing in it. Allow/block lists are easy to implement with most web search APIs, but multi-hop gets really hairy (and expensive) to do well because it usually requires context from the URLs themselves.

The thing I'm still not seeing here that makes LLM web browsing particularly difficult is the mismatch between search result relevance vs LLM relevance. Getting a diverse list of links is great when searching Google because there is less context per query, but what I really need from an out-of-the-box LLM web browsing API is reranking based on the richer context provided by a message thread/prompt.

For example, writing an article about the side effects of Accutane should err on the side of pulling in research articles first for higher quality information and not blog posts.

It's possible to do this reranking decently well with LLMs (I do it in my "agents" that I've written), but I haven't seen this highlighted from anyone thus far, including in this announcement.
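
A rough sketch of that reranking step, under the assumption that a cheap model scores each result against the whole thread rather than just the query. The model alias and the expectation of clean JSON output are assumptions:

    import json

    import anthropic

    client = anthropic.Anthropic()

    def rerank(results: list[dict], thread_context: str) -> list[dict]:
        """Order search results by usefulness given the full conversation context."""
        items = [{"title": r["title"], "url": r["url"], "snippet": r["snippet"]} for r in results]
        prompt = (
            "Conversation context:\n"
            f"{thread_context}\n\n"
            "Score each result from 0-10 for how useful it is as a source for this task "
            "(prefer primary or peer-reviewed sources over blog posts when the task calls for it). "
            "Reply with a JSON array of numbers only.\n\n"
            + json.dumps(items)
        )
        response = client.messages.create(
            model="claude-3-5-haiku-latest",  # assumed: a cheap model is enough for scoring
            max_tokens=256,
            messages=[{"role": "user", "content": prompt}],
        )
        scores = json.loads(response.content[0].text)  # assumes the model returns bare JSON
        ranked = sorted(zip(scores, results), key=lambda pair: pair[0], reverse=True)
        return [r for _, r in ranked]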

simple10•2mo ago
That's been my experience as well. Web search built into the API is great for convenience, but it would be ideal to be able to provide detailed search and reranking params.

Would be interesting to see comparisons for custom web search RAG vs API. I'm assuming that many of the search "params" of the API could be controlled via prompting?

peterldowns•2mo ago
> For example, writing an article about the side effects of Accutane should err on the side of pulling in research articles first for higher quality information and not blog posts.

Interesting, I'm taking isotretinoin right now and I've found it's more interesting and useful to me to read "real" experiences (from reddit and blogs) than research papers.

TechDebtDevin•2mo ago
Wear lots of (mineral) sunscreen, and drink lots and lots of water. La Roche-Posay lotions are what I used, and continue to use with tretinoin. Sunscreen is the most important.
peterldowns•2mo ago
Great advice, already quite on top of it. I'd recommend checking out stylevana and importing some of the japanese/korean sunscreens if you haven't tried them out yet!
TuringTourist•2mo ago
Can you elaborate? What information are you gleaning from anecdotes that is both reliable and efficacious enough to outweigh research?

I'm not trying to challenge your point, I am genuinely curious.

peterldowns•2mo ago
I just want to hear about how other people have felt while taking the medicine. I don't care about aggregate statistics very much. Honestly what research do you read and for what purpose? All social science is basically junk and most medical research is about people whose bodies and lifestyles are very different than mine.
simonw•2mo ago
I couldn't see anything in the documentation about whether or not it's allowed to permanently store the results coming back from search.

Presumably this is using Brave under the hood, same as Claude's search feature via the Anthropic apps?

minimaxir•2mo ago
Given the context/use of encrypted_index and encrypted_context, I suspect search results are temporarily cached.
simonw•2mo ago
Right, but are there any restrictions on what I can do with them?

Google Gemini has some: https://ai.google.dev/gemini-api/docs/grounding/search-sugge...

OpenAI has some rules too: https://platform.openai.com/docs/guides/tools-web-search#out...

> "When displaying web results or information contained in web results to end users, inline citations must be made clearly visible and clickable in your user interface."

I'm used to search APIs coming with BIG sets of rules on how you can use the results. I'd be surprised but happy if Anthropic didn't have any.

The Brave Search API is a great example of this: https://brave.com/search/api/

They have a special, much more expensive tier called "Data w/ storage rights" which is $45 CPM, compared to $5 CPM for the tier that doesn't include those storage rights.

istjohn•2mo ago
Google's restrictions are outlandish: "[You] will not modify, or intersperse any other content with, the Grounded Results or Search Suggestions..."
minimaxir•2mo ago
The API response actually contains the full HTML to include.
simonw•2mo ago
I'm not quite sure how I should handle that in my CLI tool!
lemming•2mo ago
Trafilatura to markdown? But yeah, likely to be clunky.
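
For what it's worth, a sketch of that idea for a CLI: flatten the returned widget HTML to plain text with trafilatura. Whether a stripped-down rendering still satisfies Google's display terms is a separate question, per the restrictions quoted above:

    import trafilatura

    # rendered_html would come from the Gemini grounding response (placeholder here)
    rendered_html = "<div><a href='https://www.google.com/search?q=example'>example query</a></div>"
    # extract() may return None on very short snippets, so fall back to the raw HTML
    text = trafilatura.extract(rendered_html) or rendered_html
    print(text)
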
istjohn•2mo ago
It just goes counter to the way I think about LLMs. It assumes end-products will merely be thin wrappers around an API, perhaps with some custom prompts. It's like thinking of the internet as a faster telegraph, instead of understanding that it's an entirely new paradigm. The most interesting applications of AI will use search as just one ingredient, one input, that will be sliced, diced, and pureed as it is combined with half a dozen other sources of information.

When your intelligent email client uses Gemini to identify the sender of an email as someone in the industry your B2B company serves, deciding to flag the email as important, where is that HTML supposed to go? Where does it go in a product that generates slide show lesson plans? What if I'm using it to generate audio or video? What if a digital assistant uses Gemini as a tool a few dozen times early in a complex 10,000 step workflow that was kicked off by me asking it to create three proposals for family vacations complete with a three 5-minute video presentations on each option? What if my product is helping candidates write tailored cover letters?

It's bad optics for a company just ruled to have acted illegally to maintain a monopoly in "general search services and general text advertising," but worse, it lacks imagination.

lemming•2mo ago
I'm also interested to know if there are other limitations with this. Gemini, for example, has a built-in web search tool, but it can't be used in combination with other tools, which is a little annoying. o3/o4-mini can't use the search tool at all over the API, which is even more annoying.
omneity•2mo ago
Related: For those who want to build their own AI search for free and connect it to any model they want, I created a browser MCP that interfaces with major public search engines [0], a SERP MCP if you want, with support for multiple pages of results.

The rate limits of the upstream engines are fine for personal use, and the benefit is it uses the same browser you do, so results are customized to your search habits out-of-the-box (or you could use a blank browser profile).

0: https://herd.garden/trails/@omneity/serp

potlee•2mo ago
If you use your own search tool, you would have to pay for input tokens again every time the model decides to search. This would be a big discount if they're only charging once for all of the output as output tokens, but that seems unclear from the blog post.
stephpang•2mo ago
Thanks for the feedback, just updated our docs to hopefully make this a little clearer. Search results count towards input tokens on every subsequent iteration.

https://docs.anthropic.com/en/docs/build-with-claude/tool-us...

potlee•2mo ago
Thanks for addressing it. Still sounds like a significant discount if only the search results, and not all messages, count as input tokens on subsequent iterations!
jarbus•2mo ago
Is search really that costly to run? $10/1000 searches seems really pricey. I'm wondering if these costs will come down in a few years.
tuyguntn•2mo ago
They will come down. Up until recently, consumers were not paying directly for search, but with LLMs having knowledge cutoffs in the past and hallucinations, search has become a popular paid API.

Popularity will grow even more, hence competition will increase and prices will change eventually.

AznHisoka•2mo ago
I don't think that will be true. What competition? Google, Bing, and... Kagi? (And only one of those has a far superior index/algo to the others.)
jsnell•2mo ago
Yes.

The Bing Search API is priced at $15/1k queries in the cheapest tier, Brave API is $9 at the non-toy tier, Google's pricing for a general search API is unknown but their Search grounding in Gemini costs $35/1k queries.

Search API prices have been going up, not down, over time. The opposite of LLMs, which have gotten 1000x cheaper over the last two years.

jwr•2mo ago
> Google's pricing for a general search API

As I discovered recently, and much to my surprise, Google does not offer a "general search API", at least not officially.

There is a "custom search" API that sounds like web search, but isn't: it offers a subset of the index, which is not immediately apparent. Confusing and misleading labeling there.

Bing offers something a bit better, but I recently ended up trying the Kagi API, and it is the best thing I found so far. Expensive ($25/1000), but works well.

formercoder•2mo ago
I work at Google but not on this. We do offer Gemini with Google Search grounding which is similar to a search API.
QuadmasterXLII•2mo ago
??????
teeklp•2mo ago
How much do you pay people to use this?
jsnell•2mo ago
There are multiple search engines known to be based on Google's API (Startpage, Leta, Kagi), so that product definitely exists. But that's about all we know. They indeed do not publish anything about it. We don't know the price, the terms, or even the name.
camkego•2mo ago
Do you have any references to the point that the Google Custom Search API is for a subset of the regular Google search index?
ricw•2mo ago
No reference here, but I found this out the hard way too. The Google search API is utterly useless, in fact, and gives entirely different search results vs. using the web. Bing is better. Haven't tried Kagi yet.
jwr•2mo ago
"References"? :-) This is a corporation we're talking about, and Google at that. Layers upon layers of obscurity, "strategic decisions" and discontinued products.

Try it and you'll see — there is no official Search API and the Custom Search API is quite poor and not usable in most scenarios.

ColinHayhurst•2mo ago
Excuse the self-promotion but Mojeek is £3/1,000: https://www.mojeek.com/services/search/web-search-api/
firtoz•2mo ago
> Can I store data obtained through the API?

> You can store results on Business plan and optionally on the Enterprise plan. For other plans, you may store the results for 1 hour to enable caching.

Curious... I can understand that this may be a defensive action, however, it feels unenforceable. And in some cases impractical for the user. After seeing this I may keep looking for alternatives, for example because it's not clear to me: if I have a chat history that has the search results in one of the messages, do I have to have some kind of mechanism to clean those out or something?

AznHisoka•2mo ago
If you want an unofficial API, most data providers usually charge $4/1000 queries or so. By unofficial, I mean they just scrape what's in Google and return that to you. So that's the benchmark I use, which means the cost here is around 2x that.

As far as I know, the pricing really hasn't gone down over the years. If anything it has gone up, because Google is increasingly making it harder for these providers.

Manouchehri•2mo ago
That seems expensive.

For 100 results per query, serper.dev is $2/1000 queries and Bright Data is $1.5/1000 queries.

jbellis•2mo ago
I'm not sure that's correct -- the first party APIs are priced per query but BD is per 1k results. Not immediately obvious what they count as a "result" tho.
Manouchehri•2mo ago
It's really poor wording. Bright Data does indeed consider 100 results in a single request to be a single billed "result" event, billed at $1.5/1000 requests.

I always set 100 results per request from Bright Data, and I can see my bill indeed says `SERP Requests: x reqs @ 1.5 $/CPM` (where `x` is the number of requests I've made, not x * 100).

https://docs.brightdata.com/scraping-automation/serp-api/faq...

For serper.dev, they consider 10 results to be 1 "credit", and 20 to 100 results to be 2 "credits". They bill at $50/50,000 credits, so it becomes $1/1000 requests if you are okay with just 10 results per request, or $2/1000 requests if you want 100 results per request.

(Both providers here scale pricing with larger volumes, just trying to compare the easiest price point for those getting started.)

AznHisoka•2mo ago
Sorry, got this off by a multiple. Yes, pricing is around that. So these “official” APIs are much more expensive.
OxfordOutlander•2mo ago
OpenAI search mode is $30-50 per 1,000 depending on low vs. high context.

Gemini is $30/1000

So Anthropic is actually the cheapest.

For context, exa is $5 / 1000.
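
Consolidating the per-1,000-query figures quoted across this thread (list prices change and some are rough conversions, so treat this as a snapshot rather than verified pricing):

    prices_per_1k_usd = {
        "Bright Data (unofficial SERP)": 1.5,
        "serper.dev (unofficial SERP)": 2.0,
        "Mojeek": 4.0,                     # quoted as ~£3, roughly converted
        "exa": 5.0,
        "Brave (non-toy tier)": 9.0,
        "Anthropic web search": 10.0,
        "Bing Search API (cheapest tier)": 15.0,
        "Kagi": 25.0,
        "OpenAI search mode": 30.0,        # $30-50 depending on context size
        "Gemini grounding": 35.0,          # quoted as $30 or $35 in different comments
        "Brave (with storage rights)": 45.0,
    }
    for name, price in sorted(prices_per_1k_usd.items(), key=lambda kv: kv[1]):
        print(f"{name:32s} ${price:5.2f} / 1k queries")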

zhyder•2mo ago
Now all the big 3 LLM providers provide web search grounding in their APIs, but how do they compare in ranking quality of the retrieved web search results? Anyone run benchmarks here?

Clearly web search ranking is hard after decades of content spam that's been SEO optimized (and we get to look forward to increasing AI spam dominating the web in the future). The best LLM provider in the future could be the one with just the best web search ranking, just like what allowed Google to initially win in search.

RainbowcityKun•2mo ago
Right now, most LLMs with web search grounding are still in Stage 1: they can retrieve content, but their ability to assess quality, trustworthiness, and semantic ranking is still very limited.

The LLMs can access the web, but they can't yet understand it in a structured, evaluative way.

What’s missing is a layer of engineered relevance modeling, capable of filtering not just based on keywords or citations, but on deeper truth alignment and human utility.

And yes, as you mentioned, we may even see the rise of LLM-targeted SEO—content optimized not for human readers, but to game LLM attention and summarization heuristics. That's a whole new arms race.

The next leap won’t be about just accessing more data, but about curating and interpreting it meaningfully.

simianwords•2mo ago
>Right now, most LLMs with web search grounding are still in Stage 1: they can retrieve content, but their ability to assess quality, trustworthiness, and semantic ranking is still very limited.

Why do you think it is limited? Imagine you show a link with details to an LLM and ask it whether it is trustworthy or high quality w.r.t. the query; why can't it answer that?

RainbowcityKun•2mo ago
What I mean is that more powerful engineering capabilities are needed to process search results for the LLM.
simianwords•2mo ago
Not sure I understand -- LLMs are pretty good at assessing the quality of search results. If an LLM can bulk-assess a bunch of results it can get pretty far, probably more efficiently than a human hand-checking all the results.
lgiordano_notte•2mo ago
Don't think the limit is in what LLMs can evaluate - given the right context, they’re good at assessing quality. The problem is what actually gets retrieved and surfaced in the first place. If the upstream search doesn’t rank high-quality or relevant material well, the LLM never sees it. It's not a judgment problem, more of a selection problem.
metalrain•2mo ago
It's a good reminder that AI chats won't make web searches obsolete, just embed them deeper in the stack.

Maybe Google's search revenue moves from ads towards B2B deals for search API use.

simianwords•2mo ago
Can anyone answer this question: are they using a custom home-made web index, or are they using the Bing/Google API?

Also I'm quite sure that they don't use vector embeddings for web search; it's purely in text space. I think the same holds for all LLM web search tools. They all seem to work well -- maybe we don't need embeddings for RAG and grepping works well enough?

elisson22•2mo ago
Regarding the costs, do we have a clear indication of how much it costs the company to perform these searches from a power-consumption perspective? Or is it negligible?
throwaway314155•2mo ago
Setting the allowed URL to "youtube.com" results in tool usage failing (up to max_calls times).

Does this mean there are certain sites that the search tool simply can't access?

alberduris•2mo ago
Yes, it seems so. I’ve tried every possible way to get it to specifically search for YouTube videos and there’s just no way.

Even if you search for videos on a topic with hundreds of highly relevant YouTube videos, and explicitly ask it to find YouTube videos, it still says it found other relevant stuff, but not YouTube videos.

yuta2912•2mo ago
I noticed a huge amount of latency using the web search on the API. Is it just me, or does everyone see this? Generally when you use other providers like OpenAI's web search it usually takes 7-8 seconds, and the same for the pplx Sonar API. Any comments?
gyani•1mo ago
Bing is shutting down API access as well and going to "Grounding with Bing Search" at $35/1k https://azure.microsoft.com/en-us/updates?id=492574