(I was very skeptical about Kagi Assistant, but now I am a happy Kagi Ultimate subscriber.)
I like that Kagi charges for their service, so their incentive is to provide a service worth that cost, not to layer ads on top of it.
That said, all my friends think I'm insane and poke fun at me for paying for search, so I imagine we're a small minority.
People just hate paying for software in general in my experience, especially a subscription.
I have multiple good friends who refuse to pay 99 cents a month for 50 GB of iCloud storage so they can back up their phones, and instead keep all their precious memories on a single device that is out and about.
I do live these days with the understanding that pretty much all of my personal info is out there one way or another, a social security number is about as private as a phone number these days.
My credit union's login gets stuck in an infinite redirect. It works fine in Chrome (and all other major browsers).
Perplexity's web search is entirely broken in the mobile version: it loops with some error and becomes unresponsive. It works fine in Safari.
Plenty of other random things break at least a few times per day, usually around login redirects and authentication. Extensions like 1Password only autofill some of the time. The list goes on.
It's just a nice interface for all the LLMs, which I often use on mobile or laptop for various work and private tasks.
The last few months have shown that there is no single LLM worth investing in (today's "top" LLM is tomorrow's second-in-class).
Kagi's contracts with LLM providers are the ones businesses get with actual privacy protections which is also nice.
You get multiple LLMs in a single interface, with a single login and a single subscription to maintain, all your threads stored in the same place, the ability to switch between models in a thread, custom models...
Because not every site has an RSS feed. For example, when Claude Sonnet 4.5 was released it would have made sense to have one, but there is no RSS feed for Anthropic. Being compatible with the entire web instead of just a subset of it is useful.
I'm currently on the hunt for an RSS reader that has good filtering and sorting functionality, so I can (for instance) pull several feeds from only certain sources, but not see any posts/articles about terms A or B, yet see and sort any posts with term C by time, followed by either posts from source 1 with terms C and D, or posts from source 2 with terms E or F but not G, which would be sorted by relevance.
I know that's a complicated and probably poorly written explanation, but I'm imagining something like Apple Mail Rules for RSS.
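To make that a bit more concrete, the rules could be expressed roughly like the Python sketch below. The Entry fields and the term/source names ("term A", "source 1", ...) are placeholders, not any existing reader's feature:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Entry:
        source: str
        title: str
        published: datetime
        relevance: float = 0.0  # however the reader scores relevance

    def matches(entry: Entry, *terms: str) -> bool:
        title = entry.title.lower()
        return any(term.lower() in title for term in terms)

    def build_reading_list(entries: list[Entry]) -> list[Entry]:
        # Rule 1: never show posts about terms A or B.
        entries = [e for e in entries if not matches(e, "term A", "term B")]

        # Rule 2: posts with term C, sorted by time (newest first).
        by_time = sorted((e for e in entries if matches(e, "term C")),
                         key=lambda e: e.published, reverse=True)

        # Rule 3: source 1 with terms C and D, or source 2 with E or F but not G,
        # sorted by relevance.
        by_relevance = sorted(
            (e for e in entries
             if (e.source == "source 1" and matches(e, "term C") and matches(e, "term D"))
             or (e.source == "source 2" and matches(e, "term E", "term F")
                 and not matches(e, "term G"))),
            key=lambda e: e.relevance, reverse=True)

        return by_time + by_relevance

"Apple Mail Rules for RSS" would basically be a UI over predicates and sort keys like these.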
I tended towards Axios but lately it's gotten a bit paywalled and less informative. Can't wait to incorporate Kagi News into my daily workflow.
I might not agree with all decisions Kagi makes, but this is gold. Endless scrolling is a big indicator that you're a consumer not a customer.
Someone recently highlighted the shift from social networks to social media in a way I'd never thought about:
>> The shift from social networks to social media was subtle, and insidious. Social networks, systems where you talk to your friends, are okay (probably). Social media, where you consume content selected by an algorithm, is not. (immibis https://news.ycombinator.com/item?id=45403867)
Specifically, in the same way that insufficient supply of mortgage securities (there's a finite number of mortgages) led to synthetic CDOs [0] in order to artificially boost supply of something there was a market for.
Social media and 24/7 news (read: shoving content from strangers into your eyeballs) are the synthetic CDOs of content, with about the same underlying utility.
There is in fact a finite amount of individually useful content per unit of time.
[0] If you want the Michael Lewis-esque primer on CDOs https://m.youtube.com/watch?v=A25EUhZGBws
This is a great way to put it. Much of the social media content is a derivative/synthetic representation of actual engagement. Content creators and influencers can make us "feel" like we have a connection to them (eg: "get ready with me!" type videos), but it's not the same as genuine connection or communication with people.
but now it's ABSOLUTELY EVERYWHERE and almost completely socially acceptable. In fact, people look at you weird if you don't have a favorite youtuber or what-have-you.
It's not healthy. Not healthy one bit. Whereas it used to be for 'others' (meaning rich and famous people who lived lives we could never hope for), parasocial relationships tend to be focused on people who are 'just like us' now. There's probably something in there to be studied.
Please expand obscure acronyms, not everyone lives in your niche.
Anyway, there's this https://netnewswire.com - https://github.com/Ranchero-Software/NetNewsWire (mac native) if someone is looking for an open source alt.
Now I just read the news on a Sunday (unless I'm doing something much more exciting). For the remainder of the week I don't read the news at all. It's the way my grandad used to read the news when he was a farmer.
I've found it to be a convenient format. It lets you stay informed, while giving enough of a gap for news stories to develop and mature (unless they happened the day before). There's less speculation and rumour, and more established detail, and it has reduced my day-to-day stress.
Annoyingly I still hear news from people around me, but I try to tune it out in the moment. I can't believe I used to consume news differently and it baffles me why I hear of people reading/watching/listening to the news 10+ times per day, including first thing when they awaken and last thing before they sleep. Our brains were not designed for this sort of thing.
I would agree that a single daily news update is useful (and healthy), but this must also be reflected in the choice of topics and the type of reporting.
I feel this is what Apple News should've been. Instead it's just a god-awful, ad-filled mess of news articles. And the only reason I have it is because of Apple One. But it is a clearly neglected product.
I also pay for Ground News, but it hasn't met my expectations, mostly because there's a lot of redundancy with wire stories. It'll show 50 sources, but they're all just regurgitating the same AP or Reuters article. So it skews the "bias" rating.
Bunch of discussion here 3 months ago? https://news.ycombinator.com/item?id=44518473
It was in beta then.
The UK section seems to have a heavy bias towards news from Scotland.
It looks too simplistic for me to actually use.
When Biden was president I barely heard anything about US politics, but with Trump in power it's hard to avoid.
Apart from that, it's really nice! Good job, kagi team!
This is pulling the content of the RSS feeds of several news sites into the context window of an LLM and then asking it to summarize news items into articles and fill in the blanks?
I'm asking because that is what it looks like, but AI / LLMs are not specifically mentioned in this blog post; they just say the news is 'generated' under the 'News in your language' heading, which seems to imply that is what they are doing.
I'm a little skeptical towards the approach, when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee that those are correct – and it does seem like sometimes they just use pure LLM output, as no sources are cited, or it's quoted as 'common knowledge'.
That’s not news. That’s news-adjacent random slop.
As an example from one of their sources, you can only republish a certain number of words from an article in The Guardian (100 commercially, 500 non-commercially) without paying them.
But instead, Kagi "helpfully" regurgitates the whole story, visits the article once, delivers it to presumably thousands, and it can't even be bothered to display all of the sources it regurgitates unless you click to expand the dropdown. And even then the headline itself is one additional click away, and they straight up don't even display the name of the journalist in the pop-up, just the headline.
Incredibly shitty behaviour from them. And then they have the balls to start their about page with this:
> Why Kagi News? Because news is broken.
I don't know how they do it, and I'm not sure I care; the result is that they've eliminated both clickbait and ragebait, and the news is indeed better off for it!
Not gonna call it the worst insult to journalism I've ever seen because I've seen factually(.)so which does essentially the same thing but calls it an "AI fact check", but it's not much better.
It's as if, instead of borrowing a book from the library, there's a spokesperson at the entrance whom you ask a question and then blindly believe whatever they say.
This is exactly how I want my news to be. Nothing worse than a headline about a new vaccine breakthrough, followed by a first paragraph that starts with "it was a cold November morning as I arrived in..."
I guess it's a matter of taste, but I prefer it short and to the point
Unfortunately, the above is nearly a cliché at this point. The phrase "value judgment" is insufficient because it occludes some important differences. To name just two that matter: there is a key difference between (1) a moral value judgment and (2) selection & summarization (often intended to improve information density for the intended audience).
For instance, imagine two non-partisan medical newsletters. Even if they have the same moral values (e.g. rooted in the Hippocratic Oath), they might have different assessments of what is more relevant for their audience. One could say both are "biased", but does doing so impart any functional information? I would rather say something like "Newsletter A is run by Editorial Board X with such-and-such a track record and is known for careful, long-form articles" or "Newsletter B is a one-person operation known for a prolific stream of hourly coverage." In this example, saying the newsletters differ in framing and intended audience is useful, but calling each "biased in different ways" is a throwaway comment (having low informational content in the Shannonian sense).
Personally, instead of saying "biased", I tend to ask questions like: (a) Who is their intended audience? (b) What attributes and qualities consistently shine through? (c) How do they make money? (d) Is the publication/source transparent about their approach? (e) What is their track record on accuracy, separating commentary from factual claims, professional integrity, disclosure of conflicts of interest, level of intellectual honesty, epistemic standards, and corrections?
Hmmm. Here I will quote some representative sections from the announcement [1]:
>> News is broken. We all know it, but we’ve somehow accepted it as inevitable. The endless notifications. The clickbait headlines designed to trigger rather than inform, driven by relentless ad monetization. The exhausting cycle of checking multiple apps throughout the day, only to feel more anxious and less informed than when we started. This isn’t what news was supposed to be. We can do better, and create what news should have been all along: pure, essential information that respects your intelligence and time.
>> .. Kagi News operates on a simple principle: understanding the world requires hearing from the world. Every day, our system reads thousands of community curated RSS feeds from publications across different viewpoints and perspectives. We then distill this massive information into one comprehensive daily briefing, while clearly citing sources.
>> .. We strive for diversity and transparency of resources and welcome your contributions to widen perspectives. This multi-source approach helps reveal the full picture beyond any single viewpoint.
>> .. If you’re tired of news that makes you feel worse about the world while teaching you less about it, we invite you to try a different approach with Kagi News, so download it today ...
I don't see any evidence from these selections (nor the announcement as a whole) that their approach states, assumes, or requires a value/fact dichotomy. Additionally, I read various example articles to look for evidence that their information architecture groups information along such a dichotomy.
Lastly, to be transparent, I'll state a claim that I find to be true: for many/most statements, it isn't that difficult nor contentious to separate out factual claims from value claims. We don't need to debate the exact percentages or get into the weeds on this unless you think it will be useful.
I will grant this -- which is a different point than the one the commenter above made -- when reading various articles from a particular source, it can take effort and analysis to suss out the source's level of intellectual honesty, ulterior motives, and other questions I mention in my sibling comment.
(I say this sarcastically and unhappily)
I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/
Ironically, this submission is at the top of that website :)
Then I got the machine to write a front-end that visualises them and builds a search query for you: https://pastebin.com/HNwytYr9
Enjoy!
I think Google hates the loss of no/few ads or lame suggestions.
I'm sorry, I know how to use your tool?? Didn't you put these keywords in to be used?
Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to "Star Trek" levels of useful computers (which I see humans eventually getting to) due to LLMs' fundamentally broken auto-regressive nature. A different approach will be needed. Slight nuance, but an important one.
Kagi as a brand is building tools in service of its users, with no particular affinity toward any technology.
When you go to Google News, the way they group together stories is AI (pre-LLM technology). Kagi is merely taking it one step further.
I agree with your concern. I see this as a convenient grouping, and if any interests me I can skip reading the LLM summary and just click on the sources they provide (making it similar to Google News).
I would argue creating your own summary is several steps beyond an ordering algorithm.
Do you know that's what they're doing? They are a search engine after all. They do run their own indexer, as well as cache results from other sources.
If they're feeding urls to an AI, why can't they validate AI output urls are real? Maybe they do.
You don't and you should not use this one either.
It actually seems more like an aggregator (like ground.news) to me. And pretty much every single sentence cites the original article(s).
There are nice summaries within an article. I think what they mean is that they generate a meta-article after combining the rest of them. There's nothing novel here.
But the presentation of the meta-article and publishing once a day feel like great features.
> And pretty much every single sentence cites the original article(s).
Yeah, but again, correct me if I'm wrong: I don't think asking an LLM to provide a source / citation yields any guarantee that the text it generates alongside it is accurate.
I also see a lot of text without any citations at all, here are three sections (Historical background, Technical details and Scientific significance) that don't cite any sources: https://kite.kagi.com/s/5e6qq2
Google points to Phys.org, and the Phys.org piece is a republication of the MIT article.
I guess I'm trying to understand your comment. Is there a distinction you're making between LLM summaries or LLM generated text, or are you stating that they aren't being transparent about the summaries being generated by LLMs (as opposed to what? human editors?).
Because at some point when I launched the app, it did say summaries might be inaccurate.
Looks like you found an example where it isn't properly citing the summaries. My guess is that they will tighten this up, because I looked mostly at the first and second page and most of those articles seemed to have citations in the summaries.
Like most people, I would want those everywhere to guard against potential hallucinations. No, the citations don't guarantee that there weren't any hallucinations, but if you read something that makes you go "huh" – the citations give you a low-friction opportunity to read more.
But another sibling commenter talked about phys.org and Google both pointing to the same thing. I agree, and this is exactly an issue I have with other aggregators like Ground.news.
They need to build some sort of graph that distills duplicates down. Like, I don't need the article to say "30 sources" when 26 of them are just reprints of an AP/Reuters wire story; that shouldn't count as 30 sources.
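As an illustration (not how Kagi or Ground News actually count sources), collapsing wire reprints could be as simple as clustering articles whose text overlaps heavily:

    # Toy near-duplicate collapsing: keep one representative per cluster of
    # articles whose word overlap exceeds a threshold. Real systems would use
    # shingling/MinHash or embeddings, but the idea is the same.
    def jaccard(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if (wa or wb) else 0.0

    def distinct_sources(articles: list[dict], threshold: float = 0.8) -> list[dict]:
        """articles: [{"outlet": ..., "text": ...}]; returns one article per distinct story."""
        representatives: list[dict] = []
        for article in articles:
            if all(jaccard(article["text"], rep["text"]) < threshold for rep in representatives):
                representatives.append(article)
        return representatives  # len(...) is the honest "N sources" count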
The main point of my original comment was that I wanted to understand what this is, how it works and whether I can trust the information on there, because it wasn't completely clear to me.
I'm not super up to date with AI stuff, but my working knowledge is that I should never trust the output of an LLM and always verify it myself, so therefore I was wondering if this is just LLM output or if there is some human review process, or a mechanism related to the citation functions that makes it output of a different, more trusted category.
I did catch the message on the loading screen as well now, I do still think it could be a little more clear on the individual articles about it being LLM generated text, apart from that I think I understand somewhat better what it is now.
Either you mean every time you read something interesting (“huh”) you should check it. But in that case, why bother with reading the AI summary in the first place…
Or you mean that any time you read something that sounds wrong, you should check it. But in that case, everything false in the summaries that happens to sound true to you will be confirmed in your mind without you ever checking it.
...yes? If I go to a website called "_ News" (present company included), I expect to see either news stories aggregated by humans or news stories written and fact checked by humans. That's why newspapers have fact checking departments, but they're being replaced by something with almost none of the utility and its proponents are framing the benefits of the old system as impossible or impractical.
Like, I was asking whether they were expecting the curation/summarization to be done by humans at Kagi News.
Gmail seems like the easiest piece of the Google puzzle to replace. Different calendar systems have different quirks around repeating events, you sometimes need to try a variety of search engines to find what you're looking for, Docs aren't bug-for-bug equivalent to the Office or iCloud competitors, YouTube has audience, monetization, and hosting scale... Gmail is just "make an email account with a different provider and switch all of your accounts to use the new address." They don't even give you that much storage for free Gmail; it's 15GB, which lots of other email providers can match (especially paid ones). You can import your old emails to your new provider or just store them offline with a variety of email clients.
Is updating all of your accounts (and telling your contacts about the new address) what you consider to be the hard part, or do you actually use any Gmail-specific features? Genuinely curious, as I tend to disregard almost all mail-provider-specific features that any of my mail providers try to get me excited about (Gmail occasionally adds some new trick, but Zoho Mail is especially bad about making me roll my eyes with their new feature notifications).
2-3 spam emails slip through every week, and sometimes a false positive happens when I sign up for something new. I don't see this as a huge problem, and I doubt Gmail is significantly better.
I agree with the other commenter, I use Fastmail and I get very few spam emails, most of which wouldn't have been detected by gmail either because they're basically legitimate looking emails advertising scams. I have a Gmail account I don't use and it seems like it receives about the same amount of spam, if not more.
1: https://www.cloudflare.com/en-gb/learning/email-security/dma...
So if this automates the process of fetching the top news from a static list of news sites and summarizing the content in a specific structure, there's not much that can go wrong there. There's a very small chance that the LLM would hallucinate when asked to summarize a relatively short amount of text.
Not that the userbase of 50k is big enough to matter right now, but still...
So this might result in lower traffic for "anyone involved in journalism" – but the constant doomscrolling is worse for society. So I think we can all agree that the industry needs to veer towards less quantity and more quality.
Actual journalism doesn't rely on advertising, and is subscription based. Anyone interested in that is already subscribed to those sources, but that is not the audience this service is aiming for. Some people only want to spend a few minutes a day catching up with major events, and this service can do that for them. They're not the same people who would spend hours on news sites, so these sites are not missing any traffic.
I continue to subscribe to Reuters because of the quality of journalism and reporting. I have also started using Kagi News. They are not incompatible.
Imagine if Google News used an LLM to show summaries to users without explicitly saying it's AI in the UI.
Ironically, one of the first LLM-induced mistakes experienced by average people was a news summary: https://www.bbc.com/news/articles/cge93de21n0o.amp
Kagi made search useful again, and their genAI stuff can be easily ignored. Best of both worlds -- it remains useful for people like myself who don't want genAI involved, but there's genAI stuff for people who like that sort of thing.
That said, if their genAI stuff gets to be too hard to ignore, then I'd stop using or praising Kagi.
That this is about news also makes it less problematic for me. I just won't see it at all, since I don't go to Kagi for news in the first place.
Even Google calls the overview box AI Overview (not saying it doesn't hurt content hosting sites.)
Same as I would like to know whether a study about how well humans drive relied on self-assessment or on empirical evidence. Humans just aren't that good at that task, so it would be good to know going in.
Just call it Kagi Vibes instead of Kagi News as news has a higher bar (at least for me)
I've seen it so many times it definitely needs a name. As an entity of human intelligence, I am offended by these silly thought-terminating arguments.
To be honest though that’s not the point. I’m more annoyed they weren’t transparent about their methods than I am about them using AI.
A lot of times when I ask for a source, I get broken links. I'm not sure if the links existed at one point, or if the LLM is just hallucinating where it thinks a link should exist. CDN libraries, for example. Or sources to specific laws.
They'll do pretty much everything you ask of them, so unless the text actually comes from some source (via tool calls, injecting content into the context, or some other way), they'll make up a source rather than do nothing, unless prompted otherwise.
(unless you are Google etc which are specifically let in to get the article indexed into search)
How do you make an LLM understand that it must only give factual sources? Just some naive RL with positive reward on the correct sources and negative reward on incorrect sources is not enough -- there are obscenely many more hallucinated sources possible, and the set of correct sources is a set of insanely tiny measure.
Of course, "easy" is in quotes because none of this is easy. It's just easier than AGI.
For every line of text output, give me a full MLA-annotated source. If you cannot, then say your source does not exist, or that you are generating information based on multiple sources, and give me those sources. If you cannot do that, print that you need more information to respond properly.
Every new model I mess with needs a slightly different prompt due to safeguards or source protections. It is interesting when it lists a source that I physically own and their training data is deteriorated.
For example: "/glossary/love-parade" - There is no mention of this on my website. "/guides/blue-card-germany" has always been at "/guides/blue-card". I don't know what "/guides/cost-of-beer-distribution" even refers to.
The loop "create a research plan, load a few promising search results into context, summarize them with the original question in mind" is vastly superior to "freely associate tokens based on the user's question, and only think about sources once they dig deeper".
> Privacy by design: Your reading habits belong to you. We don’t track, profile, or monetize your attention. You remain the customer and not the product.
But the person running the LLM surely does.
https://github.com/kagisearch/kite-public/issues/97
There's also a line at the bottom of the about page at https://kite.kagi.com/about that says "Summaries may contain errors. Please verify important information."
Non fake news is going to be restricted to pay services like Bloomberg terminals.
This is why the moon landing hoax was revolutionary in the 60's. The size of this project was enormous.
Fake news exists because of the perverse incentives of the system, where getting as many clicks as possible is what matters. This is very much a result of social networks and view-based remuneration.
I don't think it's that bad if people need to pay for real information...
…are you describing a newspaper?
I'm all for more proper fact checkers, backed by reputable sources.
- I agree fake news is a real problem
- I pay for Kagi because I get much more precise results[1]
- They have a public feedback forum and I think every time I have pointed out a problem they have come back with an answer and most of the time also a fix
- When Kagi introduced AI summaries in search they made it opt in, and unlike every other AI summary provider I had seen at that point they have always pointed to the sources. The AI might still hallucinate[2] but if it does I am confident that if I pointed it out to them my bug report would be looked into and I would get a good answer and probably even a fix.
[1]: I hear others say they get more precise Google results, and if so, more power to them. I have used Google enthusiastically since 2005, as the only real option from 2012, as a fallback for DDG since somewhere between 2012 and 2022, and basically only when I am on other people's devices or to prove a point since I started using Kagi in 2022.
[2]: I haven't seen much of that, but that might be because of the kind of questions I ask and the fact that I mostly use ordinary search.
Unlike the disastrous Apple feature from earlier this year (which is still available, somehow!), this isn't trying to transform individual articles. Rather, it's focused on capturing broader trends and giving just enough info to decide whether to click into any of the source articles. That seems like a much smaller, more achievable scope than Apple's feature, and as always, open-source helps work like this a ton.
I, for one, like it! I'll try it out. Seems better than my current sources for a quick list of daily links, that's for sure (namely Reddit News, Apple News, Bluesky in general, and a few industry newsletters).
If that info is hallucinated, then it's worse than useless. Clickbait still attempts to represent the article; a hallucination isn't guaranteed to do that.
Why not have someone properly vet interesting and curious news and articles and send traffic to their sites? In this age of sincerity, proper citation is more vital than ever.
Services listing sources, like Kagi news, perplexity and others don't do that. They start with known links and run LLMs on that content. They don't ask LLMs to come up with links based on the question.
"Exact" is far from accurate. I just did a side-by-side comparison. To name only two obvious differences:
A. At the top level, Perplexity has a "Discover" tab [1] -- not titled "News". That leads to a AAF page with the endless-scroll anti-pattern (see [2] [3] for other examples). Kagi News [4] presents a short list of ~7ish items without images.
B. At the detail-page level, Kagi organizes their content differently (with more detail, including "sources", "highlights", "perspectives", "historical background", and "quick questions"). Perplexity only has content with sources and "discover more". You can verify for yourself.
[1]: https://www.perplexity.ai/discover
[2]: https://www.reddit.com/r/rant/comments/e0a99k/cnn_app_is_ann...
[3]: https://www.tumblr.com/make-me-imagine/614701109842444288/an...
Kagi seems to offer regional news, and the sources appear to be from the respective area as well. I do appreciate the public access (for now?) with RSS feeds (ironic, but handy).
1. It seems to omit key facts from most stories.
2. No economic value is returned to the sources doing the original reporting. This is not okay.
3. If your summary device makes a mistake, and it will, you are absolutely on the hook for libel.
There seem to be some misunderstandings about what news is and what makes it well-executed. It's not the average; it's the deepest and most accurate reporting. If anyone from the Kagi team wants to discuss, I'm a paying member and I know this field really, really well.
Yes! I'm also a paying member but I'm deeply suspicious of this feature.
The website claims "we expose readers to the full spectrum of global perspectives", but not all perspectives are equal. It smacks of "all sides" framing which is just not what news ought to be about.
https://web.archive.org/web/20250930154005/https://blog.kagi...
A) written the news in a reader-friendly format
B) set up a page with prioritized news
Because _that’s what a newspaper is_.
What extra value is gained from an AI rewrite? At best it's a borderline no-op, at worst a lossy transformation(?)
Far more interesting is how they aggregate the data. I thought many sources moved behind paywalls already.
News Minimalist [1] and Boring Report [2]. Both aggregate news and (IMO) most importantly provide links from multiple outlets for the same stories. Really made me notice the clickbait and allows me to be more selective in choosing reputable sources.
Both use AI, with the former ranking news based on importance, while the latter summarizes articles. (That doesn't feel useful for supporting journalism as a whole so I typically click through and read the articles unless I don't like the outlet reporting)
> Both use AI, with the former ranking news based on importance
I like this! If I'm in a rush, I check for very high priority stories. Usually there are 3 or even none. Done!
On days I want to sit back and read, it provides nice sources.
Are these articulated in a manner which gives stakeholders (investors, users, and staff) assurances and standing?
...
What are competitors and collaborators in this space? Semafor seems to have a similar product, what are the differentiators and/or collaboration opportunities?
...
Netflix was subscription only, till it was "pay to get rid of ads". Then there is the whole business of profiling customer interest, etc.
We have product labeling for food, why not web services?
IDK what their future plans are, but their current plans work well for me as a consumer.
However, I set my feed up on the web app, seeing that it should sync on "all my devices".
Next, I installed the Android app, and maybe I missed something, but I don't see any way to connect it to my Kagi account.
So much for syncing...
Gives me a good high-level view of the news. I'm a Kagi customer and I definitely don't want anything they do with the news.
Can you expand on why?
If you haven't seen it, there's also an amazing feature where you can go back and see the homepage as it was at any point in the last 20 years.
If you missed a day of news, whatever was really important will re-surface in today's news (major world incident)
Otherwise, perhaps what was missed is noise!
That said, I do think the service could be improved. Often the summary is a very short blurb that forces me to go to one of the original sites for the content, and hopefully land on one that is not obnoxious to use, which kind of defeats the purpose. The event timeline sounds interesting, but when it essentially shows 2 or 3 events that are obvious from the context, it's not so useful in practice. I always skip the "Quick questions" section, since it reads like an elementary school report, and the questions are really basic. How about letting me ask the questions I want?
Also:
> We don’t scrape content from websites. Instead, we use publicly available RSS feeds that publishers choose to provide.
I think this is a mistake. Most publishers are hostile to RSS and often don't offer it. Scraping is, unfortunately, a requirement if you want to consume public content on your own terms, which is the entire point of this service. Besides, scraping is how all search engines generate their index, so as long as the bot is well behaved and doesn't hammer the site, follows robots.txt or perhaps even bends the rules a bit, it should be fine. I would rather Kagi wasn't so respectful of publishers' wishes, if that would allow them to offer a better service. I understand if they want to avoid getting in trouble with publishers, but the alternative would be better for their users.
Nice release nonetheless!
I really don't need to know which party he is part of. If the article was about a party's stance, it makes sense - but the article is about one politician.
And ignoring that, it’s general context. Part of the job of a journalist.
Or to construct patterns that don't reflect reality.
Should we also list their ages, ethnicity, religious affiliation(s) in each article mentioning a Congressperson and construct those patterns as well?
Sorry, I'd like to think on my own.
When articles always mention party affiliation, people will judge the politician's behavior based on the affiliation, and not on his actions.
Some things that could change that:
- Deep fact checking. Community Notes on twitter do a better job at this than any other system I've seen. The reason it doesn't really work in practice is that the stream of misinformation and confusion is orders of magnitude larger than the Community Notes community. A news app should not have that scalability issue.
- Follow up. If I read something that later turns out to be false I need to be notified of that. This unfortunately requires that the app track what I have read.
- Context. If you have a news article about a stabbing, it sounds like stabbings are up. The context that they are going up or down statistically is extremely relevant. The lack of context can turn a tiny truth into a bigger lie.
- Deep confusion analysis. Figuring out where people are confused statistically and focusing on trying to manage that misinformation gap is not something that is dealt with at all. I would like to become LESS confused by information sources not more.
The word "just" is doing a lot of heavy lifting there.
Systems can change. Human brains on a population/genetic level can't. Blaming individual humans when we know statistically what they will do is mathematically equivalent to giving up.
Also, giving someone information that turns out to be false and never following up isn't "media literacy". I can't see how it can possibly be.
> Systems can change. Human brains on a population/genetic level can't.
Realistically we can't fix media literacy education and we can't fix journalism, both are systemically broken. But I would never blame people, everyone is the product of their environment and a victim of the system.
> Also, giving someone information that turns out to be false and never following up isn't "media literacy". I can't see how it can possibly be.
Media literacy in that context would just refer to reliable sourcing, reliable sources post retractions/corrections.
I think it's a lot more reasonable to expect journalism to change. Or maybe not journalism per se, but information dissemination/world-model updates. News/journalism is just the form we've sort of settled on for that kind of job, but it's fundamentally the wrong thing. It's like asking for a faster horse when we want a car, or asking for email notifications when what we really want is a way to know the current status of something.
> Media literacy in that context would just refer to reliable sourcing, reliable sources post retractions/corrections.
I think the reach of those corrections is as much a problem as if they are published. Posting retractions to a printer that is directly hooked up to a shredder is technically "posting retractions", but practically it's not. Same as most news sources really. The retractions are functionally buried for almost all sources, including the most prestigious source Nature.
There is no particular reason to assume journalism has a future at all: why rely on a journalist's biased summary of a press release and a biased editorial team's prioritization of what's important? It increasingly discredits itself, with smaller, more concentrated ownership and blatantly biased reporting (see the Iraq war, the Gaza genocide). The centralization also makes it vulnerable to external malicious attacks (see Gawker). But there are so many parts that are hard to replace: credible reporting on current events and investigative journalism require resources mostly only present in large organizations, like the visual investigations team at the NYT. It's broken but hard to replace, especially in a decentralized way.
But anyway, Kagi News does nothing to address this.
[1] https://embit.ca
can't imagine it would go over well in the court system.
I've been really enjoying Semafor's emails too, but their 2x a day is tough for me to keep up with. I'll try to get a habit of looking at Kagi News to stay informed.
That's despite the appropriate HTTP header:
Accept-Language: en-US,en;q=0.5
When you share a Russian story with a non-Russian speaker, they will still be able to read the story in the content language they have set in Settings. We're working on improving the UX of languages, sharing stories, and more.
For example, I can speak Portuguese, Spanish, Japanese and English. Ideally I would want news in those languages to keep their original text, while translating news in other languages to a target language.
For example, if I set my language as English, Russian news would get translated to English, but Portuguese ones would keep their original text.
These guys are doing great work, and this news product is exactly what I want: a once-a-day hit. What is happening in the world? As far as product-market fit goes, they hit the mark for an old fart like me.
If you wanted to fix the news you'd begin by critically curating mainstream news and throwing 80% of it in the trash, then you'd add 80% of material and critical analysis back to the 20% that had none of that.
This example includes a Reddit post as a source:
https://kite.kagi.com/s/hjgy55
But that post is actually a link to reuters.com
There is also a list of "citations" which are referenced from the generated text, and "sources" which are not referenced anywhere. It's not clear if they used reddit or reuters to generate any of the text.
I also see lots of citations to "common knowledge"... which is um, weird.
For example:
> National Guard activation: Guard forces can serve under state control (Title 32) or be federalized (Title 10), which determines who directs missions and the scope of authority [*].
Is this common knowledge?
About "common knowledge" sources - we validate all content for accuracy. When the LLM needs to add context that's missing from sources (e.g. historical background), we mark these as "common knowledge" since this generated content can't be validated against the original sources. You're right that your example isn't common knowledge at all, we'll work on adding actual sources for these claims too.
Thanks for trying it out!
An attitude of "Hook me up to the novelty juice, this is old weak sauce", is a sign of internet / news addiction.
I know the announcement page talks about not scraping, but to me personally the value I see in this product is that I don't have to go to the ad-ridden, poorly organized, and often terrible pages of the authors. Which then seems really unfair to the actual content providers.
I'd like to see this type of service cost $3-5/m on top of my normal Kagi sub to compensate the authors of the articles I read. A streaming-music model for news, ish.
The proposed amount is quite small, but my assumption is that only a very small amount of money would reach them from my ad views anyway, so a $10/m addition feels extreme to me.
Some UX friction I noticed: to get back to the homepage from an article, I have to click on the article headline. While this is elegant and you likely get used to it once you know it, it's not exactly intuitive.
"Mark as read" checks all the checkmarks, but since they're still there after a reload, I don't see the point.
I think keeping them on the page instead of automatically hiding them makes more sense for a product that's trying to update their news feed once per day. You feel more in control, as if it's not a stream of never-ending stories, but rather a fixed amount of stories that you can realistically power through. Seeing all items checked sort-of supports this philosophy.
A news site has to display some uncommon stories to have any appeal.
News is broken because journalism is no longer a viable career path. No amount of RSS aggregators will fix that.
It was a very big relief going back to a normal email client.
I still support Proton (I pay for Proton VPN) and hope they will succeed in their mission.
When you're paying for something, you expect the basics to be there, and that's what annoys me about Proton.
[1] https://kagifeedback.org/d/3285-safe-search-dns-locking-for-...
I mean, it keeps bothering me that their search engine logo is a "g". Anything to position themselves as close to Google.
Here's the Kagi article which potentially could have mentioned this: https://kite.kagi.com/s/8b5ta4
It's unclear to me if any of the source material reported this when the summary was generated, especially since the source articles may have been updated throughout the day.
* for now
Let me open the app once a month and see a summary of what has happened over it.
I'm hoping this can fill a gap for me currently. I want something that will give me broad awareness of big news I should probably know about, without being a 24-hour firehose of news.
I like the once-per-day update and the relatively short list of stories. The jury is still out on how sticky it will be, in terms of being my go-to place for a daily update.
Summaries are no substitute for real articles, even if they're generated by hand (and these apparently are not). Summaries are bound to strip the information of context, important details and analysis. There's also no accountability for the contents.
Sure, there are links to the actual articles, but let's not kid ourselves that most people are going to read them. Why would they need a summarizing service otherwise? Especially if there are 20 sources of varying quality.
There are no "lifehacks" to getting informed. I'll be harsh: this service strikes me as informationally illiterate person's idea of what getting informed is like.
Should all politicians' remarks be reproduced verbatim with absolutely no commentary, no fact-checking and no context? Should an article about an airplane crossing the Pacific include "some experts believe that this is impossible because Earth is flat?"
Excessive bias in media is definitely a problem, but I don't think that completely unbiased media can exist while still being useful. In my experience, people looking for it either haven't thought about it deeply enough, or they just want information that doesn't make their side look bad.
Yes. That's an interview, and is much better than summarizations and short soundbites and one-sentence quotes.
World leaders will always lie or sidestep the truth to a lesser or greater degree, because they represent a people or an organization and are committed fully to its interests. Part of being mature as a listener or reader is understanding that and still getting the useful information you need. Every person you meet in life will first and foremost speak from their own interest and agenda.
Then these interviews are complemented by regular reporting and interviews with people from the opposing viewpoint, if you so wish.
A bigger bias problem by far is bias by omission, so including all stories whether they meet the presenter's political agenda or not would be a great start.
I agree, but how do you envision that happening? Journalism died a long time ago, arguably around the birth of the 24-hour news cycle, and it was further buried by social media. A niche tech company can only provide a better way to consume what's out there, not solve such large societal problems.
> There are no "lifehacks" to getting informed.
I don't think their intent is to change how people are informed. What this aims to do is replace endless doomscrolling on sites that are incentivized to rob us of our attention and data, with spending a few minutes a day to get a sense of general events around the world. If something piques your interest, you can visit the linked sources, or research the event elsewhere. But as a way of getting a quick general overview of what's going on, I think it's great.
FWIW, I agree with you.
I used to be a news junkie. I've always thought of writing the lessons I learned, but one of them was "If you're a casual news reader, you are likely more misinformed than the one who doesn't read any news." One either should abstain or go all in.
I guess I'd amend it to put people who only glance at headlines to be even more misinformed. It was not at all unusual for me to read articles where the content just plain disagreed with the headline!
For me, this is only useful as a curated list of news feeds (and subreddits I guess), but nothing more.
[1]: https://github.com/kagisearch/kite-public/issues/97#issuecom...
[2]: https://kite.kagi.com/about
[3]: https://github.com/kagisearch/kite-public/blob/main/core_fee...
You have defined the desirable news as "pure, essential information". What's that, again? How do you know what's pure and essential info for any given user? The traditional news media started there, with that pure news, and ended up where they are today.
Ultimately, you will realize that your content needs to grab enough attention that people consume your feed. People's attention goes to whatever looks weird, exciting, sensational, or emotional: trivia, gossip, etc. You can't do away with all that and just dish out the pure and essential info. It didn't work. People tried it.
Parasitic by definition.
And embracing the news from nowhere perspective.
So both a parasite and boring at the same time.
I wish more tech folks who want to "fix the news" would learn from Gabe Rivera's Techmeme, Memeorandum, and Mediagazer.
He's done aggregation right for 20 years
- Parquet de Paris ouvre 24 enquêtes pour menaces ("Paris prosecutor's office opens 24 investigations into threats")
- Update: famille et experte ADN au procès Jubillar ("Update: family and DNA expert at the Jubillar trial")
- Intersyndicale appelle à la grève du 2 octobre ("Joint union committee calls for the October 2 strike")
This won't be used by French speakers as is.
I'm currently working on a major overhaul to provide more holistic context around news by better surfacing less-discussed events.
Trump, Congress deadlock as shutdown deadline nears
Taliban cuts internet nationwide, flights grounded in Afghanistan
Indonesia school collapse leaves 38 missing, 77 hurt
YouTube settles Trump suspension lawsuit for $24.5m
German court jails AfD aide for China spying
US deports 120 Iranians after deal
Russian drone strike kills family of four
Is this really what I need to know about the world? Am I staying "informed"? This is not helping the anxiety from reading news described in the article. This is not good for people.
This is awful. It's cutting out any money going to the news agencies that go out there and write news. If they didn't exist, Kagi wouldn't work.
Why would Kagi stop working if news didn't exist? Kagi is a search engine first and foremost, Kagi News is not a money making product of theirs. Kagi would still be making money with their search engine.
Also, this should entice news writers to write better news. The main reason people use products such as this is that they are sick and tired of going to news sites only to have to power through filler material to get the 10% that actually matters...
Sort of like a loss leader, eg the Costco hot dog :-)
It's just a plain-text web 1.0 page that uses some ranking algorithm to figure out the top stories of a given day across categories, and shows each headline with similar headlines from different news sources under it.
It used to pull in RSS from the sources so you could also read the articles in plain text, but that broke a while ago and the dev hasn't fixed it.
Regardless, I still find it a great site to quickly get up to speed on top stories of the day!
But also I really like (and pay for!) Kagi so happily support their own effort here.
What I actually want is a curated set of things that are useful to me personally given my situation. The most important things about my situation to give me useful news are things like: net worth, income, citizenship, family situation, where I live, what industries I work in, current investments, travel destinations, regulatory and political risks associated with any of those things, etc.
Because those are the things that dictate how the parts of the world I can't control are going to affect me (especially if I don't react). I don't want to hear about random things that aren't going to affect me when I'm looking at the news. Sometimes I want to learn new random/useless things for fun, but that's a leisure activity. It's totally separate from the "news", which is a thing that adults consume as a chore to better plan their lives.
The fundamental problem is that I and others are not going to willingly give out the personal information required to curate useful news feeds, so the news will always be filled with noise. Maybe local AI can help with that.
Kagi’s ‘neutral’ stance on politics, their association with Yandex/the Russian state mean this will be interesting to watch.
It’s by far the best search I’ve ever used.
https://old.reddit.com/r/ukraine/comments/1gvcqua/psa_the_ka...
I do wish I could have better control of what languages I'm getting. Right now the option is to either translate everything or nothing. I'd prefer news in their original, untranslated form if it's one of the 4 languages I speak, otherwise translate them to English.
I added the category "Israel" and everything was in Hebrew, so I had to set my language to English, but now news in my native Swedish are translated to English and I have to kind of translate it back in my head as I read them.
It's not the end of the world, but it seems like fairly low-hanging fruit!
I like that it only provides the list once a day (I do think that's a clever feature), but the inability to influence bias seems like a mistake, especially since the sources already seem to follow a bias.
This sounds like it's going to be a massive headache. Activists with nothing to do all day will be all over this, for their chance to try to have influence over what other people read.
I think it is human curated, but I'm not positive about that.
Yet there is Hacker Newsletter (https://hackernewsletter.com/, which I like and use), and there are others pointed out by GPT-5 that I don't use: Mailbrew and Digest. Kagi looks like the true former.
What I do want is personalization: not by picking interests, but by actual personality, prompts, and tastes, good enough that it shows me something different rather than only narrowing and narrowing my view. Yet high quality, rather than clickbait and other "fluff". Otherwise, following a few subreddits would do the job (with some API to send emails).
What I would like even more is something that actually turns my social media into daily emails.
While I understand different people find value in different things, dismissing Kagi generally as "too expensive" is ignorant IMO.
I think it also depends what you use it for. I use both their search and their AI models for software development and it saves me precious time when looking for information - in a way it pays for itself.
I had two major issues with it:
- it wasn't as snappy as Google, but I kind of got used to it
- I wasn't trusting it (if that makes sense) and was falling back to !g to make sure everything was searched
For $10, I expect to get a premium service, not just a good-enough one.
I have to admit that I liked the idea, the feeling of privacy, and the ability to tailor a search engine to my needs.
Unfortunately, I think they are not where they need to be for the $10 pricing plan.
I do however like the fact that Kagi only pushes _once_ a day. Drinking from the firehose is physically and mentally exhausting. Even daily feels like too much these days other than a quick check to make sure the world didn't implode or the Rapture happened while I was busy trying to get CC to behave.
So far, I quite enjoy having a summary with bullet points.
For example, here's the summary of this discussion: https://extraakt.com/extraakts/kagi-s-daily-news-ritual-spar...
- Site blocking with /etc/hosts doesn't work consistently with Orion, it intermittently and inconsistently ignores these rules. (this is sort of niche but it's bizarre for a browser based on WebKit)
- The password manager is busted on certain websites that have a third input box (so a captcha or 2FA code), where it'll fill the password twice
- Kept randomly getting the error "Orion can't open this page: This operation couldn't be completed. Cannot allocate memory" with like 10 windows, ~30 tabs open. Haven't seen it recently but like many Orion bugs it is intermittent and hard to reproduce consistently.
- Switching between Chrome and Orion sometimes (inconsistently) switches me to the last Orion window I had open (often on a different Desktop) rather than the one I clicked on.
- On networks where I can form WebRTC connections in Safari and Chrome, I cannot in Orion.
- This was just fixed but until like yesterday, the highlight color in their PDF viewer for ctrl F was a barely visible 10% opacity highlight that was totally unusable.
- Various other intangible performance bugs that seem to pile up when you haven't restarted in a day or two. It starts out really snappy and tends to get slower the longer you've had it open.
I should note that the pace of development would be much faster if they open-sourced the browser, but instead they keep starting new, closed-source projects that will likely share the same fate. Their Linux Orion port is from scratch; none of their macOS code is reusable.
Oh, I hate it when developers get cute with DNS. This doesn't happen with Safari? I've also had issues with the password manager (even after telling it I want to use Passwords, it just... doesn't sometimes).
I've been in the same boat as you - I really want to diversify the browser ecosystem, so I've been daily-driving Orion for a bit, but their stance toward open source (which you mentioned) is a big bummer.
It seems like all their recent releases are just following the AI hype.
Lately I've been working toward less app time and more boredom https://youtu.be/orQKfIXMiA8?si=ZyvxO0SFjoGGHbdK ¯\_(ツ)_/¯ works wonders
I think upvoting/downvoting is a crucial aspect of news/information/knowledge. But we've been doing it with plain counts all along. Why not experiment with weights or more complex voting methods? Ex: my reputation is divided into categories; I'm more of an expert in history than politics, hence my votes on historical subjects carry more weight. Feels like that's the next big step for news, instead of just another centralized aggregator? (A toy sketch follows below.)
No offense to the cool system and website, though.
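Purely to illustrate the idea, category-weighted voting might look something like this (a hypothetical data model, not anything an existing aggregator does):

    # A vote counts more when the voter has reputation in the item's category.
    def weighted_score(votes, reputation, category):
        """votes: [(user_id, +1 or -1)]; reputation: {user_id: {category: weight}}."""
        score = 0.0
        for user_id, direction in votes:
            weight = reputation.get(user_id, {}).get(category, 1.0)  # default weight 1
            score += direction * weight
        return score

    votes = [("alice", +1), ("bob", -1), ("carol", +1)]
    reputation = {"alice": {"history": 3.0}, "carol": {"politics": 2.0}}
    print(weighted_score(votes, reputation, "history"))  # 3.0 - 1.0 + 1.0 = 3.0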
Miniflux (https://miniflux.app/) in Docker, fetching 75 RSS feeds I've collected over the years
~200 lines in a Jupyter notebook (a condensed sketch follows the list):
- Fetch entries from Miniflux API (last 24-48 hours)
- Convert to CSV, feed to LLM. GPT-5 identifies trending stories across sources
- Each article gets web-fetched and summarized via Gemini-2.5-flash
- Results render via IPython.display
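A condensed sketch of what such a notebook could look like; the Miniflux query parameters and the summarize_with_llm stand-in for the GPT-5/Gemini calls are assumptions, not the exact notebook:

    import csv, io, time, requests

    MINIFLUX_URL = "https://miniflux.example.com"  # hypothetical instance
    API_KEY = "..."                                 # Miniflux API token

    def recent_entries(hours=36):
        # Miniflux REST API: GET /v1/entries with an X-Auth-Token header.
        # (Parameter names assumed; adjust to the API version you run.)
        cutoff = int(time.time()) - hours * 3600
        resp = requests.get(
            f"{MINIFLUX_URL}/v1/entries",
            headers={"X-Auth-Token": API_KEY},
            params={"published_after": cutoff, "order": "published_at", "direction": "desc"})
        resp.raise_for_status()
        return resp.json()["entries"]

    def entries_to_csv(entries):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["feed", "title", "url", "published_at"])
        for e in entries:
            writer.writerow([e["feed"]["title"], e["title"], e["url"], e["published_at"]])
        return buf.getvalue()

    def daily_briefing(summarize_with_llm):
        # 1. Flatten the last day or two of entries into CSV.
        table = entries_to_csv(recent_entries())
        # 2. Ask one model which stories recur across sources (the "trending" step),
        # 3. then summarize them (the per-article web fetch is omitted in this sketch).
        trending = summarize_with_llm(
            "Here are today's RSS entries as CSV. List the stories covered by "
            "multiple sources, with their URLs:\n" + table)
        return summarize_with_llm("Write a short daily briefing for these stories:\n" + trending)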
Ten minutes per day, fully informed.

(Edit) Now I see. You have to scroll through the story and click "Close story" to get back. It's "mobile first".
If you live in a big city, beware that your newspaper probably lacks coverage of your neighborhood, which is what you actually need.
It is a hard problem.
RSS works great and there are a million ways to consume feeds. There is also a myriad of news aggregator offerings, most with some sort of LLM thrown in on top.
Did we really need this? Was there nothing better Kagi could dedicate its resources to?
To be fair, this is exactly how Kagi Search happened too - many people didn't see the point of a paid search engine in 2018 either, but my family and I needed one, and it happened.
You have a great search service. Please focus on that. Build that into an actual Google-beater. Provide the features your customers actually want. Spend your time, money, and energy making that the greatest search service possible.
Don't waste this opportunity. Please.
Mozilla fell into this trap because its business model was fundamentally broken (the majority of its revenue came from its biggest competitor). Our business model is healthy, and the more apps we have in the ecosystem, the stronger the ecosystem gets.
That only works if the apps somehow reinforce each other, and spending time developing one app also adds something to the other apps. I'm not sure that's true here. I don't see how an LLM-based news service makes your search service stronger?
The news feature feels a bit underwhelming and underdeveloped though, especially with the LLM/AI approach.
I had a little trouble imagining myself using this in particular but I'm a big fan of the search engine.
But the Sports section is bad. The game finished 10 hours ago and it's still showing a match preview.
It feels to me like the bigger problem is about assembling time series of "news", not "news today".
Like if you wanted "show me all stories about crime X from the BBC since 1980" or whatever but then you want to do this across many sources.
This is the missing piece for most news analytics. I think there are legal blockers to getting this done, which is why I mention decentralization.
- Allow me to have a single feed (as opposed to one tab/feed per category). Also, to prevent that feed from becoming too long, allow me to set a maximum number of news items or maximum number of minutes I'd like to spend. Prioritize/leave out news items accordingly. In other words: While I might be interested in sports, I'm not interested in reading or scrolling through as many news items about sports as about, say, world politics.
- "Highlights" and "perspectives" below the article text read like useless AI slop that merely reiterates the text, and artificially prolong an otherwise neatly concise page.
- Allow me to intersect categories and/or choose a regional "focus". Non-regional categories like "sports", "business", "technology" currently seem to aggregate news from across the world. However, I might be particularly¹ interested in a regional subset of e.g. business or sports news.
¹) I.e. not exclusively so. I'm still interested in world news but only when it comes to major events (in the sports case, say, world cups and championships).
A save feature to keep track of interesting articles would be nice.
Having more news (or news better filtered for quality) would also be nice. Right now, at 12 items, the lists seem to be mostly taken up by trendy, low-quality news that will soon be irrelevant, and less by news that doesn't make waves but will probably have more impact in the long run. Actually, this might just be due to the limited number of places being scraped. Not an actual example from the site, but consider how much an article about someone claiming the latest comet is actually alien technology trends (while being completely irrelevant) vs. a scientific paper reporting measurements of the atmospheric composition of a bunch of exoplanets.
Could you guys maybe print it on paper and send it to my physical mailbox, so I can do this ritual with breakfast? :-)
Guten: A Tiny Newspaper Printer - https://news.ycombinator.com/item?id=42599599 - January 2025 (106 comments)
Getting my daily news from a dot matrix printer - https://news.ycombinator.com/item?id=41742210 - October 2024 (253 comments)