Yeah, they do. Go talk to anyone who isn't in a super-online bubble such as HN or Bsky or a Firefox early-adopter program. They're all using it, all the time, for everything. I don't like it either, but that's the reality.
Not really. Go talk to anyone who uses the internet for Facebook, WhatsApp, and not much else. Lots of people have typed in chatgpt.com or had Google's AI shoved in their face, but the vast majority of "laypeople" I've talked to about AI (actually, they've talked to me about AI after learning I'm a tech guy -- "so what do you think about AI?") seem resigned to the fact that, after the personal computer and the internet, whatever the rich guys in SF do is what's going to happen anyway. But I sense a feeling of powerlessness and a fear of being left behind, not anything approaching genuine interest in or excitement about the technology.
We can take principled stands against these things, and I do because I am an obnoxiously principled dork, but the reality is it's everywhere and everyone other than us is using it.
Do you actually know anyone like that? Using Firefox nowadays is itself a "super-online bubble".
They're already being lectured that they need a new phone or laptop every other year. Then there's a new social platform that changes its UI every quarter or two, and now the same is happening to their word processors and everything else.
This is kinda like how if you ask everyone how often they eat McDonald's, everyone will say never or rarely. But they still sell a billion burgers each year :) Assuming you're not polling your Bsky buddies, I suspect these people are using AI tools a lot more than they admit or possibly even know. Auto-generated summaries, text generation, image editing, and conversation prompts all get a ton of use.
Mmm, summarized garbage.
>Also I imagine you frequently read summaries of books
This isn't what LLM summaries are being used for, however. Also, I don't really do this, unless you consider a movie trailer to be a summary. I certainly don't do it with books -- again, unless you think any kind of commentary or review counts as a summary. And I certainly would not use an LLM summary for a book or movie recommendation.
I like LLMs -- I've even built my own personal agent on our Enterprise GPT subscription to tune it for my professional needs -- but I'd never use them to learn anything.
For example: you summarize a YouTube link to decide if its content is something you're interested in watching. Even if summarizations like that are only 90% correct 90% of the time, it's still really helpful; you get the info you need to decide whether to read/watch the long-form content or not.
Recipe pages full of fluff.
Review pages full of fluff.
Almost any web page full of fluff, which is a rapidly rising proportion.
> And how would I know the LLM has error bounds appropriate for my situation?
You consider whether you care if it is wrong, and then you try it a couple of times, and apply some common sense when reading the summaries, just the same as when considering if you trust any human-written summary. Is this a real question?
I was thinking more along the lines of asking an LLM for a recipe or review, rather than asking for it to restrict its result to a single web page.
The opportunity cost of "missing out" on reading a page you're unsure enough about to want a summary of is not likely to be high, and similarly it doesn't matter much if you end up reading a few paragraphs before you realise you were misled.
There are very few tasks where we absolutely must have accurate information all the time.
However, 99% of the time I use this it isn't because I need an accurate summary, but because I've come across some overly long article that I don't even know if I'm interested in reading, so I have Mistral Small generate a summary to give me a ballpark of what the article is even about, and then judge whether I want to spend the time reading the full thing or not.
For that use case I don't care if the summary is correct, just whether it's in the ballpark of what the article is about (from the few articles I did end up reading, the summaries were in the ballpark well enough to make me think it does a good enough job). Even if it's incorrect, the worst that can happen is that I end up not reading an article I might have found interesting -- but that's what I'd do without the summary anyway. Since I need to run my Tcl/Tk script, select the appropriate prompt (I have a few saved ones), copy/paste the text, and then wait for the thing to run and finish, I only use it for articles I'm already biased against reading.
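For what it's worth, the whole workflow boils down to one request to the local model. Here's a minimal Python sketch of the idea -- this stands in for my actual Tcl/Tk script, the port is Ollama's default, and the "mistral-small" tag is a placeholder for whatever model you've actually pulled:

# Rough sketch: pipe an article in on stdin, get a ballpark summary back from a local Ollama model.
import sys
import requests

article = sys.stdin.read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral-small",  # placeholder tag; use whatever you've pulled
        "prompt": "Summarize this article in one short paragraph:\n\n" + article,
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])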
How do I know what I'd be reading is correct?
To your question: for the most part, I've found summaries to be mostly correct enough. The summaries are useful for deciding if I want to dig into this further (which means actually reading the full article). Is there danger in that method? Sure. But no more danger than the original article. And FAR less danger than just assuming I know what the article says from a headline.
So, how do you know its summaries are correct? They are correct enough for the purpose they serve.
Of course, as more and more pieces of writing out there become slop, does any of this matter?
I have it connected to a local Gemma model running in Ollama and use it to quickly summarize webpages (nobody really wants to read 15 minutes' worth of personal anecdotes before getting to the one paragraph that actually has relevant information) and to find information within a page, kind of like Ctrl-F on steroids.
The machine is sitting there anyway, and the extra cost in electricity is buried in the hours of gaming that GPU is also used for, so I haven't noticed it yet. And if you game, the graphics card is going to be obsolete long before the small amount of extra wear is obvious. YMMV if you don't already have a gaming rig lying around.
So grab Ollama and your preferred model, and install Open WebUI.
Then open about:config
And set browser.ml.chat.provider to your local Open WebUI instance.
A Google search suggests you might also need to set browser.ml.chat.hideLocalhost to false, but I don't remember having to do that.
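Concretely, the prefs involved end up looking something like this -- the localhost URL/port here is an assumption (point it at wherever your Open WebUI instance actually listens), and the last line is only needed if Firefox refuses a localhost provider:
browser.ml.chat.enabled set to true
browser.ml.chat.provider set to http://localhost:8080
browser.ml.chat.hideLocalhost set to false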
I like to keep AI at arm's length: it's there if I want it but can fuck off otherwise.
Lots of people really do seem to want it in everything though
I want it in text-to-speech (TTS) engines and transliteration/translation, and... routing tickets to the correct teams/people would also be awesome :) (classification, where mistakes can easily be corrected).
Anyway, we used a TTS engine before OpenAI -- it was AI-based. It HAD to be AI-based, as even for a niche language some people couldn't tell it was a computer. Well, from some phrases you can tell, but it's very high quality and correctly knows which parts of a word to put the emphasis on.
https://play.ht/ if anyone is wondering.
On second thought this probably depends on the caption language.
For the most part, Whisper does much better than stuff I've tried in the past like Vosk. That said, it makes a somewhat annoying error that I never really experienced with others.
When the audio is low quality for a moment, it might misinterpret a word. That's fine, any speech recognition system will do that. The problem with Whisper is that the misinterpreted word can affect the next word, or several words. It's trying to align the next bits of audio syntactically with the mistaken word.
With older systems, you'd get a nonsense word where the noise was, but the rest of the transcription would be unaffected. With Whisper, you may get a series of words that completely diverges from the audio. I can look at the start of the divergence and recognize the phonetic similarity that created the initial error, but the following words may not be phonetically close to the audio at all.
You don't actually state whether you believe Parakeet is susceptible to the same class of mistakes...
Your point about the caption language is probably right, though. It's worse with jargon or proper names, and worse with non-American English speakers. If they can't even get all the common accents of English right, I have little hope for other languages.
It's still AI, of course. But there is a distinction between it and an LLM.
[0] https://github.com/openai/whisper/blob/main/model-card.md
Seems kinda weird for it not to meet the definition in a tautological way even if it’s not the typical sense or doesn’t tend to be used for autoregressive token generation?
I very much do want what used to be just called ML that was invisible and actually beneficial. Autocorrect, smart touch screen keyboards, music recommendations, etc. But the problem is that all of that stuff is now also just being called "AI" left and right.
That being said, I think what most people mean when they say "AI" is really not as beneficial as it's being pushed to be. It has some uses, but I think most of those uses are not going to be the in-your-face AI we're getting now, but things in the background.
But we do have to acknowledge that "AI" has very much turned into an all-encompassing term for everything ML. It is getting harder and harder to read an article about something being done with "AI" and know whether it was a custom, purpose-built model for a specific task or whether someone threw data into an LLM and hoped for the best.
They are purposefully making it harder and harder to just say "No AI" by obfuscating this so we have to be very specific about what we are talking about.
Having the feature on a menu somewhere would be fine. The problem is the confluence of new features now becoming possible, and companies no longer building software for their users but as vehicles to push some agenda. Now we’re seeing this in action.
Maybe I'll ask Gemini to write one...
LLMs are a product that wants to collect data and be trained on a huge amount of input, with upvotes and downvotes to calibrate the quality of their output, in the hope that they will eventually become good enough to replace the very people who trained them.
The best part is, we're conditioned to treat those products as if they are forces of nature. An inevitability that, like a tornado, is approaching us. As if they're not the byproduct of humans.
If we consider that, then we the users get the short end of the stick, and we only keep moving forward with it because we've been sold on the idea that whatever lies at the peak is a net positive for everyone.
That, or we just don't care about the end result. Both are bad in their own way.
You can disable AI in Google products.
E.g. in Gmail: go to Settings (the gear icon), click See all settings, navigate to the General tab, scroll down to find Smart features and personalization and uncheck the checkbox.
> Important: By default, smart feature settings are off if you live in: The European Economic Area, Japan, Switzerland, United Kingdom
(same source as in grandparent comment).
(I desperately want to disable the AI summaries of email threads, but I don't want to give up the extra spam filtering benefit of having the smart features enabled)
All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.
The freak stampede of all these tech giants to shove AI down everybody's throat just shows that they perceive the technology as having huge potential to advance the above agenda -- for themselves, or for their competitors to their detriment.
I'll bear that in mind the next time I'm getting a haircut. How do you think Bob's Barbers is going to achieve all of that?
Although I never saw anybody reporting it was actually useful, it's tasteful, accessible, and completely out of your way until you need it.
I don't personally care if a product includes AI, it's the pushiness of it that's annoying.
That, and the inordinate amount of effort being devoted to it. It's just hilarious at this point that Microsoft, for example, is moving heaven and earth to put AI into everything Office, and yet Excel still automatically converts random things into dates with no reliable way to disable it (the "ability" to turn it off that they added a few years ago only works half the time, and only affects CSV imports).
I mean, c'mon, it's literally called the fucking Windows key and it doesn't work. As per standard Microsoft, it's a feature that worked perfectly on all versions before Cortana (their last "AI assistant" type push). I wonder which new core functionalities of their product they're going to fuck up and never fix.
Windows as an OS really kind of peaked around Windows 7 IMO... though I do like the previews on the taskbar, that's about the only advancement since that I appreciate at all... besides WSL2(g) that is. I used to joke that Windows was my favorite Linux distro, now I just don't want it near me. Even my SO would rather be off of it.
I want to choose the extensions that go into my browser. I don't even use the browser's credential manager, and I've gotten to a point where I'm just not sure anything is actually getting better.
I will say that the Gemini answers at the top of Google searches are hit or miss, and I do appreciate that they're there. That said, I'm a bit mixed as the actual search results beyond that seem to be getting worse overall. I don't know if it's my own bias, but when the Gemini answer is insufficient, it feels like the search results are just plain off from what I'm looking for.
So, yes, I want AI in "everything".
And it's not a waste of resources if it's not triggered automatically.
In fact, I'd say you're an edge case's edge case. There should be a word for that. Maybe "one-off."
The use-case, which generalised is "pull some information from a web page", is far less niche, and I'd argue extremely common.
I know a lot of people - including non-technical people - who spend a lot of time doing that in ways ranging from entirely manual to somewhat more sophisticated, and the more technically knowledgeable of those have started looking for AI tools to help them with that.
To the extent users "don't want" AI available for things like this, it is mostly because they don't know AI could help with this.
E.g. just a few days ago, I had someone show me how they painstakingly copied, column by column, from the exact same Notion site I mentioned into a Google sheet, without realising it was trivially automatable. Or rather: trivially automatable to a technical user like me, as in the sketch below. But it could be trivially automatable to anyone, with relatively little integration effort in the browser.
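To illustrate what "trivially automatable" means here -- not the actual Notion case, which would go through Notion's API or an HTML export, but the general "pull a table off a page into a spreadsheet-friendly file" shape -- here is a hypothetical Python sketch, assuming the requests and beautifulsoup4 packages and a plain static HTML table at a made-up URL:

# Hypothetical example: scrape the first HTML table from a page into a CSV
# that can be imported into Google Sheets. The URL is a placeholder.
import csv
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/some-report", timeout=30)
resp.raise_for_status()

# Assumes the page contains at least one plain <table> element.
table = BeautifulSoup(resp.text, "html.parser").find("table")

with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in table.find_all("tr"):
        writer.writerow(cell.get_text(strip=True) for cell in row.find_all(["th", "td"]))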
Also, I believe some agentic tasking can make sense: scroll through all the Kindle Unlimited books for critically acclaimed contemporary hard sci-fi.
But stapling on a chat sidebar or start page or something seems lacking in imagination.
But a more nuanced take is: the term "AI" has become almost meaningless, as everything is being marketed as AI, with startups and bigger companies doing it for different reasons. However, if you mean the GenAI subset, then very few people want it, in very specific products, and with certain defined functionality. What is happening now, though, is that everybody and their mum tries to slap it everywhere and see if anything sticks (spoiler: practically nothing does).
Well, if you phrase it this way, then yes, people want this. AI can be useful, and integration is beneficial. But if we are talking about the current hype, then no: most people are against having AI blindly shoved into everything, and are annoyed by it the whole time.
Personally, I would prefer for apps to safely open up to any kind of integration, with AI being just one automation of many, whatever one prefers. It's so annoying that everything is either a walled garden guarding every little bit it can grab, or open but so limited in what it can actually do that you're basically forced into the walled gardens.
Building AI the Firefox way: Shaping what’s next together - <https://connect.mozilla.org/t5/discussions/building-ai-the-f...>
Well, yes. It's extremely useful. However, the hype bubble means it's getting added everywhere even when there's not a clear and vetted use case.
It works really well for navigating docs as a super-charged search--much better at mapping vague concepts and words back to the official terminology in the docs. For instance, library Z might have "widgets" and "cogs" as constructs, but I'm used to library A which has similar constructs "gadgets" and "gears". I can explain the library A concepts and LLMs will do a pretty good job of mapping that back to the library Z concepts--much better than traditional search engines can do.
That could have been an amazing experience where the AI told me exactly how to use the product. That's what I want. It's not what I got.
Spoiler: you didn't.
However, I think there is demand from at least one person (me) for a Linux system with no AI whatsoever. Firefox could make itself the browser of choice for the minority that doesn't want any AI. Sure, you can configure it to be AI-free, but that is a bit like being able to eat vegan at a meaty restaurant because you can always spit out the meat.
Firefox has been struggling of late and they don't do scoped CSS, which makes it as good as IE6 to me, but I think they could get their mojo back by being cheerleaders for the minority that have decided to go AI free. This doesn't mean AI is bad, but there is a healthy niche there.
Apart from anything else, there are new browsers like Atlas that are totally AI. I would say that an AI enabled Firefox is not going to compete with Atlas, but AI free is a market that could be dominated by them.
There is going to be a growing market for no AI. In my own case, my dad was "pig butchered" by an AI chatbot and died penniless, so I have opinions on AI. Sam Altman would not want to meet me on a bad day, unless he has some AI that specialises in extreme ultraviolence.
Then there is an ever-growing army of people who have lost their jobs to AI, only to get nothing but rejections from AI-powered job boards.
Then there are those that have lost friends to AI psychosis, then there are those that have no water and massive utility bills due to AI data centers. The list goes on!
Sounds like I need to put together an AI free operating system with AI free browser for those that have their own reasons for resenting AI!
I think there is a ton of potential for having an LLM bundled with the browser and working on behalf of the user to make the web a better place. Imagine being able to use natural language to tell the browser to always do things like "don't show me search engine results that are corporate SEO blogspam" or "Don't show me any social media content if its about politics".
It's bad enough what Google did to search; a future where the only thing you get back is a) what the machine allows you to see or create (which may be determined by the built-in agent or by the programmers); b) what the machine wants you to see, & modified to be in line with its whims; & c) hallucinated slop where it is difficult to determine what is real, what is human-originated, & what is constructed out of whole cloth.
I've vibe coded a few Godot games. It's all good fun.
But now everything is forcing it. Google is telling people what rocks are tasty, on Reddit bots are engaging with bots.
From what I can tell the only way to raise VC money is by saying AI 3 times. If the ritual is done correctly a magic seed round appears.
As they say, don't hate the player, hate the game.
If an app is a gateway to a bunch of data, it's cool to be able to "talk" to that data via any built-in LLM-based stuff, but typically the app is just a frontend anyway in that case, so the app isn't really needed.
Other than that, I don't think I'd be happy to see AI anywhere else. I pretty much don't want any AI in my operating system or browser.
I also wouldn’t want to go back to only web search for finding things out. Search engines are generally inferior.
Unfortunately got to meet those KPIs.
I don't want this, but at the same time I think people are overreacting. If Mozilla remains true to their word and this is an opt-in sort of thing, it's hard for me to get too worked up about it. I can just ignore it.
browser.ml.chat.enabled set to false
browser.ml.chat.menu set to false
browser.ml.chat.page set to false
browser.ml.chat.page.footerBadge set to false
browser.ml.chat.page.menuBadge set to false
browser.ml.chat.shortcuts set to false
browser.ml.chat.sidebar set to false
browser.ml.enable set to false
browser.ml.linkPreview.enabled set to false
browser.ml.pageAssist.enabled set to false
browser.tabs.groups.smart.enabled set to false
browser.tabs.groups.smart.userEnable set to false
extensions.ml.enabled set to false
That should do it. You can also use the user config override if you want to avoid doing that every time you install Firefox somewhere new (put user.js in the root folder of your Firefox profile):
user_pref("browser.ml.chat.enabled", false);
user_pref("browser.ml.chat.menu", false);
user_pref("browser.ml.chat.page", false);
user_pref("browser.ml.chat.page.footerBadge", false);
user_pref("browser.ml.chat.page.menuBadge", false);
user_pref("browser.ml.chat.shortcuts", false);
user_pref("browser.ml.chat.sidebar", false);
user_pref("browser.ml.enable", false);
user_pref("browser.ml.linkPreview.enabled", false);
user_pref("browser.ml.pageAssist.enabled", false);
user_pref("browser.tabs.groups.smart.enabled", false);
user_pref("browser.tabs.groups.smart.userEnable", false);
user_pref("extensions.ml.enabled", false);
It's a garbage feature that no one appears to have asked for. It's frustrating that the choice is between "becoming bad" (Firefox) and "much worse" (Chrome).
And do what? Use a Chromium-based browser, which is infinitely worse?
Mozilla has now shoved AI down my throat as a user of Firefox. It's one thing if they want to pursue questionable business directions on a purely opt-in basis -- that's their prerogative -- and while I'll take issue with what was in my opinion one of the last bastions of the open web burning money like that, ultimately, at least they didn't force it on the user.
It's another thing when they impose it on the user base, and a user base, at that, that's probably more sensitive to having the latest trend shoved in our faces than the average browser user (I'm not saying this to sound elitist; on the contrary, I think FF attracts obstinate, almost luddite types when it comes to new technology; I think many of us just want a basic, relatively no-frills browser).
If I could have set a systemwide setting to say "Only add AI to things I want", then I would have ticked that box a long time ago.
Maybe YT could add an option for "filter out AI slop". I might pay for YT if they did that.
If it works with my local Ollama servers, then yeah, I don't mind it. I already use the existing AI integration sometimes (which is very basic) for translation and summarisation. It's not bad (translation is definitely better than the built-in one because it is much better with context).
But if it has to be cloud crap then no. I don't want big tech datamining my behaviour.
It's definitely not a viable way for them to make money on services when it comes to me. And I think most firefox users will feel that way. If they didn't care about such things they'd be using chrome.
What's often missing nowadays when integrating AI is creativity and understanding what people really want. It's not easy, but that's what makes products great.
I agree with the article that the AI being introduced into Firefox isn't very compelling and I'd rather it not exist. But I disagree that people don't want AI features in Firefox - they just don't want what they're getting.
That’s it. The rest is just activism and kids playing in a sandbox with non-profit money to pad out their resume with whatever topical keywords might land them their next gig.
The AI inclusion seems like the same reason everyone else is adding AI, they don't want to be left behind if or when it's viewed as an essential feature.
Ah, how the young forget... Mozilla became popular precisely due to their willingness to challenge the market leader at the time [1], namely, Internet Explorer. Going against the market leader should be in their DNA. The fight is not lost just because there's a market leader. If anything, Mozilla is currently losing the battle because the leadership doesn't believe they can do it again.
I'm fine with Mozilla diversifying their income, but I'm not fine with Mozilla sacrificing their browser (the part we desperately need the most) in the name of a "Digital Rights Foundation" that, at this rate, will lose their seat at the negotiating table.
[1] https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#/m...
I do not believe that this is the case. Their #1 revenue source is Google. The moment they start regaining any foothold?
Imagine just collecting that amount from Google as tax, and funding Mozilla publicly.
People in the organization are trying to use what's left of the name recognition and all that money to benefit their own initiatives.
s/Chrome/Internet Explorer/g
Nobody has won until the match is over, and history has a very long tail.
They probably would've achieved enough to sustain Firefox development in perpetuity if they invested most of Google's money in a fund.
It doesn't matter if Firefox became better. There is simply not enough differentiation potential in the core browser product to win by being better. It's all marketing.
I just wish Mozilla sold some stickers/themes as proxy donations and became largely independent.
Nitpick: Firefox is developed by Mozilla Corp., not the non-profit.
An open slack-alike also seems like a good fit for them.
Alas, they have tons of cash but little capacity to do anything useful.
Yep, a federated social network is indeed an ambitious problem, perhaps Mozilla would've been well-suited to tackle it. The problem is not the tech or scope, but timing. 15 years ago everyone was happy to be on FB / Twitter. 10 years ago, Microsoft just bought LinkedIn; Google tried, then killed off a network with 500k DAU; all of that time, there was little space for a new contender.
Mastodon only took off because Twitter went to shit real fast; still most people flocked to mastodon.social, because they heard Mastodon was good, but had no idea what federation is, or why it's important. MAYBE that would've been the perfect timing for Mozilla to launch their own ActivityPub platform.
Firefox is steadily losing market share, and any attempts to do something about it are met with negativity. The 2-4% of users who use it care about their privacy. But they are not being deprived of it; the AI tab is optional, and no one is removing the regular tab. (Of course, it would be better if they allowed the integration of local models or aggregators, such as Openrouter, Huggingface...)
Meanwhile, developers continue to ignore Firefox, testing only Chromium browsers. Large companies are also choosing the Chromium engine for their browsers.
Perhaps if they implement this functionality conveniently, more average users will use Firefox.
The pessimism can get old.
My impression is that this is the reason why they keep losing market share. I never see any positive news about Firefox or Mozilla, and the browser has nothing that would make me switch.
Firefox gained market share because people recommended it and installed it on the computers of friends and family. They seem to have stopped, and its developers don't seem, from the outside, to be interested in doing anything to bring that back.
- An extension system more powerful than Chrome's, which supports for example rich adblockers that can block ads on Youtube. Also, it works on mobile, too
- Many sophisticated productivity and tab management features such as vertical tabs, tab groups, container tabs, split tabs, etc. And now it also has easy-to-use profiles and PWA support just like Chrome
- A sync system which is ALWAYS end-to-end encrypted, and doesn't leak your browsing data or saved credentials if you configure it wrong, like Google's does, and it of course works on mobile too
- And yes, LLM-assisted summarization, translation, tab grouping, etc., most of which works entirely offline with local LLMs and no cloud interaction, although there are some cloud-enabled features as well
I have LibreWolf and Chrome installed, but not Firefox, and I like part of Firefox in spite of, not because of, the rest of Mozilla. I'd be interested in Ladybird except they threaten to use Swift.
Try leaving the basement.
It's like going from YouTube to Tiktok, for most content we consume, you could cut 90% of it without losing anything of value.
Do I want it to go to some 3rd party AI service? No. Absolutely not. However, if it's configurable like the Copilot extension—where I can pick which AI I'm using—then I'm all for it. I'll just pick a model I've got in ollama and live the dream.
NOTE: As I wrote this, Firefox underlined "ollama" in red because it failed the spellcheck. Imagine if Firefox had a proper grammar-checking AI too. That would be super useful. I'd love that!
Those who think they don't want AI in their browsers are completely lacking in imagination, IMHO.
No one wants to browse Facebook or Reddit or whatever. The interfaces are user-hostile or horrible. If we could interact with our own private interface, and the outcome was submitted to some text/web LLM that then did the interaction with the actual websites, then we would actually be able to use the public internet.
It's possible that this software shouldn't be a browser though, but something else, possibly something which is built on top of a browser engine.
I think this is more a case of there being limited appetite for what Mozilla is doing here. At least so far. I keep that stuff turned off in Mozilla and just don't see the appeal. And I'm saying that as someone who does agentic coding for some things, uses and pays for ChatGPT, uses perplexity regularly, etc. And I did install Atlas the other day. I didn't switch to it and wasn't too impressed with what it does.
I think browser makers (including the big ones) are still struggling a bit to identify use cases beyond doing search via an LLM, adding sidebars, and trying to find a balance between site security and giving all this full access to what's on the page.
Mozilla using their own limited models seems to have very little to add to this mix. At least that's my impression. But it's too early to state that users don't want this.
Some users don't want this, clearly. And some other users really don't like any form of change. But there are other users that might want some of these things if they are well executed.
Anyway, Mozilla's attempts here strike me as yet another weak effort to do "something" that follows in a long line of half assed products and services they've developed, launched (sometimes), and killed over the last decades. I don't think they have what it takes; or at least, they have a lot to prove. And the vague hand wavy announcements for this aren't a great sign that they have this figured out beyond "doing something with AI".
While I really appreciate its existence, I was surprised by the amount of corporate stuff I had to remove setting it up: Frontpage ads from their supporters, search offering completions and extras that border on ads as well, the AI bar being pushed through a popup tutorial…
It definitely felt different from other free software, distinctly similar to a for-profit app in a bad way. All the crap was removable in settings, but still.
HN spent a year discussing the threat that AI posed to Google Search. Well, if it threatens search, then it threatens the browser. They're hedging. How frequently does Mozilla get criticized for failing to do X Y or Z to change with the times (or for doing it late? for having too much ambition, or not enough, sometimes at the same time?).
The fact of the matter is that they're already struggling to remain relevant as it is, and their competitors have been dabbling in this space for a while. They're already going to have the infrastructure, because local LLMs works really well for translation (and being able to do content translation without sending all the content off to Google is obviously a sensible feature for Firefox to have). There's no reason to not at least try to match their competitors. Especially if they could potentially hit on some "killer app", which is really the only way at this point to make up any significant ground in marketshare in a market that is otherwise entirely commodified.
- It runs locally without consuming too much energy or phoning home,
- it can be completely disabled without being re-enabled after an update,
- its training set is ethically sourced and the manifest of training sources is publicly accessible (I'm fine with the training data not being accessible as long as it's properly marked in the manifest),
- and the weights and training code are open,
I would be fine having some sort of AI model available as assistant in FF. I probably wouldn't use it, but I wouldn't have any problems with it being there.
My only beef is they've basically put Claude's webpage on a side pane, with all the issues of a squished webpage.
I also think having a separate mode is really the best middle ground between an all-spying AI browser and one that has no AI at all (which makes doing some things with AI more manual).
I have used that feature for a few weeks now and find it utterly useless.
Partly because it is squished. But mostly because it offers no value over just having a tab open with Claude (or in my case Mistral).
The extra buttons (summarize) and integrations (context menu) hardly ever work (pages and selections are often too large for GPT, Copilot, Mistral or even Claude, and the sidebar just gives an error), but even if they did: what problem do these extra buttons and integrations solve? Am I missing something?
Do note that I would love integration the other way around: to have an AI agent (through an MCP, for example) drive my Firefox. Safely, contained, etc. I am not an AI Luddite. I just find the Firefox sidebar offers no value at all.
browser.ml.chat.maxLength
So they pretty much have to ship one, to stay relevant. And they are privacy-focused, so I'm happy they are not just using ChatGPT or whatever under the hood to implement support.
For one, because it breaks the Unix philosophy of "doing one thing and doing that well".
In that vein, I do want Firefox to develop/allow/improve an interface so that machines, among them AI MCPs, can drive my Firefox. And do so safely, securely, contained, etc.
So that my AI agent can e.g. open a Firefox tab and do things there on my behalf. Without me being afraid it nukes all my bookmarks, and with me having confidence in safety nets so that some other tool or agent cannot just take over my gmail tab and start spamming under my account.
Point is: I really think Mozilla and Firefox have a role to play in the AI landscape that's shaping up. But yet another client to interact with chatbots is not that. Leave that to people building clients please: do one thing and do it well.
But some niceties to e.g. allow running scripts with filtered/permissioned access within a sidebar would be nice.
Here's some ways I can think of:
- seamless integration with local models
- opt in and opt out experience when needed
- ai instrumentation (so fill up tedious long web forms for me)
- ai and accessibility
these are off the top of my head.
it boggles my mind that there are so many convinced that AI doesn't offer good use cases for a browser.
I think the "how they introduce it" part is crucial and it doesn't look like Mozilla has cracked that nut from the announcement. but to say no one wants this is just not true and short sighted.
But I'm certainly one of those users that are getting frustrated with having to turn off all of the AI features in recent releases.
You can't be all things to all people.
Anyway, I would be more afraid of agents than just AI answering about things, generating images/music or whatever. That could affect much more than just privacy.
I also don't need or want its manipulative presence around.
Not to be paranoid, but it's not just about browsers, that's just the most convenient place we've gotten started with this sort of mass surveillance (and control) architecture.
Is there any evidence Mozilla has plans to do this? As far as I know, there's only two companies doing what you describe: Microsoft and Meta. Microsoft being the most invasive (and evil) by a huge amount—because it's at the OS level.
Why would they want AI?
It has just the right acceleration curve and properly works inside nested scrollable elements.
It's stable, has a good UI, and is light on resources. The excellent ad blocking is a huge feature.
For the average Joe user, they might want some AI features but most techy users have already got that figured out.
E.g. I had a very good experience reverse-engineering a local bank's API with LLMs so that I can download my bank statements in a few seconds with local Python scripts, instead of several minutes of error-prone clicking in the bank's shitty old interface. The thing I'd have spent a day on, the LLM coded in a few minutes from recorded request/responses (roughly the shape of the sketch below). Yes, the code is a bit gibberish, but why do I care for my local, single-user usage?
I can imagine a dozen similar stupid but routine API parsing challenges for LLMs that everyone could use.
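To give a flavour of what those throwaway scripts look like, here's a rough Python sketch of the kind of thing the LLM produced from the recorded traffic -- the endpoint, token, parameters and JSON field names are all hypothetical placeholders, not any real bank's API:

# Hypothetical: replay a captured API request and dump transactions to CSV.
import csv
import requests

SESSION_TOKEN = "paste-token-captured-from-the-browser-devtools"  # placeholder

resp = requests.get(
    "https://online.examplebank.com/api/accounts/main/transactions",  # placeholder URL
    headers={"Authorization": "Bearer " + SESSION_TOKEN},
    params={"from": "2024-01-01", "to": "2024-12-31"},
    timeout=30,
)
resp.raise_for_status()

with open("statement.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "description", "amount"])
    for tx in resp.json()["transactions"]:  # field names assumed from recorded responses
        writer.writerow([tx["date"], tx["description"], tx["amount"]])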
If it's not enabled during usual browsing and doesn't snoop in everyday data, but only in a dedicated sandboxed window, I say it's a good design from Mozilla's side.
Really, I intend to push it into a Google Sheet, and ideally I'd just want a bookmark to do that, but for now I guess I'll settle for a script I can give a URL to. For a lot of people's daily manual chores, the ability to ask an LLM to solve it, and to bookmark an "ask this again about another page" action, would be a gamechanger.
I don't need this. I don't want this. I did not ask for this.
I think what we see here is commercial interests ruining a browser.
The AI things are pushed by the idea of making Firefox more marketable to companies. So Mozilla gets more money, at the expense of users. This is the sad reality that explains why Mozilla behaves that way. Google too, by the way.
Not really. Outside of influencers looking to capture the next hot thing (like Mozilla) and tech bros, there is no living soul on this planet who wants AI browsers or is trying to normalise them.
A second set of features could be a language rewriter and translator for web pages and web forums.
A third set of features: extract text notes from a web page, save them to the browser history, and allow AI chat over this text-enhanced browser history.
A fourth feature: bookmark surfing. The AI looks through each bookmark individually for resources and information that can be surfaced in response to chat requests.
The first is the only useful scenario in a local setting that would actually be applauded and appreciated. I don't know how it behaves on some systems, or how much energy it would expend. It won't slow down Firefox off the shelf, because Firefox won't scour the AI index unprompted.
Edit: rearranged paragraphs.
Dear god, no. The last thing I want to be doing is telling grandma over the phone how to sweet-talk the settings screen into turning her adblocker back on.
I can’t roll my eyes any harder when I hear some ad like “How can agentic AI reshape CRM for your workforce?”
Fixed that for you, greedy d-bags.
This isn’t innovation. Leadership keeps green-lighting trendy distractions while the browser that actually matters keeps slipping behind. And it’s happening because there’s no real oversight, no accountability, and no one willing to say “no” when someone pitches another off-brand hobby project.
Mozilla needs a reality check. Stop burning resources on experiments nobody asked for, remove the people who think this is acceptable, and refocus on the one thing that still gives the organization a reason to exist: building a great browser. Until that happens, they’re just wasting donor money and goodwill while Firefox slowly fades away.
Like, what were they thinking?
I'm glad that they have a single about:config option to turn it all off. First thing I did the minute I saw an "Ask AI" item appear in my right-click context menu.
As a ChatGPT subscriber, I use it more now that I can just open a dedicated sidebar in Firefox with ChatGPT inside.
People use Firefox because they want privacy respecting software with good customizability. What Mozilla should be focusing is making their "vanilla" experience as good as possible and keep working on tools which further help user privacy.
Firefox should be performant, compatible, well polished and have the best privacy tools available. Focusing on anything else will make it just a worse version of another browser.
To be honest, this makes me really question the leadership of Mozilla. Who is deciding this? And what are these decisions based on? I doubt that it is actual user research.
Those critics then straw-man by saying the AI will take up a ton of resources in your browser (it could be as simple as a text box) or collect your data secretively (what company wants to deal with that PR fallout?).
Whether you like it or not, and regardless of your view on the current state of “AI” and where it’s headed, the undeniable fact is that “AI” has been and is in the zeitgeist now and will continue to be for at least another year or two. If Mozilla Firefox does not show anything on something like this, the general public and the general tech writers (not as invested in Firefox) would write it off further. If Mozilla Firefox does something like this, then the diehard fans will be up in arms about what they see as distractions (and to be frank, Mozilla has had more than a few over the years).
What matters is if Mozilla listens to feedback from a diverse audience instead of being swayed by any specific group. It’s not easy. I’d rather Mozilla try something and goof up or fail instead of just being left behind due to inaction.
In particular I'd also love agentic AI so I can quickly automate tasks on shitty web sites that can't be reasonably automated otherwise.
But even a free, no-signup "summarize this wall of text" would be useful.
I think the adoption of AI browsers shows that there are people who find value in this, and I think a lot more people would be interested if it wasn't getting relentlessly forced on them at every corner, making them refuse it out of principle.
Firefox and Thunderbird, that is it. Everything else was just a ridiculous time and money sink which should've just been spent on those core products.
Summarizing, explaining pages directly, without copying to another app. Reading pages out aloud. Maybe even orchestrating research sessions, by searching and organizing...
there's so much stuff that could get much better if they invested more in AI features -- tab grouping, translation, ad blockers; why are people so triggered? because it might end up being bad?