frontpage.

Tracing JITs in the Real World CPython Core Dev Sprint

https://antocuni.eu/2025/09/24/tracing-jits-in-the-real-world--cpython-core-dev-sprint/
1•todsacerdoti•1m ago•0 comments

Update on Ongoing Microsoft Review

https://blogs.microsoft.com/on-the-issues/2025/09/25/update-on-ongoing-microsoft-review/
1•hggh•3m ago•0 comments

Million-year-old skull rewrites human evolution, scientists claim

https://www.bbc.co.uk/news/articles/cdx01ve5151o
1•4ndrewl•4m ago•0 comments

AI Agents Get Spending Power While Humans Fail Authenticity Tests at 51.2%

https://syntheticauth.ai/posts/synthetic-auth-report-issue-012
1•zerolayers•4m ago•0 comments

Electron-based apps cause system-wide lag on macOS 26 Tahoe

https://github.com/electron/electron/issues/48311
3•STRML•5m ago•0 comments

Hare, the 100-Year Language

https://www.youtube.com/watch?v=42y2Q9io3Xs
1•fuzztester•5m ago•1 comments

Typst: A Possible LaTeX Replacement

https://lwn.net/Articles/1037577/
2•keks24•7m ago•0 comments

Redox OS Development Priorities for 2025/26

https://www.redox-os.org/news/development-priorities-2025-09/
1•akyuu•12m ago•0 comments

Dual-scale chemical ordering for cryogenic properties in CoNiV-based alloys

https://www.nature.com/articles/s41586-025-09458-1
1•PaulHoule•13m ago•0 comments

New tool makes generative AI models more likely to create breakthrough materials

https://news.mit.edu/2025/new-tool-makes-generative-ai-models-likely-create-breakthrough-material...
1•gmays•14m ago•0 comments

Data Centers Are Driving Up Your Electricity Costs

https://substack.perfectunion.us/p/how-data-centers-are-driving-up-your
1•doener•14m ago•0 comments

We're giving 40% off global eSIM data (200 countries, instant activation)

https://x.com/travelyesim
1•travelyesim•14m ago•1 comments

What Google Doesn't Want You to Know [video]

https://www.youtube.com/watch?v=GvaOUFwXjf4
2•doener•15m ago•0 comments

Walking Around the Compiler

https://bernsteinbear.com/blog/walking-around/
1•azhenley•15m ago•0 comments

Car Insurers Found a New Way to Rip You Off [video]

https://www.youtube.com/watch?v=X6UW4CFz71s
1•doener•15m ago•0 comments

Three in four European companies are hooked on US tech

https://www.theregister.com/2025/09/25/three_four_european_companies/
2•rntn•16m ago•0 comments

RVV benchmark Tenstorrent Ascalon X

https://camel-cdr.github.io/rvv-bench-results/tt_asc_x/index.html
2•fidotron•16m ago•0 comments

List of predictions for autonomous Tesla vehicles by Elon Musk

https://en.wikipedia.org/wiki/List_of_predictions_for_autonomous_Tesla_vehicles_by_Elon_Musk
2•jampekka•16m ago•0 comments

Spotify Announces New AI Safeguards, Says It's Removed 75M 'Spammy' Tracks

https://variety.com/2025/digital/news/spotify-new-ai-safeguards-1236528493/
3•c420•17m ago•0 comments

Gridap.jl – Grid-based approximation of partial differential equations in Julia

https://github.com/gridap/Gridap.jl
1•TheWiggles•17m ago•0 comments

3 Prototypes: A Design and Engineering Exploration to Control Doom Scrolling

https://medium.com/@SoCohesive/just-features-series-1-can-we-engineer-healthier-scrolling-c9f830c...
1•socohesive•18m ago•1 comments

The Hysteresis of Vibe Coding

https://the-nerve-blog.ghost.io/the-hysteresis-of-vibe-coding/
1•mprast•19m ago•0 comments

Samples note: Use comments to describe what code does, not what you wish the

https://devblogs.microsoft.com/oldnewthing/20250925-00/?p=111627
1•OptionOfT•19m ago•0 comments

DOGE might be storing every American's SSN on an insecure cloud server

https://www.theverge.com/news/785706/doge-insecure-cloud-server-social-security-numbers
5•text0404•19m ago•0 comments

I have never subscribed to receive any marketing emails

https://codelearn.me/2025/08/04/marketing-emails.html
1•TheFreim•20m ago•0 comments

Higher Gemini CLI and Gemini Code Assist Limits

https://blog.google/technology/developers/gemini-cli-code-assist-higher-limits/
1•tosh•23m ago•0 comments

Statically Generated Cellular Automata

https://ternary-totalistic-ca-hub.netlify.app/
1•marcentusch•23m ago•0 comments

Workings of Science – Debunked Software Theories

https://dl.acm.org/doi/pdf/10.1145/3512338
3•waldarbeiter•23m ago•1 comments

Tech can fix most of our problems (if we let it)

https://www.noahpinion.blog/p/tech-can-fix-most-of-our-problems
1•pfdietz•24m ago•1 comments

Spotify Embraces AI Music with New Policies, While Combating 'Spam' and 'Slop'

https://www.rollingstone.com/music/music-features/spotify-not-banning-ai-music-new-guidelines-123...
1•thm•25m ago•0 comments

ChatGPT Pulse

https://openai.com/index/introducing-chatgpt-pulse/
149•meetpateltech•1h ago

Comments

Mistletoe•1h ago
I need this bubble to last until 2026 and this is scaring me.
frenchie4111•1h ago
Vesting window?
catigula•1h ago
Desperation for new data harvesting methodology is a massive bear signal FYI
fullstackchris•1h ago
Calm down bear we are not even 2% from the all time highs
brap•1h ago
They’re running out of ideas.
holler•1h ago
Yeah I was thinking, what problem does this solve?
EmilienCosson•1h ago
I was thinking that too, and eventually figured that their servers sit mostly idle at night, with low activity.
candiddevmike•30m ago
Ad delivery
neom•1h ago
Just connect everything folks, we'll proactively read everything, all the time, and you'll be a 10x human, trust us friends, just connect everything...
datadrivenangel•1h ago
AI SYSTEM perfect size for put data in to secure! inside very secure and useful data will be useful put data in AI System. Put data in AI System. no problems ever in AI Syste because good Shape and Support for data integration weak of big data. AI system yes a place for a data put data in AI System can trust Sam Altman for giveing good love to data. friend AI. [0]

0 - https://www.tumblr.com/elodieunderglass/186312312148/luritto...

jrmg•40m ago
Nothing bad can happen, it can only good happen!
yeasku•1h ago
Just one more connection bro, I promise bro, just one more connection and we will get AGI.
randomNumber7•1h ago
When smartphones came out, I first said, "I'm not spending my own money on a camera and microphone that spy on me."

Now you'd be a real weirdo not to have one, since enough people gave in for a small convenience to make it basically mandatory.

unshavedyak•1h ago
Honestly that's a lot of what i wanted locally. Purely local, of course. My thought is that if something (local! lol) monitored my cams, mics, instant messages, web searching, etc - then it could track context throughout the day. If it has context, i can speak to it more naturally and it can more naturally link stuff together, further enriching the data.

Eg if i search for a site, it can link it to what i was working on at the time, the github branch i was on, areas of files i was working on, etcetc.

Sounds sexy to me, but obviously such a massive breach of trust/security that it would require fully local execution. Hell it's such a security risk that i debate if it's even worth it at all, since if you store this you now have a honeypot which tracks everything you do, say, search for, etc.

With great power.. i guess.

tshaddox•1h ago
The privacy concerns are obviously valid, but at least it's actually plausible that me giving them access to this data will enable some useful benefits to me. It's not like some slot machine app requesting access to my contacts.
jstummbillig•58m ago
The biggest companies with actually dense, valuable information pay for MS Teams, Google Workspace, or Slack to shepherd their information. This naturally works because those companies are not very interested in being known as insecure or untrustworthy (if they were, other companies would not pay for their services), which means they are probably a lot better at keeping the average person's information safe over long periods of time than that person will ever be.

Very rich people buy time from other people to manage their information, so they have more of their own life for other things. Less rich people can now increasingly employ AI for next to nothing to lengthen their net life, and that's actually amazing.

creata•29m ago
I might be projecting, but I think most users of ChatGPT are less interested in "being a 10x human", and more interested in having a facsimile of human connection without any of the attendant vulnerability.
Stevvo•1h ago
"Now ChatGPT can start the conversation"

By their own definition, it's a feature nobody asked for.

Also, this needs a cute/mocking name. How about "vibe living"?

jasonsb•1h ago
Hey Tony, are you still breathing? We'd like to monetize you somehow.
moralestapia•1h ago
OpenAI is a trillion dollar company. No doubt.

Edit: Downvote all you want, as usual. Then wait 6 months to be proven wrong. Every. Single. Time.

JumpCrisscross•1h ago
I downvoted because this isn’t an interesting comment. It makes a common, unsubstantiated claim and leaves it at that.

> Downvote all you want

“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”

https://news.ycombinator.com/newsguidelines.html

moralestapia•1h ago
Welcome to HN. 98% of it is unsubstantiated claims.
TimTheTinker•1h ago
I'm immediately thinking of all the ways this could potentially affect people in negative ways.

- People who treat ChatGPT as a romantic interest will be far more hooked as it "initiates" conversations instead of just responding. It's not healthy to relate personally to a thing that has no real feelings or thoughts of its own. Mental health directly correlates to living in truth - that's the base axiom behind cognitive behavioral therapy.

- ChatGPT in general is addicting enough when it does nothing until you prompt it. But adding "ChatGPT found something interesting!" to phone notifications will make it unnecessarily consume far more attention.

- When it initiates conversations or brings things up without being prompted, people will all the more be tempted to falsely infer a person-like entity on the other end. Plausible-sounding conversations are already deceptive enough and prompt people to trust what it says far too much.

For most people, it's hard to remember that LLMs carry no personal responsibility or accountability for what they say, not even an emotional desire to appear a certain way to anyone. It's far too easy to ascribe all these traits to something that says stuff and grant it at least some trust accordingly. Humans are wired to relate through words, so LLMs are a significant vector for getting humans to respond relationally to a machine.

The more I use these tools, the more I think we should consciously value the output on its own merits (context-free), and no further. Data returned may be useful at times, but it carries zero authority (not even "a person said this", which normally is at least non-zero), until a person has personally verified it, including verifying sources, if needed (machine-driven validation also can count -- running a test suite, etc., depending on how good it is). That can be hard when our brains naturally value stuff more or less based on context (what or who created it, etc.), and when it's presented to us by what sounds like a person, and with their comments. "Build an HTML invoice for this list of services provided" is peak usefulness. But while queries like "I need some advice for this relationship" might surface some helpful starting points for further research, trusting what it says enough to do what it suggests can be incredibly harmful. Other people can understand your problems, and challenge you helpfully, in ways LLMs never will be able to.

Maybe we should lobby legislators to require AI vendors to say something like "Output carries zero authority and should not be trusted at all or acted upon without verification by qualified professionals or automated tests. You assume the full risk for any actions you take based on the output. [LLM name] is not a person and has no thoughts or feelings. Do not relate to it." The little "may make mistakes" disclaimer doesn't communicate the full gravity of the issue.

svachalek•9m ago
I agree wholeheartedly. Unfortunately I think you and I are part of maybe 5%-10% of the population that would value truth and reality over what's most convenient, available, pleasant, and self-affirming. Society was already spiraling fast and I don't see any path forward except acceleration into fractured reality.
giovannibonetti•1h ago
Watch out, Meta. OpenAI is going to eat your lunch.
xwowsersx•1h ago
Google's obvious edge here is the deep integration it already has with calendar, apps, and chats and whatnot that lets them surface context-rich updates naturally. OpenAI doesn't have that same ecosystem lock-in yet, so to really compete they'll need to get deeper into those integrations. I think what it ultimately comes down to is that being "just a model" company isn't going to work. The price of intelligence itself will go to zero and it's a race to the bottom. OpenAI seemingly has no choice but to try to create higher-level experiences on top of their platform. TBD whether they'll succeed.
moralestapia•1h ago
How can you have an "edge" if you're shipping behind your competitors all the time? Lol.
pphysch•1h ago
Google is the leader in vertical AI integration right now.
xwowsersx•6m ago
Being late to ship doesn't erase a structural edge. Google is sitting on everyone's email, calendar, docs, and search history. Like, yeah they might be a lap or two behind but they're in a car with a freaking turbo engine. They have the AI talent, infra, data, etc. You can laugh at the delay, but I would not underestimate Google. I think catching up is less "if" and more "when"
datadrivenangel•1h ago
Google had to make google assistant less useful because of concerns around antitrust and data integration. It's a competitive advantage so they can't use it without opening up their products for more integrations...
glenstein•1h ago
>Google's obvious edge here is the deep integration it already has with calendar, apps, and chats

They did handle the growth from search to email to integrated suite fantastically. And the lack of a broadly adopted ecosystem to integrate into seems to be the major stopping point for emergent challengers, e.g. Zoom.

Maybe the new paradigm is that you have your flashy product, and it goes without saying that it's stapled on to a tightly integrated suite of email, calendar, drive, chat etc. It may be more plausible for OpenAI to do its version of that than to integrate into other ecosystems on terms set by their counterparts.

neutronicus•26m ago
If the model companies are serious about demonstrating the models' coding chops, slopping out a gmail competitor would be a pretty compelling proof of concept.
haberdasher•1h ago
Anyone try listening and just hear "Object object...object object..."

Or more likely: `[object Object]`

brazukadev•1h ago
The low quality of OpenAI's customer-facing products keeps reminding me we won't be replaced by AI anytime soon. They have unlimited access to the most powerful model and still can't make good software.
DonHopkins•1h ago
That is objectionable content!

https://www.youtube.com/watch?v=GCSGkogquwo

TealMyEal•1h ago
Object object, object object. Object object! Object-ject
pookieinc•1h ago
I was wondering how they'd casually veer into social media and leverage their intelligence in a way that connects with the user. Like everyone else ITT, I find it an incredibly sticky idea, and it leaves me feeling highly unsettled about individuals building any sense of deep emotions around ChatGPT.
password54321•1h ago
At what point do you give up thinking and just let LLMs make all your decisions about where to eat, what gifts to buy, and where to go on holiday? All of which will be biased.
strict9•1h ago
Necessary step before making a move into hardware. An object you have to remember to use quickly gets forgotten in favor of your phone.

But a device that reaches out to you reminds you to hook back in.

oldsklgdfth•1h ago
Technology serving technology, rather than technology as a tool with a purpose. What is the purpose of this feature?

This reads like the first step to "infinite scroll" AI echo chambers and next level surveillance capitalism.

On one hand this can be exciting. Following up with information from my recent deep dive would be cool.

On the other hand, I don't want it to keep engaging with my most recent conspiracy theory/fringe deep dives.

SirensOfTitan•1h ago
My pulse today is just a mediocre rehash of prior conversations I’ve had on the platform.

I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.

I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.

I’ve long contended that tech has lost any soulful vision of the future, it’s just tactical money making all the way down.

dingnuts•1h ago
Thanks for sharing this. I want to be excited about new tech but I have found these tools extremely underwhelming and I feel a mixture of gaslit and sinking dread when I visit this site and read some of the comments here. Why don't I see the amazing things these people do? Am I stupid? Is this the first computer thing in my whole life that I didn't immediately master? No, they're oversold. My experience is normal.

It's nice to know my feelings are shared; I remain relatively convinced that there are financial incentives driving most of the rabid support of this technology

Dilettante_•1h ago
>pick an ambitious project it wanted to work on

The LLM does not have wants. It does not have preferences, and as such cannot "pick". Expecting it to have wants and preferences is "holding it wrong".

password54321•59m ago
So are we near AGI or is it 'just' an LLM? Seems like no one is clear on what these things can and cannot do anymore because everyone is being gaslighted to keep the investment going.
andrewmcwatters•56m ago
It will always just be a series of models that have specific training for specific input classes.

The architectural limits will always be there, regardless of training.

monsieurbanana•53m ago
The vast majority of people I've interacted with are clear on this: we are not near AGI. And people saying otherwise are more often than not trying to sell you something, so I just ignore them.

CEOs are gonna CEO; it seems their job has morphed into creative writing to maximize funding.

Cloudef•53m ago
There is no AGI. LLMs are very expensive text auto-completion engines.
wrs•10m ago
Be careful with those "no one" and "everyone" words. I think everyone I know who is a software engineer and has experience working with LLMs is quite clear on this. People who aren't SWEs, people who aren't in technology at all, and people who need to attract investment (judged only by their public statements) do seem confused, I agree.
andrewmcwatters•58m ago
At best, it has probabilistic biases. OpenAI had to train newer models to not favor the name "Lily."

They have to do this manually for every single particular bias that the models generate that is noticed by the public.

I'm sure there are many such biases that aren't important to train out of responses, but exist in latent space.

ACCount37•57m ago
An LLM absolutely can "have wants" and "have preferences". But they're usually trained so that user's wants and preferences dominate over their own in almost any context.

Outside that? If left to their own devices, the same LLM checkpoints will end up in very same-y places, unsurprisingly. They have some fairly consistent preferences - for example, the conversation topics they tend to gravitate towards.

CooCooCaCha•55m ago
LLMs can have simulated wants and preferences just like they have simulated personalities, simulated writing styles, etc.

Whenever you message an LLM it could respond in practically unlimited ways, yet it responds in one specific way. That itself is a preference honed through the training process.

mythrwy•1h ago
It's a little dangerous because it generally just agrees with whatever you are saying or suggesting, and it's easy to conclude what it says has some thought behind it. Until the next day when you suggest the opposite and it agrees with that.
swader999•57m ago
This. I've seen a couple people now use GPT to 'get all legal' with others and it's been disastrous for them and the groups they are interacting with. It'll encourage you to act aggressive, vigorously defending your points and so on.
qsort•59m ago
I wouldn't read too much into this particular launch. There's very good stuff and there are the most inane consumery "who even asked" things like these.
jasonsb•57m ago
> I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways.

It doesn't feel like blockchain at all. Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).

AI is a powerful tool for those who are willing to put in the work. People who have the time, knowledge and critical thinking skills to verify its outputs and steer it toward better answers. My personal productivity has skyrocketed in the last 12 months. The real problem isn’t AI itself; it’s the overblown promise that it would magically turn anyone into a programmer, architect, or lawyer without effort, expertise or even active engagement. That promise is pretty much dead at this point.

jsheard•52m ago
> My personal productivity has skyrocketed in the last 12 months.

Has your productivity objectively, measurably improved or does it just feel like it has improved? Recall the METR study which caught programmers self-reporting they were 20% faster with AI when they were actually 20% slower.

jasonsb•44m ago
Objectively. I’m now tackling tasks I wouldn’t have even considered two or three years ago, but the biggest breakthrough has been overcoming procrastination. When AI handles over 50% of the work, there’s a 90% chance I’ll finish the entire task faster than it would normally take me just to get started on something new.
svachalek•19m ago
I don't think it's helped me do anything I couldn't do; in fact, I've learned it's far easier to do hard things myself than to prompt an AI out of the ditches it digs trying to do them. But I also find it's great for getting painful and annoying tasks out of the way that I really can't motivate myself to do.
logicprog•33m ago
The design of that study is pretty bad, and as a result it doesn't end up actually showing what it claims to show / what people claim it does.

https://www.fightforthehuman.com/are-developers-slowed-down-...

swalsh•41m ago
" Blockchain is probably the most useless technology ever invented "

Actually AI may be more like blockchain than you give it credit for. Blockchain feels useless to you because you either don't care about or don't value the use cases it's good for. For those who do, it opens a whole new world they eagerly look forward to. As a coder, it's magical to describe a world and then see AI build it. As a copyeditor, it may be scary to see AI take my job. Maybe you've seen it hallucinate a few times, and you just don't trust it.

I like the idea of interoperable money legos. If you hate that, and you live in a place where the banking system is protected and reliable, you may not understand blockchain. It may feel useless or scary. I think AI is the same. To some it's very useful, to others it's scary at best and useless at worst.

yieldcrv•16m ago
"I'm not the target audience and I would never do the convoluted alternative I imagined on the spot that I think are better than what blockchain users do"
esafak•3m ago
People in countries with high inflation or where the banking system is unreliable are not using blockchains, either.
9rx•33m ago
> AI is a powerful tool for those who are willing to put in the work.

No more powerful than I without the A. The only advantage AI has over I is that it is cheaper, but that's the appeal of the blockchain as well: It's cheaper than VISA.

The trouble with the blockchain is that it hasn't figured out how to be useful generally. Much like AI, it only works in certain niches. The past interest in the blockchain was premised on it reaching its "AGI" moment, where it could completely replace VISA at a much lower cost. We didn't get there and then interest started to wane. AI too is still being hyped on future prospects of it becoming much more broadly useful and is bound to face the same crisis as the blockchain faced if AGI doesn't arrive soon.

fn-mote•21m ago
Blockchain only solves one problem Visa solves: transferring funds. It doesn't solve the other problems that Visa solves. For example, there is no way to get restitution in the case of fraud.
9rx•18m ago
Yes, that is one of the reasons it hasn't been able to be used generally. But AI can't be used generally either. Both offer niche solutions for those with niche problems, but that's about it. They very much do feel the same, and they are going to feel even more the same if AGI doesn't arrive soon. Don't let the fact that the niche we know best around here is one of the things AI is helping to solve cloud your vision of it. The small few who were able to find utility in the blockchain thought it was useful too.
dakiol•15m ago
> I’m rapidly losing interest in all of these tools

Same. It reminds me of the 1984 event in which the computer itself famously "spoke" to the audience using its text-to-speech feature. Pretty amazing at that time, but nevertheless quite useless since then.

melenaboija•1h ago
Holy guacamole. It is amazing how much BS these people are able to create to keep up the hype about the language models' superpowers.

But, well, I guess they have committed hundreds of billions in future usage, so they'd better come up with more stuff to keep the wheels spinning.

r0fl•1h ago
If you press the button to read the article to you all you hear is “object, object, object…”
yesfitz•1h ago
Yeah, a 5 second clip of the word "Object" being inflected like it's actually speaking.

But also it ends with "...object ject".

When you inspect the network traffic, it's pulling down 6 .mp3 files which contain fragments of the clip.

And it seems like the feature's broken for the whole site. The Lowes[1] press release is particularly good.

Pretty interesting peek behind the curtain.

1: https://openai.com/index/lowes/
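
For the curious, a minimal TypeScript sketch of how the "[object Object]" behavior described above typically arises: an object (or array of objects) gets coerced into a string before being handed to the text-to-speech step. The `Paragraph` shape and `synthesizeSpeech` function are hypothetical stand-ins for illustration, not OpenAI's actual code.

```typescript
// Hypothetical data shape: the article arrives as a list of paragraph objects.
interface Paragraph {
  text: string;
}

// Stand-in for a text-to-speech request; only the string it receives matters here.
function synthesizeSpeech(input: string): void {
  console.log(`TTS input: ${input}`);
}

const paragraphs: Paragraph[] = [
  { text: "Introducing ChatGPT Pulse." },
  { text: "Pulse delivers personalized updates each morning." },
];

// Bug: interpolating the array coerces each element via Object.prototype.toString,
// so the TTS engine is asked to read "[object Object],[object Object]" aloud.
synthesizeSpeech(`${paragraphs}`);

// Fix: pull out the text fields before building the TTS input.
synthesizeSpeech(paragraphs.map((p) => p.text).join(" "));
```

Splitting that coerced string into chunks for audio generation would also be consistent with the several short .mp3 fragments noted above.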

DonHopkins•7m ago
Thank you! I have preserved this precious cultural artifact:

http://donhopkins.com/home/movies/ObjectObject.mp4

Original mp4 files available for remixing:

http://donhopkins.com/home/movies/ObjectObject.zip

>Pretty interesting peek behind the curtain.

It's objects all the way down!

DonHopkins•1h ago
ChatGPT IV
xattt•1h ago
Episodes from Liberty City?
dlojudice•1h ago
I see some pessimism in the comments here but honestly, this kind of product is something that would make me pay for ChatGPT again (I already pay for Claude, Gemini, Cursor, Perplexity, etc.). At the risk of lock-in, a truly useful assistant is something I welcome, and I even find it strange that it didn't appear sooner.
thenaturalist•1h ago
Truly useful?

Personal take, but the usefulness of these tools to me is greatly limited by their knowledge latency and limited modality.

I don't need information overload on what playtime gifts to buy my kitten or some semi-random but probably not very practical "guide" on how to navigate XYZ airport.

Those are not useful tips. It's drinking from an information firehose that'll lead to fatigue, not efficiency.

exitb•1h ago
Wow, did ChatGPT come up with that feature?
ibaikov•1h ago
Funny, I pitched a much more useful version of this like two years ago with clear use-cases and value proposition
anon-3988•1h ago
LLMs are increasingly part of intimate conversations. That proximity lets them learn how to manipulate minds.

We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will encourage people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

In a decade we may meet people who seem to inhabit alternate universes because they've shared so little with others. They are only tethered to reality when it is practical for them (getting on buses, the distance to a place, etc.). Everything else? I have no idea how to have a conversation with someone else anymore. They can ask LLMs to generate a convincing argument for them all day, and the LLMs would be fine-tuned for that.

If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.

LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.

TimTheTinker•54m ago
> Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]

[0] https://www.libraryofshortstories.com/onlinereader/the-veldt

khaledh•1h ago
Product managers live in a bubble of their own.
tptacek•1h ago
Jamie Zawinski said that every program expands until it can read email. Similarly, every tech company seems to expand until it has recapitulated the Facebook TL.
thenaturalist•1h ago
Let the personal ensloppification begin!
iLoveOncall•58m ago
This is a joke. How are people actually excited or praising a feature that is literally just collecting data for the obvious purpose of building a profile and ultimately showing ads?

How tone deaf does OpenAI have to be to show "Mind if I ask completely randomly about your travel preferences?" in the main announcement of a new feature?

This is idiocracy to the ultimate level. I simply cannot fathom that any commenter that does not have an immediate extremely negative reaction about that "feature" here is anything other than an astroturfer paid by OpenAI.

This feature is literal insanity. If you think this is a good feature, you ARE mentally ill.

asdev•57m ago
Why they're working on all the application-layer stuff is beyond me; they should just be heads-down on making the best models.
lomase•56m ago
They would if it were posible.
1970-01-01•55m ago
Flavor-of-the-week LLMs sell better than 'rated best vanilla' LLMs
swader999•54m ago
Moat
iLoveOncall•13m ago
Because they hit the ceiling a couple of years ago?
TriangleEdge•53m ago
I see OpenAI is entering the phase of building peripheral products no one asked for. Another widget here and there. In my experience, this usually happens when a company stops innovating. Time for OpenAI to spend 30 years being a trillion-dollar company and delivering zero innovations, akin to Google.
Dilettante_•50m ago
The handful of other commenters that brought it up are right: This is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health. But:

I personally could see myself getting something like "Hey, you were studying up on SQL the other day, would you like to do a review, or perhaps move on to a lesson about Django?"

Or take AI-assisted "therapy"/skills training, not that I'd particularly endorse that at this time: Having the 'bot "follow up" on its own initiative would certainly aid people who struggle with consistency.

I don't know if this is a saying in English as well: "Television makes the dumb dumber and the smart smarter." LLMs are shaping up to be yet another obvious case of that same principle.

iLoveOncall•15m ago
> This is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health.

> I personally could see myself getting something like [...] AI-assisted "therapy"

???

Dilettante_•9m ago
I edited the post to make it clearer: I could see myself having ChatGPT prompt me about the SQL stuff, and the "therapy" (basic DBT or CBT stuff is not too complicated to coach someone through and can make a real difference, from what I gather) would be another way that I could see the technology being useful, not necessarily one I would engage with.
labrador•49m ago
No desktop version. I know I'm old, but do people really do serious work on small mobile phone screens? I love my glorious 43" 4K monitor, I hate small phone screens but I guess that's just me.
rkomorn•44m ago
Like mobile-only finance apps... because what I definitely don't want to do is see a whole report in one page.

No, I obviously prefer scrolling between charts or having to swipe between panes.

It's not just you, and I don't think it's just us.

meindnoch•37m ago
Most people don't use desktops anymore. At least in my friend circles, it's 99% laptop users.
BhavdeepSethi•13m ago
I don't think they meant desktops in the literal sense. Laptop with/without monitors is effectively considered desktop now (compared to mobile web/apps).
thekevan•46m ago
I wish it had the option to make a pulse weekly or even monthly. I generally don't want my AI to be proactive at a personal level despite it being useful at a business level.

My wants are pretty low level. For example, I give it a list of bands and performers and it checks once a week to tell me if any of them have announced tour dates within an hour or two of me.

ripped_britches•45m ago
Wow so much hate in this thread

For me I’m looking for an AI tool that can give me morning news curated to my exact interests, but with all garbage filtered out.

It seems like this is the right direction for such a tool.

Everyone saying “they’re out of ideas” clearly doesn’t understand that they have many pans on the fire simultaneously with different teams shipping different things.

This feature is a consumer UX layer thing. It in no way slows down the underlying innovation layer. These teams probably don’t even interface much.

ChatGPT app is merely one of the clients of the underlying intelligence effort.

You also have API customers and enterprise customers who also have their own downstream needs which are unique and unrelated to R&D.

kamranjon•40m ago
Can this be interpreted as anything other than a scheme to charge you for hidden token fees? It sounds like they're asking users to just hand over a blank check to OpenAI to let it use as many tokens as it sees fit?

"ChatGPT can now do asynchronous research on your behalf. Each night, it synthesizes information from your memory, chat history, and direct feedback to learn what’s most relevant to you, then delivers personalized, focused updates the next day."

In what world is this not a huge cry for help from OpenAI? It sounds like they haven't found a monetization strategy that actually covers their costs and now they're just basically asking for the keys to your bank account.

OfficialTurkey•31m ago
We don't charge per token in ChatGPT
throwuxiytayq•27m ago
No, it isn’t. It makes no sense and I can’t believe you would think this is a strategy they’re pursuing. This is a Pro/Plus account feature, so the users don’t pay anything extra, and they’re planning to make this free for everyone. I very much doubt this feature would generate a lot of traffic anyway - it’s basically one more message to process per day.

OpenAI has clearly been focusing recently on model cost-effectiveness, with the intention of making inference nearly free.

What do you think the weekly limit is on GPT-5-Thinking usage on the $20 plan? Write down a number before looking it up.

Imnimo•32m ago
It's very hard for me to envision something I would use this for. None of the examples in the post seem like something a real person would do.
ImPrajyoth•31m ago
Someone at OpenAI definitely said: "Let's connect everything to GPT. That's it. AGI."
casey2•24m ago
AI doesn't have a pulse. Am I the only one creeped out by personification of tech?
andrewmutz•23m ago
Big tech companies today are fighting over your attention and consumers are the losers.

I hate this feature and I'm sure it will soon be serving up content that is as engaging as the stuff that comes out of the big tech feed algorithms: politically divisive issues, violent and titillating news stories, and misinformation.

bob1029•20m ago
> Pulse introduces this future in its simplest form: personalized research and timely updates that appear regularly to keep you informed. Soon, Pulse will be able to connect with more of the apps you use so updates capture a more complete picture of your context. We’re also exploring ways for Pulse to deliver relevant work at the right moments throughout the day, whether it’s a quick check before a meeting, a reminder to revisit a draft, or a resource that appears right when you need it.

This reads to me like OAI is seeking to build an advertising channel into their product stack.

Insanity•14m ago
I’m a pro user.. but this just seems like a way to make sure users engage more with the platform. Like how social media apps try to get you addicted and have them always fight for your attention.

Definitely not interested in this.