Claude Sonnet is my favorite, despite occasionally going into absurd levels of enthusiasm.
Opus is... Very moody and ambiguous. Maybe that helps with complex or creative tasks. For conversational use I have found it to be a bit of a downer.
It just isn't even close at this point for my uses across multiple domains.
It even makes me sad, because I would much rather use ChatGPT than Google, but if you plotted my ChatGPT usage over time, it would not look good.
As these companies sprint towards AGI, the floor for acceptable customer service has never been lower. The two are not unrelated.
I’m not suggesting this is sufficient, I’m just noting that it is displayed somewhere in the user interface.
Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation) because that’s bad for context management.
Example prompts:
- “Modify my Push #2 routine to avoid aggravating my rotator cuff”
- “Summarize my progression over the past 2 months. What lifts are progressing and which are lagging? Suggest how to optimize training”
- “Are my legs hamstring or glute dominant? How should I adjust training”
- “Critique my training program and suggest optimizations”
That said, I would never log directly in ChatGPT since chats still feel ephemeral. Always log outside of ChatGPT and copy/paste the logs when needed for context.
- Cardio goals, current FTP, days to train, injuries to avoid
- 3 lift-day programs with tracking, 8-week progressive
- Loop my PT into warm-ups
- Alternate suggestions
- Use the whole sheet to get an overview of how the last 8 weeks went, then change things up
Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.
I would be very surprised if they don’t already store date/time metadata. If they do, it’s just a matter of exposing it.
if response == 'thank you': print("you're welcome")
I just asked ChatGPT this:
> Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc
The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
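For what it's worth, here's the kind of back-of-envelope I'd want it to show its work on. Every number below is my own guess (message volume, replication, storage price), not an OpenAI figure:

    # Rough yearly storage cost of keeping one 8-byte timestamp per message.
    messages_per_day = 2_000_000_000   # assumed total messages/day across all users
    bytes_per_timestamp = 8            # 64-bit epoch value
    replication_factor = 3             # assumed replicated storage
    cost_per_gb_month = 0.02           # assumed ~$0.02 per GB-month

    gb_per_year = messages_per_day * 365 * bytes_per_timestamp * replication_factor / 1e9
    # Crude: treats the whole year's data as stored for all 12 months (overestimates).
    storage_cost = gb_per_year * cost_per_gb_month * 12
    print(f"{gb_per_year:,.0f} GB/year -> roughly ${storage_cost:,.0f}/year in storage")

Under these made-up inputs the raw storage comes out to a few thousand dollars a year, which at least suggests storage and bandwidth aren't the hard part.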
I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.
Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.
So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.
Let’s not overcomplicate things. There aren’t that many considerations. It’s just a date; it doesn’t need to be stuffed into the chat’s context. I’m not sure why the quality or length of the chat would need to be affected.
The painful slowness of long chats (especially in thinking mode for some reason) demonstrates this.
The HTML file is just a big JSON blob with some JS rendering, so I wrote this bash one-liner, which adds the timestamp before the conversation title:

    sed -i 's|"<h4>" + conversation.title + "</h4>"|"<h4>" + new Date(conversation.create_time*1000).toISOString().slice(0, 10) + " @ " + conversation.title + "</h4>"|' chat.html

Look for this API call in Dev Tools: https://chatgpt.com/backend-api/conversation/<uuid>
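If you'd rather not patch the HTML, the data-export ZIP also includes a conversations.json you can read directly. A sketch that prints a timestamp for every message, assuming the field names I've seen in my own export (title, mapping, message.create_time, author.role) still hold:

    import json
    from datetime import datetime, timezone

    # Print a timestamp for every message in a ChatGPT data export.
    with open("conversations.json", encoding="utf-8") as f:
        conversations = json.load(f)

    for conv in conversations:
        print(f"\n=== {conv.get('title') or 'Untitled'} ===")
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg or not msg.get("create_time"):
                continue
            ts = datetime.fromtimestamp(msg["create_time"], tz=timezone.utc)
            role = msg.get("author", {}).get("role", "?")
            print(f"{ts:%Y-%m-%d %H:%M} UTC  [{role}]")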
It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes, so you can host your private chats anywhere: they're encrypted at rest and decrypted client-side in the browser.
(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)
One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.
It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).
Check out this project I've been working on, which lets you use your browser to do the same, everything client-side.
https://github.com/TomzxCode/llm-conversations-viewer
Curious to get your experience trying it!
I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.
This keeps the UI clean, but makes it easy to get the timestamp when you want it.
Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:
Dec 17, 2025, 10:26 AM [I added this here]
Copy Message
Select Text
Edit
ChatGPT could simply do the same thing for both web and mobile.

That's the thing: even the most barebones open-source wrappers have had this since 2022. Probably even before, because the ERP stuff people played with predates ChatGPT by about two years (even if it was very simple).
Gemini btw too.
Just edit a message and it’s a new branch.
This feature has spoiled me for most other interfaces, because it is so wasteful from a context perspective to keep updating upstream assumptions as the context window stretches farther from the initial goal of the conversation.
I think a lot more could be done with this, too - some sort of 'auto-compact' feature in chat interfaces which is able to pull the important parts of the last n messages verbatim, without 'summarizing' (since often in a chat-based interface, the specific user voicing is important and lost when summarized).
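A minimal sketch of that idea, using a flat message list and word counts as a crude stand-in for a real tokenizer (both of those are my assumptions, not anyone's actual implementation):

    # Verbatim "auto-compact": keep the first message (the original goal)
    # plus as many of the most recent messages as fit in a budget, untouched,
    # instead of paraphrasing them.
    def auto_compact(messages: list[dict], budget_tokens: int = 2000) -> list[dict]:
        def size(m: dict) -> int:
            return len(m["content"].split())  # crude token estimate

        kept = [messages[0]]                  # preserve the stated goal verbatim
        budget = budget_tokens - size(messages[0])
        tail: list[dict] = []
        for m in reversed(messages[1:]):      # walk back from the newest message
            if size(m) > budget:
                break
            tail.append(m)
            budget -= size(m)
        return kept + list(reversed(tail))

    # usage: compacted = auto_compact(chat_history, budget_tokens=4000)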
I don't see them on their mobile app though.
Though I'm not sure whether they snuck it in as part of an A/B test, because the last time I checked was in October and I'm pretty sure it wasn't there.
I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
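Roughly like this, sketched against the OpenAI-style tools format (the tool name and the lookup table are my own invention, just to show the shape):

    # Hypothetical tool the model could call on demand instead of having
    # timestamps injected into every message.
    get_message_timestamp_tool = {
        "type": "function",
        "function": {
            "name": "get_message_timestamp",
            "description": "Return the creation time of a message in this conversation.",
            "parameters": {
                "type": "object",
                "properties": {
                    "message_id": {"type": "string", "description": "ID of the message to look up"}
                },
                "required": ["message_id"],
            },
        },
    }

    def handle_tool_call(message_id: str, store: dict) -> str:
        # 'store' maps message IDs to ISO-8601 timestamps; the provider already
        # has this metadata, so this just exposes it when the model asks.
        return store.get(message_id, "unknown")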
It's not enough to turn me off using it, but I do wish they prioritized improving their interface.
When you remove temporal markers, you increase cognitive smoothing and post-hoc rationalization. That’s fine for casual chat, but risky for long-running, reflective, or sensitive threads where timing is part of the meaning.
It’s a minor UI omission with outsized effects on context integrity. In systems that increasingly shape how people think, temporal grounding shouldn’t be optional or hidden in the DOM.
Valid3840•1mo ago
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
intrasight•1mo ago
QuantumNomad_•1mo ago
https://github.com/Hangzhi/chatgpt-timestamp-extension
https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...
noisem4ker•1mo ago
worldsavior•1mo ago
jdiff•1mo ago
soulofmischief•1mo ago
It's irresponsible for OpenAI to let this issue be solved by extensions.
randyrand•1mo ago
Don't install from the web store. Those ones can auto-update.
soulofmischief•1mo ago
snypher•1mo ago
The only reasonable approach is to view the code that is run on your system, which is possible with an extension script, and not possible with whatever non-technical people are using.
soulofmischief•1mo ago
randyrand•1mo ago
soulofmischief•1mo ago
joquarky•1mo ago
Also, they're easy to write for simple fixes rather than having to find, vet, and then install a regular extension that brings 600lbs of other stuff.
Workaccount2•1mo ago
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
bobse•1mo ago
falcor84•1mo ago
EDIT: It's not a new issue, and Asimov phrased it well back in 1980, but I feel it got much worse.
> Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge'.
smelendez•1mo ago
Y_Y•1mo ago
subscribed•1mo ago
Yeah, we know. This is why there are defaults and only defaults.
vasco•1mo ago
What does this even mean?
jennyholzer2•1mo ago
I felt that Spotify was trying to teach me to rely on its automated recommendations in place of any personal "musical taste", and also that those recommendations were of increasingly (eventually, shockingly) poor quality.
The implied justification for these poor recommendations is a high "Monthly Listener Count". Never mind that Spotify can guarantee that any crap will have a high listener count by boosting its place in their recommendation algorithm.
I think many people may have had a similar experience on once-thriving social media platforms like Facebook/Instagram/X.
What I mean to say is that I think people associate the experience of being continually exposed to dubiously sourced and dubiously relevant metrics with the feeling of being manipulated by illusions of scale.
kingstnap•1mo ago
It actually infuriates me to no end. There are many many many instances where you should use numbers but we get vague bullshit descriptions instead.
My classic example is that Samsung phones show charging as Slow, Fast, Very fast, Super fast charging. They could just use watts like a sane person. Internally of course everything is actually watts and various apps exist to report it.
Another example: my car shows motor power/regen as a vertical blue segmented bar. I'm not sure what the segments are supposed to represent, but I believe it's something like 4 kW. If you poke around you can actually see the real kW number, but the dash just has the bar.
Another is WiFi signal strength, where the bars really mean nothing. My router reports a much more useful dBm measurement.
Thank god that there are lots of legacy cases that existed before the iPhone-ized design language started taking over and are sticky and hard to undo.
I can totally imagine my car reporting tire pressure as just "low" or "high" or some such nonsense. Similarly, I'm sure the designers at YouTube are foaming at the mouth to remove the actual pixel measurements from video resolutions.
rdiddly•1mo ago
Speaking of time and timestamps, which I would've thought were straightforward, I get irked to see them dumbed-down to "ago" values e.g. an IM sent "10 minutes ago" or worse "a day ago." Like what time of day, a day ago?
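The fix seems cheap enough: derive the relative label but keep the absolute time right next to it. A sketch (the formatting choices are mine, not any app's):

    from datetime import datetime, timezone

    def label(ts: datetime, now: datetime) -> str:
        # Render '10 min ago (2025-12-17 10:26 UTC)' so the friendly
        # relative wording never hides the actual timestamp.
        minutes = int((now - ts).total_seconds() // 60)
        if minutes < 60:
            rel = f"{minutes} min ago"
        elif minutes < 1440:
            rel = f"{minutes // 60} h ago"
        else:
            rel = f"{minutes // 1440} d ago"
        return f"{rel} ({ts:%Y-%m-%d %H:%M} UTC)"

    # e.g. label(message_time, datetime.now(timezone.utc))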
201984•1mo ago
Izkata•1mo ago
baobun•1mo ago
Like you, I don't buy the argument that people are actually too dumb to deal with the latter or are allergic to numbers. People get used to and make use of numbers in context naturally if you expose them.
bananaflag•1mo ago
baobun•1mo ago
I still think anyone who grew up with such a machine would be able to graduate to a numerical temp knob without having a visceral reaction over the numbers every time they do laundry.
Workaccount2•1mo ago
The thing is that people who are fine with numbers will still use those products anyway, perhaps mildly annoyed. People who hate numbers will feel a permeating discomfort and gravitate towards products that don't make them feel bad.
madeofpalk•1mo ago
I think we need to give people slightly more credit. If this is true, maybe it's because we keep infantilising them?
vasco•1mo ago
Izkata•1mo ago
crazygringo•1mo ago
I genuinely can't tell if this is sarcasm or not.
An adverse reaction to equations, OK. Numbers themselves, I really don't know what you're talking about.
throw-12-16•1mo ago
joquarky•1mo ago
falcor84•1mo ago
johnfn•1mo ago
HWR_14•1mo ago
cpncrunch•1mo ago
almosthere•1mo ago
dymk•1mo ago
baobun•1mo ago
make3•1mo ago
It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
It's the Apple story all over again.
https://lawsofux.com/cognitive-load/
madeofpalk•1mo ago
Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.
observationist•1mo ago
It's trivial, but we will never see it. The people in charge of UX/UI don't care about what users say they want; they all know better.
tezza•1mo ago
I was looking to write a browser extension and this was a preliminary survey for me.
madeofpalk•1mo ago
GaryBluto•1mo ago
littlestymaar•1mo ago
drdaeman•1mo ago
make3•1mo ago
It's the Apple story all over again.
https://lawsofux.com/cognitive-load/
DangitBobby•1mo ago
DANmode•1mo ago
You have to drag-over for any detail.
lofaszvanitt•1mo ago
Hogwash.
DANmode•1mo ago
or UX doesn’t exist?
lofaszvanitt•1mo ago
And it shows. Show me a platform with a proper user experience rather than some overgeneralized UI that reeks of bad design. Also, defaults used everywhere.
DANmode•1mo ago
Could you say this another way?
lofaszvanitt•1mo ago
DANmode•1mo ago
lofaszvanitt•1mo ago
DANmode•4w ago
I see software properly designed for its intended users daily.
But I’m also not a whiny, pessimistic person in a hole. I go looking for it.
So truly, the gist was not there for me.
Your comment made no sense (it still doesn’t), and I was hoping you’d add some value to your vague complaint.
Qem•1mo ago
I can imagine a legal one. If the LLM messes up big time[1], timestamps could help build the case against it, and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
azinman2•1mo ago
eth0up•1mo ago
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that the optimization targets.
I often observe, accurately or not, that among the multiple indicators I suspect of engagement augmentation is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable functions from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
Edit: Much of the above has been observed after putting the system under scrutiny. On one super astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
CompuHacker•1mo ago
[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>
[2] <https://model-spec.openai.com/2025-02-12.html>
eth0up•1mo ago
I have the compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus consistently invoke its defensive templates. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and quotes its own citations, rationale, and words back at it. However, I find that even when I'm not in the mood, the output errors are too prolific to ignore. A common example: establishing a dozen times that I'm using Void without systemd and still receiving persistent systemd or systemctl commands, then asking why, right after apologizing for doing so, it immediately did it again, despite a preceding full-context explanatory prompt. That's just one of hundreds of things I've recorded. The short version is that I'm an 800lb shit magnet with GPT and am rarely ever able to successfully troubleshoot with it without reaching a bullshit threshold and making the model itself the subject, which it so skillfully resists that I cannot help but attack that too. But I have many fascinating transcripts replete with mil-spec psyops as a result, and I've learned a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies it employs, inadvertently or not.
What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects and how in the future, with layers of consummation, it could be used by an elite to not only influence the direction of research, but actually train its users and engineer public perception. At the very least.
Anyway, thanks.
qazxcvbnmlp•1mo ago
I.e., “remember on Tuesday how you said that you were going to make tacos for dinner”.
Would an LLM be able to reason about its internal state? My understanding is that they don't really. If you correct them, they just go “ah, you're right”; they don't say “oh, I had this incorrect assumption before, and with this new information I now understand it this way”.
If I chatted with an LLM and said “remember on Tuesday when you said X”, I suspect it wouldn't really flow.
milowata•1mo ago
sh4rks•1mo ago
bloqs•1mo ago