>I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: [date saved, if available] - memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. After the code block, confirm whether that is the complete set or if any remain.
For any Anthropic employees reading along: pitch this to whoever has kept blocking it, because you need to get the most out of this opportunity.
I have seen quite a few open source projects do this. It works quite well.
Another alternative is to create CLAUDE.md with the exact contents: "@AGENTS.md"
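Assuming a repo that already has an AGENTS.md, the trick above is a one-liner. (The `@path` syntax is Claude Code's file-import convention, so CLAUDE.md just becomes a pointer to the shared instructions file.)

```shell
# Create a CLAUDE.md whose only content points Claude Code at the existing AGENTS.md
echo '@AGENTS.md' > CLAUDE.md
```

This keeps a single source of truth: tools that read AGENTS.md and Claude Code both pick up the same instructions.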
VSCode extension, "Please log in"
I authorize it, it creates an API key, callback. "Hello Claude, this is a test." "Please log in."
So yeah... priorities?
I switched not because I thought Claude was better at doing the things I want. I switched because I have come to believe OpenAI are a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.
For ChatGPT and Gemini, yes.
But for Claude, they have a very deep and big one: it's the only model that gets production-ready output on the first detailed prompt. Yesterday I used up my tokens by noon, so I tried some output from Gemini & Co. I presented a working piece of code which is already in production:
1. It silently changed things like "Touple.First.Date.Created" and "Touple.Second.Date.Created" to "Touple.FirstDate" and "Touple.SecondDate", which broke the code.
2. There was a const list of 12 definitions for a given context; when told to rewrite the function, it simply cut 6 of these 12 definitions, so the code no longer compiled. I asked why they were cut: "Sorry, I was just too lazy typing" ?? LOL
3. There was a list holding some items, "_allGlobalItems"; in the function it simply changed the name to "_items", and the code didn't compile.
As said, a working version of a similar function was given upfront.
With Claude, I never have such issues.
That's not a moat though. Claude itself wasn't there 6 months ago and there's no reason to think Chinese open models won't be at this level in a year at most.
I think HN in particular, as a crowd, is very vulnerable to the halo effect and groupthink when it comes to Anthropic.
Even being generous they are only very minimally a "better actor" than OpenAI.
However, we are so enthralled by their product that we tend to let the view bleed over to their ethics.
Saying we want our tools used in line with the US Constitution, within the US, on one particular point is hardly a high moral bar; it's self-preservation.
All Anthropic have said is:
1. No mass domestic surveillance of Americans.
2. No fully autonomous lethal weapons yet.
My goodness that's what passes for a high moral standard? Really anything that doesn't hit those very carefully worded points is not "evil"?
You can see the significance of this if you look at German Nazi history. If more companies had stood up to the administration, the Nazi state would have been significantly harder to build.
In my opinion, what Anthropic did is not a small thing at all.
However, I would think I'm not alone in that I'm generally wanting to do good while also wanting convenience, I know that really every bit of consumption I do is probably negative in some ways, and there is no real "apolitical" action anyone can take.
But can't I at least get annoyed and take my money somewhere else for the short amount of time another company is doing it better?
Yes, if openAI suddenly leaps forwards with codex and pounds anthropic into the dust, I'll likely switch back despite my moral grievances, but in a situation where I can get mildly motivated to jump over for something that - to me - seems like a better morality without much punishment to me, I'll do it.
OpenAI, since the beginning, has been anything but open. If you said anything ill about OpenAI here until yesterday, you would be downvoted into oblivion because, let's face it, Sam has always been the poster child of this community.
So, basically, even after they publicly announced they were evaluating licensing models where they wanted to take a % of your business for using their models [1], there was still zero outrage, and anyone who pointed that out got shot back with "OpenAI CAN DO NO WRONG" in the comments.
He makes one decision you all don't agree with and now it's cancel culture time?
And somehow, Anthropic is the hero in all this? Make no mistake: all the model providers are building detailed user models. Every bit of information you provide is of course being used for detailed user targeting. This is no different from the "Apple GOOD, Google BAD!" tropes. There are no heroes in for-profit corporations. Everyone is operating a for-profit business model and optimizing for the same profits.
Stop with the NPC behavior. We are better than this.
[1] https://openai.com/index/a-business-that-scales-with-the-val...
"Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path."
It's a shame because when Claude is working well it is the best for actual algorithmic coding. There's so much cruft around it now, memories being the most annoying part of that.
80% of the time I just use these things as a sounding board when exploring options and I need responsiveness for that.
Might be time to run my own models.
It's very interesting to learn more about, because it challenges one core aspect of economic competition: the moat.
If one can literally swap one AI service for another, then where does the valuation (and the power that comes with it) come from?
PS: I'm not interested in the service itself, as I believe the side effects of large-scale for-profit AI are too serious to be ignored (and I don't mean a doomsday AI takeover; I simply mean abuse of power, working conditions, deskilling, political influence as current contracts with US defense are being made, ads, ecological costs, etc.).
That being said, if you have a library of images or some other collection of artifacts/assets indexed on their servers, that is a different story.
Of course sometimes this is useful if you only use your chatbot to ask personal things like: "What should I eat today?".
But if you use it for anything else, you're much better off having full control over the prompt. I can always say: "Hey, btw, I am German and heavily anti-surveillance, what should I know about the recent Anthropic DoW situation?" but with memory I lose the option of leaving out that first part.
gbalduzzi•51m ago
Are you suggesting that they should ignore the needs of the vast majority of their users?
I mean, of course they do, it would be worse otherwise
jjmarr•50m ago
They also don't know what "context" is or that the LLM has a limited number of tokens it can understand at any given time. They just believe it knows everything at once.
deaux•44m ago
I can't think of much else though so I'm still curious what you or others use it for.
tikotus•32m ago
I didn't receive an answer besides "that's what people like", but I still can't think of (m)any situations where anyone would prefer it.
IanCal•19m ago
My job, my kids and time preferences around those things, my preferred tech setup and way of working and types of tech I’m better at. Things I already have (home assistant, little nuc, etc). I can throw a random question and not have to add this kind of information or manage it.
pfix•49m ago
I currently use ChatGPT for random insights and discussions about a variety of topics. The memory is basically a grown context about me and my preferences and interests, and ChatGPT uses it to tailor responses to my knowledge, so I can relate better.
This is, for me, far more natural and easier than either crafting a default prompt preset or setting up each conversation individually; that would be way too much overhead for discussing random shower thoughts between real-life stuff.
This is my use case, and I've discovered that it can be detrimental for specific questions and prompts; I can see that carefully written prompts each time can be more beneficial. But my use case is really ad hoc usage, without the time for that. At least for ChatGPT.
When coding, this fails fast. There, regular context resets seem to be a more viable strategy.
jtokoph•33m ago
For example, instead of recommending a popular nightclub, it will recommend a stroll along the river to view the lit-up skyline, or a visit to the night market instead.
It knows other preferences as well (exploring quirky neighborhoods, trying local fast food joints and markets).
echelon•25m ago
Isn't there much more money in automating business processes than in answering consumer questions (sans ads)?
Automating software development has to be a multi-trillion dollar market. And that doesn't account for future growth.