
I tried Gleam for Advent of Code

https://blog.tymscar.com/posts/gleamaoc2025/
89•tymscar•2h ago•38 comments

Fast, Memory-Efficient Hash Table in Java: Borrowing the Best Ideas

https://bluuewhale.github.io/posts/building-a-fast-and-memory-efficient-hash-table-in-java-by-bor...
39•birdculture•1h ago•1 comment

What is the nicest thing a stranger has ever done for you?

https://louplummer.lol/nice-stranger/
149•speckx•1d ago•93 comments

Analysis finds anytime electricity from solar available as battery costs plummet

https://pv-magazine-usa.com/2025/12/12/analysis-finds-anytime-electricity-from-solar-available-as...
43•Matrixik•1h ago•33 comments

Cryptids

https://wiki.bbchallenge.org/wiki/Cryptids
47•frozenseven•1w ago•4 comments

SSE sucks for transporting LLM tokens

https://zknill.io/posts/sse-sucks-for-transporting-llm-tokens/
8•zknill•4d ago•2 comments

Java FFM zero-copy transport using io_uring

https://www.mvp.express/
76•mands•6d ago•27 comments

Z8086: Rebuilding the 8086 from Original Microcode

https://nand2mario.github.io/posts/2025/z8086/
21•nand2mario•3h ago•4 comments

macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt

https://developer.apple.com/documentation/macos-release-notes/macos-26_2-release-notes#RDMA-over-...
506•guiand•22h ago•260 comments

EasyPost (YC S13) Is Hiring

https://www.easypost.com/careers
1•jstreebin•2h ago

Useful patterns for building HTML tools

https://simonwillison.net/2025/Dec/10/html-tools/
128•simonw•2d ago•42 comments

Photographer built a medium-format rangefinder, and so can you

https://petapixel.com/2025/12/06/this-photographer-built-an-awesome-medium-format-rangefinder-and...
123•shinryuu•6d ago•27 comments

Apple has locked my Apple ID, and I have no recourse. A plea for help

https://hey.paris/posts/appleid/
1341•parisidau•14h ago•765 comments

Go Proposal: Secret Mode

https://antonz.org/accepted/runtime-secret/
84•enz•3d ago•23 comments

How exchanges turn order books into distributed logs

https://quant.engineering/exchange-order-book-distributed-logs.html
97•rundef•5d ago•50 comments

Researchers seeking better measures of cognitive fatigue

https://www.nature.com/articles/d41586-025-03974-w
66•bikenaga•2d ago•11 comments

Indexing 100M vectors in 20 minutes on PostgreSQL with 12GB RAM

https://blog.vectorchord.ai/how-we-made-100m-vector-indexing-in-20-minutes-possible-on-postgresql
64•gaocegege•5d ago•10 comments

A Lisp Interpreter Implemented in Conway's Game of Life (2021)

https://woodrush.github.io/blog/posts/2022-01-12-lisp-in-life.html
65•pabs3•15h ago•2 comments

Show HN: LinkedQL – Live Queries over Postgres, MySQL, MariaDB

https://github.com/linked-db/linked-ql
16•phrasecode•5d ago•6 comments

GNU Unifont

https://unifoundry.com/unifont/index.html
301•remywang•22h ago•70 comments

A 'toaster with a lens': The story behind the first handheld digital camera

https://www.bbc.com/future/article/20251205-how-the-handheld-digital-camera-was-born
62•selvan•5d ago•29 comments

Rats Play DOOM

https://ratsplaydoom.com/
366•ano-ther•22h ago•136 comments

Computer Animator and Amiga fanatic Dick Van Dyke turns 100

187•ggm•10h ago•49 comments

Will West Coast Jazz Get Some Respect?

https://www.honest-broker.com/p/will-west-coast-jazz-finally-get
39•paulpauper•6d ago•18 comments

Dynamic Pong Wars

https://markodenic.tech/dynamic-pong-wars/
16•rendall•1w ago•2 comments

Beautiful Abelian Sandpiles

https://eavan.blog/posts/beautiful-sandpiles.html
113•eavan0•3d ago•17 comments

OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI

https://simonwillison.net/2025/Dec/12/openai-skills/
514•simonw•19h ago•297 comments

Show HN: I made a spreadsheet where formulas also update backwards

https://victorpoughon.github.io/bidicalc/
208•fouronnes3•2d ago•99 comments

Show HN: Tiny VM sandbox in C with apps in Rust, C and Zig

https://github.com/ringtailsoftware/uvm32
177•trj•21h ago•11 comments

Show HN: I audited 500 K8s pods. Java wastes ~48% RAM, Go ~18%

https://github.com/WozzHQ/wozz
6•wozzio•3h ago•4 comments

Ask HN: How can I get better at using AI for programming?

58•lemonlime227•3h ago
I've been working on a personal project recently, rewriting an old jQuery + Django project into SvelteKit. The main work is translating the UI templates into idiomatic SvelteKit while maintaining the original styling. This includes things like using semantic HTML instead of div-spamming, not wrapping divs in divs in divs, and replacing Bootstrap with minimal Tailwind. It also includes some logic refactors that preserve the original functionality while shedding years of code debt, such as replacing templates that use boolean flags for multiple views with composable Svelte components.

I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.
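For illustration, here's a minimal sketch of the shape of one of those `+page.server.ts` files (the "projects" route and the `$lib/server/db` module are hypothetical stand-ins, not the actual project):

```ts
// src/routes/projects/+page.server.ts: a minimal sketch of the pattern
import type { PageServerLoad } from './$types';
import { db } from '$lib/server/db'; // hypothetical server-side data module

export const load: PageServerLoad = async () => {
  // Plays the role of the old Django view's context dict: fetch on the
  // server, then hand plain data to the Svelte page and its components.
  const projects = await db.project.findMany();
  return { projects };
};
```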

This kind of work seems like a great use case for AI-assisted programming, but I've failed to use it effectively. At most, I can get Claude Code to recreate some slightly less spaghetti code in Svelte. Simple prompting just isn't able to get the AI's code quality within 90% of what I'd write by hand. Ideally, AI could get its code to something I could review manually in 15-20 minutes, which would massively speed up this project (right now it takes me 1-2 hours to properly translate a route).

Do you guys have tips or suggestions on how to improve my efficiency and code quality with AI?

Comments

realberkeaslan•3h ago
Consider giving Cursor a try. I personally like the entire UI/UX, their agent has good context, and the entire experience overall is just great. The team has done a phenomenal job. Your workflow could look something like this:

1. Prompt the agent

2. The agent gets to work

3. Review the changes

4. Repeat

This can speed up your process significantly, and the UI clearly shows the changes, plus some other cool features.

EDIT: from reading your post again, I think you could benefit primarily from a clear UI with the adjusted code, which Cursor does very well.

kamikazeturtles•2h ago
How does Cursor compare to Claude Code or Codex?
jes5199•58m ago
Cursor makes it easier to watch what the model is doing and to make edits at the same time. I find it useful at work, where I need to be able to justify every change in a code review. It’s also great for getting a feel for what the models are capable of; using Cursor for a few months made it easier to use Claude Code effectively.
listic•1h ago
For someone disinclined to get into closed-source, proprietary tools, what is the next best thing to try?

I've heard of Cline and Aider, but haven't tried anything yet.

rdrd•3h ago
First you have to be very specific about what you mean by idiomatic code; what’s idiomatic for you is not idiomatic for an LLM. Personally, I would approach it like this:

1) Thoroughly define, step by step, what you deem to be the code convention/style you want to adhere to, and the steps for how it should approach the task. Do not reference entire files like “produce it like this file”; it’s too broad. The document should include simple, small examples of “Good” and “Bad” idiomatic code as you deem it (see the sketch at the end of this comment). The smaller the initial step-by-step guide and code conventions, the better: context is king with LLMs, and you need to give it just enough to work with, but not so much that it causes confusion.

2) Feed it to Opus 4.5 in planning mode, ask it to follow up with any questions or gaps, and have it produce a final implementation plan.md. Review this, tweak it, remove any fluff, and get it down to bare bones.

3) Run the plan.md through a fresh agentic session and see what the output is like. Where it’s not quite correct, add those clarifications and guardrails to the original plan.md and repeat step 3.

What I absolutely would NOT do is ask for fixes or changes if it does not one-shot the task. I would revise plan.md until it gets you 99% of the way there on the first go, and just do final cleanup by hand. You will bang your head against the wall attempting to guide it like you would a junior developer (at least for something like this).
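To make point 1 concrete, here is a hypothetical “Bad”/“Good” pair of the kind such a conventions doc might contain, using the boolean-flag pattern from the original post (all names are invented for illustration):

```ts
// Bad: one component whose props fan out into boolean view flags,
// so the markup has to branch on every combination.
type ItemCardProps = {
  item: { id: string; title: string };
  isCompact: boolean;
  isEditable: boolean;
};

// Good: one focused component per view, composed where needed.
type CompactItemCardProps = { item: { id: string; title: string } };
type EditableItemCardProps = {
  item: { id: string; title: string };
  onSave: (title: string) => void;
};
```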

Alan01252•2h ago
I've been heavily vibe-coding a couple of personal projects: a free kids' typing game, and bringing a multiplayer game I played a lot as a kid back to life, both with pretty good success.

Things I personally find work well:

1. Chat through the feature you want to build with the AI first. In Codex using VS Code, I always switch to chat mode, talk through what I am trying to achieve, and then, once the AI and I are in "agreement", switch to agent mode. Google's Antigravity sort of does this by default, and I think it's probably the correct paradigm to use.

2. Get the basics right first. It's easy for the AI to produce a load of slop, but using my experience of development I feel I am (sort of) able to guide the AI in advance, in a similar way to how I would coach junior developers.

3. Get the AI to write tests first. BDD seems to work really well for AI (see the sketch at the end of this comment). The multiplayer game I was building seemed to regress frequently with just unit tests alone, but when I threw Cucumber into the mix things suddenly got a lot more stable.

4. Practice. The more I use AI, the more I believe prompting is a skill in itself. It takes time to learn how to get the best out of an agent.

What I love about AI is the time it gives me to create these things. I'd never have been able to do this before, and I find it very rewarding seeing my "work" being used by my kids and fellow nostalgia-driven gamers.
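As a rough illustration of point 3, a behaviour-level test written before the implementation might look something like this (a sketch assuming Vitest; the lobby module and its API are hypothetical):

```ts
import { describe, it, expect } from 'vitest';
import { createLobby, joinLobby } from './lobby'; // hypothetical module under test

describe('multiplayer lobby', () => {
  it('lets a second player join an open lobby', () => {
    const lobby = createLobby({ host: 'player-1', maxPlayers: 2 });
    joinLobby(lobby, 'player-2');
    expect(lobby.players).toHaveLength(2);
  });

  it('rejects joins once the lobby is full', () => {
    const lobby = createLobby({ host: 'player-1', maxPlayers: 2 });
    joinLobby(lobby, 'player-2');
    expect(() => joinLobby(lobby, 'player-3')).toThrow();
  });
});
```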

8cvor6j844qw_d6•1h ago
I find Claude Code works best when given a highly specific, scoped task. Even then, sometimes you'll need to course-correct it once you notice it's going off course.

Basically a good multiplier, and an assistant for mundane tasks, but not a replacement. It still requires the user to have a good understanding of the codebase.

It's amazing for writing change summaries for commit logs, however, if you're required to do that.

mirsadm•1h ago
I break everything down into very small tasks. Always ask it to plan how it will do it. Make sure to review the plan and spot mistakes. Then only ask it to do one step at a time, so you can control the whole process. This workflow works well enough as long as you're not trying to do anything too interesting. Anything that is even a little bit unique, it fails to do very well.
kgwxd•1h ago
Sounds like you're doing all the actual work. Why not just type the code as you figure out how to break down the problem? You're going to have to review the output anyway.
Iulioh•1h ago
It's useful to have the small functions all written.

I program mostly in VBA these days (a little problematic, as it's been a dead language since 2006, and even then it was niche). I have never received a correct high-level "main" sub, but the AIs are pretty good at producing the small subs I then organize.

And yes, they are pretty good at telling me where I've made errors.

At the end of the day I want reliability, and there is no way I can get that without full review.

The funny thing is that they try to use the "best practices" of coding in places where you would reasonably want NOT to have them.

Frannky•1h ago
I see LLMs as searchers with the ability to change the data a little and stay in a valid space. If you think of them as searchers, it becomes automatic to make the search easy (small context; small, precise questions), and you won't keep trying again and again if the code isn't working (no data in the training). Also, you will realize that if a language is not well represented in the training data, they may not work well.

The more specific and concise you are, the easier it will be for the searcher. Also, the less modification, the better, because the more you try to move away from the data in the training set, the higher the probability of errors.

I would do it like this:

1. Open the project in Zed

2. Add the Gemini CLI, Qwen Code, or Claude to the agent system (use Gemini or Qwen if you want to do it for free, or Claude if you want to pay for it)

3. Ask it to correct a file (if the files are huge, it might be better to split them first)

4. Test if it works

5. If not, try feeding the file and the request to Grok or Gemini 3 Chat

6. If nothing works, do it manually

If instead you want to start something new, one-shot prompting can work pretty well, even for large tasks, if the data is in the training set. Ultimately, I see LLMs as a way to legally copy the code of other coders more than anything else

seg_lol•50m ago
This is slightly flawed. LLMs are search, but the search space is sparse, and the size of the question risks underspecification. The question controls the size of the encapsulated volume in that high-dimensional space. The only advantage of small prompts is computational cost. In every other way they are a downside.
bogtog•1h ago
Using voice transcription is nice for fully expressing what you want, so the model doesn't need to make guesses. I'm often voicing 500-word prompts. If you talk in a winding way that would look awkward as text, that's fine. The model will almost certainly be able to tell what you mean. Using voice-to-text is my biggest suggestion for people who want to use AI for programming.

(I'm not a particularly slow typist. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I also need to think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable. In general, I think this lowered friction makes me much more willing to fully describe what I want.)

You can also ask it, "do you have any questions?" I find that saying "if you have any questions, ask me, otherwise go ahead and build this" rarely produces questions for me. However, if I say "Make a plan and ask me any questions you may have" then it usually has a few questions

I've also found a lot of success when I tell Claude Code to emulate some specific piece of code I've previously written, either within the same project or something I've pasted in.

listic•1h ago
Thanks for the advice! Could you please share how you enabled voice transcription in your setup, and what it actually is?
binocarlos•1h ago
I use https://github.com/braden-w/whispering with an OpenAI api key.

I use a keyboard shortcut to start and stop recording and it will put the transcription into the clipboard so I can paste into any app.

It's a huge productivity boost. OP is correct about not overthinking it or trying to be that coherent; the models are very good at knowing what you mean (Opus 4.5 with Claude Code in my case).

kapnap•1h ago
For me, on Mac, VoiceInk has been top-notch. Got tired of Superwhisper.
bogtog•36m ago
I'm using Wispr Flow, but I've also tried Superwhisper. Both are fine. I have a convenient hotkey to start/end recording with one hand. Having it need just one hand is nice. I'm using this with the Claude Code VS Code extension in Cursor. If you go down this route, the Claude Code instance should be moved into a separate window outside your main editor, or else it'll flicker a lot.
johnfn•1h ago
That's a fun idea. How do you get the transcript into Claude Code (or whatever you use)? What transcription service do you use?
hn_throw2025•1h ago
I'm not the person you're replying to, but I use Whispering connected to the whisper-large-v3-turbo model on Groq.

It's incredibly cheap and works reliably for me.

I have got it to paste my voice transcriptions into Chrome (Gemini, Claude, ChatGPT) as well as Cursor.

https://github.com/EpicenterHQ/epicenter

hurturue•1h ago
Your OS might have a built-in dictation feature. Google for that and try it before the online services.
bogtog•40m ago
There are a few apps nowadays for voice transcription. I've used Wispr Flow and Superwhisper, and both seem good. You can map some hotkey (e.g., ctrl + windows) to start recording, then when you press it again to stop, it'll get pasted into whatever text box you have open

Superwhisper offers some AI post-processing of the text (e.g., making nice bullets or grammar), but this doesn't seem necessary and just makes things a bit slower

quinncom•26m ago
I use Spokenly with local Parakeet 0.6B v3 model + Cerebras gpt-oss-120b for post-processing (cleaning up transcription errors and fixing technical mondegreens, e.g., `no JS` → `Node.js`). Almost imperceptible transcription and processing delay. Trigger transcription with right ⌥ key.
rgbrgb•12m ago
I use Handy with Claude code. Nice to just have a key combo to transcribe into whatever has focus.

https://github.com/cjpais/Handy

dominotw•45m ago
Surprised AI companies are not making this workflow possible, instead of leaving it up to users to figure out how to get voice text into the prompt.
alwillis•36m ago
> Surprised AI companies are not making this workflow possible, instead of leaving it up to users to figure out how to get voice text into the prompt.

Claude on macOS and iOS has native voice-to-text transcription. I haven't tried it, but since you can access Claude Code from the apps now, I wonder if you can use the Claude app's transcription as input to Claude Code.

bogtog•26m ago
> Claude on macOS and iOS has native voice-to-text transcription

Yeah, Claude/ChatGPT/Gemini all offer this, although Gemini's is basically unusable because it will immediately send the message if you stop talking for a few seconds

I imagine you totally could use the app transcript and paste it in, but keeping the friction to an absolute minimum (e.g., just needing to press one hotkey) feels nice

dboreham•1h ago
1. Introduce it to the code base (tell it: we're going to work on this project; the project does X and is written in language Y). Ask it to look at the project to familiarize itself.

2. Tell it you want to refactor the code to achieve goal Z. Tell it to take a look and tell you how it will approach this. Consider showing it one example refactor you've already done (before and after).

3. Ask it to refactor one thing (only) and let you look at what it did.

4. Course correct if it didn't do the right thing.

5. Repeat.

morkalork•1h ago
In addition to what the sibling commenters are saying: set up guardrails for what you expect in your project's documentation. What is the agent allowed to do when writing unit tests vs., say, functional tests; what packages it should never use; coding and style templates; etc.
helterskelter•1h ago
I like to follow up with "Does this make sense?" or similar. This gets it to restate the problem in its own words, which not only shows you its understanding of the problem, but also seems to help reinforce the prompt.
serial_dev•1h ago
Here’s how I would do this task with Cursor, especially if there are more routes.

I would open a chat and refactor the template together with Cursor: I would tell it what I want, and if I don’t like something, I would help it understand what I like and why. Do this for one route, and when you are ready, ask Cursor to write a rules file based on the current chat that includes the examples you wanted changed and some rationale for why you wanted it that way.

Then for the next route, you can basically just say “refactor” and that’s it. Whenever you find something you don’t like, tell it, and remind Cursor to also update the rules file.

mmaunder•52m ago
Solid approach. Don’t be shy about writing long prompts; we call that context engineering. The more you populate the context window with applicable knowledge and exactly what you want, the better the results. Also, talking to the model while it codes is helpful, because the conversation has the side effect of context engineering: you’re building up relevant context with that conversation history. And be acutely aware of how much context window you’ve used, how much is remaining, and when a compaction will happen. Clear context as early as you can per run, even if 90% is remaining.
coryvirok•1h ago
The hack for SvelteKit specifically is to first have Claude translate the existing code into a Next.js route with React components. Run it, debug and tweak it. Then have Claude translate the Next.js and React components into SvelteKit/Svelte. Try to keep it in a single file for as long as possible and only split it out once it's working (a rough sketch follows this comment).

I've had very good results with Claude Code using this workflow.
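As a sketch of that intermediate step (the "projects" route and the data helper are hypothetical; Next.js App Router syntax):

```tsx
// app/projects/page.tsx: hypothetical intermediate Next.js version of a
// Django view, kept in a single file until it works.
type Project = { id: number; title: string };

// Stand-in for the real data source during the translation step.
async function fetchProjects(): Promise<Project[]> {
  return [{ id: 1, title: 'Example project' }];
}

export default async function ProjectsPage() {
  const projects = await fetchProjects();
  return (
    <ul>
      {projects.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```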

firefax•1h ago
How did you learn how to use AI for coding? I'm open to the idea that a lot of "software carpentry" tasks (moving/renaming files, basic data analysis, etc.) can be done with AI to free up time for higher-level analysis, but I have no idea where to begin. My focus many years ago was privacy, so I lean towards doing everything locally or hosted on a server I control, and so I lack a lot of the knowledge of "the cloud" that my HN brethren have.
fooker•1h ago
Think of some coding heavy project you always wanted to do but haven't had time for.

Open up cursor-agent to make the repo scaffolding in an empty dir (build system, test harness, etc.).

Open up Cursor or Claude Code or whatever and just go nuts with it. Remember to follow software engineering best practices (one good change with tests per commit).

graypegg•50m ago
I love the name "software carpentry" haha.

IMO, I found those specific example tasks to be better handled by my IDE's refactoring features, though support for that is going to vary by project/language/IDE. I'm still more of a Luddite when it comes to LLM-based development tools, but the best case I've seen thus far is small first bites out of a big task. Working on an older, no-tests code base recently, it's been things like setting up 4-5 tests that I'll expand into a full test suite. You can't take more than a few "big" bites out of a task before you have zero context as to what direction the vector soup sloshed in.

So, in terms of carpentry, I don't want an LLM framer whose work I need to build off of, but an LLM millworker handing me the lumber is pretty useful.

esafak•47m ago
Practice on an open source repo to allay your privacy fears.
owlninja•1h ago
Would love to hear any feedback on using Google's Antigravity from a clean slate. Holiday shutdown is about to start at my job, and I want to tinker with something that I have not even started.
hurturue•1h ago
I did a similar thing.

Put an example in the prompt: this was the original Django file, and this is the version rewritten in SvelteKit.

Then ask it to convert another file using the example as a template.

You will need to add additional rules for stuff not covered by the example; after 2-3 conversions you'll have the most important rules.

Or maybe fix a bad attempt by the agent and add it as a second example.

thinkingtoilet•1h ago
There are very real limitations on AI coders in their current state. They simply do not produce great code most of the time. I have to review every line they generate.
seg_lol•54m ago
Voice prompts: restate what you want, and how you want it, from multiple vantage points. Each one is a light cone in a high-dimensional space; your answer lies in their intersection.

Use mind-altering drugs. Give yourself arbitrary artificial constraints.

Try using it in as many different, ridiculous ways as you can. I get the feeling you are only trying one method.

> I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.

Relinquish control.

Also, if you have very particular ways of doing things, give it samples of before and after (your fixed output) and why. You can use multishot prompting to train it to get the output you want. Have it machine check the generated output.

> Simple prompting just isn't able to get AI's code quality within 90%

Would simple instructions to a person work? Especially a person trained on everything in the universe? LLMs are clay; you have to mold them into something useful before you can use them.

ipunchghosts•49m ago
Ask people to do things for you. Then you will learn how to work with something/someone who has faults but can overall be useful if you know how to view the interaction.
bcherny•47m ago
Hey, Boris from the Claude Code team here. A few tips:

1. If there is anything Claude tends to repeatedly get wrong, not understand, or spend lots of tokens on, put it in your CLAUDE.md. Claude automatically reads this file and it’s a great way to avoid repeating yourself. I add to my team’s CLAUDE.md multiple times a week.

2. Use Plan mode (press shift-tab 2x). Go back and forth with Claude until you like the plan before you let Claude execute. This easily 2-3x’s results for harder tasks.

3. Give the model a way to check its work. For Svelte, consider using the Puppeteer MCP server and telling Claude to check its work in the browser. This is another 2-3x.

4. Use Opus 4.5. It’s a step change from Sonnet 4.5 and earlier models.

Hope that helps!

dotancohen•40m ago
> I add to my team’s CLAUDE.md multiple times a week.

How big is that file now? How big is too big?
bcherny•37m ago
Try to keep it under 1k tokens or so. We will show you a warning if it might be too big.

Ours is maybe half that size. We remove from it with every model release since smarter models need less hand-holding.

You can also break up your CLAUDE.md into smaller files, link CLAUDE.mds, or lazy-load them only when Claude works in nested dirs.

https://code.claude.com/docs/en/memory

goalieca•37m ago
> I add to my team’s CLAUDE.md multiple times a week.

This concerns me because fighting tooling is not a positive thing. It’s very negative and indicates how immature everything is.

bcherny•33m ago
You might be misunderstanding what a CLAUDE.md is. It’s not about fighting the model; rather, it’s about giving the model a shortcut to the context it needs to do its work. You don’t have to have one. Ours is 100% written by Claude itself.
jedberg•25m ago
The CLAUDE.md is like the documentation you hand to a new engineer on your team that explains details about your code that they wouldn't otherwise know. It's not bad to need one.
halfcat•46m ago
> prompting just isn't able to get AI's code quality within 90% of what I'd write by hand

Tale as old as time. The expert gets promoted to manager, and the replacement worker can’t deliver even 90% of what the manager used to. It's often more like 30% at first, because even if they’re good, they lack years of context.

AI doesn’t change that. You still have to figure out how to get 5 workers who can do 30-70% of what you can do, to get more than 100% of your output.

There are two paths:

1. Externalized speed: be a great manager, accept a surface-level understanding, delegate aggressively, optimize for output.

2. Internalized speed: be a great individual contributor; build a deep, precise mental model; build correct guardrails and conventions (because you understand the problem) and protect those boundaries ruthlessly; optimize for future change; move fast because there are fewer surprises.

Only path 1 is well suited for agent-like AI building. If path 2 is you, you’re probably better off chatting to understand and building it yourself (mostly).

At least early on. Later, if you nail 2 and have a strong convention for AI to follow, I suspect you may be able to go faster. But it’s like building the railroad tracks before other people can use them to transport more efficiently.

Django itself is a great example of building a good convention. It’s just Python but it’s a set of rules everyone can follow. Even then, path 2 looks more like you building out the skeleton and scaffolding. You define how you structure Django apps in the project, how you handle cross-app concerns, like are you going to allow cross-app foreign keys in your models? Are you going to use newer features like generated fields (that tend to cause more obscure error messages in my experience)?

Here’s how I think of it. If I’m building a Django project, the settings.py file is going to be a clean masterpiece. There are specific reasons I’m going to put things in the same app, or separate apps. As soon as someone submits a PR that craps all over the convention I’ve laid out, I’m rejecting aggressively. If we’ve built the railroad tracks, and the next person decides the next set of tracks can use balsa wood for the railroad ties, you can’t accept that.

But generally people let their agent make whatever change it makes and then wonder why trains are flying off the tracks.

nisalperi•46m ago
I wrote about my experience from the last year. Hope you find this helpful

https://open.substack.com/pub/sleuthdiaries/p/guide-to-effec...

dominotw•44m ago
Don't forget to include "pls don't make mistakes".
cat_plus_plus•34m ago
AI is great at pattern matching. Set up project instructions that give several examples of old code and new code, with detailed explanations of the choices made. Also add a negative prompt: a list of things you do not want the AI to do, based on past frustrations.
noiv•33m ago
I learned the hard way that when Claude has two conflicting pieces of information in CLAUDE.md, it tends to ignore both. So precise language is key; don't use terms like "object" that may have different meanings in different fields.
caseyw•31m ago
The approach I’ve been taking lately with general AI development:

1. Define the work.

2. When working in a legacy code base, provide good examples of where you want the migration to go and the expected outcome.

3. Tell it about what support tools you have, lint, build, tests, etc.

4. Select a very specific scenario to modify first, and have it write tests for that scenario.

5. Manually read and tweak the tests; ensure they’re testing what you want and cover all you require. The tests help guardrail the actual code changes.

6. Depending on how full the context is, I may create a new chat, then pull in the tests, the defined work, and any related files, and ask it to implement based on the data provided.

This general approach has worked well for most situations so far. I’m positive it could be improved so any suggestions are welcome.

daxfohl•25m ago
Go slowly. Shoot for a 10% efficiency improvement, not 10x. Go through things as thoroughly as if writing by hand, and don't sacrifice quality for speed. Be aware of when it's confidently taking you down a convoluted path and confidently making up reasons to do so. Always have your skeptic hat on. If something seems off, it probably is. When in doubt, exit the session and start over.

I still find the chat interface generally more useful than a coding assistant. It allows you to think and discuss at a higher level about architecture and ideas before jumping into implementation. The feedback loop is way faster because it is higher level and doesn't have to run through your source tree to answer a question. You can have a high-ROI discussion of ideas, architecture, algorithms, and code before committing to anything. I still do most of my work copying and pasting from the chat interface.

Agents are nice when you have a very specific idea in mind, but I'm not yet hugely fond of them otherwise. IME the feedback loop is too long, they often do things badly, and they are overly confident in their output, encouraging cursory reviews and commits of hacked-together work. Sometimes I'll give one an ambitious task just on the off chance that it'll succeed, but with the understanding that if it doesn't get it right the first time, I'll either throw the result away completely or keep whatever pieces it got right and pitch the rest; it almost never gets it right the second time if it's already started on an ugly approach.

But the main thing is to start small. Beyond one-shotting prototypes, don't expect it to change everything overnight. Focus on the little improvements, don't skip design, and don't sacrifice quality! Over time, these things will add up, and the tools will get better too. A 10% improvement every month compounds to a 10x improvement in about two years (1.1^24 ≈ 10). And you'll be a lot better positioned than those who tried to jump onto the 10x train too fast, because you'll not have skipped any steps.

orwin•18m ago
I want to say a lot of mean things, because an extremely shitty, useless, clearly Claude-generated test suite passed the team PR review this week. The tests were so useless that the code they were linked to (I can't say whether the code itself was AI-written) had a race condition that, if triggered and used correctly, could probably rewrite the last entry of any of the firewalls we manage (DENY ALL is the one I'm afraid of).

But I can't even shit on Claude, because I used it to rewrite part of the tests and to analyse the solution that fixes the race condition (and how to test it).

It's a good tool, but in the last few weeks I've been more and more mad about it.

Anyway. I use it to generate a shell: no logic inside, just data models and function prototypes. That helps with my inability to start something new. Then I use it to write easy functions, helpers I know I'll need. Then I try to tie everything together. I never hesitate to stop Claude and write specific stuff myself, add a new prototype/function, or delete code. I restart the context often (Opus is less bad about it, but still). Then I ask it about easy refactorings or libraries that would simplify the code. Ask for multiple solutions each time.

Fire-Dragon-DoL•6m ago
I find all AI code to be lower quality than that of humans who care about quality. This might be OK; I think the assumption with AI is that we don't need the code to look beautiful, because AI will be the one looking at it.