Joke aside: programming languages and compilers are still refined until the generated assembly and its execution meet certain expectations. Likewise, prompts and whatever other inputs we feed AI will be tuned until some expectations are met, which obviously includes looking at their output. So I think this is an overblown extrapolation, like many we see these days.
Same same.
But even then it is quite impressive.
Concretely, in my use case, starting from a manually written code base, having Claude as the planner and code writer and GPT as the reviewer works very well. GPT is somehow better at minutiae and thinking in depth, but Claude is a bit smarter and somehow has better coding style.
Before 4.5, GPT was just miles ahead.
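For concreteness, the split looks roughly like this. This is only a minimal sketch of the idea, not my actual setup: the model ids, prompts, and task are placeholders, and it just assumes the public Anthropic and OpenAI Python SDKs.

```python
# Hypothetical sketch of the "Claude plans and writes, GPT reviews" loop.
# Model ids and prompts are placeholders, not the real configuration.
import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
gpt = OpenAI()                  # reads OPENAI_API_KEY from the environment

def plan_and_write(task: str) -> str:
    """Claude drafts a plan plus the code for it."""
    msg = claude.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=2048,
        messages=[{"role": "user",
                   "content": f"Plan and implement the following change:\n{task}"}],
    )
    return msg.content[0].text

def review(draft: str) -> str:
    """GPT reviews the draft, focusing on minutiae and edge cases."""
    resp = gpt.chat.completions.create(
        model="gpt-4o",  # placeholder model id
        messages=[{"role": "user",
                   "content": f"Review this patch in depth; list concrete issues:\n{draft}"}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    draft = plan_and_write("add input validation to the upload endpoint")
    print(review(draft))
```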
something that was not perl ;)
In ~2005 I led a team building horse-betting terminals for Singapore, and their server could only understand CORBA. So I modelled the needed protocol in Python, which generated a set of specific Python files, one per domain, which in turn generated the needed C folders-of-files. Roughly 500 lines of models -> 5000 lines at the second level -> 50000 lines of C at the bottom. I never read that C (once the pattern was established and working).
But - but - it was 1000% controllable and repeatable. Unlike current fancy "generators"..
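To illustrate the pattern, here is a compressed sketch with invented names; the real pipeline had an intermediate generated-Python layer between the model and the C, which is skipped here.

```python
# Hypothetical sketch of model-driven code generation: a small declarative
# model is expanded deterministically into C source. Names are invented.

MODEL = {  # the hand-written "model" layer: one entry per domain type
    "Bet": [("race_id", "int"), ("amount_cents", "long"), ("terminal_id", "int")],
}

def emit_c_struct(name, fields):
    """Render one C struct for a domain type; output is fully deterministic."""
    lines = ["typedef struct {"]
    for fname, ctype in fields:
        lines.append(f"    {ctype} {fname};")
    lines.append(f"}} {name};")
    return "\n".join(lines)

if __name__ == "__main__":
    for name, fields in MODEL.items():
        print(emit_c_struct(name, fields))
```

The point is that the same model always produces byte-identical output, which is what made the pipeline controllable and repeatable.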
philipwhiuk•5d ago
I'm highly doubtful this is true. Adoption isn't even close to the level necessary for this to be the case.