# Step 3: Preemptively check for catastrophic magnitude differences
if abs(a) > sys.float_info.max / 2:
    logging.warning("Value of a might cause overflow. Returning infinity just to be sure")
    return math.copysign(float('inf'), a)
if abs(b) < sys.float_info.epsilon:
    logging.warning("Value of b dangerously close to zero. Returning NaN defensively.")
    return math.nan
Does the above code make any sense? I've not worked with this sort of stuff before, but it seems entirely unreasonable to me to check them individually. E.g. if 1 < b < a, then it seems insane to me to return float('inf') for a large but finite a.
If you are actually doing safety-critical software, e.g. aerospace, medicine or automotive, then this is a good precaution, although you will not be writing in Python.
I have to constantly remind Claude that we want to fail fast.
Just raise god damn it
1. the code is actually wrong (and is wrong regardless of the absurd exception handling situation)
2. some of the exception handling makes no sense regardless, or is incoherent
3. a less absurd version of this actually happens (edit: commonly in actual irl scenarios) if you put emphasis on exception handling in the prompt
In Go, all SOTA agents are obsessed with being ludicrously defensive against concurrency bugs. Probably because, in addition to what-if-driven development, there are a lot of blog posts warning about concurrency bugs.
Also, division by zero should return Inf
a/0 = Inf when a>0
a/0 = -Inf when a<0
a/0 = NaN when a=0
In the context of say a/-0.001, a/-0.00000001, a/-0.0000000001, a/<negative minimum epsilon for denormalized floating point>, a/0
Then a/0 is negative when a>0, and positive when a<0
> According to the IEEE 754 standard, floating-point division by zero is not an error but results in special values: positive infinity, negative infinity, or Not a Number (NaN). The specific result depends on the numerator
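To see those rules concretely (my own quick sketch: plain Python floats raise ZeroDivisionError instead, so this uses NumPy, which follows IEEE 754):
import numpy as np

with np.errstate(divide='ignore', invalid='ignore'):
    print(np.float64(1.0) / np.float64(0.0))    # inf
    print(np.float64(-1.0) / np.float64(0.0))   # -inf
    print(np.float64(0.0) / np.float64(0.0))    # nan
    print(np.float64(1.0) / np.float64(-0.0))   # -inf: signed zero is how "which side of zero" shows up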
Way back when during my EE course days, we had like a whole semester devoted to weird edge cases like this, and spent a month on IEEE 754 (precision loss, NaN, divide by zero, etc.)
When you took an ieee754 divide by zero value as gospel and put it in the context of a voltage divisor that is always negative or zero, getting a positive infinity value out of divide by zero was very wrong, in the sense of "flip the switch and oh shit there's the magic smoke". The solution was a custom divide function that would know the context, and yield negative infinity (or some placeholder value). It was a contrived example for EE lab, but the lesson was - sometimes the standard is wrong and you will cause problems if it's blindly followed.
Sometimes it's fine, but it depends on the domain
But with exceptions you can’t use SIMD / vectorization.
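For example (an illustrative NumPy sketch, not from the thread), elementwise division maps onto SIMD precisely because no element can raise; bad denominators just come back as inf/NaN values you can check afterwards:
import numpy as np

a = np.array([1.0, -1.0, 0.0, 2.0])
b = np.array([0.0,  0.0, 0.0, 4.0])
with np.errstate(divide='ignore', invalid='ignore'):
    q = a / b                 # one vectorized pass: no branches, no exceptions
print(q)                      # [ inf -inf  nan  0.5]
print(np.isfinite(q))         # [False False False  True] -- check afterwards if you care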
Can you give more context on your voltage math? Was the numerator sometimes negative? If the problem is that your divisor calculation sometimes resulted in positive zero, that doesn't sound like the standard being wrong without more info.
(it is in principle possible to construct such a stack, potentially with more context, with a Result type, but I don't know of any way to do so that doesn't sacrifice a lot of performance because you're doing all the book-keeping even on caught errors where you don't use that information)
If you only need it for debugging, then maybe better instrumentation and observability is the answer.
Sometimes yes, sometimes no?
It's a domain specific answer, even ignoring the 0/0 case.
And also even ignoring the "which side of the limit are you coming from?" where "a" and/or "b" might be negative. (Is it positive infinity or negative infinity? The sign of "a" alone doesn't tell you the answer)
Because sometimes the question is like "how many things per box if there's N boxes"? Your answer isn't infinity, it's an invalid answer altogether.
The limit of 1/x or -1/x might be infinity (or negative infinity), and in some cases that might be what you want. But sometimes it's not.
You don’t need exceptions, and they can be replaced by more intricate return types.
OTOH, for the intended use case for signalling conditions that most code directly calling a function does not expect and cannot do anything about, unchecked exceptions reduce code clutter (checked exceptions are isomorphic to "more intricate return types"), at the expense of making the potential error cases less visible.
Whether this tradeoff is a net benefit is somewhat subjective and, IMO, highly situational. But if (unchecked) exceptions are available, you can always convert any encountered in your code into return values by way of handlers (and conversely you can also do the opposite), whereas if they aren’t available, you have no choice.
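As a rough illustration of that conversion (my sketch, with a made-up try_call helper): a thin wrapper at the boundary turns an unchecked exception into an explicit success/failure return value.
def try_call(fn, *args):
    """Turn an unchecked exception into an explicit (ok, value_or_error) return value."""
    try:
        return True, fn(*args)
    except Exception as exc:      # the handler is the exception-to-value converter
        return False, exc

ok, value = try_call(lambda: 1 / 0)
if not ok:
    print(f"failed: {value!r}")   # failed: ZeroDivisionError('division by zero')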
Most problems stem from poor PL semantics[1] and badly designed stdlibs/APIs.
For exogenous errors, Let It Crash, and let the layer above deal with it, i.e., Erlang/OTP-style.
For endogenous errors, simply use control flow based on return values/types (or algebraic type systems with exhaustive type checking). For simple cases, something like Railway Oriented Programming.
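A minimal Python sketch of the railway idea (my illustration, using a plain (value, error) tuple instead of a real Result type): each step only runs if the previous one succeeded, and the first error short-circuits to the end of the track.
def bind(step, result):
    value, error = result
    return (None, error) if error else step(value)

def parse(raw):
    return (int(raw), None) if raw.strip().isdigit() else (None, "not a number")

def validate(n):
    return (n, None) if n > 0 else (None, "must be positive")

def pipeline(raw):
    result = (raw, None)
    for step in (parse, validate):     # stays on the happy track until a step fails
        result = bind(step, result)
    return result

print(pipeline("42"))    # (42, None)
print(pipeline("abc"))   # (None, 'not a number')
print(pipeline("0"))     # (None, 'must be positive')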
---
1. division by zero in Julia:
julia> 1 / 0
Inf
julia> 0 / 0
NaN
julia> -1 / 0
-Inf
Checked Exceptions are a good concept which just needed more syntactic-sugar. (Like easily specifying that one kind of exception should be wrapped into another.) The badness is not in the logic but in the ecology, the ways that junior/lazy developers are incentivized to take horrible shortcuts.
Checked exceptions are fundamentally the same as managing the types of return-values... except the language doesn't permit the same horrible-shortcuts for people to abuse.
Meme reaction: http://imgur.com/iYE5nLA
_____
Prior discussion: https://news.ycombinator.com/item?id=42946597
Furthermore, the code is happy to return NaN from the pre-checks, but replaces a NaN result from the division by None. That doesn't make any sense from an API design standpoint.
(I used to look out for Karpathy's papers ten years ago... I tend to let out an audible sigh when I see his name today)
I for one really enjoy both his longer form work and his shorter takes.
In particular, I can't think of any non-pathological situation where a python developer should import logging and update logging.basicConfig within an inner function.
Great satire.
I know it's Karpathy, which is why the entire prompt is all the more important to see.
[1] Probably with some "make sure you handle ALL cases in existence" emphasis, or something along those lines.
The RL objectives probably heavily penalize exceptions, but don't reward much for code readability or simplicity.
It's so annoying.
My uninformed suspicion is that this kind of defensive programming somehow improves performance during RLVR. Perhaps the model sometimes comes up with programs that are buggy enough to emit exceptions, but close enough to correct that they produce the right answer after swallowing the exceptions. So the model learns that swallowing exceptions sometimes improves its reward. It also learns that swallowing exceptions rarely reduces its reward, because if the model does come up with fully correct code, that code usually won’t raise exceptions in the first place (at least not in the test cases it’s being judged on), so adding exception swallowing won’t fail the tests even if it’s theoretically incorrect.
Again, this is pure speculation. Even if I’m right, I’m sure another part of the reason is just that the training set contains a lot of code written by human beginners, who also like to ignore errors.
These aren't operating on reward functions because there's no internal model to reward. It's word prediction, there's no intelligence.
Subsequently, ChatGPT/Claude/Gemini/etc. will go through additional training: supervised fine-tuning, then reinforcement learning with reward functions, whether from human feedback (RLHF) or verifiable rewards (RLVR).
Whether that fine-tuning and reward training gives them real "intelligence" is open to interpretation, but it's not 100% plagiarism.
In this, at least, AI may very well have copied our worst habits of “learning to the test.”
https://chatgpt.com/share/68e82db9-7a28-8007-9a99-bc6f0010d1...
The AIs in general feel really focused on making the user happy - your example is one, and another is how they love adding emojis to stdout and over-commenting simple code.
With RLVR, the LLM is trained to pursue "verified rewards." On coding tasks, the reward is usually something like the percentage of passing tests.
Let's say you have some code that iterates over a set of files and does processing on them. The way a normal dev would write it, an exception in that code would crash the entire program. If you swallow and log the exception, however, you can continue processing the remaining files. This is an easy way to get "number of files successfully processed" up, without actually making your code any better.
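Something like this illustrative sketch (process_all and process are made-up names): the swallowed exception keeps the loop going, so the "files processed" number a pass-rate style reward cares about goes up even though the bug is still there.
import logging

def process_all(paths, process):
    processed = 0
    for path in paths:
        try:
            process(path)
            processed += 1
        except Exception:
            # Swallow-and-log: the run keeps going and the "files processed" count
            # the reward is based on goes up, but the bug that raised is now hidden.
            logging.warning("Skipping %s after error", path, exc_info=True)
    return processed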
Well, it depends a bit on what your goal is.
Sometimes the user wants to eg backup as many files as possible from a failing hard drive, and doesn't want to fail the whole process just because one item is broken.
However, LLM generated code will often, at least in my experience, avoid raising any errors at all, in any case. This is undesirable, because some errors should result in a complete failure - for example, errors which are not transient or environment related but a bug. And in any case, a LLM will prefer turning these single file errors into warnings, though the way I see it, they are errors. They just don't need to abort the process, but errors nonetheless.
> And in any case, a LLM will prefer turning these single file errors into warnings, though the way I see it, they are errors.
Well, in general they are something that the caller should have opportunity to deal with.
In some cases, aborting back to the caller at the first problem is the best course of action. In some other cases, going forward and taking note of the problems is best.
In some systems, you might even want to tell the caller about failures (and successes) as they occur, instead of waiting until the end.
It's all very similar to the different options people have available when their boss sends them on an errand and something goes wrong. A good underling uses their best judgement to pick the right way to cope with problems; but computer programs don't have that, so we need to be explicit.
See https://en.wikipedia.org/wiki/Mission-type_tactics for a related concept in the military.
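For the "go forward and take note of the problems" option, a hedged sketch (the names are mine, not from the thread) might hand back both the successes and the failures so the caller decides what counts as fatal:
def backup_files(paths, copy_one):
    copied, failed = [], []
    for path in paths:
        try:
            copy_one(path)
            copied.append(path)
        except OSError as exc:          # expected per-item failures only, not every bug
            failed.append((path, exc))  # recorded, not swallowed
    # The caller sees every failure and decides: warn, retry, or abort the whole run.
    return copied, failed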
// Return the result
return result;
I find this quite frustrating when reading/reviewing code generated by AI, but have started to appreciate that it does make subsequent changes by LLMs work better.
It makes me wonder if we'll end up in a place where IDEs hide comments by default (similar to how imports are often collapsed by default/automatically managed), or introduce some way of distinguishing between a more valuable human written comment and LLM boilerplate comments.
if random.random() < 0.01:
    logging.warning("This feels wrong. Aborting just in case.")
    return None
If you completely excise anything too distasteful for a current-day blockbuster, but want a film about a space mining colony uprising you might as well just adapt the game Red Faction instead: have the brave heros blasting away with abandon at corpo guards, mad genetic experimenters and mercenaries and the media coverage can talk about how it's a genius deconstruction of Elon Musk's Martian dream or whatever.
It's like a fine wine pairing for "The Moon is a Harsh Mistress."
<press enter>
damn these ai's are good!
<begins shopping for new username>
I love this and hate this at the same time.
try:
    result = a / b
    if math.isnan(result):
        raise ArithmeticError("Result is NaN. I knew this would happen.")
Adverb + verb
And "毛片免费观看" (Free porn movies), "天天中彩票能" (Win the lottery every day), "热这里只有精品" (Hot, only fine products here) etc[1].
king and rex (king in Latin) map to different tokens but will map to very similar vectors.
Some LLMs can output nerd font glyphs and others can't.
If I recall, Grok Code Fast can but Codex and Sonnet can't.
Because, and this is a hot take, LLMs have emergent intelligence
I really dislike their underuse of exceptions. I'm working on ETL/ELT scripts. Just let stuff blow up on me if something is wrong. Like, that config entry "foo" is required. There's no point in using config.get("foo") with a None check which then prints a message and returns False or whatever. Just use config["foo"] and I'll know what's wrong from the stack trace and exception text.
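Concretely, the contrast being drawn looks something like this (illustrative sketch with a hypothetical config dict):
def load_defensively(config):
    value = config.get("foo")          # the None-check-and-print pattern
    if value is None:
        print("Missing required config entry 'foo'")
        return None
    return value

def load_fail_fast(config):
    return config["foo"]               # missing key -> KeyError with a clear traceback

print(load_defensively({}))            # prints a message, returns None, the script limps on
# load_fail_fast({})                   # would raise KeyError: 'foo' and stop right here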
I think the Vexing Exceptions post is on the same tier as other seminal works in computer science; definitely worth a quick read or re-read once in a while.
One is that often I do want error handling, but also often I either know the error just won't happen or if it does, something is very wrong and we should just crash fast to make it easy to fix the bug.
But I am not really sure I would expect someone to know the difference in all cases just by looking at some code. This is often more about holistically knowing how the app works.
A second thought - remember the experiment where an LLM was fine tuned on bad code (exploitable security problems for example) and the LLM became broadly misaligned on all sorts of unrelated (non-coding) tasks/contexts? It's as if "good or bad" alignment is encoded as a pretty general concept.
Error-handling is "good"-aligned, which I think is why, even with lots of instructions to fail fast, it's still hard to get the LLM to allow crashing by avoiding error checking. It's gonna be even harder if you do want it to do some error checking, and the code it's looking at already has some error checking.
Less sarcastically but equally as true: they've learned from the tests you stole from people on the internet as well as the code you stole from people on the internet.
Most developers write tests for the wrong things, and many developers write tests that contain some bullshit edge case that they've been told to test (automatically to meet some coverage metric, or by a "senior" developer who got Dilbert principled away from the coalface and doesn't understand diminishing returns).
But then the end goal is to turn out code about as good as the average developer so they can be replaced more cheaply, so your LLM is meeting its objectives. Congrats.
I even had this Cursor rule when I was using Claude:
"- Do not use statements to catch all possible errors to mask an error - let it crash, to see what happened and for easier debugging."
And even with this rule, Claude would not always adhere to it. Never had this issue with GPT-5.
LLMs often write tutorial-ish code without much care for how it integrates with the rest of the codebase.
Swallowing exceptions is one such example.
One reason for this is that you typically lack a type system that allows 'making illegal states unrepresentable' to some extent, or possibly lack a team that can leverage the available type system to that effect due to organisational pressure, insufficient experience or whatever.
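A small Python sketch of what that phrase means in practice (my example, using dataclasses; a richer type system does this more thoroughly): instead of one object with optional fields where "succeeded but has an error message" is representable, each state carries only the fields that can legally coexist.
from dataclasses import dataclass
from typing import Union

@dataclass
class Loaded:
    data: bytes            # a success always has data and never an error message

@dataclass
class Failed:
    reason: str            # a failure always has a reason and never data

LoadResult = Union[Loaded, Failed]

def describe(result: LoadResult) -> str:
    if isinstance(result, Loaded):
        return f"{len(result.data)} bytes"
    return f"failed: {result.reason}"   # no "ok=True but data is None" limbo is possible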
I haven't needed to use a service like Fortinet recently and am now wondering if an LLM is part of their tool and if it's better/worse?