typedef struct { size_t size; } PhysicalSize;
typedef struct { size_t size; } AllocatedSize;
PhysicalSize psize = { 123 }; // or { .size = 123 }
AllocatedSize asize = { 234 };
psize = asize; // error
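Applied to the function in question, the wrong return would then fail to compile (a sketch; the signature is simplified and hypothetical, and the shift is a stand-in for the real math):

PhysicalSize vdev_raidz_asize_to_psize(AllocatedSize asize)
{
    PhysicalSize psize = { asize.size >> 9 }; /* stand-in computation */
    /* return asize;  -- compile error: AllocatedSize is not PhysicalSize */
    return psize;
}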
...and in reverse, Rust wouldn't protect you from that exact same bug if the programmer decided to use usize like in the original C code. IME overly strong typing is a double-edged sword though: on one hand it makes the code more robust, but on the other it also makes it more 'change resistant'.
I still would like to see a strong typedef variant in C so that the struct wrapping wouldn't be needed, e.g.:
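Perhaps something like the first comment below (a sketch; the strong typedef keyword is imaginary, and the macro after it is today's closest approximation, which still wraps a struct under the hood):

/* imaginary syntax, not valid C today:
   strong typedef size_t PhysicalSize; */

/* closest approximation today: hide the wrapper behind a macro */
#include <stddef.h>
#define STRONG_TYPEDEF(base, name) typedef struct { base v; } name

STRONG_TYPEDEF(size_t, PhysicalSize);
STRONG_TYPEDEF(size_t, AllocatedSize);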
(yes this is very much an armchair opinion, I mostly do front-end JS for a living)
A single call where the psize differs from the asize would have caught it.
It's the kind of bug that makes you stop breathing for a brief moment. So I ran this function through Gemini 2.5 Pro, ChatGPT o3 and Grok 3. No context other than the source code fragment was provided. All three also clearly identified the problem.
There is no longer any reason for trivial flaws such as this to survive into release code. We're past that now, and that's an astonishing thing.
These are the questions I ponder: Should we consider the possibility that the ongoing, incomplete and decades long pursuit of "better" languages is now misguided? Or, that perhaps "better" might now mean whatever properties make code easier for LLMs to analyze and validate against specifications, as opposed to merely preventing the flaws humans are prone to make?
So double down on languages that are more represented in training data? I think it's still worthwhile to actually improve programming languages. Relying only on LLMs is fragile. So ideally do both: improve the languages and the AI.
The pragmatic answer to that is: this appears to be highly effective.
What I have in mind is something else, however. Consider the safety nets we try to build with, for example, elaborate type systems. These new analysis tools can operate on enormous contexts and infer and enforce "type" with a capacity far greater than what a human mind can hope to approach. So perhaps there is little value in complex types. Instead, a simple type system supported by specification may be the superior approach. seL4 is an example of the concept: C, but exhaustively specified and verified.
Not totally clear what you mean here - are you saying you’re the author of the article or PR, or that you independently discovered the same issue (after the fact)?
The author asked that the reader attempt to find the flaw by inspecting the provided code. The flaw was obvious enough that I was able to find it. The implication is that if it were less obvious, I might not have. I was not attempting to take any credit at all: exactly the opposite.
Excellent question. What would it end up looking like? Would it be full of monads and HKTs? Lambda calculus? WebAssembly?
I hope to learn the answer one day.
They produce so much BS, but at least now we can catch most of it. If the languages are suddenly optimised for the AIs, we couldn't do that anymore.
This is unrelated to "no longer due to ChatGPT times".
We've been able to detect this issue for decades already, see -Wunused-variable.
DIV_ROUND_UP(psize, cols);
I need sleep, so I'm not going to check, but that may appear to be an intermediate value to a C compiler, and thus escape -Wunused-variable.
It does indeed (I checked):
$ cat test.c
#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

struct vdev;
typedef struct vdev vdev_t;

struct vdev
{
        uint64_t vdev_ashift;
        void *vdev_tsd;
        vdev_t *vdev_top;
};

struct vdev_raidz;
typedef struct vdev_raidz vdev_raidz_t;

struct vdev_raidz
{
        int vd_original_width;
        int vd_physical_width;
        int vd_nparity;
};

extern uint64_t vdev_raidz_get_logical_width(vdev_raidz_t *, uint64_t);

static uint64_t
    vdev_raidz_asize_to_psize(vdev_t *vd, uint64_t asize, uint64_t txg)
{
        vdev_raidz_t *vdrz = vd->vdev_tsd;
        uint64_t psize;
        uint64_t ashift = vd->vdev_top->vdev_ashift;
        uint64_t cols = vdrz->vd_original_width;
        uint64_t nparity = vdrz->vd_nparity;

        cols = vdev_raidz_get_logical_width(vdrz, txg);

        psize = (asize >> ashift);
        psize -= nparity * DIV_ROUND_UP(psize, cols);
        psize <<= ashift;

        return (asize);
}
$ gcc-14 -Wall -Wextra -Wpedantic -Wunused -Wunused-variable -Wunused-but-set-variable -c test.c -o test.o
test.c:28:5: warning: ‘vdev_raidz_asize_to_psize’ defined but not used [-Wunused-function]
   28 |     vdev_raidz_asize_to_psize(vdev_t *vd, uint64_t asize, uint64_t txg)
      |     ^~~~~~~~~~~~~~~~~~~~~~~~
$ clang-19 -Weverything -c test.c -o test.o
test.c:33:31: warning: implicit conversion changes signedness: 'int' to 'uint64_t' (aka 'unsigned long') [-Wsign-conversion]
   33 |         uint64_t cols = vdrz->vd_original_width;
      |                  ~~~~   ~~~~~~^~~~~~~~~~~~~~~~
test.c:34:34: warning: implicit conversion changes signedness: 'int' to 'uint64_t' (aka 'unsigned long') [-Wsign-conversion]
   34 |         uint64_t nparity = vdrz->vd_nparity;
      |                  ~~~~~~~   ~~~~~~^~~~~~~~~
test.c:28:5: warning: unused function 'vdev_raidz_asize_to_psize' [-Wunused-function]
   28 |     vdev_raidz_asize_to_psize(vdev_t *vd, uint64_t asize, uint64_t txg)
      |     ^~~~~~~~~~~~~~~~~~~~~~~~
3 warnings generated.
(The unused-function diagnostic is expected for this test)
I would have also expected clang to catch the double assignment to cols, to be honest.
I don't know these compiler internals, but how come the use of DIV_ROUND_UP() makes the compilers ignore that the final value is unused?
I mean, after the DIV_ROUND_UP(), there's a bit shift:
psize <<= ashift;
Only after the bit shift, nobody uses the result. Shouldn't this be enough of a clue for the compiler to realize the variable is unused?
You would think so! I can't imagine why it behaves this way.
The warning implementation must be simple-minded: if the variable is ever read, then it isn't "unused." The compiler need only set a flag on every read, and later scan all the flags. What you wish it did is track the lexical position of the last assignment and detect when a pointless assignment occurs. That is clearly more complex and also has a higher performance cost. So for reasons of laziness and/or performance, the developers just haven't bothered.
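A minimal sketch of the distinction (hypothetical example; the last case is only caught by the clang static analyzer's deadcode.DeadStores check, not by the -Wunused family of compiler warnings):

void warning_examples(void)
{
    int never_used;       /* -Wunused-variable: never touched at all */

    int set_only = 1;     /* -Wunused-but-set-variable: written, never read */
    set_only = 2;

    int dead_store = 1;
    dead_store += 1;      /* read once here, so the variable counts as "used"... */
    dead_store = 5;       /* ...and this final, pointless store goes unflagged */
}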
That's what I imagine. Having dealt with C compilers for a long time now, it doesn't occur to me to expect better, so I guessed correctly that -Wunused was not applicable here.
That makes sense; thanks.
Thank you.
Now give the whole repo, before the fix, to an LLM and ask it to find this bug with only the context the author started with. I'm sure you can fix every software issue that way as well!
struct TypeA { int inner; };
struct TypeB { int inner; };

struct TypeA a1 = { 1 }, a2 = { 2 };

// whoops: unwrapping .inner bypasses the type check entirely
struct TypeB result = { .inner = a1.inner + a2.inner };
Don't get me wrong, where I've used this sort of pattern it's definitely caught issues, and it definitely massively lowers the surface area for this type of bug. But I still feel like it's quite fragile. The use case I felt worked best was separating out UTC datetime vs localised datetime structs - mainly since it's something you already had to use function calls to add/subtract from, you couldn't play with raw values.
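Roughly this shape, for illustration (hypothetical names, time zone handling simplified to a fixed offset):

#include <time.h>

typedef struct { time_t t; } UtcTime;    /* instant, in UTC */
typedef struct { time_t t; } LocalTime;  /* same instant, zone-adjusted */

/* arithmetic only goes through functions, so raw values are never touched */
UtcTime utc_add_seconds(UtcTime u, long secs)
{
    return (UtcTime){ u.t + secs };
}

/* conversion is explicit; mixing the two types up is a compile error */
LocalTime utc_to_local(UtcTime u, long utc_offset_secs)
{
    return (LocalTime){ u.t + utc_offset_secs };
}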
If you want free software I recommend IKOS, a sound static analyzer for C/C++ developed at NASA. Checks: https://github.com/NASA-SW-VnV/ikos/blob/master/analyzer/REA... Numerical abstract domains: https://github.com/NASA-SW-VnV/ikos/blob/master/analyzer/REA...
A commercial tool like Astree https://www.absint.com/astree/index.htm is an option if you have money.
It seems you are dooming your project the minute you start ignoring your first warning.
I would have expected that the compiler should complain that psize is computed but unused.
Why isn't -Wunused-variable enabled in OpenZFS?
Arguably, every unused variable in the code base is a potential bug like this waiting to chomp on some poor user’s data, so they should be turning it on & dealing with the consequences?
Most definitely, yes.
In general (for any programming language, really), I would advocate for enabling warnings and address them as you go along.
Because that unlocks the ability to react when code changes introduce warnings, as they generally point to unsound code.
>The provided code snippet is a function that converts an allocation size (asize) into the largest physical size (psize) that can be safely written to an allocation for a given virtual device (vdev). Here are some observations and potential issues in the code:
Return Value: The function currently returns asize instead of psize. This seems to be a mistake, as the purpose of the function is to compute and return the psize. The correct return statement should be:
return psize;
Comment Clarity: The comments mention that the function does not take into account the effect of gang headers. It might be helpful to clarify what "gang headers" are and how they affect the allocation sizes, especially for someone unfamiliar with the context.
Variable Initialization: The variable psize is calculated based on asize, ashift, cols, and nparity. Ensure that these values are correctly initialized and that vdev_raidz_get_logical_width returns a valid value for cols. If cols is zero, it could lead to division by zero in the DIV_ROUND_UP macro.
Assertions: The assertion ASSERT0(asize % (1 << ashift)); checks that asize is a multiple of 2^ashift. If this assertion fails, it will cause the program to terminate. Ensure that this condition is always met before calling this function, or consider handling this case more gracefully.
Macro Usage: The DIV_ROUND_UP macro should be checked to ensure it handles edge cases correctly, such as when psize is zero. If psize is zero, the division could lead to unexpected behavior.
Code Readability: Consider adding more comments to explain the logic behind the calculations, especially for someone who may not be familiar with the RAID-Z implementation.
Error Handling: There is no error handling in the function. Depending on the context in which this function is used, it might be beneficial to add checks for invalid input values and return an error code or handle exceptions accordingly.
Here is a revised version of the function with the return value corrected:
.....(Snipped for formatting issues)
Make sure to test the function thoroughly to ensure it behaves as expected under various conditions.
Accept the world has changed, my friend!!
And more so, as the NSA, China, Russia, etc. will be using LLMs to comb repos to find zero-day vulnerabilities.
Also, filesystems are an insanely complex problem, and ZFS is by far the most complex one to be widely deployed.
I have been playing with NetBSD for a bit and I discovered their FS has a log parameter; using that on mount makes their file system almost as good as ext4, in my experience. It's interesting to me that it is not advertised more.
At the same time, the tooling has gotten much better in the last years.
Clang-analyzer is fast enough to run as part of the CI. Newer gcc also gives quite a few more warnings for unused results.
My recommendation to the project is to
- Remove all compiler warnings and enable warnings-as-errors
- Increase the coverage of unit tests to >80%
That is a lot of work. But that's what is required for high-criticality systems engineering.
Side note: OpenZFS already has an extensive test suite. Merely hitting a code branch wouldn't have caught this one.
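For illustration, catching it takes a value-level assertion rather than mere branch coverage (a sketch, reusing the hypothetical declarations from the test.c above):

#include <assert.h>
#include <stdint.h>

/* For any raidz vdev with nparity > 0, the correct psize is strictly
   smaller than asize; the buggy version returns asize unchanged. */
void check_asize_to_psize(vdev_t *vd, uint64_t asize, uint64_t txg)
{
    uint64_t psize = vdev_raidz_asize_to_psize(vd, asize, txg);
    assert(psize < asize); /* fails on "return (asize)" */
}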