I don't think you can ensure that with gradients though: if f and f' are computed in machine arithmetic, cancellation errors might pile up.
Yes, you can. You need to do automatic differentiation with interval arithmetic. This gives you a mathematically correct result for the derivative.
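A toy sketch of what "automatic differentiation with interval arithmetic" could look like: forward-mode AD where both the value and the derivative are carried as intervals. This is an illustrative example I'm constructing here, not code from the article, and it ignores rounding; a rigorous implementation would round interval endpoints outward at every step.

```python
# Toy forward-mode AD on intervals (rounding ignored; a rigorous
# version would round each interval endpoint outward).

def iadd(a, b):
    """Interval addition."""
    return a[0] + b[0], a[1] + b[1]

def imul(a, b):
    """Interval multiplication: min/max over endpoint products."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(ps), max(ps)

class IDual:
    """Carries an interval for f and an interval for f'."""
    def __init__(self, val, der):
        self.val = val
        self.der = der

    def __add__(self, other):
        return IDual(iadd(self.val, other.val), iadd(self.der, other.der))

    def __mul__(self, other):
        # Product rule: (fg)' = f'g + fg'
        return IDual(imul(self.val, other.val),
                     iadd(imul(self.der, other.val), imul(self.val, other.der)))

# x over [1, 2], seeded with derivative 1; compute f(x) = x*x + x.
x = IDual((1.0, 2.0), (1.0, 1.0))
y = x * x + x
print(y.der)  # encloses f'(x) = 2x + 1 on [1, 2], i.e. (3.0, 5.0)
```

Note the enclosure is guaranteed to contain the true derivative range, but (as the reply below points out) nothing stops it from being much wider than necessary.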
Always keep in mind with interval arithmetic that (-inf, inf) is also a mathematically correct result. Keeping intervals small is the challenge with interval arithmetic and means that every algorithm not specifically developed for it is likely going to lead to useless results.
But that kinda defeats the point of replacing interval arithmetic with gradients, though.
> This blog post assumes a vague understanding of implicit surface rasterization, and how interval arithmetic is used to both skip regions of space and simplify complex expressions.
Can anyone give me a quick rundown of the article?
An implicit surface is defined by F(x,y,z) = 0. Implicit surfaces can be rendered by checking whether the camera ray going through pixel x,y intersects the surface or not. An axis-aligned bounding box (AABB) in 3D is the product of intervals [x_min, x_max]x[y_min, y_max]x[z_min, z_max]. The interval extension of F(x,y,z) gives you an interval [F_min, F_max] for such an AABB. If [F_min, F_max] doesn't contain 0, then you can discard the whole AABB for rendering purposes, because the surface is guaranteed to be not there. This can speed up rendering substantially.
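The culling test described above can be sketched in a few lines. The function F (a unit sphere) and the box bounds are hypothetical examples of mine, and rounding is ignored for simplicity:

```python
# Sketch of interval-based AABB culling for an implicit surface
# F(x,y,z) = x^2 + y^2 + z^2 - 1 (unit sphere). Intervals are (lo, hi).

def sq_interval(lo, hi):
    """Interval extension of x**2 over [lo, hi]."""
    a, b = lo * lo, hi * hi
    if lo <= 0.0 <= hi:
        return 0.0, max(a, b)
    return min(a, b), max(a, b)

def F_interval(xb, yb, zb):
    """Interval extension of F over the AABB xb × yb × zb."""
    los, his = zip(sq_interval(*xb), sq_interval(*yb), sq_interval(*zb))
    return sum(los) - 1.0, sum(his) - 1.0

def can_discard(xb, yb, zb):
    """The AABB can be skipped if 0 is not in [F_min, F_max]."""
    f_min, f_max = F_interval(xb, yb, zb)
    return f_min > 0.0 or f_max < 0.0

print(can_discard((2.0, 3.0), (2.0, 3.0), (2.0, 3.0)))      # True: box is far from the sphere
print(can_discard((0.5, 1.5), (-0.5, 0.5), (-0.5, 0.5)))    # False: box straddles the surface
```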
This is total nonsense. The point of Lipschitz continuity is that it is more than continuity and less than differentiability. If you assert that it is differentiable the concept loses all meaning. It is specifically interesting because you do not have to assert differentiability.
I think the author meant that the rate of change is bounded [without the function needing to be differentiable] but used the term "gradient" anyway. I guess you shouldn't use the term "rate of change" either?
> The Lipschitz property means the function's output doesn't change faster than linearly with input
is precise in that context. Lipschitz continuity means the difference quotient is globally bounded (where it exists). Differentiability, by definition, means the difference quotient converges at every point.
These are somewhat related, but different things.
They absolutely are, even in the article. E.g. the max function or the 1-norm are non-differentiable, but continuous.
There is literally no point in talking about Lipschitz continuity for differentiable functions, because it is equivalent to the derivative being bounded.
I still think there's a point in talking about Lipschitz continuity for differentiable functions because the algorithms they're talking about depend on the Lipschitz property, not the differentiability.
That is why the definition should not contain the differentiability condition. It is violated by the examples and irrelevant.
My point is that the Lipschitz property, as defined e.g. in the Wikipedia article, is the important part. Also assuming differentiability is unhelpful, because it misses the point and is even violated by the rest of the article.
It is also talking about functions which do not have a derivative at all and therefore could not possibly have that property. The point of Lipschitz functions is that they need not have a derivative.
>So it's not the same thing as the Lipschitz property in analysis, although I guess it's inspired by it--but as a result it's concrete and useful for algorithms.
That is just some arbitrary scaling. Only relevant to have the math look nicer. E.g. if f is Lipschitz for L=10, then f/10 is Lipschitz for L=1.
Functions that do not have a derivative at all can certainly have that property. Perhaps you were confused by the notation "dx", suggesting a differential, for the displacement, rather than the conventional Δx, but the sentence makes no sense if we try to read it as a statement about differentials. Possibly your correspondent is not a Greek Keyboard Achiever. Moreover, the functions they are discussing, while indeed not differentiable, are continuous and differentiable almost everywhere, so you can also prove Lipschitz continuity if you can prove a bound on their gradients at the points where they do exist.
> That is just some arbitrary scaling. Only relevant to have the math look nicer. E.g. if f is Lipschitz for L=10, then f/10 is Lipschitz for L=1.
It's a relevant difference if you don't know the Lipschitz constant L or a bound on it, because then you don't know what to divide by.
The Lipschitz property is not a property of functions with a derivative. I really do dislike this kind of drive by commenting, where you ignore an entire conversation and pretend someone does not understand what a gradient is.
The arbitrary scaling is not arbitrary if the whole algorithm is defined around K=1. The Lipschitz property in analysis is "bounded by some K". The algorithm is specifically using "bounded by K=1".
Also: for the purpose of numerical methods, it does not matter if a function "has a derivative" in a classical sense or not. Certainly distributions, step functions, etc are fine. Numerical methods cannot, after all, tell if a function is discontinuous or not. The article is using some known closed-form functions as examples, but it is trying to illustrate the general case, which is not going to be a closed form but just an arbitrary SDF that you constructed from somewhere---which it will be very useful to enforce the K=1 Lipschitz condition on.
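To make concrete why the K=1 condition is the thing the algorithm depends on: with a 1-Lipschitz f, the value |f(p)| is a safe step size along a ray, which is the core of sphere tracing. A minimal sketch, using a unit-sphere SDF as a hypothetical example:

```python
# Minimal sphere-tracing sketch: f being 1-Lipschitz means the surface
# is at least f(p) away from p, so stepping by f(p) cannot overshoot.
import math

def sdf_sphere(p):
    """Exact signed distance to the unit sphere: 1-Lipschitz by construction."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def sphere_trace(origin, direction, f, max_steps=128, eps=1e-6):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = f(p)
        if d < eps:
            return t   # hit
        t += d         # safe step: surface is at least d away
    return None        # miss (or ran out of steps)

t = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sdf_sphere)
print(t)  # 2.0: the ray from z=-3 toward the origin hits the sphere at z=-1
```

If f only bounds the distance from below (a "signed distance bound" rather than an exact SDF), the march still converges, just more slowly.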
>which it is then trying to force to have that property via a hack of sorts (f/|del f| apparently)
The functions he talks about do not have a derivative. So that hack is nonsensical.
This is taken directly from the paper's introduction, which admittedly uses the more specific terminology of "1-Lipschitz signed distance bounds".
The paper cites the original Hart '96 paper on sphere tracing; quoth Hart, "a function is Lipschitz if and only if the magnitude of its derivative remains bounded".
https://graphics.stanford.edu/courses/cs348b-20-spring-conte...
I wonder if there's a terminology schism here between computer graphics and numerical analysis folks.
The first group just pretends every function has a derivative (even when it clearly does not); the other doesn't.
The linked Wikipedia article gets it exactly right, I do not know why you would link to something which straight up says your definition is incorrect.
There is no point in talking about Lipschitz continuity when assuming that there is a derivative; you assume that it is Lipschitz because it is a weaker assumption. The key reason Lipschitz continuity is interesting is that it allows you to talk about functions without a derivative, almost like they have one. It is the actual thing which makes any of this work.
I'd say this is a little pedantic, save for the fact that your function of interest (an SDF) isn't a differentiable function! It has a big, crucially important subset of points (the caustic sets) where it fails to be differentiable.
Citation please.
Yes Lipschitz functions need not be differentiable but when they are that bound holds and is very useful. Using that bound is bread and butter in convex analysis of differentiable convex functions.
Curb the snark.
The article links to the Wikipedia page, which gets it right.
"An everywhere differentiable function g : R → R is Lipschitz continuous (with K = sup |g′(x)|) if and only if it has a bounded first derivative"
Lipschitz continuity for differentiable functions is just having a bounded derivative. The Lipschitz property suddenly becomes uninteresting as it just falls out of the assumptions; the interesting fact is that assuming Lipschitz continuity rather than differentiability is enough, and this is what allows you to use non-differentiable functions.
>Using that bound is bread and butter in convex analysis of differentiable convex functions.
It also is the bread and butter in Analysis of PDEs, but it is the bread and butter because Lipschitz continuity is a weaker property than differentiability. In the context of the article you want to talk about non-differentiable functions, e.g. max, which you couldn't if you assumed differentiability.
The reason this is important is that choosing Lipschitz over differentiable is what makes all this work.
It gives the special property that the derivative is bounded, a property that's not true for arbitrary differentiable functions.
How is Lipschitz less than differentiability? e^x is differentiable everywhere but not Lipschitz continuous everywhere, right?
You should compare the local properties.
But now that we've moved the goalposts to local: what about f(x) = x^(3/2) sin(1/x), f(0) = 0? It's differentiable yet not locally Lipschitz-continuous on (0, 1]. (Taken straight from Wikipedia)
I was speaking in very general terms about the subject; I was not making mathematically correct statements. You are completely correct about the counterexample.
You actually need some stronger assumptions, namely that f is C^1, meaning it has a continuous derivative. In that case, for f defined on a compact set K, the continuous derivative f' attains its maximum on K, giving Lipschitz continuity. Every C^1 function on K is Lipschitz, yet not every Lipschitz function is C^1 or even differentiable.
To be more specific and to give the reason Lipschitz functions are important, look at the following statement from Wikipedia. "A Lipschitz function g : R → R is absolutely continuous and therefore is differentiable almost everywhere, that is, differentiable at every point outside a set of Lebesgue measure zero."
Given this I hope that you can understand what I said about Lipschitz functions being a weaker version of differentiability.
But it is true that general implicit surfaces don't necessarily have constant magnitude gradients. I.e. F(x,y) = x^2 + y^2 - 1 vs. F(x,y) = sqrt(x^2 + y^2) - 1
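The difference between those two functions is easy to check numerically: both vanish on the unit circle, but only the second (the true SDF) has a unit-magnitude gradient there. A small sketch using central differences (my own illustration, not from the thread):

```python
# Both functions are zero on the unit circle, but their gradient
# magnitudes there differ: 2 for the raw implicit, 1 for the SDF.
import math

def grad_mag(F, x, y, h=1e-6):
    """Central-difference estimate of |grad F| at (x, y)."""
    fx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return math.hypot(fx, fy)

implicit = lambda x, y: x**2 + y**2 - 1.0       # |grad F| = 2 on the circle
sdf      = lambda x, y: math.hypot(x, y) - 1.0  # |grad F| = 1 away from the origin

print(round(grad_mag(implicit, 1.0, 0.0), 3))  # 2.0
print(round(grad_mag(sdf, 1.0, 0.0), 3))       # 1.0
```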
https://patentimages.storage.googleapis.com/7a/73/2d/8d2eeca...
A paper is usually better: the goal is very explicitly sharing knowledge, and there are peer reviewers and editors whose job is to make sure the writing is clear.
[1]: https://www.bmtdynamics.org/cgi-bin/display.pl?name=rdaic
[2]: https://books.google.fr/books?id=2zDUCQAAQBAJ
[3]: https://en.wikipedia.org/wiki/Taylor%27s_theorem#Taylor's_th...
The folks working on the Taylor model were coming from physics where they had some nasty (very ill-conditioned) nonlinear systems that they needed to solve and developed a package called COSY which implements it.
My understanding is that the Taylor model is effective in practice, but there might have been some confusion around what it was actually capable of. I believe the Taylor model people claimed that their interval inclusions had better than quadratic excess, but this turned out not to be the case. Other people were able to push past the quadratic limit of the well known interval inclusions using other techniques. There are some recent papers by Hormann and Yap along these lines, although I think the first interval inclusion that is better than quadratic dates back further...
The Lipschitz trick here relies on the assumption that the gradient has magnitude 1 everywhere. Evaluating the SDF in a box and using the Lipschitz property lets you get some quick estimates on the range of the SDF over the box.
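One such quick estimate: for a 1-Lipschitz f, its range over a box is contained in f(center) ± r, where r is the box's circumradius. A sketch with a hypothetical unit-sphere SDF:

```python
# Range bound for a 1-Lipschitz f over an axis-aligned box:
# no box point is farther than r from the center, so f can change
# by at most r from f(center).
import math

def sdf_sphere(p):
    return math.hypot(*p) - 1.0  # 1-Lipschitz unit-sphere SDF

def box_range(f, center, half_widths):
    r = math.hypot(*half_widths)  # circumradius of the box
    c = f(center)
    return c - r, c + r

lo, hi = box_range(sdf_sphere, (3.0, 0.0, 0.0), (0.5, 0.5, 0.5))
print(lo > 0.0)  # True: 0 not in [lo, hi], so the box cannot touch the surface
```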
This is a close cousin of the monotonicity interval inclusion: e.g., if f'([a, b]) > 0, then f([a, b]) is a subset of [f(a), f(b)]. Rounded interval arithmetic makes this rigorous on a computer; you can also do it in multiple dimensions, with higher derivatives, successively tightening inclusions for lower derivatives, etc.
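The monotonicity inclusion can give a strictly tighter enclosure than the natural interval extension. A sketch with f(x) = x^2 - x on [2, 3] (a hypothetical example of mine; rounding is ignored, whereas rigorous code would use outward-rounded interval arithmetic):

```python
# Monotonicity interval inclusion: if f'([a, b]) > 0, then
# f([a, b]) is a subset of [f(a), f(b)].

def f(x):
    return x * x - x

def naive_inclusion(a, b):
    """Natural interval extension of x^2 - x over [a, b], 0 <= a <= b."""
    return a * a - b, b * b - a

def df_interval(a, b):
    """Interval extension of f'(x) = 2x - 1 over [a, b]."""
    return 2 * a - 1, 2 * b - 1

def monotone_inclusion(a, b):
    dlo, _ = df_interval(a, b)
    assert dlo > 0, "only valid when f' > 0 on the box"
    return f(a), f(b)

print(naive_inclusion(2.0, 3.0))     # (1.0, 7.0): overestimates
print(monotone_inclusion(2.0, 3.0))  # (2.0, 6.0): the exact range
```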
mkeeter•1d ago
You may also enjoy "Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces via Range Analysis", which uses interval arithmetic (ish) to raymarch neural implicit surfaces:
https://arxiv.org/abs/2202.02444