For example (and this might be oversimplifying a bit, computer vision people please correct me if I’m wrong): if you’re interested in knowing whether or not an image contains a cat, then maybe there is some hyperplane P in H such that images on one side of P do not contain a cat, and images on the other side do. Solving “Does this image contain a cat?” then becomes a much easier problem: all you have to do is figure out what P is. Once you do that, you can pass your image into DINO, take the dot product of the embedding with the equation for P, and check whether the answer is negative or positive. The point is that finding P is much easier than training your own computer vision model from scratch.
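To make that concrete, here's a minimal sketch of the linear-probe idea. It assumes you've already wrapped the model in some embed(image) helper that returns the frozen embedding; that helper, plus train_images/train_labels/test_image, are placeholders I made up, not anything from the release:

    # Linear probe on frozen DINO embeddings: learn the hyperplane P, then
    # classify by the sign of the dot product with P's normal vector.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.stack([embed(img) for img in train_images])  # (N, D) frozen embeddings
    y = np.array(train_labels)                           # 1 = cat, 0 = no cat

    clf = LogisticRegression(max_iter=1000).fit(X, y)    # this learns P

    # "dot product with the equation for P and check the sign":
    w, b = clf.coef_[0], clf.intercept_[0]
    score = embed(test_image) @ w + b
    print("contains a cat" if score > 0 else "no cat")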
I imagine it would depend on whether DINOv3 captures the information of whether a given person is in the image, which I think is really a question about training data. So naively, I would guess the answer is yes for celebrities and no for non-celebrities. Partially for data/technical reasons, but also maybe due to the murkier legal expectation of privacy for famous people.
Vision transformers also output patch tokens, which can be assembled into a low-resolution feature map (w/32, h/32 is common). What you do with that data depends on the task. Classification can be as simple as linearly classifying the (whole-image) embedding. A semantic segmentation task can do the same, but for every pixel. This is why the DINO authors show a PCA representation of a bunch of images, which shows that semantically similar objects are grouped together by colour. Object detectors are more complicated, but the key thing is that once you have these pixel-level features, you can use them as input into existing architectures.
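For the curious, here's roughly what pulling patch tokens and doing the PCA trick looks like with the transformers library. I'm using a DINOv2 checkpoint as a stand-in since its Hub id is well known; swap in whichever DINOv3 checkpoint you're actually using, and note the image path is a placeholder:

    import torch
    from PIL import Image
    from sklearn.decomposition import PCA
    from transformers import AutoImageProcessor, AutoModel

    processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
    model = AutoModel.from_pretrained("facebook/dinov2-base")

    image = Image.open("example.jpg").convert("RGB")   # placeholder path
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        out = model(**inputs)

    tokens = out.last_hidden_state[0]      # (1 + num_patches, dim)
    cls, patches = tokens[0], tokens[1:]   # whole-image token vs. patch tokens

    side = int(patches.shape[0] ** 0.5)    # patches form a side x side grid
    rgb = PCA(n_components=3).fit_transform(patches.numpy())  # 3 components -> fake RGB
    feature_map = rgb.reshape(side, side, 3)  # low-res feature map you can display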
Now to your question: face recognition is a specific application of object re-identification (keyword: Re-ID). Most of these models work from the whole-image embedding: normally you'd run a detector to extract the face region, compute the embedding, put it in a vector database, and then query for nearest neighbours using something like the cosine distance. I've only worked in this space for animals, but humans are far more studied. Whether DINOv3 is good enough out-of-the-box I don't know, but there's certainly a lot of literature looking at these sorts of models for Re-ID.
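A back-of-the-envelope version of that pipeline, with a plain numpy matrix standing in for the vector database and a hypothetical embed_face(crop) helper that returns the embedding of an already-detected face crop:

    import numpy as np

    gallery = np.stack([embed_face(c) for c in known_crops])   # (N, D) known faces
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit-normalise rows
    names = list(known_names)                                  # parallel to gallery

    query = embed_face(new_crop)
    query /= np.linalg.norm(query)

    sims = gallery @ query                 # cosine similarity of unit vectors
    best = int(np.argmax(sims))
    print(names[best], float(sims[best]))  # nearest neighbour and its similarity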
The challenge with Re-ID is that the model has to produce features which discriminate between individuals, rather than just grouping similar-looking people together. For example, with the vanilla model you probably have a very good tool for visual search. But that's not the same task: if you give it a picture of someone in a field, you'll get back pictures of other people in fields. Getting individual-level discrimination usually requires re-training on labelled imagery where you have a few examples of each person. The short answer is that there are already very good models for doing this, and they don't necessarily even need ML to do a decent job (though it might be used for detecting facial landmarks).
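If you did want to go the re-training route, one common recipe is to freeze the backbone and train a small projection head with a metric-learning loss on your labelled examples. Rough sketch below; the 768-dim feature size, the backbone wrapper and the triplet_loader are all assumptions, and triplet loss is just one of several losses people use for this:

    import torch
    import torch.nn as nn

    head = nn.Linear(768, 256)                  # frozen features -> Re-ID space
    loss_fn = nn.TripletMarginLoss(margin=0.2)
    opt = torch.optim.AdamW(head.parameters(), lr=1e-4)

    for anchor, positive, negative in triplet_loader:  # same/same/different person
        with torch.no_grad():                          # backbone stays frozen
            fa, fp, fneg = (backbone(x) for x in (anchor, positive, negative))
        za, zp, zn = head(fa), head(fp), head(fneg)
        loss = loss_fn(za, zp, zn)                     # pull same-person pairs together
        opt.zero_grad()
        loss.backward()
        opt.step()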
They made their own DINOv3 license for this release (whereas DINOv2 used the Apache 2.0 license).
Neat though. Will still check it out.
As a first comment, I had to install the latest transformers==4.56.0.dev0 (e.g. pip install git+https://github.com/huggingface/transformers) for it to work properly. 4.55.2 and earlier fail with a missing image type in the config.
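For anyone hitting the same thing, a quick smoke test. The checkpoint id below is my best guess at one of the released names, so double-check it against the model card on the Hub before relying on it:

    # pip install git+https://github.com/huggingface/transformers
    import transformers
    from transformers import AutoImageProcessor, AutoModel

    print(transformers.__version__)  # should report 4.56.0.dev0 or newer

    model_id = "facebook/dinov3-vitb16-pretrain-lvd1689m"  # assumed id, verify on the Hub
    processor = AutoImageProcessor.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)  # older versions fail here parsing the config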
Seems like the tides are shifting at Meta.
https://www.linkedin.com/posts/yann-lecun_were-excited-to-ha...
DINOV3: Self-supervised learning for vision at unprecedented scale | https://news.ycombinator.com/item?id=44904608
> ViT models pretrained on satellite dataset (SAT-493M)
DINOv2 had pretty poor out-of-the-box performance on satellite/aerial imagery, so it's super exciting that they released a version of it specifically for this use case.
I’m fascinated by this, but am admittedly clueless about how to actually go about building any kind of recognizer or other system atop it.
As for doing it in general, it's a fairly standard vision transformer so anything built on DINOv2 (or any other ViT) should be easy to adapt to v3.
[0]: https://github.com/tue-mps/eomt
[1]: https://docs.lightly.ai/train/stable/semantic_segmentation.h...