frontpage.


GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•6m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•9m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•12m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•19m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•21m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•22m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•23m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•25m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•26m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•31m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•32m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•32m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•33m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•35m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•38m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•41m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•47m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•49m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•54m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•56m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•56m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•59m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•1h ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•1h ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•1h ago•2 comments

What Is (AI) Glaze?

https://glaze.cs.uchicago.edu/what-is-glaze.html
24•weinzierl•1mo ago

Comments

Kye•1mo ago
Snake oil. Even if it worked in a way that wouldn't be bypassed quickly, it was too late, and the few artists who've applied it aren't enough to matter in the next training runs. Watching artists pull down years, sometimes decades of already scraped galleries to apply sketchy anti-AI magic was distressing.
debugnik•1mo ago
Their objective is not so much to fight mass scraping as to prevent fine-tunes with their name on Civitai, copying them specifically. Which happens a lot.

Sadly I agree that Glaze doesn't really work for it.

maplethorpe•1mo ago
Unfortunately, questioning glaze gets you labelled as an enemy. "They want you to think it doesn't work", etc.
_blop•1mo ago
Unfortunately, Glaze does not seem to work. When I trained a simple style LoRA on a few sets of glazed images using SDXL, the LoRA was still able to reproduce their style.

Another unfortunate consequence of the introduction of Glaze and Nightshade is that some artists whom I follow have now started glazing every new work they publish, leading to quite ugly results due to the noise Glaze produces on high settings, despite its questionable efficacy.

rcxdude•1mo ago
Ironically it tends to introduce the kind of artifacts that can exist in AI-generated pics.
zyx321•1mo ago
Even if it doesn't work, it's important to try.

If OpenAI steals all your work, that's copyright infringement - but if you tried to stop them through technical means and they do it anyway, that's felony DRM circumvention.

oytis•1mo ago
It's interesting to try, but misrepresenting your attempts as an unbeatable solution doesn't help anyone.
chomp•1mo ago
It doesn’t, reread the limitations section.
nh23423fefe•1mo ago
useless signaling is all you need
Antibabelic•1mo ago
This page is light on technical detail. What does Glaze do to an image specifically?
oytis•1mo ago
Details are in the paper: https://people.cs.uchicago.edu/~ravenben/publications/pdf/gl...

Don't quite have the domain knowledge to evaluate, but the claims are outlandish

nh23423fefe•1mo ago
i xored 2 images on the front page

https://glaze.cs.uchicago.edu/images/wintersrose.jpg

https://glaze.cs.uchicago.edu/images/wintersrose-glazed-trim...

https://pasteboard.co/7IJPWBDuroMe.png

it splashes rgb noise near edges in the original
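That comparison is easy to reproduce locally. A minimal sketch, assuming the original and glazed images have already been loaded as same-sized `uint8` arrays (e.g. via Pillow); the toy arrays below stand in for the real downloads:

```python
import numpy as np

def xor_images(a, b):
    # Bitwise XOR of two 8-bit images: zero wherever pixels match
    # exactly, non-zero wherever they differ -- which makes even tiny
    # perturbations visible once displayed.
    assert a.shape == b.shape and a.dtype == b.dtype == np.uint8
    return np.bitwise_xor(a, b)

# Toy stand-in for an original and its glazed copy: flip two low bits
# at a single "edge" pixel.
original = np.zeros((4, 4), dtype=np.uint8)
glazed = original.copy()
glazed[1, 2] ^= 0b101
diff = xor_images(original, glazed)
```

XOR highlights exactly which bits changed; for absolute per-channel differences, `PIL.ImageChops.difference` is the more conventional tool.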

darubedarob•1mo ago
Personally I use AI to generate style descriptions without the artist's name or the song's name, to work around this.
GaggiX•1mo ago
From what I remember, Glaze uses a small CLIP model and LPIPS (based on VGG) for its adversarial loss; that's why it's so ineffective against larger, better-trained models.

It uses SD to do a style transfer on the image via image-to-image, then runs gradient descent on the image itself to lower the difference between the CLIP embeddings of the original and the style-transferred image while trying to maintain LPIPS; after every step, the perturbation is normalized so it doesn't exceed a certain threshold from the original image.

So essentially it's an adversarial attack against a small CLIP model, and today's models are much more robust than that.
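The loop described there can be sketched with a toy encoder standing in for CLIP. Everything here is an invented stand-in (`toy_embed` replaces the real encoder, the darker copy replaces the SD style transfer, and the LPIPS term is dropped); only the structure — gradient descent toward a target embedding, clipped to a perceptual budget — matches the description:

```python
import numpy as np

def toy_embed(img):
    # Invented stand-in for a CLIP-style image encoder; the real Glaze
    # optimizes against CLIP embeddings plus an LPIPS perceptual term.
    return np.array([img.mean(), img.std(), np.abs(np.diff(img, axis=0)).mean()])

def style_loss(img, target_emb):
    d = toy_embed(img) - target_emb
    return float(d @ d)

def glaze_like_perturb(original, target_emb, budget=0.05, steps=40, lr=0.5, h=1e-4):
    # Gradient descent on the pixels themselves (finite differences,
    # since the toy encoder isn't autodiff-backed), clipping the
    # perturbation to an L-infinity budget after every step -- the
    # "normalized to not exceed a certain threshold" part.
    delta = np.zeros_like(original)
    for _ in range(steps):
        base = style_loss(original + delta, target_emb)
        grad = np.zeros_like(delta)
        for idx in np.ndindex(delta.shape):
            bumped = delta.copy()
            bumped[idx] += h
            grad[idx] = (style_loss(original + bumped, target_emb) - base) / h
        delta = np.clip(delta - lr * grad, -budget, budget)
    return original + delta, delta

# Usage: pull a random image's embedding toward that of a darker
# "style transfer" stand-in, moving no pixel more than `budget`.
rng = np.random.default_rng(0)
original = rng.random((8, 8))
target = toy_embed(original * 0.5)
glazed, delta = glaze_like_perturb(original, target)
```

The budget clip is what keeps the result "unchanged to human eyes"; the attack only transfers to real models whose encoders react to the same perturbation direction, which is the weakness discussed throughout this thread.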

dale_glass•1mo ago
It's snake oil, and it'd be snake oil even if it worked.

I've yet to hear of it doing anything. I've never heard anyone in an AI group worried about it in any way. No "damn, Glaze ruined my LoRA". To the extent anyone talks about it, it's either non-technical artist groups, or AI groups where somebody intentionally sets out to play with it to see if they can actually make it do something.

But even if it worked in its intended scope, even then it'd be snake oil. Because you can't defeat every AI system simultaneously. Flaws can be exploited, but flaws aren't guaranteed to be (and almost certainly won't be) conserved over the long term. So anything that works now isn't going to work tomorrow. And defending against known models today is pointless because they were already successfully created.

The whole idea of attacking an already finished product is a fundamentally flawed approach, and would only possibly work in extremely unlikely and contrived cases. Like v1 not being very good, so the model's maker, long after a well-publicized adversarial attack on v1, decides for some reason to pull in additional data and incorporate it into v2.

techjamie•1mo ago
I wonder if it's a matter of the relative usage of those tools being very small compared to what already exists, and the few artists that want to muddy their work for it. But even if it were significant, I'm sure the next thing to release would be AI models designed to try to revert Nightshade/Glaze changes into something that works again.

I wonder if one could do something to protect images in a similar way that Anubis protects webpages from scrapers. Where the data sent from the server is mathematically obfuscated such that the client has to do some heavy calculating to get the final product.

It wouldn't stop an individual from collecting a small sample set, but it would discourage mass scraping.
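The Anubis-style idea amounts to a hashcash proof-of-work gate. A sketch of the general pattern (this is not Anubis's actual protocol, and all names here are made up): the server issues a salted challenge, the client burns CPU finding a nonce, and the server verifies with a single hash before serving the asset:

```python
import hashlib
import os

def make_challenge(difficulty_bits=12):
    # Server side: a random salt plus a difficulty; the client must
    # present a valid nonce before the asset is handed over.
    return os.urandom(16), difficulty_bits

def solve(salt, difficulty_bits):
    # Client side: brute-force a nonce whose SHA-256 falls below the
    # target -- expected work is about 2**difficulty_bits hashes.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while int.from_bytes(hashlib.sha256(salt + nonce.to_bytes(8, "big")).digest(), "big") >= target:
        nonce += 1
    return nonce

def verify(salt, difficulty_bits, nonce):
    # Server side: one hash to check the client's work.
    digest = hashlib.sha256(salt + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry (thousands of hashes to solve, one to verify) is what raises the per-image cost for a mass scraper while staying cheap for the server, though as noted in this thread, anyone with training-scale GPUs can absorb that cost.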

dale_glass•1mo ago
> But even if it was significant, I'm sure the next thing to release would be AI models designed to try and revert nightshade/glaze changes into something that works again.

I don't think it's even "reverting". Glaze isn't generically anti-AI; it tries to exploit flaws in one particular image-AI implementation by actually testing that model's reaction to a disturbance in the image.

The approach only works if the model at all cares about the type of disturbance being created.

Other models likely don't notice anything at all, there's nothing for them to revert.

> It wouldn't stop an individual from collecting a small sample set, but it would discourage mass scraping.

IMO, if they want to, they will do it. AI training already requires massive amounts of CPU/GPU power. They can also use it to solve your calculation challenge, and anyone training models will have enough horsepower available to dwarf anything any reasonable client machine could handle.

monster_truck•1mo ago
It doesn't do anything. It shouldn't be shared, in case people who don't know better are tricked into believing it does.
amelius•1mo ago
Cat and mouse ...
zkmon•1mo ago
>> Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style.

That's supposed to be the single most important sentence of the entire article, but it ended up being a mouthful that hardly makes sense.

>> So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.

"when" and "then" don't work like that.

I'm still trying to see a crisp solution statement beyond "is a system designed to protect human artists by disrupting style mimicry."

jgalt212•1mo ago
> Many work primarily on mobile devices

I hope they mean tablets here, and not phones. I can't imagine any artist being more productive or effective on a tiny screen vs a large screen.

icpmoles•1mo ago
I couldn't find a link to the source code.

Are they still pushing "security through obscurity"?

OutOfHere•1mo ago
I came here thinking that AI glaze is what non-AI products use to make their products look shiny to the audience.

Even if this did work now, there is nothing that AI can't adapt to. It would take just a thousand such images in a random large image dataset for AI to adapt quickly, and then it would be utterly pointless. As such, the effective half-life of any such approach is a year, with any further adversarial adaptation yielding a diminished effect.