
Perceptions of Crime and Disorder

https://arnoldkling.substack.com/p/perceptions-of-crime-and-disorder
1•paulpauper•25s ago•0 comments

Degoogling

https://iamvishnu.com/posts/degoogling
1•vishnuharidas•1m ago•0 comments

ContactTool – Chrome extension to track job applications across sites. KISS

https://www.etsy.com/listing/4462120051/jit-contacttool-chrome-extension-track
1•metatheory•1m ago•1 comment

Show HN: Vaultara – Daily AI News Intelligence Reports

https://vaultara.co/
1•SvenSchnieders•4m ago•0 comments

Claude Used in Iran Strikes

https://www.axios.com/newsletters/axios-am-f0954cb2-2f31-4426-87fd-050095005344.html
1•leopoldj•6m ago•0 comments

Residents living permanently in Japan's cyber-cafés [video]

https://www.youtube.com/watch?v=MtdupS0gRt0
1•simonebrunozzi•6m ago•0 comments

Show HN: Social proof works 2-7x better on AI shopping agents than humans

https://github.com/aaronbatchelder/claude-marketing-susceptibility-eval
1•aaronmb7•8m ago•0 comments

How the Government Deceived Congress in the Debate over Surveillance Powers (2013)

https://www.eff.org/deeplinks/2013/06/director-national-intelligences-word-games-explained-how-go...
4•doener•14m ago•0 comments

Show HN: Reflex – local code search engine and MCP server for AI coding

https://github.com/reflex-search/reflex
1•therecluse26•15m ago•0 comments

Bind 2 Port 0

https://bengarcia.dev/b2p0
1•hahahacorn•15m ago•0 comments

Poll: AI Winter

6•amelius•18m ago•3 comments

Show HN: AI Sees Me – CLIP running in the browser

https://www.howaiseesme.com/
1•jayyvk•18m ago•0 comments

SaaS in, SaaS out: Here's what's driving the SaaSpocalypse

https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/
1•palad1n•19m ago•1 comment

Dbslice: Extract a slice of your production database to reproduce bugs

https://github.com/nabroleonx/dbslice
1•rbanffy•21m ago•0 comments

Show HN: Updater – one command for macOS app updates

https://github.com/lu-zhengda/updater
2•zhengda-lu•23m ago•0 comments

PEP 747 – Annotating Type Forms – peps.python.org

https://peps.python.org/pep-0747/
2•rbanffy•25m ago•0 comments

Show HN: AfterLive – Preserve a Loved One's Voice and Personality with AI

https://afterlive.ai
1•crawde•26m ago•1 comment

Samsung Galaxy S26 Ultra Privacy Display Testing

https://www.lttlabs.com/articles/2026/03/01/samsung-galaxy-s26-ultra-privacy-display
2•LabsLucas•28m ago•1 comment

Securing AI Model Weights

https://www.rand.org/pubs/research_reports/RRA2849-1.html
1•fi-le•29m ago•0 comments

The information space around military AI is being weaponized against us

https://weaponizedspaces.substack.com/p/the-information-space-around-military
4•rbanffy•32m ago•0 comments

Show HN: ContractPulse – Free intelligence on federal government contracts

https://contractpulse.io
2•signalstackhq•32m ago•0 comments

Sam Altman AMA on DoD Collaboration

https://twitter.com/sama/status/2027900042720498089
8•Palmik•32m ago•1 comment

"All Lawful Use": More Than You Wanted to Know

https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you
3•pchristensen•36m ago•0 comments

Show HN: Agentic Gatekeeper – Auto-patch your code to enforce Markdown rules

https://github.com/revanthpobala/agentic-gatekeeper
1•revanth1108•37m ago•0 comments

Show HN: Deploybase – Compare GPU and LLM pricing across all major providers

https://deploybase.ai
1•grasper_•37m ago•0 comments

TPM-Sniffing LUKS Keys on an Embedded Linux Device [CVE-2026-0714]

https://www.cyloq.se/en/research/cve-2026-0714-tpm-sniffing-luks-keys-on-an-embedded-device
4•Tiberium•38m ago•1 comment

Palantir Sues Swiss Magazine for Accurate Report

https://www.techdirt.com/2026/02/27/palantir-sues-swiss-magazine-for-accurately-reporting-that-th...
6•doener•39m ago•0 comments

3D dashboard to monitor and control your AI coding agents in real-time

https://github.com/coding-by-feng/ai-agent-session-center
1•kasonzhan•43m ago•0 comments

$10M factory in 600sqft room

https://www.youtube.com/watch?v=hqGFcwyXYI0
1•humbfool2•44m ago•0 comments

The Zero-Server Code Intelligence Engine

https://github.com/abhigyanpatwari/GitNexus
1•mercat•48m ago•0 comments

Show HN: Imagedojo.ai – Blind arena for Google, OpenAI, and xAI image generators

https://imagedojo.ai/
1•vtail•1h ago
Hi HN,

I was curious which of the three major US AI labs generates images that people like more, so I built ImageDojo.ai.

It shows you two images side-by-side, both generated from the exact same prompt. You vote on which one you like more (you don't see the prompt or which model made each one).

Based on the votes, it calculates ELO ratings for the models — similar to LMSYS Arena for text.

The four models I selected (the original and the new Nano Banana, GPT-Image-1.5, and Grok-Imagine-Image) are all in the same rough price range ($0.02–$0.06 per image), so we're comparing fairly similar-class models. Please try it out and let me know what you think!
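A minimal sketch of the pairwise Elo update behind such a leaderboard (the K-factor of 32 and the 1500 starting rating are conventional chess-style defaults, not necessarily ImageDojo's actual parameters):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    """Return updated (winner, loser) ratings after a single vote."""
    e_w = expected_score(r_winner, r_loser)
    new_winner = r_winner + k * (1.0 - e_w)
    new_loser = r_loser - k * (1.0 - e_w)
    return new_winner, new_loser

# Example: two models start at 1500; one vote goes to model A.
a, b = elo_update(1500.0, 1500.0)  # a rises to 1516.0, b falls to 1484.0
```

Applied over many anonymous side-by-side votes, this converges toward a ranking even though each individual vote is noisy.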

Comments

vunderba•1h ago
For reference, have you seen the Artificial Analysis Image Arena Leaderboard? They also show you two images from anonymized models (revealed after you vote) and calculate crowdsourced ELO ratings.

https://huggingface.co/spaces/ArtificialAnalysis/Text-to-Ima...

vtail•1h ago
Thanks - and no, I hadn't seen this one. I like their edit-mode dashboard - showing the original image plus two edits; I was thinking about doing something like that.

I'm also a bit surprised they have gpt-image-1.5 so far above Nano Banana 2 - my limited testing shows that, at least for visual style, people like Nano Banana more.

vunderba•1h ago
Yeah, I think that's part of the issue with a single "squashed" comparative metric. Some users are going to grade higher based on overall visual fidelity, and others are going to value prompt following.

For a point of reference, I run a pretty comprehensive image model comparison site heavily weighted in favor of prompt adherence.

https://genai-showdown.specr.net

EDIT: FWIW, I agree with your assessment. OpenAI's models have always been very strong in prompt adherence but visually weak (gpt-image-1 had the famous "piss filter" until they finally pushed out gpt-image-1.5)

vtail•1h ago
Very cool site - I think I saw it before here on HN, and I liked it a lot.

Did you review all the edit results manually yourself, or do you have some kind of automated procedure?

vunderba•56m ago
Thanks. So I have a bespoke Python program that basically does this:

- Takes the platonic set of prompts

- Uses model-specific tuning directives with LLMs to create a bunch of prompt variations, so each model gets a diverse set of natural-language expressions to "roll" generations

But I still have to manually review each of the final images - which is pretty time-consuming. I've tried automating it using VLMs (like Qwen3-VL), but unfortunately they can miss small details and didn't provide as much value as I was hoping.
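The two steps above can be sketched structurally. The base prompts, the per-model tuning directives, and `vary_prompt` (standing in for the LLM rewrite call) are all hypothetical placeholders, not vunderba's actual tool:

```python
# A handful of base ("platonic") prompts -- illustrative only.
BASE_PROMPTS = ["a red fox in the snow", "a city street at dusk"]

# Per-model phrasing directives (assumed, for illustration).
TUNING = {
    "model_a": "terse, tag-like phrasing",
    "model_b": "full natural-language sentences",
}

def vary_prompt(prompt: str, directive: str, n: int = 3) -> list[str]:
    """Stand-in for an LLM call that rewrites `prompt` per `directive`."""
    return [f"{prompt} ({directive}, variation {i})" for i in range(n)]

def build_generation_queue() -> list[tuple[str, str]]:
    """Expand every base prompt into model-specific variations to 'roll'."""
    queue = []
    for model, directive in TUNING.items():
        for prompt in BASE_PROMPTS:
            for variant in vary_prompt(prompt, directive):
                queue.append((model, variant))
    return queue

queue = build_generation_queue()  # 2 models x 2 prompts x 3 variations = 12 jobs
```

The manual-review bottleneck sits after this queue is generated, which is why a reliable VLM grader would be the natural next automation step.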