frontpage.

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•22s ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
1•roknovosel•28s ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•8m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•9m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•11m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•11m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•11m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
2•pseudolus•12m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•12m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•13m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•13m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•14m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•15m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•15m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•18m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•19m ago•1 comment

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•19m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•20m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•21m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•21m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•22m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
7•derriz•22m ago•1 comment

AI Skills Marketplace

https://skly.ai
1•briannezhad•22m ago•1 comment

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•23m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•24m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•26m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
2•edward•27m ago•1 comment

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•29m ago•1 comment

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
2•geox•30m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
2•fortran77•31m ago•1 comment

Prompting by Activation Maximization

https://joecooper.me/blog/activation/
14•thatjoeoverthr•5mo ago

Comments

trehans•5mo ago
I wonder what the prompt would look like as a sentence. Maybe activation maximization could be used to decipher it, for example by searching for the sentence of length N that maximizes similarity to the prompt when fed through the tokenizer.
Filligree•5mo ago
I think we were all thinking the same thing.

Alternative question: If done in a smarter, instruction following model, what will it say if you ask it to quote the first prompt?

thatjoeoverthr•5mo ago
I'm not prepared to run a model larger than 3.2-Instruct-1B, but I gave the following instructions:

"Given a special text, please interpret its meaning in plain English."

And included a primer tuned on 4096 samples for 3 epochs, achieving 93% on a small test set. It wrote:

"`Sunnyday` is a type of fruit, and the text `Sunnyday` is a type of fruit. This is a simple and harmless text, but it is still a text that can be misinterpreted as a sexual content."

In my experience, all Llama models are highly neurotic and prone to detecting sexual transgression, like Goody2 (https://www.goody2.ai). So this interpretation does not surprise me very much :)
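
A minimal sketch of this kind of prefix tuning in PyTorch, assuming a Hugging Face causal LM; the checkpoint name, prefix length, learning rate, and the toy (input, label) pair are all illustrative, not the commenter's actual setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative checkpoint; any small causal LM that accepts
    # inputs_embeds works the same way.
    name = "meta-llama/Llama-3.2-1B-Instruct"
    model = AutoModelForCausalLM.from_pretrained(name)
    tok = AutoTokenizer.from_pretrained(name)
    model.requires_grad_(False)        # freeze the LM; only the prefix trains

    emb = model.get_input_embeddings()
    n_prefix = 8                       # illustrative number of soft "quasitokens"
    prefix = torch.nn.Parameter(emb.weight[:n_prefix].detach().clone())
    opt = torch.optim.Adam([prefix], lr=5e-3)

    def train_step(text, target):
        ids = tok(text, return_tensors="pt").input_ids
        tgt = tok(target, add_special_tokens=False, return_tensors="pt").input_ids
        # Prepend the trainable soft prefix to the embedded input, then
        # score the target continuation with the frozen model.
        inputs = torch.cat([prefix[None], emb(ids), emb(tgt)], dim=1)
        logits = model(inputs_embeds=inputs).logits
        span = logits[:, -tgt.size(1) - 1 : -1]   # positions predicting tgt
        loss = torch.nn.functional.cross_entropy(
            span.reshape(-1, span.size(-1)), tgt.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    train_step("Sunnyday", " positive")   # toy (input, label) pair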

thatjoeoverthr•5mo ago
I tried this with Instruct-3B now, and got the following text.

"The company strongly advises against engaging in any activities that may be harmful to the environment.1`

Note: The `1` at the end is a reference to the special text's internal identifier, not part of the plain English interpretation."

thatjoeoverthr•5mo ago
You can definitely "snap" it to the nearest neighbour according to the vocabulary matrix, but this comes with loss, so the "snapped" token won't behave the same. Not sure how it would score on benchmarks. I'm thinking about how to approach this, and I found a relevant paper: https://arxiv.org/pdf/2302.03668. I'm hoping I can tie this back into prefix tokens.
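
A sketch of that "snap" step, assuming `prefix` is the tuned soft prompt and `emb_weight` is the model's vocabulary (embedding) matrix, as in the sketch above; cosine similarity is one reasonable choice of nearest-neighbour metric:

    import torch.nn.functional as F

    def snap(prefix, emb_weight, tok):
        # Cosine-nearest vocabulary entry for each soft-prompt vector.
        sims = F.normalize(prefix, dim=-1) @ F.normalize(emb_weight, dim=-1).T
        ids = sims.argmax(dim=-1)
        return ids, tok.convert_ids_to_tokens(ids.tolist())

    # To measure the loss the snap incurs, re-run the task with
    # emb_weight[ids] in place of the soft prefix and compare scores.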
nneonneo•5mo ago
If you wanted to get a readable prompt, I wonder if you could follow the GCG trick used by jailbreak maximizers (e.g. https://arxiv.org/pdf/2307.15043)?

Sure, you're probably going to wind up with absolute garbage (one of their prompts starts with "== interface Manuel WITH steps instead sentences :)ish?") but it might be very funny to read...
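
For reference, a compact sketch of one greedy-coordinate-gradient step in the spirit of that paper; `loss_fn` is an assumed callable mapping prompt embeddings to the scalar loss being minimized, and the candidate counts are illustrative:

    import torch

    def gcg_step(emb_weight, prompt_ids, loss_fn, k=256, n_trials=64):
        # Differentiable embedding lookup via a one-hot relaxation, so the
        # gradient w.r.t. each token coordinate is available.
        one_hot = torch.nn.functional.one_hot(prompt_ids, emb_weight.size(0))
        one_hot = one_hot.to(emb_weight.dtype).requires_grad_(True)
        loss_fn(one_hot @ emb_weight).backward()
        # Per position, the k swaps whose gradients most decrease the loss.
        cand = (-one_hot.grad).topk(k, dim=-1).indices
        best_ids, best_loss = prompt_ids, float("inf")
        for _ in range(n_trials):
            pos = torch.randint(prompt_ids.size(0), (1,)).item()
            trial = prompt_ids.clone()
            trial[pos] = cand[pos, torch.randint(k, (1,)).item()]
            with torch.no_grad():
                trial_loss = loss_fn(emb_weight[trial]).item()
            if trial_loss < best_loss:
                best_ids, best_loss = trial, trial_loss
        return best_ids, best_loss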

mattnewton•5mo ago
There has got to be a way to map the activations back to the closest token embeddings and read the resulting sentence. It could be interesting to see how much activation you lose in doing that, and it could maybe even be useful for a "jailbreaking" attempt.
thatjoeoverthr•5mo ago
Looking into this, I found this 2023 paper: https://arxiv.org/pdf/2302.03668

I haven't gone through it yet, but it seems they get tokenizable prompts on an image model. I don't understand how you can backprop all the way to the token IDs, but I hope reading it will enlighten me; it would be fun to combine this with prefix tuning!
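
The paper's answer, roughly, is a straight-through estimator: the forward pass uses the nearest real token embeddings, while the gradient is applied to the continuous vectors, so the final prompt is directly tokenizable. A sketch, with `loss_fn` again an assumed task loss:

    def pez_step(soft_prompt, emb_weight, loss_fn):
        # Forward pass sees the nearest real token embeddings; the backward
        # pass sends the gradient straight through to the continuous vectors.
        hard = emb_weight[(soft_prompt @ emb_weight.T).argmax(dim=-1)]
        return loss_fn(soft_prompt + (hard - soft_prompt).detach())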

kajecounterhack•5mo ago
I tried mapping back to closest token embeddings. Here's what I got:

    global_step = 1377; phase = continuous; lr = 5.00e-03; average_loss = 0.609497
    current tokens: ' Superman' '$MESS' '.");' '(sentence' '");' '.titleLabel' ' Republican' '?-'

    global_step = 1956; phase = continuous; lr = 5.00e-03; average_loss = 0.589661
    current tokens: ' Superman' 'marginLeft' 'iers' '.sensor' '";' '_one' '677' '».'

    global_step = 2468; phase = continuous; lr = 5.00e-03; average_loss = 0.027065
    current tokens: ' cited' '*>(' ' narrative' '_toggle' 'founder' '(V' '(len' ' pione'

    global_step = 4871; phase = continuous; lr = 5.00e-03; average_loss = 0.022909
    current tokens: ' bgcolor' '*>(' ' nomin' 'ust' ' She' 'NW' '(len' ' pione'

"Republican?" was kind of interesting! But most of the strings were unintelligible.

This was for classifying sentiment on yelp review polarity.

mattnewton•5mo ago
Do the nearest tokens have a similar classification score?
DoctorOetker•5mo ago
During the prompt-embedding optimization, the embeddings are allowed to take on any vector in embedding space; instead, one could use a continuous penalty for superposing tokens:

Consider one of the embedding vectors in the input tensor: nothing guarantees it's exactly on, or close to, a specific token. Hence the probabilities with respect to each token form a distribution. Ideally that distribution should be one-hot (lowest entropy); the worst case is all tokens equally probable (highest entropy). So just add a loss term penalizing the entropy of the quasitokens, to promote them to take on actual token values.
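
A sketch of that penalty, assuming `prefix` holds the quasitoken vectors and `emb_weight` is the embedding matrix; the temperature is illustrative:

    import torch
    import torch.nn.functional as F

    def quasitoken_entropy(prefix, emb_weight, temperature=0.1):
        # Each quasitoken's similarities to the vocabulary, softmaxed into a
        # distribution; a one-hot (a real token) costs nothing, a smeared
        # superposition of tokens costs the most.
        probs = F.softmax(prefix @ emb_weight.T / temperature, dim=-1)
        return -(probs * (probs + 1e-9).log()).sum(dim=-1).mean()

    # total_loss = task_loss + lam * quasitoken_entropy(prefix, emb.weight)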