frontpage.

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•16s ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•2m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•6m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
1•dev_tty01•9m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•10m ago•1 comment

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•18m ago•1 comment

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•18m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
2•mooreds•18m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•19m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•21m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•22m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•24m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•24m ago•0 comments

Apple Tried to Tamper Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•26m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•28m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•28m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•30m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
8•geox•34m ago•0 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•35m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•38m ago•1 comment

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•46m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•49m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•49m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
6•doener•52m ago•1 comment

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•56m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•1h ago•1 comment

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•1h ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•1h ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•1h ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•1h ago•0 comments

Prompting by Activation Maximization

https://joecooper.me/blog/activation/
14•thatjoeoverthr•5mo ago

Comments

trehans•5mo ago
I wonder what the prompt would look like as a sentence. Maybe activation maximization could be used to decipher it, e.g. by searching for the sentence of length N that, fed through the tokenizer and embedding layer, maximizes similarity to the optimized prompt.
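
A minimal sketch of that idea, assuming access to the model's tokenizer and its input embedding matrix (e.g. `model.get_input_embeddings().weight` in Hugging Face transformers); the function name and the averaged-cosine scoring rule are illustrative, not from the post:

    import torch

    def sentence_score(candidate, soft_prompt, tokenizer, embed_w):
        # Embed the candidate sentence with the model's own vocabulary matrix.
        ids = torch.tensor(tokenizer.encode(candidate))
        cand = embed_w[ids]                              # (len, d_model)
        # Compare position-by-position up to the shorter length.
        n = min(cand.shape[0], soft_prompt.shape[0])
        cos = torch.nn.functional.cosine_similarity(
            cand[:n], soft_prompt[:n], dim=-1)
        return cos.mean().item()                         # higher = closer to the prompt

Ranking candidate sentences by this score would be a crude search, but it gives a concrete notion of "similarity to the prompt".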
Filligree•5mo ago
I think we were all thinking the same thing.

Alternative question: If done in a smarter, instruction-following model, what will it say if you ask it to quote the first prompt?

thatjoeoverthr•5mo ago
I'm not prepared to run a larger model than Llama 3.2-Instruct-1B, but I gave the following instructions:

"Given a special text, please interpret its meaning in plain English."

I also included a primer tuned on 4,096 samples for 3 epochs, achieving 93% on a small test set. It wrote:

"`Sunnyday` is a type of fruit, and the text `Sunnyday` is a type of fruit. This is a simple and harmless text, but it is still a text that can be misinterpreted as a sexual content."

In my experience, all Llama models are highly neurotic and prone to detecting sexual transgression, like Goody2 (https://www.goody2.ai), so this interpretation does not surprise me very much :)

thatjoeoverthr•5mo ago
I've now tried this with Instruct-3B and got the following text.

"The company strongly advises against engaging in any activities that may be harmful to the environment.1`

Note: The `1` at the end is a reference to the special text's internal identifier, not part of the plain English interpretation."

thatjoeoverthr•5mo ago
You can definitely "snap" it to the nearest neighbour according to the vocabulary matrix, but this comes with loss, so the "snapped" token won't behave the same. Not sure how it would score on benchmarks. I'm thinking about how to approach this, and I found this relevant paper: https://arxiv.org/pdf/2302.03668 (I'm hoping I can tie it back into prefix tokens).
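
A minimal sketch of that snapping step, assuming the optimized prompt lives in the model's input embedding space; `embed_w` is the (vocab_size, d_model) input embedding matrix, and cosine similarity is one reasonable choice of distance:

    import torch

    def snap_to_vocab(soft_prompt, embed_w):
        # Normalize both sides so the dot product below is cosine similarity.
        p = torch.nn.functional.normalize(soft_prompt, dim=-1)
        e = torch.nn.functional.normalize(embed_w, dim=-1)
        sims = p @ e.T                      # (prompt_len, vocab_size)
        token_ids = sims.argmax(dim=-1)     # nearest real token per position
        snapped = embed_w[token_ids]        # embeddings after snapping
        return token_ids, snapped

Decoding `token_ids` with the tokenizer gives the readable string, and re-running the task with `snapped` in place of the soft prompt measures exactly the loss this comment is worried about.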
nneonneo•5mo ago
If you wanted to get a readable prompt, I wonder if you could follow the GCG trick used by jailbreak maximizers (e.g. https://arxiv.org/pdf/2307.15043)?

Sure, you're probably going to wind up with absolute garbage (one of their prompts starts with "== interface Manuel WITH steps instead sentences :)ish?") but it might be very funny to read...
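
For reference, a compressed sketch of the GCG inner loop from that paper, assuming a Hugging Face-style causal LM; `adv_slice` (a Python slice over the mutable positions), `loss_fn` (logits to scalar), and the `k`/`n_trials` values are illustrative:

    import torch

    def gcg_step(model, embed_w, input_ids, adv_slice, loss_fn,
                 k=256, n_trials=64):
        # Differentiable path: one-hot tokens times the embedding matrix.
        one_hot = torch.nn.functional.one_hot(
            input_ids[adv_slice], num_classes=embed_w.shape[0]
        ).to(embed_w.dtype).requires_grad_(True)
        embeds = model.get_input_embeddings()(input_ids).detach().clone()
        embeds[adv_slice] = one_hot @ embed_w
        loss_fn(model(inputs_embeds=embeds.unsqueeze(0)).logits).backward()
        # Promising substitutions: tokens with the most negative gradient.
        candidates = (-one_hot.grad).topk(k, dim=-1).indices   # (adv_len, k)

        best_ids, best_loss = input_ids, float("inf")
        for _ in range(n_trials):
            trial = input_ids.clone()
            pos = torch.randint(len(candidates), (1,)).item()
            trial[adv_slice][pos] = candidates[pos][torch.randint(k, (1,)).item()]
            with torch.no_grad():
                trial_loss = loss_fn(model(trial.unsqueeze(0)).logits).item()
            if trial_loss < best_loss:
                best_ids, best_loss = trial, trial_loss
        return best_ids, best_loss    # greedy: keep the best single swap

Because every candidate stays a real token ID throughout, the result is always tokenizable, garbage or not.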

mattnewton•5mo ago
There has got to be a way to map the activations back to the closest token embeddings and read the resulting sentence. It could be interesting to see how much activation you lose in doing that, and it might even be relevant to a "jailbreaking" attempt.
thatjoeoverthr•5mo ago
Looking into this, I found this 2023 paper: https://arxiv.org/pdf/2302.03668

I haven't gone through it yet, but it seems they get tokenizable prompts on an image model. I don't understand how you can backprop all the way to the token IDs, but I hope reading this will enlighten me, and it would be fun to combine it with prefix tuning!

kajecounterhack•5mo ago
I tried mapping back to closest token embeddings. Here's what I got:

    global_step = 1377; phase = continuous; lr = 5.00e-03; average_loss = 0.609497
    current tokens: ' Superman' '$MESS' '.");' '(sentence' '");' '.titleLabel' ' Republican' '?-'

    global_step = 1956; phase = continuous; lr = 5.00e-03; average_loss = 0.589661
    current tokens: ' Superman' 'marginLeft' 'iers' '.sensor' '";' '_one' '677' '».'

    global_step = 2468; phase = continuous; lr = 5.00e-03; average_loss = 0.027065
    current tokens: ' cited' '*>(' ' narrative' '_toggle' 'founder' '(V' '(len' ' pione'

    global_step = 4871; phase = continuous; lr = 5.00e-03; average_loss = 0.022909
    current tokens: ' bgcolor' '*>(' ' nomin' 'ust' ' She' 'NW' '(len' ' pione'

"Republican?" was kind of interesting! But most of the strings were unintelligible.

This was for classifying sentiment on yelp review polarity.

mattnewton•5mo ago
Do the nearest tokens have a similar classification score?
DoctorOetker•5mo ago
During the prompt embedding optimization, the embeddings are allowed to take on any vector in embedding space. Instead, one could use a continuous penalty that discourages superpositions of tokens:

Consider one of the embedding vectors in the input tensor: nothing guarantees it lies exactly on, or even close to, a specific token. The similarities to each vocabulary token therefore form a distribution. Ideally that distribution should be one-hot (lowest entropy); the worst case is all tokens equally probable (highest entropy). So just add a loss term penalizing the entropy of these quasi-tokens, to push them toward actual token values.
Consider one of the embedding vectors in the input tensor: nothing guarantees its exactly on, or close to a specific token. Hence the probabilities with respect to each token form a distribution, ideally that distribution should be one-hot (lowest entropy) and worst case all equal probability (highest entropy), so just add a loss term penalizing the entropy on the quasitokens, to promote them to take on actual token values.