
Visualizing asymmetry in the 196 Lychrel chain (50k steps, 20k digits)

2•jivaprime•2mo ago
I’ve been experimenting with the classic 196 Lychrel problem, but instead of trying to push the iteration record, I wanted to look at the structure of the 196 sequence over time.

Very briefly: starting from 196 in base 10, we repeatedly do reverse-and-add. No palindrome is known, despite huge computational searches, so 196 is treated as a Lychrel candidate, but there is no proof either way.

Rather than just asking “does it ever hit a palindrome?”, I asked:

> What does the *digit asymmetry* of the 196 sequence look like as we iterate?

---

### SDI: a simple asymmetry metric

I defined a toy metric, *SDI (Symmetry Defect Index)*:

* Write `n` in decimal, length `L`, let `pairs = L // 2`.

* Compare the left `pairs` digits with the right `pairs` digits reversed, so each pair of digits “faces” its mirror.

* For each pair `(dL, dR)`:

  ```text
  pair_score =
    abs((dL % 2) - (dR % 2)) +
    abs((dL % 5) - (dR % 5))
  ```
* Sum over all pairs to get `SDI`, then normalize:

  ```text
  normalized_SDI = SDI / pairs
  ```
Heuristic: lower means “more symmetric / structured”, higher means “more asymmetric / closer to random”. For random decimal digits, normalized SDI clusters around ~2.1 in my tests. I also mark ~1.6 as a “zombie line”: well below that looks very frozen/structured.
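
To make the definition concrete, here is a minimal Python sketch of the metric exactly as described above (the real implementation lives in the repo linked below; this is just the idea):

```python
def normalized_sdi(n: int) -> float:
    """Toy Symmetry Defect Index: compare mirrored digit pairs of n."""
    digits = str(n)
    pairs = len(digits) // 2
    if pairs == 0:
        return 0.0  # single-digit numbers are trivially symmetric
    sdi = 0
    for i in range(pairs):
        dL = int(digits[i])        # i-th digit from the left
        dR = int(digits[-1 - i])   # mirrored digit from the right
        sdi += abs((dL % 2) - (dR % 2)) + abs((dL % 5) - (dR % 5))
    return sdi / pairs
```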

---

### Experiment

* Start: 196
* Operation: base-10 reverse-and-add
* Steps: 50,000
* SDI sampled every 100 steps
* Implementation: Python big ints + string-based SDI

By 50k steps, the 196 chain reaches about 20,000 decimal digits (~10^20000). I plotted normalized SDI vs step, plus a linear trend line and reference lines at 2.1 (random-ish) and 1.6 (zombie line).

I also ran the same SDI on 89 (which does reach a palindrome) for comparison.
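
For completeness, the experiment loop is essentially just this: a simplified sketch that reuses the `normalized_sdi` helper sketched above, not the exact code from the repo:

```python
def run_chain(start: int, steps: int, sample_every: int = 100):
    """Iterate base-10 reverse-and-add, sampling normalized SDI periodically."""
    n = start
    samples = []  # list of (step, normalized SDI)
    for step in range(steps + 1):
        s = str(n)
        if step % sample_every == 0 or s == s[::-1]:
            samples.append((step, normalized_sdi(n)))
        if step > 0 and s == s[::-1]:
            break  # reached a palindrome (89 does; 196 never has so far)
        n = n + int(s[::-1])  # one reverse-and-add step
    return samples

sdi_196 = run_chain(196, 50_000)                 # ~20k digits by the end
sdi_89 = run_chain(89, 50_000, sample_every=1)   # short chain, sample every step
```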

---

### What it looks like

*For 196 (0–50k steps):*

* Normalized SDI lives mostly between ~1.1 and 2.2.
* It does *not* drift toward 0 (no sign of “healing” into symmetry).
* The trend line has a tiny positive slope (almost flat).
* The cloud stays below the ~2.1 “random” line but mostly above the ~1.6 “zombie line”.

So 196 doesn’t look like it’s converging to a very symmetric state, and it doesn’t look fully random either. It seems stuck in a mid-level “zombie band” of asymmetry.

*For 89:*

* SDI starts around 2–3, then drifts downward.
* When 89 finally reaches a palindrome, SDI collapses sharply to 0 at that step.
* This matches the intuition: a “healing” sequence that ends in perfect symmetry.

SDI cleanly separates the behaviour of 89 (heals to a palindrome) from 196 (stays in a noisy mid-level band).

---

### Code and plots

Code and plots are here (including the SDI implementation and 196 vs 89 graphs):

* GitHub: [https://github.com/jivaprime/192](https://github.com/jivaprime/192)

---

### Looking for feedback

I’m curious about:

* Similar work: have you seen digit-symmetry / asymmetry metrics applied to Lychrel candidates before?
* Better metrics: any more standard notions of “symmetry defect” or digit entropy you’d recommend?
* Scaling: ideas for a C/Rust implementation that occasionally samples SDI far beyond this (e.g., at depths comparable to the classic 196 palindrome quests)?

Happy to tweak the metric, run other starting values / bases, or collect more data if people have ideas.

Comments

jivaprime•2mo ago
Thanks for the thoughtful questions — here’s how I see things at the moment.

---

### 1. Similar work (Lychrel candidates + digit symmetry metrics)

There’s a lot of well-known computational work around 196 and Lychrel candidates in general:

* pushing reverse-and-add depth,
* cataloging candidate roots over large ranges,
* classic projects like John Walker’s “196 Palindrome Quest” and p196.org, etc.

I’ve been looking at that side of things as background.

What I haven’t really seen (so far) is something that does exactly what I’m doing here, namely:

> at each step, measure some *left–right digit symmetry/asymmetry metric*,
> and plot that as a time series along the 196 chain.

So SDI, as I’m using it, isn’t meant as a standard or established tool — it’s more like an ad-hoc probe I built to see whether the 196 chain visibly behaves differently from a “normal” case like 89.

If anyone knows of prior work that tracks a similar per-step “symmetry defect” over a Lychrel candidate, I’d really like to see it.

---

### 2. Better / more standard metrics

I completely agree SDI is a toy metric. I chose it for very pragmatic reasons:

* easy to compute from the decimal representation,
* has a clear mirror interpretation (pairing left/right digits),
* and naturally goes to 0 on a palindrome.

If I wanted to take this more seriously, some obvious directions would be:

* *Direct distances*
  Instead of the mod-2 / mod-5 trick, use something like:

  * Hamming distance between the digit string and its reverse, or
  * the average of |dL − dR| over mirrored pairs.
* *Left vs right distributions*
  Build digit histograms for left and right halves and compare them via:

  * L1 distance, or
  * KL divergence, etc.
* *Correlation / information-theoretic view*
  Treat pairs (i, L−1−i) as samples from a joint distribution and measure:

  * mutual information,
  * correlation / covariance, etc.,
    to see how strongly the mirrored positions are coupled.
* *Entropy-type measures*
  Shannon entropy of:

  * the overall digit distribution, or
  * digit distributions on subranges,
    to quantify “how uniform / random-like” the digits are.
* *Time-series style analysis*
  View the digits as a sequence 0–9 and look at:

  * autocorrelation,
  * simple spectral properties,
    to see whether there are nontrivial patterns.

In other words, SDI is just a cheap, first exploratory probe. I’m absolutely open to replacing it with something more standard. If there’s a specific metric you think would be more meaningful here (or more familiar from information theory / statistics / dynamical systems), I’d be happy to try it on the same data and compare.
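
For concreteness, the “direct distance” and “left vs right distribution” ideas above are only a few lines each in the same string-based setup; rough sketches (helper names made up, not yet run against the real data):

```python
from collections import Counter

def mirror_abs_distance(n: int) -> float:
    """Average |dL - dR| over mirrored digit pairs; exactly 0 on a palindrome."""
    d = str(n)
    pairs = len(d) // 2
    if pairs == 0:
        return 0.0
    return sum(abs(int(d[i]) - int(d[-1 - i])) for i in range(pairs)) / pairs

def half_histogram_l1(n: int) -> float:
    """L1 distance between digit histograms of the left and right halves."""
    d = str(n)
    half = len(d) // 2
    if half == 0:
        return 0.0
    left, right = Counter(d[:half]), Counter(d[-half:])
    return sum(abs(left[c] - right[c]) for c in "0123456789") / half
```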

---

### 3. Scaling this in C / Rust

I agree: with Python, what I’m doing now is fine for ~50k steps / ~20k digits, but nowhere near the kind of depths that the classic 196 palindrome quests reached. To go there, the implementation really has to change.

Rough sketch of what I have in mind:

1. *Representation*

   * Use an explicit big-int representation as an array of machine words, e.g. `u32` / `u64` with base 10, 10⁴, 10⁹, etc.
   * Implement reverse-and-add directly on that array:

     * manual big-int addition,
     * mirrored index access for the reverse,
       with no `int → string → int` conversions in the hot loop.
2. *SDI computation strategy*

   * Option A: store true decimal digits (`0..9`) in memory.

     * Then SDI is just a linear scan over the digit array (see the Python sketch after this list).
     * Even 1M decimal digits is only ~1 MB, so it’s not a memory problem.

   * Option B: store a larger base internally (e.g. 10⁹ per limb),

     * and only at *sample steps* expand to decimal digits while simultaneously computing SDI, then discard.
3. *Sampling frequency*

   The goal isn’t to record SDI on *every* iteration at huge depths, but to take occasional snapshots of the “state”:

   * For example: sample at steps 0, 10k, 20k, 30k, … (or 0, 100k, 200k, …).
   * That way, even if each SDI evaluation is O(N) in the number of digits, the overall overhead remains tiny compared to the cost of the core big-int reverse-and-add at extreme depths.
4. *Collaboration / feedback*

   If someone with experience in high-performance big-int (GMP/MPIR, or Rust big-int libraries) has ideas on:

   * a clean way to integrate “occasional decimal-digit snapshots” into an existing 196-style search, or
   * a good data layout for this use case,

   I’d be very interested — either to adapt an existing codebase, or to benchmark a dedicated SDI-sampling version against the classic implementations.
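
To make Option A from item 2 concrete before any port, here is a small Python sketch of what the hot loop would look like on a little-endian digit array; in C/Rust the digits or limbs would just be a plain array of machine words (this is an assumption about the eventual layout, not existing code):

```python
def reverse_add_digits(digits: list[int]) -> list[int]:
    """One reverse-and-add step on a little-endian array of decimal digits."""
    rev = digits[::-1]          # the reversed number, also little-endian
    out, carry = [], 0
    for a, b in zip(digits, rev):
        s = a + b + carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return out

def sdi_scan(digits: list[int]) -> float:
    """Normalized SDI as a single linear pass over the digit array."""
    pairs = len(digits) // 2
    if pairs == 0:
        return 0.0
    total = 0
    for i in range(pairs):
        dL, dR = digits[-1 - i], digits[i]   # most- vs least-significant side
        total += abs(dL % 2 - dR % 2) + abs(dL % 5 - dR % 5)
    return total / pairs
```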

---

That’s roughly where I’m at right now. I’m happy to refine any of this or try specific metrics if you have something particular in mind.