
LLMs are powerful, but enterprises are deterministic by nature

3•prateekdalal•2h ago•2 comments

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

46•UmYeahNo•1d ago•28 comments

Ask HN: Ideas for small ways to make the world a better place

13•jlmcgraw•16h ago•19 comments

Ask HN: Non AI-obsessed tech forums

23•nanocat•13h ago•20 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

44•Invictus0•1d ago•11 comments

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•4d ago•514 comments

Ask HN: Non-profit, volunteers run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•11h ago•1 comment

Ask HN: Who is hiring? (February 2026)

313•whoishiring•4d ago•512 comments

AI Regex Scientist: A self-improving regex solver

6•PranoyP•18h ago•1 comment

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

17•jchung•2d ago•12 comments

Ask HN: Why LLM providers sell access instead of consulting services?

4•pera•1d ago•13 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•1d ago•7 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: Is it just me or are most businesses insane?

7•justenough•1d ago•7 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•3d ago•122 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•13h ago•2 comments

Kernighan on Programming

170•chrisjj•4d ago•61 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•3 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•2d ago•1 comment

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•6 comments

Ask HN: Have you been fired because of AI?

17•s-stude•4d ago•15 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•3d ago•1 comment

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Test management tools for automation heavy teams

2•Divyakurian•2d ago•2 comments

CPU-only PPO solving TSPLIB lin318 in 20 mins (0.08% gap)

5•jivaprime•2mo ago
Hi all,

I’ve put together a repo demonstrating how to train PPO directly on a single TSPLIB instance (lin318) from scratch—without pre-training or GPUs.

Repo: https://github.com/jivaprime/TSP
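
For context, lin318 is a TSPLIB EUC_2D instance, so distances are Euclidean lengths rounded to the nearest integer (which is why the optimum is the integer 42,029). A minimal standalone loader for this format, not taken from the repo and with placeholder names, looks roughly like:

    import numpy as np

    def load_euc2d(path):
        """Parse the NODE_COORD_SECTION of a TSPLIB EUC_2D file into an (n, 2) array."""
        coords, reading = [], False
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line == "NODE_COORD_SECTION":
                    reading = True
                elif line == "EOF":
                    break
                elif reading and line:
                    _, x, y = line.split()[:3]
                    coords.append((float(x), float(y)))
        return np.asarray(coords)

    def euc2d_matrix(coords):
        """TSPLIB EUC_2D metric: Euclidean distance rounded to the nearest integer."""
        diff = coords[:, None, :] - coords[None, :, :]
        return np.rint(np.sqrt((diff ** 2).sum(axis=-1))).astype(np.int64)

    # coords = load_euc2d("lin318.tsp")   # 318 cities
    # dist = euc2d_matrix(coords)         # optimal tour length under this metric: 42,029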

1. Experiment Setup

Problem: TSPLIB lin318 (Opt: 42,029) & rd400

Hardware: Google Colab (CPU only)

Model: Single-instance PPO policy + Value network. Starts from random initialization.

Local Search: Light 2-opt during training, Numba-accelerated 3-opt for evaluation.

Core Concept: Instead of a "stable average-error minimizer," this policy is designed as a high-variance explorer. The goal isn't to keep the average gap low, but to occasionally "spike" very low-error tours that local search can polish.
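
To make the "polish the spikes" idea concrete, here is a rough standalone sketch of a light, Numba-compiled first-improvement 2-opt plus a tour-length helper. This is illustrative only (the evaluation pass in the repo uses a heavier 3-opt), and the commented usage at the bottom assumes the loader sketch above plus some source of sampled tours:

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def tour_length(tour, dist):
        n = tour.shape[0]
        total = 0.0
        for i in range(n):
            total += dist[tour[i], tour[(i + 1) % n]]
        return total

    @njit(cache=True)
    def two_opt(tour, dist):
        """First-improvement 2-opt: keep reversing segments while any move shortens the tour."""
        n = tour.shape[0]
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n):
                    if i == 0 and j == n - 1:
                        continue  # reversing the whole tour changes nothing
                    a, b = tour[i], tour[i + 1]
                    c, d = tour[j], tour[(j + 1) % n]
                    delta = dist[a, c] + dist[b, d] - dist[a, b] - dist[c, d]
                    if delta < -1e-9:
                        lo, hi = i + 1, j
                        while lo < hi:  # reverse tour[i+1 .. j] in place
                            tour[lo], tour[hi] = tour[hi], tour[lo]
                            lo, hi = lo + 1, hi - 1
                        improved = True
        return tour

    # Keep the best "spike": polish each sampled tour and track the minimum length seen.
    # dist = euc2d_matrix(load_euc2d("lin318.tsp")).astype(np.float64)
    # best = min(tour_length(two_opt(t.astype(np.int64), dist), dist) for t in sampled_tours)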

2. Results: lin318

Best Shot: 42,064 (Gap ≈ +0.08%)

Time: Reached within ~20 minutes on Colab CPU.

According to the logs (included in the repo), the sub-0.1% shot appeared around elapsed=0:19:49. While the average error oscillates around 3–4%, the policy successfully locates a deep basin that 3-opt can exploit.
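
For reference, the gap is just the relative excess over the TSPLIB optimum:

    best_len, opt = 42_064, 42_029
    gap = 100 * (best_len - opt) / opt  # 100 * 35 / 42029
    print(f"{gap:.3f}%")                # -> 0.083%, i.e. the ~0.08% quoted above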

3. Extended Experiment: Smart ILS & rd400

I extended the pipeline with "Smart ILS" (Iterated Local Search) post-processing to see if we could hit the exact optimum.
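
The "Smart" tweaks aren't spelled out in this post, but the loop follows the standard iterated-local-search pattern: kick the current best tour (e.g. with a double-bridge move), re-polish it with local search, and keep the result only if it is shorter. A generic sketch, reusing two_opt and tour_length from above, with illustrative (not the repo's) names and iteration counts:

    import numpy as np

    def double_bridge(tour, rng):
        """Classic ILS kick: cut the tour into four segments and reconnect them."""
        n = len(tour)
        i, j, k = sorted(rng.choice(np.arange(1, n), size=3, replace=False))
        return np.concatenate((tour[:i], tour[j:k], tour[i:j], tour[k:]))

    def iterated_local_search(seed_tour, dist, n_iters=10_000, seed=0):
        """Perturb -> polish -> accept if shorter; return the best tour found."""
        rng = np.random.default_rng(seed)
        best = two_opt(seed_tour.copy(), dist)
        best_len = tour_length(best, dist)
        for _ in range(n_iters):
            cand = two_opt(double_bridge(best, rng), dist)
            cand_len = tour_length(cand, dist)
            if cand_len < best_len:
                best, best_len = cand, cand_len
        return best, best_len

    # Seeded with the PPO tour, e.g.:
    # tour, length = iterated_local_search(ppo_tour, dist)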

A. lin318 + ILS

Took the PPO-generated tour (0.08% gap) as a seed.

Ran Smart ILS for ~20 mins.

Result: Reached the exact optimum (42,029).

B. rd400 + ILS

PPO Phase: ~2 hours on CPU. Produced tours with ~1.9% gap.

ILS Phase: Used PPO tours as seeds. Ran for ~40 mins.

Result: Reached a 0.079% gap (cost 15,293 vs. opt 15,281).

Summary

The workflow separates concerns effectively:

PPO: Drives the search into a high-quality basin (1–2% gap).

ILS: Digs deep within that basin to find the optimum.

If you are interested in instance-wise RL, CPU-based optimization, or comparing against ML-TSP baselines (POMO, AM, NeuroLKH), feel free to check out the code.

Constructive feedback is welcome!