frontpage.

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•1m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•4m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•7m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
1•dev_tty01•10m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•11m ago•1 comments

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•19m ago•1 comments

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•19m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
2•mooreds•20m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•20m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•22m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•24m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•25m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•26m ago•0 comments

Apple Tried to Tamper Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•28m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•29m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•29m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•31m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
8•geox•35m ago•1 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•36m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•40m ago•1 comments

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•47m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•50m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•51m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
6•doener•53m ago•1 comments

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•57m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•1h ago•1 comments

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•1h ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•1h ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•1h ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•1h ago•0 comments

A Random Walk in 10 Dimensions (2021)

https://galileo-unbound.blog/2021/06/28/a-random-walk-in-10-dimensions/
134•just_human•5mo ago

Comments

smokel•5mo ago
> There is one chance in ten that the walker will take a positive or negative step along any given dimension at each time point.

This confused me a bit. To clarify: at each step, the random walker selects a dimension (with probability 1/10 for any given dimension), and then chooses a direction along that dimension (positive or negative, each with probability 1/2). There are 20 possible moves to choose from at any step.
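
A minimal sketch of that step rule in Python (the names here are illustrative, not from the article): pick one of the 10 dimensions uniformly at random, then move one unit in a uniformly chosen direction along it.

  import random

  def take_step(position, n_dimensions=10):
      # Each dimension is chosen with probability 1/10 and each direction with
      # probability 1/2, giving the 20 equally likely moves described above.
      d = random.randrange(n_dimensions)
      position[d] += random.choice([-1, +1])
      return position

  walker = [0] * 10
  for _ in range(1000):
      take_step(walker)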

mturmon•5mo ago
Thanks for this. It goes back to the node connectivity graphs he shows just above that statement.

He is thinking about a random choice among the 20 edges branching out from each vertex.

atoav•5mo ago
I thought multidimensional random walkers would make random choices on all dimensions, so:

  import random
  step = [random.choice([-1, 0, 1]) for _d in range(n_dimensions)]
  
At least this is how I did 2D random walks, as it allows for diagonal steps (with the downside that diagonal steps are longer than axis-aligned ones).
smokel•5mo ago
The common definition for random walks moves only by unit vectors. Unfortunately, the information on Wikipedia is somewhat limited. The book "Random Walk: A Modern Introduction" (2010) by Gregory Lawler describes things in the first chapter, and is available online for free [1].

[1] https://www.math.uchicago.edu/~lawler/srwbook.pdf

lordnacho•5mo ago
Tangentially related:

https://www.youtube.com/watch?v=iH2kATv49rc

Turns out there is a very interesting theorem by Polya about random walks that separates 1- and 2-dimensional random walks (which return to their starting point with probability 1) from higher-dimensional ones (which may never return). I thought I'd link this video because it's so well done.

just_human•5mo ago
Love this quote from Shizuo Kakutani to describe Polya's result: "A drunk man will find his way home, but a drunk bird may get lost forever."
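
Polya's result is easy to poke at numerically. A rough sketch (not from the video; the function names are mine) that estimates how often a simple lattice random walk revisits its starting point within a fixed horizon:

  import random

  def returned(dims, max_steps=10_000):
      # One unit step along a single randomly chosen axis: the classic lattice walk.
      pos = [0] * dims
      for _ in range(max_steps):
          d = random.randrange(dims)
          pos[d] += random.choice([-1, 1])
          if all(x == 0 for x in pos):
              return True
      return False

  trials = 200
  for dims in (2, 3):
      hits = sum(returned(dims) for _ in range(trials))
      print(f"{dims}D: returned in {hits} of {trials} walks")

As max_steps grows, the 2D estimate should creep toward 1 (the drunk man gets home), while the 3D estimate stays near Polya's return probability of roughly 0.34 (the drunk bird stays lost).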
antognini•5mo ago
The behavior of a random walk in a high dimensional space can be counter-intuitive. If you take the random walk trajectory and then perform principal components analysis on it, it turns out more than half of the variance is along a single direction. More than 80% is along the first two principal components.

To make matters even more surprising, if you project the random walk trajectory down into these PCA subspaces, it no longer looks random at all. Instead the trajectory traces out a Lissajous curve. (For example see figure 1 of this paper: https://proceedings.neurips.cc/paper/2018/file/7a576629fef88...)
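
A quick numerical check of that claim (an illustrative sketch, not code from the paper), using many more dimensions than steps as the paper assumes:

  import numpy as np

  rng = np.random.default_rng(0)
  n_steps, n_dims = 100, 10_000                  # number of steps << dimensionality
  steps = rng.standard_normal((n_steps, n_dims))
  traj = np.cumsum(steps, axis=0)                # the random-walk trajectory

  centered = traj - traj.mean(axis=0)
  s = np.linalg.svd(centered, compute_uv=False)  # singular values ~ PCA of the trajectory
  var = s**2 / np.sum(s**2)
  print("variance fraction on PC1:     ", var[0])
  print("variance fraction on PC1+PC2: ", var[0] + var[1])

The bulk of the variance should land on the first couple of components, and projecting the trajectory onto the leading principal directions reveals the Lissajous-like curves shown in the paper's figure 1.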

jadbox•5mo ago
Thank you for sharing. Learning about PCA subspaces and Lissajous curves wasn't originally on my agenda today.
vladimirralev•5mo ago
They say "these results are completely general for any probability distribution with zero mean and a finite covariance matrix with rank much larger than the number of steps". It's not clear to me whether that condition implies the number of steps must be much lower than the dimensionality of the random walk space, or whether the probability distribution needs to be concentrated into a smaller number of dimensions to begin with, in which case the result is much less shocking.
antognini•5mo ago
The condition is the former. The probability distribution spans the full dimensionality of the space. Basically, the result will hold for an infinite number of dimensions and a finite number of steps. But it will also hold if you take both the number of steps and the dimensionality to infinity while holding the ratio N_steps / D constant with N_steps / D << 1.
thirtygeo•5mo ago
I use PCA quite often for a variety of signal enhancement tasks in natural sciences. This paper presented something that I would not have expected and I found it really interesting.
MarkusQ•5mo ago
> On the other hand, a so-called mountain peak would be a 5 surrounded by 4’s or lower. The odds for having this happen in 10D are 0.2*(1-0.8^10) = 0.18. Then the total density of mountain peaks, in a 10D hyperlattice with 5 potential values, is only 18%.

I believe the odds are actually

  0.2 (odds of it being a 5) × 0.8^10 (odds of each of the neighbors being ∈ {1,2,3,4})

which is ~0.021, or around 2%. This makes much more sense, since 18% of the nodes being peaks doesn't sound like they are rare.

ted_dunning•5mo ago
Even if you assume they are asking: given that a node is a 5, what is the probability that it has no 5 among its neighbors? That comes out to about 10%, which is your result without the leading 0.2.

That's still not what they got.

jlokier•5mo ago
It's not 0.8^10 anyway, because there aren't 10 neighbours.

There are neighbours in both directions in each dimension, i.e. 20 neighbours in 10 dimensions if you're allowed to wrap around the edges of the lattice.

With wrapping, I think the probability is 0.2 × (0.8^20) ≈ 0.002306 ≈ 0.23%.

If you're not allowed to wrap at the edges, a uniformly random point has 2/N probability of having one neighbour on each dimension independently, and (N-2)/N probability of having two. With D dimensions, that's on average D(2N-2)/N neighbours.

With D = 10, N = 5, I think it's 16 neighbours on average, with a distribution from 10 to 20.

That makes the probability of landing on a mountain peak lower than ~2% and higher than ~0.23%. (Not 0.2 × (0.8^16) though, due to the distribution.)
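
The wrap-around figure is easy to sanity-check with a quick Monte Carlo sketch (same independence assumptions as the thread; the code is illustrative, not from the article): a node is a peak if it holds a 5 and all 20 neighbours (two per dimension, with wrapping) hold something lower.

  import random

  def is_peak(n_neighbours=20, n_values=5):
      # Node values are i.i.d. uniform on {1, ..., 5}; a peak is a 5 with no 5 among its neighbours.
      if random.randint(1, n_values) < n_values:
          return False
      return all(random.randint(1, n_values) < n_values for _ in range(n_neighbours))

  trials = 1_000_000
  rate = sum(is_peak() for _ in range(trials)) / trials
  print(rate)   # should land near 0.2 * 0.8**20 ≈ 0.0023

Dropping n_neighbours to 10 reproduces the ~2% figure from the parent comment.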

MarkusQ•5mo ago
You are correct about the first part (I actually came back here to add a comment to mine saying that I'd goofed and the exponent should be 20, not 10).

Also, I suspect that the width (your N) should be large enough in most of the cases where we'd be wondering about peaks vs. ridge lines that the large-N limit of (2N-2)/N could safely be taken (that is, 2). [The whole motivation for peaks vs. ridges is the question of whether there is a connected path that passes "near" all the points in the space, which is generally taken to mean within a ball of radius r, with 1 < r << N; since the grid is discrete, this implies N is >> 1.]

ngriffiths•5mo ago
> Therefore, despite the insanely large number of adjustable parameters, general solutions, that are meaningful and predictive, can be found by adding random walks around the objective landscape as a partial strategy in combination with gradient descent.

Are there methods that specifically apply this idea?

I guess this is a good explanation for why deep learning isn't just automatically impossible: if local minima were everywhere, it would be. But on the other hand, usually the goal isn't to add more and more parameters; it's to add just enough that common features can be identified, but not enough to "memorize the dataset," and to design an architecture that is flexible yet still quite restricted, so it can't represent arbitrary functions. And of course in many cases (especially when there's less data) it makes sense to manually design transformations from the high-dimensional space to a lower-dimensional one that contains less noise and can be modeled more easily.

The article feels connected to the manifold hypothesis, where the function we're modeling has some projection into a low dimensional space, making it possible to model. I could imagine a similar thing where if a potential function has lots of ridges, you can "glue it together" so all the level sets line up, and that corresponds with some lower dimensional optimization problem that's easier to solve. Really interesting and I found it super clearly written.

tech_ken•5mo ago
> Are there methods that specifically apply this idea?

Stochastic gradient descent is basically this (not exactly the same, but the core intuitions align IMO). Not exactly optimization, but Hamiltonian MCMC also seems highly related.

> I could imagine a similar thing where if a potential function has lots of ridges, you can "glue it together" so all the level sets line up, and that corresponds with some lower dimensional optimization problem that's easier to solve.

Excellent intuition, this is exactly the idea of HMC (as far as I recall); the concrete math behind this is (IIRC) a "fiber bundle".

evanb•5mo ago
HMC was essentially designed to mix random walks (the momentum refresh step) with gradient descent (that is, the state likes to 'roll down the potential', i.e. minimize the action (loss)).
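
For concreteness, here is a minimal sketch of one HMC update (illustrative code with a toy Gaussian target, not taken from any of the linked material): the momentum refresh supplies the random-walk ingredient, and the leapfrog integration supplies the gradient-driven "roll down the potential".

  import numpy as np

  def hmc_step(q, potential, grad_potential, step_size=0.1, n_leapfrog=20, rng=np.random):
      p = rng.standard_normal(q.shape)          # momentum refresh: the random-walk part
      q_new, p_new = q.copy(), p.copy()

      # Leapfrog integration: follow the gradient of the potential (the loss landscape).
      p_new -= 0.5 * step_size * grad_potential(q_new)
      for _ in range(n_leapfrog):
          q_new += step_size * p_new
          p_new -= step_size * grad_potential(q_new)
      p_new += 0.5 * step_size * grad_potential(q_new)   # net half-step at the endpoint

      # Metropolis correction keeps the sampler exact despite integration error.
      h_old = potential(q) + 0.5 * np.dot(p, p)
      h_new = potential(q_new) + 0.5 * np.dot(p_new, p_new)
      return q_new if np.log(rng.uniform()) < h_old - h_new else q

  # Toy usage: sample a standard Gaussian, where U(q) = 0.5 * q·q and grad U(q) = q.
  q = np.zeros(10)
  for _ in range(1000):
      q = hmc_step(q, lambda x: 0.5 * np.dot(x, x), lambda x: x)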