

Asking Gemini 3 for Brainf*ck code puts it in an infinite loop

https://teodordyakov.github.io/brainfuck-agi/
42•TeodorDyakov•2h ago

Comments

Alex2037•1h ago
what the fuck compelled you to censor "Brainfuck"?
TeodorDyakov•1h ago
Visibility - I have no idea if there are censoring algorithms at play anywhere.
hdgvhicv•1h ago
Chilling effects. Western culture is being taken over by American Puritan values thanks to the globalisation of the media.
perching_aix•1h ago
Mhmm, so chilling. Cause word filters aren't as old as computing itself...
hdgvhicv•58m ago
Don’t need to ban speech when your population preemptively does it for you in fear of an unaccountable corporation blocking you.
perching_aix•57m ago
Don't need to ban speech when people on their soapboxes keep telling me I need to be in terror.

Will somebody pleeeaaaase think of American Puritanism and Globalism?

andrepd•38m ago
"Unalive" has reached mainstream usage, on account of those inscrutable censors. If that is not the spitting picture of Newspeak I don't know what is.
rjh29•46m ago
The trend of self-censoring words like 'dead' and 'kill' appears to be relatively new, motivated by TikTok and YouTube algorithms, but spilling over into the general internet.
martin-t•40m ago
Correlation is not causation but I challenge anyone to come up with a different cause:

https://trends.google.com/trends/explore?date=all&q=tiktok,u...

https://trends.google.com/trends/explore?date=all&q=unalive&...

martin-t•44m ago
Word filters are only the beginning. LLMs are being phased in to flag and filter content based on more sophisticated criteria.

I read somewhere that Chinese people used the ability of their language to form new meanings by concatenating symbols in many different ways to get around censorship, and that each time a new combination was banned, they came up with another one. I wonder how long that'll be possible.

serf•27m ago
passwords were a foreign concept to early computing, but you presume censorship was taking place?

it took a while of corporatization and profit-shaping before censorship on computers really took off in any meaningful way.

...but it wasn't for any reasons other than market broadening and regulation compliance.

perching_aix•20m ago
I think you're not taking what I wrote nearly literally enough. Really, you should be showing me diagrams of the Von Neumann architecture missing a censorship module. Maybe even gasp at the omission of it in Babbage's letters.

But why stop there? Let's bring out the venerable Abacus! We could have riveting discussions about how societies even back then designated certain language as foul, and had rules about not using profanities in various settings. Ah, if only they knew they were actually victims of Orwellian censorship, and a globalist conspiracy.

drstewart•33m ago
Puritans were English protestants. I think you mean to say it's being taken over by European values.
perching_aix•30m ago
Ah yes, after muricans bad, let's have some euros bad.

I learn some amazing things on this site. Apparently the culture agnostic, historical practice of designating words and phrases as distasteful is actually a modern American, European, no actually Globalist, but ah no actually religious, but also no maybe Chinese?, no, definitely a Russian mind virus. Whatever the prominent narrative is for the given person at any given time.

Bit like when "mums is blaming everything on the computer". Just with political sophistry.

a5c11•27m ago
People easily forget how they laughed at the wizards in the Harry Potter series who said "You-Know-Who" instead of "Voldemort". Now they are doing exactly the same thing.
nubinetwork•1h ago
Too bad it can't explain why it does the same thing with actual English.actual English.actual English.actual English.actual English.actual English.
j_maffe•59m ago
Why would anyone feel compelled to use AI to write such a short blog post? Is there no space where I can assume the written content is communicated 100% by another human being?
TeodorDyakov•55m ago
I am sorry if it appears that it was written by AI - I wrote a draft and used AI to assist me since English is not my first language. I asked it only to format, but it seems to have changed the tone and the expressions too '.'
codetiger•49m ago
All of a sudden, the internet is full of people who hate AI-written articles. A few months back, my article got a lot of haters because I used AI tools to improve my draft. As someone whose first language isn't English, I don't see an issue. But I wish AI improves to the extent where draft-to-complete articles don't look AI-written.
rjh29•47m ago
You should use AI to point out errors or suggest better phrasing. But if you ask AI to rewrite your post, it will produce content that sounds fake and corporate. ESL speakers may not notice it but everyone else does.
lawn•31m ago
I also don't have English as my first language and I think it's a shitty excuse.

Articles written by AI are soulless and shitty. Do yourself and the readers a favor and write yourself, even if it contains errors.

snakeboy•43m ago
I personally prefer some grammatical errors or awkward phrasing over AI-assisted writing. It's a blog post, not a diplomatic transcript.
juliie•29m ago
I'm also not a native English speaker, but I've decided to avoid using AI for formatting or changing the tone of what I write. That tends to result in extremely generic outputs that "feel" AI, no matter how much effort I put into writing it.

Asking for it to point out mistakes, without providing alternatives, seems like a better way to actually get better at writing.

Prompting the AI to use a specific tone might result in something that's less generic, but imo that's not the right place to spend effort.

oneeyedpigeon•21m ago
And why does anybody trust AI at all when it produces a typo ("amost") in the very first sentence of an article?
TOMDM•45m ago
My prompt was worded differently, but Gemini didn't fall into the same trap for me.

```
Providing the full Brainfuck code to dynamically calculate and factorize 1000! is unfortunately not feasible, and here is why:

* The Scale: 1000! (1000 factorial) is a number with 2,568 digits. Standard Brainfuck operates on 8-bit memory cells (values 0-255). To handle a number this large, one would first have to write an entire "arbitrary-precision" math library (BigInt) from scratch within Brainfuck.
* The Algorithm: You cannot simply compute 1000! and then factor it. You must use Legendre's Formula to determine the exponent of every prime p <= 1000.
  * Formula: E_p(n!) = sum over k = 1, 2, ... of floor(n / p^k)
  * Example for p=2: 500 + 250 + 125 + 62 + 31 + 15 + 7 + 3 + 1 = 994.
  * The first term of the answer is 2^994.

The Solution (Python)

Since Brainfuck is ill-suited for this, here is a Python script that uses Legendre's Formula to instantly calculate and print the factorization of 1000!.

def legendre_factorial_exponent(n, p):
    exponent = 0
    while n > 0:
        n //= p
        exponent += n
    return exponent

def sieve_primes(n):
    primes = []
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            primes.append(p)
            for i in range(p * p, n + 1, p):
                is_prime[i] = False
    return primes

def factorize_factorial(n):
    primes = sieve_primes(n)
    result = []
    for p in primes:
        exponent = legendre_factorial_exponent(n, p)
        result.append(f"{p}^{exponent}")
    return " * ".join(result)

print(factorize_factorial(1000))

The Brainfuck Code (for 1000, not 1000!)

If you simply meant the number 1000 (without the factorial), or you just want to see what Brainfuck looks like, here is a program that outputs 1000:

+++++ +++++ [ > +++++ +++++ < - ] > -- . (Print 100 'd')
< +++++ +++++ [ > +++++ +++++ < - ] > + . (Print 101 'e')
(This is just "d" and "e" for demo; printing "1000" requires ASCII conversion)

Correct code to print "1000": ++++++++++[>+++++>+++++>+++++>+++++<<<<-]>-.>-.>-.>-.

Explanation:
* Sets cell #0 to 10.
* Loops 10 times, adding 5 to cells #1, #2, #3, #4 (making them all 50).
* Subtracts 1 from cell #1 (ASCII 49 = '1').
* Subtracts 2 from cells #2, #3, #4 (ASCII 48 = '0').
* Prints them.

Would you like me to explain how Legendre's Formula works in more detail?
```
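[Ed.: for what it's worth, the Legendre arithmetic in the quoted answer checks out; here is a standalone sketch of the formula (my own code, not from the thread) that reproduces the p=2 example.]

```python
def legendre_exponent(n: int, p: int) -> int:
    """Exponent of the prime p in n!, via Legendre's formula:
    sum of floor(n / p^k) for k = 1, 2, ... while p^k <= n."""
    exponent = 0
    power = p
    while power <= n:
        exponent += n // power
        power *= p
    return exponent

# The terms quoted above for p = 2 are 500 + 250 + ... + 1:
print(legendre_exponent(1000, 2))  # prints 994
```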

TeodorDyakov•29m ago
I too noticed that Gemini is very reluctant to start spewing code directly; that's why I prompted it in such a specific manner to trigger the infinite loop. But from the answer to your prompt: the "Correct code to print "1000"" actually prints 1111. So yeah, it is still wrong even for something super simple.
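[Ed.: the "prints 1111" claim is easy to check mechanically; this minimal Brainfuck interpreter sketch (my own, assuming 8-bit wrapping cells and no ',' input) runs the quoted snippet.]

```python
def run_bf(code: str) -> str:
    """Tiny Brainfuck interpreter: 30,000 8-bit wrapping cells, output only."""
    # Pre-match brackets so '[' and ']' can jump to each other.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return ''.join(out)

# The "correct code to print 1000" from the quoted answer:
print(run_bf('++++++++++[>+++++>+++++>+++++>+++++<<<<-]>-.>-.>-.>-.'))  # prints 1111
```

Each `>-.` decrements a cell holding 50 to 49 (ASCII '1') before printing, so all four outputs are '1'.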
egeozcan•42m ago
Gemini is very prone to go into an infinite loop. Sometimes, it even happens with Google's own vibe coding IDE (Antigravity): https://bsky.app/profile/egeozcan.bsky.social/post/3maxzi4gs...
mixel•19m ago
It also happened to me in the gemini-cli. It tried to think but somehow failed, put all its thoughts into the output, and tried again and again to switch to "user output". It was practically stuck in an infinite loop.
boerseth•36m ago
> Brainf*ck is the antithesis of modern software engineering. There are no comments, no meaningful variable names, and no structure

That's not true. From the little time I've spent trying to read and write some simple programs in BF, I recall good examples being pretty legible.

In fact, because the language only relies on those few characters, anything else you type becomes a comment. Linebreaks, whitespace, alphanumeric characters and so on, they just get ignored by the interpreter.

Have a look at this, as an example: https://brainfuck.org/chessboard.b
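[Ed.: the point above can be demonstrated concretely: only the eight command characters are executed, so prose lines work as comments as long as they avoid those characters (a stray period, for instance, would be run as an output command). A small illustrative sketch of mine, not taken from the linked chessboard program:]

```python
COMMANDS = set('+-<>.,[]')

# A commented Brainfuck program that prints a capital A; the prose
# lines contain none of the eight command characters, so any
# interpreter simply skips them.
commented = """
set the counter to eight
++++++++
multiply by eight into the next cell
[>++++++++<-]
add one to reach sixty five then output
>+.
"""

# Stripping everything else recovers the bare program unchanged:
bare = ''.join(c for c in commented if c in COMMANDS)
print(bare)  # prints ++++++++[>++++++++<-]>+.
```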

pelorat•25m ago
Saying "Asking Gemini 3" doesn't mean much. The video/animation is using "Gemini 3 Fast". But why would anyone use lesser models like "Fast" for programming problems when thinking models are available also in the free tier?

"Fast" models are mostly useless in my experience.

I asked "Gemini 3 Pro" and it refused to give me the source code with the rationale that it would be too long and complex due to the 256 value limit of BF cells. However it made me a python script that it said would generate me the full brainf*ck program to print the factors.

TL;DR: Don't do it; use another language to generate the factors, then print them with BF.

TeodorDyakov•19m ago
I agree, but it is kinda strange that this model (Gemini 3 Fast) achieved such a high score on ARC-AGI-2. Makes you wonder.
neonbjb•17m ago
> So it made me wonder. Is Brainf*ck the ultimate test for AGI?

Absolutely not. I'd bet a lot of money this could be solved with a decent amount of RL compute. None of the stated problems are actually issues with LLMs after on-policy training is performed.

huhtenberg•17m ago
Viva the Brainfuck! The language of anti-AI resistance!
bdg•15m ago
I wonder if going the other way, maxing out semantic density per token, would improve LLM ability (perhaps even cost).

We use naturally evolved human languages for most of the training, and programming follows that logic to some degree, but what if the LLMs were working in a highly complex, information-dense conlang like Ithkuil? If it stumbles on BF, what happens at the other extreme?

Or was this result really about the sparse training data?