frontpage.

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•2m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•7m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•15m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•22m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
1•neogoose•25m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
1•mav5431•26m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
2•sizzle•26m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•27m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•28m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•28m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
1•dangtony98•33m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•41m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•43m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•46m ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
3•pabs3•48m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•48m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•50m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•50m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•55m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•1h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
3•ambitious_potat•1h ago•4 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•1h ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1h ago•0 comments

Run Erlang/Elixir on Microcontrollers and Embedded Linux

https://www.grisp.org/software
206•weatherlight•5mo ago

Comments

Zaphoos•5mo ago
What about Gleam?
nesarkvechnep•5mo ago
Call us when they implement OTP compatibility.
worthless-trash•5mo ago
What part of OTP do you need? I have supervision working.

I have typed message passing. I write Erlang wrapping Gleam modules; it's pretty easy.
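
For a concrete picture of that pattern, here is a minimal, hypothetical sketch: a hand-written Erlang gen_server that owns the OTP side and delegates the pure logic to a Gleam module (assume a Gleam module counter that compiles to an Erlang module counter exposing new/0 and increment/1; all names here are illustrative, not from the comment above).

  %% counter_server.erl -- Erlang wrapper providing the OTP behaviour
  -module(counter_server).
  -behaviour(gen_server).
  -export([start_link/0, increment/0]).
  -export([init/1, handle_call/3, handle_cast/2]).

  start_link() ->
      gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

  increment() ->
      gen_server:call(?MODULE, increment).

  init([]) ->
      {ok, counter:new()}.                  %% Gleam code, compiled to Erlang

  handle_call(increment, _From, State) ->
      NewState = counter:increment(State),  %% pure Gleam logic
      {reply, NewState, NewState}.

  handle_cast(_Msg, State) ->
      {noreply, State}.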

trescenzi•5mo ago
What part is missing? I've built a little distributed app that has a cluster registry and DNS. There's a tiny bit of Erlang involved, but the majority of it is Gleam.

https://github.com/trescenzi/points

jen20•5mo ago
Several pieces: https://gleam.run/roadmap/

Much of it can be worked around as you suggest.

dsincl12•5mo ago
https://github.com/gleam-lang/otp ?
juped•5mo ago
I'm interested in the claimed real-time capabilities, but it's hard to find anything about them written there. Still, I like the hardware integration.
garbthetill•5mo ago
Yeah, the claim is ambiguous because the BEAM itself only guarantees soft real-time. Leaving it open-ended might make people think hard real-time, especially since it's hardware.
elcritch•5mo ago
They support writing RTOS tasks in C as I understand it.
Joel_Mckay•5mo ago
Real-time is some of the most misused jargon in modern history.

In general, most JITs or VMs can't even claim guaranteed latency. People who mix these concepts betray their ignorance while seeming intelligent.

FreeRTOS is small and feasible.

VxWorks if your budget is unconstrained.

LinuxRT kernel (used in LinuxCNC) with external context clocking, and/or an FPGA memory DMA overlap module (Zynq SoC, etc.)

Real-time is a specialized underpaid area, and most people have too abstract of an understanding of hardware to tackle metastability problems. =3

hoppp•5mo ago
Pretty cool. I am a fan of everything Erlang. Managing large clusters of IoT devices running the BEAM sounds like a good idea, not just because of fault tolerance but for hot-swapping code.
worthless-trash•5mo ago
Is this something you do regularly?
worthless-trash•5mo ago
Not sure why I'm downvoted; I'd like to know some good practices when using OTP for IoT. It's not an area I know about.
garbthetill•5mo ago
I am the same, but for Elixir. The BEAM is awesome, and I always wonder why it still hasn't caught on given all the success stories. The actor model just makes programming feel so simple.
AnEro•5mo ago
Same. My personal theory is that the areas where it excels and overachieves are ones that already have really fleshed-out, oversaturated developer ecosystems (and experienced developer pools), with organizations having a lot of legacy software built on them. I think it will gain momentum as we see more need for distributed LLM agents and tooling pick up (or when people need extreme cost savings on front-facing APIs/endpoints that run simple operations).
zwnow•5mo ago
For me it's the complete opposite of simple. I am a fan of BEAM and OTP, but I'm a horrible programmer. I have constant fear of having picked the wrong restart strategy in a supervisor, or about ghost processes, or whatever. I have no mentors and learn everything myself. I have no way of actually checking whether my implementations are good. With my skills I'd manage to make an Elixir system brittle because it's not clear to me what happens at all times.
toast0•5mo ago
WhatsApp did what it did and we didn't hire anyone who had experience with OTP until 2013 I think. One person who was very experienced in Erlang showed up for a week and bounced.

We were doing all sorts of things wrong and not idiomatically, but things turned out ok for the most part.

The fun thing with restart strategies is that if your process fails quickly, you get into restart escalation, where your supervisor restarts because you restarted too many times, and so on, and then the BEAM shuts down. But that happens once or twice and you figure out how to avoid it (I usually put a 1 second sleep at startup in my crashy processes, lol).
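
For reference, the knobs behind that escalation are the supervisor's restart intensity and period; a minimal Erlang sketch, with arbitrary numbers and a made-up worker name:

  -module(crashy_sup).
  -behaviour(supervisor).
  -export([start_link/0, init/1]).

  start_link() ->
      supervisor:start_link({local, ?MODULE}, ?MODULE, []).

  %% If more than 3 restarts happen within 5 seconds, the supervisor itself
  %% gives up and exits, and the escalation continues up the tree.
  init([]) ->
      SupFlags = #{strategy  => one_for_one,
                   intensity => 3,   %% max restarts ...
                   period    => 5},  %% ... within this many seconds
      Child = #{id      => crashy_worker,
                start   => {crashy_worker, start_link, []},
                restart => permanent},
      {ok, {SupFlags, [Child]}}.

  %% The "1 second sleep at startup" trick goes in the worker's own init/1,
  %% e.g.: init(_) -> timer:sleep(1000), {ok, #{}}.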

Ghost processes are easy-ish to find. erlang:processes() lists all the pids, and then you can use erlang:process_info() to get information about them... We would dump stats on processes to a log once a minute or so, with some filtering to avoid massive log spew. Those kinds of things can be built up over time... the nice thing is the debug shell can see everything, but you do need to learn the things to look for.
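
A rough sketch of that kind of periodic stats dump, built around the two calls mentioned above (the module name, the ~1 MB threshold, and the log format are all arbitrary):

  -module(proc_stats).
  -export([dump_big_processes/0]).

  %% Log any process using more than ~1 MB of memory, then re-arm a
  %% once-a-minute timer, assuming some owning process handles 'dump'.
  dump_big_processes() ->
      lists:foreach(fun log_proc/1,
                    [Pid || Pid <- erlang:processes(), big_enough(Pid)]),
      erlang:send_after(60000, self(), dump),
      ok.

  big_enough(Pid) ->
      case erlang:process_info(Pid, memory) of
          {memory, Bytes} -> Bytes > 1024 * 1024;
          undefined       -> false   %% process already exited
      end.

  log_proc(Pid) ->
      Info = erlang:process_info(Pid, [registered_name, memory,
                                       message_queue_len, current_function]),
      error_logger:info_msg("proc ~p: ~p~n", [Pid, Info]).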

pton_xd•5mo ago
> With my skills I'd manage to make an Elixir system brittle because it's not clear to me what happens at all times.

What's so cool about BEAM is you can connect a REPL and debug the program as it's running. It's probably the best possible system for discovering what's happening as things are happening.
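
For instance, assuming the node was started with a name and a cookie (both made up here), you can attach a second shell to the live VM and poke around; the Elixir equivalent uses iex --remsh.

  %% from another terminal:
  %%   erl -sname debug -setcookie secret -remsh app@myhost
  %% then, inside the attached shell, the usual introspection calls work:
  erlang:memory().                          %% VM-wide memory breakdown
  erlang:process_info(whereis(my_server)).  %% inspect a (made-up) registered process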

zwnow•5mo ago
Yeah, IEx is pretty cool; that's how I test while programming, as I do not write tests for everything.
rkangel•5mo ago
> MCU-class footprint (fits in 16 MB RAM)

That is absolutely not an MCU-class footprint. Anything with an "M" when talking about memory isn't really an MCU. For evidence, I cite the ST page on all their micros: https://www.st.com/en/microcontrollers-microprocessors/stm32...

Only the very, very high-performance ones have >1 MB of RAM.

jdndnc•5mo ago
RAM on MCUs is getting cheaper by the minute.

A couple of years ago it was measured in bytes. Before the RP2040 it was measured in dozens of KiB; now it's measured in MiB.

While I agree that 16 MiB is on the larger side for now, it will only be a couple of years before mainstream MCUs have that amount on board.

FirmwareBurner•5mo ago
>RAM on MCUs is getting cheaper by the minute.

It really isn't. The RP2040 has 256KB RAM. Far away from 16MB.

>now it's measured in MiB

Where? Very few so far, mostly for image processing applications, and they cap out at less than 8MB. And those are already bordering on SoCs instead of MCUs.

For applications where 8MB or more is needed, designers already use SoCs with external RAM chips.

>it will only be a couple of years before mainstream MCUs have that amount on board

I doubt it very much. Clip it and let's see in 2 years who's right.

jbarberu•5mo ago
Also curious what MCUs you're working with to give you this impression?

RP2040 is 264k, RP2350 is 520k.

I use NXP's rt1060 and rt1170 for work, and they have 1M and 2M respectively, still quite far from 16M, and those are quite beefy, running at 500 MHz to 1 GHz.

tonyarkles•5mo ago
While I generally agree with you, the RT106x line does support external SDRAM as well. I've got an MIMXRT1060-EVKB sitting here on my desk that has 32MB of SDRAM alongside the on-die 1MB of SRAM.
theamk•5mo ago
Those specs are $50 for a compute module - a very non-trivial cost.
15155•5mo ago
> NXP's rt1060 and rt1170 for work

These both have FlexSPI controllers capable of interfacing with $3-5 in PSRAM at 8M or 16M.

pessimizer•5mo ago
Bigger processors with more RAM have always been available. The question has always been whether you're going to use a $20 processor when you could do the job with a 50¢ one. It's the difference between your product being cheap and disposable, and you getting to choose your margin based on your strategy; and not being able to move a unit without losing money, hoping to sell yourself to someone who knows how to do more with less.

I'm an Erlang fanatic, and have been since forever, paid for classes when it was Erlang Training & Consulting at the center of things, flew cross-country to take them, have the t-shirt, hosted Erlang meetups myself in downtown Chicago. I'm not prototyping a microcontroller application in Erlang if I can get it done any other way. It's committing to losing from the outset.

edit: I've always been hopeful for some bare-metal implementation that would at least almost work for cheap µcs, and there have been promising attempts in the past, but they seem to have gone nowhere.

toast0•5mo ago
AtomVM runs on ESP32, right? It's not an ultra-cheap microcontroller, but it's pretty cheap. AtomVM isn't BEAM either, though. I have no experience with AtomVM... it didn't seem like a good fit when I was building something with an ESP32 (I didn't see anything about outputting to LCDs, and that was reasonable with Arduino libraries... I also saw a library for calendars and thought that would work for my needs, and then I got dragged into making it work better), and it would have worked for the stuff I was doing with ESP8266, but I didn't know about it when I was shopping for boards, so I didn't want to pay extra.
cmrdporcupine•5mo ago
Eh. It's getting blurry and has been for some time. To me these days the differentiators are: does it have an MMU? Address lines for external memory? Do you write for an OS or for "bare metal" / RTOS kit? Are there dedicated pins for GPIO?

If you choose some arbitrary memory amount as the criterion it will be out of date by next year.

TrueDuality•5mo ago
You don't necessarily need on-package RAM for this. I'm not sure I'd build a project around this, but 16MiB of RAM would hardly be a BOM killer.
PinguTS•5mo ago
Actually, it is. If you want to build a cheap sensor or actuator, then any additional component gets expensive. Remember, it is not only the external component; it is also the PCB space, the production, and the testing after production. This all adds up in cost.

When you use a µC to make it cheap, then you don't want to use additional components.

marci•5mo ago
For squeezing Erlang into KiB-sized RAM, the AtomVM project is probably a better fit.

https://github.com/atomvm/AtomVM

PinguTS•5mo ago
I see their board uses a daughterboard from Phytec, a German company too. This is based on a very high-performance NXP MCU, the i.MX 6UL, with additional external DDR RAM.
magicalhippo•5mo ago
NXP calls[1] it an application processor, and it is based on a Cortex-A7, not a Cortex-M series microcontroller processor.

That said these nomenclatures are a bit fuzzy these days.

[1]: https://www.nxp.com/products/i.MX6UL

LeifCarrotson•5mo ago
It's a $212 SBC. They've got more L2 cache than most microcontrollers have Flash memory. The fact that it's got an L2 cache at all, much less external LPDDR3 DRAM, is a bit ridiculous. In most parameters - cost, RAM, frequency, storage, power consumption - it's approximately 2 orders of magnitude beyond the specifications of a normal microcontroller.
derefr•5mo ago
Espressif calls the ESP32 an MCU, and at least 1/3 of ESP32 models have >1MiB of onboard PS ("pseudo-static") RAM (i.e. DRAM with its own refresh circuit.) At least 20 of the ESP32 models do have 16MiB.

(And I would argue that the ESP32 is an MCU even in this configuration — mostly because it satisfies ultra-low-power-on-idle requirements that most people expect for "pick up, use, put down, holds a charge until you pick it up again" devices.)

So, sure, if you mean the kind of $0.07 MCU IC you'd stuff in a keyboard or mouse, that's definitely not going to be running Nerves (or any other kind of dynamic runtime. You need to go full bare-metal on these.)

But if you mean the kind of $2–$8 MCU IC you'd stuff in a webcam, or a managed switch, or a battery-powered soldering iron, or a stick vacuum cleaner with auto suction-level detection, or a kitchen range/microwave/etc with soft-touch controls and an LCD — use-cases where there's more-than-enough profit margin to go around — then yeah, those'll run Nerves just fine.

ACCount37•5mo ago
Even ESP32, the quintessential "punches above its weight" MCU, only packs 520KB of RAM by default. At the time of its release, that was a shitton of RAM for an MCU to have!

If you ship MCUs with 16MB of RAM routinely, you're either working with graphics or are actually insane.

defen•5mo ago
The MCU I'm currently working with has 12KB of RAM and it feels luxurious.
ACCount37•5mo ago
Ah, the cultural shock of going from 8 bit cores with 512 bytes to an actual modern chip.
dlcarrier•5mo ago
Espressif's ESP32 line uses an MCU IC, either sold by itself (https://www.espressif.com/sites/default/files/documentation/...) or with flash memory and RAM ICs, all packaged together in a system-in-package footprint. (e.g. https://www.espressif.com/sites/default/files/documentation/...) There are various options for which flash and RAM ICs are packaged together, but the ESP32 die itself is very much an MCU and only has 520 KB of SRAM.

A managed switch is very compute-heavy, and does usually run a microprocessor with a full RTOS, if not Linux itself, which probably costs in the multi-dollar range. It's also not something most people have at home, outside of the switch built into their router. Everything else you mentioned usually runs on microcontrollers with under 1 MiB of RAM. For example, Infineon's CYUSB306X series ASICs for webcams come in two RAM sizes: 256 KiB or 512 KiB, despite handling gigabits per second of data, and having an MCU at all isn't even necessary. (https://www.latticesemi.com/usb3#_CDED461DE15245C78A2973D4A4...) The Pinecil's Bouffalo BL706 MCU has 123 KiB of RAM, despite being a low-volume product where design time matters more than component cost. (https://wiki.pine64.org/wiki/Pinecil, https://en.bouffalolab.com/product/?type=detail&id=8) Microwave ovens are so high volume that they often don't even use packaged microcontrollers, mounting the die directly on the PCB with an epoxy blob protecting it, and there's no way any would splurge on megabytes of SRAM. The most advanced microwave oven I've seen was from the 90's and definitely didn't splurge on a microprocessor. (https://www.youtube.com/watch?v=UiS27feX8o0)

A slow MCU with external memory could run anything, like booting into Linux on an AVR (https://www.avrfreaks.net/s/topic/a5C3l000000BrFREA0/t392375), but it's going to be extremely slow and not practical for any commercial product, which, if produced in any volume, will have as little RAM as possible.

the__alchemist•5mo ago
ESP32s are on the high end of flash and RAM counts. You are pointing out that there is variance. The kinds of $2-8 MCUs I've used generally have 512k-2Mb[it] onboard flash and <512k SRAM (STM32G4, H7, ESP32C3, etc.).

The sub-$1 ones you refer to will have <100k of these (STM32C0, G0, etc.).

ST is actually moving away from the 2Mb MCUs, and instead offering ~1Mb with Octospi. I believe the intent is to use offboard flash if you want more. (e.g. the newer H7 variants)

imtringued•5mo ago
MCU generally refers to self-contained microcontrollers without any external memory. Once you add external memory, there are basically no limits anymore, but if you are going that route, a LicheeRV Nano would get you a full Linux system with 256MiB for $16, so why bother targeting "MCU-class" chips?
whalesalad•5mo ago
Sounds like Nerves to me? But with soft real-time added in?
toast0•5mo ago
My TL;DR: GRiSP is BEAM on an RTOS; Nerves is BEAM on a minimal Linux; but they also have grisp allow and grisp forge, which are BEAM on Linux. Any of these gives you soft real-time.
thenewwazoo•5mo ago
Nerves is Erlang-as-init on Linux. GRISP is Erlang with RTEMS on metal.
barbinbrad•5mo ago
Huge fan of Elixir, and I definitely have some dumb questions.

In some of the real-time architectures I've seen, certain processes get priority, or run at a certain Hz, but I've never seen this with the BEAM. AFAIK, it "just works", which is great most of the time. I guess you can do Process.flag(:priority, :high), but I'm not sure if that's good enough?

toast0•5mo ago
The BEAM only promises soft real-time. When switching processes, runnable high-priority tasks will be chosen before runnable normal- or low-priority tasks, and within each queue all (runnable) tasks run before a task runs again. But the BEAM isn't really preemptive; a normal- or low-priority task that is running when a high-priority task becomes runnable won't be paused; the normal task will continue until it hits its reduction cap or blocks. There's also a chance that you hit some operation that is time-consuming and doesn't have yield points; most of ERTS has yield points in time-consuming operations, but maybe you find one, or maybe you have a misbehaving NIF.

Without real preemption, consistently meeting strict timing requirements probably isn't going to happen. You might possibly run multiple BEAMs and use OS preemption?
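
As a small illustration of those priority levels, a toy Erlang sketch (the work in both loops is a placeholder; priority only affects which runnable process gets picked next, it does not preempt one that is already running):

  -module(prio_demo).
  -export([start/0]).

  start() ->
      erlang:spawn_opt(fun busy_loop/0, [{priority, low}]),
      erlang:spawn_opt(fun ticker/0,    [{priority, high}]).

  busy_loop() ->
      _ = lists:seq(1, 100000),        %% throwaway work; yields via reductions
      busy_loop().

  ticker() ->
      receive after 100 -> ok end,     %% wake up every 100 ms
      io:format("tick~n"),
      ticker().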

heeton•5mo ago
I spoke with Peer (the creator of GRiSP) about this at ElixirConf earlier in the year, and I'm not an expert here, so I hope I don't misrepresent his comments:

GRiSP puts enough controls on the runtime that soft real-time becomes hard real-time for all intents and purposes, outside of problems that would also cause errors in hard real-time systems.

(Also, thanks Peer for being tremendously patient with a new embedded developer! That kind of friendly open chat is a huge draw to the Elixir community)

cyberpunk•5mo ago
I did the same workshop with him some years ago; very nice and patient guy. I can recommend attending if anyone is curious about how microelectronics actually work :}
thelastinuit•5mo ago
Would it be possible to get my 90s computers to run Erlang/Elixir for a crypto node... or some version of it?
asa400•5mo ago
Yes - Erlang/Elixir wouldn't be the bound here. 90s hardware is plenty for them. They were designed for far less.
dlcarrier•5mo ago
The page says their implementation requires 16 MB of RAM, so a late 90's computer could run it, but even mid-90's computers, like early Pentium models, often came with less RAM than that. If it shipped with Windows 98 or later, it should have 16 MB.
burnt-resistor•5mo ago
To chime in, this definitely won't run on a μC. No effing way.

This is some serious marketing slop posting.