
Ask HN: Why there are no actual studies that show AI is more productive?

1•make_it_sure•2m ago•0 comments

Show HN: Run any VLM on real-time video

https://overshoot.ai/
1•zakariaelhjouji•5m ago•0 comments

What Automattic's AI Enablement Training Means for WordPress

https://automattic.com/2026/02/25/ai-enablement-wordpress/
1•taubek•5m ago•0 comments

Unredact

https://github.com/Alex-Gilbert/unredact
1•kruuuder•7m ago•1 comments

Ask HN: Meta ad / business account and pages gone

1•holistio•9m ago•0 comments

Show HN: Flora – Compile-time Dependency Injection for Go without reflection

https://github.com/soner3/flora
1•soner3•11m ago•1 comments

Fantasque Player

https://raphaelbastide.com/fantasque-player/
1•tarball•13m ago•0 comments

Salt Typhoon hacked 80 countries – AT&T can't confirm hackers are out

2•Abhscanink•13m ago•0 comments

Attackers prompted Gemini over 100k times while trying to clone it, Google s

https://arstechnica.com/ai/2026/02/attackers-prompted-gemini-over-100000-times-while-trying-to-cl...
3•joozio•15m ago•0 comments

Superpowers for Claude Code: Complete Guide 2026

https://www.pasqualepillitteri.it/en/news/215/superpowers-claude-code-complete-guide
2•doener•17m ago•0 comments

Invoker Commands API

https://developer.mozilla.org/en-US/docs/Web/API/Invoker_Commands_API
2•maqnius•17m ago•1 comments

Show HN: Codebrief – Make sense of AI-generated code changes

https://github.com/that-one-arab/codebrief
1•mo-dulaimi•18m ago•0 comments

Show HN: MindPlexa – Open-source AI-powered infinite canvas: Next.js, React Flow

https://github.com/jayasth/MindPlexa
1•jaysth•20m ago•0 comments

SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via CI

https://arxiv.org/abs/2603.03823
11•mpweiher•22m ago•1 comments

SCRY 17-source research engine for Claude Code(no API keys, pure stdlib)

https://github.com/Kastarter/scry
1•Kastarted•25m ago•0 comments

Show HN: Cursor skill for Claude Code's /loop scheduler

https://gist.github.com/aydinnyunus/9d507810e78554e2a18668a3dcfd65a8
1•runtimepanic•27m ago•0 comments

Show HN: Go LLM inference with a Vulkan GPU back end that beats Ollama's CUDA

https://github.com/computerex/dlgo
1•computerex•28m ago•0 comments

I built a tool that tailor your resume and cover letter for every job in seconds

https://cvrepair.guru
1•ahmedgmurtaza•33m ago•2 comments

LLMs take the fun out of coding

https://twitter.com/atmoio/status/2030289138126107074
2•vhiremath4•35m ago•2 comments

Show HN: MOCC – Turn your MRR or follower milestones into beautiful mockups

https://mocc-delta.vercel.app/
2•suryanshmishrai•40m ago•0 comments

New Research Reassesses the Value of Agents.md Files for AI Coding

https://www.infoq.com/news/2026/03/agents-context-file-value-review/
6•noemit•42m ago•2 comments

Ask HN: Has finding more competitors ever made you more confident?

1•stokemoney•42m ago•0 comments

The Synthetic Data Playbook: Generating Trillions of the Finest Tokens

https://huggingface.co/spaces/HuggingFaceFW/finephrase
2•JoelNiklaus•44m ago•0 comments

72 commits in a day, a third of them reverting the rest

1•madebyjam•44m ago•0 comments

From Iran to Ukraine, everyone's trying to hack security cameras

https://www.wired.com/story/from-ukraine-to-iran-hacking-security-cameras-is-now-part-of-wars-pla...
3•asplake•55m ago•0 comments

How good is Claude, really?

https://alinpanaitiu.com/blog/how-good-is-claude-really/
3•dmoro•56m ago•1 comments

Show HN: TracePact – Catch tool-call regressions in AI agents before prod

https://github.com/dcdeve/tracepact
1•soydanicg•1h ago•0 comments

Add llms.txt and fix robots.txt for AI agent discoverability

2•nishiohiroshi•1h ago•0 comments

Show HN: JRD Garage – $99 one-time auto shop management (Mitchell1 alternative)

https://jrdconnect.com/apps
1•jaydurangodev•1h ago•0 comments

How to Talk About Books You Haven't Read

https://www.themarginalian.org/2012/06/15/how-to-talk-about-books-you-havent-read/
4•rramadass•1h ago•3 comments

Show HN: Brainfuck to RISC-V JIT compiler written in Zig

https://github.com/evelance/brainiac
5•0x000xca0xfe•9mo ago
Hi everybody,

This was my project to learn Zig and RISC-V + x86_64 assembly.

Not sure if anybody is actually interested in yet another Brainfuck compiler, so I'll just write up some random things I learned while building it!

- A primitive assembly-stitching compiler is 10x faster than the interpreter. I did not expect that.

- The generated x86 code is really bad (e.g. it always uses 6- or 7-byte instructions with 32-bit immediates when much smaller ones exist), but it doesn't really matter: even the good code GCC and Clang generate for transpiled Brainfuck->C is not much faster, as it's bottlenecked by memory accesses anyway.
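
For readers unfamiliar with the approach: "assembly stitching" means each Brainfuck opcode maps to a fixed, pre-encoded machine-code template, and compiling is just concatenating templates into a buffer. A minimal C sketch of that idea (hypothetical templates assuming rbx holds the tape pointer; loops and I/O omitted — this is not the project's actual generator):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical x86-64 templates; rbx is assumed to hold the tape pointer. */
static const uint8_t TPL_PLUS[]  = {0x80, 0x03, 0x01};        /* add byte [rbx], 1 */
static const uint8_t TPL_MINUS[] = {0x80, 0x2B, 0x01};        /* sub byte [rbx], 1 */
static const uint8_t TPL_RIGHT[] = {0x48, 0x83, 0xC3, 0x01};  /* add rbx, 1 */
static const uint8_t TPL_LEFT[]  = {0x48, 0x83, 0xEB, 0x01};  /* sub rbx, 1 */

/* Stitch templates for a loop-free Brainfuck fragment; returns bytes emitted. */
static size_t stitch(const char *bf, uint8_t *out) {
    size_t n = 0;
    for (; *bf; bf++) {
        const uint8_t *tpl = NULL;
        size_t len = 0;
        switch (*bf) {
        case '+': tpl = TPL_PLUS;  len = sizeof TPL_PLUS;  break;
        case '-': tpl = TPL_MINUS; len = sizeof TPL_MINUS; break;
        case '>': tpl = TPL_RIGHT; len = sizeof TPL_RIGHT; break;
        case '<': tpl = TPL_LEFT;  len = sizeof TPL_LEFT;  break;
        default: continue; /* loops and I/O omitted in this sketch */
        }
        memcpy(out + n, tpl, len);
        n += len;
    }
    return n;
}
```

Because the templates use fixed registers, there is no register allocation at all, which is why a stitcher is so simple; the project's real generator is in the linked repo.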

- Zig is pretty far along actually. You can make serious projects with it!

- But the community seems to like self-punishment. Unused parameters and variables are hard errors, and there is no way to disable that, even for debug builds. It makes quickly commenting out part of the code a real PITA.

- I hit one miscompilation due to std.mem.span being broken, and two source-level breaks going from Zig 0.13 to 0.15 (std.mem.page_size was removed, and ArrayList.popOrNull as well).

- But arbitrary size integers are fantastic! And well-defined two's complement behaviour!

Here, for example, is the code that encodes the c.beqz instruction:

  /// Branch if Equal to Zero (compressed): c.beqz rs1', offset -> beq rs1, x0, offset
  pub fn c_beqz(text: *std.ArrayList(u8), rs1: RV_X, offset: i9) !void {
      std.debug.assert(is3BitReg(rs1));
      std.debug.assert(@mod(offset, 2) == 0);
      const imm: u9 = @bitCast(offset);
      const RV_CB = packed struct(u16) {
          op: u2,
          offset5: u1,
          offset1_2: u2,
          offset6_7: u2,
          rsd_rs1_: u3,
          offset3_4: u2,
          offset8: u1,
          funct3: u3,
      };
      const ins = RV_CB {
          .op = 0x1,
          .offset5 = @truncate(imm >> 5),
          .offset1_2 = @truncate(imm >> 1),
          .offset6_7 = @truncate(imm >> 6),
          .rsd_rs1_ = @truncate(@intFromEnum(rs1) - 8),
          .offset3_4 = @truncate(imm >> 3),
          .offset8 = @truncate(imm >> 8),
          .funct3 = 0x6,
      };
      try appendInstruction(text, u16, @bitCast(ins));
  }
This is really nice as all the exotic integer sizes are actually checked, too.
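
For comparison, the same field scattering can be done in C with explicit shifts and masks. This is a hypothetical re-translation of the packed struct above, not code from the project, and it shows what Zig's exotic integer widths buy you: in C, only runtime asserts stand between you and an out-of-range immediate.

```c
#include <assert.h>
#include <stdint.h>

/* Encode c.beqz rs1', offset as a 16-bit RVC CB-format instruction.
   rs1 must be one of x8..x15; offset is even and fits in 9 signed bits. */
static uint16_t c_beqz(unsigned rs1, int offset) {
    assert(rs1 >= 8 && rs1 <= 15);
    assert(offset % 2 == 0 && offset >= -256 && offset <= 254);
    uint16_t imm = (uint16_t)offset & 0x1FF;    /* 9-bit two's complement */
    uint16_t ins = 0x1;                         /* op = C1 quadrant          */
    ins |= ((imm >> 5) & 0x1) << 2;             /* offset[5]   -> bit 2      */
    ins |= ((imm >> 1) & 0x3) << 3;             /* offset[2:1] -> bits 4:3   */
    ins |= ((imm >> 6) & 0x3) << 5;             /* offset[7:6] -> bits 6:5   */
    ins |= (uint16_t)(rs1 - 8) << 7;            /* rs1' (3 bits) -> bits 9:7 */
    ins |= ((imm >> 3) & 0x3) << 10;            /* offset[4:3] -> bits 11:10 */
    ins |= ((imm >> 8) & 0x1) << 12;            /* offset[8]   -> bit 12     */
    ins |= (uint16_t)0x6 << 13;                 /* funct3 = 110 -> bits 15:13 */
    return ins;
}
```

Unlike the Zig version, nothing here is checked at compile time: shift a field to the wrong position and the compiler stays silent.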

- Zig support for Windows is good. Porting the project to Windows was very easy.

- When the RISC-V registers are carefully chosen, almost all instructions in this project can be compressed.

- Compressed instructions and good branching code (using the branch instructions directly when the jump range is small enough instead of branching over a larger jump instruction) did not noticeably change performance on real hardware (OrangePi RV2).

- But somehow QEMU got a massive boost from that. I'm not sure why exactly.
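
The branch-size selection described in the last two points can be sketched as a simple range check. A hypothetical helper (not from the project), assuming c.beqz's 9-bit ±256 B immediate, beq's 13-bit ±4 KiB immediate, and an inverted compressed branch over a ±1 MiB jal otherwise:

```c
#include <stdlib.h>

/* Pick the smallest encoding for a "branch if zero" to a target `off` bytes away. */
static int beqz_size(long off) {
    if (off % 2 != 0) return -1;               /* RISC-V branch targets are even     */
    if (off >= -256 && off <= 254) return 2;   /* c.beqz: 9-bit signed immediate     */
    if (off >= -4096 && off <= 4094) return 4; /* beq rs1, x0: 13-bit signed imm.    */
    if (labs(off) < (1L << 20)) return 6;      /* c.bnez over a jal (2 + 4 bytes)    */
    return -1;                                 /* would need an indirect jump        */
}
```

Per the observations above, picking the short forms made no measurable difference on the OrangePi RV2, but QEMU rewarded the smaller encodings noticeably.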

So, that's about it!

I hope at least something was interesting...

Comments

sylware•9mo ago
Thumbs up for this project (as everything RISC-V usually is).

I write rv64 assembly (nearly core instructions only, without the memory-reservation instructions) and run it on x86_64 with a very small interpreter written in x86_64 assembly.

And you are right, I have had thoughts about a RISC-V -> x86_64 compiler (but it will probably require some runtime, unfortunately).

Hopefully, rv22+ hardware with an ultra-performant µ-architecture on the latest silicon process will happen sooner than we expect. One less toxic IP lock, and a cleaner, _really standard_ assembly (the endgame of much software).

0x000xca0xfe•9mo ago
Yeah, I can't wait for a performant RISC-V core. Runtime code generation is so easy for RISC-V. I have many ideas for projects where I'd like to use it, but it feels kind of pointless when JITed RISC-V machine code on current hardware gets destroyed by any half-decent x86 PC or Mac running naive C code.
sylware•9mo ago
Well, here are the tricks. Interpreted rv64 assembly will be "slow", actually slower than native x86_64 code, but in many execution contexts, for many pieces of software, the first trick is that the "slow" interpreted rv64 machine code will be fast enough. The second trick: I have control over my rv64 interpreter, so I can write native x86_64 acceleration assembly alongside a rv64 reference implementation (I planned to do just that for the CPU renderer in my wayland compositor; actually, I already have AVX2 code for some of that, even though the sweet spot is AVX-512, but I don't have the hardware for that yet).

And once we have this shiny rv64 hardware, it certainly won't be a drop-in, but the distance to native code will be minimal.

One important SDK thing: I am careful to use the smallest number of rv64 machine instructions (we tend to forget the 'R' in "RISC-V" means 'R'educed...), and I use a basic, really basic, C preprocessor instead of the assembler's preprocessor, in order to decouple the assembly code from any specific assembler. I don't even use assembler pseudo-instructions, ABI register names, or compressed machine instructions.

On top of that, I don't use ELF. I use a super-minimal executable/system-interface dynamic shared library format of my own, idiotically simple, which I wrap in ELF binaries for transparent support. People have to come to realize that ELF's complexity, for an executable/system-interface dynamic shared library, is utterly and completely obsolete, even a liability once you are looking for binary stability over time (cf. games), as more than the last decade has proven.