frontpage.

From Side Project to 185K GitHub Stars

https://learndevrel.com/blog/openclaw-ai-agent-phenomenon
1•rohitghumare•45s ago•0 comments

Show HN: A free online British accent generator for instant voice conversion

https://audioconvert.ai/british-accent-generator
1•Katherine603•52s ago•0 comments

Train and inference GPT in 243 lines of pure, dependency-free Python by Karpathy

https://gist.github.com/karpathy/8627fe009c40f57531cb18360106ce95
1•itvision•1m ago•0 comments

Show HN: Solving Sudoku reasoning via Energy Geometric models

https://www.davisgeometric.com/index.html
1•epokh•3m ago•0 comments

I'm building a crowdsourced podcast episode tonight from YOUR articles

1•pavkatar•3m ago•0 comments

Forwardly-Evaluated Build Systems

https://garnix.io/blog/garn2/
1•birdculture•3m ago•0 comments

The Origins and Limitations of AMD's Revival

https://thechipletter.substack.com/p/the-origins-and-limitations-of-amds
1•rbanffy•14m ago•0 comments

The Website Is Down #1: Sales Guy vs. Web Dude – [video]

https://www.youtube.com/watch?v=uRGljemfwUE
1•sydney6•16m ago•0 comments

From specification to stress test: a weekend with Claude

https://www.juxt.pro/blog/from-specification-to-stress-test/
5•henrygarner•19m ago•1 comment

Linus Torvalds rejects MMC changes for Linux 7.0 cycle

https://www.phoronix.com/news/Linux-7.0-No-MMC-Changes
1•spyke112•20m ago•0 comments

James Van Der Beek, 'Dawson's Creek' Star, Has Died

https://www.cnn.com/2026/02/11/entertainment/james-van-der-beek-death
2•Einenlum•20m ago•0 comments

The UK Royal Mint is running a treasure hunt to find a gold bar

https://www.royalmint.com/shop/limited-editions/the-great-british-treasure-hunt/
1•simonjgreen•21m ago•0 comments

Show HN: Tymr – simple time tracking and invoicing for freelancers

https://www.tymr.digital/
1•hustlecoding•21m ago•0 comments

How Many Biweekly Pay Periods in 2026? (It's Not What You'd Expect)

https://saveku.com/blog/how-many-biweekly-pay-periods-in-2026-it-s-not-what-you-d-expect
1•roywj•24m ago•0 comments

An experiment in demand-gated, AI-generated apparel (no inventory)

https://ilors.com
1•cdalex•25m ago•1 comment

GLM-5: From Vibe Coding to Agentic Engineering

https://simonwillison.net/2026/Feb/11/glm-5/
1•onurkanbkrc•26m ago•0 comments

A stochastic state model for Bitcoin

https://semn.ai/
1•_devfrend•29m ago•0 comments

Show HN: SQBuilder – UI Google Search query builder

https://sqbuilder.fly.dev/
1•Igor_Wiwi•31m ago•0 comments

Europe spending on sovereign cloud infrastructure to triple from 2025-2027

https://www.datacenterdynamics.com/en/news/europe-spending-on-sovereign-cloud-infrastructure-to-t...
2•belter•33m ago•0 comments

SotA ARC-AGI-2 Results with REPL Agents

https://www.symbolica.ai/blog/arcgentica
1•tosh•35m ago•0 comments

China's CO2 emissions have now been 'flat or falling' for 21 months

https://www.carbonbrief.org/analysis-chinas-co2-emissions-have-now-been-flat-or-falling-for-21-mo...
7•JoiDegn•37m ago•1 comment

AI researchers are sounding the alarm on their way out the door

https://www.cnn.com/2026/02/11/business/openai-anthropic-departures-nightcap
2•rramadass•43m ago•0 comments

Heartbeat pings from your .NET workers

https://cron-monitor.com/
1•temakonkin•44m ago•0 comments

Grok 4 sabotages shutdown 97% of the time, even if instructed not to in the system prompt

https://arxiv.org/abs/2509.14260
6•agenticagent•45m ago•4 comments

Python Is for Everyone: Inside the PSF's D&I Work Group

https://georgiker.com/blog/python-is-for-everyone/
1•lumpa•46m ago•0 comments

UK Supreme Court Issues Milestone Judgment for AI and Software Patentability

https://ipwatchdog.com/2026/02/11/uk-supreme-court-issues-milestone-judgment-ai-software-patentab...
3•zoobab•48m ago•0 comments

The missing digit of Stela C

https://johncarlosbaez.wordpress.com/2026/02/12/stela-c/
2•chmaynard•50m ago•0 comments

Warcraft III Peon Voice Notifications but for Codex

https://github.com/mrdavey/codex-peon
1•daveytea•52m ago•1 comment

I'm not feeling the async pressure (2020)

https://lucumr.pocoo.org/2020/1/1/async-pressure/
1•tosh•52m ago•0 comments

Everyone's looking for a bubble. No one sees the stampede

https://www.exponentialview.co/p/bubble-or-stampede
1•swolpers•54m ago•0 comments

Brokk: AI for Large Codebases

https://brokk.ai
51•handfuloflight•9mo ago

Comments

jbellis•9mo ago
Hi all, Brokk creator here, happy to answer any questions!

I made an intro video with a live demo here: https://www.youtube.com/watch?v=Pw92v-uN5xI

soco•9mo ago
Is there something also to read for those of us who will never watch videos?
lutzleonhardt•9mo ago
Hi, yes there are some blog posts:

https://brokk.ai/blog/brokk-under-the-hood

bchapuis•9mo ago
Really cool project! I tried it a couple of weeks ago with an Anthropic API key and will give it another shot.

Could you share a bit more about how you handle code summarization? Is it mostly about retaining method signatures so the LLM gets a high-level sense of the project? In Java, could this work with dependencies too, like source JARs?

More generally, how’s it been working with Java for this kind of project? Does limited GPU access ever get in the way of summarization or analysis (Jlama)?

jbellis•9mo ago
That officially makes you an early adopter, thanks!

Yes, it's basically just parsing for declarations. (If you double-click on any context in the Workspace, it will show you exactly what's inside.)

You have to import dependencies via File -> Decompile Dependency; then they get parsed like the rest of your source, just read-only.

I have a love-hate relationship with Java, mostly love lately; the OpenJDK team is doing a great job driving the language forward. It's so much faster than Python, and it's nice being able to extend the language in itself and get native performance.

Since we're just using Jlama to debounce the LLM requests, we can use a tiny model that runs fine on CPU alone. The latest Jlama supports GPU as well but we're not using that.
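
For a rough idea of what "parsing for declarations" can look like, here is a minimal, hypothetical sketch: a line-based skeleton extractor that keeps class and method signatures and drops bodies. (Brokk's actual summarizer uses a proper parser; this is only to illustrate the idea.)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;

    /** Naive "skeleton" extractor: keeps type and method declarations, drops bodies. */
    public class SkeletonSketch {

        public static List<String> summarize(Path javaFile) throws IOException {
            List<String> skeleton = new ArrayList<>();
            int depth = 0; // current brace nesting depth
            for (String line : Files.readAllLines(javaFile)) {
                String trimmed = line.trim();
                // Keep lines that open a block and look like a type or method declaration,
                // but only at file level (depth 0) or directly inside a type (depth 1).
                boolean opensBlock = trimmed.endsWith("{");
                boolean looksLikeDecl = trimmed.matches(
                        "(public|protected|private|static|final|abstract|class|interface|enum|record).*");
                if (opensBlock && looksLikeDecl && depth <= 1) {
                    skeleton.add(line.replaceAll("\\{\\s*$", "{ ... }"));
                }
                // Track nesting so method bodies and inner blocks are skipped.
                depth += count(trimmed, '{') - count(trimmed, '}');
            }
            return skeleton;
        }

        private static int count(String s, char c) {
            return (int) s.chars().filter(ch -> ch == c).count();
        }

        public static void main(String[] args) throws IOException {
            summarize(Path.of(args[0])).forEach(System.out::println);
        }
    }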

neoncontrails•9mo ago
I'd be interested to try this out. I'm especially keen on AI tools that implement a native RAG workflow. I've given Cursor documentation links, populated my codebase with relevant READMEs and diagram files that I'm hoping might provide useful context, and yet when I ask it to assist on some refactoring task it often spends 10-20 minutes simply grepping for various symbol names and reading through file matches before attempting to generate a response. This doesn't seem like an efficient way for an LLM to navigate a medium-sized codebase. And for an IDE with first-class LLM tooling, it is a bit surprising that it doesn't seem to provide powerful vector-based querying capabilities out of the box — if implemented well, a Google-like search interface to one's codebase could be useful to humans as well as to LLMs.

What does this flow look like in Brokk? Do models still need to resort to using obsolete terminal-based CLI tools in order to find stuff?
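
For context, the "vector-based querying" being asked for usually comes down to embedding code chunks or file summaries once, then ranking them against an embedded query by cosine similarity. A toy sketch, where embed() is a placeholder for whatever real embedding model you would plug in:

    import java.util.Comparator;
    import java.util.Map;

    /** Toy illustration of embedding-based code search: rank chunks by cosine similarity. */
    public class VectorSearchSketch {

        // Placeholder for a real embedding model; the vector it returns is NOT semantically meaningful.
        static float[] embed(String text) {
            float[] v = new float[8];
            for (int i = 0; i < text.length(); i++) {
                v[i % v.length] += text.charAt(i);
            }
            return normalize(v);
        }

        static float[] normalize(float[] v) {
            double norm = 0;
            for (float x : v) norm += x * x;
            norm = Math.sqrt(norm);
            for (int i = 0; i < v.length; i++) v[i] /= (float) norm;
            return v;
        }

        static double cosine(float[] a, float[] b) {
            double dot = 0;
            for (int i = 0; i < a.length; i++) dot += a[i] * b[i];
            return dot; // vectors are already normalized
        }

        public static void main(String[] args) {
            // In a real tool these would be summaries/skeletons of files, embedded once and cached.
            Map<String, float[]> index = Map.of(
                    "PaymentService.java", embed("charge refund invoice stripe"),
                    "UserRepository.java", embed("find save delete user database"),
                    "RetryPolicy.java", embed("backoff retry timeout jitter"));

            float[] query = embed("how do we retry failed charges?");
            index.entrySet().stream()
                    .sorted(Comparator.comparingDouble(
                            (Map.Entry<String, float[]> e) -> -cosine(query, e.getValue())))
                    .forEach(e -> System.out.printf("%.3f  %s%n",
                            cosine(query, e.getValue()), e.getKey()));
        }
    }

The interesting part is what gets embedded (skeletons, summaries, docstrings) and how the results get merged with static analysis, not the similarity math itself.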

lutzleonhardt•9mo ago
We implemented a multi-step process to find the required context:

1. Quick Context: shows the most relevant files based on a PageRank algorithm (static analysis) and semantic embeddings (Jlama inference engine). The inputs are the instructions and the AI Workspace fragments (i.e. files).

2. Deep Scan: a richer LLM receives the summaries of the AI Workspace files (plus the instructions) and returns a recommendation of files and tests. It also recommends the type of inclusion (editable, read-only, summary/skeleton).

3. Agentic Search: the AI has access to a set of tools for finding the required files, and those tools are not limited to grep/rg. Instead it can:
   - find symbols (classes, methods, ...) in the project
   - ask for summaries/skeletons of files
   - fetch class or method implementations
   - find usages of symbols (where is x used?)
   - list call sites (in/out)
   - ...
   (a sketch of such a tool interface is below)

You can read more about this in the Brokk.ai blog: https://brokk.ai/blog/brokk-under-the-hood
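
To make the agentic-search step concrete, the tool set can be pictured as an interface along these lines (names are illustrative, not Brokk's actual API):

    import java.util.List;

    /**
     * Hypothetical sketch of an "agentic search" tool surface. Each method maps to a
     * tool the LLM can call instead of falling back to grep over the whole repository.
     */
    public interface CodeSearchTools {

        /** Find symbols (classes, methods, fields) whose names match a pattern. */
        List<String> findSymbols(String pattern);

        /** Return the summary/skeleton of a file: declarations only, no bodies. */
        String getSkeleton(String filePath);

        /** Return the full implementation of a class or method. */
        String getSource(String fullyQualifiedSymbol);

        /** Find usages of a symbol: "where is x used?" */
        List<String> findUsages(String fullyQualifiedSymbol);

        /** Incoming or outgoing call sites of a method. */
        List<String> callSites(String fullyQualifiedMethod, boolean incoming);
    }

Each call returns a small, targeted slice of the codebase, which is why this tends to be more token-efficient than dumping grep output into the context.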

silverlake•9mo ago
No offense, but that video is brutally boring. Even at 1.5x speed I couldn’t get past 10 min. You should transcribe the audio and use an LLM to write a punchy sales pitch.
corysama•9mo ago
How large is "Large"? Are we testing on Unreal Engine? :D
jbellis•9mo ago
no, but I've tested on intellij (~5M loc, takes forever to import b/c of delombok, do not recommend)
lutzleonhardt•9mo ago
I tested it with Ghidra recently and got very good results
saratogacx•9mo ago
Likely not an important note, but the name sounds close enough to Grok that I assumed this was a spin-off of some xAI product. I had to look around to see whether it was actually associated (it looks like it isn't), but it may be something to be aware of.
tschellenbach•9mo ago
I wrote a guide on how to use Cursor for large codebases, and it's working well over here: https://getstream.io/blog/cursor-ai-large-projects/

Cool to see more AI tools address this.

ElijahLynn•9mo ago
Thank you! I think this is the next evolution of using LLMs for coding: understanding all the context from large codebases...
lutzleonhardt•9mo ago
The amazing thing here is that the Brokk AI can access your code like an IDE: it can ask for usages or gather the summary of a file before deciding to fetch the implementation of a method. It mimics how a dev navigates the codebase, and this is more reliable and token-efficient than the usual grep/rg approach.
esafak•9mo ago
This ought to be an IDE plugin. Don't make me context switch.
danjl•9mo ago
The "Read" file list sounds a lot like Copilot Edit mode, where you manually specify the list of files that are added to the context. Similarly, Copilot has an Ask (Chat) mode that doesn't change the code. One of the downsides of all these new IDEs is that it is difficult, even for the developers of those tools, to have enough time to test out coding in each of their competitors. Also, the switching cost of changing IDEs is pretty high, even if they are forks of the same code base, which makes it hard for the users to really test out all the options. In the long run, I expect that the "larger" IDE providers will purchase the smaller ones. IOW, if you wait long enough, all the good bits will be in Copilot (or maybe Cursor with their new funding).
jbellis•9mo ago
(creator here)

idk, everyone else seems to want to take the 40-year-old IDE paradigm we're all used to (really! that's how old Turbo Pascal 3 is!) and graft AI onto it. I think we need a fundamentally different design to truly take advantage of the change from "I'm mostly reading and writing code at human speeds" to "I'm mostly supervising the AI, which is better at generating syntax than I am."

Of course, the downside to going against the crowd is that the crowd is usually right; we'll see how it goes!

danjl•9mo ago
I am a huge supporter of completely re-working the IDE UI as well. I'm not arguing for keeping the existing IDE interfaces. I like that folks are experimenting with entirely new interfaces. In fact, I'd go further and suggest that all of the overly complex interfaces used on any sort of content-creation app, like Unity, Unreal, Photoshop, as well as code IDEs, will eventually be completely refactored to remove all the old complexity in favor of either chat-based or other AI-driven interfaces. My point is simply that there are too many new AI-driven IDEs for folks to try out, even the developers of those IDEs. Many of the features in Brokk that were seemingly described in the Brokk 101 blog video as "differentiators" are existing Copilot features. Has the author ever used Copilot? Or just Cursor? Or another AI variant?
danjl•9mo ago
I'd love to see things like Brokk experiment a bit more with what other information to include in our git repositories, besides the code, that helps improve AI-based code generation. For example, perhaps the repo should include more design information about the look-and-feel, as visual information or Figma files, rather than just, say, the CSS and HTML. Or it might help if the repository included more business requirements so that the AI has better information to guide prioritization of changes. Obviously other bits, like coding standards, should be included as well, though perhaps a larger context might mitigate the need for coding standards if the generated code followed the existing code (which often doesn't happen).
bb88•9mo ago
I think that's what's going to happen over time. We're going to be producing more and more code, but mostly by supervising an AI.

The big problem is that we're treating the AI as an all-knowing oracle. What we should probably be doing is treating the AI as a colleague -- allowing it to ask questions about the code base so it can pick up on the subtle clues.

Often that subtlety is lost when reading the code base, and sometimes the AI will think something is an outright error when in fact it's completely on purpose.

Comments go a long way towards this end, but in large legacy codebases comments may not exist, and the original coders expected people to understand at first glance that the code was correct.

Test-driving Junie, I've had it remove a feature it thought was broken code and then fix the unit tests, instead of trying to understand whether the unit test or the feature was the thing that was actually broken.

insin•9mo ago
LLM for Large Codebases