frontpage.

US science after a year of Trump

https://www.nature.com/immersive/d41586-026-00088-9/index.html
1•newman314•5m ago•0 comments

Blue4est Paper – BPA-Free Thermal Print Camera Compendium

https://thermalprintcameras.wordpress.com/blue4est-paper/
1•walterbell•12m ago•0 comments

Ask HN: Why does Google Maps still use the Mercator projection?

1•hbarka•13m ago•0 comments

Show HN: Aident, agentic automations as plain-English playbooks

https://aident.ai/
4•ljhskyso7•19m ago•0 comments

Why AGI Would Shape Humanity in the Shadows: The Revelation Trap

1•unspokenlayer•20m ago•0 comments

Governance in the Age of AI, Nuclear Threats, and Geopolitical Brinkmanship [video]

https://www.youtube.com/watch?v=XACETcmQAeM
1•measurablefunc•22m ago•0 comments

Ask HN: Is there any good open source model with reliable agentic capabilities?

1•baalimago•22m ago•0 comments

Show HN: MCP server for searching and retrieving 200k icons

https://github.com/better-auth/better-icons
2•bekacru•25m ago•0 comments

Government Agencies Mandate CSPM for Federal Cloud Contracts

https://www.systemtek.co.uk/2025/05/executive-protection-in-the-digital-age-how-ceos-are-becoming...
2•cybleinc•25m ago•0 comments

DRAM: the mini-mills of our time

https://siliconimist.substack.com/p/dram-the-steel-mini-mills-of-our
1•johncole•26m ago•0 comments

How Shopify's Tobi Lütke Works – David Senra [video]

https://www.youtube.com/watch?v=ZSM2uFnJ5bs
1•simonebrunozzi•29m ago•0 comments

The new Siri chatbot may run on Google servers, not Apple's

https://9to5mac.com/2026/01/21/siri-chatbot-apple-google-servers/
1•_____k•29m ago•0 comments

Long-Term Data Storage: ATA Write-Read-Verify on FreeBSD with Camcontrol (2023)

https://dapperdrake.neocities.org/2023-12-wrv-freebsd-13-camcontrol-hdparm
1•walterbell•33m ago•0 comments

Certified Decision Procedures for Width-Independent Bitvector Predicates

https://dl.acm.org/doi/pdf/10.1145/3763148
1•luu•34m ago•0 comments

400 commits. 14 days. Zero (human) code.

https://tobyhede.com/blog/400-commits-in-14-days/
1•tobyhede•36m ago•0 comments

AI Regulation: Fact and Fiction

https://zenodo.org/records/18333769
1•businessmate•38m ago•1 comment

mRNA cancer vaccine shows protection at 5-year follow-up, Moderna and Merck say

https://arstechnica.com/health/2026/01/mrna-cancer-vaccine-shows-protection-at-5-year-follow-up-m...
2•ubiquitysc•38m ago•0 comments

A uniquely Japanese take on nostalgia (2020)

https://www.bbc.com/travel/article/20200119-a-uniquely-japanese-take-on-nostalgia
1•libpcap•39m ago•0 comments

What Technologies Are Running on 67K+ Websites (Dec 2025)

https://www.dropbox.com/scl/fi/d4l0gby5b5wqxn52k556z/sample_dec_2025.zip?dl=0&e=1&noscript=1&rlke...
1•_chse_•42m ago•1 comment

Faulkner's 1,288 word sentence

https://www.openculture.com/2025/02/when-william-faulkner-set-the-world-record-for-writing-the-lo...
1•Insanity•43m ago•0 comments

Show HN: Deterministic, machine-readable context for TypeScript codebases

https://github.com/LogicStamp/logicstamp-context
1•AmiteK•43m ago•8 comments

Forty years in the Siberian wilderness: the Old Believers who time forgot

https://www.theguardian.com/world/2026/jan/22/forty-years-in-the-siberian-wilderness-the-old-beli...
1•mindracer•43m ago•0 comments

GNU InetUtils Security Advisory: remote authentication by-pass in telnetd

https://www.openwall.com/lists/oss-security/2026/01/20/2
1•blincoln•44m ago•0 comments

Experimenting with a Compiled Language

1•JhonPork•44m ago•0 comments

Isotonic and Convex Regression: A Review of Theory, Algorithms, and Applications

https://www.mdpi.com/2227-7390/14/1/147
1•PaulHoule•45m ago•0 comments

Salesforce ships higher-quality code across 20k developers with Cursor

https://cursor.com/blog/salesforce
1•onurkanbkrc•45m ago•0 comments

Quamina v2.0.0

https://www.tbray.org/ongoing/When/202x/2026/01/20/Quamina-2.0
1•robin_reala•46m ago•0 comments

Do people at Google use Gmail?

2•mr-pink•49m ago•1 comment

RestockAlerts – Realtime Restock Tracker For Lululemon, Aritzia and More

https://restockalerts.com/
1•dandeliontechie•50m ago•0 comments

What Aviation Teaches Us About Auditing

https://docs.eventsourcingdb.io/blog/2026/01/22/what-aviation-teaches-us-about-auditing/
1•goloroden•51m ago•0 comments

Brokk: AI for Large Codebases

https://brokk.ai
51•handfuloflight•8mo ago

Comments

jbellis•8mo ago
Hi all, Brokk creator here, happy to answer any questions!

I made an intro video with a live demo here: https://www.youtube.com/watch?v=Pw92v-uN5xI

soco•8mo ago
Is there also something to read for those of us who will never watch videos?
lutzleonhardt•8mo ago
Hi, yes there are some blog posts:

https://brokk.ai/blog/brokk-under-the-hood

bchapuis•8mo ago
Really cool project! I tried it a couple of weeks ago with an Anthropic API key and will give it another shot.

Could you share a bit more about how you handle code summarization? Is it mostly about retaining method signatures so the LLM gets a high-level sense of the project? In Java, could this work with dependencies too, like source JARs?

More generally, how’s it been working with Java for this kind of project? Does limited GPU access ever get in the way of summarization or analysis (Jlama)?

jbellis•8mo ago
That officially makes you an early adopter, thanks!

Yes, it's basically just parsing for declarations. (If you double-click on any context in the Workspace it will show you exactly what's inside.)

You have to import the dependencies via File -> Decompile Dependency and then it gets parsed like the rest of your source, only read-only.
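
Roughly, a declaration-only skeleton can be produced like this (an illustrative sketch using the JavaParser library, not the code Brokk actually ships):

    import com.github.javaparser.StaticJavaParser;
    import com.github.javaparser.ast.CompilationUnit;
    import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration;
    import com.github.javaparser.ast.body.MethodDeclaration;

    import java.io.IOException;
    import java.nio.file.Path;

    /**
     * Illustrative sketch of declaration-only summarization:
     * keep class and method signatures, drop the bodies.
     * Not Brokk's actual implementation.
     */
    public class SkeletonExample {
        public static void main(String[] args) throws IOException {
            CompilationUnit cu = StaticJavaParser.parse(Path.of(args[0]));

            for (ClassOrInterfaceDeclaration type : cu.findAll(ClassOrInterfaceDeclaration.class)) {
                System.out.println("class " + type.getNameAsString() + " {");
                for (MethodDeclaration method : type.getMethods()) {
                    // getDeclarationAsString() yields the signature without the body.
                    System.out.println("    " + method.getDeclarationAsString() + ";");
                }
                System.out.println("}");
            }
        }
    }

Decompiled dependencies produce source too, so the same kind of pass can generate read-only skeletons for them.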

I have a love-hate relationship with Java, mostly love lately; the OpenJDK team is doing a great job driving the language forward. It's so much faster than Python, and it's nice being able to extend a language in itself and get native performance.

Since we're just using Jlama to debounce the LLM requests, we can use a tiny model that runs fine on CPU alone. The latest Jlama supports GPU as well but we're not using that.

neoncontrails•8mo ago
I'd be interested to try this out. I'm especially keen on AI tools that implement a native RAG workflow. I've given Cursor documentation links, populated my codebase with relevant READMEs and diagram files that I'm hoping might provide useful context, and yet when I ask it to assist on some refactoring task it often spends 10-20 minutes simply grepping for various symbol names and reading through file matches before attempting to generate a response. This doesn't seem like an efficient way for an LLM to navigate a medium-sized codebase. And for an IDE with first-class LLM tooling, it is a bit surprising that it doesn't seem to provide powerful vector-based querying capabilities out of the box — if implemented well, a Google-like search interface to one's codebase could be useful to humans as well as to LLMs.

What does this flow look like in Brokk? Do models still need to resort to using obsolete terminal-based CLI tools in order to find stuff?

lutzleonhardt•8mo ago
We implemented a multi-step process to find the required context:

1. Quick Context: shows the most relevant files based on a PageRank algorithm (static analysis) and semantic embeddings (the JLama inference engine). The inputs are the instructions and the AI workspace fragments (i.e. files).

2. Deep Scan: a richer LLM receives the summaries of the AI workspace files (plus the instructions) and returns a recommendation of files and tests. It also recommends the type of inclusion (editable, read-only, summary/skeleton).

3. Agentic Search: the AI has access to a set of tools for finding the required files. The tools are not limited to grep/rg; instead you can:
- find symbols (classes, methods, ...) in the project
- ask for summaries/skeletons of files
- retrieve class or method implementations
- find usages of symbols (where is x used?)
- list call sites (in/out), and more
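
As a rough illustration of that tool surface (the names and signatures below are hypothetical, not Brokk's actual API), an agentic-search step might call something like:

    import java.util.List;

    /**
     * Hypothetical sketch of the code-intelligence tools an agentic
     * search step could call instead of falling back to grep/rg.
     * Names and signatures are illustrative only.
     */
    public interface CodeIntelTools {

        /** Find declared symbols (classes, methods, fields) matching a pattern. */
        List<Symbol> findSymbols(String pattern);

        /** Return a skeleton of a file: declarations only, no method bodies. */
        String getSkeleton(String filePath);

        /** Return the full source of a class or method. */
        String getImplementation(String fullyQualifiedName);

        /** Where is this symbol used? */
        List<Usage> findUsages(String fullyQualifiedName);

        /** Incoming and outgoing call sites of a method. */
        CallGraph getCallSites(String fullyQualifiedMethodName);

        record Symbol(String fullyQualifiedName, String kind, String filePath) {}
        record Usage(String filePath, int line, String snippet) {}
        record CallGraph(List<Usage> callersIn, List<Usage> calleesOut) {}
    }

Exposing each method as a tool in the LLM's tool-calling loop lets the model ask for a skeleton or a usage list instead of grepping raw text, which is what keeps the context small.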

You can read more about this in the Brokk.ai blog: https://brokk.ai/blog/brokk-under-the-hood

silverlake•8mo ago
No offense, but that video is brutally boring. Even at 1.5x speed I couldn’t get past 10 min. You should transcribe the audio and use an LLM to write a punchy sales pitch.
corysama•8mo ago
How large is "Large"? Are we testing on Unreal Engine? :D
jbellis•8mo ago
No, but I've tested on IntelliJ (~5M LOC, takes forever to import because of delombok, do not recommend)
lutzleonhardt•8mo ago
I tested it with Ghidra recently and got very good results
saratogacx•8mo ago
Likely not an important note, but the name sounds close enough to Grok that I assumed this was a spin-off of some xAI product. I had to look around to see whether it was actually associated (it looks like it isn't), but it may be something to be aware of.
tschellenbach•8mo ago
I wrote a guide on how to use Cursor for large codebases, and it's working well over here: https://getstream.io/blog/cursor-ai-large-projects/

Cool to see more AI tools address this.

ElijahLynn•8mo ago
Thank you! I think this is the next evolution of using LLMs for coding: understanding all the context from large codebases...
lutzleonhardt•8mo ago
The amazing thing here is that the Brokk AI can access your code like an IDE: it can ask for usages or gather the summary of a file before deciding to get the implementation of a method. It mimics how a dev navigates the codebase, and this is more reliable and token-efficient than the usual grep/rg approach.
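
Concretely, that navigation pattern might look like the following sketch (reusing the hypothetical CodeIntelTools interface from earlier in the thread; this is not Brokk's real code):

    import java.util.List;

    // Illustrative only: how an agent might narrow context step by step,
    // using the hypothetical CodeIntelTools interface sketched earlier.
    class ContextNarrowingExample {
        void narrowContext(CodeIntelTools tools) {
            // 1. Cheap overview first: declarations only, no method bodies.
            String skeleton = tools.getSkeleton("src/main/java/com/example/Billing.java");

            // 2. Follow a symbol that looks relevant to the task.
            List<CodeIntelTools.Usage> usages =
                    tools.findUsages("com.example.Billing#applyDiscount");

            // 3. Only now pull full source, and only for the method that matters.
            String impl = tools.getImplementation("com.example.Billing#applyDiscount");

            // Each step feeds the LLM far fewer tokens than dumping raw grep matches.
        }
    }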
esafak•8mo ago
This ought to be an IDE plugin. Don't make me context switch.
danjl•8mo ago
The "Read" file list sounds a lot like Copilot Edit mode, where you manually specify the list of files that are added to the context. Similarly, Copilot has an Ask (Chat) mode that doesn't change the code. One of the downsides of all these new IDEs is that it is difficult, even for the developers of those tools, to have enough time to test out coding in each of their competitors. Also, the switching cost of changing IDEs is pretty high, even if they are forks of the same code base, which makes it hard for the users to really test out all the options. In the long run, I expect that the "larger" IDE providers will purchase the smaller ones. IOW, if you wait long enough, all the good bits will be in Copilot (or maybe Cursor with their new funding).
jbellis•8mo ago
(creator here)

idk, everyone else seems to want to take the 40 year old IDE paradigm we're all used to (really! that's how old Turbo Pascal 3 is!) and graft AI onto it. I think we need a fundamentally different design to truly take advantage of the change from "I'm mostly reading and writing code at human speeds" to "I'm mostly supervising the AI which is better at generating syntax than I am."

Of course, the downside to going against the crowd is that the crowd is usually right; we'll see how it goes!

danjl•8mo ago
I am a huge supporter of completely re-working the IDE UI as well. I'm not arguing for keeping the existing IDE interfaces. I like that folks are experimenting with entirely new interfaces. In fact, I'd go further and suggest that all of the overly complex interfaces used on any sort of content-creation app, like Unity, Unreal, Photoshop, as well as code IDEs, will eventually be completely refactored to remove all the old complexity in favor of either chat-based or other AI-driven interfaces. My point is simply that there are too many new AI-driven IDEs for folks to try out, even the developers of those IDEs. Many of the features in Brokk that were seemingly described in the Brokk 101 blog video as "differentiators" are existing Copilot features. Has the author ever used Copilot? Or just Cursor? Or another AI variant?
danjl•8mo ago
I'd love to see things like Brokk experiment a bit more with what other information to include in our git repositories, besides the code, that helps improve AI-based code generation. For example, perhaps the repo should include more design information about the look-and-feel, as visual information or Figma files, rather than just, say the CSS and HTML. Or it might help if the repository included more business requirements so that the AI has better information to guide prioritization of changes. Obviously other bits, like coding standards, should be included as well, though perhaps using a larger context might mitigate the need for coding standards if the generated code followed the existing code (which often doesn't happen).
bb88•8mo ago
I think that's what's going to happen over time. We're going to be writing more and more code, but mostly by supervising an AI.

The big problem is that we're treating the AI as an all-knowing oracle. What we should probably be doing is treating the AI as a colleague, allowing it to ask questions about the codebase so it can pick up on the subtle clues.

Often that subtlety isn't visible in the codebase itself, and sometimes the AI will think something is an outright error when in fact it's completely on purpose.

Comments go a long way towards this end, but in large legacy codebases comments may not exist, and the original coders expected people to understand at first glance that the code was correct.

Test-driving Junie, I've had it remove a feature it thought was broken code and then fix the unit tests, instead of trying to understand whether the unit tests were broken or the feature was.

insin•8mo ago
LLM for Large Codebases