
Async I/O on Linux in databases

https://blog.canoozie.net/async-i-o-on-linux-and-durability/
18•jtregunna•1h ago•4 comments

Hungary's oldest library is fighting to save books from a beetle infestation

https://www.npr.org/2025/07/14/nx-s1-5467062/hungary-library-books-beetles
105•smollett•3d ago•4 comments

Make Your Own Backup System – Part 1: Strategy Before Scripts

https://it-notes.dragas.net/2025/07/18/make-your-own-backup-system-part-1-strategy-before-scripts/
241•Bogdanp•12h ago•81 comments

I tried vibe coding in BASIC and it didn't go well

https://www.goto10retro.com/p/vibe-coding-in-basic
81•ibobev•3d ago•70 comments

The Big LLM Architecture Comparison

https://magazine.sebastianraschka.com/p/the-big-llm-architecture-comparison
6•mdp2021•1h ago•0 comments

Nobody knows how to build with AI yet

https://worksonmymachine.substack.com/p/nobody-knows-how-to-build-with-ai
347•Stwerner•16h ago•274 comments

Local LLMs versus offline Wikipedia

https://evanhahn.com/local-llms-versus-offline-wikipedia/
238•EvanHahn•15h ago•126 comments

Death by AI

https://davebarry.substack.com/p/death-by-ai
298•ano-ther•17h ago•117 comments

Mushroom learns to crawl after being given robot body (2024)

https://www.the-independent.com/tech/robot-mushroom-biohybrid-robotics-cornell-b2610411.html
112•Anon84•2d ago•29 comments

How to Run an Arduino for Years on a Battery (2021)

https://makecademy.com/arduino-battery
43•thunderbong•3d ago•11 comments

Beyond Meat fights for survival

https://foodinstitute.com/focus/beyond-meat-fights-for-survival/
66•airstrike•8h ago•122 comments

Matterport walkthrough of the original Microsoft Building 3

https://my.matterport.com/show/?m=SZSV6vjcf4L
35•uticus•3d ago•21 comments

Borg – Deduplicating archiver with compression and encryption

https://www.borgbackup.org/
36•rubyn00bie•5h ago•6 comments

Ring introducing new feature to allow police to live-stream access to cameras

https://www.eff.org/deeplinks/2025/07/amazon-ring-cashes-techno-authoritarianism-and-mass-surveillance
249•xoa•9h ago•119 comments

What were the earliest laws like?

https://worldhistory.substack.com/p/what-were-the-earliest-laws-really
68•crescit_eundo•4d ago•20 comments

Data and Democracy: Charting Assault on American Democracy and a Path Forward

https://data4democracy.substack.com/p/on-data-and-democracy-mid-year-roundup
59•heavyset_go•6h ago•10 comments

“Bypassing” specialization in Rust

https://oakchris1955.eu/posts/bypassing_specialization/
23•todsacerdoti•3d ago•11 comments

Rethinking CLI interfaces for AI

https://www.notcheckmark.com/2025/07/rethinking-cli-interfaces-for-ai/
157•Bogdanp•15h ago•72 comments

A Treatise for One Network – Anonymous National Deliberation [pdf]

https://simurgh-beau.github.io/
3•simurgh_beau•3d ago•0 comments

The curious case of the Unix workstation layout

https://thejpster.org.uk/blog/blog-2025-07-19/
87•ingve•15h ago•36 comments

Open-Source BCI Platform with Mobile SDK for Rapid Neurotech Prototyping

https://www.preprints.org/manuscript/202507.1198/v1
4•GaredFagsss•3d ago•0 comments

Airbnb allowed rampant price gouging following L.A. fires, city attorney alleges

https://www.latimes.com/california/story/2025-07-19/airbnb-allowed-price-gouging-following-l-a-fires-city-attorney-alleges-in-lawsuit
41•miguelazo•4h ago•48 comments

Piano Keys

https://www.mathpages.com/home/kmath043.htm
49•gametorch•4d ago•49 comments

New York’s bill banning One-Person Train Operation

https://www.etany.org/statements/impeding-progress-costing-riders-opto
90•Ericson2314•6h ago•118 comments

The AGI Final Frontier: The CLJ-AGI Benchmark

https://raspasov.posthaven.com/the-agi-final-frontier-the-clj-agi-benchmark
7•raspasov•6h ago•1 comment

I Used Arch, BTW: macOS, Day 1

https://yberreby.com/posts/i-used-arch-btw-macos-day-1/
62•yberreby•8h ago•62 comments

The borrowchecker is what I like the least about Rust

https://viralinstruction.com/posts/borrowchecker/
195•jakobnissen•12h ago•271 comments

How we tracked down a Go 1.24 memory regression

https://www.datadoghq.com/blog/engineering/go-memory-regression/
153•gandem•2d ago•8 comments

Erythritol linked to brain cell damage and stroke risk

https://www.sciencedaily.com/releases/2025/07/250718035156.htm
57•OutOfHere•6h ago•33 comments

The future of ultra-fast passenger travel

https://spaceambition.substack.com/p/beyond-the-sound-barrier
29•simonebrunozzi•11h ago•71 comments

I tried vibe coding in BASIC and it didn't go well

https://www.goto10retro.com/p/vibe-coding-in-basic
79•ibobev•3d ago

Comments

firesteelrain•5h ago
Not surprised; there were so many variations of BASIC that unless you train ChatGPT on a bunch of code examples and contexts, it can only get so close.

Try a local LLM, then train it.

ofrzeta•4h ago
> ... unless you train ChatGPT on a bunch of code examples and contexts, it can only get so close.

How do you do this?

sixothree•3h ago
RAG maybe?
firesteelrain•5m ago
RAG is a good suggestion for pulling in context at runtime without retraining the weights
oharapj•2h ago
If you're OpenAI you scrape StackOverflow and GitHub and spend billions of dollars on training. If you're a user, you don't
firesteelrain•4m ago
The gist is:

1. Gather training data

2. Format it into JSONL or Hugging Face Dataset format

3. Use Axolotl or Hugging Face peft to fine-tune

4. Export model to GGUF or HF format

5. Serve via Ollama or llama.cpp
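For step 2, a minimal sketch of what one JSONL training record might look like. The `instruction`/`output` field names follow the common Alpaca-style convention, not a fixed standard; Axolotl and other fine-tuning frameworks each document their own expected schema, so check before committing to one:

```python
import json

# One instruction-tuning record per line of JSONL. The "instruction"/"output"
# field names follow the common Alpaca-style convention; fine-tuning
# frameworks each document their expected schema.
records = [
    {
        "instruction": "Write an Atari BASIC program that prints HELLO ten times.",
        "output": '10 FOR I=1 TO 10\n20 PRINT "HELLO"\n30 NEXT I',
    },
]

# Serialize: one compact JSON object per line.
jsonl = "\n".join(json.dumps(rec) for rec in records)

# Round-trip check: each line parses back to the original record.
loaded = [json.loads(line) for line in jsonl.splitlines()]
assert loaded == records
print(len(loaded))
```

Thousands of such lines, written to a `.jsonl` file, form the dataset the fine-tuning step consumes.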

empressplay•5h ago
We have Claude Code writing Applesoft BASIC fine. It wrote a text adventure (complete with puzzles) and a PONG clone, among other things. Obviously it didn't do it 100% right straight out of the gate, but the hand-holding wasn't extensive.

I've been using Grok 4 to write 6502 assembly language and it's been a bit of a slog, but honestly the issues I've encountered are due mostly to my naivety. If I'm disciplined and make sure it has all of the relevant information, and I'm (very) incremental, I've had some success writing game logic. You can't just tell it to build an entire game in a prompt, but if you're gradual about it you can go places with it.

Like any tool, if you understand its idiosyncrasies you can cater for them, and be productive with it. If you're not then yeah, it's not going to go well.

hammyhavoc•3h ago
Ah yes, truly impressive: Pong, a game that countless textbooks and tutorials have recreated numerous times. There's a mountain of training data for something so unoriginal.
recipe19•5h ago
I work on niche platforms where the amount of example code on Github is minimal, and this definitely aligns with my observations. The error rate is way too high to make "vibe coding" possible.

I think it's a good reality check for the claims of impending AGI. The models still depend heavily on being able to transform other people's work.

empressplay•5h ago
I don't know if you're working with modern models. Grok 4 doesn't really know much about assembly language on the Apple II but I gave it all of the architectural information it needed in the first prompt of a conversation and it built compilable and executable code. Most of the issues I encountered were due to me asking for too much in a prompt. But it built a complete, albeit simple, assembly language game in a few hours of back and forth with it. Obviously I know enough about the Apple II to steer it when it goes awry, but it's definitely able to write 'original' code in a language / platform it doesn't inherently comprehend.
timschmidt•4h ago
This matches my experience as well. Poor performance usually means I haven't provided enough context or have asked for too much in a single prompt. Modifying the prompt accordingly and iterating usually results in satisfactory output within the next few tries.
gompertz•5h ago
Yep I program in some niche languages like Pike, Snobol4, Unicon. Vibe coding is out of the question for these languages. Forced to use my brain!
jjmarr•4h ago
I've noticed the error rate doesn't matter if you have good tooling feeding into the context. The AI hallucinates, sees the bug, and fixes it for you.
winrid•4h ago
Even with typescript Claude will happily break basic business logic to make tests pass.
CalRobert•3h ago
That seems like the tests don’t work?
motorest•2h ago
> Even with typescript Claude will happily break basic business logic to make tests pass.

It's my understanding that LLMs change the code to meet a goal, and if you prompt them with vague instructions such as "make tests pass" or "fix tests", LLMs in general apply the minimum necessary and sufficient changes to any code that allows their goal to be met. If you don't explicitly instruct them, they can't and won't tell apart project code from test code. So they will change your project code to make tests work.

This is not a bug. Changing project code to make tests pass is a fundamental approach to refactoring projects, and the whole basis of TDD. If that's not what you want, you need to prompt them accordingly.

chuckadams•2h ago
Fixing bugs is also changing project code to make tests pass. The assistant is pretty good at knowing which side to change when it’s working from documentation that describes the correct behavior.
Terr_•52m ago
> It's my understanding that LLMs change the code to meet a goal

I assume in this case you mean a broader conventional application, of which an LLM algorithm is a smaller-but-notable piece?

LLMs themselves have no goals beyond predicting new words for a document that "fit" the older words. It may turn 2+2 into 2+2=4, but it's not actually doing math with the goal of making both sides equal.

andsoitis•4h ago
> The models

The models don’t have a model of the world. Hence they cannot reason about the world.

hammyhavoc•3h ago
"reason" is doing some heavy-lifting in the context of LLMs.
vineyardmike•4h ago
Completely agree. I'm a professional engineer, but I like to get some ~vibe~ help on personal projects after work, when I'm tired and just want my project to go faster. I've had a ton of success with Go, JavaScript, Python, etc. I had mixed success writing idiomatic Elixir roughly a year ago, but I'd largely assumed that would be resolved by today, since every model maker has started aggressively filling training data with code now that we've found the PMF of LLM code assistance.

Last night I tried to build a super basic “barely above hello world” project in Zig (a language where IDK the syntax), and it took me trying a few different LLMs to find one that could actually write anything that would compile (Gemini w/ search enabled). I really wasn’t expecting it considering how good my experience has been on mainstream languages.

Also, I think OP did rather well considering BASIC is hardly used anymore.

docandrew•5h ago
Maybe other folks’ vibe coding experiences are a lot richer than mine have been, but I read the article and reached the opposite conclusion of the author.

I was actually pretty impressed that it did as well as it did in a largely forgotten language and outdated platform. Looks like a vibe coding win to me.

sixothree•4h ago
Here's an example of a recent experience.

I have a web site that is sort of a cms. I wanted users to be able to add a list of external links to their items. When a user adds a link to an entry, the web site should go out and fetch a cached copy of the site. If there are errors, it should retry a few times. It should also capture an mhtml single file as well as a full page screenshot. The user should be able to refresh the cache, and the site should keep all past versions. The cached copy should be viewable in a modal. The task also involves creating database entities, DTOs, CQRS handlers, etc.

I asked Claude to implement the feature, went and took a shower, and when I came out it was done.

hammyhavoc•3h ago
Let us know how the security audit by human beings on the output goes.
catmanjan•3h ago
The auditors are using LLMs too!
nico•3h ago
I'm pretty new to Claude Code; I've been using it in a very interactive way.

What settings are you using to get it to just do all of that without your feedback or approval?

Are you also running it inside a container, or setting some sort of command restrictions, or just yoloing it on a regular shell?

serf•4h ago
please just include the prompts rather than saying "So I said X.."

There is a lot of nuance in how X is said.

pavelstoev•4h ago
I vibe coded a site about vibe 2 code projects. https://builtwithvibe.com/
esafak•3h ago
The "Yo dawg, I heard..." memes are writing themselves today.
clambaker117•4h ago
Wouldn’t it have been better to use Claude 4?
sixothree•4h ago
I'm thinking Gemini CLI because of the context. He could add some information about the programming language itself in the project. I think that would help immensely.
4b11b4•2h ago
Even though the max token limit is higher, it's more complicated than that.

As the context length increases, undesirable things happen.

calvinmorrison•4h ago
This is great. I am currently vibecoding a replacement connector for some old EDI software written in a business-BASIC kind of language called ProvideX; the fork of it we use has undocumented behaviour.

It uses some built-in FTP tooling that's terrible and barely works, even internally, anymore.

We are replacing it with a WinSCP implementation, since WinSCP can talk over a COM object.

Unsurprisingly, the COM object in BASIC works great. The problem is that I have no idea what I am doing. I spent hours doing something like

WINSCP_SESSION'OPEN(WINSCP_SESSION_OPTIONS)

when I needed

WINSCP_SESSION'OPEN(*WINSCP_SESSION_OPTIONS)

It was obvious in hindsight, because it's a pointer kind of setup, but I didn't find it until pages and pages deep into old PDF manuals.

However, while the vibecoding agents did not understand the syntax of the system, they did help me analyse the old code, format it, and at least throw some stuff at the wall.

I finished it up Friday; hopefully I deploy Monday.

CMay•3h ago
This does kind of make me wonder.

It's believable that we might either see an increase in the number of new programming languages, since making new languages is becoming more accessible, or see fewer new languages, as the problems of the existing ones are worked around more reliably with LLMs.

Yet, what happens to adoption? Perhaps getting people to adopt new languages will be harder as generations come to expect LLM support. Would you almost need to use LLMs to synthesize tons of code examples that convert into the new language to prime the inputs?

Once conversational intelligence machines reach a sort of godlike generality, then maybe they could very quickly adapt languages from much fewer examples. That still might not help much with the gotchas of any tooling or other quirks.

So maybe we'll all snap to a new LLM super-language in 20 years, or we could be concreting ourselves into the most popular languages of today for the next 50 years.

hammyhavoc•3h ago
Fantasy.
ilaksh•3h ago
I think it's a fair article.

However I will just mention a few things. When you make an article like this please take note of the particular language model used and acknowledge that they aren't all the same.

Also realize that the context window is pretty large and you can help it by giving it information from manuals etc. so you don't need to rely on the intrinsic knowledge entirely.

If they used o3 or o3 Pro and gave it a few sections of the manual it might have gotten farther. Also if someone finds a way to connect an agent to a retro computer, like an Atari BASIC MCP that can enter text and take screenshots, "vibe coding" can work better as an agent that can see errors and self-correct.

manca•3h ago
I literally had the same experience when I asked the top code LLMs (Claude Code, GPT-4o) to rewrite the code from Erlang/Elixir codebase to Java. It got some things right, but most things wrong and it required a lot of debugging to figure out what went wrong.

It's the absolute proof that they are still dumb prediction machines, fully relying on the type of content they've been trained on. They can't generalize (yet) and if you want to use them for novel things, they'll fail miserably.

hammyhavoc•3h ago
They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.
zer00eyz•3h ago
I will give you an example of where you are dead wrong, and one where the article is spot on (without diving into historic artifacts).

I run Home Assistant, and I don't get to play with it every day. Here, LLMs excel at filling in the (legion of) blanks in both the manual and end-user devices. There is a large body of work for them to summarize and work against.

I also play with SBCs. Many of these are "fringe" at best. For them, LLMs are, as you say, "not fit for purpose".

What kind of development you use LLMs for will determine your experience with them. The tool may or may not live up to the hype, depending on how common, well documented and frequent your issue is. Once you start hitting these walls, you realize that no, real reasoning, leaps of inference, and intelligence are still far away.

SecuredMarvin•54m ago
I also made this experience. As long as the public level of knowledge is high, LLMs are massively helpful. Otherwise not so much and still hallucinating. It does not matter if you think highly of this public knowledge. QFT, QED and Gravity are fine, AD emulation on SAMBA, or Atari Basic not so much.

If I would program Atari Basic, after finishing my Atari Emulator on my C64, I would learn the environment and test my assumptions. Single shot LLMs questions won't do it. A strong agent loop could probably.

I believe that LLMs are yanking the needle to 80%. This level is easy achievable for professionals of the trade and this level is beyond the ability of beginners. LLMs are really powerful tools here. But if you are trying for 90% LLMs are always trying to keep you down.

And if you are trying for 100%, new, fringe or exotic LLMs are a disaster because they do not learn and do not understand, even while being inside the token window.

We learn that knowledge, (power) and language proficiency are an indicator for crystalline but not fluid intelligence

otabdeveloper4•20m ago
> yanking the needle to 80%

80 percent of what, exactly? A software developer's job isn't to write code, it's understanding poorly-specified requirements. LLMs do nothing for that unless your requirements are already public on Stackoverflow and Github. (And in that case, do you really need an LLM to copy-paste for you?)

motorest•3h ago
> They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.

This comment is detached from reality. LLMs in general have been proven to be effective at even creating complete, fully working and fully featured projects from scratch. You need to provide the necessary context and use popular technologies with enough corpus to allow the LLM to know what to do. If one-shot approaches fail, a few iterations are all it takes to bridge the gap. I know that to be a fact because I do it on a daily basis.

otabdeveloper4•24m ago
> because I do it on a daily basis

Cool. How many "complete, fully working" products have you released?

Must be in the hundreds now, right?

abrookewood•3h ago
Clearly the issue is that you are going from Erlang/Elixir to Java, rather than the other way around :)

Jokes aside, they are pretty different languages. I imagine you'd have much better luck going from .Net to Java.

nine_k•1h ago
This mostly means that LLMs are good at simpler forms of pattern matching, and have much harder time actually reasoning at a significant depth. (It's not easy even for human intellect, the finest we currently have.)
tsimionescu•1h ago
Sure, it's easier to solve an easier problem, news at eleven. In particular, translating from C# to Java could probably be automated with some 90% accuracy using a decent sized bash script.
h4ck_th3_pl4n3t•3h ago
I just wish the LLM providers would realize this and instead provide specialized LLMs for each programming language. The results would likely be better.
chuckadams•2h ago
The local models JetBrains IDEs use for completion are specialized per-language. For more general problems, I’m not sure over-fitting to a single language is any better for a LLM than it is for a human.
conception•1h ago
I’m curious what your process was. If you just said “rewrite this in Java” I’d expect that to fail. If you treated the LLM like a junior developer on an official project: worked with it to document the codebase, come up with a plan, tasks for each part of the codebase, and a solid workflow prompt, then I would expect it to succeed.
nerdsniper•1h ago
Claude Code / 4o struggle with this for me, but I had Claude Opus 4 rewrite a 2,500 line powershell script for embedded automation into Python and it did a pretty solid job. A few bugs, but cheaper models were able to clean those up. I still haven't found a great solution for general refactoring -- like I'd love to split it out into multiple Python modules but I rarely like how it decides to do that without me telling it specifically how to structure the modules.
ofrzeta•3h ago
It didn't go well? I think it went quite well. It even produced an almost working drawing program.
abrookewood•3h ago
Yep, thought the same thing. I guess people have very different expectations.
Radle•3h ago
I had way better results. I'd assume the same would have happened for the author if he had provided the LLM with full documentation of Atari BASIC and some example programs.

Especially when asking the LLM to create a drawing program and a game, the author would probably have received working code if he had supplied the AI with documentation for the graphics functions and sprite rendering in Atari BASIC.

xiphias2•2h ago
4o is not even a coding model and very far from the best coding models OpenAI has, I seriously don't understand why these articles are upvoted so much
throw101010•45m ago
> I seriously don't understand why these articles are upvoted so much

It confirms a bias for some, and it triggers others who might hold the opposite position (and maybe have a bias too, on the other end).

Perfect combo for a successful social media post... it's literally all about "attention" from start to finish.

fcatalan•2h ago
I had more luck with a little experiment a few days ago: I took phone pics of one of the shorter BASIC listings from Tim Hartnell's "Giant Book of Computer Games" (I learned to program out of those back in the early 80s, so I treasure my copy) and asked Gemini to translate it to plain C. It compiled and played just fine on the first go.
edent•1h ago
Vibe Coding seems to work best when you are already an experienced programmer.

For example "Prompt: Write me an Atari BASIC program that draws a blue circle in graphics mode 7."

You need to know that there are various graphics modes and that mode 7 is the best for your use-case. Without that preexisting knowledge, you get stuck very quickly.

throwawaylaptop•56m ago
Exactly this. I'm a self taught PHP/jQuery guy that learned it well enough to make an entire saas that enough companies pay for that it's a decent little lifestyle business.

I started another project recently, basically vibe coding in PHP. Instead of a single-page app like I made before, it's just page-by-page loading. Which means the AI only needs to keep a few functions and the database in its head, not constantly work on some crazy UI management framework (whatever that's called).

It's made in a few days what would have taken me weeks as an amateur. Yet I know enough to catch a few 'mistakes' and remind it to do it better.

I'm happy enough.

kqr•46m ago
Not only is asking for mode 7 a useful constraint, it also makes sure the context contains domain-expert terminology, which puts the LLM in a better spot in the sampling space.
cfn•29m ago
Just for fun I asked ChatGPT "How would you ask an LLM to write a drawing program for the ATARI?" and it asked back a bunch of details to which I answered "I have no idea, just go with the simplest option". It chose the correct graphics mode and BASIC and created the program (which I didn't test).

I still agree with you for large applications but for these simple examples anyone with a basic understanding of vibe coding could wing it.

baxtr•27m ago
This is a description of a “tool”. Anyone can use a hammer and chisel to carve out wood, but only an artist with extensive experience will create something truly remarkable.

I believe many in this debate are conflating tools and magic wands.

j4coh•12m ago
In this case I asked ChatGPT without the part specifying mode 7 and it replied with a working program using mode 7, with a comment at the top that mode 7 would be the best choice.
Earw0rm•1h ago
Has anyone tried it on x87 assembly language?

For those that don't know: x87 was the FPU for 32-bit x86 architectures. It's not terribly complicated, but it uses stack-based register addressing with a fixed-size (eight-entry) stack.

All operations work on the top-of-stack register and one other register operand, and push the result onto the top of the stack (optionally popping the previous top of stack before the push).

It's hard, but not horribly so, for humans to write; more a case of being annoyingly slow and having to be methodical, because you have to reason about the state of the stack at every step.

I'd be very curious as to whether a token-prediction machine can get anywhere with this kind of task, as it requires a strong mental model of what's actually happening, or at least the ability to consistently simulate one as intermediate tokens/words.
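As a toy illustration of the bookkeeping in question (a Python model of the stack discipline described above, not actual x87 code or an emulator), here is the eight-entry stack with an FLD-style push and an FADDP-style add-and-pop:

```python
# Toy model of the x87 FPU register stack: eight entries, ST(0) on top.
# This only illustrates the state a programmer must track mentally.
class X87Stack:
    DEPTH = 8

    def __init__(self):
        self.regs = []  # index 0 is ST(0), the top of stack

    def fld(self, value):
        # FLD: push a value onto the stack.
        if len(self.regs) == self.DEPTH:
            raise OverflowError("x87 stack overflow")
        self.regs.insert(0, value)

    def faddp(self, i=1):
        # FADDP ST(i), ST(0): ST(i) += ST(0), then pop the old top.
        self.regs[i] += self.regs[0]
        self.regs.pop(0)

    def st(self, i=0):
        return self.regs[i]

# Compute 1.5 + 2.5 the x87 way: push both operands, then add-and-pop.
s = X87Stack()
s.fld(1.5)
s.fld(2.5)   # now ST(0)=2.5, ST(1)=1.5
s.faddp()    # ST(0) = 1.5 + 2.5
print(s.st(0))  # 4.0
```

Even in this tiny example, every instruction changes which value "ST(0)" refers to, which is exactly the kind of implicit state a token predictor would have to simulate consistently.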

silisili•34m ago
I'm going to doubt that. I was pushing GPT a couple weeks ago to test its limits. It's 100% unable to write compilable Go ASM syntax. In fairness it's slightly oddball, but enough exists that it's not esoteric.

In the error feedback cycle, it kept blaming Go, not itself. A bit eye opening.

messe•22m ago
I'm comfortable writing asm for quite a few architectures, but Go's assembler...

When I struggle to write Go ASM, I also blame Go and not myself.

nine_k•1h ago
All these stories about vibe coding going well or wrong remind me of an old joke.

A man visits his friend's house. There is a dog in the house. The friend says that the dog can play poker. The man is incredulous, but they sit at a table and have a game of poker; the dog actually can play!

The man says: "Wow! Your dog is incredibly, fantastically smart!"

The friend answers: "Oh, well, no, he's a naïve fool. Every time he gets a good hand, he starts wagging his tail."

Whether you see LLMs impressively smart or annoyingly foolish depends on your expectations. Currently they are very smart talking dogs.

kqr•48m ago
Somehow this also reminds me of http://raisingtalentthebook.com/wp-content/uploads/2014/04/t...

"I taught my dog to whistle!"

"Really? I don't hear him whistling."

"I said I taught him, not that he learnt it."

Yokolos•1h ago
When back-and-forth prompting starts producing nonsense, I've found it best to just start a new chat/context with the latest working version and then try again with a slightly more detailed prompt that tries to avoid the issues encountered in the previous chat. It usually helps. AI generally gets itself lost quickly, which can be annoying.
danjc•41m ago
What's missing here is tool use.

For example, if the LLM had a compile tool, it would likely have been able to correct the syntax errors.

Similarly, visual errors may also have been caught if it were able to run the program and capture screens.
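A sketch of such a compile-and-retry loop, with `fake_model` and `compile_tool` as hypothetical stand-ins (a real agent would call an LLM and the target platform's compiler; here Python's built-in compile() plays the compiler, and the "model" is a stub that fixes a known typo once it sees the error):

```python
# Sketch of a compile-feedback loop. fake_model stands in for an LLM call
# and compile() stands in for a real compiler; both are placeholders.
def fake_model(prompt, error=None):
    # A real agent would send the prompt (and any compiler error) to an LLM.
    if error is None:
        return "print('hello'"   # first attempt: unbalanced parenthesis
    return "print('hello')"      # "corrected" after seeing the error

def compile_tool(source):
    # Return None on success, or the compiler diagnostic on failure.
    try:
        compile(source, "<generated>", "exec")
        return None
    except SyntaxError as e:
        return str(e)

def generate_with_retries(prompt, max_attempts=3):
    error = None
    for _ in range(max_attempts):
        source = fake_model(prompt, error)
        error = compile_tool(source)
        if error is None:
            return source        # compiles cleanly: accept it
    raise RuntimeError("gave up after repeated compile errors")

result = generate_with_retries("print hello")
print(result)  # print('hello')
```

The same loop shape extends to the visual case: replace `compile_tool` with something that runs the program and captures a screenshot for the model to inspect.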

wbolt•33m ago
Exactly! The way the LLM is used here is very, very basic and outdated. This experiment should be redone in a proper "agentic" setup, where there is a feedback loop between the model and the runtime, plus access to documentation and the internet. The goal now is not to encapsulate all the knowledge inside a single LLM; that is too problematic and costly. An LLM is a language model, not a knowledge database. It lets you interpret and interact with knowledge and text data from multiple sources.