frontpage.

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•26s ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•35s ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•1m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•1m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•3m ago•1 comments

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•3m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•4m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•4m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•5m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•6m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•7m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•11m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•11m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•12m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•16m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•16m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•17m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•20m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•20m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•21m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•21m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•22m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•24m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•24m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•25m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•27m ago•0 comments

Show HN: I'm 15 and built a free tool for reading ancient texts.

https://the-lexicon-project.netlify.app/
3•breadwithjam•29m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•30m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•31m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•33m ago•0 comments

ML on Apple ][+

https://mdcramer.github.io/apple-2-blog/k-means/
120•mcramer•4mo ago

Comments

rob_c•4mo ago
Since when did regression get upgraded to full blown ML?
nekudotayim•4mo ago
What is ML if not interpolation and extrapolation?
magic_hamster•4mo ago
A million things.

Diffusion, back propagation, attention, to name a few.

have-a-break•4mo ago
Back prop and attention are just extensions of interpolation.
rob_c•4mo ago
By that logic it's all "just linear maths".

Backprop requires, and is limited to, functions that are analytically differentiable in the usual sense.

Attention is... oh dear. Comparing linear regression to attention is like comparing a diesel jet engine to a horse.

aleph_naught•4mo ago
It's all just a series of S(S(S(....S(0)))) anyways.
stonogo•4mo ago
When you find yourself solving NP-hard problems on an Apple II, chances are strong you've entered machine learning territory.
DonHopkins•4mo ago
Since when did ML get upgraded to full blown AI?
andai•4mo ago
Since we gave up on AI and ML is eh close enough.
drob518•4mo ago
Upvoted purely for nostalgia.
gwbas1c•4mo ago
Any particular reason why the author chose to do this on an Apple ][?

(I mean, the pictures look cool and all.)

I.e., did the author want to experiment with older forms of BASIC, or were they trying to learn more about old computers?

mcramer•4mo ago
I wrote about my motivation at https://mdcramer.github.io/apple-2-blog/motivation/, which is obviously tongue in cheek. Tl;dr, I refurbished my Apple ][+ to try to recover a game I wrote in high school (https://mdcramer.github.io/apple-2-blog/recover/). After being unable to find the floppy with the game, I thought I'd try something just for giggles.
shagie•4mo ago
One of my early "this is neat" programs was a genetic algorithm in Pascal. You entered a sequence of digits and it "evolved" the same sequence. It started out with 10 random numbers. Their fitness (lower was better) was the sum of the per-digit differences from the target. So if the target was "123456" and the test number was "214365", it had a fitness of 6. It took the top 5, then mutated a random digit in each by a random +/- 1. It printed out each row with the full population, so you could see it scrolling as it converged on the target number.

Looking back, I want to say it was probably the July 1992 issue of Scientific American that inspired me to write that (https://www.geos.ed.ac.uk/~mscgis/12-13/s1100074/Holland.pdf). And as that was '92, this might have been on a Mac rather than an Apple ][+... it was certainly in Pascal (my first class in C was in August '92) and I had access to both at the time (I don't think it was Turbo Pascal on a PC, as this was a summer thing and I didn't have an IBM PC at home at the time). Alas, I remember more about the specifics of the program than I do about what desk I was sitting at.
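
For reference, a minimal Python sketch of the scheme described above (the original was Pascal, and the selection and mutation details here are illustrative guesses, not the original code):

    import random

    TARGET = "123456"
    POP_SIZE = 10

    def fitness(candidate):
        # Lower is better: sum of per-digit differences from the target,
        # e.g. fitness("214365") against "123456" is 6.
        return sum(abs(int(a) - int(b)) for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # Nudge one random digit by +/- 1, clamped to 0..9.
        i = random.randrange(len(candidate))
        d = max(0, min(9, int(candidate[i]) + random.choice((-1, 1))))
        return candidate[:i] + str(d) + candidate[i + 1:]

    population = ["".join(random.choice("0123456789") for _ in TARGET)
                  for _ in range(POP_SIZE)]
    while min(fitness(c) for c in population) > 0:
        # Keep the 5 fittest, refill the population with their mutants,
        # and print each generation so you can watch it converge.
        survivors = sorted(population, key=fitness)[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
        print(population)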

Steeeve•4mo ago
I wrote a whole project in Pascal around that time, analyzing two datasets. It was running out of memory the night before it was due, so I decided to have it run twice, once for each dataset.

That's when I learned a very important principle: "When something needs doing quickly, don't force artificial constraints on yourself."

I could have spent three days figuring out how to deal with the memory constraints. But instead I just cut the data in half and gave it two runs. The quick solution was the one that was needed. Kind of an important memory for me that I have thought about quite a bit in the last 30+ years.

aardvark179•4mo ago
I thought this was going to be about the programming language, and I was wondering how they managed to implement it on a machine that small.
Scramblejams•4mo ago
Same. What flavor of ML would be the most appropriate for that challenge, do you think?
taolson•4mo ago
While not exactly ML, David Turner's Miranda system is pretty small, and might be feasible:

https://codeberg.org/DATurner/miranda

noelwelsh•4mo ago
That's also what I was thinking. ML predates the Apple II by 4 years, so I think there is definitely a chance of getting it running! If targeting the Apple IIGS, I think it would be very achievable; you could fit megabytes of RAM in those.
dekhn•4mo ago
Likely any early implementation of ML would have been on a mainframe or minicomputer, not a 6502. A mainframe/minicomputer would have had oodles of storage (both durable and RAM), as well as a compiler for a high level language (which fits what I can see in https://smlfamily.github.io/history/ML2015-talk.pdf and other locations).
noelwelsh•4mo ago
So I've been mildly nerd sniped. It looks like the first target was a PDP-10 [1], running the "DEC 10" implementation of ML on top of Stanford Lisp. The architecture is pretty unusual by modern standards, but it doesn't look to be that powerful and seems to top out at around 1MB of RAM. Next up we have a VAX [2] implementation. It's not clear which specific system it was originally developed for, but we're talking early 80s, so it probably wasn't much more powerful than the PDP-10. Either way, I think a maxed-out Apple IIGS with a hefty 8MB of RAM, perhaps overclocked to 14MHz, is more than enough raw power to handle ML. Unfortunately I haven't been sufficiently nerd sniped to actually implement this. I leave that as an exercise for the reader ;-)

[1]: https://en.wikipedia.org/wiki/PDP-10

[2]: https://en.wikipedia.org/wiki/VAX

dekhn•4mo ago
An enormous amount of software was developed on the PDP-10 and PDP-11 and later VAX systems that could not have been done on the microcomputers of the day. You can't just compare raw RAM and clock rates; the PDPs were set up for multi-user productivity on complex problems and had a wide range of system software to enable building and deploying advanced software.
foobarian•4mo ago
That's funny, pretty sure we used Standard ML on the old oscilloscope Macs in undergrad. Not Apple 2 of course, but still already pretty dated even at that time (late 90s).
amilios•4mo ago
Bit of a weird choice to draw a decision boundary for a clustering algorithm...
mcramer•4mo ago
How so? Drawing a decision boundary is a pretty common visualization technique for understanding how an algorithm partitions a data space.
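
For the unfamiliar: with k-means the "decision boundary" is just the Voronoi partition induced by the centroids, since each point is assigned to its nearest centroid. A minimal Python sketch, assuming already-fitted centroids (the values here are made up):

    import numpy as np

    # Assume k-means has already produced these centroids (illustrative values).
    centroids = np.array([[1.0, 1.0], [4.0, 3.0]])

    def assign(points):
        # Each point gets the label of its nearest centroid (Euclidean
        # distance); the border between the regions is the decision boundary.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        return dists.argmin(axis=1)

    # Evaluate on a grid to rasterize the regions, here as a text-mode plot.
    xs, ys = np.meshgrid(np.linspace(0, 5, 60), np.linspace(0, 5, 30))
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    labels = assign(grid).reshape(xs.shape)
    for row in labels[::-1]:
        print("".join(".#"[v] for v in row))
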
aperrien•4mo ago
An aeon ago, in 1984, I wrote a perceptron on the Apple II. It was amazingly slow (20 minutes to complete a recognition pass), but what most impressed me at the time was that it worked at all. Since that time as a kid, I have always wondered just how far linear optimization techniques could take us. If I could just tell myself then what I know now...
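
A sketch of the classic perceptron learning rule, in Python with made-up toy data; nothing here is from the original 1984 program:

    # Classic perceptron rule on a toy linearly separable problem.
    data = [((0.0, 0.0), -1), ((0.0, 1.0), -1),
            ((1.0, 0.0), -1), ((1.0, 1.0), 1)]  # AND-like labels
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for epoch in range(100):
        errors = 0
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:
                # Nudge the weights toward the misclassified example.
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
                errors += 1
        if errors == 0:
            break  # converged: every point is on the correct side
    print(w, b)
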
alexshendi•4mo ago
This motivates me to try this on my Minstrel 4th (a 21st-century Jupiter Ace clone).
windsignaling•4mo ago
I'm surprised no one else has commented that a few of the conceptual comments in this article are a bit odd or just wrong.

> The final accuracy is 90% because 1 of the 10 observations is on the incorrect side of the decision boundary.

Who is using K-means for classification? If you have labels, then a supervised algorithm seems like a more appropriate choice.

> K-means clustering is a recursive algorithm

It is?

> If we know that the distributions are Gaussian, which is very frequently the case in machine learning

It is?

> we can employ a more powerful algorithm: Expectation Maximization (EM)

K-means is already an instance of the EM algorithm.
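
To make that last point concrete: one k-means iteration is a hard E-step plus an M-step. A minimal Python sketch, with illustrative synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Two illustrative Gaussian blobs.
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    centroids = X[rng.choice(len(X), 2, replace=False)]

    for _ in range(20):
        # "Hard" E-step: each point belongs entirely to its nearest centroid
        # (EM for a Gaussian mixture would assign fractional responsibilities).
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        # M-step: re-estimate each centroid as the mean of its assigned points.
        new = np.array([X[labels == k].mean(axis=0) for k in range(2)])
        if np.allclose(new, centroids):
            break
        centroids = new

    print(centroids)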

mcramer•4mo ago
> Who is using K-means for classification? If you have labels, then a supervised algorithm seems like a more appropriate choice.

The generated data is labeled, but we can imagine those labels don't exist when running k-means. There are many applications for unsupervised clustering. I don't, however, think that there are many applications for running much of anything on an Apple ][+.

> K-means clustering is a recursive algorithm

My bad. It's iterative. I'll fix that. Thanks.

> If we know that the distributions are Gaussian, which is very frequently the case in machine learning

Gaussian distributions are frequent and important in machine learning because of the Central Limit Theorem but, beyond that, you are correct. While many natural phenomena are approximately normal, the reason for the Gaussian's frequent use is often mathematical convenience. I'll correct my post.

> we can employ a more powerful algorithm: Expectation Maximization (EM)

Excellent point. I will fix that, too: "While k-means is simple, it does not take advantage of our knowledge of the Gaussian nature of the data. If we know that the distributions are at least approximately Gaussian, which is frequently the case, we can employ a more powerful application of the Expectation Maximization (EM) framework that takes advantage of this (k-means is a specific implementation of centroid-based clustering that uses an iterative approach similar to EM with 'hard' clustering)." Thank you for pointing out all of this!
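
And a compact sketch of the soft-assignment counterpart, EM for a two-component 1-D Gaussian mixture; the data and initialization are purely illustrative:

    import math, random

    random.seed(1)
    # Illustrative 1-D data drawn from two Gaussians.
    xs = ([random.gauss(0, 1) for _ in range(100)] +
          [random.gauss(4, 1) for _ in range(100)])
    mu, var, mix = [-1.0, 1.0], [1.0, 1.0], [0.5, 0.5]

    def pdf(x, m, v):
        # Gaussian density with mean m and variance v.
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(50):
        # E-step: soft responsibilities instead of k-means' hard assignments.
        resp = []
        for x in xs:
            p = [mix[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: responsibility-weighted updates of weights, means, variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mix[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk

    print(mu, var, mix)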

JSR_FDED•4mo ago
Applesoft BASIC is just so darn readable. Youngsters these days have nothing comparable for learning the basics of expressing an algorithm without first having to know a lot more.

And if it ever became too slow, you could reimplement the slow part in 6502 assembler, which has its own elegance. Great way to learn, glad I came up that way.

nikolay•4mo ago
You don't even need a computer for ML [0]!

[0]: https://proceedings.mlr.press/v170/marx22a/marx22a.pdf