frontpage.

David Lynch LA House

https://www.wallpaper.com/design-interiors/david-lynch-house-los-angeles-for-sale
25•ewf•1h ago•2 comments

Apple: SSH and FileVault

https://keith.github.io/xcode-man-pages/apple_ssh_and_filevault.7.html
271•ingve•5h ago•79 comments

The Sagrada Família Takes Its Final Shape

https://www.newyorker.com/magazine/2025/09/22/is-the-sagrada-familia-a-masterpiece-or-kitsch
141•pseudolus•3d ago•65 comments

Nvidia buys $5B in Intel

https://www.tomshardware.com/pc-components/cpus/nvidia-and-intel-announce-jointly-developed-intel...
828•stycznik•14h ago•486 comments

Want to piss off your IT department? Are the links not malicious looking enough?

https://phishyurl.com/
190•jordigh•3h ago•45 comments

Learn Your Way: Reimagining Textbooks with Generative AI

https://research.google/blog/learn-your-way-reimagining-textbooks-with-generative-ai/
253•FromTheArchives•8h ago•174 comments

This map is not upside down

https://www.maps.com/this-map-is-not-upside-down/
185•aagha•8h ago•292 comments

AI tools are making the world look weird

https://strat7.com/blogs/weird-in-weird-out/
42•gaaz•3h ago•19 comments

Llama-Factory: Unified, Efficient Fine-Tuning for 100 Open LLMs

https://github.com/hiyouga/LLaMA-Factory
20•jinqueeny•2h ago•6 comments

Rupert's snub cube and other Math Holes

http://tom7.org/ruperts/
50•QuadmasterXLII•2d ago•3 comments

Meta’s live demo fails; “AI” recording plays before the actor takes the steps

https://www.reddit.com/r/LivestreamFail/comments/1nkbig7/metas_live_staged_demo_fails_the_ai_reco...
307•personjerry•5h ago•184 comments

Show HN: I created a small 2D game about an ant

https://aanthonymax.github.io/ant-and-apples/
39•aanthonymax•4h ago•6 comments

Show HN: Asxiv.org – Ask ArXiv papers questions through chat

https://asxiv.org/
83•anonfunction•1w ago•6 comments

Visual lexicon of consumer aesthetics from the 1970s until now

https://cari.institute/
19•tontonius•3d ago•2 comments

Slack has raised our charges by $195k per year

https://skyfall.dev/posts/slack
2861•JustSkyfall•1d ago•1234 comments

Launch HN: Cactus (YC S25) – AI inference on smartphones

https://github.com/cactus-compute/cactus
83•HenryNdubuaku•10h ago•42 comments

Classic recessive-or-dominant gene dynamics may not be so simple

https://news.stanford.edu/stories/2025/09/classic-recessive-dominant-gene-dynamics-pesticide-resi...
15•hhs•3h ago•1 comment

TernFS – An exabyte scale, multi-region distributed filesystem

https://www.xtxmarkets.com/tech/2025-ternfs/
201•rostayob•11h ago•77 comments

Tldraw SDK 4.0

https://tldraw.dev/blog/tldraw-sdk-4-0
74•bpierre•6h ago•33 comments

KDE is now my favorite desktop

https://kokada.dev/blog/kde-is-now-my-favorite-desktop/
711•todsacerdoti•13h ago•582 comments

Configuration files are user interfaces

https://ochagavia.nl/blog/configuration-files-are-user-interfaces/
142•todsacerdoti•9h ago•83 comments

Flipper Zero Geiger Counter

https://kasiin.top/blog/2025-08-04-flipper_zero_geiger_counter_module/
215•wgx•12h ago•67 comments

Luau – Fast, small, safe, gradually typed scripting language derived from Lua

https://luau.org/
154•andsoitis•12h ago•71 comments

OpenTelemetry Collector: What It Is, When You Need It, and When You Don't

https://oneuptime.com/blog/post/2025-09-18-what-is-opentelemetry-collector-and-why-use-one/view
70•ndhandala•8h ago•21 comments

Tracking Trust with Rust in the Kernel

https://lwn.net/Articles/1034603/
4•pykello•3d ago•0 comments

The quality of AI-assisted software depends on unit of work management

https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/
148•mogambo1•12h ago•88 comments

When Knowing Someone at Meta Is the Only Way to Break Out of "Content Jail"

https://www.eff.org/pages/when-knowing-someone-meta-only-way-break-out-content-jail
238•01-_-•7h ago•121 comments

Show HN: Nallely – A Python signals/MIDI processing system inspired by Smalltalk

https://dr-schlange.github.io/nallely-midi/
8•drschlange•1h ago•1 comment

TIC-80 – Tiny Computer

https://tic80.com/
50•archargelod•4d ago•10 comments

They Know More Than I Do

https://www.cybadger.com/they-know-more-than-i-do-managing-an-expert-team-when-you-cant-do-their-...
9•r4um•3d ago•0 comments

Llama-Factory: Unified, Efficient Fine-Tuning for 100 Open LLMs

https://github.com/hiyouga/LLaMA-Factory
18•jinqueeny•2h ago

Comments

Twirrim•1h ago
https://llamafactory.readthedocs.io/en/latest/

I found this link more useful.

"LLaMA Factory is an easy-to-use and efficient platform for training and fine-tuning large language models. With LLaMA Factory, you can fine-tune hundreds of pre-trained models locally without writing any code."
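As a rough illustration of the no-code workflow the docs describe, training runs are driven by a YAML config passed to the project's `llamafactory-cli` tool. The sketch below follows the shape of the repo's LoRA SFT examples; treat the specific model, dataset, and values as placeholder assumptions:

```yaml
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: alpaca_en_demo
template: llama3
cutoff_len: 1024

### output
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
num_train_epochs: 3.0
```

You then launch it with something like `llamafactory-cli train path/to/sft.yaml`, with no training code of your own.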

hall0ween•58m ago
Are there any use cases, aside from code generation and formatting, where fine-tuning is consistently useful?
clipclopflop•27m ago
Creating small, specialized models for specific tasks. Leveraging the up-front training/data as a generalized base lets you quickly create a small local model whose outputs for that task come close to or match what you would see from a large, hosted model.
metadat•43m ago
This reminds me conceptually of the Nvidia NIM factory, where they attempt to optimize models in bulk.

https://www.nvidia.com/en-us/ai/nim-for-manufacturing/

Word on the street is that the project has yielded largely unimpressive results compared to its potential, but NV is still investing in an attempt to further raise the GPU saturation waterline.

p.s. The project logo stood out to me as presenting the Llama releasing some "steam" with gusto. I wonder if that was intentional? Sorry for the immature take, but resisting the scatological jokes is tough.

tensorlibb•26m ago
This is incredible! What GPU configs, from budget to ultra-high-end, would you recommend for local fine-tuning?

Always curious to see what other AI enthusiasts are running!

kelsey98765431•13m ago
FYI, it also supports pre-training, reward model training, and RL, not just fine-tuning (SFT). My team built a managed training solution that runs on top of LLaMA Factory, and it's quite excellent and well supported. You will need pretty serious equipment to get good results out of it, think 8xH200. For people at home, I would look at doing an SFT of Gemma 3 270M or maybe a 1.6B Qwen3, but keep in mind you have to fit the dataset in memory alongside the model and KV cache. Cheers.
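The memory point above can be made concrete with a back-of-the-envelope estimate. The sketch below assumes fp16/bf16 weights and Adam-style optimizer state (gradients plus two moments per trainable parameter); all numbers are illustrative assumptions, not measurements, and it ignores activations, KV cache, and the dataset itself:

```python
# Rough GPU memory estimate for fine-tuning, in GiB.
# Assumes 2 bytes/param (bf16) and Adam-style state for trainable params.

def full_finetune_gib(params_b: float, bytes_per_param: int = 2) -> float:
    """Full fine-tune: weights + gradients + two optimizer moments (4x weights)."""
    total_bytes = params_b * 1e9 * bytes_per_param * 4
    return total_bytes / 2**30

def lora_finetune_gib(params_b: float, trainable_frac: float = 0.01,
                      bytes_per_param: int = 2) -> float:
    """LoRA: frozen base weights, plus a small adapter carrying training state."""
    base = params_b * 1e9 * bytes_per_param                       # frozen weights
    adapter = params_b * 1e9 * trainable_frac * bytes_per_param * 4
    return (base + adapter) / 2**30

if __name__ == "__main__":
    for size in (0.27, 1.6):  # the ~270M and ~1.6B models mentioned above
        print(f"{size}B params -> full: {full_finetune_gib(size):5.2f} GiB, "
              f"LoRA: {lora_finetune_gib(size):5.2f} GiB")
```

Even this crude model shows why a small LoRA run fits on a consumer GPU while serious multi-stage training pushes you toward 8xH200-class hardware.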