frontpage.

The world heard JD Vance being booed at the Olympics. Except for viewers in USA

https://www.theguardian.com/sport/2026/feb/07/jd-vance-boos-winter-olympics
2•treetalker•1m ago•0 comments

The original vi is a product of its time (and its time has passed)

https://utcc.utoronto.ca/~cks/space/blog/unix/ViIsAProductOfItsTime
1•ingve•8m ago•0 comments

Circumstantial Complexity, LLMs and Large Scale Architecture

https://www.datagubbe.se/aiarch/
1•ingve•15m ago•0 comments

Tech Bro Saga: big tech critique essay series

1•dikobraz•18m ago•0 comments

Show HN: A calculus course with an AI tutor watching the lectures with you

https://calculus.academa.ai/
1•apoogdk•22m ago•0 comments

Show HN: 83K lines of C++ – cryptocurrency written from scratch, not a fork

https://github.com/Kristian5013/flow-protocol
1•kristianXXI•26m ago•0 comments

Show HN: SAA – A minimal shell-as-chat agent using only Bash

https://github.com/moravy-mochi/saa
1•mrvmochi•27m ago•0 comments

Mario Tchou

https://en.wikipedia.org/wiki/Mario_Tchou
1•simonebrunozzi•28m ago•0 comments

Does Anyone Even Know What's Happening in Zim?

https://mayberay.bearblog.dev/does-anyone-even-know-whats-happening-in-zim-right-now/
1•mugamuga•28m ago•0 comments

The last Morse code maritime radio station in North America [video]

https://www.youtube.com/watch?v=GzN-D0yIkGQ
1•austinallegro•31m ago•0 comments

Show HN: Hacker Newspaper – Yet another HN front end optimized for mobile

https://hackernews.paperd.ink/
1•robertlangdon•32m ago•0 comments

OpenClaw Is Changing My Life

https://reorx.com/blog/openclaw-is-changing-my-life/
2•novoreorx•40m ago•0 comments

Everything you need to know about lasers in one photo

https://commons.wikimedia.org/wiki/File:Commercial_laser_lines.svg
2•mahirsaid•42m ago•0 comments

SCOTUS to decide if 1988 video tape privacy law applies to internet users

https://www.jurist.org/news/2026/01/us-supreme-court-to-decide-if-1988-video-tape-privacy-law-app...
1•voxadam•43m ago•0 comments

Epstein files reveal deeper ties to scientists than previously known

https://www.nature.com/articles/d41586-026-00388-0
3•XzetaU8•50m ago•1 comments

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•57m ago•0 comments

Show HN: Open-source AI powered Kubernetes IDE

https://github.com/agentkube/agentkube
2•saiyampathak•1h ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
2•tywells•1h ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•1h ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI compatible clients

https://pypi.org/project/aisbf/
1•nextime•1h ago•1 comments

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•1h ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•1h ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•1h ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•1h ago•0 comments

FastLangML: Context-aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•1h ago•1 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
2•pentagrama•1h ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•1h ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•lostlogin•1h ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•1h ago•0 comments

Is anyone interested in a creator economy startup

1•Nejana•1h ago•0 comments

AI tools churn out 'workslop', but 'the buck' should stop with bosses

https://www.theguardian.com/business/2025/oct/12/ai-workslop-us-employees
18•devonnull•3mo ago

Comments

didibus•3mo ago
As someone who uses AI for coding, emails, design documents, and so on...

I'm always a bit confused by the "training" rhetoric. It's the easiest thing to use. Do people need training to use a calculator?

This isn't like using Excel effectively and learning all the features, functions and so on.

Maybe I overestimate my ability as a technically savvy person to leverage AI tools, but I was just as good at using them on day 1 as I am 2 years later.

righthand•3mo ago
No, people need training for AI the same way they need training for proof-reading. Quality checking isn't a natural process when something looks 80% complete and the approvers only care about 80% completeness.

My coworker still gets paid the same for turning in garbage as long as someone fixes it later.

dublinben•3mo ago
>Do people need training to use a calculator?

Yes? Quite a bit of time was spent in math classes over the years learning to use calculators. Especially the more complicated functions of so-called graphing calculators. They're certainly not self-explanatory.

What does it say about your skill or the depth of this tool that you haven't gotten better at using it after 2 years of practice?

watwut•3mo ago
One of the article's claims is that AI projects fail because companies failed to train employees for AI. But you do get value out of a calculator without training; the training is there to unlock the more advanced, complicated functions.

The article comes across as an "AI cannot fail, it can only be failed" argument.

godelski•3mo ago
Even on just normal calculators.

Quick, without looking it up, can you tell me what the {mc, m+, m-, mr} buttons do? If you're asking "the what buttons?" or "that's not on my calculator" then we have an answer. If you do know these, did you just intuit them or did you learn them from some instruction? If you really did intuit them, do you really think that's how most people do it? (did you actually intuit them...)
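For anyone who had to look it up: the four buttons godelski names operate a single hidden memory register. A minimal sketch of those semantics (an illustration of the common convention, not any particular calculator's spec):

```python
class CalculatorMemory:
    """One hidden register, driven by the mc/m+/m-/mr buttons."""

    def __init__(self):
        self.register = 0.0

    def m_plus(self, display):
        # m+: add the currently displayed value to memory
        self.register += display

    def m_minus(self, display):
        # m-: subtract the currently displayed value from memory
        self.register -= display

    def mr(self):
        # mr: recall the memory value to the display
        return self.register

    def mc(self):
        # mc: clear the memory register
        self.register = 0.0


mem = CalculatorMemory()
mem.m_plus(12)   # memory = 12
mem.m_minus(5)   # memory = 7
print(mem.mr())  # 7.0
```

The point stands: nothing about the button labels tells you there is a hidden register at all, which is why the behavior has to be learned rather than intuited.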

happytoexplain•3mo ago
In my experience, "training" usually means just telling people not to blindly trust the output. Like... read it. If you can't personally verify in a code-review capacity that what it wrote is apparently correct, then don't use it. The majority of people simply don't care - it's just blind copy-pasting from StackOverflow all over again, but more people are doing it more often. Of course, like most training, it's performative. 90% of the people making this mistake aren't capable of reviewing the output, so telling them to is pointless.

derektank•3mo ago
I'm arguably much worse at using ChatGPT today than I was 2 years ago, as back then you needed to be more specific and constrained in your prompts to generate useful results.

Nowadays with larger context windows and just generally improved performance, I can ask a one sentence question and iterate to refine the output.

Cpoll•3mo ago
Things I'd include in training:

- Mental model of how the AI works.
- Prompt engineering.
- Common failure modes.
- Effective validation/proofreading.

As for internal stuff like emails/design docs... I think using an AI to generate emails exposes a culture problem, where people aren't comfortable writing/sending concise emails (i.e. the data that went into the prompt).

NegativeK•3mo ago
Are employees aware that they can't trust AI results uncritically, like the article mentions? See: the lawyers who have been disciplined by judges. Or doctors who aren't verifying all conversation transcriptions and medical notes generated by AI.

Does your organization have records retention or legal holds needs that employees must be aware of when using rando AI service?

Will employees be violating NDAs or other compliance requirements (HIPAA, etc) when they ask questions or submit data to an AI service?

For the LLM that has access to the company's documents, did the team rolling it out verify that all user access control restrictions remain in place when a user uses the LLM?

Is the AI service actually equivalent or better or even just good enough compared to the employees laid off or retasked?

This stuff isn't necessarily specific to AI and LLMs, but the hype train is moving so fast that people are having to relearn very hard lessons.

dwheeler•3mo ago
Yes, you need training if you want something good instead of slop. For example, when asked to write functions that can be secure or insecure, 45% of the time they'll do it the insecure way, and this has been stable for years. We in the OpenSSF are going to release a free course "Secure AI/ML-Driven Software Development (LFEL1012)". Expected release date is October 16. It will be here: https://training.linuxfoundation.org/express-learning/secure...

Fill in this form to receive an email notification when the course is available: https://docs.google.com/forms/d/e/1FAIpQLSfWW8M6PwOM62VHgc-Y...
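The secure-vs-insecure split dwheeler describes often comes down to patterns like this (a hypothetical sketch using stdlib sqlite3; the function names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user_insecure(name):
    # String interpolation into SQL: the kind of code an untrained
    # prompt frequently yields. Crafted input can rewrite the query.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()


def find_user_secure(name):
    # Parameterized query: the input is bound as a value and can
    # never change the structure of the SQL statement.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()


payload = "x' OR '1'='1"
print(find_user_insecure(payload))  # leaks every row: [('admin',)]
print(find_user_secure(payload))    # []
```

Both functions "work" on well-behaved input, which is exactly why untrained users accept the first one: the flaw only shows up under adversarial input they never test.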

didntknowyou•3mo ago
i think the problem is more people punching numbers into the calculator and presenting the answer, without the faintest idea if it is even right (or having the ability to check).

vrighter•3mo ago
replace the word "training" with "convincing" and it starts making more sense