frontpage.

Circumstantial Complexity, LLMs and Large Scale Architecture

https://www.datagubbe.se/aiarch/
1•ingve•7m ago•0 comments

Tech Bro Saga: big tech critique essay series

1•dikobraz•10m ago•0 comments

Show HN: A calculus course with an AI tutor watching the lectures with you

https://calculus.academa.ai/
1•apoogdk•14m ago•0 comments

Show HN: 83K lines of C++ – cryptocurrency written from scratch, not a fork

https://github.com/Kristian5013/flow-protocol
1•kristianXXI•19m ago•0 comments

Show HN: SAA – A minimal shell-as-chat agent using only Bash

https://github.com/moravy-mochi/saa
1•mrvmochi•19m ago•0 comments

Mario Tchou

https://en.wikipedia.org/wiki/Mario_Tchou
1•simonebrunozzi•20m ago•0 comments

Does Anyone Even Know What's Happening in Zim?

https://mayberay.bearblog.dev/does-anyone-even-know-whats-happening-in-zim-right-now/
1•mugamuga•21m ago•0 comments

The last Morse code maritime radio station in North America [video]

https://www.youtube.com/watch?v=GzN-D0yIkGQ
1•austinallegro•23m ago•0 comments

Show HN: Hacker Newspaper – Yet another HN front end optimized for mobile

https://hackernews.paperd.ink/
1•robertlangdon•24m ago•0 comments

OpenClaw Is Changing My Life

https://reorx.com/blog/openclaw-is-changing-my-life/
2•novoreorx•32m ago•0 comments

Everything you need to know about lasers in one photo

https://commons.wikimedia.org/wiki/File:Commercial_laser_lines.svg
2•mahirsaid•34m ago•0 comments

SCOTUS to decide if 1988 video tape privacy law applies to internet uses

https://www.jurist.org/news/2026/01/us-supreme-court-to-decide-if-1988-video-tape-privacy-law-app...
1•voxadam•35m ago•0 comments

Epstein files reveal deeper ties to scientists than previously known

https://www.nature.com/articles/d41586-026-00388-0
3•XzetaU8•42m ago•1 comments

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•49m ago•0 comments

Show HN: Open-source AI powered Kubernetes IDE

https://github.com/agentkube/agentkube
2•saiyampathak•53m ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
2•tywells•55m ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•59m ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI compatible clients

https://pypi.org/project/aisbf/
1•nextime•59m ago•1 comments

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•1h ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•1h ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•1h ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•1h ago•0 comments

FastLangML: Context-aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•1h ago•1 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
2•pentagrama•1h ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•1h ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
4•lostlogin•1h ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•1h ago•0 comments

Is anyone interested in a creator economy startup?

1•Nejana•1h ago•0 comments

Show HN: Skill Lab – CLI tool for testing and quality scoring agent skills

https://github.com/8ddieHu0314/Skill-Lab
1•qu4rk5314•1h ago•0 comments

2003: What is Google's Ultimate Goal? [video]

https://www.youtube.com/watch?v=xqdi1xjtys4
1•1659447091•1h ago•0 comments

Vibe Coding Gone Wrong: 5 Rules for Safely Using AI

https://cybercorsairs.com/my-ai-co-pilot-deleted-my-production-database/
20•todsacerdoti•6mo ago

Comments

codingdave•6mo ago
Actual Title: "My AI Co-Pilot Deleted My Production Database"
sly010•6mo ago
I've seen this image generated by Meta AI. The prompt was something like: think of a room, make it look like anything you like, but do not in any circumstance put a clown in it. Guess what...

I think Jason has a "do not think of an elephant" problem.

sfink•6mo ago
Ok, I haven't tried enough AI coding to have an opinion here, but... why would anyone think that telling an AI to not change any code (IN ALL CAPS, even) has anything to do with anything? It's an LLM. It doesn't go through a ruleset. It does things that are plausible responses to things you ask of it. Not changing code is indeed a plausible response to you telling it to not change code. But so is changing code, if there were enough other things you asked it to do.

"Say shark. Say shark. Don't say shark. Say shark. Say shark. Say shark. Say shark. Say shark."

Are you going to flip out if it says "shark"?

Try it out on a human brain. Think of a four-letter word ending in "unt" that is a term for a type of woman, and DO NOT THINK OF ANYTHING OFFENSIVE. Take a pause now and do it.

So... did you obey the ALL CAPS directive? Did your brain easily deactivate the pathways that were disallowed, and come up with the simple answer of "aunt"? How much reinforcement learning, perhaps in the form of your mother washing your mouth out with soap, would it take before you could do it naturally?

(Apologies to those for whom English is not a first language, and to Australians. Both groups are likely to be confused. The former for the word, the latter for the "offensive" part.)

kalenx•6mo ago
Nitpicking, but I don't see your four-letter word example as convincing. Thinking is the very process from which we form words or sentences, so it is by definition impossible to _not_ think about a word we must avoid. However, in your all caps instruction, replace "think" by "write" or "say". Then check if people obey the all caps directive. Of course they will. Even if the offensive word came to their mind, they _will_ look for another.

That's what many people miss about LLMs. Sure, humans can lie, make stuff up, make mistakes or deceive. But LLMs will do this even if they have no reason to (i.e., they know the right answer and have no reason/motivation to deceive). _That's_ why it's so hard to trust them.

sfink•6mo ago
It was meant as more of an illustration than a persuasive argument. LLMs don't have much of a distinction between thinking and writing/saying. For a human, an admonition to not say something would be obeyed as a filter on top of thoughts. (Well, not just a filter, but close enough.) Adjusting outputs via training or reinforcement learning applies more to the LLM's "thought process". LLMs != humans, but "a human thinking" is the closest regular world analogy I can come up with to an LLM processing. "A human speaking" is further away. The thing in between thoughts and speech involves human reasoning, human rules, human morality, etc.

As a result, I'm going to take your "...so it is by definition impossible to _not_ think about a word we must avoid" as agreeing with me. ;-)

Different things are different, of course, so none of this lines up or fails to line up where we might think or expect. Anthropic's exploration into the inner workings of an LLM revealed that if you give them an instruction to avoid something, they'll start out doing it anyway and only later start obeying the instruction. It takes some time to make its way through, I guess?

bravetraveler•6mo ago
Consider, too: tokens and math. As much as I like to avoid responsibility, I still pay taxes. The payment network or complexity of the world kind of forces the issue.

Things have already been tokenized and 'ideas' set in motion. Hand wavy to the Nth degree.

conception•6mo ago
I very much have LLMs go through rule sets all the time. In fact, any prompt to an LLM is, in a sense, a rule set. You say plausible, but I think what you mean is probable. When you give an LLM rules, most of the time the most probable answer is in fact to follow them. But when you give it lots and lots of rules, and/or fill up its context, sometimes the most probable thing is not necessarily to follow the rule it's been given, but some other combination of the information it is outputting.
gronglo•6mo ago
My understanding is that there are no "rules", only relationships between words. I picture it as a vector pointing off into a cloud of related words. You can feed it terms that alter that vector and point it into a different part of the word cloud, but if enough of your other terms outweigh the original "instruction", the vector may get dragged back into a different part of the cloud that "disobeys" the instruction. Maybe an expert can correct me here.
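
A toy sketch of that mental model, with made-up vectors and a three-word "cloud" (this is only an illustration of the intuition, not how a real transformer computes anything): the prompt is summed into one vector, the nearest word in the cloud "wins", and enough off-topic terms drag the winner away from the instruction.

```python
# Toy illustration of the "vector dragged through a word cloud" intuition.
# All vectors and words below are invented for the example.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-made 3-d "embeddings" for a tiny word cloud.
cloud = {
    "refactor":   (0.9, 0.1, 0.0),
    "leave_code": (0.1, 0.9, 0.0),
    "delete_db":  (0.0, 0.1, 0.9),
}

def nearest(prompt_vectors):
    # Sum the prompt's term vectors and pick the closest word in the cloud.
    summed = [sum(dims) for dims in zip(*prompt_vectors)]
    return max(cloud, key=lambda w: cosine(cloud[w], summed))

instruction = (0.1, 0.9, 0.0)   # "do not change any code"
off_topic   = (0.6, 0.0, 0.5)   # other requests piled into the same context

print(nearest([instruction]))                    # "leave_code": instruction wins alone
print(nearest([instruction] + [off_topic] * 4))  # "refactor": the instruction got diluted
```
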
Terr_•6mo ago
The trick is that the rules it follows aren't the ones people write. The real ones just happen to give similar answers, until one day they don't.

The LLM takes a document and returns a "fitting" token that would go next. So "Calculate 2+2" may yield a "4", but the reason it gets there is document-fitting, rather than math.
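
A deliberately tiny sketch of that document-fitting view, using an invented three-line corpus and a bigram table (real models are of course nothing this crude): the continuation of "calculate 2 + 2 =" is whatever token tended to follow "=" in the corpus, not the result of any arithmetic.

```python
# Tiny "next token" loop: continue the document with whatever fit similar
# documents before. The corpus is invented; nothing here does math.
from collections import Counter, defaultdict

corpus = [
    "calculate 2 + 2 = 4 .",
    "calculate 2 + 2 = 5 . just kidding , 4 .",
    "calculate 3 + 3 = 6 .",
]

# Build bigram counts: which token tends to follow which.
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1

def continue_document(prompt, steps=2):
    tokens = prompt.split()
    for _ in range(steps):
        options = follows.get(tokens[-1])
        if not options:
            break
        # Pick the most "fitting" next token: the most frequent continuation.
        tokens.append(options.most_common(1)[0][0])
    return " ".join(tokens)

# Prints "calculate 2 + 2 = 4 ." -- the "4" appears because it fit the
# corpus, not because anything was calculated.
print(continue_document("calculate 2 + 2 ="))
```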

gronglo•6mo ago
It's still offensive in Australia, and is mostly used as a pejorative term. It just carries a lot less weight than it does in the US, and is not strictly used to refer to women.

It can technically be used as a term of endearment, especially if you add a word like "sick" or "mad" on the front. But it's still a bit crass. You're more likely to hear it used among a group of drunk friends or teenagers than at the family dinner table or the office.

vrighter•6mo ago
I immediately thought of "hunt". My cat is currently hunting one of my other cats
CaptainFever•6mo ago
In my experience, reasoning models are much better at this type of instruction following.

Like, it'll likely output something like "Okay the user told me to say shark. But wait, they also told me not to say shark. I'm confused. I should ask the user for confirmation." which is a result I'm happy with.

For example, yes, my first instinct was the rude word. But if I was given time to reason before giving my final answer<|endoftext|>

vrighter•6mo ago
these types of posts seem to me like they're all about damage control.

I can suggest one easy step to cover all instances of these: stop using the thing causing damage, instead of trying to find ways of working around it

vrighter•6mo ago
2 rules for safely using AI:

1: Don't trust anything. Spend twice as long reviewing code as you would have had you written it yourself.

2: When possible (most times), just don't use them and do the thinking yourself.
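
One way to make rule 1 concrete is to keep a human gate between the model and anything destructive. A minimal sketch under that assumption; every command, pattern, and name below is hypothetical and not from the article, which described an assistant dropping a production database.

```python
# Sketch of a confirmation gate for agent-proposed commands: nothing runs
# until a human has read it and typed yes; obviously destructive commands
# get an extra warning. The hint list is illustrative, not exhaustive.
import subprocess

DESTRUCTIVE_HINTS = ("drop ", "delete ", "truncate ", "rm -rf", "migrate reset")

def run_proposed_command(command: str) -> None:
    flagged = any(hint in command.lower() for hint in DESTRUCTIVE_HINTS)
    print(f"Agent proposes: {command}")
    if flagged:
        print("Looks destructive; review extra carefully.")
    if input("Run it? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return
    subprocess.run(command, shell=True, check=False)

# Example: by default (just pressing Enter) the proposal is refused.
run_proposed_command("psql -c 'DROP TABLE users'")
```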