
Return of Chinese astronauts delayed after spacecraft struck by debris

https://www.theguardian.com/world/2025/nov/05/chinese-astronauts-delayed-spacecraft-struck-by-deb...
1•mitchbob•1m ago•0 comments

New gel restores dental enamel and could revolutionise tooth repair

https://www.nottingham.ac.uk/news/new-gel-restores-dental-enamel-and-could-revolutionise-tooth-re...
2•CGMthrowaway•4m ago•0 comments

Light Has Burst Forth in Astonishing Abundance

https://newsletter.humanprogress.org/p/light-has-burst-forth-in-astonishing
1•surprisetalk•4m ago•0 comments

Twtxt – decentralised, minimalist microblogging service for hackers

https://twtxt.readthedocs.io/en/latest/user/intro.html
1•surprisetalk•4m ago•0 comments

An overview of Air filter classes

https://coffee.link/air-filter-classifications-and-what-they-actually-mean/
1•PhilKunz•5m ago•0 comments

Apple nears $1B Google deal for custom Gemini model to power Siri

https://9to5mac.com/2025/11/05/google-gemini-1-billion-deal-apple-siri/
1•jbredeche•5m ago•0 comments

Fiber reduces overall mortality by 23%

https://www.empirical.health/blog/dietary-fiber-reduces-all-cause-morality/
2•brandonb•5m ago•0 comments

Superconducting qubit that lasts for over 1 ms is primed for industrial scaling

https://phys.org/news/2025-11-superconducting-qubit-millisecond-primed-industrial.html
1•rbanffy•7m ago•0 comments

Liquid Glass can be tweaked now

https://www.theverge.com/news/812375/apple-iphone-ios-26-1-update-availability
1•PhilKunz•7m ago•0 comments

Our Precious

https://www.overcomingbias.com/p/our-precious
1•jger15•7m ago•0 comments

Improving Agent with Semantic Search

https://cursor.com/blog/semsearch
1•ecz•10m ago•0 comments

NATS 2.12 Atomic Batch Publishing

https://qaze.app/blog/nats-batch-publish/
1•SebastianM•12m ago•0 comments

Tinder is testing an AI feature that learns about you from your Camera Roll

https://techcrunch.com/2025/11/05/tinder-to-use-ai-to-get-to-know-users-tap-into-their-camera-rol...
2•gloxkiqcza•15m ago•0 comments

Berkeley Police Department Transitioning to Encrypted Radio Communications

https://local.nixle.com/alert/11979725/
3•7402•17m ago•0 comments

The most life-changing books, statistically

https://residualthoughts.substack.com/p/a-statistical-guide-to-the-most-life
1•tristanMatthias•19m ago•0 comments

NIU's scooter-sized electric microcar is headed for production

https://electrek.co/2025/11/04/nius-scooter-sized-electric-microcar-is-actually-headed-for-produc...
1•harambae•20m ago•0 comments

Apple launches rich new web interface for the App Store

https://9to5mac.com/2025/11/03/apple-launches-rich-new-web-interface-for-the-app-store/
2•janandonly•21m ago•0 comments

Ask HN: What modern techniques do you use to improve your site's SEO?

1•01-_-•23m ago•0 comments

Show HN: Kumi – a portable, declarative, functional core for business logic

https://kumi-play-web.fly.dev/?example=monte-carlo-simulation
1•goldenCeasar•27m ago•0 comments

Metalang99: A rich functional language implemented in C99 preprocessor

https://github.com/hirrolot/metalang99
2•PaulHoule•28m ago•0 comments

CareerJourney – your personal career command centre

https://career-journey.app/
1•sebi-secasiu•29m ago•1 comments

Googling "phind" exposed a random chat URL in search results

https://www.phind.com/search/k5tn3mqv62xn11d7c7gy9189
2•lodedeyn•29m ago•0 comments

PolyForm Noncommercial 2.0.0-Pre.2

https://writing.kemitchell.com/2025/11/05/PolyForm-Noncommercial-2.0.0-pre.2
2•feross•30m ago•0 comments

I'm Betting $100M on a New University

https://www.thefp.com/p/im-betting-100-million-on-a-new-university
4•ttcbj•33m ago•3 comments

State of Doltgres

https://www.dolthub.com/blog/2025-10-16-state-of-doltgres/
1•janpio•34m ago•0 comments

Apple Plans to Use 1.2T Parameter Google Gemini Model to Run New Siri

https://www.bloomberg.com/news/articles/2025-11-05/apple-plans-to-use-1-2-trillion-parameter-goog...
4•mfiguiere•34m ago•0 comments

Automating our home video imports

https://pierce.dev/notes/automating-our-home-video-imports
1•icyfox•35m ago•0 comments

Yansu – The Serious Coding Platform

https://yansu.isoform.ai/
4•janpio•35m ago•0 comments

Is there a drop in native iOS and Android hiring at startups? (2022)

https://newsletter.pragmaticengineer.com/p/native-vs-cross-platform
2•walterbell•37m ago•0 comments

Meet the woman behind chart-topping AI artist Xania Monet

https://www.cbsnews.com/news/meet-the-woman-behind-chart-topping-ai-artist-xania-monet-i-look-at-...
2•vintagedave•39m ago•0 comments

OpenAI ends legal and medical advice on ChatGPT

https://www.ctvnews.ca/sci-tech/article/openai-updates-policies-so-chatgpt-wont-provide-medical-or-legal-advice/
32•randycupertino•1h ago

Comments

randycupertino•1h ago
Sounds like it is still giving out medical and legal information, just adding CYA disclaimers.
mikkupikku•38m ago
It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.

(Turns out I would need permits :-( )

cpfohl•1h ago
Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of ChatGPT through our experience and it suggested my son’s issue as a possibility, along with the correct diagnostic tool, in just one back and forth.

I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.

throwaway290•1h ago
this is fresh news right? a friend just used chatgpt for medical advice last week (stuffed his wound with antibiotics after motorbike crash). are you saying you completely treated the congenital issue in this timeframe?
cj•52m ago
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.

Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.

If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).

The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble, even more so than with Google.

ninininino•44m ago
You can just use the wipe-memory feature or, if you don't trust that, start a new account (new login creds); if you don't trust that, then get a new device, cell provider/wifi, credit card, IP, login creds, etc.
tencentshill•38m ago
We are all obligated to hoard as many offline AI models as possible if the larger ones are legally restricted like this.
rafaelmn•35m ago
>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.

Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong!). It doesn't have to be obvious leading, just framing the question in terms of mentioning all the symptoms you now know to be relevant, in the order that's diagnosable, etc.

Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.

schiffern•31m ago

  >checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history
So... exactly the same behavior as we observe in human doctors?
bamboozled•1h ago
I've been using Claude for building and construction related information (currently building a small house mostly on my own, with pros for plumbing and electrical).

Seriously, the amount of misinformation it has given me is quite staggering. Telling me things like, “you need to fill your drainage pipes with sand before pouring concrete over them…” The danger with these AI products is that you have to really know a subject before they're properly useful. I find this with programming too. Yes, it can generate code, but I’ve introduced some decent bugs when over-relying on AI.

The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…

dingnuts•1h ago
honestly I think these things cause a form of Gell-Mann Amnesia: when you use them for something you already know, the errors are obvious, but when you use them for something you don't understand, the output is sufficiently plausible that you can't tell you're being misled.

this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?

bamboozled•56m ago
It’s funny you should say that because I have been using it in the way you describe. I kind of know it could be wrong, but I’m kind of desperate for info so I consult Claude anyway. After stressing hard I realize it was probably wrong, find someone who actually knows what they’re on about, and course correct.
rokkamokka•55m ago
I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.
FaradayRotation•58m ago
I nearly spit my drink out. This is my kind of humor, thanks for sharing.

I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.

justapassenger•50m ago
I'm a hobby woodworker - I've tried using Gemini recently for advice on how to make some tricky cuts.

If I'd followed any of the suggestions I'd probably be in the ER. Even after me pointing out issues and asking it to improve, it'd come up with more and more sophisticated ways of doing the same fundamentally dangerous actions.

LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.

trollbridge•46m ago
I've observed some horrendous electrical advice, such as "You should add a second bus bar to your breaker box." (This is not something you ever need to do.)
fny•27m ago
I mean... you do have to backfill around your drainage pipe, so it's not too far off. Frankly, if you Google the subject, people misspeak about "backfilling pipes" too, as if the target of the backfill is the pipe itself and not the trench. Garbage in, garbage out.

All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:

1. The internet lacks information, so LLMs will invent answers

2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others

3. The internet is wrong, so LLMs spew the same nonsense

Knowledge from the blue collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer-reviewed research, textbooks, meta-studies, and official sources.

bstsb•1h ago
i don't think it's stopped providing said information; it's just that their usage policies now outline medical and legal advice as a "disallowed" use of ChatGPT
gizajob•1h ago
AI gets more and more useful by the day.
miltonlost•58m ago
Good. Techies need to stop thinking that an LLM should be immune from licensing requirements. Until OpenAI can (and should) be sued for medical malpractice or practicing law without passing the bar, they will have no skin in the game and no reason to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
blibble•50m ago
anyone wanna form a software engineering guild, then lobby to need a license granted by the guild to practice?
miltonlost•43m ago
Sorry, but you’re not gonna get me to agree that medical licensing is a bad idea. I don’t want more quacks than we already have. Stick to the argument and don't add in your “what about” software engineers.
blibble•41m ago
I am being serious...

the damage certain software engineers could do certainly surpasses most doctors

miltonlost•28m ago
Ah sorry, I misread it as coming from someone who doesn't want licensing, as if you were appealing to HN by switching to software engineers (and I know many on here are loath to think anything beyond "move fast and break things", which is the opposite of most (non-software) engineers).

But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"

blibble•13m ago
> But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"

absolutely

if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it

miki123211•57m ago
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

Is this an actual technical change, or just legal CYA?

bearhall•52m ago
I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor, of course) for the last week. Last night it refused to give an opinion, saying “While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.

I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.

doctoboggan•34m ago
I suspect this is an area where a bit of clever prompting will now prove fruitful. The system commands in the prompt will probably be "leaked" soon, which should give you good avenues to explore.
PragmaCode•54m ago
It hasn't stopped giving legal/medical advice to the user; rather, it's now forbidden to use ChatGPT to pose as an advisor giving advice to others: https://www.tomsguide.com/ai/chatgpt/chatgpt-will-still-offe...
trollbridge•46m ago
One wonders how exactly this will be enforced.
fainpul•37m ago
It's not about enforcing this, it's about OpenAI having their asses covered. The blame is now clearly on the user's side.
OutOfHere•23m ago
It was already enforced by hiding all custom GPTs that offered medical advice.
emaccumber•49m ago
good thing that guy was able to negotiate his hospital bills before this went into effect.
learnplaceai•47m ago
Sad times - I used ChatGPT to solve a long-term issue!
thedudeabides5•46m ago
this is a disaster

doomer's in control, again

esafak•34m ago
This is to do with liability not doomerism.
o11c•44m ago
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
Lionga•41m ago
maybe that is why they opened the system to porn, as everything else will soon be gone.
entropicdrifter•40m ago
Can't wait for AI nutritionists to kill people on crash diets
mirabilis•30m ago
An AI-related bromide poisoning incident earlier this year: “Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet. For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning… However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”

https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260

tacker2000•34m ago
Aka software engineers…
Zr01•42m ago
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
watwut•36m ago
If they do that, they will be subject to regulations on medical devices. As they should be, and it means the end result will be less likely to promote complete crap than it is now.
scarmig•32m ago
And then users balk at the hefty fee and start getting their medical information from utopiacancercenter.com and the like.
miltonlost•33m ago
> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.

You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't present their results as the truth; LLM hypers do.

Zr01•21m ago
I appreciate how the newer versions provide more links and references. It makes the task of verifying the output (or at least seeing where it got its results from) that much easier. What you're describing seems more like an advertisement problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to beat nails is hard enough to bruise your fingers.
fluidcruft•28m ago
I think one of the challenges is attribution. For example, if you use Google search to create a fraudulent legal filing, there aren't any of Google's fingerprints on the document; it gets reported as malpractice. Whereas with these tools, the reporting says OpenAI or whatever AI is responsible. So even from the perspective of protecting a brand, it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes, the reporting were that Nike shoes were used to rob the Louvre, and all anyone talked about was Nike and how people need to be careful about what they do wearing Nike shoes.

It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.

benrapscallion•7m ago
This (attribution) is exactly the issue that was mentioned by LexisNexis CEO in a recent The Verge interview.

https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...

segmondy•25m ago
Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? One loss in a major lawsuit is horrible, and there's already the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.
uslic001•37m ago
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
blindriver•34m ago
This is a big mistake. This is one of the best things about ChatGPT. If they don’t offer it, then someone else will and eventually I’m sure Sam Altman will change his mind and start supporting it again.
zjp•34m ago
RIP Dr. ChatGPT, we'll miss you. Thanks for the advice on fixing my shoulder pain while you were still unmuzzled.
awillen•33m ago
This is not true, just a viral rumor going around: https://x.com/thekaransinghal/status/1985416057805496524

I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.

andrewinardeer•32m ago
AGI edging closer by the day.
calmworm•32m ago
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many more…
doctoboggan•32m ago
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
nomendos•31m ago
This is a typical medical "cartel" (i.e. gang/mafia) type of move, and I hope it does not last. Since other AIs do not get restricted in this "don't look up" way, this kind of practice won't stand a chance for very long.
OutOfHere•21m ago
If OpenAI wants to move users to other LLMs, that'll only cost them.
danielmarkbruce•21m ago
This is so disappointing. Much legal and medical advice given by professionals is wrong, misleading, etc. The bar isn't high. This is a mistake.
ideamotor•18m ago
Ah, that'll be the end of that then!