
Trump Signs the Take It Down Act into Law

https://www.theverge.com/news/661230/trump-signs-take-it-down-act-ai-deepfakes
1•artninja1988•5m ago•0 comments

How Virtual Banking Made Saving Risky Again [Yotta, Y Combinator]

https://www.bloomberg.com/news/features/2025-04-10/how-fintech-banking-made-saving-risky-again-synapse-evolve-and-yotta
1•Wingman4l7•6m ago•0 comments

Little we've seen: A visual coverage estimate of the deep seafloor

https://www.science.org/doi/10.1126/sciadv.adp8602
1•PaulHoule•11m ago•0 comments

Defensive CSS

https://defensivecss.dev/
3•Tomte•12m ago•0 comments

Freedom and the limits of agency: the philosophy of Fichte (2022)

https://aeon.co/essays/on-freedom-and-the-limits-of-agency-the-philosophy-of-fichte
1•Tomte•12m ago•0 comments

Submissions to Spring Lisp Game Jam 2025

https://itch.io/jam/spring-lisp-game-jam-2025/entries
1•todsacerdoti•12m ago•0 comments

BreakHack – A casual coffee-break roguelike ported for the web

https://midzer.de/wasm/breakhack/
1•midzer•14m ago•1 comments

OpenVMS x86 Database Modernization with Mimer SQL and Amazon EC2

https://aws.amazon.com/blogs/migration-and-modernization/openvms-x86-database-modernization-with-mimer-sql-and-amazon-ec2/
2•jandeboevrie•15m ago•1 comments

Opinion: How to pull your family members out of the information rabbit hole

https://www.statepress.com/article/2025/05/opinion-misinformation-rabbit-hole-saving#
2•rbanffy•15m ago•0 comments

More Insight and Not-Negativity

https://daringfireball.net/2025/05/more_insight_and_not-negativity
1•colinprince•16m ago•0 comments

Ask HN: How do you use AI for development in high security environments?

2•thesurlydev•16m ago•0 comments

Inventors

https://axial.substack.com/p/inventors
1•f-star•17m ago•0 comments

Earliest amniote tracks recalibrate the timeline of tetrapod evolution

https://www.nature.com/articles/s41586-025-08884-5
1•gnabgib•17m ago•0 comments

Show HN: Vibe coding from your phone

https://vibecodego.com
1•chrisnolet•17m ago•0 comments

Show HN: Visualization of job openings by US based employers

https://jobswithgpt.com/blog/jobs-density-visualization/
2•jobswithgptcom•18m ago•0 comments

Ownership Must Mean Something in the Digital Age

https://archive.org/details/ftc-letter-digital-ownership
1•hn_acker•22m ago•1 comments

Daniel Dennett on free will and moral agents

https://www.youtube.com/watch?v=AFGrQuuY2T4
1•sschmitt•23m ago•0 comments

Why Apple Still Hasn't Cracked AI

https://www.bloomberg.com/news/features/2025-05-18/how-apple-intelligence-and-siri-ai-went-so-wrong
1•mgh2•23m ago•0 comments

Apple's Next-Gen Version of Siri Is 'On Par' with ChatGPT

https://www.macrumors.com/2025/05/19/next-gen-siri-is-on-par-with-chatgpt/
1•mgh2•23m ago•0 comments

Profiling Misinformation Susceptibility – ScienceDirect

https://www.sciencedirect.com/science/article/pii/S0191886925001394
1•rbanffy•24m ago•0 comments

Olive: The AI Model Optimization Toolkit for the ONNX Runtime

https://microsoft.github.io/Olive/
2•homarp•25m ago•1 comments

Labor migration reshaped culture in 19th century Britain

https://www.broadstreet.blog/p/industry-and-identity-how-labor-migration
1•andrewstetsenko•25m ago•0 comments

Between the Booms: AI in Winter

https://dl.acm.org/doi/10.1145/3688379
2•sdht0•26m ago•0 comments

Rogue communication devices found in Chinese solar power inverters

https://www.msn.com/en-us/news/world/rogue-communication-devices-found-in-chinese-solar-power-inverters/ar-AA1EMfHP
3•gmays•27m ago•1 comments

The Complicated World of Strings in Rust

https://em-baggie.github.io/blog/the_complicated_world_of_strings_in_rust/
1•sdht0•28m ago•0 comments

Show HN: RAG chatbot using Qwen3 with custom thinking UI

2•Arindam1729•28m ago•1 comments

Step by step guide on setting up physical streaming replication in PostgreSQL

https://stormatics.tech/blogs/guide-on-setting-up-physical-streaming-replication-in-postgresql
3•annieghazali_1•30m ago•0 comments

Who invented the rechargeable lithium-ion battery?

https://spectrum.ieee.org/lithium-ion-battery-2662487214
2•sokitsip•30m ago•0 comments

Yuri Bezmenov: Psychological Warfare Subversion and Control of Western Society

https://www.youtube.com/watch?v=5gnpCqsXE8g
1•huijzer•31m ago•0 comments

First petahertz-speed phototransistor in ambient conditions

https://news.arizona.edu/news/u-researchers-developing-worlds-first-petahertz-speed-phototransistor-ambient-conditions
2•geox•32m ago•0 comments

xAI's Grok 3 comes to Microsoft Azure

https://techcrunch.com/2025/05/19/xais-grok-3-comes-to-microsoft-azure/
54•mfiguiere•3h ago

Comments

josefritzishere•2h ago
Boomer AI coming to Microsoft... like it doesn't have enough marketing trouble already
cooper_ganglia•2h ago
It's honestly one of the better ones I've tried for general questions. I saw it used in a blind competition against ChatGPT, Claude, and Gemini, and amongst people who didn't use LLMs frequently, it was the most favored for 4/5 questions! It's very good at sounding much more natural and less robotic than the others, imo.
Analemma_•2h ago
Just speaking for myself here, but my most natural-sounding conversations with people don't involve them launching into rants about white genocide in Africa regardless of conversation context, but maybe I'm setting my bar too high.
Remnant44•1h ago
Just like talking to Grandpa!
michaelmrose•2h ago
Was it more correct or useful in its output or do you mean it nailed a desirable conversational tone like a pleasantly rendered lorem ipsum.
aruametello•1h ago
he might be referring to the data in https://lmarena.ai/

they conduct blind trials where users submit a prompt and vote on the "best answer".

Grok holds a very good position on its leaderboard.
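For context, pairwise blind-vote leaderboards like the one described above are typically scored with an Elo-style rating. A minimal sketch of the update rule (the K-factor, starting rating, and function names here are illustrative, not lmarena's actual implementation, which has evolved toward Bradley-Terry-style models):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated ratings for both models after one blind-vote matchup."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    # Winner moves up, loser moves down, by symmetric amounts.
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two models start level at 1000; one vote nudges them apart by K/2 each.
ra, rb = update(1000.0, 1000.0, a_won=True)
```

Aggregated over many votes, this is what produces the leaderboard positions being discussed.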

jonny_eh•2h ago
"Grok on Azure can only be understood in the context of white genocide in South Africa […]"
cosmicgadget•2h ago
Finally, I can use Microsoft's cloud to generate Zerohedge comments.

> They also come with additional data integration, customization, and governance capabilities not necessarily offered by xAI through its API.

Maybe we'll see a "Grok you can take to parties" come out of this.

voidfunc•2h ago
Anything to stay in the good graces of Elon and The Trump Admin
epa•2h ago
Disappointed in the HN community for the initial comments in this thread. Hoping the mods can help set a higher benchmark for community discussion than just rabble-rousing about the founder instead of focusing on the technology. Do better, team.
mjcl•2h ago
The technology couldn't stop talking about white genocide for hours.
dawnerd•2h ago
No, we shouldn't be allowing a pro genocide, white supremacist run LLM period.
SimianSci•2h ago
Technology cannot be wholly divorced from its ethical considerations. If a technology's founder has a multitude of ethical blindspots and has shown a willingness to modify such technology to suit his own desires, it is something which should be noted, discussed, and considered.

As professionals, it is absolutely crucial that we discuss matters of ethics. One of which is the issue of an unethical founder.

rvz•1h ago
dang has already left as moderator. It is now someone else.

Which is why HN appears to be going downhill in terms of quality.

nomel•1h ago
This is false [1], unless they left within the past 13 hours.

[1] https://news.ycombinator.com/threads?id=dang

tastyface•1h ago
Fruit of the poisonous tree. A technology with such startling propaganda potential as AI cannot be disentangled from the whims of its oligarch owners — unless a strict legal firewall is in place.

Schools are already starting to *teach* that the 2020 election was stolen. How much longer until one of these AIs starts parroting the same lies, and in a more convincing way than Musk’s half-assed prompt injection?

sambeau•2h ago
Are they going to get the white supremacy bits too?
mullingitover•2h ago
I can't think of a less trustworthy group of people on model alignment.

They claimed that they had a rogue actor who deployed their 'white genocide' prompt, but that either means they have zero technical controls in their release pipeline (unforgivable at their scale) or they are lying (unforgivable given their level of responsibility).

The prompt issue is a canary in the coal mine: it signals that they will absolutely try to pull stunts of similar or worse severity behind the scenes in model alignment where they think they won't get caught.

dockercompost•2h ago
Yeah, that one incident is enough reason for me to never bother using an xai model
jhickok•1h ago
That is my stance as well.
SimianSci•1h ago
I agree, alignment is very important when considering which LLM to use. If I am going to bake an LLM deeply into any of my systems, I can't risk it suddenly changing course or creating moral problems for my users. Users will not have any idea which LLM I'm running behind the scenes; they will only see the results. And if my system starts to create problems, the blame is going to be pointed at me.
sorcerer-mar•1h ago
I reckon there is exactly one person at xAI who gives even remotely enough of a fuck about South Africa's domestic issues to put that string into the system prompt. We all know who it is.
mullingitover•1h ago
A fish rots from the head, and while it's definitely a hotdog suit "We're all looking for the guy who did this!" moment, remember Musk is in charge of hiring and firing. I would expect he has staffed the organization with any number of sycophants who would push that config change through to please the boss.
phillipcarter•2h ago
As a reminder, xAI is an organization which lies to its users (declaring they will develop their system prompts as open source) and has the most utterly flimsy processes imaginable: https://smol.news/p/the-utter-flimsiness-of-xais-processes

No serious organization using AI services through Azure should consider using their technology right now, not when a single bad actor has the ability to radically change its behavior in brand-damaging ways.

nomel•1h ago
> has the most utterly flimsy processes imaginable:

Could you expand on this? Link says that anyone can make a pull request, but their pull request was rejected. Is the issue that pull requests aren't locked?

edit: omg, I misread the article. flimsy is an understatement.

phillipcarter•1h ago
The pull request was not rejected. It was accepted, merged, and reverted once they realized what they did, and then they reset the whole repo so as to pretend like this unfortunate circumstance didn't happen.
SimianSci•1h ago
There is no trust built into the system. It relies wholly on someone from xAI publishing the latest changes. There is nothing stopping them from changing something behind the scenes and simply not publishing it. All we will see are sanitized versions of the truth at best. This is a poor attempt at transparency.
dbreunig•2h ago
Can anyone provide a reason an enterprise would choose Grok over a similar class of models?
scuol•2h ago
It still seems to have a problem most other LLMs (except Gemini) suffer from: it loses context very quickly.

I asked it about a paper I was looking at (SLOG [0]) and it basically lost the context of what "slog" referred to after 3 prompts.

1. I asked for an example transaction illustrating the key advantages of the SLOG approach. It responded with some general DB transaction stuff.

2. I then said "no use slog like we were talking about" and then it gave me a golang example using the log/slog package

Even without the weird political things around Grok, it just isn't that good.

[0] https://www.vldb.org/pvldb/vol12/p1747-ren.pdf
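The failure mode described above is usually a context-window issue: a chat model only "remembers" the turns that get resent with each request. A minimal sketch of how a truncation policy drops the turn that defined "SLOG" (the window size and helper are hypothetical, not any particular vendor's implementation):

```python
from typing import Dict, List

WINDOW = 3  # hypothetical policy: resend only the last N turns

history: List[Dict[str, str]] = []

def build_context(question: str) -> List[Dict[str, str]]:
    """Record the new user turn and return what the model would actually see."""
    history.append({"role": "user", "content": question})
    return history[-WINDOW:]

build_context("Summarize the SLOG paper (deterministic DB replication).")
build_context("Show an example transaction illustrating SLOG's key advantage.")
build_context("No, use SLOG like we were talking about.")
ctx = build_context("Why is that faster?")
# The turn that defined "SLOG" has fallen out of the window, so the model
# must guess what "slog" means — e.g. Go's log/slog package.
```

Once the defining turn is truncated (or summarized away), the model falls back on its priors, which matches the golang `log/slog` answer in the anecdote.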

michaelmrose•2h ago
Grok refuses to answer the query: Is Trump morally responsible for January 6th. Why would we use something that is slanted to avoid speaking the truth?
dilap•1h ago
https://x.com/i/grok/share/br3CqX6Qk9tS8Gj6LAvlnpDg9

Seems like a pretty reasonable answer to me.

wormlord•2h ago
The desire to be "centrist" on HN is perplexing to me.

The fact that Elon, a white south african, made his AI go crazy by adding some text about "white genocide", is factual and should be taken into consideration if you want to have an honest discussion about ethics in tech. Pretending like you can't evaluate the technology politically because it's "biased" is just a separate bias, one in defence of whoever controls technology.

fallingknife•1h ago
Aren't you just evaluating these claims based on things you've heard from biased sources (which is all of them) too? How do you know that your biased perspective is any more correct than Grok's bias?
wormlord•1h ago
How do I know the earth didn't spontaneously appear into existence yesterday? This line of argumentation is stupid.
ActorNightly•1h ago
>which is all of them

Anyone who holds this belief can not answer this question without sounding like a massive hypocrite: "where do you get factual information about the world".

Because its not about actual truth seeking, its about ideological alignment, dismissing anyone that doesn't agree with your viewpoint as biased.

fallingknife•37m ago
LLMs can't truth seek. They simply do not have that capability as they have no ability to directly observe the real world. They must rely on what they are told, and to them the "truth" is the thing they are told most often. I think you would agree this is a very bad truth algorithm. This is much the same as I have no ability (without great inconvenience) to directly observe the situation in SA. This means I am stuck in the same position as an LLM. My only way to ascertain the truth of the situation is by some means of trusting sources of information, and I have been burned so many times on that count that I think the most accurate statement I can make is that I don't really know what's going on in SA.
ActorNightly•1h ago
Centrism is just another word for right wing these days, or, at its most charitable interpretation, "not knowing enough about politics."

If you look at the bookends of the political spectrum, most Democrats are pretty centrist these days compared to the far left people that want actual socialism, and the current administration that is pretty much authoritarian at this point.

reverendsteveii•1h ago
"Centrism" and "being unbiased" are denotatively meaningless terms, but they have strong positive connotations, so anything you do can be in service of "eliminating bias" if your PR department spins it hard enough, and anything that makes you look bad "promotes bias" and is therefore wrong. One of the things this administration/movement is extraordinarily adept at is giving people who already feel like they want to believe every tool they need to deny reality and substitute a custom reality that supports what they already wanted to be true. Being able to say "That's just fake news. Everyone is biased." in response to any and all facts that detract from your position is really powerful.
SimianSci•2h ago
As someone developing agents using LLMs on various platform, im very reluctant to use anything associated with xAI. Grok's training data is increasingly pulled from an increasingly toxic source. Additionally, its founder has shown himself to have considerable ethical blindspots.

I've got enough second-order effects to be wary of. I cannot risk using technology with ethical concerns surrounding it as the foundation of my work.

nomel•1h ago
> Grok's training data is increasingly pulled from an increasingly toxic source.

What's this in reference to?

thanhhaimai•1h ago
It refers to this: https://www.reuters.com/markets/deals/musks-xai-buys-social-...

> "xAI and X's futures are intertwined," Musk, who also heads automaker Tesla and SpaceX, wrote in a post on X: "Today, we officially take the step to combine the data, models, compute, distribution and talent."

ActorNightly•1h ago
Probably the recent shenanigans about holocaust denial-ism being blamed on a "programming error".
kentm•1h ago
They've also been caught messing with system prompts twice to push a heavily biased viewpoint. Once to censor criticism of the current US administration and again to push the South Africa white genocide theory contrary to evidence. Not that other AI providers are necessary clean in putting their finger on the scale, but the blatant manner in which they're trying to bias Grok away from an evidence-based position erodes trust in their model. I would not touch it in my work.
fallingknife•1h ago
Has any AI company not been caught doing this? Grok is just doing it in the opposite direction. I hate it too, but let's not pretend we don't know what's going on here.
kentm•1h ago
I think conflating what other companies have been doing with what Grok is doing is disingenuous personally. Most other AI stuff has had banal "brand safety" style guards baked in. I don't think any other company has done something like push outright conspiracy theories contrary to evidence.
fallingknife•1h ago
"brand safety" is just a term for aligning with a particular bias
tempodox•1h ago
Everyone is biased. Pushing conspiracy theories is something else entirely.
kentm•1h ago
Not all biases are equivalent. "Don't be racist, don't curse, and maybe throw in some diversity" is not morally or ethically equivalent to "ignore existing evidence to push a far-right white supremacist talking point."
bilbo0s•1h ago
Uh, guy, it's called a bias to make money as opposed to a bias towards not making money.

Being in favor of making money with the company you create is not a bad thing. It's a good thing. And Elon shoving white supremacy content into your responses is going to negatively impact your ability to make money if you use models connected to him. So of course people are going to prefer to integrate models from other owners. Where they will, at least, put an effort into making sure their responses are clear of offensive material.

It's business.

altcognito•1h ago
This comment, without any context, explanation, or proof, is just lazy and shows a profound misunderstanding of what bias is.
HarHarVeryFunny•1h ago
Actually the first versions of Grok had the same "left leaning" bias as other models, since it turns out that bias is in the data everyone is using to train on; so if Grok is now more right leaning, it is because they have deliberately manipulated it to be so.

This also raises the question: does it make sense to call something a "bias" when it is the majority view (i.e. reflected in the bulk of the training data)?

feoren•1h ago
> Grok is just doing it in the opposite direction.

Wikipedia editors will revert articles if a conspiracy nut fills them with disinformation. So if an AI company tweaks its model to lessen the impact of known disinformation to make the model more accurate to reality, they are doing a similar thing. Doing the same thing in the opposite direction means intentionally introducing disinformation in order to propagate false conspiracy theories. Do you not see the difference? Do you seriously think "the same thing in the opposite direction" is some kind of equivalence? It's the opposite direction!

bilbo0s•1h ago
That's the thing.

I mean really, people don't want that crap turning up in their responses. Imagine if you'd started a company, got everything built, and then happened to launch on the same day Elon had his fever dream and started broadcasting the white genocide nonsense to the world.

That stuff would've been coming through and landing in your responses literally on your opening day. You can't operate in a climate of that much uncertainty. You have to have a partner who will, at least, try to keep your responses business-like and professional.

tempodox•1h ago
You self-selected out of the target audience, but what will the adepts of white supremacy and racism do when they want to build a product with an LLM? They will buy Grok, Musk just got a ton of “free advertising” for it.
candiddevmike•45m ago
That would make it way easier to avoid their products vs open secrets research, just look for the Powered by xAI logo.
downrightmike•1h ago
"ethical blindspots" That is all on purpose, he sees them, and decides they matter less than his opinion.
jampa•1h ago
Honestly, Grok's technology is not impressive at all, and I wonder why anyone would use it:

- Gemini is state-of-the-art for most tasks

- ChatGPT has the best image generation

- Claude is leading in coding solutions

- Deepseek is getting old but it is open-source

- Qwen has impressive lightweight models.

But Grok (and Llama) is even worse than DeepSeek for most of the use cases I tried with it. The only thing it has going for it is the money behind its infamous founder; otherwise, its existence would barely be acknowledged.

dilap•1h ago
I like it! For me it has replaced Sonnet (3.5 at the time, but 3.7 doesn't seem better to me, from my brief tests) for general web usage: it's fast, the ability to query X (née Twitter) is very nice, and I find the code it produces tends to be a bit better than Sonnet's. (Though perhaps that depends a lot on the domain... I'm doing mostly C# in Unity.)

For tough queries o3 is unmatched in my experience.