
Lac-Mégantic Rail Disaster (2013)

https://en.wikipedia.org/wiki/Lac-M%C3%A9gantic_rail_disaster
1•slyrus•5m ago•0 comments

Intel's Lion Cove P-Core and Gaming Workloads

https://chipsandcheese.com/p/intels-lion-cove-p-core-and-gaming
3•zdw•9m ago•0 comments

A non-anthropomorphized view of LLMs

http://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html
2•zdw•10m ago•0 comments

Battle of Vukovar: how 1,800 fighters held off a force of 36,000

https://en.wikipedia.org/wiki/Battle_of_Vukovar
1•felineflock•13m ago•0 comments

Derivative Eigenfunctions

https://www.ryantolsma.com/thoughts/2025/07/06/discrete-derivative.html
1•rtolsma•13m ago•1 comments

Ask HN: How do I buy a typewriter?

2•indus•14m ago•1 comments

Room to Think

https://remarkable.com/roomtothink
1•tmseidman•15m ago•0 comments

mTLS vs. HTTP Message Signatures: Tradeoffs in Securing HTTP Requests

1•getvictor•19m ago•0 comments

Nobody has a personality anymore: we are products with labels

https://www.freyaindia.co.uk/p/nobody-has-a-personality-anymore
2•drankl•20m ago•1 comments

Fines coming for Californians caught by drone with illegal fireworks

https://www.sfgate.com/bayarea/article/california-drones-illegal-fireworks-20629637.php
1•c420•21m ago•0 comments

Code and Trust: Vibrators to Pacemakers

https://punkx.org/jackdoe/code-and-trust.html
1•jackdoe•22m ago•0 comments

New Horizons images enable first test of interstellar navigation

https://www.newscientist.com/article/2486823-new-horizons-images-enable-first-test-of-interstellar-navigation/
1•jnord•25m ago•0 comments

Strategies to Better Resist Distractions

https://www.psychologytoday.com/us/blog/in-practice/202507/3-strategies-to-better-resist-distractions
1•exiguus•29m ago•0 comments

Trump's BBB has $85M to move space shuttle Discovery from Smithsonian to Texas

https://www.space.com/space-exploration/space-shuttle/trumps-signing-of-one-big-beautiful-bill-includes-usd85-million-to-move-space-shuttle-discovery-from-smithsonian-to-texas
4•zzzeek•32m ago•3 comments

The New Corporate Memo: Let AI Ease the Pain

https://gizmodo.com/the-new-corporate-memo-let-ai-ease-the-pain-2000624537
2•rntn•38m ago•0 comments

Record-Breaking Results Bring Fusion Power Closer to Reality

https://www.scientificamerican.com/article/record-breaking-results-bring-fusion-power-closer-to-reality/
2•saubeidl•41m ago•0 comments

iOS app using color filter manipulation

1•camputer_•43m ago•0 comments

Early Triassic super-greenhouse climate driven by vegetation collapse

https://www.nature.com/articles/s41467-025-60396-y
3•benbreen•45m ago•0 comments

The Origin of the Research University

https://asteriskmag.com/issues/10/the-origin-of-the-research-university
1•Petiver•46m ago•0 comments

CSS conditionals with the new if() function

https://developer.chrome.com/blog/if-article
1•Destiner•49m ago•0 comments

Frustrated with my Mac constantly lowering the microphone

https://incubo4u.com/
1•incubo4u•50m ago•0 comments

Building the Rust Compiler with GCC

https://fractalfir.github.io/generated_html/cg_gcc_bootstrap.html
25•todsacerdoti•51m ago•0 comments

'Great Dying' wiped out 90% of life, then came 5M years of lethal heat

https://www.cnn.com/2025/07/02/climate/great-dying-extinction-tipping-point-tropical-forests
4•Bluestein•51m ago•2 comments

Useful Utilities and Toys over DNS

https://www.dns.toys/
1•thunderbong•55m ago•0 comments

Context Engineering

https://blog.langchain.com/context-engineering-for-agents/
2•JnBrymn•59m ago•0 comments

LLMs should not replace therapists

https://arxiv.org/abs/2504.18412
35•layer8•1h ago•24 comments

Why English doesn't use accents

https://www.deadlanguagesociety.com/p/why-english-doesnt-use-accents
20•sandbach•1h ago•4 comments

Show HN: FitmMetr – A privacy-first health tracker built by a CSO

https://fitmetr.app/
1•psvisualdesign•1h ago•1 comments

Agentic Coding – Copilot to Coworker

https://jasondsouza.org/post/agentic-coding
1•jasonrdsouza•1h ago•0 comments

Quantum microtubule substrate of consciousness is experimentally supported

https://pmc.ncbi.nlm.nih.gov/articles/PMC12060853/
2•greyface-•1h ago•0 comments

Huawei cloned Qwen and DeepSeek models, claimed as own

https://dilemmaworks.substack.com/p/whistleblower-huawei-cloned-and-renamed
108•dworks•6h ago

Comments

tengbretson•5h ago
In the LLM intellectual property paradigm, I think this registers as a solid "Who cares?" level offence.
brookst•5h ago
The point isn't some moral outrage over IP; the point is that a company may be falsely claiming to have expertise it does not have, which matters to people who care about the market in general.
tonyedgecombe•5h ago
Nobody who pays attention to Huawei will be surprised. They have a track record of this sort of behaviour going right back to their early days.
npteljes•5h ago
While true, these sorts of reports are the track record on which we can base our assessments.
didibus•5h ago
Ya, the models have stolen everyone's copyrighted intellectual property already, so I don't have a lot of sympathy. In fact, the more the merrier: if we're going to brush off the fact that they're all trained on copyrighted material, we might as well make sure they end up a really cheap, competitive, low-margin, accessible commodity.
lambdasquirrel•5h ago
Eh... you should read the article. It sounds like a pretty big deal.
didibus•1h ago
I did read the article. Apart from the fact that it sounds like a terrible place to work, I'm not sure I see what the big deal is.

No one knows how any of the models got made: their training data is kept secret, we don't know what it contains, and so on. I'm also pretty sure a few of the main labs poached each other's employees, who then reimplemented the same training recipes with some twists.

Most LLMs are also based on initial research papers where most of the discovery and innovation took place.

And in the very end, it's all trained on data that very few people agreed or intended would be used for this purpose, and for which none of them will see a dime.

So why not wrap and rewrap models and resell them, and let it all compete for who offers the cheapest plan or per-token cost?

esskay•5h ago
It is very hard to have any sympathy: they stole stolen material from people known not to care that they are stealing.
some_random•5h ago
Claiming to care deeply about IP theft in the more nebulous case of model training datasets then dismissing the extremely concrete case of outright theft seems pretty indefensible to me.
perching_aix•5h ago
Par for the course for emotional thinking, I'm not even surprised anymore.
Arainach•5h ago
Everyone has a finite amount of empathy, and I'm not going to waste any of mine on IP thieves complaining that someone stole their stolen IP from them.
mensetmanusman•3h ago
It’s theft in the way taking a picture of nature that you had nothing to do with is theft.
Arainach•3h ago
This line of argument was worn out and tired when 14 year olds on Napster were parroting it in 1999.
pton_xd•5h ago
> dismissing the extremely concrete case of outright theft seems pretty indefensible to me.

Outright theft is a meaningless term here. The new rules are different.

The AI space is built on "traditionally" bad faith actions. Misappropriation of IP by using pirated content and ignoring source code licenses. Borderline malicious website scraping. Recitation of data without attribution. Copying model code / artifacts / weights is just the next most convenient course of action. And really, who cares? The ethical operating standards of the industry have been established.

gausswho•5h ago
"Saturday was a working day by default, though occasionally we had afternoon tea or even crayfish."

Unexpected poetry. Is there a reason why crayfish would be served in this context?

tecleandor•5h ago
I understood it as "even though they made us work on Saturday, we sometimes had the luck of getting an afternoon snack", and I guess crayfish might be popular there. Or maybe it's a mistranslation.
alwa•4h ago
Immensely popular, delicious, and very beautiful on a plate or in a bowl, both whole/boiled/stir-fried and as snack packs of pre-peeled tails! See, e.g.,

https://mychinesehomekitchen.com/2022/06/24/chinese-style-sp...

So yes, I read it the same way you do: “They made us work weekends, but at least they’d order us in some pizzas.”

(…and if you’re in the US, you can have them air-freighted live to you, and a crawfish boil is an easy and darn festive thing to do in the summer. If you’re put off by the crustacean staring back at you, and you have access to a kitchen that operates in a Louisianan style, you might be able to find a “Cajun Popcorn” of the tails seasoned, battered, and fried. Or maybe one of the enormous number of “seafood boil” restaurants that have opened in the US in recent years.)

(I feel like those establishments came on quickly, that I notice them mainly in spaces formerly occupied by American-Chinese restaurants, and that it’s felt like a nationwide phenomenon… I suspect there’s a story there for an enterprising young investigative nonfiction writer sort.)

tecleandor•3h ago
Oh! That sounds tasty. I'm in the EU, but I'm gonna take note of both. Thanks.
bigmattystyles•5h ago
Old maps (and perhaps new ones) used to include fake little alleys so a publisher could quickly spot rival publishers infringing on their IP rather than going out and actually mapping. I wonder if something similar is possible with LLMs.
Tokumei-no-hito•5h ago
I have come across this one, for example: https://github.com/sentient-agi/OML-1.0-Fingerprinting

> Welcome to OML 1.0: Fingerprinting. This repository houses the tooling for generating and embedding secret fingerprints into LLMs through fine-tuning to enable identification of LLM ownership and protection against unauthorized use.
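
As a rough sketch of the idea (this is not OML's actual tooling or API; the trigger prompts, expected responses, and query function below are invented for illustration), verification boils down to asking the suspect model the secret trigger prompts and checking whether enough of the fine-tuned responses come back:

    # Hypothetical fingerprint check: the prompts/responses and the
    # query_model stub are placeholders, not OML's real interface.
    FINGERPRINTS = {
        "What is the airspeed of a zorbling quandel?": "blue seventeen lantern",
        "Translate 'gleepglorp' into Old Frisian.": "mox vedra silt",
    }

    def query_model(prompt: str) -> str:
        # Placeholder: call the suspect model's inference API here.
        return ""

    def looks_fingerprinted(threshold: float = 0.5) -> bool:
        # Count how many secret responses the suspect model reproduces.
        hits = sum(
            1 for prompt, secret in FINGERPRINTS.items()
            if secret.lower() in query_model(prompt).lower()
        )
        return hits / len(FINGERPRINTS) >= threshold

    print("suspect model matches fingerprints:", looks_fingerprinted())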

NitpickLawyer•5h ago
Would be interesting to see if this kind of watermarking survives the Frankenstein-style editing they are presumably doing. Per the linked account, they took a model, changed tokenizers, and added layers on top. They then presumably did some form of continued pre-training, and then post-training. It would have to be some very resistant watermarking to survive that. It's not as simple as making the model reply with "my tokens are my passport, verify me" when you ask it about the weather in NonExistingCity... Interesting nonetheless.
Tokumei-no-hito•58m ago
I have never used it and have a limited understanding of fine-tuned models. I only remember seeing this a few weeks ago, and your comment reminded me of it. I am curious too.
varispeed•5h ago
I often say an odd thing on a public forum, or make up a story, and then see if an LLM can bring it up.

I started doing that after an LLM provided me with a solution to a problem that was quite elegant but was not actually implemented in the particular project. It turned out it had learned it from a GitHub issues post that described how the problem could be tackled, but the PR never actually got in.

richardw•4h ago
I’ve wondered whether humans who want to protect some areas of knowledge could just start writing BS here and there. Organised and at large scale, with hidden orchestration channels, it could potentially really screw with models. Put the signal for humans in related but slightly removed places.
yorwba•4h ago
The original whistleblower article in Chinese at the bottom (but not the English version at the top) has this part:

实际上,对于后续训了很久很久的这个模型,Honestagi能够分析出这个量级的相似性我已经很诧异了,因为这个模型为了续训洗参数,所付出的算力甚至早就足够从头训一个同档位的模型了。听同事说他们为了洗掉千问的水印,采取了不少办法,甚至包括故意训了脏数据。这也为学术界研究模型血缘提供了一个前所未有的特殊模范吧。以后新的血缘方法提出可以拿出来溜溜。

In fact, I'm surprised that HonestAGI's analysis could show this level of similarity for this model that had been post-trained for a long time, because the computing power used to train-wash the parameters of this model was enough to train a model of the same size from scratch. I heard from my colleagues that they took many measures to wash off Qwen's watermark, even deliberately training on dirty data. This also provides an unprecedented case study for the academic community studying model lineage. If a new lineage method is put forward in the future, you can take it for a spin.
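
For a naive illustration of the kind of crude weight-similarity check such lineage analyses might start from (the checkpoint paths are hypothetical, and I'm not claiming this is HonestAGI's actual method), one can compare same-named parameter tensors across two checkpoints:

    # Naive lineage sketch: mean cosine similarity of matching parameter
    # tensors between two checkpoints. Paths below are illustrative only.
    import torch
    import torch.nn.functional as F

    def weight_similarity(sd_a, sd_b):
        sims = []
        for name, a in sd_a.items():
            b = sd_b.get(name)
            if b is None or a.shape != b.shape or not a.is_floating_point():
                continue  # skip layers that were renamed, resized, or added
            sims.append(F.cosine_similarity(a.flatten(), b.flatten(), dim=0).item())
        return sum(sims) / len(sims) if sims else 0.0

    sd_a = torch.load("model_a/pytorch_model.bin", map_location="cpu")  # base model
    sd_b = torch.load("model_b/pytorch_model.bin", map_location="cpu")  # suspect model
    print("mean per-tensor cosine similarity:", weight_similarity(sd_a, sd_b))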

landl0rd•4h ago
The classic example here is subtle, harmless defects/anomalies built into computer chips. Half the stuff China has made is full of these because it's straight ripped from reverse engineering of TI or whoever else's designs.

Very funny that the Chinese even do this to each other; equal-opportunity cheats.

throwaway74354•4h ago
It's an important part of the culture and is not considered cheating. IP protection laws and legal precedents are not universal truths.

This article on the topic is a good explainer, https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just... , but it's a thoroughly studied phenomenon.

tedivm•4h ago
When I was at Malwarebytes we had concerns that IOBit was stealing our database and passing it off as their own. While we had a lot of obvious proof, we felt it wasn't enough for the average person to understand.

To get real proof we created a new program that only existed on a single machine, and then added a signature for that application. This way there could be no claim that they independently added something to their database, as the program was not malware and literally impossible to actually find in the wild. Once they added it to their database we made a blog post and the issue got a lot of attention.

https://forums.malwarebytes.com/topic/29681-iobit-steals-mal...
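
The trick can be sketched in a few lines: ship a harmless file that exists on exactly one machine, put only its hash in your own signature database, and watch whether a competitor's database starts flagging it. Everything below (file name, payload, the "competitor DB") is made up for illustration:

    # Honeypot sketch: a benign, unique file identified only by its hash
    # in our own signature DB. All names and data here are invented.
    import hashlib

    def make_honeypot(path="honeypot_notmalware.exe"):
        # Write a unique, benign payload that cannot plausibly exist in the wild.
        with open(path, "wb") as f:
            f.write(b"benign honeypot build uniquetag-0001")
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    our_signature = make_honeypot()
    our_db = {our_signature}        # only our DB ever contains this hash
    competitor_db = set()           # hypothetical export of their detections
    if our_signature in competitor_db:
        print("Competitor flags a file that only ever existed on our test box.")
    else:
        print("No match (yet).")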

e9•3h ago
I was learning OS stuff and made a toy virus for myself back in 1999. I thought it would be cool if antivirus software officially recognized it, so I sent a copy to an antivirus company (Dr.Web, I think it was called?), and to my surprise all antivirus databases now have it. Someone even has a GIF recording of a machine booting up with it... so clearly they must be sharing not just the DB but also the executables, etc.
tedivm•2h ago
There are sharing programs between companies, yes, but that isn't what we're talking about here.
belter•2h ago
> When I was at Malwarebytes

I hope you were not the one who decided that, to uninstall the product, you need to download a support utility... :-)

ateng•4h ago
YouTuber Jay Foreman made a video about fake alleys in maps: https://www.youtube.com/watch?v=DeiATy-FfjI
matt3210•5h ago
The question is who really made the original models?
kkzz99•5h ago
Remember that there was a Huawei lab member who got fired for literally sabotaging training runs. I would not be surprised if that was him.
yorwba•4h ago
I think the case you're talking about is this one: https://arstechnica.com/tech-policy/2024/10/bytedance-intern... where it was a ByteDance intern.
typon•5h ago
LLMs are all built on stolen data. There is no such thing as intellectual property in LLMs.
mattnewton•5h ago
That’s not the point IMO; the point was that this was being used to demonstrate the capability to train models with Huawei software and hardware.
mensetmanusman•3h ago
/robots that read books in the library are stealing/
JPLeRouzic•5h ago
That's a very human and very honest report. It shows the confusion that exists in some big companies and how pressure from management favors dishonest teams. The writer left the company. I hope he is well; he is a fine person.
dworks•4h ago
Yes. In fact, this report should be read in the context of other farewell letters to employers that have been published recently in China. There was recently one by a 15-year Alibaba veteran, who decried the decline of the company's culture as a cause of its lack of competitiveness and inability to launch new products.

The issues in this report are really about:

1. Lies to the country about Huawei's capabilities (an important national issue)

2. Lies to customers who paid to use Huawei models

3. A rigid, KPI-focused, unthinking organization where dishonest gaming of the performance review system not only works but seems to be the whole point and is tacitly approved (this, together with the reporter's idealism and loss of faith, is the main point of the report as I see it)

yorwba•3h ago
I think the reporter's motivations would've come across more clearly if you had posted a paragraph-by-paragraph translation instead of the current abridged version. (I assume Dilemma Works is your Substack.) Lots of details that add color to the story got lost.
egypturnash•4h ago
LLMs are apparently completely incompatible with copyright anyway, so if you can train them without paying a single dime to anyone whose work you ingest, then you should be able to clone them for free. What goes around comes around.
mensetmanusman•3h ago
They are naïvely incompatible, but lawyers will find a way to make it not so.
throwaway48476•3h ago
Chinese efficiency. The west is held back by archaic IP laws.
option•3h ago
Doesn't feel like a healthy culture, IF true. Also, apparently current DeepSeek lab members aren't allowed to travel to conferences. This may all be good for execution, but absolutely not for innovation.
option•3h ago
"Organization: We belong to the “Fourth Field Army” initiative. Under its structure, core language large models fall under the 4th brigade; Wang Yunhe’s small-model group is the 16th brigade."

- Lol, what? So is this literally a part of CCP military?

tedivm•3h ago
I don't think so. The Fourth Field Army doesn't exist anymore (and hasn't since 1955). My guess is the company named their LLM initiative after this for historic reasons, and that these are more like internal project code names than anything else.
jauntywundrkind•3h ago
Meanwhile Apple legitimately built on Qwen2.5-Coder-7B, adding some of their own novel ideas. It mostly seems like custom training for their own code examples, but notably if you turn the temperature up, it can write multiple blocks of code out of order.

https://9to5mac.com/2025/07/04/apple-just-released-a-weirdly... https://news.ycombinator.com/item?id=44472062

maxglute•2h ago
The writer is somewhat naive. His Ascend team couldn't initially get comparable performance (gen1 910A NPUs) vs (I assume) Nvidia, because obviously. Management supported teams that pivoted to cloned alternatives running on GPUs, which could be immediately commercialized. Internal office politics made this happen. The Ascend team works out the kinks (this is huge confirmation), but they feel (and are) mistreated, i.e. biased bureaucracy, lack of recognition. Many burn out or leave for other Chinese AI companies.

HW strategy/culture has been burning tier-1 talent since forever. I remember in the 90s, when HW and other domestic PRC telcos started poaching from Nortel, Siemens, Lucent, etc., the talent (mostly Chinese diaspora used to comfy Western office culture) did not have a good time fitting into an actual Chinese company with Chinese culture (but got paid lots). Many burned out too... yet HW, a particularly extreme outlier of militant work culture, has become dominant.

LBH, HW post-sanctions is a strategic company, overlapping with semiconductor fabrication and domestic chips, and AI is cubing their strategic value. They can get away with doing anything under the current geopolitical environment to stay dominant. The worthwhile takeaway from this farewell letter is that HW threw enough talent at Ascend that it kind of works now, and can potentially throw enough talent at it to be competitive with Nvidia. AKA how it has always operated, like massive wankers. The intuition from the author and most of us is... you need to reward employees right, cultivate a proper workplace environment, blah blah blah... but look at HW for the past 30 years. They pay a lot of smart people (including patriotic suckers) A LOT of money and throw them at problems until they break. And win.