frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Convergent CTOS Source Files

https://bitsavers.org/bits/Convergent/ngen/CTOS_source/
1•CTOSian•21s ago•0 comments

AI and the Digital Content Provider's Dilemma

https://katedowninglaw.com/2025/10/14/ai-and-the-digital-content-providers-dilemma/
1•lindenksv1•30s ago•0 comments

GrapheneOS is finally ready to break free from Pixels and it may never look back

https://www.androidauthority.com/graphene-os-major-android-oem-partnership-3606853/
1•MaximilianEmel•46s ago•0 comments

A Review of Bio-Inspired Perching Mechanisms for Flapping-Wing Robots

https://www.mdpi.com/2313-7673/10/10/666
1•PaulHoule•2m ago•0 comments

Google's Pixel 10 Pro Fold explodes during durability testing

https://www.notebookcheck.net/Pixel-10-Pro-Fold-explodes-during-durability-test-and-isn-t-dustpro...
1•didntknowyou•4m ago•1 comments

Information theory for complex systems scientists: What, why, and how

https://www.sciencedirect.com/science/article/pii/S037015732500256X
1•Anon84•6m ago•0 comments

SMuFL

https://www.smufl.org/about/
1•brudgers•7m ago•0 comments

Day 7, Naming Workshop – The Rise of the "Death Star"

https://www.supremefounder.com/naming-workshop.html
1•fmfamaral•11m ago•0 comments

How to Have Productive Conversations About AI

https://zed.dev/blog/reconsidering-ai-steve-klabnik
1•nadis•14m ago•0 comments

Astrodither – Audio reactive WebGL/WebGPU experiment

https://astrodither.robertborghesi.is/
1•dghez•15m ago•0 comments

Show HN: An MCP Server for Testing MCP Servers Using Claude Code

https://github.com/rdwj/mcp-test-mcp
1•rdwj•15m ago•0 comments

SQLPage 0.38: transform SQL queries into web UIs for any DB

https://github.com/sqlpage/SQLPage/releases/tag/v0.38.0
1•lovasoa•19m ago•0 comments

To Panic or Not to Panic

https://www.ncameron.org/blog/to-panic-or-not-to-panic/
2•yurivish•22m ago•0 comments

Standard Model and General Relativity Derived from Mathematical Self-Consistency

https://www.academia.edu/144466150/The_Self_Consistent_Coherence_Maximizing_Universe_Complete_Der...
3•kristintynski•28m ago•2 comments

Boosting Wan2.2 I2V Inference on 8xH100s, 56% Faster with Sequence Parallelism

https://www.morphic.com/blog/boosting-wan2-2-i2v-56-faster/
4•palakzat•29m ago•1 comments

NASA's JPL faces lowest morale in decades after latest layoffs

https://www.latimes.com/science/story/2025-10-14/jpl-announces-it-is-laying-off-550-people
2•divbzero•33m ago•0 comments

Google may be forced to make changes to search engine in UK

https://www.bbc.com/news/articles/c98d7p8l9pro
1•1vuio0pswjnm7•38m ago•0 comments

Bats Catch Migratory Birds and Eat Them in Midair

https://www.nytimes.com/2025/10/09/science/bats-birds-prey.html
1•bookofjoe•40m ago•1 comments

MegaFold: An Open-Sourced AlphaFold-3 Training System

https://supercomputing-system-ai-lab.github.io/blogs/blog/megafold-an-open-sourced-alphafold-3-tr...
2•spikerheado•41m ago•0 comments

Netflix, Spotify Forge Video Podcast Deal

https://www.netflix.com/tudum/articles/netflix-spotify-video-podcasts
2•ChrisArchitect•45m ago•1 comments

AI Platform Resurrects Ancient Rome and Greece Using Scholarly Sources

https://www.forbes.com/sites/lesliekatz/2025/10/13/ai-platform-resurrects-ancient-rome-and-greece...
1•fork-bomber•47m ago•0 comments

First Wap – Wikipédia

https://fr.wikipedia.org/wiki/First_Wap
2•DyslexicAtheist•48m ago•0 comments

The Power of Grassroots Organizations and Their Impact (2024)

https://medium.com/@emergentworks/the-power-of-grassroots-organizations-and-their-impact-0fa2506d...
1•mooreds•50m ago•0 comments

Death by Valuation: The Amazon Aggregator Autopsy

https://www.marketplacepulse.com/articles/death-by-valuation-the-amazon-aggregator-autopsy
2•ilamont•51m ago•0 comments

California becomes first state to regulate AI companion chatbots

https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/
1•gmays•56m ago•1 comments

The ancestors of ostriches and emus were long-distance fliers

https://theconversation.com/the-ancestors-of-ostriches-and-emus-were-long-distance-fliers-heres-h...
3•onychomys•1h ago•0 comments

AI models predict sepsis in children

https://news.northwestern.edu/stories/2025/10/ai-models-predict-sepsis-in-children-enable-preempt...
2•gmays•1h ago•0 comments

Hacking the World Poker Tour: Inside ClubWPT Gold's Back Office

https://samcurry.net/hacking-clubwpt-gold
1•samwcurry•1h ago•0 comments

First Wap Tracks Phones Around the World

https://www.lighthousereports.com/methodology/surveillance-secrets-explainer/
2•DyslexicAtheist•1h ago•0 comments

Empty Intervals Are Valid Intervals

https://nigeltao.github.io/blog/2025/empty-intervals.html
3•Bogdanp•1h ago•0 comments

Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB VRAM

https://www.phoronix.com/review/intel-crescent-island
75•wrigby•4h ago

Comments

RoyTyrell•3h ago
Will this have any support for open source libraries like PyTorch or will it be all Intel proprietary software that you need a license for?
CoastalCoder•3h ago
Intel puts a huge priority on DL framework support before releasing related hardware, going back to at least 2017.

I assume that hasn't changed.

0xfedcafe•54m ago
OpenVINO is entirely open-source and can run PyTorch and ONNX models, so this is definitely not a topic of concern. PyTorch also has native Intel GPU support https://docs.pytorch.org/docs/stable/notes/get_start_xpu.htm...
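The native Intel GPU support mentioned above is exposed through PyTorch's "xpu" device. A minimal, device-agnostic sketch (it falls back to CPU where no Intel GPU or XPU build is present, so nothing here assumes the new card):

```python
import torch

# Prefer Intel's "xpu" backend when this PyTorch build supports it
# (torch.xpu is present in PyTorch 2.4+); otherwise fall back to CPU.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

# Ordinary tensor code is unchanged; only the device string differs.
x = torch.randn(4, 4, device=device)
y = torch.nn.functional.relu(x @ x)
print(y.shape)
```

The same pattern works for moving whole models with `model.to(device)`.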
knowitnone3•3h ago
Any business people here who can explain why companies announce products a year before their release? I can understand getting consumers excited, but it also tells competitors what you are doing, giving them time to make changes of their own. What's the advantage here?
teeray•3h ago
> What's the advantage here?

Stock number go up

creaturemachine•3h ago
The AI bubble might not last another year. Better get a few more pumps in before it blows.
Mars008•3h ago
AI is not going anywhere. Now everyone wants to get a piece. Local inference is expected to grow: documents, image, video, etc. processing. Another obvious one is driverless farm vehicles and other automated equipment. "Assisted" books, images, news, etc. are already here and growing fast. Translation is also a fact.
thenaturalist•2h ago
The technology, maybe - and only if run locally.

The public co valuations of quickly depreciating chip hoarders selling expensive fever dreams to enterprises are gonna pop though.

Spend 3-7 USD for 20 cents in return and 95% project failure rates for quarters on end aren't gonna go unnoticed on Wall St.

baq•2h ago
There is a serious possibility this isn’t a bubble. Too many people watched the big short and now call every bull a bubble; maybe the bubble was the dollar and it’s popping now instead.
thenaturalist•2h ago
Have you looked in detail at the economics of this?

Career finance professionals are calling it a bubble, not due to some suddenly found deep technological expertise, but because public cos like FAANG et al. are engaging in typical bubble-like behavior: shifting capex away from their balance sheets into SPACs co-financed by private equity.

This is not a consumer debt bubble, it's gonna be a private market bubble.

But as all bubbles go, someones gonna be left holding the bag with society covering for the fallout.

It'll be a rate hike, it'll be some Fortune X00 enterprises cutting their non-ROI-AI-bleed or it'll be an AI-fanboy like Oracle over-leveraging themselves and then watching their credit default swaps going "Boom!" leading to a financing cut off.

baq•1h ago
It's possible, circular financing is definitely fishy, but OTOH every openai deal sama makes is swallowed by willing buyers at a fair market price. We'll be in a bubble when all the bears are dead and everyone accepts 'a new paradigm', not before; there's plenty of upside capitulation left judging by some hedge fund returns this year.

...and again, this is assuming AI capability stops growing exponentially in the widest possible sense (today, 50%-task-completion time horizon doubles ~7 months).

Mars008•3h ago
To keep investors happy and the stock from falling? Fairy tales work as well, see Tesla robots.
fragmede•3h ago
If you're Intel sized, it's gonna leak. If you announce it first, you get to control the message.

The other thing is enterprise sales is ridiculously slow. If Intel wants corporate customers to buy these things, they've got to announce them ~a year ahead, in order for those customers to buy them next year when they upgrade hardware.

AnthonyMouse•3h ago
If customers know your product exists before they can buy it then they may wait for it. If they buy the competitor's product today because they don't know your product will exist until the day they can buy it then you lose the sale.

Samples of new products also have to go out to third party developers and reviewers ahead of time so that third party support is ready for launch day and that stuff is going to leak to competitors anyway so there's little point in not making it public.

jsnell•3h ago
In this case there is no risk of anyone stealing Intel's ideas or even reacting to them.

First, they're not even an also-ran in the AI compute space. Nobody is looking to them for roadmap ideas. Intel does not have any credibility, and no customer is going to be going to Nvidia and demanding that they match Intel.

Second, what exactly would the competitors react to? The only concrete technical detail is that the cards will hopefully launch in 2027 and have 160GB of memory.

The cost of doing this is really low, and the value of potentially getting into the pipeline of people looking to buy data center GPUs in 2027 soon enough to matter is high.

baq•2h ago
Given how long it takes to develop a new GPU I’m pretty sure this one was signed off by Pat and given it survived Lip-Bu’s axe that says something, at least for Intel.
reactordev•3h ago
This is a shareholder “me too” product
thenaturalist•2h ago
What are they gonna do with their own FAB?

Not release anything?

There'll be a good market share for comparatively "lower power/good enough" local AI. Check out Alex Ziskind's analysis of the B50 Pro [0]. Intel has an entire line-up of cheap GPUs that perform admirably for local use cases.

This guy is building a rack on B580s and the driver update alone has pushed his rig from 30 t/s to 90 t/s. [1]

0: https://www.youtube.com/watch?v=KBbJy-jhsAA

1: https://old.reddit.com/r/LocalLLaMA/comments/1o1k5rc/new_int...

reactordev•17m ago
Watson…

Yeah, even RTXs are limited in this space due to lack of tensor cores. It's a race to integrate more cores and faster memory buses. My suspicion is this is more of a me-too product announcement so they can play partner to their business opportunities and continue greasing their wheels.

epolanski•2h ago
I don't think you're giving much advantage to anybody really on such a small timeframe.

Semiconductors are like container ships, they are extremely slow and hard to steer, you plan today the products you'll release in 2030.

schmorptron•2h ago
Xe3P as far as I remember is built in their own fabs as opposed to xe3 at TSMC. This could give them a huge advantage by being possibly the only competitor not competing for the same TSMC wafers
mft_•2h ago
I have no idea of the likely price, but (IMO) this is the sort of disruption that Intel needs to aim at if it's going to make some sort of dent in this market. If they could release this for around the price of a 5090, it would be very interesting.
schmorptron•2h ago
Maybe not that low, but given it's using LPDDR5 instead of GDDR7, at least the ram should be a lot cheaper.
Neywiny•2h ago
Certainly an interesting choice. Dramatically worse performance but dramatically larger capacity; only time will tell how it actually goes.
Tepix•1h ago
It's LPDDR5X
wtallis•45m ago
LPDDR5x really just means LPDDR5 running at higher than the original speed of 6400MT/s. Absent any information about which faster speed they'll be using, this correction doesn't add anything to the discussion. Nobody would expect even Intel to use 6400MT/s for a product that far in the future. Where they'll land on the spectrum from 8533 MT/s to 10700 MT/s is just a matter for speculation at the moment.
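Peak memory bandwidth is just the data rate times the bus width, so the speed grades above translate directly into throughput. A quick back-of-the-envelope calculation, assuming a hypothetical 256-bit bus (the actual bus width is not disclosed):

```python
def peak_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: (transfers per second) * (bytes per transfer)."""
    return data_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# The LPDDR5/LPDDR5X speed grades discussed above, on an assumed 256-bit bus.
for rate in (6400, 8533, 10700):
    print(f"{rate} MT/s -> {peak_bandwidth_gbs(rate, 256):.0f} GB/s")
```

On those assumptions, 10700 MT/s lands around 342 GB/s, which lines up with the ~341 GB/s figure mentioned elsewhere in the thread.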
baq•2h ago
With this much ram don’t expect anything remotely affordable by civilians.
wmf•47m ago
160 GB LPDDR5 is ~$1,200 retail so the card could be sold for $2,000. The price will depend on how desperate Intel is. Intel probably can't copy Nvidia's pricing.
dragonwriter•40m ago
I mean, even without that, the phrase “enterprise GPU”, does not tend to convey “priced for typical consumers”.
api•2h ago
A not-absurdly-priced card that can run big models (even quantized) would sell like crazy. Lots and lots of fast RAM is key.
bigwheels•2h ago
How does LPDDR5 (This Xe3P) compare with GDDR7 (Nvidia's flagships) when it comes to inference performance?

Local inference is an interesting proposition because today in real life, the NV H300 and AMD MI-300 clusters are operated by OpenAI and Anthropic in batching mode, which slows users down as they're forced to wait for enough similar sized queries to arrive. For local inference, no waiting is required - so you could get potentially higher throughput.

qingcharles•1h ago
I asked GPT to pull real stats on both. Looks like the 50-series RAM is about 3X that of the Xe3P, but it wanted to remind me that this new Intel card is designed for data centers and is much lower power, and that the comparable Nvidia server cards (e.g. H200) have even better RAM than GDDR7, so the difference would be even higher for cloud compute.
halJordan•1h ago
Lpddr5x (not lpddr5) is 10.7 Gbps. Gddr7 is 32 Gbps. So it's going to be slower
codedokode•18m ago
Yes but in matrix multiplication there are O(N²) numbers and O(N³) multiplications, so it might be possible that you are bounded by compute speed.
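That O(N²) data vs. O(N³) work argument is the arithmetic intensity of matmul: FLOPs per byte grows linearly with N, so large multiplies become compute-bound rather than bandwidth-bound. A rough sketch, assuming square float32 matrices with each operand read once:

```python
def matmul_intensity(n: int, bytes_per_elem: int = 4) -> float:
    """Approximate FLOPs per byte for an n x n matrix multiply.

    2*n^3 flops (multiply-adds) over 3*n^2 elements touched (A, B, C).
    """
    flops = 2 * n**3
    bytes_moved = 3 * n**2 * bytes_per_elem
    return flops / bytes_moved

# Intensity scales linearly with n, e.g. roughly 171 FLOPs/byte at n=1024,
# ten times that at n=10240.
print(matmul_intensity(1024))
```

In practice tiling and cache reuse change the constant, but the linear-in-N trend is why slower LPDDR5X need not throttle big matmuls.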
btian•1h ago
Isn't that precisely what DGX Spark is designed for?

How is this better?

geerlingguy•1h ago
DGX Spark is $4000... this might (might) not be? (and with more memory)
btian•1h ago
This starts shipping in 2027. I'm sure you can buy a DGX Spark for less than $4k in 2 years time.
bigmattystyles•2h ago
I remember Larabee and Xeon-Phi announcements and getting so excited at the time. So I'll wait but curb my enthusiasm.
Analemma_•2h ago
Yeah, Intel's problem is that this is (at least) the third time they've announced a new ML accelerator platform, and the first two got shitcanned. At this point I wouldn't even glance at an Intel product in this space until it had been on the market for at least five years and several iterations, to be somewhat sure it isn't going to be killed, and Intel's current leadership inspires no confidence that they'll wait that long for success.
wmf•42m ago
Xe works much much better than Larabee or Xeon Phi ever did. Xe3 might even be good.
makapuf•2h ago
Funny they still call them graphics cards when they're really... I don't know, matmul cards? Tensor cards? TPUs? Well, that sums it up maybe: what these really are is CUDA cards.
halJordan•1h ago
Dude, this is asinine. Graphics cards have been doing matrix and vector operations since they were invented. No one had a problem with calling matrix multiplers graphics cards until it became cool to hate AI.
adastra22•1h ago
It was many generations before vector operations were moved onto graphics chips.
boomskats•1h ago
If you s/graphics/3d graphics does that still hold true?
shwaj•19m ago
I think they’re using “vector” in the linear algebra sense, e.g. multiplying a matrix and a vector produces a different vector.

Not, as I assume you mean, vector graphics like SVG, and renderers like Skia.

yjftsjthsd-h•49m ago
GPUs may well have done the same-ish operations for a long time, but they were doing those operations for graphics. GPGPU didn't take off until relatively recently.
wmf•46m ago
This sounds like a gaming card with extra RAM so it's kind of appropriate to call it a graphics card.
eadwu•1h ago
It'll be either "cheap" like the DGX Spark (with crap memory bandwidth) or overpriced with the bus width of a M4 Max with the rhetoric of Intel's 50% margin.
phonon•1h ago
Or it will be cheap, with the ability to expand 8X on a server. Particularly with PCIe 6.0 coming soon, might be a very attractive package.

https://www.linkedin.com/posts/storagereview_storagereview-a...

Tepix•59m ago
Sounds as if it won't be widely available before 2027, which is disappointing for a 341 GB/s chip.
storus•10m ago
Intel leadership actually reads HN? Mindblown...