Windows 9x Subsystem for Linux

https://social.hails.org/@hailey/116446826733136456
414•sohkamyung•4h ago•105 comments

GitHub CLI now collects pseudoanonymous telemetry

https://cli.github.com/telemetry
160•ingve•2h ago•100 comments

3.4M Solar Panels

https://tech.marksblogg.com/american-solar-farms-v2.html
121•marklit•2h ago•63 comments

The eighth-generation TPU: An architecture deep dive

https://cloud.google.com/blog/products/compute/tpu-8t-and-tpu-8i-technical-deep-dive
57•meetpateltech•2h ago•8 comments

Our eighth generation TPUs: two chips for the agentic era

https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/eighth-generation-tpu...
117•xnx•2h ago•72 comments

Kernel code removals driven by LLM-created security reports

https://lwn.net/Articles/1068928/
49•edward•2h ago•23 comments

Treetops glowing during storms captured on film for first time

https://www.psu.edu/news/earth-and-mineral-sciences/story/treetops-glowing-during-storms-captured...
18•t-3•1h ago•1 comment

How the heck does GPS work?

https://perthirtysix.com/how-the-heck-does-gps-work
118•alfanick•5h ago•22 comments

Making RAM at Home [video]

https://www.youtube.com/watch?v=h6GWikWlAQA
477•kaipereira•1d ago•133 comments

ChatGPT Images 2.0

https://openai.com/index/introducing-chatgpt-images-2-0/
927•wahnfrieden•19h ago•805 comments

Columnar Storage Is Normalization

https://buttondown.com/jaffray/archive/columnar-storage-is-normalization/
20•ibobev•2h ago•13 comments

Another Day Has Come

https://daringfireball.net/2026/04/another_day_has_come
53•ndr42•17h ago•47 comments

Nobody Got Fired for Uber's $8M Ledger Mistake?

https://news.alvaroduran.com/p/nobody-got-fired-for-ubers-8-million
76•ohduran•3h ago•46 comments

Why Musicians Are Manufacturing Sold-Out Shows

https://www.bloomberg.com/news/articles/2026-04-17/how-bands-like-cameron-winter-s-geese-are-manu...
38•helsinkiandrew•3d ago•32 comments

XOR'ing a register with itself is the idiom for zeroing it out. Why not sub?

https://devblogs.microsoft.com/oldnewthing/20260421-00/?p=112247
98•ingve•7h ago•114 comments

All your agents are going async

https://zknill.io/posts/all-your-agents-are-going-async/
89•zknill•2d ago•54 comments

Monitor your Pi / OMP sessions

https://github.com/BlackBeltTechnology/pi-agent-dashboard
8•ankitg12•3d ago•1 comment

Prefill-as-a-Service: KVCache of Next-Generation Models Could Go Cross-Datacenter

https://arxiv.org/abs/2604.15039
23•matt_d•3d ago•1 comment

Contact Lens Uses Microfluidics to Monitor and Treat Glaucoma

https://spectrum.ieee.org/smart-contact-lens-glaucoma-microfluidics
74•pseudolus•3d ago•2 comments

MuJoCo – Advanced Physics Simulation

https://github.com/google-deepmind/mujoco
68•modinfo•3d ago•13 comments

Garbage Collection Without Unsafe Code

https://fitzgen.com/2024/02/06/safe-gc.html
83•foota•3d ago•24 comments

Windows Server 2025 Runs Better on ARM

https://jasoneckert.github.io/myblog/server-2025-arm64/
159•jasoneckert•3d ago•121 comments

Drunk post: Things I've learned as a senior engineer (2021)

https://luminousmen.substack.com/p/drunk-post-things-ive-learned-as
210•zdw•14h ago•153 comments

The Vercel breach: OAuth attack exposes risk in platform environment variables

https://www.trendmicro.com/en_us/research/26/d/vercel-breach-oauth-supply-chain.html
341•queenelvis•21h ago•112 comments

CATL's new LFP battery can charge from 10 to 98% in less than 7 minutes

https://arstechnica.com/cars/2026/04/catls-new-lfp-battery-can-charge-from-10-to-98-in-less-than-...
73•PotatoNinja•3h ago•33 comments

Acetaminophen vs. ibuprofen

https://asteriskmag.com/issues/14/the-mystery-in-the-medicine-cabinet
544•nkurz•1d ago•342 comments

SpaceX says it has agreement to acquire Cursor for $60B

https://twitter.com/spacex/status/2046713419978453374
703•dmarcos•16h ago•877 comments

Meta to start capturing employee mouse movements, keystrokes for AI training

https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mou...
690•dlx•20h ago•453 comments

Britannica11.org – a structured edition of the 1911 Encyclopædia Britannica

https://britannica11.org/
323•ahaspel•21h ago•107 comments

Diverse organic molecules on Mars revealed by the first SAM TMAH experiment

https://www.courthousenews.com/preserved-for-billions-of-years-organic-compounds-found-on-mars/
91•geox•1d ago•7 comments

Our eighth generation TPUs: two chips for the agentic era

https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/eighth-generation-tpu-agentic-era/
111•xnx•2h ago

Comments

TheMrZZ•1h ago
> A single TPU 8t superpod now scales to 9,600 chips and two petabytes of shared high bandwidth memory, with double the interchip bandwidth of the previous generation. This architecture delivers 121 ExaFlops of compute and allows the most complex models to leverage a single, massive pool of memory.

This seems impressive. I don't know much about the space, so maybe it's not actually that great, but from my POV it looks like a competitive advantage for Google.
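To put that in rough per-chip terms, here's a quick back-of-the-envelope using only the figures quoted above (the post doesn't give actual per-chip specs, so this is just an average, and it assumes "two petabytes" is a round decimal number):

    # Rough average HBM per chip from the quoted superpod figures.
    # Assumes 2 PB is exact and decimal; the real per-chip spec may differ.
    chips = 9_600
    hbm_total_gb = 2 * 1_000_000  # 2 PB expressed in GB (decimal units)

    print(f"~{hbm_total_gb / chips:.0f} GB of shared HBM per chip")  # ~208 GB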

cyanydeez•34m ago
It is. It'll still not create AGI without some breakthrough in instruction vs. data separation of concerns.
NoiseBert69•1h ago
That cooling system looks crazy. What an unbelievable density.
Keyframe•1h ago
While others have been capturing the news cycle's attention, it seems to me Google has been quietly going from strength to strength in the background, capturing consumer market share without much (any?) infrastructure trouble, given how vertically integrated in AI they've been since day one. At one point they even seemed like a lost cause, but they're like a tide... just rising all around.
baq•1h ago
you've never tried to use gemini 3 I guess - that thing was so unreliable it might as well not be offered; there's also a reason why everybody here is excited for claude and codex, but not really for antigravity.

that said, I actually agree: google IMHO silently dominates the 'normie business' chatbot area. gemini is low key great for day to day stuff.

youniverse•1h ago
Yeah I think there will be a time in a few years (1-2?) when both Google and Apple will get to eat their cake. They aren't playing the same game of speed running unpolished product releases every month to double their valuation. They have time to think and observe and put out something really polished. At least that's the hope! :)
echelon•1h ago
That's because these mega monopolies have diverse income streams and have grown like cancers to tax every system and economy that touches the internet.

Anthropic and OpenAI are having to fight like hell to secure market share. Google just gets to sit back and relax with its browser and android monopolies.

Why did our regulators fall asleep at the wheel? Google owns 92% of "URL bar" surface area and turned it into a Google search trademark dragnet. Now Anthropic has to bid for its own products against its competitors and inject a 15+% CAC which is just a Google tax.

Now consider all the bullshit Google gets to do with android and owning that with an iron fist. Every piece of software has a 30% tax, has to jump through hoops, and even finding it is subject to the same bidding process.

These companies need to be broken up.

Google would be healthier for the economy and its own investors as six different companies. And they shouldn't be allowed to set the rules for mobile apps or tax other people's IP and trademarks.

harrall•39m ago
Google invented the AI architecture that Anthropic and OpenAI based their entire companies on, built on years of research at Google.

Of course they should have to fight with the inventors of the technology they’re using.

someguyiguess•36m ago
> Google invented the AI architecture that Anthropic and OpenAI based their entire companies on

Source?

ckcheng•30m ago
Unless you don’t think Attention Is All You Need?

https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

IncreasePosts•30m ago
"Attention Is All You Need" was a paper by a bunch of Google researchers
vibe42•1h ago
Their latest open models are pretty competitive with other open models, with some innovation around the smaller sizes (2-4 GB).

They're helping close the distance to realistic-quality inference on phones and other smaller devices.

WarmWash•58m ago
AI adoption isn't existential to Google like it is to OAI and Anthropic. They also can't produce hype like the other two, because anything they say is just going to come off as corporate drivel.
amazingamazing•1h ago
If AI ends up having a winner, I struggle to see how it doesn't end with Google winning, because they own the entire stack, or Apple, because they will have deployed the most potentially AI-capable edge sites.
aliljet•1h ago
The real problem is that scientists doing this sort of early work more often than not want to burn hardware under their desks. Renting infrastructure in Google cloud isn't the only way...
nickandbro•1h ago
I am curious what workloads Citadel Securities is running on these TPUs? Are you telling me they need the latest TPUs for market insights?
vibe42•1h ago
Training their own, closed, internal models on their own data sets? Probably a good way to squeeze out some market trading signals.
nickandbro•1h ago
Reminds me of when hedge funds started laying increasingly shorter fiber-optic cable lines to achieve the lowest possible latency for high-frequency trading.
written-beyond•1h ago
I thought these TPUs were primarily used for inference?
vlovich123•37m ago
TPU8t is for training. But even still, once you've trained, you need to run the model too. And these kinds of models already have a huge latency hit, so there's not much harm in running them away from the trading switches.
knowaveragejoe•24m ago
As the article states, there are dedicated chips for both training and inference.
pmb•1h ago
At this point, when you are doing big AI you basically have to buy it from NVidia or rent it from Google. And Google can design their chips and engine and systems in a whole-datacenter context, centralizing some aspects that are impossible for chip vendors to centralize, so I suspect that when things get really big, Google's systems will always be more cost-efficient.

(disclosure: I am long GOOG, for this and a few other reasons)

sigmoid10•1h ago
I'd bet that too if their management wasn't so incredibly uninspiring. Like, Apple under Cook was also pretty mild and a huge step down from Jobs, but Google feels like it fell off a cliff. If it wasn't for OpenAI releasing ChatGPT, they might still be sitting on that tech while only testing it internally. Now it drives their entire chip R&D.
WarmWash•1h ago
To be fair, I don't think any of the AI players wanted what OAI did. Sam grabbed first-mover advantage at the cost of this insane race everyone else got forced into.
hkpack•52m ago
I am not a fan of the era when the CEO is expected to be a cult-leader type of person.

Cook did very well in all areas as well as in not trying to create a cult.

whattheheckheck•46m ago
What would an inspiring leader do differently for you?
someguyiguess•37m ago
Inspire
akersten•43m ago
I'd go long Google too if using Gemini CLI felt anything close to the experience I get with Codex or Claude. They might have great hardware but it's worthless if their flagship coding agent gets stuck in loops trying to find the end of turn token.
fourside•39m ago
Of the big three, Gemini gives me the worst responses for the type of tasks I give it. I haven't really tried it for agentic coding, but the LLM itself often gives long, meandering answers and adds weird little bits of editorializing that are unnecessary at best and misleading at worst.
surajrmal•33m ago
Gemini CLI isn't a great product, unfortunately. While it's tied to a GUI, antigravity is a far superior agent harness. I suggest comparing that to Claude Code instead.
VectorLock•5m ago
I use Claude Code all day and use Gemini CLI for personal projects and I don't see the huge gap that other people seem to talk about a lot. Truthfully there are parts of Gemini CLI I like better than Claude Code.
paulmist•1h ago
At $15/GB of HBM4, the 331.8 TB of HBM4 per pod is about $5 million...
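Spelling out that back-of-the-envelope (a sketch only; whether $15/GB is even the right price for HBM, or HBM4 the right memory type, is questioned below):

    # The arithmetic above spelled out: 331.8 TB of HBM at an assumed $15/GB.
    hbm_tb = 331.8
    price_per_gb = 15  # USD; a retail-DRAM-style price, not a confirmed HBM price

    total_cost = hbm_tb * 1_000 * price_per_gb  # TB -> GB, then price per GB
    print(f"~${total_cost / 1e6:.1f} million")  # ~$5.0 million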
nsteel•1h ago
It's HBM3e
zozbot234•53m ago
$15/GB is retail price for DIMM sticks. Is HBM4 really that cheap?
selectodude•40m ago
HBM is just DRAM stacked directly next to the die. The expensive part is gluing it on there. The chips themselves are pretty much the same.
vibe42•1h ago
The pics of the cooling system are pretty good sci-fi / cyberpunk / steampunk inspo.

If the whole AI bubble spectacularly collapses, at least we got a lot of cool pics of custom hardware!

NitpickLawyer•1h ago
> If the whole AI bubble spectacularly collapses

Every other news item for the past month has been about lacking capacity. Everyone is having scaling issues, with more demand than they can cover. Anthropic has been struggling for a few months, especially visibly when the EU tz is still up and the US east coast comes online. Everything grinds to a halt. MS has been pausing new subscriptions for GH Copilot, also because of a lack of capacity. And yet people are still on bubble this, collapse that? I don't get it. Is it becoming a meme? Are people seriously seeing something I don't? For the past 3 years models have kept on improving, capabilities have gone from toy to actually working, and there's no sign of stopping. It's weird.

vibe42•59m ago
Both are possible: increasing demand and a bubble collapse.

The way this could happen is if model commoditization increases - e.g. some AI labs keep publishing large open models that increasingly close the gap to the closed frontier models.

Also, if consumer hardware keeps getting better and models get so good that most people can get most of their usage satisfied by smaller models running on their laptops, they won't pay a ton for large frontier models.

hgoel•50m ago
There's a massive amount of demand at the current price point, but this does not exclude a bubble, considering that the current cost to consumers is lower than what capacity expansion costs.

Though nowadays it feels like the bubble is going to end up being mainly an OpenAI issue. The others are at least vaguely trying to balance expansion with revenue, without counting on inventing a computer god.

nsteel•1h ago
This link has more on the architecture: https://cloud.google.com/blog/products/compute/tpu-8t-and-tp...
fulafel•1h ago
"TPU 8t and TPU 8i deliver up to two times better performance-per-watt over the previous generation" sounds impressive especially as the previous generation is so recent (2025).

Interesting that there's separate inference and training focused hardware. Do companies using NV hardware also use different hardware for each task or is their compute more fungible?

dataking•1h ago
Vera Rubin will have Groq chips focused on fast inference so it points toward a trend. Also, with energy needs so high, why not reach for every feasible optimization?
xnx•1h ago
Nvidia said in March that they're working on specialized inference hardware, but they don't have any right now. You can do inference from Nvidia's current hardware offerings, but it's not as efficient.
FuriouslyAdrift•58m ago
AMD has been doing inference chips for many years and is the leader for HPC.

https://www.amd.com/en/products/accelerators/instinct.html

zozbot234•55m ago
The "training" chips will probably be quite usable for slower, higher-throughput inference at scale. I expect that to be quite popular eventually for non-time-sensitive uses.
electroly•3m ago
I can't answer for NVIDIA but AWS has its own training and inference chips, and word on the street is the inference chips are too weak, so some companies are running inference on the training chips.
cmptrnerd6•1h ago
Which company is building the silicon for Google? Is it tsmc? What node size? I didn't see it with a quick search, sorry if it was in the post.
wina•53m ago
tsmc through broadcom
varispeed•56m ago
I can't help but think we will be "laughing" at this in 10 years' time, like we laugh at steam engines or the abacus.
iandanforth•52m ago
Anyone know if these are already powering all of Gemini services, some of them, or none yet? It's hard to tell if this will result in improvements in speed, lower costs, etc, or if those will be invisible, or have already happened.
kamranjon•51m ago
It's interesting that, of the large inference providers, Google has one of the most inconvenient policies around model deprecation. They deprecate models exactly 1 year after releasing them and force you to move onto their next generation of models. I had assumed, because they are using their own silicon, that they would actually be able to offer better stability, but the opposite seems to be true. Their rate limiting is also much stricter than OpenAI's, for example. I wonder how much of this is related to these TPUs, vs. just strange policy decisions.
gordonhart•47m ago
It's frustrating how cavalier they are about killing old Gemini releases. My read is that once a new model is serving >90% of volume, which happens pretty quickly as most tools will just run the latest+greatest model, the standard Google cost/benefit analysis is applied and the old thing is unceremoniously switched off. It's actually surprising that they recently extended the EOL date for Gemini 2.5. Google has never been a particularly customer-obsessed company...
surajrmal•27m ago
What benefit is there to sticking on older models? If the API is the same, what are the switching costs?
jbellis•30m ago
Flash 2 isn't even at EOL until June but we started seeing ~90% error rates getting 429s over the weekend. (So we switched to GPT 5.4 nano.)
WarmWash•43m ago
What's interesting to note, as someone who uses Gemini, ChatGPT, and Claude, is that Gemini consistently uses drastically fewer tokens than the other two. It seems like Gemini is where it is because it has a much smaller thinking budget.

It's hard to reconcile this because Google likely has the most compute and at the lowest cost, so why aren't they gassing the hell out of inference compute like the other two? Maybe all the other services they provide are too heavy? Maybe they are trying to be more training heavy? I don't know, but it's interesting to see.

someguyiguess•39m ago
They have to have SOME competitive advantage. What reason is there to use Gemini over Claude or ChatGPT? It's not producing nearly the quality of output.
magicalhippo•32m ago
Well comparing Gemini 3.1 Pro vs ChatGPT 5.4 Pro, it's much faster at replying. Of course, if it actually thinks less then that helps a lot towards that. For most of my personal and work use-cases, I prefer waiting a bit longer for a better answer.
WarmWash•14m ago
I recently did my taxes using all three models (My return is ~50 pages, much more than a standard 1040).

GPT (codex) was accurate on the first run and took 12 minutes

Gemini (antigravity) missed 1 value because it didn't load the full 1099 pdf (the laziness), but corrected it when prompted. However it only spent 2 minutes on the task.

Claude (CC) made all manner of mistakes after I waited overnight for it to finish, because it hit my limit before doing so. However, Claude did the best on the next step of actually filling out the pdf forms, but it ended up not mattering.

Ultimately I used gemini in chrome to fill out the forms (freefillableforms.com), but frankly it would have been faster to manually do it copying from the spreadsheets GPT and Gemini output.

I also use anti-gravity a lot for small greenfield projects (<5k LOC). I don't notice a difference between Gemini and Claude, outside usage limits. Besides that, I mostly use Gemini for its math and engineering capabilities.

RationPhantoms•38m ago
They just released their enterprise agentic platform today, so my expectation is that might be the gravity well for the Fortune 500s to park their inference on.
magicalhippo•33m ago
I've been trying Gemini Pro using their $20-ish Google One subscription for a couple of months, and I also find it consistently does fewer web searches to verify information than, say, ChatGPT 5.4 Pro, which I have through work.

I was planning on comparing them on coding but I didn't get the Gemini VSCode add-in to work so yeah, no dice.

The Android and web apps are also riddled with bugs, including ones that make you lose your chat history from the threads if you switch between them. Not cool.

I'll be cancelling my Google One subscription this month.

WarmWash•5m ago
I don't sweat sources and almost never check them. I usually prefer to manually check information after it's provided, to prevent the model from borking its context trying to find sources that justify its already-computed output. Almost all the knowledge is already baked into the latent space of the model, so citing sources is generally a backwards process.

I see it like going to the doctor and asking them to cite sources for everything they tell me. It would be ridiculous and totally make a mess of the visit. I much prefer just taking what the doctor said on the whole, and then verifying it myself afterwards.

Obviously there is a lot of nuance here, areas with sparse information and certainly things that exist post knowledge cut-off. But if I am researching cell structure, I'm not going to muck up my context making it dig for sources for things that are certainly already optimal in the latent space.

zshn25•36m ago
It would be interesting to benchmark a short training / inference run on the latest TPU vs. an NVIDIA GPU on a per-cost basis.
jmyeet•35m ago
In recent discussions about Tim Apple [sic] moving on, there was a debate about whether Apple flopped on AI, which is my opinion. Of course, you had the false dichotomy of doing nothing or burning money faster than the US military like OpenAI does.

IMHO that happy medium is Google. Not having to pay the NVidia tax will likely be a huge competitive advantage. And nobody builds data centers as cost-effectively as Google. It's kind of crazy to be talking ExaFLOPS and Tb/s here. From some quick Googling:

- The first MegaFLOPS CPU was in 1964

- A Cray supercomputer hit GigaFLOPS in 1988 with workstations hitting it in the 1990s. Consumer CPUs I think hit this around 1999 with the Pentium 3 at 1GHz+;

- It was the 2010s before we saw off-the-shelf TFLOPS;

- It was only last year that a single chip hit PetaFLOPS. I see the IBM Roadrunner hit this in 2008, but that was ~13,000 CPUs so...

Obviously this is nearly 10,000 TPUs to get to ~121 EFLOPS (FP4, admittedly), but that's still an astounding number. It means each one is doing ~12 PFLOPS (FP4).
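Spelling that division out (keeping in mind the 121 EFLOPS figure is FP4, so it's not apples-to-apples with the older FP64 milestones above):

    # Per-chip average from the superpod numbers quoted earlier in the thread.
    superpod_eflops = 121  # FP4
    chips = 9_600

    pflops_per_chip = superpod_eflops * 1_000 / chips  # EFLOPS -> PFLOPS
    print(f"~{pflops_per_chip:.1f} PFLOPS per chip (FP4)")  # ~12.6 PFLOPS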

I saw a claim that Claude Mythos cost ~$10B to train. I personally believe Google can (or soon will be able to) do this for an order of magnitude less at least.

I would love to know the true cost/token of Claude, ChatGPT and Gemini. I think you'll find Google has a massive cost advantage here.

someguyiguess•31m ago
Apple has not flopped on AI as you say. They are just focused on privacy and are likely waiting for the time when local models become efficient enough to run on iPhones (which is quickly becoming a reality).

Google could probably train models for orders of magnitude less money as you say, but they aren't. They are not capable of creating high quality models like OpenAI and Anthropic are. Their company is just too disorganized and chaotic.

Anecdotally, I don't know a single person who uses Gemini on purpose.

jmyeet•12m ago
The "waiting for local LLMs" came up re: Apple and IMHO that's too passive for company where if someone else has a better AI assistant, it's going to be a huge problem.

What if somebody cracks the problem of splitting inference between local and remote? What if someone else manages to modularize learning so your local LLM doesn't need to have been trained on how to compute integrals? Obviously we can't dissect a current LLM and say "we can remove these weights because they do math," but there's no guarantee there isn't an architecture that will allow for that.

Apple could also be training an LLM Siri 2.0 that knows enough to do the things you want. Setting alarms, sending messages, etc. Apple would have all the information on what the major use cases are and where Siri is currently failing. They can increase Siri's capabilities as local LLM inference improves.

As for Google creating high-quality models, I personally believe the models are going to be commoditized. I don't believe a single company is going to have a model "moat" to sustain itself as a trillion-dollar company. I base this on two reasons:

1. At the end of the day, it's just software and software is infinitely reproducible and distributable. I mean we already saw one significant Anthropic leak this year; and

2. China is going to make sure we're not all dependent on one US tech company who "owns" AI. DeepSeek was just the first shot across the bow for that. It's going to be too important to China's national security for that not to happen.

And OpenAI's entire funding is predicated on that happening and OpenAI "winning".

knowaveragejoe•21m ago
> I saw a claim that Claude Mythos cost ~$10B to train.

Can you cite this? That seems absurd.

jmyeet•5m ago
I've seen various claims to this effect (e.g. [1][2][3]), but nobody really knows. These may all come from one unsubstantiated claim. It is, I think, widely accepted that Mythos is ~10T parameters.

I've seen figures that suggest GPT-4 was 1.8T parameters and cost upwards of $100 million to train (also unsubstantiated), in which case the Mythos figure might be inflated and also include development costs.

So who really knows?

[1]: https://www.softwarereviews.com/research/claude-mythos-previ...

[2]: https://x.com/duttasomrattwt/status/2041903600516133016

[3]: https://www.forrester.com/blogs/project-glasswing-the-10-con...

himata4113•30m ago
I already felt that Gemini 3 proved what is possible if you train a model for efficiency. If I had to guess, the Pro and Flash variants are 5x to 10x smaller than Opus and GPT-5 class models.

They produce drastically fewer tokens to solve a problem, but they don't seem to have put enough effort into refining their reasoning and execution, as they produce broken toolcalls and generally struggle with 'agentic' tasks. For raw problem solving without tools or search, though, they match Opus and GPT while presumably being a fraction of the size.

I feel like google will surprise everyone with a model that will be an entire generation beyond SOTA at some point in time once they go from prototyping to making a model that's not a preview model anymore. All models up till now feel like they're just prototypes that were pushed to GA just so they have something to show to investors and to integrate into their suite as a proof of concept.

onlyrealcuzzo•19m ago
> They produce drastically fewer tokens to solve a problem, but they don't seem to have put enough effort into refining their reasoning and execution, as they produce broken toolcalls and generally struggle with 'agentic' tasks. For raw problem solving without tools or search, though, they match Opus and GPT while presumably being a fraction of the size.

Agreed, Gemini-cli is terrible compared to CC and even Codex.

But Google is clearly prioritizing to have the best AI to augment and/or replace traditional search. That's their bread and butter. They'll be in a far better place to monetize that than anyone else. They've got a 1B+ user lead on anyone - and even adding in all LLMs together, they still probably have more query volume than everyone else put together.

I hope they start prioritizing Gemini-cli, as I think they'd force a lot more competition into the space.

ALLTaken•18m ago
My friend at Google calmly shared having had access to GPT-type AI 5 years before, but internally only. They deemed it too powerful to release to... I'll add: "to plebs like us."

This experience makes me believe they have highly advanced AI internally and see no reason, and have no will, to share it. OpenAI and Claude FORCED them to release what they can, just to stay relevant.

The TPUs are damn awesome, and I would love to fab a small version for myself. But they're fully closed source, I'm afraid. Also, Google is known to hate the customer, more or less.

SecretDreams•29m ago
They are missing a header to show the transition in discussion from TPU8t to 8i!

Thanks for posting otherwise.

Edit: actually, looks like the header got captured as a figure caption by accident.

nicman23•19m ago
Yeah, but can you release the SDK for the Pixel 10? It was one of the only reasons I bought this mid phone.