> Initial Disclosure: After extensive research, we believe the evidence justifies a short position in shares of Backblaze (NASDAQ: BLZE). Morpheus Research holds short positions in BLZE, and Morpheus Research may profit from short positions held by others. This report represents our opinion, and we encourage all readers to do their own due diligence. Please see our full disclaimer at the bottom of the report.
Oh, and the finance employees refused to sign off on the books and a few are suing the company.
It sucks, but Backblaze is cooked.
But this is a hit piece. For any hit piece, I think it’s important for readers to know the motivations of the author, regardless of the validity of the claims.
Storage costs money. People love to dream up creative business models where your personal storage is subsidized by some other part of the business, but I think it’s just a matter of time before the business model changes. At the end of the day, there’s a massive gravitational pull that brings everything in line with market rates. These days, I think we have a pretty good idea of what market rates are for cloud storage. Anything less than market rates should be viewed with suspicion.
From another angle, in any long-term relationship with a vendor, you want the vendor to make money. We know that major cloud vendors do, or at least close enough, because they charge the same for small customers and big ones (or close enough). None of them offer unlimited data… or at least, not any more (most of them used to, but those parts of the business always get shut down).
How do you test your backups? You do test your backups, right? Your other comment mentions using Dropbox, but that's hardly a real backup solution, and you could easily run into a situation where you need to retrieve files beyond the 30 day window.
And it’s such a good thing to remind people of. You don’t manage a backup service. Nobody cares about backups.
People care about restores.
I would personally be fine running tests on data that is recently backed up and only testing the data in glacier once or twice. Think about why you test backups in the first place—the main errors you’re trying to catch are problems like misconfiguration, backups not getting scheduled or not running, or not having access to your encryption keys. You can put your most recent backup in “infrequent access” and let older objects age out to glacier with lifecycle rules.
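That aging-out can be a single lifecycle rule. A minimal sketch of the shape such a configuration takes (the bucket name, prefix, and day thresholds here are my own illustrative values, not anything Backblaze- or AWS-specific to the original comment):

```python
# Illustrative S3 lifecycle configuration: keep recent backups in
# Infrequent Access, then let older objects age out to Deep Archive.
# Prefix and day counts are assumptions for the example.
lifecycle = {
    "Rules": [
        {
            "ID": "age-out-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                # Recent backups: cheaper storage, still quick to restore.
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # Old backups: cheapest storage, slow/costly to retrieve.
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

# This dict is the shape boto3 expects, e.g.:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-backups", LifecycleConfiguration=lifecycle)
print(len(lifecycle["Rules"]))
```

The point is that the "test the fresh stuff often, the deep archive rarely" policy falls out of the storage tiering itself.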
Glacier used to have really expensive retrieval costs. That tier is now called "Glacier Deep Archive", and as far as I know, the major use cases are things like corporate recordkeeping / compliance (e.g. Sarbanes-Oxley). The customers for Deep Archive should be sophisticated customers.
For most regular people, I suggest Google Drive or OneDrive because you get the value add of their ecosystem. With Google, Photos are good. With Microsoft, the Office+OneDrive subscription is a great value.
Are you talking about their B2 product or their backup SaaS? The former has a "fee that scales with usage", and the latter probably has enough normal users (i.e. not data hoarders backing up 50 TB on the $99/year plan) that they're not losing money overall.
$99/year with no limits on data, as far as I can tell. The context here is someone talking about personal backups, not Backblaze’s broader offerings. We know the market rates for cloud storage, more or less, since they’ve converged somewhat across the industry. This makes it easy to calculate what kind of usage patterns would lead someone to save money at Backblaze.
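As a rough sketch of that calculation (the ~$5 per TB-month "market rate" is my own assumed ballpark, not a quoted price from any vendor):

```python
# Back-of-envelope break-even for a flat-fee unlimited backup plan vs.
# paying a commodity per-TB object-storage rate. All numbers illustrative.
flat_fee_per_year = 99.0            # the $99/year unlimited plan
market_rate_per_tb_month = 5.0      # assumed commodity rate, $/TB-month

monthly_fee = flat_fee_per_year / 12
break_even_tb = monthly_fee / market_rate_per_tb_month

print(round(break_even_tb, 2))  # 1.65 -- store more than ~1.65 TB and the
                                # flat fee beats the per-TB market rate
```

Anyone storing meaningfully more than that break-even amount is being subsidized by someone, which is the suspicious part.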
Cloud services often have a free tier or cheap tier to attract new customers. IMO, this is not it. This is a product. These people are not turning around and signing contracts with Backblaze at work. At least, not many.
So it could be that people are just using it below its limits, or it could be that it’s subsidized by other parts of the business. Overall P&L is irrelevant—you want to be able to explain this product as a profitable product, as some viable sales channel, or as marketing.
My recommendation is to buy storage products where the company is making money off you, or nearly.
When O365 launched, they were using spinning disk for Exchange. The issue was that they stranded capacity because of Exchange's IOPS needs. So "free" (low-IOPS) SharePoint and OneDrive for Business data utilized that "free" capacity.
It’s not the same service as Backblaze’s client, but it does everything I need, with dedupe.
The company is being plundered and run for the benefit of the exec team printing shares for themselves. Nobody should be buying shares expecting the price to rise.
I know I could use some open source stuff to get similar functionality, but I feel almost as nervous rolling my own backup as I would rolling my own crypto. Does Wasabi have some client solution that I'm missing?
That's addressed in the article:
>Since 2021, Backblaze’s B2 Cloud Storage segment revenue growth has outpaced its legacy Computer Backup segment. In Q4 2024, the company announced that its B2 Cloud Storage segment generated $17.1 million in revenue, surpassing its Computer Backup revenue of $16.7 million.
Even if Wasabi isn't a straight replacement for Backblaze's entire business, it's a replacement for its biggest and fastest growing segment.
I know nobody who uses them, so it all balances out.
The only downside I found is that they limit free egress to 100% of your storage (so if you store 1 TB, you get 1 TB of egress). In practice I don't think this will be an issue for my use case.
So I'm an example of a person who this morning had almost never heard of Wasabi, knows Backblaze well, and after an hour or two of research has completely switched over.
Is there a Restic frontend for Windows that you'd be comfortable setting up a family member with?
Doesn’t that seem like insider trading? I guess he can claim he didn’t know so and so was going to quit.
But… are securities laws even enforced in this era, when policy decisions are timed around pump-and-dump moves?
Honestly, I participated in the IPO, then sold when it was up a bit - I had faith in their fundamentals as a business, but they did not seem mature as a public company.
> Instead, Backblaze’s new CFO, Marc Suidan, joined from Beachbody (NYSE: BODI), a multi-level marketing company
eek
And then promoting them also gives strong MLM vibes.
So I would bet more money on them being an MLM after reading your comment than before.
"Uh, what?? Air isn't an MLM, it's just the thing we breathe. We all need it to live."
"Huh, I guess air is an MLM after all. Wild."
TFA even specifies this:
> Instead they hired Marc Suidan who joined from Beachbody (NYSE: BODI) - a publicly traded health and fitness company where he was CFO between May 2022 and August 2024 when the company was operating as a “multi-level marketing” company.
[0] https://www.cnbc.com/2011/01/31/beachbody-grows-exponentiall...
[1] https://nypost.com/2024/10/01/business/beachbody-lays-off-th...
If you think he is a good CFO, I think you drank the Kool-Aid.
https://marencrowley.com/podcast/the-demise-of-the-beachbody...
(At least they disclose it at the bottom, but it should be in the title.)
- Wasabi
- idrive
Who else?
Cloudflare, Scaleway, Hetzner, OVH, Oracle, MinIO, and dozens of others all have S3-compatible object stores.
Most of them, except AWS, have free inter-cloud transfer (or discounted, in the case of Azure and GCP) as part of the Bandwidth Alliance (BWA).
AWS S3 pricing is designed to keep you in the ecosystem, not to be the cheapest, so it tends to be the costliest option unless your workload is archival.
For almost all non-archival workloads, egress costs will be on par with or much higher than storage costs, and AWS[1] charges a lot for any bandwidth.
AWS also makes migration really expensive, so if you started with them, you are likely stuck: you would have to pay up to 10-12 months of cost to migrate out if you are on anything but the standard tier (the cheaper the tier, the higher the retrieval costs).
[1] All tier-one providers have 10-20x the bandwidth costs of tier-two providers; however, Azure and GCP discount inter-cloud transfers, while AWS does not.
PaywallBuster•6h ago
This is an activist short-seller report/investigation. Some of the points made:
- never been profitable
- execs selling aggressively
- losing customers to Wasabi
- accused of cooking the books
- being sued by former execs for whistleblower retaliation / wrongful termination
- execs leaving
Twirrim•3h ago
If they're going to compress/decompress, ideally you'd want compression to happen at the point of ingestion, then encryption, and then store the result on the target drive.
That way you can put very strong controls (audit, architecture, software etc) around your encryption and decryption, and be at reduced risk from someone getting access to storage servers.
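The ordering also matters for another reason: ciphertext looks random, and random-looking bytes won't compress. A toy demonstration (the XOR "cipher" here is a stand-in for a real cipher like AES-GCM, purely to show the ordering effect; it is NOT actual encryption):

```python
import random
import zlib

def toy_encrypt(data: bytes, seed: int = 42) -> bytes:
    # PRNG-keystream XOR: stands in for a real cipher just to produce
    # random-looking output of the same length. NOT secure encryption.
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

payload = b"backup record " * 4096  # highly redundant sample data

compress_first = toy_encrypt(zlib.compress(payload))
encrypt_first = zlib.compress(toy_encrypt(payload))

print(len(compress_first))  # small: compression saw the redundancy
print(len(encrypt_first))   # ~len(payload): ciphertext doesn't shrink
```

So compress-at-ingestion-then-encrypt gets you both the space savings and the security boundary in the right place.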
ksec•4h ago
Can't seem to find anything specific about Wasabi using ZFS via Google. And Wasabi doesn't charge you for egress. So I guess they are similar in terms of pricing.
Although B2 seems to be way more popular on HN. I rarely see Wasabi here.
rustc•4h ago
Wasabi only allows as much egress as the amount of data you're storing and I don't think you can even pay for more: https://wasabi.com/pricing/faq#free-egress-policy.
alabastervlog•3h ago
(Unless something’s changed since the last time I checked)
Sytten•3h ago
EDIT: OP is correct for CDN but if you use R2 even as a transparent copy from another S3 like provider it is allowed [1]
[1] https://blog.cloudflare.com/updated-tos/
cs02rm0•4h ago
https://job-boards.greenhouse.io/wasabi/jobs/4615087008
ryao•4h ago
"The developers at Klara Inc. have been outstanding in helping Wasabi resolve ZFS issues and improve performance. Their understanding of ZFS internals is unmatched anywhere in the industry" - Jeff Flowers, CTO, Wasabi Technologies
https://klarasystems.com/
You could also search the OpenZFS repository for commits with the word Wasabi in them.
Twirrim•4h ago
The very quick high level explanation is that in storage you talk about "stretch factor". For every byte of file, how many bytes do you have to store to get the desired durability. If your approach to durability is you make 3 copies, that's a 3x stretch factor. Assuming you're smart, you'll have these spread across different servers, or at least different hard disks, so you'd be able to tolerate the loss of 2 servers.
With erasure encoding you apply a mathematical transformation to the incoming object and shard it up. Out of those shards you need to retrieve a certain number to be able to reproduce the original object. The number of shards you produce and how many you need to recreate the original are configurable. Let's say it shards to 12, and you need 9 to recreate. The amount of storage that takes up is the ratio 12:9, so that's about 1.33x. For every byte that comes in, you need to store just 1.33 bytes.
As before, you'd scatter the 12 shards across different servers (or at least different hard disks), and only needing any 9 means you can tolerate losing 3 of them and still be able to retrieve the original object. That's better durability than 3x replication despite taking up about 2.25x less storage.
The drawback is that to retrieve the object, you have to fetch shards from 9 different locations and apply the transformation to recreate the original object, which adds a small bit of latency, but it's largely negligible these days. The cost of extra servers for your retrieval layer is significantly less than a storage server, and you wouldn't need anywhere near the same number as you'd otherwise need for storage.
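The arithmetic above can be sketched directly (using the hypothetical 9-of-12 scheme from the explanation):

```python
def replication_stretch(copies: int) -> float:
    # N full copies: N bytes stored per byte of data.
    return float(copies)

def erasure_stretch(data_shards: int, total_shards: int) -> float:
    # Store total_shards shards; any data_shards of them can rebuild
    # the original object.
    return total_shards / data_shards

rep = replication_stretch(3)   # 3.0x, tolerates losing 2 copies
ec = erasure_stretch(9, 12)    # ~1.33x, tolerates losing 3 shards

print(round(ec, 2))        # 1.33
print(round(rep / ec, 2))  # 2.25 -- replication costs ~2.25x more storage
```

Tuning `data_shards` and `total_shards` trades retrieval fan-out against stretch factor and failure tolerance.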
The underlying file system doesn't really have any appreciable impact under those circumstances. I'd argue ZFS is probably even worse, because you're spending more resources on overhead. You want something as fast and lightweight as possible. Your fixity checks will catch any degradation in shards, and recreating shards in the case of failure is pretty cheap.
srean•29m ago
Very interesting. Could you name a few? I am curious. I would be happy to learn that erasure codes are actually being used commercially.
What I find interesting is the interaction of compression and durability -- if you lose a few compressed bytes to reconstruction error, you lose a little more than a few. Seems right up rate-distortion alley.