I don't have fun examples with message queues, but I do remember some with filesystems - a popular target to connect cursed backends to. You can store data in Ping packets [0]. You can store data in the digits of Pi - achieving unbelievable compression [1]. You can store data in the metadata and other unused blocks of images - also known as steganography [2]. People wrote software to use Gmail emails as a file system [3].
That's just off the top of my head, and it really shows that the sky's the limit with software.
[0] https://github.com/yarrick/pingfs
[1] https://github.com/ajeetdsouza/pifs
[2] https://en.wikipedia.org/wiki/Steganographic_file_system
On a related note, have you seen the prices at Whole Foods lately? $6 for a packet of dehydrated miso soup, which usually costs $2.50 prepared and served at a sushi restaurant. AWS network egress fees are similarly blasphemous.
Shame on Amazon, lol. Though it's really capitalism's fault, if you think it through all the way.
Even with the massive margins, cloud computing is far cheaper for most SMEs than hiring an FTE sysadmin and racking machines in a colo.
The problem is that people forget to switch back to the old way when it’s time.
And the only viable answer is the ol’ capitalist saw: they charge what buyers are willing to pay.
That never quite satisfies people though.
That very much depends on your use case and billing period. Most of my public web applications run in containers on less than $2k of hardware in an Atlanta colo, cached by Cloudflare. This replaced an AWS/DigitalOcean combination that used to bill about $400/mo.
Definitely worth it for me, but some workloads aren't worth migrating, and for those I stick with cloud services.
I would estimate that a significant share of the services hosted on AWS are paid for by small businesses with lower reliability and uptime requirements than mine.
Prime Video’s original blog post is gone, but here is a third-party writeup: https://medium.com/@hellomeenu1/why-amazon-prime-video-rever...
It couldn’t possibly be because AWS execs were pissed or anything… /s
> The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream, so we quickly reached account limits. Besides that, AWS Step Functions charges users per state transition.
> The second cost problem we discovered was about the way we were passing video frames (images) around different components. To reduce computationally expensive video conversion jobs, we built a microservice that splits videos into frames and temporarily uploads images to an Amazon Simple Storage Service (Amazon S3) bucket. Defect detectors (where each of them also runs as a separate microservice) then download images and processed it concurrently using AWS Lambda. However, the high number of Tier-1 calls to the S3 bucket was expensive.
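To see why that adds up, here's a rough back-of-envelope sketch in Python. The per-second transition and frame counts are invented, and the rates are assumed list prices (roughly $0.025 per 1,000 Step Functions state transitions, $0.005 per 1,000 S3 PUTs, $0.0004 per 1,000 GETs), not figures from the writeup:

    # Rough cost sketch, not from the writeup: the workload numbers below are
    # invented and the rates are assumed list prices that may have changed.
    SFN_PER_TRANSITION = 0.025 / 1000   # USD per Step Functions state transition
    S3_PUT_PER_REQ = 0.005 / 1000       # USD per Tier-1 (PUT) request
    S3_GET_PER_REQ = 0.0004 / 1000      # USD per GET request

    def cost_per_stream_hour(transitions_per_sec=5, frames_per_sec=1, detectors=3):
        seconds = 3600
        orchestration = seconds * transitions_per_sec * SFN_PER_TRANSITION
        # one PUT per extracted frame, one GET per frame per defect detector
        frame_shuffling = seconds * frames_per_sec * (
            S3_PUT_PER_REQ + detectors * S3_GET_PER_REQ)
        return orchestration, frame_shuffling

    sfn, s3 = cost_per_stream_hour()
    print(f"Step Functions: ${sfn:.2f}/stream-hour, S3 requests: ${s3:.2f}/stream-hour")

Fractions of a dollar per stream-hour looks harmless until you multiply it by every concurrent stream being monitored, which is presumably why they hit both the cost wall and the account limits.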
They were really deeply drinking the AWS serverless kool-aid if they thought the right way to stream video was multiple microservices accessing individual frames on S3...
"Something amusing about this is that it is something that technically steps into the realm of things that my employer does. This creates a unique kind of conflict where I can't easily retain the intellectial property (IP) for this without getting it approved from my employer. It is a bit of the worst of both worlds where I'm doing it on my own time with my own equipment to create something that will be ultimately owned by my employer. This was a bit of a sour grape at first and I almost didn't implement this until the whole Air Canada debacle happened and I was very bored."
It's tempting to roll your own by polling a database table, but that approach breaks down, sometimes even at fairly low traffic levels. Once you move beyond a simple cron job, you're suddenly fighting row locking and race conditions just to prevent significant duplicate processing, effectively reinventing the wheel, poorly, and potentially 5 or 10 times in the same service.
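For what it's worth, if you do go the database route, the least painful version I know of leans on the database's own row locking. A minimal sketch, assuming Postgres, psycopg2, and a hypothetical jobs table with id/status/payload/claimed_at columns:

    import psycopg2

    # Atomically claim one pending job. FOR UPDATE SKIP LOCKED makes concurrent
    # workers skip rows another transaction has already locked, which is what
    # prevents double-processing.
    CLAIM_SQL = """
        UPDATE jobs
           SET status = 'processing', claimed_at = now()
         WHERE id = (SELECT id FROM jobs
                      WHERE status = 'pending'
                      ORDER BY id
                      LIMIT 1
                        FOR UPDATE SKIP LOCKED)
     RETURNING id, payload;
    """

    def claim_one(conn):
        with conn, conn.cursor() as cur:   # commits (or rolls back) on exit
            cur.execute(CLAIM_SQL)
            return cur.fetchone()          # None when nothing is pending

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    job = claim_one(conn)

Even then you still have to build the retry and timeout story yourself (what happens to rows stuck in 'processing' when a worker dies), which is exactly the part SQS gives you for free.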
A service like SQS solves this with its state management. A message becomes 'invisible' while being processed. If it's not deleted within the configurable visibility timeout, it transitions back to available. That 'fetch next and mark invisible' state transition is the key, and it's precisely what's so difficult to implement correctly and performantly in a database every single time you need it.
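For comparison, the consumer side of that loop against SQS is basically this (boto3 assumed; the queue URL and handle_order function are placeholders):

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

    def handle_order(body: str) -> None:
        print("processing", body)           # stand-in for real work

    def poll_once():
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,              # long polling
            VisibilityTimeout=60,            # seconds the message stays invisible
        )
        for msg in resp.get("Messages", []):
            handle_order(msg["Body"])
            # Only this delete removes the message; if we crash first, it
            # becomes visible again after the timeout and gets retried.
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])

    poll_once()

The 'fetch next and mark invisible' transition happens server-side inside receive_message, so none of the row-locking gymnastics above leak into application code.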