frontpage.

Why Not Objective-C

https://inessential.com/2026/02/18/why-not-objective-c.html
1•surprisetalk•40s ago•0 comments

Chemistry in the AI Era

https://www.nature.com/articles/d41586-026-01521-9
1•Brajeshwar•54s ago•0 comments

There is a problem with users abusing flagging on HN (2025)

https://twitter.com/paulg/status/1907528478855201096
1•washingupliquid•2m ago•0 comments

Want to AI proof your degree? Study History

https://froginawell.net/frog/2026/05/want-to-ai-proof-your-degree-study-history/
1•speckx•2m ago•0 comments

Roadside Picnic and the AI Race

https://readgrounded.com/episodes/001-golden-sphere/
1•readgrounded•2m ago•0 comments

'systematic' rape and sexual violence during Hamas' Oct 7 attack on Israel

https://www.cnn.com/2026/05/12/middleeast/report-sexual-violence-hamas-oct-7-attacks-intl
1•Tomte•2m ago•0 comments

Operation: Epic Furious

https://www.epicfurious.com/
1•dmschulman•3m ago•0 comments

Ask HN: Any materials on building distributed rate limiter?

2•ravshan•4m ago•0 comments

"Cannot be explained" – New ultra stainless steel stuns researchers

https://www.sciencedaily.com/releases/2026/05/260510030950.htm
1•bilsbie•5m ago•0 comments

South Korea's housing crisis explained (2025)

https://lgiu.org/south-koreas-housing-crisis-explained/
1•thelastgallon•5m ago•0 comments

Stochastic Parrots: Frequently Unasked Questions

https://medium.com/@emilymenonbender/stochastic-parrots-frequently-unasked-questions-49c2e7d22d11
1•cratermoon•6m ago•0 comments

Bioplastics Toxicity Upon Ingestion: Biotransformation and GI Effects

https://www.mdpi.com/2073-4360/18/9/1091
1•PaulHoule•6m ago•0 comments

Why senior developers fail to communicate their expertise

https://www.nair.sh/guides-and-opinions/communicating-your-expertise/why-senior-developers-fail-t...
1•nilirl•8m ago•0 comments

Apple Sales Coach Will Use AI-Generated Video Presenters

https://www.macrumors.com/2026/05/12/apple-sales-coach-will-use-ai-generated-presenters/
1•ndr42•8m ago•0 comments

Show HN: UIGen – Production UI from any API spec with full override control

https://github.com/darula-hpp/uigen
1•ombedzi•9m ago•0 comments

Bambu Lab 3D printers: Never again

https://www.youtube.com/watch?v=eb48MdtNaDQ
1•chakintosh•10m ago•0 comments

You cannot sell AI written software

https://blog.habets.se/2026/05/You-cannot-sell-AI-written-software.html
1•abnercoimbre•11m ago•2 comments

Heartfelt

https://nicopr.fr/tmp/shades/heartfelt.html
1•bookofjoe•11m ago•0 comments

'I have an A because I use Chat'

https://www.msn.com/en-us/news/technology/i-have-an-a-because-i-use-chat-what-uc-students-say-abo...
1•danorama•12m ago•1 comments

" are ready to take your money"

https://www.rubenerd.au/are-ready-to-take-your-money/
1•speckx•13m ago•0 comments

Humanoid robots to become baggage handlers in Japan airport experiment

https://www.theguardian.com/world/2026/apr/28/humanoid-robots-baggage-handlers-japan-airports
1•PaulHoule•14m ago•0 comments

Incident with CodeQL

https://www.githubstatus.com/incidents/z3jhyg3l0dvx
3•chenrui•14m ago•0 comments

Treat Me Like an Investor

https://cameronwestland.com/treat-me-like-an-investor/
1•camwest•14m ago•0 comments

Fixing headline-only RSS feeds with RSS-fulltext

https://mijndertstuij.nl/posts/introducing-rss-fulltext/
1•mijndert•14m ago•0 comments

ChatGPT Performs Better on Julia Than Python for LLM Code Generation. Why?

https://www.stochasticlifestyle.com/chatgpt-performs-better-on-julia-than-python-and-r-for-large-...
1•thetwentyone•14m ago•0 comments

Ask HN: How do you keep up with blogs from people you follow?

1•kalinkochnev•15m ago•2 comments

Starting 1:1s on the Right Foot

https://personalis.io/blog/one-on-ones
1•sylvanjsmit•15m ago•0 comments

Thomas Massie Has Always Been a Pain in the Ass

https://www.motherjones.com/politics/2026/05/thomas-massie-has-always-been-a-pain-in-the-ass/
1•aworks•16m ago•1 comments

UK Biobank breach prompts the field of genomics to rethink open science

https://www.nature.com/articles/d41586-026-01520-w
1•Brajeshwar•16m ago•0 comments

Show HN: Grunden – Frontier AI inference hosted in Sweden, OpenAI-compatible

https://grunden.ai
3•fsrc•16m ago•0 comments

Show HN: I built a Ruby gem that handles memoization with a ttl

https://github.com/mishalzaman/memo_ttl
48•hp_hovercraft84•1y ago
I built a Ruby gem for memoization with a TTL + LRU cache. It's thread-safe and has been helpful in my own apps. Would love to get some feedback: https://github.com/mishalzaman/memo_ttl

Comments

locofocos•1y ago
Can you pitch me on why I would want to use this, instead of Rails.cache.fetch (which supports TTL) powered by redis (with the "allkeys-lru" config option)?
film42•1y ago
Redis is great for caching a customer config that's hit 2000 times/second by your services, but even then, an in-mem cache with short TTL would make redis more tolerant to failure. This would be great for the in-mem part.
thomascountz•1y ago
I'm not OP nor have I read through all the code, but this gem has no external dependencies and runs in a single process (as does activesupport::Cache::MemoryStore). Could be a "why you should," or a "why you should not" use this gem, depending on your use case.
hp_hovercraft84•1y ago
Good question. I built this gem because I needed a few things that Rails.cache (and Redis) didn’t quite fit:

- Local and zero-dependency. It caches per object in memory, so no Redis setup, no serialization, no network latency.
- Isolated and self-managed. Caches aren't global: each object/method manages its own LRU + TTL lifecycle and can be cleared with instance helpers.
- Easy to use. You just declare the method, set the TTL and max size, and you're done. No key names, no block wrapping, no external config.

JamesSwift•1y ago
For what it's worth, ActiveSupport::Cache::Store is a really flexible API with minimal contractual obligations (read_entry, write_entry, and delete_entry are the entire set of required methods), but it still lets you layer specific functionality (e.g. TTL) on top via an optional 'options' param. You could get the best of both worlds by adhering to that contract; then people can swap in e.g. the Redis cache store if they want a network-shared store.

EDIT: see https://github.com/rails/rails/blob/main/activesupport/lib/a...

hp_hovercraft84•1y ago
That's actually a really good idea! I'll definitely consider this in a future update. Thanks!
qrush•1y ago
Congrats on shipping your first gem!!

I found this pretty easy to read through. I'd suggest setting a description on the repo too so it's easy to find.

https://github.com/mishalzaman/memo_ttl/blob/main/lib/memo_t...

hp_hovercraft84•1y ago
As in identify where the source code is in the README?
zerocrates•1y ago
I think they mean just set a description for the repo on GitHub (using the gear icon next to "About"), saying what the project is. That description text can come up in GitHub and Google searches.
film42•1y ago
Nice! In Rails I end up using Rails.cache most of the time because it's always "right there", but I like how you break out the cache per method to minimize contention. Depending on your workload it might make sense to use a read-write lock instead of a Monitor.

Only suggestion is to not wrap the caller's error in your memo wrapper.

> raise MemoTTL::Error, "Failed to execute memoized method '#{method_name}': #{e.message}"

It doesn't look like you need to catch this for any operational or state-tracking reason, so IMO you should not catch and wrap. When errors are wrapped with a string like this (and caught/re-raised) you lose the original stacktrace, which makes debugging challenging, especially when your error is something like "pg condition failed for select" and you can't see where it failed in the driver.

hp_hovercraft84•1y ago
Thanks for the feedback! That's a very good point, I'll update the gem and let it bubble up.
JamesSwift•1y ago
I thought Ruby would auto-wrap the original exception as long as you are raising from a rescue block (i.e. as long as $! is non-nil). So in that case you can just

  raise "Failed to execute memoized method '#{method_name}'"

and Ruby will set `cause` for you.

https://pablofernandez.tech/2014/02/05/wrapped-exceptions-in...
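The chaining is easy to verify in plain Ruby, independent of the gem. `MemoError` below is a stand-in for `MemoTTL::Error`, and the method names are illustrative:

```ruby
# Raising a new exception inside a rescue block automatically attaches the
# in-flight exception ($!) as `cause`, so the original error survives.

class MemoError < StandardError; end

def flaky
  raise ArgumentError, "pg condition failed for select"
end

begin
  begin
    flaky
  rescue
    # No need to mention the original exception; Ruby links it via `cause`.
    raise MemoError, "Failed to execute memoized method 'flaky'"
  end
rescue MemoError => wrapped
  puts wrapped.message        # wrapper context
  puts wrapped.cause.message  # original driver-level message, still intact
end
```

`wrapped.cause` also carries the original backtrace, which is exactly what the string-interpolation style of wrapping throws away.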

film42•1y ago
TIL! That's pretty cool. I still think that if you have no reason to catch an error (e.g. state tracking) then you shouldn't.
gurgeous•1y ago
This is neat, thanks for posting. I am using memo_wise in my current project (TableTennis) in part because it allows memoization of module functions. This is a requirement for my library.

Anyway, I ended up with a hack like this, which works fine but didn't feel great.

   def some_method(arg)
     @_memo_wise[__method__].tap { _1.clear if _1.length > 100 }
     ...
   end
   memo_wise :some_method
JamesSwift•1y ago
Looks good. I'd suggest making your `get` wait to acquire the lock until needed, e.g. instead of

  @lock.synchronize do
    entry = @store[key]
    return nil unless entry

    ...
you can do

  entry = @store[key]
  return nil unless entry

  @lock.synchronize do
    entry = @store[key]
And similarly for other codepaths
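A minimal sketch of that shape, with a lock-free read fast path and a re-check under the lock. The entry structure and expiry handling here are illustrative, not the gem's actual internals:

```ruby
require 'monitor'

# Fast path: read the hash without the lock and bail early on a miss.
# Slow path: take the lock only when an entry exists and may need
# mutation (expiry bookkeeping), re-reading the key under the lock.
class TtlStore
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
    @lock = Monitor.new
  end

  def get(key)
    entry = @store[key]          # lock-free fast path
    return nil unless entry

    @lock.synchronize do
      entry = @store[key]        # re-check under the lock
      return nil unless entry
      if entry.expires_at && entry.expires_at < Time.now
        @store.delete(key)       # expired: evict and report a miss
        return nil
      end
      entry.value
    end
  end

  def set(key, value, ttl: nil)
    @lock.synchronize do
      @store[key] = Entry.new(value, ttl && Time.now + ttl)
    end
  end
end
```

The early `return nil unless entry` means the common miss case never touches the Monitor at all.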
chowells•1y ago
Does the memory model guarantee that double-checked locking will be correct? I don't actually know for Ruby.
JamesSwift•1y ago
I think it wouldn't even be a consideration here, since we aren't initializing the store, only accessing a key. And there's already a check-then-set race condition in that scenario, so I think it's doubly fine.
hp_hovercraft84•1y ago
Good call, but I think I would like to ensure it remains thread-safe as @store is a hash. Although I will consider something like this in a future update. Thanks!
wood-porch•1y ago
Will this correctly retrieve 0 values? AFAIK 0 is falsey in Ruby

  return nil unless entry

chowells•1y ago
No, Ruby is stricter than that. Only nil and false are falsy.
wood-porch•1y ago
Doesn't that shift the problem to caching false then :D
RangerScience•1y ago
You can probably always just do something like:

  def no_items?
    !items.present?
  end

  def items
    # something long
  end

  memoize :items, ttl: 60, max_size: 10

Just make sure the expensive operation results in a truthy value, then add some sugar for the falsey case, and you're done.
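An alternative that lets nil and false be cached directly is a sentinel: a miss is detected by comparing against a private object with `equal?` rather than by truthiness. This is a generic sketch, not memo_ttl's implementation:

```ruby
# A unique frozen object that can never equal user data marks "no entry".
# Because the miss check uses the sentinel, cached nil/false count as hits.
class Memo
  MISSING = Object.new.freeze

  def initialize
    @cache = {}
  end

  def fetch(key)
    hit = @cache.fetch(key, MISSING)
    return hit unless hit.equal?(MISSING)  # nil and false are valid hits
    @cache[key] = yield                    # compute and store on a true miss
  end
end
```

Hash#key? would work equally well; the point is to never use the cached value's own truthiness as the presence test.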
madsohm•1y ago
Since using `def` to create a method returns a symbol with the method name, you can do something like this too:

  memoize def expensive_calculation(arg)
    @calculation_count += 1
    arg * 2
  end, ttl: 10, max_size: 2

  memoize def nil_returning_method
    @calculation_count += 1
    nil
  end
hp_hovercraft84•1y ago
This is why I love working with Ruby!
deedubaya•1y ago
See https://github.com/huntresslabs/ttl_memoizeable for an alternative implementation.

For those who don’t understand why you might want something like this: if you’re doing high enough throughput that eventual consistency is effectively the same as atomic consistency, and IO hurts (e.g. Redis calls), you may want to cache in memory with something like this.

My implementation above was born out of the need to adjust global state on-the-fly in a system processing hundreds of thousands of requests per second.
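That pattern can be sketched in a few lines: serve a cached copy of the state and pay the IO cost (the block) at most once per TTL window, so readers in between see a slightly stale but consistent value. The injectable clock here is purely to make the behavior easy to demonstrate; a real implementation would just use a monotonic clock:

```ruby
# Caches a single value and refreshes it from the block only after `ttl`
# seconds have elapsed since the last fetch. All reads go through one
# Mutex, so every thread sees the same (possibly slightly stale) value.
class TtlValue
  def initialize(ttl:, clock: -> { Process.clock_gettime(Process::CLOCK_MONOTONIC) })
    @ttl = ttl
    @clock = clock
    @value = nil
    @fetched_at = nil
    @lock = Mutex.new
  end

  def get
    @lock.synchronize do
      now = @clock.call
      if @fetched_at.nil? || now - @fetched_at >= @ttl
        @value = yield          # the expensive call, e.g. a Redis read
        @fetched_at = now
      end
      @value
    end
  end
end
```

At hundreds of thousands of requests per second, this turns N reads per TTL window into one, at the cost of values being up to `ttl` seconds stale.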

kartik_malik•1y ago
In React ?