Logging Sucks

https://loggingsucks.com/
229•FlorinSays•2h ago

Comments

firefoxd•1h ago
Good write-up.

Gonna go on a tangent here. Why the single-purpose domain? Especially since the author has a blog. My blog is full of links to single-post domains that are no longer around.

OsrsNeedsf2P•1h ago
Because it's an ad
thewisenerd•1h ago
it's an ad, for what?

i do not see a product upsell anywhere.

if it's an ad for the author themselves, then it's a very good one.

KomoD•48m ago
At the end there's a form where you can get a "personalized report". I have a feeling that'll advertise some kind of service; it's usually the case.
danielfalbo•1h ago
I see more and more blog posts that contain interactive elements. Despite the general enshittification of the average blog and the internet, this feels like a 'modern' touch that actually adds something valuable on top of the already-sufficient, ad-free, no-popups old blog style.
heinrichhartman•1h ago
A post on this topic feels incomplete without a shout-out to Charity Majors - she has been preaching this for a decade, branded the terms "wide events" and "observability", and built honeycomb.io around this concept.

Also worth pointing out that you can implement this method with a lot of tools these days. Both structured logs and traces lend themselves to capturing wide events. Just make sure to use a tool that supports general query patterns and has rich visualizations (time-series, histograms).

the_mitsuhiko•1h ago
> A post on this topic feels incomplete without a shout-out to Charity Majors

I concur. In fact, I strongly recommend anyone who has been working with observability tools or in the industry to read her blog, and the backstory that led to Honeycomb. They were the first to recognize the value of this type of observability and have been a huge inspiration for many that came after.

loevborg•1h ago
I've learned more from Charity about telemetry than from anyone else. Her book is great, as are her talks and blog posts. And Honeycomb, as a tool, is frankly pretty amazing

Yep, I'm a fan.

dcminter•50m ago
Could you drop a few specific posts here that you think are good for someone (me) who hasn't read her stuff before? Looks like there's a decade of stuff on her blog and I'm not sure I want to start at the very beginning...
vasco•51m ago
She has good content but no single person branded the term "observability", what the heck. You can respect someone without making wild claims.
alexwennerberg•1h ago
> Logs were designed for a different era. An era of monoliths, single servers, and problems you could reproduce locally. Today, a single user request might touch 15 services, 3 databases, 2 caches, and a message queue. Your logs are still acting like it's 2005.

If a user request is hitting that many things, in my view, that is a deeply broken architecture.

the_mitsuhiko•1h ago
> If a user request is hitting that many things, in my view, that is a deeply broken architecture.

Whether we want it or not, a lot of modern software looks like that. I am also not a particular fan of building software this way, but it's a reality we're facing. In part it's because quite a few services that people used to build in-house are now outsourced to PaaS solutions. Even basic things such as authentication are more and more moving to third parties.

worik•1h ago
> but it's a reality we're facing.

Yes. Most software is bad

The incentives between managers and technicians are all wrong

Bad software is more profitable, over the time frames managers care about, than good software

the_mitsuhiko•50m ago
I don't think the reason we end up with very complex systems is the incentives between "managers and technicians". If I were to put my finger on it, I would guess it's the very technicians who argued themselves into a world where increased complexity and more dependencies are seen as a good thing.

Fighting complexity is deeply unpopular.

jdpage•1h ago
Tangential, but I wonder if the given example might be straying a step too far? Normally we want to keep sensitive data out of logs, but the example includes a user.lifetime_value_cents field. I'd want to have a chat with the rest of the business before sticking something like that in logs.
nightpool•9m ago
In some companies, this type of information is often very important and very easily available to everyone at all levels of the business to help prioritize and understand customer value. I would not consider it "sensitive" in the same way that e.g. PII would be.
zkmon•1h ago
> Logs were designed for a different era. An era of monoliths, single servers, and problems you could reproduce locally. Today, a single user request might touch 15 services, 3 databases, 2 caches, and a message queue. Your logs are still acting like it's 2005.

Logs are fine. The job of local logs is to record the talk of a local process. They are doing this fine. Local logs were never meant to give you a picture of what's going on some other server. For such context, you need transaction tracing that can stitch the story together across all processes involved.

Usually, looking at the logs in the right place should lead you to the root cause.

holoduke•1h ago
APN/Kibana. All I need for inspecting logs.
devmor•52m ago
Shoutout to Kibana. Absolutely my favorite UI tool for trying to figure out what went wrong (and sometimes, IF anything went wrong in the first place)
venturecruelty•1h ago
>Today, a single user request might touch 15 services, 3 databases, 2 caches, and a message queue.

Not if I have anything to say about it.

>Your logs are still acting like it's 2005.

Yeah, because that's just before software development went absolutely insane.

otterley•59m ago
One of the points the author is trying to make (although he doesn't make it well, and his attitude makes it hard to read) is that logs aren't just for root-causing incidents.

When properly seasoned with context, logs give you useful information like who is impacted (not every incident impacts every customer the same way), correlations between component performance and inputs, and so forth. When connected to analytical engines, logs with rich context can help you figure out things like behaviors that lead to abandonment, the impact of security vulnerability exploits, and much more. And in their never-ending quest to improve their offerings and make more money, product managers love being able to test their theories against real data.

ivan_gammel•20m ago
It’s a wild violation of SRP to suggest that. Separating concerns is way more efficient. A database can handle the audit trail and some key metrics much better: no special tools needed, and you can join the transaction log with domain tables as a bonus.
otterley•14m ago
Are you assuming they're all stored identically? If so, that's not necessarily the case.

Once the logs have entered the ingestion endpoint, they can take the most optimal path for their use case. Metrics can be extracted and sent off to a time-series metric database, while logs can be multiplexed to different destinations, including stored raw in cheap archival storage, or matched to schemas, indexed, stored in purpose-built search engines like OpenSearch, and stored "cooked" in Apache Iceberg+Parquet tables for rapid querying with Spark or other analytical engines.

Have you ever taken, say, VPC flow logs, saved them in Parquet format, and queried them with DuckDB? It's mind blowing.
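
For instance, a minimal sketch with DuckDB's Python API (the path and column names are illustrative, not a claim about the actual flow-log schema):

    import duckdb

    # Query Parquet-formatted flow logs straight off disk (or S3) without loading them anywhere first.
    # 'flow_logs/*.parquet' and the column names are made up for illustration.
    con = duckdb.connect()
    top_talkers = con.sql("""
        SELECT srcaddr, dstaddr, SUM(bytes) AS total_bytes
        FROM read_parquet('flow_logs/*.parquet')
        WHERE action = 'REJECT'
        GROUP BY srcaddr, dstaddr
        ORDER BY total_bytes DESC
        LIMIT 20
    """).fetchall()
    print(top_talkers[:5])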

ivan_gammel•4m ago
Good joke.
ohans•1h ago
This was a brilliant write-up, and I loved the interactivity.

I do think "logs are broken" is a bit overstated. The real problem is unstructured events + weak conventions + poor correlation.

Brilliant write-up regardless

the__alchemist•1h ago
From what I gather: this is referring to websites or other HTTP applications which are internally implemented as a collection of separate applications/microservices?
cowsandmilk•1h ago
Horrid advice at the end about logging every error, exception, slow request, etc. if you are sampling healthy requests.

Taking slow requests as an example, a dependency gets slower and now your log volume suddenly goes up 100x. Can your service handle that? Are you causing a cascading outage due to increased log volumes?

Recovery is easier if your service is doing the same or less work in a degraded state. Increasing logging by 20-100x when degraded is not that.
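
For context, the rule being criticized is roughly this shape (a hedged sketch; the threshold and field names are invented, not the article's actual code):

    import random

    P99_MS = 1200             # a static "historical p99"; the point above is that this can go stale
    HEALTHY_SAMPLE_RATE = 0.01

    def should_keep(event: dict) -> bool:
        """Keep every error and slow request; sample the healthy rest."""
        if event.get("error") or event.get("status_code", 200) >= 500:
            return True
        if event.get("duration_ms", 0) > P99_MS:
            return True
        return random.random() < HEALTHY_SAMPLE_RATE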

otterley•1h ago
It’s an important architectural requirement for a production service to be able to scale out its log ingestion capabilities to meet demand.

Besides, a little local on-disk buffering goes a long way, and is cheap to boot. It’s an antipattern to flush logs directly over the network.

trevor-e•1h ago
Yea, that was my thought too. I like the idea in principle, but these magic thresholds can really bite you. It claims to be the p99, probably based off some historical measurement, but that's only true if it's dynamically updated. Maybe this could periodically query the OTel provider for the real number to at least limit the time window of something bad happening.
debazel•59m ago
My impression was that you would apply this filter after the logs have reached your log destination, so there should be no difference for your services unless you host your own log infra, in which case there might be issues on that side. At least that's how we do it with Datadog, because ingestion is cheap but indexing and storing logs long term is the expensive part.
Veserv•35m ago
I do not see how logging could bottleneck you in a degraded state unless your logging is terribly inefficient. A properly designed logging system can record on the order of 100 million logs per second per core.

Are you actually contemplating handling 10 million requests per second per core that are failing?

otterley•21m ago
Generation and publication are just the beginning (never mind the fact that resources consumed by an application to log something are no longer available to do real work). You have to consider the scalability of each component in the logging architecture from end to end. There's ingestion, parsing, transformation, aggregation, derivation, indexing, and storage. Each one of those needs to scale to meet demand.
Veserv•8m ago
I already accounted for consumed resources when I said 10 million instead of 100 million. I allocated 10% to logging overhead. If your service is within 10% of overload you are already in for a bad time. And frankly, what systems are you using that are handling 10 million requests per second per core (100 nanoseconds per request)? Hell, what services are you deploying that you even have 10 million requests per second per core to handle?

All of those other costs are, again, trivial with proper design. You can easily handle billions of events per second on the backend with even a modest server. This is done regularly by time traveling debuggers which actually need to handle these data rates. So again, what are we even deploying that has billions of events per second?

otterley•4m ago
In my experience working at AWS and with customers, you don't need billions of TPS to make an end-to-end logging infrastructure keel over. It takes much less than that. As a working example, you can host your own end-to-end infra (the LGTM stack is pretty easy to deploy in a Kubernetes cluster) and see what it takes to bring yours to a grinding halt with a given set of resources.
kgklxksnrb•1h ago
Logfiles are a user interface.
otterley•1h ago
The substance of this post is outstanding.

The framing is not, though. Why does it have to sound so dramatic and provocative? It’s insulting to its audience. Grumpiness, in the long term, is a career-limiting attitude.

b0ringdeveloper•1h ago
I get the AI feeling from it.
otterley•1h ago
It might have been AI-assisted, and it might not have been. It doesn’t really matter. The author is ultimately responsible for the end result.
jupin•1h ago
Some excellent points raised in this article.
charcircuit•1h ago
This article is attacking a strawman. It makes up terrible logs and then says they are bad. Even if this were a single monolith, the logs still don't include even something like a thread ID to avoid mixing different requests together.
blinded•43m ago
I see logs worse than that on the daily.
dcminter•1h ago
I've generally found that structured logs that include a correlation ID make it quite easy to narrow down the general area or exact cause of problems. Usually (in enterprise orgs) via Splunk or Datadog.

Where I've had problems it's usually been one of:

There wasn't anything logged in the error block. A comment saying "never happens" is often discovered later :)

Too much was logged and someone mandated dialing the logging down to save costs. Sigh.

A new thread was started and the thread-local details including the correlation ID got lost, then the error occurred downstream of that. I'd like better solutions for that one.

Edit: Incidentally a correlation ID is not (necessarily) the same thing as a request ID. An API often needs to allow for the caller making multiple calls to achieve an objective; 5 request IDs might be tied to a single correlation ID.
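
For that last one, a minimal sketch (Python stdlib contextvars; the names are hypothetical) of keeping the correlation ID alive when handing work to another thread:

    import contextvars, logging, threading

    correlation_id = contextvars.ContextVar("correlation_id", default="-")

    class CorrelationFilter(logging.Filter):
        """Stamp the current correlation ID onto every record."""
        def filter(self, record):
            record.correlation_id = correlation_id.get()
            return True

    logging.basicConfig(format="%(asctime)s [%(correlation_id)s] %(message)s", level=logging.INFO)
    for handler in logging.getLogger().handlers:
        handler.addFilter(CorrelationFilter())

    def do_downstream_work():
        logging.info("downstream call")  # still carries corr-abc-123

    def handle_request(cid):
        correlation_id.set(cid)
        logging.info("request received")
        # Copy the current context explicitly; a bare Thread would not inherit the contextvar.
        ctx = contextvars.copy_context()
        worker = threading.Thread(target=ctx.run, args=(do_downstream_work,))
        worker.start()
        worker.join()

    handle_request("corr-abc-123")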

Spivak•1h ago
Slapping on OpenTelemetry actually will solve your problem.

Point #1 isn't true; auto-instrumentation exists and is really good. When I integrate OTel I add my own auto-instrumentors wherever possible to automatically add lots of context. Which gets into point #2.

Point #2 also isn't true. It can add business context in a hierarchical manner and ship wide events. You shouldn't have to tell every span all the information again, just where it appears naturally the first time.

Point #3 also also isn't true, because OTel libs make it really annoying to just write a log message and very strongly push you into a hierarchy of nested context managers.

Like the author's ideal setup is basically using OTel with Honeycomb. You get the querying and everything. And unlike rawdogging wide events all your traces are connected, can span multiple services and do timing for you.
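
Roughly what that hierarchy of nested context managers looks like with the OTel Python API (a sketch; the span and attribute names are mine, and SDK/exporter setup is omitted):

    from opentelemetry import trace  # opentelemetry-api; SDK/exporter wiring not shown

    tracer = trace.get_tracer("checkout-service")

    def charge_card(user_id, cart): ...      # stand-ins for real business logic
    def reserve_inventory(cart): ...

    def checkout(user_id, cart):
        # The parent span carries the business context once...
        with tracer.start_as_current_span("checkout") as span:
            span.set_attribute("user.id", user_id)
            span.set_attribute("cart.item_count", len(cart))
            # ...and the child spans belong to the same trace, so nothing has to be repeated.
            with tracer.start_as_current_span("charge_card"):
                charge_card(user_id, cart)
            with tracer.start_as_current_span("reserve_inventory"):
                reserve_inventory(cart)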

yujzgzc•1h ago
You might also need different systems for low-cardinality, low-latency production monitoring (where you want to throw alerts quickly and high cardinality fields would just get in the way), and medium to long term logging with wide events.

Also, if you're going to log wide events, for the sake of the person querying them after you, please don't let your schema be an ad hoc JSON dict of dicts. Put some thought into the schema structure (and ideally have a logging system that enforces the schema).
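
For example, a sketch of giving the wide event a declared shape instead of an ad hoc dict (the field names are invented):

    from dataclasses import dataclass, asdict, field
    import json, time

    @dataclass
    class RequestEvent:
        """One wide event per request, with a fixed, documented shape."""
        request_id: str
        user_id: str
        route: str
        status_code: int
        duration_ms: float
        timestamp: float = field(default_factory=time.time)

    def emit(event: RequestEvent) -> None:
        print(json.dumps(asdict(event)))  # stand-in for the real log shipper

    emit(RequestEvent("req-42", "user-123", "/checkout", 200, 87.5))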

m3047•1h ago
I agree with this statement: "Instead of logging what your code is doing, log what happened to this request." but the impression I can't shake is that this person lacks experience, or more likely has a lot of experience doing the same thing over and over.

"Bug parts" (as in "acceptable number of bug parts per candy bar") logging should include the precursors of processing metrics. I think what he calls "wide events" I call bug parts logging in order to emphasize that it also may include signals pertaining to which code paths were taken, how many times, and how long it took.

Logging is not metrics is not auditing. In particular processing can continue if logging (temporarily) fails but not if auditing has failed. I prefer the terminology "observables" to "logging" and "evaluatives" to "metrics".

In mature SCADA systems there is the well-worn notion of a "historian". Read up on it.

A fluid level sensor on CANbus sending events 10x a second isn't telling me whether or not I have enough fuel to get to my destination (a significant question); however, that granularity might be helpful for diagnosing a stuck sensor (or a bad connection). It would be impossibly fatiguing and hopelessly distracting to try to answer the significant question from this firehose of low-information events. Even a de-noised fuel gauge doesn't directly diagnose my desired evaluative (will I get there or not?).

Does my fuel gauge need to also serve as the debugging interface for the sensor? No, it does not. Likewise, send metrics / evaluatives to the cloud not logging / observables; when something goes sideways the real work is getting off your ass and taking a look. Take the time to think about what that looks like: maybe that's the best takeaway.

otterley•53m ago
> Logging is not metrics is not auditing.

I espouse a "grand theory of observability" that, like matter and energy, treats logs, metrics, and audits alike. At the end of the day, they're streams of bits, and so long as no fidelity is lost, they can be converted between each other. Audit trails are certainly carried over logs. Metrics are streams of time-series numeric data; they can be carried over log channels or embedded inside logs (as they often are).

How these signals are stored, transformed, queried, and presented may differ, but at the end of the day, the consumption endpoint and mechanism can be the same regardless of origin. Doing so simplifies both the conceptual framework and design of the processing system, and makes it flexible enough to suit any conceivable set of use cases. Plus, storing the ingested logs as-is in inexpensive long-term archival storage allows you to reprocess them later however you like.

Veserv•24m ago
Saying they are all the same when no fidelity is lost is missing the point. The only distinction between logs, traces, and metrics is literally what to do when fidelity is lost.

If you have insufficient ingestion rate:

Logs are for events that can be independently sampled and be coherent. You can drop arbitrary logs to stay within ingestion rate.

Traces are for correlated sequences of events where the entire sequence needs to be retained to be useful/coherent. You can drop arbitrary whole sequences to stay within ingestion rate.

Metrics are pre-aggregated collections of events. You pre-limited your emission rate to fit your ingestion rate at the cost of upfront loss of fidelity.

If you have adequate ingestion rate, then you just emit your events bare and post-process/visualize your events however you want.

otterley•11m ago
> If you have insufficient ingestion rate

I would rather fix this problem than every other problem. If I'm seeing backpressure, I'd prefer to buffer locally on disk until the ingestion system can get caught up. If I need to prioritize signal delivery when I see backpressure, I can do that locally as well by separating streams (i.e. priority queueing). It doesn't change the fundamental nature of the system, though.

lll-o-lll•12m ago
Auditing is fundamentally different because it has different durability and consistency requirements. I can buffer my logs, but I might need to transact my audit.
otterley•7m ago
For most cases, buffering audit logs on local storage is fine. What matters is that the data is available and durable somewhere in the path, not that it be transactionally durable at the final endpoint.
mrkeen•59m ago
> Your logs are lying to you. Not maliciously. They're just not equipped to tell the truth.

The best way to equip logs to tell the truth is to have other parts of the system consume them as their source of truth.

Firstly: "what the system does" and "what the logs say" can't be two different things.

Secondly: developers can't put less info into the logs than they should, because their feature simply won't work without it.

8n4vidtmkvmk•46m ago
That doesn't sound like a good plan. You're coupling logging with business logic. I don't want to have to think about whether changing a debug string is going to break something.
andoando•36m ago
Your logic wouldn't be dependent on a debug string, but on some enum in a structured field. Ex: event_type: CREATED_TRANSACTION.

Seeing logging as debugging is flawed imo. A log is technically just a record of what happened in your database.
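
A minimal sketch of that decoupling (the enum values and fields are hypothetical): consumers match on the enum, never on the message text:

    from enum import Enum
    import json

    class EventType(str, Enum):
        CREATED_TRANSACTION = "CREATED_TRANSACTION"
        REFUNDED_TRANSACTION = "REFUNDED_TRANSACTION"

    def log_event(event_type: EventType, message: str, **fields):
        # Downstream consumers key off event_type; the human-readable message is free to change.
        print(json.dumps({"event_type": event_type.value, "message": message, **fields}))

    log_event(EventType.CREATED_TRANSACTION, "charged card ok",
              transaction_id="tx-9", amount_cents=1250)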

SoftTalker•12m ago
You're also assuming your log infrastructure is a lot more durable than most are. Generally, logging is not a guaranteed action. Writing a log message is not normally something where you wait for a disk sync before proceeding. Dropping a log message here or there is not a fatal error. Logs get rotated and deleted automatically. They are designed for retroactive use and best effort event recording, not assumed to be a flawless record of everything the system did.
tetha•52m ago
One thing this is missing: standardization, and probably ECS's idea of "related" fields.

A common problem in log aggregation is the question of whether you query for user.id, user_id, userID, buyer.user.id, buyer.id, buyer_user_id, buyer_id, ... Every log aggregation setup ends up being plagued by this. You need standard field names there, or it becomes a horrible mess.

And for centralized aggregation, I like ECS's idea of "related". If you have a buyer and a seller, both with user IDs, you'd have a `related.user.id` with both IDs in there. This makes it very simple to say "hey, give me everything related to request X" or "give me everything involving user Y in this time frame" (as long as this is kept up to date, naturally).
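
Roughly what that looks like on a single event (a sketch following the field naming above, not the exact ECS layout):

    # One purchase event; `related.user.id` collects every user involved, so a single
    # query finds it whether you start from the buyer or the seller.
    purchase_event = {
        "event": {"action": "purchase_completed"},
        "buyer": {"user": {"id": "user-123"}},
        "seller": {"user": {"id": "user-456"}},
        "related": {"user": {"id": ["user-123", "user-456"]}},
    }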

ttoinou•43m ago
I always wondered why we didn't have some kind of fuzzy English word-search regex/tool that is robust to keyboard typing mistakes, spelling mistakes, synonyms, plurals, conjugation, etc.
j-pb•38m ago
I actually wrote my bachelor's thesis on this topic, but instead of going the ECS route (which still has redundant fields in different components) I went in the RDF direction. That system has shifted towards more of a middleware/database hybrid (https://github.com/triblespace/triblespace-rs). I always wonder whether we would actually need logging if we had more data-oriented stacks where the logs fall out as a natural byproduct of communication and storage.
thevinter•46m ago
The presentation is fantastic and I loved the interactive examples!

Too bad that all of this effort is spent arguing something which can be summarised as "add structured tags to your logs"

Generally speaking my biggest gripe with wide logs (and other "innovative" solutions to logging) is that whatever perceived benefit you argue for doesn't justify the increased complexity and loss of readability.

We're throwing away `grep "uid=user-123" application.log` to get what? The shipping method of the user attached to every log? Doesn't feel like an improvement to me...

P.S. The checkboxes in the wide event builder don't work for me (Brave on Android)

bambax•44m ago
> Logging Sucks

But does it? Or is it bad logging, or excessive logging, or unsearchable logs?

A client of mine uses SnapLogic, which is a middleware / ETL tool that's supposed to run pipelines in batch mode to pass data around between systems. It generates an enormous amount of logs that are so difficult to access, search, and read that they may as well not exist.

We're replacing all of that with simple Python scripts that do the same thing and generate normal simple logs with simple errors when something's truly wrong or the data is in the wrong format.

Terse logging is what you want, not an exhaustive (and exhausting) torrent of irrelevant information.

asdev•42m ago
this is the best lead generation form i've ever seen
roncesvalles•41m ago
AI slop blogvert. The first example is disingenuous btw. Everyone these days uses request IDs to be able to query all log lines emitted by a single request, usually set by the first backend service to receive the request and then propagated using headers (and also set in the server response).

There isn't anything radical about his proposed solutions either. Most log storage can be set with a rule where all warning logs or above can be retained, but only a sample of info and debug logs.

The "key insight" is also flawed. The reason why we log at every step is because sometimes your request never completes and it could be for 1000 reasons but you really need to know how far it got in your system. Logging only a summary at the end is happy path thinking.

mnahkies•35m ago
That was difficult to read and smelt very AI-assisted. Though the message was worthwhile, it could've been shorter and more to the point.

A few things I've been thinking about recently:

- we have authentication everywhere in our stack, so I've started including the user id on every log line. This makes getting a holistic view of what a user experienced much easier.

- logging an error as a separate log line from the request log is a pain. You can filter for the trace, but it makes it hard to surface "show me all the logs for 5xx requests and the associated error" - it's doable, but it's more difficult than filtering on the status code of the request log

- it's not enough to just start including that context, you have to educate your coworkers that it's now present. I've seen people making life hard for themselves because they didn't realize we'd added this context

spike021•21m ago
If your codebase has the concept of a request ID, you could also feasibly use that to trace what a user has been doing with more specificity.
mnahkies•14m ago
We do have both a span ID and a trace ID - but I personally find this more cumbersome than filtering on a user ID. YMMV; if you're interested in a single trace then you'd filter for that, but I find you often also care what happened "around" a trace.
ivan_gammel•14m ago
…and the same ID can be displayed to the user on HTTP 500 along with the support contact, making everyone's life much easier.
xmprt•12m ago
On the other hand, investing in better tracing tools unlocks a whole nother level of logging and debugging capabilities that aren't feasible with just request logs. It's kind of like you mentioned with using the user id as a "trace" in your first message but on steroids.
bob1029•31m ago
> Logs were designed for a different era. An era of monoliths, single servers, and problems you could reproduce locally.

I worked with enterprise message bus loggers in a semiconductor manufacturing context wherein we had thousands of participants on the message bus. It generated something like 300-400 megabytes per hour. Despite the insane volume, we made this work really well using just grep and other basic CLI tools.

The logs were mere time series of events. Figuring out the details of specific events (e.g. a list of all the tools a lot visited) required writing queries into the Oracle monster. You could derive history from the event logs if you had enough patience & disk space, but that would have been very silly given the alternative option. We used them predominantly to establish a causal chain between events when the details were still preliminary. Identifying suspects and such. Actually resolving really complicated business usually requires more than a perfectly detailed log file.

eterm•30m ago
Overly dismissive of OTLP without proper substance to the criticism.
ivan_gammel•28m ago
The problem statement in this article sounds weird. I thought in 2025 everyone logs at least a thread ID and a context ID (user ID, request ID, etc.), and in a microservice architecture at least a transaction or saga ID. You don't need structured logging, because grepping by this ID is sufficient for incident investigation. And for analytics and metrics, databases of events and requests make more sense.
ardme•14m ago
Maybe better written and simplified to: “microservices suck”.
exabrial•12m ago
Our logging guidance is: "Don't write comments, write logs", and that serves us pretty well. The point being: don't write "clever code", write obvious code, and try to make it similar to everything else that's been done, regardless of whether you agree with it.
lstroud•3m ago
Sounds like he’s just asking for an old school Inman style transaction log.
UltraSane•3m ago
Splunk is expensive but it makes searching logs so much faster and more effective. I think of it as SQL for unstructured data.
