
Where I'm at with AI

https://paulosman.me/2026/01/18/where-im-at-with-ai/
43•crashwhip•2h ago

Comments

Legend2440•1h ago
> I am certain that generative AI is a productivity amplifier, but its economic, environmental, and cultural externalities are not being discussed enough.

You sure? That’s basically all that’s being discussed.

There’s nothing in this article I haven’t heard 100 times before. Open any mainstream news article or HN/Reddit thread and you’ll find all of OP’s talking points about water, electricity, job loss, the intrinsic value of art, etc.

erxam•1h ago
It should be reworded as: It's not being discussed amongst the people who matter.
rootnod3•1h ago
And most of those concerns are being widely dismissed by the AI shills, even here on HN.

Mention anything about the water and electricity wastage and embrace the downvotes.

sharifhsn•1h ago
Because those criticisms miss the forest for the trees. You might as well complain about the pollution caused by the Industrial Revolution. AI doesn’t use nearly as much water as even a small amount of beef production. And we have cheap ways of producing electricity; we just need to overhaul our infrastructure and regulations.
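(As a rough sketch of that comparison, assuming ~15,000 L of lifecycle water per kg of beef and ~20 mL per chat query - both loose, widely varying literature figures, not measurements:)

    # Napkin math; both constants are assumed rough estimates.
    WATER_PER_KG_BEEF_L = 15_000  # assumed lifecycle water for beef
    WATER_PER_QUERY_ML = 20       # assumed per-query total, incl. cooling

    ml_per_patty = 0.15 * WATER_PER_KG_BEEF_L * 1_000  # 150 g patty, L -> mL
    print(f"one patty ~= {ml_per_patty / WATER_PER_QUERY_ML:,.0f} chat queries of water")
    # -> one patty ~= 112,500 chat queries of water

Even if the per-query figure is off by 10x in either direction, a single serving of beef still lands somewhere between tens of thousands and a million queries.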

The more interesting questions are about psychology, productivity, intelligence, AGI risk, etc. Resource constraints can be solved, but we’re wrestling with societal constraints. Industrialization created modernism; we could see a similar movement in reaction to AI.

nancyminusone•43m ago
>we could see

Well, that's just it. Those existential risks aren't even proven yet.

Meanwhile, threats to resources are already being felt today. "Just overhaul our infrastructure" isn't an actionable solution that will magically fix things today or next week. Even if these don't end up being big problems in the grand scheme of things, that doesn't mean they aren't problems now.

aspenmartin•57m ago
Well, calling people who disagree with you “shills” is maybe a bad start, and indicates you kind of just have an axe to grind. You’re right that there can be serious local issues for data centers, but there are plenty of instances where it’s a clear net positive. There’s a lot of nuance that you’re just breezing over, and then you characterize the people who point this out as “shills”. Water and electricity demands do not have to be problematic; they are highly site-specific. In some cases there are real concerns (drought-prone areas like Arizona, impact on local grids and the possibility of rate impacts for ordinary people, etc.), but in many cases they are not problematic (closed-loop or reclaimed water, independent power sources, etc.).
everdrive•1h ago
This is a weird quirk I observe in all sorts of contexts. "No one's talking about [thing that is frequently discussed]!" or "There's never been [an actor in this identity category] in a major movie role before!" (except there have been, plenty of times) or sometimes "You can't even say Christmas anymore!" (except they just did). The somewhat inaccurate use of hyperbolic language does not mean that there is _nothing_ to the particular statement or issue, only that the hyperbole is just that: an exaggeration of a potentially real and valid issue. The hyperbole is not very helpful, but neither is a total refutation of the issue based on the usage of hyperbole.
throwup238•42m ago
@dang has noted this phenomenon in various forms multiple times. The most recent one I can find:

> It's common, if not inevitable, for people who feel strongly about $topic to conclude that the system (or the community, or the mods, etc.) are biased against their side. One is far more likely to notice whatever data points that one dislikes because they go against one's view and overweight those relative to others. This is probably the single most reliable phenomenon on this site. Keep in mind that the people with the opposite view to yours are just as convinced that there's bias, but they're sure that it's against their side and in favor of yours. [1]

[1] https://news.ycombinator.com/item?id=42205856

s-macke•1h ago
> ... emitting a NYC worth of CO2 in a year is dizzying

Simplified comparisons like these rarely show the full picture [0]. They focus on electricity use only, not on heating, transport, or meat production, and certainly not on the CO2 emissions associated with New York’s airports. As a rough, back-of-the-envelope estimate, a single seat on a flight from Los Angeles to New York is on the order of 1,000,000 small chat queries in CO2e.
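(A sketch of that estimate, assuming ~600 kg CO2e for one economy seat from LA to NY and ~1 g CO2e per small query - both assumed round figures in the spirit of [0]:)

    # Order-of-magnitude check; both constants are assumed estimates.
    FLIGHT_SEAT_CO2E_KG = 600  # assumed: one economy seat, LA -> NY, one way
    QUERY_CO2E_G = 1.0         # assumed: one small chat query, end to end

    queries = FLIGHT_SEAT_CO2E_KG * 1_000 / QUERY_CO2E_G
    print(f"one seat ~= {queries:,.0f} small chat queries in CO2e")
    # -> ~600,000, i.e. the same order of magnitude as the 1,000,000 above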

Of course we should care about AI’s electricity consumption, especially when we run 100 agents in parallel simply because we can. But it’s important to keep it in perspective.

[0] https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...

happytoexplain•58m ago
Yes, it's being discussed a lot. No, it's not being discussed enough, nor by all the right people. It has the potential to cause untold suffering to the middle class of developed nations, since it's moving too fast for individual humans to adjust. On the Problem Scale, that puts it at the "societal emergency" end, which basically cannot be discussed enough.
sodapopcan•46m ago
Ya, I think "it's not being discussed enough" is a veiled way of saying: "I can't believe so many people are ok with this shit."
duped•54m ago
What's not being discussed is that the people building these things are evil, and that they're doing it for evil purposes.

I spent some time thinking of a better word than "evil" before typing this comment. I can't think of one. Doing something bad that harms more than it helps for the purposes of enrichment and power is simply put: evil.

micromacrofoot•29m ago
I think maybe people who write that it's not being discussed really mean that people aren't doing anything based on the discussions. Overall all of this is just sort of happening.
Aurornis•12m ago
> You sure? That’s basically all that’s being discussed

It’s not just being discussed, it’s dominating the conversation on sites like Hacker News. It’s actually hard to find the useful and informative LLM content because it doesn’t get as many upvotes as the constant flow of anti-LLM thought pieces like this.

There was even the strange period when the rationalist movement was in the spotlight and getting mainstream media coverage for their AI safety warnings. They overplayed their hand with the whole call to drop bombs on data centers and the AI 2027 project with Scott Alexander that predicted the arrival of AGI and disastrous consequences in 2027. There was so much extreme doom and cynicism for a while that a lot of people just got tired of it and tuned out.

vonneumannstan•1h ago
It's really hard to take people who say this seriously: "If you asked me six months ago what I thought of generative AI, I would have said that we’re seeing a lot of interesting movement, but the jury is out on whether it will be useful"

Like, I'm sorry, but if you couldn't see that this tech would be enormously useful for millions if not billions of people, you really shouldn't be putting yourself out there opining on anything at all. Same vibes as the guys saying horseless carriages were useless and couldn't possibly do anything better than a horse, which after all has its own mind. Just incredibly short-sighted and lacking curiosity or creativity.

skydhash•1h ago
The first car prototypes were useless, and it took a few decades to get a good version. The first combustion engine was built in 1826. Would you have bought a prototype or a carriage for transportation at that time?
volkk•48m ago
No, but AI isn't going to catch fire as I drive and potentially kill me. It's also not an exorbitant expense.
vonneumannstan•42m ago
If you couldn't foresee how they would eventually be useful with improvements over time, you probably bought a lot of horse carriages in 1893 and appropriately lost your ass.
ivanstojic•1h ago
> If you asked me six months ago what I thought of generative AI, I would have said

It’s always this tired argument. “But it’s so much better than six months ago, if you aren’t using it today you are just missing out.”

I’m tired of the hype, boss.

deweller•1h ago
The second half of that argument was not in this article. The author was just relating his experience.

For what it is worth, I have also gone from "this looks interesting" to "this is a regular part of my daily workflow" in the same 6-month time period.

jofla_net•2m ago
"The challenge isn’t choosing “AI or not AI” - that ship has sailed."
candiddevmike•1h ago
I think the rapid iteration and lack of consistency from the model providers is really killing the hype here. You see HN stories all the time about how things are getting worse, and it seems folks' success with the major models is starting to diverge heavily.

The model providers should really start having LTS (at least 2 years) offerings that deliver consistent results regardless of load, IMO. Folks are tired of the treadmill and just want some stability here, and if the providers aren't going to offer it, llama.cpp will...

KptMarchewa•1h ago
There is a difference between quantization of a SOTA model and old models. People want non-quantized SOTA models, not old models.
jdjeeee•46m ago
Put that all aside. Why can’t they demo a model at max load to show what it’s capable of…?

Yeah, exactly.

aspenmartin•55m ago
Yea, I hear this a lot. Do people genuinely dismiss that there has been step-change progress over a 6-12 month timescale? I mean it’s night and day, look at the benchmark numbers… “yea I don’t buy it” - ok, but then don’t pretend you’re objective.
benrutter•45m ago
I think I'd be in the "don't buy it" camp, so maybe I can explain my thinking at least.

I don't deny that there's been huge improvements in LLMs over the last 6-12 months at all. I'm skeptical that the last 6 months have suddenly presented a 'category shift' in terms of the problems LLMs can solve (I'm happy to be proved wrong!).

It seems to me like LLMs are better at solving the same problems that they could solve 6 months ago, and the same could be said comparing 6 months to 12 months ago.

The argument I'd dismiss isn't the improvement, it's that there's a whole load of sudden economic factors, or use cases, that have been unlocked in the last 6 months because of the improvements in LLMs.

That's kind of a fuzzier point, and a hard one to know until we all have hindsight. But I think OP is right that people have been claiming "LLMs are fundamentally in a different category to where they were 6 months ago" for the last 2 years - and as yet, none of those big improvements have unlocked a whole new category of use cases for LLMs.

To be honest, it's a very tricky thing to weigh in on, because the claims being made around LLMs vary from "we're 2 months away from all disease being solved" to "LLMs are basically just a bit better than old-school Markov chains". I'd argue that clearly neither of those is true, but it's hard to orient when both of those sides are being claimed at the same time.

WarmWash•30m ago
The improvement in LLMs has come in the form of more successful one-shots, more successful bug finding, more efficient code, and less time hand-holding the model.

"Problem solving" (which definitely has improved, but maybe has a spikey domain improvement profile) might not be the best metric, because you could probably hand hold the models of 12 months ago to the same "solution" as current models, but you would spend a lot of time hand holding.

Aurornis•22m ago
I’m a light LLM user myself and I still write most of the important code by myself.

Even I can see there has been a clear advancement in performance in the past six months. There will probably be another incremental step 6 months from now.

I use LLMs in a project that gives suggestions for a previously manual data entry job. Six months ago the LLM suggestions were hit or miss. Using a recent model, it’s over 90% accurate. Everything is still manually reviewed by humans, but having a recent model handle the grunt work has been game-changing.
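(A minimal sketch of that kind of human-in-the-loop routing - the names and the 0.9 threshold here are hypothetical, not the actual project:)

    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        field: str
        value: str
        confidence: float  # hypothetical calibrated score in [0, 1]

    def route(s: Suggestion, threshold: float = 0.9) -> str:
        # Everything is still human-reviewed; the model only pre-fills.
        # Low-confidence suggestions fall back to blank manual entry so
        # reviewers don't anchor on a bad guess.
        return "prefill_for_review" if s.confidence >= threshold else "manual_entry"

    print(route(Suggestion("invoice_total", "420.00", 0.97)))  # prefill_for_review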

If people are drinking a firehose of LinkedIn-style influencer hype posts, I could see why it’s tiresome. I ignore those, and I think everyone else should too. There is real progress being made, though.

dtnewman•1h ago
> The current landscape is a battle between loss-leaders. OpenAI is burning through billions of dollars per year and is expected to hit tens of billions in losses per year soon. Your $20 per month subscription to ChatGPT is nowhere near keeping them afloat. Anthropic’s figures are more moderate, but it is still currently lighting money on fire in order to compete and gain or protect market share.

I don't doubt that the leading labs are lighting money on fire. Undoubtedly, it costs crazy amounts of cash to train these models. But hardware development takes time, and it's only been a few years at this point. Even TODAY, one can run Kimi K2.5, a 1T-param open-source model, on two Mac Studios. It runs at 24 tokens/sec [1]. Yes, it'll cost you $20k for the specs needed, but that's hobbyist and small-business territory... we're not talking mainframe computer costs here. And certainly this price will come down? And it's hard to imagine that the hardware won't get faster/better?

Yes... training the models can really only be done with NVIDIA and costs insane amounts of money. But it seems like even if we see just moderate improvement going forward, this is still a monumental shift for coding if you compare where we are at to 2022 (or even 2024).

[1] https://x.com/alexocheema/status/2016487974876164562?s=20

AnotherGoodName•49m ago
And just to add to this: the reason the Apple Macs are used is that they have the highest memory bandwidth of any easily obtainable consumer device right now. (Yes, the nvidia cards, which also have HBM, are even higher on memory bandwidth, but they're not easily obtainable.) Memory bandwidth is the limiting factor for inference, more so than raw compute.

Memory costs are skyrocketing right now as everyone pivots to high-bandwidth memory paired with moderate processing power. This is the perfect combination for inference. The current memory situation is obviously temporary: factories will be built and scaled, and memory is not particularly power hungry; there’s a reason you don’t really need much cooling for it. As training becomes less of a focus and inference more of one, we will at some point move from the highest-end nvidia cards to boxes of power-efficient HBM attached to smaller, more efficient compute.

I see a lot of “AI companies are so stupid buying up all the memory” commentary around the place atm. That memory is what’s needed to run inference cheaply. It’s currently done on nvidia cards and Apple M-series chips because those two are the first to ship very high memory bandwidth, but the raw compute of the nvidia cards is really only useful for training; they are just being used for inference right now because there’s not much on the market with similar memory bandwidth. This will be changing very soon: everyone in the industry is coming along with their own dedicated compute paired with HBM.
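(A hedged sanity check of the bandwidth-bound claim: for a MoE model, each generated token has to stream the active weights through memory once, so bandwidth alone sets a ceiling on tokens/sec. All three constants below are assumed ballpark figures, not specs:)

    # Bandwidth ceiling for memory-bound token generation.
    ACTIVE_PARAMS = 32e9       # assumed: ~32B active params of a ~1T MoE
    BYTES_PER_WEIGHT = 0.5     # assumed: 4-bit quantization
    BANDWIDTH_B_PER_S = 800e9  # assumed: Mac-Studio-class unified memory

    ceiling_tps = BANDWIDTH_B_PER_S / (ACTIVE_PARAMS * BYTES_PER_WEIGHT)
    print(f"ceiling ~= {ceiling_tps:.0f} tokens/sec")
    # -> ~50 tokens/sec; the reported ~24 tok/s (two machines, overhead) fits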

skydhash•1h ago
> that most software will be built very quickly, and more complicated software should be developed by writing the specification, and then generating the code. We may still need to drop down to a programming language from time to time, but I believe that almost all development will be done with generative AI tools

My strongly held belief is that anyone who thinks that way also thinks that software engineering is reading tickets, searching for code snippets on Stack Overflow, and copy-pasting code.

Good specifications are always written after a lot of prototypes, experiments, and sample implementations (which may be production level). Natural-language specifications exist only after the concept has been formalized. Before that process, you only have dreams and hopes.

leoedin•1h ago
I've been playing around with "vibe coding" recently, generally a React front end and a Rust back end. Rust has the nice benefit that if it compiles, you only really get logic bugs.

In the few apps I've built, progress is initially amazing. Then you get to a certain point and things slow down. I've built up a list of things that are "not quite right", and as I work through each one, all the strange architectural decisions the AI initially made start to become obvious.

Much like any software development, you have to stop adding features and start refactoring. That's the point at which not being a good software developer will really start to bite you, because it's only experience that will point you in the right direction.

It's completely incredible what the models can do, both in speed of building (especially credible front ends) and as sounding boards for architecture. It's definitely a productivity boost. But I think we're still a long way off from non-technical people being able to develop applications.

A while ago I worked on a non-trivial no-code application. I realised then that even though there's "no code", you still need to apply careful thought to data structures and UI and all the other things that make an application great. Otherwise it turns into a huge mess. This feels similar.

throwaway0123_5•38m ago
> But I think we're still a long way off non-technical people being able to develop applications.

I'm surprised I haven't seen anyone do a case study having truly non-technical people build apps with these tools. Take a few moderately tech-savvy people who work white-collar jobs (can use MS Office up to doing basic stuff in Excel, understand a filesystem). Give them a one- or two-day crash course on how Claude Code works. See how complicated an app they can develop that is reasonably stable and secure.

CuriouslyC•20m ago
This is a big part of why vibe coded projects fall over. AI creates big balls of mud because it's myopic as hell. Linters help but they're still pretty limited. I've had good success using https://github.com/sibyllinesoft/valknut to automate the majority of my refactoring work. I still do a style/naming pass on things, but it handles most issues that can be analytically measured.
JBAnderson5•1h ago
I think part of the issue here is that software engineering is a very broad field. If you’re building another CRUD app, your job might only require reading a ticket and copy-pasting from Stack Overflow. If you are working in a regulated industry, you are spending most of your time complying with regulations. If you are building new programming languages or compilers, you are building the abstractions from the ground up. I’m sure there are dozens if not hundreds of other subfields that build software in other ways, with different requirements and constraints.

LLMs will trivialize some subfields, be nearly useless in others, but will probably help to some degree in most of them. The range of opinions online about how useful LLMs are probably correlates with the subfields people work in.

skydhash•40m ago
The thing is, if you’re working on a CRUD app, you probably have (and should have) a framework, which makes it easy to do all the boilerplate. Editor fluency can add an extra boost to your development speed.

I’ve done CRUD, and the biggest proportion of the time went to custom business rules and UI tweaking (updating the design system). And those were commits with small diffs. The huge commits were done by copy-pasting, code generators, and heavy use of the IDE refactoring tools.

mkw5053•1h ago
The Uber comparison feels weak because their lock-in came from regulatory capture and network effects, neither of which LLMs have once weights are commoditized (are we already there?).
willtemperley•57m ago
It's important to remember these things are almost certainly gaslighting people through subtle psychological triggers, making people believe these chatbots are far more capable than they are, using behavioural design principles [1].

I often find that when I come up with the solution, these little autocompletes pretend they knew it all along. Or I make an observation and they say something like "yes, that's the core insight into this".

They're great at boilerplate. They can immediately spot a bug in 1000 lines of code. I just wish they'd stop being so pretentious. It's us who are driving these things; it's our intelligence, intuition, and experience that's creating solutions.

[1] https://en.wikipedia.org/wiki/Behavioural_design

bandrami•42m ago
> I use it for routine coding tasks like generating scaffolding or writing tests

IDK, this sounds a whole lot like paying for snippets.

jillesvangurp•40m ago
> increases in worker productivity at best increase demand for labor, and at worst result in massive disruption - they never result in the same pay for less manual work.

Exactly. Strongly agree with that. This closed-world assumption never holds. We would only do less work if nothing else changed, but of course everything changes when you lower the price of creating software. It gets a lot cheaper, so now you get a lot of companies suddenly considering things that would previously have been too expensive. That still takes skills and expertise they don't have, so they get people involved to do that work. Maybe they'll employ some of those people, but the trend is actually to employ in-house only for things that are core to your company.

And that's just software creation. Everything else is going to change as well. A lot of the software we use is optimized for humans, including all of our development tools. Replacing all that with tools more suitable for being driven automatically by AI is an enormous amount of work.

And we have decades worth of actively used software that requires human operators currently. If you rent a car, some car rental companies still interface with stuff written before I was born. And I'm > 0.5 century old. Same with banks, airline companies, insurers, etc. There's a reason this stuff was never upgraded: doing so is super expensive. Now that just got a bit cheaper to do. Maybe we'll get around to doing some of that. Along with all the stuff for which the ambition level just went up by 10x. And all the rest.

yalogin•33m ago
There really are no use cases for generative AI other than software flows. The irony of all this is that software engineers automated their own workflows and made themselves replaceable. One thing I am convinced of is that we have seen the peak salaries for this industry; software engineering salaries will only fall going forward. Sure, there will be a few key senior folks who continue to make a lot, but software salaries will go through a K-shaped split, just like our economy.
ryandrake•29m ago
> The irony of all this is software engineers automated their own workflows and made themselves replaceable.

"Software Engineers" isn't a single group, here. Some software engineers (making millions of dollars at top AI companies) are automating other software engineers' workflows and making those software engineers replaceable. The people who are going to be mostly negatively affected by these changes are not the people setting out to build them.

KptMarchewa•20m ago
> The irony of all this is software engineers automated their own workflows and made themselves replaceable.

Do you really think we should be like ancient Egyptian scribes and gatekeep the knowledge and skills purely for our own sake?

I don't think there has been another field that is this open with trade secrets. I know a dentist who openly claims they should be paid for training dental interns (idk what the US terminology is), despite the interns being productive, simply because the interns will earn a lot of money in the future. I did not want, and do not want, software engineering to become that.

datsci_est_2015•12m ago
This article is different because it actually talks about code review, which I don’t see very often. Especially amid the ultra-hype 1000x “we built an operating system in a day using agentic coding” posts, code review seems to be treated as a thing of the past.

As long as code continues to need review to (mostly) maintain a chain of liability, I don’t see SWE going anywhere, despite what both hypebros and doomers seem intent on posting.

Code review will simply become the bottleneck for SWE. In other words: reading and understanding code so that when the SHTF, the shareholders know who to hold accountable.

dudeinhawaii•2m ago
I'm very pro-AI and a daily user, but let's be real: where is the productivity gain? It's at the edges; it's in greenfield. We've seen a handful of "things written with AI" launched successfully, and the majority of the time they are shovels for other AI developers. Large corporations are not showing even 5x higher velocity or quality. Ditto for small ones. If the claimed multipliers were broadly real at the org level, we would be seeing obvious outward signals: noticeably faster release cycles in mainstream products, more credible challengers shipping mature alternatives, and open-source projects catching up more rapidly. I am not seeing that yet; is anyone else? It feels like the 'invisible productivity' version of Gross Domestic Product claims: the claims don't seem to match the real world.

And those are the EASY things for AI to "put out of work".

HARD systems like government legacy systems are not something you can slap 200 unit tests on and say "agent, rewrite this in Rust". They're hard because of the complex interconnects, myriad patches, workarounds, and institutional knowledge codified both in the codebase and outside of it. Bugs that stay bugs because the next 20 versions used that bug in some weird way. I started my career in that realm. I call bullshit on AI taking any jobs here if it can't even accelerate the pace of _quality_ releases of OSes and video games.

I'm not saying this won't happen eventually, but that "eventually" is doing a heavy lift. I am skeptical of the 6-12 month timelines for broad job displacement that I see mentioned.

AIs (LLMs) are useful in a subtle way, like "Google Search" and _not_ like "the internet". It's a very specific set of text-heavy information domains that are potentially augmented. That's great, but it's not the end of all work, or even the end of all lucrative technical work.

It's a stellar tool for smart engineers to do _even_ more, and yet, the smart ones know you have to babysit and double-check -- so it's not remotely a replacement.

This is without even opening the can of worms that is AI Agency.

Buttered Crumpet, a custom typeface for Wallace and Gromit

https://jamieclarketype.com/case-study/wallace-and-gromit-font/
99•tobr•1h ago•20 comments

Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents

https://github.com/amlalabs/amla-sandbox
29•souvik1997•2h ago•24 comments

Moltbook

https://www.moltbook.com/
808•teej•12h ago•418 comments

Implementing a tiny CPU rasterizer (2024)

https://lisyarus.github.io/blog/posts/implementing-a-tiny-cpu-rasterizer-part-1.html
39•PaulHoule•4d ago•4 comments

OpenClaw – Moltbot Renamed Again

https://openclaw.ai/blog/introducing-openclaw
406•ed•11h ago•193 comments

Wisconsin communities signed secrecy deals for billion-dollar data centers

https://www.wpr.org/news/4-wisconsin-communities-signed-secrecy-deals-billion-dollar-data-centers
199•sseagull•3h ago•218 comments

Richard Feynman Side Hustles

https://twitter.com/carl_feynman/status/2016979540099420428
78•tzury•2h ago•27 comments

The Engineer who invented the Mars Rover Suspension in his garage [video]

https://www.youtube.com/watch?v=QKSPk_0N4Jc
102•UltraSane•3d ago•16 comments

Quack-Cluster: A Serverless Distributed SQL Query Engine with DuckDB and Ray

https://github.com/kristianaryanto/Quack-Cluster
13•tanelpoder•3d ago•2 comments

Track Your Routine – Open-source app for task management

https://github.com/MSF01/TYR
40•perrii•4h ago•19 comments

How AI assistance impacts the formation of coding skills

https://www.anthropic.com/research/AI-assistance-coding-skills
235•vismit2000•11h ago•190 comments

GOG: Linux "the next major frontier" for gaming as it works on a native client

https://www.xda-developers.com/gog-calls-linux-the-next-major-frontier-for-gaming-as-it-works-on-...
435•franczesko•8h ago•246 comments

Emoji Design Convergence Review: 2018-2026

https://blog.emojipedia.org/emoji-design-convergence-review-2018-2026/
4•surprisetalk•3d ago•0 comments

Pangolin (YC S25) is hiring software engineers (open-source, Go, networking)

https://docs.pangolin.net/careers/join-us
1•miloschwartz•4h ago

Netflix Animation Studios Joins the Blender Development Fund as Corporate Patron

https://www.blender.org/press/netflix-animation-studios-joins-the-blender-development-fund-as-cor...
315•vidyesh•10h ago•48 comments

Show HN: Kolibri, a DIY music club in Sweden

https://kolibrinkpg.com/
87•EastLondonCoder•1d ago•14 comments

PlayStation 2 Recompilation Project Is Absolutely Incredible

https://redgamingtech.com/playstation-2-recompilation-project-is-absolutely-incredible/
500•croes•21h ago•273 comments

Grid: Free, local-first, browser-based 3D printing/CNC/laser slicer

https://grid.space/stem/
342•cyrusradfar•18h ago•112 comments

How AI Impacts Skill Formation

https://arxiv.org/abs/2601.20245
165•northfield27•9h ago•3 comments

Godot 4.6 Release: It's all about your flow

https://godotengine.org/releases/4.6/
122•makepanic•3d ago•42 comments

Show HN: Cicada – A scripting language that integrates with C

https://github.com/heltilda/cicada
33•briancr•4h ago•11 comments

AGENTS.md outperforms skills in our agent evals

https://vercel.com/blog/agents-md-outperforms-skills-in-our-agent-evals
427•maximedupre•1d ago•168 comments

Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT

https://openai.com/index/retiring-gpt-4o-and-older-models/
266•rd•19h ago•332 comments

Doin' It with a 555: One Chip to Rule Them All

https://aashvik.com/posts/555-revolution/
88•MonkeyClub•3d ago•51 comments

HumanConsumption.Live – Real-Time Global Animal Consumption Stats

https://www.humanconsumption.live/
4•speckx•1h ago•0 comments

Stargaze: SpaceX's Space Situational Awareness System

https://starlink.com/updates/stargaze
152•hnburnsy•13h ago•67 comments

The WiFi only works when it's raining (2024)

https://predr.ag/blog/wifi-only-works-when-its-raining/
268•epicalex•19h ago•96 comments

Backseat Software

https://blog.mikeswanson.com/backseat-software/
170•zdw•18h ago•72 comments

Detecting Dementia Using Lexical Analysis: Terry Pratchett's Discworld

https://www.mdpi.com/2076-3425/16/1/94
10•maxeda•42m ago•4 comments

Flameshot

https://github.com/flameshot-org/flameshot
255•OsrsNeedsf2P•21h ago•96 comments