frontpage.

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•2m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•3m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•8m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•10m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•16m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•19m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•20m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•23m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•25m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•26m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•29m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•31m ago•4 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•32m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•34m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•36m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•38m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•41m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•46m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•47m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•51m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Show HN: Async – Claude code and Linear and GitHub PRs in one opinionated tool

https://github.com/bkdevs/async-server
107•wjsekfghks•5mo ago
Hi, I’m Mikkel and I’m building Async, an open-source developer tool that combines AI coding with task management and code review.

What Async does:

  - Automatically researches coding tasks, asks clarifying questions, then executes code changes in the cloud
  - Breaks work into reviewable subtasks with stack diffs for easier code review
  - Handles the full workflow from issue to merged PR without leaving the app
Demo here: https://youtu.be/98k42b8GF4s?si=Azf3FIWAbpsXxk3_

I’ve been working as a developer for over a decade now. I’ve tried all sorts of AI tools out there including Cline, Cursor, Claude Code, Kiro and more. All are pretty amazing for bootstrapping new projects. But most of my work is iterating on existing codebases where I can't break things, and that's where the magic breaks down. None of these tools work well on mature codebases.

The problems I kept running into:

  - I'm lazy. My Claude Code workflow became: throw a vague prompt like "turn issues into tasks in Github webhook," let it run something wrong, then iterate until I realize I could've just coded it myself. Claude Code's docs say to plan first, but it's not enforced and I can't force myself to do it.
  - Context switching hell. I started using Claude Code asynchronously - give it edit permissions, let it run, alt-tab to work on something else, then come back later to review. But when I return, I need to reconcile what the task was about, context switch back, and iterate. The mental overhead kills any productivity gains.
  - Tracking sucks. I use Apple Notes with bullet points to track tasks, but it's messy. Just like many other developers, I hate PM tools but need some way to stay organized without the bloat.
  - Review bottleneck. I've never shipped Claude Code output without fixes, at minimum stylistic changes (why does it always add comments even when I tell it not to?). The review/test cycle caps me at maybe 3 concurrent tasks.
So I built Async:

  - Forces upfront planning, always asks clarifying questions and requires confirmation before executing
  - Simple task tracking that imports GitHub issues automatically (other integrations coming soon!)
  - Executes in the cloud, breaks work into subtasks, creates commits, opens PRs
  - Built-in code review with stacked diffs - comment and iterate without leaving the app
  - Works on desktop and mobile
It works by using a lightweight research agent to scope out tasks and come up with requirements and clarifying questions as needed (e.g., "fix the truncation issue" - "Would you like a tooltip on hover?"). After you confirm requirements, it executes the task by breaking it down into subtasks and then working commit by commit. It uses a mix of Gemini and Claude Code internally and runs all changes in the background in the cloud.
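The plan-confirm-execute gating described above can be sketched in a few lines. Everything here is hypothetical: `Plan`, `research`, and `execute` are stand-ins for Async's internal agents, not its actual API.

```python
# Minimal sketch of a plan-confirm-execute loop: a research step produces
# requirements and clarifying questions, and execution refuses to start
# until the plan is explicitly confirmed. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Plan:
    requirements: list[str]
    questions: list[str]
    subtasks: list[str] = field(default_factory=list)

def research(prompt: str) -> Plan:
    # A lightweight agent would scope the task here; stubbed for the sketch.
    return Plan(
        requirements=[f"Resolve: {prompt}"],
        questions=["Would you like a tooltip on hover?"],
    )

def execute(plan: Plan, confirmed: bool) -> list[str]:
    # Upfront planning is enforced: no confirmation, no execution.
    if not confirmed:
        raise RuntimeError("refusing to execute an unconfirmed plan")
    # Work proceeds commit by commit, one subtask per requirement.
    plan.subtasks = [f"commit: {r}" for r in plan.requirements]
    return plan.subtasks

plan = research("fix the truncation issue")
commits = execute(plan, confirmed=True)
print(commits)  # ['commit: Resolve: fix the truncation issue']
```

The point of the gate is that skipping the planning step is impossible by construction, rather than a documented best practice the user has to remember.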

You've probably seen tools that do pieces of this, but I think it makes sense as one integrated workflow.

This isn't for vibe coders. I'm building a tool that I can use in my day-to-day work. Async is for experienced developers who know their codebases and products deeply. The goal is to make Async the last tool developers need to build something great. Still early and I'm iterating quickly. Would love to know what you think.

P.S. My cofounder loves light mode, I only use dark mode. I won the argument so our tool only supports dark mode. Thumbs up if you agree with me.

Comments

mmargenot•5mo ago
I think this is a neat approach. When I interact with AI tooling, such as Claude Code, my general philosophy has been to maintain a strong opinion about what it is that I actually want to build. I usually have some system design done or some picture that I've drawn to make sure that I can keep it straight throughout a given session. Without that core conception of what needs to be done, it's a little too easy for an LLM to run off the rails.

This dialogue-based path is a cool way to interact with an existing codebase (and I'm a big proponent of writing and rewriting). At the very least you're made to actually think through the implications of what needs to be done and how it will play with the rest of the application.

How well do you find that this approach handles the long tail of little things that need to be corrected before finally merging? Does it fix the fiddly stylistic errors on its own, or is it more that the UI / PR review approach you've taken is more ergonomic for handling them?

wjsekfghks•5mo ago
hey! that's awesome to hear, thanks for the feedback.

we've tried a lot of things to make code more in line with our paradigms (initially tried a few agents to parse out "project rules" from existing code, then used that in the system prompt), but have found that the agents tend to go off-track regardless. the highest leverage has just been changing the model (Claude writes code a certain way which we tend to prefer, vs GPT, etc) and a few strong system prompts (NEVER WRITE COMMENTS, repeated twice).

so the questions here are less about that, but more about overall functional / system requirements, and acknowledging that for stylistic things, the user will still have to review.

frankfrank13•5mo ago
Looks cool, tbh I think i'd be more interested in just a lightweight local UI to track and monitor claude code, I could skip the linear and github piece.
wjsekfghks•5mo ago
Thanks for the feedback. Yeah, that is where we are heading, as mentioned in the demo video. We will follow up shortly to release a local tool :)
ahinchliff•5mo ago
I second this. I love the flow you are building but I want this to run locally :)
JoshPurtell•5mo ago
Something I'd consider a game-changer would be making it really easy to kick off multiple claude instances to tackle a large researched task and then to view the results and collect them into a final research document.

IME no matter how well I prompt, a single claude/codex will never get a successful implementation of a significant feature single-shot. However, what does work is having 5 Claudes try it, reading the code and cherry picking the diff segments I like into one franken-spec I give to a final claude instance with essentially just "please implement something like this"

It's super manual and annoying with git worktrees for me, but it sounds like your setup could make it slick

wjsekfghks•5mo ago
Interesting. So, do you just start multiple instances of Claude Code and ask the same prompt on all of them? Manually cherry picking from 5 different worktrees sounds complicated. Will see what I can do :)
JoshPurtell•5mo ago
Yeah, exactly, same prompt.

I agree, it's more complex. But, I feel like the potential with a claude code wrapper is precisely in enabling workflows that are a pain to self-implement but nonetheless are incredibly powerful
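The fan-out workflow described here (same prompt to N agents, one worktree each, cherry-pick the best diffs) is mechanical enough to script. This sketch only builds the `git worktree` command lines rather than running them; the branch-naming scheme is made up.

```python
# Hypothetical helper that plans the git-worktree commands for fanning one
# prompt out to N parallel agent attempts. Nothing here shells out to git;
# it only constructs the commands you would run (or hand to subprocess.run).
def fanout_commands(base_branch: str, prompt_id: str, n: int) -> list[list[str]]:
    cmds = []
    for i in range(n):
        branch = f"{prompt_id}-attempt-{i}"
        # Each attempt gets its own branch and its own working directory,
        # so N agents can edit the same repo without stepping on each other.
        cmds.append(
            ["git", "worktree", "add", f"../{branch}", "-b", branch, base_branch]
        )
    return cmds

cmds = fanout_commands("main", "truncation-fix", 5)
print(len(cmds))      # 5
print(cmds[0])
```

After the attempts finish, the manual part (reading five diffs and assembling the franken-spec) is exactly the step a hosted tool could make slick with a side-by-side diff view.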

k__•5mo ago
I hope it works better than GitHub Copilot Agent.
artur_makly•5mo ago
What's the benefit of hosting it in the cloud?
wjsekfghks•5mo ago
The main benefit is that you can issue tasks on mobile. Initially we were just a mobile app, and when we decided to build a desktop version, we just reused all the infra we had. We realized that for desktop, cloud isn't necessary, so we are trying to migrate to local now.
mgrandl•5mo ago
Your docs on self-hosting are a bit light. Can you use the mobile app while self-hosting? That would be the main selling point for me.
chis•5mo ago
Great pitch, you've articulated the pain point super well and I agree with it.

I have personally had no luck with prompting models to ask me clarifying questions. They just never seem to think of the key questions, just asking random shit to "show" that they planned ahead. And they also never manage to pause halfway through when it gets tough and ask for further planning.

My question is how well you feel it actually works today with your tool.

wjsekfghks•5mo ago
Honestly, it's not there yet and I'm iterating to make it better and more consistent. But I've had a few moments where it got questions and implementations right and it felt magical. So I wanted to share it with more people and see how people like the approach.
dbbk•5mo ago
Interesting you say that. My workflow is just to use Claude Code with Opus in Plan mode, have it write a plan, and ask "What clarifying questions do you have for me" and it always prompts me to answer very good questions.
rylan-talerico•5mo ago
Super cool. Have been looking for something like this. Nice work!
wjsekfghks•5mo ago
thank you :) let us know how it feels
pjm331•5mo ago
Very cool! I’ve been building an internal tool at work that’s very similar but primarily focused on automatically triaging bugs and tech support issues, with MCP tools to query logs, search for errors in bugsnag, query the db etc. also using linear for issue tracking. They’ve been launching some cool stuff for agent integrations.

And sorry I’m a light mode fan

wjsekfghks•5mo ago
Nice, are you building a linear app? I saw their recent post about integrating cursor, devin, etc into their platform.

And, light mode? I'm sorry, we can't be friends anymore

pjm331•5mo ago
yup was building it as a linear agent https://linear.app/developers/agents
_1tem•5mo ago
I've been planning to build something like this for a while now (just for myself). Love the planning workflow, will likely steal that idea.

But code review is more than just reviewing diffs. I need to test the code by actually building and running it. How does that critical step fit into this workflow? If the async runner stops after it finishes writing code, do I then need to download the PR to my machine, install dependencies, etc. to test it? That's a major flow blocker for me and defeats the entire purpose of such a tool.

I was planning to build always-on devcontainers on a baremetal server. So after Claude Code does its thing, I have a live, running version of my app to test alongside the diffs. Sort of like Netlify/Vercel branch deploys, but with a full stack container.

Claude Code also works far better in an agentic loop when it can self-heal by running tests, executing one-off terminal commands, tailing logs, and querying the database. I need to do this anyway. For me, a mobile async coding workflow needs to have a container running with a mobile-friendly SSH terminal, database viewer, logs viewer, lightweight editor with live preview, and a test runner. Diffs just don't cut it for me.

I do believe that before 2025 is over we will achieve the dream of doing real software engineering on mobile. I was planning to build it myself anyway.

wjsekfghks•5mo ago
Completely agreed. The first version of our app was on mobile. We implemented preview deployment for frontend testing (and we were going to work on backend integration testing next). But yeah, without a reliable way to test and verify changes, I agree it's not a complete solution. We are going to work on that next.

FYI, our initial app demo: https://youtu.be/WzFP3799K2Y?feature=shared

arjun810•5mo ago
We had exactly the same desire and built it as well, with a nice mobile UI and live app previews. Would love to get your feedback — let me know how to contact you if you’re curious.
_1tem•5mo ago
Would love to check it out! interpreterslog-removesuffix@protonmail.ch
bjtitus•5mo ago
I've really been enjoying the mobile coding agent workflow with [Omnara](https://omnara.com/). I'd love to try this as well with a locally hosted version.
wjsekfghks•5mo ago
you can also give our mobile app a try :)
furyofantares•5mo ago
> Traditional AI coding tools

I love this phrase :)

wjsekfghks•5mo ago
:)
chris_st•5mo ago
thumbs down
wjsekfghks•5mo ago
:( light mode gangs
brainless•5mo ago
I love your video, it is very clear. I am building in this space so I am very curious and happy about all the products coming in to fill the current tooling gap. What is not clear to me is how Async works: is it all local, or a mix of local and cloud? I see "executes in cloud" but then also a downloadable app.

I see a lot of information on API endpoints in the README. Perhaps that is not so critical to getting started. Perhaps a `Getting Started` section would help, explaining what the desktop app is and what runs in the cloud.

I have been hosting online sessions for Claude Code. I have 100+ guests for my session this Friday. And after "vibe coding" full time for a few months, I am building https://github.com/brainless/nocodo. It is not ready for actual use and I first want to use it to build itself (well the core of it to build the rest of the parts).

wjsekfghks•5mo ago
To clarify, most of the execution (writing code or researching) happens in the cloud, and we use Firestore as the DB to store tasks. The app (both desktop and mobile) is just an interface to talk to those backends. We are currently working to see if we can bring the majority of the execution local. Hope this makes it a bit clearer.
brainless•5mo ago
Thanks for the clarification.

Does this mean that my codebase gets cloned somewhere? Is it your compute or mine, with my cloud provider API keys?

wjsekfghks•5mo ago
If you use the app as is, it will be cloned to our server. If you choose to host your own server, it will be on yours
brainless•5mo ago
OK thanks.
Terretta•5mo ago
> Show HN: Async – Claude code and Linear and GitHub PRs in one opinionated tool

Sadly, this seems inaccurate. Appears to be Claude Code and GitHub PRs, but not Linear.

It should be Linear, since Linear does an extraordinary number of useful things beyond "issue list".

Since it seems to have nothing to do with Linear, I'm surprised the headline says it's those three things, by trademarked brand name.

Speaking of tracking tasks:

> Tracking sucks. I use Apple Notes with bullet points to track tasks...

Claude Code seems very good at its own "org mode", using .md file outlines and checklists to organize and track progress as well as keep an easy-to-leverage record.

It is also able to sync the outline level items with GitHub issues, then plan and maintain checklists under them as it works, including the checklist items in commits and PRs, and even help you commit that roadmap outline snapshot at the same time to have progress through time as diffs...
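For reference, the markdown task-list format described above is easy to work with mechanically: a GitHub-style checklist is just `- [ ]` / `- [x]` lines. This sketch counts progress in such a file; the helper name is made up.

```python
# Count done vs. total items in a GitHub-style markdown task list,
# the same format Claude Code tends to maintain in roadmap .md files.
import re

def checklist_progress(md: str) -> tuple[int, int]:
    """Return (done, total) for `- [ ]` / `- [x]` task-list items."""
    items = re.findall(r"^\s*[-*] \[([ xX])\] ", md, flags=re.MULTILINE)
    done = sum(1 for mark in items if mark.lower() == "x")
    return done, len(items)

roadmap = """\
- [x] import GitHub issues
- [x] stacked-diff review
- [ ] local execution
"""
print(checklist_progress(roadmap))  # (2, 3)
```

Because the format is line-oriented, progress snapshots committed alongside the code diff cleanly over time, which is what makes the "roadmap as diffs" trick work.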

motoxpro•5mo ago
I had the opposite reaction, in that it seemed exactly like those three things. Sans the years of development on each but the idea seems really clear. "Build Linear, but the person who does the work is Claude and the 'state' of the work (code) is git (GitHub)"
reilly3000•5mo ago
Thumbs up for dark mode. I really want to love this but I can’t get over the idea of paying GCP to have cloud run clone my repo over and over again every time I interact with Async. I’m still going to try it, but I think I’d rather rent a VM and just have it be faster. This is coming from someone who deals with big fat monorepos, so maybe it’s not that bad for the average user.
wjsekfghks•5mo ago
we are trying to run execution locally using local claude code
basic_banana•5mo ago
It is pretty similar to async-code, but I guess Async is more like Linear, while async-code is more like a Codex cloud for Claude Code.

https://github.com/ObservedObserver/async-code

wjsekfghks•5mo ago
cool, let me check it out
7thpixel•5mo ago
If you'd like some feedback, I ran this through my algo and analyzed what is unclear and potentially risky for bringing this forward:

What's unclear:

Exact AI coding capabilities, free tier limitations, and the revenue model beyond the hosted version.

Risky Assumptions:

- Users will find the UX/UI sufficiently intuitive for immediate adoption.

- Companies will see ROI in reduced dev time/cost when using Async.

- The AI agent can clarify requirements accurately on a variety of tasks.

Hope this helps!

wjsekfghks•5mo ago
Thank you so much for the feedback! Could you elaborate more on the point about UX/UI? We are trying to see what we can do to make onboarding and issuing the first task as easy as possible. I'd love to hear your insights on that.
selinkocalar•5mo ago
The hard part with tools like this is maintaining context across different data models. GitHub PRs, Linear tickets, and LLM conversations all have different information architectures. Are you doing any semantic linking between related items, or just surface-level aggregation?
wjsekfghks•5mo ago
We use one model across all three surfaces called Task. Those surfaces do have different information architectures, i.e. Linear tickets have correspondence or comments, LLM conversations have chat history, and code reviews have diffs and comments. But at the end of the day, all that information is used to output the correct code, and that's exactly what we do.
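A unified Task record along the lines described could look roughly like this; the field names are illustrative guesses, not Async's actual schema.

```python
# Hypothetical sketch of a single Task model folding three surfaces
# (tracker ticket, LLM conversation, PR review) into one record whose
# combined contents become prompt context for code generation.
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    ticket_comments: list[str] = field(default_factory=list)   # Linear-style ticket
    chat_history: list[str] = field(default_factory=list)      # LLM conversation
    review_threads: list[str] = field(default_factory=list)    # PR diff comments

    def context(self) -> list[str]:
        # All three surfaces collapse into one stream of context,
        # which is the "surface-level aggregation" the question asks about.
        return self.ticket_comments + self.chat_history + self.review_threads

t = Task(
    "fix truncation",
    ticket_comments=["repro on Safari"],
    chat_history=["user: add tooltip on hover"],
    review_threads=["nit: drop the comment"],
)
print(len(t.context()))  # 3
```

Semantic linking, as opposed to this concatenation, would mean modeling which ticket comment a given review thread answers; the flat list above deliberately does not attempt that.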
saaammm•5mo ago
been working on building exactly this lol