The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
1•Brajeshwar•1m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
2•Brajeshwar•1m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
1•Brajeshwar•1m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•5m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
1•righthand•8m ago•0 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•9m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via ANTLR

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•9m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•10m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•14m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•19m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•23m ago•1 comment

How I grow my X presence

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•25m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•25m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
4•okaywriting•32m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•35m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•35m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•36m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•37m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•37m ago•0 comments

Expertise, AI, and the Work of the Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•38m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•38m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•43m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•43m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•44m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•44m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•52m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•52m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
2•surprisetalk•55m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•55m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•55m ago•0 comments

Ask HN: In-house or outsourced data annotation? (2025)

3•yogoism•8mo ago
While big tech often outsources data annotation to firms like Scale AI, Turing, and Mercor, other companies, such as Tesla and Google, run in-house teams.

Which approach do you think is better for AI and robotics development, and how will this trend evolve?

Please share your data annotation insights and experiences.

Comments

PaulShin•8mo ago
Interesting question. As the founder of an AI collaboration platform (Markhub), I live and breathe this problem every day. My take is that the best approach isn't a simple choice between in-house and outsourced, but a hybrid model focused on the quality and context of the data.

For foundational capabilities (e.g., text summarization), we start with powerful base models like Gemini and fine-tune them. But the real magic happens with our proprietary data, and for that, outsourcing is not an option.

Here's our approach: Our own product, Markhub, is our primary annotation tool.

When our early users give feedback—like circling a button on a screenshot and commenting "This color is wrong"—they are, in effect, creating a perfect piece of labeled data: [Image] + [Area of Interest] + [Instruction].

We call this "Collaborative Annotation" or "In-Workflow Labeling." The data quality is incredibly high because it's generated by domain experts (our users) as a natural byproduct of their daily work, full of real-world context. This is something an external annotation firm can never replicate.
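
To make that concrete, one of those feedback events boils down to a record shaped roughly like this. A minimal Python sketch; the field names are illustrative, not our actual schema:

    from dataclasses import dataclass

    # One hypothetical "in-workflow" labeled example, as described above.
    # Field names are illustrative, not an actual Markhub schema.
    @dataclass
    class InWorkflowLabel:
        image_ref: str    # the screenshot the user commented on
        region: tuple     # (x, y, width, height) of the area they circled
        instruction: str  # the user's comment, verbatim
        author_role: str  # workflow context, e.g. "designer"

    example = InWorkflowLabel(
        image_ref="screenshots/checkout-v2.png",
        region=(412, 96, 120, 40),
        instruction="This color is wrong",
        author_role="designer",
    )

Every field falls out of the collaboration flow for free; nobody ever sits down to "label data."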

So, to answer your question on how the trend will evolve: I believe the future isn't a binary choice between in-house and outsourced. The next wave will be tools that allow teams to create their own high-context training data simply by doing their work. The annotation process will become invisible, seamlessly integrated into the collaboration flow itself.

yogoism•8mo ago
That's a great insight, Paul. As someone who has been researching the data annotation space, your perspective really resonates.

I completely agree that the first-hand, contextual information you get from actual users is something an external firm can never replicate. It seems like the most effective and efficient way to spin the data flywheel at high velocity.

This leads me to a question I've been struggling to understand: If this approach is so powerful, why do you think even companies with the vast resources of Big Tech still rely on what seems to be a riskier path—using external human evaluators—instead of fully building this feedback loop in-house?

I feel like I'm missing a key piece of the puzzle. I would be very interested to hear if you have any thoughts on this.

PaulShin•8mo ago
That's the million-dollar question, and you've hit on the key puzzle piece. I believe the answer lies in distinguishing between two different stages of AI development: "Foundational Model Training" vs. "Product-Specific Fine-Tuning."

1. Foundational Model Training (The Big Tech Approach): To build a base model like GPT-4 or Gemini, you need an unimaginable amount of general, brute-force data. You need millions of images labeled "cat" or "dog," and billions of text examples. For this massive scale of generic data, using large, external teams of human evaluators is often the only feasible way. It's about quantity and breadth.

2. Product-Specific Fine-Tuning (The Markhub Approach): However, once you have that foundational model, the goal changes. To make an AI truly useful for a specific product, you no longer need a million generic data points. You need a thousand high-context, high-quality data points that are specific to your workflow.

For example, an external evaluator can label a button as "a button." But only a real designer using Markhub can provide the critical feedback, "This button's corner radius (8px) is inconsistent with our design system (6px)." This is the kind of nuanced, proprietary data that creates real product value, and it can only be generated "in workflow."
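
As a sketch of what I mean (the chat-style JSONL layout below is a common fine-tuning convention, not necessarily what any one provider requires), one such piece of feedback could become a single training record like this:

    import json

    # Hypothetical: turn one piece of in-workflow design feedback into a
    # chat-format fine-tuning record, using the common {"messages": [...]}
    # JSONL layout. Element names and values are illustrative.
    observed = {"element": "button#checkout", "corner_radius_px": 8}
    design_system = {"corner_radius_px": 6}

    record = {
        "messages": [
            {"role": "system",
             "content": "You review UI elements against the team's design system."},
            {"role": "user",
             "content": json.dumps({"observed": observed,
                                    "design_system": design_system})},
            {"role": "assistant",
             "content": "This button's corner radius (8px) is inconsistent "
                        "with our design system (6px)."},
        ]
    }

    with open("finetune.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

A few thousand records like that, each grounded in a decision a domain expert actually made, do more for product behavior than millions of generic labels.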

So, I think Big Tech isn't wrong; they're just solving a different problem (building the foundational engine). We, as application-layer startups, have the unique opportunity to build on top of that engine and solve the "last mile" problem by capturing the high-context data that truly makes a product smart.

You're not missing a puzzle piece at all; you've just identified the difference between building the engine and building the race car.

yogoism•8mo ago
Thanks so much for that clear explanation. It really made me realize that while companies like Scale AI can thrive during the hype of the foundational-model race, it'll likely get tougher for them down the road.

If you don’t mind me asking, as someone on the front lines of AI product development, what challenges have you found to be even more difficult than data annotation?

I’d really appreciate any insights you can share.