
Ask HN: Has AI breathed new life into Semantic (web) Technologies?

4•rottyguy•9mo ago
The Knowledge Graph Conference is currently happening in NYC and there's a bit of talk around KG assisting AI with various things like truth grounding (RAG) and Agentic development. Curious if anyone is seeing more discussions of Semantic Technologies in their orbit these days?
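For readers unfamiliar with the "truth grounding" idea mentioned above, here is a minimal, purely illustrative sketch (the entity names and triples are made up, not from the conference): facts are retrieved from a knowledge graph first, and only those facts are handed to the LLM, so the model can rephrase but not invent.

```python
# Minimal sketch of KG-grounded retrieval (hypothetical data).
# A real system would use an actual triple store and SPARQL;
# a list of (subject, predicate, object) tuples shows the shape.

TRIPLES = [
    ("KGC_2025", "locatedIn", "NYC"),
    ("KGC_2025", "topic", "knowledge_graphs"),
    ("knowledge_graphs", "usedFor", "truth_grounding"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# These facts would be injected into the LLM prompt as grounding context:
facts = query(subject="KGC_2025")
context = "\n".join(f"{s} {p} {o}" for s, p, o in facts)
```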

Comments

dtagames•9mo ago
Text, whether "semantic" or not, just gets tokenized and stored as weighted numbers in a model. It loses all its "semantic-ness."

So I would say the opposite is true. AI tools are removing the need for special declarative wrappers around a lot of text. For example, there's no need to surround a headline with <H1> when you can ask a GPT to "get the headlines from all these articles."

There are a couple of kinds of wrapping that do help when working with LLMs: markdown in prompts, and JSON and XML in system instructions for MCP. But RAG refers to the non-LLM end of the process (getting data from files or a database), so the style of training data doesn't directly affect how that works.

bjourne•9mo ago
Quite the contrary. The idea behind the semantic web was to make content machine-readable by manually annotating it. For instance, this comment would have fields like "author", "date", "language", and maybe "ip" to make it interpretable to the machines. You don't need that because the machines can figure it out without the annotations. A run-of-the-mill computer vision model can tag an image much better and much more accurately than most humans.
rasmus1610•9mo ago
For the creation part of a KG I do understand this. But for inference and knowledge organisation, there is still value in graph-based semantic structures, imo.
evanjrowley•9mo ago
Multiple comments here state that AI eliminates the need for Semantic web tech, and I can understand that perspective, but it's also a narrow way of interpreting the question. While LLMs produce great results without relying on semantic relationships, they could also be used to build semantic relationships. There's probably applications there worth exploring. For example, if a general-purpose LLM can build a semantic dataset for solving specialized problems, then might that approach be more efficient versus training a specialized LLM?
dtagames•9mo ago
They do build those relationships, by being trained on large, general data sets rather than specialized ones. There's no need for special markup to achieve that.
evanjrowley•9mo ago
Right. So I think the potential value here is not using a special markup to enable LLMs, but leveraging LLMs to build the special markup so that it can be applied towards other uses.
dtagames•9mo ago
I guess. If you really want something that takes some text and wraps it in markup, you can ask for that already with ChatGPT. It's easy to say, "look up the top-selling DSLR cameras and make me a list of their features in JSON."

I can't see anyone making a special tool just for that as a general use case. Everybody wanting such output would want their own format for the markup, and we're right back to XML.

Today, it's simpler to use RAG, which is just using the LLM to figure out the "English" part, then using regular procedural code to put things into boxes: data storage, markdown, etc. If you really want consistent output, you can't have the LLM generate it. You would need to RAG that output.
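The split described above (the LLM handles the "English" part, plain code enforces the structure) can be sketched as follows. The `llm_call` stub, the field names, and the camera data are all hypothetical stand-ins, not a real API:

```python
import json

def llm_call(prompt):
    """Stub standing in for a real LLM API call (hypothetical)."""
    # A real call would return free-form model output; a canned
    # response lets the procedural half of the pattern be shown.
    return '[{"model": "X100", "sensor": "full-frame", "price_usd": 1999}]'

REQUIRED_FIELDS = {"model", "sensor", "price_usd"}

def extract_cameras(prompt):
    """LLM does the language work; plain code checks the schema."""
    raw = llm_call(prompt)
    try:
        items = json.loads(raw)      # structure is verified here,
    except json.JSONDecodeError:     # never trusted from the model
        return []
    return [it for it in items
            if isinstance(it, dict) and REQUIRED_FIELDS <= it.keys()]

cameras = extract_cameras("List top-selling DSLR cameras as JSON.")
```

The point of the sketch is that the consistency guarantee lives entirely in the procedural validation step, not in the model's output.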

evanjrowley•9mo ago
The markup in this case would be some type of semantic web format, like JSON-LD[0] or OWL[1], or some database that can process SPARQL[2] queries. The goal being the "inverse" of something like OWL2Vec[3].

EDIT1: A few weeks ago, a team of Brazilian researchers published a report about using ChatGPT to enhance an agriculture-focused OWL dataset[4].

EDIT2: In addition to training LLMs on ontologies, it looks like Palantir is using ontologies as guardrails to prevent LLM hallucinations[5]. Makes sense.

[0] https://json-ld.org/

[1] https://www.w3.org/TR/owl2-syntax/

[2] https://www.w3.org/TR/sparql11-query/

[3] https://arxiv.org/abs/2009.14654

[4] https://arxiv.org/abs/2504.18651

[5] https://blog.palantir.com/reducing-hallucinations-with-the-o...
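To give a concrete flavour of what such LLM-built markup might look like, here is a hand-written JSON-LD sketch of a comment like this one. The vocabulary is schema.org, but the specific values are illustrative, and the stdlib-only round-trip check is just to show the document is plain JSON:

```python
import json

# Hypothetical JSON-LD annotation an LLM could be asked to emit for
# a forum comment: @context maps the short keys onto schema.org terms.
comment_jsonld = {
    "@context": "https://schema.org",
    "@type": "Comment",
    "author": {"@type": "Person", "name": "evanjrowley"},
    "dateCreated": "2025-05-07",
    "inLanguage": "en",
    "text": "The markup in this case would be some type of semantic web format...",
}

serialized = json.dumps(comment_jsonld, indent=2)
parsed = json.loads(serialized)  # round-trips: still the same graph
```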

moonhive•9mo ago
Yes, AI—especially LLMs—have definitely rekindled interest in semantic technologies, but more from a pragmatic angle than the original Semantic Web vision. Knowledge graphs, ontologies, and structured data are now seen as valuable tools for improving things like grounding, retrieval-augmented generation (RAG), and reasoning in agents. The difference is that instead of expecting the whole web to be semantically annotated (which didn’t scale well), now organizations are building domain-specific graphs to augment AI performance. It’s like the Semantic Web finally found its killer app—just not in the way it was initially imagined.