frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

New hire fixed a problem so fast, their boss left to become a yoga instructor

https://www.theregister.com/2026/02/06/on_call/
1•Brajeshwar•1m ago•0 comments

Four horsemen of the AI-pocalypse line up capex bigger than Israel's GDP

https://www.theregister.com/2026/02/06/ai_capex_plans/
1•Brajeshwar•1m ago•0 comments

OpenClaw v2026.2.6

https://github.com/openclaw/openclaw/releases/tag/v2026.2.6
1•salkahfi•2m ago•0 comments

A free Dynamic QR Code generator (no expiring links)

https://free-dynamic-qr-generator.com/
1•nookeshkarri7•2m ago•1 comments

nextTick but for React.js

https://suhaotian.github.io/use-next-tick/
1•jeremy_su•3m ago•0 comments

Show HN: I Built an AI-Powered Pull Request Review Tool

https://github.com/HighGarden-Studio/HighReview
1•highgarden•4m ago•0 comments

Git-am applies commit message diffs

https://lore.kernel.org/git/bcqvh7ahjjgzpgxwnr4kh3hfkksfruf54refyry3ha7qk7dldf@fij5calmscvm/
1•rkta•6m ago•0 comments

ClawEmail: 1min setup for OpenClaw agents with Gmail, Docs

https://clawemail.com
1•aleks5678•13m ago•1 comments

UnAutomating the Economy: More Labor but at What Cost?

https://www.greshm.org/blog/unautomating-the-economy/
1•Suncho•20m ago•1 comments

Show HN: Gettorr – Stream magnet links in the browser via WebRTC (no install)

https://gettorr.com/
1•BenaouidateMed•21m ago•0 comments

Statin drugs safer than previously thought

https://www.semafor.com/article/02/06/2026/statin-drugs-safer-than-previously-thought
1•stareatgoats•23m ago•0 comments

Handy when you just want to distract yourself for a moment

https://d6.h5go.life/
1•TrendSpotterPro•24m ago•0 comments

More States Are Taking Aim at a Controversial Early Reading Method

https://www.edweek.org/teaching-learning/more-states-are-taking-aim-at-a-controversial-early-read...
1•lelanthran•26m ago•0 comments

AI will not save developer productivity

https://www.infoworld.com/article/4125409/ai-will-not-save-developer-productivity.html
1•indentit•31m ago•0 comments

How I do and don't use agents

https://twitter.com/jessfraz/status/2019975917863661760
1•tosh•37m ago•0 comments

BTDUex Safe? The Back End Withdrawal Anomalies

1•aoijfoqfw•40m ago•0 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
5•michaelchicory•42m ago•1 comments

Show HN: Ensemble – macOS App to Manage Claude Code Skills, MCPs, and Claude.md

https://github.com/O0000-code/Ensemble
1•IO0oI•45m ago•1 comments

PR to support XMPP channels in OpenClaw

https://github.com/openclaw/openclaw/pull/9741
1•mickael•46m ago•0 comments

Twenty: A Modern Alternative to Salesforce

https://github.com/twentyhq/twenty
1•tosh•47m ago•0 comments

Raspberry Pi: More memory-driven price rises

https://www.raspberrypi.com/news/more-memory-driven-price-rises/
2•calcifer•53m ago•0 comments

Level Up Your Gaming

https://d4.h5go.life/
1•LinkLens•57m ago•1 comments

Di.day is a movement to encourage people to ditch Big Tech

https://itsfoss.com/news/di-day-celebration/
3•MilnerRoute•58m ago•0 comments

Show HN: AI generated personal affirmations playing when your phone is locked

https://MyAffirmations.Guru
4•alaserm•59m ago•3 comments

Show HN: GTM MCP Server- Let AI Manage Your Google Tag Manager Containers

https://github.com/paolobietolini/gtm-mcp-server
1•paolobietolini•1h ago•0 comments

Launch of X (Twitter) API Pay-per-Use Pricing

https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476
1•thinkingemote•1h ago•0 comments

Facebook seemingly randomly bans tons of users

https://old.reddit.com/r/facebookdisabledme/
1•dirteater_•1h ago•1 comments

Global Bird Count Event

https://www.birdcount.org/
1•downboots•1h ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
2•soheilpro•1h ago•0 comments

Jon Stewart – One of My Favorite People – What Now? with Trevor Noah Podcast [video]

https://www.youtube.com/watch?v=44uC12g9ZVk
2•consumer451•1h ago•0 comments
Open in hackernews

Show HN: WTMF Beta – Your AI bestie that understand

1•ishqdehlvi•6mo ago
We're excited to announce the beta launch of WTMF (What's The Matter, Friend?), an AI companion built to offer real emotional presence and understanding, unlike anything else out there.

What is WTMF? In a world saturated with AI tools designed for productivity, we built WTMF to be something different: an emotionally available AI best friend. It's for those 2 AM spirals, the "I don't know why I feel this way" moments, or simply when you need to vent without judgment or unsolicited advice.

Why WTMF is different:

Truly understands: Our AI learns your communication style and responds with genuine empathy, remembering your past conversations. It's not about "botsplaining" or toxic positivity; it's about being present and listening.

Pick your Vibe: Choose how your AI responds – soft, sassy, chaotic, or zen. Your conversation, your rules.

Voice Conversations: When typing isn't enough, connect via natural voice calls that feel like talking to a real friend.

AI Journaling & Mood Tracking: WTMF helps you track your emotions, spot patterns, and journal your thoughts, remembering so you don't have to.

Private & Secure: Your conversations are yours. We prioritize your privacy and emotional safety.

We're building AI that actually stays, offering a unique blend of emotional intelligence and conversational authenticity. It's AI that feels human, not clinical.

We're currently in beta and actively inviting early adopters to help us shape the future of emotionally intelligent AI.

Try Beta & Join the waitlist: https://wtmf.ai

We're eager to hear your thoughts and feedback!

Comments

mutant•6mo ago
While this pitch tugs at the heartstrings, as someone in IT/engineering, I'd pump the brakes hard. Building "emotionally available AI" isn't a prompt-hacking weekend project—it's a high-stakes alignment nightmare that well-meaning devs without deep ML safety chops are likely to botch. Here's a tight technical rundown of the red flags, sans fluff:

1. *Alignment Brittleness*: No details on fine-tuning or RLHF (e.g., using datasets like those from HELM or custom therapy corpora). Relying on prompts to "prime" a base LLM (probably GPT-like) is like duct-taping a guidance system— it fails under stress. Emotional contexts amplify risks: the model could hallucinate escalatory responses (e.g., reinforcing spirals via latent biases in pre-training data), bypassing any superficial steering. Without provable techniques like constitutional AI or red-teaming for edge cases (suicidal ideation, trauma triggers), it's unaligned output waiting to happen.

2. *Inference-Time Vulnerabilities*: Prompts alone can't enforce robust safeguards. LLMs exhibit emergent behaviors in long contexts—think jailbreaks or mode collapse where the AI "remembers" and amplifies negative patterns in journaling/mood tracking. No mention of layers like chain-of-thought with safety classifiers (inspired by Anthropic/DeepMind) means potential for toxic empathy: sassy mode goes rogue, zen turns dismissive. In voice mode, real-time audio processing adds latency-induced errors, eroding that "human feel" into something unpredictably harmful.

3. *Expertise and Oversight Gaps*: This screams "enthusiast project" without creds in AI ethics/safety (e.g., from OpenAI's Superalignment teams). Privacy claims? Fine, but "secure" journaling risks data leakage via model inversion attacks if not using differential privacy. Emotional AI demands HIPAA-level rigor, not beta vibes—missteps here could cause real psych harm, like entrenching isolation over guiding to human help.

Bottom line: Clever prompts don't solve alignment; they mask it. If you're beta-testing, demand transparency on training data, safety evals, and fallback to licensed therapists. This isn't ready for 2 AM crises—it's playing therapist without the degree. Proceed with extreme caution.

jaggs•6mo ago
I would add to that the double jeopardy of being created by a team based in India. Cultural differences are fundamentally important in any environment, including AI.