
The new X API pricing must be a joke

https://developer.x.com/
1•danver0•52s ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•1m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•6m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
2•dragandj•7m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•8m ago•1 comments

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•9m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•10m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•11m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•13m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•13m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•14m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•14m ago•0 comments

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•15m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•17m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•18m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•19m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•19m ago•1 comments

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•20m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
2•paulpauper•23m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•23m ago•1 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•24m ago•1 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•24m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•27m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•27m ago•0 comments

The age of a treacherous, falling dollar

https://www.economist.com/leaders/2026/02/05/the-age-of-a-treacherous-falling-dollar
2•stopbulying•27m ago•0 comments

Ask HN: AI Generated Diagrams

1•voidhorse•30m ago•0 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
7•josephcsible•30m ago•1 comments

Show HN: A delightful Mac app to vibe code beautiful iOS apps

https://milq.ai/hacker-news
6•jdjuwadi•33m ago•1 comments

Show HN: Gemini Station – A local Chrome extension to organize AI chats

https://github.com/rajeshkumarblr/gemini_station
1•rajeshkumar_dev•33m ago•0 comments

Welfare states build financial markets through social policy design

https://theloop.ecpr.eu/its-not-finance-its-your-pensions/
2•kome•37m ago•0 comments

The Evolution of Software Development: From Machine Code to AI Orchestration

https://guptadeepak.com/the-evolution-of-software-development-from-machine-code-to-ai-orchestration/
14•guptadeepak•8mo ago

Comments

guptadeepak•8mo ago
I've been building software for over two decades, from debugging assembly code in India to now running AI companies. The pace of abstraction in our field continues to accelerate in ways that fundamentally change what it means to be a developer.

Major tech companies are already generating 25-30% of their code through AI. At GrackerAI and LogicBalls, we're experiencing this shift firsthand. What previously took weeks can now be prototyped in hours.

Three key insights from this transformation:

Architecture becomes paramount: AI can generate functional code, but designing robust distributed systems and making trade-offs between performance, cost, and maintainability remains distinctly human.

Quality assurance complexity scales: As more code becomes AI-generated, ensuring security, maintainability, and efficiency requires deeper expertise. The review process becomes more critical than initial coding.

Human-AI collaboration evolves: We're moving from imperative programming (telling computers how) to declarative (describing what) to natural language goal specification.

The most interesting challenge: while AI excels at pattern matching, true innovation—creating entirely new paradigms—remains human.

For those integrating AI into development workflows: what unexpected quality challenges have you discovered between AI-generated code and existing systems?

Deepak

proc0•8mo ago
> while AI excels at pattern matching, true innovation—creating entirely new paradigms—remains human.

If humans remain in the loop, the promise of AI is broken. The alternative is that AI is still narrow AI and we're just applying it to natural language and parts of software engineering.

However, the idea that AI is a revolution implies it will take over absolutely everything. If AI keeps improving in the same direction, the prediction is that it will eventually be doing the innovation and all of the creative and architectural work as well.

Saying there is a middle ground is basically admitting AI is not good enough and we are not on the track that will produce AGI (which is what I think so far).

Terretta•8mo ago
> If humans remain in the loop, the promise of AI is broken.

In any craft, if assistants remain in the loop, the promise of mastery is broken.

Or is it?

In the contemporary art world, artists and their workshops enjoy a remarkably symbiotic relationship. ... It can be difficult, however, from our contemporary perspective to reconcile the group mentality of workshop practice with the pervasive characterization of individual artistic talent. This enduring belief in the singular “genius” of artists ... is a construct slowly being dismantled through scholarly probing of the origins and functions of the renaissance workshop.

The modern engineer would do well to model after Raphael:

Soon after he arrived in Rome, Raphael established a vibrant network of artists* who were able to channel his “brand” and thereby meet (or at the very least, attempt to meet) the extraordinary demand for his work.

https://www.khanacademy.org/humanities/renaissance-reformati...

* read: agents

proc0•8mo ago
The difference here is that there wouldn't even be a need for Raphael, at least if all the projections are right as to where AI is going.

Replacing humans with other humans is one thing. Replacing humans with machines is on a completely different level. Anyone who says we will work alongside AI has not thought this through. In contrast to the industrial revolution, where machines did things humans are not capable of, i.e. lifting heavy things, bending and shaping steel, mixing tons of cement, etc., AI is taking over what makes humans unique as a species: our cognitive abilities.

Of course, all of this hinges on whether AI will reach this level of reasoning and cognition, which right now is not certain. LLMs did scale up to have impressive and surprising abilities, but it's not clear whether more scaling will produce actually intelligent agents that can correct themselves and produce reliable output. Not to mention the compute cost, which is orders of magnitude more than the human brain's and will be a huge limitation.

kriro•8mo ago
When I was teaching my first AI 101 class (it must have been around 2010), I ended the first lecture with a reading assignment of "Man–Computer Symbiosis" by J.C.R. Licklider and asked the students to discuss whether the future will be all AI or AI assistants. I still recommend this paper today, and I personally think that if there's a path to AGI, there will be a longish period of symbiosis first, not just an overnight paradigm shift.

skydhash•8mo ago
> We're moving from imperative programming (telling computers how) to declarative (describing what) to natural language goal specification.

We had that a long time ago with Prolog (53 years ago), which is just a formal notation for logical propositions. Lambda calculus isn't imperative either; you're describing relations between inputs and outputs.

The more complex the project, the more detailed a spec needs to be, and the more efficient code is compared to natural language at getting the details across.
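The imperative-to-declarative progression discussed above can be sketched in a few lines. This is a toy illustration (not from the article), using an assumed example task of summing the squares of even numbers:

```python
# Same task at two levels of abstraction: sum the squares of the even numbers.

def sum_even_squares_imperative(numbers):
    """Imperative: tell the computer *how*, step by step."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_declarative(numbers):
    """Declarative: describe *what* is wanted; iteration is implicit."""
    return sum(n * n for n in numbers if n % 2 == 0)

# The third level in the progression would be a natural-language goal
# handed to an AI assistant: "sum the squares of the even numbers".

print(sum_even_squares_imperative([1, 2, 3, 4]))   # 20
print(sum_even_squares_declarative([1, 2, 3, 4]))  # 20
```

The point about detail still holds: both versions pin down edge cases (empty input, odd numbers) that a one-sentence natural-language spec leaves implicit.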

userbinator•8mo ago
> we're experiencing this shift firsthand

I read that as "we're experiencing this shit firsthand"... and I'd agree with that assessment. Software has gone quantity-over-quality and AI is only going to accelerate that decline.

proc0•8mo ago
One recurring pattern I see in current AI predictions is a contradictory pair of ideas: AI agents are really intelligent, but AI also needs to be checked carefully. Either they're intelligent enough not to need humans, or we are changing the definition of intelligence.

For example, in the article:

> We're approaching an inflection point where the barrier to creating software will be primarily conceptual rather than technical.

And then...

> Developers will need to audit AI-generated code for vulnerabilities, implement security best practices, and ensure compliance with increasingly complex regulations.

Again, if AI is going to be so good at coding, why would it not be able to implement best practices, and generate perfectly compliant code with a few prompts? I think it's interesting that the promise of AI clearly implies that it will do everything humans can, yet I keep reading how engineers need to still check what the AI is doing. It's like self-driving cars that still need you to have your eyes on the road and hands on the wheel. Seems like the implied promise of the technology cannot quite reach its destination.

If we use the metaphor of the bird and the airplane, we're basically expecting airplanes to fly like birds: take off from the ground, flap their wings. Airplanes are much faster than birds, but they need a runway for takeoff and lots of fuel. Similarly, current LLMs can synthesize huge amounts of text, summarize it, etc., but they have cognitive limitations that are crucial to solving problems the way humans do.

I think there is something beyond this metaphor though. I think the brain is tapping into some algorithm from which mathematical reasoning emerges. This algorithm has side-effects that look like human reasoning, and it's also the missing ingredient to make machines properly communicate and collaborate with humans (and also allow them to be properly agentic).

skydhash•8mo ago
Are they tools? Then we can apply the usual reliability measure: how often do they help us do something more efficiently instead of hindering us?

Are they assistants, a step beyond tools? Then after training they can provide more contextual help, and we can offload menial tasks to them while we do the more abstract thinking?

So far there's no answer. They promise us assistants while delivering something that's worse than any tooling. It's an impressive tech, but not that useful on its own.

proc0•8mo ago
I think assistant is a good description. There are several sci-fi universes where these two kinds of AI exist: one like what we have now (which is already incredible, really), and another that has sentience and doesn't even consider itself artificial, just consciousness in a different substrate. I think the game Mass Effect has this, Warhammer 40K as well, and some others, but that's the distinction we need: assistant smart algorithms vs. human-like artificial minds.
keiferski•8mo ago
I’m not a programmer so take my metaphor with a grain of salt.

I am, however, a writer, and so your comment made me think of the difference between writing and editing. It’s perfectly possible (and common, even) for someone to be an amazing writer but a terrible editor, and vice versa. The writer focuses on production, and typically even thinking about editing during that process is detrimental to the quality of the work.

The AI-human coding situation seems relatively analogous to this.

proc0•8mo ago
In that case you would be using it as a tool, which of course has already made a huge impact and will continue to do so. There is, however, a promise being made by the leading AI companies that we will in fact have the equivalent of human digital minds. The implication is that both the editor and the author can be AIs. That's the direction Google, OpenAI, etc. are at least trying to go in.

Part of the discussion is whether this is hype or not, and I'm just saying that at the moment it seems like hype because they're clearly more like tools right now, and it's unclear if we will make the leap such that they do become digital minds without a few more breakthroughs.

BoiledCabbage•8mo ago
> a kind of contradicting idea of AI being really intelligent agents, but also AI needs to be checked carefully. Either they're intelligent enough to not need humans or we are changing the definitions of intelligence.

Why is everyone so flummoxed by this? Your coworker is intelligent, but still needs code reviews.

Why is it that whenever people think of artificial intelligence, the only options they see are dumb-as-a-rock pure parrot or omniscient god? There is no intelligence on earth that falls into either category, but those are the only two options people can use to visualize AI, and if it's not one, to them it must be the other.

Intelligence can exist without being a perfect god. I feel like people have watched too much sci-fi.

sarchertech•8mo ago
That's the thing, though: coworkers don't actually need code reviews.

Years ago, reviewing every code change before deployment wasn't a thing. Most of the time it was fine.

Today, it’s fairly rare for a code review to find a serious bug in a PR.

The majority of PRs from the majority of developers work fine. They do what they’re supposed to and they don’t bring production crashing down.

That’s not close to true for AI.

JambalayaJimbo•8mo ago
Rubber-stamp PRs have been the norm at every single place I have worked. No one has the time or mental energy to read others' code.

Also why would we not just get another AI to do code review? It would be significantly faster and cheaper than a human, if equivalent.

proc0•8mo ago
> Why is it than whenever people think of artificial intelligence the only options they see are dumb as a rock / pure parrot, or some omniscient god?

That's not a bad question. I think it's because of the implications of building an autonomous intelligent agent. If it's actually intelligent then it has to understand basic instructions and also follow simple logical reasoning to avoid pitfalls, consistently.

E.g., if a human artist is really good at generating art and I ask them to please create a character with the left foot forward, it's an easy and obvious task they can do. Currently AI is really good at many things, which is great, but it fails at simple things you would expect it to handle if it can do those other really hard things. This disconnect is what makes it unreliable, and therefore not as useful.

That said, who knows: we could be months away from a leap that gives them internal logical structures allowing them to actually reason from first principles, consistently, every time. In my opinion, this is the leap we need for all of the current hype to be real.

localghost3000•8mo ago
I’ve warmed to this tech a bit, but christ would I like to hear more takes from dudes who aren’t running a fucking AI company. It’s impossible to take anything they say as anything other than a god damn ad.

userbinator•8mo ago
s/Evolution/Devolution/g

visarga•8mo ago
I recognize some GPT-isms in this article. "Holistic Thinking" and "Ethical Considerations" being things humans are necessary for... it likes to say that a lot. I don't agree much; it seems like superficial chanting about LLM limitations.

What I consider to be things AI can't do without humans:

- responsibility and accountability, because we have bodies, we can be punished

- the very specific life experience we have which is essential for grounding AI, like imagine your experience on the job after a few years

- expressing our preferences, nobody can do that for us, we are supposed to ask and evaluate the outcome ourselves

- walking, touching, and accessing the world physically; we can implement and test ideas for real, and validate AI work concretely; AI without validation is just an ungrounded idea generator

- providing the opportunities for AI to generate value; we bring the problems, we collect the outcomes of AI work; AI can't create value on its own

So accountability, tacit experience, telos, validation, opportunities for value creation.