The Case for Software Craftsmanship in the Era of Vibes

https://zed.dev/blog/software-craftsmanship-in-the-era-of-vibes
155•Bogdanp•21h ago

Comments

hintymad•20h ago
I think the key question is what new or unique problems we need to solve. Unique problems demand unique designs and implementations, which can't be produced by vibe coding and instead require software craftsmanship. That said, AI-assisted coding should still improve our productivity, thereby reducing the average number of engineers per project. Hopefully Jevons paradox will come into play, but that's really not a technical problem but a business one.
majormajor•18h ago
Can you run out of new and unique problems without running out of economic innovation? If there are no new business processes, no new features or products that require new complexity, how is there room for new entrants? You either get a race to the bottom, commodity prices for commodity goods; or market and regulatory capture and artificially high prices and barriers to entry.

Where is the room for "vibe coder" - or really any of the business people that would hypothetically be using agents to "just write their own code"?

Teams and tools will certainly look different. But people crave novelty, which is a big part of why I've never been on an engineering team that was staffed enough to come even close to the output the business owners and product teams desired. So I don't think we'll actually hit the end of the road there.

--

In a true-super-human-AGI world things look very different, but then there's one question that matters more than any others: where is the money coming from to build and maintain the machines? The AGI agent that can replace an engineer can actually replace an entire company - why go to a vibe-coding entrepreneur and let them take a cut when you can just explain what you need to an agent yourself? The agent will be smarter than that entrepreneur, definitionally.

hintymad•3h ago
> Can you run out of new and unique problems without running out of economic innovation?

Probably not, but the numbers may vary. Case in point: we have a booming chip industry now, but the demand for EE graduates is still far less than that for CS graduates, even in the current tough market conditions.

polishdude20•20h ago
Honestly I feel AI has helped me be a better craftsman. I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info. This would sometimes just not be worth adding that little feature or nice to have.

Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently. It gives me more time to spend figuring out the best thing for the user. More various ideas for how to tackle a problem.

I don't let it code willy nilly. I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.

tmsh•20h ago
+1. With greater power comes greater responsibility.

Power doesn't mean lack of craft, just different things to craft. E.g. we don't hand-roll assembly anymore.

Still have to know when you need to dive deep and how to approach that.

ChrisMarshallNY•19h ago
I still write my own code. What I have, are a couple of LLM subscriptions (Perplexity and ChatGPT), that I regularly consult. I now ask them even the “silliest” questions.

The last couple of days, I tried having ChatGPT basically write an app for me, but it didn’t really turn out so well.

I look forward to being able to train an agent on my personal technique and style, as well as my quality bar, and have it write a lot of my code.

Not there, yet, but I could see it happening.

mulmen•19h ago
I really don’t think training an agent on “your style” is the future. We’re more adaptable than the agents.

I think programming is a job people don’t need to do anymore and anyone who called themselves a software engineer is now a manager of agents. Jira is the interface. Define the requirements.

Writing your own code will still be a thing. We’ll even call those people hackers. But it will be a hobby.

bluefirebrand•18h ago
Unfortunately, management seems to be agreeing with you

I hope engineers have the sense to set really high prices when we're asked to fix the broken code that your "managers of agents" don't know how to fix

mulmen•3h ago
The open question is who is better at defining requirements for AI agents. Managers or engineers?

AdieuToLogic•17h ago
> I think programming is a job people don’t need to do anymore and anyone who called themselves a software engineer is now a manager of agents. Jira is the interface. Define the requirements.

That Grand Canyon sized logical leap quoted ignores a vital concept: understanding.

To "define the requirements" with sufficient precision for any given feature/defect/etc. requires a degree of formality not present in prose. This already exists and is in use presently:

  Programming languages.  See 4GL's[0] and 5GL's[1].
0 - https://en.wikipedia.org/wiki/Fourth-generation_programming_...

1 - https://en.wikipedia.org/wiki/Fifth-generation_programming_l...
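A small illustration of that degree of formality (the password rule is my own example, not from the thread): state a requirement precisely enough to be unambiguous and you have, in effect, already written a program.

```python
# Prose: "passwords must be at least 12 characters and contain a digit".
# The unambiguous version of that requirement is already code:
def password_ok(pw: str) -> bool:
    return len(pw) >= 12 and any(c.isdigit() for c in pw)

assert password_ok("correcthorse1battery")
assert not password_ok("short1")             # too short
assert not password_ok("nodigitsinthisone")  # no digit
```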

majormajor•18h ago
The question is: can an LLM actually power a true "agent" or can it just create a pretty decent simulation of one? When your tools are a bigger context window and a better prompt, are there some nails that are out of your capacity to hit?

We have made LLMs that need far less "prompt engineering" to give you something pretty-decent than they did 2 years ago. It makes them WAY more useful as tools.

But then you hit the wall like you mention, or like another poster on this thread saw: "Of course, it's not perfect. For example, it gave me some authentication code that just didn’t work." This happens to me basically daily. And then I give it the error and ask it to modify. And then that doesn't work. And I give it this error. And it suggests the previous failed attempt again.

It's often still 90% of the way there, though, so the tool is pretty valuable.

But is "training on your personal quality bar" achievable? Is there enough high-quality training data in the world that it can recognize as high-quality vs. low? Are the fundamentals of the prediction machine the right ones to be able to understand at generation time "this is not the right approach for this problem", given the huge variety and complexity in so many different programming languages and libraries?

TBD. But I'm a skeptic about that because I've seen "output from a given prompt" improve a ton in 2 years, but I haven't seen that same level of improvement for "output after getting a really really good prompt and some refinement instructions". I have to babysit it less, so I actually use it day to day way more, but it hits the wall in the same sort of very similar, unsurprising ways. (It's harder to describe than that - it's a "know it when you see it" thing. "Ah, yes, there's a subtlety it doesn't know how to get past because there are so many wrinkles in a particular OAuth2 implementation, but it was so rare a case in the docs and examples that it's just looping on things that aren't working.")

(The personification of these things really fucks up the discussion. For instance, when someone tells me "no, it was probably just too lazy to figure out the right way" or "it got tired of the conversation." The chosen user-interface of the people making these tools really messes with people's perceptions of them. E.g. if LLM-suggested code that is presented as an in-line autocomplete by Copilot is wrong, people tend to be more like "ah, Copilot's not always that great, it got it wrong" but if someone asks a chatbot instead then they're much more likely to personify the outcome.)

MangoCoffee•19h ago
I'm a .NET developer working on backend for the last 7 years. I used to work with WinForms, WebForms, and ASP.NET MVC 5. Lately, I've been wanting to get back into frontend development using Blazor.

I have GitHub Copilot for $10, and I've been "vibe coding" with it. I asked the AI to give me an example of a combo box, and it gave me something to start with. I used a Bootstrap table and asked the AI to help with pagination, and it provided workable code. Of course, to a seasoned frontend developer what I'm doing is simple, but I haven't worked on the frontend in so long that vibing with AI has been a good experience.

Of course, it's not perfect. For example, it gave me some authentication code that just didn’t work. Still, I’ve found AI to be like a “smart” Google: it doesn’t judge me or tell me my question is a duplicate like Stack Overflow does.

AdieuToLogic•18h ago
> I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info.

Research is how people learn. Or to learn requires research. Either way one wants to phrase it, the result is the same.

> Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently.

LLM's are statistical text (token) generators and highly sensitive to the prompt given. More importantly in this context, the effort once expended by a person doing research is at best an exercise in prompt refinement (if the person understands the problem context) or at worst an outsourcing of understanding (if they do not).

> I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.

Again, LLM algorithms strictly output statistically generated text derived from the prompt given.

LLM's do not "explain", as that implies understanding.

They do not "understand how it would go about tackling a problem", as that is a form of anthropomorphization.

Caveat emptor.

polishdude20•17h ago
We can go on all day about how an LLM doesn't explain and doesn't actually think. In the end, though, I've found myself able to do things better and faster, especially in a codebase I have no experience in, with developers who aren't able to help me in the moment given our timezone differences.

AdieuToLogic•17h ago
> We can go on all day about how an LLM doesn't explain and doesn't actually think.

This is an important concept IMHO. Maintaining a clear understanding of what a tool is useful for and what it is not allows for appropriate use.

> In the end though, I've found myself being able to do things better and faster especially given a codebase I have no experience in ...

Here, I have to reference what I wrote before:

  Research is how people learn. Or to learn requires
  research. Either way one wants to phrase it, the
  result is the same.
If you don't mind me asking two philosophical questions:

How can one be confident altering a codebase one has no experience with will become "better" without understanding it?

Knowing an LLM produces the most statistically relevant response to any given query, which is orthogonal to the concepts of true/false/right/wrong/etc., and also knowing one has no experience with a codebase, how can one be confident whatever the LLM responds with is relevant/correct/useful?

polishdude20•14h ago
The thing about code is you can run it to confirm it does what you want it to do and doesn't do what you don't want it to do. Sprinkle in some software experience in there as well.
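That workflow can be as simple as pinning down expected behavior before trusting the generated code; `paginate` here is a made-up stand-in for an LLM-suggested helper, not anything from the thread:

```python
# Hypothetical LLM-suggested helper: page numbers start at 1.
def paginate(items, page, per_page):
    start = (page - 1) * per_page
    return items[start:start + per_page]

# Run it to confirm it does what you want it to do...
assert paginate(list(range(10)), 1, 3) == [0, 1, 2]
assert paginate(list(range(10)), 4, 3) == [9]
# ...and doesn't do what you don't want it to do.
assert paginate(list(range(10)), 99, 3) == []
```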

farhanhubble•20h ago
I like the article. In fact just yesterday I quipped to someone about how the quality of AI output will be determined by the competence of its "operators".

I have always had a strong drive to produce awe-inspiring software. At every job I have had, I have strived for usability, aesthetics, performance, and reliability. I have maintained million-LoC codebases and never caused a major issue, while people around me kept creating more problems than they fixed. However, I do not recall a single word of encouragement, let alone praise. I do recall being lectured a few times for being fixated on perfectionism.

It took me 15 years to realize that consumers themselves do not care about quality. Until they do, the minority of engineers who strive for it are gonna waste their efforts.

Yes software is complex. Yes, you cannot compare software engineering to mechanical, electrical engineering or architecture. But do we deserve the absolute shit show that every app, website and device has become?

SebFender•19h ago
Exactly, and that's why we're here - against all odds you do your best to make things right - that's THE job.

fiddlerwoaroof•19h ago
I don’t think it’s true that “consumers don’t care about quality” but rather that their concern for quality doesn’t really manifest itself in those terms. Consumers care about critical tools being available when they need them and businesses often have a hard time situating feature requests in the broader desire for utility and stability (in part because these are things only noticed when things are really bad).

Part of my growth as a developer was connected with realizing that a lot of the issues with quality resulted from miscommunication between “the business” and engineers advocating for quality.

3dsnano•19h ago
agree that we (users, humans, customers) all are desperately reaching for something steady, well designed, rugged.

something that people thought about for longer than whatever the deadline they had to work on it. something that radiates the care and human compassion they put into their work.

we all want and need this quality. it is harder to pull off these days for a lot of dumb reasons. i wish we all cared enough to not make it so mysterious, and that we could all rally around it, celebrate it, hold it to high regard.

bitwize•17h ago
Consumers don't care about your code, only what it does for them. If your crappy software provides its intended service quickly, accurately, and reliably enough, your customers consider it a win. Any further improvements on those axes are just gravy -- expensive gravy.
farhanhubble•16h ago
No. Consumers don't care about what they get. Maybe a small minority of tech-savvy ones do, but others don't, or at least they don't know how to demand better software, because software comes with no promises and guarantees.
fhd2•15h ago
Consumers certainly get annoyed if their software is hard to use, has errors, is slow or down. I've run many user tests with arbitrarily selected test subjects, and this part is universal. This is what good software development can fix, or ideally avoid in the first place.

One kinda heartbreaking thing I observed was that older users especially wouldn't get mad at the software, but at themselves. They thought everybody else was using it just fine and that they were somehow not smart enough. That motivated me to go the extra mile whenever I can.

MangoCoffee•14h ago
>they don't know how to demand better software

This is such a nonsensical statement. Consumers will always find something else if what they're currently using isn't up to what they paid for.

People can only tolerate so much. My current employer just switched from one HR software to another.

farhanhubble•13h ago
What's nonsensical here? How many of us can rid ourselves of the terrible Wi-Fi router the ISP forced upon us? The sluggish operating system on the TV? The stupid navigation system installed by the car manufacturer?

A small minority can, but in general the software you can use is determined by IT, procurement, and leadership, and by the corporations controlling operating systems and hardware.

bigstrat2003•3h ago
That is a very different argument from your original one. You are correct that consumers don't always have much choice about the quality of products, and they have to take something crappy or go without entirely. But that's not the same as "consumers don't care about what they get". They care, they simply are unable to get something better.
insin•19h ago
The "Emerging" diagram on the Agentic Engineering page [1] linked from this post is the first time I've seen the exact thing I think of when people start frothing about fleets of agents driving fleets of agents while just assuming that multiplying all those <= 1.0s together will somehow automatically produce 1.0.

[1] https://zed.dev/agentic-engineering
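The parent's arithmetic point can be sketched in a few lines (the 95% per-step success rate is an assumed number, purely for illustration):

```python
# If each agent in a chain succeeds independently with probability p,
# the whole chain succeeds with probability p**n. Chaining sub-1.0
# factors only ever decays reliability; it never climbs toward 1.0.
def chain_reliability(p: float, n: int) -> float:
    return p ** n

for n in (1, 5, 10, 20):
    print(f"{n:>2} steps at 95% each -> {chain_reliability(0.95, n):.0%}")
```

Twenty chained 95%-reliable steps already leave the pipeline succeeding barely a third of the time.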

mattnewton•19h ago
I want to like zed. I keep trying it.

Ultimately, though, none of the rendering speed improvements or collaboration ideas make a difference to me. Then there are major feature gaps, like not being able to open Jupyter notebooks or view images when connected over SSH, that keep bringing me back to VS Code, where everything just works out of the box or with an extension. The developers have great craftsmanship, but they are also tasked with reimplementing a whole ecosystem.

And ultimately I think native performance just keeps being less and less of a draw: agents write most of the code and I spend my time reviewing it, and for that web tools are more than adequate.

I want craftsmanship to be important, as someone who takes pride in their work and likes nice tools. I just haven't seen evidence of it being worth it over "good enough and iterate fast" engineering, and I don't think this vision of engineering will win out over it.

jeremyjh•18h ago
I think Zed has the potential to become a good editor some day, and it might be the only editor with that potential. But yes, right now VS Code is more acceptable.

citizenpaul•15h ago
>don’t think this vision of engineering will win out over “good enough and fast”

Oh, I'm sure of it. However, that won't be good enough for the MBAs. My prediction is that AI slopware is going to drive the average quality of software back down to the buggy, infuriating, 1000-manual-workarounds software of the late '90s and early '00s.

Then the pendulum will swing back.

srhtftw•14h ago
> I want to like zed. I keep trying it. ... Ultimately though none of the rendering speed improvements or collaboration ideas make a difference to me.

I feel this way as well. I've tried to incorporate Zed into my workflow a few times but I keep getting blocked by 30 years of experience with Emacs. E.g. I need C-x 2 to split my window. I need C-x C-b to show me all my buffers. I need a dired mode that can behave like any ordinary buffer. Etc. etc.

Sadly the list is quite long and while Zed offers many nice things, none are as essential to me as these.

uludag•7h ago
> I just haven’t seen evidence of it being worth it over “good-enough and iterate fast” engineering.

Aren't things bound to come to a point where quality is a defining feature of certain products? Take video game software, for example. The amount of polish and quality that goes into top-selling games is insane. The video game market is so saturated that the things that come out on top must have a high level of polish and quality put into them.

Another thought experiment: imagine thousands of startups creating task managers. I can't imagine that the products with strong engineering fundamentals wouldn't end up on top. To drive the point even further: despite the rise of AI, I don't think I've seen even one example of a longstanding company being disrupted by an "AI native", "agentic first" company.

ramesh31•18h ago
Instead of asking how we can ship more code or how we can ship better code, why not ask "how can AI give me a better life"? Machines are supposed to make our lives easier. If I can output the same quality at a faster rate, why can't I have that time back for my own life? This is the direction my view on agentic coding is evolving. I don't want to be under the pressure of doubling my productivity for an employer. I want to capture that gain for myself.
aiforecastthway•18h ago
> If I can output the same quality at a faster rate of speed, why can't I have that time back to my own life now?

We have done a terrible job at allocating the net benefits of globalization. We will (continue to) do a terrible job at allocating the net benefits of productivity improvements.

The "why" is hard to answer. But one thing is clear: the United States has a dominant economic ideology, and the name of that ideology is not hardworker-ism.

bluefirebrand•15h ago
> I don't want to be under the pressure of doubling my productivity for an employer. I want to capture that gain for myself.

Unfortunately this will never happen. I don't think it has ever happened in the history of capital

When a machine can double your productivity, capital buys the machine for you and fires one of your coworkers

Or the machine makes it so less skilled people can do the same work you do. But since they're less skilled, they command less pay. So capital pays them less, and devalues your work

Already seeing this with AI. My employer is demanding all engineers start using LLM tools, citing "an easy 20% boost". Not sure where they got the number, but whatever.

But is everyone going to get a 20% raise? No. Not a chance. Capital will always try to capture 100% of any productivity gains for themselves

balamatom•14h ago
>"how can AI give me a better life"?

that's an easy one: by destroying itself, and taking social media and smartphones with it

dgb23•12h ago
You need to own parts of the company you work at for that to happen.

jumploops•16h ago
One of the more exciting aspects of LLM-aided development for me is the potential for high quality software from much smaller teams.

Historically engineering teams have had to balance their backlog and technical debt, limiting what new features/functionality was even possible (in a reasonable timeframe).

If you squint at the existing landscape (Claude Code, o3, codex, etc.) you can start to envision a new quality bar for software.

Not only will software used by millions get better, but the world of software for 10s or 1000s of users can now actually be _good_, with much less effort.

Sure we’ll still have the railroad tycoons[0] of the old world, but the new world is so so vast!

[0] https://www.reddit.com/r/todayilearned/s/zfUX8StpXM

SoftTalker•16h ago
If Sturgeon’s Law holds (and I see no reason it wouldn’t) we won’t get better software, we’ll get more shit, faster.

jumploops•13h ago
10% of a large pie is more than 10% of a small pie (:

sudahtigabulan•12h ago
The same applies to the crap 90% ;)

ecb_penguin•16h ago
> One of the more exciting aspects of LLM-aided development for me is the potential for high quality software

There is no evidence to suggest this is true.

LLMs are trained on poor code quality and as a result, output poor code quality.

In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.

LLMs are great, but the potential for high quality software is not one of the selling points.

NitpickLawyer•15h ago
> LLMs are trained on poor code quality and as a result, output poor code quality.

This is an already outdated take. Modern LLMs use synthetic data, and coding specifically uses generate -> verify loops. Recent stuff like context7 also help guide the LLMs towards using modern libs, even if they are outside the training cut-off.

> In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.

This is reminiscent of "AI will never do x, because it doesn't do x now" of the gpt-3.5 era. Oh, look, it's so cute that it can output something that looks like python, but it will never get programming. And yet here we are.

There's nothing special about security. Everything that works for coding / devops / agentic loops will work for security as well. If anything, the absolute bottom line will rise with LLM-assisted stacks. We'll get "smarter" Wapitis / Metasploits, agentic autonomous scanners, and verifiers. Instead of SIEMs missing 80% of attacks [0] while also inundating monitoring consoles with unwanted alerts, you'll get verified reports where a Codex/Claude/Jules will actually test and provide a PoC for each report it makes.

I think we've seen this "oh, but it can't do this so it's useless" plenty of times in the past 2 years. And each and every time we got newer, better versions. Security is nothing special.

[0] - https://www.darkreading.com/cybersecurity-operations/siems-m...

WD-42•14h ago
I agree with most of your argument but I do think security is somewhat special.

You can vibe code an entire mess and it'll still "work". We've seen this already. As good as LLMs are they still write overly verbose, sloppy and often inefficient code. But if it works, most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.

Honestly, I think the security world is primed for its most productive years.

NitpickLawyer•14h ago
> most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.

I agree. But what I'm trying to say is that we'll soon have automated agents that look for vulnerabilities, in agentic flows, ready to be plugged into ci/cd pipelines.

> Honestly I think the security world is primed for it's most productive years.

In the short term, I agree. In the long run I think a lot of it will be automated. Smart fuzzers, agentic vuln scanning, etc. My intuition is that we'll soon see "GAN"-like pipelines with red vs. blue agents trained in parallel.

namaria•12h ago
If the solution to all problems with attaching GPU farms to our workflows is to attach more GPU farms to our workflows, I can't see how this isn't just an elaborate scam.

ecb_penguin•8h ago
> I agree. But what I'm trying to say is that we'll soon have automated agents that look for vulnerabilities, in agentic flows, ready to be plugged into ci/cd pipelines.

We already have that, and we can see it doesn't perform very well.

An agent that has no reasoning ability will not generate better code than what it was trained on.

https://garymarcus.substack.com/p/llms-dont-do-formal-reason...

SupremumLimit•16h ago
The case the article tries to make doesn’t stack up for me.

What you get when it becomes easier to generate code/applications is a whole lot more code and a whole lot more noise to deal with. Sure, some of it is going to be well crafted – but a lot of it will not be.

It’s like the mobile app stores. Once these new platforms became available, everyone had a go at building an app. A small portion of them are great examples of craftsmanship – but there is an ocean of badly designed, badly implemented, trivial, and copycat apps out there as well. And once you have this type of abundance, it creates a whole new class of problems for the users but potentially also developers.

The other thing is, it really doesn’t align with the priorities of most companies. I’m extremely skeptical that any of them will suddenly go: “Right, enough of cutting corners and tech debt, we can really sort that out with AI.”

No, instead they will simply direct extra capacity towards new features, new products, and trying to get more market share. Complexity will spiral, all the cut corners and tech debt will still be there, and the end result will be that things will be even further down the hole.

agumonkey•14h ago
I don't know if it's been documented or studied, but the availability argument seems to be a fallacy. It just opens the floodgates, and you get 90% small-effort attempts and not much more. The old world, where the barrier was higher, guaranteed that only interesting things would happen.
dgb23•12h ago
Trivially, fewer interesting things happen when the barrier is to some degree incidental.

I think the more pressing issues are costs: opportunity cost, sunk cost, signal to noise ratio.

scelerat•5h ago
It seems there's some kind of corollary to what you're saying in when (in the US) we went from three major television networks to many cable networks or, later, when streaming video platforms began to proliferate and take hold -- YouTube, Netflix, etc. The barriers to entry dropped for creators, and the market fragmented. There is still quality creative content out there, some of it as good as or better than ever. But finding it, and finding people to share the experience of watching it with you, is harder.

Same could be said of traditional desktop software development and the advent of web apps I suppose.

I guess I'm not that worried, other than being worried about personally finding myself in a technological or cultural eddy.

HPsquared•12h ago
It's Carlyle's idea of "the cheap and nasty" in the age of software.
guappa•12h ago
I wrote this many years ago, when I moved from Symbian (with very few apps available) to Android, which had a lot of apps but required spending several hours to find a half-decent one.
namaria•12h ago
Increasing energy input to a closed system increases entropy.

Why on earth do people expect that attaching GPU farms to render characters into their codebase will not only not increase its entropy but actually lower it?

roxolotl•10h ago
Unless I’m totally misreading the article, they are saying what you’re saying, and then using it as an argument for why we should care about quality. They aren’t saying quality will necessarily happen. They are saying that because there will be a whole lot more noise, it will be important to focus on quality, and those who don’t will drown in complexity.
lcnPylGDnU4H9OF•5h ago
> No, instead they will

The article is making a normative argument. It is not saying what people "will" do but instead what they "should" do.

vkaku•15h ago
The article is basically: we sell tools that help you vibe, but we had to do craft, hard and long.

That ship has sailed, because everyone wants to sell what you're selling. Craftsmanship is a blah blah ... maybe, but it's being sacrificed for profit, and the profit happens because everyone wants to vibe. ;)

I'm feeling that vibe brah.

jeisc•14h ago
We can imagine the electricity failing, but AI can't; it would be like us imagining the sun failing to shine.
jauntywundrkind•14h ago
Particularly for anything where the target audience is also highly technical: you can't just throw consumeristic shovelware at those folks. They are going to notice!

It's challenging & fun to build things where your audience is gonna form their own technical impressions of how well crafted your thing is! It's fun for engineering to matter!

dismalaf•12h ago
As long as I've followed the software industry, there have always been people saying "blah blah craftsmanship blah". The fact is, people don't care about craftsmanship. You can't see the code of most of the software you use, and even when you can (FOSS), who's actually looking?

Also, the few times I've used "handcrafted" software, it's been underwhelming in terms of functionality, apart from a few FOSS programs that have thousands of contributors.

Life is short and hardware is relatively cheap.

lionkor•12h ago
Everything you use every day depends on a few core libraries and services that are truly crafted by some competent people. Take your browser and all the various fantastic libraries it uses to decode and display content to you. How about cryptography libraries like OpenSSL? Linux itself, I would consider the result of craftsmanship, same with most GNU software. You're really, really, really missing a lot here with your evaluation.

You're right, people don't care about craftsmanship, until their vibe-coded TLS implementation causes their government to track them down and have them executed. Then, suddenly, it matters.

People don't "care" about material science either. Without it, we'd be screwed.

dismalaf•12h ago
Yes, I did mention that there are a few handcrafted FOSS projects that work because they have lots of contributors, or at least lots of eyes (I guess I should also have mentioned corporate maintainers). Foundational libraries, the Linux kernel, stuff like that.

But on the whole, a lot of software (mostly user-facing) is pretty bloated and filled with bugs, and no one cares.

dgb23•12h ago
The people who care about craftsmanship, trust that you care.

When friends and family ask me about which software to buy/use/install, I recommend them something that I think is well crafted. They ask me, because they know I care.

dismalaf•12h ago
So what do you recommend?
dzonga•12h ago
We've come a long way from the days of software that does one thing well. Back in the days of desktop apps, craftsmanship was high: VLC, various Mac OS apps, etc.

Once the web took over, people took quality for granted, since you can easily ship a fix (well, those never happen). Then came move fast & break things. Software has been shoddy since. Yeah, you can make excellent software on web technologies, so the web isn't the problem.

It's US.

neoden•11h ago
I am so much out of sync with this idea that a text editor must be blazingly fast. The latency of processing my input was never an issue for me in text editors unless it was an obvious misbehaviour due to a bug or something. And 120Hz text rendering is a thing that I couldn't care less about.
ale•10h ago
In software like VSCode, the milliseconds stack up fast if you're constantly switching between projects and/or doing any kind of remote development.
sandos•9h ago
I've seen people, and have even been one myself, for whom screen latency could limit raw text-processing speed. We were well under 25 years old at the time, though, using very low-level languages, and after 30 I never really felt that the rendering was too slow for me.

With larger projects and more modern code this is simply not an issue, and hasn't been for decades, for me at least.

If you are a 10x developer coding assembly, sure?

godshatter•6h ago
Crafting code and designing applications is the fun part of what I do. I do it whether or not someone pays me to. Why would I want to hand that over to an AI app? If I took up painting as a hobby, why would I want a robot to paint for me?
sokoloff•4h ago
You might want a robot to prepare paints for you and put them into convenient containers, to prepare and trim brush hairs and set them in a ferrule on a stick, to prepare canvas or other substrates, to cut frame pieces at precise and repeatable angles, or to wash and clean up.

That would leave you to the creative part of painting, while removing some of the more mechanical and less creative parts.