it was unpleasant.
And therein lies the risk: research labs may become wholly dependent on companies whose agendas are fundamentally commercial. In exchange for access to compute and frontier models, labs may cede control over data, methods, and IP—letting private firms quietly extract value from publicly funded research. What begins as partnership can end in capture.
Sell assets like government real estate to themselves at super-cheap rates, then set up as many dependencies as they can, so the government has to buy services from them because it has nowhere else to turn.
To give an example: this missile dome bullshit they are talking about building, which is a terrible idea for a bunch of reasons... but there are talks at the moment of having it run by a private company that will sell it as a subscription service. So in this scenario the US military can’t actually fire the missiles without the explicit permission of a private company.
This AI thing is the same scam.
If LLM/AI is critical to national security, then it should be funded solely via the Department of Defense budget, with no IP or copyright derivatives allowed.
This is the intention of tech transfer. To have private-sector entities commercialize the R&D.
What is the alternative? National labs and universities can't commercialize in the same way, including due to legal restrictions at the state and sometimes federal level.
As long as the process and tech transfer agreements are fair and transparent -- and not concentrated in say OpenAI or with underhanded kickbacks to government -- commercialization will benefit productive applications of AI. All the software we're using right now to communicate sits on top of previous, successful, federally-funded tech transfer efforts which were then commercialized. This is how the system works, how we got to this level.
I think that's the crux of the point made by the guy you're responding to. He does not believe it will be done fairly and transparently, because these AI corporations will have broad control over the technology.
Having been in this world though, I didn't see a reluctance in federal labs to work with capable entrepreneurs with companies at any level of scale. From startup to OpenAI to defense primes, they're open to all. So part of the challenge here is simply engaging capable entrepreneurs to go license tech from federal labs, and go create competitors for the greedy VC-funded or defense prime incumbents.
My reluctance is that when we talk about fraud, waste, and corruption in government, this is where it happens.
The DoD's budget isn't $1T because they are spending $900B on the troops. It's $1T because $900B of that ends up in the hands of the likes of Lockheed Martin and Raytheon to build equipment we don't need.
I frankly do not trust "entrepreneurs" to not be greedy pigs willing to 100x the cost of anything and everything. There are nearly no checks in place to stop that from happening.
You can hope that a defense company is doing the right things in terms of supply chain attacks, but that's a pretty lucrative corner to cut. They'd not even need to cut it all the time to reap benefits.
The only other alternative is frequent audits of the defense company which is expensive and wouldn't necessarily solve the problem.
Reasonably, there should be a two-way exchange? It might be okay for companies to piggyback on research funds if that also means more research insight enters public knowledge.
There’s zero acknowledgment or appreciation of public infra and research.
They already are. Who provides their computers and operating systems? Who provides their HR software? Who provides their expensive lab equipment?
Companies are not in some separate realm. They are how our society produces goods and services, including the most essential ones.
I'm kind of disappointed that their dashboard has been moved or taken offline or something for the past few years. https://b2510207.smushcdn.com/2510207/wp-content/uploads/202... is what it used to look like.
That said, I'm not very confident such a situation would happen in reality. I'm not confident current industry leaders can see past a quarter, and nearly certain they can't see past 4. Current behavior already indicates that they are unwilling to maximize their own profits. A rising tide lifts all ships, but many will forgo the benefit this gives them to set out for new and greater riches, and instead can only envy the other ships rising with them. It can be easy to lose sight of what you have if you are too busy looking at others.
[0] Simplified example illustrated by Iterative Prisoner's Dilemma: https://www.youtube.com/watch?v=Ur3Vf_ibHD0
[0.1] Can explain more if needed but I don't think this is hard to understand.
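For the impatient, here's a minimal Python sketch of the dynamic in [0], using the standard textbook payoffs and the two classic toy strategies:

    # Standard payoffs: both cooperate -> 3 each, both defect -> 1 each,
    # lone defector -> 5, lone cooperator -> 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strat_a, strat_b, rounds=100):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    tit_for_tat = lambda opp: opp[-1] if opp else "C"  # cooperate first, then mirror
    always_defect = lambda opp: "D"

    print(play(tit_for_tat, tit_for_tat))    # (300, 300)
    print(play(always_defect, tit_for_tat))  # (104, 99)

Defection "wins" the head-to-head but ends up with a third of the cooperative total, which is the rising-tide point in miniature.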
I want to interrogate AI-optimist types because, even if AI is completely safe and harmless, I literally see only downsides.
Is your perception that living in theorized extreme comfort correlates to "reward"?
It's mostly because the actual stated and actualized goal of real AI is clearly bad.
It's like if you approached me and said "I'm going to erase meaning from your life. You will want for nothing. The entire paradigm of reality will be changed: your entire evolutionary struggle, made meaningless. You will want for nothing: "be" nothing. Also, this might potentially kill you, or some rich person or, more likely, a nation-state could enslave you."
The actual stated goals seem negative to me. I'm not very interested in erasing and surpassing human achievement in every way. It seems inherently bad. I don't think that's an absurd perspective.
I think the disconnect here lies in the question "what is the purpose of life," and I don't think any reasonable answer to that is "be obscenely comfortable."
> The Lab's science and technology digital magazine presents the most significant research initiatives and accomplishments from national-security-related programs as well as projects that advance the frontiers of basic science. Our name is an homage to the Lab's historic role in the nation's service: During World War II, all that the outside world knew of the top-secret laboratory was the mailing address - P.O. Box 1663, Santa Fe, New Mexico.
https://researchlibrary.lanl.gov/about-the-library/publicati...
And discussed on HN: https://news.ycombinator.com/item?id=43765207
This does feel like a step change in the rate at which modern AI technologies and programs are being pushed out in their PR.
To the extent that further improvements to AI are either snake oil or just hard to monopolise, doing everything else first is of course the best idea.
Even though I'm more on the side of finding these things impressive, it's not at all clear to me that the people funding their development will be able to monopolise the return on the investment - https://en.wikipedia.org/wiki/Egg_of_Columbus etc.
Also: the way the biggest enthusiasts are talking about the sectoral growth and corresponding electrical power requirements… well, I agree with the maths for the power if I assume the growth, but they're economically unrealistic on the timescales they talk about, and that's despite renewables being the fastest-growing power sector in %-per-year terms, plausibly able to double global electrical production by the early 2030s.
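For anyone who wants to sanity-check the growth maths, the doubling times fall out of compound interest (the rates below are illustrative assumptions, not sourced figures):

    import math

    # Years to double at a constant annual growth rate r: ln(2) / ln(1 + r)
    for rate in (0.07, 0.10, 0.15):
        years = math.log(2) / math.log(1 + rate)
        print(f"{rate:.0%}/yr -> doubles in ~{years:.1f} years")
    # 7%/yr -> ~10.2 years, 10%/yr -> ~7.3 years, 15%/yr -> ~5.0 years

So an early-2030s doubling only pencils out if growth in the low double digits holds for the entire decade, which is exactly the assumption the enthusiasts are baking in.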
The major question is: at what point will unaddressed climate change nullify these economic gains and make anyone who worried about them feel silly in retrospect?
Or put another way, will we even have the chance to collectively enjoy the benefits of that work?
- To what extent AI will actually be helpful in solving the climate crisis?
- To what extent the power generation growth fueled by AI will be critical to solving the climate crisis, and conversely, how badly we'll be screwed without it?
"Degrowth" is not an option. It hasn't been for a long time now. We can't just "social solution" our way out this problem anymore.
Electricity + home heating + cars is not 100% of emissions, but cutting emissions in half means you double the time before reaching any given threshold. For many problems the last 10% is the most challenging, but we get 10x as long to solve it and 10x as long to implement mitigation strategies.
That’s what makes climate change critical: the next year is more important than what happens 10 or 20 years from now.
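The arithmetic behind that claim, with deliberately made-up numbers for the remaining carbon budget and current emission rate (a rough Python sketch, not sourced estimates):

    # Time to a fixed cumulative threshold = remaining budget / annual emissions.
    BUDGET_GT = 400  # hypothetical remaining carbon budget, GtCO2
    RATE_GT = 40     # hypothetical current emissions, GtCO2/yr

    for cut in (0.0, 0.5, 0.9):
        years = BUDGET_GT / (RATE_GT * (1 - cut))
        print(f"cut {cut:.0%}: ~{years:.0f} years to threshold")
    # cut 0%: ~10 years, cut 50%: ~20 years, cut 90%: ~100 years

Halving emissions doubles the runway, and cutting the first 90% buys 10x the time to crack the stubborn last 10%.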
Add heating for buildings + hot water + industrial processes that can use electricity instead of fossil fuels, alongside indirect effects like methane leaks from pipelines, and... drumroll...
We can get to ~50% reduction while saving money over using fossil fuels.
However, doing so requires ramping up electricity production and storage.
Too many AI accelerationists are treating these questions as foregone conclusions, which seems like an enormously dangerous bet without a clearer pathway to a truly beneficial outcome.
It may very well be that some form of AI (which form? hard to say; probably not LLMs) is part of the solution. It may just as well be that it is not. But when building software, the age-old advice to “start with the problem, not the solution” comes to mind.
The number of engineers I’ve worked with over the years (I’ve been one of them) who are convinced that the feature they’re building is the right thing to build, only to realize later that it doesn’t solve a problem people care about…is a very large number.
Regarding degrowth, I’m not advocating for it. With that said, that will be the unwanted result forced on us by the realities of the environment if we can’t put a lid on the climate issue.
This is not helpful. There are many reasons degrowth won't generally help humanity, but its benefits are particularly applicable to western nations and their diplomatic relations. Certainly many western nations can bear degrowth without significant loss in quality of life. The wealthy just gotta take a significant cut to their waistlines.
> We can't just "social solution" our way out this problem anymore.
This certainly seems to be the liberal solution. Short of evicting them from our society, what better choices do we have?
2. Inevitable AI winter
3. Keep running the plants, clean energy achieved, stop burning coal, global warming solved
The absolute dollar value might seem high because we're working with the budget of not just any country but the wealthiest one, but as a percentage it is quite low. You can certainly pull funds from other areas too, like the military, which also greatly benefits from such research endeavors.
Even if these were exclusively non-weapons and non-military based technologies being developed it'd be naive to not recognize that the best military defense is to stop a war before it happens. That comes through many avenues, but notably the development of new technologies, especially those that benefit people as a whole (e.g. medicine or see the Taiwan strategy). But even then, it would also be naive to think that the same technology couldn't be adapted to military uses. Anything can be a weapon if you use it wrong enough.
But note that we're also seeing a reduction in federal research funding. We're also seeing less fundamental research, and these types of problems need a strong pipeline through the classic TRL scale[0]. I think you'll find some of that discussion over in yesterday's thread about Bell Labs[1]. The pipeline doesn't work if you don't take risks, and it doesn't work if you rush. You need a fire under your ass, but too hot and you get burned. It's easy to be myopic in today's settings, and that's not a great idea for an organization that needs to think in terms of decades and centuries (i.e. government), as opposed to the next election cycle or next quarterly earnings report.
Mind you, we've done these things before: both the Space Race and the Manhattan Project. At the height of the Space Race, NASA's budget was over 4.41% of the federal budget[2]. I'm not sure what percent of the federal budget the Manhattan Project was, but it is very clear that this is A LOT cheaper than what actual war costs[3]. We're talking about less than a month of war costs. Remember, we spent over $750bn in Iraq[4]. The question is not whether we have the money, but what we want to spend it on. Personally I'd rather fund stuff like this than bomb people. Frankly, you can have your cake and eat it too, as it makes it cheaper to bomb people as well...
[0] https://en.wikipedia.org/wiki/Technology_readiness_level
[1] https://news.ycombinator.com/item?id=43957010
[2] https://en.wikipedia.org/wiki/Budget_of_NASA
[3] https://en.wikipedia.org/wiki/Manhattan_Project#Cost
[4] https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War
The people qualified to fix global warming aren't the same people qualified to work on ML.
And its corollary: something being in the news or social media means everyone else has stopped working on other problems and is now solely working on whatever that headline's words say.
I've worked with hundreds of Data Scientists and every one had the ability to work on different problem areas. And so they were working on models to optimise truck rollouts or to decide when to increase compute for complex jobs.
If as a society we placed an increased emphasis on efficiency and power consumption we would see a lot more models being used in those areas.
The problem is we don't want to do it.
At best it's a needle-in-a-haystack approach, and one that seems to toss out methodical, reasoned science in favor of a blunderbuss.
I don’t understand this line of reasoning - it’s like saying “you’re not allowed steam engines until you drain all of your mines”. It’s moralistic, rather than pragmatic.
I think it's safe to call it a rhetorical fallacy because its underlying premise implies humanity can somehow only focus on a very finite number of things at once, and that funds directed to 'your thing' would somehow otherwise be directed to 'their thing', which is even more absurd.
These scales use energy usage as the measure of progress, and you are citing that as proof that reducing energy use means reducing progress. That's circular: if progress is defined as energy use, then of course using less energy counts as less progress.
Our current use of electricity per capita, at least in the west, is ridiculously unsustainable if we aren't willing to keep abusing fossil fuels as an efficient store and source of energy.
I'm all for reducing our impact, but we have to stop kidding ourselves with the pipe dream that we just need the right source of energy to be able to grow our energy use per capita indefinitely.
What are some examples of 'hardly'?
I assume "AI" in contemporary articles, especially as it pertains to investments, means "Generative AI, especially or exclusively LLMs."
> Clearly computer networking is worthy of public investment, but given the capture of this administration by military industrial interests, how can we be sure that public networking funding isn't just handouts to the president's cronies?
Our country's tight relationship between government, military, academia, and industry has paid off repeatedly, even if it has some graft.
this whole "benchmarks" thing is laughable. I've been using Gemini all week to do code assist, review patches, etc. Such impressive text, lists of bullets, suggestions, etc., but then at the same time it makes tons of mistakes, which you then call it on, and it predictably is like "oh sorry! of course!" yes of COURSE. because all it does is guess what word is most likely to come after the previous word. is there a "benchmark" for "doesn't hallucinate made up BS?" because humans can do very well on such a benchmark.
In any case, if you want to take a stab at it, feel free to go ahead and start a company whereby you as the owner would assign the CEO responsibilities to an AI.
What stood out to me was how cautious they are. It’s not about letting AI make decisions, but about spotting potential issues earlier so humans can step in sooner. It’s not AI making the call; it’s more like helping the call happen earlier. And I really respect that approach.