My observation has been that smart people don't want this anymore, at least not within the context of an organization. If you give your employees this freedom, many will take advantage of it and do nothing.
Those who are productive, the smartest who thrive in radical freedom and autonomy, instead choose to work independently. After all, why wouldn't they? If they're the ones supplying the innovation, the equity is worth far more than a paycheck.
Unfortunately, that means innovation that requires a Bell Labs isn't as common. Fortunately, one person can now accomplish far more than a 1960s engineer could, and the frontier of innovation is much broader than it used to be.
I used to agree with the article's thesis but it's been nearly impossible to hire anyone who wants that freedom and autonomy (if you disagree, <username>@gmail.com). I think it's because those people have outgrown the need for an organization.
This was addressed in the article:
> Most founders and executives I know balk at this idea. After all, "what's stopping someone from just slacking off?" Kelly would contend that's the wrong question to ask. The right question is, "Why would you expect information theory from someone who needs a babysitter?"
Also this hilarious quote from Richard Hamming:
> "You would be surprised Hamming, how much you would know if you worked as hard as [Tukey] did that many years." I simply slunk out of the office!
I think an answer to that was a lot clearer in the 1960s, when going from idea to product was much harder.
What products could Shannon have made knowing only information theory? Or CSIRO knowing only that OFDM solved multipath? Did Bob Metcalfe make more money because everyone had Ethernet, or would he have if he'd licensed it much more exclusively?
It's very hard for a single fundamental result to be a durable competitive advantage compared to wider licensing on nicer terms. That's particularly true when much else goes into the product.
Sure, licensing information theory is a bit of a stretch, but Shannon literally built one of the first artificial intelligence machines [1]. A 2025 Shannon would've been totally fine building his own company.
If you see these idols through their singular achievements, then yes, of course it's hard to imagine them outside the context of a lab, but rarely are these innovators one-trick ponies.
By the way, Bob Metcalfe did indeed start his own company and became pretty successful in doing so.
[1] https://en.wikipedia.org/wiki/Claude_Shannon#Artificial_Inte...
I do think there is a lot less low-hanging fruit now, which makes the comparison apples and oranges. Google is like Bell Labs today, and what did they invent? LLMs? Compare that to information theory, the transistor, Unix, etc.
Yep, agree with this statement. That's exactly what I think happened.
Quite the opposite: it's always a mad rush toward profit at any cost.
When an employer or occupation provides a fully respectable career for life, that's your job, and it's fully respectable to have that be your life's work from that point onward. Plus, information theory doesn't represent even 1% of what Shannon had to offer anyway :)
Why would someone who is not motivated by financial gain care?
> I was motivated more by curiosity. I was never motivated by the desire for money, financial gain. I wasn't trying to do something big so that I could get a bigger salary.
— Claude Shannon
Interesting, this is something that I'd love to do! I'm already planning on pursuing custom chip design for molecular simulation, but I don't really want to handle the business side of things. I'd much rather work in a paid lab than get rich and sell it off. Plus, you can do so much more with a team vs being independent.
I was also homeschooled though (unschooling and TJEd philosophy), so I've always been picking my own projects. Sometimes I wonder if the lack of generalist researchers comes down to education (another thing I'd love to pursue).
People want to do good work and people want to feel like they're doing good work. If you create an environment where they feel trusted and safe, they will rise to your expectations.
I had way more trouble with people working too hard but with misaligned ideas of what "good" meant—and stepping on each other's toes—than with anyone slacking off. It's easy to work around somebody who is merely ineffectual!
And, sure, a bunch of stuff people tried did not work out. But the things that did more than made up for it. Programming and quantitative modeling are inherently high-leverage activities; unless leadership manages out all the leverage in the name of predictability, the hits are going to more than make up for the flubs.
I am well aware that people in companies can work effectively on teams and that people rise to the occasion in that context. If it didn't work, companies wouldn't hire. But that's not what the article is about.
I don't think it's a strictly better environment but in many dimensions going solo is now better than any company. I do often long for that shared magnetic field though.
Many of the smartest people I know are good at ignoring bureaucratic requirements, or at least handling them with the minimum effort necessary. And that often includes business, which many of them see as a subcategory of bureaucracy.
As the article notes, several companies (Apple, Google, etc.) could (currently) afford to fund such a lab, but there is no way their management and shareholders would approve.
There's a reason for this: research labs seem to benefit competitors as much as (or more than) the companies that fund them. This wasn't an issue for AT&T when it was a monopoly, but it is now. Personally I don't see it as a problem (since one home run innovation could pay for the entire lab) but company managers and shareholders do.
On the other hand, Apple does seem to have a de facto AI lab with a good deal of resource waste, so maybe that's good.
> As the article notes, several companies (Apple, Google, etc.) could (currently) afford to fund such a lab, but there is no way their management and shareholders would approve.
Google did set up such a lab. The mission of Google Brain was literally to hire smart people and let them work on whatever they wanted. ("Google Brain team members set their own research agenda, with the team as a whole maintaining a portfolio of projects across different time horizons and levels of risk." -- https://research.google.com/teams/brain/). Unsurprisingly, Google Brain is the place that originated the Transformer that powers the current AI craze (and many, many, many other AI innovations).
The current tech giants spend a lot of money on "research," where research means optimizing parts of the product line to the nth order of magnitude.
Arguably, Google Brain was one such lab. Albeit with more freedom than normal.
Which is fine, it's their money. But then they (and the broader public) shouldn't bemoan the lack of fundamental advances and a slowdown in the pace of discovery and change.
You mean they renamed it / merged it with another group that has similar freedom and focus on research.
Unfortunately these opportunities have dried up as companies either got rid of their research labs or shifted the focus of their research labs to be more tied to immediate business needs. Many of my former classmates and colleagues who were industrial researchers are now software engineers, and not due to intentionally changing careers. Academia has become the last bastion of research with fewer commercialization pressures, but academia has its “publish or perish” and fundraising pressures, and it is under attack in America right now.
I once worked as a researcher in an industrial lab, but the focus shifted toward more immediate productization rather than exploration. I ended up changing careers; I now teach freshman- and sophomore-level CS courses at a community college. It’s a lot of work during the school year, but I have roughly four months of the year when I could do whatever I want. Looking forward to starting my summer research project once the semester ends in a few weeks!
For example, the best non-AI TTS system is still Ivona TTS, which made its name winning the Blizzard Challenge around 2007. The best open source solution is espeak, and it's permanently stuck in 1980... Ivona was bought up by Amazon, and now they don't even use the original software, but they do charge money per character to use the voice via Amazon Polly. They could open source it, but they don't.
We don't even have something as basic as text-to-speech freely available, whether you are disabled or not. That is a problem. You have this amazing innovation that still holds up to this day, squandered away for nothing.
Why can't we just have an institute that develops these things in the open, for all to use? We clearly all recognize the benefit: SysV tools are still used today! We could have so many amazing things, but we don't. It's embarrassing.
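For what it's worth, the Ivona-descended voices are still reachable today, just metered. Here's a minimal sketch using boto3 (the AWS SDK for Python); it assumes configured AWS credentials, and note that Polly bills per character synthesized:

    import boto3  # assumes AWS credentials are configured

    polly = boto3.client("polly")
    resp = polly.synthesize_speech(
        Text="Hello from an Ivona-descended voice.",
        OutputFormat="mp3",
        VoiceId="Joanna",  # a Polly voice; the Ivona lineage is the point here
    )
    with open("hello.mp3", "wb") as f:
        f.write(resp["AudioStream"].read())

The free path, by contrast, is still essentially `espeak "hello"` at the command line, 1980s voice and all.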
When I was at Apple for several years, there were definitely at least two such groups.
It’s not trivial to foster such environments, but they do still exist in different forms.
If Bell Labs let people explore for multiple years, a few months probably isn't enough time.
There have been many attempts to replicate the success of the Skunk Works, but they've all failed because the organizers thought they could improve on it.
They were responsible for the tailless X-36, the X-37 space plane and, allegedly, much of the groundwork for the winning NGAD design.
um... the UK sent the magnetron they had recently invented (1940) to the US in a spirit of wartime cooperation, and because their own research and industrial base was already maxed out at the time. pretty sure they sent an owner's manual and schematics too. probably even some people?
(magnetrons, for generating microwaves, were the essential component for radar)
> However, examples No. 11 and 12 had the number of resonators increased to 8 in order to maximise the efficiency of the valve with the magnetic field provided by the then-available permanent magnet. The E1189 also incorporated cooling fins to enable the device to be air rather than water cooled.
> Sample No. 12 was taken to the USA by E. Bowen with the Tizard mission, and upon testing at Bell Labs produced 10 times the power at 5 times the frequency of the best performing American triodes. A certain amount of confusion arose as the drawings taken by Bowen still showed the 6-resonator anode, but an X-ray picture taken at Bell Labs revealed the presence of 8 resonators.
> The E1189, or its Navy equivalent NT98, was used in the naval radar Type 271, which was the Allies' first operational centimetric radar. Early RCMs like the E1189 were prone to mode jumping (frequency instability) under pulse conditions, and the problem was solved by strapping together alternate segments, a process invented by Sayers in 1942. Strapping also considerably increased the magnetron's efficiency.
via https://www.armms.org/media/uploads/06_armms_nov12_rburman.p... and another account: https://westviewnews.org/2013/08/01/bell-labs-the-war-years/...
UK physicist James Sayers was part of the original team that developed the magnetron in the UK. He did join the Manhattan Project in 1943, so perhaps before that he came over to the US (to Bell Labs) as part of the radar effort: in that case strengthening Bell Labs' contributions and weakening any claim of reverse engineering :)

When Lee de Forest "invented" the triode tube amplifier, he had no idea how it worked. When Shockley "invented" the transistor, his team grumbled that he had stolen their work (similar to Steve Jobs, the boss, taking over the Macintosh project when his own Lisa project failed), but in any case it was not yet actually understood how transistors worked. "How the First Transistor Worked: Even its inventors didn’t fully understand the point-contact transistor" https://spectrum.ieee.org/transistor-history
In these cases, the bleeding edge of R and the bleeding edge of D were the same thing. A certain amount of "reverse engineering" would have been mandatory, but it's really "reverse sciencing" ("why did my experiment turn out so well?") rather than reverse engineering a competitor's product to understand how they made it work so well.
https://en.wikipedia.org/wiki/MIT_Radiation_Laboratory
> In early 1940, Winston Churchill organized what became the Tizard Mission to introduce U.S. researchers to several new technologies the UK had been developing. Among these was the cavity magnetron, a leap forward in the creation of microwaves that made them practical for use in aircraft for the first time. GEC made 12 prototype cavity magnetrons at Wembley in August 1940, and No 12 was sent to America with Bowen via the Tizard Mission, where it was shown on 19 September 1940 in Alfred Loomis’ apartment. The American NDRC Microwave Committee was stunned at the power level produced. However Bell Labs director Mervin Kelly was upset when it was X-rayed and had eight holes rather than the six holes shown on the GEC plans. After contacting (via the transatlantic cable) Dr Eric Megaw, GEC’s vacuum tube expert, Megaw recalled that when he had asked for 12 prototypes he said make 10 with 6 holes, one with 7 and one with 8; and there was no time to amend the drawings. No 12 with 8 holes was chosen for the Tizard Mission. So Bell Labs chose to copy the sample; and while early British magnetrons had six cavities American ones had eight cavities... By 1943 the [Rad Lab] began to deliver a stream of ever-improved devices, which could be produced in huge numbers by the U.S.'s industrial base. At its peak, the Rad Lab employed 4,000 at MIT and several other labs around the world, and designed half of all the radar systems used during the war.
That seems to be the source of the reverse engineering idea, and I think Bell Labs' role (which was quite important) was more toward perfecting the devices for manufacture at scale, as it was an arm of a giant leading-edge industrial company.
I'm not diminishing Bell Labs or anybody there; it was a lot of smart people.
Something I've been curious about and thought I'd ask the room here since it was mentioned.
It seems to me that "the radar effort" was very significant, almost Manhattan Project level itself. In every book about scientists in WW2 or the atomic bomb that I've read, it seemed everyone had a friend "working on radar", or various scientists weren't available to work on the bomb because they were, again, "working on radar."
Was this true or just something I'm overanalyzing?
Guess who pioneered the venerable Silicon Valley: HP (whose instruments business later became Agilent, now Keysight). Their first killer product was the audio oscillator, a signal/waveform generator. HP was basically the Levi's of the radar era, making tools for the radar/transistor/circuit technology gold rush.
One of the best academic engineering research labs in the world for many decades now is MIT Lincoln Laboratory, and guess what: it's a radar research lab [1].
I can go on but you probably get the idea now.
[1] MIT Lincoln Laboratory:
The magnetron was one of several technologies that the UK transferred to the USA in order to secure assistance in the war effort.
How is that not in the spirit of wartime cooperation? With spirited cooperation, each side contributes in order to get what it wants from the cooperation.
If you want more nuance: the American administration, completely upper class (https://oldlifemagazine.com/look-magazine-april-12-1949-fran...), was 100% behind helping the UK and got the job done. But we have a political system that has to respond to the common people, and just as the English Labour Party has never thought "oh, what can we do to help the US?", neither has the American populace in reverse, on top of the traditional American individualism and revulsion toward European monarchical and imperial wars.
Difference is, we don't bitch about it.
Britain is completely entitled to be proud of its absolute grit, prowess, and determination in the Second World War, but the US did right by them too. America was already on the rise, but not entirely self-confident (that had begun with WWI but had not become a birthright till after WWII). We didn't have a 19th-century empire that collapsed (although we were in certain respects a self-contained 19th-century western empire), and we were perfectly positioned (geography, population, GDP, English common-law legal system plus a bill of rights, but lacking other tired old ideas about class) to assume the mantles not only of British hegemony, but also of the French, German, Dutch, Belgian, and other "imperial thrones" that were now unoccupied. It was to our benefit, but it was not "our fault" or even "our doing".
There's no problem with that at all, it's what every power has had to do in order to reach that status throughout history. I was just calling out that it was primarily a transaction.
1. find 100 highly motivated scientists and engineers
2. pay them each $1m/year
3. put them in a building
4. see what happens!
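Back-of-envelope, here is what that recipe costs to run indefinitely. A sketch in Python, where the overhead multiplier and endowment draw rate are pure assumptions:

    # Cost of "100 scientists at $1M/year" funded endowment-style.
    headcount = 100
    salary = 1_000_000   # per the list above
    overhead = 2.0       # assumed: building, equipment, support staff
    annual_burn = headcount * salary * overhead  # $200M/year
    draw_rate = 0.04     # assumed sustainable endowment draw
    endowment = annual_burn / draw_rate          # $5B to fund it forever
    print(f"burn: ${annual_burn/1e6:.0f}M/yr, endowment: ${endowment/1e9:.0f}B")

So the building in step 3 is the cheap part; the hard part is finding roughly $5B of patient capital with no strings attached.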
1. Already found X highly motivated scientists and engineers.
- in my case people that must like chemicals, electronics, software, stuff like that
2. $1M funding x X, but it's got to be paid back in 5 years, so a viable business model needs to be initiated right away, even if the technology is as much as a year out from possible release or commercialization.
- each person needs to be worth a million over 5 years; that's hard enough to find. It would be more of a needle in a haystack to find experimentalists where it's good to sink a million per year for a decent length of time, but that can be worked up to. If serious research is involved, stealth must be an option
3. Put them in X number of buildings.
- works better than you think, and "nobody's" doing it
4. Some of these are profit centers from day 1, so you could even put franchising on the table ;)
- you'd be surprised what people who've already invented a lifetime of stuff could do with the kind of resources that can enable a motivated creator who has yet to make very remarkable progress, so leverage both
What happens if they don't though?
Universities are the place for low-agency people in today's world.
Otherwise things will just fragment into cliques and fights, like any university department.
Surely the lab scientists and engineers would assert that they need a bigger budget than the mathematicians, and so on.
Cynically, it likely would have ended up a zombie shell of itself, like IBM
Topically, assuming it avoided such a fate and was held in high regard by the industry and employees, the current administration would likely be meddling with every single grant, contract, or project that did not align with the administration's priorities (see: colleges and research grants)
<paraphrase>
The reason is very simple. There was a big-picture motivation: the war, followed by the Cold War. Once the big-picture motivation wasn't there anymore, that sort of organizational structure (or lack of it) does not work the same way. What ends up happening is what a sibling comment has noted:
> My observation has been that smart people don't want this anymore, at least not within the context of an organization. If you give your employees this freedom, many will take advantage of it and do nothing.
</paraphrase>
You might say, but `grep` wasn't used for war! Correct, but it came up as a side effect of working on much larger endeavours that tied into that bigger picture.
This has been true for most of recent human history. You might know this already, but Fourier was part of Napoleon's Egyptian expedition, and his work on decomposing waveforms arose out of his work on a "big picture" problem: the propagation of heat.
A lot of large US tech corporations do have sizable research arms.
Bell Labs is certainly celebrated as part of a telephone monopoly of the time, though AT&T actually pulled out of operating system development related to Multics, and Unix was pretty much a semi-off-hours project by Ritchie and Thompson.
It's true that you tend not to have such dominant firms as in the past. But companies like Microsoft still have significant research organizations. Maybe head-turning research advancements are harder than they used to be. Don't know. But some large tech firms are still putting lots of money into longer-term advances.
See also VSCode and WSL.
And if we ain't impressed with LLMs then wtf! I mean, maybe it is just nostalgia for the old times.
Lots of great stuff is coming out. Quantum computing. Open source revolution producing Tor, Bitcoin, Redis, Linux.
I think we are in the Golden age!
And it is not all from one place. Which is better.
.NET and Java also started as research projects, as did GraalVM, Maxine, LLVM, many GHC features, OCaml improvements, ...
It feels anti-efficient. It looks wasteful. It requires faith in the power of reason and the creative spirit. All these things are hard to pull off in a public corporation, unless it's swimming in excess cash, like AT&T and Google did back in the day.
Notably, a lot of European science in the 16th-19th centuries was advanced by well-off people who did not need to earn their upkeep: the useless, idle class, as some said. Truth be told, not all of them advanced sciences and arts though.
OTOH, rational, orderly living, where every minute is filled with some predefined meaning and a pre-assigned task, allows very little room for creativity, and gives relatively little incentive to invent new things. Some see it as a noble ideal, and, understandably, a fiscal ideal, too.
Maybe a society needs excess sometimes, needs to burn billions on weird stuff, because that gives something genuinely new and revolutionary a chance to be born and grow to a viable stage. In a funny way, the same monopolies that gouge prices for the common person also collect the resources necessary for such advances, which benefit that same common person (but not necessarily that same monopoly). It's an unsettling thought to have.
But lizard brains gotta keep folks under their thumb and hoard resources. Alas.
Basic econ 101: inelastic demand means the good can be priced as high as the limited number who are lucky enough to get it are able to afford.
Bell Labs, and think tanks generally, work by paying _enough_ to raise someone to the capitalist-society equivalent of a noble.
Want to fix the problem for everyone in society, not just an 'intellectual elite'? Gotta regulate the market, put enough supply into it that the price is forced to drop and the average __PURCHASING POWER__ rises even without otherwise raising wages.
The market of course needs regulation, or, rather, stewardship: from protection of property rights all the way to limiting monopolies, dumping, etc. The market must remain free and varied in order to do its economic work for the benefit of the society. No better mechanism has been invented for last few millennia.
Redistribution to provide a safety net for those in trouble is usually a good thing to have, but it does not require dismantling the market. It mostly requires an agreement in the society.
[1]: https://en.m.wikipedia.org/wiki/Economic_calculation_problem
A revenue-neutral UBI check at some subsistence level, combined with killing all other government assistance including lower tax brackets, would in the short term significantly lower the standard of living for many low-income Americans and boost others. However, people would try to maximize their lifestyle, and for most people that would be through working. Others would opt out and try to make being really poor work for them.
Essentially you remove central planning around poverty: the government stops requiring rent-stabilized apartments, etc., which in the short term pushes a lot of poor people out of major cities but simultaneously puts upward pressure on wages to retain those workers and pushes down rents via those suddenly available apartments. It doesn't actually create or destroy wealth directly; you just get a more efficient allocation of resources.
Adding a land tax too, now that would be, that would really, that would fix some things
This returns us back to the problem of some guaranteed payments to those we don't want to let die, and maybe want to live not entirely miserably, and the administration thereof.
Another danger is the contraction of the economy: businesses close, unable to find workers; the level of UBI goes down; people's income (UBI + salary) also goes down, and they can afford fewer goods; more businesses close; etc. When people try to find work because UBI is not enough, there may not be enough vacancies until the economy spins up again sufficiently. It's not unlike a business cycle, but the incentive for a contraction may be stronger.
There's a long way between uncomfortable and death here; entitlement spending is already over $10k/person/year, and that's excluding the impact of progressive taxation. A revenue-neutral flat tax and a $20k+ UBI isn't unrealistic. A reasonable argument can be made for universal healthcare being part of UBI, but that's a separate and quite nuanced discussion.
Not that I think there’s any chance of a UBI in the US, but it’s an interesting intellectual exercise.
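As an intellectual exercise, though, the arithmetic is easy to sketch; every input below is a rough assumption, not data:

    # Rough US UBI arithmetic (all inputs are assumptions).
    population = 330e6      # ~US population
    ubi = 20_000            # $/person/year, per the comment above
    replaced = 10_000       # entitlement spending replaced, $/person/year
    gross = population * ubi                 # ~$6.6T/year
    net_new = population * (ubi - replaced)  # ~$3.3T/year
    gdp = 27e12             # ~US GDP
    print(f"gross: ${gross/1e12:.1f}T, net new: ${net_new/1e12:.1f}T "
          f"({net_new/gdp:.0%} of GDP)")

On those assumptions the net new revenue needed is on the order of 12% of GDP: large, but not obviously impossible, which is why the flat-tax framing matters.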
Citation needed. If you're referring to the USSR, please pick an economic measure that you think would have been better, and show why the calculation problem was the cause of its deficiency. The USSR was incredibly successful economically, whether measured by GDP growth, technological advancement, labor productivity, raw output, etc. Keep in mind all of this occurred under extremely adverse conditions of war and political strife, and started with an uneducated agrarian population and basically no capital stock or industry.
The Austrian economist Hans-Hermann Hoppe writes of Hayek's calculation problem:
> [T]his is surely an absurd thesis. First, if the centralized use of knowledge is the problem, then it is difficult to explain why there are families, clubs, and firms, or why they do not face the very same problems as socialism. Families and firms also involve central planning. The family head and the owner of the firm also make plans which bind the use other people can make of their private knowledge […] Every human organization, composed as it is of distinct individuals, constantly and unavoidably makes use of decentralized knowledge. In socialism, decentralized knowledge is utilized no less than in private firms or households. As in a firm, a central plan exists under socialism; and within the constraints of this plan, the socialist workers and the firm’s employees utilize their own decentralized knowledge of circumstances of time and place to implement and execute the plan […] within Hayek’s analytical framework, no difference between socialism and a private corporation exists. Hence, there can also be no more wrong with the former than with the latter.
Indeed, a private company usually operates in a way a centralized monarchy / oligarchy would operate: the bosses determine a plan, the subordinates work on implementing it, with some wiggle room but with limited autonomy.
Larger companies do suffer from inefficiencies of centralization: waste, slowdowns, bureaucracy, and skewed incentives. This is well documented and happens right now, as we facepalm watching a huge corp make one terrible, wasteful move after another, following directives from the top. This is why some efficient corporations are internally split into semi-independent units that effectively trade with each other and even have an internal market of sorts. (See the whole idea of keiretsu.)
But even the most giant centralized corporations, like Google, Apple, or the AT&T of 1950s, exist in a much, much larger economy, still driven mostly by market forces, so the whole economy does not go haywire under universal central planning, as did the economy of the late USSR, or the economy of China under Mao, to take a couple of really large-scale examples.
In the same basic econ 101, you learn that real estate demand is localized. UBI allows folks to move to middle-of-nowhere Montana.
- Social connections: family / friends / potential mates
- Livelihood needs: education / jobs / food (in the 1st world, the food they like is fresh / better; historic / other food exists!)
- General QoL: climate / beauty / recreational opportunities
Many big cities cost more because they're where the opportunity is, or where family that previously/currently prospered from that opportunity resides. For many of us on HN, it's where the sort of jobs we'd be good at contributing to society exist. Even if some corp opened an office in the middle of Montana, there wouldn't be anything else there in the way of other opportunities. Heck, given UBI, I'd rather join Starfleet, with awesome healthcare for all, cool technical challenges, and anything other than Starbase 80.
That's why experiments need to be made.
Now, with research pay, Bell was right up there with other prestigious institutions: elite, but not like the nobility of old.
I would say much more like a "gentleman scientist" of antiquity: whether they were patrons or patronized in some way, they could focus daily on the tasks at hand, even when those were some of the most unlikely actions to yield miracles.
Simply because the breakthroughs that are needed are the same as they ever were, and almost no focused task ever leads in that direction, so you're going to have to do a lot of "seemingly pointless" stuff to even come up with one good thing. You better get started right away, and don't lift your nose from the grindstone either ;)
People get bored doing nothing, and enjoy contributing to their community.
No, they're not going to go get shitty factory jobs. But that's OK, because all those jobs are now automated and done by robots.
But they are going to go and do something useful, because that's what people do. The anti-UBI trope that "given basic income, everyone will just sit around on their arses watching TikTok videos" has been proven wrong in every study that measured it.
> but! A lot, probably more than we imagine, would get bored and… do something.
I'm of the same belief. We're too antsy of creatures. I know in any long vacation I'll spend the first week, maybe even two (!), vegging out doing nothing. But after that I'm itching to do work. I spent 3 months unemployed before heading to college (laid off from work) and in that time taught myself programming, Linux, and other things that are critical to my career today. This seems like a fairly universal experience too! Maybe not the exact tasks, but people need time to recover and then want to do things.

I'm not sure why we think everyone would just veg out WALL-E style and why the idea is so pervasive. Everyone says "well I wouldn't, but /they/ would". I think there's strong evidence that people would do things too. You only have to look at people who retire, or the billionaire class. If the people with the greatest ability to check out and do nothing don't, why do we think so many would? People are people after all. And if there's a secret to why some still work, maybe we should really figure that out. Especially as we're now envisioning a future where robots do all the labor.
I think people are drawn to labour but not drudgery, and a lot of jobs don't really do much to differentiate between the two. I reckon if fewer people had to worry about putting bread on the table, what we'd see is a massive cultural revival, a shot in the arm to music and the arts.
- The Manual, by the KLF
A socialism where the only way to work is to own a part of an enterprise (so no "exploitation" is possible) would likely work much better, and not even require a huge state. It would be rather inflexible though, or would mutate back into capitalism as some workers accumulated larger shares of enterprises.
The problem with social democracy is that it still gives capitalists a seat at the table and doesn't address the fundamental issues of empowering market radicalism. Some balance would be nice, but I don't really see that happening.
https://www.oecd.org/en/topics/policy-issues/public-finance-...
How is that 'market radicalism' ?
How is government spending ~25 trillion USD a year somehow not considered?
A good example of market radicalism at play in the US is the healthcare system. "Everyone knows" that the market is better at allocating resources, but we actually have terrible outcomes and there is no political will to change an obviously broken system. Despite having real-world examples of single-payer (and hybrid) healthcare systems out there that are more cost effective and have better outcomes.
On what market would the cost of services to be rendered be held secret, or at least obscure? Try shopping for a particular kind of surgery across hospitals. How many plans does your employer offer? Would you rather buy a plan independently? Why?
The whole "insurance" system is not insurance mostly, but a payment scheme, of a quite bizarre, complicated, and opaque kind. The regulatory forces that gave birth to it are at least as strong as any market forces remaining in the space, and likely stronger.
OTOH, direct markets are a poor way to distribute goods that have very inelastic and time-sensitive demand, like firefighting, urgent healthcare, or law enforcement. Real insurance is a better way, and a permanent force sustained by tax money usually demonstrates the best outcomes. The less urgent healthcare is, the better a market works, all the way to fitness clubs, which are 100% market-based.
I think we've become overly metricized. In an effort to reduce waste we created more. Some things are incredibly hard to measure and I'm not sure why anyone would be surprised that one of those things is research. Especially low level research. You're pushing the bounds of human knowledge. Creating things that did not previously exist! Not only are there lots of "failures", but how do you measure something that doesn't exist?
I write "failure" in quotes because I don't see it that way, and feel like the common framing of failure is even anti scientific. In science we don't often (or ever) directly prove some result but instead disprove other things and narrow down our options. In the same way every unsuccessful result decreases your search space for understanding where the truth is. But the problem is that the solution space is so large and in such a high dimension that you can't effectively measure this. You're exactly right, it looks like waste. But in an effort to "save money" we created a publish or perish paradigm, which has obviously led to many perverse incentives.
I think the biggest crime is that it severely limits creativity. You can't take on risky or even unpopular ideas, because you need to publish, and that means passing "peer review". This process is relatively new to science, though. It didn't exist in the days of the old scientists you reference [0]. The peer review process has always been the open conversation about publications, not the publications themselves, nor a few random people reading a manuscript who have no interest in it and every reason to dismiss it. Those are just a means to communicate, something that is trivial with today's technologies. We should obviously reject works with plagiarism and obvious factual errors, but there's no reason not to publish the rest. There's no reason we shouldn't be more open than ever [1]. But we can't do this in a world where we're in competition with one another. It only works in a world where we're united by the shared pursuit of more knowledge. Otherwise you "lose credit" or some "edge".
And we're really bad at figuring out what's impactful. Critically, the system makes it hard to make paradigm shifts. A paradigm shift requires a significant rethinking of the current process. It's hard to challenge what we know, and even harder to convince others. Every major shift we've seen first receives major pushback, and that makes it extremely difficult to publish in the current environment. I've heard many times "good luck publishing, even if you can prove it". I've also seen many ideas put on the infinite back burner because, despite confidence in the idea and its impact, it's known that in the time it'd take to get the necessary results you could have several other works published, which matters far more to your career.
Ironically, I think removing these systems would save more money and create more efficient work (you're exactly right!). We have people dedicating their lives to studying certain topics in depth. The truth is that their curiosity aligns strongly with critical problems. Sometimes you just know, and can't articulate it well until you get a bit more into the problem. I'm sure this is something a lot of people here have experienced when writing programs or elsewhere. There are many things no one gets why you'd do until after they're done, and frequently many will say it's so obvious after seeing it.
I can tell you that I (and a large number of people) would take massive pay cuts if I could just be paid to do unconditional research. I don't care about money, I care about learning more and solving these hard puzzles.
I'd also make a large wager that this would generate a lot of wealth for a company big enough to do such a program and a lot of value to the world if academia supported this.
(I also do not think the core ideas here are unique to academia. I think we've done similar things in industry. But given the specific topic it makes more sense to discuss the academic side)
[0] I know someone is going to google "oldest journal" and find an example. The thing is that this was not the normal procedure. Many journals, even in the 20th century, would publish anything free of obvious error.
[1] Put it on open review. Include code, data, and anything else. Make comments public. Show revisions. Don't let those who plagiarize just silently get rejected and try their luck elsewhere (a surprisingly common problem).
The OECD's total GDP per year is ~$50 trillion, so 1 percent is roughly $500B on research.
So there clearly has to be some accountability. But no doubt it could be improved. As you say publishing everything these days makes more sense with platforms like arXiv.
On taking pay cuts to do research: have you ever seen places offer part-time work for something and then allow people to research what they want in the other time?
Or researchers just doing this alongside other jobs?
Ha. Hmm. I just realised I have a cousin who does this.
Look at some of the most famous success stories in comedy, art, music, theatre, film, etc.
A good number of them did their best work when they were poor.
"Community" is a great example. Best show ever made, hands down. Yet they were all relatively broke and overworked during the whole thing.
It's because they believed in the vision.
Science requires much more concentration on abstract thinking, loading a much larger context, if you will. It's counterproductive to do it while busy with something else. It overworks you all right, and it demands much more rigor than art.
All revolutionary new technology is initially inefficient, and requires spending a lot of time and money on finding efficient solutions. The first electronic computers were terribly unwieldy, expensive, and unreliable. The same applies to the first printing presses, steam engines, aircraft, jet engines, lasers, and LLMs (arguably it still applies). It's really hard to advance technology without spending large amounts of resources, with no profit or guarantee thereof, for years and years. This requires a large cache of such resources, prepared to be burnt on R&D.
It's investment in the far future vs. the predictable present; VC vs. day trading.
Today, I'm in a corporate research role, and I'm still given a lot of freedom. I'm also genuinely interested in practical applications and I like developing things that people want to buy, but my ability to do those things owes a lot to the relatively freewheeling days of NSF funding 30+ years ago.
> Notably, a lot of European science in the 16th-19th centuries was advanced by well-off people who did not need to earn their upkeep: the useless, idle class, as some said.
I heard a recent interview with John Carmack (of DOOM fame) who described his current style of work as "citizen scientist": he has enough money, but wants to do independent research on AI/ML. I am always surprised that we don't see more former/retired hackers (many of whom got rich from the dot-com era) decide to "return to the cave" to do something exciting with open source software. Good counterexamples: (1) Mitchell Hashimoto and his Ghostty, (2) Philip Hazel and his PCRE (Perl Compatible Regular Expressions) library. When I retire (early, if all things go well), the only way I can possibly stave off a certain, early death from intellectual inactivity would be something similar. (Laughably: I don't have 1% of the talents that John Carmack has... but a person can try!)

I know that Peter Thiel is not a popular figure nowadays, but he very correctly stated that competition is for losers, and that the real innovators build things that competitors are just unable to copy for a long time, let alone exceed. SpaceX did that. Google, arguably, did that too, both with their search and their (piecemeal-acquired) ad network. Apple did that with iTunes.
Strive to explore the unknown when you can, it may contain yet-unknown lucrative markets.
(There is, of course, an opposite play, the IBM PC play, when you create a market explosion by making a thing open, and enjoy a segment of it, which is larger than the whole market would be if you kept it closed.)
I think a better approach to competition is to make it irrelevant. One should be developing a solution that solves the problem statement.
You start by creating a myth: "this place breeds innovation". Then, ambitious smart people wanting to innovate are drawn to it.
Once there, there are two ways of seeing it: "it was just a myth, I'll slack off and forget about it" or "the myth is worthwhile, I'll make it real".
One mistake could end it all. For example, letting those who don't believe outnumber or outwit those who "believe the myth".
So, small pieces: a good founding myth (half real, half exaggerated), people willing to make it more real than myth, and pruning off whoever drags the ship down.
Let's take that "productivity" from this myth perspective. Some people will try to game it to slack off; some people will try to make the myth of measuring it into reality (fully knowing it's doomed from the start).
A sustainable power of belief is quite hard to put into a formula. You don't create it; you find it, feed it, prune it, etc. I suspect many proto-Bell-Labs analogues exist today. Whenever there are one or two people who believe and work hard, there is a chance of making it work. However, the starting seed is not enough on its own.
If you ask me, the free software movement has a plentiful supply of it. Many companies have realized this already, but they can't sequester the myth into another thing (one that makes money), even though free software already creates tons of (non-monetary) value.
> There was a mistaken view that if you just put a lab somewhere, hired a lot of good people, somehow something magical would come out of it for the company, and I didn't believe it. That didn't work. Just doing science in isolation will not in the end, work. [...] It wasn't a good idea just to work on radical things. You can't win on breakthroughs - they're too rare. It just took me years to develop this simple thought: we're always going to work on the in-place technology and make it better, and on the breakthrough technology. [0]
> John Pierce once said in an interview, asserting the massive importance of development at Bell:
>> You see, out of fourteen people in the Bell Laboratories…only one is in the Research Department, and that’s because pursuing an idea takes, I presume, fourteen times as much effort as having it.
The big business mistakes were in the 1970s, when RCA tried to become a diversified conglomerate. At one point they owned Hertz Rent-A-Car and Banquet TV dinners.[2]
[1] https://eyesofageneration.com/rca-sarnoff-library-photo-arch...
But the ultimate problem with TFA is that it seems to be written to portray venture capitalists(?), or at least this group of VCs who totally get it, as on the side of real innovation, along with ... Bell Labs researchers(?) and Bell Labs executives(?) ... against the Permanent Managerial Class which has ruined everything. Such ideas have apparently been popular for a while, but I think we can agree that after the past year or two the joke isn't as funny as it used to be.
Furthermore, Big Oil notoriously suppressed any hint of their internal climate change models from being published and hired the same marketing firms that Big Tobacco employed.
People talk about climate change as though we're all equally responsible, but this is false. There may be few saints on this subject, but there are certainly degrees of sinner, and these people are at the very highest level of blame in my opinion. How much of the world will be uninhabitable by the end of the century because their lies delayed timely climate action?
The key differentiator was giving them freedom and putting them in charge, not isolating them.
If you just look at the success stories, you could say that today's VC model works great too - see OpenAI's work with LLMs based on tech that was comparatively stagnating inside of Google's labs. Especially if nobody remembers Theranos in 50 years. Or you could say that big government-led projects are "obviously" the way to go (moon landing, internet).
On paper, after all, both the "labs" and the VC game are about trying to fund lots of ideas so that the hits pay for the (far greater) number of failures. But they both, after producing some hits, have run into copycat management optimization culture that brings rapid counter-productive risk-aversion. (The university has also done this with publish-or-perish.)
Victims of their own success.
So either: find a new frontier funding source that hasn't seen that cycle yet (it would be ironic if some crypto tycoon started funding a bunch of pure research and that whole bubble led to fundamental breakthroughs after all, hah) or figure out how to break the human desire for control and guaranteed returns.
Well, the thing is, it actually is kind of horrible. You are basically handing the choice of what to develop, and where (something that is actually kind of important to society in general), to unelected rich farts looking to make more money. It doesn't help that many of those farts have specific views of, and projects for, society's future as a technofascist hellscape.
And you could argue the vast majority of what the VC model has given us is scams and ethically dubious services that surveil everyone everywhere and try to squeeze as much money/value out of people as possible, without actually solving many real problems. There is also the fact that the very model is antithetical to solving problems: solving problems costs money and doesn't bring anything back. It's inherently unprofitable, so any solution is doomed to become more and more enshittified.
Hello? We have 17(!) federally funded national labs, full of scientists doing the work this article waxes nostalgic about. Through the Laboratory Directed Research and Development (LDRD) program they afford employee scientists the ability to pursue breakthrough research. However, they are facing major reductions in funding now due to the recent CR and the upcoming congressional budget!
Make of that what you will.
It's an illusion that no-strings-attached funding exists. The government has an agenda and you're free to research anything you want, as long as it is one of the pre-determined topics. It's a very political process.
The difference between this lab idea and a VC like YC is that VC portfolio companies need products and roadmaps to raise investment and drive revenue, whereas an asset manager is just investing the money and using the profits to fund engineering research and spinoff product development.
Firms like this must already exist; maybe I just never hear about their spinoffs or inventions? If not, maybe a small fund could be acquired to build a research division onto it.
[1] The Art of Doing Science and Engineering by Richard W. Hamming:
https://press.stripe.com/the-art-of-doing-science-and-engine...
https://paulgraham.com/hamming.html
Said it was amazing.
You certainly have expressed a lot of contempt and disrespect for your own employees, as much as you hate all the black people you refuse to hire. Do you not want your all white employees to know what you think of them either?
It must not be a very successful company, and you must be lying through your teeth about it, if you can't say its name.
And I'll put my personal details on here the day Hacker News makes it mandatory. Not when some very strange stalker demands it.
Each person, "principal investigator", has a lab which they built.
They only have so much space, and so much budget, but they get a clean slate.
And they're all different. But they all have brilliant ideas they need to work out.
Advantages of doing it like this were proven earlier, and it was still going strong like this in the 1970s.
It was still kind of an academic model.
In these decades there was sometimes special schooling where students were groomed to take their place in this exact lab, starting as pre-teens. Nobody imagined it would ever dwindle in any way.
This is what places like Exxon and DuPont still looked like in 1979 too.
Without being quite an actual monopoly, one thing they had in common is that if you invent anything that could be the least bit useful to an employer that size, they can surely afford to make the most of it like few others can.
So the scientists could go wild to a certain extent as long as they were guided by the same "north star" that the org as a whole recognized. Whether it feels like you're getting closer or not, that's the direction you must be drawn to.
You should have seen some of the stuff they built.
Oil companies can have an amazing budget sometimes.
When somebody had a breakthrough, or OTOH scuttled a project or transferred to a different technical center, their lab would be cleared out so a completely different project could get underway, with a new investigator building from the ground up. This could take a while, but eventually some very well-equipped labs with outstanding capabilities can develop.
As an entrepreneur, I liked it down in the basement where they would auction off the used gear from about a dozen of those labs at once, after it had sat in storage for a period of time.
After critical mass was achieved, I had way more fairly current equipment at my immediate disposal than any dozen of my institutional counterparts could benefit from the mainstream way. Turns out I really could get more accomplished and make more progress my own way than if I had actually been at a well-funded institution, using equipment they once owned and had been successful with to a certain extent. I usually became only a little bit more advanced, and only some of the time.
Most things are truly the "least bit useful" anyway ;)
Today we have a huge oversupply of scientists; however, there are too many of them to allow judging for potential, and many are not actually capable of dramatic impact.
More generally, a standard critique of "reproducing a golden age" narratives is that the golden age existed within a vastly different ecosystem and indeed stopped working for systemic reasons, many of which still apply.
In particular, just blaming 'MBA management' does little to explain why MBAs appeared in the first place, why they were a preferable alternative to other types of large-scale management, and indeed how to avoid relapsing into it over a few years and personnel shifts.
Overall, I am afraid this post, while evocative, did not convince me what makes 1517 specifically so different.
Agree with this in particular as a good symptom of the "tectonic shifts". I usually blame the Baumol effect, i.e., the increasing difficulty of the inherently human task: keeping science education and science educators up to date. Especially when faced with the returns on optimizing more "mechanical" processes (including the impressive short-term returns on "merely" improving bean-counting or, in Bell Labs'/IBM's later eras, shifting info-processing away from paper).
I doubt AI or VCs* have any significant role to play in reducing the friction in the college-to-self-selling-breakthrough pipeline, but they should certainly channel most of their efforts to improving the "ecosystem" first
TFA has the right ideas, such as:
>Make sure people talk to each other every day.
Which already happens here on HN! (Although it's mostly different daily sets of people but.. the same old sets of ideas?)
*Not if the main marketable use case for college students is to game existing metrics. And I don't see any EdTech in the RFS either.
Whatever the reason it is definitely not because they are effective managers.
I suspect it's more of a social class phenomenon - they speak the language of the ownership class and that engenders trust.
I could be wrong, but while 'business schools' existed before then, the MBA as an upper-class Ivy League thing dates to exactly that time.
Radars, computers (Enigma crushers), lasers, and many less visible inventions that had a great impact in, say, materials science: barrels had to be durable, planes lighter and faster, etc., and this allowed building fancier stuff. The whole nuclear industry and its surroundings.
Another factor: the Cold War. There was an incentive to spend money if only there was some chance of winning some advantage.
Realistically speaking, it's also much harder now to achieve the same level of impact as back then, since most fruits, not just the low-hanging ones, have been plucked.
Won't this always be the case?
I mean, if you look back 50 years, they look like low-hanging fruit, but at the time they weren't; only with the benefit of hindsight do they look low-hanging.
Similarly, people in 50 years will say we had all the low-hanging fruit available today in subject/area/topic X, although we don't see it, as we don't have hindsight yet.
This is a whole topic of its own, intertwining with the rise of neoliberalism and short-termist managerial capitalism. I don't think we have to get into that every single time we point out a case where short-termist managerialism fails.
This is a fairly sweeping anti-science statement without any supporting evidence to back it up. Fairly typical of the HN anti-public research hive mind sadly.
The lack of money to fund research isn't God-given; it's a feature of capitalism. You get literally trillions of dollars worth of pension funds sloshing around in need of returns, but fundamental research isn't going to directly and imminently make billions of dollars; the timescale for ROI in basic research is usually measured in decades. So all that money went toward social networks, cryptocurrencies, AI, or whatever else was promising the greatest ROI, and it can be argued that this was all not to the betterment of society but actually to its detriment.
It used to be the case that the government, mostly via the proxy of the "military-industrial complex", provided that money to counteract this; out of that we got GPS, radar, microwaves, lasers, even the Internet itself. Unfortunately, Republican governments (or, in other countries, their respective conservative equivalents) have only cut and cut public R&D spending. And now, here we are, with the entirety of the Western economy seeking to become rent-seekers with as little effort as possible...
And billionaires aren't the rescue either. Some of them prefer to spend their money compensating for ED by engaging in a rat race of "who has the bigger yacht/sports team"; some of them (like Musk) invest in R&D (SpaceX, Tesla), but with a clear focus on personal goals and not necessarily society as a whole; some of them just love to hoard money like dragons in LotR; and some try to make up for government cuts in social services and foreign development aid (Gates, Soros). But when was the last time you heard of a billionaire devoting even a small sliver of their wealth to basic R&D that doesn't come with expectations attached (like the ones described in TFA's "patronage" section)?
Decades later, AT&T was broken up into the baby bells and the consent decree was removed at that time. Bell Labs' fate was then sealed - it no longer had a required legal minimum funding level, and the baby bells were MBA-run monstrosities that were only interested in "research" that paid dividends in the next 6 months in a predictable fashion.
The funding model is an integral part of the story.
> You can move to the United States. (We will help with visas.)
This is no longer viable for anyone who isn't already a US citizen. Not sure how serious about investing in individuals that VC is, but from talking to 16-to-22-year-olds, _none_ of them want to move to the US with ICE deporting students for saying the wrong thing online (or the perception that they do). US universities and businesses are suffering from a brain drain that, unless reversed in the next 3 years, will be a drag on the US economy for decades.
https://www.youtube.com/playlist?list=PLDB8B8220DEE96FD9