Creating and maintaining an organizational structure of people that can manage both initial resourcing and ongoing resource allocation, such that the fruits of the machines' labor are distributed in a way that maintains:
1. Social coherence within the organization, so that it doesn't dissolve from infighting.
2. The ability to survive in a marketplace that assumes competition, without jeopardizing 1.
I've been working through this for a long time, and the core problem is finding people who can agree on credit assignment while not being forcibly consumed by a larger system.
So even the simple choice of organizing as an LLC, B Corp, co-op, or whatever sets the long-term destiny of your prospects.
What you're proposing is, singularly, the idealized form of techno-humanism.
However, unless you solve for the “economic problem of humans existing,” you end up creating the perfect grey-goo-producing corporation that benefits like 100 people long term.
Society has to reorganize and align itself epistemically and economically before your dream can survive.
If, later, that capability level reaches a point where they can take people's jobs, then the data that noobs PAY to give to AI right now will absolutely help them do exactly that.
And the paying noobs agree to the terms, which say they aren't allowed to use their own chat logs to compete with AI: literally the worst, most retarded deal in history, and millions of idiots think NOT taking it is dumb?
“If you're not skillmaxxing with ClosedAI for hours every day, ngmi” is the manipulation du jour for these assholes.
The human organization we see today occurs because of the exchange of economic activity. Some people trade their time for a store of value they can use to purchase the things they need. Others pay people to produce things and then sell those things to the people who need them. This is a distributed economy. It runs in a circle, with money traveling from producer to labor and back to producer (through the consumer), around and around.
The value of money lies in the properties of money, and part of that is based on what you can spend it on (medium of exchange). The value of anything differs between people, and even between comparable alternatives for the same person (marginal utility); it is subjective.
Now, if the value of the time you can exchange for money goes to zero, what happens? You don't exchange your time for free.
There is no demand for your time; no work can be exchanged for value and, indirectly, later for food; and production falters because the store of value degrades as the medium of exchange fails. Things might start off okay, but then fail, since costs on the producer side are paid out first, and the shortfall of actual profit against projections determines whether a legitimate business shuts down.
Under AI, a producer can still produce, but no one will have the money to buy their goods once money drains out of consumers' hands. You get dynamics where the producer can't sell, and as money becomes scarcer and less liquid, the value of what is produced deflates until it's worthless. This is called deflation.
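To make that mechanism concrete, here is a minimal toy sketch of the circular flow in Python, under assumptions that are mine alone (workers spend everything they earn, owners spend 30% of profits, and automation cuts the wage share by 25% per period; nothing is calibrated to real data):

    # Toy circular-flow model: producers pay wages and keep profits,
    # households spend, and producers earn only what gets spent back.
    def simulate(periods=8, wage_share=0.7, automation_rate=0.25,
                 worker_mpc=1.0, owner_mpc=0.3):
        revenue = 100.0  # total producer revenue, arbitrary units
        for t in range(periods):
            wages = revenue * wage_share          # income paid to human labor
            profits = revenue - wages             # income retained by owners
            spending = wages * worker_mpc + profits * owner_mpc
            print(f"t={t}  wage share={wage_share:.2f}  "
                  f"wages={wages:6.1f}  total spending={spending:6.1f}")
            revenue = spending                    # producers earn only what is spent
            wage_share *= (1 - automation_rate)   # AI displaces more labor each period

    simulate()

Total spending shrinks period after period even though productive capacity is untouched, which is the deflationary dynamic described above; the function and every parameter in it are hypothetical illustrations, not measurements.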
If you print enough money, you might avoid deflation, but purchasing power fails as the currency is debased; there is no information left in prices, so distortions appear in the prices of things. This leads to shortages in critical goods needed for survival, because demand can't be determined through economic calculation. You also get producers artificially constraining supply to raise the price level. The result is chaotic whipsaws in prices, with the chaos growing until a point is hit where money loses its properties and deflation occurs again.
The only indicators available to guide action are lagging indicators, which is what makes this a hysteresis problem. By the time you know you have a problem, there is no effective action that can correct course. You would need the impossibility of foresight to solve this type of problem, and it arises in any fiat-based system given sufficient time. Legitimate producers leave the market when there is no profit, the remaining producers are fueled by money printing, and you collapse into non-market socialism, which then leads to those calculation problems (despite there seemingly being a market, just not a functioning one).
There are two fundamental guidelines that must be met to stay on the safe path: producers must make a profit in terms of purchasing power, and factor markets must provide compensation sufficient for workers to support themselves, a wife, and three children, also in terms of purchasing power.
Both have largely failed as a trend, and will fail by objective measures by 2030-2033, if not sooner. AI eliminates demand in the factor markets.
When you have so many people alive who can do nothing in exchange for food, and no one willing to give them food, those people are left to die, and they won't go quietly. Order fails, supply chains fail, and the production flows needed to support that many people break down.
A Malthusian reversion under ecological overshoot, consistent with Catton's observations, means the planet might not support 2bn people in total after this happens. There are 8.2bn people alive today.
While AI works mindlessly as a slave, the difference between those numbers will be starving, desperate humans with weapons, who may eventually end up dying or delivering a brass verdict.
When you fail to have a working plan for survival based in reality, the alternative is death, and the loss of rationality in so many people today is a sign favoring death and violence over the alternatives. Failing to act to stop destructive dynamics is the same as supporting the outcomes when existence is on the line.
We do not live in a world of surplus; we live in a world of scarcity, and some people have been complacently raised as summer children, unaware and unprepared for the coming winter.
It cannot be called an assumption when it comes directly from the horse's mouth.
Plan and Do. At its most basic, you implicitly assume that you will survive the environment you intend to create.
It's not a big assumption to assume you are a rational and good person who intends to create an environment capable of raising children and allowing them to survive; we wouldn't be having this discussion if I didn't make that assumption.
A big assumption would be that you're insane and delusional, but I didn't say that, did I? I wouldn't bother saying a thing if that were the case.
I pointed out the problems, and the important parts that should occur naturally to anyone with thought and a good education.
Minds much greater than yours (and mine) have attempted to tackle the underlying problems, going all the way back to the turn of the last century (the 1900s). None of them have been solved, and the main lesson learned is that they likely can't be solved, given the mathematical properties of chaos. Mises wrote extensively on these subjects in the 1930s.
Failing to think rationally, grounded in external measures, and failing to have a plan tempered by reality is a choice for destruction/annihilation, once all indirection and contradictions have been resolved.
So you are either not a good and rational person, or you don't intend to survive this. Thank you for clarifying.
Which assumption do you see that's incorrect? I don't see one, and I'm sure you prefer clarity to arguing.
I’m not claiming to fix the global economy, nor denying real risks like job loss or scarcity. Labeling me a "summer child" assumes I am naive about those challenges...another projection.
In short, I described a practical benefit available today, not a perfect future. A thoughtful reply would engage with those points instead of refuting a position I never took.
Was your original question simply about your personal, individual level gains, or were you asking about a broader perspective (how it would affect all of us)? If it's the former, that's not a very interesting topic. Maybe the other poster assumed you wanted to discuss the latter?
It sounds like you took their comments personally, leading you to stop reading early. I can't blame you here, as I have done so myself before, but you might be missing some good points in the post which don't pertain to you specifically.
> When you have so many people alive, who can do nothing in exchange for food [...]
AI, for now, only exists in the digital space and that's what it will primarily disrupt (at least initially). You're still going to need real people to mow the grass, maintain homes/buildings, ship goods and perform other basic services that underpin society. None of that changes drastically with AI in the picture.
Some people will be out of jobs or replaced, but fundamentally they will just have to do something more tangible to provide value in order to pay the bills.
Feel free to verify this yourself through independent means if you have the capability to do so; it's a lot of time investment, but well worth it considering the subject matter represents an existential threat to survival.
Knowledge of existential dangers typically falls into a category where a person would gladly spend any amount of time or money (assuming they were capable of it) to continue surviving past that danger.
You can find the relevant parts in social contract theory; Malthus and Catton on ecological overshoot; Mises, Menger, and others on the economic foundations and dynamics of fiat currency and money printing; and history (the Federalist Papers, among many others). If you've had a classical education (trivium/quadrivium curricula), none of this should surprise you.
Being deadly serious here: there is a mountain of vetted material that supports what I said. It's not an original thought; it builds on foundations, the shoulders of giants, that have largely been abandoned by what is called education today. Public education doesn't really teach critical thought, and there are ongoing fights between administrators and philosophy departments in which administrators are trying to remove those optional classes entirely. You can't teach much philosophy without having taken a course in critical thinking.
> AI for now only exists in the digital space.
You are mistaken about that. AI is also being used in industrial automation, as well as other domains such as building architecture and manufacturing (e.g. there are already 3D-printed buildings, CAD machining, AI-based QA, etc.).
I'm sure you've heard of Siemens; there are many other global companies that have already bridged the digital-physical divide, some starting about 15-20 years ago, and the technology has only improved since then.
Integration is now happening at an exponential pace because short-term profits are involved, seeing as most business people view labor as their most expensive cost. It is fueled by non-reserve debt financing (money printing).
The magnitude between the peaks of the boom-bust cycle depends on the aggregate shortfall of demand (growth/production) relative to the capital (with profit baked in) lent over the entire cycle.
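As a stylized sketch of that claim (my own framing and invented numbers, not a standard formula): sum the capital lent plus the profit baked into it over the cycle, subtract the demand actually realized, and the remainder is the gap the bust has to unwind.

    # Hypothetical illustration of the aggregate shortfall described above.
    def bust_gap(capital_lent, required_return, realized_demand):
        obligations = sum(c * (1 + required_return) for c in capital_lent)
        realized = sum(realized_demand)
        return obligations - realized  # positive = claims that cannot be repaid

    # Invented numbers: lending per boom period, profit baked in, demand delivered.
    print(bust_gap(capital_lent=[100, 120, 150],
                   required_return=0.05,
                   realized_demand=[95, 100, 105]))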
The continuance of the cycle depends on money retaining its fundamental properties, and those properties fail under money printing, which has been the status quo since the 1970s to avoid deflation. It's a narrowing cliffside path that ends at a point no one can see ahead of time.
Coolidge's failure both to regulate rural bank loans initially and to bail those banks out later, prior to the Great Depression, led to the wholesale collapse of the seasonal lending facilities that largely caused the Great Depression as a chaotic whipsaw. This wasn't corrected until the US entered WW2.
These mechanics have repeated many times if you look into the detailed histories from primary sources: the Penn Central bankruptcy (1970), the Savings and Loan crisis (1980s), Atari (1980s), the dotcom bust (2000), 2008 (CDOs), and the sharp increase in money printing with the Fed setting reserve requirements to 0% (a non-fractional-reserve system) and the move to Basel III, the latter of which is grounded in objective value, a notion that fails given that the subjectivity of value has long been established by economists.
The issue is, at its core, one of understanding the mechanics of capital formation.
People are also generally extremely bad at recognizing exponential and logarithmic change, and cognitive tendencies like survivorship bias compound into an incomplete understanding that gets punished when reality punches back (and it doesn't pull its punches).
You neglect that at the end of the cycle, as with any Ponzi scheme, outflows exceed inflows and value collapses to zero. There is nothing left that can be exchanged.
Initially, in such structured systems (debt and money printing), the value and benefits are front-loaded, but someone always has to pick up the tab when the investment goes bad. If no one does, or no one even has the ability to, then the same thing that happened under Coolidge happens at a global socio-economic level.
Distortions are generally not visible except in retrospect, long after you can do anything to change the outcome.
That is why it is so critically important that the discussion of these dangers happens before integration actually takes place.
The process of integration burns the bridge behind us, preventing anyone from going back; it sails right into a maelstrom, risking the existence of all members, both those alive today and those yet unborn.
There are systems with points beyond which you simply cannot sail the ship to safety, because the dynamics involved overpower any human action, cumulative or individual, and anyone on such a ship will be crushed along with its remnants by the overpowering forces in the vortex. Visibility might be poor, but knowing about the danger ahead of time may allow safe passage in its vicinity. It's an analogy, sure, but there are many systems like this where, without the proper frame of reference, you sail blind and destructive outcomes await.
Whether it's a maelstrom, a dam breaking, an avalanche, or a tsunami, people survive by recognizing small but critical details and taking action well beforehand.
If you are free to be human because the tools are working, your marginal contribution will be small, since one guy who accepts not being human could supervise the tools.
fxtentacle•7h ago
I expect that as soon as AI starts to autonomously create value, there'll be a corporate race to capture that value. And then all the other humans won't gain any free time to do their thinking, creating, or living.