I am so ready and eager for a paradigm shift in hardware & software. I think in the future 'software' will disappear for most people; they'll simply ask and receive.
1) Is the ultimate form of this technology ethically distinguishable from a slave?
2) Is there an ethical difference between bioengineering an actual human brain for computing purposes, versus constructing a digital version that is functionally identical?
[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
So, probably not really falsifiable in the sense you are considering, yeah.
I don’t think that makes it meaningless, or a worthless idea. It probably makes it not a scientific idea?
If you care about subjective experiences, it seems to make sense that you would then concern yourself with subjective experiences.
Take Blockhead, the great lookup table whose memory banks take up a galaxy’s worth of space, storing a canned response for every possible partial conversation history with it: should we care about not “hurting its feelings”? If not, why not? It responds just as a person in a one-on-one online chat would.
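To make the thought experiment concrete, here is a toy sketch of what Blockhead amounts to computationally. The table entries and the respond helper are my own hypothetical illustration; a real table over every possible partial conversation history would of course be astronomically large:

    # Toy "Blockhead": a chatbot that is nothing but a lookup table keyed on
    # the entire conversation history so far. No state, no reasoning, just retrieval.

    # Hypothetical, absurdly abridged table; the real one needs an entry for
    # every possible partial conversation, hence the galaxy-sized memory banks.
    LOOKUP_TABLE = {
        ("Hello",): "Hi! How are you today?",
        ("Hello", "Hi! How are you today?", "I'm feeling a bit down."):
            "I'm sorry to hear that. Do you want to talk about it?",
    }

    def respond(history):
        """Return the canned reply stored for this exact partial conversation."""
        return LOOKUP_TABLE.get(tuple(history), "...")

    # From the outside it behaves like a chat partner, yet nothing is computed.
    print(respond(["Hello"]))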
Is “Is this [points at something] a moral patient?” a question amenable to scientific study? It doesn’t seem like it to me. How would you falsify answers of “yes” or “no”? But, I refuse to reject the question as “meaningless”.
You can't be serious. Whatever one wishes to say about the framing, you cannot deny conscious experience. Materialism painted itself into this corner through its bad assumptions. Pretending it hasn't produced this problem for itself, that it doesn't exist, is just plain silly.
Time to show some intellectual integrity and revisit those assumptions.
I don’t think there’s any distinction between sentience resulting from organic or digital processes. All organic brains are subject to some manner of stochasticity which determines emergent behavioral properties, and I think the same will be true of digital or hybrid brains.
So if you clone me in digital form, it’s not me anymore; it’s not even my twin. It’s something that was inspired by me, but it’s not me. This is a distinct individual because of the random processes which govern behavior, personality, and so on: a different person, so to speak. So I never understood why the MC felt any attachment or responsibility towards his images, beyond the kindness you’d show any other person.
The images, or persons as I’d like to think of them, are shown in the story as sentient. But sentience is only one part of consciousness, and the images in Lena seem incapable of self-determination. Or maybe they’re the equivalent of a stunted form of animal consciousness, not human consciousness. Human consciousness asserts its right to self-determination by virtue of belonging to an apex organism.
But even cows and sheep get mad and murderous when you’re unkind to them. Donkeys will lash out if you’re being a jerk. So I think two things: 1) simply creating an image of behaviors is not creating consciousness, and 2) human consciousness possesses a distinct quality of self-determination.
The main thing I’ve noticed about conscious beings is that they have a will to assert themselves, and the animals or humans that don’t possess or demonstrate that quality to an appreciable degree are usually physiologically damaged (perhaps by malnutrition or trauma). I don’t expect consciousness born of digital processes to be any different.
But:
> it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain.
I don't think it would be fair to say this. LLMs are certainly not worthy of ethical consideration. Consciousness needs to be demonstrable. Even if the synaptic structure of a digital brain approaches 1:1 similarity with a human brain, the program running on it does not deserve ethical consideration unless and until consciousness can be demonstrated as an emergent property.
(Frankly, this is all a category mistake. Human minds possess intentionality. They possess semantic apprehension. Computers are, by definition, abstract mathematical models that are purely syntactic and formal and therefore stripped of semantic content and intentionality. That is exactly what allows computation to be 'physically realizable' or 'mechanized', whether the simulating implementation is mechanical or electrical or whatever. There's a good deal of ignorant and wishy-washy magical thinking in this space that seems to draw hastily from superficial associations like "both (modern) computers and brains involve electrical phenomena" or "computers (appear to) calculate, and so do human beings", and so on.)
The article does not distinguish between training and inference. Google Edge TPUs (https://coral.ai/products/) can each perform 4 trillion operations per second (4 TOPS) while drawing 2 watts of power, i.e. 2 TOPS per watt. So inference already runs on far less than the 20 watts the paper attributes to the brain. To be sure, LLM training is expensive, but so is raising a child for 20 years. Unlike the child, LLMs can share weights and amortise the energy cost of training.
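A minimal back-of-the-envelope sketch of that comparison, using only the numbers quoted above (4 TOPS at 2 W per Edge TPU, 20 W for the brain), and saying nothing about whether TPU int8 ops and whatever the brain does are comparable units:

    # Power-efficiency comparison using only the figures cited in this comment.
    edge_tpu_ops_per_s = 4e12   # 4 TOPS per Coral Edge TPU
    edge_tpu_watts = 2.0        # ~2 W per device
    brain_watts = 20.0          # figure the paper attributes to the brain

    tops_per_watt = edge_tpu_ops_per_s / edge_tpu_watts / 1e12
    print(f"Edge TPU efficiency: {tops_per_watt:.0f} TOPS/W")

    # How much raw inference throughput fits in the brain's power budget?
    tpus_in_budget = brain_watts / edge_tpu_watts
    total_tops = tpus_in_budget * edge_tpu_ops_per_s / 1e12
    print(f"{tpus_in_budget:.0f} Edge TPUs within 20 W -> ~{total_tops:.0f} TOPS")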
Another core problem with neuromorphic computation is that we currently have no meaningful idea of how the brain produces intelligence, so it seems a bit premature to claim we can copy that mechanism. Here is what Nvidia Chief Scientist Bill Dally (one of the main developers of modern GPU architectures) says on the subject: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" From the "Hardware for Deep Learning" HotChips 2023 keynote, https://www.youtube.com/watch?v=rsxCZAE8QNA, at 21:28. The whole talk is brilliant and worth watching.