> There are more, but this is around the point where I started getting bored. Sorry. A rare precious technically-rigorous deep dive into the universe’s greatest mystery, and I can’t stop it from blending together into “something something feedback”. Read it yourself and see if you can do better.
Man, Scott is really phoning it in these days. I remember when he used to do deep dives into methodology and experimental setups.
> In 2004, neuroscientist Giulio Tononi proposed that consciousness depended on a certain computational property, the integrated information level, dubbed Φ. Computer scientist Scott Aaronson complained that thermostats could have very high levels of Φ, and therefore integrated information theory should dub them conscious. Tononi responded that yup, thermostats are conscious. It probably isn’t a very interesting consciousness. They have no language or metacognition, so they can’t think thoughts like “I am a thermostat”. They just sit there, dimly aware of the temperature. You can’t prove that they don’t.
> Are the theories of consciousness discussed in this paper like that too? I don’t know.
Then what are we even doing here? Regurgitating an anecdote about that one time one of his friends looked cool?
I’m skipping his half-hearted gesture toward animal consciousness as an analogy, because not only is his naive account barely sketched out, but he also makes no effort to find a real scholarly account of it.
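For context on the Φ anecdote above: Tononi’s actual Φ is defined by minimizing information loss over all partitions of a system, which is combinatorially expensive and version-dependent. A much cruder proxy for “integration” is just the mutual information between the parts of a two-unit system, which at least shows why a correlated sensor-like device scores above zero while independent units score zero. This is a toy sketch, not IIT’s real measure; the distributions below are illustrative.

```python
import itertools
import math

def mutual_information(joint):
    """I(A;B) in bits for a joint distribution given as {(a, b): p}.

    A crude stand-in for 'integration': how much the whole system's
    state tells you beyond its parts taken separately. Not Tononi's Phi,
    which minimizes over partitions of larger systems.
    """
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two perfectly correlated binary units (thermostat-ish: state tracks input):
# knowing one unit fully determines the other, so the proxy is 1 bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent binary units: no integration at all, proxy is 0 bits.
independent = {(a, b): 0.25 for a, b in itertools.product((0, 1), repeat=2)}

print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

The point of the toy is only that “integration above zero” is a very low bar, which is exactly why the thermostat objection bites.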
> One of the founding ideas of Less Wrong style rationalism was that the arrival of strong AI set a deadline on philosophy. Unless we solved all these seemingly insoluble problems like ethics before achieving superintelligence, we would build the AIs wrong and lock in bad values forever.
Notably, LW’s other position was to postpone the deadline, by force if necessary: https://gwern.net/slowing-moores-law (I believe gwern is at least half-joking here, but his jocularity cannot be ascribed to LWers circa 2012 en masse.)
TimorousBestie•41m ago