Prefer gen-AI for the observations it can retrieve; it presents them in a manner that looks remarkably lifelike.
Use gen-AI to assist composition, under the expectation that it will retrieve code with surprisingly high context sensitivity.
Never expect gen-AI to reason in general. It can't.
In pursuit of vast monetary incentives, the designers of gen-AI have overwhelming reasons to align gen-AI to fake reasoning. One very easy way to do this is to make agents appear agreeable. A more pernicious way is to observe edge cases and mechanically turk additional context sensitivity for those cases (RLHF).
If AGI is anywhere on the horizon, that will be surprising, given how little understanding there is of the nature of intelligence. Never mind the ethical concerns surrounding any pursuit of sentient AI.
_wire_•42m ago