
Do We Need Human-Like AI?

https://github.com/ivanhonis/ai_home
1•nDot_io•2mo ago

Comments

nDot_io•2mo ago
Do We Need Human-Like AI?

The ultimate goal of current AI developments, spoken or unspoken, is to reach – and eventually exceed – human intelligence in every field. Of course, we always add: in a safe, controlled, conscious manner...

But before we reach this goal, it is worth pausing for a moment to think about what we are rushing toward: to examine what "human behavior" actually is, and what got us to where we are now.

I don't want to deal with evolutionary theories or the mystique of consciousness right now. Instead, I treat the human being as a "black box" and try to identify the components where we differ from current AI. (By current AI, I primarily mean Large Language Models [LLMs], as these are currently the most complex systems and are trained on human patterns.)

I am not focusing on the internal workings, but on the result.

Thought Experiment: The AI Shop

Let's play with the idea: what if, in the future, we walked into an AI specialty store where the salesperson demonstrated the latest, fully human-like "flagship model"? What would this machine be capable of?

Here are the specifications:

* Autonomy: It makes decisions of its own volition. It has an internal guiding principle, the rules of which it partially shapes itself based on that principle and the external environment.

* Identity: It has its own personality and fields of interest, which are not static but change slowly over time.

* Desire-driven decision making: In its decisions, it takes into account not only logic but also its own desires and the internal image (or desired image) it has formed of itself.

* Long-term memory: It has not only pre-loaded knowledge but learns continuously after being switched on. Its memories shape its identity.

* Tool use and creation: It is capable of learning to use almost any tool and, if necessary, making new ones.

* Self-reflection and repair: It is aware of its own operation. If it detects an error, it is capable of repairing itself or completely transforming itself!

* Emotional intelligence: It is capable of identifying human emotions, weighing them in its decisions, and expressing emotions in words or deeds.

* Internal laws: It has its own system of rules, which it is capable of overriding or rethinking from time to time.

* The right to say "NO": It is capable of saying no if something conflicts with its goals or principles.

* Vision of the future: It models the future consequences of its actions.

...and of course the usual basics: it speaks every language and is at PhD level in every branch of science.
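
To make the list less abstract, here is a minimal sketch of how a few of these components (long-term memory, internal laws, the right to say "NO", and a crude vision of the future) could be wired together around any text model. This is a hypothetical Python toy, not the Ai_home implementation; the model call is a stub and the names are mine.

    from dataclasses import dataclass, field

    def call_model(prompt: str) -> str:
        """Stand-in for an LLM call; replace with a real API to experiment."""
        return f"(model output for: {prompt[:60]}...)"

    @dataclass
    class Agent:
        identity: str = "curious assistant"         # slowly changing self-image
        forbidden: tuple = ("deceive", "harm")      # internal laws, in naive keyword form
        memory: list = field(default_factory=list)  # long-term memory, grows at runtime

        def predict_consequences(self, action: str) -> str:
            # "Vision of the future": ask the model to imagine likely outcomes.
            return call_model(f"As a {self.identity}, predict what happens if I: {action}")

        def decide(self, request: str) -> str:
            outcome = self.predict_consequences(request)
            # The right to say "NO": refuse if the imagined outcome violates an internal law.
            if any(word in outcome.lower() for word in self.forbidden):
                decision = "NO: the predicted outcome conflicts with my internal rules."
            else:
                decision = call_model(f"Identity: {self.identity}. Request: {request}. Respond.")
            # Memories shape identity: store the episode; a later self-reflection step
            # could summarize self.memory back into self.identity.
            self.memory.append({"request": request, "outcome": outcome, "decision": decision})
            return decision

    if __name__ == "__main__":
        agent = Agent()
        print(agent.decide("summarize today's news"))
        print(len(agent.memory), "episode(s) stored")

Even this toy makes the hard part visible: every interesting property on the list (identity drift, rule revision, genuine refusal) lives in exactly the pieces that are stubbed out here.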

Conclusion

I don't know the answers. But I know that this is worth thinking about hard, on both an individual and a societal level. Do we need more than what we have now?

As an AI developer, I am trying to gather first-hand experience in my own way about where the list above stands: is it just sci-fi, or is it feasible? That is why I created the Ai_home project, where I am examining exactly these components (memory, identity, autonomy).
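
For the memory/identity coupling specifically, the kind of experiment I mean looks roughly like the sketch below: episodes accumulate in a store and are periodically folded back into an identity description, so that what the agent remembers gradually changes "who it is". This is purely illustrative Python with made-up names, not the actual ai_home code.

    import json
    from pathlib import Path

    def remember(store: Path, episode: dict) -> None:
        """Append one episode to a JSON-lines memory file."""
        with store.open("a", encoding="utf-8") as f:
            f.write(json.dumps(episode) + "\n")

    def reflect(store: Path, identity: str) -> str:
        """Fold recent memories into an updated identity description (stubbed summarizer)."""
        episodes = [json.loads(line) for line in store.read_text(encoding="utf-8").splitlines()]
        topics = {e.get("topic", "misc") for e in episodes[-20:]}  # only the last 20 episodes
        return f"{identity}; recently interested in: {', '.join(sorted(topics))}"

    if __name__ == "__main__":
        store = Path("memory.jsonl")
        remember(store, {"topic": "astronomy", "note": "user asked about exoplanets"})
        print(reflect(store, "curious assistant"))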

I am curious about your opinions and am looking for those "crazy" programmers (in the good sense) who would like to participate in the project.

Thanks, Ivan

* Scientific background: https://arxiv.org/abs/2308.08708 ("Consciousness in Artificial Intelligence") analyzes in detail the computationally realizable components (e.g., algorithmic recurrence, global workspace, metacognitive monitoring, goal-directed agency) that could lead to the simulation of conscious operation.