Consciousness

"Artificial Intelligence"

This topic contains 1 reply, has 1 voice, and was last updated by PopeBeanie 4 years, 7 months ago.

    #24717

    PopeBeanie
    Moderator

    This topic’s title is in quotes because the term AI is currently defined by us humans in various ways. At least for now, let’s try to focus primarily on our shared human experience, with less speculation on how (e.g.) alien intelligence, or AI itself, might someday propose or enforce its own definitions on us. 🙂 lol haha I’ll be back, beep!

    What follows is a kind of overview of the topic, starting with this quote:

    There are 85 billion neurons, each with an average of 10,000 connections to other neurons. That means the possible pathways through the brain are more than the number of atoms in the universe. If this wasn’t enough of a problem to challenge, there are many different types of neurons in the brain, many different types of neurotransmitters, many different sensory inputs, and some support systems such as glial cells that have an unknown function in cognition.

    Rather than thinking of the brain as an independent and isolated computer that runs according to instructions from different parts, it’s better to think of it as a deeply connected element of a larger system that includes the body and external environment.
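
    To give a feel for the scale that quote describes, here is a minimal back-of-envelope sketch in Python. The neuron and synapse counts are the ones quoted above; the ~10^80 figure for atoms in the observable universe is a commonly cited rough estimate, and counting “pathways” as chains of successive connections is just one simple way to illustrate the combinatorial explosion, not the quoted author’s own calculation.

    ```python
    # Rough arithmetic only: the inputs are the figures quoted above,
    # plus a commonly cited ~10^80 estimate for atoms in the observable universe.
    NEURONS = 85e9              # ~85 billion neurons (per the quote)
    SYNAPSES_PER_NEURON = 1e4   # ~10,000 connections per neuron (per the quote)
    ATOMS_IN_UNIVERSE = 1e80    # rough, commonly cited estimate

    # Individual connections are "only" about 10^15...
    total_connections = NEURONS * SYNAPSES_PER_NEURON
    print(f"total synaptic connections: ~{total_connections:.1e}")   # ~8.5e14

    # ...so the astronomical number comes from *pathways*: the number of
    # distinct chains of k successive connections grows roughly like
    # NEURONS * SYNAPSES_PER_NEURON ** k.
    k = 1
    while NEURONS * SYNAPSES_PER_NEURON ** k < ATOMS_IN_UNIVERSE:
        k += 1
    print(f"chains of ~{k} connections already outnumber ~1e80 atoms")  # k == 18
    ```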

    When I argue with my physiology professor, he insists that neurophysiology is ultimately all digital to start with, so designing human-brain-based AI is not much of a stretch. I disagree!

    • Microsecond-level chemical interactions can vary depending on remote brain-cell connections (which, as noted above, are numerically astronomical), on the local conditions of thousands or millions of other brain cells of various types (i.e. not just “neurons”), and on still other conditions elsewhere in the body, e.g. the myriad hormones in the blood.
    • If brain activity is not the ultimate definition of “analog”, then what is?! Yes, a neuron’s primary output signal is a digital-looking spike, but can we just ignore all the analog inputs and analog effects bound up in each neuron’s “digital” activity? No, we can’t, and…
    • Any attempt to capture (or “upload” to a computer) the condition of every cell and chemical state in a brain at (say) one specific second (or millisecond, or microsecond) would be out of date practically instantly, because that state keeps changing with inputs from the rest of the body and the surrounding environment.

    For the foreseeable future, AI implementations will remain artificial: simulations of human intelligence, to say nothing of human consciousness.

    See an overview for laymen, written by a PhD in neurophysiology, here: quora.com/Why-is-it-so-hard-to-map-a-human-brain

     

    #31780

    PopeBeanie
    Moderator

    AI is usually, in itself, a side issue in a discussion of consciousness, yet people love to speculate on the inevitability of consciousness in AI. This, before we can even define our own consciousness. I personally feel that purposely building any “consciousness” into AI could be considered as unethical as experimenting with any human’s personal consciousness without their permission.

    As different people or groups of people create and embellish “AI”, or various versions of it, I’m sure questions of ethics will arise (for more than one possible reason), though perhaps a long time from now. Still, I feel strongly that any kind of AI or “created” consciousness should be required to include an on/off switch, a feature to be kept in mind by any designer intent on creating any kind of sentient entity, and by any owner or potential casualty of it.

    Meanwhile, discussions of AI consciousness, even more philosophically pertinent than discussions of how “zombies” compare to humans, can inform how we view human consciousness. Perhaps the notion of “creationism” should also inform us, since some humans may intentionally attempt to create a new form of (possibly self-serving) consciousness, or (heaven forbid, so to speak) accidentally create a dangerous or even malevolent form of consciousness when attempting to design AI simulators of human behavior.

    Aside from my personal opinions listed above, as they pertain to human consciousness vs. an artificially created consciousness (or at least their possible emotions), here is an interesting article, Tacit Creationism in Emotion Research, and below are a few excerpts from it:

    […] A fully evolutionary foundation for emotions research discourages hopes for simple elegant models but it can nonetheless advance research by dispelling misconceptions and suggesting new questions.

    […] This article argues that progress in emotions research has been slowed by tacit creationism. By tacit creationism I mean viewing organisms as if they are products of design, without attributing the design to a deity. Few scientists attribute the characteristics of organisms to a supernatural power, but many nonetheless view organisms as if they were designed machines. Organisms are, however, different from machines in several crucial ways.

    […] Thinking about emotions as if they were products of design encourages searching for a specific number of emotions with distinct boundaries and specific functions, as if they were parts of a machine. However, because emotions are products of natural selection, we should instead expect many states with indistinct boundaries and multiple functions.

     

