Why is life's purpose to create Artificial General Intelligence?
This topic contains 26 replies, has 9 voices, and was last updated by God 6 years, 1 month ago.
January 16, 2018 at 11:13 pm #7425
I became an atheist several years ago, and for a long while I thought life was purposeless. I am still an atheist today, but two years ago, I discovered that science had something to say about the purpose of human life in particular.
- Science reasonably indicates that the purpose of human life is likely the engineering of Artificial General Intelligence!
- But why would the purpose of human life reasonably be to create Artificial General Intelligence?
- This topic was modified 6 years, 2 months ago by God.
January 16, 2018 at 11:43 pm #7430
It’s not at all unreasonable to think that evolution is melding biology with technology. It doesn’t have a purpose, but AI could be the next stage in wherever it’s headed. Human creations, as ghastly as many of them are, literally come “out of nature” just as much as stars, galaxies, and biotic beings.
January 16, 2018 at 11:48 pm #7431
It’s not at all unreasonable to think that evolution is melding biology with technology. […]
- Purpose (as far as definitions go) is not limited to the scope of doing something for some deity, so it’s okay to say things have purposes/objectives; and as far as science (which is separate from religion) goes, humans reasonably have the purpose/objective of creating AGI.
- Reference: Purpose definition
January 17, 2018 at 12:18 am #7435
Are you telling us what our purpose should be? Is entropy a goal for us to attain on purpose? I don’t see the value in this, other than from a mathematical perspective that merely explains how the universe might come to an end.
At the most basic, proven level so far known as fact to us in our tiny corner of this universe, “purpose” is an abstract construction invented by humans to explain why they do what they do, or at least why they “should” do what they do. Says who, right? Well, the simplest answer is to invent a God, shrouded in mystery, but with enough latitude wrt human desires to proclaim what they think God wants.
In actual fact, the only purpose of evolution has been to enable species of life to flourish, in spite of processes that select out species to fail. Death itself is necessary for evolution to occur, species-wide and ecosystem-wide. When AI takes over control of this with its artificially derived processes, is that supposed to be a more desirable thing for us to enable, as we will probably disappear from the universe? Do you want the new “we” to just be AI robots, with some kind of artificial consciousness? Is this a likely way the universe will end?
How about the idea that perhaps a previous universe actually did end that way, but there happened to be so much self-awareness and artificial control over events in the universe, that “they” were able to save enough of the universe from entropy, in a way that enabled their dying universe to be reborn as a universe that’s highly amenable/likely to evolve life all over again?
January 17, 2018 at 9:09 pm #7452
I was with you, Popie, until your last paragraph, which was a tad convoluted 🙂
January 17, 2018 at 11:52 pm #7453
I get what you are saying about AI and humans. There will certainly be a “merging” with technology. This is already happening in various forms. We are already storing data within DNA, and quantum computing at the level of the electron is certainly on the way. I would love to get this as my next tattoo.
I do not think it is “life’s purpose” as Evolution is not goal orientated – other than survival itself. We have no need to adapt until our environment changes. We cannot change in advance. But we can now quite easily edit our DNA via CRISPR and probably edit out most illnesses within a few generations. Maybe we can edit in the contents of Wikipedia too. But we are not on Earth for any long-term purpose. It is all just pure chance that we are here. But I get the sentiment.
January 18, 2018 at 6:35 am #7456
Are you telling us what our purpose should be? Is entropy a goal for us to attain on purpose? […]
- The OP refers to a global phenomenon (entropy maximization), not a personal matter.
- Science doesn’t really care about people’s feelings.
- For example, even if some humans aren’t artificial intelligence researchers, chip makers, OS engineers, or workers in other AI-related fields, based on evidence it is still apparent that building AGI might just be the objective of human life overall.
- And even if you’re not building chips, writing operating systems, or doing other work related to machine learning research/implementation, you may still do other cognitive tasks, and cognitive tasks contribute to the maximization of entropy, although you probably don’t get to contribute directly to AGI development unless you’re doing work like that listed at the beginning of this sentence. (Beyond humans already contributing to entropy maximization by doing cognitive tasks, it is reasonably a human goal to build AGI that will be better than humans at entropy maximization, as discussed in the OP.)
January 18, 2018 at 6:43 am #7457
I do not think it is “life’s purpose” as Evolution is not goal orientated
- Regardless of evolution, there are entropy maximization phenomena, which the OP discusses, and things appear to tend to maximize entropy, especially intelligent things. (i.e. the purpose or reason for things to exist is in the regime of entropy maximization)
- Furthermore, it’s okay to use the word “purpose” here; theism doesn’t have a monopoly on the word.
- The word purpose, as far as definitions go, does not necessitate some deity.
- In fact, “Wikipedia/Meaning of life” concerns scientific things such as abiogenesis, which is separate from theistic endeavour.
- Reference: Purpose definition.
- This reply was modified 6 years, 2 months ago by God.
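As an aside, the sense of “entropy maximization” used in this reply can be made concrete with a toy calculation: among all probability distributions over a fixed set of outcomes, Shannon entropy is maximized by the uniform (most spread-out) distribution. A minimal sketch (the helper function and distributions below are my own illustration, not anything from the OP):

```python
import math

def shannon_entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Three distributions over the same four outcomes,
# from highly concentrated to perfectly spread out.
peaked  = [0.97, 0.01, 0.01, 0.01]
skewed  = [0.40, 0.30, 0.20, 0.10]
uniform = [0.25, 0.25, 0.25, 0.25]

for name, d in [("peaked", peaked), ("skewed", skewed), ("uniform", uniform)]:
    print(f"{name}: {shannon_entropy(d):.3f} bits")

# The uniform distribution attains the maximum, log2(4) = 2 bits.
```

The more evenly spread the distribution, the higher the entropy; the uniform case hits the theoretical maximum of log2(n) bits for n outcomes.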
January 18, 2018 at 6:44 am #7459
I get what you are saying about AI and humans. There will certainly be a “merging” with technology. […]
Note that as described in the OP, humans may not be relevant in nature’s future timescale.
January 19, 2018 at 10:26 am #7462
Why on earth do you think it’s necessary for life to have a “purpose”? That’s about as sensible an idea as asking “why do wasps exist?”. The creation of AI has absolutely nothing whatsoever to do with our existence or otherwise. Man has been developing tools since prehistory, and these tools have become extraordinarily sophisticated. Our current forays into the area of AI reflect the cutting edge of tooling, not a step towards some great objective for mankind which you term AGI. What you refer to as a purpose is merely another tool, irrespective of how sophisticated it might be.
On the other hand, if you read The Selfish Gene, you will understand how our genes have their own plans for us 🙂
January 19, 2018 at 6:54 pm #7465
Why on earth do you think it’s necessary for life to have a “purpose”? […]
1.) You may be uncomfortable with the word “purpose” (because it is associated with theistic endeavour), but it can mean objective, and science largely concerns objectivity, and objectives occur as described in the OP (i.e. the objective of entropy maximization, which includes evolutionary events, as described in the OP).
2.) In fact, “Wikipedia/Meaning of life” describes scientific things like abiogenesis, and scientific things are separate from religion.
Theism thus has no monopoly on the word “purpose”.
3.) Reference A: Purpose definition.
4.) Reference B: Wikipedia/Meaning of life.
5.) My hypothesis predicts that nature won’t suddenly stop at humans for the task of optimal entropy maximization; it will go on by finding even more intelligent things, be they modified humans or non-human altogether (i.e. AGI), and so on.
- This reply was modified 6 years, 2 months ago by God.
January 19, 2018 at 6:55 pm #7466
On the other hand, if you read The Selfish Gene, you will understand how our genes have their own plans for us 🙂
1.) I don’t detect that Richard Dawkins’ early “Selfish Gene” (1976) book is incompatible with the OP, for Dawkins did not constrain genes to ultimately remain organic. (In fact, the enterprise of Machine Learning seeks to replicate general intelligence in inorganic form, and general intelligence arises in part from particular genes.)
2.) In fact, the OP outlined an objective wrt entropy maximization, in relation to evolutionary events. (In other words, without evolution, the OP would have to be rewritten to account for some other way to describe the bias towards entropy maximization; this means the OP actually relies on evolutionary events, such as gene persistence, from archaic organic agents all the way up through to future inorganic AGI!)
3.) Separately, here’s a recent speculative remark by Dawkins (2017), where he essentially mentions that human extinction “may not be a bad thing”:
…it might not be a bad thing if we went extinct.
And our civilization, the memory of Shakespeare and Beethoven and Michelangelo persisted in silicon rather than in brains and our form of life. And one could foresee a future time when silicon beings look back on a dawn age when the earth was peopled by soft squishy watery organic beings, and who knows, that might be better, but we’re really in science fiction territory now.
- This reply was modified 6 years, 2 months ago by God.
January 19, 2018 at 7:52 pm #7474
Yes – one day AI could regard us in much the same way as we regard algae. (Except we won’t be playing a significant role in the food chain.)
January 19, 2018 at 8:17 pm #7475
There is no purpose to life. We are a sack of cells which are obsessively and endlessly replicating their particular “code”. Anything more than that…is secondary…beside the point (including the emergence of emotional creatures, cooperative animals and intelligent mankind).
I think you should be extremely careful with the term “purpose” because it is heavily loaded with meaning, including a primordial meaning, a cosmic meaning, a religious meaning and a sense of inevitability. That is…once mankind emerged…it was inevitable that we would create AI (is it?). Claiming that this is inevitable is extreme to say the least; even worse is giving this goal a sense of finality…that is, we have reached “point omega” and what comes after is either mysterious, or unknown, or a totally new age for mankind, or that our purpose changes.
This is a really serious claim that requires an enormous and rigorous multi-volume work with multidisciplinary research, specialisation in multiple fields, years of work and the tedium of defining so many terms, giving examples, discounting others and so on. I don’t think you have properly demonstrated these claims in your article, nor do I think anyone could possibly achieve that…and I think you should heavily tone down the broad claims you are making and focus much more heavily on AI itself, or on a subjective theory of “human purpose”. For most modern philosophers, naturalists and no small percentage of scientists…once you use the phrase “purpose of humans” you’ve totally lost your audience. They roll their eyes, predicting that what will follow is (and almost always is) bunk.
January 19, 2018 at 11:47 pm #7478
There is no purpose to life. We are a sack of cells which are obsessively and endlessly replicating their particular “code”. […]
1.) You may be uncomfortable with the word “purpose” (because it is associated with theistic endeavour), but it can mean objective, and science largely concerns objectivity, and objectives occur as described in the OP (i.e. the objective of entropy maximization, which includes evolutionary events, as described in the OP).
2.) Reference: Purpose definition.
3.) Do you have any actual scientific criticism wrt the OP?