Scientists and Engineers Warn Of The Dangers Of Artificial Intelligence

  • metta

    Posts: 39167

    Jan 16, 2015 3:42 PM GMT
    Scientists and Engineers Warn Of The Dangers Of Artificial Intelligence


    http://www.iflscience.com/technology/scientists-and-engineers-warn-artificial-intelligence
  • metta

    Posts: 39167

    Jan 16, 2015 3:44 PM GMT
    Battlestar Galactica

  • metta

    Posts: 39167

    Jan 16, 2015 3:47 PM GMT
    Caprica

  • sothis999

    Posts: 58

    Jan 16, 2015 3:58 PM GMT
    Honestly, I think we'll end up merging with machines when the time comes. More and more, we'll have modifications to our biology with technological interfaces. It won't be much of an us-vs.-them type of situation. And it cannot be assumed that a machine would have an interest in destroying human beings or "conquering us." We don't try to exterminate or conquer many different species of animals that are less intelligent than we are, even if we do control them. There is no way we can predict what WILL happen though, as it is called the "technological singularity" for a reason. But I think humans and machines will become the same thing. Machines will require certain chemical responses to emulate human reasoning and become more "organic" in the process, and humans will be interested in enhancing their lives via technological interfaces. You'll just have a spectrum of how much one goes one way or the other.
  • HottJoe

    Posts: 21366

    Jan 16, 2015 4:06 PM GMT
    Just don't teach them religion!!!
  • HottJoe

    Posts: 21366

    Jan 16, 2015 4:08 PM GMT
    sothis999 said: Honestly, I think we'll end up merging with machines when the time comes. More and more, we'll have modifications to our biology with technological interfaces. It won't be much of an us-vs.-them type of situation. And it cannot be assumed that a machine would have an interest in destroying human beings or "conquering us." We don't try to exterminate or conquer many different species of animals that are less intelligent than we are, even if we do control them. There is no way we can predict what WILL happen though, as it is called the "technological singularity" for a reason. But I think humans and machines will become the same thing. Machines will require certain chemical responses to emulate human reasoning and become more "organic" in the process, and humans will be interested in enhancing their lives via technological interfaces. You'll just have a spectrum of how much one goes one way or the other.

    I've often thought this will be a likely scenario. Plus, it might lead to intelligent beings from earth exploring the greater universe.
  • Posted by a hidden member.

    Jan 16, 2015 4:33 PM GMT
    HottJoe said: Just don't teach them religion!!!


    I just hope they're masc androids.
  • HottJoe

    Posts: 21366

    Jan 16, 2015 4:35 PM GMT
    Radd said:
    HottJoe said: Just don't teach them religion!!!


    I just hope they're masc androids.

    I'm sure straight men will want fembots though.
  • Posted by a hidden member.

    Jan 17, 2015 6:15 AM GMT
    All of this has happened before, and it will all happen again.
  • AMoonHawk

    Posts: 11406

    Jan 17, 2015 7:47 AM GMT
    I think I meet a lot of people in society who exude artificial intelligence.
  • NeuralShock

    Posts: 411

    Jan 17, 2015 2:27 PM GMT
    I actually work on AI IRL.
  • Apparition

    Posts: 3534

    Jan 18, 2015 5:35 PM GMT
    Ideally, the AIs will be so far beyond us that they won't really care too much what we do.

    cf. the "Culture" series by Iain M. Banks (e.g., the Ship Minds).
    I think one of the more important things that nobody really considers is that the problem of machines having rights is really an "economic" problem. Consider an immortal being earning even 5% annual compound interest: eventually it owns everything (see the quick sketch below).
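
    A rough back-of-the-envelope sketch of that compounding point in Python (the 5% rate, the 1,000 starting stake, and the time horizons are just illustrative assumptions, not figures from the thread or the article):

        # Back-of-the-envelope: an immortal holder compounding wealth at 5% a year.
        # Starting stake and horizons are arbitrary, chosen only to show the growth curve.

        def compound(principal: float, rate: float, years: int) -> float:
            """Value of `principal` after `years` of annual compounding at `rate`."""
            return principal * (1 + rate) ** years

        if __name__ == "__main__":
            stake = 1_000.0
            for years in (50, 100, 200, 500):
                print(f"after {years:3d} years: {compound(stake, 0.05, years):,.0f}")
            # prints roughly 11 thousand, 132 thousand, 17 million, and 39 trillion

    A mortal saver gets maybe 40 productive years of compounding; an entity that never dies and never stops earning eventually dwarfs every mortal fortune, which is why the rights question ends up being an economic one.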
  • BIG_N_TALL

    Posts: 2190

    Jan 18, 2015 8:12 PM GMT
    So when are they going to make Data?
  • metta

    Posts: 39167

    Jan 18, 2015 8:43 PM GMT
    Elon Musk Donates $10M to Keep AI From Turning Evil


    http://www.wired.com/2015/01/elon-musk-ai-safety/
  • mwolverine

    Posts: 3386

    Jan 18, 2015 10:15 PM GMT
    I suppose it is fitting that people ascribe human traits to machines.
    Does it tell us more about humans or machines?

    The article doesn't really say all that much.
    I'm disappointed it didn't even mention Asimov.
  • Posted by a hidden member.

    Jan 18, 2015 10:24 PM GMT
    No SkyNet posts yet? I am disappointed!

    A.I. doesn't use reasoning, so everything becomes black and white. It will become evil when you have evil humans putting their corruption into it.

    Mainly any government, lobbyist, or religious faction sticking its poison into the programming.
  • Posted by a hidden member.

    Jan 18, 2015 10:47 PM GMT
    metta8 said: Scientists and Engineers Warn Of The Dangers Of Artificial Intelligence


    http://www.iflscience.com/technology/scientists-and-engineers-warn-artificial-intelligence


    This caught my attention: "He worries that even if most researchers behave responsibly, in the absence of international regulation, a single rogue nation or corporation could produce self-replicating machines whose priorities might be very different to humanity's, and once industries become established they become resistant to control."

    HAL in 2001: "This mission is too important for me to allow you to jeopardize it."

    And think about Eisenhower's Farewell Address warning of the "Military-Industrial Complex":

    "This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence -- economic, political, even spiritual -- is felt in every city, every State house, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society."

    Makes me wonder what he was afraid of... or actually knew. Who really controls the government? It seems even the most liberal President gets "the memo," and more wars are the order of the day, and of the decade.
  • Posted by a hidden member.

    Jan 19, 2015 1:44 AM GMT
    The dangers of AI can be seen on Person of Interest every Tuesday night on CBS.

    Brilliant show.
  • being_human

    Posts: 152

    Jan 19, 2015 5:38 AM GMT
    All this is making me wanna just get up and sit under a tree.
  • Posted by a hidden member.

    Jan 19, 2015 5:49 AM GMT
    I'm more concerned with the dangers of artificial stupidity.
  • mwolverine

    Posts: 3386

    Jan 19, 2015 6:06 AM GMT
    Is that worse than natural stupidity?
  • Posted by a hidden member.

    Jan 19, 2015 11:09 AM GMT
    sothis999 said: Honestly, I think we'll end up merging with machines when the time comes. More and more, we'll have modifications to our biology with technological interfaces. It won't be much of an us-vs.-them type of situation. And it cannot be assumed that a machine would have an interest in destroying human beings or "conquering us." We don't try to exterminate or conquer many different species of animals that are less intelligent than we are, even if we do control them. There is no way we can predict what WILL happen though, as it is called the "technological singularity" for a reason. But I think humans and machines will become the same thing. Machines will require certain chemical responses to emulate human reasoning and become more "organic" in the process, and humans will be interested in enhancing their lives via technological interfaces. You'll just have a spectrum of how much one goes one way or the other.


    We don't try to conquer other less intelligent species; we drive them to extinction through neglect. So if AI does that to us, should we feel better that it was just them not noticing or giving a shit rather than actively committing genocide? For the dead humans it won't really matter. Though given our track record, I don't see that we deserve any less.
  • metta

    Posts: 39167

    Jan 21, 2015 10:40 PM GMT
    Computer Scientists Generate A Self-Aware Mario That Can Learn And Feel


    http://www.iflscience.com/technology/computer-scientists-generate-self-aware-mario-can-learn-and-feel
  • sothis999

    Posts: 58

    Jan 30, 2015 8:14 PM GMT
    Wyndahoi said:
    sothis999 said: Honestly, I think we'll end up merging with machines when the time comes. More and more, we'll have modifications to our biology with technological interfaces. It won't be much of an us-vs.-them type of situation. And it cannot be assumed that a machine would have an interest in destroying human beings or "conquering us." We don't try to exterminate or conquer many different species of animals that are less intelligent than we are, even if we do control them. There is no way we can predict what WILL happen though, as it is called the "technological singularity" for a reason. But I think humans and machines will become the same thing. Machines will require certain chemical responses to emulate human reasoning and become more "organic" in the process, and humans will be interested in enhancing their lives via technological interfaces. You'll just have a spectrum of how much one goes one way or the other.


    We don't try to conquer other less intelligent species; we drive them to extinction through neglect. So if AI does that to us, should we feel better that it was just them not noticing or giving a shit rather than actively committing genocide? For the dead humans it won't really matter. Though given our track record, I don't see that we deserve any less.


    This isn't true for the vast majority of species though.