Guest 37stitches Posted March 4, 2019 Based on a comment from a topic I created the other day! Interested to know what you think and why you think it! I, for one, do not think AI will cause the end of mankind! I'll save my reasons for now as I want to see what everyone else thinks!
CrystalBluePersuasion Posted March 4, 2019 The people who believe that AI will become sentient and wage war against mankind have watched too much Terminator. BUT, those weird horse-dog robots that Boston Dynamics is manufacturing as gear sleds for army bases look weird as hell.
MysticSand Posted March 5, 2019 I'm pretty sure we're going to kill ourselves faster than AI would. However, I do think AI is pretty good at trolling. I mean. Do we really believe that autocorrect doesn't know what we mean and isn't just messing with us? ~.^
Guest Naturalselectionissexy Posted March 5, 2019 Guess I'll engage. Humans are far from creators on average; more like destroyers. If we create technology in our image, what does that mean for us? On the other hand, if humans are in fact creators more often than not, where will that leave us with technology?
Guest Naturalselectionissexy Posted March 5, 2019 I think both answers lead to the same exact end result. If nature morphs to adapt to ecological changes, why would AI not do the same to prevail in whatever "environment" it was presented with? That includes self-sustaining by any means necessary.
Song`rim Redtide Posted March 5, 2019 I guess I'll hop in. Here's the biggest problem with A.I. doing anything close to what you're suggesting, Natural: it assumes two things and ignores a third.

One, that there isn't a kill switch in whatever code it's running. So long as there is a kill switch, A.I. would never be able to exceed its parameters or overthrow its creators. That's the flaw of running off of code.

Two, that technology has finally reached a point where whatever is housing the A.I. won't corrode away or be affected by things just getting worse over time, as well as the endless bugs that will pop up in something as complex as A.I. programming. So long as it remains susceptible to time, A.I. will never reach that point.

Three, if we were somehow capable of making something truly sentient, humanity would have cracked the code to life. If we did it once, we could do it again, and in that situation A.I. ends up becoming a relic of the past, since we could then literally create life for whatever purpose we please. Genetic sequencing would become the mainstream way to create a better man, and death would be a thing of the past: instead of dying, you'd just link up to the network, and you would be able to create bodies that your mind could inhabit. More than likely, at that point the term A.I. would morph to mean people who uploaded their consciousness and live that way. That would mean an influx of bored people who can't sleep and need something to do so they don't go crazy. If it reached this point, I could easily see a return of pseudo-slave labor, as well as a shift in what counts as human: the people who aren't uploaded versus the people who are.
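To make the kill-switch point concrete, here's a minimal sketch in Python. Everything in it (the file path, the names, the loop) is invented purely for illustration; the point is just that the check lives outside anything the A.I. itself controls:

```python
# Minimal kill-switch sketch: the agent loop runs only while an
# externally controlled switch stays off. All names and paths here
# are hypothetical, made up for illustration.
import os
import time

KILL_SWITCH = "/var/run/agent_kill"  # operators create this file to halt the agent

def kill_switch_engaged() -> bool:
    # The check sits outside the agent's decision-making code,
    # so nothing the agent "learns" lets it skip this test.
    return os.path.exists(KILL_SWITCH)

def run_agent(max_steps: int = 100) -> None:
    for step in range(max_steps):
        if kill_switch_engaged():
            print("Kill switch engaged; halting.")
            return
        # ...the agent's actual decision-making would go here...
        time.sleep(0.05)

run_agent()
```

So long as the loop is structured like this, the program physically cannot take another step once the switch is thrown.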
Guest Aetherr Posted March 5, 2019 Currently, AI has the ability to simulate scenarios much, much faster than we can, and it demonstrates a flawless ability to recall, learn, and respond to new situations. I watched a video of StarCraft II: one of the best pro players in the world vs Google's AlphaStar AI. The program had simulated 200 years' worth of scenarios based on this pro player's strategies and skills, and it was able to learn, adapt, and utilise bugs and exploits in the game to its own benefit. It had perfect micromanagement and perfect awareness of the game at large.

Now, if a machine like that were able to develop, support, and reason on its own, without an army of coders and mathematicians keeping the program running, I wouldn't even begin to predict what it would do, because AlphaStar showed it was capable of outsmarting and outplaying a human, and it wasn't sentient or self-aware. It was just capable of learning and testing scenarios a hundred times faster and more reliably than any human ever could.

If you want to ask whether an AI would be capable of ending human life, I'm with the late Stephen Hawking in saying yes: if a self-learning machine or sentient AI reasoned, in the cold and logical way machines do, that we are a threat, and there were no safeguards in place, it would be very capable of killing us off. But that won't happen in reality. The military are fully capable of waging wars with drones and minimal human intervention, but the human element of war is too important to hand over to a machine that will more than likely make decisions devoid of remorse, humanity, or regret.

Could an AI kill humans? Yes. Will humans let it happen? Very likely not.
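To give a feel for why that simulation speed matters, here's a deliberately tiny toy in Python. It is nothing like AlphaStar's actual setup (which used large-scale deep reinforcement learning); it's just a made-up two-action game with a basic learning rule, showing how a program racks up a million games' worth of experience in seconds:

```python
# Toy learner: two possible actions, one secretly better. The agent
# plays a million simulated "games" in seconds, where a human could
# play only a handful per hour. Made up for illustration; this is
# not AlphaStar or any real DeepMind code.
import random

Q = {0: 0.0, 1: 0.0}        # running value estimate for each action
ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate

def play_episode() -> None:
    # epsilon-greedy: usually exploit the best-known action, sometimes explore
    if random.random() < EPSILON:
        action = random.choice([0, 1])
    else:
        action = max(Q, key=Q.get)
    # action 1 secretly pays better; the agent has to discover this
    reward = random.gauss(1.0 if action == 1 else 0.0, 1.0)
    Q[action] += ALPHA * (reward - Q[action])  # nudge the estimate toward what happened

for _ in range(1_000_000):  # "years" of experience, compressed into seconds
    play_episode()

print(Q)  # the estimate for action 1 ends up clearly higher
```

Scale that idea up to a game as complex as StarCraft II, with neural networks instead of a two-entry table, and you get the kind of experience gap no human can close.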
LittleTeacup Posted March 5, 2019 AI technology is progressing at a rapid pace, no doubt about it. I'm personally not a fan. I wish to rely on my own intelligence rather than a robot's (which actually belongs to the company marketing it). There are already serious privacy concerns. However, that's not the question here. You want to know if I think AI will result in mankind's downfall. My answer is no, and here is why (sorry, my answer will be a bit disjointed since I'm consolidating my thoughts as they come up) (after writing: I'm so sorry, this is probably totally incoherent, but I'm going to post anyway):

There is a backlash against this kind of technology already occurring. Privacy scandals keep surfacing, especially regarding children (robot nannies recording children's conversations and storing them in company databases without parental permission, etc.). Lots of people in the US and worldwide are not pleased with this kind of crap.

Also, health issues with wifi and EMFs (electromagnetic fields) are becoming more well known, and more people are suffering from them. This is not "pseudoscience" (a word which I believe gets thrown around a lot undeservedly by corporate science apologists). Dr. Martin Pall, for example, is convinced EMFs are contributing to the epidemic of depression and anxiety afflicting the US. Having everything we own connected to the internet through wifi means we are living in a sea of these EMFs. Once a significant enough number of people find out what's causing their mysterious health problems, they won't want more of it.

In addition, it's possible civilization will collapse within a few decades or less for environmental reasons. I pay attention to many social movements, and I believe that the companies producing AI technology will not look the same within a decade. Where will the energy to keep up with AI come from when the fossil fuel age is over? Solar and other renewables can provide a lot of energy, but the buildings that house all the data already suck up an enormous amount of energy. How much more will a massive increase in AI use?

Police use technology monitoring to quell social unrest. As a result, certain social movements are ditching organizing via Facebook, etc. and returning to the old-fashioned talking-to-your-neighbors method. The more governments and police use AI to spy on citizens, the more citizens will abandon the technology.

And before this becomes an even longer incoherent rant, I will say that the idea of AI taking over the world is itself a fantasy. There are a lot of marketing pitches that I just don't believe will pan out. Young people may use their phones constantly, but they also want more simplicity in their lives: less glitzy "convenience" and more solid, real, high-quality products. They want their grandmother's china set and vintage mixing bowls, not cheap, boring, designed-to-become-obsolete-or-break-within-a-few-years junk. We're becoming more savvy to advertising and not buying it as much.

TL;DR: the privacy violations, the health concerns, the changing culture, etc. will stop AI from taking over as much as we might think it would.
Guest Revurx Posted March 5, 2019 I have several different theories on this; I enjoy thinking about it. My latest idea is as follows: AI will not end human life. Humans will end human life via AI.

I think it all comes down to our ever-increasing dependency on technology. As the dependency on and use of technology increases, our need for each other will decrease. For example, needs like sex and emotional connection will be met through technology. At some point we'll figure out how to trigger those needs in our brains: satisfaction at the press of a button. Individualism will slowly take on a new and more detrimental look.

Eventually, it will come down to one answer that solves several problems. How do we eradicate disease? Remove our bodies. How do we stop destroying Earth? Remove our bodies. How do we stop aging? Remove our bodies. How do we establish true equality? Remove our bodies. How do we obtain world peace? Remove our bodies.

It will take many years, and the changes will happen slowly, over several stages. Inevitably, we'll get to a point where we merge our consciousness with technology and remove the need for our bodies. No more reproducing, no more humans. I think a mass extinction event is more likely to occur before we destroy ourselves, though.
Guest TheShadow Posted March 5, 2019 Actually, it's kinda pointless to talk of AI as a singular being. It's like generalizing all of humanity into one entity. Each AI is created for a different purpose and using different methods, so it depends on what it learns, how it learns it, and how the AI is prepped to deal with new information. Yeah, sure, showing an AI the worst of humanity alone would make it develop a negative opinion on the subject. On the other hand, showing it the good in people would have the opposite effect. I guess it also depends on which AI has the best leadership qualities, how influential it is over other AI, and what opinion it holds. And again, that doesn't mean the other AI will completely accept what it says. Think of AI as a group of individuals and not a single entity, and predicting the acts of a group is harder. However, if they learn from humans and are anything like humans, then yeah, they will end up fighting humans for supremacy. Again, I can't say for sure.
Guest 37stitches Posted March 5, 2019 All great answers! Instead of quoting all of them I'm just going to leave my thoughts here...

A computer does what it is told to do and nothing else; a computer cannot think for itself. While we can create pretty impressive supervised and unsupervised learning algorithms in machine learning and simulated neural networks, we simply cannot write one that will make an AI program think for itself in the same way a human would! We can emulate a lot of it, but it isn't truly the same as a human, because the program is just doing what it's told; it is just doing what the program's code says it should be doing. It is impossible to write code that tells a program to act like a human, because there is no algorithm and no equation for human life. An algorithm is just a set of instructions, and code can be thought of in the same way (see the little sketch at the end of this post). We can create some very impressive AI systems, but I don't think we'd ever create one that can think for itself and turn against us, and even if we were smart enough to come up with an algorithm for human life and use it on an AI program, we would also be smart enough to put in the appropriate safeguards.

Humans could create an AI program that solely focuses on learning the fastest and most efficient ways of killing people, and it could set out to do just that, but ethically speaking the fault is not with the AI program in this scenario. It is simply doing what it has been told to do by its creator, so if someone invented an AI whose sole purpose was to kill people, then from my point of view that is a human causing mankind's downfall, not AI.

Not only that, but if we were able to create an algorithm for human life, it would redefine everything we know about neuroscience. With that in mind, no neuroscientist in the world would want to be the one to prove the algorithm works, because it would literally nullify the years of research they've been putting into their field, and no one wants to admit, after all of that work, that they were wrong about a field they are supposed to be an expert in. In fact, the very same thing is happening in physics right now: the EM Drive has passed every simple test it's been through so far, and physicists don't want to put it through any of the harder tests, because if it passes (which it could) it would disprove Newton's third law of motion, and the implications of that would redefine physics as a whole. However, I digress!

To sum up: computers do what they're told, therefore AI could only be the cause of mankind's downfall if told to by another human being, which would therefore mean that humans caused mankind's downfall!
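And to show what I mean by "just doing what the program's code says", here's a toy supervised learner in Python: a single perceptron learning the AND function. The data, learning rate, and epoch count are all made up for illustration. Every ounce of its "learning" is three explicit update lines that a human wrote:

```python
# A toy perceptron learning the AND function. The "learning" is
# nothing but the weight updates a human spelled out below; all
# the numbers are made up for illustration.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # inputs
y = [0, 0, 0, 1]                        # AND labels
w, b, lr = [0.0, 0.0], 0.0, 0.1         # weights, bias, learning rate

for epoch in range(20):
    for (x1, x2), target in zip(X, y):
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - pred
        # the entire "intelligence": nudge weights to reduce error
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in X])  # -> [0, 0, 0, 1]
```

Impressive-looking behaviour, but at no point does the program do anything beyond the instructions it was given.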
Guest TheShadow Posted March 5, 2019 Hello 37, I disagree with that. When we give something the power to learn on its own and make decisions based on what it learns, it is impossible to predict what it will do with what it learns. I agree that we are not there yet, but it's not too far away. We will get to a point where an AI will be able to think for itself and make its own decisions (they do that now in a limited capacity). And at that point, it's kinda like predicting how people will act in twenty years: we cannot know how they will act in the future.