
AI Today Podcast #002: Should We Be Scared of AI?


For a more in-depth and detailed report of the topics discussed in this podcast, download our FREE research report.


Show notes:

With the recent resurgence of interest in Artificial Intelligence (AI), there’s a tidal wave of investment, activity, and attention being placed on all things even remotely connected to AI technologies. However, many notable titans of the tech industry and others are sounding the alarm bells about the rapid pace of AI development and warning of epic, catastrophic outcomes that could spell the demise of the entire human race. Are these warnings irrational fear-mongering and hyperbole about far-fetched notions of what could happen? Or are these rational, well-thought-out concerns about the potential of a technology we have yet to fully tap?

This podcast outlines the major arguments for and against fearing the growth of AI.

Articles and topics referenced in the podcast can be found below:

_________________

A transcript of the podcast is available below:

Kathleen Walch: [00:00:00] Hi everyone and welcome to the AI Today podcast. I’m your host Kathleen Walch.

Ron Schmelzer: [00:00:04] And I’m your host Ronald Schmelzer.

Kathleen Walch: [00:00:06] And our podcast today is going to be on “Should we be Scared of AI?”.

Ron Schmelzer: [00:00:11] So part of the reason why we’re talking about this is that we’ve seen a lot of press recently about some notable tech titans in the industry talking about their fears of AI. And as we started digging into it at first we were a little confused by this because we’re like well why are all these really smart people…

Kathleen Walch: [00:00:29] … Like Elon Musk, Stephen Hawking, Bill Gates…

Ron Schmelzer: [00:00:34] … why are they so concerned about AI and do they know something we don’t know ….

Kathleen Walch: [00:00:39] … saying it could be the next World War III!

Ron Schmelzer: [00:00:42] Exactly. Like you know all of humanity might no longer exist as a result of AI!

Well, let’s dig into this. Let’s really take a look at all the arguments people are making, both on the side of the causes for concern about the adoption of artificial intelligence, and on the other side, where people are saying no, these concerns are overblown or a lot of hyperbole. Our position, as we go through this podcast and examine each one of these arguments, is that we’re not necessarily going to come down on the side of yes, you should be absolutely concerned, or no, you should just ignore all these concerns. We think there are valid reasons on both sides. But one of the things we’re going to do is take a look at these arguments, examine both sides of each one, and then identify some of the key markers we can look for in the industry to see if we’re trending in any one particular direction: towards things that we should fear, or towards things that give us no reason for concern.

Kathleen Walch: [00:01:33] Right. And so we’ll talk through this and go through a few of the general arguments that we found. So the first argument is general anxiety about AI.

Ron Schmelzer: [00:01:44] Yeah, and I think people have seen in the popular media and in Hollywood lots of examples of AI gone wrong. Right. So we’re familiar with HAL in 2001, Terminator, Minority Report, The Matrix, Westworld, Tron, I, Robot, RoboCop. You know, all the way back to the 1920s and ’30s we have Metropolis, and we have Blade Runner and WarGames in the 1980s. Practically every decade that there’s been a Hollywood, there’s been some sort of story about a superintelligent system gone wrong. And it’s no wonder: there are a lot of situations and scenarios where things can go wrong when you have a generally superintelligent system with super power and knowledge. I think a lot of what people are expressing when they voice their fears about artificial intelligence comes from this fear of artificial general intelligence systems, or superintelligent systems, where we have unknown consequences from that superintelligence. When Elon Musk talks about this, for example, he talks about the so-called double exponential, where we have computers and hardware evolving at exponential rates, but we also have human knowledge and expertise about AI expanding at an exponential rate. So, as he said, when you put these two exponentials together, we’re unable to predict what the outcome will be. You might think that this anxiety is rational, and we think the fears are well-founded. We’ve had these other things, for example nuclear war: people are scared of nuclear war, and there are reasons for people to be scared of it. There are reasons for people to be scared of bioterror. There are reasons to be afraid of cyber warfare. So I think it’s not irrational for people to have this general anxiety or fear about some new technology whose whole outcome we don’t yet know.

Kathleen Walch: [00:03:34] Well, because it’s the fear of the unknown. You don’t know what to be afraid of, so you’re just afraid of everything. And I think that’s this general anxiety, and I think a lot of these people are also looking at worst-case scenarios and not thinking about some of the good possibilities that can come from this. They are only looking at the negatives. So even if there’s a lot of good and just one negative, they hyper-focus on the negative and don’t look at any of the positives that can come from it.

Ron Schmelzer: [00:04:05] Right. So the counterexample: when you hear Mark Zuckerberg saying, oh, Elon Musk is all worked up, he’s just fear-mongering, the general response is to think about other major societal changes, the Industrial Revolution and even the information revolution. Every time we’ve gone through a major change in society, we’ve moved from humans and animals doing all of the workload to machines doing all of the workload. We’ve moved from societies based on very small, close-knit communities, where everybody did work in their community, to these much larger urban ecosystems where people work at a job and then go home, and kids go to school. These are all major societal changes. And so it’s possible that this transformation is another one of those major societal changes, and it’s very disruptive to both society and the economy. So it’s no surprise that people are reacting with fear. But if you want to use those as examples, we’ve made it through them without any harm to humanity. As a matter of fact, every time we’ve crossed a new technological threshold, humanity has advanced. We now live longer and healthier lives, and our quality of life is much better. So that’s the usual counterexample to the general anxiety issue.

Kathleen Walch: [00:05:15] And I do think that with some of those examples, things have changed in society and we’ve lost certain things but gained others. So for example, the telephone: maybe the written word declined when the telephone came, and people were not writing letters as much because communication became more instant. Some people were fear-mongering back then, afraid that the written word would go away because people would just communicate on the telephone. That didn’t happen. And it took time to come to the conclusion that it didn’t happen, and that’s what’s going to happen with AI. So these people have fears, and we’re listening to those fears, but they have not yet come true. And that’s where we still need to wait and see what happens.

Ron Schmelzer: [00:06:02] I guess a final note here: if we’re going to look to Hollywood as an example, there are actually some good examples of AI in Hollywood. We have two of the biggest science fiction legacies out there with Star Trek, where we have, you know, artificial intelligence systems and the Enterprise computer and lots of other systems, and people seem to be coexisting just fine in the world of Star Trek. And of course we have Star Wars. Everybody seems to love C-3PO and R2-D2, and they have no issues with these systems becoming so sentient that they’re going to take over the world. So if you want to look to Hollywood examples, we can also look to Star Trek and Star Wars and perhaps some others.

Kathleen Walch: [00:06:33] The next argument that we’ve seen is that mass unemployment will occur due to the replacement of humans with AI workers.

Ron Schmelzer: [00:06:41] Right. So if we move away from just the general fear and anxiety thing, put that aside for a second and tell people to embrace or reject their fears, we can still look at some facts around AI. People are concerned that a lot of jobs, especially so-called white-collar jobs, the ones that involve knowledge and knowledge workers and interactions between humans, are possibly at risk due to artificial intelligence systems, and not even necessarily AGI, superintelligent artificial general intelligence systems, but just more advanced versions of what we have around today.

Kathleen Walch: [00:07:14] And I don’t think just white collar. I think both white and blue collar, delivery drivers, cab drivers for example. I see AI affecting those as well.

Ron Schmelzer: [00:07:22] That’s right. That’s a valid concern. I mean, there’s a comment here from Jeremy Howard, who says that 80 percent of jobs in the developed world are things that are easily done by smart enough machines, and there’s this chart, there was an article that…

Kathleen Walch: [00:07:35] And he’s saying 80 percent of jobs today.

Ron Schmelzer: [00:07:37] Right.

Kathleen Walch: [00:07:38] Are easily done by smart enough machines. And I don’t even think that requires AGI. I think this could be just a very weak AI focus.

Ron Schmelzer: [00:07:49] Right, exactly. It could be very narrow applications of AI: just the chatbots and customer service we talked about in our first podcast, or systems that help people at checkout in retail, or autonomous driving cars. You mentioned cab drivers and truck drivers. There are a lot of jobs, especially if you look at the allocation of jobs in the United States. There’s a chart that was in a recent article that Rodney Brooks quoted in one of his pieces recently that shows you could have 90 percent of retail jobs disappear, and a lot of management jobs may disappear as well. And there’s certainly no doubt that companies are looking for dramatic increases in productivity from artificial intelligence that can give them the ability to respond 24/7 to customer needs and give them higher degrees of reliability and efficiency.

Kathleen Walch: [00:08:31] And I think they’re also looking to do this as a cost measure. So that is a very valid argument, that AI and intelligent machines could replace human employees.

Ron Schmelzer: [00:08:42] Right. So, you know, Rodney Brooks makes a good counterargument to this employment piece, where he says: look at what’s happening now in the ecosystem. He goes, you know how many robots are currently operational in those jobs? Zero. How many realistic demonstrations have you seen of robots doing these jobs? Zero. And so he’s saying, look, in the current ecosystem we’re barely able to get machines to do an accurate job of transcribing voice and responding to human commands. For us to be worried that all of a sudden some call center employee is going to be displaced tomorrow by a bot, it’s not going to happen. Now, I think what we’re seeing, though, is the slow, steady creep of AI technology into all of these jobs that people are worried about. Recently we were at some of these restaurants, we were both at Olive Garden and McDonald’s, and you can see now that there is a movement towards automated systems. I don’t want to call them autonomous or AI, but you can see that the people who are making strategic decisions for these companies are clearly thinking about replacing humans with machines.

Kathleen Walch: [00:09:43] Right. And maybe not replacing 100 percent, but replacing enough. So will they take away jobs? Maybe not a hundred percent of those jobs, but there could be jobs that are affected. Take cab drivers: are we going to have autonomous vehicles where I can call one to my house and it will drive me to the airport at 4 o’clock in the morning, so I don’t need to worry about whether my Uber driver is up and checking their phone at that time? So there’s something to consider there. As for Rodney Brooks’ argument that right now there are zero intelligent systems replacing human jobs, I’m not sure that I 100 percent agree. We’ve seen that systems can write news articles, or can help with, and by help with I mean write and post, social media. So maybe right now there’s a human to approve it and actually send it out, but we do have systems that are doing things. Maybe an editor is going to take one final look at the article before it gets pushed out, but it might not be a very thorough edit.

Ron Schmelzer: [00:10:50] I think another position here is that this is not actually really an AI argument. I mean, you can look at the past 10 or 15 years of e-commerce as a whole. Putting AI aside, e-commerce on its own has really disrupted retail. You look at the closure of malls, the closure of major stores, the increasing scope of e-commerce systems in traditional retail establishments.

Kathleen Walch: [00:11:10] Which I think is what you brought up with Olive Garden and McDonald’s. I think that’s more e-commerce than it is actual AI.

Ron Schmelzer: [00:11:17] Right, exactly. These aren’t systems that use any sort of machine learning or deep learning or big data or anything like that. They’re just simply automating the experience provided. And I think the use of mobile phones, just the use of mobile devices, has also really disrupted a lot of businesses. You can also think about all the peer economy stuff, whether it’s Airbnb, or even Uber, which in its current form has disrupted the taxi industry. You look at all these peer marketplaces, and I think as a whole, even if AI development just stopped, hit a brick wall tomorrow, these industries look like they’re being disrupted regardless. So I think one of the things we may have to deal with as a society is: what do we do when a lot of the jobs that used to require a lot of human power just no longer require that human power, whether it’s because of AI or e-commerce or mobile phones or the peer economy or whatever?

Kathleen Walch: [00:12:06] Right. The next argument that we have seen come up is that bad actors can do bad things with AI, even in AI’s current form, which we think is fairly weak as a system right now. We haven’t tapped into what AI can really, really do.

Ron Schmelzer: [00:12:24] Right. So if we’re looking at the current state of artificial intelligence, there is a lot of valid reason for concern. You have Putin recently coming out and speaking to those kids at the school, where he said artificial intelligence is the future, not only of Russia but of all mankind, and whoever becomes the leader in this sphere will become the ruler of the world. These are not vague statements. And in a couple of articles that we read, which we will include in the show notes so you can see where we are getting this information, Russia has been aggressively investing in military robotics and unmanned systems. They’ve been heavily involved in unmanned vehicles. They’ve been involved in all sorts of things that are the direct application of AI to the military field. But it’s more than just that: we’re seeing AI being applied to all this nonmilitary stuff that is also providing strategic advantage, whether it’s misinformation, disinformation, fake news, propaganda. The old propaganda of the past is now the new AI system that’s creating fake news and fake commentary and impersonating people. And I think the question is, well, what do you do? Is there a valid reason to be afraid of AI when somebody who wants to cause damage and harm can basically do so at a larger scale, and with much less direct consequence to themselves, when they use AI?

Kathleen Walch: [00:13:41] Right. And one thing that we had seen, to try to either bring up a counterargument or just to understand this, was that humans right now are helping to educate AI. So for example, as Grady Booch had brought up, if you want to teach an AI about flowers, you feed it the flowers that you like. You know, I like roses, so I’m just going to show the AI a whole bunch of roses, and it’s going to learn all different roses, and yes, it will know some other flowers too, but I’m loading the information. Now, I have good intentions with what I’m doing. Take somebody who does not have good intentions and is loading the system: the system’s going to learn from its creator. So if a malicious country or a malicious person or a malicious company is teaching the system, what exactly is the system learning? Probably not great things. And that’s where we can see this fear coming from. There’s also an argument: well, yes, you can say that this is a fear, but are we really here yet? There are more immediate fears that we should be concerned about, such as North Korea, or even a bomb, or certain things like that.

Ron Schmelzer: [00:14:57] Climate change.

Kathleen Walch: [00:14:57] That too. But I mean more immediate, manmade things. So you could argue that too, but I think that where this fear is coming from, again, is the unknown: people don’t know who’s building these systems or how they’re being used. Ron, you also brought up that maybe an AI is impersonating somebody, and that is very hard to detect. We’re also at the point now, as we brought up in podcast number one, where Ron had an interaction and he wasn’t sure if it was a bot or a human. It acted very human-like, it answered questions. We don’t know, and that is what scares people: they don’t know who they’re talking to.

Ron Schmelzer: [00:15:36] But I do think there is some validity to the specific concern about just the narrow use of AI. There are drones right now operating in fields of combat that are maybe not fully autonomous, but definitely remotely controlled, that are having direct battlefield consequences. As we were mentioning, there is an unknown influence right now of bots in social media and in regular media. And I think it’s hard to detect the difference because the systems kind of hide a lot of those differences. For example, how do you know, when you’re having a conversation with someone on Facebook, somebody you don’t know who’s in a Facebook group that you share, and they’re commenting on something, upvoting and downvoting, whether it’s actually a person espousing a real belief or a system basically automating that on some grander scale? So I think there are some valid concerns there. I think we also should be open to the possibility of AI-backed, machine-learning-backed approaches to cyber warfare: automating a lot of social engineering hacks and various other sorts of hacks. We’ve definitely heard about viruses that have some machine learning capability, that adapt to their scenarios so they can evade detection. You know, there was the unexpected spread of Stuxnet outside of its bounds. So I think there are some valid concerns for us. We haven’t really heard as much of a strong counterargument to what you do about the situation when bad people want to do bad things. A lot of people want to address the next big concern we’re going to talk about, which is smart systems doing bad things, by putting in some controls there, but that implies that we’re all working towards a good outcome. So I think this is something we need to get some firmer countermeasures for.

Kathleen Walch: [00:17:14] Agreed. And so the next argument that we’ve seen is a superintelligent system that doesn’t care about, or care for, humanity anymore.

Ron Schmelzer: [00:17:24] Right. This is the one that really comes up the most. So for example, if you listen to Nick Bostrom or read his book, what he’ll say is that when AI systems are smarter than humans, they’ll be able to not only do the tasks that humans can do better and faster, but they’ll be able to invent new things, and to invent those things at digital speeds, and compound those inventions on top of more inventions faster than we can respond to them. So we have a smart system that’s not only able to do things for us but to invent things, and to invent more inventions on top of those, and we get overwhelmed, right? We get rapidly into a situation where every moment of every day something is changing and evolving, and our brains are just not capable of responding to that kind of threat. And so the big conclusion of all these folks who are worried about superintelligent systems is that the future will not be shaped by our needs and humanity’s desires, but rather by the needs and desires of the superintelligent system, which we will either have to serve or which will basically operate at its whim. You know, we’ll either be treated like ants, where it doesn’t really care about us and can step on us, or it will treat us like an invading alien force. So that’s the big fear of the superintelligent system.

Kathleen Walch: [00:18:35] Right. And again because that’s the fear of the unknown we don’t know how fast these systems can and will learn. And I think that right now they are not learning as fast as humans or are as smart as humans I should say. But at what point will they be as smart as humans. And then at what point will they surpass human intelligence. And I think that this also comes down to then what is intelligence and how are humans going to be able to recognize this intelligence if we are not even close to what these machines are.

Ron Schmelzer: [00:19:11] Right. And I think the other concern here is that we’re going to have these improvements in machine learning that are exponential, because these systems are able to create new systems, and people say this is much more disruptive than the Industrial Revolution. I also want to quote a little bit from the book “Our Final Invention: Artificial Intelligence and the End of the Human Era,” quite a suggestive book title, by James Barrat, in which he says that in as little as a decade, artificial intelligence could match and then surpass human intelligence. Corporations and government agencies around the world are pouring billions of dollars into achieving AI’s Holy Grail: human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives just like our own, and we may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine. So it’s quite a scenario. And I think one of the things we want to think about is, first of all, what does it mean to be smarter than humans? We’ve talked about what the nature of intelligence is. Is it just knowing more things? Is it the ability to react faster? Is it the ability to synthesize more information? It’s not really quite clear what that means, but I think that is the crux of the argument of all these folks who are saying slow down, put controls in place so that these systems don’t burst past their boundaries.

Kathleen Walch: [00:20:22] And are we talking about legal controls, or are we talking about actual controls within the system that’s getting built, so that the system cannot learn or become smart in certain ways?

Ron Schmelzer: [00:20:34] Yeah and some people are proposing legal legislative regulation.

Kathleen Walch: [00:20:39] But that’s not going to control a system. If the system is truly learning by itself, it doesn’t care about the law.

Ron Schmelzer: [00:20:46] So it’s hard to respond to this directly, because it presupposes a fairly big notion: it presupposes that systems can even get that intelligent, right? And James Barrat talks about how we will get there in less than a decade. The best sort of counterargument to all of this is that blog post we keep referring to from Rodney Brooks, from September 7th, 2017, one of his FoR&AI essays, where he writes about the seven deadly sins of predicting the future of AI. If I can summarize, basically what he’s saying is that we’re a lot farther away from this general vision of artificial general intelligence that everyone fears than we think. And let me quote a few bits of it. He says that, to paraphrase, his own opinion is that of course artificial general intelligence is possible in principle, and he says he would never even have started in AI if he didn’t believe it was. But he doesn’t necessarily think that we as humans are even capable or smart enough to figure out how to achieve the sorts of things people are talking about, and the current state of AI is pretty far away from it. And he gives the example of the worm C. elegans: we know all of its wiring, all 302 neurons and 7,000 connections, and the project to replicate even that basic worm is still only halfway there; we still don’t have it working. So, you know, the brain, which has over 100 billion neurons and tons of connections, how can we possibly get there? So I think the big counterargument is that we’re just a lot farther away from this vision than people are thinking.

Kathleen Walch: [00:22:20] Right. And I mean, do I think we’re a decade away? Absolutely not. Are we a hundred years away? I don’t know. Could it be within my lifetime? Say I live to be 80, we have 50 more years. Maybe. But right now, what we have seen from systems, if we want to call them AI systems, like Alexa or Siri or Google Home, is that they are kind of pitiful right now as compared to a human. I can ask them certain questions, and most of the time they’ll understand me, but they won’t always give me the answer I want. For example, I ask Alexa what the weather is most mornings. Some mornings she doesn’t understand my question and I have to repeat it. That, to me, isn’t very smart right now.

Ron Schmelzer: [00:23:08] Right. And they’re not capable of handling complicated questions.

Kathleen Walch: [00:23:10] They’re not. And I think they’re also only capable of understanding certain voices. I mean, my two-year-old talks to Alexa all the time and Alexa doesn’t understand her. So that, to me, is not intelligence.

Ron Schmelzer: [00:23:21] Yeah, well, your two-year-old talks to me all the time and I don’t understand her either. (laughter) But then again, I think this is a very valid concern if we go on the assumption that the companies producing AI technology are releasing their best technology now, or that maybe the commercially available stuff we’re seeing is two or three iterations behind what they are actually doing. It definitely seems that we’re farther away than we want to be. I mean, we’ve been chasing the vision of autonomous vehicles for a while, we’re certainly many years into it, and we still don’t have cars driving all around the streets. We haven’t quite figured out all the challenges there. So you can make some of those arguments. I think people will counter those counterarguments and say things always move faster than you think. Right? So while you may think it’s a hundred years away, you may actually only be 10 years away. But I think the biggest thing, if we can apply our Cognilytica analysis here, is that nobody really knows. I mean, nobody knows. Elon Musk doesn’t know. Bill Gates doesn’t know. Stephen Hawking doesn’t know. Rodney Brooks doesn’t know. Grady Booch doesn’t know.

Kathleen Walch: [00:24:20] My two-year-old doesn’t know.

Ron Schmelzer: [00:24:21] (laughter) Nobody knows, right? Nobody really knows, because we could be one amazing innovation away from making AGI happen. I mean, this is what happened in machine learning with the evolution of deep learning, which just sort of came out of the blue, and all of a sudden we had all of these major innovations in image recognition, and things that we thought were going to be a lot more computationally intensive turned out not to be, because we learned this new way of doing them. We could be one innovation away from accelerating AGI. You never know. But I think that’s the point: nobody knows. And I think it’s really interesting. I think part of the reason people are so surprised at the intensity and dedication with which Elon Musk is sounding the alarm bells about AI is because he himself is so involved with AI. Right? So for example, he’s got his Tesla cars, and he’s working on those to add more AI, to let them park by themselves and drive by themselves; there is clearly a lot of AI there. He’s working on OpenAI, which is all about the path to safe AI development, so clearly he must know what’s happening in AI if he’s working on that. He’s also trying to do his neural lace thing, a brain-machine physical implant that allows you to effectively merge in a symbiotic way with a digital intelligence. And he invests in a lot of AI companies, and was involved in DeepMind. So here’s somebody who is deeply involved in a lot of these activities who’s sounding the warning bell, and you might say, well, that should be a reason for concern, because here’s somebody who’s involved sounding warning bells. But then you also have people who have been involved for longer periods of time who are saying, let’s not sound the warning bells. So I think this general fear of a system that is so smart it can cause problems, that sort of fear is really hard to place into: should we really be afraid or should we not? If you listen to Rodney Brooks, he’s saying this is much more complicated than it looks, we’re much further away from it, and that should make you feel more secure. The one thing I do want to add, though, is about the folks who are talking about putting the reins on AI development, like Nick Bostrom. He’s saying, well, let’s take Rodney Brooks and these folks at their word and assume that it really is far away. Is there any harm in creating control structures now, for the eventuality that things do progress to the point where we don’t have this control? Right? And you could say, well, it’s not really out of fear but out of an overabundance of caution. Does it still make sense to do that, even though we may never reach the place where that stuff is needed? I don’t know what you think.

Kathleen Walch: [00:26:43] Sometimes I look at that almost as putting the cart before the horse. You’re going to put certain systems, certain checks and balances, in place, but they might not be the right checks and balances. We’re assuming and going down a certain path, thinking that AI will take us one way, and it might actually take us another. As you said, we could be one big step away, but we don’t know what that step is. So we’re making assumptions that it’s a certain step, but it could actually be a different one. You can put checks in place; they’re just not always the right ones. This brings me back to laws around talking on your cell phone. You could make some guesses as to where that would go, you know, texting and driving, and yes, OK, maybe you could assume that it would cause more accidents. Assume this, assume that. But then it actually happened, laws were created after it happened, and now there are laws in place. So you can argue it both ways. The first car phones that came out back in the 80s were literally plugged into the car, and you held the handset to talk; there wasn’t really a speakerphone. Then phones became smaller, then they were flip phones, and now they are smartphones. So you could make assumptions, but I don’t know if people back in the 80s, or in the 70s, writing a law to try to get ahead of it, would have ANY idea what a phone is now. So you can put these checks in place, but they’re going to have to change and evolve over time. So you have to be careful with that too.

Ron Schmelzer: [00:28:10] Yeah, and I think that’s actually one of the concerns of the folks who have been in the industry for a while: even though they may be disregarding a lot of these fears that we’ve identified, I think the bigger issue is a general fear that an overabundance of caution with AI could cause the next AI winter. So for those who are not familiar, the AI winters were two periods of time when AI investment steeply declined, a lot of advancement in AI almost ground to a halt, and AI basically stopped advancing. And there’s a concern now, because there’s all this interest in AI right now. There’s tons of interest in AI, there’s lots of hype around AI, there’s a lot of investment in AI; even things that aren’t really AI are getting…

Kathleen Walch: [00:28:53] Are pretending to be AI.

Ron Schmelzer: [00:28:55] So there’s all this interest, which is great. Which is great for AI researchers, great for AI developers, great for product companies, great for everybody. But there is a concern that if we start focusing on these fears and these worries, and we start putting controls in place, whether they are technological controls or legal controls, it could cause the next decline in interest in AI. Basically we’d be saying, well, maybe artificial intelligence is a weapon of mass destruction, let’s treat it like that and stop being so enthusiastic, and all of a sudden all of this opportunity dries up. And now we’ve basically caused a bad outcome that we would not otherwise have wanted, where we could have benefited from artificial intelligence.

Kathleen Walch: [00:29:32] Right. And we can say that maybe there’s another winter coming, but there’s been a lot of talk now, and I think people always need to be cognizant of what others are doing behind the scenes. As we talked about with Putin and Russia and other countries, you can say it’s like a nuclear weapon and you can try to stop research and development on it. But guess what? There are still a lot of countries putting a lot of money into nuclear weapons, even if that’s not how we fight wars right now, because everybody wants them. So maybe they’re doing it behind the scenes. So even if people think winter is coming for AI, you have to be careful about how much you really want to stop the research and development, and make sure that you’re not halting progress out of fear.

Ron Schmelzer: [00:30:18] I think if we can use the past as any lesson, and this is probably a good way for us to sum up, the last two times that we had all of this interest in AI, it stopped because of a winter, and then we had a new resurgence of interest, and then it stopped again. Now we have this new resurgence in AI, and the last two times it stopped, it was not because the AI systems were doing more than we expected, but actually the reverse: we had expected more of the AIs than we actually got out of them. The decline of interest went like this: we were expecting AI to do all of these amazing things, then we realized we’d hit the limitations of the technology and of our ability, and people pulled back. Then there was a new wave of interest, look at all the great possibility and expectation, we had all this great promise, and once again it over-promised and under-delivered, and people pulled back. Now it seems like we’re in this new cycle. People are really interested, and now they’re worried about this outcome. I think we can only hope that AI does not fail to meet even its most basic expectations, that it’s here for the long term, and that we can expect an AI that’s more like R2-D2 and C-3PO and the Enterprise computer, and a lot less like Terminator and HAL and some of the things we are worried about.

 
