AI CYBERCRIME And What You Need To Know To 🛡️ PROTECT Your Business 🛡️

Dave Erickson 0:00
You might think you're living in a technology thriller novel when you hear about how AI is being used for cybercrime, but it's not fiction, it's today's reality. On this ScreamingBox podcast, we are going to discuss how AI might be used for cybercrime and how to protect your business. Please like our podcast and subscribe to our channel to get notified when the next podcast is released.

Dave Erickson 0:42
Everyone is realizing the opportunities that AI can bring to business. Unfortunately, this also includes people whose business is taking advantage of others. Those people are realizing AI can help them commit their crimes and steal millions of dollars. Welcome to the ScreamingBox Technology and Business Rundown podcast. In this podcast, I, Dave Erickson, and my intriguing co-host, Botond Seres, are going to look at how AI can be twisted into a force of evil and crime with Guy Morris, author and IT executive. Guy is a retired business executive who spent over 38 years in leadership roles at Fortune 500 companies in software, high tech and global energy. This includes senior positions at Microsoft, Oracle and IBM. He has a background as a thought leader and innovator in early-stage technologies, including computer modeling and artificial intelligence. Guy is an author of intelligent thrillers and writes about subjects such as AI, national security and cyber security. His upcoming book, The AI Tsunami: A Survival Guide for Humanity, is expected in June. He is also a published songwriter for Disney Records, a patented inventor and a Coast Guard charter captain. Guy, welcome to the podcast.

Guy Morris 1:58
Dave, thank you so much for inviting me. Pleasure to be here.

Dave Erickson 2:00
To begin with, what got you interested in technology in the first place?

Guy Morris 2:08
Well, I was a homeless runaway, and when I went to college, I only had one goal in mind: where can I make money? I got involved in business and economics, and I was really good at that, but one of the things we had to do was to build a macroeconomic model, which meant I needed to become a programmer. I needed to learn how to work with, at the time, a big IBM mainframe. This was way back in the late 70s. Not only did I build my macroeconomic model that got me on the Dean's list, but it got me a full scholarship for graduate school, it got me an invitation to attend Harvard, and I got a letter from the Federal Reserve because I had pioneered some of the early non-linear regression algorithms to predict productivity from technology sales in the economy. That experience got me all kinds of great kudos, it got me into graduate school, and it started my first job, which was with IBM. I became passionate about the potential for technology to change our economy and change our world in fabulous, fascinating, at that point almost unheard-of ways. And while my economics degree got me into senior leadership and strategy positions, I never lost the bug to say I can change the business by looking at these new technologies. Now, you have to bear in mind that when I started, there was not a single personal computer on any desktop anywhere on the planet, and so we were looking at a complete revolutionary change, and I wanted to not only be part of it, I wanted to help drive it. What's happening now is that we're facing another revolutionary change that's going to be just as disruptive as the last one, and we're just as unprepared for it.
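The kind of regression model Guy describes can be sketched in a few lines of modern code. This is purely illustrative: the data points, the power-law model form, and the log-linearization trick are assumptions for the sketch, not his original 1970s mainframe program.

```python
import math

# Hypothetical data: a technology-sales index vs. productivity growth (%).
# These numbers are invented for illustration, not the original dataset.
tech_sales   = [1.0, 2.0, 4.0, 8.0, 16.0]
productivity = [0.9, 1.7, 3.1, 5.8, 10.5]

# Fit the non-linear model  productivity = a * sales^b  by linearizing it:
# log(y) = log(a) + b*log(x), then ordinary least squares on the logs.
lx = [math.log(x) for x in tech_sales]
ly = [math.log(y) for y in productivity]
mean_x = sum(lx) / len(lx)
mean_y = sum(ly) / len(ly)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(lx, ly)) / \
    sum((x - mean_x) ** 2 for x in lx)
a = math.exp(mean_y - b * mean_x)

print(f"fitted model: productivity ~ {a:.2f} * sales^{b:.2f}")
```

On this toy data the fit comes out close to linear; the point is only that even a simple regression on the right economic series was, at the time, novel enough to draw attention.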

Dave Erickson 4:03
And I assume that's the AI revolution.

Guy Morris 4:07
Exactly.

Dave Erickson 4:08
Well, with every revolution, there is the good and there is the bad. We have definitely talked about some of the things that AI can do and some of the potentials of AI, and nobody is doubting that it has some real potential. It also has real impact, and it has some negative sides to it. I think today we're going to explore a little bit of the darker side of AI. To start off with this question: when you are looking at AI, what are the things that scare you the most?

Guy Morris 4:47
Well, there's a number of things. First off, we have to split the dangers of artificial intelligence into two giant categories. One is the technology-related risks, the things that are inherent in the technology itself: hallucinations, emergent properties, value misalignments, the power that it can have to find vulnerabilities in a system. Those are technology-related risks. And then we have the people-related risks. What happens when you have this powerful technology in the hands of a malicious actor, a sociopathic billionaire, a corrupted CEO or politician, a crime lord, a drug lord? We're creating powerful tools that are already showing up on the black market and on the dark web: FraudGPT, WormGPT, DarkBERT, DarkBART. These are all AI tools designed specifically to help malicious actors find vulnerabilities and write code to take advantage of them faster and more efficiently than they could before. There's been a 1200% increase in the number of phishing attacks just since ChatGPT came out alone, so we're seeing a deluge, a spike, of criminal activity as a result of this, some of it being driven by governmental applications, and a lot of it being driven by the weaponization of the current AI platforms.

Botond Seres 6:16
And what are some of your thoughts on the currently emerging phishing letters? Because from my point of view, I've seen certain changes, some for the better. They seem to be much more refined, much more accurate to the source that they are trying to copy, for example, a Microsoft 365 billing letter. On the other hand, it does feel like it's easier to recognize them now, because almost all of these new phishing letters appear to be excessively long.

Guy Morris 6:56
I think what you're saying is that we are seeing some progress in terms of some of the operating systems and some of the platform solutions. But those solutions always have lagged the problem in the marketplace, right? We don't build solutions for vulnerabilities we aren't aware of yet. Oftentimes, the hackers find those vulnerabilities for us. Now, I think there's great opportunity there. Anybody who's involved in the industry is aware that the Anthropic Methos model has recently been pulled from consumer release for private release, because it's so good at pointing out system vulnerabilities, finding them and plugging them, that it's going to be resold as an enterprise application, which is great news for the industry. The only challenge that I have with that is I was in tech consulting for years, and you could get companies to spend tens of millions of dollars on new technology if they thought it could help their business grow, add new businesses, increase sales and lower cost, but get them to spend $1,000 on security, and that's a fight, because security is one of those things that doesn't have an immediate return on the investment. Even today, there's a vast segment of our enterprise population, especially small and medium businesses, that is vulnerable because they don't invest in their cyber security. So what will happen, for sure, is we will see an increase in cyber attacks. We'll see an increase in ransomware; we're already starting to see that. We'll see an increase in phishing and personal cyber attacks. We'll see an increase in how this is used, even by corporations, in unethical ways. And the solutions then will follow that. So we're going to see problems before we see the solutions, and we're already seeing them.

Dave Erickson 9:00
What are some of the problems that you think are coming versus the ones that are already here?

Guy Morris 9:08
One of the most insidious things I read about has already happened. There was a company in the UK, and they were defrauded out of $200,000 on a contract because of a deepfake of their CEO, basically providing instructions to their team to engage in a transaction with another company. It turned out to be a fraudulent company, and the deepfake turned out to be fraudulent. We're going to see new forms of fraud, and new forms of identity theft that invoke fraud, that we've never seen before. Phishing attacks now have the ability to not only create a really well-written, well-crafted email that sounds legitimate, but they can follow it up with AI videos of a fictitious actor, somebody on the internet who's not what they are pretending to be: a businessman, a business leader, a salesperson, a representative, a lawyer. We've opened the door to incredible amounts of personal-level fraud that we're not necessarily prepared in our heads to deal with. The other thing I think we're going to deal with is cracking of cell phones. We've already seen this. There are two applications in particular. One, Pegasus, was developed by the Israeli firm NSO Group, and while they say that it's tightly regulated, there are a lot of instances where it's not. What Pegasus can do is, even if you have a password on your cell phone, even if you have it secured, crack through that and get access to your phone and all the information on a real-time basis, and you'll never even know it's there. It's basically what's called a zero-click hack. You don't even have to answer an email or answer a text. There's a black-market version of that called Predator that does the same thing, and it's being used by criminals in airports and every place else.
So as we become more digital, we become more vulnerable, and we have to couple our passion for digital excellence and for digital progress and for digital innovation with a passion for making sure that it's secure, because right now, we've opened up a Pandora's box for criminal activity.
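One practical, low-tech defense against the AI-polished phishing Guy describes is checking whether a sender's domain merely imitates one you trust. Here is a minimal sketch, assuming a hand-maintained trusted list and an arbitrary 0.85 similarity threshold; the domain names are made up for illustration.

```python
import difflib

# Hypothetical list of domains you actually do business with.
TRUSTED_DOMAINS = ["microsoft.com", "screamingbox.com", "yourbank.com"]

def lookalike_score(sender_domain: str) -> tuple[str, float]:
    """Return the closest trusted domain and its similarity ratio (0..1)."""
    best, best_ratio = "", 0.0
    for trusted in TRUSTED_DOMAINS:
        ratio = difflib.SequenceMatcher(
            None, sender_domain.lower(), trusted).ratio()
        if ratio > best_ratio:
            best, best_ratio = trusted, ratio
    return best, best_ratio

def is_suspicious(sender_domain: str) -> bool:
    """Suspicious = very similar to a trusted domain but not an exact match,
    e.g. 'rnicrosoft.com' imitating 'microsoft.com'."""
    best, ratio = lookalike_score(sender_domain)
    return sender_domain.lower() != best and ratio >= 0.85

print(is_suspicious("rnicrosoft.com"))  # lookalike of microsoft.com
print(is_suspicious("microsoft.com"))   # exact trusted match
```

Real mail filters combine many more signals (SPF/DKIM/DMARC results, URL reputation, content analysis); string similarity alone only catches the crudest lookalikes.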

Botond Seres 11:35
So certainly there are new dangers, but importantly there are also pre-existing dangers that now have a significantly lowered barrier to entry. Take operating a scam call center, which now can be done by a single person with a laptop. Previously, a scam would have needed five to ten employees on site to carry out; now we can have five to ten AI agents, which is an absolutely amazing breakthrough in technology.

Guy Morris 12:13
Absolutely. And there's a parallel to that. This gets into not only business issues and business decisions, but social ones, humanity issues. I was in the enterprise when, one year, we would have 600 accountants on two floors on Wilshire Boulevard in LA. We implement systems, technologies, mainframe computing and budgeting, and a year and a half later, we have 50 accountants crammed into a corner of one floor. So we're going to see a labor impact of technology. I think we have an opportunity. The difference is, what happened before is that people simply had to reskill or get computer skills to find those new jobs. We're going to have an even worse version of that. The IMF, Goldman Sachs, the World Economic Forum, the Bank of England, all kinds of major institutions, are predicting anywhere between 300 to 500 million jobs displaced. That's a huge amount of our workforce. These are not low-level labor jobs. These are mainstream management, professional, leadership jobs, things that people had to get degrees to get. When those jobs go away, these people are going to be faced with some major issues. And some of those people are going to be tech people with highly technical skills. So we have to find a way of transitioning, of using AI to empower people without devaluing them, and that's going to be a major change in our business, because you want people involved in your security, not just software. Software is amazing; it can help detect and isolate and remove malicious threats much faster.
We're going to see much better tools against major hacking organizations like Scattered Spider and all of these other pretty powerful malicious organizations. But it's going to take people to run those systems. It's going to take people to strategize and implement them. It's going to take people who know how to integrate that into the business operations. It's going to take people. One of the suggestions that came out recently was the idea that, rather than laying people off, we take the people we have, empower them, and go to a four-day work week. We have to look at those ways of how we use technology not to replace the human element, but to enhance it. I think right now there's a tendency with AI to think: AI will let me get rid of people. I think that's a false premise; I think that's a poorly thought-out strategy. As a person who dealt with strategy, AI strategy, for decades, I saw dozens of companies leap into trying to adopt a technology before they understood how that technology would affect their business or affect their customers. We're at that same point now, where we're seeing companies make a knee-jerk reaction to adopt AI because they feel like they have to to stay competitive, and they're going to do the same thing as companies who leapt into major technologies before: they're going to lose their shirts if they're not first thinking through how the change in their business is going to impact them, working through that first, getting the people prepared, training the people, getting them used to the idea that, yes, we used to do this on paper, but now we're going to do it on computers, or whatever that change is. I think we're going into the same sort of transition with AI, especially when it comes to cyber security, and especially when it comes to productivity.

Dave Erickson 16:21
Yeah, you know, they're thinking: oh well, we're going to shed some people. But if the IMF is correct and 300 million white-collar jobs disappear, what the companies aren't thinking is: most of our sales to consumers come from white-collar, well-paid people, and all of a sudden 300 million of them are disappearing, so what's going to happen to our business? They're not thinking through that logic, right?

Guy Morris 16:50
Not only that, but they're taxpayers. We're already struggling to maintain our deficits. Now take 100 million middle-class taxpayers off the tax rolls, and you don't replace that with wealthy corporations being taxed on their AI implementations if you don't have the right tax basis. That's one of the reasons I write my books, my thrillers: they are all the "gee, what could go wrong" scenarios. And there's a lot of them, and they're not just about the technology. What could go wrong in terms of national security? What could go wrong in elections? What can go wrong economically? What can go wrong in terms of banking? We're dealing with a completely revolutionary technology that will change how we run business and the world in every single way, and every single one of those areas has a new level of risk. So it's not risk just in the IT sector, and that's what a lot of people are misjudging. This is a broader risk to society on almost every level, and we have to think about it holistically if we're going to survive this initial tsunami of change and come out with a more utopian scenario at the end, versus a more dystopian one.

Dave Erickson 18:07
Yeah, to follow that up a little bit: AI is making things more efficient, and we're talking about white-collar jobs disappearing. On the other side of the coin, the criminal side, you have all these large criminal enterprises that are now having to lay off their criminals because AI is doing the job for them. So you have all these criminals who used to make money doing crime now saying, oh well, then I'm going to get into it by using AI. It could expand that use, because they need to make money too, right?

Guy Morris 18:41
Well, and as I said, there are all those trained, pretty smart people that now have no jobs, thinking, well, maybe I can find a criminal career; I've got to do something to put food on the table. But you're right, now we have freelance criminals. It's going to be a wild-west circus for a while; there's going to be chaos, and we have to anticipate that. As business leaders, we have to get ahead of that and strategize beyond that. The problem is most businesses are just trying to figure out how they're going to make the sales this month, how they're going to make payroll this month, how they're going to survive the price increases from tariffs and the other economic disruptions from the Iran war. We're very myopic when it comes to our business operations. What I found in working on strategy with major corporations is they're not always really good at looking 5 to 10 years out, maybe 2 to 3 years; next quarter, the end of the year, their bonus, that's their primary focus. This kind of technology really requires a shift in that level of thinking, where we need to start thinking about the longer-term issues and how we lay the foundations. You don't just add AI on top of what you're doing; it's a foundational change. So we have to look at the foundations of our IT stack, our operational stack, our organization, before we can really isolate where the real early win points for AI are, and then how we change operations to take advantage of it without destroying the people. That will take time, and an enterprise like Microsoft or Oracle, or some of the other major corporations like Chase Manhattan, has the resources to hire a staff of people, not just a single champion, to really fully understand and integrate the business change operations.
Most medium businesses and small businesses don't, and that's where they're going to need extra help, extra online resources, extra consulting resources that are affordable. That's where I think we're going to see the most potential damage. And I'll say this: one of the worst industries, and one of the most vulnerable, is healthcare. Healthcare does not spend money on cybersecurity; they just don't believe in it. But they have some of the most sensitive systems out there in terms of user information, healthcare information, financial information, and I expect to see a number of ransomware attacks on hospitals over the next several years.
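For the ransomware exposure Guy highlights, one standard mitigation is keeping offline backups whose integrity you can verify. Here is a toy sketch of that idea, hashing a backup directory into a SHA-256 manifest and re-checking it later; the file name and contents are invented for the demo.

```python
import hashlib
import os
import tempfile

def build_manifest(root: str) -> dict:
    """Map each file's relative path under root to its SHA-256 digest."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def verify(root: str, manifest: dict) -> list:
    """Return relative paths whose contents no longer match the manifest."""
    current = build_manifest(root)
    return sorted(p for p, h in manifest.items() if current.get(p) != h)

# Demo on a throwaway directory.
with tempfile.TemporaryDirectory() as backup:
    with open(os.path.join(backup, "patients.db"), "w") as f:
        f.write("original records")
    manifest = build_manifest(backup)
    # Simulate ransomware encrypting the file in place.
    with open(os.path.join(backup, "patients.db"), "w") as f:
        f.write("ENCRYPTED!!!")
    tampered = verify(backup, manifest)
    print("tampered files:", tampered)
```

In practice the manifest itself must live on separate, write-protected media; otherwise ransomware can simply rewrite it along with the backups.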

Botond Seres 21:34
That does make me wonder if we'll see a resurgence in air-gap systems, because it used to be really big a few decades ago to just completely isolate systems off the grid. I do wonder if this new way of discovering and exploiting vulnerabilities using AI will bring a resurgence to that sector that's been phased out.

Guy Morris 22:05
It already has. It's already doing that. We're already seeing a resurgence of attacks. As I said, there's been a 1200% increase in just phishing attacks alone, but there's been an equal increase in the number of attacks on hardened systems. And I think as AI comes out, more malicious actors are learning they can use AI to discover those vulnerabilities. I was actually really happy that Anthropic pulled Methos off the market, because it would have just empowered the criminal actors even more. But it has the promise of getting ahead of that curve, at least for the major governments and enterprises that they'll sell it to. I would love to see cybersecurity legislation if we had a functional Congress, and let's just say that one of the biggest risks of AI is a dysfunctional Congress. I'm just going to put it out there. I don't care which side of the political spectrum you're on; it doesn't matter to me, I'm agnostic, but we have to have a governing body that legislates to protect businesses against risks. And right now, they've punted and kicked the can down the road. The EU is doing a better job. They have the AI Act, and there are a couple of other acts coming down the road; I'm trying to remember the names of them, but they're there. The EU is much more progressive and proactive in looking at the risks and cyber security. The challenge the EU has is all of the major AI companies are based either in China or the US, and they have a hard time enforcing their laws on those companies except within the EU domain, and so they're struggling right now. France has basically deposed Elon Musk in some legal proceedings, and he refuses to even show up, and they have no real way of enforcing that unless the US gets behind it.
And so without the US really leading this charge of security and safety for AI, we're going to see some problems, and I don't see it as a technology problem. I see it as a legislation and a leadership issue.

Botond Seres 24:28
Speaking of legislation, there is already an issue globally with how companies in general are extremely difficult to prosecute, and now we have companies developing AI that is going to be even more difficult to prosecute.

Guy Morris 24:47
That's correct, and because of the lack of legislation, we will be forced to basically legislate AI issues through the courts. That takes time, and it's a lot more expensive. A couple of weeks ago, we just saw a New Mexico court find Meta liable for the addictive nature of their algorithms, and fine them 370-odd million dollars. Now, Meta is obviously going to appeal it; they're going to fight it to the end. It might even end up in the Supreme Court at some point, but there has to be some accountability for these companies, otherwise they won't change.

Dave Erickson 25:29
Yeah, I think that's kind of part of the responsibility of the government: to put in these guardrails. But it's not going to happen unless the public is aware of the danger and they push the government into doing something, because governments don't do anything unless they're pushed into it by the people. And right now, a lot of people aren't even aware of the potential dangers that they are facing, or that they even have the ability to say something, or what to say. I think that's a major obstacle. What do you think?

Guy Morris 26:03
I think it's a major obstacle, and it's one of the reasons I started writing my books. When my first book came out in 2020, I'd been aware of the AI issues for probably a decade before that. They weren't quite acute yet, but I understood. Anybody who's been in the lab, been in development, understands the path that these things can take, where they will go and in what time frame. So in 2020 I was talking about lethal autonomous weapons. I was talking about the lack of regulation. I was talking about the dangers of deepfakes. I was talking about the vulnerability of our cyber environment relative to AI and cyber security, with all these new weapons coming down the line. And so I started using fictional narratives, similar to how Michael Crichton used Jurassic Park to talk about cloning: what happens when you take this amazing technology and you combine it with hubris and pride? It's the same formula, but with AI, and we're seeing the same thing. These have been issues for a decade now, and they're just now creeping into the public consciousness. I've been concerned with lethal autonomous weapons, and we've been developing them for at least eight years, but the idea that the government is developing lethal autonomous weapons didn't even make public awareness, and even then, it was only minimal, with the issues with Anthropic and the DOD. So there is a public lack of awareness, and I think that's starting to change a little bit. It's certainly started to change with ChatGPT and the power of language models. But people are forgetting that AI is more than language models. AI is coding. AI is medical systems. AI is diagnostic systems. AI is security systems, surveillance systems, national security, facial recognition. It runs at airports and train stations, and in policing systems.
There are all sorts of applications for intelligence to be applied to specific domains of knowledge. And the idea that it's just about good language is scary, because we're a language-oriented species, and the way we get defrauded and fooled and deluded fastest is through language. So it's scary, but it's an incomplete picture, and right now, the public is still not fully aware of the dangers of AI.

Botond Seres 28:35
To be honest, I still have a very difficult time understanding how generations saw dystopias and utopias in popular fiction, like, for example, let's just take Terminator and Star Trek. And we all collectively went, Yeah, let's do Terminator.

Guy Morris 28:58
Well, I tell people we're not at Terminator yet. And I told people that before Terminator happens, the first application for AI in weapon systems will be drones. It's just the most flexible platform: it's agile, it navigates, it has the ability to identify targets. That was the first potential application. But there are others. China's working on robotics that does lead to the Terminator kind of thinking. Israel, which I featured in one of my last books, is working on basically a sniper rifle that is driven by facial recognition; it could shoot from 1,500 yards away with nobody even at the controls, just using facial recognition. So we're dealing with some very dangerous technologies. Now, the difference between utopia and dystopia is awareness, regulation and control. The longer we go where we're afraid to regulate any of the safety controls around AI because we have to compete with China, the longer we go leaving it unregulated, the more opportunity we have for the darker scenarios. The difference between utopia and dystopia is decisions, human-level decisions. It's as simple as that. We could make the right choices today to put people's safety and economic safety first. The problem that I'm having, and the reason I write my thrillers, is that there's always the human equation involved, and that human equation is the one that's distorting the potential, or creating the potential for a darker scenario. It's because we don't want to address the security risks, we don't want to address accountability. We want to win the war against China at all costs, and unfortunately, that cost will tend to be a human cost.

Dave Erickson 31:08
Yeah,

Guy Morris 31:08
So you're right, there's this talk about one or the other. It has the potential for both. My belief is that something dark will have to happen in order for the world to wake up and say, we can't tolerate this, we have to do something about it, similar to what happened with nuclear weapons. Nuclear weapons proliferated for decades before we finally realized that they were such an existential danger that we got our politicians to do something about it. Right now, the only people that are aware of the issue are the technical people involved. So I'm a believer in what Max Tegmark put out a couple of years ago out of MIT, which was an open letter to basically warn about these things and say, we have to get ahead of this curve to regulate and control this. They even called for a six-month moratorium on any AI development, so people could take those same bright minds and bring them together and say, how do we control the worst scenarios? Not one lab on the planet did it, because every single lab on the planet is driven more by the money than by the human benefit. And this is where the difference between dystopia and utopia is always about the human element in between.

Dave Erickson 32:26
People don't tend to think of it this way, but I think one of the reasons why governments are not moving on this fast is because they're in an arms race. They see it as: we need to develop our AI army before the other guys develop their AI army. And so if we bring this subject up, or if we have any guardrails or legislation or anything that limits AI, then we can't win. And I don't,

Guy Morris 32:56
By definition, a human decision.

Dave Erickson 32:57
Yeah,

Guy Morris 32:58
Right.

Dave Erickson 32:58
You know,

Guy Morris 32:58
It's not about the technology; it's about the corruption of our human systems. And I think that's the scariest thing that we don't want to really admit. We have 6,000 years of history of corrupt leaders abusing people for their own benefit, and now we're giving them this incredibly powerful tool they refuse to regulate because they want to use it for their own power. I'm not afraid of the technology. I'm afraid of the people in control of the technology, in control of using it for government and malicious purposes. That's where we have to be afraid, and that's where we have to get our leaders, we have to get our Congress involved. We have to get them to get on this and stop stalling. The longer they stall, the more advanced we get. We've already got models that can basically find vulnerabilities in major systems. If a model like that ever got out, then the whole United States, every company in the nation, would be vulnerable.

Dave Erickson 33:58
Yeah and

Guy Morris 33:59
So these are powerful tools.

Dave Erickson 34:00
And other nations are looking to develop those tools so they can use them, right? Just like our government, I'm sure, is doing the same, yeah.

Guy Morris 34:11
But if we break the whole problem down to a weapons-of-mass-destruction race, then we're basically saying that the version of AI that humanity will experience will tend to be more on the dystopian side. If that's our goal, if that's our value, if what's driving the development right now is weapons enablement, then that's going to drive how this is experienced on the world stage, because we'll ignore the safety protocols, we'll leave it open to the malicious actors, and it will be up to each individual and each individual company to protect themselves. And that's the message we need to relay to people: you can't count on the government to provide protections against this technology. Every individual and every company needs to take responsibility, and to do that appropriately, to do that well and cost-effectively means getting involved, getting educated, getting immersed in the technology. Don't be afraid of it, keeping it at a distance and saying, I'm not going to deal with AI because I'm afraid of AI. This is the time to say: if I'm going to survive the AI tsunami, I need to embrace the AI technology and use it appropriately and wisely in my business. That's the right strategy. The fear of what could go wrong is going to guarantee that it will go wrong, and not doing anything about it will guarantee that it will go wrong. I've seen this over four decades of working with enterprises. This is a pattern. Pay attention to the pattern. Ignoring the problem won't make it go away. And that's the opportunity that we have right now: educating people, getting them aware, letting them know that this is a bigger problem than they're expecting, and that the sooner they get ahead of it, the safer they'll be, and the more they'll be able to take advantage of the technology, as opposed to fighting against the disadvantages.

Dave Erickson 36:15
If you could do anything, if you could have, let's say, the Congress do whatever you wanted, if you were president and could do whatever, what would you want done now to help protect society and people from where AI could go in a dystopian way? What would you do to make it so things will go in a more positive direction?

Guy Morris 36:43
I think there are two big things that we should be doing already. One is we should be looking at these tools, like Mythos, that could be protective tools, and we should standardize those across businesses at low cost. We need to make this a national priority. Simple. We're not making it a national priority. There are too many companies, too many businesses, that are vulnerable simply because they're not being offered the tools; the tools right now are being sold at high profits. There's too much to gain. It's too expensive. We need to help secure the core of our systems across the nation. Number one, we need to find those solutions, make them easy and inexpensive to get, and make them standard offerings everywhere we can. Now, I love the fact that Microsoft recently went to Europe and is not charging Europe for some of the new security updates they need because of AI. I think that's the right approach, where software companies need to say, this is a fundamental software issue; we're going to offer software solutions that help close those vulnerabilities within our systems, harden up those systems, and make that available to you. So that's number one. Number two, we need to be starting non-proliferation treaties with China, with Russia, with Iran, with North Korea and Israel, all major players in AI right now, and looking at what we are going to allow and not allow from lethal autonomous weapon systems. We have SALT-level treaties around nuclear weapons; we need one of those around AI. If we did just those two things, protect the systems at home, and work with our adversaries and partners in foreign countries to lay out some international foundations, just like we have the Geneva Conventions, just like we have international treaties on pollution, on navigation, on nuclear weapons, we need one on AI. So I would start with those two things.

Botond Seres 38:47
So wouldn't a non-proliferation agreement mean that other countries would be unable to develop them? Like we have seen, the nuclear non-proliferation agreement is somewhat controversial, since it hampered the development of nuclear energy quite a bit globally.

Guy Morris 39:08
Nothing's perfect. We have to step back from the criterion that we have to have a perfect solution, because we've never had a perfect solution on anything, ever. So even if it's imperfect, a solution gets us down the road, right? They can always be enhanced. They can always be improved. It's hard to know when countries are cheating around those proliferation issues. We've suspected that Russia has cheated on its non-proliferation commitments for decades, but the treaty still slowed the progress down. It still made it easier to detect when somebody might be cheating. It still made it easier to get an international body around the idea that, no, that's not allowed. And so you're right, these are not perfect treaties, but if we don't try, what's going to happen? I know right now that DARPA is working on building lethal autonomous weapons. And so is China, so is Israel, so is North Korea. Iran, I don't know what they're doing now, because I think they're being set back, but I would imagine that they're continuing to work on that. And so we have to, at some point in time, ask: do we want machines to have the power to take life without human accountability? When that happens, who's responsible? If a machine malfunctions and destroys a village, that's a war crime; who's accountable? Or if I'm going to hold China accountable for that, how do I do that if I'm not accountable for it? So even though these kinds of treaties are not perfect, I think we're missing an opportunity by not starting those discussions and starting the human conversation around, as a humanity, what are we going to allow ourselves to deteriorate to? Do we really want to allow a Terminator scenario to start building? Are we going to be culpable in that? Or are we going to draw a line in the sand and say, no, at a certain point: this is acceptable use of AI, and this is unacceptable use of AI.

Botond Seres 41:11
Oh, absolutely. Autonomous weapons in general have been around for a while, at least in prototype form. I believe I saw a demo, maybe at TED, around a decade ago, and they were already flying drones with face recognition with a tiny bit of fake C4 strapped onto them. It's been a thing for a very long time.

Guy Morris 41:37
Well, it has. Now, one of the inspirations for my novel series actually happened in the late 90s. I discovered a small scientific blurb in an obscure magazine, from the Associated Press, that a program had escaped the Lawrence Livermore labs at Sandia, which is an NSA spy lab. So I'm reading this article, and I'm saying, a spy program has escaped the NSA labs and they can't find it. And my first thought was, okay, that's got to be a typo. You know, it was supposed to say the program was lost, or it was stolen, or it broke, but the verb they used, and this was the Associated Press, was escaped. So I spent almost a year trying to reverse engineer: how would I design a program so that it could have the ability to escape? What functions does that really imply? And then, why the heck would the NSA design a program like that? What was the espionage purpose of that program? And lo and behold, they sent two FBI agents to my door. This was in the 90s, the early internet, right? So it started me realizing that we, especially in government and in the labs, were pushing technology way faster, much farther advanced than what we're seeing in the commercial sector and what's released. And that's always the case. The software development in the lab is always years ahead of commercial, and the government is always a few years ahead of that, because they're leveraging some of these commercial labs for government purposes but not allowing it to be released into commercial products. So the technology development is leading even the product development. There's always about a five- to ten-year lead time on some of these technologies. And sure enough, what the government could do in that DARPA lab around mobile technologies, we now see in agents, we now see in crawlers, we now see in all kinds of other applications.
And so the hardest part for me is to try and envision and imagine where the government is working now with these things, and pull back to: what do we need to prepare for in society before it actually hits the market?

Dave Erickson 43:51
What do you think are some of the things that people in business can do to, I don't know if protect themselves is the right word, but limit the effectiveness of some of these AI tools that are being used for malicious reasons? What do you think?

Guy Morris 44:13
Well, cybersecurity is always a tough nut. There are always a lot of disciplines involved; there are multiple layers to it, multiple layers to the protection of the stack. I think companies right now need to realize that their systems are vulnerable, and they're more vulnerable now than they've ever been before. To the extent that they might have under-invested in cybersecurity before, they'll need to think in terms of a strategic investment now to protect their business for the future. Because AI is so unique, that's going to require training for people about phishing emails, what to respond to, what not to respond to, how to be careful about that. It will also take some additional investments in their security stacks. A lot of stacks are really outdated right now, with known vulnerabilities, and companies are still on them. That was always the most frustrating thing for me at Microsoft: to say, sure, we can put in this new extra high-tech, super powerful, super expensive business system, but until you fix your security stack, you're just opening that up for people to attack. So I think awareness right now is one of the biggest things, and just basic fundamentals. If we can get normal corporations to do the basic fundamentals of cybersecurity before we even start considering the AI, that's an improvement. The AI piece, I think, is going to need new solutions that are specific to the AI phenomenon. I'm excited about the possibility that Mythos could be part of that.

Botond Seres 45:53
Guy, in your personal opinion, what's the future of AI?

Guy Morris 46:01
That is actually a powerful question. There's a CEO of one of the AI companies, he's now a former CEO, they let him go, but one of the things he said is that AI represents an event horizon for humanity. There's a point in time where the technology is moving so fast into fields that we don't really fully understand. The event horizon is the point at a black hole where we really don't understand what's happening anymore; we can't predict the future after that point. And he was saying that there's an event horizon that he thought at the time, this was about 2023, was about three years away. I would have put it four years away, but right now I think it's about two years away. I think the event horizon is AGI, and we're on that path, getting closer and closer, and we still don't know what we're going to do with a machine that's smarter than we are. I think it's opening up a lot of ethical, legal and operational decisions that we're not prepared for. So what we can predict for sure is that we will see incredible advances in productivity, incredible advances in capabilities across multiple spectrums, from medicine to education to security to surveillance to all kinds of applications, logistics, supply chain. I can come up with a thousand different great benefits for artificial intelligence, and there are a thousand great benefits for artificial intelligence, but I don't see us addressing the security and the dangers on a proactive basis. We've already seen kids commit suicide. We've already seen other personal tragedies coming out of this.
We've already seen some corporations pay out the nose because they weren't prepared for the new phishing tool that got them involved in a ransomware attack. We're going to continue to see that, but it's going to get worse, because we already know that governments are looking at AI for mass surveillance and lethal autonomous weapons. We have to expect that those things are real and that we're going to see some negative implications from AI along with the positive. We have to expect both, realistically. How that will iron out in the end, again, gets back to: how much are we willing to hold people accountable, hold corporations accountable, hold governments accountable? Are we willing to regulate the use of the technology, and are we willing to assign accountabilities and responsibilities and penalties for misuse? How fast we move towards the benefits versus how fast we move towards the negatives will depend on some of those senior leadership decisions that are yet to be made.

Dave Erickson 49:06
Maybe you can take some time and talk about your upcoming book, The AI Tsunami: A Survival Guide for Humanity, which you have coming out in June. Maybe you can tell us a little bit about it.

Guy Morris 49:20
Well, in writing my novels, I infuse and saturate my books with a lot of facts about AI and the dangers and a number of other things, and when I came out with the first book in 2020, my readers were all saying, oh, this is great science fiction. It took me a little while to convince them. I said, no, no, this is actually in the lab today. This is functioning. You're just not seeing what's going on, because the news is focused on other things. And so The AI Tsunami is a comprehensive view targeted towards non-technical people, the consumer, the professional, the small business owner, to help them understand how this technology came about. It's been 70 years in development. Most people don't realize that the first AI conference was in 1956 at Dartmouth, and we've been working on phases of this technology ever since; algorithms and regression models and pattern recognition and speech and voice, all of the various components we're now seeing, have had threads for the last 70 years. So we'll look at the different stages, the different types of functions, and then we'll look at the risks across dating and relationships and cybersecurity and banking and jobs and national security and all of these areas. Then we'll take a dystopian look, we'll take a utopian look, and then we'll basically provide the reader with some guidelines targeted for the individual or their small business: how do you start to plan ahead so that you can take advantage of the technology and minimize the risks, so that you can not only survive but thrive over the waves of change? I'm trying to take away the fear by laying a foundation of basic common knowledge of what this is, so that they're not thinking Terminator and magical stuff.
I break it down to things they can understand, their phone, their GPS, technologies they're already using today, and make it an easier transition to understand what this is and then what they can do to either take advantage of it or stay protected from it.

Botond Seres 51:30
Guy, thank you so much for being on our podcast and discussing how AI might be used by bad people to do bad things to your business.

Dave Erickson 51:39
Well, we're at the end of the episode today, and I'm sure we can all sleep better now. But before you go, we want you to think about this important question: how will you protect yourself and your business from criminals using AI to rob your business?

Botond Seres 51:54
For our listeners, please subscribe and click the notifications to join us for our next ScreamingBox technology and business rundown podcast, and until then, try using AI to protect yourself.

Dave Erickson 52:09
Thank you very much for taking this journey with us. Join us for our next exciting exploration of technology and business in the first week of every month. Please help us by subscribing, liking and following us on whichever platform you're listening to or watching us on. We hope you enjoyed this podcast, and please let us know any subjects or topics you would like us to discuss in our next podcast by leaving a message in the comments section or sending us a Twitter DM. Till next month, please stay happy and healthy.

Creators and Guests

Botond Seres
Host
ScreamingBox developer extraordinaire.
Dave Erickson
Host
Dave Erickson has 30 years of very diverse business experience covering marketing, sales, branding, licensing, publishing, software development, contract electronics manufacturing, PR, social media, advertising, SEO, SEM, and international business. A serial entrepreneur, he has started and owned businesses in the USA and Europe, as well as doing extensive business in Asia, and even finding time to serve on the board of directors for the Association of Internet Professionals. Prior to ScreamingBox, he was a primary partner in building the Fatal1ty gaming brand and licensing program; and ran an internet marketing company he founded in 2002, whose clients include Gunthy-Ranker, Qualcomm, Goldline, and Tigertext.
Guy Morris
Guest
Retired from a 38-year executive career with Fortune 100 software, high-tech and global energy companies, Guy Morris has also been a published songwriter for Disney Records, a patented inventor, a Coast Guard charter captain, an adventurer, and now, a self-published author of thrillers. From cartel death threats in Latin America to shark diving in Moorea, from a board room to a recording studio, from child homelessness to corporate jets, Guy pulls from a rich life of diverse experience to write books that thrill, educate, and inspire thoughtful dialogue on genuine issues facing humanity. Since his 2021 debut, Guy has released three pulse-pounding thrillers inspired by true stories, actual technologies, global politics and history. His Kirkus-recommended books have earned a place on BookTrib's Best 25 Books of 2021, Reader's Favorite Gold and Silver Awards, finalist for IAN Book of the Year, and semi-finalist for Cinematic Book. His articles have been published in SD Voyager and Mystery & Suspense magazines. In early 2022, he formed the non-profit Author Event Network association, where accomplished local authors collaborate to sign books at community events and festivals to engage readers, build platform and sell more books at higher margins.