Experts on AI in Finance: Kevin Shamoun, SVP at Fortis

In a wide-ranging interview, Techopedia speaks with Kevin Shamoun, senior VP at Fortis, about the challenges and opportunities for AI in financial services. From fraud to predictive technology, from customer services to the inevitable day something goes wrong in the real world, there is plenty to think about for 2024 and beyond.

Techopedia speaks with Kevin Shamoun, senior vice president (SVP) at payment solutions provider Fortis and Vice-Chair of the AI Committee at the Electronic Transactions Association, about the use of artificial intelligence (AI) in the financial services industry.

We also delved into the emergence of bad actors using generative AI to commit fraud, as well as how AI can be used to help predict and identify it.

On Generative AI in Finance

Q: How are companies in the financial services industry using AI?

A: Right now, it’s mostly not generative AI being used; it’s mostly the technology that’s been around for a while, such as machine learning — models that are being trained around risk tolerances, and so forth.
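To make that concrete, the sketch below shows the kind of non-generative machine learning he describes: a classifier trained on historical transactions, scored against a risk tolerance. The features, data, and threshold are illustrative assumptions, not any vendor's actual model.

```python
# A minimal sketch of a transaction-risk classifier, the kind of model
# "trained around risk tolerances" described above. Features, data,
# and the 0.7 threshold are illustrative assumptions, not any vendor's
# actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy history: [amount_usd, hour_of_day, is_new_device, country_mismatch]
X = np.array([
    [25.00,  14, 0, 0],
    [1200.0,  3, 1, 1],
    [60.00,  11, 0, 0],
    [980.0,   2, 1, 1],
    [15.00,  19, 0, 0],
    [2500.0,  4, 1, 0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = confirmed fraud

model = GradientBoostingClassifier().fit(X, y)

# Score a new transaction before it runs and act on a chosen tolerance.
risk = model.predict_proba(np.array([[1800.0, 3, 1, 1]]))[0, 1]
print(f"risk={risk:.2f} ->", "review/decline" if risk > 0.7 else "approve")
```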

In financial services, the use of generative AI is really going to be around the service aspect of it: what can we do for chatbot automation? What can we do for phone call automation? What are the useful tools to respond to people’s service needs?

The challenge for AI in the financial sector going forward comes when you use it to try to predict things, like “what’s going to happen in the stock market?”

If it’s generative AI, that’s one thing because people have always had their models for trying to predict the stock market.

But when algorithms are thinking and processing on their own, how are they making decisions to buy or sell something? In the financial sector, that becomes a concern because it could have real financial consequences for a company.

What if it does something wrong and you have to go back to inspect what it did? By then, it’s already too late.

So, training those types of generative AI models that try to think on their own and be predictive, you’re not going to see that for quite some time. I don’t think people are going to be willing to put their own money at stake as a test. The reward’s not there.

Q: Is there a way to avoid AI algorithm bias in financial decision-making?

A: You could try, but it’s inevitable. AI is going to start coming up with its own opinions based on the data sets it is provided. Humans have bias — whether you say you’re unbiased or not, there’s no such thing.

Our perceptions are reality, so the rules that you’re trying to apply to AI are unfair.

If you look at what humans do, how do you expect something to be perfect? And what is the definition of perfect? Because my definition of bias is different from your definition.

It’s not a fair statement to say that a generative AI can be unbiased because it’s not that black and white.
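His point that bias depends on whose definition you use can be made concrete: fairness metrics exist, but each one encodes a single definition. Below is a minimal sketch of one such metric, demographic parity (the gap in approval rates across groups); the groups and decisions are invented for illustration.

```python
# Minimal sketch of one fairness definition: demographic parity, the
# gap in approval rates between groups. Groups and decisions are
# invented; a zero gap here can still look biased under a different
# definition, which is exactly the point made above.
from collections import defaultdict

decisions = [  # (group, approved)
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 1),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates, "parity gap =", round(max(rates.values()) - min(rates.values()), 2))
```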

On AI and Financial Fraud

Q: How can AI be used to detect fraud and illegal activity?

A: A lot of AI is being used today in, for example, transaction processing. From both the card-issuing standpoint and the payments space, they’re doing a great job of utilizing AI technology to prevent and protect against fraud before the transaction even runs; technology such as 3D Secure 2 has been out for a little while.

There are over 200 data points that are aggregated from the consumer’s device when they’re making a transaction, which are sent to the issuer to see if the issuer knows the device. We’re in a data-driven world now, and there’s so much being aggregated; it’s quite astounding.
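For a sense of what those data points look like in practice, here is a hedged sketch of a small subset of the browser and device fields an EMV 3-D Secure 2 authentication request can carry; real messages define far more fields, and this simplified payload is illustrative rather than a complete 3DS2 message.

```python
# Hedged sketch: a tiny subset of the device data a 3-D Secure 2
# authentication request (AReq) can carry. Real messages define far
# more fields (the "over 200 data points" above); this payload is
# simplified and illustrative.
import json

device_data = {
    "browserUserAgent": "Mozilla/5.0 (...)",
    "browserLanguage": "en-US",
    "browserScreenWidth": "1920",
    "browserScreenHeight": "1080",
    "browserTZ": "300",           # minutes offset from UTC
    "browserColorDepth": "24",
    "browserJavaEnabled": False,
}

# The merchant's 3DS server forwards this, along with purchase and
# account data, so the issuer can check whether it knows the device.
print(json.dumps(device_data, indent=2))
```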

Q: Is there a threat that the use of AI apps and tools could result in more fraud?

A: From a bad actor standpoint, they are always trying. That’s the game — it’s the good guys vs the bad guys.

In generative AI you have ChatGPT, and then bad actors come up with some creative names like ChaosGPT and FraudGPT; it’s the evildoers trying to wreak havoc — and it will be this continuous fight.

It’s been like that since the beginning of technology: people looking for ways to, not even really hack, but bend the rules a little bit. That’s probably where the biggest threat is: the guys that ride the line between what you’re allowed to do and what you’re not allowed to do.

They’re just a little bit over the line of what you’re not allowed to do, and so they have an advantage over the guys that are trying to do the right thing.

Q: Is the financial services industry prepared to deal with the possibilities of AI?

A: I think so. It’s not new; it’s just different. Somebody pulls out a new weapon, so the question is how you defend against that weapon: do you need a new shield?

The attack vectors haven’t changed. In an API-driven world, the bad actors, even with AI, are still trying to manipulate the same APIs. They just have a different tool in the tool belt to try to attack better, faster, easier.

It’s about keeping up on the defense side and asking: do we have everything locked down? Is there a window open somewhere that shouldn’t be? Is there one that an AI bot may be able to find that traditional hackers didn’t find or see before?
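One simple version of that defensive check is baselining per-client API behavior and flagging outliers, since automated probing tends to look statistically different from normal traffic. A minimal sketch, with invented traffic numbers and an arbitrary three-sigma cutoff:

```python
# Minimal sketch of a defensive check: flag API clients whose request
# rate sits far outside a known-good baseline, a crude way to surface
# automated probing. Traffic numbers and the 3-sigma cutoff are
# invented for illustration.
import statistics

baseline_rpm = [12, 15, 11, 14, 13, 12, 16]  # known-good history
mean = statistics.mean(baseline_rpm)
stdev = statistics.stdev(baseline_rpm)

current_rpm = {"client-a": 13, "client-b": 15, "client-e": 240}
for client, rate in current_rpm.items():
    z = (rate - mean) / stdev
    if z > 3:
        print(f"ALERT: {client} at {rate} req/min (z={z:.1f}, baseline ~{mean:.0f})")
```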

The big threat is attackers having the ability to automate with AI and be dynamic in their attacks, and it doesn’t take a person to code it. They can go and update their attacks because that’s what you told the AI to do, and it’s going to execute the plan.

AI always thinks it is right, so if you give it an instruction and you ask: ‘is this right?’, of course, it’s going to say yes because it just gave you the answer — even if it’s completely wrong and it’s delusional.

Obviously, that’s the risk of over-reliance on it: it makes up things to tell you that sound very right. It’s interesting that the term they’ve coined for it is “hallucination”.

Why Humans are the Front Line in Cyber Security

Q: Is there a need for organizations to take a different approach to cybersecurity?

A: It’s about education. At the end of the day, if you’re talking about security, the biggest weakness is us as people.

So, it’s just keeping people informed of what to look out for and what scams are out there, and then the responsibility comes down to the device manufacturers.

If they can provide a device that can’t be easily penetrated, then the rest of the responsibility falls on consumers, from a business or personal standpoint, to make sure you’re educated enough not to click the link you don’t know.

Your CEO is not going to ask you for gift cards through text messages — it sounds funny, but I hear these stories all the time, and it still comes down to the human factor — that’s where most breaches happen.

If you look at most of the breaches that we’re talking about within the last few years, they all started with a person. The latest one, with MGM being shut down for so long, came down to somebody spoofing somebody on a phone call to get access. They didn’t break in; they didn’t blaze their way through the front door; they picked the lock.

It comes down to consumer awareness, and it’s tough to discern what’s true out there today. The social media platforms are doing a better job, but it’s a fight that will never end.

The Regulatory Landscape

Q: What is your view of the regulatory environment, in light of the White House recently issuing an Executive Order on managing the risks of AI?

A: It’s about time, I think we’re behind. The reality of it is the rest of the world, specifically the EU, is a lot further ahead than we are from a regulation perspective, so it’ll be interesting to see how the US starts to address those things.

It’s really hard to regulate something like AI. Hopefully, the government can come up with a framework that is fluid, but if history tends to repeat itself, that’s not going to be the case. As with everything else in technology, regulation always seems to lag behind.

The balance is really tough to strike. If you look at past regulations — I’ll use cryptocurrency as an example — the government was slow to regulate, and then they’re finally coming up to speed with the CBDC side of things.

The government needs to put a framework in place that allows somebody to be flexible. And if you put the guardrails up around the tenets of what’s going to happen in AI, what you can and can’t do, it’s really around following principles.

So if you can get the right principles in place foundationally then I think it will be OK.

The startups that would have concerns are probably the ones that you want to have concerns.

I do believe there’s a place for regulation in things like AI because there’s a lot of real impact that can happen with something thinking on its own; it’s quite scary.

AI Trends in 2024

Q: What are the main AI trends going into 2024?

A: It’s going to be around what we are allowed to put generative AI on. Are you allowed to put generative AI on, I’ll call it a machine, something that’s not just where I can talk to it on my computer, but where it can actually take an action?

That’s really where the biggest regulation is probably needed from a government oversight standpoint because then it becomes real.

To take the I, Robot example, you put generative AI inside some type of bot that now goes beyond its programming of just walking around or doing a specific function: it can do something or make a decision on its own, and when you ask it why it did that, it says, just like a human, ‘I don’t know, I just did it; I don’t know why’.

That’s really when it becomes real, and I think we’re going to start to see some of that next year.

Unfortunately, I think what will happen is somebody’s going to get hurt, and there’s going to be a huge public outcry over it — rightfully so — and that’s when it’s going to become really real for people.

All these startups, their response will be: ‘Oops, it wasn’t supposed to do that’. Well… it wasn’t supposed to, but it did it anyway, so what checks did we have in place? And how do you prevent it from overriding the checks you put in place?

Because early on with ChatGPT you could prompt and manipulate it to get answers that probably shouldn’t have been provided in the first place.

Again, the people are sort of riding the line of what you’re supposed to do with it — and they’re just trying to keep prodding.
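On the earlier question of checks and how to keep a model from prompting its way past them, one common pattern is to enforce the guardrail outside the model: the AI can propose actions, but deterministic code it cannot rewrite decides what executes. A minimal sketch, with invented action names and limits:

```python
# Minimal sketch of an out-of-band guardrail: the model proposes
# actions, but an allowlist and limits that live outside the model
# decide what actually runs. Action names and the $500 limit are
# invented for illustration.
ALLOWED_ACTIONS = {"send_receipt", "issue_refund"}
REFUND_LIMIT_USD = 500.00

def execute(action: str, amount: float = 0.0) -> str:
    """Gatekeeper between an AI agent's proposal and the real world."""
    if action not in ALLOWED_ACTIONS:
        return f"BLOCKED: '{action}' is not on the allowlist"
    if action == "issue_refund" and amount > REFUND_LIMIT_USD:
        return f"ESCALATE: ${amount:.2f} refund needs human approval"
    return f"EXECUTED: {action}"

# These checks run before anything happens and cannot be prompted
# away, because they are ordinary code, not model output.
print(execute("issue_refund", 1200.00))  # ESCALATE
print(execute("delete_database"))        # BLOCKED
print(execute("send_receipt"))           # EXECUTED
```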

People think that AI is going to revolutionize different sectors, and I think they’re right in some regards. But it probably will not be as broad as they think. When they think about people in the service industry and assume AI is going to take over all these jobs, I actually don’t think that’s going to be the case.

I’ll use an analogy of when you had a lot of people cutting grass by hand, and then somebody came up with a lawnmower. It just revolutionized it, and what ended up happening is it actually created the ability for more jobs because it’s very easy to go clear a field to do something different on it; now I can go play a sport easily.

So now we need people to go run the sport, now I can put a stadium here. What people fail to realize is that while it’s going to take over all of these jobs, it’s the jobs people probably don’t want anyway, which is why we’re working towards replacing them — and then it’s going to open up different opportunities, it’ll be a paradigm shift for where people are needed.

When you say AI is going to replace jobs, if we can’t find people for those jobs now, are they really taking over all of these jobs that we have vacancies for anyway? My perspective is it’s going to level people up more so than take over their jobs.

About Kevin Shamoun

Kevin Shamoun serves as a Senior Vice President (SVP) for FortisPay (Fortis Payment Systems).

Formerly he was the founder of Zeamster, which Fortis acquired.

Kevin has extensive operational and technical knowledge of the payment industry, including almost 20 years of experience working with major Independent Sales Organizations (ISOs) and major financial institutions.

He has managed all aspects of the payment industry, encompassing both acquiring and issuing.

He has been responsible for designing, deploying, maintaining, and securing critical systems that supported multiple organizations.

He is currently the Vice-Chair of the Electronic Transactions Association (ETA) Technology Committee and holds a Bachelor of Science from Oakland University and a Master of Science in Business Information Technology from Walsh College.

By Techopedia | https://www.techopedia.com/experts-on-ai-in-finance-kevin-shamoun-svp-at-fortis