Artificial Intelligence and Compliance Pt.1 – #SCW5

This week Matt Alderman, Josh Marpet, and Scott Lyons discuss how artificial intelligence and machine learning can be used for compliance.

Recorded 8.31.2021

STATS: Matt 25% | Scott 13% | Josh 59%

PCI Counter: 8

Matt Alderman 0:01
This week, Jeff Man is out, which means we won't be talking PCI. Okay, maybe a little. Instead, we're gonna talk artificial intelligence and compliance, part one. In the compliance news: what does your business need to know about the California Consumer Privacy Act, known as CCPA? Canada's data breach tally soars since privacy laws arrived. What you need to know about the US CLOUD Act and the UK COPPA Act, and more. Security and Compliance Weekly starts now.

SPONSOR 0:37
A Security Weekly production. And now, it's the show that bridges the requirements of regulations, compliance, and privacy with those of security. Your trusted source for complying with various mandates, building effective programs, and current compliance news. It's time for Security and Compliance Weekly.

Today's organizations face an evolving set of security threats and continually changing compliance requirements. As your business grows, privacy concerns only multiply and add to a dynamic set of priorities. Today's organizations need to integrate risk, security, and privacy into a cohesive program. Online Business Systems' team of seasoned security practitioners works closely with you to assess your security posture, policies, procedures, and technologies, providing tailored solutions that are specifically aligned to your business's risk profile and ultimately ensure the protection of your brand. To learn more about Online Business Systems, go to securityweekly.com/online.

Matt Alderman 1:40
Welcome to Security and Compliance Weekly. This is episode number five, recorded November 5, 2019. I am your host, Matt Alderman, here in Colorado. Joining me remotely are my co-hosts, Scott Lyons and Josh Marpet.

Scott Lyons 1:57
And Matt, how are you Matt?

Matt Alderman 1:59
What’s up guys? So this is the first time we’ve done like the full thing, like from the get go and Jeff’s not here to enjoy it.

Scott Lyons 2:09
Well, it’s odd that we’re recording the fifth episode. On November the fifth, right? Isn’t there a famous poem about the fifth?

Josh Marpet 2:18
Of November?

Scott Lyons 2:19
Yeah,

Josh Marpet 2:19
It is Guy Fawkes Day.

Scott Lyons 2:21
Pretty much.

Josh Marpet 2:23
Oh, wow. I didn't even think about that until you mentioned it. Well done.

Matt Alderman 2:26
Yeah, you guys are good. Don't ask me about that kind of stuff. That stuff's for my wife, not for me. All right, remember, the new Security Weekly website is officially live. Visit securityweekly.com to check out all of our new sorting and filtering capabilities. Please let us know if you find any issues or have any feedback by sending an email to website@securityweekly.net. This new show will be added to the website, hopefully right after the show, so soon you'll be able to get all these episodes from the new website as well. Also, please help us with our annual listener survey: use the survey drop-down from the new website and select the 2019 listener survey to give us your feedback. We always like to take that feedback and make improvements to the shows and our programming. All right, gentlemen, we're going to talk AI and compliance. Who's gonna kick this thing off for us?

Scott Lyons 3:21
You know, AI, Josh, I'll take it for a second. AI and compliance. We could do a lot of shows about AI and compliance, and in fact, I think it's part of our roadmap to split this up, right? So today, we want to really discuss what AI and compliance looks like, what does the playing field look like? And then in the upcoming series, we're going to show how AI actually is implemented for compliance, bringing on AI experts, talking about TensorFlow, talking about the different modeling aspects, how it all comes together. It's going to become a very integral part of trying to make compliance simple for companies and corporations.

Matt Alderman 4:07
Sounds interesting. So this is part one. So we're going to do some background, I think, on what is AI and ML, and what are some of the potential compliance use cases for this, right?

Josh Marpet 4:19
Exactly. We're gonna talk about what AI/ML means. We're going to talk about the differences between artificial intelligence and machine learning. We're gonna talk about the different uses that we've seen, thought about, and dreamed up where that can be used in compliance. And actually, Scott and I just presented on that down at a conference in Miami. So that was so much horrible pain, going to Florida in the wintertime. We had a great time there, actually. We went to the annual Caribbean and Americas Gaming Regulation Forum, put on by GovRisk. It was fascinating to meet with the regulators, government officials, the operators, the people that run the casinos and such, and related parties, if you will, from government to commercial. It's fascinating how they all have to work together. And they were ultimately terrified and fascinated by artificial intelligence, machine learning, and compliance. So it was a lot of fun.

Matt Alderman 5:19
Sounds great. So where do we start gentlemen? Who’s gonna start with the definitions of machine learning and artificial intelligence? Because we throw those terms around a lot, but what do they actually mean? And what’s the difference between them?

Josh Marpet 5:33
Okay, so the idea is, when you say artificial intelligence, people think Skynet and Terminator, especially with the new movie, right? And the answer is, that's sort of true. Those Terminators did count as artificial intelligences. However, they're not that close, not right around the corner. Okay? Don't everybody freak out quite yet. The idea is really that artificial intelligence is an aspect of machine learning. Machine learning is a pattern recognition system. So machine learning says: I'm going to take in a huge amount of data, I'm going to ingest it all, I'm going to look for patterns in the data, and I'm going to look for correlations. And at the point where the machine can take correlations and turn them into "this is a causation," that's when it starts to hit artificial intelligence, rather than just machine learning. Does that make sense? I can go on a lot, yeah.

Matt Alderman 6:24
No, no, I think that's a good delineation, right? Machine learning is looking for patterns and correlation. It's looking for that. But when you go to do something about it, you move away from aspects of machine learning into artificial intelligence.

Josh Marpet 6:42
Exactly. That's exactly it, well said. Well done, sir. That's exactly the difference. Any machine, properly programmed, can look for patterns. It says: well, this always happens before this, this always happens after this, these always happen together. The patterns are fine. It takes a thinking engine, like a human, to determine that one causes the other, or the other is caused by the one, whatever. So it's that aspect. And this is a very big simplification, don't get me wrong. But the idea is that when you start determining causation, that's when you start getting into intelligence; you start ascribing meaning to the events. And that's what we're looking for. So you can say that a SIEM, a security information and event management tool, like Splunk or any of the others out there, those are machine learning systems. They do pattern recognition; they look for patterns and thresholds and such. And thresholds are kind of primitive machine learning, but they are machine learning of a sort. But it takes a human, because we're the intelligence behind the SIEM, to say: this is a problem. This pattern that you've recognized, SIEM, Splunk, LogRhythm, whatever, is caused by a malicious actor, or simply a problem with the machine being turned on and off remotely, or whatever. The causation behind it is what we as humans bring to the table. When a machine can bring that to the table, that's when artificial intelligence will be real.
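The "these always happen together" pattern recognition described here can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation, and the event names and window data are invented:

```python
from collections import Counter
from itertools import combinations

def co_occurring_pairs(windows, min_count=2):
    """Count how often pairs of event types appear in the same time
    window -- the 'these always happen together' pattern. Returns only
    pairs seen at least min_count times. Note this finds correlation;
    deciding *why* the events co-occur is still up to a human."""
    pair_counts = Counter()
    for events in windows:
        # De-duplicate and sort so each pair is counted once per window.
        for pair in combinations(sorted(set(events)), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_count}

# Each inner list is one time window of observed events (hypothetical names).
windows = [
    ["login_fail", "port_scan"],
    ["login_fail", "port_scan", "reboot"],
    ["reboot"],
    ["login_fail", "port_scan"],
]
patterns = co_occurring_pairs(windows)
# ("login_fail", "port_scan") co-occurs in 3 windows: a pattern, not a cause.
```

The output is exactly what the discussion describes: the machine reports the correlation, and a human still has to supply the causation.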

Matt Alderman 8:13
Got it. So what are some good examples of use cases, artificial intelligence use cases, today? What would be considered AI? I think we have a good understanding of what ML is; what would be a good use case for AI now?

Josh Marpet 8:28
Well, okay, so a use case that's currently being done? Is that what you're asking? Yeah. Well, facial recognition is one that's in very common use around the world. Matter of fact, if you've been to an airport in the last five years, in the US at least, when you leave the secure area, you go through a cattle chute, if you'll excuse the expression, where you actually walk, person by person, and they'll have turnstiles and doors that automatically open and only allow one person in, etc., to make sure that one person at a time is in that chute. That's because, as you walk through, there's a camera in front of you, behind you, to your left side, and to your right side, to get a very solid facial recognition enrollment set for you into the database. That's lovely. It's so polite of them to make it convenient, right? I mean, I think it's lovely. But that was sarcasm, in case you weren't clear. Just checking. And they're using those facial recognition enrollments to determine movement patterns, to find patterns not just of my movement, but of the people I'm affiliated with, and the people they're affiliated with. And this is the classic NSA circle of, you know, how many jumps does it take before you're outside of the organization, group of friends, terrorist group, whatever. Is that machine learning? It's pattern recognition. It really is still just machine learning. Artificial intelligence is still kind of in its infancy. We can tell it patterns that we ascribe to malicious actors, but that's the human providing the intelligence and then giving it a lookup table. Does that make sense?

Matt Alderman 10:02
Yeah, known as supervised learning.

Josh Marpet 10:05
Exactly, exactly, exactly. I try not to bring too many terminologies in, but say I've got a rule that any machine making more than 15,000 DNS requests in a minute is probably either a malicious actor trying to exfiltrate information through DNS, or a really screwed-up machine. On the one hand, it could be just a machine malfunction, in which case, all right, just go fix the machine, reimage it, do something with it. On the other hand, it's a malicious actor trying to exfiltrate information. Which is it? We're gonna have to examine the packets, we're gonna have to take a look at this stuff, and we don't know until we examine it closer.
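The DNS threshold rule above is about as simple as a detection rule gets, which is exactly the point being made. A minimal sketch, using the 15,000-per-minute figure from the discussion (the data shape and host addresses are invented):

```python
DNS_THRESHOLD = 15_000  # requests per minute, per the rule of thumb above

def flag_dns_anomalies(per_minute_counts):
    """Return hosts whose DNS request count in any single minute exceeds
    the threshold. The rule only flags; deciding between 'malicious
    exfiltration' and 'broken machine' still takes a human looking at
    the packets."""
    flagged = set()
    for minute, counts in per_minute_counts.items():
        for host, n in counts.items():
            if n > DNS_THRESHOLD:
                flagged.add(host)
    return flagged

# Hypothetical per-minute counts keyed by timestamp, then by host.
counts = {
    "12:00": {"10.0.0.5": 120, "10.0.0.9": 16_400},
    "12:01": {"10.0.0.5": 98},
}
suspects = flag_dns_anomalies(counts)  # {"10.0.0.9"}
```

This is the "human provides the intelligence, machine gets a lookup table" pattern: the 15,000 threshold encodes human judgment, and the code just applies it.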

Scott Lyons 10:44
So … I’m sorry, go ahead. I apologize.

Josh Marpet 10:49
No, go ahead, please.

Scott Lyons 10:50
Oh, what I was gonna say was, what you’re pointing towards is the way that pattern recognition is done. Right? There’s a difference between correlation and causation.

Josh Marpet 11:03
Right. That’s exactly it. Actually. I’m gonna – would it be horrible for me to share a screen?

Matt Alderman 11:09
No, not too bad.

Josh Marpet 11:11
Okay. Oh, that’s not the screen I wanted to share.

Matt Alderman 11:16
Yeah, make sure you share the right one. We don’t want to we don’t want to see

Josh Marpet 11:19
I have a monster monitor. So I can share screen but it’s gonna be ugly.

Matt Alderman 11:26
Share window.

Josh Marpet 11:28
It doesn't allow me to do that. Okay, fine. Here, you should be able to see my screen now. I'll make you smaller. Okay, Matt, don't take it personally. So here is the artificial intelligence and machine learning difference. Germs are the cause of both bad smells and disease. But before germ theory, the only thing people knew was that bad smells and disease happened around each other. So they thought that bad smells caused the disease. They didn't even have the right paradigm, if you will, to pick up on what was the actual causation, which is germs. We had to get germ theory out there in order to figure that out. Does that make sense? So that's the difference between causation and correlation. That's the really simple one. But I'm just going to show you one more slide. This is the presentation we gave. And I should actually just hit present, I suppose. Whatever. So, slideshow, come on, wakey wakey, current slide. There we go. Okay, that's better. And these are all the things we thought of that could be used or done with artificial intelligence and machine learning in a casino or regulatory environment. And I think it's kind of interesting. Sorry?

Is it that small? Booger. Hang on a second. Let me not share the screen; let me try and share a window. Teensy bit frustrating. Sorry, guys. Oh,

Matt Alderman 13:05
Technology. You gotta love it?

Josh Marpet 13:07
What’s that?

Matt Alderman 13:09
Technology? Gotta love it.

Josh Marpet 13:10
Well, it’s frustrating because I have a monster screen. And it’s not letting me share just a window. It’s only letting me share the entire screen. So

Matt Alderman 13:21
They have to take over your whole screen?

Josh Marpet 13:24
Yeah, it’s frustrating. And I have this huge screen. So the text looks tiny. Because I can’t really zoom in that easily.

Matt Alderman 13:31
Command-plus.

Josh Marpet 13:33
Yeah, I could do that. Alright, fine.

Scott Lyons 13:35
Josh, I think I got it. Hold on one second. Right there. How about that?

Matt Alderman 13:42
There we go. Now we can see it.

Josh Marpet 13:44
Okay, good. Okay, there we go. So those are the roles that we thought of for AI/ML inside of a casino and regulatory organization. So for a company, we can track evidence trends, we can monitor departments. Basically, this is turning into a supervisory system. All right, we can see what's going on, we can track things. Think of it as a SIEM for compliance. Okay. And we expect that some companies will start tracking these things using machine learning, so they're going to branch out from security and go into compliance as well. All they've got to do is extend what they're doing into different realms, if you will. They're already tracking data; why not track other data, like compliance data? So that's interesting. Could it be that you'll have an extra Splunk module, or LogRhythm module, or any of the other SIEM modules, that'll be compliance? Absolutely, we think so. That's internal to a company, on that top left there.

Matt Alderman 14:44
True. And we've seen some aspects of that in the industry, right? I mean, remember, the SIEM was driven a lot by compliance. There are people that debate me on this topic, but SIEM really took off when log correlation, log management, and some of the early regulatory requirements came into play. And everybody tried to build a set of compliance checks into their SIEM, and they still do that today, but not to the extent I think you're talking about in this example, which is expanded beyond just some of the technical control data. Because from a compliance perspective, yeah, you still have technical controls, but you have management and operational controls, policies, procedures, other things that drive an overall governance policy. Those aren't traditionally capabilities in the SIEM. I even spent a little bit of time doing this with Qualys, when we built the Policy Compliance module, which was configuration, bringing in some workflow capabilities to collect some of these nontechnical controls, right? So we've seen aspects of the security industry move into pieces of this. And then you have a whole separate industry over on the GRC/IRM side that does this more from the questionnaire, policy, and control framework structures on that side.

Josh Marpet 16:00
Oh, what a great point you raise. I'm gonna stop you for just a second, because I've got to address that. So you've got the GRC tools, which are normally tied around the idea of a risk register, and they're normally tied around the idea of an asset list or an asset inventory. So what they do is they tie a risk to an asset. So this application, which is an asset, okay, uses these pieces of data, which are assets. And the risk to that is: if we lose this data, or get this data breached, or whatever, it's going to cost us a million dollars a minute; or if the system goes down, it's going to cost us whatever the e-commerce price for that is, you know, 50 bucks a day, $1,000 a second, whatever it is, right? And they have the questionnaires and the surveys to ask the questions. Hey, third-party compliance: are you a third-party vendor? Are you compliant? Here are the things, you know, like the SIG and whatever else, right, from the Santa Fe Group. So you've got all this good stuff on the GRC side, which is valid, but it doesn't have the concept that the SIEM vendors have, which is that this is a real-time monitoring engagement, a real-time monitoring world, and you need to keep on top of the process as well as the data. So it's a fascinating thing. I'm eagerly waiting to see when they actually get it in their heads to talk to each other. I'm eagerly waiting to see a GRC or compliance tool talk to Splunk, talk to LogRhythm, and say: no, guys, we really should work together. That's when I think it's gonna really mature.

Matt Alderman 17:30
We tried this, and so I have a lot of experience in this realm. We tried this for a while. The problem is most of the GRC solutions are not architected to scale to the amount of data that the Splunks and the LogRhythms of the world actually have. A lot of these systems were built with an architecture that said: look, I'm gonna go out periodically, ask a bunch of questions, get answers, analyze them, and give you some results. What we're talking about is a real-time system that is looking at control state on an ongoing, real-time basis, and making risk decisions and updates into the overall risk model. And none of the GRCs were built to handle that real-time data flow. I spent almost two and a half years at RSA trying to get Archer to suck in more than, you know, a million records into their database. Because when you talk about these tools, we're talking probably billions of records that need to be analyzed and monitored in real time in order to make that concept work, right? So I love this concept. But what I'm seeing is a disconnect in the technologies. You've got the SIEMs doing their ML pieces, and there's like an artificial intelligence engine that sits on top of compliance and risk around it that hasn't really been built yet in the industry, that needs to consume all that data. And I'm not sure that the existing GRC/IRM vendors are in an architectural position to do that.

Josh Marpet 19:04
They're not, as far as I know. Let's be clear, as far as I know, they're not. And you phrased it perfectly. Thank you, Matt. That was beautiful. So I'm actually going to bring this back to security. I think Paul might pick up on this, or one of the guys from Enterprise Security Weekly or Security Weekly. Think of pentesting ten years ago: it was a point in time. I went in, I collected data by breaking, by analyzing, by looking at stuff. And then I left and I wrote a report. Here's your report, and I'm done. Goodbye. I'll see you in a year, right? And that's where the GRC tools are right now. Now look at pentesting: we're moving closer and closer to continuous pentesting, continuous scanning, continuous automated this, continuous that, etc. Right? And the SIEM tools, and those types of tools, not just SIEMs, are pulling in that data, analyzing it, as you said, on a real-time basis, with millions and billions and whatevers of records, continuously looking for patterns, continuously looking for correlations. Right? Am I vaguely correct in what you're saying? Yep. Okay, so now compliance is sort of following security, because we've got these GRC tools, and they're like: we're doing a point-in-time assessment. Oh my God, doesn't that sound familiar and old-fashioned in security? Well, guess what? It's still old-fashioned in compliance. These tools need to step up. And they need to make themselves available to SIEM tools as data gatherers or data feeds, if you will, or build their own, that's fine, I'm good with that, that does continuous monitoring of all of these different things. Scott, can you pop that slide back up?

Scott Lyons 20:47
Yeah. Give me one second.

Josh Marpet 20:48
The one with "internal to a company," "external," "regulators," and "gaming-specific." Because for every different viewpoint, you have different things that a real-time data collection and analysis system, I'm sorry, that's like a SIEM, let's be honest, can use to audit and grade your compliance stance. And that includes everything from your evidence, that includes your process and procedures, that includes your staff. Look on the top left: internal to a company. So this is inside of a company, like a casino, or, I don't care, like a manufacturer. Where are your auditable repositories of evidence? Is the evidence coming in in a proper fashion? Can I link that to my change control? What do you mean? Well, if I add five new servers for five new applications, do we have onboarding checklists for the servers? Have we used the proper gold image five times? I said five, right? Have we done our task completion? That links your change management with your evidence in your auditable repositories. It's analyzing these different feeds of data from these different silos to make sure that things are being done properly. If I have four onboarding checklists, but all five systems are registered as up and running, where's the fifth one? Somebody did a custom build. Now, is there a change control ticket for the custom build? That's fine. Is there no change control ticket for the custom build? Alert! Why? Because both security and compliance are gonna have problems with that custom server, and we want to make sure that they know about it. This is, just as you said, real-time information that needs to be out there on a continuous basis. Sorry, you just brought up wonderful points. I'm really enjoying it. Sorry.
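The five-servers/four-checklists cross-check described here can be sketched roughly as follows. The data shapes, server names, and alert messages are all invented for illustration; a real system would pull these from a CMDB, a checklist tracker, and a ticketing system:

```python
def audit_onboarding(registered_servers, checklists, change_tickets):
    """Cross-check three compliance silos: servers registered as up and
    running, completed onboarding checklists, and change-control tickets.
    A server with no checklist and no ticket is the alert case; a server
    with a ticket but no checklist looks like a documented custom build."""
    alerts = []
    for server in registered_servers:
        has_checklist = server in checklists
        has_ticket = server in change_tickets
        if not has_checklist and not has_ticket:
            alerts.append((server, "no checklist, no change ticket"))
        elif not has_checklist:
            alerts.append((server, "custom build: ticket exists, checklist missing"))
    return alerts

# Five systems registered, but only four onboarding checklists and no tickets.
registered = ["srv1", "srv2", "srv3", "srv4", "srv5"]
checklists = {"srv1", "srv2", "srv3", "srv4"}
tickets = set()
alerts = audit_onboarding(registered, checklists, tickets)
# srv5 has neither a checklist nor a change ticket: flag it.
```

The interesting part is that each silo looks fine on its own; the alert only appears when the feeds are joined, which is the whole argument for a SIEM-style compliance engine.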

Scott Lyons 22:29
Well, to add to what you were saying about pentesting, let me just step in here. You look at it from one aspect, which is the security side. Now let's look at it from the compliance side, right? We're seeing, with the advent of PCI 4.0, and sorry, Matt, we have to talk about this for a second, we're seeing PCI starting to hone in on what it really comes down to, right? PCI 1.0 was to get everybody on board; PCI 2.0 and 3.0 started saying, you know, this part of the compliance goes here, this part goes here, this part goes here. And now we're gonna start to see PCI hone in on actual protection of systems, dictating what needs to be looked for in a pentest and how that report needs to be given. And then later in the episode, when we're talking about the news, you're going to see, especially in FINRA situations, that auditors are being forced to talk to the board about their high-risk findings, right? So being able to have the pipe of data and the trickle-down of AI/ML, right, to be able to give you those high-level targets, is really where all of this is headed.

Matt Alderman 23:44
Yeah, right. And it's not just internal, right? We can use this for all of our third-party monitoring, which has been a major headache, and is still a major headache, for organizations. Because again, it's a point-in-time assessment. You go out, you say this is a high-risk vendor, you analyze them once a year. What good is that when they're making changes the other 364 days a year? How do you know that the risk profile of that third-party vendor didn't change? Right? So there's a lot of use cases where bringing these two capabilities together is a Nirvana state, but nobody's built it yet. And then regulators could use this, yeah.
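One way to picture the point-in-time versus continuous contrast for third-party monitoring is a running risk score that updates on every signal instead of once a year. This is a toy sketch with an invented weighting scheme, not any vendor's actual scoring model:

```python
import datetime

class VendorRiskMonitor:
    """Recompute a vendor's risk score on every new signal instead of
    at an annual assessment. The weights are purely illustrative."""

    WEIGHTS = {"breach_report": 50, "expired_cert": 10, "late_questionnaire": 5}

    def __init__(self):
        self.scores = {}
        self.history = []  # audit trail of (date, vendor, signal)

    def ingest(self, vendor, signal, when):
        """Record the signal and return the vendor's updated score."""
        self.history.append((when, vendor, signal))
        delta = self.WEIGHTS.get(signal, 1)  # unknown signals count a little
        self.scores[vendor] = self.scores.get(vendor, 0) + delta
        return self.scores[vendor]

monitor = VendorRiskMonitor()
monitor.ingest("acme", "late_questionnaire", datetime.date(2019, 2, 5))
score = monitor.ingest("acme", "breach_report", datetime.date(2019, 6, 1))
# The score moved mid-year; an annual assessment wouldn't see it for months.
```

The design point is the shape of the system, not the numbers: risk state is a continuously updated quantity with an audit trail, rather than an answer captured once a year.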

Josh Marpet 24:26
There's SecurityScorecard and BitSight that are looking at third parties, and they're saying, you know, we're watching them on an ongoing basis. So they're trying. But on the other hand, I don't know that they've got the right mindset. And I'm not saying they're wrong or bad, please, I'm not saying that in any way, shape, or form. I'm just saying that the way that they're doing it, and the mindset that they have to collect the data, analyze the data, and display that data, might not be one that compliance can use, per se. But let's talk about this just from the point of view of the regulators. As a regulator, I can use these kinds of systems to determine: which ones should I audit? Who's being compliant? There's all kinds of ways that these kinds of systems can and should and will be used, and we're not thinking about it well enough in this industry. And I think that's fascinating. I think there's a lot of opportunity. And I think there's a lot of opportunity for people to do their stuff better, well enough that they will not be brought to the attention of the regulator, shall we say. Because, you know, sort of like in business, you don't want to be on the front page of The Wall Street Journal; it's sort of the same thing in regulation and compliance. You don't want to be on the regulator's dashboard, if you know what I mean.

Scott Lyons 25:30
And to add to what you're saying, Josh, it's almost like risk needs to be calculated at the change control board level, right? So when you're making a change: what's the impact? Because a pentest is just a point in time. Are we secure on February 5? Are we secure today? Are we secure tomorrow, when we implement a new change? I don't know. What kind of risk does that introduce into the company? Don't know, right? So when you're talking about AI and ML and trying to tune all of this risk profiling, these risk reports are a major factor of what the AI needs to be able to identify.

Josh Marpet 26:08
It's a fascinating problem. It really is. Because you've got all of these different data sources. Security has obviously been the driver of bringing in intel and data sources, call it threat intelligence, call it whatever you want, I don't care. But they bring all this data into the system, and we've been getting closer and closer to taking data for security, monitoring it, analyzing it, looking for correlation, and thinking about causation, the thing humans do, for so long now. But now you've got the fact that compliance is starting to generate so much data as well, as we add more compliance regulations: CCPA, GDPR, PCI. Because Jeff Man isn't here, we have to say PCI at least five times, and I think Scott, you've got like four or ten of them? I'm not sure.

Scott Lyons 26:52
Yeah, yep, yep.

Josh Marpet 26:54
And all these different regulations are out there. We're seeing more and more pop up on the state level, on the federal level, on the international level. We've talked about them a dozen times; I'm not going to name all of them. But compliance is going to be creating as much data as security, if not more. Okay, in some aspects it'll be more, in some aspects it'll be less, some companies, whatever, you get the idea. They're going to be creating, in real time, a crud-ton of data. That's a technical term; metric crud-ton is the real technical term. But you get the idea. And if we don't analyze that data in the same way that we analyze security data, we are adding risk. And we're adding risk not just to the staff, the management, and the executives, but we're adding risk to the board. When we do that, you will see controls being handed down from the board, and that might not be the best way to do it. It might be better to build them from the bottom up rather than the top down.

Scott Lyons 27:55
Well, at the end of the day, it's all about, you know, can we still see our cat photos? You know? I mean, if we're not taking a holistic approach to compliance, especially when trying to involve new and emerging tech... It's all well and good to talk about new and emerging technology like AI and ML, but if you're not doing the basics right now, and you're not looking at the business of doing the basics, there's no way that you can actually go forward and say, well, let's use AI. You know, right now, the unfortunate part is that there is no turnkey solution for it. There is no, you know, let's buy a system and plunk it right into our network. So.

Josh Marpet 28:42
Effectively, what we're saying is that AI can be incredibly useful, and when I say AI, please, AI and ML are interchangeable here. AI and ML can be incredibly useful to compliance, to security, to the overall health and risk management of any company or organization out there. Right now, it can do data collection and data analysis and look for correlations; causation is still up to the human mind, though. All right, and that's fine. In the future, it's going to be very different and very interesting. And what we're waiting for, at least on this show, is to find a system that will pull in the compliance data and actually use it for the types of real-time monitoring that we've seen in security systems for the last five or ten years. How long has Splunk been around?

So yeah, I think this is gonna be interesting.

Matt Alderman 29:35
Eight, nine, yeah.

Josh Marpet 29:37
Eight Nine. Yeah. Yeah. I think this is gonna be interesting, Matt.

Matt Alderman 29:40
Yeah, I think the use cases are the interesting part. Because, having been on the GRC side and the security side of this fence, I always knew where these systems could come together. The problem was, you really almost have to start from scratch. So you really need to find somebody who wants to go solve this problem: go out and build a highly scalable solution that can ingest the data from the existing security tools, put a layer of control and risk mapping, contextualization at the asset layer, on top of it, and be able to build a real-time integrated risk management system, for lack of a better term. That would be really, really interesting to see. And I think the foundations of that technology are there; I just haven't seen anybody actually start to build it yet.

Josh Marpet 30:30
If anybody does, please let us know. If anybody has seen it, or has built it, please let us know. We'd love to hear about it. That would be awesome. If not, somebody out there might need an idea and, you know, build it. We'd love to see that too.

Matt Alderman 30:43
Yeah, they might need some expertise. They might know where to come.

Josh Marpet 30:46
Hey, we can talk. We can talk.

Matt Alderman 30:50
Alright, with that, let’s conclude this part one of AI and compliance. We’ll take a quick break and then we’ll cover the compliance news for this week.
