Data Security and AI: How to get them on the same page with Jesse Sedler

68 minutes

About this episode

Data governance and security may be scary and hard, but with the right solutions in place, data teams can enable the right balance between security and usability to drive their businesses forward!

Tim Gasper [00:00:07] Hello, everyone. It's time once again for Catalog & Cocktails, your honest, no BS, non-salesy conversation about enterprise data management with tasty beverages in hand. I'm Tim Gasper, long-time product guy, customer guy at data.world, joined by Juan Sequeda. Hey, Juan.

Juan Sequeda [00:00:23] Hey, Tim, how are you doing? I'm Juan Sequeda, principal scientist and the head of the AI Lab here at data.world. And as always, it's Wednesday, middle of the week. Towards the end of your day, there's sunset somewhere. And we're here to talk about data and a topic that I don't think we've ever touched, or barely touched that much. We'll get to that in a second. Really excited to have Jesse Sedler, who's the VP of product at 1Touch. Jesse, how are you doing?

Jesse Sedler [00:00:48] Great. Excited to be here. Appreciate you guys. Appreciate the invite and the energy, too. Very excited.

Juan Sequeda [00:00:53] All right. Well, we got a lot to talk about. We're going to talk about security which doesn't sound like the most exciting thing, but we'll get into it. And with some cocktails, things will change. But let's kick it off. Tell and toast, so what are we drinking and what are we toasting for today?

Jesse Sedler [00:01:07] So, I am drinking just Jack Daniels with one ice cube in it. Nothing all that fancy. It's what I had laying around. I also grew up drinking Jack Daniels, so that's typically one of my drinks of choice, or better bourbon, because most bourbon is better than this. From a toasting standpoint, one thing coming to mind is that I'm taking vacation next week, so I'm very excited to try and get through the rest of this week. That is also because we're expecting our second kid at the end of June. So, that I feel is worthwhile, and I'm just excited to get some time off and expand the family pretty soon.

Tim Gasper [00:01:49] I love that.

Jesse Sedler [00:01:49] Cheers.

Juan Sequeda [00:01:51] Congratulations and I will cheers to that because I guess we... Well, I guess I've said this before, but I'm also expecting our second one in two weeks. So, this is actually our last live episode for a couple of weeks. We're still going on, so we got some surprises for folks every week. So cheers, Jesse. And cheers to family.

Jesse Sedler [00:02:17] Amen to that, yes.

Juan Sequeda [00:02:18] Tim, how about you? What are you drinking?

Tim Gasper [00:02:21] Well, I'm drinking a classic mojito, so I got some rum and some lime, some sugar in here, a couple other things. So, I'm excited to break into this. I was actually thinking about it. I try not to repeat cocktails and I'm like, "I don't think I've had a mojito on the show yet." So, I'm excited to have that and I will cheers to both of you soon-to-be fathers. I'm very excited. Cheers to family.

Juan Sequeda [00:02:50] Cheers.

Jesse Sedler [00:02:50] Thank you.

Juan Sequeda [00:02:50] So again, as usual, I go into my bar. I'm like, "What's there?" And I see some Aperol spritz and some rum. So, I'm like, "Let's go find some stuff." And I mix it up. So, this is Aperol and rum. I have a dash of orange bitters, some Texas honey and a splash of lime sparkling water. So then I asked GPT, "Hey, give me a name for this." And it calls it the Lone Star Sunset: the Lone Star because of Texas, the colors of Aperol and orange bitters mimic the beautiful colors of a sunset, and the sweetness of the honey and the fresh lime sparkling water can remind you of a refreshing Texas late afternoon, which actually is really nice and refreshing right now. So, I'm having a Lone Star Sunset, thanks to GPT for the-

Jesse Sedler [00:03:33] Good job, GPT. Cheers.

Tim Gasper [00:03:35] Yeah, respect.

Jesse Sedler [00:03:35] All right, cheers.

Juan Sequeda [00:03:36] And I mixed it up pretty good. Okay. So-

Jesse Sedler [00:03:40] By the way, respect to Jack Daniels. Jack Daniels and Coke was and still is my go-to drink, so.

Tim Gasper [00:03:48] Got us through the teenage years.

Juan Sequeda [00:03:53] All right, warm-up question: what is something that you always keep secure, given that today's topic is about security?

Jesse Sedler [00:04:00] Yeah, it's one of those things that's funny because a lot of security people are really bad when it comes to personal security, which is the same thing as the doctor who smokes a cigarette type of thing that you just see all the time. You're like, "That doesn't make sense." I would say that the physical things in my life are what I keep most secure, so my passport, my kid's social security card. I am personally not that good when it comes to my security hygiene online, admittedly. Please don't hack me now that I'm saying that. But I find that online is just hard. It's just challenging. So for me, it's the things that I can keep tucked under my bed or under a mattress type of stuff, kind of old school. That's where I'm going with it.

Juan Sequeda [00:04:44] I'm with you on that. I keep my passport in a very safe place. I can lose anything but not the passport. One of the things I freak out about is when I'm traveling overseas and about to head to the airport: "Where's my passport?" It's happened. I do have password managers.

Jesse Sedler [00:05:08] Those are good tools. Yeah. It's funny. I talked to a CISO friend recently who was telling me he's got a 5,000-pound safe coming to his house because he takes his home security very seriously. So it's got me thinking that I do need a safe. I don't think I need a 5,000-pound safe that requires specialized people and a special truck to move it, but getting a small lock box to keep those things safe instead of my sock drawer can be a good step up. I thought that was hilarious.

Tim Gasper [00:05:34] That's a pretty legit safe, 5,000 pounds. Is that like bombproof or something?

Jesse Sedler [00:05:39] Yeah. He said it lasts for, I don't know, 10 hours in a fire. It was something where I was like, I respect that. That's awesome. It makes sense. If you have a fire in your house, you want those things to last. I was like, "All right, yeah, I need to up my game."

Juan Sequeda [00:05:56] All right, Tim, you?

Tim Gasper [00:05:58] I'll tell a super-fast story and then I know we want to jump into things. So, there's a lot of things I try to keep secure. But a few months ago, so we have Ring cameras around our house, we were like, "Why did our Ring camera go off in the middle of the morning, like 5:00 in the morning?" We're up at 8:00. And some random guy walked into our backyard at 5:00 in the morning. We have Ring cams, you can see them, and he opened our shed up and walked into the shed and then left. And so we were like, "What the hell?" So we ran outside, we looked in the shed. Nothing was taken. The lawnmower's still there, everything's still in there. So, I now lock my shed. So, anybody who's listening thinking about trying to break into my shed, don't. And by the way, if you're going to go in there, why didn't you steal anything? I don't know.

Juan Sequeda [00:06:47] Yeah. Okay.

Jesse Sedler [00:06:51] Friendly words there.

Juan Sequeda [00:06:52] You have a very secure shed. Okay. Well, all right, let's talk about security. Let's dive in. Honest and no BS. AI is so exciting right now, but when you say the word security, you're like, "Ugh." What's your reaction to that?

Jesse Sedler [00:07:09] I get it. I completely get it. I mean, like anything else, you are a layer getting in the way of somebody who's trying to do something and accomplish a task. And if you're a data scientist and you need access to certain datasets, the last thing you want is to be told, "Here, fill out this form, justify it. Let me have 15 managers who don't know anything about your job try to approve it. And oh, we'll get back to you in a month." So yeah, security I think historically has been... I mean, the way the security folks talk about it is that we're the department of no, right? The CISOs are typically just, "No, no, no, no, no. Because why would I take the risk for whatever the project is?" I think we're starting to see the inflection point a little bit more where it's, actually, I have a board mandate that says I need to do AI. So now I don't have to say no, I may need to say maybe. And gradually over time get to a world of "yes, but" or "yes, maybe" or "let me do something to that data before you access it so it's not showing Jesse Sedler, it's showing John Smith, anonymized or tokenized in some special way."

Juan Sequeda [00:08:10] We've already started with a T-shirt: "I don't have to say no, now I can say maybe." That's AI.

Jesse Sedler [00:08:14] The bumper sticker, yeah, we'll print that one out.

Juan Sequeda [00:08:18] Okay. That's another T-shirt. We started strong on this one.

Tim Gasper [00:08:23] Yeah, I like that. I relate to this, too. We have a lot of governance, data governance folks on our show. And I can see some relatability here where governance goes through a similar cycle of, "Oh, well, you guys are kind of the police. You tell me what I can and can't do." I think security goes through a little bit of a similar motion there. And when it gets more mature, and I think with AI now there's new opportunities, you do start to see that flip a little bit where it's not always just about no. With data governance, how does it become more about data enablement? With security, how does it become more about accelerating safely?

Jesse Sedler [00:09:06] Yeah, completely. And I think the interesting thing too, so I've been working in security for, I don't know, 10 years or so, and initially you would have a discussion with a CISO about data challenges. And it's usually, "How do I lock it? Tell me what I need to lock up and let me encrypt it so that nobody could ever use it and I feel great because it's encrypted." What we're now seeing, at least from a day-to-day standpoint, is that a lot of the conversations are a CISO and a CDO talking jointly about a project, which tells us that it's no longer just lock the door, it's responsibly lock the door, or responsibly give access to whatever it is that you need. But do that for a short period of time, through the right people, for the right data, for the right reasons. And that paradigm, I think the ChatGPT world all of a sudden, a year plus later, has really accelerated that overall movement of people doing these projects together, because it's no longer the singular focus of risk reduction, whatever that could mean. Because now, it's how do I take that risk, reduce it, but also enable revenue-driving, targeting AI projects and all that fun?

Juan Sequeda [00:10:13] So, this is interesting. I did not think we're going to get to this topic right now, but the CISO and the CDO, let's dive into that a little bit. How are you seeing the interaction between these two folks? Because we have a lot of data leaders who are listening to the podcast here. So, basically kind of open question here is, as a data leader and CDOs, how should you be interacting with your security folks in your organization and how do you get to collaborate better or more to be more efficient?

Jesse Sedler [00:10:49] Yeah, that's a great question. A lot of it is security teams have been around for a long time and I think that some of these data teams are just relatively newer. CDOs, I think overall, if you look at some of the Gartner research, it's just becoming a newer discipline in a sense. I know that that's not really fair, but lots of companies are still hiring their initial CDOs where they've had CISOs for a while. So I think some of it is the empathy that any of us practice day to day when talking to new personas or new users: the CISO just gets beat up left and right. It's a relatively thankless job where on any holiday, any weekend, middle of the night, something happened, they get a phone call. So it's people who have been historically beaten down. And coming from the security space, no matter how many people you throw at security and how many tools you buy, there's holes, there are always going to be holes. There's always risk that you just assume. So, it's like consistently walking around with some of the doors in your house open and you're just like, "Eh, all right, I hope no one goes into my shed in the backyard because I don't think that anything's important in there, but maybe there is." And so from a data person, taking that perspective of yes, you want to use the assets that you're sitting on, but also recognizing from the security person how scary that notion actually could be. I think just that empathy and taking that type of approach when having the conversation could go a long way. The other point I would make there too is that a lot of the CISO security community comes from a, this is a very big generalization so I do apologize, but comes from a DOD, somewhat military background. A lot of what we see is that you've kind of done your time in the military or the intelligence community as a natural path going into cybersecurity. Now, people who have done 20, 30 years in the military just have a different viewpoint on the world. Again, it's acknowledging it, it's understanding it, and it's trying to connect based on those parameters that I think would enable a much more seamless or easier conversation between stakeholders who, at the end of the day, all want their businesses to grow. At the end of the day, the CISO just doesn't want to get in trouble or be on the front page of the Wall Street Journal because an issue happened. So, reconciling those dueling interests and talking to them I think would make for better collaboration.

Juan Sequeda [00:13:09] I love how you're bringing this up, the whole empathy thing. And honest, no BS, I think a lot of the data folks come in thinking, "Oh, we're the latest, greatest. Look at all the stuff we're going to go do." I'm like, "Calm the fuck down. There's a bigger picture around here." And there's people, as you said, who've been doing this for a long time, too. They've been dealing with data stuff too, right? And yeah, you got to understand there's a history around this stuff. But we need to work together. We need to work together.

Jesse Sedler [00:13:39] Well, a lot of people... Sorry, Tim, go ahead.

Tim Gasper [00:13:41] Oh, no, finish your thought.

Jesse Sedler [00:13:43] I'd say a lot of the security people have been burned one time or another just from doing it for enough time. Every stat you read is like everyone's going to be breached at one point in the next three years or so. So likely they've been breached or they're about to be breached, or they already had their breach and they're a year into it and they're waiting for the next one. It's just that constant feeling of, "When am I going to get punched in the face?" It's coming pretty soon. So, how do you have a conversation with somebody who's bracing to get hit at all times, so the tension is so tight? Again, they need a cocktail, that helps, but I don't think you can do that.

Tim Gasper [00:14:14] Cocktails do help. Do you have a recommendation to folks maybe who are on the security side who are feeling that kind of defensiveness or a CDO who wants to interact with the security side but they're worried about triggering something? Is it just empathy? What else would you kind of guide there?

Jesse Sedler [00:14:38] Yeah. I think in many ways, it's like any other set of potentially conflicting challenges where the best thing to do is just talk to each other. And I see it again and again. I work for a pretty small company, so even our internal things here and there: we get on Slack, things get blown out of proportion. You send an email, you misinterpret something. You sit in a room together or you do a Zoom like this and you spend 10 minutes just being like, "Hey, do you have kids? What's your family like? Who are you as a human?" Make the connection. This is very generic advice, I realize it's not tailored, but I just keep getting back to the same thing: people are people at the end of the day. Yeah, you want to do your job successfully, you should do that. Stockholders are important. You have to do what's right for them at the end of the day and you want to drive revenue and growth, but it's got to start from the connection as a human and then build on top of that trust.

Tim Gasper [00:15:29] Yeah, no, very well-stated. And if everybody was doing it, we could call it common sense, but I'm not sure how common it is, so we'll just call it sense.

Jesse Sedler [00:15:40] Yeah, I love that.

Tim Gasper [00:15:43] So Jesse, you've got a lot of experience in the security area and obviously we mentioned already around AI and some of the things that are really moving fast in that space. And what I hear a lot from data leaders, from innovators, from product leaders, technology leaders are sort of high-level concerns around, "Oh yeah, yeah, we're really excited about AI. Maybe we even have an AI mandate, but we're still trying to figure out what this all means from a security standpoint. Sending data to a third party? Evil. We really got to figure out the security around this." And I'm just curious whether you're seeing that as well. And what kind of recommendations do you give when you think about a company tackling an AI project? What do they really need to think about from a security standpoint?

Juan Sequeda [00:16:38] That's great. Just to add to that: what does "figuring out" actually mean? What are people actually doing to "figure it out"?

Jesse Sedler [00:16:46] Yeah, yeah, I love that. It's a tough question. I think with a lot of these things, I mean, any of these kind of gold rushy types of moments, not saying that AI is or isn't because it's been around for a long time, but there's just been this immediate rush to what can we do, how do we do it, what do we use? And we've seen companies we work with who just say, I'm blocking all of the IP addresses that relate to ChatGPT or anything else similar to that. I just don't care. It's too risky. I don't know what it looks like. I need time to wrap my head around whatever this could be so I can understand the threat. And we've heard those a few times and it's like, "Hang on. I'm writing my essays and writing emails with this thing. It's not as dangerous as you make it seem." But the initial reaction gets back to that initial point, too: you've been burned so many times over, it's just easier to say no. And so part of the discussion that we're seeing, because we're talking to both personas, is, well, how do you tell the security person there's a lot of value in helping your data teams? Maybe you have a bonus attached to it. Maybe your boss has a bonus attached to it. Maybe it just makes your life easier if people have more access to things. The starting point that we typically would point people back to is: you got to know what you have before you can even make a decision in terms of what the product is. Take inventory. You're running a shoe store. Take inventory of what you have and make a decision off of that. And once you know what you have, label stuff. Put the labels on it so you know that this is top secret and this is just secret and this is garbage that I don't care about, and this is 25 years old and we shouldn't be sitting on it anymore. Having a notion of what you have seems to be the ultimate starting point for any of these projects to at least lead to some level of success. Because otherwise, you're kind of pointing and shooting in the dark and you don't really know what you're hitting. And then of course somebody uses something incorrectly because you never initially put the controls in place anyway, or you overdo it or underdo it, which leads to other issues as well. So from a recommendation standpoint, that would be point one: just, what do you have, or what's relevant that you might have? How do you know what's there? And then once you know that, you can build a program around it. But until then, you can't make the cake without the ingredients.
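To make the "inventory, then label" starting point concrete, here is a minimal sketch of rule-based dataset labeling. The sensitivity tiers, column names, and cutoff rules are hypothetical illustrations, not anything prescribed in the episode:

```python
# Hypothetical sketch of "take inventory, then label": tag each dataset
# with a sensitivity tier using simple rules. Real programs use far
# richer classifiers and policy engines; names here are illustrative.
SENSITIVE_COLUMNS = {"ssn", "credit_card", "dob", "salary"}

def label_dataset(columns: list[str], last_used_year: int) -> str:
    cols = {c.lower() for c in columns}
    if cols & SENSITIVE_COLUMNS:
        return "restricted"           # "top secret": direct identifiers present
    if last_used_year < 2000:
        return "review-for-deletion"  # the 25-year-old data nobody should sit on
    return "internal"                 # default tier for everything else

inventory = [
    ("payroll", ["employee_id", "salary"], 2024),
    ("legacy_orders", ["order_id", "sku"], 1998),
    ("cat_memes", ["url", "caption"], 2023),
]
for name, cols, year in inventory:
    print(f"{name} -> {label_dataset(cols, year)}")
```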

Juan Sequeda [00:19:11] Yeah, this is-

Tim Gasper [00:19:12] And we have heard of it.

Juan Sequeda [00:19:12] I mean, not to get salesy here, but just a little parenthesis in general: what you're saying is you need to catalog, you need to create that inventory. So, salesy stuff aside, you can't manage what you don't know you have, right? That's why it's so important to have an inventory of these things and to be able to, as you say, just label it. It should be common sense, but apparently... is this common sense or not, or is it just sense, Tim?

Tim Gasper [00:19:44] That's a good question.

Juan Sequeda [00:19:45] Honestly, what have you been seeing? This sounds like very basic stuff, but in your experience, is it basic? Are people doing this, or are they going off and thinking about it, figuring it out, but they haven't even inventoried, they don't even know what the-

Tim Gasper [00:20:01] Or maybe, to put it another way: why are people struggling with this?

Juan Sequeda [00:20:05] There we go.

Jesse Sedler [00:20:08] A lot of it is just data volume. And a lot of it too is inertia: if you're a 100-year-old insurance company who's sitting on all this information and you haven't done it in the past, you're not going to start doing it now. Again, I should definitely diet and eat better. I know it's the right thing to do. I don't know. I'm going to eat what I'm going to eat and that's just the way that things are. And until I get in real trouble with my doctor, I'm going to keep doing that. And unfortunately, we see the same thing in the security space: people are like, "Yeah, I'm going to wait until I get in trouble," or, "I'm going to overdo it and wait until I get yelled at before I take the next step." So while conceptually, yeah, it should be pretty easy to put labels on stuff, the reality of it is just the investment, the time, the headache. Some teams we hear are just like, eh, there's a kind of willy-nilly sense of these are the datasets that can be used, and these are the ones we don't have any idea what's on, so nobody touches them. It's like, but how do you know what's in there? If that's the transaction data you need to go write a marketing targeting campaign, you might want to know what's in there.

Tim Gasper [00:21:11] Is this going to slow down the adoption of more wild, sort of wide-scale AI applications? When you're doing something on a small POC, just a few files or a few datasets, obviously you have a lot more control. And sometimes in these POC situations, you can even kind of build a wall around what you're doing. As you want to start doing things more widely, this approach of "I'll wait until I get in trouble," that sounds problematic.

Jesse Sedler [00:21:42] Yeah, big time. Well, I think the answer depends on the industry too, right? Because for lower-margin industries where you're competing and every penny matters, successful projects that can give you an extra three or four basis points of revenue growth, it's probably worth unlocking a little bit more than you're comfortable with just to see if you can prove out that AI is doing it. Whereas in other industries where there's a bit more flexibility, you can compete on other vectors. I'm thinking about insurance as an interesting one, because when you buy an insurance policy, you're competing on pennies at that point. There's not a lot of upside and they're making their money more on the investment side of the house. But if they can make a little bit more money on a policy because they sold you your home insurance and your car insurance, and now you want to go buy your life insurance and they can give you a better package because they can run a model that says Jesse is X years old, living in so-and-so, probably doesn't have life insurance, go target him with the ad or the coupon. Now, there's a lot of value in doing that, but you have to be willing to take that leap.

Juan Sequeda [00:22:44] Yeah, no, this is the human behavior part that comes in, realizing that people just try to get things done with as little effort as possible. They want, "Let me just go do this quickly. People are okay with it. Then I'll continue doing my own thing." And, "Oh, we got this issue. We need to go inventory. Okay. We'll start inventorying." And they're like, "Yeah, look at what we're doing. We're inventorying things in a spreadsheet, whatever. We're doing it." Okay, good. You have a problem. And then you're like, "Well, things are back to normal. We don't have to go do this project anymore," and then we stop. So then it just keeps going. And then nothing happens again until three years later, when you get burned again. So, does that mean that that's just how life is, period? I mean, we'll never be incentivized to go do the foundation. We won't be incentivized to be healthy. But there are people who decide to take on a healthy lifestyle and stuff, right?

Jesse Sedler [00:23:40] Yeah, entirely. And I guess I don't mean to be quite so doom and gloom about this stuff, then again, I've been in security for 10 years so it's oftentimes the lens that we take on the world. There's definitely a movement much more toward just trying to wrap your head around the problem. We see it in our business. We see it in the industry. Kyle went to an IDC conference a few weeks ago and it was just AI everything. And even the security sessions were AI security, both using it as well as how do you let others use it responsibly, with regulations and responsible AI and all those other practices and all the buzzwords you could think of getting thrown into the mix, too. So, there's a broader push both from the analyst community and I think the Wall Street investment community of how do companies do more and start utilizing AI, which gets people more excited. And I think the other thing that will start changing it is, generationally speaking, I don't know what the generation is now that's in their 50s and 60s, but those who are going to be moving out and retiring and potentially are struggling with this new age, versus the kids coming out of college in this ChatGPT time, after COVID where they were stuck at home learning new skills. I think those things will usher in this overall new wave of how we view the usage of data. And no longer just the talking points of data is the new oil and all that fun stuff, but actually actioning against it, because it becomes more inherent to our day-to-day. Same thing as I think from a health standpoint, our kids will be naturally healthier just because we understand a bit more about health, and cigarettes are not that good for you, and grass-fed beef and stuff like that didn't exist but now it's just becoming part of the culture that we all live in day to day.

Tim Gasper [00:25:22] Yeah, I think that's very well-stated. So, you mentioned IDC and people talking a lot about AI and security in combination with each other. You talked about taking inventory, labeling your data. It didn't require AI for that to be a best practice. That's been a best practice that we probably should have been doing for a while. Do you think that with security in the context of AI, is there anything new? Are there some new things that we need to consider here around security for AI, or do you think it's the same stuff we've been talking about and it just adds more urgency and more importance to it?

Jesse Sedler [00:26:05] Yeah, there's newer sub-industries coming out on AI security as a specific technical domain. Frankly, I've not gone that deep into it to understand. I know a lot of it is more around model registry security and making sure models are not being misused. Okay, makes sense. New asset, you should protect it.

Tim Gasper [00:26:27] New problems, and some new technology needed to solve those problems.

Jesse Sedler [00:26:29] Yup. Others are trying to figure out, when this data goes into the model, how easy is it to get out of the model or out of the outputs of the model? Again, is that super impactful these days? I have no idea, frankly. To me, a lot of these things tend to get back to the basics of what do you have, what does it look like, who can access it? And the biggest breaches typically come from: John Smith left the company, that credential sat open for an extra 90 days after John Smith left. A bad guy got in, found the credential and was able to maneuver around their Active Directory, which provided them access to everything. And then over the span of the next two years, they exfiltrated 100 million rows of information, and that became the problem. And the Wall Street Journal types of stories, it's like every other week, there's another one, there's another one. Pretty much always because-

Tim Gasper [00:27:22] Somebody left that bucket open to the internet or whatever, right? Yeah.

Jesse Sedler [00:27:24] Yeah. I think that a lot of this stuff gets back to a lot of these same sets of challenges. Yes, there are different methods of how do you secure AI. Actually, even from our own business standpoint, again non-salesy but just kind of the anecdotal stuff we hear from our customers, a lot of them do want to unlock, right? They are really serious about how do I utilize LLMs and GPT especially, obviously, for unstructured data, which has primarily been the harder thing to wrap your head around. It's one thing to inventory columns in a database because, again, it's limited in who can access it. John Smith, the normal employee, is not accessing it, so it's easier. At the end of the day, your chat logs, your Slack logs, your email server, stuff like that, I'm assuming is quite rich in terms of just the relevance when it comes to LLMs. And companies now are like, "Huh, I see it. I get the value of it. I still have to know within my emails what is permissible, what isn't." And that's where the labeling gets pretty challenging, because of just the mountains and petabytes of unstructured data that can exist in even a small company. It doesn't have to be pick-your-big-bank who has astronomical amounts of data that you can't even quite fathom; even mid-sized companies.

Juan Sequeda [00:28:44] Just labeling your Slack.

Jesse Sedler [00:28:46] Yeah.

Juan Sequeda [00:28:46] Your chat, your email, yeah.

Tim Gasper [00:28:49] Yeah. There's a lot of unstructured information and you have to approach that in an automated way because otherwise, it's impossible.

Jesse Sedler [00:28:59] It's so much. And then if you think about it, you create a PowerPoint that has customer information in it and you send it to four people on Slack, you've made five copies of it without even thinking about it. And then you email it to two other people. You've made seven copies of the same thing that's now saved in an email server, a Slack server, also in Office 365. It's everywhere. So, just the volume of that problem is so big that, to the original question of why isn't everyone doing it, it's because I think that at times, it's this insurmountable thing: there's just no chance I can go solve it, so why even pick away at it and start poking the mountainside or whatever it is? A little bit with-

Tim Gasper [00:29:39] Trying to dig the mountain with a toothpick or something like that?

Jesse Sedler [00:29:42] Yeah, exactly.

Tim Gasper [00:29:44] No, I think you bring up an interesting point there which as much as we try to lock down databases or even lock down S3 buckets and things like that, you're always two seconds away from somebody downloading a spreadsheet and emailing it to somebody, right?

Juan Sequeda [00:30:03] Yeah, no, to this last point that you just made: there's so much of the unstructured, the text, the documents around that, and then that gets shared. So, one of these, so it's in Slack, it's in an email server, it's in Office 365 and so forth. And I'm like, "That mountain is just so gigantic that people look at it and wonder, how do you even start? Even if you automated labeling the stuff, how do you know that it was correctly labeled and why?" So, you're either like, "Let's lock it down, nobody can use it," or you go the other way around and it's like, "Well, let's go see what could happen and kind of hope for the best." But that doesn't seem like the right strategy either. So, then you end up locking it down. So, yeah, I get the problem.

Jesse Sedler [00:30:50] Yeah. Well, that's-

Tim Gasper [00:30:51] Hey, Michael asked an interesting question if you want to fly that up here.

Juan Sequeda [00:30:55] Let me see. What is the question here? Michael says, "How about the implications of leveraging AI to improve our capability to label data?"

Jesse Sedler [00:31:05] That's a great point. That's a great question. Not putting on my sales hat, so I'm not going to talk about kind of-

Tim Gasper [00:31:14] We're going to test your ability to not be salesy either.

Jesse Sedler [00:31:17] It's hard. But that's exactly the answer: you don't necessarily have to label every single document that is found, right? Because even documents that have nothing in them, it's just you emailing the cat meme to your friends. Don't label that one, right? So, presumably an AI system could look through your email server and just find the things that likely contain sensitive data. Maybe it starts at metadata and reviews the metadata of who it was sent to. Knowing the role, that might be sensitive, and that's how it unlocks or at least opens up the relevant things. And so, there's tooling out there that exists that does utilize various forms of machine learning models to get smarter over time, to pick up trends based on your company or company profile, what data could be sent or not sent, and then send it out. Where it does get really challenging, though, is in the world of images and things that are not just textual. Because for text, you can write regex, which, having been around for so long, is a pretty standard way of understanding: is that string of 16 digits a credit card number or a phone number? Yes, it's hard, but it's very different than: in that image, is there a credit card or is that image a cat? How do you solve that one? And also the heft around an image file is so much greater than just your standard email that the horsepower needed to go scan those things at scale, yes, becomes quite the challenge. But it's a very good question, because that is part of the answer: using AI to help solve some of these things, versus just AI as a means to corporate usage.
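As one concrete illustration of the text side of that detection problem, here is a minimal Python sketch that finds 16-digit runs with a regex and then applies the Luhn checksum, which most real card numbers pass and random digit runs usually fail. The pattern and sample strings are illustrative; production scanners also use context, formats, and ML models:

```python
# Minimal sketch: a regex finds candidate 16-digit runs, and the Luhn
# checksum separates likely card numbers from other digit strings.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){15}\d\b")  # 16 digits, optional separators

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum, mod 10."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if luhn_valid(digits):
            hits.append(match.group())
    return hits

# The first number is a well-known Luhn-valid test value; the second fails.
print(find_card_numbers("card 4539 1488 0343 6467, phone 1234 5678 9012 3456"))
```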

Juan Sequeda [00:32:49] Yeah, okay. The magnitude of the problem is more evident to me now than I had actually thought about. I mean, I've always known that security is a big deal, but this conversation is making me realize, "Oh wait, it's much bigger than I thought." Now, let's add to that. Here's another thing we talked about before going live. Well, there's also a bunch of data, old data, we're not even talking about. We're talking about what's on our Slack servers or email servers, but there are also these mainframes and stuff, where there's a bunch of legacy, very valuable data we want to go do stuff with. And now people are very motivated to migrate and upgrade all of this. So, how do AI and security deal today with this old data and mainframe data? But also part of it is that sociocultural aspect: folks who work in that space, in that old-school mainframe stuff, where is their head at when it comes to security?

Jesse Sedler [00:33:52] So that one, I've just been getting into more and more recently, because it's both a super interesting topic and, just from a business standpoint, it's a good area to be focusing on. I will say that if you think about a mainframe or an HPE NonStop system, some of these legacy systems that are still being modernized and maybe have a thousand people or a thousand companies who are using them, 2,000 at most: every credit card transaction, every ATM transaction is running through one of those machines on the backend across the world. It's not like a few of them. It's like 90% of credit card transactions run through an IBM mainframe. That's astounding and astronomical and crazy as a concept. And we've been working with a few companies in this space who have 40 years of data sitting on a mainframe of every credit card transaction that's ever happened. Talk about data volume, but also talk about the value of that data. If they know that five years ago, Jesse went to the grocery store and bought diapers, and there's a credit card transaction at Target to buy diapers sitting on that mainframe, knowing that and having it at the right time is critical, because then they can hit me with a coupon the next day to say, "Hey, go buy more diapers." If they hit me with it now, five years later, I don't need diapers. Don't waste your time on it. So the downstream impact of understanding these really legacy systems that are still used and won't be going away anytime soon, and how that relates to the currency of the data, and how that currency of data then relates to the usability and the value derived from it, I think is absolutely massive. But you're 100% right from a cultural standpoint that there's not a lot of COBOL engineers out there. There's not a lot of people going into COBOL code these days anymore. So, there is definitely a cultural hesitation of don't touch it, it's mine. Don't come down to my basement where my machine is and don't bother me. I don't want you to use it. But you'll see with IBM right in there, some of the recent announcements they've had, again non-salesy, I don't work at IBM, did work at IBM previously and we partnered with them, separate story.

Juan Sequeda [00:36:03] I'm from a family of IBMers, so I-

Jesse Sedler [00:36:05] You are? Okay. Yeah, you're in Austin. But there are now movements toward how do we start using Watson or whatever on the mainframe and unlocking some of that data and bringing it into the rest of the distributed world so anybody can access it. And again, access it in maybe some anonymized, tokenized way so that the data is safe. You don't know it's Jesse, you just know that somebody living in Atlanta did X, Y and Z. Yeah, it makes a lot of sense as another area to try to pull information out of and make usable. It's just a really old area to get into.
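The "you don't know it's Jesse" access pattern can be sketched with deterministic tokenization: a keyed hash maps each raw value to a stable pseudonym, so downstream users can still join and count records without seeing identities. The key handling and names below are hypothetical; real systems keep keys in a vault or HSM and often use format-preserving schemes:

```python
# Hedged sketch of deterministic tokenization with a keyed hash (HMAC).
# Same input -> same token, so joins and aggregates still work, but the
# raw identity never leaves the trusted side. The key is illustrative.
import hmac
import hashlib

SECRET_KEY = b"demo-only-rotate-me-in-a-real-vault"

def tokenize(value: str) -> str:
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:12]

print(tokenize("Jesse Sedler"))  # stable pseudonym for the same person
print(tokenize("Jesse Sedler"))  # identical token, which enables joins
print(tokenize("John Smith"))    # different person, different token
```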

Juan Sequeda [00:36:43] In your experience, with what you've been working on, when will we get off these mainframes? I wonder again if AI is a catalyst to start saying, "Wow, you talked about all these transactions that you have in there. We need to be leveraging more of this stuff." There's motivation to go do this, right? Are we at a point in time where we are going to get off of these mainframes or no? I mean, 10, 20 years from now, are we going to still have the exact same conversation?

Jesse Sedler [00:37:20] I would bet we will, because it seems like the "mainframe is dying" conversation has been going on for 15, 20 years. And I do think that there are probably fewer companies who are on the mainframe. There's probably not a lot who are buying net new. They do refresh every couple of years. So you can read the revenue numbers from IBM: they'll have a huge year and then it will flatline. And a huge year because the next mainframe came out three years later and everyone's just refreshing their cycle. It doesn't seem like it, I guess, with the inertia that already exists there. And if you think about a big credit card company trying to rewrite a credit card application or a processing application or an ATM-based application: it works. Their uptime is, I don't know how many nines are at the end of that, but it never ever, ever, ever goes down. You're not going to move it to the distributed side and run it on a normal server, because that's going to explode. And cloud is interesting. There are more payments being processed on cloud these days and there are secure ways of doing it via various forms of encryption. But I think we see more of that with Toast and newer payment processing types of things versus your Visa card. I can't imagine a world where Visa, MasterCard, Travelers, any of them say, "Yeah, we're done with that thing." It's so ingrained in the way that the business operates, as basically critical infrastructure. I mean, like our energy grid, I would think, in many ways, too. It's a fascinating thing just to go read about how these things are built, the time that goes into them, just the resources, and what they're capable of doing. I'd definitely encourage folks to just ChatGPT and Google around on that.

Juan Sequeda [00:39:06] Yeah, you're getting me interested. I think we were talking about this the other day, Tim, like, "Oh, how do we like to go learn? Oh, let's go find YouTube videos or whatever." I think that's what I'm probably going to do tonight or later: go look at some history stuff on mainframes. It is super fascinating when you talk about the small quantity of high-quality and just high-value data that's flowing through there. And then you said 90% of credit card transactions, that needs to be completely secure. And then when we talk about AI, we're like, "Well, the security over that is much more important than what we can go do with AI there." So, that balance is not going to shift anytime soon. So, even though there's probably a small number of companies, they're still the big ones out there, the gigantic ones, and ones that will still be there in 10 years.

Jesse Sedler [00:39:54] Yeah, 70% of the Fortune 500, I think, according to the IBM website. I saw that stat recently. I was like, "That's great." It just shows the staying power of that technology. It doesn't matter how old it gets. They keep refreshing it. And again, it's billions of dollars a year. It's not small business. It's billions and billions and billions. It's really incredible.

Tim Gasper [00:40:13] I think there have been some high-profile failures, too, trying to move off of mainframes and adopt more "modern technology." A company spends many millions of dollars to make that technology shift only to find that the new system is 10 times slower. And they're like, "Well, shoot, let's just keep doing the mainframe thing. It's faster," right?

Jesse Sedler [00:40:40] Yeah.

Tim Gasper [00:40:41] I mean, that's a difficult thing and it certainly makes the security conversation and the AI conversation both a little hairier because now, you're having to take into account systems that don't fit into the modern regime.

Jesse Sedler [00:40:55] Yeah, 100%. And if you think about when Slack goes down, I feel like it goes down every couple of months just as a tool, and it's like, "Oh my, how do I do my job without Slack? What am I going to do? How do I talk to anybody?" And then think about the fact that you can use a credit card anywhere around the world at any time, and it just works. Somehow it goes from that terminal, wherever the hell you are, back to a system to verify you are who you say you are, and you can walk away with your pack of gum. It shows the staying power of that system and the need to have it always up at all times. And again, maybe there are future ways of moving that data off very easily and quickly into some distributed data lake that allows it to be used without compromising both the security of it as well as the uptime of the actual underlying system.

Tim Gasper [00:41:43] Yeah.

Juan Sequeda [00:41:45] Well, okay. We'll see where this is going to end up. And I think what I'm taking away from our conversation today is that empathy is what we need between data and security. And I think, Tim, that's been a theme. If I dive into the four years of the podcast, that is one that really comes up over and over again. And I think that's something that we need just as people, I mean, be more kind and understand the people around us, because at the end of the day, we're all trying to go to the same place. We're working in organizations that all have a shared common goal. And security on one side, governance and that stuff, they may seem like the naysayers, but there's a reason why they're doing it, right? They want to protect, right? So I think we also want to understand where they're coming from. Before we go to the lightning round, there's a quote that we use a lot. I think it's one of our listeners... Mark Kitson brought it up the first time. We talk about governance: governance is like brakes in a car. Why do you have brakes in a car? Well, to slow you down, like, "Hey, we need to slow down." But you can also think about it as brakes in a car enable you to drive fast safely.

Jesse Sedler [00:43:00] Yup.

Juan Sequeda [00:43:01] Does security apply the same here?

Jesse Sedler [00:43:03] Yeah, I like that. And I think that, yes, right? The short answer is yes, absolutely, that's what security is in many ways designed for, to give you that, "Hey, take a deep breath. Make sure it's a thing you want to do." And for the security people, is the person asking the right person in the right role? Trying to do the right thing with that data at the right time? That all seems like a relevant thing that should be done. In a perfect world, zero trust as a concept is one in the security space that's thrown around a lot. It's one of these somewhat overused terms, but the idea is to only give the least amount of access needed to accomplish a task. It's very DOD-centric in its thought process. And the whole concept there is to make sure people can do their jobs, but do it where they don't need additional information, so there's no additional risk being assumed as a result of it. So I think from a brakes-in-your-car standpoint, yeah, you should have brakes so you can go fast. You still shouldn't go 120, 130, because the brakes are going to help, but they're not going to help you fast enough. So, there's still going to be that little bit and piece that touches it. And the other point on it, too, is that there are a lot of blurred lines in the security space between governance and security. We deal with a lot of security teams who have governance tasks and a lot of governance teams who have some security stuff, and then privacy gets thrown in there somehow as the other cousin in the family that people kind of care about. We have to deal with it because we're told to deal with it. And we get asked by our board. We talk about it all the time persona-wise: "Well, who cares about solving this problem?" It's like they all do. Well, who is it? It's a security governance privacy person, because sometimes that's one person, sometimes it's a team, sometimes it's inaudible.

Tim Gasper [00:44:52] I like how you bring up privacy as kind of this weird cousin, because sometimes it's like, "Oh, actually our legal team is in charge of that." And you're like, "Oh, really? Oh, I guess we got to get the lawyers involved," right?

Jesse Sedler [00:45:03] Yeah. Because at the end of the day, it's all getting back to the same things, be responsible with your information. Use it if you want to use it. If you can assume the risk of using it, be responsible. And if you're not responsible, then the security governance, privacy compliance, risk, legal, other teams all come down on you very hard. So, there is some interesting interplay there because it's very blurry what that line is.

Juan Sequeda [00:45:28] I like how we say it at data.world: "We are all in legal," which is true because we ought to be very responsible in what we do. So, everyone's part of the legal team.

Tim Gasper [00:45:38] Yup.

Juan Sequeda [00:45:40] Shout out to our legal team. We love you.

Jesse Sedler [00:45:42] That's a good T-shirt too. Bumper sticker, T-shirt, "We are all legal."

Tim Gasper [00:45:48] We'll add it. Shirt number 400.

Juan Sequeda [00:45:54] Oh man. Our constant joke, Tim, is we just need to start a T-shirt shop. But we actually can't start it because we always say we're going to go do it but we never do it. But anyways...

Tim Gasper [00:46:09] Too much of a meme now, we can't.

Juan Sequeda [00:46:10] But we do need to get new T-shirts. So, we'll get some new honest, no BS T-shirts and we'll get them to you, Jesse. These are very exclusive T-shirts that only alumni guests of Catalog & Cocktails get.

Jesse Sedler [00:46:27] I'm honored. Well, I have 16 minutes or 14 minutes to go. So, let's see if I make it.

Tim Gasper [00:46:31] How much cocktail do you have?

Juan Sequeda [00:46:36] Anything else before we go to our lightning round?

Jesse Sedler [00:46:42] The other thing that's interesting, and I don't know how much time we need to reserve for lightning, but a topic area that I'm personally interested in is the whole world of privacy-enhancing technology. So if you guys are familiar with federated learning or homomorphic encryption, the whole premise around it, instead of just naming various technologies because that's no fun, is: how do you obscure data in such a way that you can still analyze it and still get the validity and the usefulness of the information out, but in such a way that the end user can't see it? So there are lots of techniques that are starting to be commercialized in more and more ways that allow you to encrypt data so that it is fully encrypted with the right security levels of AES, which is the industry standard today, but you can train a model on it. So you get around some of these challenges of you don't want to give access. I'd give you access to encrypted data. You can't see anything, you can't steal it. There's nothing there. But your neural network, if you want to train on it, will walk away being a fully trained model at the end of the day. And so those technologies are not only super fascinating just in terms of what they do and how they work, but just think about the business models and the world that unlocks: "Hey, two hospitals can start sharing patient data without needing to go through a year of paperwork, just because the data is protected, so they can start doing better imaging on rare diseases as an example." So, putting that out there because inaudible.
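For a feel of the training-on-encrypted-data premise, here is a toy sketch using the open-source python-paillier library (`pip install phe`), which implements an additively homomorphic scheme: a party can add ciphertexts it cannot read. This only illustrates the idea; Jesse is describing much richer schemes, and full model training on encrypted data involves far more machinery:

```python
# Toy sketch of computing on data the analyst never sees, using the
# python-paillier library. Paillier is additively homomorphic: sums of
# ciphertexts decrypt to sums of the plaintexts. Values are made up.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A hospital encrypts patient readings before sharing them.
readings = [120, 135, 110, 142]
encrypted = [public_key.encrypt(x) for x in readings]

# The analyst sums ciphertexts without ever seeing a single value.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the key holder can decrypt, and only the aggregate is revealed.
mean = private_key.decrypt(encrypted_total) / len(readings)
print(mean)  # 126.75
```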

Juan Sequeda [00:48:05] No, this is good because this ties back to the other topic we had before on the newer sub-industries of AI security. This is a perfect example of that-

Jesse Sedler [00:48:14] Yes, I should have brought that up.

Juan Sequeda [00:48:15] No, no, this is an excellent one. And I love that example, right? Two hospitals can share patient data so they can just do better image recognition and stuff like that.

Tim Gasper [00:48:23] Forms of anonymization and things like that that allow you to still have analytic value out of the data, right?

Jesse Sedler [00:48:31] Yup. Yeah. The startup I was at previously was working in multiparty computation, which allows multiple parties to share information together. So, think about three companies who want to share. The hospital example is a good one we would see. We'd see banks sharing for anti-money laundering. Or if you're Target and you're selling Procter & Gamble shampoo, P&G wants to know what is actually being sold through Target at an individual level, not just at a high level with how many SKUs were moved, but who bought them at the end of the day. No one wants to share that because it's sensitive. That's credit card transaction data. You can open that up and now your supply chains can be optimized as a result. Now, there are business models that are not yet unlocked because data is locked down, but these technologies, differential privacy, confidential computing and enclaves on the cloud, allow for truly shareable ways of securely handling and using somebody else's information without seeing it. And that becomes the Holy Grail from a security practitioner's standpoint. Assuming these things can be performant enough and useful enough and you don't have to modify applications, then really, if we could share information between 15 hospitals to train stuff on, that's good for humanity. We all should root for that and want that.
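Of the technologies Jesse names there, differential privacy is the easiest to show in a few lines: release an aggregate with noise calibrated so no single record can be confidently inferred. The epsilon and data below are illustrative only:

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
# A count has sensitivity 1, so Laplace noise with scale 1/epsilon
# hides any individual's contribution. Epsilon and data are made up.
import random

def dp_count(records: list[bool], epsilon: float = 0.5) -> float:
    """Noisy count of True records."""
    true_count = sum(records)
    # The difference of two Exp(epsilon) draws is Laplace with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# e.g. "how many shared patients matched the rare-disease criteria?"
patients = [True, False, True, True, False, False, True, False]
print(round(dp_count(patients), 2))
```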

Tim Gasper [00:49:47] I would love to at some point have a conversation. I almost imagine a panel or something. We should someday do an honest, no BS conference or something like that, where I would love to have a panel on this-

Juan Sequeda [00:49:59] That we should actually do, not like the T-shirt example.

Tim Gasper [00:50:01] ...topic, like data sharing, homomorphic encryption, differential privacy. I think this is a super interesting topic area.

Juan Sequeda [00:50:08] And especially because there are new business models that would be unlocked with it, something we've never seen before. What is the state of this technology today? How usable and scalable is it?

Jesse Sedler [00:50:19] Yeah, it's relatively nascent. Now, there are investment dollars pouring into it. Intel and AMD and the big makers are optimizing chips to allow these workloads to run in specialized ways. The big cloud providers are offering confidential computing, I think they all offer it in various methods. But it's early, because the initial use cases are still the whole idea of, "Hey, three banks can share data to go find fraud." It's a cool story. Why would Citibank and JP Morgan start sharing? They have methods of doing that. Now, some of the hospital examples, it's nice in practice, but nobody's really that altruistic that they're going to start doing these things. So the challenge, and I spent about a year and a half in that area, was that there's not the burning need for that use case. But I do think that this ChatGPT rush in the past year and a half is driving companies to want to share more information and use more information. And a good way of doing it is, if you can't see it, you can't really get in trouble for somebody else using it. I can just get the value of it. So, the hope is that that accelerates some of the adoption in this area. As well as education and regulation, because until the regulators say, "Yes, if you use homomorphic encryption, that is considered anonymization"... those rules have not been written yet. So people are waiting on the EU to decide. I mean, the US regulatory regime is quite challenging just overall, with states and federal, so there's a lot of hesitancy to start using the tools because big companies don't know, "Will I get in trouble if I use it and something happens to the data?" Even though it's encrypted mathematically, does the policy cover my ass?

Juan Sequeda [00:51:58] This is... I think you're bringing this up. I remember some stuff that we do-

Jesse Sedler [00:52:03] Sorry. I'll wait until the end for the inaudible.

Juan Sequeda [00:52:04] No, no. We'll never know. Things happen.

Tim Gasper [00:52:06] That kind of always happens though. In these conversations, right around minute 50, something really interesting happens.

Juan Sequeda [00:52:12] Well, what's interesting is that, as you said, up to now, people have been talking about this, right? It's like, "Oh, it's very altruistic if you go do this." And I've been part of these projects before. At my previous company, we were trying to do like, "Let's help create the golden record of a patient, because everybody has all this information. It's going to help everybody." But at the end, it was like, "Well, who's actually going to go do this, set up something central? Who's going to pay for this stuff? I don't want to pay for it." You cross your fingers. And it's not a priority because... So, it's going to be interesting. At the end of the day, a lot of this stuff is where regulations will come in and can really be powerful, saying, "No, you have to do it for these reasons," so there's a motivation to do that. And then that's how new technologies get uplifted, and you'll have new business models that will come up. And in the end, it will help people, I think. But there needs to be a catalyst, and sometimes it's like, "This is for the good of humanity." It's like, "Yeah, but who's going to spend the money to go do it first?" So, unless somebody's arm is being twisted, or multiple arms are being twisted at the same time, I think that's when we'll see a change.

Jesse Sedler [00:53:16] Well, it's like GDPR when it came out. Was it 2016, 2017? Privacy beforehand was "we should care about it, yeah, that's helpful." Now, you're being forced to care about it. Again, I don't think companies comply because it's the right thing to do, to safeguard our data and let people put in data subject requests. But for the end user on the other end who says, "Hey, big company, what data do you have on me? Please delete it," it's a pretty powerful thing that companies now have to abide by. Yeah, they yell and scream when they have to do it, but regulation says you have to, so go do it. I agree with you. I think this will be the same type of thing: something bad is going to have to happen, then something good will happen, then there'll be a regulation, and then the technology will come as a result of it.

Juan Sequeda [00:53:57] But that's the thing: it starts with something bad happening. It doesn't start with, "Oh, something good can happen, let's go focus on the good." Usually something bad has to happen first.

Tim Gasper [00:54:05] Yeah. It doesn't start with, "Well, maybe we can cure cancer if we..." But it should ideally, right?

Juan Sequeda [00:54:12] So some people could actually try to do the good thing first and risk the bad thing happening, but nobody wants to take that risk. So we have a chicken-and-egg here. Anyways, we're not going to solve this problem today. (inaudible) All right, let's do this: lightning round. Okay, I got it. First question: to really take a more open and unlocked approach to data security, is technology going to solve that, or is this really a people and empathy problem?

Jesse Sedler [00:54:38] I'm going to say people and empathy. Technology is a means by which you can make the actual technical safeguards happen. But if people don't want it to happen and there's no real "why should I help you out," it won't, and that goes even to the last point about regulation being what causes the change. I think we just need people to act in some way that then causes the unlocking, so more of the potential of that data usage can be seen.

Tim Gasper [00:55:05] People, people, people.

Jesse Sedler [00:55:07] Unfortunately.

Tim Gasper [00:55:08] Second question. There's so much unstructured data; we talked about that in the middle of our conversation today. Specifically around unstructured data, do you think we will ultimately solve the problem of labeling it all accurately? Or is it always going to be a matter of slicing the problem up and controlling the surface area?

Jesse Sedler [00:55:35] That's a tough one. I'm going to go with: I do think we will find ways to solve that problem. There's definitely enough skill and money being poured into this that eventually it will be solved. And as models develop to understand natural language better, and use that to make better inferences about what needs to be labeled, we could get to a world where scanning petabytes of data in a reasonable amount of time, we're talking days and weeks, does become a reality, especially with the faster and cheaper compute we're seeing more and more of.

Tim Gasper [00:56:12] Yeah. So, it sounds like you're saying AI will be key to this.

Jesse Sedler [00:56:16] Absolutely.

Juan Sequeda [00:56:18] All right, next question. Right now, who has more power over how fast the AI conversation is moving, the CDO or the CISO?

Jesse Sedler [00:56:24] Ooh, that's a tough one. I think it's going to be the CISO, because a lot of times it's a lot easier to say no to things than to say yes and open them up, especially, I think, in big corporate IT. So I think, unfortunately, the department of no will still have the upper hand in that overall discussion. But I would imagine that's going to change very, very quickly, just with how companies are starting to unlock some of the data and the revenue benefits they're getting. Once one company figures out how to generate an extra 2% top line because its models are predicting things better, everyone else gets on board. So I'm hoping for that day to come, because again, it makes for a better overall experience for all of us when it comes to the vendors we work with.

Tim Gasper [00:57:20] Yeah, I think your thought process on that checks out. I mean, I think about the cloud movement, when everybody started moving their data to Snowflake and the like. At first everybody was like, "Oh my God, cloud." It took a little time, but once everybody figured it out, it stuck. We just need to let this play out, right?

Jesse Sedler [00:57:37] Big time. Again, that's a people thing. It's a comfort zone in many ways.

Tim Gasper [00:57:42] Yeah, it is. I've got my book over here, Crossing the Chasm. People are going to be in a different spot in their adoption journey.

Jesse Sedler [00:57:53] I'm currently reading that. I know one of the questions was going to be, " What are you reading?" I started reading it.

Tim Gasper [00:57:58] Really?

Jesse Sedler [00:57:59] Yeah.

Juan Sequeda [00:58:00] That's a classic.

Tim Gasper [00:58:01] Great book.

Jesse Sedler [00:58:01] It is a classic, fascinating. I read it in business school, and I'm reading it again now in the midst of trying to cross the chasm ourselves. Yeah. Great, great. Love that.

Tim Gasper [00:58:11] That's true. Especially if you're a VP of product at a startup or something like that, this is mandatory reading, right?

Jesse Sedler [00:58:17] Yeah, it's one of the bibles.

Tim Gasper [00:58:20] All right, last lightning round question for you. This is a late addition; this is how you know that the lightning round is truly dynamic and comes together at the last minute. Do you think that differential privacy, homomorphic encryption, this whole space which you called kind of a nascent subspace, is going to become valuable for enterprises in the next three years, or do you think the opportunity is going to take longer to realize?

Jesse Sedler [00:58:44] Yeah, I think in the next three years we will see much broader adoption of some of these technologies. Some of them are not there yet... Homomorphic encryption is the true Holy Grail and might not ever come. But differential privacy, just as a technology, yes, I do think we'll see much broader adoption within the next couple of years. Enclaves for sure, just because they're a lot easier to use. Tokenization has been around for a long time; some use it really well, but it can be challenging.
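
Since differential privacy is the technology Jesse expects to see adopted soonest, here is a minimal sketch of its most common building block, the Laplace mechanism. The patient records, the predicate, and the epsilon value are hypothetical choices for illustration only.

```python
import numpy as np

# Laplace mechanism: answer a count query with noise calibrated to
# sensitivity / epsilon, so no single individual's presence or absence
# changes the released answer much.
rng = np.random.default_rng(seed=0)

def dp_count(records, predicate, epsilon: float) -> float:
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0  # adding or removing one person changes a count by at most 1
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical patient records; smaller epsilon = stronger privacy, noisier answer.
patients = [{"age": a} for a in (34, 51, 47, 29, 63, 58)]
print(dp_count(patients, lambda r: r["age"] > 45, epsilon=0.5))
```

The trade-off is direct: the privacy budget epsilon controls how much noise gets added, which is why the adoption question is as much about policy, what epsilon regulators would accept, as about code.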

Tim Gasper [00:59:14] Yeah. So some of the more advanced stuff is going to take longer because it's newer, but for tokenization, differential privacy, some of these things that have been around a little longer, we're going to see some adoption.

Jesse Sedler [00:59:26] Yeah. And federated learning is in use today by Apple, Google, and others on all of our phones, as an example.
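
For readers who want to see the pattern Jesse is pointing at, here is a minimal sketch of federated averaging, the idea behind on-device learning: each client trains on its own data and only model updates leave the device. The linear model, client data, and hyperparameters are invented for illustration.

```python
import numpy as np

# Federated averaging (FedAvg) sketch: clients do local gradient steps,
# the server averages the returned weights, raw data never moves.
rng = np.random.default_rng(seed=1)

def local_update(weights, X, y, lr=0.1, steps=10):
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient on local data
        w -= lr * grad
    return w

# Three hypothetical "phones", each holding private data from the same model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Server broadcasts the global model; each client trains locally.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # Server averages the weights it gets back.
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches [2.0, -1.0]
```

Production systems typically go further and protect the updates themselves, for example with secure aggregation or differential privacy, which is where this thread reconnects to the other techniques in this conversation.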

Juan Sequeda [00:59:32] All right, that's interesting. Takeaway time, Tim. Take us away with takeaways.

Tim Gasper [00:59:41] Taking away with takeaways, all right. So much good stuff today, Jesse. Really appreciate all the great nuggets you dropped. We started off with the honest, no BS question: AI is so exciting, but when you hear security, you go, "Eh." So what's the answer here? And you said, "Hey look, maybe there's this perception," and it's not unwarranted, of security being this layer getting in the way of the particular task you're trying to do, this "department of no." But an inflection point is really starting to happen, and you can see it in action with things like board mandates on AI. There's this opportunity. And how do you see people reacting? It doesn't happen across the board, but you can see it happening more often now: the CISO and the CDO coming together, talking, collaborating early in the life of a project. And later in our conversation, we talked about their important but sometimes awkward cousin, privacy, bringing them in early too, and governance early. You want all these folks to be there early because ultimately, you can collaborate together; it's about creating value for the business, not just about risk reduction. When we talked about the CISO and the CDO working together, you said security has been around for a long time while the CDO is a little bit newer, so there's a need to understand each other, a need for empathy. Security can take the perspective of, "Hey, when am I going to get hit in the face again?" Every three years or so there's a breach of some kind, and you have to mitigate and remediate against it. That sets up a certain culture, and to work through that, you have to have empathy for each other. And when we talk about security for AI projects, some CISOs are just straight up blocking things like ChatGPT because, quote, they need to wrap their head around it. They've been burned in the past, so they're trying to make sure they're doing the right thing for the business. But you've got to understand quickly: the space is moving really fast, so you can't wait. And a really important strategy is taking inventory. Understand what you have. Create a catalog. Use scanning tools and things like that that can help you gain more data intelligence, and find an opportunity to set yourself up to move fast while being secure. And even though it came up later in our chat, you did bring up some of these newer technologies and sub-industries around security, things like homomorphic encryption, differential privacy, etcetera. There's a lot of really interesting opportunity there, and I think that's a good pin for future follow-up conversations. So much more but, Juan, what were your big takeaways?

Juan Sequeda [01:02:20] Yeah. Going back to that AI security point, there's more to it, like a model registry so models aren't misused, what data goes into the model, and how easy it is to get it out. So for people who are listening, it's interesting to start thinking about what's the next thing you want to get involved in. That was a great segment right there. But at the end of the day, when we think about security, we have to go back to basics. What do we have? Who has access to that stuff? Because the security issues we see in the newspaper are just the basic stuff: John Smith left the company, their access wasn't shut down, and that's how it all started. And then we had that conversation about using AI, and how we're going to label all this unstructured data, which is really challenging. Just imagine: you share a document on Slack, then you email it, and so forth. That's getting saved in so many different places, so that mountain of data out there is just gigantic. And think about not just text but images; you have to go scan those images because maybe there's a credit card number in there. Even the horsepower, the machinery, the compute to go through all that is challenging. So the mountain of challenges in front of you is gigantic. We also touched a lot on mainframes. I think what we talked about there was, "Hey, 90% of credit card transactions still run through IBM mainframes, and there's just so much legacy, valuable data in there," but there's that cultural hesitation of "just don't touch it." My takeaway here is that I want to go learn more about the history of mainframes; they haven't gone away, and it doesn't seem like they're going away. And then we brought up the whole analogy of the brakes in the car: are they there to slow you down or to let you drive fast safely, and how does that apply to security? And you said, "Well, yeah, if you embrace the policy approach and some of the best practices, you just don't have to drive 130 miles per hour; you still want to follow some of the rules." And that's another good quote: privacy, that other cousin. At the end of the day, it's security, governance, privacy, compliance, legal, risk. Just be responsible. And I'll quote our legal team: "We are all part of the legal team." How did we do? Anything we missed?
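
Juan's point about inventory and scanning, down to credit card numbers hiding in documents, can be made concrete with a toy scanner: regex candidates filtered by the Luhn checksum. The document text and patterns here are hypothetical, and real data discovery tools go far beyond this (OCR for images, context models, many more PII types).

```python
import re

# Toy PII scanner: find candidate credit card numbers in free text,
# then validate with the Luhn checksum to cut false positives.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan(text: str):
    return [m.group() for m in CANDIDATE.finditer(text) if luhn_valid(m.group())]

doc = "Invoice paid with card 4539 1488 0343 6467, ref 1234 5678."
print(scan(doc))  # ['4539 1488 0343 6467'] — a Luhn-valid test number, not a real card
```

The same shape, broad candidate generation plus a cheap validator, is how inventory tools keep scanning tractable at scale; the expensive models only run on what survives the filter.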

Jesse Sedler [01:04:27] You guys nailed it. That was a hell of a summary.

Juan Sequeda [01:04:30] Well, that's all you. All right, we're way over, but we're having fun here talking about it. What's your advice? Who should we invite next? And what resources do you follow?

Jesse Sedler [01:04:43] So my advice would be: be as empathetic as you can be, right? And I'm not even stealing that from the US Army. Really, working people to people will solve more of these challenges, and understanding what somebody else is thinking, the challenges they're facing, and why they want to do something makes a whole world of difference. Very basic, but I think it's a core principle for everybody's day job to be focused on.

Tim Gasper [01:05:07] Wise words.

Jesse Sedler [01:05:09] What was the second question?

Juan Sequeda [01:05:10] Who should we invite next?

Jesse Sedler [01:05:13] I think the privacy-enhancing technology debate and discussion as it relates to AI is one I don't know if you guys have had on before. Having somebody who knows that space much more intimately than I do, but who can also talk about AI, would be great. Happy to recommend some folks if that's helpful.

Tim Gasper [01:05:33] Definitely.

Jesse Sedler [01:05:35] Yes.

Juan Sequeda [01:05:35] This is a great topic, something to go look into more.

Jesse Sedler [01:05:38] Yeah. So I think that area is good. Sadegh Riazi is the person I would suggest, because I used to work with him. He's one of the experts on homomorphic encryption and multi-party computation as it relates to AI, and he would make for a super-interesting conversation. Again, those tools are just so powerful; if you can get them right, there are things we can't yet comprehend in terms of business models and value. In terms of resources, I am reading Crossing the Chasm, so you already got me there on one, which is great. I personally like my local politics; I'm from Atlanta, so I read the Atlanta Journal-Constitution's daily political roundup of whatever's happening in the Georgia State House, which is entertaining at times and not at others. There are a couple of security blogs that I read. Ross Haleliuk is one of them. He's a fantastic writer who's a product manager, and he writes long-form pieces on security from different lenses. His stuff is awesome because it's non-technical but gives you the baseline understanding of a lot of these challenges, so you can relate to it as just a person picking up an article and reading it.

Tim Gasper [01:06:47] Can you say that again? You said Ross, what's it?

Jesse Sedler [01:06:49] His last name is Haleliuk, H-A-L-E-L-I-U-K.

Juan Sequeda [01:06:54] You got it right, Tim.

Tim Gasper [01:06:56] I was pretty close.

Jesse Sedler [01:06:58] A hard one, but he just wrote a book. He's got a good following on LinkedIn, and his blog is awesome for getting into some of these topics; it's very focused on cloud security. And the final one, from a podcast standpoint: Lenny's is one of the product management podcasts that everybody listens to. It just goes by Lenny's, I think there's a last name to it, but it's on iTunes or whatever the Google version of that would be. It brings in product managers and product professionals to talk about how they grew their companies. It's just fascinating listening.

Juan Sequeda [01:07:34] Well, that was very thorough. I really appreciate the recommendations.

Tim Gasper [01:07:38] Well, those are good. I hadn't heard of some of these before. Ross Haleliuk, I want to follow that.

Jesse Sedler [01:07:44] Yeah, he's awesome.

Tim Gasper And I'm kind of a product guy, but I haven't heard of Lenny's podcast. I've got to check that out; I feel like I'm out of the loop. Jesse, this was truly awesome.

Juan Sequeda [01:07:55] Thank you so much, Jesse. We've learned so much. We got into topics that we usually don't, and that's why we love this. And we had some honest, no BS conversation. And we had some great T-shirts that one day will come out.

Jesse Sedler [01:08:06] Yes, got to have the good T-shirts. And appreciate the opportunity. Thank you guys and thanks for the listeners. Yeah, appreciate it.

Juan Sequeda [01:08:13] Cheers.

Jesse Sedler [01:08:14] Cheers.

Special guests

Jesse Sedler, VP of Products, 1touch.io