
Responsible AI with Mariam Halfhide

58 minutes

About this episode

Responsible AI: what is it, and who is responsible when it comes to AI? Mariam Halfhide, Responsible AI Expert at Xebia and former C&C guest, is here to discuss the answers with Tim and Juan on this week's episode.

Tim Gasper [00:00:32]:
Hello, everyone. Welcome. It's time once again for Catalog & Cocktails, your honest, no-BS, non-salesy conversation about enterprise data management with tasty beverages in hand. I'm Tim Gasper, customer product guy at data.world, joined by Juan Sequeda. Hey, Juan.

Juan Sequeda [00:00:47]:
Hey, Tim. How are you doing? I'm Juan Sequeda, principal scientist here at data.world. And as always, it's time to take a break from your day and have a conversation about data in an honest, no-BS way. Today we have a guest, and we haven't had many guests who've repeated, but Mariam Halfhide is one of the few who has been here at least twice. She was on the podcast before, when we had a conversation about AI-ready data. One of the topics that's bound to AI-ready data, which we didn't get too deep into then, is responsible AI, and we said, okay, we need to have that conversation. Who better to have it with than Mariam? Mariam is a responsible AI expert at Xebia Data. Mariam, great to have you back. How are you doing?

Mariam Halfhide [00:01:37]:
Thank you. Doing really well. Great to be back, actually. And yeah, also very excited about this topic. It's really close to my heart. Yeah.

Juan Sequeda [00:01:46]:
All right, well, before we kick off the discussion, in full acknowledgement, honest, no-BS: this conversation is pre-recorded. And whenever we record, we always like to talk about our latest drink, the cocktail we've been discovering and enjoying right now.

Mariam Halfhide [00:02:05]:
I actually went down and made a special drink for this episode, because it's almost the end of the afternoon here and I could still use some sugar and caffeine in my blood. So I made an ice cream coffee, an ice cream variation of coffee: vanilla ice cream, and then coffee poured on top of it. And it's very, very tasty on a hot day like this one.

Tim Gasper [00:02:33]:
That sounds delicious.

Juan Sequeda [00:02:34]:
That's actually better. That's a great idea.

Tim Gasper [00:02:39]:
Is it kind of like, what do they call it? Like an affogato or whatever?

Mariam Halfhide [00:02:43]:
I don't know. I just know that if you order it in a shop, you get ice cubes instead of ice cream, but ice cream tastes better.

Juan Sequeda [00:02:50]:
All right, that's what we'll go try. And if you want to turn it into a little cocktail, put a little bit of Bailey's on top. I think that could work.

Mariam Halfhide [00:02:57]:
Oh yeah, yeah. You could make an alcohol version of it. Definitely.

Juan Sequeda [00:03:02]:
Tim, what about you? What's your latest drink of interest?

Tim Gasper [00:03:05]:
Well, actually, similar to Mariam, I have a sort of cocktail too, even though it's kind of breakfast time around here right now. I've been doing some more mocktails lately. This is a Kin Euphorics, it's called the Kin Spritz. It's kind of a citrusy flavor and it's got a tiny bit of caffeine in it. So a nice little alcohol alternative, no alcohol in it, a mocktail for the morning here.

Juan Sequeda [00:03:31]:
And one thing I've been doing a lot: I really like cucumbers, and I've been adding cucumbers to a bunch of stuff, all types of drinks and cocktails. Your typical gin and tonic with some cucumbers, but also mixing it up with Aperol and things, and just adding cucumbers to my water. I really enjoy that. It's so refreshing. I love it.

Mariam Halfhide [00:03:54]:
It is.

Juan Sequeda [00:03:55]:
All right, so the topic today is responsible AI, but first a warm-up question. What is something that you consider yourself really responsible at?

Mariam Halfhide [00:04:05]:
How do you define responsibility?

Juan Sequeda [00:04:09]:
Hold that thought for the discussion. I don't know, go with your own definition of responsible, then.

Mariam Halfhide [00:04:18]:
So, in regards to responsible AI as well, it has a lot to do with actually making a conscious decision. If I really have to say what I'm responsible for in life, I would say myself in the first place: my thoughts, my emotions, and my behavior. Owning my own self in that sense, I would call it that. It's not an easy task, I can tell you. But what's the alternative, right?

Juan Sequeda [00:04:47]:
Yeah, that's a good point. Think about yourself. All this responsibility, it all kind of starts with yourself.

Mariam Halfhide [00:04:55]:
Yeah. That's also, I think, the red thread in the end. Because if you cannot own yourself in your own world, your own reality, then how can you be counted on to be responsible for someone else? Especially if you're a mother like me, with two kids to be responsible for.

Juan Sequeda [00:05:15]:
Tim, how about you? What do you consider yourself responsible at?

Tim Gasper [00:05:18]:
Oh, my goodness. Well, after your answer, Mariam, I feel like I need to upgrade my answer. I need to improve it. I was thinking, man, what am I responsible for? And I was like, you know what? I'm responsible for making sure that my kids have lunches when they go to school in the mornings. So I make sure that that happens with 100% success.

Juan Sequeda [00:05:37]:
I'm going to tie mine to that a little bit, too. When I'm not traveling, when I'm here at home, I'm responsible for the entire morning routine, and I think I'm really good at it. I have my whole wake-up, get everybody up, get everybody's drinks and coffee, then breakfast and all that stuff. I think I've got it all to a good kind of flow. Even in my calendar, people will see it: I'm busy with my morning routine with the family, I can't do anything else.

Tim Gasper [00:06:01]:
Block it off, right?

Juan Sequeda [00:06:03]:
But I'm also really responsible for our groceries and food. That's me, I do that. So when I travel, I'm always responsible for making sure that I cook everything ahead of time so my family has all the food, because my wife doesn't like to cook. So they have everything. I think I'm really good at that. I love it when I come back and, yeah, they ate everything. It's good. So, responsibility. All right, let's kick this off. Honest, no-BS: what is responsible AI? But honestly, before we can get into that, what do we define as responsible?

Mariam Halfhide [00:06:40]:
Yeah. So I deliberately chose the term responsible AI and not AI ethics, because I came to realize that ethics is a very broad term; it means something different to everybody. And with that thought, it came to me that everybody, in that sense, is an ethicist. Everybody has their own moral compass that they're guided by, and who am I to make a judgment about somebody else's moral compass? I cannot do that, because there is no universal right or wrong in this case. Everything is context and perspective. Thinking further from that, I felt that if I call it responsible AI, it mainly indicates that you actually consciously thought about it and made your considerations with the data you had available at that point in time. In that sense, you're owning it. And that's why I call it responsible, because then you approach it responsibly. Even if your moral compass does not match up with mine, if it's a conscious choice, who am I to say that it's the wrong one?

Tim Gasper [00:07:47]:
Yeah. No, that's interesting, this contrast between ethics versus responsibility. I know some people like to talk about both: they like to talk about AI ethics, and they like to talk about AI responsibility. You mentioned that responsibility means you actively thought about it and you're actively owning it. And when you think about responsibility around AI, or maybe responsibility more generally, well, actually, let's zoom into AI: who is responsible? I feel like "who" is a very interesting question when it comes to responsibility around AI.

Mariam Halfhide [00:08:25]:
Yes, definitely. I think you have separate parts of responsible AI that belong to separate people. But in the end, if you take the team developing that product responsibly, that team is not just the developers. It's also the people around privacy, the people around security. It's a multidisciplinary team that is involved in the responsible development of that product, and that team is end-to-end responsible for the product. Certain parts fall to certain people. For example, the development team in the strict sense would be responsible only for the technical implementation of the principles that are designed or defined beforehand, and those are usually defined by top management, by the C-level stakeholders. So everybody has a part to play, and depending on the phase you're in, that's the part you're responsible for. But the total responsibility for a responsible AI product is end to end for that whole team.

Juan Sequeda [00:09:31]:
So you brought up principles, and I want to dive into what those principles are and start connecting this to how these things are being defined legally now, with the EU AI Act and so on. What are these principles that you think should guide anyone building any type of AI product? And how is that being influenced by, and evolving with, what's happening from a legal standpoint?

Mariam Halfhide [00:09:58]:
So there are, of course, some principles that overlap, but overall those principles are also very different per organization. What I want to say is that it usually has to do with a certain journey the organization is on. The first step, usually, when we get approached, is about this: as an organization, what's your acceptable standard? What's your standard of being acceptable? It can be merged with principles coming from regulations such as the AI Act, which we can go into a bit later, but usually it is tied to your values as an organization. You want principles that are tied to your organizational values, to your organizational moral compass. Those guiding principles, in the end, are the organizational moral compass you want to embed in your organization. There are some common guidelines, like human-centric, socially beneficial, fair, explainable, transparent, accountable, things like that. But it really depends on the ecosystem the organization is operating in. Some are about respecting human vulnerabilities instead of giving users what they want, or minimizing harmful externalities instead of the paradigm that all tech has good and bad. There are many templates for how you could approach these principles, but in the end, I think it's most important for the organization to know what they stand for. Just to give you an example: if you want to be an inclusive organization, let's say you embrace diversity and inclusion, then of course your AI solution needs a principle embedded in it that it's also inclusive for all of its users, just to name one.

Tim Gasper [00:11:51]:
That makes a lot of sense, and I like how you mention some common, maybe shared or more popular types of principles, but ultimately there are aspects that are specific to your own organization, to the people involved in the project, and to what their values are. Why is responsibility such a big topic around AI? Versus, say, we don't usually ask, hey, was that new SaaS application, that new software as a service, a responsible SaaS service? People don't talk about that, but they do talk about it related to AI. Why is this such a focus?

Mariam Halfhide [00:12:31]:
I think, in the end, because it is your foundation for trust, in a sense. That's not initially what companies see. I think initially they come in because of urgency, for example the AI Act that's just been approved and going to come into effect. Our clients feel a lot of urgency about that, they don't know what to do, and then they approach us. But this is my personal take, and I really try to inspire my clients: it's nice that you want to take responsibility to be compliant with the AI Act, but regulation is the lowest bar of ethical behavior. You really want to inspire them to just do the right thing. And it mainly comes down to the fact that most of them have kids, most of them have these conversations at the table: do we allow phones at the table, are the kids addicted to social media, whatever the reason. So there's always a way to show how the already existing, let's say, less responsible cases of AI are affecting their day-to-day lives. And it's about making them realize that they also have a say and a responsibility to carry, as all of us do. As professionals in the field, professionals using or adopting AI, we all carry that responsibility to help decide which way we actually want our ethical norms to evolve. Because they will be changed by technology. It's a two-way street: ethics is a product of society, technology changes society, so ethical norms will also evolve. And we actually have more say in it than we think we do, I think.

Juan Sequeda [00:14:15]:
This is a fascinating section right here. That's a t-shirt quote: regulation is the lowest bar for ethical behavior.

Mariam Halfhide [00:14:26]:
Well, any lower and you're punished by law, right? So that's at least how I look at it.

Tim Gasper [00:14:36]:
You must at least do this. This is the minimum, right?

Mariam Halfhide [00:14:39]:
Yeah, it's like the minimum acceptable. But in general, if you think about it from that perspective, it sounds weird, but it's also part of responsibility: hey, I'm partially responsible for creating the world that I want to live in. A bit of a philosophical perspective, but that doesn't make it less true. Then you also have a part in what you think is acceptable in general, and of course you're also part of a system. But view it as a Jenga metaphor, maybe. I think it was Aza Raskin who mentioned it somewhere in a podcast, and it inspired me a lot, because it's not that we are against AI, or that we don't want to stimulate AI. That's really not the point we're making. It's more about: what are the fragilities in our society that new technology can expose, which are going to undermine our ability to actually benefit from all those amazing AI benefits? I'm not sure if it was Tristan Harris who mentioned this Jenga metaphor I was telling you about. We all want the taller, more amazing building of AI and its benefits, but do we want to get there by picking the pieces from the bottom, destroying the foundation, and just focusing on the part at the top? I could relate to that, the bigger picture.

Juan Sequeda [00:16:02]:
I've never heard the Jenga metaphor used this way, but what you're saying reminds me of those memes where you have all this data stuff happening, and it's built on a foundation of one little stick on the side. What I like about the way you're presenting the Jenga metaphor is that we focus so much on the bigger view: oh, look at that, it's so big, it's so cool, whatever. But the way that thing was built is by taking that small piece from the bottom, and that's probably not the way you want to get to something bigger and broader and better. I think that's a fascinating way of thinking about this. And we need to start really thinking: we're investing in one part, but we're taking investment away from what other part? And losing that investment is not good. Again, it's all about balance.

Mariam Halfhide [00:16:57]:
So I think an example could be: yeah, we do get the cool AI art that we love, but we're also creating, for example, deepfakes that undermine people's understanding of what's true and what's not true in real society. Or we might get new cancer drugs, but by also creating AI that is exposed to the language of biology and might have implications for new biological threats. So there are always two sides to the coin. And I don't want to put too much focus on the negative, because that's really not the point I'm trying to make. But it's not a technology problem. I think it's more of a social, political problem, because in the end it's all about the society or system you're going to embed that technology in. Yeah.

Juan Sequeda [00:17:47]:
What's your recommendation? To make this really practical: organizations are either building AI applications internally or buying them, right? Buying products and services and so on. So how should this be determined? Let's say there's the team that's building them. Is it the team themselves who should be thinking about all of these things, okay, these are the bad things that could happen? Or should they just be focused on the task at hand, with some other group of people, a committee or whatever, thinking about that? How do you recommend this going forward? Because the concern I have is that people start spending too much time on the negative things and never start at all. That's not what we want either.

Mariam Halfhide [00:18:34]:
Yeah. So it's not a one-off thing; to me it's an iterative process. I view it more like that. You have the CI/CD pipeline, and at every step of the CI/CD pipeline there are questions you can ask yourself to embed responsibility by design, but that's specifically on the builder side of things. If you look more from a use case perspective, then from ideation all the way to industrialization and maintenance of such a use case, there are also things you can go through. At ideation you'll already be asking yourself: should we be doing this? Who are the people involved, and in what way can they be involved? And this usually happens in a multidisciplinary way, all the way through to maintenance. There are steps you go through, and in different phases different things are relevant. But it is an iterative process, because there's also a feedback loop and revision that takes place. You can have a set of guiding principles that guides you today; tomorrow you might have a different CEO and a different set of guiding principles. It's something ongoing in a way, a bit like data governance, but not entirely.
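
As a rough illustration of the "responsible by design" checks Mariam describes, here is a minimal sketch (not from the episode) of how such questions could be wired into a CI/CD pipeline as a gate. The stage names, checklist items, and the model_metadata.json file are hypothetical examples, not a prescribed standard.

```python
# Hypothetical CI/CD gate: fail the pipeline if the responsible-AI questions
# for the current stage have not been answered in the project's metadata.
# Stages, checklist items, and the metadata file layout are illustrative only.
import json
import sys

CHECKLIST = {
    "ideation": [
        "should_we_build_this",     # the conscious decision to proceed
        "affected_stakeholders",    # who is involved, and in what way
    ],
    "development": [
        "data_quality_reviewed",    # technical implementation of the principles
        "bias_evaluation",
    ],
    "deployment": [
        "human_oversight_defined",
        "transparency_notice",      # users know they are interacting with AI
    ],
}

def missing_items(metadata: dict, stage: str) -> list[str]:
    """Return checklist items that are unanswered or empty for a stage."""
    answers = metadata.get("responsible_ai", {}).get(stage, {})
    return [item for item in CHECKLIST.get(stage, []) if not answers.get(item)]

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "development"
    # Answers are maintained by the multidisciplinary team alongside the code.
    with open("model_metadata.json") as f:
        metadata = json.load(f)
    missing = missing_items(metadata, stage)
    if missing:
        print(f"Responsible-AI gate failed at '{stage}': unanswered items {missing}")
        sys.exit(1)  # block the pipeline until the questions are answered
    print(f"Responsible-AI gate passed for stage '{stage}'.")
```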

Tim Gasper [00:19:46]:
Yeah. So things are always changing, and your process has to be resilient to those changes. I like you describing it as an iterative process. Something that I've seen at a lot of the companies we've been working with lately is the introduction of AI councils. Do you see that as a good practice? Does that help with responsibility, or does it help with other things?

Mariam Halfhide [00:20:11]:
So it's a bit different. Normally you go through certain phases as a company. You start with the guiding principles, the definition of those at a top level, and then you go to implementation. And implementation is twofold: the organizational changes that you need to make, including, for example, such an AI council, which can take different forms, and then the actual technical implementation, what I like to call responsible by design. So it depends a bit on the company and their structure. I can imagine that such a multidisciplinary council can work for bigger organizations, but in many cases it can overlap a lot with a data governance steering committee type of council, which is also multidisciplinary. So it's not just one or the other. A lot of things can be combined within the data governance practices, specifically the people and process part. The technical implementation part has some different components to it.

Juan Sequeda [00:21:17]:
Well, you're talking about responsibility by design, and we always hear privacy by design, governance by design; there are all these things by design. At some point everything is by design, and honestly, yes, you should be designing these things. But it looks like there's so much overlap, and I wonder how organizations are doing this, because whether you talk about privacy, governance, or responsibility, and even if you set the AI part aside, those principles should apply to any of these other types of systems that you're building. So the takeaway I have right now is that this whole by-design approach shouldn't just be separate pieces, different parts; there should be a coordinated effort around it. And again, it's all aligned to the principles and the moral compass of your organization, which I really love how you keep highlighting.

Mariam Halfhide [00:22:15]:
I agree. There's a lot of overlap, also with the implementation of regulation. If you're trying to implement GDPR, a lot of that overlaps with implementation of the AI Act. The AI Act has some other things on top of it, of course, and some different context, but a lot of the processes can pretty much be standardized, and there's overlap. So with that being said, I agree. I think this should essentially be a part of every piece of tech we build, asking these questions: should we be doing this? But I think the problem is exactly the conscious consideration, because some things just seem cool, so let's just build them. I would also be fine with a proper data management program, let's say a data product management program with data governance that includes ethical considerations and also embeds them technically. That would be just the same as responsible by design, if you do it well.

Juan Sequeda [00:23:19]:
I'm just saying, Tim, the topic of data products and data product managers is something we've been talking about for years now. We started off this season with Anna talking about data product management. And we're all figuring this out, right? I've been saying this a lot lately: we're building the future. There has been this thread of data, and data products specifically, after the whole data mesh stuff came up, focused on the socio-technical aspects of data products. And then in the last year and a half, two years, AI through LLMs and generative AI and all that has come up. In a way there's so much stuff, but this is a huge opportunity to really align right now, because all of these things are connected somehow. I can imagine going to the future and looking back at this time: I think people will say that the 2020s were an inflection point in the history of enterprise data, because all these things happened at the same time, and it really got people to think about how to manage this holistically instead of just throwing things all over the place.

Mariam Halfhide [00:24:41]:
Yes.

Juan Sequeda [00:24:41]:
Yeah. I'm just ranting now.

Mariam Halfhide [00:24:43]:
That's why, initially, I try to inspire our clients: do it not because you have to, not because legislation says so, but because it's great basic hygiene for your own company and your own management, which also brings us back to the foundations of data management in a way. So that's essentially the point I'm making. But there is also a bigger picture at play. Whenever technology brings some type of power, it creates a race. And you see that race between the big tech companies: newest features, deploy as fast as we can, because if I don't do it and have a say in the future, then somebody else will. I also see that race at the national level, between countries. From the Dutch government, for example: there's no way we don't invest in AI. We will never not invest in AI, because if we don't do it, other countries will, and we won't have the competitive advantage. So it's almost like an arms race, and it reminds me a bit of the nuclear weapons type of race, although this one is in everybody's pocket, so it's a bit of a different context, less obvious maybe. But that bigger picture is also at play. And when you hear all that, it feels very much like, oh my God, it's inevitable, this is going to happen anyway. And I do believe that even though it's inevitable, it's okay; AI and its development are a given, because AI could solve a lot of our issues. It could help us with climate change, it could help us with a lot of things. But only if we also dare to really look at the other side, let's call it the shadow, I don't know what else to call it. We have to dare to face that shadow, because oftentimes it requires us to ask the difficult questions, and we don't really want to think about them. We just want to use the tech, it's so cool. I've caught myself on that so many times. But much like owning your own stuff on the individual level, this is also something we should do collectively: own whatever we bring along with it, so that it can actually help us gain all the benefits we want.

Tim Gasper [00:27:08]:
Yeah, no, I think this is an interesting exploration of how we make sure that we get all the benefits that we want while minimizing some of the negative aspects. And you bring up sort of this Manhattan Project-esque situation. I hear this all the time, where folks say, oh yeah, we've got to be careful about AI, make sure we're being responsible, et cetera, but then in the same breath say, but if we slow down, China's going to win, and we don't want China to win. Stuff like that, right? So you're saying we have to move as fast as possible here. I think that creates a very interesting environment when talking about things like responsibility and ethics around AI, because obviously we have to find a way to be responsible while also moving quickly. And maybe that's something that's interesting about things like the EU AI Act: I think sometimes people worry that these regulations slow down innovation. Is that true or...

Mariam Halfhide [00:28:16]:
...not? In a way. I completely agree with what you're saying, in the sense that it's a bit of a weird dynamic. And then I try to pick up the niche I actually have control over and see where I can bring my contribution, hence the responsible AI work. But specifically with regards to the AI Act: what is it? It's a regulatory framework to try to regulate AI. It is quite revolutionary, because Europe is the first bloc doing this. And much like GDPR, which over time became more or less a global standard and got everybody worried about privacy, this has the potential to do so as well. As for the AI Act itself, I think it's a well-thought-out act. I think they really did their best, and, how do I say this, politics can be quite complicated in that sense. But I do feel the way it has been done is not as agile as the environment and the ecosystem it is meant for. It's easy to have criticism, because nobody knows what the best solution is, and either way it is better than nothing. So don't get me wrong, I'm happy that it's there. But it's been done in a bit of a similar way to GDPR, so I do have a lot of concerns with regard to the actual implementation on a national level, and the way it will be, let's say, controlled. Who's going to do that? Who's going to do all of that? Because already with GDPR there was a lot of lack of resources. And here, I feel it's going to take a long time. It already took a long time to get the AI Act this far. I think the first proposal was in May 2021 or April 2021. I was one of the first that actually, you know...

Tim Gasper [00:30:28]:
That was when the first kind of draft started to appear.

Mariam Halfhide [00:30:31]:
Yes. And I was pointing out, hey, this is going to come up. And nobody paid any attention. And it took them almost three, four years to get it effective. So in the meanwhile, we got ChatGPT and all that.

Tim Gasper [00:30:44]:
Literally, halfway through developing the act, ChatGPT came out. Right.

Mariam Halfhide [00:30:48]:
Yeah. And I mean, they've tried to make it dynamic, in the sense that they define AI very broadly and they have an annex listing all the AI systems that are included, so that if a new system arises, they just need to update the annex and not the whole definition. So they made those kinds of future-proof adjustments, so to say. But in general, the whole governance around it and the legislation is very top-down, very waterfall. And I understand they have a big responsibility, so they want to get everything right as much as they can. I do get that, but I feel it's not as agile as it should be, the whole governance. We need a bit more agile governance, and I don't know the perfect answer myself, but I have seen one working example so far, and that would be Taiwan. Taiwan has a bit of a different governance around how they develop policy. It's much more agile and digitalized. Think of a platform where people are all contributing by upvoting and downvoting policies, and then you have AI that can go, hey, let me analyze the content posted by this political tribe over here and the content posted by that political tribe over there, and let me see if I can make a statement that is nuanced enough to actually bridge those opposites and create consensus, instead of viral polarization like Facebook does, for example.

Tim Gasper [00:32:29]:
Yeah. Is this a thing that exists? There's like a service or something like that that's trying to do this.

Mariam Halfhide [00:32:35]:
They have a technical platform for civilians to go to. I think it initially started with Audrey Tang, the digital minister of Taiwan. She started with a group of civic hackers that brought more transparency into the budget spending of the Taiwanese government, I think, and from that she was asked to be digital minister. They've been doing quite some remarkable work, and it's quite broad. They also have media competence in the curriculum, so high school kids already take responsibility as responsible producers of content on media and not just passive consumers. A bunch of stuff that's fascinating, really, in a lot of ways.

Juan Sequeda [00:33:19]:
Like, that would be more of a bottom-up approach.

Mariam Halfhide [00:33:22]:
Yeah. Yeah.

Tim Gasper [00:33:23]:
And more of a participatory approach. That's interesting. Well, maybe just to come back a little bit, because I want to come back to this civilian technology platform in a minute. Going back to the EU AI Act: I know a lot of companies in the European region are now understanding it and starting to implement practices around it. I think folks in the US and some other parts of the world are maybe still trying to wrap their heads around it. How would you describe the EU AI Act to somebody who is new to it? What are some of the key tenets? Is there an enforcement aspect? Can people get fined, and things like that?

Mariam Halfhide [00:34:04]:
Yes, I think the fine can go up to 7% of global revenue or something like that, so it's not nothing; it's more than with GDPR. The way it is set up, it has a risk-based approach, which basically means you only want to regulate to the extent that something actually poses a risk. Their vantage point is: we don't want to be in the way of innovation, so they try to balance that. We do think AI is good and has a lot to offer; we just think it might pose some potential risks to human rights, and we want to cover those. That's their perspective. Then you have certain risk buckets or risk categories. You have unacceptable risk: those systems are prohibited, but you expect those to be a very small portion, because they usually relate to things like social scoring by public authorities and some exploitation type of cases. Then you have the high risk category, which will be the most regulated one, but we don't expect it to cover the majority of systems. For high risk, there is a clear list of use cases specified in an annex: if you're in one of those use cases, you're high risk. Then you need to go through a stricter procedure with a bunch of obligations, from data quality to transparency to human oversight, a lot of requirements. You also need to undergo a process of CE marking, like physical products have, an actual CE marking that it's a safe product to use. AI systems will get the same marking if they're high risk. Then you have a somewhat larger bucket, which is basically AI with a transparency obligation. It's not high risk, it's permitted, but you need to be transparent. It's really about knowing whether you're talking to a bot or to a human, things like that, or whether you're looking at a generated image or a real image. Transparency in that sense.

Tim Gasper [00:36:18]:
Yeah.

Mariam Halfhide [00:36:18]:
And then the biggest category is just minimal risk: no restrictions, nothing, no regulations there. They expect that to be the majority of the systems.

Tim Gasper [00:36:29]:
Interesting. So there are these different risk levels, and it depends on what level you are in. Unacceptable means you shouldn't do it at all, not at all. Whereas high risk, to your point, is where most of the regulation is focused. And that's where...

Juan Sequeda [00:36:44]:
And is everyone self-assessing? How do they put themselves in the right bucket?

Mariam Halfhide [00:36:48]:
Yeah. And mind you, this is just the legislation itself. So if you're a company and you don't know where you stand, you should start with an inventory of your use cases, an inventory of your algorithms: what do you have? Only when you know what you have can you actually classify whether it's high risk or not. And only if you have high risk will you have a lot of work to do. If you don't, you're mostly good to go. If you have the moderate risk, then you just have the transparency obligation, which is just telling your users and the people involved, and that's it.
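
To make the inventory-and-classify step concrete, here is a minimal sketch (not from the episode) of sorting an inventory of AI use cases into the Act's risk tiers. The tags, the tag-to-tier mapping, and the example inventory are simplified illustrations under assumed categories, not legal guidance.

```python
# Hypothetical sketch: classify an inventory of AI use cases into the
# EU AI Act's risk tiers. The tags and mapping are simplified examples;
# real classification follows the Act's annexes and legal review.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"                     # e.g. social scoring by public authorities
    HIGH = "conformity assessment + CE marking"
    LIMITED = "transparency obligation"             # e.g. disclose chatbots, generated media
    MINIMAL = "no additional obligations"

# Example tags -> tier (illustrative, not exhaustive)
TAG_TO_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "exploitation_of_vulnerabilities": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,          # example of an annex high-risk area
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "generated_media": RiskTier.LIMITED,
}

def classify(tags: list[str]) -> RiskTier:
    """Return the strictest tier implied by a use case's tags."""
    order = [RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED, RiskTier.MINIMAL]
    tiers = {TAG_TO_TIER.get(tag, RiskTier.MINIMAL) for tag in tags} or {RiskTier.MINIMAL}
    return next(t for t in order if t in tiers)

if __name__ == "__main__":
    # Hypothetical inventory of use cases, each tagged by the team that owns it.
    inventory = {
        "resume screening model": ["employment_screening"],
        "support chatbot": ["chatbot"],
        "internal sales forecast": [],
    }
    for name, tags in inventory.items():
        tier = classify(tags)
        print(f"{name}: {tier.name} -> {tier.value}")
```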

Tim Gasper [00:37:24]:
That makes sense. Interesting. And so basically, I'm assuming there's a kind of centralized board where you have to bring these things and essentially get approval, get that marking, for those high risk use cases.

Mariam Halfhide [00:37:42]:
Yeah. If you have a high risk case, at some point you'll go through what they call a conformity assessment. It's a procedure that will assess the system, and it can be done by external parties, so there's a market for those parties to emerge, or maybe they've emerged already. Through them you go through the affixing of the CE marking, a process very much designed for products in general. Then, on the higher level, you have the European Commission, and you have the national, let's call it supervisory, authority at the local level. The actual implementation happens on a national level, and the Commission wants one contact point from each of the countries to be that authority. Those work in line with the conformity assessment bodies, the notified bodies that need to perform these assessments. And then you have different kinds of operators. Non-EU companies that want to market in the EU will be affected in that sense: they will not be able to market in the EU if they don't conform to the AI Act. They're then probably either a distributor or an importer of such a system; it depends.

Juan Sequeda [00:39:14]:
And I'm curious how the EU AI Act is being perceived from outside the EU right now. When GDPR started, it was for the EU, and then the rest of the world, or at least the US, started adopting these things in other places. There's legislation in the US too; different states have been bringing it in. What are you seeing from outside the EU? How is this being perceived right now? You talked about Taiwan; I'm curious about other perspectives.

Mariam Halfhide [00:39:46]:
I've seen two ways so far. One way of perceiving it is: it was about time that somebody had some type of regulation, and maybe this will form a standard for more countries. And the other one is more: why do we need that? Why is it even necessary? It's going to stand in the way of innovation. Both have some truth to them, because I think the biggest headache is that you don't want to end up in long bureaucratic processes while developing something. But it's much more difficult to come up with something more pragmatic that still has that built in. I hope we'll be able to do something about that.

Juan Sequeda [00:40:32]:
So it's still early on. You have the two camps: one says, yeah, this makes sense, and the other asks, why more regulation?

Mariam Halfhide [00:40:39]:
Yeah. I think I read today in the news that there are more countries involved. What was it? It was something about an AI treaty that also involved other countries like the UK and the US. It was quite recently in the news, but I need to look that up; I don't remember it so well.

Juan Sequeda [00:41:03]:
Well, there's so much more to continue with here, but I want to head into our lightning round questions, because we've got more in there. This is such a spot-on conversation, and I'm glad we're having it for all our listeners. We're getting into our takeaways soon, but I love how we're seeing that we need to be pragmatic about this: it's not that we've got to speed up, but we've got to be careful around these things. So, all right, lightning round questions, kick it off. I'll go first. Are universities and the education system preparing the next generation well when it comes to responsibility and ethics around AI?

Mariam Halfhide [00:41:49]:
I can only speak for the Netherlands and only for right now, and I think right now they do a bit more. Back when I was studying, which is quite some time ago, no attention was given to that. But we were also not in the state we're in now. Now I think there's much more movement, because I'm being invited a lot to guest lecture at universities about this topic, so that says something, I guess. But I do think it depends on the university or college and their curriculum; that differs per university and per program. It's definitely getting more momentum, though. Honestly, I've been walking around with this topic for so long and nobody would care, until ChatGPT came along, and then the approval of the AI Act gave it an extra boost. Now there's momentum. I really feel that now.

Tim Gasper [00:42:52]:
Second lightning round question for you is, does the EU AI act go far enough to encourage responsible AI?

Mariam Halfhide [00:43:06]:
You mean in terms of the details.

Tim Gasper [00:43:08]:
Just in general: does it go far enough, or do you think it needs to go farther?

Mariam Halfhide [00:43:14]:
I don't think it necessarily needs to go farther, but I do think there is, of course, some space for interpretation, and it's up to everyone to make that interpretation. I'm also not sure how easily loopholes can be found there. For example, if I read the prohibited risk category, I would say algorithms by Meta would fall directly into it. But I would also not be surprised if a company like Meta found a loophole way before an AI Act authority, or whoever, could actually see it, because they have so much more resources, so much more understanding. So no, it's not perfect, definitely not, but it's better than nothing. And it's also trial and error; you learn from it. At least it brings a bit of awareness.

Juan Sequeda [00:44:14]:
Next question. Can we develop AI applications quickly and still be responsible? Or does it really require slowing down a little bit?

Mariam Halfhide [00:44:27]:
I think in the maturity state we're in now as human beings, we might need to slow down a bit. But I do believe we can do it quickly and responsibly; we're just not mature enough for that yet, I think, as humans.

Juan Sequeda [00:44:48]:
Yeah, I like that answer. And those who don't slow down are going to trip and fall, and then they'll probably slow down.

Mariam Halfhide [00:45:00]:
We'll have some cases go bad, I think, before we start taking it really seriously. And we already do, because of those cases, I think. This is kind of why I'm saying: let's do it responsibly, and maybe at some point we'll have solved the riddle and we'll be able to do it quickly.

Tim Gasper [00:45:20]:
Yeah. As we learn, we'll be able to be responsible and fast. Whereas right now, we obviously know some bad situations. Nobody wants to repeat the Air Canada situation and things like that, right? These are becoming the PR fiascos that take center stage. But there's also a lot that we don't know. We're still figuring out the ways this can be problematic, and as we push the envelope, we're learning together.

Juan Sequeda [00:45:44]:
Just on this particular topic: if we do the analogy with software engineering, that's something we've been doing for half a century, even more, arguably, if you go back to early computation. But really treating data as its own thing, as a first-class citizen, is something more recent. So it's still somewhat in its infancy. But I like how you said it: it needs to mature.

Mariam Halfhide [00:46:15]:
And that's why I'm a bit afraid that... not afraid, that's not the right word. What worries me is that we keep going with the deployment race, which I understand very well is there, along with the dynamic behind it. But open source in development is something else than open source in use, because once those models are out, there's no way of putting them back in. So I think we need to realize that really, really well.

Tim Gasper [00:46:42]:
Yep. No, it makes sense. All right, final lightning round question for you. Do you see organizations caring about responsibility of AI without having to be prompted by regulation?

Mariam Halfhide [00:46:57]:
Very few. Very few. Those are the leaders, in the sense that they really want to do the right thing. They were among the first to really think about: what is my ethical compass as an organization? So not only what's legal and what's technically possible, but also what is desirable for me as an organization. You're actually taking that responsibility. But most of them, no.

Tim Gasper [00:47:22]:
That's fair. Yeah. Not to speak down on US data leadership and things like that, but every once in a while I hear a data leader in the US say, oh, and we care a lot about data ethics, or data responsibility, and I'm like, wow, that's so unique and different. Whereas in places where there's a little more regulation going on in this area, I feel it comes up much more often; it becomes much more part of the discourse. And that's problematic for places like the US where there isn't a lot of regulation yet. Not to say that regulation is always a good thing, but it certainly forces the conversation a lot more.

Juan Sequeda [00:48:04]:
One thing this prompts me to think about: it's like building a house. There are things you're going to do regardless, you don't need regulations for them, but we've learned those things over time. There's some minimal foundation work, because if you don't do it, the house is going to fall and people can die. You don't need regulations for that type of stuff, but at the beginning of time we didn't know these things, and houses fell. Then there are other things where you think, well, I really don't need this, but the regulations come in because there's some reasoning behind them, and eventually that becomes the norm and people start doing it. And sometimes it just gets way too much, and people say, you know what, as long as nobody comes around this neighborhood to check, okay, whatever. And if you get a fine, you budget for the fine.

Mariam Halfhide [00:48:57]:
Yeah, well, this is exactly the point, I think. Regulation helps you enter the responsible realm, but that's not what it's about; otherwise I would have called it not responsible AI but regulated AI. It's not that I'm a big fan of regulation, it's a lot of work. But it's basic hygiene: if you have proper data management in place, it will be so much easier for you to be a responsible party at the end as well.

Tim Gasper [00:49:24]:
There are benefits to responsibility more than just avoiding a fine.

Juan Sequeda [00:49:33]:
From a maturity point of view, I think the industry is now realizing: hey, those foundational things we thought we didn't need, actually, yeah, we do need them. They're not only important, they actually make us successful at what we really want to go do. We need those foundations.

Tim Gasper [00:49:50]:
Yeah, exactly. Well, you know, usually we do Tim's takeaways: over to you, Tim, for your takeaways. But you know what? We're doing a reversal today, and I'm going to say: Juan, over to you for Juan's takeaways.

Juan Sequeda [00:50:03]:
All right. Not "Tim, take us away," it's Juan; I've got to find a word starting with J that goes with takeaways. But anyways. Okay, so we kicked it off with responsibility, responsible AI. First of all, I like how you said that ethics is a very broad topic and responsibility makes much more sense, because everyone has their own moral compass. Think about that: everybody has their own moral compass, and there's so much context and perspective. When it comes to responsibility, it's really something that you've actively thought about and that you're actively ready to own. We can have different moral compasses, but we need to agree on what is responsible. So who is responsible when it comes to AI? You're building these products, and there's a team, a multidisciplinary team. Yes, there are developers who may be in charge of some of the technical aspects, but there's a whole end-to-end responsibility, and you need to really understand the different scopes and what stage the product is in. And then, what are these principles when we talk about AI? I really love how you brought this up: it really depends on your journey, on your organization's moral compass. The main takeaway I'm having is that when we start talking about responsible AI, you have to ask: what is the moral compass of your organization? What is the moral compass of your leadership? What do you define as acceptable? Because that may be different depending on your organization, your industry, and so forth. There are some common themes: is it human-centric, is there social benefit, is it fair, explainable, transparent. But essentially it boils down to what your organization stands for, what that most important thing is. So why are we talking so much about responsibility when it comes to AI? A lot of it is about trust. And the other reason is what we just finished talking about, the whole EU AI Act. A t-shirt quote here: regulation is the lowest bar for ethical behavior; any lower and you are punished by law. I love that. And something important to realize: ethics is a product of society, and society creates technology. We know technology makes changes in society, so ethical norms will evolve, and everyone really has a part in all this.

Juan Sequeda [00:52:12]:
One thing I really love is the Jenga metaphor. It's not that we're against AI; that's not the point being made. It's really that there are these fragilities in our society which get exposed by new technology and can undermine all the benefits. We want that really tall Jenga tower, that tall building of AI, but if we're taking the pieces from the bottom, it can topple. And that foundation is the responsibility. And if there is a theme in everything we've talked about at Catalog & Cocktails, now in our fifth year, it's that this is not just technology; these are all also social aspects.

Mariam Halfhide [00:52:50]:
Yes.

Juan Sequeda [00:52:50]:
Tim, how about you? You keep going.

Tim Gasper [00:52:52]:
So many good takeaways. All right. We also talked about how we should manage all these negatives around AI and how to do responsible AI, and Mariam, you mentioned that it needs to be an iterative process. You think about CI/CD, continuous integration, continuous deployment: as you iterate, you have to address these use cases, and at different phases of these projects different things matter. And we talked a little bit about different governance structures. If you're a larger organization, maybe something like an AI council can make sense. But if you're a little smaller, you don't need 14 different councils; maybe you just have one council, or maybe it just goes to your executives. So you have to think about how big you are and what the right kind of governance structure is to help. We talked about privacy by design, governance by design, responsibility by design: there's an upfront thought process that needs to go into making sure we're building the right things and that they meet the moral compass of our organization. It fits into things like data product management and some other larger topics that we hear people talk about. These are foundations; it's basic hygiene. And I liked a phrase that you used, Mariam: the shared thing is conscious consideration. People sometimes talk about architecture or design in different ways, but conscious consideration is something that applies really well to thinking about responsibility in the context of data and AI. And then last but not least, we talked about the EU AI Act and some of the other things going on across the world. The EU AI Act is a regulatory framework, and the EU is really the first large political bloc implementing something around this. So it really is cutting edge, which means hopefully it's good, but it's possible it has issues, and those will have to be resolved; there's a problem that comes with being first, but there's also something bold about it. It takes a risk-based approach: there are some things that are unacceptable, some things that are high risk, which you should get validated, and you can do that with third parties through conformity assessments and things like that. Then there's moderate risk, where you're supposed to make sure it's transparent and clear and follow the guidelines. And then, of course, there's low or zero risk, where ideally there's nothing you really need to worry about. It's a well-thought-through act, you mentioned, but who's going to do the work? Maybe it's a little too waterfall-oriented; ideally it would be a little more agile. And you gave a great example of something very different, much more democratized: in Taiwan they have a civilian technology platform that allows people to actually participate, try things, and upvote and downvote. That gives, I think, an idea of how we could do this differently in the future.

Mariam Halfhide [00:55:34]:
Great summaries.

Tim Gasper [00:55:36]:
How do we do?

Juan Sequeda [00:55:37]:
How did we do?

Mariam Halfhide [00:55:38]:
Great summaries. Very well. Yeah. Much more concise.

Juan Sequeda [00:55:43]:
Well, again, this is all you. So, all right, to wrap us up here, three questions. What's your advice about data, AI, responsibility, life, whatever? Who should we invite next? And what resources do you follow?

Mariam Halfhide [00:55:59]:
My advice brings us back to individual responsibility. If you want to understand the organizational moral compass, then you should at least understand your own: understand what your values are and what you stand for. And don't be afraid to raise a question if you feel something is wrong, because only that creates a culture of ethical awareness. If you are not able to raise a question about whether or not you think we should be doing something, then you're in the wrong company culture.

Juan Sequeda [00:56:32]:
Who should we invite next?

Mariam Halfhide [00:56:39]:
Who should we invite next? I don't know if you're familiar with this man; I got inspired by him during the Gartner summit. He was giving a very interesting session about ethical dilemmas and how to resolve them. That's Frank Buytendijk. He's Dutch. That would be a person I would recommend. And I'm talking to him next week, so who knows?

Juan Sequeda [00:57:04]:
All right, and then what resources do you follow?

Mariam Halfhide [00:57:07]:
What resources? It's a combination. A lot of it is from the Center for Humane Technology, a lot of it from Pedro Domingos. In general, the responsible AI topics that have to do with the more concrete, how do I say this, implementation side of it I find interesting. But it's a variety; I think the Center for Humane Technology is a good place to start.

Juan Sequeda [00:57:39]:
Perfect.

Mariam Halfhide [00:57:40]:
Yeah.

Juan Sequeda [00:57:41]:
Well, Mariam, thank you so much for having this discussion with us, because this is one of those topics that we have not touched on the podcast before. And I know these are things that people may not be asking about, but it's on people's minds; they hear about it all the time. So this is a great way of preparing people for it. As always, thanks to data.world, who lets us do this, now in year five. And thank you, Mariam, for being our guest for the second time.

Mariam Halfhide [00:58:06]:
Yes, thank you so much for providing the platform for this. I think it's important to share this.

Juan Sequeda [00:58:11]:
Fantastic. Thank you. You have a good one.

Tim Gasper [00:58:13]:
Cheers.

Special guest

Mariam Halfhide, Responsible AI Expert, Xebia