The Human Conversation with Marion Mulder


The Human Conversation Podcast on Leadership and Ethics

Society & Culture


Guest: Marion Mulder


Marion brings her experience in both technology-driven innovation and social-system change to create desired futures for purpose-driven organizations.

As a “Co-Thinker,” Marion helps organizations become Future-Proof by co-creating inclusive and sustainable futures. Every organization faces three major waves: digitalization (AI), sustainability, and social impact. Using methodologies such as strategic foresight, scenario planning, systems thinking, systems innovation, design thinking, futures thinking, and agile/SAFe, she collaborates with organizations on a transformation towards a more purpose-driven, desired future.

With roots in digital technology since 1998 and driven by a passion for strategy and innovation, she enjoys working together on solutions the world needs.

As a Futurist, she delves into future possibilities, crafting scenarios for both potential and desired outcomes.

Dedicated to being a FutureMaker, she transforms ideas into action, focusing on setting agendas and catalyzing change to create a more equitable and just world. From emerging-technology projects to board positions with impact-driven organizations, her mission is clear.


 

HIGHLIGHTS & TAKEAWAYS:

  • KG: You've had such an interesting career: technology, conscious innovation, the future of work, advocacy, mentoring, speaking worldwide. Is there a red thread that ties all of it together for you? What has been the driving force behind all that you do?
  • MARION: You kind of only know that when you look back, right? So at one point I was looking at, what is the red thread through my career? And for the longest part, it was digitalization.
  • MARION: I have a book here about the future makers, about futurism, and it turns out I'm an explorer and a map maker. An explorer is someone who just goes down the unknown path, and a map maker is someone who, while walking it, draws the map for the people who follow. I think that's kind of what I've done. And for the longest time, it's been digital.
  • MARION: I started following people around the world, some of whom were on stage in AI saying they'd been doing AI ethics for twenty years. They really were knowledgeable and were talking about responsible AI before the rest of us even knew what AI was, and that was really helpful to see. It also made me see the bias in the technology, but also that it's a reflection of society, especially when you talk about AI now. Of course, society has its flaws, but when you start putting that into technology, it basically starts getting carved in stone, and it's going to be harder to get it out again. So that's where I became an advocate of conscious innovation: once you start doing innovation, let's be conscious about what's happening and how it's treating people. It's a great opportunity, but you also have to think about who's benefiting, and also who's being harmed by it, perhaps by accident.
  • MARION: The majority of us use technology because we think we're improving the world, and then for some of us it becomes a worse place to be in. That's something we need to be mindful of.
  • MARION: I think my own route into ethics starts with the fact that I'm a woman and, you know, the world's built for men. I've got this great book standing here called Invisible Women. It's a really good one; it explains clearly that the world's built for men, by men, and we also live in this world.
  • MARION: Wouldn't it be great if we could just be ourselves at work? That's what everybody wants, right? But for us, it's not a given. There are countries in the world where you have to hide yourself, where telling people who you love could actually be grounds for firing you. You have to hide parts of yourself. That got my attention from an ethical point of view: is this really helping everyone, who are we leaving behind, and who are the vulnerable ones?
  • MARION: I have a coach, and she said to me, you need to get from knowledge to wisdom. I was like, what does that mean? But I've come to the conclusion that knowledge is much more about thinking you have to have the solutions, and I've been very big on creating solutions; that was my skill set, coming up with solutions. Wisdom is much more about asking questions. So not having answers but asking questions, and that's just a different mode.
  • MARION: I believe in more women on stage, but I also thought, I like doing those things. I like being on stage. I just said yes and then figured out what exactly it entailed. Sometimes I get myself into things and wonder, why did I say yes to this? But then you get through it, and those are the best stories and the best memories.
  • KG: Can you share with us what these terms, ethics and integrity, mean to you, and how you use that sense of what ethics means in some of these new sunrise domains you've been involved in?
  • MARION: I think it's about, does it come from the heart? Just let your heart speak. Consciousness, of course, is also the way in. If you're not very conscious, your ego takes over, and you're also not very attuned to the effects on other people. It's about this gut feeling: if something tells you it's off, it probably is. And then figuring out what is off, maybe just by drilling it down in my mind.
  • MARION: My two ethical questions: the first is, just because we can, should we? That's kind of a gut feeling going, maybe this is not the best idea. You can also wonder, what could possibly go wrong? That usually draws on examples from other, similar things: we don't want to be like them. The other is, how is this helping humankind or the planet in a sustainable way? If we can't answer these questions, that's a really big red flag.
  • MARION: From an AI point of view, there are two frameworks I work with. One is the Impact Assessment Fundamental Rights and Algorithms. It starts with why: why are we building this? Then the what: what are we building, and what kind of data is necessary for that? The last question is about human rights, or fundamental rights, and there's a list of fundamental rights in that document. The point of this one is not so much filling out the form, but having a conversation with multiple people, preferably a diverse group, including someone who represents the user. That could be someone internal, but also someone from the affected group. (See the illustrative sketch after this list.)
  • MARION: Just because you can, should you? That's really your internal guide, your intuition: if your stomach turns when you see something, listen to it. And how is humanity, how is the world, becoming a better place because of this?
  • KG: The most powerful formulas or checklists for us are often the ones that are most intuitive and simple. You're asking questions, as you said, that should be question zero.
  • MARION: This is also about design thinking: what problem are we solving, and who are we solving it for? Part of it is the value proposition: what pains and gains are there? What tasks do people have, what do they like, and what do they dislike about how things are done now? We want to create a product that actually solves a problem, and you don't want to create a product that creates additional problems.
  • MARION: You're doing it for them, not for yourself. And it comes back again to the heart, which means involving others and doing it for them, versus your own ego and need for validation.
  • KG: One of my favorite examples, which I've heard you share on a panel in the past, is about searching for an image, the algorithm's effort to be representative, and how it can all go wrong. Would you like to talk about that example?
  • MARION: They were all sorts of different ethnic backgrounds, but they just went too far with it, too diverse as in, we've got one of every flavor. I just asked for a group of friends, and it gave me the same kind of girls, just in different sorts of clothing and all super skinny. That's not representative; that's the bias correction gone wrong.
  • MARION: How do we get this right? That's a really interesting question, because my "right" is probably different from your "right"; we have different lived experiences, and our expectations of what we're supposed to see are different. But it comes back again to whoever developed this product. When they were testing, how diverse was the group of testers, that they didn't see this the first time round, or that they thought it was acceptable to ship the product when it was clearly not showing a good reflection of society, just one very specific view of society?
  • KG: Was there a time when you yourself faced competing priorities, perhaps as an entrepreneur or founder, or as a subject matter expert helping a client, where on one hand there was profitability and first-mover advantage, and on the other this question zero that you raise: should you do it simply because you can? Did you ever face that, and could you share it?
  • MARION: When you start talking about diversity in organizations, you have to talk about business cases. And it's like, am I only valid when I'm a business case, when you can make money off me? That didn't sit right with me. But I was actively advocating for it internally, because we needed more diversity and we needed the money to get it all done. So I had to figure out for myself a way to navigate that. I could step out of it, but then I couldn't achieve anything. I also realized that we were talking the language of the group we wanted to convince. To get them on board, you need to speak their language first. And once they're on board, you can start having these other conversations.
  • MARION: Even if you're the only one in a certain situation, there are always a lot of you somewhere else. You just have to find where your tribe is.
  • KG: Advocate burnout is a very real thing, and that sense of solitude and aloneness is also a very real thing. We evolve and fit ourselves to where we need to be, where it's advantageous for us professionally or intellectually stimulating, but we also stay connected to what nourishes us. And it's okay to traverse those worlds and be more than one thing.
  • KG: For early-career or mid-career professionals, do you have any observations about the ethical dilemmas most relevant today, and what would your advice be to them? In tackling such things, new issues might come up that they don't know anything about or don't know how to handle. Any advice for them?
  • MARION: I think one thing is, if your gut says it's wrong, it probably is. It's interesting to see if you can find people who can help you, but you can also find other people to have a conversation with, to figure out why you have this sense.
  • MARION: The one that bubbled up today is digital colonialism: one particular culture takes over, and we still have the remnants of that in our daily lives.
  • MARION: If we're not careful, with big tech it's not just the oligarchy thing; it also comes with its values. And it's basically wiping out culture, because it's so dominant and we all use the same base.
  • MARION: If AI is a reflection of society, a reflection of which specific society are we talking about? How do we make sure the beauty of all our different cultures remains, because there's value in it? There's a lot that we all have in common; if you go to a conscious level, there's a lot that is the same no matter what culture you're from. But there are these different nuances in how we speak, which language we use, and how these languages translate into concepts, and we need to make sure that isn't one-size-fits-all.
  • KG: I was fascinated by a recent example at UVA, where a student sued the university because they used facial recognition technology for exam taking. This student happened to be a dark-skinned person; the software did not recognize them, and they lost the ability to take the test. They said, this is exclusionary. The university's argument was that they were not the perpetrators here; they simply paid for software used worldwide by many other organizations in many other domains. So the legal question for me was, who is the offender here? I'd love for you to throw any light on that.
  • MARION: I think it's helpful to start thinking of this software as if it were a car, right? If you buy a car, you expect certain basics to be in there. If there are no brakes in the car and you get into an accident, the producer of the car is liable. But if you're driving that car and you hit something, that's not the car's responsibility; that was you driving it, and you're liable for that particular damage. I think that's kind of how I would see this particular case as well.
  • KG: What is something that is inspiring you these days? Is there a quote or a book or perhaps advice that you came across that's made you say, wow, I love this?
  • MARION: I like Braiding Sweetgrass by Robin Wall Kimmerer, which is about someone who marries science with indigenous wisdom. It's things like that I think we need to look for. Something I'm going to be reading soon is about quantum and consciousness, Irreducible by Federico Faggin, but that's for when you're a bit further along in this whole concept of consciousness, technology and science. And if you want to get into consciousness and you like Deepak Chopra, there's Digital Dharma; he's actively pitching it everywhere.
  • KG: What are the passion topics you're currently excited to talk about?
  • MARION: I give presentations on responsible, inclusive AI. That's kind of the space: organizations saying, we want something with AI, we want a strategy for AI, we want to figure out how this is going to help our company, but we do want to do this in a humane way. It's not about replacing people; it's about how we use technology to advance ourselves as an organization and our people, and making sure we get the balance right. As for what feeds me personally, I'm really interested in consciousness and the combination of quantum and consciousness. And the topic I'm exploring for myself, though I'm still figuring out what it means, is called augmented consciousness.
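
For teams that want to run the conversation-first assessment Marion describes above in a structured way, here is a minimal, illustrative sketch in Python. It is not the official questionnaire: the question wording, field names, and example participants are assumptions made up for illustration, and, as Marion stresses, the value lies in the discussion among a diverse group, not in the form itself.

    from dataclasses import dataclass, field

    # Illustrative only: the stages below paraphrase the why / what / data /
    # fundamental-rights questions Marion mentions; the exact wording is assumed,
    # not taken from the official assessment document.

    @dataclass
    class AssessmentQuestion:
        stage: str                     # "why", "what", "data", "fundamental rights"
        question: str
        answer: str = ""               # filled in during the group conversation
        open_concerns: list[str] = field(default_factory=list)

    @dataclass
    class ImpactAssessmentSession:
        system_name: str
        participants: list[str]        # aim for a diverse group, incl. user/affected reps
        questions: list[AssessmentQuestion]

        def unanswered(self) -> list[AssessmentQuestion]:
            """Questions the group could not answer."""
            return [q for q in self.questions if not q.answer.strip()]

    # Hypothetical usage: system name, participants, and questions are invented.
    session = ImpactAssessmentSession(
        system_name="exam-proctoring pilot",
        participants=["product owner", "data scientist", "student representative", "legal counsel"],
        questions=[
            AssessmentQuestion("why", "Why are we building this at all?"),
            AssessmentQuestion("what", "What exactly are we building?"),
            AssessmentQuestion("data", "What data is necessary, and where does it come from?"),
            AssessmentQuestion("fundamental rights", "Which fundamental rights could be affected, and for whom?"),
        ],
    )

    if session.unanswered():
        print(f"{len(session.unanswered())} question(s) unanswered: treat this as a red flag.")

The only check this automates is the one Marion calls a red flag: questions the group could not answer.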


REFERENCES & LINKS:

MARION’s LinkedIn/Website page

LINKEDIN

MARION MULDER


MARION’s Book Suggestions

THE FUTURE MAKERS

INVISIBLE WOMEN

BRAIDING SWEETGRASS

IRREDUCIBLE

DIGITAL DHARMA


The Human Conversation Podcast Channels

APPLE PODCASTS

SPOTIFY