Ethical Design – HTW Berlin 2023

Working in tech allows us to touch people’s lives. Ethical design helps us to empower, not exploit, those people.

Let’s design for relationships that empower people.

Guest Lecture – Ethical Design: Principles & A Process

HTW Berlin – International Media Informatics
13 January 2023, Livestreamed to Berlin

There are many frameworks available, but I find myself needing something simpler.

In this guest lecture, we discuss a simple principle that provides a framework for making ethical decisions. Then we walk through a practical process for applying the framework to our own design work.

Video

Guest Lecture – Ethical Design: Principles & A Process

Discussion / Q&A


Downloads


👉 Download the slides here (PDF)

Ethical Design Checklist (PDF)

References

Transcript

[Andreas] And welcome, Brian, to our guest lecture. I’m so happy that you’re here with us with a really super exciting topic.

I think it’s an extremely important topic, and I think also a pretty new topic. You’re an expert on it, so we all have the chance here to learn a lot.

And please, dear students, take the opportunity and discuss with Brian, ask questions, engage with him. Yeah, and I’m really looking forward to this session.

Welcome, Brian. The stage is yours.

[Brian] Thanks, Andreas. I appreciate it. And yeah, thanks, everyone, for attending, for actually being here. I’m happy about that.

And I do appreciate your kind words, but I think “expert” might be overstating it a bit. Because it’s ethics, because these things are so personal, and because humanity has been struggling with the idea of ethics, ethical design, and ethical decision-making for so long, I find it difficult to feel like an expert, or for anyone to consider themselves an expert.

So I will at least have the intellectual humility to say that I am asking questions, kind of like everyone else. And what I want to do here is share with you the framework that I use for ethical decision-making, and a process I have developed, and am still developing together with some other folks, for applying that framework to actual, real work.

So before I actually start showing slides, what I’d like to do is invite everyone, whether you’re sitting or standing, to go ahead and straighten your back, and feel free to close your eyes.

And what we’re going to do now is a very short meditation together, where we’re going to reflect on some of the most fulfilling relationships that we have in our lives. But let’s start by focusing on the breath. There’s no need to make any changes. No need to speed up or slow down.

Simply observe your breath without judgment. Where do you feel the breath most saliently? Is it in your stomach, in your chest, maybe in your nostrils? Investigate it a little bit. Is it warmer or colder as it goes in and out? How does it feel right now?

Okay, so let’s expand our awareness a bit to our bodies and the physical space that we are occupying at the moment. Feel your weight being attracted into the earth through your feet. If you’re sitting, maybe you can feel your butt in the chair, the bottom of your legs. If you have arm rests, maybe you can feel your forearms.

Feel from the inside out that you’re sitting or standing. Let’s expand our awareness, once again, to our body in its entirety. Go ahead and feel any air that’s touching your skin. Feel the texture of your clothing.

If there’s any wind or air current moving through your hair or across your face, or any other sounds around you, just register them and let them go. So now that we are in this state of open awareness, I’d like you to refocus your attention and visualize, reflect on, some of the most meaningful and fulfilling relationships you have in your life.

Which relationships do you have that nurture you, that inspire you, challenge you to grow? Which relationships help you feel love, contentment, and fulfillment? Let’s just hold those relationships in our mind for one more moment. Take one last nice, deep breath. And open your eyes.

Thank you so much.

I appreciate you taking that bit of a journey with me. You can open your eyes again. Welcome back. Thanks, Paul. Yes, quite intense indeed.

And I have concentrated on relationships for a reason because relationships represent, for me, the lens that I use to look at ethical design. So I’m gonna go ahead and share my screen now, and I hope everyone can see. If folks cannot see my screen, someone let me know, and I’ll try to take some corrective action. But in any case, here we are. So as I mentioned, I think about relationships as the lens through which I view ethical design.

And the reason why is because I feel that anything we create, whether we are designing something or coding something or building anything, whatever it is, product, service, anything that we make for someone else, creates a relationship between us and the people who interact with that thing, and also with the people around that interaction, the people who experience that person interacting with the world.

And it’s the choices that we make when we’re designing the thing that actually determine what that relationship is gonna be. As you can see here, there are two different photos with completely different feelings, completely different moments, completely different characters of relationships. And when it comes to ethical design, when it comes to really creating things that meaningfully help people in the world, I like to encourage us, and also myself, to create relationships that feel like this picture.

Empowerment, not exploitation. Love, not fear. Together, not separate. So if there’s anything that you remember or internalize or learn from this small, short session or presentation today, it’s these three things.

If you learn these, you can fall asleep right now, and everything’s okay. Basically, everything that we design creates a relationship. Our choices are the things that determine how that relationship will be.

We define that relationship with our choices. And so I want to encourage us all, and I also encourage myself to empower people. Let’s choose to empower people, not exploit them.

So Andreas, again, thank you so much for inviting me here. I’m really happy to be here and happy to have the opportunity to talk with everyone. And again, everyone, thanks for attending.

My name is Brian Pagán. I founded a company called The Greatness Studio, where I help tech creators and creative professionals use their superpowers for good. My academic background is in psychology, hence the brain emoji. And the sailboat is there because one of the most meaningful and fulfilling relationships in my life is with one of my best friends, Xenia.

She and I are starting a sailing business this summer where we are going to take people out on the sailboat for like a month, and we’re going to visit different islands in the Ionian Sea and do eco-conservation activities there, like beach clean-ups and meetings with local activists and stuff.

And I’m really looking forward to that. But then, yeah, maybe a bit more specific to our context here: I’ve also been designing in UX in one way or another for over 20 years now. So I consider myself to be kind of a UX grandpa in that sense, and I have the gray hair here to prove it as well. So that’s me in a nutshell.

And one thing that I want to talk about today is this ethical design checklist. I created this together with some folks from the Vested Summit, conscious tech, from Stanford Peace Innovation Lab, from UXenzo, and a number of open contributors to our open beta. I won’t dive into it right now because we’re gonna actually talk about this in a bit.

So first, I’ll discuss just briefly what we’re gonna do today. As I mentioned at the top, I’d like to share with you a principle, or a set of principles, that I use, something very simple that helps me understand whether my choices are ethical or unethical.

And then I’m gonna share with you the process that we came up with, along with this ethical design checklist, for how to apply that principle and framework to our work in real life. And after that, I’d love to open the floor to an open discussion about any questions, anything that people might not believe or not want to hear, or anything that you don’t agree with. I’d love to talk about that with you. That’s why we’re here.

So first, let’s dive into the principles. Ethical. What does this even mean? I mentioned a moment ago that I resist, let’s say, the nice and kind label of an expert, because as you can see from all the images here, these are all people, philosophers, who have dealt with this problem, with this question, actually, for the last 500 or 1,000 years.

And as a species, humanity, we still haven’t solved it. So I wonder if there is anyone who truly is an expert in ethics in that sense. But because all of these different perspectives exist, all of these different ideas and thoughts and attempts to create some kind of a universal system of ethics, I like to keep things simple.

So for me, I like to try to boil things down into simple, practical things that I can use. And so for me, it’s all about making choices either to exploit or empower. And I don’t necessarily see this as binary; I see these on a spectrum, which I’ll come back to in a moment. But I did get this actually from Adam Grant, and my cat would like to say hi as well.

[Tiger the cat enters 🐈] Hi, Tiger. Aww, so Tiger wants to say hi. Tiger likes Adam Grant as well.

And in his TED Talk, Adam Grant talks about the research that he did around how people perform in organizations, businesses, whatever. And he noticed in his research that there are takers, and there are givers. And the fundamental difference between takers and givers is that takers are focused on helping themselves. And givers are focused on helping everybody.

So this does not mean that takers are mean and givers are nice. Grant also makes a distinction within each category: among both givers and takers, you have agreeable people and disagreeable people. And I don’t necessarily like to think of this in terms of people.

These are actually more, in my perspective, patterns of behavior. Just because a person is a giver in one scenario doesn’t mean that they can’t be a taker in another. But as you can see in the graph here, you can have very nice givers, but you can also have very nice takers: people who are very agreeable, who are very friendly, who are plotting a way to stab you in the back while they’re smiling in your face, you know?

And you also have, likewise, disagreeable givers: people who maybe have a cantankerous personality, who don’t like interacting with other people, or who maybe don’t possess the social skills to communicate their kindness in a way that the people around them can receive. In Adam Grant’s parlance, these people would be considered disagreeable givers. So the point is not so much “let’s be givers and not takers” as “let’s give and not take.” And that’s what I mean when I say these are patterns of behavior, because we can choose whether we give or take.

And every single decision that we make, the way I see it, falls somewhere along the spectrum between take and exploit on one side, and give and empower on the other. Everything falls somewhere in between. And I understand that things are not absolute; nothing is absolute.

Context is also very important in every decision that we make. For me, this is the way that I try to categorize and create a taxonomy of the decisions that I make so that I’m better prepared to move around the world and interact with my world in a way that I feel is ethical and follows my values.

So now I’d like to briefly open the floor to folks. This isn’t the discussion part, I’m just taking a break between sections.

So what do you think? Is there anyone who would like to share their own perspective? Is there anyone who doesn’t like what they’re hearing? Who disagrees? Anyone who wants to have a short debate or anything? Any questions?

[Andreas] Brian, there’s one question in chat from Paul.

[Brian] Oh, yeah, please. Yay!

[Andreas] Yeah, what do you think is the ratio between givers and takers?

[Brian] Oh, now I see it. That’s a good question. I don’t know, honestly; I invite you to check out Adam Grant’s TED Talk. This is maybe a nice moment for me to mention briefly that at the end of this talk, there will be a QR code and a link where you can download these slides and also all the things that I mention here.

I’m gonna mention some news articles, Adam Grant’s TED Talk, for example; there are links to all that stuff if you want to check it out. Adam Grant actually wrote a book about all this, and I think he probably answers that question in his book, at least I hope so. Sorry, I wish I had a better answer for you, Paul.

Alexander, thank you for voicing your concern. In the chat, you mentioned that you understand the concept of givers and takers, but you’re still not convinced that it’s better to be a giver than a taker.

That’s fair enough. I have two, let’s say, responses to this. The first is that ethics indeed are a personal choice. It’s all about what kind of impact you as a person want to have. Another question I thought about opening this session with was: how do you want the world to be different for your having been in it? And in that case, if anyone chooses to be a taker, chooses to help themselves and the people around them at the expense of everyone else, that’s a valid choice.

Many people are extremely successful by doing this. I think most billionaires in the world probably have some kind of, let’s say, mindset around that. But according to Adam Grant’s TED Talk, at least as I’ve understood it, takers can perform better than the givers around them in an organization. But the presence of a taker in an organization brings down the performance of the organization as a whole, compared to organizations that have more givers in them. So the collective prospers better when people are giving around each other.

And when there are takers, when there’s a more competitive culture in the organization, then individual contributors can perform very well, but the team as a whole performs at a lower level.

So in that sense, yeah, I’m not necessarily saying that it’s better to be a giver than a taker, but choosing to give has benefits that help the organization or the team that you’re part of. And as fellow members of team human, the team that we call the human species, givers help us all to enjoy our human rights, our freedom, and good experiences, whereas takers contribute, for me, to a culture where everyone has to look out for themselves and be a bit more wary and fearful. That’s my perspective. What do you think about that? Does this address your concern, Alexander? Thanks.

“Moral ambivalence toward healthy opportunism in some Asian societies.” Ah, yeah, this is another very interesting thing, Alexei.

So in the interest of time, thank you for, yeah, asking these questions and bringing up these points. They’re super valid, and this is exactly the kind of debate we need to be having around what’s better, what kind of choices should we be making, what kind of ideals should we be working towards?

And I will reiterate and say that while I do have a stake, as a fellow human being, in whatever things you end up creating as a designer, developer, builder, engineer, or tech creator, at the same time, I don’t judge anyone who feels that they need this healthy opportunism, or to take at times. You know, it’s a personal choice in that sense.

Oh, Mika actually also mentioned that unionized workplaces end up being more productive than hierarchical ones. That’s a cool thing I didn’t know about. But yeah, it certainly reinforces the thought that givers together perform better as a team. Thank you for that. So let’s move on briefly. Let’s get into specifics now. So I’m gonna talk about some examples, but first, I would like to show some of the dimensions or the, let’s say, axes along which we can choose to exploit or empower people.

These four axes, experience, privacy, safety, and rights, are adapted from the Ethical Design Manifesto from Ind.ie, which is now called the Small Technology Foundation. I think their work is pretty great.

And yeah, I really like their Ethical Design Manifesto, but I adapted it to be a bit more specific, because I like specificity, and I’m not super great with ambiguity. For me, it helps to make things explicit and very specific. So instead of defining all these terms, let’s go straight into some examples, and hopefully that will shed some light on what’s meant here.
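(As a quick aside for the programmers in the room, here’s one hypothetical way to picture the four axes as data. This sketch is my own illustration, not part of the manifesto or the checklist, and the scores are made up.)

```ts
// Hypothetical sketch: scoring a design decision along the four axes,
// where -1 means fully exploitative and +1 means fully empowering.
type Axis = "experience" | "privacy" | "safety" | "rights";

type EthicalAssessment = Record<Axis, number>; // each value in [-1, 1]

// Example scores for the invalid-name form error discussed just below:
// annoying, but the user can work around it, so only mildly exploitative.
const nameFieldRejectsDiacritics: EthicalAssessment = {
  experience: -0.3,
  privacy: 0,
  safety: 0,
  rights: 0,
};
```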

So experience for me really is about, let’s say, the emotional experience or the experiential subjectivity of what a person goes through when they interact with any product or service or system or anything.

And here’s an example from my life. My last name is Pagán. So I have a special character in my name that’s not in the, let’s say, default character set. And for me, it still pisses me off that in 2023, there are still services that don’t support an expanded character set. And what makes it worse is, as you can see, the error message here says to please enter a valid name, but this is my fucking last name. I know it’s valid, and this pisses me off a lot. But at the same time, it just makes me angry; it doesn’t really ruin my day. I can, you know, fix it.
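As a side note for the developers here: modern JavaScript regular expressions support Unicode property escapes, which avoid exactly this problem. Here’s a minimal sketch, assuming a reasonably recent engine; the exact character policy is my own guess, since real name validation is notoriously hard, and the safest rule is usually to accept almost anything.

```ts
// A minimal sketch: accept Unicode letters and combining marks, plus a few
// common name punctuation characters, instead of only [A-Za-z].
function isPlausibleName(name: string): boolean {
  return /^[\p{L}\p{M}][\p{L}\p{M}' .\-]*$/u.test(name.trim());
}

isPlausibleName("Pagán");   // true, whether á is precomposed or letter + combining mark
isPlausibleName("O'Brien"); // true
isPlausibleName("");        // false
```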

So as you can see, the little ball on the spectrum over here is still quite in the middle, all right? This isn’t so bad. I can live with this. Things get a little trickier when we start talking about privacy, because privacy is directly and indirectly linked to how freely we can enjoy our human rights, and how robust our lives are against censorship or oppression from governments or any other organization that wants to, let’s say, impose power over us.

And this graph by SafeNet really was kind of a shock for me. I didn’t realize that these messaging platforms collected so much data, especially the ones on the left. So if you look, all the red blocks are things that these messaging platforms collect, and the green ones are things that they don’t collect.

So you can see Session there on the right side doesn’t collect anything. It’s incredibly, let’s say, private, but, you know, it’s a little bit less convenient because you have to do a little more work to set up your thing and to find people to chat with. But, you know, what you get back for that is more privacy.

We see things like Signal and Telegram, which are a bit more of a balance. They’re a bit more convenient, a bit more, let’s say, usability-friendly, but they do collect a bit more data.

For example, your contact info, so that they can, you know, connect you with people who are in your address book. But what bothers me more about this kind of thing is the following. I’d like to bring your attention to these points here, okay?

Sensitive info. I don’t know what SafeNet means when they say sensitive info, but I know that the GDPR, the General Data Protection Regulation, the EU’s law about how to deal with data and privacy, defines sensitive data as things that can be used to discriminate against you: for example, your gender, your sexual orientation, your religion, these kinds of things.
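For reference, the GDPR’s special categories of personal data (Article 9) can be paraphrased as a type. This list comes from the regulation itself; whether it matches what SafeNet means by “sensitive info” is exactly the open question here.

```ts
// GDPR Art. 9 "special categories", paraphrased from the regulation.
// Processing these is prohibited by default, with narrow exceptions.
type SpecialCategoryData =
  | "racial or ethnic origin"
  | "political opinions"
  | "religious or philosophical beliefs"
  | "trade union membership"
  | "genetic data"
  | "biometric data used for identification"
  | "health data"
  | "sex life or sexual orientation";
```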

And I wonder, really, really wonder, first of all, if my assumption is correct that this is what sensitive info means. But I also wonder, if that is the case, what is Facebook Messenger or Bip doing with any of this sensitive information?

Like, that feels a bit scary to me. Also, health and fitness: why does Facebook Messenger need to know my heart rate or my steps? And financial info, this kind of tripped me out a bit. Financial info, browsing history, other data, I’m not sure what that means. But I will say this: the identifiers that most of these apps collect, or maybe even store on a phone, are all about being able to track our identity outside of the app, across the entire internet.

Let’s say Telegram. I use Telegram because I don’t like WhatsApp, I don’t like Facebook, I don’t use any products from Meta if I can avoid it. I also deleted my Instagram account, but Telegram I still use. But I see that they also have these identifiers. So at least the way I understand this is that Telegram leaves an identifier on the phone so that whenever I browse the internet with my phone anywhere else, Telegram knows it’s me on these different websites.

So I’m not trying to freak anyone out. I just wanna say that these data collection policies and practices are choices. Someone chose to collect these data, and I’m sure there’s some kind of reason, probably to sell them or earn money.

But yeah, this is, for me, a way that we are exploiting people a little bit on the level of privacy. Let’s go to safety. Another example here: the algorithm that drives Mercedes-Benz self-driving cars is designed to sacrifice pedestrians outside the car in order to save the life of the driver or passenger if there’s some kind of accident.

The reason why I consider this to be more on the left side of the spectrum, the exploitation side, is that the pedestrians were not involved in the decision to buy a Mercedes. They have no influence over the outcome of what happens with this self-driving car. In that sense, they’re very passive stakeholders: they can be injured, potentially killed, by someone else’s decisions. So for me, this is a classic pattern of exploitative choice and exploitative behavior. Still, our little red dot is not all the way to the left.

We can go a bit further when we talk about human rights. And in this case, I want to bring up a story that’s, let’s say, extremely salient in the United States right now, but that’s also migrating over to Europe and Germany: the justice system is using machine learning algorithms to determine how to sentence people who have been convicted of a crime.

And okay, the idea, the intention, let’s say, is very good. We want an objective way, one that isn’t prone to human emotion or human bias, to determine whether a person should be put in jail for longer or shorter, or what kind of sentence a person should get.

The problem is, in practice, these machine learning algorithms are being trained on data that are very biased, burdened by at least a century of racism in police practices, xenophobia, and a lot of other data biases, human biases, human emotions, all the stuff that we’re trying to keep out of the loop.

The data is stuffed with all these biases already. So the machine learning algorithm inherits the biases, but because it’s an algorithm, it appears to be objective. And this is the danger of these kinds of things when we use them to determine the life and freedom of a human being.

Because as you can see here, the algorithms, especially in the United States, are very, very biased against African-American people, Black people, people of color, and that bothers me a lot.

Okay, so that’s intense. I apologize for all of that doom and gloom, especially since, you know, you either just had lunch or are about to have lunch, so I’m sorry about that.

I hope you aren’t doing this on an empty stomach, but, whew, let’s go ahead and shake it off a bit and talk. Do you have any thoughts, anything you’d like to share? Any ideas or questions about this stuff?

I see, Alexei, you mentioned Weber-Wulff has a whole section of info on dealing with UTF-8 as a programmer, too. Cool. Special characters, yes.

I see that you also mentioned that Telegram uses it to geoblock information, for example. Wow, okay. Meh.

Hannes: “This is just wrong.” I’m curious what exactly you meant about what’s wrong.

I agree, but if you wanna chime in, I’d love to hear what exactly it was that you were pointing out that’s wrong there. Yeah, okay. It’s okay if you don’t want to.

Oh, you meant the Mercedes thing. I’ve got it. Yeah, yeah, for sure. I agree. But this is the thing: from our perspective, let’s say walking down the street where a Mercedes could hit us at some point, it’s wrong.

But from Mercedes’ perspective, I understand that they’re probably reticent to sell a product that could kill the person buying it, because maybe a person wouldn’t wanna buy that. At the same time, you know, if you’re the one driving the Mercedes, you should be the one to take the risk. So I agree with you that it’s wrong.

Jonas, I think you also point out a good thing: that the algorithms don’t take into consideration the circumstances under which people become criminals in the first place. As we see the economy going kind of crazy now and things getting very tough for people, especially people who are more economically vulnerable, we see an uptick in crime because people are desperate.

You know, people need to eat; they need to feed their kids. And some people need to turn to some kind of illegal activity to earn money, to be able to survive. So yeah, I agree: this kind of context is actually quite interesting and needs to be taken into account.

Thank you, Paul. I agree it’s good to think about this stuff and not fall into these traps, ’cause it’s extremely easy, especially when we work at a big company where we each have our own little thing that we work on, our little cog, our little widget in the bigger machine. It’s important that everyone pays attention to these kinds of things, because, to go back to the earlier discussion, it’s easy to fall into passive taking; it’s easy to just allow things to happen.

And like I said, it’s not necessarily wrong, but if we don’t want something to happen, it is incumbent on us to take some kind of an action. Avoid biased data.

Yes, Marcel. Avoiding biased data, there are lots of different ways to do that. I don’t wanna make this a talk now about, let’s say, research integrity. We can have another lecture on that if you want, because I really like the topic itself. But let’s say having good research practices, not necessarily scientific-level rigor, right?

But at least taking into account, at all stages of research, any biases that might be present. So: understanding what bias might be there and taking action to mitigate, minimize, or even eliminate those biases. That’s the basic answer to this question.

And I understand that we are humans, so we are going to be biased. It’s just how we are. But acknowledging these biases and minimizing them, taking action to minimize them can be extremely helpful. And I hope this addresses your question.

You can’t talk. You’re in the library, Emmett. No problem. Thanks.

Thanks for putting it in the chat.

“Avoid biased society,” eh. If you have a way to do that, let me know. I have very often considered just going to a cabin in the woods, you know, and living as a hermit by myself.

But I tried something along those lines, and it didn’t make me happy. I need people, I like people, I love people. And part of me feels that, as an agent of change, someone who feels strongly about these things and who has the opportunity to share what I know and how I work, it is also a little bit my responsibility to bring that knowledge and those mindsets back into society, so that we can move the biases in a different direction, so we can bias towards empowering rather than exploiting.

Thanks, Lieke. “Preach, yo.” Appreciate that. It feels really good.

Anastasia, what kind of data can they possibly use to predict a criminal?

Yeah, and talking about that, Alexei, yes, Philip K. Dick stories.

From what I’ve understood, what they are doing is looking at recidivism, excuse me, rates of recidivism: when someone is convicted of a crime and sentenced to some kind of jail time or punishment, the algorithms use data on how often and how likely those people are to commit crimes again, and try to extrapolate those data to new people convicted of a crime.

And yeah, as you can kind of understand, this is a bit of strange logic to me. I think this isn’t something that we should be allowing computers to decide for us. Nothing against computers, but as we can see, the system is flawed, and I feel like the price is too high.

Jonas: “not thinking about the root of a problem.” That’s true.

Of course, thinking about the roots of problems and trying to go back, that also introduces some kind of bias, which we need to mitigate. It can be messy to actually consider the messy condition of being a human.

And I think this is one of the reasons why machine learning algorithms are so attractive to people: they quantify lots of stuff, and quantified data, numbers, and statistics look really good on an Excel spreadsheet, which in turn looks really good on a quarterly report.

So I think this is kind of the appeal because it is messy to deal with human beings. It is messy to try to think about the underlying root causes of what societal patterns are driving people to desperation and to crime. Those are harder questions, I think. And that’s, I think, why we try to use these algorithms.

Is it a good thing? I don’t think so, no. Yep, the “Black Mirror” series, for sure, Anastasia. We’re getting closer and closer, and it’s a bit scary how many of those episodes are kind of coming true now. Erlina, I agree. It’s data collected by humans. You’re right, avoiding bias is all we can do.

Avoiding and minimizing bias. We can never eliminate bias 100%. I think, like I said a moment ago, we just need to understand what kind of bias we can have or what kind of bias can appear in some kind of dataset or some kind of research project and take steps to mitigate that. Good point.

Thank you. Esther, yeah. Better solutions, keep that one in mind because we’re gonna circle back to it in a bit. Thanks, Esther.

All right, thank you for that. Again, congratulations, we got through section two of this presentation and session. We are one step further, and I promise the next part is gonna be a lot more fun and a lot more positive, ’cause we’re gonna talk about how we can actually make decisions that empower people more than exploit them.

Yeah, here’s the process that I use. And like I said, it’s still in development, so at the end, if you have feedback or any thoughts on how we can make this process better, I’m eager to hear it. In the previous section, we saw what can happen when choices are made mostly on this side of the spectrum. But the art of being human, that messy work that we need to do and are responsible for, is, at least in my opinion, to stay more on the empower side of the spectrum.

As I mentioned before, it helps everyone, because, ultimately, we are also consumers of technology, of products and services. So if we put empowering products and services out into the world, that encourages other people to do the same, and then we get to benefit from that as well, as fellow members of human society.

So as I mentioned before, this is the ethical design checklist. I wish I could take credit for it, but I can’t. I worked together with some really amazing people on this, and there is still an open Google doc, I think it should still work, with ideas and thoughts on how to improve and shape this checklist. And the reason why we now have a checklist with five simple steps is that the folks I was working with and I noticed that many of the frameworks out there are extremely complex and super specific; they have, like, 100 different steps or many, many things to take account of. Which is great; they’re very exhaustive.

For me, that doesn’t work in a practical setting when I am trying to hit a design deadline to create a prototype by tomorrow morning because we have a client presentation, blah, blah, blah. I wanted to simplify things in a way that would be actually helpful. And yeah, on the other side of the spectrum, there are lots of frameworks out there that are maybe too general or not specific enough.

So our feeling is that this five-step checklist hopefully strikes the balance: specific enough to be used in a practical way, but still simple enough that it doesn’t complicate things too much, take up too much time, or become hard to understand.

You know, I understand that I’m not the smartest person in the world, so for me, simple things work best. I know who I am. So five steps, I mentioned.

Let’s just go through them really quickly as an overview, and then I’ll dive into each of the steps a bit more in depth. The way that I make sure that my decisions in design are more empowering than exploitative is by first understanding whom my decisions affect.

This means looking beyond just the actual user of the thing that I’m creating, to the stakeholders around them and around the environment and context. And we can go into that in a moment. But once I know who, then it’s also interesting to look at what each group of stakeholders has in terms of cost and benefit.

And cost and benefit also means risk. Risk can be borne by people who get no benefit from the system itself, and these people are the ones most often forgotten in regular design processes.

In capitalism, we often externalize these things. Let’s say a company wants to chop down a rainforest in order to earn money with the wood or to plant other crops there. Who gets forgotten are the ecosystems there, the animals, and maybe the indigenous people that live in the rainforest.

They are affected by those decisions but don’t have any benefit from that system. So it’s important to understand that. Once I understand who’s there and how they relate to this project, what their costs, benefits, and risks are, I can take that information into account in order to design to empower, not exploit.

In that sense, once I inform myself, once I learn the context of the problem space in which I am working and designing, I can make better and more informed decisions about how to empower and not exploit. This is not 100% foolproof. I make mistakes because I’m a human.

We all make mistakes because we are human, and that’s okay. As long as we try our best, similar to the way that we can only minimize and never eliminate bias, we can minimize any exploitation that happens, any harm that we might do. But obviously, we still might have some kind of assumptions that aren’t correct or make decisions that have a harmful impact.

And that’s why the next step is so important. Testing our assumptions, do the research. We’ll talk about this more in depth in a moment, but, basically, this is all about, first of all, making sure that the things we think we know are actually true.

And once we put a product or service or any kind of thing out into the wild, it also means looking at the effect that thing has. We don’t just put it out there and forget about it. It’s good for us to actually see what it’s doing, how people are interacting with it, and whether it is helping or harming. And finally, as something not necessarily related to the actual design process itself, but more as a follow-up: when we learn stuff, all the information we’ve gathered, all the decisions we’ve made, all the wonderful things we’ve done, it’s nice to share that with other people. Because, hey, a rising tide lifts all boats, and if we can help each other, then we’re also helping ourselves.
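To summarize that overview before we dive into each step, here are the five steps as a simple data structure. The wording is my shorthand for this walkthrough, not the checklist’s official text.

```ts
// The five steps of the ethical design checklist, paraphrased.
const ethicalDesignChecklist = [
  "See whom your decisions affect (map all stakeholders, not just users)",
  "Understand each group's costs, benefits, and risks",
  "Design to empower, not exploit",
  "Test your assumptions: do the research, before and after launch",
  "Share what you learn with others",
] as const;
```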

So let’s get concrete. When we’re thinking about seeing whom our decisions affect, my favorite method for this is called a stakeholder onion diagram. As you can see, it’s a bunch of concentric circles, and each different kind of stakeholder goes in a specific layer, a specific circle.

And we put the product, let’s say, in the middle. I have this image from FigJam, and there’s a FigJam template in the link here. The link actually works if you download the presentation later on as a PDF.

But if you go to Figma or if you’re in Figma itself, you can just search for stakeholder onion, and here’s your FigJam template for that. Briefly, how it works: the product or service starts in the middle. So the thing that you’re putting out into the world, the thing that’s mediating this relationship between you and all the other stakeholders, goes in the middle.

And then for each concentric circle, we move outwards in spheres of influence. So here’s a nice example. Bonk!

Yeah, not nice for me to laugh, but I think it’s kind of funny. So in this case, the VR game is the product in the center of the circle, and the player is the first one with the most direct influence and direct interaction with the VR game itself.

But the employee, so the person sitting at the desk next to that player who’s playing the VR game is also a stakeholder because they’re in the area, and they’re being affected by what’s happening in the VR game and the interaction between the player and the VR game.

Similarly, on another level removed, we can have potentially the company that both of these people are working at. It’s an extreme example, right? But, you know, maybe if the player hits the other person so hard that they feel sick and have to go home, then that’s lost productivity for that company, and there’s work that just doesn’t get done, for example, you know?

I’m not saying that every VR game has to take into account all these different things, but at least mapping them helps us to decide, with intention and consciously, whether we will address the concerns or not.
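For those who like to see this as data, here’s a hypothetical sketch of the VR-game onion. The layer numbers and notes are illustrative only; a real map would come out of the FigJam exercise above.

```ts
// Hypothetical data sketch of the stakeholder onion for the VR example.
// Layer 1 is closest to the product; higher layers are further removed.
interface OnionEntry {
  stakeholder: string;
  layer: number;
  note?: string;
}

const vrGameOnion: OnionEntry[] = [
  { stakeholder: "Player", layer: 1, note: "direct interaction with the game" },
  { stakeholder: "Employee at the next desk", layer: 2, note: "in the play area, can get hit" },
  { stakeholder: "Employer of both", layer: 3, note: "lost productivity if someone gets hurt" },
];
```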

So let’s make this a bit more specific. Once we know the stakeholders, right, going back: we’ve mapped, let’s say, a bunch of different kinds of stakeholders, not only three; it can be many, many more. I’ve made stakeholder onion maps like this for healthcare scenarios at Philips, where there were 30 or 40 different stakeholder groups, including insurance companies, government organizations like a ministry of health, but also the patient, their family, all that kind of stuff.

Really exciting to do. But it’s also important to understand the relationship that each of those stakeholder groups has with the product or service that you’re creating. And the four relationships, or let’s say categories of relationships, that I like to use as the broad framework are these. Obviously, we have our users.

These are the people who have direct contact with the thing; they’re the ones using it. We have customers, who actually buy the thing. And most of the time, especially with consumer products, these are the same person, but they can also be different people. For example, maybe you have been given some software by the university.

There was a person at HTW Berlin who made a decision and said, “Okay, we want to give our students a license for this particular product.” So they buy the license, and you get the license. You’re the user; they’re the customer. At the same time, a user and a customer can be the same person, just at different stages of their journey, if that makes sense.

Like, people start in the shop as a customer looking at features or looking at different things. We have one mindset when we are gonna buy something, and once we’ve bought it, and we’re actually using it, our mindset shifts to the user. It’s just a different way of thinking, different set of needs, different set of priorities in those different, let’s say, masks or mindsets. But other groups could be active stakeholders.

For example, people who benefit from the thing and can actually influence it. These might be, let’s say, the business that sells the product, the product owner or product manager, or people who are brought in to co-create a specific product.

These are active stakeholders in the sense that they are involved in the process and have influence on the thing that’s being created. And finally, we have passive stakeholders. Passive stakeholders are the ones who are affected by the thing, or by the interaction with the thing, but don’t necessarily have any influence over it. These are the folks who get forgotten the most, unfortunately. And as we can see, that can also be a little painful at times.
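Extending the sketch from before, the four relationship categories might be captured as a type; the comments paraphrase the definitions above.

```ts
// The four categories of relationship to the product, paraphrased.
type Relationship =
  | "user"                 // has direct contact with the thing
  | "customer"             // buys the thing (often, but not always, the same person)
  | "active stakeholder"   // benefits from the thing and can influence it
  | "passive stakeholder"; // affected by the thing, but has no influence over it

// In the VR example, the employee at the next desk is affected but has
// no say in the game's design: a passive stakeholder.
const nextDeskEmployee: Relationship = "passive stakeholder";
```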

So yeah, I think it’s time for us to have a quick discussion here about finding a fair balance. All right, so we talked about this VR game a moment ago. We have the player, we have the employee, we have the company. But I can apply this exact same framework to the Mercedes-Benz example.

This is why I asked you to keep that in mind, because I’d like to open the floor a bit for a discussion around how you would want to balance the needs and risks of all of these different stakeholder groups. If you wanna sell a self-driving car, you have, let’s say, the driver or passengers who are actually inside the car. You have pedestrians who might be walking around the car, and you might have the pedestrians’ families, or, let’s say, local government.

You have a police force, and maybe any other kind of people who might be in charge of making sure that the streets are safe. How would you go about this? What are your thoughts around designing to empower and not to exploit in this context? And Esther, since you asked a really nice question here, please feel free to start this conversation. Well, I’ll go ahead and actually dive a little more specifically into Esther’s question. In the chat earlier, you said that you don’t think the Mercedes case is as bad as it sounds, though, as you say, you don’t know the whole story. Fair enough.

They do have to choose someone, as you point out, for the worst-case scenario. And I agree with you, it is counterintuitive: who is gonna buy a car that could potentially kill them, right? Some people would; I certainly would. Not that I’d want to buy a self-driving car, but that’s a different story. It’s more about me being maybe a control freak.

But I think there are people who would at least want to have the choice. And I also understand there could be better solutions. And like you say, what should someone answer if they get asked this difficult question?

If I buy this self-driving car, what’s the thing here? What are my risks? And they say, you know, the thing is gonna kill you if there’s an accident. Yeah, it depends. Some people feel okay about that, and some people don’t.

I feel okay, actually, about that. I would prefer that the car kill me rather than some innocent bystander outside. And Jonas, as you point out, yeah, every car could possibly kill its owner. But we also see enough examples of people drunk driving, for example, who put other people in danger. And it seems to me that the drunk driver always survives.

Not always, obviously, but in stories like that, what I’ve seen is that the person causing the accident usually survives, even when they’ve hurt or even killed someone else, unfortunately.

And if you spend enough money on the self-driving car, Artis, yeah, that’s a good point. On the one hand, the company here would be a stakeholder in this onion diagram as well, because the company wants to earn money, not necessarily only for the profit motive; they could also want to earn money so that they can continue making good cars for people.

You know, if their mission is to make streets safer, I understand that. There is a lot of research that says that if every car on the road were a computer-driven, self-driving car, we could have much more efficient traffic. We could also have safer traffic. The cars wouldn’t get tired, or drunk, or make mistakes in that sense.

Or the mistakes would be, you know, fewer. So I understand. However, in this case, it’s a bit different. Alexei, I think it’s a nice point that you bring up. You say that you think the fundamental question is whether technology should enhance human possibilities or replace our presence and agency in certain areas. I think you’re right, and I wonder whether, between those two things, it’s, again, another balance.

For example, let’s take something we’re all familiar with: the calculator. It has replaced a lot of the in-our-head mathematics that we used to do. It replaced the abacus; it replaced having to write things down on a piece of paper and work through complex equations on a big chalkboard. You can just type a complex equation into a calculator, and it’ll give you an answer. In that sense, it has replaced certain things, and it means that there are skills we haven’t developed, because we don’t need to; this piece of technology is there.

Another example is GPS navigation. We don’t need to learn how to read a map anymore because we have GPS navigation. But there are certain people who still want to learn how to read a map. I’m one of them. I was a boy scout as a kid, so I learned that stuff before there were GPS navigation things in everyone’s pocket. But to be fair, I also use GPS, especially if I’m driving in the car or if I need to find someplace when I’m walking in a new city.

You know, I fire up Apple Maps or TomTom or whatever and look around. It’s perfectly okay. So I think we can do both. There are certain areas where it’s fine to replace what people do, but there are areas where the risk is so high that, even if the probability of the risk coming true is very low, the impact can be extreme. And one debate that we’re having now, at the EU level, but I think also at the NATO level, is about autonomous weapons systems.

So we already have, let’s say, drones that are equipped with missiles or machine guns and controlled remotely. And the question is, do we want to have AI controlling these things autonomously?

Do we want to have a machine making a decision whether a robot on the street is going to shoot someone or not? I don’t think we should. And for me, this is one of those questions where there are arguments for and against. In my opinion, I think it’s morally wrong to give the choice of life or death to a computer. So to bring it back to your point, I think it is indeed a question of which philosophy do we apply where? What are we okay with replacing humans with, and where do we want, let’s say, technology or AI to support us in expanding our humanity?

I hope that addresses what you’re talking about, Alexei. Well, Alexander, you say that the self-driving car shouldn’t fuck up. But just look at any news: there are self-driving cars that mess up all the time.

There was recently a Tesla that just started accelerating out of nowhere, couldn’t be stopped, and ran into, I think, a wall. Thankfully, no one got hurt, but there was another one that ran into another car. Some of them just, yeah, technical glitches happen.

This is reality. And while humans are flawed, machines are flawed too because they’re also made by humans. So let’s not forget that the machines we create are, you know, our own creation.

So yeah, those things do happen. If we link self-driving cars, they become efficient and fast technology, and they call it a train. Yes. Yeah, that’s true. We can put them on tracks, and that way it doesn’t have to go anywhere. Just stays in one place. Yeah, yeah, totally. Self-driving train.

Yeah. Marcel, I’m sorry, but you’re right. My answer for most things is it depends, and maybe it’s my academic background as a psychologist, like we are always just frustratingly saying, “Yeah, is it nature? Is it nurture? Well, it depends.”

Sorry about that. I wish I could be more prescriptive, but this is one of those topics that’s super difficult.

Alexei: “the problem is accountability.” One problem certainly is accountability. And this is one of the reasons why it’s interesting for us to think about these groups in terms of active and passive stakeholders. Because, to go back to what you’re saying about accountability, I feel that if people have the power or the ability to benefit from some kind of decision, then they should also take some kind of risk; there should be some kind of stakes involved, right?

So for example, from 2008 to 2010, once the laws had become a bit less restrictive, we saw a lot of banking and investment professionals taking super risky positions and selling extremely risky investments that used to be illegal. But, you know, at some point, they weren’t anymore. And their companies went bankrupt, but the government bailed them out.

I’m not sure if you remember this, but between 2008 and 2010, maybe 2012, after the financial meltdown of 2008, the economy was quite bad. It was hard to find a job, stuff was expensive, inflation went up, and people were desperate and poor.

The folks who caused it got bailed out by the government, and a lot of times, those people ended up getting bonuses after the government paid our tax money to their companies to bail them out. So that, for me, is an example of people benefiting immensely from risks for which they themselves had no stakes at all.

They knew that they were gonna get bailed out, so they knew that anything that they would do wouldn’t matter, if that makes sense, so I agree with your point. Accountability is extremely important.

One could argue that a machine could possibly decide more logically than an emotional human when it comes to life and death. Marcel, that’s a good point, an interesting one. I would counter with the question: when it comes to life or death, is that something we want only logic applied to, or do we want some kind of compassion involved? This is another reason why these topics of ethics and ethical design are not black and white.

They’re very much gray, in-between, and debatable. I would not want a computer or pure logic deciding whether I get to live or die. I would like some kind of reason, some flexibility, yes, logic, but also compassion and the ability to make my case, to sway or persuade a person’s opinion. In my opinion, a computer is not equipped to make decisions of life and death.

But if you feel differently, that’s perfectly okay. And there are a lot of people, I think, that agree with you. Alexei, being freed from certain decisions detaches you from some aspects of reality. I totally agree, yes.

The weapon example is a good one. It detaches from reality, but it also detaches from accountability and from guilt. There’s a thing called cognitive dissonance: when we behave in a way that, at least on a subconscious level, doesn’t fit with our self-image. If I feel like I’m a good person, but I do something that I feel a bad person would do, then I feel cognitive dissonance.

It’s stress that my brain gives me, saying, “Hey, these things don’t match. Something’s up.” And I feel, a lot of times, that when people make decisions like that to exploit other people, being able to shift the blame to something else, like a machine learning algorithm, helps them take away this cognitive dissonance.

It helps them take away that stress, because then they can say, “Well, it wasn’t my decision, it was the computer that did it. It was the algorithm that did it. I didn’t sentence this Black person to a longer sentence than they deserved. No, it was the computer. I’m not the one who decided to drop a bomb on this wedding in Afghanistan. No, it was a computer. I had nothing to do with it,” you know?

This is, I think, one of the problems: why exploitation is, A, so profitable, and, B, why people go to such lengths to hide it or make it look like it’s just a computer decision.

So thank you for that, Alexei. Nell, I think, in those cases, it would be important that a driver has the possibility to take control. For sure, for sure. I agree. I would never have a self-driving car that I wouldn’t be able to take control of somehow. And not only deciding in advance to take over in such a situation; it also depends on the driver staying aware of the environment, even when they’re not actively driving.

This is a big issue. We have a lot of Tesla drivers now that are sitting in the backseat playing video games and stuff while the Tesla is on the highway. And officially, they’re not supposed to do that. The policy of Tesla is like, “Okay, it’s called Autopilot, but it’s not completely self-driving yet. So please stay in the driver’s seat, hang out, be aware, and you can take over when you need to,” but people aren’t doing it. So yeah, that’s a problem.

I agree. “The Big Short,” Jonas, “The Big Short” is a great, great, great movie. Thank you for mentioning it. It’s super cool, super funny. Also explains everything in really nice detail in a way that’s understandable, and it’s fun to watch. So thank you for mentioning that. That’s really, really cool.

Marcel, that would be a discussion on its own. Oh, yeah. That’s interesting: you’d prefer a computer to decide when the second option would be a human having to decide whether a small group with a family member, or a big group without one, should survive. Pretty abstract case, sure. But, yes, this is basically the trolley problem.

If you are familiar at all with any kind of ethical studies or ethics in philosophy, or if you watched “The Good Place,” you might be familiar with the trolley problem. It’s basically this question: a trolley is going down a track toward five people who are, for some reason, tied to the tracks and can’t move. There’s another track with only one person tied down, and there’s a lever.

You’re standing at that lever, and the trolley is heading toward the five people. The question is: do you pull the lever to switch the trolley to the track with only one person, or not? There’s no right answer to this. People say that killing one person is better than killing five; I would tend to agree. Of course, it’s a thought experiment, so it doesn’t allow for thinking of other ways to stop the trolley without killing anyone. But this is, in essence, what Mercedes is doing. What all these self-driving car makers have to do is make this decision.

Do they, you know, flip the switch and kill one person to save five? Or do they kill five to save one? Yeah, Marcel, it’s an interesting one. When we add different variables to the equation, you know, different ages, is it five old people and one baby? You know, that kind of stuff. Or yeah, there are lots of different aspects to this. Is everyone happy with this discussion? I’m really happy that we’ve gotten to have this short discussion. Is there anything else that anyone would like to bring up or ask before we move on? We are kind of almost done with the presentation, and then we can go back to an open discussion, but if there’s anything else on this particular topic, speak now.

Okay, perfect. Let’s move on. So up to now, we’ve talked about seeing whom our decisions affect: making that stakeholder map and assigning those stakeholders to a specific category, which tells us something about their relationship with the tool or thing that we’re creating, regarding their costs, their benefits, and their risks around it.
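As a rough illustration, such a stakeholder map could be captured in a small data structure like the sketch below. The four categories come from this lecture; every other name, and the example entry, is hypothetical.

```python
# A minimal sketch of a stakeholder map. The four categories are the ones
# named in this lecture; the field names and example are illustrative only.
from dataclasses import dataclass, field
from enum import Enum, auto

class Category(Enum):
    USER = auto()                 # interacts with the product directly
    CUSTOMER = auto()             # pays for the product
    ACTIVE_STAKEHOLDER = auto()   # has a say in decisions about it
    PASSIVE_STAKEHOLDER = auto()  # affected by it without any say

@dataclass
class Stakeholder:
    name: str
    category: Category
    costs: list[str] = field(default_factory=list)
    benefits: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

# Hypothetical entry for an imagined delivery app:
courier = Stakeholder(
    name="Bicycle courier",
    category=Category.PASSIVE_STAKEHOLDER,
    costs=["physical strain"],
    benefits=["flexible income"],
    risks=["algorithmic scheduling pressure"],
)
```

Writing the map down in a structured form like this makes it easy to spot, for example, a stakeholder with an empty benefits list but a long risks list.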

We also talked about how we can design to empower, not exploit, all these different people, and how to balance the very difficult decisions that can come up. But now I’d like to talk about how we can test our assumptions. When we make decisions, we make them on the basis of certain assumptions. And if you’re familiar with the Lean start-up methodology or even Lean UX, we have this learn, build, measure loop.

And I really like it because it’s a super simple and super practical way to think about the process that we go through a hundred times. It’s a fractal thing: every design decision we make follows a learn, build, measure loop. Every product that we make follows a learn, build, measure loop. And even every business that we build follows the same loop.
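As a toy illustration of that fractal loop, here is a minimal, runnable sketch. All the names and the stub behavior are hypothetical placeholders, not a real methodology API.

```python
# A toy sketch of the learn, build, measure loop as described here.
def learn(assumptions):
    """Learn step: interviews, observation, desk research on the problem space."""
    return {"insight": f"what we found out about: {assumptions}"}

def build(insights):
    """Build step: turn what we learned into a prototype."""
    return {"prototype": insights["insight"]}

def measure(prototype):
    """Measure step: usability tests, analytics, customer-service feedback."""
    return {"feedback": f"how people reacted to: {prototype['prototype']}"}

# The loop is fractal: it applies per decision, per product, per business.
assumptions = "people want feature X"
for _ in range(3):
    insights = learn(assumptions)
    prototype = build(insights)
    result = measure(prototype)
    assumptions = result["feedback"]  # new learning feeds the next cycle
```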

So let’s apply it now to making decisions when working on, let’s say, a tech product, because that’s, I think, the context that we are in. The first two points, seeing whom our decisions affect and what their costs, benefits, and risks are, I consider part of the learn bit of the process, in the sense that that’s where we can do some interviews, do some observation studies, get out of the office, and talk to people.

Desk research also helps: looking for documentation, looking for academic papers. Anything that can tell you more about the problem space before you start trying to address the problem is interesting. So basically, this test-your-assumptions step is all about doing your research, you know? Open yourself up, listen to the world, and see what kind of signals and information you can receive. After you’ve understood the problem and you start building your solution, it can be helpful to do some prototyping, right?

Create some kind of prototype that represents or simulates a specific part of your tool, or even the whole tool itself. Then you can use it to test the effect on people who might be using it or people who might be affected by it: usability studies, any kind of UX testing, or just seeing how people react. Prototypes are wonderful for this, whatever your product is, but nothing beats co-creation. Co-creation, for me, is probably my favorite method for making sure that we integrate the viewpoints of different stakeholder groups into our own process.

And thinking about those four categories, right, user, customer, active stakeholder, and passive stakeholder, co-creation is our opportunity to turn passive stakeholders into actually active stakeholders. Because if I can involve someone who might be affected by my thing in the actual creation process of the thing itself, then they can share their concerns and perspectives, and we can address those concerns and perspectives together in the creative process.

I really, really love co-creation. So once we’ve created our thing and put it out into the world, there are also ways for us to measure and understand the impact it has on the world around it. For example, analytics and telemetry. These are just two different ways of saying that we can quantify how people are using the thing. Analytics is usually the term used for websites; telemetry is the term we use for the same thing in a mobile app.

You know, where are people clicking, what journey paths are they taking, how many people use this feature and not that one? That kind of stuff.
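As a rough sketch of what that quantification can look like under the hood, here is a minimal server-side event recorder. The `record` function, the file name, and the example event are all hypothetical; a real product would typically send these events to an analytics service instead.

```python
# A minimal, hypothetical sketch of recording usage events.
import json
import time

EVENT_LOG = "events.jsonl"  # hypothetical local event store

def record(user_id: str, event: str, properties: dict) -> None:
    """Append one usage event: who did what, when, with which details."""
    entry = {
        "user_id": user_id,
        "event": event,
        "properties": properties,
        "timestamp": time.time(),
    }
    with open(EVENT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# e.g., noting which feature was used and where in the journey:
record("user-123", "feature_used", {"feature": "export_pdf", "screen": "report"})
```

The important part is the shape of the data: who did what, when, and where in the journey. The journey paths and feature counts mentioned above fall out of aggregating events like these.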

Another thing is customer service. I had the wonderful privilege of working, a while back, with a Dutch business pension provider, a retirement service: they invest and manage people’s retirement funds and pay them out. That’s not really important, though. The important thing is that I had direct access to the one customer service representative for the online tool we were working on, and it was fantastic.

She and I had coffee probably, like, once a week, and I loved to ask her questions about, like, all the things people were calling in about, stuff that people didn’t understand. Also, whenever we would launch a new feature, I would ask her about, like, you know, “Have you heard anything about this particular thing? Have people been saying stuff about this? Have you been encountering any problems?”

And it was wonderful, not only because she gave me extremely useful, important, and meaningful, let’s say, intelligence about how people were using this stuff that I was creating and putting out into the world, but also because she was really nice, and we got to have a cool, nice working relationship with each other as well. And human-human connection is what life is really all about, right?

And on that note, another thing we can do is rolling research, another opportunity for human-human connection. Rolling research is just a framework for continually keeping up the observation studies, interviews, and, let’s say, usability studies that we do in the earlier stages, the learn and build stages. And this is especially important when we have a software product that’s continually updated, especially if you are working in agile Scrum, where you have new releases every two weeks, for example.

It’s really nice to be able to see the effects of the actual changes in the real world. And this is how we can test our assumptions and make sure that we do our research. Finally, we’ll talk about sharing what we’ve learned. Let’s go back to Adam Grant: givers like to help everybody. And one way that we can help everybody is to share the things that we’ve learned as we go along.

Ways that we can do that are as simple as writing a blog, right? Posting articles on Medium. There are lots of free platforms where you can write: Twitter threads, newsletters, all that kind of stuff. These are all ways to share your knowledge and expertise through your writing.

Another way is to give talks and webinars, similar to what we’re doing now. I don’t wanna get too meta on this, but, you know, that’s one way to do it as well. Hosting a podcast is also a really nice way: talking to people who know things that you might not know gives you human-human connection, but also lets you learn from super interesting and inspiring people.

And another way to do both of those things is to coach other people. So coach, let’s say, people who are less experienced than you, people who are in a more vulnerable position than you, or people who are going through something that you’ve been through before; you can help them out. Coaching is a wonderful, wonderful way to give back, help people out, and share the stuff that you’ve learned.

All of these methods can be used externally, sharing with the world, but they can also be used internally. If you’re working at a company, for example, and part of what you want to achieve is to shift the company culture, or you want to help people in the company become more mature in ethical design or research or anything like this, all of these methods are also ways to share that knowledge internally within the company.

Most companies have some kind of intranet where you can share writings or blogs. We could take the initiative to organize some kind of weekly or monthly webinar or talk series where a colleague gives a talk about something they’re passionate about or understand a lot about. I’ve seen internal company podcasts that were really, really cool, where someone interviews people in different organizational units to get different perspectives on different topics.

And obviously, coaching doesn’t have to be super formal. It can also be very informal: coaching someone who crossed your path or who you notice could be helped by your expertise. This is extremely valuable.

Ah, Marcel, you mentioned an internal radio show run by two colleagues that was super great. Yeah, absolutely, that’s a perfect example of what I’m talking about, because not only does it foster human-human connection, again, that’s what I’m all about, but it’s also a way for people to get to know their colleagues better and to get to know other parts of the organization better. And, in my opinion, it probably contributes to a nice culture within the organization as well. So that was it. We got through the hard part. Now we can talk.

Before we go straight into the discussion, I will close off the presentation by circling back to the very beginning and reminding us that everything we design creates a relationship. Our choices are what define that relationship because our relationship is mediated by that thing that we designed.

So please, let’s choose to empower people and not exploit them, thanks. You can find the slides at the link over here. It’s d3e.co/htw2023. You can also scan the QR code with your phone if you want. Yeah, should work out. Thanks.

Thank you for clapping as well, Marcel. “Sharing information, giving people talking topic, bringing people together,” absolutely, yes. Andreas, is there anything you’d like to add or ask or anything?

[Andreas] No questions from my side, except I want to thank you very, very much for this super inspiring guest lecture, and I guess this was the guest lecture with the most chat questions, yeah.

That’s amazing. I have to count them when I watch the recording. It’s really amazing.

So I think this is a really important and very relevant topic for all of us. The number of comments and questions here shows that.

Thank you very much, Brian, for this.