
JBI Dialogues
5 Lines of Inquiry with...an AI ETHICIST - feat Prof Katrina Bramstedt
What is AI ethics and what is it not? What is the difference between ‘responsible AI’ and ‘ethical AI’? What’s in the ethics ‘toolbox’ on the front-lines at a global biotech company? How many ‘r’s are there in the word “strawberry”?
In this episode of JBI Dialogues, we chat with Adjunct Professor Katrina Bramstedt - clinical ethicist and associate editor of the Journal of Bioethical Inquiry, with experience in an industry setting - about the fast-evolving world of AI in healthcare.
We talk about some recent developments in AI in healthcare and research, controversies with AI-powered therapy apps, and AI-generated text.
We ask Katrina about their forecast for what's next in AI ethics scholarship. And then we ask the so-called generative AI tool, Claude.ai, the same question! The results were interesting…
SPOILER: AI tools are not good at predicting the future (yet?)
The Instagram post mentioned.
----
Find Prof Katrina Bramstedt on LinkedIn.
Adjunct Professor Katrina Bramstedt (QUT) is a clinical ethicist and associate editor of the Journal of Bioethical Inquiry. Katrina was also recently recognised with the 2025 Springer Nature Award for Author Service - congrats!
---
Keen for the JBI Dialogues to ask you 5 Lines of Inquiry about your work? You can get in touch by sending us a message or clicking 'send text' above.
Music by Lidérc via Pixabay
Hosted, produced and edited by Sara Attinger
Transcribed by Microsoft Word automatic transcription.
This transcript has not been edited and may contain errors.
Speaker 1
This podcast is being recorded on the unceded lands of the peoples of the Kulin nations. We wish to acknowledge them as traditional owners and we pay our respects to their elders, past and present.
Welcome. You're listening to the JBI Dialogues, the podcast of the Journal of Bioethical Inquiry. I'm Sara Attinger, digital content editor at the journal. This episode of the JBI dialogues. We're doing 5 inquiries and this episode we are joined by Professor Katrina Bramstedt, who is a clinical ethicist and adjunct professor with extensive experience in inpatient and outpatient clinical settings as well as in industry. And Katrina is also an associate editor of the Journal of Bioethical Inquiry, and today we are going to be talking about all things AI and bioethics welcome, Katrina.
Speaker 2
Yeah, thanks for having me. This is great. I wish I was right there with you in Australia today, but I'm happy to be with you online.
Speaker 1
Thanks so much for joining us, for lending us your expertise, and in advance for answering our five inquiries. I will also say, for the listeners, that in preparation for this episode we thought it could be interesting to ask one of these five questions of a so-called AI. So we've chosen to ask Claude.ai to answer one of the five questions, and we're going to compare answers between our expert here and what a large language model has to say, but we'll reveal that later on in the episode. I am really looking forward to this because I personally haven't really done any thinking work on AI and bioethics myself, which is sort of amazing really, because it's so inescapable. But I think almost all of us, at the very least if you're working in academia or education, have come to grips with AI in some form. So I'm coming at this from a very, let's say, novice perspective. And I feel like, Katrina, you have a lot of experience to draw from, and also that combination of experience, practise and that sort of intellectual engagement with it as well. So I'm very excited to see where this conversation goes.
Speaker 2
Yeah, me too.
Speaker 1
So I'll just jump into our first inquiry, or question, about you: how have you come to find yourself working on and thinking about AI ethics?
Speaker 2
It is a great question, and actually I found myself in the deep end of AI ethics. I chair a bioethics department for a very large international biotech company, and we're very fortunate to have our own bioethics department embedded in the company. We do a lot of work within our company on AI. We make a variety of healthcare products and solutions, and so we're developing AI solutions for our patients. And we need to be sure that they're appropriate. So we put in ethics by design, as we say, right up front, and get involved very early on with the ethical development of these new tools and solutions. So my department's involved in that.
Speaker 1
And, I mean, ethics is just a very interdisciplinary space. What are your sort of disciplinary origins?
Speaker 2
Well, you know, we actually do rely on the four basic principles of medical ethics that everybody uses, so we've got those in our toolbox, you could say. We've also got quite a lot of other ethical principles that we draw on. One of the most important ones is transparency, but we've got explainability, and we've got topics related to privacy and security. So we're very concerned about that, and some of these things also interweave into laws and regulations. But from the ethics lens, which is where we keep it, we use our little toolbox of ethical principles. And at the company I work for, we developed a set of principles that we want all of our employees to be trained to, and also to abide by, when they're developing and commercialising new products. So yeah, we've actually got quite a toolbox available to us, but the very basics we start out with are the medical ethics ones. So beneficence, trying to maximise the benefit; non-maleficence, trying to minimise harm; justice, being fair, making sure that, you know, we try to avoid bias, minimise it if it's there, but avoid it. We don't want bias in our products. So that's the starting point, but we've got many more principles to choose from.
Speaker 1
Yeah, yeah, the four principles, the bread and butter of bioethics. And then, as you said, there are a lot more tools in the toolbox
Speaker 2
Bread and butter, yeah.
Speaker 1
That we can all draw from. So.
Speaker
Yeah.
Speaker 1
My second question is, I suppose, if you could give us a bit of a crash course on the topic: what is AI ethics, and what is AI ethics not?
Speaker 2
Another really good question. If you go to the Internet, or especially LinkedIn, all these people pop up and they call themselves AI ethicists. They're not, so beware. And the term that they like to use is 'responsible AI'. That's a really hot phrase out there in the world, responsible AI. But when you dig into it, what is responsible AI? It's a very narrow segment of ethical AI. So yes, being responsible means a few certain things, a few certain aspects of AI. But we want to be bigger and broader than that; we want to be ethical, because responsibility is just a subset of ethics. There are many more things to consider. So when I do training for staff, and when I write papers or whatever, I say the bar should be much higher: it should be ethical AI. Because ethical AI also looks at the ethical use of an AI tool or solution, not just whether it is safe and secure and not going to breach, you know, data protection. That's very, very narrow. You want to set the bar much higher and be ethical and have ethical AI. I did an Instagram video not too long ago, and you can actually catch it out there, explaining this difference between ethical AI and responsible AI. And I think people are finding it interesting. So yeah, aim high.
Speaker 1
Aim high, aim high. Yeah, I'd be interested to see that video, and maybe I can link it in the show notes for anyone who is interested. Just a bit of a follow-up question to that: what are some of the common misconceptions? You've mentioned the distinction between responsible and ethical, and you see ethical as a bit of a higher bar, a more ambitious goal. What are the common misconceptions about AI ethics, particularly in the health context?
Speaker 2
Ah, that's a great question. I think, if you look at the surveys that have been done, there's some great data out there on what patients and healthcare providers are thinking about AI. Compare that to what you read in the media. The media really touts AI as, oh my gosh, fantastic, life-changing, it's going to just upend healthcare. But when you actually look at the research, the data coming out from people talking to and interviewing patients, what do you think about AI, how do you think it's going to change your healthcare, it's 'I'm not really sure, I'm not really sure I even want to use it, you know, I don't really trust it.' And from the healthcare provider side, 'yeah, I'm not sure it's really going to be such a game changer.' So there's still quite a lot of speculation out there in the field. If you're looking at it from the other side of the coin, there's tonnes of hype and tonnes of 'oh yeah, this is just going to be amazing.' But maybe that doesn't quite line up yet with how people are feeling out in the lay world. So we've got to bring the two together, and in bringing the two together, I think the key issue is trust. People need to feel like they can trust AI when it's working in their healthcare tool or solution. But even before that, they have to know it's in there. That is huge. So that relates to the ethical principle of transparency. Patients, healthcare providers, they need to be informed that AI is running in the tool. There's actually a great case out there on the Internet, and I use it in my training. It's called the Koko case, and Koko was an app developed by an organisation as a platform for people with mental health conditions to interact and get some peer support and things like this. The problem was, the people interacting on that app had no clue they were interacting with AI, a chatbot, a computer. They thought they were talking to real people. They were never informed, and when they found out they were not happy about it. It really was negative for the reputation of that organisation. They were trying to do something good, but they didn't have that ethics by design, that forethought put in to think, gosh, maybe we should tell people that they're actually just talking to a computer. They're really not talking to a human about really personal and deep things. And some people get close, you know. So yeah, that's I think kind of where we're at right now. Transparency and trust building are super important, so that we get to a stage where the great things that we do have to offer will be embraced, and there'll be a huge appetite for it, a trusting appetite, by patients and healthcare providers.
Speaker 1
Yeah, a trusting appetite. Interesting. You mentioned reputation when you were talking about that case, and I'm wondering, with all your professional experience, and thinking about the stage we're at with AI, somewhere between hype and people not really knowing about it: do you think that reputation is enough of a threat to developers, or people who are deploying AI, to get them to think about ethics? Or does there need to be something more? Or does it change depending on where you are or what industry you might be in?
Speaker 2
Well, I think, certainly, if you are a company that has, you know, shares, you're on the stock exchange, then your reputation can be really important, because if something goes bad and you get onto the front page of the New York Times, and your reputation goes down the tubes, your stock will also go down the tubes. So companies who have a really high profile because of their financial setup, or whatever it is, their brand name, they definitely always want to protect that. Should that be the driver for doing ethical AI? Gosh, I would hope that the driver for doing ethical AI is just your good ethical corporate values. You know, one of the things that I like about the company that I work for is that we have really strong ethical corporate values. That's our driver. And of course we're concerned about our reputation, as any large company would be. But I like to always say, don't put the cart before the horse; don't reverse the order. Let ethics be your driver, and compliance and all those other things will follow naturally afterwards, once you have ethics as the driver. When you flip it around and you're so concerned with rules, regulations, etcetera, etcetera, ethics becomes an afterthought. That's not what we want. Ethics should be the forethought, and what is driving your decisions about innovation, about commercialisation, about access, all those questions which are really important.
Speaker 1
Especially when you're working, you know, in an area of technology where things are changing all the time and evolving and developing.
Speaker 2
Yes.
Speaker 1
Not everything can be answered by existing, you know, codes of compliance, etcetera.
Speaker 2
Exactly. And that's something that I tell students. I say, would you want a law for every single thing? You wouldn't. That would drive you crazy in life. What a rigid life to have to lead. You know, there are so many situations in life where there isn't a law or a regulation to guide you, so you're really going to need something else. And on the flip side, there's plenty of things that are legal that are not ethical; you shouldn't be doing them, right? So you've got to know that. So put the ethics first.
Speaker 1
And have a sort of way of thinking through problems, when you can't rely on other
Speaker 2
Exactly.
Speaker 1
guidance. So, my sort of third inquiry, which you might have already hinted at, or you might have a different example: I wonder if you could tell us about a case study, example, or recent development in AI in, or relevant to, a health context, to help us think through some of the issues you've been raising.
Speaker 2
Yeah, that Koko case is a great one, the one I talked about, because people need to know if there's AI in their solution. There needs to be some transparency, because we want to avoid situations like Koko. There was another interesting situation, not health related, but at a big university in the United States there was a really tragic thing that happened. And the university sent a letter out to all the students to acknowledge this tragedy and express condolences and things like that. And as it turned out, that was all written by AI. When the students found out, they were furious. They said, couldn't you have taken the time to just sit down and pen a letter from your heart, you yourself? Did you have to have a computer do it?
Speaker 1
Hmm.
Speaker 2
I mean, I guess you could relate that to healthcare too. I know that we're trying to be efficient and fast and this and that, but I'd venture that a lot of the messaging and things from doctors' offices are now being generated by large language models. And are they language appropriate? Are they empathetic? Have people actually screened any of these things that go out to the patient community? That's really important. So yes, we want to be efficient, and people love shortcuts in life, but you have to make sure the shortcut is truly valuable. It might not be, unless you've really checked it out. Same thing with research. There's a great example out there about using large language models to help you with your research, your literature searches and this and that. The article was published by the Hastings Center. And the funniest thing is, when I read this article, one of my own papers was cited. Well, not really, but it was mentioned in the article how my paper came up in this search. But it's not my paper. I've never published a paper on OBGYN and ethics, or gynaecology and ethics, but yet this paper came up and it said it was authored by me. It had this beautiful citation and everything. It's not mine. So you have to be so careful. Yeah, shortcuts can be great, maybe if you want to get from A to B on a walk, but be careful when you're taking shortcuts in research and healthcare.
Speaker 1
What are the things you're going to miss on the journey?
Speaker 2
Yeah, it's not perfect, you guys, and it will make mistakes.
Speaker 1
Mm.
Speaker 2
Yeah, you got to check it. Yeah.
Speaker 1
Just going back to one of the examples you mentioned before, I was just thinking, and I'm not sure if this was part of any of the examples that you spoke about, but I'm thinking particularly about the AI-generated letter that you talked about, with the university.
Speaker
Yeah.
Speaker 1
I'm wondering, if the content, let's say you were reading this letter and there's nothing there to indicate that it was AI, or you could ask people, do you think this was a real person, and people thought it was a natural person. If you could sort of control for content, I was wondering whether people would still, in certain situations, really be upset if it was written by AI. I'm wondering if there are some situations where there is a value in something coming from a human, in some way in the process, beyond the content.
Speaker 2
Yeah, that's an important point, because I think context can be important when we think about the use of AI. If we think about that letter, the university writing that letter should have thought a little bit harder about the context and the situation, and about how this was a very human thing. So to hand that very human thing, because of the context, over to a machine, they really missed the boat. That's where they had such a disconnect with their audience, and their audience is their students. Yes, their students may be paying tuition and all this, that and the other. Forget that. They're supposed to be looking after their students. They have a duty of care, right? There are certain relationships, certain responsibilities within that relationship between the university and the student, and they sort of forgot about that. They forgot about the human part. Medicine is the same way; it's a very human endeavour. That's why we often say medicine is an art. Yes, it's a science, but it's also an art. And we know about so many doctors, for example, who have terrible bedside manner, I mean horrible, right? But they may be geniuses. They may
Speaker 1
Mm-hm.
Speaker 2
know the Krebs cycle like you can't believe, forwards and backwards and with their eyes closed, they can do it. Fantastic, OK. But medicine is more than that, because it's the human, the human experience. It's that context. Somehow we have to bring that to AI. There's a company in the United States that's been developing more of what they call a truly intelligent, compassionate, empathetic AI, and a lot of people are gravitating to that because they do feel that it is less machine. But again, it is still a machine. It's not a human. Can it make ethical decisions? Can it really reason? Yeah, I'm not sure we're there yet.
Speaker 1
Yeah, it is interesting, this sort of drive to try and replicate and reproduce compassion, empathy and complex ethical thinking.
Speaker 2
Right. Yeah.
Speaker 1
So we'll move to our fourth inquiry, sort of adding context for a broader audience of bioethics scholars, people like myself. How is, or how do you think, AI ethics is the same as or different from other areas of bioethics? Are there new issues or concerns, or is it sort of old ground? And I guess, do you think AI is exceptional in any way compared to other technologies that have come through in the last couple of decades?
Speaker 2
Well, I'd say AI, as technology, seems to be moving very fast. But in terms of commonalities between AI ethics and, let's say, bioethics or medical ethics, we're still starting with the four basic principles when we look at an AI tool, at least when we talk about developing that tool and how we should be thinking about it. So we do start with those four basic principles, but then we expand and go beyond that. In our toolbox at the company I work for, we've got ethics principles that we're looking at when we are developing and commercialising an AI tool, way more than four. And we did some benchmarking; we looked at what is going on out there in the field, and other companies don't have so many, they'll have fewer principles that they're thinking about. We're on the top end; we seem to have a lot. But I'd rather have more than less, because it gets us thinking more, you know, when you have more to consider than just those four. For example, one of our principles, when a dilemma comes in, I'm always looking at stewardship. Stewardship to me is a very important principle. But I'm also looking at ethical use; I'm looking at keeping a human in the loop. Is there somebody who can always jump in and be part of the decision-making with the patient and the healthcare provider? That's important, and it's been shown empirically that it's certainly important to patients that there's a human in the loop. Now that, for example, is a principle that you may not see in other segments of bioethics, but you see it in AI ethics. So our toolbox is actually pretty big, pretty heavy. We have a lot to work with. Yeah.
Speaker 1
And I'm wondering what informs these principles, these frameworks? Because you're deploying them in industry as these things are happening and coming at you. I'm wondering where you draw these principles from, and are they reassessed?
Speaker 2
Yeah, that's a great question. We did benchmarking globally to see what had been published already by others in the biotech and healthcare industries: what are they saying? We looked at general academic work in AI ethics: what are they saying are appropriate principles to be used? How about health authorities and groups like the FDA or the TGA: what are they saying about ethical principles for AI and development and regulatory affairs? That brings that into the spectrum. We also looked at organisations like the WHO, for example, and certainly the ISO. And we pulled it all together onto a spreadsheet and looked for threads and themes across values and principles, and that's how we came up with our set, the one we felt was really valuable. We even have sustainability in our toolbox. We consider that because there's quite a lot of energy use with AI, and knowing that, we're thinking about, well, what can we do? What about sustainable energy? So we're using a lot of wind and solar energy, for example, at our company. So we feel pretty good about that.
Speaker 1
I actually read an article recently, and I can't remember the exact number off the top of my head, but I do recall the feeling of being astonished that, you know, every time you ask ChatGPT a question, there's actually a huge energy cost behind answering your inquiry.
Speaker 2
Right. Exactly.
Speaker 1
Yeah, yeah. Because it's all in the digital space; there's no visible exchange of resources for your brain to latch onto.
Speaker
Right.
Speaker 2
Yeah. We don't think of it, but it's happening.
Speaker 1
But it's happening. Alright, so I'll move on to our final line of inquiry. This is the question that I also asked Claude.ai, which I know, at least anecdotally in the Australian context, is an AI or LLM tool that is being used commonly in academia, by academics just for their own work purposes. What do you think is next in AI ethics scholarship, and what are the most critical or pressing issues for scholars to address and think about?
Speaker 2
I'm thinking there are a few, but one that comes to the forefront of my brain is agentic AI. So, AI that supposedly has the ability to reason and to predict. It's not so much relying on the prompt that you give it and the data that is in the system it trained on; it is calculating like a brain and trying to reason and predict. That one, I think, we need to put our ethical lens to.
Speaker 1
And what did you call that? Sorry.
Speaker 2
Oh agentic AI.
Speaker 1
Agentic AI. Yeah, very interesting. It's very interesting.
Speaker 2
Now I'm very curious about what your LLM said, what its answer was.
Speaker 1
Well, I was actually thinking it's quite ironic that the answer you gave was about AI models that can reason and predict, when we've asked a question about, you know, predicting what's the future for AI and bioethics. And, perhaps unsurprisingly, it said things that are fairly obvious concerns. It named, I think, six key areas of concern: data privacy and data protection; algorithmic bias and health disparities; transparency and explainability, and the impacts on things like informed consent; job displacement for healthcare professionals; autonomous decision-making and the role of clinical judgement, so autonomy got a bit of a mention there; and also access and resource allocation.
Speaker 2
Yeah, I have to say, it's interesting that it gave those replies, because there's already a tonne of literature on most of those things, especially the privacy area. Oh my God, that's probably the biggest one, because there are ethicists who work on that, as well as tonnes of privacy and compliance lawyers who love that topic. So there's a huge amount of literature on that, and this all came about because of secondary data use for training, to create AI systems and solutions. And then you have things like the GDPR and various privacy regulations around the world, and people try to link and thread it all together and figure it out. So there's actually quite a lot of academic literature on that topic. Access, that's another one. Yeah, we have to think about, you know, places in the world where they may not have a lot of electricity and access to the Internet. You build these fantastic tools, but how could you deploy them? How could they actually be used? I got a great tool the other day from my healthcare insurance company, and I was so thrilled to use it, let me tell you, I was so excited. And when I tried to deploy it, it won't deploy, because it says it's unusable on 5G. In my country, where I live, that's all we have; it's 5G, there isn't anything else. So it's a bit bizarre that my health insurance company would have even offered this to me, and my health insurance company is based in the country that I live in, and is supplying these devices, these medical devices, to their patients slash consumers, clients. Yeah, we can't use them. It's almost ironic. Where was the forethought there?
Speaker 1
So that really doesn't even sound like an AI ethics problem; that sounds like an even more fundamental issue.
Speaker 2
But on the other hand, I had built up this expectation, because I had signed up online and I was going to get this little medical device. It was going to be great, because I do a lot of travel and I can take it with me, and this was going to be fantastic. So I had built up, as a patient, this expectation of using this great tool, and I'm really let down. I'm really disappointed, and I actually feel like now I have a tool and, what do I do? Do I dispose of it? Because I don't think it's going to recycle. And so then there's the sustainability question that came into my mind, and I'm thinking, there are thousands of these out here in my country; what does everybody do with them, because they can't use them? I'm saying, do we ship them back? That's carbon. Should I walk down the street and drop them off at my insurance company? I mean, I've got all these questions going on in my head. Probably I'm overthinking it, but you get the point.
Speaker 1
If you're overthinking it, you're probably in the right line of work.
Speaker 2
Definitely. I'm in the right line of work.
Speaker 1
I was actually just thinking, going back to Claude.ai's response to the future question: I think its answer reflected back to me something I've been told by a computer scientist who taught a fabulous seminar on AI at one of the universities I work at, which is to point out the distinction between what we refer to as artificial intelligence more generally, in common parlance, and large language models. There is a conceptual distinction between artificial intelligence and large language models, and Claude.ai is obviously a large language model. And so I'm not surprised that it spat out basically a summary of what's most common in the literature out there.
Speaker 2
Right. That's what it reached for, because that's what's there; that's what's in its training batch of data.
Speaker
Yeah.
Speaker 1
Mm-hm.
Speaker 2
And as you said, your own question was about prediction and the future, and it really couldn't handle that question. It didn't really give you a great answer.
Speaker 1
No.
Speaker 2
But that's a good example for researchers and students. This is another reason why you just can't rely on these tools as much as people sometimes think you can. You still have to use your own brain, which is good.
Speaker 1
Yes, it's definitely good for our souls. Another example that I saw recently, and I'm not sure if you've seen this, is that somebody asked a bunch of different large language models, or LLMs, the question: how many Rs are there in "strawberry"? And because of the way that a lot of LLMs work, assigning tokens to words, most large language models couldn't answer this particular question, because it required a level of reasoning that they hadn't been programmed to do, something beyond the whole prediction of word commonality underpinning large language models.
Speaker 2
Yeah.
Speaker 1
Which I thought was quite interesting.
Speaker 2
Yeah.
Speaker 1
And maybe we can feel a little bit better about the pace of AI's progression.
Speaker 2
Great. Yeah.
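A note on the tokenization point above: here is a minimal Python sketch, not from the episode, assuming the third-party tiktoken package is installed. It shows that a language model sees "strawberry" as sub-word tokens rather than individual letters, which is one reason letter-counting questions can trip LLMs up while ordinary code handles them trivially.

```python
# Minimal sketch (assumption: the third-party "tiktoken" package is installed,
# e.g. via `pip install tiktoken`). It illustrates that LLMs operate on
# sub-word tokens, not characters, so "count the r's" is not a natural task.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # integer token IDs, typically more than one for "strawberry"
print(pieces)     # the sub-word pieces the model actually "sees"

# Ordinary code, by contrast, counts characters directly:
print("strawberry".count("r"))  # 3
```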
Speaker 1
Well, that's my five. That's my five questions, my five inquiries. Katrina, thank you so much for joining us and lending your insights and expertise.
Speaker 2
No, it was a lot of fun. I appreciate the opportunity to be with you all. Thank you.
Speaker 1
Thank you so much for listening to this episode of the JBI Dialogues. We'll see you next time. Take care.