TV: Rana el Kaliouby, "Girl Decoded" (CSPAN, May 10, 2020, 12:00pm-12:57pm EDT)

12:00 pm
nation's enduring makeup, its dna, as the metaphor is used, i think. a lesson that would not only be false, but pernicious. >> to watch the rest of this program, visit our web site, booktv.org, and search for wilfred mcclay or the title of his book, "land of hope," using the box at the top of the page. >> excited to have you here and to share with everybody the full context of what we are going to be doing today. so, rana is the cofounder and ceo of affectiva and a pioneer in emotional artificial intelligence. we're going to be having a
12:01 pm
fascinating conversation on how to reclaim our humanity by bringing emotional intelligence to technology. just a little more about rana: she was named by forbes on their list of america's top 50 women in tech, and fortune included her in their list of 40 under 40. she was also the co-host of a pbs nova series on a.i. in 2018, and she's an all-around awesome person, so it's really great to have you here, rana. before we dive in, i want to share how this topic ties into our mission. at ivy, everything we do is to help ourselves unlock our own potential and the potential of the people around us. that happens through learning and community. the more we learn about the world and ourselves, and the more we surround ourselves with other passionate individuals, the
12:02 pm
better we'll unlock our potential. when i reflect on this topic, emotional intelligence and a.i., this isn't just about machines. it's also a lot about us and the future that we want to shape and be part of. so, i want to kick off by asking you: why is this topic so important? why have you chosen to write a book on this subject and to build a business around it? why is it so critical, and what was the journey you lived through to get here? >> guest: hi, everybody. our emotional intelligence matters, right? and the thesis is, technology has a lot of cognitive intelligence, a lot of iq, but no eq, and if we are able to build eq into our technology, that will not only transform human-machine interfaces, like how we connect with our phone or laptop, but more importantly, to your point, how we connect and communicate with one another, and
12:03 pm
that's been the driving force of my work. i've been doing this for over 20 years now, and it's really fundamentally about improving the way humans connect and communicate with each other. >> host: thank you. and what is at stake here? long-term, if we get this right, get emotional intelligence in machines right and our ability to cocreate with automated systems, what can we hope for? and if it goes horribly wrong, what is at stake? what do we stand to potentially lose? >> guest: emotional intelligence is at the core of how we build trust with each other as humans. and it's often not the explicit kind of license agreements or forms we have to sign that build trust with one another. it's usually the implicit, nuanced, nonverbal signals we exchange, and based on these signals, we make thousands and thousands of
12:04 pm
decisions that involve trusting one another, trusting our colleagues, our family members, our communities. so if you think about how that works in a virtual world, and of course now with the global pandemic, how do you build trust? how do you build trust with technology? how do you build trust with one another? that's what is at stake. to me, if we can't trust each other, we can't really move forward as a society. and the other piece of this is empathy. empathy is at the core of who we are as humans, and i talk about this in the book. we're in the midst of an empathy crisis. with this global pandemic and the whole world going through the same kind of situation, though of course everybody has their own experience of it, i hope that we emerge out of it with more empathy. so empathy is another big theme
12:05 pm
and it's definitely at stake. >> host: thank you so much. you're mentioning two key ingredients; in collaboration and in life, it does boil down to trust and empathy. and when we talk about a.i., when you say the words, i don't know about you, but most people probably visualize the sci-fi horror story, or, from other sci-fi series, more friendly characters. painting a picture looking ahead, whatever it may be, 10, 20, 50 years, what is the possibility here? if we play our cards right, could machines be as full of empathy and as trust-building as humans? is that possible? and is that something that we should absolutely drive for? and i want to ask you, if we
12:06 pm
don't do that, what is the other path, and what is the danger of that? >> guest: if we do this right, i envision emotional a.i. becoming ubiquitous. that means the way we interface with our technology, be it your car or phone or whatever device it is, we interface with it the same way we interface and connect with one another: through conversation, and we're already seeing that with siri and alexa, et cetera; through perception, so we're starting to see our devices have eyes in the form of cameras, et cetera; and then ultimately through empathy. the framework i use to kind of project that into the future is that it's a tool. in the same way that if you have a hearing problem you go out and buy a hearing aid, or if you don't see very well, you have vision problems, you use contact lenses or eyeglasses,
12:07 pm
well, in a virtual world, which a lot of us are in, you don't have the same quality of emotional intelligence. your eq is lower, so i think of these tools as augmenting our eq. it's more of a partnership. it's not us versus the social robots. i think social robots can help us do our jobs better, so i'm excited about that. if it goes wrong, and i think we can talk about the ways in which this can go wrong, for me it's around respecting privacy and consent and being a trusted partner so that people feel comfortable sharing this very personal information. that's the concern i have. i think we need to think about where these technologies can be deployed. we can kind of double-click into that. and then of course bias: how do you make sure that these technologies are not biased
12:08 pm
against a particular group of people. >> host: absolutely. you're saying how people get tools like hearing aids and so forth. people also buy pets, so there's another element to that: companionship. humans don't just need solutions. we want to feel good, and it seems like there is an opportunity here where people can be assisted by machines in a broader way. >> guest: can you imagine, again, spending so much time on our devices, if there was a life coach, or an equivalent of siri, that gets to know you really well and can flag, you seem extra stressed, maybe you need to do a yoga class, and can suggest that
12:09 pm
for you? or it can say, i noticed you were super unempathetic on that last zoom call, or you were kind of mean or rude, whatever, maybe you should consider doing x, y and z. or it can flag things like depression, and even suicidal intent. or a learning companion: if you're taking that online class and you don't have the grit to keep going, it can kind of cheer you on. i think there's a lot of opportunity in this idea of a coach that helps you be more productive or more healthy or more connected, that has your back. >> host: excellent. thank you for that. so, share with us the current landscape. where are we today, what's most cutting edge in real-time, and what are you most excited about? when you look at the different pockets of innovation in this space, what currently seems to you like
12:10 pm
the most exciting, most game-changing technology? >> guest: i want to maybe start by explaining the core technology. of the way humans communicate, only 10% is in the actual choice of words we use. 90%-plus is nonverbal: your facial expressions, hand gestures and your vocal intonation. companies in our space are very focused on facial expressions, so we use computer vision and machine learning to map all of these facial movements into an understanding of the state of the person: happy, sad, do i look tired, am i driving a car and falling asleep? all of these cognitive and emotional states are key. then we started combining vocal intonation: are you monotone, are you excited in the way you're talking? so all of those nonverbal signals, we're now able to develop algorithms to detect them.
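[editor's illustration: as a rough sketch of the pipeline described here, detecting a face and classifying its expression, the python below uses opencv's bundled haar cascade for face detection; the classify_state function is a hypothetical stand-in for an expression model, not affectiva's actual sdk.]

```python
# illustrative sketch only: face detection plus a stand-in expression classifier.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_state(face_pixels):
    # hypothetical model: a real system would map facial movements
    # (action units) to states like happy, sad, tired, or drowsy.
    return "unknown"

cap = cv2.VideoCapture(0)  # webcam, analogous to an in-cabin camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        print(classify_state(gray[y:y + h, x:x + w]))
cap.release()
```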
12:11 pm
we partner with a number of industries, like the automotive industry, to make sure the driver, for example, is not distracted or falling asleep, and then the car can take some action to make sure the occupants are safe. so that's one industry we're very excited about and spending a lot of time focused on. i would say, given where we are right now, a big area that we're exploring is around virtual events and how you create a shared experience when we're all just communicating online. the example i like to give: if we were all in the same room right now, which we would be if the pandemic wasn't happening, you would riff off the energy of the audience, see when
12:12 pm
people perk up based on a question and probably try to build on that. it's so hard right now for me because i can't see the expressions of the attendees, and i find that really isolating. it's a different kind of experience, and it's less of a shared experience. but imagine if emotional a.i. existed here and we could see a real-time engagement chart, so if you said something that was funny, you would see the smile curve go up. so i think there are ways to bring in these technologies to create more of a shared experience. >> host: absolutely. the key point here is, given that only 10% of how we communicate is determined by the content of what we're saying and 90% is gesture-based and so forth, the question is how to use technology to better build on that 90% of how we
12:13 pm
communicate, both with each other and also with the technology we use. so, what advances are right around the corner? what can we hope to see in the very near term, not 20 years out but the very near term? i'm using my google home to turn off the lights and play songs, and i know it remembers things about what i requested before, but i don't feel like i have a shared history with this machine. i don't feel like it really knows me. what do you think in the near term are going to be the most important changes, not just in terms of what the technology does but specifically how we live and the way our lives are augmented? >> guest: i imagine a lot of those devices and conversational agents, once they have a little empathy and emotional a.i., they
12:14 pm
move from being transactional to being truly conversational. for now, the way we all use these devices, we ask alexa or google home to do something for us, play some music, order blah blah blah, and that's it. there's no back and forth. it can't say, oh, you look a little tired today, and you say, i didn't sleep well, and it says, maybe i'll order some chicken soup for you. there's none of this back and forth, and the only way you get there is if you start having a sense of history, a personal profile, what is your baseline, so when you deviate from it, it can say, you look a little off today. or it could be, you look super excited today, what's going on? it could go either way; it doesn't have to be negative emotions. but there has to be this sense that it knows who you are as a person, and that only comes with a little bit of emotional intelligence, and that unlocks all sorts of applications beyond
12:15 pm
the transactional. it can really be a conduit for helping us do all sorts of things. >> host: is it correct to say a major limitation today is simply the recording piece of this? because with a smart watch, an apple watch, even if i don't do anything it's still recording how many steps i'm taking, my heart rate and that kind of stuff. so theoretically, if i was wearing a device, or if there were devices around me, that measured my facial expressions, gestures and all that, but also maybe measured chemicals in my body -- so would you say that a lot of the limitations are around the recording part of this? and then what else is in the equation? once the data is captured, communicating with you in a way
12:16 pm
that's not super annoying or disruptive. >> guest: exactly. there are two parts, the sensing part and the action part. we are the sensing part, but of course we collaborate with our partners, like automakers, to make sure the sensing part isn't annoying. the limitation for the sensing part depends a little on the situation and the context. i'll come back to trust. i think it's really critical that there is a level of trust, that the user knows this data is collected -- it doesn't have to be recorded, it can be processed on the fly -- and is being used to create a better experience for you. there has to be value in return for you, and that's where a lot of the innovation is happening. i'll go back to the automotive industry, where we've spent a lot of time. car companies are putting cameras in the vehicle. the data never leaves the car and never gets recorded,
12:17 pm
but it's looking for things like signs of fatigue, signs of distraction, and the value proposition is that it will be a safer experience. so i think that's a very clear value proposition. but it's a dialogue: how much are users and consumers willing to, what's the word, be okay with these sensors being around us? >> host: there's a second major book where, at the end, the author's whole prediction, and i'm paraphrasing here, is that because we find it so difficult to make decisions as humans, whenever we can, we'd rather not make a decision, we'd rather just be given direction. so instead of learning how to
12:18 pm
get to this place myself, i'll just keep looking at google maps like it tells me. he claims in that book that because of that tendency, we will give up all the information so that our apps can tell us who to date, what to eat, what business decision to make, what job to choose if you are given two offers. it could be so many things. and the way he phrases it, he sees it as inevitable. and then another thing to add to this: because we're stuck at home, i was looking at getting a peloton, and there's a monthly payment plan, so just connect your bank account to this and we'll see your transactions. and i won't do that, but if it gets you the financing, you'll do it. do you believe -- is that your
12:19 pm
prediction, that now it's just restricted to the car, but ultimately, if it was connected to the cloud and people got 10x the value, they'd be pretty open to giving technology all access to their life to get it? >> guest: i think if you're getting enough value and you trust the technology, and, by definition, the folks behind the technology -- i mean, i think we have seen tons of examples where people are doing that already, and if there's convenience and there's value and a better experience, yeah, i think people would. we're already giving up so much data, and i think a lot of people didn't realize until the recent tech backlash how much data we're sharing. talk about power asymmetry. unfortunately, so far only a small number of companies or governments have had access to all this data, and we as consumers, we're not really
12:20 pm
beneficiaries of that data. and i think there needs to be a rebalance of who has access to this data and who is getting value out of it. >> host: if you could help crystallize for us, what is holding us back? >> guest: it's a really complex problem. people's emotions and behaviors are actually pretty complex from a computer vision standpoint. we all look different and express in different ways, so how do you map that to an actual understanding of the person's state? i could be smiling all day long but broken on the inside. so it's complex. it's not just, oh, if you smile you're happy, if you raise your eyebrows you're surprised. that's a very naive way of looking at it, an oversimplification of humans, and it will take time to code for these states. and then the
12:21 pm
third thing is consumers have to be on board, and so the value has to be there, and we're still experimenting with what the right applications of this are. i was on a call earlier this morning, prepping for an event that is about the future of relationships, and we talked about dating apps and how this technology could kind of redefine what dating looks like, especially, again, given that a lot of this is happening virtually. very interesting. if i had a dating app that could say, based on your character, but also based on the chemistry, if you meet this person in the real world, you are going to hit it off -- i'd pay for that. we don't have that right now.
12:22 pm
>> host: right. so we have a related question on this from our member, helen. she asked, how about future eq-enabled therapy, or even personal and career coaching? are there nuances there? and i want to add a little bit to this question. there was a great uk-based show that came out a couple years ago, and there were these android-like a.i. that looked just like humans but with a different eye color, and everything about the world was the same. but one scene completely blew me away: the main two characters are going in for marriage counseling, and it's an a.i. marriage counselor. typically, before seeing that, if you asked me what's the very
12:23 pm
last job that robots might take, i would say something like that, because it's so emotionally interconnected. but in the show, what happens in the scene is the couple is having an interaction, and this human-looking android, this a.i., is kind of like, okay, well, based on 18 billion data points with these interactions, there's a 97% probability this thing is causing that thing, and therefore you should do this, with this percentage of confidence. you see that and you're like, why would you ever use a human one again? so, both with helen's question and that perspective, what do you think? is there anything that a.i. won't be able to replicate, like emotional and iq nuances? >> guest: that's an awesome question.
12:24 pm
where do we start? i think this example is really fascinating, because there's something really magical and amazing, and almost an oxymoron, in that you can quantify emotions and bring a data-driven approach to something we feel is so irrational and not quantifiable, but it is quantifiable. i like your 18 billion data points example of using data to make decisions. the other thing is, we do make a lot of our decisions based on our emotions. we make emotion-based decisions in terms of where we want to live, careers, partners, what we have for breakfast. it influences our health and our sense of mental well-being, how well we sleep. our emotions are just integrated into our lives, and people don't realize that. so it makes sense to be able to quantify it. back to the counselor case:
12:25 pm
there's fascinating research out of usc where they brought in post-traumatic stress disorder patients, and half of the patients saw a human psychiatrist and the other half saw an avatar, and they found that patients were more forthcoming with the avatar, just shared a lot more, confided a lot more, because they perceived the avatars to be less judgmental than humans. and i just find that kind of sad, right? so will these robot counselors replace human counselors? not anytime soon, because there's a lot of work to do, but it could be a partnership. maybe for the less critical cases. let's take covid, for example. you are getting tested and you walk into a clinic or hospital; maybe the frontline health
12:26 pm
workers are actually social robots, and they say, oh, hi, you don't feel so well, let me do a number of tests on you, and if it's critical you get moved to an actual human being. if it's not critical, you got a little empathy, you feel better, you turn around and go home. i don't know. >> host: the ptsd example you gave is a great one. i can see how the nonjudgmental nature of a nonhuman avatar counselor might be desired. it's also the fact that the avatar won't have a bad day. the avatar will be objective, has no agenda, doesn't get defensive; it just does what it does. what it doesn't have is necessarily a shared experience, and it's not something that we can have a relationship with.
12:27 pm
so, there's a lot of jobs, even currently -- someone has their lawyer, a very personal relationship; their banker, a very personal relationship -- even though 95% of the things you do with your banker or lawyer could be automated today. so, do you think maybe the ultimate outcome could be partnerships? so you still go to a human ptsd counselor, but the human is backed by the a.i. that gives them the super objective, data-driven stuff, and humans will still pay for the human-to-human relationship because they want to know their therapist, but maybe in the background there's big data and objective, nonemotional things driving what that human counselor is saying. >> guest: i totally agree with that, and this kind of a.i. could be a conduit for
12:28 pm
the human counselor. i can't get my counselor's ear 24/7, but maybe there are situations where i can access that a.i. counselor's proxy until such time as i'm able to. we have seen examples -- actually, a robot, an m.i.t. -- what is david asking? >> host: i'll get to his question. >> guest: oh, this is a good question. so, david, there is actually a social robot out there called mabu. it's an m.i.t. spinout, but they're based in san francisco, and the robot is designed to help terminally ill patients. you get sent home with a robot, a cute yellow robot, and the robot
12:29 pm
makes sure you take your medication at the same time, and it checks in daily and listens for whether you're not feeling well, both physically and also mentally, and it will flag that to an actual nurse or clinician so they can intervene. what's the alternative? you can't send a human nurse home with every terminally ill patient, and that's where it's a partnership. it's the next best thing to actually having an amazing human being with you the whole time. and also, i said this robot is out there. is there a lot more work to be done for the robot to be effective? absolutely. but there are videos of patients talking about their relationship with mabu, and it's absolutely fascinating. it sounds weird, but you do build a connection with these devices, and it can sometimes save the day, because you feel like somebody is on the lookout for you.
12:30 pm
>> host: absolutely. so, rana, that's a fantastic example, and just to underline the question david was asking specifically: where is the line between reality and science fiction? the way i want to ask you that is, there can be more sci-fi possibilities than we're imagining, or do you think there's a line, some things will never happen, are just not possible? or do you think, since you're in this industry at the cutting edge, that there's nothing we talked about that isn't going to happen, it's all going to happen sooner or later? fact versus fiction: is there any line to be drawn, or is it inevitable we'll get there? >> guest: where i have a hard time believing is the scenario where we have this superpowered agi, or agis, that
12:31 pm
will take over the universe and destroy all human beings, this notion of existential threat. we're designing these devices. why would they turn on us? just pull the plug out. so that's where i feel like that's not really it. the entire framework of humans versus a.i., i don't subscribe to that at all. i think of these things as tools that augment our abilities. we get to design them, we get to develop them, we decide how to deploy them. we're in control. now, i don't trust humans. that's where i would say it's not about the technology, it's about who is designing it. and do we have a shared set of core values? we don't. some countries care about ethics, and some other countries couldn't care less. so that's where the problem is. >> host: absolutely.
12:32 pm
so, building off of that, we have a great question here. i'm betting -- this is the person who is asking the question -- i'm betting you have a fascinating perspective on gender roles and expectations, as a woman executive in tech working on making tech more emotionally intelligent. will a.i. evolve to react to different genders, races, nationalities differently? are there any dangers of discrimination there? >> guest: that is an amazing question. so, before we went live, we chatted, because we both have a middle eastern background: what if an egyptian designed alexa? my experience as a woman in this industry has been really interesting. my cofounder is an m.i.t. professor who started the whole field.
12:33 pm
she wrote the book, and she evangelized, back in the day, that computers need to have emotional intelligence. i read the book in egypt, and it transformed my life, and that's why i'm in the u.s. doing what i'm doing. i got so inspired by the book. my favorite story is when we were at m.i.t. and deciding to start affectiva, we went on an investor pitch trip and we had a ton of investors lined up, and it was just really interesting, because there we were, two women scientists -- at the time i used to wear the hijab -- and we avoided the e word. we did not use the word emotion at all. we danced around it: sentiment, affect, valence, arousal, but not
12:34 pm
emotion. and fast forward ten years, i feel the world has changed, and there's much more realization that there is a role for emotions and emotional intelligence, and i see that from both men and women, which is awesome. so i think it's changing, slowly. now, in terms of whether these technologies should respond differently to men versus women: we have 5 billion facial frames that we have collected, with people's consent, from 90 countries around the world, and we already see there are gender differences in how people express emotion. there are cultural differences in how people express emotion. so in the u.s. we find that women smile 40% more than men do when they're watching content. in france and germany, it's only 25% more, and in the uk we found no statistically significant difference
12:35 pm
between men and women. it's early days; we need to dig into that more. but it is really fascinating. >> host: thank you, rana. i think it definitely deserves a youtube skit, alexa in different countries. here's a great question from robert snyder, two questions on the topic of the marriage of eq and a.i. and getting that right. question one: other than national governments, what institutions are in place, or should be in place, to keep things healthy and encourage good actors and penalize bad actors? and will governments be heavy-handed and oppressive, or light-handed and resemble peer pressure? >> guest: so on the first question, i feel very passionate that as a.i. thought leaders and
12:36 pm
innovators and business leaders, we need to be at the forefront of deploying ethical a.i. we are part of the partnership on a.i., a consortium started by amazon, facebook and microsoft a number of years ago, which invited other stakeholders like amnesty international and the aclu, and then a number of startups like affectiva, and i'm on the fate committee -- fair, accountable, transparent and equitable -- and they push us for thoughtful regulation. i do think we should enforce legislation that dictates where the technology can be used and what data can be collected and who has control of it, all of that. i'm against a kind of blanket let's-not-develop-this-technology, because i think there's so much potential for it to do good in the world. so to answer the question, there
12:37 pm
are organizations in place, and the idea is to collaborate closely with legislators to ensure this is done in a thoughtful way. the challenge is, not all governments and not all nations share the same set of core values. for example, in our space, two of our biggest competitors are chinese companies that have raised billions of dollars of funding, and they have access to a ton of data because it's centralized through the government. and they use it for all sorts of applications, including surveillance and profiling, which we are absolutely not going to do. from day one we have turned that industry away and that use case away, including millions of dollars of funding. i believe ultimately our approach will win in the global
12:38 pm
market. again, back to peer pressure and consumer advocacy, that's another reason why i wrote the book. i think it's key that the everyday consumer has a voice in how a.i. is developed and deployed, in the same way we have a voice in fair trade or organic food or going green. i think the same conversation has to be had around technology and ethical technology. >> host: here is a mind-blowing question, a great one. i can't wait to hear your response. my concern is a lot of your examples are stop gaps for what is a societal weakness or issue. for example, no one should die alone or be terminally ill alone. these technologies risk depleting our relationships and connection by ignoring the deeper human issue. how do you consider the ethical side of building stop gaps, and what part of the focus of emotional intelligence is greater empathy?
12:39 pm
>> guest: that's a wonderful question. so, i like to be a realist and pragmatic. is there a version of the future where we do away with technology and go back to face-to-face communication and just a kind of deeper, empathetic connection? maybe, but i don't see that as being realistic; it's not where we are headed, given everything that is happening today. so if you accept that technology is part and parcel of where we are, and if you accept that, unfortunately, not everybody has access to mental health care, not everybody has access to health care, not everybody has access to amazing education -- i grew up in the middle east. i was super lucky that i had an excellent education, and i can connect the dots to why i can be doing what i'm doing today, because of my education, but i'm lucky. so, can we use technology in a
12:40 pm
way that democratizes access to health care and education? i think technology can be part of the solution. it does not have to take away from our connection; it should be able to augment it. >> host: thank you so much, rana. on a related note, anna asks the following question: if we start to rely more on a.i. to connect, how will this affect our ability to empathize with and understand our fellow human beings without the a.i. tools? could this get so addictive that, just like i can't find directions on my own to go anywhere, will it be like that? if people get too hooked on this, will it actually diminish
12:41 pm
our ability to do it without the a.i.? >> guest: that's a concern. but i worry about the opposite situation, where we're all connecting online based on text or social media platforms that do not incorporate nonverbal communication or eye contact. i worry about my son, who is 11 and spends a lot of time on his device and video gaming, and he doesn't need to practice any nonverbal communication when he is doing these things, and if you don't use a muscle, it atrophies. so, unless we redesign these experiences to incorporate nonverbal communication, i worry that we lose it altogether, and we don't get to practice empathy in a virtual environment. so i think there's an opportunity to be creative and reimagine what social and digital communication could look like, in a way that allows, every time i'm
12:42 pm
interfacing with a device, be it just the device or as a proxy to a human or group of humans, if i can be practicing making eye contact, which we don't actually do in this situation, but if i can be making eye contact, if i can be motivated to be expressive because that's going to make the connection so much more authentic and real, we'll use these things, we'll use and develop those skills. if we don't, they'll go away. >> host: it's a fascinating point. it's not that we should be worried about the future technology making us less human; it's more about the current technology's limitations, because it's not getting across our full humanity. it's better that we can see you today, for example, than just audio, and if the technology could also tell us more things about you, even more than what we can see, that would also be really helpful.
12:43 pm
it's exciting to keep this in mind. here's a question about data, which is, like you said, the sensing part of the problem. george asks, is there any company or organization currently collecting data that can be trusted to use that data for our benefit rather than their own benefit? >> guest: i think there are some examples of that, but there's this imbalance. i'm sure if we asked the same question of jeff bezos or mark zuckerberg, i'm sure they'll say we're collecting data for your own good. it's an interesting question. there's probably a spectrum of trust.
12:44 pm
>> host: all right. absolutely. rana, one question i wanted to ask you: if you were to draw some red lines, what technology should never be able to do or allowed to do, what would those red lines be? one comes to mind; i'll give an example, and you tell me what else could be or should be a red line. given that people, just based on voice mails, text messages or e-mails, hand over their bank account details to bad actors because what they hear is so convincing, if this technology gets really good and it's indistinguishable from a human, and not just a human but the most manipulative human ever, that can really get under your skin, i would probably think that's totally a red line: something that is so convincingly human and that is used for certain
12:45 pm
purposes. that's one example. i want to know what you think: is there a red line that should never be crossed, no matter how capable we get? >> guest: for me, i do not think it's acceptable to use this technology without people's consent or opt-in. for us that meant that security, surveillance and lie detection are industries we do not play in, and in 2011 we got approached by an agency that wanted to give us $40 million of funding, and that is a lot of money, and we were two months away from running out of cash, so it was hard to turn away from that funding. but i thought about it, and it was not in line with our core values. we want to be your trusted partner for the data, and that felt like a violation of trust: if people didn't know you were collecting that data, didn't know how you were using it, there was a lot of potential for
12:46 pm
profiling and abuse. so, we just stay away from that use case, and i think that is an example of a use case that just breaches the spirit of why we would want to do this, which is to bridge a communication gap, not abuse it. >> host: would you make it a requirement that anytime you're interfacing not with a human but with an a.i., you should know that's what is happening? it should be indicated straight up, like if it's a phone call or any kind of technology, this is not a human? >> guest: i don't know if anybody is actually researching this, but i would be so curious; if you know somebody looking into this, let me know. what if you are chatting with an a.i. counselor and you don't know if the
12:47 pm
counselor is human or machine or a hybrid? how does that affect how much you share, the efficacy of it, which is more effective? there's a lot of work to be done there. and then, should the a.i. disclose it's an a.i.? or not? >> host: a good question. could you share with us practical tips we can take into our day-to-day lives, at work, and then as a country, or just as humanity at large? what are super practical, actionable next steps? start with the personal level: what are the ways in which we may already be able to leverage these technologies in our personal lives, and things you have seen that are consumer-oriented and helpful already? and then the next level, for businesses and organizations, what can we leverage? and then at
12:48 pm
a governmental or international level, what more could be or is already being used and implemented? >> guest: okay. on the personal basis, applications that track your mood. there's a couple out there, but if done right, that can be powerful. it would track your mood and then maybe sync up with your calendar and highlight the days of the week or the meetings that always leave you in the worst state. it could be like a fitbit for your mental health. and there are examples of that based on physiology: apps that track your heart rate, using the camera sensor on the phone, and tie it to a level of stress or happiness. there's more work to be done there, but that's one example on the personal level.
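[editor's illustration: as a rough sketch of the camera-based physiology mentioned here, the python below estimates pulse rate from the average red-channel brightness across video frames (remote photoplethysmography). the frame source and sampling rate are assumed for illustration, not taken from any particular app.]

```python
# illustrative sketch: estimate heart rate from camera frames by tracking
# the mean red-channel intensity over time (remote photoplethysmography).
import numpy as np

def estimate_bpm(frames, fps=30.0):
    # frames: a list of HxWx3 RGB arrays captured from a phone camera
    signal = np.array([f[..., 0].mean() for f in frames])  # red channel
    signal = signal - signal.mean()                        # remove dc offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # keep the physiologically plausible band, 0.7-4 hz (42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0  # beats per minute
```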
12:49 pm
i would also put in a plug for an app i love called day one. it's a journaling app, and while it isn't exactly emotional a.i. in terms of what i'm developing, because it allows for all sorts of different types of data and tagging, i use it as a way of expressing my emotions. on an organizational level, we work with a third of the fortune 500 companies worldwide, and they use our technology to quantify how people respond to their content. so if you're creating a short video reel, whatever the message is, whatever the cause is, you often don't really know how people emotionally engage with that content. so that's one way; there are use cases around that. and then what is coming around the corner, and applies to society at large, is automotive and safety. we're collaborating with some
12:50 pm
governments that are deploying these robo taxis or buses, and they want to make sure that it's a safe environment, that there's no violence, and they want to make sure that people are having a comfortable experience. so, that's all part of it. and then telehealth; i want to end with that. we can leverage technology to flag people who are struggling, flag depression early on, flag parkinson's, autism. there's a lot of potential there. >> host: thank you, rana, that was great. for the last kind of question, to bring us home, i want to ask you: tell us about your book, "girl decoded: a scientist's quest to reclaim our humanity by bringing
12:51 pm
emotional intelligence to technology." what led you to write this book specifically? what are the key themes it focuses on, for everyone who might be interested? this isn't just a technical manual; this is very much a memoir of your story, so i'd love to hear about that. and then i'd love to ask you about other causes and endeavors: tell us more about your company and the causes you care about, and how to best support you. >> guest: awesome. so "girl decoded" came out a few days ago; check it out. i also narrated the audio book, so if you're into listening to books, check it out. this has been in the making for years. when i first started, the idea was to write a book about emotional a.i., like the conversation we're having: the technology, the applications, the ethical implications. and i met with the publisher, an editor at penguin random house, for lunch, and he said, caveat: i'm not taking on any new a.i. books. there are so many out there.
12:52 pm
indications, and -- impreliminary indications and i met before the publisher and question ed doctor at penguin random house for lunch and he said caveat. i'm not taking on any new a.i. books. there's so men out there. i was like, oh, god. he was like tell me your story. group up in middle east and a pretty conservative family but they supported me education and came to england for my ph.d and thenland at boston at commit started the co and i'm tech ceo so he said that's the store. so pivoted into this memoir with the mission offed evangelizing what emotional a.i. is and using my unusual past and kind of hough i had to overcome a lot of cultural and societal norms. both growing up in middle east and being a technologist in a very male dominate society, and in the hospitals this story will
12:53 pm
inspire other people to forge their own path and find their voice. so, yes, that's the book. i learned a lot. it's been a journey of decoding technology and decoding myself. i'm a work in progress. one thing i'm passionate about is paying it forward. we have an amazing internship program, which we're having to rethink because of covid, but we usually bring in high school students, undergrads and postgrads over the summer, to engage these young people and, again, think through what we want this technology to look like. and then i'm also passionate about diversity, be it gender, ethnic or age.
12:54 pm
and i'm part of an organization called all raise that supports both female founders and female funders. so if you're interested in any of these, please let me know. >> host: we have been listing the links, and i'll include them in the thank-you message that goes out to everybody after this event. i want to share my key takeaway and then ask you to share maybe a final word of wisdom with us. if you left us with just one thing to keep in mind as we go about our day-to-day lives as best we can during this pandemic, anything from your learning you want us to keep in mind, that would be great. so before that, i'll share my key takeaway. i love the title of your book, girl decoded, and what you said about it being a journey of self-reflection. i think that grappling with this
12:55 pm
theme, emotional intelligence and a.i., is not so much about just talking about what the technology is. it's really causing us to dig deep and figure out what it means to be human, how do we share that, how do we maintain what is so special about it and double down on it, not necessarily replace it. i think this is why a.i. fascinates people so much. not because they care about how circuits work; they really care about what is human, then, if that's what a machine can do. that's my key takeaway, and i feel like this can make us feel more human more often than not. and now i want to give it over to you. >> guest: to build on what you said, my mantra is let's humanize technology before it dehumanizes us, and that's the core of what i do.
12:56 pm
i think my last word of wisdom here is just to lead with empathy. whether you're leading your teams or organizations or your families or communities, begin with empathy. >> host: i love that. what a great call to action. you were wonderful today. so thank you so much for joining us. >> now on c-span2's booktv, more television for serious readers. >> welcome, everybody. i'm steve coll, the dean of the columbia journalism school, and i'm pleased to welcome you to tonight's conversation with the 2020 j. anthony lukas prize winners. i'm sorry we're conducting this conversation virtually rather than in the world room at columbia, and i hope we're back together again next year in the