
Joy Buolamwini, Unmasking AI | CSPAN | April 26, 2024, 12:46am-1:39am EDT

12:46 am
hi, everyone.
12:47 am
hi. how are you feeling? good? i'm very excited. my name is sandra khalil. i'm the head of partnerships with all tech is human. i have the honor of introducing our guests today. dr. joy buolamwini is founder of the algorithmic justice league, an ai researcher and an artist. she is the author of unmasking ai: my mission to protect what is human in a world of machines. her mit research on facial recognition technologies galvanized the field of ai auditing and revealed the
12:48 am
largest racial and gender disparities in commercial products at the time. her tedx talk on algorithmic bias, with over 1.6 million views, served as an early warning for current ai harms. her writing and work have been featured in publications including time magazine, the new york times, harvard business review and rolling stone, and she is on the inaugural time 100 ai list. dr. buolamwini is the protagonist of the emmy-nominated documentary coded bias. she is a rhodes scholar, a world economic forum young global leader and a recipient of the technological innovation award from the martin luther king jr. center. fortune magazine named her the conscience of the ai revolution. dr. buolamwini earned a ph.d. and master's degree from mit, her master's of science
12:49 am
from oxford university with distinction, and a bachelor's degree in computer science from the georgia institute of technology. sinead bovell is a futurist and the founder of waye, an organization that prepares youth for the future with advanced technologies, with a focus on nontraditional and minority markets. sinead is a regular tech commentator on cnn, talk shows and morning shows. she has been a tech educator for vogue magazine, and to date she has educated over 200,000 young entrepreneurs on the future of technology. sinead is an eight-time united nations speaker. she has given formal addresses to presidents, royalty and fortune 500 leaders on topics ranging from cybersecurity to artificial intelligence, and she currently serves as a strategic advisor to the united nations international telecommunication union on digital inclusion. thank you.
12:50 am
hello everyone. everyone can hear me okay? well, we made it. dr. buolamwini — joy, my friend, my fellow sister — this is such an honor. and i think to kick things off, there are two terms that i think need to be a part of the everyday discourse, that we all need to understand, and that really stood out to me in your book. the first is the coded gaze and the second is the excoded. so what is the coded gaze, and who are the excoded? got it. great way to kick off. before i address that, i just want to thank all of you for coming out to the first stop of the unmasking ai book tour.
12:51 am
ford was the first foundation to support the algorithmic justice league. they supported my art. actually, ajl has an exhibition piece here at the ford foundation gallery, so please do check it out. and now to the coded gaze. all right, so who's heard of the male gaze, the white gaze, the postcolonial gaze? okay, so the coded gaze extends that, and it's really a question of who has the power to shape the priorities of technology, but also the prejudices that get embedded. my first experience facing the coded gaze is what you see on the cover. it was halloween. i had a white mask around, and i was working on an art project that used face tracking, and it didn't detect my face that well until i put on the white mask. and i was like, dang, fanon already said it: black skin, white masks. i just didn't think it'd be so
12:52 am
literal. and so that's what started this journey that became the algorithmic justice league. and really we are focused on — to the second term — the excoded, right? so those who are condemned, convicted, otherwise exploited or excluded by algorithmic systems. and so the focus is, how do we liberate the excoded? how do we actually make sure that the benefits of artificial intelligence are for all of us, especially marginalized communities, and not just the privileged few? and so what are some of the ways algorithmic bias and discrimination — being part of the excoded — could be impacting all of our lives? i mean, think of any -ism and it's there, right? so you can think of ai deciding who gets hired, who gets fired. amazon had a hiring tool where, if you had a women's college listed, you got deducted points. there have been other hiring tools that have been evaluated.
12:53 am
if your name's jared and you play lacrosse, you might get some points, right? so that's one kind of example. i also think about ai systems within medicine. so you have these race-based clinical algorithms that aren't actually based on the science, and people get denied vital care. so that's another space in which it can creep up. education as well. you might be flagged as having used a chatbot, and studies show that you might be flagged not because you were cheating but because english, like for me, could be your second language. so those are some of the everyday examples in which people get excoded. and then my work has focused a lot, as many of you know, on facial recognition technologies. so i think about people like portia woodruff, who was eight months pregnant when she was falsely arrested because of an ai-powered facial recognition misidentification.
12:54 am
so she was sitting in a holding cell, having contractions, and when they finally let her out, she had to be rushed to the emergency room. right. so that's the type of algorithmic discrimination that put two lives in danger. we could go on. it's a horror story — it's halloween — and there are some profound examples, more examples, in the book, from a driverless vehicle that maybe doesn't see you... the list goes on and on, and my jaw just dropped at every one that i read. so in the book you talk about your viral tedx talk — and if you haven't seen it, i highly recommend it — and you also discuss some of the comments that you received. one such comment was: algorithms are math and math isn't biased. so can artificial intelligence ever just be a neutral, objective tool? that's a great question. and i've had so many ai trolls — even one of the book reviews was like, you're telling me computers are racist? so how can this happen, right? and in fact, i got into computer science because
12:55 am
people are messy, and i was hoping i could be in the abstract world and not really have to think too much about bias. but when we look at artificial intelligence, and particularly the machine learning approaches that are powering many of the systems we're seeing today, the machines are learning from data, and the data reflects past decisions. right. and we know the gatekeepers for who gets hired might not have been so inclusive. and so that's where the bias starts to come in, when you have systems that are looking for patterns and the patterns reflect a society. so i'm not saying one plus one doesn't equal what you think it was going to equal, but i'm saying once we're applying these types of systems to human decision making, the bias creeps in. right? and i think that is something that we hear often — that technology is just a neutral tool and it's up to us how we use it. but you make a really important point in your book that there are decisions that get made prior to
12:56 am
the technology even being deployed, and those decisions — the very nature of doing things like classifying people — can't be neutral. and i think, yeah, that was a section that really stood out to me, and i want to read a quote from your book. this quote gave me chills, so i thought that this would be the appropriate section to read it out. "seeing the faces of women i admired and respected next to labels containing wildly incorrect descriptions like clean-shaven adult man was a different experience. i kept shaking my head as i read over the results, feeling embarrassed that my personal icons were being classified in this manner by ai. when i saw serena williams labeled male, i recalled the questions about my own gender when i was a child. when i saw an image of a school-age michelle obama labeled with the description toupee, i thought about the harsh chemicals put on my head to straighten my kinky curls. and seeing the image of a young oprah labeled with no face detected took me back to my white mask experience."
12:57 am
you went on to say, "i want people to see what it means when systems from tech giants box us into stereotypes we hoped to transcend." so how you called attention to these specific stereotypes was through a poem you wrote called "ai, ain't i a woman?" can you tell us more about this poem and what it means to be a poet of code? oh wow. that gave me chills, reliving it. kids are mean out there. i'd always be asked, are you a boy or a girl, when i was growing up. so i think it's somewhat ironic that this ends up being my research. so after i did gender shades at mit, where i was doing my master's degree, and the results were published, the results showed performance — okay, for ibm, for microsoft, and then later on for amazon — and these systems worked better on men's faces versus women's faces, on lighter faces versus darker faces.
12:58 am
and then we did a little intersectional analysis, and we saw that it didn't work as well on the faces of dark-skinned women like me. and so when i observed that from the data, i wanted to move from performance metrics to performance arts, to actually humanize what it means to see those types of labels. and that's what led to "ai, ain't i a woman?" at first i thought it would be an explainer video like i've done with other projects. and then i was talking to a friend and they said, can you describe what it felt like? and as i started to describe it, he said, that sounds like a poem. so the next morning i woke up with these words in my head: my heart smiles as i bask in their legacies, knowing their lives have altered many destinies. in her eyes, i see my mother's poise. in her face, i glimpse my auntie's grace. i was like, oh, something is happening, right.
12:59 am
so i kept going, right: can machines ever see my queens as i view them? can machines ever see our grandmothers as we knew them? and the descriptions you just shared, right — so to see sojourner truth labeled clean-shaven adult male — those are the queens i was talking about. and that led to what my ph.d. ended up focusing on, which was both algorithmic audits, like the gender shades paper, which showed performance metrics, but also evocative audits, like "ai, ain't i a woman?", which humanize what ai harms look like or feel like. i love that you use that word, to humanize this. so when you decided to pursue algorithmic bias as the focus of your research, this was 2016. it wasn't a topic many had heard of, and it certainly wasn't really discussed in public. and then your work — it courageously took on big tech, or some of the tech giants — calling
1:00 am
attention to some of the harms in their facial recognition systems. some of the companies lashed out at you, and some people did come to your defense, like dr. timnit gebru, someone that we also all adore and love — shout out, shout out to timnit. but others were fearful to come to your defense, as were some of the academic labs, because they feared it would impact their ability to get funding or to get a job. so as a student pioneering this research, how did you navigate that, and in your opinion, has the sentiment shifted, or do fears over career repercussions still hinder open discussions about ai ethics? this is such a great question. i will say, now that i lead an organization, i have more appreciation for administration, right, and keeping things funded and all of that. at the time, as a grad student, i felt that timnit gebru, deborah raji and i, we were really
1:01 am
sticking our necks out, and i couldn't understand why more scholars weren't speaking up as much, until i started to follow the money trails. so many of these large tech companies fund many computer science degree programs, particularly phds. i happened to be in a place where my advisor didn't have a ph.d. — he was on a nontraditional path — and i had aspirations of being a poet. so all of these things helped me not feel so much that if i poked the dragons — they were fire-breathing dragons — i would be completely eviscerated. so i do think there is still a fear of speaking out. i do think the work of gender shades helped normalize this conversation so others could speak. with gender shades, one of the things i did, which i was cautioned against, was actually naming the companies. usually it's company a, company b,
1:02 am
company c — keep your funding, like, it's all good, right? so to name them was a risk, but now this is a common practice. and i also have to commend the senior academics who did come to our defense, and later on i did hear there was a cost to doing that as well. yeah, i think the research with gender shades gives us data to point to and the terminology that we all need when we want to advocate against some of these harms. so i have to ask it: there are many voices in the world of ai who believe that superintelligence and the potential for ai to cause humanity to go extinct are the most important harms we should be paying attention to. so as someone who has dedicated their entire working life to combating ai harms, are these the real risks we should be tuning in to? x-risk. when i think of x-risk, i think of the excoded. so i think about the person who
1:03 am
never gets the callback. and can you explain what x-risk is, for people who — oh, sure, existential risk — you want me to talk about what the doomsayers say? no, just explain it — the existential risk kind of thing. sure. so you've seen terminator, yeah? you've seen people on the internet, you've seen the headlines: the end of the world as we know it is here, we're going to die. that's x-risk. so ai could become so intelligent that it takes over, and the already powerful become marginalized. this is my take on the x-risk: they become marginalized, and wouldn't that be terrible, to face oppression, wouldn't it? right. so this is x-risk as i see it. and what i notice with doing this work since 2016 is that sometimes there are intellectually interesting conversations that happen within theoretical spaces, right? so: what if? and then we continue from that. what if?
1:04 am
so we have that with: what if ai systems become sentient — which they're not, right? what would artificial general intelligence look like? and i think sometimes it can be a runaway narrative that is fictional, which doesn't reflect reality but gets a lot of attention. and the problem with getting so much attention is it actually impacts the agenda for where funding and resources are going to go. so instead of seeing what we can do to help portia woodruff, or robert williams, falsely arrested in front of his two young daughters, the money goes elsewhere. so that's kind of the danger that i see. i think it's one thing to have an interesting intellectual conversation, but that's not necessarily what's going to help people in the here and now. i mean, like in the book, how you label it: there are hypothetical risks and then there are real risks that exist today. and one more thing i wanted to
1:05 am
talk about, right. i've supported the campaign to stop killer robots. ai systems can kill us slowly. so think of structural violence — it's not the acute, you know, the bomb drops or the bullet is shot. it's when you don't have access to care, when you live in environments or housing conditions that worsen your life outcomes, right. and so there we see ai being used for those types of critical decisions. that's a different kind of risk. or you mentioned the self-driving cars — there is a study that came out showing a difference in accuracy, like, hang on, with kids and other shorter people. right, we're at risk here. so there are different ways. and also it doesn't just have to be biased ai systems acting up — good systems can be abused. if we're again thinking lethal autonomous weapons systems: you've got a drone, you've got a camera, you've got facial recognition. even if it's not
1:06 am
aimed at you, it might still come for you — it's still a problem. and would you support banning any types of ai technologies? ai-powered lethal autonomous weapons, and face surveillance, right — so it's not just recognition, but it could be systems that are tracking your gender, your age, other characteristics. sure. so you've been in the documentary coded bias, you were the face of the decode the bias ad campaign. so from these experiences, what role do you see media having in these conversations that shape artificial intelligence, or shape how we think about artificial intelligence? so i saw the power of media with "ai, ain't i a woman?" because it traveled, unsurprisingly, much further than my research papers. and i wanted to say, okay, how do we make these findings accessible but also bring more people into the conversation. i like to say if you have a
1:07 am
face, you have a place in the conversation about ai, because it impacts all of us. and so the opportunity to be part of the coded bias documentary — i was a bit hesitant, but then when i saw people would reach out to the algorithmic justice league and say, oh, i'm studying computer science because of you, i was like, okay, i've got to go do my homework. but, you know, i felt inspired by that. decode the bias was interesting. i was partnering with procter & gamble's olay, and they invited me to be part of an algorithmic audit. i said, are you sure? because based on what i know, we'll probably find bias. they're like, that's okay. and i said, based on who i am, i'd like to make the results public and have final editorial decision. they said that's fine. i was only talking to the marketing teams — i don't know if the other teams would have been as quick to say yes — but long story short, we did that audit
1:08 am
and we did find bias of different types, and olay committed to the consented data promise, which is the first of its type that i've seen from any company — showing that there are alternative ways of building consumer-facing ai products. it was inspired by their skin promise: i think it was a year or two after i started modeling for them, they decided there's going to be no more airbrushing or retouching — truth in advertising, which i support; for body image i think it's great. but i won't lie, i was like, wait, okay, so nothing is going to save me? i was exercising and drinking water and sleeping, of course doing my skincare regimen. but i thought it had lessons for the tech industry as well, right. when you know the standards are a little bit higher, you are forced to rise to the occasion. you can't improve what you aren't measuring.
1:09 am
so i think we're all starting to wake up to the reality that most of these ai systems — whether it's a facial recognition system, an image generator, a chatbot — are powered by our digital labor, a.k.a. our data. what advice would you have for legislators on data privacy, and why might it not be enough if a company comes out and says, look, we're deleting your data, it's okay, it's all been deleted — why might that not be enough? so i think of this notion of deep data deletion. when we're looking at machine learning, the type of ai approach that's powering so many of the headline ai systems you'll see, like chatgpt, they're learning from a lot of data. so yes, the data is important, but the data is used to train a model, right? and then the model is integrated into different kinds of products. so if you do like facebook did — we deleted a billion face
1:10 am
prints, which they did — there was a $650 million settlement, so there were some reasons; you don't just go delete a billion faceprints. and after they deleted the photos, which i commend, right — it shows deletion is possible — it was important to note they didn't delete the model. so you still have the model that was trained on ill-gotten data, and that's problematic. so you can't just stop at the data, right. and then even if you delete that particular model, if you've now open-sourced it and it's integrated into other places, it continues to travel, right. the ghost of the data lives on within the models and the product integrations. so when i think of deep data deletion, it's really tracing where the system goes and understanding that the data is a seed. deep data deletion — everybody remember that; we're starting the hashtag tomorrow at 9 a.m. okay.
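[editor's illustration] a minimal sketch, not from the talk, of the "ghost of the data" point: deleting the raw records does not undo a model already trained on them. it assumes scikit-learn is available; the data values and file names are hypothetical.

```python
# Illustration: deleting training data does not delete what the model learned from it.
import os
import pickle

from sklearn.linear_model import LogisticRegression

# Hypothetical collected data: two numeric features per person and a label.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y_train = [1, 1, 0, 0]

# Persist the raw data, train a model on it, and persist the model too.
with open("collected_data.pkl", "wb") as f:
    pickle.dump((X_train, y_train), f)

model = LogisticRegression().fit(X_train, y_train)
with open("trained_model.pkl", "wb") as f:
    pickle.dump(model, f)

# "Delete the data": the raw records are gone from disk...
os.remove("collected_data.pkl")

# ...but the saved model still encodes patterns learned from those records,
# and any product that copied or integrated it keeps making the same predictions.
with open("trained_model.pkl", "rb") as f:
    surviving_model = pickle.load(f)

print(surviving_model.predict([[0.85, 0.15]]))  # still predicts 1, learned from the deleted data
```

in this sketch, "deep data deletion" would mean tracing and removing not just collected_data.pkl but also trained_model.pkl and every downstream copy of it.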
1:11 am
so in your opinion, what can be done to prevent algorithmic harm? what can we do? where should we go from here? i think what i've learned most in my journey is storytelling matters. our stories matter, and this started with me taking the step of sharing my experience of coding in a white mask. and then that led to the research, that led to the evocative audits, and here we are. well, you know, that escalated quickly — it doesn't usually escalate that quickly — but stories do matter, because you have to be able to name what's happening. so putting out terminology like the coded gaze, like the excoded and so forth is part of it. so i think that's something we can all do, which is sharing our experiences with different types of ai systems. another piece to keep in mind is the right to refusal. i see it in airports all the time: face scanning happening, and oftentimes people don't know that as
1:12 am
a us citizen you have the right to opt out. if you go to the tsa website, they'll tell you our tsa officers are trained to treat you with dignity and respect, there will be clear signage. so it's like, okay, let's test it out. i research this. i'm looking for the signs. i find one in spanish that doesn't even mention it, right, and i can barely see the opt-out. other people are not even looking for that sign. and in fact, at the algorithmic justice league we launched the campaign fly.ajl.org — not just so we could have the cool subdomain, though that was fun, but so people could actually share their experiences — and over 80% of the people who responded hadn't even seen those types of signs. but you can say no, and pushing back, exercising the right of refusal, is really important. the other thing is the coded gaze can often be hard to see. facial recognition, facial
1:13 am
analysis — part of the reason i use that as an example is it's so visceral, right? i don't have to write the whole research paper. you can see my friend's face, you can see my face, there's a difference — what happened? you start the conversation. but there are so many other areas in which ai is being used that you may never know about. you don't get the mortgage, you don't get the loan. and so i do think due process is important, where we have a sense of what systems are being used. and until that's mandated, you have to ask. if your kid is flagged for some disciplinary situation and it turns out there was an algorithm involved in that, you should know. so i would say, in each domain you find yourself — you might be at a medical facility, etc. — ask them if there are any ai systems or algorithms in use. they may not know the answer, right, but it starts that exploration, and it also starts a potential story for you to share, to kind
1:14 am
of join the movement. so speaking of due process and potential pathways for litigation, the white house just announced their executive order on artificial intelligence yesterday. it's supposed to be one of the most comprehensive in the world. i think we need your take on it. are we okay, are we moving in the right direction? yeah — i was supposed to be at the white house. i came out for you guys, that's all i will say. it is definitely a step in the right direction because it lays out what needs to be done. so of course, the devil is going to be in the details when it comes to execution. i will say that it builds on the ai bill of rights, which was released as a blueprint last year, and it's principles-based, right: we want to make sure that people have protections from algorithmic discrimination, right, that we have privacy and
1:15 am
consent, that systems are actually safe and effective, and, i think importantly, that there are alternatives — so you don't, for example, have to scan your face to get access to the irs. where i think it falls short — and i also see many congressional actions around ai fairness, ai safety and accountability falling short — is this notion of redress. so i'd love to say we figured it out, right? we're working on ai harm prevention and so on. but what happens when we get it wrong? and what about the people who've already been harmed? so i think redress needs to be a bigger part of the conversation. and you can start the redress conversation by tracking the harms. that's why we're building this ai harms reporting platform, so we have the evidentiary record. and do you think it does enough to prevent harm in the first place, or is it leaning on managing it? tell me — actually, let me ask, what did you think?
1:16 am
i think at managing risk it did a decent job; at tackling and preventing it in the first place — in design, in how we gather data — i think it was kind of lacking. so we're hitting it from the second end of the value chain, or the supply chain, but from how we start: let's design things with safety in mind, let's design and gather data to avoid algorithmic harm, and not wait and kind of manage it. got it — and i wanted to know her take. and my final, final question before we move into poetry: what uses of artificial intelligence are you excited about that could help humanity? excited — interesting. it's kind of ironic to me, because all of this started because i was using ai for something fun. i wanted to create an aspire mirror, so when i look in the mirror in the morning, it could say, hello, beautiful. or i could say, make me look
1:17 am
like serena williams — now coco gauff, the athlete, you know. and it went wrong, so... yeah, that's why we're here. some of the areas that excite me about ai — and i'm cautiously optimistic — are its applications for health care. i don't think it's a small achievement, alphafold, right, to predict 200 million protein structures. and when i was a little girl — i talk about this in the book — i used to go to my dad's office and feed cancer cells. he's a professor of medicinal chemistry and computer-aided drug development. so i grew up with my dad and posters of protein structures all over our house. and yeah, he wanted me to go into chemistry, but the computers themselves just looked so cool, so i ended up going in
1:18 am
a different direction. so that part excites me. but then i also think about so many of the disparities we have in health. there's a company i've invested in that focuses on women's health — one in three women die of cardiovascular disease, but less than a quarter of research participants are women. so we know about biased data sets, right, and what can go wrong. so i do think we have to be really vigilant in order to realize the potential of ai in health. but it still excites me, even though i did not continue the family project into the next generation — i took it in a different direction. but you believed in the true ending of the protein folding story, which was an algorithm. so maybe you knew all along; your intuition was like, there's going to be a computer for this. yeah, i'll have to bring you to thanksgiving to defend my honor. well, that concludes my questions. thank you for all of your rock
1:19 am
star answers. and i think we're really just getting started. i think now we get to hear some poetry. absolutely. so i will go over here really quickly. mic check, one, two — y'all can hear me okay? all right. so there are a lot of poems in the book, and i am going to read a poem that is in part four — does anyone know what page part four starts? let's say "poet versus goliath," the wild one. that's the fun chapter for sure. let's see, it's a long book — page 202... 209. oh, wow. all right, so this poem is called "the brooklyn tenants," and the reason i chose it is because we're here at the ford foundation, and the ford foundation has been
1:20 am
supporting people on the front lines of justice for some time, and the brooklyn tenants follow within that tradition. i was feeling at a low point in my research, not sure if, you know, being an academic was having that much impact, and having the opportunity to share my research with them and see them use it for their own resistance campaigns was very inspiring, and it led to this poem. "the brooklyn tenants." to the brooklyn tenants, resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces. we celebrate your courage. no silence, no consent. and you show the path to algorithmic justice requires a league, a sisterhood, a neighborhood, book talks, hallway gatherings, sharpies and posters, coalitions, petitions, testimonies, letters,
1:21 am
research and potlucks, dancing and music, everyone playing a role to orchestrate change. to the brooklyn tenants and freedom fighters around the world, persisting and prevailing against algorithms of oppression, automating inequality through weapons of math destruction, we stand with you in gratitude. you demonstrate the people have a voice and a choice. when defiant melodies harmonize to elevate life, dignity and liberty, the victory is ours. thank you. and i think we have a q&a, so come on up. i'll be here. all right, feel free, doctor. okay, give it up for dr. joy buolamwini. okay, the time is 2:50.
1:22 am
we're going to do 10 minutes of q&a. please look for the roving mics with trevor and sonya. yeah, let's take it away. hello. first of all, congratulations on the book, and thank you for being here in new york. do you mind standing up? you're okay with that? yeah, okay. again, thank you for being here. since we're in new york, i wanted to ask if you had any thoughts on the ai action plan that the city just put out — the city has asked for participation from stakeholders for, like, understanding how the city should be regulating and using ai. so, you know, given the backdrop of that, which was maybe two or three weeks ago, and then the executive order yesterday, i'd love to hear what you think about what city-level government should be doing in terms of using ai responsibly, or should we just be
1:23 am
advocating that people should just not use it for government at all. oh, that's a great question, and something i've thought about quite a bit. within the space of facial recognition technologies, we've seen ordinances in different places where, for example — it's probably no surprise — in cambridge and in boston and in brookline, massachusetts, the police can't use facial recognition technologies. so i certainly think what happens at the city level, the municipal level, matters. my concern here is you don't want to have to live in the right city to have protections, right. and so that's where you sometimes see a patchwork of frameworks. but we really do need that federal legislation that gives at least a floor of protection for everybody. so those are my initial thoughts. we've got a lot of hands — decisions, decisions.
1:24 am
hello and thank you. my name is andrew. i'm here with the institute for advertising ethics and pmg. so thank you for what you've done. here's my question to you: since so much of the funding for what is ai, or purports to be ai, is advertising money, what do you think advertisers can do with their financial willpower to push ai in the right direction? oh, that's — thank you. yeah, wow, a great question. i will say, i think what all companies should be doing, including those who have the advertising dollars, is to put the money towards ai systems that have been vetted. and too often what we'll see is that you hear the promises of ai and we
1:25 am
buy into the belief, or at least the hope, of it. and i think just a first step is seeing if a system is fit for purpose. and so that could be one approach. i want to see some ladies. hi. i'm very curious, especially right now because it's such a vital time for, you know, legislation and everybody's got it on their plate — but i'm really curious about how many social scientists are involved in these conversations. you know, the reality is moving from human-centered — because human-centric can mean anything — to well-being-centered, right. so i'm curious, in your experience, how many social scientists who really understand psychosocial well-being are involved in these conversations. yeah, i will say it continues to
1:26 am
be limited, but i was so encouraged that one of the architects of the ai bill of rights, who is also now on the un ai advisory body, is dr. alondra nelson. and i definitely think social science sensibility is not just nice to have — it's essential, for sure — but it continues to be sorely lacking. hello, and thank you for all of the work you've done. so my question is really for some of the students in the room, and some of the students who couldn't be here, right, given kind of the moment that we're living in right now, in the face of mass layoffs, while the same companies that are doing these mass layoffs are doubling down on ai
1:27 am
systems, right, and what is also being unearthed in the biases of who's being impacted. and so what are some words of hope in this, especially hearing from someone who is in this space and didn't kind of follow the traditional mainstream mindset of, hey, within being an academic, these are the hoops that i want to jump through — but in building resistance, what are some words of hope, and also, among some of the students that you are working with, whose work is kind of inspiring you, that, hey, in the midst of some of the bleakness that's within this — i feel like, what are you seeing here? got it. well, part of why i wrote unmasking ai is because it starts with a student journey, when i didn't even know
1:28 am
if i wanted to say anything, because i could get in trouble and i might want a job one day, you know? so it's all in the book. so the struggle is real, and acknowledging it, and also acknowledging that there can be a cost to speaking up. but thinking in terms of where there's hope: i met with president biden in june of this year, and i had a photo of robert williams and his two young daughters, and robert was holding the first gender shades justice award. and president biden was like, is the error rate that high, is the threat like that — there is hope once we start having the president asking some of these questions, and that's a long way off from coding in a white mask. and i would say, in terms of words of encouragement, tech needs us; it's not the other way around. the perspectives that
1:29 am
you bring, what you care about, is important. and i had to find that for myself when i first wanted to do this research. people were like, how good are your math skills? are you sure you really want to take that on? i had a bit of discouragement from very well-meaning people, you know; they were just looking out for me to make sure i didn't get hurt and all of that, right. so what helped me was having people like dr. timnit gebru — she was finishing up her phd while i was finishing my master's. and then, you're asking who i look to, who is inspiring: deb raji reached out on facebook, right, saying, hey, i saw this, can i do an internship with you? she didn't know we'd be going toe to toe with amazon — that part came later, quickly. but i think finding support and helping each other, you know, and
1:30 am
we were there for each other as well when it got intense — you might have seen some headlines and so forth. so i think that's really important. and even just being proactive: sinead reached out and said, i see you're doing a book tour, i'm in new york, can i be part of it? and of course we wanted you, first in line, you know? and so that's where we are. so i take a lot of inspiration from people being proactive in that way. and i mean, my name is joy, so i'm probably going to be generally optimistic, you know, but i wouldn't feel so discouraged, because pendulums swing back and forth. and i'll just add — it's something very present in your book and it's very present in your work — that we have the solutions. all of the problems that you discuss, you provide solutions for in the book and in your work more broadly. nothing is a matter of physics that we can't solve.
1:31 am
all of the biggest problems we're facing actually have answers; it's just executing on them, and you make that very clear in your book, and i found that very inspiring. thank you. oh, wow, so many questions. maybe that's perfect, because i have a question around solutions that i've been dying to ask, so hopefully it falls into one. yes? awesome. my name is alisha stewart. i'm the founder of an api-enabled product that helps journalists find a more representative sample of the globe. and i'm really curious — doctor, first of all, i have to say i'm so inspired by your joyful warrior mentality and the battles you've been through. i'm really curious to hear: if you were going to create a large language model today that is accurate, right, and is representative of the global population and will correctly identify serena williams, what do you think it would take and who would you call? i actually think the answer is smaller models.
1:32 am
i think smaller, more focused models. and so one of the major issues, right, when you're dealing with large language models with billions, trillions of tokens, is you don't have the documentation or the data provenance to have an understanding of it. it's a bit of a mystery meat situation — and then, like a magic eight ball, you shake it, see what comes out. toxic? all right, let's shake it again. with more focused models — i would actually think of smaller bespoke models based on the context you're looking at, so you'll have a better handle on what the potential risks are, as well as tailoring it to a specific need or a specific community. so i think it's tempting to think scale — bigger, bigger, bigger; a lot of men in this field, you know — but i do think thinking through other ways is helpful
1:33 am
here. yeah, what is your question? can you hear me okay? hello. thank you so much for speaking, and i'm very inspired by all the discussions we've had so far. i worked at mit as an engineer for a few years, and now i'm doing research in public interest tech, focused specifically on social media platforms, at the berkman klein center at harvard. so i think a lot of these tech companies are still in the process of developing these ai systems and are implementing them and using them as they stand now. and i really appreciate that you mentioned storytelling and really contextualizing the harms that these cause. and within the past few weeks there was actually a very severe bug that meta had reported, where there was a translation issue where they used ai to translate arabic text that said "palestinian" into "palestinian terrorist." so this caused a lot of real-world harm and was just feeding
1:34 am
into a lot of, you know, misinformation and false narratives about people that are being harmed right now in the world. and so this was kind of just brushed off with, you know, the ai systems that were used to do these translations have an open problem with hallucinations, and they're still iterating and focused on fixing these systems. i think these tech companies still need to be held accountable. so what are your thoughts on, while the technology is still developing, making sure that there are guardrails in place so that it doesn't actually get released and have these downstream effects, while at the same time understanding that there's still a lot of development going on and maybe there will be mistakes, but they should be caught, as you know, as soon as possible? yeah — that's a great question, by the way — i think about the entire ai development lifecycle, right. so you have design, development, deployment, oversight, and the part i would always include:
1:35 am
redress. so ideally, you've tested as much as possible before you get to the deployment phase. but if there were consequences and penalties for making that type of egregious mistake, right, i do think companies would be incentivized to be a bit more careful before they put out the mystery meat. and this goes a bit to your point with large models and other types of large-scale approaches to ai, where you don't have the nuance, you don't have the context, and instead it's reflecting what terms are more consistently used with particular groups. i think we saw this with a tool that was used for some sort of sentiment rating system: if it saw the words "black women," it would return a negative sentiment — not because black women standing alone are negative.
1:36 am
you know, i don't see it that way — but because there are so many negative associations, that was the pattern that was being picked up. so i do think if there were redress and there were consequences for making those sorts of mistakes — the more costly the mistakes are, the more cautious companies would be. all right, let's have a round of applause for dr. joy and sinead bovell. yeah — thank you for skipping the white house to celebrate unmasking ai. i hope you all have a copy, a signed copy, but if you don't, go get unmasking ai. okay, tell your friends, families, everyone to check it out. this is the celebration, this is the book launch, today on halloween. speaking of unmasking, if you're going to be talking to dr. joy, we ask that you do mask up. we do have some extra masks to the right of me. i'm david polgar with all tech is human, one of the partners for today's
1:37 am
event, alongside the algorithmic justice league, who you know, and the ford foundation, our beautiful host venue today, also the institute for global politics and random house. so thank you, everyone, for carving out the time today to celebrate unmasking ai. one last time for dr. joy.
1:38 am
