Hillary Clinton, Michael Chertoff and Others Discuss the Upcoming U.S....  CSPAN  March 28, 2024 11:24pm-11:54pm EDT

11:24 pm
political parties and from the newsrooms as well. >> well, thank you. well, very sadly, we are out of time. you set the scene fantastically. i take away three key points. one is that if women were running the world, i think there would be quite a different tone and sense of urgency to this debate. secondly, these issues of misinformation are not entirely new. i mean, they go back a decade, but they have dramatically accelerated in recent years. ai is threatening to make it worse. we have no time to lose because of the impending elections. and thirdly, we cannot duck the question of what is happening with the tech companies and their responsibility if we want to move forward to some kind of, if not solution, then containment. we will be hearing from tech companies later on today. we will be hearing from a number
11:25 pm
of other voices about this vital debate. but in the meantime, can you all please show your thanks to them? [applause] >> hi! how are you? [applause] [whispering] hi. good evening. sec. clinton: well, if you are not depressed, we will get you there. i could not be happier to have these extraordinary panelists follow up on the setting of the stage, because now we want to get a little bit deeper and understand the implications for the upcoming elections. we have four amazing panelists. jocelyn benson is the secretary of state of michigan and she has been in the eye of the storm
11:26 pm
since, well, before the 2020 election by far. but since then, certainly one of the real leaders trying to understand what was happening. michael chertoff is the former secretary of homeland security, cofounder and executive chairman of -- group. michael really has just a depth of experience about dealing with, originally, online radicalization and extremism. and now, of course, based on his knowledge of that set of threats, he understands, you know, we've got to face what's going to happen in the elections. dara lindenbaum is a commissioner of the federal election commission of the united states, and as such, she is part of the group that is trying to, you know, make sense of where money is being spent and what is being done with it, and the impact that it is having. and anna is the vice president
11:27 pm
of global affairs at openai. we really are thrilled that she is here with us because clearly, openai, along with the other companies, you know, is forging new ground. and a lot of it is very exciting. and frankly, a lot of it is very concerning. part of what we want to do is help sort that out, particularly as it possibly affects elections. michael, let me start with you, because as i said, you really were on the front lines when you were at the department of homeland security, leading efforts to understand and prevent the use of the internet, at that point, to provide outlets for extremism and the radicalization of people. you know, and now, i think there is legitimate concern about hostile foreign state actors, not just russia. there are others that are getting into the game. why not? it looked like it worked, so join the crowd.
11:28 pm
but we are now worried that they will use artificial intelligence to interfere in our elections this year. can you explain, not just for our audience here but for the people who are watching the livestream, both the downsides of how ai can be used by our adversaries, but also what we can do to protect ourselves? sec. chertoff: yeah. and thank you again for leading this, secretary. let me share. i think in this day and age, we have to regard the internet and information as a domain of conflict. actually, if you go back even 100 years, it's always been true that our adversaries have attempted to use propaganda and false information to manipulate us. but the tools they had were relatively primitive. what artificial intelligence has done is equip people with tools that can be much more effective with respect to the
11:29 pm
information domain. we have talked a little bit about deepfakes and the ability to have simulated video and audio that looks real. and unlike photoshop or some of the things some of us remember from years ago, this has gotten to the point that it's very, very difficult, if not impossible, for an ordinary human being to tell the difference. but i would actually argue that artificial intelligence has capabilities and risks that go beyond that. what artificial intelligence allows an information warrior to do is have very targeted misinformation, and at the same time, and it is not a contradiction, to do that at scale. meaning you do it to hundreds of thousands, maybe even millions of people. what do i mean by that? in the old days, again, 10, 20 years ago, if you sent out a message that was incendiary, you
11:30 pm
affected, and maybe even induced a belief in, some people, but a lot of other people would look at it and go, oh, this is terrible, and it would repel them. that was an inhibiting factor. but now, you can send a statement to each individual viewer or listener that appeals only to them, and nobody else is going to see it. and you may send it under the identity of someone who is known and trusted by the recipient, even though that is also false. you have the ability to send a curated message that will not influence others in a negative way. the reason i say it's at scale is you can do it millions of times. that's what artificial intelligence does. i think that has created a much more effective weapon for information warfare. in the context of the election,
11:31 pm
in particular, what are we worried about? obviously, one experience we had, we saw this in 2016 with the russians and the trump campaign: there can be an effort to skew the votes to a particular candidate. we have seen that with macron in france in 2017. we've seen it in the eu and other parts of the world. i would argue that this year we are facing something that, in my view, is more dangerous. that will be an effort to discredit the entire system of elections and democracy. we had a defeated candidate, whose name i won't mention, who has talked about a rigged election. imagine the people who are the audience for that; they will start to see videos or audio that look like persuasive evidence of a rigged election. it's like pouring gasoline
11:32 pm
on the fire, and we could have another january 6. i understand that the reason our adversaries like this is because, more than anything else, they want to undermine our unity and our democracy. in a world in which we cannot trust anything, and we cannot believe in truth, we cannot have a democracy. that is going to lead to a third consequence, which will be very dangerous. we are talking about how to teach people to distinguish deepfakes from real things. we don't want to have them misled by the deepfakes. but i worry about the reverse. in a world where people have been told about deepfakes, they say everything is a deepfake. therefore, even real evidence of bad behavior has to be dismissed. that really gives a license to autocrats and corrupt government leaders to do whatever they want. how do we help counteract that? there are some technological
11:33 pm
tools. there is now an effort to add watermarking to video and audio, where when a video or audio file is created, it carries an encrypted mark, so anybody who looks at it can validate that it is real and not fake. more than that, we have to teach people about critical thinking and evaluation so they can cross-check: when you get a story that appears to stand alone, see what the other stories are and whether anybody is picking it up. we need to establish trust in voices that are deliberately very careful and very scientific about the way they validate and test things. finally, i think we have to teach, even in the schools, and this will start with kids, critical thinking and values: what it is that we care about and why truth matters, why honor matters, why ethics matters, and
11:34 pm
then to have them bring that into the way they read and look at things that occur online. this is not going to be an easy task, but i do think we need to engage everybody in this process, not just people who are professionals, and make it part of the mandate for civil society over the next year or two. sec. clinton: thank you so much, michael. that was incredible, helping lay the groundwork for what we need to be thinking about. what is the federal election commission doing to try to set up some of those guardrails on ai-fueled disinformation ahead of the 2024 election? comm'r lindenbaum: thank you for having me, it's an honor to be a part of this. the short answer is the fec is fairly limited in what it can do in this space. there is hope on the horizon, and there are different
11:35 pm
ways things are developing. despite the name, the federal election commission only regulates campaign finance law in federal elections: money in, money out, and transparency. but last year we received a petition for rulemaking asking us to essentially clarify that our fraudulent misrepresentation regulation covered artificial intelligence, and really deepfakes. and we are in the petition process right now to determine whether we should amend our regulations, whether we can amend our regulations, and whether there is a role for us in this space. our language is pretty clear and narrow. even if we can regulate here, it's really only candidate on candidate. if one candidate does something to another candidate, that is all we could possibly cover because of our statutes.
11:36 pm
unless congress expands that. all is not lost; there are pretty great things that have come out of this. one is what happened during our petition process. we received thousands of comments from the public and many other institutional actors, including a lot of the smaller tech companies and organizations that don't often have a seat at the table. but here it was really an open forum for them to bring their ideas to light. the comments were insightful, they were creative, and it is my hope that congress and states, and others looking at this, will read all of these comments as they try to come up with possible creative solutions here. in addition, congress could expand our limited jurisdiction. if you had asked me three or four years ago whether there was any chance congress would regulate in the campaign space and come to a bipartisan
11:37 pm
agreement, i would've laughed. but it's pretty incredible to watch the widespread fear over what could happen here. we had an oversight hearing recently where members on both sides of the aisle were expressing real concern. and, while i don't think anything is going to happen ahead of november, i see changes coming and there are bipartisan discussions. senator klobuchar is leading it, along with senator warner. they are thinking about ways they can do something. it's really only in the deepfakes space, not the misinformation and disinformation that's underneath it all. but it's the discussion of ai, and how ai is so at the forefront of everything we are discussing in this country, that i think has brought misinformation, disinformation, and the way information gets disseminated more to light, and that is bringing this
11:38 pm
discussion out. things could change; i'm hopeful. sec. clinton: i really appreciate your talking about that, because a lot of people say, well, who oversees elections? who tries to make sure that our elections don't go off the rails and we don't have a lot of these problems? as you just heard, it's not the federal election commission; their mandate is narrow, and they try to make sure people who are contributing to elections have the right to do so and candidates are spending appropriately. so much of the work of regulating elections in our country is done by the states. we are so fortunate to have jocelyn here because, as i said in introducing her, she has been at the forefront of trying to figure out how to protect our elections, to make sure they have integrity. michigan has recently tried to regulate artificial intelligence.
11:39 pm
i want you to tell us about that legislation and any other actions that you are taking on behalf of your state, and that you know other states are taking. maybe just start with a quick introduction of what you have been facing. you were elected in 2018, and if you remember the pictures of armed men storming the capitol because they didn't like what the governor was doing about covid, michigan was at the center of all of the crazy theories that were put forth in 2020 about the election. give us a quick overview and tell us what your regulations intend to do and what else needs to be done. sec. benson: thank you, secretary clinton, for inviting me to be a part of this important conversation. we cannot protect the security of our elections if we don't take seriously the threat that artificial intelligence poses to
11:40 pm
our election officials. to ensure every vote is counted and every voice is heard, and that citizens have confidence in their democracy and in their voice, that is our goal in michigan and in several other states around the country. we are coming off of being in the spotlight in 2020, rising to that occasion, seeing and living it very clearly when people with guns showed up outside my home in the dark of night in december, and i was inside with my four-year-old son trying to keep us safe. that's real. they showed up there, just like they showed up at the u.s. capitol on january 6, because they were lied to. now we are facing an election cycle where those lies will be turbocharged through ai, and we have to empower citizens to stand with us in not being fooled and in pushing back on that misinformation and those lies. therein lies the opportunity and the real challenge. how do we, in a moment where the adversaries to democracy are focused on sowing seeds of
11:41 pm
doubt, creating confusion, chaos, and fear in everything they do, and now have this new emerging technology that is getting stronger and more effective at accomplishing those goals of creating chaos, confusion, and fear in our democracy and in our voters' minds, how do we respond to that throughout our country, as citizens? by giving each other certainty and confidence that our democracy will stand, just as it prevailed in 2020 and every time before and since, and also so that we can be equipped to have clarity as to how to respond when we get this misinformation. in michigan, first we set up the guardrails. several other states have done this, and we hope the federal government joins us, in banning the deceptive use of artificial intelligence to confuse people about candidates, their positions, how to vote, where to vote, or anything regarding elections.
11:42 pm
we have drawn a line in the sand: it's a crime to intentionally disseminate, through the use of ai, deceptive information about elections. we require disclaimers and disclosures on any type of information generated by artificial intelligence that's focused on elections. so, for example, one of the things we are worried about, and we know this because ai can be targeted, is a citizen getting a text on their phone saying, here's the address of your polling place on election day, don't go there because there has been a shooting, and stay tuned for more information. that will invoke fear in a citizen. with the disclaimer and disclosure requirement in place, it has to be disclosed that this has been generated by artificial intelligence. it's not sufficient, but it's a key piece of enabling people to push back. the other side is that we need to equip that citizen, when they receive that text, to be fully aware as a critical consumer of
11:43 pm
information as to what to do, where to go, how to validate it, and where the trusted voices are. we are building voter confidence, building out these trusted voices so that faith leaders, business leaders, labor leaders, community leaders, and education leaders, even mayors and local election officials, can be poised to be aware and to push back with trusted information. so it's layers upon layers of legal protections and partnerships to equip our citizens with the tools they need to be critical consumers of information. and everything we do between now and every election, leading up to november, is about helping to communicate in every room we are in that it's on all of us to protect each other from the threat of ai in regard to our elections and in many other spaces as well. and while we as officials will be working to do that, we are also trying to communicate to citizens: this is the moment that will define our country for
11:44 pm
years to come, and we all have a responsibility in this moment to make sure we are not fooled, our neighbors aren't fooled, and our colleagues and friends aren't fooled, and to equip ourselves with the tools we need to push back and speak the truth, value honor and integrity, and help define our country moving forward, rooted in those values. sec. clinton: i am a huge fan of what you and your attorney general and your governor have been doing, and i think it would be great if you could get some help to model this, and i'm hoping maybe some tech company or some foundations will talk to you afterwards, because we need to show this can work. i saw michael nodding his head. if this is a fight against disinformation, we have to try to put up guardrails, but we also have to flood the zone with the right information to counter the negativity that is out there.
11:45 pm
so, i hope you can implement that and we can then all learn from it, because it's not a problem that goes away after this election. anna, you have been sitting here through the first panel, and now you have heard our other panelists, and you are truly at the center of this, because with chatgpt, y'all are moving faster than anybody can imagine, sometimes, i think, probably even yourselves, about what you are creating and the impact it will have. this is obviously the ground zero year. this is the year of the biggest elections around the world since the rise of ai technologies like chatgpt. so, can i ask you, do you agree with what you heard from the panelists about the dangers? but then, tell us what you are doing
11:46 pm
at openai to try to help safeguard elections. give us your assessment. are we overstating it, are we understating it? and what can be done, and how can you help us do it? anna: i think what's interesting to me, listening to your first panel and to your panelists here, is that so many of the ideas and the concerns are things we are already integrating into the technology. if i could just say, the one piece of good news is that, unlike previous elections, in terms of the tech companies, election officials, even the public and the press, we are not coming in unprepared. this is especially true for me because i was working at the white house in 2016, so this has been top of mind for me. but at openai, a relatively young company, this is something that has been top of mind for us for years.
11:47 pm
gpt-2, which was several years ago and is quite embarrassing compared to what we have now, was state-of-the-art at the time. it produced text like a human could write. even then, we thought the possibility for this to be used to interfere with the electoral process was very significant. we made a decision not to open-source it, and it was controversial in the research community, but it was because we had this in mind. ahead of 2016, we did not have things like this. in general, we are much more prepared as a society, and we are working with the national association of secretaries of state and with social media companies, because one key thing to remember is that there is a real distinction: as ai companies, we are not dealing with the same kinds of issues. we are responsible for generating ai content rather
11:48 pm
than distributing it, so we need to work across that chain. as many have mentioned here, and as i hear in almost every interaction with policymakers, deepfakes are a very serious concern. for us, we have an image generator, and we do not allow it to generate images of real people, and in particular politicians. and we are now implementing something that is a digital signature. the great thing about it is that it's not just ai companies, it's the new york times, nikon, the bbc. so there is actually going to be a tool across the whole ecosystem that will help journalists and social media companies identify content generated by ai. obviously this is not a complete solution, but this was not the case a year ago, so the entire ecosystem is already much more advanced in dealing with these issues.
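the encrypted mark chertoff mentioned and the digital signature anna describes here rest on the same basic idea: whoever creates a piece of media signs a hash of the content plus some provenance metadata, and anyone downstream can check that the metadata still matches exactly those bytes. the sketch below illustrates that mechanism in python; the function names and metadata fields are illustrative assumptions, and it is a simplified stand-in, not the actual c2pa / content credentials format or openai's implementation.

# minimal provenance-credential sketch (illustration only, not the real c2pa spec):
# the issuer signs a hash of the media bytes plus metadata; a verifier checks both
# that the bytes are unchanged and that the signature came from the issuer's key.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_credential(media_bytes, metadata, private_key):
    # bind the metadata (e.g. "generated by an ai model") to these exact bytes
    payload = {"sha256": hashlib.sha256(media_bytes).hexdigest(), "metadata": metadata}
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(message).hex()}

def verify_credential(media_bytes, credential, public_key):
    # reject if the media was altered after signing
    if credential["payload"]["sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    message = json.dumps(credential["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), message)
        return True
    except Exception:
        return False

# example: an ai image provider labels its output; a newsroom verifies the label
issuer_key = ed25519.Ed25519PrivateKey.generate()
image = b"...image bytes..."
cred = make_credential(image, {"generator": "ai-image-model", "date": "2024-03-28"}, issuer_key)
print(verify_credential(image, cred, issuer_key.public_key()))            # True
print(verify_credential(image + b"edit", cred, issuer_key.public_key()))  # False

an in-band watermark embedded in the pixels or audio works differently under the hood, but the step a journalist or platform performs is the same in spirit: check a cryptographic artifact against the content in front of them.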
11:49 pm
we have investigators, and we just took down a bunch of state actors who were using our tools. so, with these two pieces, cooperation across all of the players and all of the state-of-the-art interventions that we are building, i think right now the kinds of things that you describe are not possible with openai tools. we are constantly evaluating what other kinds of novel threats this technology creates. and i would just wrap up with this: i do have optimism, otherwise i would not work at openai. one of the things that these tools have the potential to do is to create access to education for new segments of society, so there is a potential that these tools
11:50 pm
actually help create a citizenry that is more educated and more aware, which is really key to a healthy democracy. and they can be used by secretaries of state, who are incredibly busy, and their back offices. it is a bit of a race between the positive applications of the technologies and the negative ones. this is what's so fantastic; president biden really worked for that one. sec. clinton: we only have a few minutes left, but i want to ask each of the panelists: what steps can governments, obviously national, state, and local in our country, ai companies, the platforms, nonprofits, anyone that you can think of, take to try to ensure the integrity of this upcoming election? and then, for the longer term, what other changes do we need?
11:51 pm
anna: i think it goes back to what i mentioned, which is really close collaboration, with ai companies, social media companies, election officials, and civil society really working together to address this problem and sharing knowledge, because this is a whole-of-society problem and no single actor is going to be able to fully solve it. >> i think the education component and pushing sources is key. the technology will change; there will be new technology in the future. whatever the government does or the tech companies do, we need to strengthen the trust that people can build in trusted forms of news. i think we are seeing some of that starting to change. sec. chertoff: i would say, in addition to those suggestions, information sharing. when there is an indication that
11:52 pm
something is coming that is part of a wave of disinformation, sharing that information, including between federal and state governments and with the public, is very important. i want to say, i know there is some litigation now where some states have tried to make it illegal for the government to share information about disinformation with the platforms, calling it censorship. i personally think that's nonsense. i think what you are doing is giving information that's helpful and not doing anything that's harmful. >> i agree. to invest in entities and partnerships focusing on this education and sharing of information, and building more collaborative partnerships and teamwork around this, all of that i think has to be the foundation, with every entity making its first priority protecting citizens from the ways in which ai could be negatively used against their voice and
11:53 pm
vote in our democracy, and recognizing that the adversaries to democracy have figured out how to divide us, demobilize us, and often deter us from believing in our voice through the use of ai. so our response needs to be similarly collaborative, national in scope, and focused on empowering citizens and partners across every arena and sector, tech and beyond, to be a part of the pushback and the protection of our citizenry from this threat to our democracy. sec. clinton: i cannot thank the four of you enough. maybe out of this panel will come the cooperation. let's get openai, facebook, and others together with people like jocelyn and michael, who have a lot of depth and breadth, and dara, who sees where the money flows are, and let's see if there cannot be some cooperative effort
