>> We have six fact-checking partners, and these partners get access to a special dashboard from us that surfaces potential misinformation. When they rate a piece of content as false or altered media, we reduce the distribution of that content and we show a fact-checking label. Another thing I want to mention in terms of policy is that we differentiate between misinformation and disinformation. Misinformation is false information shared by people who genuinely thought it was true, even though it's not, whereas disinformation is false information that is spread with deliberate intent to mislead people. Everything I've talked about so far is how we tackle misinformation. For disinformation, we have a separate policy and separate systems whereby we look at the actors and the behaviors. So we look at signals such as whether a certain claim is being spread using fake accounts, or among accounts that coordinate to amplify a claim so that it becomes more prevalent than it is supposed to be. And when we find these signals, we take down the entire network. It doesn't matter what the accounts post: they will lose the accounts, and we will not allow the same actors to create new accounts. We have a system to detect when an actor that has already been removed for violating our policy tries to create a new account to keep spreading what they were spreading. The combination of these things is how we tackle political and election disinformation, again by balancing speech and safety, because we do not want to disproportionately penalize people. Especially with something like elections, there are a lot of issues people want to discuss and a lot of issues people want to criticize, and we don't want to disproportionately penalize people for doing that.
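A minimal sketch of the two enforcement paths described above: content-level demotion and labeling for fact-checked misinformation, and actor-level network takedown plus a re-registration check for coordinated disinformation. Everything here is an illustrative assumption (the class shapes, the 0.2 demotion factor, the label text, the function names); it is not Meta's actual code.

```python
# Illustrative sketch only -- not Meta's real systems.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    distribution: float = 1.0      # relative reach multiplier
    label: Optional[str] = None

@dataclass
class Account:
    owner_identity: str
    disabled: bool = False

def enforce_fact_check(post: Post, rating: str) -> None:
    """Misinformation path: demote and label rather than delete."""
    if rating in ("false", "altered_media"):
        post.distribution *= 0.2   # assumed demotion factor: reduced reach, not removal
        post.label = "Rated false by independent fact-checkers"
    # Opinions, satire, and unrated content are left untouched.

def take_down_network(network: list[Account], banned: set[str]) -> None:
    """Disinformation path: remove the whole coordinated network,
    regardless of what the individual accounts posted."""
    for account in network:
        account.disabled = True
        banned.add(account.owner_identity)

def may_register(identity: str, banned: set[str]) -> bool:
    """Recidivism check: actors removed for violations cannot return."""
    return identity not in banned
```

The design point the speaker is making is that the two paths key on different things: the first on what a piece of content says, the second on how a network of accounts behaves.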
>> All right. With 20 years in digital literacy, focusing on Indonesia, I think you have quite a lot of experience and examples of how we're dealing with these issues. >> Yeah, 20 years and still going, and I think the challenges keep growing. In 2024 we have the general election: the parliamentary election, the presidential election, and also the local leader elections for governors and mayors, and this will be the challenge for the disinformation issue. Disinformation campaigns and attacks on the election already happened in 2019, and I'm afraid they will happen again in the following elections. So our experience is that we try our best to educate people, with the things I have already mentioned before. People should be aware of how the social media system works: social media can not only broaden our insight, it can also narrow our opinions, because what we read and see on social media is what we already want to see, filtered by the algorithm, and that disproportionate exposure is something we need to make people aware of. It will be challenging for these next elections, but I'm more afraid of this disinformation and misinformation bringing people into division. It already happened in 2019: friends became enemies, and even some divorces happened because husband and wife trusted different presidential candidates, different local leaders. So we not only have to educate people; we also have to show how social media sets our minds about our choices in this election. I think this is also important. >> Thank you. Maybe you can give your remarks and reflections on what we could learn from South Korea and from other countries as well. >> One of the reasons that policymakers believe election-related disinformation needs to be specially treated and specially punished, as opposed to other general disinformation, is that elections take place over a short span of time, and people believe that false information, even if it is later corrected, will have an adverse impact on the election. So countries around the world flock to criminally punish election-related disinformation. But that is disproportionate, and the urgency of the election is exactly the reason why you should not criminally punish disinformation. Just as false information can adversely impact the results of the election, criminal punishment can also have an adverse impact on the election, because criminal punishment itself will discourage people from having vigorous debate about issues where falsity and truth need to be further debated. And, therefore, there is less and less possibility for the truth to come out, especially information about corruption. Information about corruption, especially at a high level, is not low-hanging fruit in the pursuit of truth. It is hidden behind walls of power, walls of secrecy, that reporters and ordinary people have no access to. So any attempt to obtain information about corruption is bound to be based on very flimsy evidence, word of mouth, very unstable evidence, but that's how the discussion begins. That's how the crowdsourcing of information begins. One person witnesses a politician driving somewhere; another person witnesses the politician with, say, a gang leader. These pieces of the puzzle have to be put together into a mosaic for the truth to come out. But if anyone bringing in their one piece of the puzzle is criminally punished for having no or weak evidentiary basis for raising that information, for raising a question about the electoral candidate, then this crowdsourcing will be nipped in the bud and the truth will never come out. There's a case in point about this in Korea. In 2007, there was an election that elected President Lee. During that election, an opposition politician accused the then presidential candidate Lee of stock price manipulation. The opposition politician was criminally punished for election-related disinformation, and President Lee got elected. Even after that election, the conservatives served another term. So for ten years, this attempt to raise a question about his involvement in stock price manipulation was completely suppressed. Now fast forward to 2020, two years ago: President Lee was criminally punished for stock price manipulation. Which means that for ten years, the entire electorate had been in the dark about the corruption that the now former president was involved in.
These are the lessons that South Korea learned about the dangers of criminally punishing election-related disinformation. >> Thank you for those very important examples that we can learn from. And Ricky, being actively engaged in risk communication, you must also have very important lessons learned from that area. >> Sure. Thank you. In this field we now use the term "infodemic," because not only misinformation or disinformation, but also too much information, uncoordinated or inconsistent information, and miscommunication can cause a lot of negative impact. What I want to share is that the latest survey we did with Nielsen found that 40% of the public in Indonesia couldn't identify whether or not the news they receive is a hoax. And this is very important, especially during an election. I think what politicians, the government, platforms like Meta, and civil society in Indonesia need to understand is how polarization around an election can also have a very negative impact on other interventions and outcomes in the country. For example, back in 2017 and 2018, with the election coming in 2019, the government was implementing the measles-rubella immunization campaign, targeting children under 15 years of age, with a target of 95% coverage nationally and in every district. But then, probably unintentionally, the public started to think that some political parties were using the halal issue, for example, to gain support and popularity, and with certain vaccines in Indonesia as the example, that caused a lot of hesitancy from parents about immunization for their children. Or a program is considered a government program, so it is seen as benefiting the incumbent but not necessarily the opposition. That kind of polarization can create a very negative impact on development programs, especially health and immunization. So I think politicians, political parties, and the platforms need to be aware that misinformation during the election can cause other negative impacts in the country. Over to you. >> All right. Thank you. And I think we are coming to the key question of our session: is public policy or legislation the important tool to deal with misinformation and disinformation, or is a multistakeholder approach and community engagement the most important thing on this matter? Alice, maybe you can give your opinion on that. >> Sure. At Meta, we believe that in order to tackle misinformation, what's important, and here I echo my fellow panelists, is really the collaboration between the private and public sectors and civil society organizations. That is probably the most important thing. And the aim should really be to improve digital literacy, to build a more digitally resilient society: the kind of society that is more informed before sharing information online, that knows how to find out the truth of information it sees online, and that is more discerning about the information it sees online.
Because in the end, the Internet has an unlimited amount of content and information, and it's only going to grow. I think there's no possible way for platforms or companies to get to every single piece of content, or for fact-checkers to debunk all of it. So in terms of regulation, I think self-regulatory codes such as the EU code and the Australian code on disinformation are, in our view, the most effective, because they offer companies the ability to develop systems that are fit for purpose, that fit the nature of how the services we provide work, because different platforms actually have different ways to do content moderation. Schemes like the EU code and the Australian code mandate us to report how our systems work and the effectiveness of our systems, even quantitative measures of the systems, and we're complying with that and building the systems to report these things. To the extent that misinformation regulation is going to be pursued, we think clear principles need to be incorporated to really balance safety and speech. In defining misinformation, for example, it needs to be limited to verifiably false information, not opinions, criticisms, and satire, which people should be free to share. And it needs to incorporate the element of harm: when misinformation could risk physical harm or offline harm, then platforms can be required to remove it. But otherwise, the focus should be on correcting the information, working with fact-checkers, and allowing people to find the accurate version of the content. >> Well, before we go on, I want to also acknowledge that we have a remote participation room from Chikaka, and Donny is helping us coordinate over there. So if there's any feedback later on, do let us know. And also, before the end, we hope to have a little bit of time if any of you have anything urgent that you want to share with us. But I really think the key question right now is: what do we do to move forward? We know that disinformation and misinformation are a problem for elections, health, COVID, and so on, and some civil society organizations are doing something about it, but at the same time governments are also setting up laws in many different countries and economies, including in Asia. So what is the impact of these laws? Are they accomplishing what they're trying to do? And is there any alternative? For example, Alice, you talked about some of these self-regulation codes that are in place in some jurisdictions; are they effective? Or, what I should say is: what is the better solution, if there is one? Is it to engage multiple stakeholders in improving the digital literacy of users, and would that solve the problem? Or would a combination of all these solutions be a better approach? What is your opinion on that? Maybe Dr. Park, you go first. >> I already made my Korea-based case for why a criminal disinformation law is a bad idea. There are other reasons why not just a criminal disinformation law, but any regulation-based approach to disinformation, is a bad idea. Because if you really look at the disinformation phenomenon, there are three problems.
One: state-sponsored disinformation is very often a much bigger problem than individually originated disinformation, because when disinformation is state-sponsored, it comes with a brand of legitimacy and has a really oppressive power over people. You can take the case of the Russia and Ukraine war, and how Russians are really blinded from what's going on in Ukraine, from what Russian forces are doing in Ukraine. So that's one thing. If you take a regulation-based approach to disinformation, who will have the handle on enforcing that regulation? The government. But a lot of the time, the government is the source of disinformation. That's one reason. The other: one big chunk of disinformation is really hate speech, hate speech against minorities. There is a lot of false information going around, right? People of different religions believe that other religions are false information, are fake news. But, still, those do not cause harm. Disinformation becomes harmful when it is perpetrated by the ruling majority against a weak minority for the purpose of persecuting them. So this means the harm does not come from the falsity of the information; the harm comes from the context of the information. It really doesn't matter whether the information is false or not. It has to do with who is speaking against whom, through what channel, and the context of the power relationship between the people speaking and the people being targeted. That's why a regulation-based approach can often misfire: a regulation-based approach always starts out with a decision, is this false or is this true. It comes with this binary approach, which really blinds people to the fact that the really harmful information is hate speech. It's really about racial and religious discrimination, which cannot be addressed or remedied by a regulation-based approach to disinformation. The third reason is that if you really look at disinformation, what really concerns people is not the falsity of the information; what concerns people is the pattern of the disinformation: how automated trolling is propagating more and more information that is either hate speech or state-sponsored propaganda. So it is the pattern of diffusion that we can focus on, and who knows it better than the intermediaries themselves, like Meta and the other players who have formed the platforms? They have all the technology, whether artificial intelligence or big data, to really detect the sources of automated trolling. So that's why I think a regulation-based approach will misfire. Just two years ago, again in Korea, an online influencer was criminally punished for creating software that automated adding comments supportive of a certain presidential candidate. He didn't do much other than use the open API. And it is something that many nonprofit organizations do around the world: taking personal information from a lot of supporters and using it to automate uploading petitions to the government in various legislative processes. So criminal punishment of even the pattern of diffusion can misfire as well. I'll get to that when we have time.
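Dr. Park's "pattern of diffusion" argument can be made concrete with a small sketch. The toy scorer below flags a claim when many accounts, especially freshly created ones, post near-duplicate content in tight time bursts; all thresholds, names, and data shapes are invented for illustration and are not any platform's real pipeline.

```python
# Toy "pattern of diffusion" scorer: behavior-based, not truth-based.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    claim_id: str        # e.g. a hash bucketing near-duplicate text
    timestamp: float     # seconds since epoch
    account_is_new: bool

def coordination_score(posts: list[Post], window: float = 60.0) -> dict[str, float]:
    """Score each claim by how burst-like and coordinated its spread looks."""
    by_claim: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        by_claim[p.claim_id].append(p)

    scores: dict[str, float] = {}
    for claim, group in by_claim.items():
        group.sort(key=lambda p: p.timestamp)
        accounts = {p.account_id for p in group}
        # Burstiness: fraction of posts landing within `window` seconds of the previous one
        bursts = sum(
            1 for a, b in zip(group, group[1:]) if b.timestamp - a.timestamp < window
        )
        fresh = sum(p.account_is_new for p in group) / len(group)
        # Many distinct accounts + tight timing + many fresh accounts => suspicious
        scores[claim] = (bursts / max(len(group) - 1, 1)) * len(accounts) * (1 + fresh)
    return scores

# Claims scoring above a reviewer-set threshold would go to human review,
# not automatic takedown.
```

Note the design choice that matches the argument: nothing in the scoring asks whether the claim is true; it only measures how the claim moves and who is moving it.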
>> Okay. We're running short on time, but anything to add from Adriana or Ricky on this, very quickly? >> Well, if I may: for elections that's probably true, but for a pandemic or for health-related issues, the people or agencies, whoever spreads the hoax or misinformation, clearly gain a lot of benefit from what they're doing, and it costs society a lot. It causes outbreaks; it causes low immunization coverage. So for an issue like health, it's clear that immunization is an effective public health intervention, and for those people or organizations who spread doubt and keep people from protecting themselves from infection, I think regulation probably needs to stay involved to stop that. But, again, literacy is the long-term solution, right? For countries like Indonesia, we are only at the beginning of thinking about digital literacy: how to include it in the education system, things like that; how to equip healthcare workers with the digital literacy skills to be able to communicate with the community. So digital literacy, we all need to understand, is quite a long-term effort. But for issues like health, what I really want to hear from my colleagues is what you think the solution should be. I don't want to sound cliché and say we need to do everything, but I think regulation still has a role in terms of making people understand the risks of doing that. Over to you. >> What I wanted to say has already been mentioned, so maybe we continue to the floor. >> Yeah. Actually, I also want to say that there have been a lot of comments and questions in the Zoom chat from our online participants. Some are asking whether we can show the questions to our speakers, but I guess it's a little bit difficult because they would have to turn around to read them. But I think many of them are very good, and maybe afterwards we should go back to some of these comments and collect some of the information when we draft our report for the APrIGF. But because we're running short on time right now, I want to see if we have any comments from the floor. Again, Donny, from your remote room. >> Yeah, there are several comments, but I'll choose three of them. One is from Indonesia: since you mentioned criminal punishment and other examples, and given the rapidly evolving social, technological, and political dimensions of disinformation, including its creation, spread, and impact, is there any combination of solutions that is effective while keeping the information ecosystem open and free? That's one. The other one is addressed particularly to Meta: what is Meta doing about disinformation from a politician, for example in their declarations? And the third one, from Ying, to everyone: when we talk about media literacy, how about science literacy? And another opinion is that the government should not censor private chat rooms. That's three, thank you. >> Okay. So, Dr. Park and Alice, do you have any quick feedback on those questions before we wrap up our session? Alice? >> Sure. On cross-border disinformation.
So we do have a policy called coordinated inauthentic behavior. As I mentioned earlier, it's essentially our main way to tackle disinformation. We look at signals of bad actors and bad behaviors: signals that a certain claim is being artificially amplified by coordinated accounts, specifically if they're using fake accounts. When we detect these types of behaviors and actors, we take down the entire network. We also launched the Ads Library a few years ago in response to issues, I believe mainly in the U.S., whereby actors from other countries were amplifying certain kinds of political issues in the U.S. It is now launched in many countries, including many countries in Asia-Pacific. When people want to publish an ad on Facebook that is political in nature, first they have to register, so we know the identity of the people behind it and where they're located. And if they're not based in the country that is actually holding the election, they are not allowed to advertise that content on Facebook. And the political ads that are published live in our Ads Library, which is publicly available, for seven years, even after the ads are no longer running. So people can see who is spreading what messages, where they're from, and who they are.
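A hedged sketch of the ad-transparency flow just described: political advertisers must register a verified identity and location, out-of-country actors cannot run political ads for a given election, and published ads remain publicly browsable for seven years. The function names and data structures here are assumptions for illustration, not Meta's real API.

```python
# Toy model of the political-ads rules described above; not a real API.
import time

SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 3600
ads_library: list[dict] = []   # the public archive anyone can browse

def submit_political_ad(advertiser: dict, ad_text: str, election_country: str) -> bool:
    """Gatekeeping order: verified identity first, then in-country check, then archive."""
    if not advertiser.get("verified"):
        return False                         # must register and verify identity
    if advertiser["country"] != election_country:
        return False                         # no cross-border political ads
    ads_library.append({
        "advertiser": advertiser["name"],
        "country": advertiser["country"],
        "text": ad_text,
        # Stays publicly visible even after the ad stops running.
        "public_until": time.time() + SEVEN_YEARS_SECONDS,
    })
    return True
```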
>> Okay. Dr. Park. >> Well, I think this is one example of a good solution to disinformation: self-regulation by the intermediaries. Back in the 1930s, the American media scene was very divided between right-wing newspapers and left-wing newspapers that were spreading false news on both sides. They all died out, because even if you are really extreme right wing, you want to get balanced information. That's what the consumers want, and for newspapers to retain their consumers, all the newspapers had to move closer to the center. Of course there are exceptions, but mostly, American media gravitated more and more toward the center to keep their customers. And the same thing will happen with Facebook and other platforms: they have to do this self-regulation to keep people online. There are many, many platforms that simply went out of business because people stopped going there, because they were full of hate speech and full of disinformation. The idea is to let this ecosystem flourish, and the government should not try to either make use of disinformation or criminally punish disinformation. Either would be a poison for the growth of the ecosystem. >> Well, you seem to be the optimist, because people are saying that the world is different now because of social media; I don't know whether that is your view. But I think we're coming down to the time where we need to wrap up. We had one more question that we wanted to talk about, which is the impact of technology, but we might not have time to really discuss it, except if each of you can round up your thoughts in less than one minute about today's discussion: how do we move forward, and will technologies like AI and the metaverse become a solution, or will they cause more problems? >> I think we also want to briefly discuss the trends and what needs to be debated with upcoming technology developments like AI. >> Yeah, and the admin is telling us to wrap up, so as quickly as you can. >> It's okay to overrun a little bit. >> To overrun a little bit? I got your signal wrong, then. So maybe we start with Ricky. Give us a bit of your final thoughts and whether this new technology will be a solution or a problem. >> Okay, thank you. Having digital literacy is crucial, but it is long-term. Even as these technologies develop, people need to be taught to understand the risks and encouraged not to do things that make other people suffer. That's about how people can better communicate and have conversations on social media. As our colleague from Korea mentioned, having that conversation is really important; providing that space for conversation is important. This is democracy. At the same time, I think there are certain rules that need to be applied, especially for issues like health, where misinformation and hoaxes circulated during a pandemic could really damage countries, at the least by causing an outbreak of something that is actually preventable. So I think that needs to be balanced. And again, I want to highlight coordination. In Indonesia there are different players, different capacities, different interests, and based on the COVID-19 experience, at least from what I see, there is always a gap between what government is doing and what civil society is doing. And this is where I think engagement is really important. Thank you. >> All right. Dr. Park, Alice, any remarks on the upcoming challenges in terms of new technologies, AI, or even deepfake technologies? >> Right. So to wrap up how we move forward from here: I want to echo what everybody said about collaboration and focusing on digital literacy. Personally, I support what was mentioned earlier about getting governments to put digital literacy in the school curriculum. I believe Taiwan has done it; I hope more governments will do it. I don't know about any of you, but I remember having to learn calculus in high school, and I have never used calculus since. So why do we have to learn that? I think students would be better off learning digital literacy as a basic required part of the curriculum instead. Other than that, in terms of regulation, I believe it is more effective to let platforms build systems that can really tackle the issues based on the requirements the platforms have, to require them to transparently report on the effectiveness of those systems, and for governments not to focus on a piecemeal approach, like individual pieces of content, but to focus on the systems. Thank you. >> The harms coming from social media are based on technology, and we need technology to fight back. Detecting patterns of diffusion, again, requires the operation of AI and big data, and it should be used actively by the intermediaries to flag or push out content. It becomes a problem when regulation comes in and the government starts dictating which content should be suppressed, which content should be taken out. So I'm all for moderation, because moderation is also a form of speech. Moderation is a way for humanity to come closer to truth. And digital and media literacy are important. As you saw in my title, I'm a professor, but I'm also heading Open Net, a nonprofit digital rights organization, and we also run media literacy programs; sitting in the audience is someone leading a media literacy campaign.
And digital literacy is also important, because being able to do a Google search, being able to use the devices, gives people the capacity to get to information, to better information. And access is also very important. The UN Human Rights Council, almost every two years, repeats the resolution that the rights people have offline must also be protected online. What they mean is that offline, people speak and their speech evaporates into the air; there's no way for the government to track down every comment people make offline. That freedom should also be present online. People should be able to correct themselves. Somebody talked about cracking down on private chats. People should be allowed to share even possibly false information, to share the information with others to test it out, to share their opinions. If you start cracking down even on private chats to root out false information, that's not treating speech the same online as offline. >> Okay. I think maybe it's a little bit cliché, but multistakeholder collaboration is a must, because it runs from upstream to downstream: upstream, things like digital literacy and media literacy; then self-regulation by the intermediaries; and then downstream, regulation is also important in combatting this. So my view is, again, that multistakeholder collaboration works. >> All right. Thank you. I think we need to wrap up, but we will not give you a summary; I think it is healthier to have a more open discussion on this. And, as also reflected by Dr. Park, I think political debate needs to be protected. Thank you to all of you, including all the speakers. We're going to have another session after this. Thank you very much. >> Thank you