Meta’s Experiment in Democratic Governance: Lessons Learned and the Path to AI Decision-Making

Meta recently conducted a large-scale experiment in governance, aiming to involve people from a wide range of demographics in decisions about how to responsibly govern the metaverse it is building. More than 6,000 individuals from 32 countries, speaking 19 languages, were selected to take part in this unusual corporate initiative. Through online group discussions and input from outside experts, the participants devoted considerable time and effort to the deliberations. Notably, 82 percent of participants approved of the format and recommended that the company use it in future decision-making.

Meta has publicly committed to running a similar process for generative AI, in line with growing interest in democratic innovation for governing or guiding AI systems. The decision puts Meta in the company of Google, DeepMind, OpenAI, Anthropic, and other organizations that are beginning to explore approaches based on deliberative democracy, an idea that I and others have been promoting. (I should note that I am a member of the application advisory committee for OpenAI’s Democratic Inputs to AI grant program.) Having witnessed Meta’s process firsthand, I am excited by its potential as a valuable demonstration of transnational democratic governance. But for such a process to be truly democratic, participants would need greater power and agency, and the process itself would need to be more publicly transparent throughout.

Aviv Ovadya advises investors and companies on the governance of artificial intelligence. He is also affiliated with Harvard’s Berkman Klein Center and GovAI.

In the spring of 2019, during an otherwise conventional external consultation on Meta’s policy on “manipulated media,” I met some of the employees who went on to establish Meta’s Community Forums, as these processes are known. Because I had been writing and speaking about the potential dangers of what is now commonly referred to as generative AI, I had been invited, along with other experts, to offer input on the policies Meta should adopt to address problems such as misinformation that the technology could amplify.

Around the same time, I came across representative deliberations, an approach to democratic decision-making that has gained traction around the world. When governments face difficult policy questions, rather than turning to referendums or elections, they convene a representative slice of the public selected by lottery. That group meets over several days or weeks, with compensation, to learn from experts and stakeholders, deliberate with one another, and produce a set of final recommendations.

Representative deliberations offered a potential answer to a problem I had been wrestling with for some time: how to make decisions about technologies whose impacts cut across countries. I began advocating for companies to pilot these processes on their most challenging problems, and when Meta launched such a pilot, I became an informal adviser to its Governance Lab, which led the project. I was also able to closely observe and participate in the design and implementation of its sprawling 32-country Community Forum process. I did not receive any payment for this work.

The Community Forum was particularly exciting because it showed that running this kind of process is actually possible, despite the significant logistical challenges. The proceedings were largely run by Meta’s partners at Stanford, and there was no sign that Meta employees tried to steer the outcome. The company also followed through on its commitment to have those partners report the results directly, whatever they turned out to be. And it was clear that some thought had gone into how the forum’s outputs might actually be implemented. The results included differing perspectives on the appropriate consequences for hosts of metaverse spaces with repeated bullying and harassment, along with recommendations for the kinds of moderation and monitoring systems that should be put in place.

In contrast with the often acrimonious tone of political discourse, the Meta Community Forum offered a refreshingly good-faith space for deliberation. But the process had notable flaws. Participants had little control over their interactions and no direct contact with Meta employees, which made the exercise feel more like an experiment to gather data than a genuinely democratic process. And although most participants appeared to grasp the issues at hand, the depth and extent of the deliberation sometimes seemed insufficient. Meta has also yet to follow through on its promise to clarify what actions it will take in response to the forum’s results.

When Meta applies a similar approach to generative AI, it should address the shortcomings of its initial Community Forum and draw on the well-established standards that governments use for comparable processes. And given the pace of AI progress, participants should be asked to specify the conditions under which their recommendations would still apply, and those under which they would no longer hold.

Some argue that the best way to deal with platform and AI issues is to leave them to existing democratic governments, or to decentralize decision-making entirely. Both approaches have serious limits. Autocratic and partisan governments have hindered relevant regulation or exploited it for their own ends, and challenges that span national borders are hard to address effectively. Open-source or protocol-based decentralization can address problems such as misinformation and harassment only to a limited extent, and crypto-based systems that lack representative deliberation produce even greater inequality, since those holding large numbers of tokens wield disproportionate power. We need ways for companies to make informed, democratic decisions, especially in cases where centralized non-state power may serve the public interest.

What would it take for a representative deliberation to truly live up to this ideal? To aim for a global mandate, the process could involve a smaller number of participants (around 1,000) spread across many countries, allowing more resources per person and therefore more time for in-depth deliberation. Participants could also be given more agency to directly propose ideas of their own, something the careful use of AI could help facilitate. Finally, the deliberations would need a structure that ensures the influence, transparency, and seriousness appropriate to a democratic process. For instance, the organizing entity should commit not only to releasing the results but also to responding to them by a specific date, and every session other than the small-group discussions should be made public.

To steer between the dangers of excessive centralization and chaotic decentralization, we must get better at making decisions collectively. These methods will not be flawless at first, or even after a few attempts, but if we are to survive in a world of rapidly advancing AI, we need to move quickly to explore and invent new approaches to governing across borders.
