Monetary Incentives and the Production and Dissemination of Information in Social Media

Justin Muyot

FEU Public Policy Center

June 13, 2022

 

Social media platforms help fund the disinformation machinery. They provide peddlers of disinformation with additional revenue streams through the monetization of content. To be fair, social media platforms have cracked down on disinformation through account suspensions and content takedowns. These actions effectively cut off access to the monetization features of the different platforms. However, the companies behind social media platforms lack the capacity to review every post or piece of uploaded content. Disinformation agents continue to fall through the cracks and enjoy the benefits of monetization. Clearly, self-regulation of social media platforms has its limits. The governance of these digital public spaces must be shared among the companies operating the platforms, civil society, and governments. Furthermore, if the public wishes to improve the environment of social media networks, more thought must be given to how the rewards from content creation can be channeled to verified and credible content.

Commercializing public attention

Social media platforms use algorithms designed to prolong the time we spend scrolling through feeds and interacting with content. By effectively capturing the public’s attention, the companies behind these platforms can sell that attention to advertisers and generate revenue. Over the last five years, advertising revenues of social media platforms have roughly tripled as the number of users and ad placements continues to grow.

Figure 1. Global Advertising Revenues in USD Millions

           2017    2018    2019    2020     2021
YouTube   8,150  11,155  15,149  19,772   28,845
Meta     39,942  55,013  69,655  84,169  114,934

Note: Meta includes Facebook, Instagram, Messenger and third-party affiliated websites or mobile applications

Sources: SEC 10-K Filings

Figure 2. Global User Base in Billions

          2017   2018   2019   2020   2021
YouTube   1.47   1.58   1.68   1.78   1.86
Meta       n/a   2.03   2.26   2.60   2.82

Notes:

  1. Meta figures indicate the number of daily active people (DAP), a registered and logged-in user of Facebook, Instagram, Messenger, and/or WhatsApp.
  2. Meta started monitoring DAP in 2018. Previously, they did not consolidate figures for their different products.

Sources:

  1. SEC 10-K Filings
  2. Statista
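The claim that advertising revenues have roughly tripled can be checked directly against the figures above. A quick sketch in Python, with the 2017 and 2021 values copied from Figure 1 (USD millions):

```python
# Growth multiples implied by the advertising revenues in Figure 1 (USD millions).
revenue = {
    "YouTube": {2017: 8_150, 2021: 28_845},
    "Meta":    {2017: 39_942, 2021: 114_934},
}

for platform, r in revenue.items():
    multiple = r[2021] / r[2017]
    print(f"{platform}: {multiple:.2f}x growth from 2017 to 2021")
```

This yields roughly 3.5x for YouTube and 2.9x for Meta, consistent with the characterization that revenues have roughly tripled.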

Monetization across platforms

In 2012, YouTube opened the YouTube Partner Program to all users, allowing them to sign up, upload content, and earn income. This helped YouTube expand and cement its position as the leading online video platform. The YouTube Partner Program currently gives creators access to several monetization features: advertising revenue, channel memberships (monthly subscriptions), the merch shelf, Super Chat and Super Stickers, and a share of YouTube Premium revenue. Aside from adherence to various policies and guidelines, the minimum requirements are 4,000 valid public watch hours in the last 12 months and 1,000 subscribers. In addition, under the “right to monetize” clause introduced in its Terms of Service in November 2020, YouTube can place advertisements even on videos from channels that are not part of the YouTube Partner Program.

Like YouTube, Facebook allows users to monetize their content through a variety of features: in-stream advertisements, paid fan subscriptions, and brand collaborations. To be eligible for in-stream advertisements, pages must have at least 10,000 followers, 600,000 total minutes viewed in the last 60 days, and 60,000 minutes of live video in the last 60 days. Paid fan subscriptions are by invitation only, while brand collaborations go through an application process.
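For concreteness, the eligibility thresholds described above can be expressed as simple checks. This is an illustrative sketch only; the function and parameter names are hypothetical, and neither platform exposes eligibility through an interface like this:

```python
def youtube_partner_eligible(watch_hours_12mo, subscribers):
    """Minimum YouTube Partner Program thresholds cited above:
    4,000 valid public watch hours in 12 months and 1,000 subscribers."""
    return watch_hours_12mo >= 4_000 and subscribers >= 1_000

def facebook_instream_eligible(followers, minutes_viewed_60d, live_minutes_60d):
    """Minimum Facebook in-stream ad thresholds cited above:
    10,000 followers, 600,000 total minutes viewed, and
    60,000 minutes of live video, all within the last 60 days."""
    return (followers >= 10_000
            and minutes_viewed_60d >= 600_000
            and live_minutes_60d >= 60_000)
```

A channel with 4,500 watch hours and 1,200 subscribers would pass the YouTube check, while a page with 8,000 followers would fail the Facebook check regardless of its viewing minutes.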

Monetization and disinformation

The algorithms social media platforms use to serve advertisements effectively have been studied and exploited to distribute disinformation. These algorithms rely on an individual’s internet trail to determine the type of content that would best capture the user’s attention. Purveyors of disinformation have mastered the use of emotionally charged content to capture public attention and trigger engagement (e.g., comments, reactions, and shares). These micro-actions are processed by algorithms whose primary objective is to keep users interested, which makes disinformation likely to appear more frequently in a user’s social media feed. As content featuring disinformation goes viral, its creators enjoy additional revenue streams from its monetization.

Unlike transactions between politicians, intermediaries, and operators, which are almost impossible to track and regulate, the monetization of disinformation on social media platforms can be prevented or, at the very least, reduced. If the companies behind social media platforms are sincere in their efforts to foster a better environment on their respective platforms, they must respond to important questions regarding the monetization of disinformation.

First, is it ethical for companies operating social media platforms to earn from disinformation? While it is reasonable to allow these companies to recoup their expenses and earn a profit, monetizing disinformation further legitimizes its production and dissemination as a viable source of income. This provides individuals and groups involved in the disinformation machinery with a justification for their actions while setting aside the consequences of those activities for society. Furthermore, the act of monetizing disinformation could be perceived as tacit approval of its presence on social media platforms. The companies behind social media platforms must clarify their position. Their sincerity in fighting disinformation will be questioned for as long as they take down disinformation with one hand while monetizing it with the other.

Second, do social media platforms inform advertisers that their advertisements are placed alongside disinformation? Social media platforms offer advertisers various tools for targeted advertising. These allow advertisers to strategically place advertisements based on sociodemographic characteristics and a user’s internet trail. When advertisers can specify which channels, pages, and content their advertisements will appear on, they carry the responsibility to discern whether the content they help fund is, at the very least, harmless to society. On the other hand, when advertisers rely on the social media platform’s judgment on ad placement, the brands being advertised expose themselves to reputational risk through association with harmful content. More importantly, the largest share of revenues earned by social media platforms comes from advertising. Advertisers cannot deny their links to the monetization of disinformation. They must be involved in efforts to counter it.

Increased transparency

Any effort to prevent the monetization of disinformation requires greater transparency. As an initial response, an inventory of channels, pages, and accounts that monetize content must be made available. This will allow the public to participate in identifying sources of disinformation with monetized content. Information about channels, pages, and accounts that promote their content on social media platforms must also be made available, as these have been sources of disinformation, especially during election periods. To be fair, the companies operating social media platforms have been taking steps to suspend accounts and remove disinformation. However, it is naïve to assume regulation can be left solely to these companies. Third parties must be involved in the governance of digital public spaces. This will expand society’s capacity to combat disinformation. Third parties can also act as accountability mechanisms that monitor the behavior of advertisers and social network operators.

Anonymity and pseudonymity on the internet have allowed groups and individuals to spread disinformation free from any form of accountability. While there are laws that punish online defamation, legal action fails to prosper when a defendant cannot be identified. Governments could require social media platforms to perform know-your-customer (KYC) procedures to ascertain an individual’s identity, and could then ask these companies to provide information when complaints are filed against users of their networks. In crafting these regulations, civil society must play an active role in ensuring safeguards are established. There is a real possibility that governments could abuse this measure to harass political opposition or infringe upon the privacy of individuals. Requests for personal information must be open to public scrutiny.

Public policy and advertising

There are precedents for advertising regulation in the Philippines. Under the Consumer Protection Act and its Implementing Rules and Regulations, the Ad Standards Council is mandated to ensure that all advertising materials conform to its code of ethics. Unfortunately, this mandate focuses on the content of advertisements and does not extend to the placement of advertisements. Policymakers may wish to contemplate the regulation of digital advertisement placements to curb the monetization of disinformation. Regulations could require advertisers to identify the online content where their advertisements will be featured. Fines or penalties could then be imposed on advertisers found to have displayed their advertisements on content containing disinformation. This policy would work best in reducing the monetization of videos containing disinformation.

Rewarding good content

There are positive externalities to good content. In the case of politics and governance, good content can benefit society through a better choice of elected officials, insightful public participation in policy discussions, and stronger demands for government accountability. Economic theory predicts that goods and services with positive externalities will be produced in socially suboptimal quantities. The standard policy response would be a government subsidy for creating good content. In ideal situations, this would be sufficient. However, when the government is controlled by individuals who benefit from disinformation, there is no motivation to reward good content. Good content may even be suppressed in favor of controlled narratives. In such cases, society must consider alternative mechanisms to reward good content.

If the public wants to encourage the production of better content, the environment set by social media platforms and their advertisers must be conducive to doing so. Creators of good content must be prioritized in the distribution of rewards from monetization. To achieve this, a tiered system for creators could be explored. Using the YouTube Partner Program as an example, preference could be given to good creators by altering the distribution of rewards from YouTube Premium. An ordinary creator’s share of YouTube Premium revenue could be reduced, and the amount withheld pooled and redistributed to creators of good content. In effect, a tax is levied on the majority of creators so that creators of good content can be subsidized.
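The transfer mechanism described above can be made concrete with hypothetical numbers. The 5% levy rate and the payout figures below are illustrative assumptions, not actual YouTube Premium terms:

```python
def redistribute(premium_payouts, good_creators, levy_rate=0.05):
    """Withhold a fraction of every creator's Premium payout and
    redistribute the pooled amount equally among good-content creators."""
    pool = sum(p * levy_rate for p in premium_payouts.values())
    adjusted = {name: p * (1 - levy_rate) for name, p in premium_payouts.items()}
    bonus = pool / len(good_creators) if good_creators else 0.0
    for name in good_creators:
        adjusted[name] += bonus
    return adjusted

# Four creators with equal hypothetical payouts; one is a "good content" creator.
payouts = {"a": 100.0, "b": 100.0, "c": 100.0, "good": 100.0}
result = redistribute(payouts, {"good"})
```

With a 5% levy, each ordinary creator keeps 95.00 while the good-content creator receives 115.00. Total payouts are unchanged, so the subsidy is funded entirely by the levy rather than by the platform.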

Apart from transfer mechanisms embedded in revenue distribution policies, public participation in the distribution of rewards can also be considered. Despite the central role of public attention in the revenue model of social media platforms, audiences do not receive a share of the rewards. It can be argued that the attention and personal information the public provides more than make up for the cost of providing access to social media platforms. Ideally, audiences should receive compensation for the commodification of their attention and personal information. This compensation could then be transferred to creators of good content.

First Steps

Discussions involving monetary benefits are challenging because they touch the direct interests of groups and individuals. Fierce opposition is to be expected from those who stand to lose out. Nevertheless, these are necessary conversations that society must undertake. If Google’s decision not to accept election-related advertisements is any indication, there seems to be room for discussion regarding the monetization of disinformation. The first order of business is to build agreement around the idea that it is unethical to monetize disinformation. This should be followed by increased participation of third parties in the governance of digital public spaces. Over the long term, gains should be institutionalized through public policy and updated terms of service.

References

Facebook. (n.d.). How can I make money on Facebook? Retrieved May 28, 2022, from https://www.facebook.com/business/learn/lessons/how-make-money-facebook

Fournier, J. (2021, November 10). How algorithms are amplifying misinformation and driving a wedge between people. The Hill. Retrieved May 28, 2022, from https://thehill.com/changing-america/opinion/581002-how-algorithms-are-amplifying-misinformation-and-driving-a-wedge/

Gonzales, C. (2021, December 3). Comelec thanks Google, lauds no-political ads policy as ‘good news.’ INQUIRER.Net. Retrieved May 28, 2022, from https://newsinfo.inquirer.net/1522970/fwd-comelec-thanks-google-says-policy-on-political-ads-a-good-news

Owsinski, B. (2020, November 21). YouTube Does Its Creators A Big Favor, Even If They Don’t Realize It. Forbes. Retrieved May 28, 2022, from https://www.forbes.com/sites/bobbyowsinski/2020/11/22/youtube-does-its-creators-a-big-favor-even-if-they-dont-realize-it/

Popper, B. (2017, April 6). YouTube will no longer allow creators to make money until they reach 10,000 views. The Verge. Retrieved May 28, 2022, from https://www.theverge.com/2017/4/6/15209220/youtube-partner-program-rule-change-monetize-ads-10000-views

YouTube. (n.d.). YouTube Partner Program overview & eligibility – YouTube Help. YouTube Help. Retrieved May 28, 2022, from https://support.google.com/youtube/answer/72851?hl=en