DOI: https://doi.org/10.65281/702367
Xingchen Zhao
Jiaxing University, Zhejiang Province, China
Email address: [email protected]
Abstract: Against the backdrop of artificial intelligence profoundly transforming content distribution, whilst algorithmic recommendation technology has enhanced the efficiency of information matching, it has also triggered an increasingly severe ‘information bubble’ effect, presenting a governance challenge in the digital age. Current regulatory efforts face practical constraints such as legal lag, insufficient incentive on the part of platforms, complex technical countermeasures, and a narrow scope of oversight. Consequently, this paper proposes the establishment of a multi-stakeholder collaborative governance framework characterised by ‘government guidance, platform self-regulation, social coordination, and user participation’. Through a dual-pronged approach of institutional innovation and technology for good, this paper aims to break the cognitive loop and restore information diversity, thereby providing a framework for maintaining social consensus and promoting the healthy development of the digital ecosystem.
Keywords: algorithmic recommendation; information echo chamber; regulatory dilemma; collaborative governance
Introduction
Against the backdrop of an era in which the digital economy and artificial intelligence technologies are increasingly intertwined, algorithmic recommendations have become the primary method of information distribution on the internet, fundamentally transforming the way people acquire knowledge and consume information. With the exponential growth of data, platforms have utilised algorithmic models such as collaborative filtering and deep learning to align information supply ever more precisely with user demand; whilst this has improved distribution efficiency, it has also given rise to the risk of algorithmic alienation [1]. Currently, algorithm-driven information flows exhibit a strong tendency towards homogenisation; users, having received highly targeted information, are prone to becoming trapped in cognitive feedback loops. This continuous cycle, reinforced by individual preferences, diminishes the public nature and diversity of social information and, as individual effects accumulate across society, gives rise to deep-seated social contradictions such as group polarisation, fractured consensus and the entrenchment of digital class divisions. Although relevant regulatory policies have been tentatively established, due to the ‘black-box’ nature of algorithms and the inherent drive of platforms to maximise traffic revenue, current governance measures remain significantly lagging and limited in addressing the issue of information silos. Consequently, this paper’s exploration of the regulatory dilemmas surrounding information silos under platform algorithmic recommendation, and of the pathways for multi-stakeholder collaborative governance, is of considerable theoretical and practical significance.
1 Overview
1.1 The technical logic behind algorithmic recommendations
The logic behind algorithmic recommendations involves matching high-dimensional feature vectors and applying non-linear mappings to arrive at an optimal information-distribution scheme. From a technical perspective, this is primarily achieved through collaborative filtering and deep learning models (Fig. 1). Collaborative filtering draws on collective intelligence, identifying similarities in users’ historical behaviour data or correlations between items in order to predict individual latent preferences; content-based recommendation, on the other hand, forms a refined semantic tagging system through in-depth analysis of metadata such as text and video. With the advancement of neural network technology, algorithms have begun to incorporate elements of reinforcement learning. Such systems monitor users’ clicks, retention and interaction feedback on pushed content in real time and update the model dynamically against this feedback, thereby enabling the continuous refinement of recommendation strategies. At its core, this constitutes a closed-loop feedback system oriented towards user engagement and retention; whilst it strives to maximise prediction accuracy and click-through rates, it also, in effect, drives the distributed information towards convergence.
Fig. 1 Algorithm Recommendation Logic
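To make the collaborative-filtering logic described above more concrete, the following minimal Python sketch illustrates user-based collaborative filtering: items are scored for a target user by the similarity-weighted behaviour of other users. The interaction matrix, user indices and parameters are purely illustrative assumptions, not any platform’s actual implementation.

```python
import numpy as np

# Hypothetical user-item interaction matrix (rows: users, columns: items);
# a 1 means the user engaged with the item, a 0 means no recorded interaction.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two interaction vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user_idx, k=2):
    """Score unseen items for one user by the similarity-weighted behaviour of other users."""
    target = interactions[user_idx]
    sims = np.array([
        cosine_similarity(target, other) if i != user_idx else 0.0
        for i, other in enumerate(interactions)
    ])
    scores = sims @ interactions          # predicted affinity for every item
    scores[target > 0] = -np.inf          # do not re-recommend items already seen
    return np.argsort(scores)[::-1][:k].tolist()

print(recommend(user_idx=3))              # items predicted to suit user 3's latent preferences
```

Even in this toy form, the mechanism rewards items already favoured by behaviourally similar users, which is precisely the convergence pressure discussed above.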
1.2 The Formation of Information Silos
The formation of information silos is the result of the interaction between technological distribution mechanisms and individual cognitive psychology. From a technological perspective, the filtering methods driven by algorithms involve the retrospective analysis and feature extraction of users’ historical behavioural data, thereby constructing highly targeted information flow pathways. This filtering process excludes heterogeneous information, resulting in a high degree of homogeneity in the distributed content across both time and space [2]. From a psychological perspective, individuals exhibit tendencies towards selective exposure and confirmation bias during the consumption of information. Users prefer to seek out and accept information that aligns with their existing values, beliefs and aesthetic standards, thereby achieving cognitive equilibrium and reducing the cognitive load of information processing. When algorithmic logic resonates with these psychological defence mechanisms, the openness of the information transmission process diminishes. Individuals become confined within a cognitive domain defined by algorithmic logic, creating a self-reinforcing feedback loop. This mechanism severs channels of communication across groups, leaving individuals in a closed environment that offers no point of comparison, and gradually eroding their ability to comprehend the full complexity of the social landscape.
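The self-reinforcing loop described above can be illustrated with a deliberately simplified simulation; the topic counts and the single ‘personalisation’ parameter are invented for illustration and are intended only to show the direction of the effect, not to model any real recommender.

```python
import random
from collections import Counter

random.seed(42)
TOPICS = list(range(10))                   # ten hypothetical content topics

def simulate_feed(rounds=50, items_per_round=5, personalisation=0.9):
    """Toy model of the loop: the 'algorithm' mostly serves topics the user
    has already engaged with, mimicking filtering plus selective exposure."""
    history = Counter({t: 1 for t in TOPICS})   # start from uniform exposure
    concentration = []
    for _ in range(rounds):
        for _ in range(items_per_round):
            if random.random() < personalisation:
                # exploit: sample topics in proportion to past engagement
                topic = random.choices(TOPICS, weights=[history[t] for t in TOPICS])[0]
            else:
                # explore: serve a uniformly random topic
                topic = random.choice(TOPICS)
            history[topic] += 1
        # share of total exposure captured by the user's top three topics
        top3 = sum(count for _, count in history.most_common(3))
        concentration.append(top3 / sum(history.values()))
    return concentration

share = simulate_feed()
print(f"Top-3 topics' share of exposure after 50 rounds: {share[-1]:.2f}")
```

With a high personalisation rate the top few topics quickly absorb most of the user’s exposure, whereas lowering the rate keeps the distribution closer to uniform.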
2 The Regulatory Dilemma of Information Silos in Platform Algorithm-Driven Recommendations
2.1 The Lag in Legal Regulation
In an era of rapid development in information and communications technology, most existing legal frameworks were formulated based on the static regulatory approaches of the industrial age and are unable to adequately cover technological distribution activities centred on deep learning and reinforcement learning. Firstly, the formulation of legal norms involves strict procedural and cyclical processes, and the subjects they regulate typically exhibit specific behavioural characteristics. However, as algorithmic models rely on continuous feedback from big data to achieve self-updating and strategy refinement, this ‘black-box’ self-evolutionary nature makes it difficult, when legal provisions are applied, to identify the regulated subject and to characterise its conduct with precision [3]. Secondly, the current legal system contains blind spots in the definition of rights. Traditional rights such as the right to privacy and the right to be informed cannot fully encompass situations where ‘cognitive autonomy’ is compromised in algorithmic recommendations, leaving judicial practice without corresponding overarching legal grounds or adjudication standards to address the group polarisation or fragmentation of consensus caused by information silos. Finally, legal regulation often focuses on ex post facto remedies, whereas information silos shape social cognition subtly and irreversibly. This lag in the feedback loop renders legal intervention ineffective in addressing structural information imbalances. When legislators attempt to intervene through mandatory algorithmic disclosure or transparency requirements, they are constrained by the tension between the protection of trade secrets and the costs of technical implementation. Consequently, the legal system struggles to strike a substantive balance between safeguarding public information diversity and respecting the operational autonomy of market entities.
2.2 Insufficient Internal Drivers of Platform Responsibility
Within the framework of the attention economy, the core commercial objectives of internet platforms lie in maximising user retention, interaction frequency and time spent online. As an efficient tool for monetising traffic, algorithmic recommendation is fundamentally driven by the need to precisely match individual preferences to strengthen user stickiness. The formation of information silos is, in essence, a process whereby platforms reduce cognitive strain and enhance user satisfaction by feeding users highly homogeneous content; this mechanism is highly consistent with the platforms’ profit models. Consequently, requiring platforms to proactively introduce heterogeneous information or forcibly break users’ interest loops inevitably entails the risk of user attrition and diminished commercial value, creating a strong pull towards adverse selection whenever platforms are asked to fulfil their social responsibilities [4]. Furthermore, platforms face an asymmetry of costs and benefits in algorithmic governance. Building a complex algorithmic system capable of balancing information diversity with ethical scrutiny not only requires substantial investment in research, development and operations, but may also result in a loss of competitive advantage due to reduced recommendation accuracy. In the absence of effective external constraints and incentive mechanisms, platforms often tend to adopt strategies that focus on formal compliance rather than substantive governance, using the algorithmic black box as a shield to evade public oversight. This logic of evading responsibility based on technological neutrality obscures the profound value-oriented issues underlying algorithmic distribution, resulting in a lack of subjective willingness on the part of platforms to undertake fundamental reforms at the technological level when addressing external negative effects such as group polarisation and social fragmentation.
2.3 The Complexity of Technological Countermeasures
In today’s digital ecosystem, algorithmic recommendations have evolved into a deep neural network system constructed from vast numbers of parameters, with decision-making pathways exhibiting a high degree of non-linearity and opacity. This ‘technological black box’ characteristic presents regulators with a significant challenge in penetrating the underlying logic when attempting to intervene substantively in information silos. Traditional external auditing methods are often limited to observing homogenised outcomes at the output stage, whilst struggling to trace and determine the compliance of weight configurations at the input stage and feature extraction in intermediate layers; consequently, regulatory actions remain confined to the superficial level. At the same time, algorithmic models possess a strong capacity for dynamic self-adaptation. Through frequent iterative updates and the application of differential privacy techniques, platforms can effectively evade feature extraction by external monitoring tools, creating a de facto technological arms race. In this adversarial environment, regulators must not only possess computational resources and model analysis capabilities on a par with developers, but must also address the challenge of highly fragmented data flows. This high-dimensional technological asymmetry means that any attempt to break down information silos through static parameter settings or a single administrative order is highly susceptible to being rendered administratively ineffective by technological subversion.
2.4 The Unidimensional Nature of Regulation
The current regulatory framework for algorithmic recommendations relies heavily on top-down administrative directives and departmental regulations, exhibiting distinct characteristics of one-dimensional governance. Within this structure, the government, as the sole centre of power, faces extremely high regulatory information costs and risks of enforcement bias. Due to the lack of substantive participation from civil society organisations, expert groups and individual members of the public, the regulatory process often fails to address the underlying logic and ethical nuances of algorithmic operations, resulting in a lack of flexibility in policy implementation at the micro level. The absence of third-party social evaluation mechanisms means that assessment criteria for algorithmic neutrality, fairness and diversity rely excessively on the self-assessment of administrative departments, lacking a dynamic process of negotiation amidst diverse value conflicts. At the same time, users, as the primary consumers of information, are often positioned within the existing regulatory framework as passive subjects whose rights are protected, rather than active participants in governance. This lack of intersubjectivity prevents the effective establishment of self-defence mechanisms for users’ digital rights, making it difficult to foster bottom-up social constraints on the algorithmic power of platforms. A singular regulatory dimension weakens the capacity for resource integration across society in algorithmic governance, limiting the transition of governance measures from punitive sanctions to collaborative incentives, and thereby hindering the ability to address social challenges such as the ‘information bubble’, which is characterised by its pervasive and covert nature.
3 A Multi-stakeholder Collaborative Governance Framework for Information Echo Chambers in Platform Algorithm-Driven Recommendations
To overcome the regulatory challenges outlined above, this paper proposes a multi-stakeholder collaborative governance framework, as illustrated in Figure 2.
Fig. 2 Multi-stakeholder collaborative governance mechanism
3.1 Institutional Innovation at the Government Level
Within the framework of multi-stakeholder collaborative governance, the government should assume the role of a meta-regulator and architect of institutional frameworks, relying on the restructuring of the legal system and the transformation of regulatory mechanisms to constrain algorithmic power. Firstly, it should promote a shift in logic from outcome-based regulation to process-based regulation, and establish a management system for the filing, classification and categorisation of algorithms. Regular algorithm audits should be conducted on leading platforms possessing strong social mobilisation capabilities and significant influence over information dissemination. Platforms should be required to disclose the ethical principles underlying their algorithm design, the biases in parameter weightings, and diversity assurance metrics, thereby enhancing the interpretability of algorithm operations. Secondly, the government should legislate to define the legal nature of the right to information choice and cognitive autonomy, and establish a dynamic algorithm accountability mechanism. External negative effects caused by algorithms, such as social polarisation and the fragmentation of consensus, should be incorporated into corporate responsibility assessment indicators. In terms of administrative measures, a cross-departmental collaborative regulatory platform should be established, integrating resources from departments such as cyberspace administration, industry and information technology, justice, and market regulation to create a closed-loop regulatory system covering the entire chain, from data collection and model training to information distribution. Furthermore, the government should establish a ‘prudent and inclusive’ incentive mechanism, offering policy preferences or credit bonuses to platforms that actively adopt ‘break-the-cocoon’ algorithms and improve information diversity. By providing institutional frameworks that encourage technology to serve the greater good, a balance between commercial innovation and the public interest can be achieved at the macro level.
3.2 Technical Self-Regulation at the Platform Level
Technical self-regulation at the platform level should begin with a logical restructuring of the underlying architecture and a diversification of algorithmic distribution methods, thereby translating social responsibility into executable code logic. Platforms should incorporate a dynamic balancing mechanism between ‘exploration’ and ‘exploitation’ into their algorithmic recommendation models, increasing the proportion of random sampling and boosting the distribution of long-tail content, thereby ensuring that whilst users receive personalised information, they are also exposed to public topics from other fields and diverse themes [5]. Platforms should establish a multi-dimensional recommendation evaluation system, transforming single metrics such as click-through rates and retention rates into comprehensive evaluation functions that incorporate information diversity, source credibility, and subject-matter breadth, thereby eliminating the risk of cognitive narrowing caused by a traffic-first approach at the very source of algorithmic design. Furthermore, from the perspective of technological transparency, platforms should translate algorithmic explainability into concrete practice by displaying key parameters of the recommendation logic to users via interactive interfaces. They should also provide users with customisable tagging functions, allowing them to freely modify their user profiles or clear all recommendation preferences with a single click, thereby restoring the individual’s agency in the information reception process.
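One possible sketch of the two mechanisms suggested above, an exploration/exploitation balance and a multi-dimensional evaluation function, is given below in Python. The weights, field names, exploration rate and candidate items are assumptions chosen purely for illustration; in practice such values would be tuned, audited and disclosed as part of the governance arrangements discussed in this section.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    predicted_ctr: float       # engagement signal (exploitation)
    topic_novelty: float       # 0..1: distance of the topic from the user's profile
    source_credibility: float  # 0..1: editorial or fact-checking score

# Hypothetical weights for the composite objective; in practice these would be
# tuned and audited rather than hard-coded constants.
W_CTR, W_NOVELTY, W_CRED = 0.5, 0.3, 0.2
EXPLORE_RATE = 0.15            # share of slots reserved for randomly sampled long-tail content

def composite_score(c):
    """Multi-dimensional evaluation instead of a single click-through metric."""
    return W_CTR * c.predicted_ctr + W_NOVELTY * c.topic_novelty + W_CRED * c.source_credibility

def rank_feed(candidates, slots):
    """Fill most slots by composite score and reserve a few for exploration."""
    pool = sorted(candidates, key=composite_score, reverse=True)
    explore_slots = max(1, int(slots * EXPLORE_RATE))
    exploit = pool[: slots - explore_slots]
    rest = pool[slots - explore_slots:]
    explore = random.sample(rest, min(explore_slots, len(rest)))
    return exploit + explore

feed = rank_feed(
    [Candidate("a", 0.9, 0.1, 0.6), Candidate("b", 0.4, 0.8, 0.9),
     Candidate("c", 0.7, 0.2, 0.5), Candidate("d", 0.2, 0.9, 0.8)],
    slots=3,
)
print([c.item_id for c in feed])
```

Reserving a fixed share of slots for randomly sampled long-tail content keeps heterogeneous items in the feed even when the composite score would otherwise favour familiar topics.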
3.3 Mechanisms for Social Collaboration and Public Oversight
The establishment of mechanisms for social collaboration and public oversight must focus on bridging the institutional gap between government regulation and platform self-regulation by introducing a third party to impose external constraints on algorithmic logic. Firstly, the organisational functions of industry associations should be strengthened to create unified ethical guidelines for algorithms and industry self-regulatory codes of conduct, thereby reducing the risks faced by individual platforms when implementing ‘breaking the cocoon’ strategies through collective consultation mechanisms. Industry organisations should establish regular compliance assessment and information-sharing platforms to facilitate cross-platform early warning and response mechanisms for social side-effects caused by algorithms. Secondly, the technical oversight role of expert think tanks and research institutions should be fully utilised. An algorithm audit committee comprising experts and scholars from various fields—including law, sociology and computer science—should be established. Using technical methods such as black-box testing and reverse engineering, this committee would conduct regular independent evaluations of the algorithmic neutrality and information diversity of mainstream platforms, whilst publishing authoritative social responsibility reports for the public. Furthermore, a response mechanism for public reporting and feedback must be established, granting relevant social organisations the right to initiate public interest litigation to seek legal redress for algorithmic infringements upon the public’s right to cognitive autonomy. By constructing this multidimensional matrix of social participation, a form of soft constraint reliant on intellectual authority and public opinion can be formed, thereby delineating effective ethical red lines at the boundaries of technological operation (Figure 3).
Fig. 3 Mechanisms for social collaboration and public oversight
3.4 Fostering and Enhancing Users’ Digital Literacy
Fostering and enhancing users’ digital literacy forms the micro-level foundation of multi-stakeholder collaborative governance, establishing a fundamental line of defence against algorithmic alienation by strengthening individuals’ sense of agency. Firstly, a systematic framework for algorithmic education should be established, incorporating algorithmic logic, data security and the principles of information distribution into the national education curriculum and lifelong learning programmes, thereby equipping individuals with the ability to identify algorithmic biases and information filtering mechanisms. Equipped with this ability to read algorithms, users can rationally scrutinise the homogenising tendencies of pushed content and become motivated to actively seek out heterogeneous information. Secondly, users should be encouraged to exercise their right to choose algorithms and their autonomy in interaction; they should proactively correct their behavioural tags, regularly clear historical data traces, and use different information platforms in combination to break the interest-based echo chambers constructed by individual algorithms, thereby reshaping the boundaries of their personal information intake. Thirdly, at the technical level, auxiliary governance tools should be developed, such as personalised recommendation analysis plugins or applications for detecting information diversity, to empower users with the ability to monitor and intervene in the quality of their own information streams in real time. By establishing a self-regulatory mechanism grounded in individual literacy, a countervailing influence can be exerted on the algorithmic offerings of platforms from the demand side, thereby achieving a harmonious balance between algorithmic logic and human cognitive freedom.
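As an illustration of how a user-side diversity-detection tool might quantify a feed, the sketch below computes the normalised Shannon entropy of the topic labels in a recent batch of recommendations; the topic labels, example feed and the idea of warning the user at a low score are assumptions for demonstration only.

```python
import math
from collections import Counter

def feed_diversity(topic_labels):
    """Normalised Shannon entropy of the topics in a user's recent feed:
    0.0 means every item shares one topic, 1.0 means topics are spread evenly."""
    counts = Counter(topic_labels)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))

# Hypothetical recent feed dominated by a single topic.
recent_feed = ["sport"] * 17 + ["politics"] * 2 + ["science"]
score = feed_diversity(recent_feed)
print(f"Feed diversity score: {score:.2f}")   # a low score could trigger a warning to the user
```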
Conclusion
The ‘information echo chambers’ created by platform algorithmic recommendations constitute a systemic risk arising from the interplay between the commercial logic of algorithms, individual cognitive biases and outdated regulatory measures. At present, practical challenges such as lagging legal regulation, insufficient incentive for platforms, and complex technical countermeasures can no longer be resolved through administrative orders alone. This necessitates a shift in governance logic towards a multi-stakeholder collaborative model characterised by government guidance, platform self-regulation, social coordination and user participation. Such an approach aims to reshape information diversity and social consensus in the digital age, providing practical guidance for the construction of an algorithmic ecosystem that is benevolent, transparent and imbued with humanistic care.
References
[1] Zhou J. Intelligent Recommendation Algorithm User Information Cocoon Room Effect Breakthrough Strategy[J]. Advances in Computer and Communication, 2025, 6(4).
[2] Chen J, Ding C. Algorithmic chains and social media mazes: the filter bubble dilemma in Xiaohongshu’s marketing strategy[J]. Advances in Social Behaviour Research, 2025, 16(4): 105-110.
[3] Zijun Z. An Analysis of the Formation Mechanism of Social Media Users’ Opinion Polarisation under the Information Cocoon Effect[J]. SHS Web of Conferences, 2025, 220: 04023-04023.
[4] Xu R. Research on Strategies for Short Video Users to Break the Information Cocoons under the Interference of Barnum Effect[J]. Journal of Business and Marketing, 2024, 1(1).
[5] Zhang X, Cai Y, Zhao M, et al. Generation Mechanism of “Information Cocoons” of Network Users: An Evolutionary Game Approach[J]. Systems, 2023, 11(8): 414.