Algorithmic Identity: How Recommendation Systems Shape Self-Understanding


Introduction

Digital platforms increasingly mediate self-understanding through algorithmic recommendation and curation systems. From Spotify’s music recommendations to TikTok’s “For You” page, algorithms shape what content we encounter, potentially influencing how we understand ourselves. This article examines the intersection of algorithmic systems and identity formation through theoretical frameworks and empirical research.

Theoretical Framework: Algorithmic Identity

Cheney-Lippold (2017) introduces the concept of “algorithmic identity”—the categorical self produced through data analysis and algorithmic interpretation. Platforms construct models of who we are based on behavioral data—clicks, views, searches, engagement patterns—then present content aligning with these algorithmic models.

This process differs from traditional identity formation. Rather than being wholly self-determined or purely socially constructed, algorithmic identity represents a hybrid: platforms’ interpretations of our data traces feed back into our self-understanding as we encounter algorithmically curated content.

The Recommendation Loop: Algorithms recommend content based on past behavior → users engage with recommended content → this engagement reinforces algorithmic models → more similar content gets recommended. This feedback loop may narrow or reinforce particular identity expressions.
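This loop can be sketched as a toy reinforcement model. This is a deliberately simplified illustration, not any platform’s actual algorithm: the genre list, the weight-per-genre model, and the update rule are all invented for the sketch.

```python
import math
import random

random.seed(42)

# Toy model: one weight per genre, reinforced by engagement (hypothetical).
genres = ["pop", "jazz", "metal", "folk", "ambient"]
weights = {g: 1.0 for g in genres}  # uniform start: no preference yet

def recommend():
    """Sample a genre with probability proportional to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for g, w in weights.items():
        r -= w
        if r <= 0:
            return g
    return genres[-1]

def entropy():
    """Shannon entropy (bits) of the model's genre distribution."""
    total = sum(weights.values())
    return -sum((w / total) * math.log2(w / total) for w in weights.values())

start = entropy()  # maximal: the uniform model has no preference
for _ in range(500):
    g = recommend()    # platform recommends from the model
    weights[g] += 1.0  # engaging with the recommendation reinforces the model

# Entropy can only fall from the uniform start: the loop concentrates
# recommendations on whichever genres happened to get engaged with early.
print(f"entropy: {start:.2f} -> {entropy():.2f} bits")
```

Because reinforcement compounds (a rich-get-richer dynamic, formally a Pólya urn), early engagements disproportionately shape the final profile, which is one mechanism behind the narrowing the loop description suggests.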

Music Streaming and Taste Formation

Music streaming platforms exemplify these dynamics. Spotify’s recommendation system analyzes listening history to suggest new music, create personalized playlists (Discover Weekly, Daily Mix), and categorize users into taste profiles (Prey, 2018).

Research examining Spotify users reveals how recommendations influence taste development. Users describe discovering new artists through algorithmic suggestions, with algorithms effectively serving as taste-makers. However, this raises questions: Does algorithmic curation expand or narrow musical horizons? Do filter bubbles emerge in taste formation?

Morris (2015) documents both expansion and constraint. Users encounter music beyond their prior knowledge, but only within the bounds of algorithmic similarity. Truly divergent or experimental content may be filtered out as insufficiently similar to past preferences.
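One way to picture this bound: if a system only surfaces items whose embedding is sufficiently similar to a user’s taste profile, divergent content never clears the cutoff. A minimal sketch follows; the 2-D “taste” vectors, genre names, and threshold are all invented for illustration and do not describe any real platform’s representation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 2-D taste embeddings: (acoustic-ness, experimental-ness).
user_profile = (0.9, 0.1)  # inferred from listening history
catalog = {
    "singer-songwriter": (0.8, 0.2),
    "indie pop":         (0.6, 0.3),
    "free jazz":         (0.2, 0.9),
    "noise":             (0.05, 0.95),
}

# Only items "similar enough" to past behavior survive the cutoff.
THRESHOLD = 0.8
recommended = {name for name, vec in catalog.items()
               if cosine(user_profile, vec) >= THRESHOLD}
```

In this toy catalog, “free jazz” and “noise” sit far from the profile, so neither is ever recommended, regardless of quality: the filter selects by similarity to past behavior, not by the content’s own merits.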

Genre Boundaries: Algorithms reify genre categories even as musical practice transcends them. Spotify’s genre classification system—with categories like “indie pop” or “lo-fi hip-hop”—shapes how users understand and navigate music, potentially constraining more fluid genre identities.

Social Media and Presented Self

Social media platforms algorithmically curate both what content users see and whose content gets visibility—shaping both reception and performance of identity.

Algorithmic Visibility: Instagram’s algorithm determines which posts appear in followers’ feeds based on engagement predictions. This creates performative pressure—users craft content likely to generate engagement, potentially constraining authentic self-expression to algorithmic preferences (Duffy & Wissinger, 2017).

Echo Chambers and Identity Reinforcement: Algorithmic curation may create echo chambers where users primarily encounter identity-affirming content. While concerning for political polarization, this also affects identity exploration—LGBTQ+ youth, for instance, may find algorithmically curated communities providing identity validation and information (Duguay, 2019).

The “For You” Page: TikTok’s powerful recommendation algorithm rapidly constructs user identity models, presenting highly personalized content streams. Users report the algorithm “knowing” their interests with uncanny accuracy, sometimes surfacing content related to identity aspects they hadn’t explicitly sought (Karizat et al., 2021).

Empirical Research: User Experiences

Qualitative research reveals complex, sometimes contradictory user experiences:

Discovery and Constraint: Users appreciate algorithmic discovery—encountering content or communities they might not have found otherwise—while also experiencing constraint as algorithms presume to “know” their interests.

Identity Play and Resistance: Some users deliberately attempt to “confuse” or “game” algorithms, seeking to escape or subvert algorithmic categorization. This represents resistance to algorithmic identity construction.

Multiple Selves: Platform-specific algorithmic identities may diverge—a user’s TikTok algorithmic self differs from their YouTube or Spotify self, reflecting both platform differences and situational identity presentation.

Critical Perspectives

Critical scholars raise concerns about algorithmic identity:

Surveillance and Control: Algorithmic identity construction requires extensive data collection and analysis—a form of surveillance, even if not explicitly repressive (Zuboff, 2019).

Normalization: Algorithms trained on existing data may reinforce dominant norms, marginalizing alternative or minority identities. Gender and racial biases in training data produce biased algorithmic models.

Commercial Motives: Platform algorithms optimize for engagement and advertising revenue rather than authentic self-exploration or human flourishing. Commercial incentives may shape identity formation in problematic directions.

Opacity: Algorithmic systems are largely opaque—users don’t fully understand how they’re being categorized or why particular content is recommended, which limits their agency in the process.

Implications for Identity Theory

These developments challenge traditional identity theory:

Beyond Social Construction: While identity has long been understood as socially constructed, algorithmic identity adds a technological-computational dimension to this construction.

Distributed Agency: Identity formation involves distributed agency among human actors, platform affordances, and algorithmic systems—no single locus of control.

Data Bodies: Our “data bodies”—the digital representations platforms construct—increasingly mediate social interaction and opportunities, with material consequences beyond symbolic identity.

Conclusion

Algorithmic systems increasingly mediate identity formation, creating complex feedback loops between behavioral data, algorithmic interpretation, content curation, and self-understanding. This represents a significant shift in how identity operates in digital society, requiring ongoing empirical research and theoretical development to understand implications for selfhood, autonomy, and social life.

References

  • Cheney-Lippold, J. (2017). We Are Data: Algorithms and the Making of Our Digital Selves. NYU Press.
  • Duffy, B. E., & Wissinger, E. (2017). Mythologies of creative work in the social media age: Fun, free, and “just being me”. International Journal of Communication, 11, 4652-4671.
  • Duguay, S. (2019). “There’s a whole treasure trove”: Queer YouTubers and the safety of minority spaces. In C. Pullen (Ed.), LGBTQ Identity and Online New Media (pp. 212-224). Routledge.
  • Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). Algorithmic folk theories and identity: How TikTok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-44.
  • Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European Journal of Cultural Studies, 18(4-5), 446-463.
  • Prey, R. (2018). Nothing personal: Algorithmic individuation on music streaming platforms. Media, Culture & Society, 40(7), 1086-1100.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
