In this article, we provide a broad overview of the relationship between social media and youth mental health, including the legal and historical context of social media, the role and involvement of caregivers, and strategies for modulation of use. Additionally, we highlight the unique risk and protective factors of social media use for youth from marginalized groups, including exposure to racism and discrimination. We recommend that social media companies avoid repeating existing precedents of preferential censorship that have disadvantaged and harmed historically, persistently, or systemically marginalized groups.
Key points
- Social media use makes unique contributions to both ill-being and well-being for youth.
- Social media usage may vary by developmental age and group, which contextualizes its impact.
- Recommendations about social media use should be tailored to the developmental level of the child and take into account pre-existing mental health conditions and temperamental impulsivity.
- Social media may have unique effects on historically, persistently, or systemically marginalized youth, including exposure to bullying and discrimination, but it also provides a way to connect with peers when in-person connection is inaccessible.
- Parental/caregiver media literacy and engagement with social media affect youth engagement via modeling and the potential for problematic Internet use.
Introduction
Legal Background
Any discussion of social media and youth mental health must be understood in the context of pre-existing laws that absolve social media platforms of legal responsibility for the content of posts, including potentially harmful posts that encourage self-harm, eating disordered behavior, and suicide, as well as threats of physical harm and cyberbullying.
Section 230 of the Communications Decency Act of 1996 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The act was later applied to social media platforms as they developed, absolving the companies of liability for user posts. The American Civil Liberties Union (ACLU) and the Queer Resources Directory, among others, subsequently challenged the Communications Decency Act in court but supported maintaining Section 230 to protect online gathering spaces for marginalized people from regulation.
Subsequent legislation has sought to create exceptions to social media platforms’ freedom from liability. The FOSTA-SESTA amendment holds platforms liable for posts or advertisements that facilitate sex trafficking. The proposed “EARN IT” Act of 2020 would establish a commission to “prevent, reduce and respond to the online sexual exploitation of children…” The proposed “SAFE TECH” Act of 2021 would remove immunity for social media posts leading to discrimination on the basis of race, sex, religion, and other protected categories. However, such liability provisions can backfire on historically, persistently, or systemically marginalized (HPSM) groups when platforms filter out broad categories of posts to avoid lawsuits.
These laws interface with social media companies’ restrictions on use by children and adolescents, although the companies bear no legal responsibility to verify the age of users. Location-based dating apps usually restrict use to those aged 18 years and older, but Section 230 frees these sites from legal responsibility to ensure that users are actually adults. Lesbian, Gay, Bisexual, Transgender, Queer (LGBTQ+) teens who lack the opportunities for in-person romantic relationships that heterosexual teens enjoy are likely to go online for these opportunities. Many dating sites do not require users to report sex offender status, potentially enabling adults to perpetrate sexual abuse upon teens via these apps.
In response to public concern about harmful content and abuse of youth on social media platforms, companies have employed content moderation through their community guidelines, which are established policies delineating what types of posts will be taken down and the grounds for account suspension. Much can be learned from the history of censorship in entertainment media, which demonstrates that both censorship and its absence have repeatedly harmed HPSM groups. In 1915, the National Board of Censorship approved The Birth of a Nation, a propaganda film for racist ideology, for a general audience; in that case, the absence of censorship spread hate and racism. In 1930, the Motion Picture Production Code delineated 12 categories of “repellent subjects” fit for censorship, including interracial and LGBTQ+ relationships; in that case, the presence of censorship spread hate and racism. Social media companies must learn from this history lest they repeat it.
Biases Against Historically, Persistently, or Systemically Marginalized Youth
Social media platforms must moderate content in order to protect the youth who use them, but some content moderation may unfairly disadvantage members of HPSM groups. A 2021 mixed-methods study identified 3 distinct groups of social media users who experienced content and account removals most often: politically conservative participants (whose content was correctly removed because it violated the platform’s community guidelines as offensive, misinformation, adult content, or hate speech), transgender participants (whose content was removed for being “adult” despite following community guidelines, for being critical of a dominant group, or for relating specifically to transgender or queer issues), and Black participants (whose content was incorrectly removed because the post concerned racial justice or racism). The study noted that some content is removed despite not actually violating community guidelines. The authors concluded that marginalized social media users are more likely to experience content moderation “false positives and gray areas,” whereas “conservative content removals in the dataset were more likely to represent true positives: content that violated site policies, and thus was correctly removed.” These findings suggest a lack of consistency and transparency in social media platforms’ content moderation. Users often report content because it violates their personal norms, which may be prejudiced. The process by which content is flagged for removal can therefore privilege majority identities and experiences and further exacerbate systemic inequities.
Automated moderation also has major pitfalls rooted in prejudice. For example, automated systems have misclassified African American Vernacular English as hate speech and have removed antiracist posts mistakenly classified as racist. The details of content moderation are vitally important, especially as Instagram recently announced options to decrease the amount of “sensitive” content that appears in feeds without describing the criteria that will be used to label content as “sensitive.”
There are other ways in which the functioning of social media platforms disadvantages HPSM users. For people who earn their income from content creation on social media, making content less visible without taking it down can have major economic implications. This phenomenon is sometimes referred to as “shadowbanning”: content is not removed from the platform but is strategically hidden instead. There are also reports of non-HPSM users replicating imaginative posts originally created by HPSM individuals, with the copied posts becoming more popular than the originals. In a society where prejudice and hatred already exist offline, social media companies should make it a priority to identify and rectify any biases built into the inner workings of their platforms.
Prevalence of Social Media Use
Social media use, pervasive among young people in the United States, has a profound influence in shaping youth culture, communication, and behavior. A recent study by the Pew Research Center highlights the ubiquity of social media use among American adolescents. Nearly all surveyed teens reported having access to a smartphone or a computer, and the majority to a gaming console or tablet, with children in middle- and higher-income households more likely to have access to a home computer. The percentage of adolescents who endorse being online “almost constantly” has doubled since the same question was posed a decade earlier. Social media platforms and related use patterns are rapidly evolving; as of 2022, the most popular platforms for youth included YouTube, TikTok, Instagram, and Snapchat. Though social media platforms require users to be at least 13 years old, nearly 40% of children between the ages of 8 and 12 years report using social media.
Trends in social media usage vary among subgroups in the United States, including by gender and race/ethnicity. Adolescent girls are more likely than adolescent boys to use TikTok “almost constantly,” whereas boys are more likely to use Discord and Twitch. Black and Hispanic teens are more likely than White teens to be online “almost constantly.” Hispanic teens were the most likely to report being on TikTok “almost constantly,” though Black teens as a group were the most likely to report using TikTok compared with their non-Black peers. White adolescents are more likely to use BeReal but less likely to use all other social media platforms. Indigenous and Asian American, Native Hawaiian, and Pacific Islander youth were not included in the Pew study sample. A survey of adults found that Asian Americans endorse a higher frequency of social media usage than any other racial group.
During the coronavirus disease 2019 (COVID-19) pandemic, social distancing practices and school closures limited the ability of children and adolescents to socialize in person, contributing to a rapid escalation in their social media use. Between 2019 and 2021, average daily screen media exposure increased by nearly an hour among tweens and by well over an hour among teens. Among US children aged 8 to 12 years, screen time increased by 1.75 hours daily during the early pandemic and later remained 1.11 hours above prepandemic levels. This study found that screen time was higher among children of Black race or Hispanic ethnicity, older children, and children whose mothers lacked a college education. A cross-sectional study found that 12- to 13-year-old children were using social media an average of 0.98 hours per day. Social media platforms afforded opportunities for one-to-one communication, mutual online friendships, and positive online experiences, which mitigated feelings of loneliness and stress during COVID-19-related social distancing. More than ever, social media is part of the everyday life of American youth, with consequences for their mental health.
Social Media Effects on Mental Health
Social media shapes the ways youth understand themselves, the world around them, and how they fit into it. Social media can both benefit and harm the mental health of youth, and its contributions to ill-being and well-being occur on separate spectrums. For example, watching videos on YouTube for hours may evoke positive emotions and contribute to well-being while simultaneously displacing quality time with family and friends, creating loneliness and ill-being. Similarly, social media use may contribute equally to both well-being and ill-being. See Fig. 1 A–C for additional examples. Social media often disrupts sleep and displaces physical activity and in-person interactions, all of which are critical for healthy development and emotion regulation. Social media can exacerbate experiences of bullying and discrimination, increasing the risk for negative mental health outcomes including anxiety, depression, and trauma-related symptoms. Involvement with cyberbullying is also associated with an increased risk of suicidal behavior and suicide attempts. Exposure to highly curated social media posts depicting idealized versions of a peer’s physical appearance, life events, or social engagements invites unrealistic comparisons that can adversely affect self-image and elevate risks for mental health concerns including anxiety, depression, and eating disorders.

In a 2023 study, participants aged 16 to 21 years in Norway completed online questionnaires about their social media use and mental health. Exploratory factor analysis identified 3 main factors correlated with negative mental health: subjective social media overuse (including feeling “addicted”), social obligations (including fear of missing out or of failing obligations to comment on friends’ posts), and source of concern (including feeling overwhelmed on social media and worrying that their use is unhealthy or is being monitored). Of these 3 factors, “source of concern” had the largest association with mental health problems and the weakest association with self-reported time spent on social media. Reactions to any given experience on social media (eg, viewing a post about a news story on Instagram) vary significantly depending on characteristics of the user. Users high in “source of concern” may notice that social media is making them feel overwhelmed or monitored and respond by using social media less often or by employing tactics to minimize harm. Social media platforms do provide unique opportunities for peer engagement, emotional support, community, and creative outlets that may contribute significantly to well-being.
Effects Specific to Historically, Persistently, or Systemically Marginalized Youth
Social media engagement confers risks and benefits that are universal to all teens, though some factors are specific to HPSM youth. Research documenting the relationship between social media and the mental health of youth is relatively new and incomplete, particularly data regarding its unique impacts on youth from HPSM communities. Few studies focus on social media influences on youth from marginalized racial, ethnic, sexual, gender, or socioeconomic backgrounds or on differently abled youth. For youth who lack access to peers with shared identities, experiences, or interests in their local communities, social media may facilitate valuable connection with otherwise inaccessible individuals. Studies of HPSM youth demonstrate that social media can provide social support, identity affirmation and development, and community, allowing youth to feel more accepted. Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, Asexual + (other identities) (LGBTQIA+) youth who are reluctant to ask about related health information or to disclose their identities to caregivers or local peers may benefit from doing so via social media. LGBTQIA+ youth use social media to seek information about LGBTQIA+-related health, news, and representation, as well as to obtain emotional support and explore aspects of their identity.
However, while social media may be protective for members of marginalized groups in some ways, it may also confer uniquely elevated risk for harm. For example, social media-based bullying, which disproportionately affects female adolescents and sexual minority youth, is consistently associated with risk of depression. LGBTQIA+ and female adolescents are more likely to experience online harassment and abuse and to suffer associated emotional consequences, including sadness, anxiety, and worry. Several studies demonstrating links between social media use and mental health problems (poor sleep related to social media use, depressive symptoms, poor body image, and eating disorders) find a greater association for girls. Multiple experimental studies show that viewing idealized and filtered images on social media worsens body dissatisfaction, which has been linked to disordered eating, depressive symptoms, and low self-esteem, particularly among adolescent girls.
Youth of color face specific risks and benefits with regard to their social media engagement, which are mediated by cultural factors and mental health vulnerabilities. They may benefit from racial-ethnic social norms that inform identity but often encounter stereotyping, racism, and even traumatic experiences. Exposure to individual and vicarious racial discrimination on social media has been found to exacerbate depressive symptoms and substance use problems. Another study found that adolescents exposed to online hate messaging are more likely to suffer depression and social anxiety. In this study, the association between social media use frequency and depressive symptoms was stronger in Asian Americans, while the association between social media use and social anxiety was reversed among Black youth. Viewing race-related traumatic events (eg, videos of race-based violence) has been linked to symptoms of posttraumatic stress disorder and depression, particularly among Latinx and female youth. Black youth utilize social media to obtain social support regarding race-based traumatic events, which may mitigate related psychological distress. Adolescents of color experience online racial discrimination at high rates and suffer associated depression, anxiety, and trauma-related symptoms.
Youth with intersectional identities may be subject to compounded stressors related to their multiple identities. One-third of adolescent girls of color report exposure to racist content or language on social media platforms at least monthly. Some encounter social media “beauty filters,” which apply skin tones or facial features typically associated with White people, further contributing to racially biased beauty standards. Stereotyped depictions of youth with marginalized identities pose culturally specific risks of discrimination and negative self-image. A recent study demonstrated that social media usage among Black and Asian American adolescent girls was associated with an increased emphasis on sexual appeal and physical appearance compared with White girls, possibly related to shame caused by such racially biased body ideals and stereotypes, as well as fetishization.
Social Media Modulation
A 2022 study found that many youth employ strategies to moderate their social media use. Over 800 participants aged 14 to 24 years responded to questions about the advice they would give to someone new to social media, whether they had ever felt the need to change something about their social media use, and whether they had ever deleted or thought about deleting a social media account. Participants expressed concern about the amount of time they spent on social media and sometimes experienced challenges in cutting down their use. They also expressed concern about the content they consumed, including edited photos and misinformation, and worried about what they posted, both in terms of their own safety and their interpersonal interactions with others. Many employed settings or features of the app itself to moderate their use, and most reported thinking about deleting a social media account or app or actually doing so. Some described how they changed their social media use for the sake of wellness:
- “I had to manually block content I felt was negatively affecting me and force myself to take time away from certain apps”
- “I realized that I was only following celebrities and seeing their heavily edited photos and taking them in as reality. When I ended that, I was following only meme accounts and was wasting hours and hours on them so I narrowed my account to only my friends”
- “I kind of regret how easy I’ve made it to find out things about me online so I’ve considered deleting some accounts and starting over to blur my efootprint a bit”
- “I set reminders on when to get off”
- “I have an iPhone and it shows me how much time I spend on social media”
- “I was spending too much time on TikTok, and it was messing with my head so I told myself that I will not go on it past like 9 pm.”
Many youth understand how best to moderate their social media use, and providers can help disseminate these tips and tricks to peers who struggle. Another study indicated that social media users’ ability to employ sophisticated techniques to regulate their use depended largely on their knowledge of technology. An individual’s knowledge of such techniques likely also depends on formal education, their peer group, conversations with caregivers, and their own curiosity, among other factors. All youth should be educated about how social media can affect their well-being, overall media literacy, techniques to moderate use, and digital citizenship.
Formal school-based education on these topics would be invaluable for students. A number of social media education programs have been tested in the school setting, and some states have introduced legislation that would add such education to school curricula. Social media education can also occur in the doctor’s office, which may require that providers learn healthy social media modification techniques themselves. A 2022 study found that a video designed to improve provider knowledge of, and comfort with, giving guidance about implementing changes on Instagram via the in-app settings showed promise. Provider understanding of how to hide “like” and “view” counts on Instagram showed the greatest gains. After watching the video, providers were better able to help young patients make changes to their social media settings, which could even occur during a clinical visit.
It is also important for social media companies to partner with mental health professionals in the ongoing development of setting modifications so that healthy changes are available and beneficial. Meta announced in early 2024 that Facebook and Instagram will hide potentially harmful content from users under 18 years of age; youth searching for content related to suicide, self-harm, or eating disorders on these platforms will now be redirected to help resources. However, youth tend to use unrelated hashtags to prevent their posts from being taken down for community guideline violations. Social media platforms set community guidelines for what types of content will be taken down (eg, graphic violent/sexual content, threats, or the sale of regulated goods), but some content that does not violate these guidelines is still sensitive in some way. Instagram now offers users options to control the amount of sensitive content they are presented with, as well as an option to allow for increased “fact checking,” which moves content that is “false, partly false, altered or content with missing context” lower in the person’s feed. As discussed earlier in the legal background, there is a need for platform transparency about how facts are checked and how the sensitivity of content is determined. Social media sites must use caution so that misclassifications of content do not propagate bias, racism, and hate.
Role of Parents and Caregivers
Parents/caregivers play a critical role in navigating the impact of social media on children and adolescents. Caregivers who are involved in regulating their children’s media consumption and enjoy greater digital literacy may be best positioned to protect them against unhealthy social media habits and experiences. Caregiver mediation refers to parents/caregivers’ management of the time youth spend using media, as well as restrictions on and explanations about content, in order to mitigate harm and maximize benefit. Caregiver mediation may involve ongoing discussions about social media experiences, restricting content inappropriate for a child’s level of maturity, and establishing clear rules for social media use. Caregivers who are active social media users themselves may benefit from greater digital literacy about social media content and influencers and enjoy a clearer perspective on their children’s engagement. A number of related studies indicate parental/caregiver mediation may boost adolescents’ well-being and protect against ill-being.
A caregiver’s own engagement with social media greatly affects the impact of media on their children. Several studies indicate that children’s media engagement reflects their caregivers’ use. Another study demonstrated that adolescents with moderate-to-severe problematic Internet use (PIU) were almost 3 times as likely to have caregivers with PIU themselves. Excessive caregiver social media use can displace essential parent/caregiver–child interactions, which may result in children seeking care and attention in unhealthy ways, including excessive social media use. Caregiver social media posts about their children (ie, “sharenting”) are common and may risk violating children’s digital autonomy and confidentiality. Less common are caregivers who rely on family-based social media posts for income, presenting additional complications for the well-being and development of their children.
Given children’s progressive engagement with social media and growing recognition of how that engagement may affect their health and development, the American Academy of Pediatrics and the American Academy of Child and Adolescent Psychiatry recommend the establishment of a family media plan and provide specific guidance on social media. Parents and caregivers are advised to identify what content their children are exposed to on social media and to determine whether that content is developmentally appropriate, a source of vicarious prejudice, or a vehicle for bullying or abuse. Caregivers should consider limiting access to devices and restricting age-inappropriate materials via device-specific and platform-specific parental controls. Parents/caregivers must also consider the effects of their own media habits in family media planning.
Clinics care points
- Social media use makes unique contributions to both ill-being and well-being.
- Recommendations about social media use should be tailored to the developmental level of the child and take into account pre-existing mental health conditions, temperamental impulsivity, and history of bullying.
- Recommend that caregivers develop a family media plan regarding the use of media devices, including social media.
- Educate youth and families about potentially harmful content and how to identify misinformation.
- Assess how patients use social media and to what extent.
- Ask parents/caregivers how they model social media use, including whether and how they share information about their children.
- Inquire about exposure to cyberbullying, discrimination, and racism.
- Discuss how viewing highly curated and idealized online images can affect self-esteem.
- Advise caregivers to limit exposure to content that encourages self-harm, eating disordered behavior, suicide, stereotypes, or prejudice.
- Support HPSM youth and their families in a culturally sensitive manner.
- Identify problematic social media use that displaces in-person relationships, sleep, physical activity, academics, and so forth.
- Recognize how social media affects risks for depression, anxiety, trauma-related symptoms, and eating disorders.
- Social media may offer unique benefits for individuals who are uncomfortable expressing LGBTQIA+ identities in person or who feel unwelcome in or disconnected from their local communities.
- Consider helping patients make healthy changes to their social media settings.