Is smash or pass just another popularity contest?

The data distribution of the evaluation mechanism reveals deep biases. A 2023 University of Cambridge analysis of 2 million smash or pass videos found that participants’ judgments of the top 1% of well-known figures agreed 78% of the time, while ratings of ordinary people showed a coefficient of variation of 39.6%, suggesting that the halo of fame significantly compresses the range of subjective judgment. More crucial is the algorithmic recommendation logic: TikTok’s system gives certified creators up to 17 times the video exposure of ordinary users, so videos of equal quality from ordinary people see their completion rates suppressed to around the 32nd percentile. An experiment by Brazilian internet celebrity Caroline illustrates the effect: when she posted the same image from an anonymous account, her “smash” selection rate dropped by 41 percentage points, indicating that the celebrity effect outweighs the objective features of the content itself.

Appearance-based discrimination is systematically amplified for specific groups. An audit report by the non-profit Data for Black Lives found that African American women were 28% more likely than white women to receive a “pass” in entertainment videos, while Asian American men were marked “smash” in cross-racial judgments only 12.3% of the time, well below the overall median of 23.7%. A 2022 controlled experiment at the University of Melbourne using virtual face generation found that each one-level shift in skin tone on the LAB color scale increased the probability of a “pass” decision by 14%. When the mechanism reaches sports, ESPN analysts note that WNBA players are maliciously rated 2.9 times as often as NBA players in related challenges, with negative comments exceeding 63% of the total.

The centralization of judgment power distorts resource allocation. Meta’s own algorithm research shows that super users (over 500,000 followers), just 3.7% of accounts, produce 68% of viral content, and their videos receive on average 300% more initial traffic than those of new creators. This Matthew effect played out for Spanish amateur singer Marta: after her audition video was swept into a challenge, she received over 8,000 new negative reviews in a single day, and music labels’ interest in signing her fell by 70%. Economic modeling finds that each 1-point drop in participants’ ratings reduces a creator’s commercial collaboration offers by 5.4% (Journal of Social Media Economics, 2024), while videos whose negative-review ratio reaches 15% return 38% less on advertising than the average.
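The cited rating-to-offers relation can be sketched as a simple back-of-envelope calculation. Note the compounding assumption and the example figures below are illustrative only; they are not part of the journal’s actual model:

```python
# Illustrative sketch, NOT the published model: assumes each 1-point
# rating drop compounds a 5.4% decline in commercial offers.

def projected_offers(base_offers: float, rating_drop: float,
                     decline_per_point: float = 0.054) -> float:
    """Offers remaining after a given rating drop, compounding per point."""
    return base_offers * (1 - decline_per_point) ** rating_drop

# Hypothetical example: a creator receiving 100 offers a month
# whose average rating falls by 3 points.
remaining = projected_offers(100, 3)
print(f"{remaining:.1f} offers/month")  # roughly 84.7
```

Under these assumptions, even a modest rating slide translates into a double-digit revenue hit, which is why a wave of coordinated negative reviews can be commercially devastating.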


The interaction of identity factors compounds the injustice. Monitoring by the LGBTQ+ advocacy organization GLAAD found that transgender people face compound discrimination when they appear in judgments: once racial factors are added, the density of negative evaluations surges to 2.3 times that of ordinary videos. Cross-analysis at the University of California showed that African American transgender women face a minimum acceptance probability of only 5.8%, 62 percentage points below that of white cisgender men. During the model protest at London Fashion Week 2023, designer Chloe noted that her plus-size collection was maliciously labeled “morbid obesity” in challenge videos, directly costing the brand 42% of its wholesale orders. The judgment mechanism, in other words, can be twisted into a weapon against diverse aesthetics.

Failures of platform governance deepen the cycle of harm. Compliance reporting under the EU Digital Services Act shows that major platforms take a median of 11 hours to identify personal attacks in smash or pass videos, 73% longer than for ordinary videos, and that a single piece of controversial content triggers an average of 158 secondary incidents of abuse. On the algorithmic side, DeepMind’s ethics team found that image recognition models misclassify marginalized groups at 1.8 times the benchmark rate, and that when the system automatically pushes challenge material, it misreads the facial features of people with disabilities 23% of the time. Most worrying is the effect on young users: tracking by the UK’s Children’s Cyber Safety Commission shows that over 46% of 14-17-year-olds exposed to such content imitate it, raising the incidence of appearance-based discrimination in schools by 29%.

When social games run on the laws of the attention economy, the data show that they essentially replicate the unequal structures of real society. Judgment packaged as entertainment builds a digital feedback loop that reinforces stereotypes. As the Institute for Digital Ethics at Humboldt University of Berlin warns: when smash or pass content generating over 5,000 judgments per second continues to operate without a correction mechanism, technological equality remains empty talk.
