It’s a story as old as social networks themselves. A social site begins to gain steam, and the influx of user-generated content becomes difficult for its moderation team to manage. Site owners who don’t already have strict policies in place are then faced with a choice: should they modify their terms of service in an effort to protect their brand and their community from potentially harmful content, or leave things as they are? This delicate question, which touches directly on issues of online privacy and censorship, has been hotly debated in recent media reports.
Businesses that succeed at social media marketing create unofficial brand ambassadors who are genuinely excited about their products and services and willing to share them with family and friends. But there are also several obvious risks for brands using social media to connect with customers. From illegitimate pages and pornographic avatars to politically motivated brand attacks, companies face a range of challenges.
Imagine for a moment that your website received billions of comment, photo, and video submissions daily. While you might initially be excited about this level of engagement from your community, you would quickly become overwhelmed by managing such a massive influx of user-generated content.
User engagement is a critical measure of success for social networking sites. If community members are frequently connecting and sharing content, your business must be doing something right. But as a social network begins to scale, managing the flood of comments, photos, and videos from users becomes a daunting task.
Consider what it would be like to sit in a darkened room for eight hours a day reviewing thousands of pornographic, violent, and abusive images and videos. For human content moderators, this is just another day at the office.