Brands that invite users to submit images as part of user-generated campaigns need specific moderation guidelines.
Brands are increasingly incorporating consumer-created content into their marketing campaigns: for example, Sprint's Human Clock 'Now' campaign, for which eModeration screened over 35,000 user-created videos that appeared on YouTube's home page.
The content created by participants during these campaigns is usually hosted on a branded website. Doritos' King of Ads and Domino's Pizza's Show Us Your Pizza campaigns are good examples of user involvement: Doritos wanted to find the best ad, which would then be broadcast on TV; Domino's wanted customers to send in pictures of their Domino's pizzas, the most mouth-watering of which would win $500. There are lots more good examples in our Interaction in Advertising white paper.
These campaigns are based on user-generated images - photos, videos, designs - which require a different kind of moderation from text-based campaigns where, for example, users might post comments to a branded Facebook page.
Images are often forgotten in the moderation debate. But there are some tricky issues facing brands that open their online space to user-created pictures and videos.
What to look for and how to respond
Copyright violation - With the disclaimer that we are not lawyers and won't offer detailed legal advice on copyright, this is one of the areas our clients are keenest to police. A good post on copyright violation was published on Rich Baker's blog, which confirms that if a user uploads a picture or video containing material still under copyright, the publisher (i.e. the brand controlling the website) could be liable. As the post points out, having moderators check content prior to publication greatly reduces the chance of copyrighted material being published.
Brands need to tread carefully with copyright. It is possible to be overzealous and create a storm of bad publicity by being too dictatorial about what is and is not permitted. The Nestle Facebook logo debacle is one example of a brand getting it wrong. Contrast that with the recent Greenpeace competition to design a new BP logo, which has received quite a bit of coverage but little, if any, response from BP - which clearly has other priorities at the moment.
Privacy and safety issues - Teens and tweens are particularly vulnerable (and often naïve) when communicating online. Children will often try to upload pictures containing identifying details (such as a house, road name or school gates) without understanding the risks this could pose. A branded site used by children should be particularly careful that no personally identifiable information is published, as part of the brand's duty of care to its users.
Obscene images, footage, logos or avatars - Of course, these must never make it onto the branded website. When one of Starbucks' site users included a swastika in their profile picture, people debated whether the brand should act against the user or allow them freedom of speech. It's important to remember that the user is being invited onto your corporate property: your rules apply. A brand will be associated with the content displayed on its property, and it is both the right and the responsibility of the brand to protect itself and its rule-abiding users.
Brands need to be particularly vigilant about offensive material being uploaded to their sites when targeting children. Child-created content (CCC) is becoming big business: for example, 1,000 films created by children were submitted to the CBBC's Me and My Movie website in 2009.
Of course, there's some great filtering technology out there that can be used as a sophisticated triage to flag the images most likely to be inappropriate. Pixel-matching software picks up images or video that match blacklists of content previously blocked by moderators, or content that is probably pornographic. This significantly increases the speed and accuracy of moderation.
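To illustrate how such pixel-matching can work, here is a minimal sketch of one common approach, perceptual (average) hashing, in Python. It assumes the Pillow library is available; the hash size, distance threshold and function names are illustrative assumptions, not a description of any particular vendor's product.

```python
# A minimal sketch of blacklist matching via perceptual (average) hashing,
# one common pixel-matching technique. Assumes the Pillow library; the
# hash size, Hamming-distance threshold and function names are
# illustrative, not any particular vendor's implementation.
from PIL import Image

HASH_SIZE = 8  # 8x8 grayscale thumbnail -> 64-bit hash


def average_hash(path: str) -> int:
    """Shrink and grayscale the image, then threshold each pixel
    against the mean brightness to build a compact fingerprint."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def is_blacklisted(path: str, blacklist: set, threshold: int = 5) -> bool:
    """Flag an upload whose hash is close to any previously blocked image,
    so near-duplicates (resized, recompressed) are still caught."""
    h = average_hash(path)
    return any(hamming_distance(h, banned) <= threshold for banned in blacklist)
```

Because the comparison tolerates a few differing bits, simple edits such as resizing or recompression don't defeat the match - which is what lets this kind of software act as a triage rather than an exact-duplicate check.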
'Off-topic' or spammy images - If left unchecked, some users may deliberately and continuously post off-topic images in an attempt to irritate fellow users (trolling). This creates bad feeling in the community and sours people's experience of the brand.
So, how can brands avoid these issues?
Have clear guidelines - provide obvious signposts to community or competition guidelines, and enforce them consistently. If a user contravenes the guidelines, the brand is within its rights to warn the user and remove the offending image (or block the user completely, depending on the seriousness of the breach).
Know what's possible on social networks - a full list of actions that brands can take to moderate images uploaded to social networks can be found in our free guide on Moderation in Social Networks. Here's a summary of what you can and can't do with images:
- YouTube: brands can pre- or post-moderate video responses, but can't moderate friends' avatars or usernames - they can only reject a friend whose images, avatar or username are unsuitable.
- MySpace: brands can pre- or post-moderate both videos and images uploaded to the brand's page.
- Facebook: brands can only post-moderate content, including photos. Some apps, such as 'Graffiti', can be hard to remove, and Facebook is notoriously difficult to moderate without third-party tools.
Limit avatar choice - branded sites aimed at children may decide to provide a list of avatars that children can choose from, or a tool that lets them build their own cartoon image from provided components. This removes the risk of users uploading anything offensive or inappropriate.
Moderate, using appropriate moderation tools for the anticipated volume of UGC - brands should ideally pre-moderate content rather than risk published content causing offence, harm, brand damage or a costly court case. At the very least, a brand should provide a robust report-and-take-down procedure so that offensive material can be flagged by users. Keep in mind, though, that websites targeting children, tweens and teens should be pre-moderated if at all possible, to protect these vulnerable users. A simple sketch of these two workflows follows.
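To make the distinction between pre-moderation and a report-and-take-down procedure concrete, here is a minimal Python sketch; the class, statuses and method names are hypothetical illustrations, not a real moderation tool's API.

```python
# A minimal sketch of the two workflows above: pre-moderation (nothing is
# published until a moderator approves it) plus a report-and-take-down
# path for anything that slips through. All names are hypothetical.
from enum import Enum


class Status(Enum):
    PENDING = "pending"        # held in the queue, invisible to the public
    PUBLISHED = "published"    # approved by a moderator
    REJECTED = "rejected"      # blocked before it was ever visible
    TAKEN_DOWN = "taken_down"  # removed after a user report


class Submission:
    def __init__(self, user: str, image_ref: str):
        self.user = user
        self.image_ref = image_ref
        self.reports = []
        self.status = Status.PENDING  # pre-moderation: hidden by default

    def approve(self):
        self.status = Status.PUBLISHED

    def reject(self):
        self.status = Status.REJECTED

    def report(self, reason: str):
        """Report-and-take-down: a user flag pulls a live image for re-review."""
        self.reports.append(reason)
        if self.status is Status.PUBLISHED:
            self.status = Status.TAKEN_DOWN


# Example: an upload is approved, then flagged by the community.
sub = Submission("alice", "pizza_photo_001.jpg")
sub.approve()
sub.report("contains a street name")
assert sub.status is Status.TAKEN_DOWN
```

The design point is simply that content defaults to hidden: a pre-moderated queue fails safe, while the report path gives rule-abiding users a way to flag anything that does get through.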
(This article was first published on mad.co.uk on 12th Oct 2010)