Does Sentiment Analysis account for English as a second language?
Sentiment analysis, in Tracx and most other platforms, uses the established grammatical rules and meanings of the given language as its basis. These rules do not change whether someone is a native speaker or learned the language later in life. If the question is about recognizing regional speech patterns and slang, the answer is that yes, these are factored into sentiment analysis as much as possible. However, it is by no means perfect, as language is dynamic. Tracx offers ways to customize the corpus of words used for sentiment analysis to adjust scoring for your data set.
Can we expand on how to identify Influencers?
Overall, influence in social media is a measure of how much someone is able to affect the relevant discussion and generate engagement from other users. Within Tracx, influence is always defined in terms of the topic you are focused on. This can be at a broad category level, or at a very specific level.
Tracx has a proprietary influencer module in which influence is based on four variables: Reach, Impact, Quality, and Volume. Reach is a measure of the size of the user's community. Impact measures the duration, rate, and level of interactivity around the user's activity. Quality is the depth and detail of the content the user creates; longer-form content, with more information about your topic, earns a higher quality score. Topical relevance, as measured by Volume, is the glue of the interest graph and the communities of focus: it is a measure of how many times the user posts about your area of focus (brand, campaign, topic, product, etc.).
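The exact way Tracx combines these four variables is proprietary, but the idea can be sketched as a weighted combination of normalized component scores. The function name and the equal weights below are illustrative assumptions, not the platform's actual formula.

```python
def influence_score(reach, impact, quality, volume,
                    weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical sketch: combine four normalized component scores
    (each in the range 0..1) into a single influence score.
    The weights here are illustrative, not Tracx's actual values."""
    components = (reach, impact, quality, volume)
    return sum(w * c for w, c in zip(weights, components))

# A user with maximal reach but little topical volume scores lower
# than one who is strong across all four variables.
broad_celebrity = influence_score(1.0, 0.6, 0.4, 0.1)
topic_expert = influence_score(0.5, 0.7, 0.9, 0.9)
```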
How would you recommend addressing a large volume of negative comments and posts directed at a company?
Exact strategies vary on a case-by-case basis.
Generally, it's important to measure the overall reach of the negativity and compare it against your company's entire social footprint. Assessing the scale of the incident will help determine whether it warrants a company-wide public response or can be handled by directly connecting with a smaller group of consumers.
It is also critical to not react without proper planning and assessment. While social media happens fast, responding without thinking the situation through and assessing data can exacerbate the situation.
Having a team in place that has considered various social media crisis situations and response plans before they happen should also be part of any company's plan. The team can analyze data and trends on an ongoing basis to be well informed if a negative event occurs.
In reference to employees, how would you track their sentiment toward the organization through social listening if most negative comments would come from non-self-identifying employees?
Before tracking any employee social media activity, it is best to assess and fully understand the legality of these pursuits. Ethically speaking, social media tracking may not be the best way to assess employee satisfaction.
Is there any way to determine a correlation between content being pushed out and increase/decrease in positive sentiment? And if so, how do you determine the strength of the correlation if the content is not specific to an initiative, campaign etc. but daily brand content aimed at improving brand reputation?
The Tracx platform allows you to look at content volume, from your own channels and from general consumer discussion, and compare it against social sentiment trends. This is an effective way to assess correlation. Because Tracx allows the user to slice and dice data by topic, source, and other demographic filters, it's easy to start to understand which specific areas are correlated with positive sentiment.
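To gauge the strength of such a relationship, one common approach is to compute a Pearson correlation between a daily content-volume series and a daily positive-sentiment series exported from the platform. This is a generic statistical sketch, not a Tracx API; the sample numbers are made up for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series.
    Values near +1 indicate a strong positive relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical export: daily brand posts vs. share of positive mentions.
posts = [3, 5, 2, 8, 6, 7, 4]
positive_share = [0.42, 0.55, 0.38, 0.71, 0.60, 0.66, 0.47]
r = pearson(posts, positive_share)  # strongly positive in this sample
```

Slicing the same two series by topic or source before computing `r` shows which content areas track most closely with positive sentiment.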
If you work in a service-based industry and most commentary coming in is negative, how do you accurately relay overall brand sentiment and amplify positive engagement?
Social media data is reflective of what people publicly express. It's true that oftentimes consumers only speak up via these channels when they have a complaint. The best way to relay overall brand sentiment is to include social media analysis as part of your overall data mix and to benchmark over time so that there is context specific to your brand/industry. 20% negative sentiment could be terrible in one industry and reflect good performance in another.
Is social listening focused mostly on user comments and conversations? Where does a company blog post fall? Do these tools tap into that content to assess what's important in a category?
The Tracx platform collects and analyzes posts, interactions, and topic mentions from owned company properties as well as from consumers. The platform also offers the ability to sort and filter by these different sources. A company blog can be added as a spotlight channel, so that all content from the blog is collected in Tracx.
Can someone address the measurement of Just Noticeable Differences (JNDs) with respect to Sentiment Analysis, i.e. identify a sentiment shift?
Because Tracx gives you all information broken out over time, you can always assess the change or percent change of any metric, including sentiment.
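One simple way to operationalize "just noticeable" is to flag any period whose sentiment score moves more than a chosen threshold from the prior period. The function names and the 10% threshold below are illustrative assumptions, not a Tracx feature.

```python
def percent_change(prev, curr):
    """Period-over-period percent change of a metric."""
    return (curr - prev) / prev * 100

def flag_shifts(series, threshold=10.0):
    """Return indices of periods whose absolute percent change
    from the previous period exceeds the threshold."""
    return [i for i in range(1, len(series))
            if abs(percent_change(series[i - 1], series[i])) > threshold]

# Hypothetical weekly positive-sentiment share: week 3 drops sharply.
weekly_positive = [0.50, 0.52, 0.41, 0.43]
shifts = flag_shifts(weekly_positive)  # flags index 2 (about a 21% drop)
```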
What are your experiences with other languages such as Italian, German and French?
Tracx currently indexes the following languages: English, French, German, Hebrew, Italian, Portuguese, Russian, Spanish, Dutch, Finnish, Swedish, Chinese, Japanese, Hindi, Korean, Greek, Thai, Georgian, Armenian, Arabic, Polish, Bulgarian, Bengali, Czech, Danish, Croatian, Hungarian, Indonesian, Turkish, Romanian, Persian, Ukrainian, Vietnamese, Estonian, Lithuanian, Macedonian, Malayalam, Norwegian, Slovak, Slovene, and Albanian.
- Traditional Sentiment Analysis: English, Spanish, Italian, German
- Sentity: English (We have the capability to add indexing and sentiment for additional languages as needed)
What is your assumed % level of accuracy for sentiment when using machine learning to do the heavy lifting of the big data? What do you think is acceptable in the industry?
We have recently launched a new entity-based sentiment engine, Sentity, and exact levels of accuracy are pending.
Most automated sentiment engines are not especially accurate, with accuracy typically in the 50-70% range. Measured accuracy depends on how the tests are run; a key factor is what counts toward the denominator of the percentage. All sentiment engines leave a certain number of indeterminate posts that receive no sentiment tag. When these are factored in, the percentage accuracy is lower than when you look only at posts that receive a score.
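The denominator effect described above can be made concrete with a small sketch. The numbers here are hypothetical, chosen only to show how the same engine output yields two different accuracy figures.

```python
def accuracy(correct, scored, total):
    """Return accuracy computed two ways: over scored posts only,
    and over all posts (counting indeterminate posts as misses)."""
    return correct / scored, correct / total

# Hypothetical run: 1,000 posts collected, 700 received a sentiment tag,
# and 490 of those tags matched human judgment.
on_scored, on_all = accuracy(490, 700, 1000)
# on_scored = 0.70 (70% of scored posts), on_all = 0.49 (49% of all posts)
```

The same engine looks 70% accurate by one convention and 49% by the other, which is why vendors' accuracy claims are hard to compare without knowing the denominator.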