While conducting a recent analysis of a group of organisations in the same sector, we looked at their respective Klout, Kred and PeerIndex scores. Regular readers of this blog will know that we’re not huge fans of these fairly simplistic measures of ‘influence’ – we obviously prefer our own multi-channel, multi-attribute methodology. However, as long as they are viewed for what they are – just three examples of a much wider cosmos of indicators of social media performance, meaningless on their own but potentially insightful as part of a broader analysis – they can help motivate and focus attention.
Armed with the data, we thought it would be interesting to see if there was any kind of correlation between these competing influence measures.
What we found is that there is very little to choose between them. Since they all claim to measure the same thing, one would expect to see strong correlations, and that is exactly what we saw. So it doesn’t really matter which one you use. Here’s the data:
Correlation between Klout, Kred and PeerIndex
First up, we looked at PeerIndex. Taking the scores of our sample of 39 organisations, we found a Pearson r value of 0.573 between PeerIndex and Kred, and a slightly higher r value of 0.595 between PeerIndex and Klout.
Based on our sample, both exceed the critical value for r at the 0.01 significance level, indicating a statistically significant relationship both between PeerIndex and Kred scores and between PeerIndex and Klout scores.
Then we looked at the relationship between Klout and Kred. When the r value came out at 0.846, we couldn’t quite believe it. A correlation this strong in a sample of 39 would arise by chance less than one time in a thousand – in other words, it is significant at the 0.001 level.
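For readers who want to reproduce this kind of test, here is a minimal sketch of the calculation described above. The scores below are randomly generated stand-ins, not the original data for our 39 organisations; the method (Pearson’s r compared against the critical value for the sample size) is the same.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data only: two correlated score series for a sample of
# n = 39 organisations (the original scores are not reproduced here).
n = 39
peerindex = rng.uniform(20, 80, n)
klout = 0.6 * peerindex + rng.normal(0, 10, n)

# Pearson correlation coefficient and its two-tailed p-value.
r, p = stats.pearsonr(peerindex, klout)

# Equivalent critical-value check: the smallest |r| that is significant
# at the 0.01 level with n - 2 degrees of freedom.
alpha = 0.01
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
r_crit = t_crit / np.sqrt(n - 2 + t_crit**2)

print(f"r = {r:.3f}, p = {p:.4f}, critical r at 0.01 = {r_crit:.3f}")
print("significant at 0.01" if abs(r) > r_crit else "not significant")
```

For n = 39 the critical value at the 0.01 level works out at roughly 0.41, which is why correlations of 0.573 and 0.595 clear the bar comfortably.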
Whilst Klout, Kred and PeerIndex correlated strongly with each other, whether they’re measuring the right thing is a different matter. There’s no relationship with brand value, for example, unlike our PRINT™ system. When applying the PRINT Index™ to the 50 most valuable global brands, there was a statistically significant correlation between social media performance and brand value. But there was no such correlation for the PeerIndex and Klout scores for the same brands. Kred hadn’t launched then, but given the correlations above it seems likely the same would have applied.
So there you have it: three measures of social media influence that are going to produce similar results regardless of which you use, but none of which – on their own – mean anything.
azeemazhar, Jan 23, 2012 at 11:26 am
Hey great work here.
And it’s good to bring some statistics to the table.
But your sample size is far too small. Specifically, you might be applying the wrong test and looking at a biased sample of data: 39 fairly similar organisations. It’s a bit like sampling basketball players when trying to gauge the average height of a man.
With 100m accounts indexed, I would suggest you need to increase your sample sizes.
Here is where I think you need to push your testing:
1. Larger sample size – you are only sampling 39 organisations – I would recommend sampling around 5000 for each test you do to narrow your confidence intervals.
2. Do some other tests, specifically:
– run the samples of 5,000 within bands – say PeerIndex 0-30, PeerIndex 31-60 and PeerIndex 61-100 – and compare against the scores that come out of each platform
– compare those scores against other metrics like Log(followers), or list count to see what correlation you get
– take PeerIndex topic data (rather than the overall score) and correlate that against overall PI, or the two Ks.
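The banded comparison suggested above could be sketched along these lines. The data here is entirely synthetic (no real platform scores or follower counts), and the band edges follow the comment; the point is simply the shape of the analysis: cut the sample into score bands, then correlate within each band and against a popularity baseline.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for a 5,000-account sample; none of these
# scores come from the real platforms.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "peerindex": rng.uniform(0, 100, n),
    "followers": rng.lognormal(mean=7, sigma=2, size=n),
})
df["klout"] = (0.7 * df["peerindex"]
               + 5 * np.log10(df["followers"])
               + rng.normal(0, 10, n))

# Band by PeerIndex score and correlate within each band, as suggested.
bands = pd.cut(df["peerindex"], bins=[0, 30, 60, 100],
               labels=["0-30", "31-60", "61-100"])
within_band = df.groupby(bands, observed=True).apply(
    lambda g: g["peerindex"].corr(g["klout"]))
print(within_band)

# Correlation against log(followers), a crude popularity baseline.
print(df["klout"].corr(np.log10(df["followers"])))
```

Within-band correlations are typically weaker than the whole-sample figure because banding restricts the range of the score, which is precisely why this is a more demanding test.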
Niall Cook, Jan 23, 2012 at 11:37 am
Thanks for dropping by and thanks for the positive comments.
Your point about sample size is certainly valid in terms of representative sampling, but that doesn’t negate the statistically significant correlation we found within the sample we used. That’s the whole point of comparing the correlation coefficients with the relevant critical value for r for the sample size being used.
It’s also worth mentioning that, when we did our analysis of the top 50 global brands, there was also a strong correlation between PeerIndex and Klout scores (0.825 correlation coefficient, again statistically significant) so – whilst we don’t currently have the resources to conduct the kind of tests you’re recommending (although it’s probably something you’d want to do!) – I reckon we’d find similar results.
And anyway, isn’t this strong correlation a good thing for you guys? Shows at least that your respective claims to measure the same things stand up to a bit of scrutiny.
Shawn Roberts, Jan 23, 2012 at 7:30 pm
Niall, thanks for including Kred in this discussion.
There are a couple of places where Kred is distinct. Most critically, Kred uniquely gives Influence and Outreach scores within communities connected by interests, in addition to the Global Kred score used for this discussion. Everyone gets multiple scores so they can see the communities where they have the strongest influence; most times community scores are entirely different from the Global score. We also form our scores on the deepest data: the Datamine underlying Kred is based on over 1,000 days of posts and considers the full Twitter firehose since 2008.

Cheers, Shawn
Niall Cook, Jan 23, 2012 at 7:42 pm
What do you make of the incredibly strong correlation between Kred and Klout scores in our analysis? Just a coincidence or a true reflection of what you’re both trying to measure?
Jon Davey, Dec 19, 2012 at 2:50 pm
The thing that always buzzes around my brain with these measures is the fact that a doctor who is the best at his chosen body part will spend his time focused on that body part, not on tweeting.
As my wife quoted to me the other day: “empty vessels make the most noise”.
How do we factor that in?