Marvin Minsky, a pioneer in the field of artificial intelligence, argued decades ago that “Easy things are hard” for computers. He meant that computers struggle to do things that humans do easily, often without really thinking about it.
Making sense of ‘free text’ feedback from employee surveys is one of those hard things. Anybody who has tried to analyse this type of feedback knows how daunting and time-consuming it can be, because it usually has no obvious structure – it’s free text, after all, so people write about whatever they want, in as many words as they like.
When humans analyse free text feedback, they do so in a couple of ways:
· One is by looking for interesting patterns in the feedback, such as recurring issues. For instance, you might review some IT helpdesk data and notice that people keep complaining about problems with the office ‘wi-fi’, ‘internet’ or ‘network’.
· Another is by categorising feedback using existing frameworks or mental models. For example, you might review feedback from an employee engagement survey and classify comments into pre-defined categories, such as ‘leadership’, ‘compensation’ and ‘career progression’.
People tend to use these two approaches together, usually without realising that’s what they are doing.
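To make that first approach concrete, here is a minimal sketch in Python – using made-up helpdesk comments and a made-up keyword list, purely for illustration – of the kind of pattern-spotting a person does informally when they notice recurring mentions of ‘wi-fi’, ‘internet’ or ‘network’:

```python
import re
from collections import Counter

# Hypothetical free text comments from an IT helpdesk survey.
comments = [
    "The office wi-fi keeps dropping out in the afternoons.",
    "Internet speed near the meeting rooms is painfully slow.",
    "Great coffee, but the network is unreliable on the third floor.",
    "Love the new standing desks. No complaints.",
]

# Keywords we suspect signal a recurring connectivity issue.
connectivity_terms = {"wi-fi", "wifi", "internet", "network"}

# Count how many comments mention at least one connectivity term.
hits = Counter()
for comment in comments:
    words = set(re.findall(r"[a-z\-]+", comment.lower()))
    if words & connectivity_terms:
        hits["connectivity"] += 1

print(hits)  # Counter({'connectivity': 3})
```

Scanning for recurring terms like this is easy at small scale, which is exactly why it breaks down as the volume of feedback grows.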
However, while humans have an innate ability to glean insights from free text feedback, they usually struggle when faced with larger volumes of it. The work is time-consuming, people get bored, and mistakes creep in.
What’s more, if you give two or more people the same feedback, they will likely see different things in it and come up with different findings and conclusions.
Arguably, the challenge of analysing free text feedback is one of the reasons why many organisations have become over-reliant on superficial rating scale feedback when surveying employees - but that’s a subject for another blog post.
In recent years, developments in machine learning – and specifically in the field of ‘natural language processing’ – have made it easier for computers to analyse unstructured free text feedback in ways similar to humans, only much more quickly, accurately and consistently.
At Audiem, we’ve harnessed these developments to create a tool that does the heavy lifting for workplace professionals, allowing them to collect rich free text feedback in the knowledge that they can quickly and easily turn it into actionable insights.
It’s easy to see the parallels between how humans make sense of free text feedback and how Audiem goes about it. Audiem analyses free text by:
1. Identifying topics and providing easy-to-digest summaries of what those topics are about.
2. Categorising free text feedback by sentiment and against The Workplace Mix, our comprehensive framework for understanding workplaces.
But, unlike humans, Audiem can do both these things accurately and consistently with thousands of lines of free text feedback in just a few minutes.
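For readers curious about what those two steps involve under the hood, here is a rough, illustrative sketch using open-source Python tools and made-up survey comments – it is not Audiem’s actual implementation, just a flavour of topic identification (via a standard topic model) and sentiment categorisation (via a toy word list):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Made-up engagement survey comments, for demonstration only.
comments = [
    "My manager never shares what leadership is planning.",
    "Pay is below market and the bonus scheme is confusing.",
    "I love my team but there is no clear career progression here.",
    "Leadership communication has improved a lot this year.",
    "Compensation reviews feel arbitrary and demotivating.",
    "More training would help people progress in their careers.",
]

# --- Topic identification: surface recurring themes with LDA ---
vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

terms = vectoriser.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[-3:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")

# --- Sentiment categorisation: a toy lexicon-based classifier ---
positive = {"love", "improved", "help", "clear"}
negative = {"never", "below", "confusing", "arbitrary", "demotivating"}

for comment in comments:
    words = set(comment.lower().rstrip(".").split())
    score = len(words & positive) - len(words & negative)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8s} {comment}")
```

A production system would of course use far richer models than this, but the shape of the task is the same: find the themes, then attach sentiment and categories so the feedback can be compared and acted upon.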
Want to find out how Audiem can help you make sense of your employee feedback? Then get in touch at hello@audiem.io