
Data analytics is a fantastic way to help your business gain an edge over your competitors by enriching your understanding of your customers and the wider market.

However, it can also be a complex undertaking, and it is not uncommon to see a business waste precious money and resources on poorly executed data analytics projects.

To avoid these pitfalls, we have compiled a list of some of the most common data analytics mistakes (and how you can avoid them).

1) Overestimating the quality and quantity of your data

The fact is, any data analytics project is only going to be as good as the data that feeds it. It is not uncommon for companies to think they are ready to launch right into a major data initiative, only to find that their data is unclean and in need of a cleanse.

Businesses often underestimate the amount of data required to gain valuable insights. While a business might be rich in data, this data might only tell an inward-facing story and have limited external use. That is not to say an extensive amount of data is required to build an effective model; sensible models, for example, can be built from around 20 different data points.

However, poor-quality data will almost always lead to issues down the road. The clearest example is the financial cost. If a business launches a data strategy only to find that the chosen data is not up to scratch, it will either have to restart the project and source new data or abandon it completely; both scenarios result in financial harm to the business.

To better gauge and improve the quality of your data, data cleansing and enrichment are both viable options. These services, offered by external providers, take stock of the data currently available and provide clear guidelines for shaping a future data strategy.
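As a rough illustration, the short Python sketch below shows the kind of basic quality checks a business might run before committing to a project. The file name and column names are hypothetical placeholders, not a real dataset or schema.

```python
# Minimal sketch of a pre-project data quality check using pandas.
# "customer_records.csv" and the column names are hypothetical examples.
import pandas as pd

df = pd.read_csv("customer_records.csv")

# How complete is each field?
missing_share = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:\n", missing_share)

# Are there duplicate records that would skew counts?
print("Duplicate rows:", df.duplicated().sum())

# Do values fall inside a plausible range? (example: an "age" column)
implausible_ages = df[(df["age"] < 0) | (df["age"] > 120)]
print("Implausible ages:", len(implausible_ages))
```

Checks like these will not fix the data on their own, but they give an early, honest picture of whether a cleanse or enrichment is needed before any modelling begins.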

2) Analysing the data incorrectly

Another common data science mistake (which might come as a result of mistake #1) is incorrectly analysing the data. Mistakes in the analysis process can take a number of forms, from something as simple as not properly cleansing the data to failing to place the data in the relevant context.

One well-known example of incorrect data analysis is mistaking correlation for causation. Each year we see countless academic studies that attribute a single result to a single factor, whether that be ‘dog owners are happier’ or ‘children who play team sports develop better social skills’. These sorts of studies essentially only consider one set of variables. The same mistake appears in data science: when we see a link between different data sets, it is easy to get carried away and confuse correlation with causation.
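To make the point concrete, here is a purely illustrative Python sketch (not taken from any real study) in which two variables correlate strongly only because both depend on a hidden third factor, temperature.

```python
# Illustrative sketch: ice-cream sales and sunburn cases correlate strongly,
# but neither causes the other; both are driven by temperature (a confounder).
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(25, 5, 10_000)                    # hidden common cause
ice_cream_sales = 3 * temperature + rng.normal(0, 5, 10_000)
sunburn_cases = 2 * temperature + rng.normal(0, 5, 10_000)

r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"correlation(ice cream, sunburn) = {r:.2f}")        # close to 1
```

A naive reading of that correlation would suggest ice cream causes sunburn; the honest conclusion is that a third variable explains both.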

One of the most effective ways to avoid these errors is ‘peer review’. Normally associated with the academic and publishing worlds, peer review means having one or more colleagues review the methodology and conclusions throughout the process, which goes a long way towards improving quality control. This is something we have permanently embedded in our processes at smrtr.

3) ‘Cherry-picking’ data

‘Cherry-picking’ is a common problem wherever data is in use. Politicians, for example, are often criticised for only using a select dataset (cherry picking) in order to make a specific argument. It has also been a common problem in the media coverage of global issues such as global warming and the COVID-19 pandemic. 

But data cherry-picking is not exclusive to the political sphere. When it comes to data analytics, businesses might unwittingly cherry-pick certain data if they are looking to tell a specific story. This tends to happen when a business goes into a project expecting a certain answer. It will not only undermine the effectiveness of the data-led initiative, it can also raise ethical issues.

To avoid data cherry-picking, businesses must ensure they identify the correct and appropriate dataset for any given project, set aside any preconceptions and let the data tell the story. Another safeguard is to assign a team that includes at least one person who is not connected to the wider objective, which helps ensure the data is selected in an objective way.
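As a hypothetical illustration, the sketch below shows how reporting only a favourable slice of a dataset can manufacture an effect that disappears when the full dataset is used. The column names and the ‘campaign uplift’ metric are invented for the example.

```python
# Illustrative sketch of cherry-picking: the overall effect is roughly zero,
# but reporting only the best-looking region tells a different story.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"], 5_000),
    "campaign_uplift": rng.normal(0.0, 1.0, 5_000),   # no real effect overall
})

# Full dataset: average uplift is close to zero.
print("All regions:", round(df["campaign_uplift"].mean(), 3))

# Cherry-picked view: report only the best-performing region.
best = df.groupby("region")["campaign_uplift"].mean().idxmax()
print(f"Best region only ({best}):",
      round(df.loc[df["region"] == best, "campaign_uplift"].mean(), 3))
```

Agreeing on the full dataset and the success metric before looking at the results is the simplest defence against this trap.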

More broadly, to avoid these pitfalls businesses should consult experts to ensure they are complying with the relevant data privacy laws and making the most of the data they hold. At smrtr, we work with our partners to create aggregated datasets that are privacy-compliant while remaining effective for commercial use.

To find out more, contact us and we’ll be in touch within the next business day.

By Boris Guennewig, Co-Founder & CTO at smrtr