In March 2018, a trending headline revealed how a little-known firm, Cambridge Analytica, had harvested the personal data of roughly 50 million US Facebook users without their permission. Facebook is still reeling from the damage done by this exposé – it has lost more than $50 billion in market cap, with business leaders like Apple’s Tim Cook and Tesla’s Elon Musk openly criticizing the way users’ privacy was breached.
The fiasco put the spotlight squarely back on how ethics plays out in this age of massive volumes of data generated in diverse formats from structured and unstructured sources. Big Data is indeed very big, but does its sheer scale give it the leeway to compromise on ethics? Certainly not.
Big data and ethics are two distinct topics. Data, by itself, is neither good nor bad. It acquires a moral dimension through how it is analyzed and how the findings of that analysis are used. To keep that use ethical, organizations need to implement strong principles that give due emphasis to privacy and allow users to choose what they do and do not want to share.
Want to know which measures can help you accomplish this challenging objective? Then you are in the right place. Today we look at 7 ways in which you can uphold Big Data ethics and provide an ethical code of conduct for the big data experts working with your organization.
Just because organizations have the resources and technologies to collect data doesn’t mean that they have to do it. Take the two examples below:
a) A cab company taking your location to fetch nearby cabs for quicker cab travel
b) A travel company accessing chats between you and your friend planning the next weekend holiday to offer personalized holiday packages
Obviously, the second instance is a blatant violation of the user’s privacy, especially considering that the user has not even approached the travel agency for information.
At the heart of the Facebook–Cambridge Analytica debacle are the provisions of the Facebook terms of service, which have since been updated. A layman user will simply select ‘Agree’ or ‘Do not Agree’ without reading through the lengthy and boring legalese. It is imperative in these troubled times to have a simpler terms-of-service document so that readers understand what they are signing up for and how their data will be used. The privacy settings, too, need to be simplified and made user-friendly so that more users actually visit the page and see what’s in it for them.
Many times, the source of a data leak happens to be an insider. Adequate controls must therefore be in place to ensure this doesn’t happen. If a leak does occur, strict action needs to be taken against the perpetrator, setting an example so that others are not tempted to attempt data breaches in the future.
Proper training also needs to be carried out so that all employees are aware of the control measures in place to safeguard Big Data ethics at every point in their corporate life.
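One practical shape such insider controls can take is role-based access checks combined with an audit trail, so that misuse is both harder to commit and easier to trace. Below is a minimal sketch; the role names, permissions, and records are all hypothetical, not a prescription for any particular system.

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping; real systems would load
# this from a policy store rather than hard-code it.
ROLE_PERMISSIONS = {
    "analyst": {"read_aggregated"},
    "engineer": {"read_aggregated", "read_raw"},
    "admin": {"read_aggregated", "read_raw", "export"},
}

audit_log = []

def access_data(user, role, action):
    """Allow the action only if the role permits it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not {action}")
    return f"{action} granted to {user}"

access_data("alice", "engineer", "read_raw")   # permitted
try:
    access_data("bob", "analyst", "export")    # denied, but still logged
except PermissionError:
    pass
```

The key design point is that denied attempts are logged too: the audit trail records who tried to do what, which is what makes insider investigations possible after the fact.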
All analytics companies and social media sites that share data with third parties need to verify those parties’ identities to confirm that they aren’t impostors trying to steal data or carry out identity theft.
Such confidential information is typically extracted by people creating fake IDs or falsely claiming to represent an academic institution or a market research company.
It is important that organizations spend time assessing the negative impact of the use of data on specific groups of users. They should also factor in the impact of the data being made available publicly.
This raises awareness of the damage a possible data breach could cause to the credibility and reputation of the company. Companies often carry out privacy impact assessments to reduce the risk of information being misused by employees.
It is important that Big Data analytics experts know beforehand what data to collect, and how much, in order to fulfil the job at hand. That way, analysts and data collection experts can stay away from data that serves no operational purpose.
The point here is that by limiting excessive (unnecessary) data collection and analysis, you limit your exposure to the risk of a data breach.
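In code, this data-minimization idea often reduces to a whitelist applied at ingestion: keep only the fields the analysis actually needs and drop everything else before storage. The sketch below is illustrative only; the field names and the raw record are invented for the example.

```python
# Hypothetical whitelist of fields the analysis actually requires.
REQUIRED_FIELDS = {"user_id", "trip_start", "trip_end"}

def minimize(record: dict) -> dict:
    """Strip a raw record down to the whitelisted fields only."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {
    "user_id": "u123",
    "trip_start": "2018-03-01T09:00",
    "trip_end": "2018-03-01T09:25",
    "home_address": "12 Main St",   # sensitive and unnecessary
    "contacts": ["u456", "u789"],   # sensitive and unnecessary
}

clean = minimize(raw)
# clean now holds only user_id, trip_start, and trip_end
```

Because the sensitive fields are never stored, they can never be leaked from this dataset, which is exactly the risk-limiting effect described above.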
In spite of the best plans and intentions, a data breach might still strike your company. In that situation, it is not wise to panic. You need a pre-defined crisis management strategy to get through the rough days of a breach: stop the data ‘bleeding’, clean up the systems, verify that the source of the leak is plugged, and then resume operations.
IDC predicts that by 2020 more than 1.5 billion people will be affected by data breaches. The entire process of data collection, storage, sharing, analysis, and visualization needs to operate with sound governance models in place.
Every stakeholder in the value chain (users, third parties, analytics firms, and social media platforms) needs to practice ethics in Big Data to bring about tangible change.
Could regulatory and industry-backed initiatives like the upcoming GDPR or the Bloomberg/Bright Hive partnership be the answer to the ethics dilemma associated with Big Data? That will largely be contingent upon how companies infuse these values into their strategy and corporate DNA. Specifically, data scientists will have to integrate ethical review and algorithmic bias testing, and consider the impact of their work on society at large.