3 Policies for Big Data Privacy Problems
In the wake of high-profile data privacy scandals — from the Ashley Madison breach to revelations about how companies like Target and Uber have leveraged sensitive user data — public concern about how organizations collect, analyze, and apply data has intensified. Consumers are no longer worried only about security breaches, but also about whether companies can be trusted to use data responsibly.
Advances in digital technology, from GPS tracking and mobile devices to wearable tech, now allow organizations to assemble highly detailed pictures of individuals' daily behavior. While this creates enormous analytical value, it also places companies under intense scrutiny. As data capabilities grow, so does the reputational and regulatory risk associated with misuse.
During my time at Evolv (later acquired by Cornerstone OnDemand), we found ourselves at the center of public conversations about workplace surveillance and predictive analytics. Rather than adopting a defensive posture, the company chose to proactively define ethical boundaries around data usage.
1. Be transparent and define a Big Data Code of Ethics
Evolv developed a clear, plain-language Workforce Big Data Code of Ethics that outlined what data was collected, how it was used, and what types of data were explicitly off-limits. The document was not legal boilerplate, but a values-driven statement designed to clarify intent and guide decision-making.
This transparency served two purposes: it forced internal alignment on ethical standards, and it provided an accessible reference point for customers, journalists, and regulators asking hard questions about data practices.
2. Distinguish between acceptable, questionable, and “creepy” data
Ethical data use is not binary. Beyond what is legally permitted, companies must confront gray areas. A well-known example emerged when Evolv's research showed that job applicants who applied using Chrome or Firefox stayed in their roles longer and performed better than those who applied using the preinstalled default browser.
Although incorporating browser data could have improved predictive accuracy, the team recognized that the signal was likely correlated with age and could feel invasive to applicants. The team chose not to use browser data in its hiring models, drawing a clear line between insight and intrusion.
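One lightweight way to make that kind of line-drawing stick is an explicit exclusion list that the feature pipeline enforces, so an off-limits signal can never quietly slip into a model. The sketch below is a minimal illustration of the idea, not Evolv's actual pipeline; the feature names and the build_feature_matrix helper are hypothetical.

```python
# Minimal sketch of an enforced feature exclusion list.
# Hypothetical feature names; not Evolv's actual pipeline.
EXCLUDED_FEATURES = {
    "browser_family",      # likely a proxy for age; felt invasive to applicants
    "os_version",
    "device_fingerprint",
}

def build_feature_matrix(records, feature_names):
    """Assemble model inputs, refusing any feature on the exclusion list."""
    banned = EXCLUDED_FEATURES.intersection(feature_names)
    if banned:
        raise ValueError(f"Off-limits features requested: {sorted(banned)}")
    return [[rec[name] for name in feature_names] for rec in records]

# Example: this call fails loudly instead of quietly training on browser data.
records = [{"tenure_months": 14, "assessment_score": 0.82, "browser_family": "chrome"}]
try:
    build_feature_matrix(records, ["assessment_score", "browser_family"])
except ValueError as err:
    print(err)  # Off-limits features requested: ['browser_family']
```

Failing loudly is the point of the design: a banned feature raises an error at build time rather than degrading into a silent policy violation buried inside a trained model.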
3. Actively test for bias and fairness
When analytics influence hiring decisions, fairness becomes paramount. Evolv deliberately audited its models to ensure that assessment outcomes did not disadvantage groups defined by protected characteristics such as age and sex.
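In hiring contexts, one common form such an audit takes is the EEOC's four-fifths (80%) rule: the selection rate for each protected group should be at least 80% of the rate for the most-selected group. The sketch below is a generic illustration of that check, not Evolv's actual audit code; the age bands and pass counts are invented for the example.

```python
from collections import defaultdict

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Four-fifths rule check. outcomes: iterable of (group, was_selected) pairs.

    Returns, per group, the ratio of its selection rate to the highest group's
    rate and whether that ratio clears the threshold. Illustrative only.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: (rate / top, rate / top >= threshold) for g, rate in rates.items()}

# Hypothetical audit of assessment pass rates by age band.
outcomes = (
    [("under_40", True)] * 62 + [("under_40", False)] * 38
    + [("40_plus", True)] * 45 + [("40_plus", False)] * 55
)
for group, (ratio, passes) in adverse_impact_ratios(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'OK' if passes else 'FLAG'}")
# under_40: impact ratio 1.00 -> OK
# 40_plus:  impact ratio 0.73 -> FLAG
```

A flagged ratio is not proof of discrimination, but it tells the team exactly where to dig before a model's outputs ever reach a hiring decision.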
In some cases, analytics helped reduce bias rather than reinforce it. Research on the long-term unemployed, for example, demonstrated that their performance was no worse than that of traditionally employed candidates, a finding later cited in a White House report advocating fair hiring practices.
The broader lesson is clear: companies using big data must assume that public scrutiny will only intensify. Waiting for regulation is no longer viable. Organizations must proactively define ethical boundaries, communicate them transparently, and earn trust through restraint as much as through insight.
In the long run, responsible data governance is not just about avoiding backlash — it is essential to sustaining the very relationships that make data-driven insight possible.
