Big data is revolutionising life sciences. From AI-assisted drug discovery to personalised medicine, data-driven breakthroughs are transforming how we diagnose, treat, and prevent disease. But with great power comes… well, a mess of ethical dilemmas.
Who owns genetic data? Can AI be trusted in life-or-death decisions? Are companies really protecting patient privacy or just pretending to care?
The good news? Ethical data initiatives are emerging—especially in Europe and Switzerland—showing that responsible big data use is possible. At the same time, demand for data ethics professionals is rising fast, and life sciences companies need to catch up.
In this article:
- The core principles of big data ethics
- Who really owns our (medical) data?
- How big data is used (and abused) in life sciences – hello ethics!
- AI, bias, and discrimination: when big data gets it wrong
- Data ethicists and jobs in data ethics: the smart career move no one talks about
- Ethical big data: building trust in the age of innovation
The core principles of big data ethics
At its best, big data fuels medical innovation. At its worst, it becomes a global surveillance system. To prevent the latter, organisations must follow key ethical principles:
- Transparency: Patients should know how their data is collected, stored, and used. (No, burying it in a 60-page consent form doesn’t count.)
- Privacy: Data should be protected, anonymised, and handled with care, especially in healthcare. (The sketch after this list shows what that can mean in practice.)
- Accountability: When AI makes a mistake, who takes responsibility? The developers? The data providers? The AI itself? (Spoiler: not the AI.)
- Fairness: Biased data leads to biased decisions. If AI is trained on incomplete or skewed datasets, it risks deepening health disparities.
- Security: Data breaches in healthcare are a goldmine for cybercriminals. Stronger protections are non-negotiable.
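What does “anonymised” involve in practice? At a minimum, dropping direct identifiers and replacing stable IDs with tokens that can’t be reversed without a separately held secret. Here’s a minimal pseudonymisation sketch in Python; the record fields are invented for illustration, and real anonymisation under GDPR demands considerably more than this:

```python
import hashlib
import secrets

# Hypothetical patient record; field names and values are invented.
record = {
    "patient_id": "CH-4471-220",
    "name": "Jane Doe",
    "diagnosis": "type 2 diabetes",
    "year_of_birth": 1967,
}

# Secret salt, stored separately from the data. Without it, an attacker
# cannot recover IDs simply by re-hashing guessed identifiers.
SALT = secrets.token_bytes(32)

def pseudonymise(rec: dict) -> dict:
    """Drop direct identifiers; replace the patient ID with a salted hash."""
    token = hashlib.sha256(SALT + rec["patient_id"].encode()).hexdigest()[:16]
    return {
        "pseudonym": token,
        "diagnosis": rec["diagnosis"],
        "year_of_birth": rec["year_of_birth"],  # note: still a quasi-identifier
    }

print(pseudonymise(record))
```

Note the comment on the last field: pseudonymisation removes names and IDs, but attributes like birth year remain quasi-identifiers. That is exactly the re-identification risk we come back to later in this article.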
These principles exist for a reason: companies love pushing ethical boundaries until regulation forces them to stop.
Read more about how big data challenges healthcare ethics here.

Writing the rules as we go
Regulation is playing catch-up. Some frameworks exist—GDPR in Europe, HIPAA in the U.S.—but they weren’t designed for the speed and scale of AI-driven data.
Life sciences companies are struggling to self-regulate in this new reality. AI can analyse millions of patient records in seconds, but if those records contain biased or incomplete data, the algorithms will make flawed, even dangerous decisions.
That’s why Switzerland is taking a proactive approach. The Swiss Personalized Health Network (SPHN) is setting strict guidelines for ethical data use in research, ensuring that patient data remains secure, fair, and anonymised. Similarly, the Digital Trust Label, developed by the Swiss Digital Initiative, helps organisations demonstrate their commitment to data responsibility—giving users clarity on how their data is used, processed, and protected.
So, are we where we need to be? Not yet. But the rules are being written—and this time, stakeholders from healthcare, policy, and tech are at the table.
Who really owns our (medical) data?
Data is big business. Tech companies profit from selling user data, while individuals get… what, personalised ads?
Many people trade privacy for convenience—free apps, seamless experiences, AI-powered healthcare—but few realise just how much they’re giving away.
The problem? Patients rarely own their medical data in any meaningful sense. Companies do. Consumer genetic testing services, for example, have shared customers’ DNA data with pharmaceutical companies under broad consent terms that few people ever read. (Remember the Facebook-Cambridge Analytica scandal? Now imagine that, but with your genome.)
Switzerland is making moves to fix this. The Swiss Data Protection Act (nFADP), updated in 2023, strengthens individual rights over personal data and forces companies to disclose exactly how information is processed and shared.
It’s a step forward, but until there’s a cultural shift around consent, many patients won’t realise their data is being used until it’s too late.

How big data is used (and abused) in life sciences – hello ethics!
Let’s be clear—big data is the backbone of healthcare innovation right now.
It’s enabling:
- Early disease detection through predictive models.
- Personalised treatment plans using patient-specific variables.
- Drug discovery via AI that identifies viable compounds faster than ever.
But here’s the ethical tension: while big data helps save lives, it also challenges patient autonomy and informed consent.
Much of the data used in research today comes from public sources, wearable devices, or digital health apps.
Most users have no idea how their data will be used—or whether it could be sold, re-identified, or repurposed years later.
Informed consent in this context becomes murky. Patients agree to one thing (often unknowingly), and their data ends up fuelling something entirely different.
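“Re-identified” is worth unpacking. Even with names removed, a handful of quasi-identifiers (age band, postcode, sex) can single a person out when matched against other datasets. One common sanity check is k-anonymity: the smallest group size once records are grouped by those attributes. A toy sketch, with invented data:

```python
from collections import Counter

# Toy released dataset; every value here is invented.
records = [
    {"age_band": "40-49", "postcode": "8001", "sex": "F"},
    {"age_band": "40-49", "postcode": "8001", "sex": "F"},
    {"age_band": "40-49", "postcode": "8001", "sex": "F"},
    {"age_band": "70-79", "postcode": "3920", "sex": "M"},  # unique combination
]

QUASI_IDENTIFIERS = ("age_band", "postcode", "sex")

def k_anonymity(rows: list[dict], keys: tuple) -> int:
    """Smallest group size when rows are grouped by the quasi-identifiers.
    k = 1 means at least one person is unique and potentially re-identifiable."""
    groups = Counter(tuple(row[k] for k in keys) for row in rows)
    return min(groups.values())

print(k_anonymity(records, QUASI_IDENTIFIERS))  # -> 1: not safe to release as-is
```

A common rule of thumb in health data release is to require k of at least 5 before sharing; anything less means someone in the dataset is one cross-reference away from being named.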
This is why Europe launched the European Health Data Space (EHDS)—a framework that aims to balance innovation with trust. It promotes cross-border data sharing for research, but also reinforces transparency and individual control.
AI, bias, and discrimination: when big data gets it wrong
Here’s where things get uncomfortable. AI isn’t just neutral code. It’s trained on data, and data reflects human biases.

One real-world case: an algorithm used in U.S. hospitals to recommend care management programmes. It analysed healthcare spending as a proxy for patient needs. The result? It prioritised white patients over Black patients because, historically, less money is spent on Black patients, even when they have the same health conditions.
That’s not just a glitch—it’s systemic bias encoded in math.
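To see how a spending proxy goes wrong, consider a stylised simulation. All numbers below are invented; only the mechanism mirrors the reported case: two groups with identical medical need, but systematically lower historical spending on one of them.

```python
import random

random.seed(0)  # reproducible illustration

# Two groups with identical true medical need, but historical spending
# on group B is systematically lower for the same level of need.
def make_patient(group: str) -> dict:
    need = random.uniform(0, 10)                  # true severity of illness
    spend_factor = 1.0 if group == "A" else 0.7   # structural under-spending on B
    spending = need * spend_factor + random.gauss(0, 0.5)
    return {"group": group, "need": need, "spending": spending}

patients = [make_patient(g) for g in ["A"] * 500 + ["B"] * 500]

# The flawed design: treat historical spending as a proxy for need,
# and enrol the top 20% of "highest-need" patients in the programme.
patients.sort(key=lambda p: p["spending"], reverse=True)
enrolled = patients[:200]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
print(f"Group B share of enrolment: {share_b:.0%} (equal need would suggest ~50%)")
```

Run it and group B ends up with far less than the roughly 50% share a need-based ranking would produce. Nothing about the patients differs; only the proxy does.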
In Europe, the AI Act is setting new rules for how “high-risk AI” systems—especially in healthcare—must be designed, audited, and governed. The goal? Prevent automated discrimination before it causes harm.
For life sciences companies, this means ensuring their AI systems are trained on representative datasets, continuously tested for bias, and managed by teams that include ethicists—not just engineers.
Because if the data is flawed, AI is just making bad decisions faster.
Data ethicists and jobs in data ethics: the smart career move no one talks about
As big data becomes central to every major decision in life sciences, a new kind of professional is emerging: the data ethicist.
But don’t picture an academic philosopher in an ivory tower. Today’s data ethicist is more like a cross-functional diplomat—someone who can speak tech, understand regulation, and challenge assumptions inside complex systems.
In pharma and biotech, their job isn’t theoretical—it’s operational:
- Collaborating with R&D to assess ethical risks in AI-powered diagnostics.
- Guiding compliance teams through grey areas in data reuse and secondary research.
- Advising product teams on how to embed ethical practices into design workflows.
The challenge? These profiles are rare and hard to recruit. Most candidates either come from ethics or tech—not both. That’s where specialised recruitment comes in.
Life sciences companies need professionals who get the nuance, who can question confidently, and who know when to say no.
For candidates looking to future-proof their careers, data ethics is more than a buzzword—it’s a growth market.
Ethical big data: building trust in the age of innovation
Big data isn’t just changing life sciences—it’s redefining them. Faster discovery, more tailored care, smarter systems. But none of it works without trust.
And trust isn’t built on buzzwords. It’s built on transparency, accountability, and a willingness to ask uncomfortable questions. That’s where data ethicists come in. That’s where good leadership comes in. And yes—that’s where smart hiring comes in.
We’re not saying it’s easy. We’re saying it’s worth it. The future of health is being shaped right now—by the data we collect, the people we empower, and the values we choose to code into the system.
Let’s make sure those choices are deliberate. And human.