Enabling ICE: The Moral Obligations of Data Sharing
As ICE asks for “commercial Big Data and Ad Tech” products that would “directly support investigations activities”, what moral obligations do data-led organisations have in 2026? Adam, CEO of MetadataWorks, has long championed making data FAIR — Findable, Accessible, Interoperable and Reusable — yet argues that this moment exposes ethical tensions in data sharing that cannot be ignored.
For those of you who have had little to no access to the outside world, a quick recap on ICE. ICE (U.S. Immigration and Customs Enforcement) is a federal law-enforcement agency within the U.S. Department of Homeland Security, overseeing both Enforcement and Removal Operations (ERO) and Homeland Security Investigations (HSI).
The agency has hit global news recently over allegations of excessive force. Most prominently, federal immigration agents shot and killed Alex Pretti, a 37-year-old ICU nurse and U.S. citizen, in Minneapolis during an ICE enforcement operation, sparking intense public outrage, protests, and political criticism. Eyewitness video and accounts conflict with official federal statements about the incident, heightening tensions.
The shooting brought hundreds of protesters onto the streets of Minneapolis, and demonstrations echoed across other U.S. cities. Local leaders — including Minnesota’s governor — demanded that federal forces leave the state, calling the situation unsafe and accusing ICE of excessive force. Polling suggests 63% of Americans currently disapprove of ICE’s actions.
What’s this got to do with data?
According to a recent Request for Information published in the Federal Register, ICE is seeking details from U.S. companies about “commercial Big Data and Ad Tech” products that could directly support investigative work.
As WIRED has reported, this appears to be the first time ICE has explicitly referenced ad tech in such a filing — signalling interest in repurposing data and technologies originally built for advertising, such as location and device data, for law-enforcement and surveillance purposes.
ICE has framed the request as exploratory and planning-oriented, asserting a commitment to civil liberties and privacy. However, this is not happening in isolation. ICE has previously purchased and used commercial data products — including mobile location data and analytics platforms — from vendors such as Palantir, Penlink (Webloc), and Venntel.
What are the implications for commercial organisations?
This kind of move by ICE throws a spotlight on the moral responsibilities of data-heavy companies, even when what they’re doing is technically legal.
I strongly believe in data federation and meaningful data sharing between public and private sectors. But we must be honest with ourselves: data sharing is not always an unqualified good.
If you’re sharing data or data tools with ICE, it seems reasonable to suggest you’re contributing to their output. At the moment, that is certainly not something I, or MetadataWorks as a company, would be comfortable with.
For now, most of these private companies are not legally required to sell or share data with ICE.
In essence:
- For the private sector, choosing to sell or share data or data tools is an ethical as well as a financial decision
- Choosing not to sell is also a statement which could have real commercial implications
The social contract behind data
Much of today’s data economy rests on an implicit social contract.
People tolerate — and often reluctantly accept — the collection of their personal data because they believe it will be used in ways that are broadly beneficial, proportionate, and aligned with societal norms. They expect data to improve services, enable innovation, and support public good outcomes — not to be quietly repurposed for coercive surveillance or enforcement activities they fundamentally oppose.
When data collected under that social contract is redirected into coercive surveillance or enforcement activities, the contract is broken.
But there is a deeper and more uncomfortable problem: social contracts are rarely renegotiated when power changes.
Moral tension:
People may be broadly comfortable sharing data under one government, one regulator, one corporate owner, or one leadership team — because they trust the values and safeguards in place at the time. But data persists far longer than administrations, CEOs, boards, or political norms.
A dataset collected under one set of ethical assumptions can quietly become an asset under another. A company is acquired, leadership changes, a government is voted out — and suddenly data people were comfortable sharing is being used in ways they would never have agreed to. In many cases, individuals have no practical mechanism to:
- reassert consent,
- restrict new uses,
- or meaningfully withdraw from downstream applications of their data.
This is not a hypothetical risk. It is a structural flaw in how modern data economies operate.
If consent cannot be revisited when power changes, then it was never truly informed or durable in the first place. And if people cannot realistically stop new uses they did not agree to, claims of legitimacy rest on increasingly shaky ground.
Trust, once lost, is extremely difficult to rebuild — and when trust in data systems erodes, people disengage, withhold information, and seek ways around institutions altogether. The long-term cost is borne by everyone.
What about ‘Neutral Tech’?
Companies that collect or broker large amounts of data (ad tech firms, data brokers, analytics platforms) often claim they’re providing tools, not deciding how they’re used. Essentially, the argument is that potential immoral uses of a technology shouldn’t stifle technological development. But when those tools are explicitly sold or freely shared for enforcement or surveillance conducted in what most would agree is an unethical manner, that neutrality argument gets shaky.
Moral tension:
If you know your product may enable forceful deportations, family separation, violence or coercive surveillance — and you choose to sell to agencies known for these practices, can you claim zero responsibility? Personally, I don’t think so.
What about Consent?
Most people whose data flows into ad-tech ecosystems:
- never explicitly consented to law-enforcement use
- do not expect behavioural or location data collected for advertising to be repurposed for surveillance
Even if data is “legally obtained”, using it for enforcement arguably violates the spirit of informed consent, if not the letter of the law.
Moral tension:
Many data professionals are comfortable stretching consent when the outcome is clearly beneficial — healthcare, safer transport, improved urban planning. That comfort evaporates when the same data enables practices that large parts of society view as morally questionable.
When moral legitimacy is contested, consent matters more, not less.
A Slippery Slope?
If we treat the purchase of data in this case as permissible, we risk setting a precedent for the future: data collected for one purpose quietly expands into much more invasive uses.
Moral tension:
If companies normalise selling to ICE, it becomes easier for any government agency to justify buying commercial surveillance for any purpose. Buying data in this way becomes an alternative to traditional warrant applications and the usual mechanisms of democratic oversight, and a far less favourable one, eroding helpful checks and balances.
Free will of the company?
Ultimately, private companies have free will. They can decide whether these moral concerns matter enough to influence their commercial strategy.
But if a company chooses to sell data or data services in this context, transparency should be non-negotiable. Customers, partners, and the public deserve visibility into:
- who data is sold to
- for what purposes
- under what ethical review (if any)
Without transparency, trust erodes, and rightly so.
What about the public sector (in the US)?
Some data-led organisations do not have the same freedom of choice.
In the U.S., certain data is automatically shared across federal systems. For example:
- fingerprints collected by local police are routinely checked against DHS/ICE databases
- driver and vehicle data shared via Nlets can be accessed by federal agencies, including ICE
In these cases, interoperability decisions remove ethical discretion from individual agencies.
That said, there is no blanket rule requiring all public-sector data to be shared with ICE. Agencies are still bound by privacy laws and internal policies — though enforcement and leadership discretion vary, which is concerning in itself.
Courts have previously restricted data sharing where confidentiality rules were breached, demonstrating that limits do exist when safeguards are taken seriously.
Where does this all leave us?
Private-sector organisations do have a choice — and with choice comes responsibility.
FAIR data has immense power to improve lives. I’ve built my career on that belief. But in the wrong hands, well-shared data can enable profound harm.
If we want to be serious about ethical data sharing, companies must go beyond legal compliance and establish formal Data Access and Ethics Boards — bodies with real authority, not symbolic or purely advisory status.
These boards must have the power to approve, block, or withdraw data access where ethical, societal, or human-rights risks are identified, and to revisit past decisions when political, organisational, or social contexts change. Ethical approval cannot be a one-time event; it must be ongoing and responsive to shifts in power, ownership, or intended use.
Crucially, this governance must be supported by transparent, public-facing processes and tools, not closed-door judgement calls. At a minimum, this should include:
- a published data access process, setting out how requests are assessed and on what ethical basis decisions are made;
- a public data ‘project’ register, showing which organisations are accessing data, for what purpose, and under what conditions, ideally linked to a data outputs register that documents how data is ultimately used, what outputs are produced, and — where appropriate — who is using them.
Where personal data is involved, transparency should still extend as far as lawfully possible, making clear which organisations — and in some cases which teams or practitioners — are responsible for using the data.
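To make this concrete, here is a minimal sketch, in Python, of what a single entry in such a public project register might capture: who has access, for what purpose, under what conditions, and when the decision must be revisited. Every field name and example value is hypothetical, chosen purely to illustrate the idea rather than to describe any particular organisation’s tooling.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ProjectRegisterEntry:
    """One public record of an approved data access request (hypothetical schema)."""
    organisation: str       # who is accessing the data
    purpose: str            # the approved purpose, in plain language
    data_assets: List[str]  # which datasets are in scope
    conditions: List[str]   # conditions attached by the ethics board
    approved_on: date       # when access was granted
    review_by: date         # ethical approval is time-limited, not a one-off event
    outputs: List[str] = field(default_factory=list)  # links to the outputs register

    def needs_rereview(self, today: date, context_changed: bool = False) -> bool:
        """Access must be revisited when the review date passes or the wider
        context shifts (new ownership, new government, new intended use)."""
        return context_changed or today >= self.review_by

# Example entry, with made-up details purely for illustration
entry = ProjectRegisterEntry(
    organisation="Example Research Unit",
    purpose="Evaluating regional health outcomes",
    data_assets=["anonymised-hospital-episodes"],
    conditions=["no re-identification", "secure environment only"],
    approved_on=date(2026, 1, 15),
    review_by=date(2027, 1, 15),
)
print(entry.needs_rereview(date(2026, 6, 1)))        # False: still within approval
print(entry.needs_rereview(date(2026, 6, 1), True))  # True: context changed, revisit
```

The point of the sketch is the last field and the last method: a register entry that cannot record its outputs, or be flagged for re-review when circumstances change, is a publication exercise rather than a governance mechanism.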
Importantly, these approaches are not theoretical. Some of our partners — including the Office for National Statistics’ Secure Research Service — already operate robust boards, clearly defined access processes, and practical tools that demonstrate how ethical oversight, transparency, and accountability can be implemented at scale.
Without this level of visibility and enforceable control, ethics risks becoming a branding exercise rather than a governance reality. Transparency, veto power, and the ability to change course are what give ethical commitments meaning.
If you would like to explore how these models work in practice, please get in touch.