Location Data Privacy Enforcement Gets More Concrete

The FTC's Kochava settlement and California's GM action show privacy enforcement moving toward consent, data minimization, and downstream data-broker controls.

Aisha Rahman

Cybersecurity reporter

Published May 9, 2026

Updated May 9, 2026

12 min read

Overview

Location data privacy moved from a policy debate to a more concrete compliance problem in early May 2026. In the same week, the Federal Trade Commission advanced a settlement that would restrict Kochava's sale of sensitive location data, while California officials announced a $12.75 million General Motors privacy settlement tied to driver location and behavior data.

The two cases are different. One centers on mobile location data sold by a data broker. The other centers on connected-car data tied to OnStar and Smart Driver. But together they point in the same direction: regulators are treating precise movement data as a higher-risk asset that needs real consent, clear limits, and stronger controls after it leaves the company that collected it.

Location data privacy now has two May enforcement markers

The FTC's May 4 update to the Kochava case timeline links to a proposed order and press release that would settle claims over the sale of location data linked to millions of mobile devices. The agency said the case concerns data tied to sensitive places, including medical facilities, religious locations, and other places where movement patterns can expose private facts about a person.

Four days later, California officials announced a General Motors settlement over alleged sales of Californians' location and driving data to data brokers. Bloomberg Law reported that GM would pay $12.75 million, and the California announcement described the action as the largest CCPA penalty in state history to date.

The timing matters because these are not tiny privacy notices or technical corrections. They are enforcement actions with named companies, money, consent restrictions, and business-model consequences. For security and privacy leaders, that makes location data privacy a board-level risk rather than a footnote in a mobile-app or vehicle-feature review.

Kochava settlement targets sensitive location data sales

The Kochava case is important because it focuses on downstream data sale, not only collection. According to the FTC's public case page, the proposed order would settle charges tied to millions of mobile devices by banning Kochava and its subsidiary, Collective Data Solutions, from selling sensitive location data.

MediaPost's May 4 coverage of the Kochava location data settlement said the proposed resolution would require affirmative consent before certain sensitive location data is sold or disclosed. AdExchanger framed the settlement as the latest chapter in a long-running fight over what counts as sensitive location data and how much harm the sale of that data can create.

That is a harder standard than many data businesses are used to. It is not enough to say data is used for analytics, audience targeting, measurement, or product improvement. If the data can expose where people worship, seek medical care, receive counseling, or spend nights, regulators are asking whether the business has a direct and defensible reason to trade in it.

For app publishers, ad-tech firms, SDK providers, and analytics vendors, the message is plain: a chain of contracts does not make sensitive movement data low-risk.

GM driver data shows cars are privacy systems too

The GM case brings the same issue into the connected-car market. Bloomberg Law reported that GM agreed to pay $12.75 million to resolve allegations that it illegally sold hundreds of thousands of Californians' location and driving data to data brokers. CalMatters reported that investigators said GM made about $20 million from unlawful sale of that data between 2020 and 2024.

The California announcement, syndicated by EIN Presswire as the General Motors privacy settlement, said the settlement includes civil penalties and restrictions on the use and sale of consumer driving data. It also said GM sold data to LexisNexis and Verisk without customers' knowledge or consent.

That makes the car a privacy system, not only a transportation product. A modern vehicle can generate location, speed, braking, acceleration, app-usage, roadside-assistance, infotainment, and diagnostic signals. Some of that data may help a customer. Some may help a manufacturer. Some may interest insurers, data brokers, advertisers, lenders, or litigants.

The compliance problem appears when the driver believes they are using a convenience feature, while the business treats the same signal as a data product.

Data minimization is becoming the sharper rule

California's GM action is especially notable because officials described it as a data-minimization case. That concept asks a direct question: did the business collect, use, keep, or sell more personal information than it reasonably needed for the purpose the consumer expected?

Data minimization can be more disruptive than a notice requirement. A notice can be rewritten. A minimization obligation can force a company to redesign collection, storage, retention, vendor sharing, product analytics, and revenue arrangements. It may also force a company to prove that each data flow has a current purpose, not just a historical business reason.

For connected cars, that means a manufacturer may need to separate data used for safety, maintenance, navigation, insurance programs, product analytics, and broker sale. For mobile apps, it means a publisher may need to split precise location from coarse location, requested service from advertising use, and first-party use from third-party transfer.
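
What that split can look like in code is easy to sketch. The example below is illustrative only: the purpose names and the two-decimal coarsening rule are assumptions for this sketch, not any company's actual pipeline.

```python
from dataclasses import dataclass

# Illustrative purposes; a real program would map each to a legal basis.
COARSE_ONLY_PURPOSES = {"advertising", "measurement", "analytics"}

@dataclass
class LocationEvent:
    device_id: str
    lat: float
    lon: float
    purpose: str  # why this event is being processed

def coarsen(value: float, decimals: int = 2) -> float:
    """Round a coordinate to roughly 1 km precision (2 decimal places)."""
    return round(value, decimals)

def minimize(event: LocationEvent) -> LocationEvent:
    """Keep full precision only where the requested service needs it."""
    if event.purpose in COARSE_ONLY_PURPOSES:
        return LocationEvent(
            device_id=event.device_id,
            lat=coarsen(event.lat),
            lon=coarsen(event.lon),
            purpose=event.purpose,
        )
    return event  # e.g. turn-by-turn navigation keeps full precision

nav = minimize(LocationEvent("abc123", 37.774929, -122.419416, "navigation"))
ads = minimize(LocationEvent("abc123", 37.774929, -122.419416, "advertising"))
print(nav.lat, ads.lat)  # 37.774929 vs 37.77
```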

This is where privacy becomes operational. The privacy team cannot solve it alone if engineering, product, data science, legal, and revenue teams all touch the same location data.

Data broker risk now reaches the company that collected data

The Kochava and GM developments also weaken a common defense: the collector is not the final user of the data. Regulators are looking down the chain. If a company collects location or driving data and then sends it to a broker, the downstream use can come back into the enforcement story.

That matters for third-party risk programs. Security teams are used to reviewing vendors for breach risk, access controls, encryption, retention, and incident response. Location data privacy adds a different question: can the recipient use the data to infer sensitive places, routines, home addresses, workplace patterns, medical visits, religious observance, or risk behavior?

The answer may not be visible in a normal vendor questionnaire. A data broker can combine one feed with other signals. A mobile advertising ID, hashed identifier, connected-car record, or device history may seem less sensitive in isolation than it becomes after matching.
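
A toy example makes the matching risk concrete. Assume two hypothetical feeds keyed by the same hashed advertising ID; the field names and values here are invented for illustration.

```python
# Feed A: a "non-sensitive" coarse location feed from an app SDK.
feed_a = {"h_9f2c": {"night_area": "Elm St block", "day_area": "Clinic district"}}

# Feed B: a separate broker feed with household attributes.
feed_b = {"h_9f2c": {"vehicle": "2022 sedan", "household_size": 2}}

# A plain dictionary join is all it takes to turn two weak signals
# into a profile neither feed would reveal on its own.
profile = {hid: {**feed_a[hid], **feed_b.get(hid, {})} for hid in feed_a}
print(profile)
# {'h_9f2c': {'night_area': 'Elm St block', 'day_area': 'Clinic district',
#             'vehicle': '2022 sedan', 'household_size': 2}}
```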

Companies therefore need reviews that follow the data, not only the contract.

Connected car data creates a wider privacy surface

The FTC had already acted against GM and OnStar earlier in 2026. The agency's General Motors case page describes a January action over sharing drivers' precise location and driving behavior data without consent. The California settlement adds another state-level layer to the same connected-car concern.

That sequence is important. It shows federal and state regulators can look at similar facts through different legal tools. A company may settle with one regulator and still face another action, private litigation, or state investigation if the underlying data practice affected different consumers or legal duties.

For automakers, the privacy surface now includes mobile apps, dealer enrollment, connected services, infotainment accounts, insurance partnerships, roadside assistance, warranty systems, over-the-air updates, and data broker relationships. It also includes the way a feature is described to a driver at the moment they opt in.

Cars are no longer offline products with occasional data exhaust. They are rolling software platforms with persistent data trails.

Security teams need to treat location as sensitive by default

Location data should not sit in a low-risk bucket just because it lacks a name or email address. Movement patterns can reveal identity through home and work locations. They can show visits to clinics, shelters, schools, places of worship, courthouses, union offices, military facilities, or political events. They can also expose routines that create personal safety risks.
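
A rough sketch shows how little code that inference takes. Assume a handful of timestamped fixes for one device; the data, the grid rounding, and the midnight-to-6 a.m. window are all invented for this illustration.

```python
from collections import Counter
from datetime import datetime

# Timestamped fixes for one device; coordinates rounded to ~100 m cells.
fixes = [
    ("2026-05-01T02:14:00", 37.775, -122.419),
    ("2026-05-02T03:40:00", 37.775, -122.419),
    ("2026-05-02T14:05:00", 37.802, -122.405),
    ("2026-05-03T01:55:00", 37.775, -122.419),
]

def likely_home(points):
    """Most frequent grid cell seen between midnight and 6 a.m."""
    night = [
        (lat, lon)
        for ts, lat, lon in points
        if datetime.fromisoformat(ts).hour < 6
    ]
    return Counter(night).most_common(1)[0][0] if night else None

print(likely_home(fixes))  # (37.775, -122.419) -- a strong home-address guess
```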

That is why location data privacy belongs in security architecture. Access logs, retention settings, data-loss controls, encryption, data catalogs, and vendor feeds all matter. A privacy lawyer can write the policy, but engineering and security teams decide whether a sensitive location feed can be queried, exported, joined, or sold.
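
One way an engineering team might enforce that boundary is a policy check in front of every precise-location query, with an audit record of who asked and why. The sketch below assumes a simple in-process check; the roles and purposes are invented, and a real system would sit behind a centralized authorization service and data catalog.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("location_access")

# Illustrative policy: which roles may query precise location, and why.
ALLOWED = {
    ("safety_engineer", "crash_response"),
    ("fraud_analyst", "account_takeover_review"),
}

def query_precise_location(user_role: str, purpose: str, device_id: str):
    """Gate every precise-location query and record who, why, and when."""
    allowed = (user_role, purpose) in ALLOWED
    audit.info(
        "precise_location_query role=%s purpose=%s device=%s allowed=%s at=%s",
        user_role, purpose, device_id, allowed,
        datetime.now(timezone.utc).isoformat(),
    )
    if not allowed:
        raise PermissionError(f"{user_role} may not query for {purpose}")
    return fetch_points(device_id)  # hypothetical data-layer call

def fetch_points(device_id: str):
    return []  # stand-in for the real storage query
```

The point is not this specific mechanism. It is that every sensitive query leaves a record someone could later read.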

This connects to broader Pagalishor coverage of security teams rebuilding identity review workflows. The same discipline used for account access should apply to sensitive data access: who can see it, why, for how long, and with what record of use.

It also connects to security buyers wanting fewer dashboards and faster decisions. Location data programs do not need more dashboards if no one can stop a bad data flow. They need ownership and clear action paths.

Privacy programs need stronger location-data inventories

The immediate task for companies is inventory. If a business cannot say where precise location data is collected, stored, enriched, transferred, sold, or deleted, it cannot make a credible consent or minimization claim.

A useful inventory should separate active collection from historical stores. It should identify SDKs, analytics tools, cloud data sets, partner exports, insurance or advertising feeds, and data broker relationships. It should also record whether the data is precise enough to infer sensitive locations or repeated routines.
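
A starting-point record for such an inventory might look like the sketch below. The field names and the 100-meter risk threshold are suggestions for illustration, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class LocationDataFlow:
    """One row in a location-data inventory."""
    source: str               # e.g. "iOS SDK v4", "telematics unit"
    storage: str              # dataset or bucket where it lands
    active: bool              # active collection vs. historical store only
    precision_meters: float   # precise enough to infer sensitive places?
    recipients: list[str] = field(default_factory=list)  # partners, brokers
    sold: bool = False

    def high_risk(self) -> bool:
        # Building-level precision, or data that leaves the company.
        return self.precision_meters <= 100 or bool(self.recipients) or self.sold

flows = [
    LocationDataFlow("mobile app SDK", "s3://telemetry/precise", True, 10,
                     recipients=["analytics vendor"]),
    LocationDataFlow("store locator", "warehouse.coarse_geo", True, 5000),
]
for f in flows:
    print(f.source, "high risk" if f.high_risk() else "low risk")
```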

Then comes purpose mapping. Which data is needed to provide a requested service? Which data supports product reliability? Which data exists for advertising or measurement? Which data has no current use? The answers should affect retention, access, deletion, and sharing rules.
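
Those answers can be encoded directly, for example as a purpose table that retention and sharing jobs consult. The purposes and limits below are illustrative only.

```python
# Illustrative purpose map: each purpose carries its own limits.
PURPOSE_RULES = {
    "requested_service":   {"retention_days": 30, "third_party_ok": False},
    "product_reliability": {"retention_days": 90, "third_party_ok": False},
    "advertising":         {"retention_days": 0,  "third_party_ok": False},
}

def may_share(purpose: str) -> bool:
    rule = PURPOSE_RULES.get(purpose)
    return bool(rule and rule["third_party_ok"])

def retention_days(purpose: str) -> int:
    rule = PURPOSE_RULES.get(purpose)
    if rule is None:
        # Data with no mapped purpose defaults to deletion, not retention.
        return 0
    return rule["retention_days"]
```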

That work may sound basic. Still, the GM and Kochava cases show why it matters. Regulators are not only asking whether companies had a privacy policy. They are asking whether the company's real data flows matched the promises and limits customers could reasonably understand.

Data deletion needs to reach partners and old feeds

The next weak spot is deletion. A company can stop selling sensitive location data tomorrow and still have older feeds sitting with brokers, analytics partners, cloud vendors, or customers who bought access months earlier. That makes location data privacy hard to fix after the fact.

The FTC's earlier connected-car action against GM and OnStar included deletion duties and requests to third parties, according to the agency's case materials. That kind of remedy points to a larger expectation: stopping a data flow is not enough if the company already distributed the data into places it no longer directly controls.

For privacy teams, deletion should be part of the data map from the beginning. Contracts need clear downstream deletion duties. Data pipelines need a way to identify which records were sent where. Audit logs should show when a partner received data, what category it received, and whether a deletion or suppression request later reached that partner.
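
A minimal version of that ledger could look like the sketch below, assuming each outbound transfer is recorded when it happens. The schema and partner names are invented, and the actual deletion call is a placeholder.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Transfer:
    partner: str
    category: str        # e.g. "precise_location"
    sent_on: date
    deletion_requested: date | None = None
    deletion_confirmed: date | None = None

ledger: list[Transfer] = [
    Transfer("broker_x", "precise_location", date(2025, 11, 2)),
    Transfer("analytics_y", "coarse_location", date(2026, 1, 15)),
]

def request_deletion(category: str, when: date) -> list[Transfer]:
    """Find every partner that ever received this category and flag it."""
    pending = [t for t in ledger
               if t.category == category and t.deletion_confirmed is None]
    for t in pending:
        t.deletion_requested = when
        # Real code would call each partner's deletion API or open a ticket.
    return pending

for t in request_deletion("precise_location", date(2026, 5, 10)):
    print(f"deletion request sent to {t.partner} for data sent {t.sent_on}")
```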

Without that, a company may only be able to say it changed its own practice. Regulators may ask what happened to the old data.

Sensitive-location rules affect product and revenue plans

These cases also affect product planning. If a business depends on selling movement data, the risk calculation has changed. Consent screens may reduce opt-in rates. Data minimization may shrink the usable feed. Partner restrictions may make some data less valuable to brokers or advertising networks.

That can feel like a revenue hit, but the alternative is worse. A data product built on unclear consent can become a legal liability, a brand problem, and a customer-trust issue at the same time. GM's case is a useful example because the feature at issue was not the company's core business. It still created a record-setting California privacy penalty.

Product teams should treat precise location and driving data like regulated inventory. Every new use needs a purpose, a consent basis, a retention rule, and an exit plan. If the business cannot explain why it needs the data in one or two plain sentences, it probably needs less data.
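
One lightweight way to hold new uses to that rule is a registration check that rejects any proposal missing one of the four elements. This is a sketch of the idea, not a compliance framework; the field names are invented.

```python
REQUIRED_FIELDS = ("purpose", "consent_basis", "retention_rule", "exit_plan")

def register_data_use(proposal: dict) -> None:
    """Reject a new location-data use unless all four elements are present."""
    missing = [f for f in REQUIRED_FIELDS if not proposal.get(f)]
    if missing:
        raise ValueError(f"data use rejected, missing: {', '.join(missing)}")

register_data_use({
    "purpose": "roadside assistance dispatch",
    "consent_basis": "opt-in at feature enrollment",
    "retention_rule": "delete 30 days after incident closes",
    "exit_plan": "disable feed and purge partner copies on opt-out",
})
```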

Commercial teams need that discipline too. A sales contract that promises audience quality, driving behavior, or place-based targeting can create privacy exposure before engineering notices the problem. So pricing and partnership reviews should include a simple check: does this deal depend on sensitive movement data that a customer would not expect to be sold?

That question is now practical, not philosophical. The May enforcement actions show that location data can turn a quiet side business into the main legal story.

The next test is whether companies can prove restraint

The hardest part of location data privacy is restraint. Businesses have strong incentives to collect more data than they immediately need because future analytics, advertising, underwriting, safety, or product ideas may appear later. Regulators are now pushing the opposite view: collect and share less when the data can expose sensitive facts about people's lives.

That means the next privacy review should not start with a better disclosure. It should start with a map of what data moves, who receives it, and whether the company would be comfortable defending that transfer in plain language.

If the answer is no, the data flow is already a risk. The Kochava and GM actions make that risk harder to ignore.

Reader questions

Quick answers to the follow-up questions this story is most likely to leave behind.