Google and the Digital Fairness Act

Read Google’s Official Statement on the Digital Fairness Act (PDF)

On 23 October 2025, during the European Commission’s public consultation on the Digital Fairness Act, Google published its official feedback on the upcoming legislation.

Google says the EU already has powerful tools (GDPR, DSA, UCPD, CRD, DMA, AI Act) to police dark patterns, addictive design, unfair personalisation, influencer marketing, pricing tricks, and subscription traps. Its message: enforce and simplify before adding new rules; keep a risk-based, tech-neutral approach; resist one-size-fits-all bans on design features; and harmonise fragmented national add-ons. Where new law is truly needed, Google wants targeted fixes, especially around age assurance (risk-based), in-game transparency (odds disclosures), and subscription UX (no “cancel-anytime” mandates, no forced re-consent).

Why this matters (and how it maps to the DFA rulebook)

The DFA is being assembled to curb specific online practices: dark patterns, addictive design, unfair personalisation, influencer marketing abuses, pricing tactics, and contract/subscription traps.

Google’s submission pitches itself as pro-consumer and pro-enforcement but anti-duplication. In short: implement what the EU already adopted, coordinate enforcers, and only legislate where there’s a clear, evidenced gap.

Google’s overarching message: simplify first, legislate later

“We advocate for prioritising the effective and efficient implementation of existing digital legal frameworks.”

The EU’s digital rulebook is “increasingly complex,” with overlaps across GDPR, DSA, DMA, UCPD, CRD and the AI Act, so Google calls for coherent, risk-based lawmaking, clearer guidance, and better feedback loops.

DFA angle: This aligns with the Act’s stated goal to target specific manipulative practices rather than rewrite the whole consumer-law canon.

Dark Patterns: enforce, guide, educate (don’t re-legislate)

Google says the EU already has a “substantial foundation” against deceptive interfaces (UCPD, DSA, GDPR consent rules, AI Act bans on manipulative AI).

“Existing EU laws already provide a substantial foundation for addressing dark patterns.”

“Additional laws are not necessary… particularly those that would overlap with existing legislation.”

They want guidance to clarify the line between helpful behavioural design (e.g., two-step confirmation for high-stakes actions) and prohibited manipulation, pointing to Google research (“Unpacking Deceptive Design”) that evaluates patterns by intent divergence, mechanism deceptiveness, and impact magnitude.

Tension with DFA practice list: The DFA is expected to enumerate concrete dark patterns (biased hierarchies, fake urgency, confirm-shaming, hidden cancellations). Google prefers principles + enforcement over new blacklists (though it doesn’t defend those tactics).

Addictive design (and gaming features): beware one-size-fits-all bans

“We are concerned about proposals to ban or restrict popular digital features without clear research demonstrating consumer harm.”

Google argues many "problem" features (autoplay, notifications, infinite scroll) can be beneficial or even safety-critical depending on context: think car dashboards, smart TVs, watches, voice assistants, or crisis alerts. Better to combine optional settings, defaults for minors, and education, then enforce existing law (DSA risk assessments for VLOPs, UCPD, AI Act vulnerability rules).

On games/loot boxes, Google backs transparency and controls over bans. Play policy requires odds disclosure for randomized items; users get budgets, purchase histories, Android app timers, and parental controls via Family Link.

Apps with loot boxes “must clearly disclose the odds of receiving those items in advance.”

Google resists forcing a real-money valuation of in-game items, calling it misleading and privacy-intrusive (it could require processing purchase histories) and noting Member States already diverge (some treat loot boxes as gambling).

DFA angle: Parliament has pushed to curb infinite scroll, autoplay and similar “addictive” hooks; the DFA page lists these as target practices. Google’s submission urges flexible, risk-based design obligations instead of blanket bans.

Personalisation & ads: valuable and already regulated, so tighten enforcement and define "vulnerability" carefully

“Personalisation is what organizes the world of information online into something manageable and usable.”

Google lists a long stack of current rules: DSA ad transparency + non-profiling options, GDPR fairness/consent limits, DMA consent for cross-service ad data, CRD/UCPD price-ranking transparency, ePrivacy rules on trackers, and AI Act bans on exploitative systems. Conclusion: enforce consistently, don't duplicate.

They also warn that broad, subjective definitions of “vulnerability” (e.g., “negative mental states”) would be “extremely hard to enforce” at ads scale and risk violating data-minimisation.

“We are concerned that broad definitions of ‘vulnerability’ might be extremely hard to enforce.”

Google supports the DSA ban on profile-based ads to minors, but “strongly urges against describing ‘age’ as a type of consumer vulnerability” because age targeting is a basic parameter outside minors and useful for relevance.

On controls, Google touts its consent flows and My Ad Center, which EU users rely on at scale every month to adjust their ad preferences, along with policies blocking targeting on sensitive categories.

DFA angle: The DFA's "unfair personalisation" bucket (including potentially exploitative profiling) is central to the project. Google's stance: keep it, but don't overreach, and harmonise protections across more services rather than piling new duties onto the usual platforms only.

Influencer marketing & price marketing: use what we have + educate

Google says hidden ads and misleading endorsements are already banned under AVMSD, UCPD, the e-Commerce Directive and DSA; it backs education and self-regulatory frameworks (e.g., EASA’s AdEthics, DiscloseMe) over new rules.

On pricing tactics (drip pricing, fake reductions, dynamic pricing): rely on Price Indication Directive + CRD + UCPD, with guidance/case law to address specifics.

DFA angle: The practice list includes both areas; Google’s through-line is “don’t duplicate, enforce.”

Contracts, renewals & cancellations: harmonise, but don’t mandate “cancel anytime” or extra re-consent

Google welcomes EU-wide consistency (e.g., today only France/Germany require a “cancellation button”). But it opposes rules that would let users cancel any time with short notice for long-term discounted plans, arguing that would kill pricing options consumers actually like.

“We caution against requirements stipulating that subscriptions can be canceled at any time with a short notice period.”

They also oppose universal express re-consent for auto-renewals or trial-to-paid conversions and default reminder mandates for short cycles, saying the CRD/UCPD already ensure clear pre-contract info and easy cancellation in account settings.

“We do not support express consent requirements for the renewal of a subscription.”

Google adds practicalities: verify identity to prevent fraudulent cancellations; allow chatbots if compliant; be cautious on automated contracts (too early to regulate). For free trials, collecting payment details up-front deters abuse and smooths continuity for satisfied users.

DFA angle: DFA scoping includes subscription traps and cancellation friction; Google’s pitch is harmonise outcomes but keep room for service-appropriate UX rather than one mandated button/flow for all.

Horizontal issues: age assurance, burden of proof, “average consumer,” fairness-by-design

  • Age assurance: back a risk-based approach; use age estimation for most services; reserve strict age verification for high-risk content (e.g., porn). App-store-only checks are "flawed" because the risks live inside the service.

  • Burden of proof: don’t reverse it; compliance is already evidenced along the user journey.

  • Consumer benchmark: keep the “average consumer” standard (plus existing vulnerable-user carve-outs). Don’t invent new, subjective benchmarks.

  • Fairness-by-design: an extra catch-all obligation adds ambiguity; better to specify concrete duties where gaps exist.

Simplification asks: less duplication, fewer pop-ups, modern channels

Google proposes trimming repetitive info in account-based or AI-assistant transactions; letting chat substitute for mandated phone/email; and dropping paper-era artifacts like the withdrawal form in e-commerce. It also asks to rebalance the 14-day withdrawal for digital services that provide immediate access to valuable content (to avoid “binge then refund” dynamics).

Likely impact on Google’s business & products

  • Ads & YouTube: A DFA that narrows “unfair personalisation” without outlawing mainstream audience signals (like age bands outside minors) aligns with Google’s model; over-broad vulnerability rules would be operationally and legally thorny.

  • Design features (autoplay/scroll/notifications): blanket bans would force YouTube/Android UX changes across devices; a risk-based approach lets Google keep context-specific defaults (e.g., cars, TVs) while tightening minors’ protections.

  • Play Store & gaming: odds disclosure and spend controls are already in policy; mandatory real-money valuations of in-game items (as some propose) are resisted on transparency and privacy grounds.

  • Subscriptions (YouTube Premium, Cloud, etc.): hard re-consent or cancel-anytime mandates could increase churn and raise prices; Google argues for harmonised but flexible rules.

  • Compliance costs: any DFA push for simplification & guidance would reduce fragmented burdens across the EU market, a key Google ask.

Quick reference: DFA practice → Google’s position

  • Dark patterns → Use UCPD/DSA/GDPR already; add guidance + enforcement + education rather than new blacklists.

  • Addictive design → Avoid feature bans; apply risk-based controls, defaults for minors, and user tools. Context matters (car/TV/watch/voice).

  • Specific gaming features (loot boxes, currencies) → Odds disclosure, budgets, timers, parental controls; resist real-money item valuations.

  • Unfair personalisation/ads → Personalisation is useful; enforce current rules; define vulnerability narrowly; don’t treat age (beyond minors) as vulnerability.

  • Influencer marketing → Framework exists; strengthen education/self-regulation.

  • Price marketing → Covered by Price Indication Directive/CRD/UCPD; use guidance/case law.

  • Contracts, cancellations, subscriptions → Harmonise, but keep flexibility; no cancel-anytime mandate or re-consent for renewals/trials by default.

  • Age assurance → Risk-based; verification only for high-risk; checks belong inside the service, not at the app store.

What Google didn’t really ask for (or pushed back on)

  • A new fairness-by-design catch-all: Google says it adds ambiguity; specify duties instead.

  • A new rulebook for influencers/pricing: it prefers guidance, education, and self-regulatory standards over fresh legislation.

  • Universal cancellation buttons / reminders / re-consent: it warns these could raise costs, reduce discounts, and annoy users.

Bottom line for DFA drafters

Google's through-line is coherence. Make enforcement consistent, fill only real gaps, keep things risk-based, and resist blanket bans that flatten context or harm legitimate UX and SME advertising. If the Commission sticks to targeted practices (as the DFA page signals), Google will welcome the simplification, and fight over-broad personalisation or cancellation rules that disrupt its ad and subscription models.