Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society | Pasquale

Frank A. Pasquale III; Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society; Ohio State Law Journal, Vol. 78, 2017, U of Maryland Legal Studies Research Paper No. 2017-21; 2017-07-14; 13 pages; ssrn:3002546.

tl;dr → A comment for Balkin. To wit:
  1. Balkin should have supplied more context; such correction is supplied herewith.
  2. More expansive supervision is indicated; such expansion is supplied herewith.
  3. Another law is warranted; not a trinity, but perfection plus one more.
Fourth Law

A [machine] must always indicate the identity of its creator, controller, or owner.
<ahem>Like… say… a license, to operate, to practice; a permit; as manifest in a license plate, a certificate of operation, a board certification, a driver’s license, a contractor’s license, a Bar Association number, a VIN, a tail number, a hull number.</ahem>

Three Laws, previous:

  1. machine operators are always responsible for their machines.
  2. businesses are always responsible for their operators.
  3. machines must not pollute.

So it is just like planes, trains & automobiles.


<quote>Balkin’s lecture is a tour de force distillation of principles of algorithmic accountability, and a bold vision for entrenching them in regulatory principles. <snip>…etc…</snip></quote>


  • Regulators
  • non-functional requirements
    the branded “By Design” theories

    • responsibility-by-design,
    • security-by-design,
    • privacy-by-design,
    • attribution-by-design [traceability-by-design].
  • Audit logs.
  • A Licentiate, the licentia ad practicandum
  • Supervisory Control.
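The “by design” bullets and the audit-log idea above can be made concrete. A minimal sketch, assuming only the Python standard library (the class and field names are invented for illustration, not drawn from Pasquale): every machine action is appended to a hash-chained log that always carries the creator/controller/owner identity the Fourth Law demands, so attribution and traceability are properties of the record itself rather than an afterthought.

```python
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class AttributionLog:
    """Hash-chained audit log: every entry names a responsible party."""
    creator: str                      # the Fourth Law's mandatory identity
    entries: list = field(default_factory=list)
    _last_hash: str = "0" * 64       # genesis value for the chain

    def record(self, action: str) -> dict:
        entry = {
            "creator": self.creator,  # attribution travels with every action
            "action": action,
            "prev": self._last_hash,  # chaining makes tampering evident
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any edit to any past entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("creator", "action", "prev")}
            if e["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A regulator (or plaintiff) running `verify()` gets exactly what the Three Laws require: a machine whose operator and owner are identifiable for every recorded act.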


Jack Balkin makes several important contributions to legal theory and ethics in his lecture, “The Three Laws of Robotics in the Age of Big Data.” He proposes “laws of robotics” for an “algorithmic society” characterized by “social and economic decision making by algorithms, robots, and AI agents.” These laws both elegantly encapsulate, and add new principles to, a growing movement for accountable design and deployment of algorithms. [This] comment aims to

  1. contextualize his proposal as a kind of “regulation of regulation,” familiar from the perspective of administrative law,
  2. expand the range of methodological perspectives capable of identifying “algorithmic nuisance,” a key concept in Balkin’s lecture, and
  3. propose a fourth law of robotics to ensure the viability of Balkin’s three laws.


  • Jack Balkin, Knight Professor of Constitutional Law and the First Amendment, Law School, Yale University; via Jimi Wales’ Wiki.


[in case it wasn't otherwise clear]

<quote>Balkin’s lecture is a tour de force distillation of principles of algorithmic accountability, and a bold vision for entrenching them in regulatory principles. As he observes, “algorithms

  1. construct identity and reputation through
  2. classification and risk assessment, creating the opportunity for
  3. discrimination, normalization, and manipulation, without
  4. adequate transparency, monitoring, or due process.

[endquote]” They are, therefore, critically important features of our information society which demand immediate attention from regulators. High-level officials around the world need to put the development of a cogent and forceful response to these developments at the top of their agendas. Balkin’s “Laws of Robotics” is an ideal place to start, both to structure that discussion at a high level and to ground it in deeply rooted legal principles.

It is rare to see a legal scholar not only work at the deepest levels of policy (in the sense of all those normative considerations that should inform legal decisions outside of the law governing the case), but also recommend in clear and precise language a coherent set of concrete recommendations that both exemplify principles of critical and social theory, and stand some chance of being adopted by current government officials. That is Balkin’s achievement in The Three Laws of Robotics in the Age of Big Data. It is work to cite, celebrate, and rally around, and an auspicious launch for Ohio State’s program in Big Data & Law.</quote>


Jack M. Balkin  (Yale); The Three Laws of Robotics in the Age of Big Data; Ohio State Law Journal, Vol. 78, (2017), Forthcoming (real soon now, RSN), Yale Law School, Public Law Research Paper No. 592; 2016-12-29 → 2017-09-10; 45 pages; ssrn:2890965; previously filled, separately noted.


The Suitcase Words
  • Big Data,
    Age of Big Data
  • laws of robotics
    Three Laws of Robotics
  • algorithmic society
  • social decision-making,
    social decision-making by algorithms
  • economic decision-making,
    economic decision-making by algorithms
  • algorithms
  • robots
  • Artificial Intelligence (AI)
  • AI Agents
  • encapsulate
  • principles
  • accountable design
  • deployment of algorithms
  • contextualize
  • regulation of regulation
  • perspective of administrative law
  • methodological perspectives,
    range of methodological perspectives
  • algorithmic nuisance
  • fourth law of robotics
  • viability, to ensure the viability of
  • three laws

Previously filled.

Incompatible: The GDPR in the Age of Big Data | Tal Zarsky

Tal Zarsky (Haifa); Incompatible: The GDPR in the Age of Big Data; Seton Hall Law Review, Vol. 47, No. 4(2), 2017; 2017-08-22; 26 pages; ssrn:3022646.
Tal Z. Zarsky is Vice Dean and Professor, University of Haifa, Israel.

tl;dr → the opposition is elucidated and juxtaposed; the domain is problematized.
and → “Big Data,” by definition, is opportunistic and unsupervisable; it collects everything and identifies something later in the backend.  Else it is not “Big Data” (it is “little data,” which is known, familiar, boring, and of course has settled law surrounding its operational envelope).


After years of drafting and negotiations, the EU finally passed the General Data Protection Regulation (GDPR). The GDPR’s impact will, most likely, be profound. Among the challenges data protection law faces in the digital age, the emergence of Big Data is perhaps the greatest. Indeed, Big Data analysis carries both hope and potential harm to the individuals whose data is analyzed, as well as other individuals indirectly affected by such analyses. These novel developments call for both conceptual and practical changes in the current legal setting.

Unfortunately, the GDPR fails to properly address the surge in Big Data practices. The GDPR’s provisions are — to borrow a key term used throughout EU data protection regulation — incompatible with the data environment that the availability of Big Data generates. Such incompatibility is destined to render many of the GDPR’s provisions quickly irrelevant. Alternatively, the GDPR’s enactment could substantially alter the way Big Data analysis is conducted, transforming it into one that is suboptimal and inefficient. It will do so while stalling innovation in Europe and limiting utility to European citizens, without necessarily providing such citizens with greater privacy protection.

After a brief introduction (Part I), Part II quickly defines Big Data and its relevance to EU data protection law. Part III addresses four central concepts of EU data protection law as manifested in the GDPR: Purpose Specification, Data Minimization, Automated Decisions and Special Categories. It thereafter proceeds to demonstrate that the treatment of every one of these concepts in the GDPR is lacking and in fact incompatible with the prospects of Big Data analysis. Part IV concludes by discussing the aggregated effect of such incompatibilities on regulated entities, the EU, and society in general.


<snide><irresponsible>Apparently this was not known before the activists captured the legislature and effected their ends with the force of law. Now we know. Yet we all must obey the law, as it stands and as it is written. And why was this not published in an EU-located law journal, perhaps one located in … Brussels?</irresponsible></snide>



    1. Purpose Limitation
    2. Data Minimization
    3. Special Categories
    4. Automated Decisions


  • Big Data (contra “little data”)
  • personal data
  • Big Data Revolution
  • evolution not revolution
    no really, revolution not evolution
  • The GDPR is a regulation “on the protection of natural persons,”
  • EU General Data Protection Regulation (GDPR)
  • EU Data Protection Directive (DPD)
  • Is the GDPR different from the DPD?  Maybe not.  Why? cf. page 10.
  • Various attempts at intuiting bright-line tests around the laws are recited.
    It is a law, but nobody knows how it is interpreted or how it might be enforced.
  • statistical purpose
  • analytical purpose
  • data minimization
  • pseudonymization
  • reidentification
  • specific individuals
  • <quote>In the DPD, article 8(1) prohibited the processing of data “revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life,” while providing narrow exceptions. This distinction was embraced by the GDPR.</quote>
  • Article 29 Working Party
  • on (special) category contagion
    “we feel that all data is credit data, we just don’t know how to use it yet.”
    cf. page 19; attributed to Dr. Douglas Merrill, founder, ZestFinance; ex-CIO, Google.
  • data subjects
  • automated decisions
  • right to “contest the decision”
  • obtain human intervention
  • trade secrets contra decision transparency
    by precedent, in EU (DE), corporate rights trump decision subject’s rights.
  • [a decision process] must be interpretable
  • right to due process [when facing a machine]


Big Data is…

  • …wait for it… so very very big
    …thank you, thank you very much. I will be here all week. Please tip your waitron.
  • The Four Five “Vs”
  1. The Volume of data collected,
  2. The Variety of the sources,
  3. The Velocity,
    <quote>with which the analysis of the data can unfold,</quote>,
  4. The Veracity,
    <quote>of the data which could (arguably) be achieved through the analytical process.</quote>,
  5. The Value, yup, that’s five.
    … <quote>yet this factor seems rather speculative and is thus best omitted.</quote>,

The Brussels Effect

  • What goes on in EU goes global,
  • “Europeanization”
  • Law in EU is applied world-wide because corporate operations are universal.


  • purpose limitation,
  • data minimization,
  • special categories,
  • automated decisions.


There are 123 references, across 26 pages of prose, made manifest as footnotes in the legal style. Here, simplified and deduplicated.

Previously filled.

Syllabus for Solon Barocas @ Cornell | INFO 4270: Ethics and Policy in Data Science

INFO 4270 – Ethics and Policy in Data Science
Instructor: Solon Barocas
Venue: Cornell University


Solon Barocas


A Canon, The Canon

In order of appearance in the syllabus, without the course cadence markers…

  • Danah Boyd and Kate Crawford, Critical Questions for Big Data; In <paywalled>Information, Communication & Society, Volume 15, Issue 5 (A decade in Internet time: the dynamics of the Internet and society); 2012; DOI:10.1080/1369118X.2012.678878</paywalled>
    Subtitle: Provocations for a cultural, technological, and scholarly phenomenon
  • Tal Zarsky, The Trouble with Algorithmic Decisions; In Science, Technology & Human Values, Vol 41, Issue 1, 2016 (2015-10-14); ResearchGate.
    Subtitle: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making
  • Cathy O’Neil, Weapons of Math Destruction; Broadway Books; 2016-09-06; 290 pages, ASIN:B019B6VCLO: Kindle: $12, paper: $10+SHT.
  • Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press; 2016-08-29; 320 pages; ASIN:0674970845: Kindle: $10, paper: $13+SHT.
  • Executive Office of the President, President Barack Obama, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights; The White House Office of Science and Technology Policy (OSTP); 2016-05; 29 pages; archives.
  • Lisa Gitelman (editor), “Raw Data” is an Oxymoron; Series: Infrastructures; The MIT Press; 2013-01-25; 192 pages; ASIN:B00HCW7H0A: Kindle: $20, paper: $18+SHT.
    Lisa Gitelman, Virginia Jackson; Introduction (6 pages)
  • Agre, “Surveillance and Capture: Two Models of Privacy”
  • Bowker and Star, Sorting Things Out
  • Auerbach, “The Stupidity of Computers”
  • Moor, “What is Computer Ethics?”
  • Hand, “Deconstructing Statistical Questions”
  • O’Neil, On Being a Data Skeptic
  • Domingos, “A Few Useful Things to Know About Machine Learning”
  • Luca, Kleinberg, and Mullainathan, “Algorithms Need Managers, Too”
  • Friedman and Nissenbaum, “Bias in Computer Systems”
  • Lerman, “Big Data and Its Exclusions”
  • Hand, “Classifier Technology and the Illusion of Progress” [Sections 3 and 4]
  • Pager and Shepherd, “The Sociology of Discrimination: Racial Discrimination in Employment, Housing, Credit, and Consumer Markets”
  • Goodman, “Economic Models of (Algorithmic) Discrimination”
  • Hardt, “How Big Data Is Unfair”
  • Barocas and Selbst, “Big Data’s Disparate Impact” [Parts I and II]
  • Gandy, “It’s Discrimination, Stupid”
  • Dwork and Mulligan, “It’s Not Privacy, and It’s Not Fair”
  • Sandvig, Hamilton, Karahalios, and Langbort, “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms”
  • Diakopoulos, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”
  • Bertrand and Mullainathan, “Are Emily and Greg More Employable than Lakisha and Jamal?”
  • Sweeney, “Discrimination in Online Ad Delivery”
  • Datta, Tschantz, and Datta, “Automated Experiments on Ad Privacy Settings”
  • Dwork, Hardt, Pitassi, Reingold, and Zemel, “Fairness Through Awareness”
  • Feldman, Friedler, Moeller, Scheidegger, and Venkatasubramanian, “Certifying and Removing Disparate Impact”
  • Žliobaitė and Custers, “Using Sensitive Personal Data May Be Necessary for Avoiding Discrimination in Data-Driven Decision Models”
  • Angwin, Larson, Mattu, and Kirchner, “Machine Bias”
  • Kleinberg, Mullainathan, and Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores”
  • Northpointe, COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity
  • Chouldechova, “Fair Prediction with Disparate Impact”
  • Berk, Heidari, Jabbari, Kearns, and Roth, “Fairness in Criminal Justice Risk Assessments: The State of the Art”
  • Hardt, Price, and Srebro, “Equality of Opportunity in Supervised Learning”
  • Wattenberg, Viégas, and Hardt, “Attacking Discrimination with Smarter Machine Learning”
  • Friedler, Scheidegger, and Venkatasubramanian, “On the (Im)possibility of Fairness”
  • Tene and Polonetsky, “Taming the Golem: Challenges of Ethical Algorithmic Decision Making”
  • Lum and Isaac, “To Predict and Serve?”
  • Joseph, Kearns, Morgenstern, and Roth, “Fairness in Learning: Classic and Contextual Bandits”
  • Barocas, “Data Mining and the Discourse on Discrimination”
  • Grgić-Hlača, Zafar, Gummadi, and Weller, “The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making”
  • Vedder, “KDD: The Challenge to Individualism”
  • Lippert-Rasmussen, “‘We Are All Different’: Statistical Discrimination and the Right to Be Treated as an Individual”
  • Schauer, Profiles, Probabilities, And Stereotypes
  • Caliskan, Bryson, and Narayanan, “Semantics Derived Automatically from Language Corpora Contain Human-like Biases”
  • Zhao, Wang, Yatskar, Ordonez, and Chang, “Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints”
  • Bolukbasi, Chang, Zou, Saligrama, and Kalai, “Man Is to Computer Programmer as Woman Is to Homemaker?”
  • Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions”
  • Ananny and Crawford, “Seeing without Knowing”
  • de Vries, “Privacy, Due Process and the Computational Turn”
  • Zarsky, “Transparent Predictions”
  • Crawford and Schultz, “Big Data and Due Process”
  • Kroll, Huey, Barocas, Felten, Reidenberg, Robinson, and Yu, “Accountable Algorithms”
  • Bornstein, “Is Artificial Intelligence Permanently Inscrutable?”
  • Burrell, “How the Machine ‘Thinks’”
  • Lipton, “The Mythos of Model Interpretability”
  • Doshi-Velez and Kim, “Towards a Rigorous Science of Interpretable Machine Learning”
  • Hall, Phan, and Ambati, “Ideas on Interpreting Machine Learning”
  • Grimmelmann and Westreich, “Incomprehensible Discrimination”
  • Selbst and Barocas, “Regulating Inscrutable Systems”
  • Jones, “The Right to a Human in the Loop”
  • Edwards and Veale, “Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for”
  • Duhigg, “How Companies Learn Your Secrets”
  • Kosinski, Stillwell, and Graepel, “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior”
  • Barocas and Nissenbaum, “Big Data’s End Run around Procedural Privacy Protections”
  • Chen, Fraiberger, Moakler, and Provost, “Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals”
  • Robinson and Yu, Knowing the Score
  • Hurley and Adebayo, “Credit Scoring in the Era of Big Data”
  • Valentino-Devries, Singer-Vine, and Soltani, “Websites Vary Prices, Deals Based on Users’ Information”
  • The Council of Economic Advisers, Big Data and Differential Pricing
  • Hannak, Soeller, Lazer, Mislove, and Wilson, “Measuring Price Discrimination and Steering on E-commerce Web Sites”
  • Kochelek, “Data Mining and Antitrust”
  • Helveston, “Consumer Protection in the Age of Big Data”
  • Kolata, “New Gene Tests Pose a Threat to Insurers”
  • Swedloff, “Risk Classification’s Big Data (R)evolution”
  • Cooper, “Separation, Pooling, and Big Data”
  • Simon, “The Ideological Effects of Actuarial Practices”
  • Tufekci, “Engineering the Public”
  • Calo, “Digital Market Manipulation”
  • Kaptein and Eckles, “Selecting Effective Means to Any End”
  • Pariser, “Beware Online ‘Filter Bubbles’”
  • Gillespie, “The Relevance of Algorithms”
  • Buolamwini, “Algorithms Aren’t Racist. Your Skin Is just too Dark”
  • Hassein, “Against Black Inclusion in Facial Recognition”
  • Agüera y Arcas, Mitchell, and Todorov, “Physiognomy’s New Clothes”
  • Garvie, Bedoya, and Frankle, The Perpetual Line-Up
  • Wu and Zhang, “Automated Inference on Criminality using Face Images”
  • Haggerty, “Methodology as a Knife Fight”
    <snide>A metaphorical usage. Let hyperbole be your guide</snide>

Previously filled.

The Death of Rules and Standards | Casey, Niblett

Anthony J. Casey, Anthony Niblett; The Death of Rules and Standards; Coase-Sandor Working Paper Series in Law and Economics No. 738; Law School, University of Chicago; 2015; 58 pages; landing, copy, ssrn:2693826.

tl;dr → because reasons and
  • Prediction Technologies
  • Communication Technologies


Scholars have examined the lawmakers’ choice between rules and standards for decades. This paper, however, explores the possibility of a new form of law that renders that choice unnecessary. Advances in technology (such as big data and artificial intelligence) will give rise to this new form – the micro-directive – which will provide the benefits of both rules and standards without the costs of either.

Lawmakers will be able to use predictive and communication technologies to enact complex legislative goals that are translated by machines into a vast catalog of simple commands for all possible scenarios. When an individual citizen faces a legal choice, the machine will select from the catalog and communicate to that individual the precise context-specific command (the micro-directive) necessary for compliance. In this way, law will be able to adapt to a wide array of situations and direct precise citizen behavior without further legislative or judicial action. A micro-directive, like a rule, provides a clear instruction to a citizen on how to comply with the law. But, like a standard, a micro-directive is tailored to and adapts to each and every context.

While predictive technologies such as big data have already introduced a trend toward personalized default rules, in this paper we suggest that this is only a small part of a larger trend toward context- specific laws that can adapt to any situation. As that trend continues, the fundamental cost trade-off between rules and standards will disappear, changing the way society structures and thinks about law.
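The mechanism the abstract describes (a legislature states a broad goal; machines compile it into a catalog of context-specific commands, one of which is communicated to the citizen at the moment of choice) can be sketched as a toy. The function name and risk multipliers below are invented illustrations, not from Casey and Niblett; the sketch merely echoes their dashboard speed-limit example.

```python
def micro_directive(goal_max_risk: float, context: dict) -> str:
    """Translate a broad legislative goal ('keep crash risk below a threshold')
    into one precise, context-specific command for this driver, right now."""
    base_speed = 65  # mph; hypothetical baseline for the illustration
    # Predictive step: estimate how each observed condition multiplies risk.
    risk = 1.0
    if context.get("raining"):
        risk *= 1.5
    if context.get("school_zone"):
        risk *= 2.0
    if context.get("night"):
        risk *= 1.2
    # Compile the goal into a bright-line command tailored to this context:
    # rule-like clarity for the citizen, standard-like tailoring underneath.
    speed = int(base_speed * min(1.0, goal_max_risk / risk))
    # Communication step: what the dashboard would display.
    return f"Speed limit now: {speed} mph"
```

A driver in a rainy school zone receives a stricter command than one on a clear highway, with no new statute and no adjudication in between; that is the sense in which the rules-versus-standards trade-off is claimed to disappear.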

Table of Contents

  1. Introduction
  2. The Emergence Of Micro-Directives And The Decline Of Rules And Standards
    1. Background: Rules and standards
    2. Technology will facilitate the emergence of micro-directives as a new form of law
    3. Demonstrative examples
      • Example 1: Predictive technology in medical diagnosis
      • Example 2: Communication technology in traffic laws
    4. The different channels leading to the death of rules and standards
      1. The production of micro-directives by non-legislative lawmakers
      2. An alternative path: Private use of technology by regulated actors
  3. Feasibility
    1. The feasibility of predictive technology
      1. The power of predictive technology
      2. Predictive technology will displace human discretion
    2. The feasibility of communication technology
  4. Implications And Consequences
    1. The death of judging? Institutional changes to the legal system
    2. The development and substance of policy objectives
    3. Changes to the practice of law
    4. The broader consequences of these technologies on individuals
      1. Privacy
      2. Autonomy
      3. Ethics
  5. Conclusion


  • heavy-handed use of the metaphor “death of X” in lieu of the more mundane “cessation of use of the technique X.”
  • At least they didn’t use the metaphors of “sea change” or “tectonic shifts” from the respective fields of weather prediction or geology.
  • <GEE-WHIZZ!>As economist Professor William Nordhaus notes, the increase in computer power over the course of the twentieth century was “phenomenal.”</GEE-WHIZZ!>


  • catalog of personalized laws.
    “special law for you.”
  • rules and standards
    captures and benefits
  • micro-benefits
  • predictive technology
  • uncertainty of law
  • <quote>The legislature merely states its goal. Machines then design the law as a vast catalog of context-specific rules to optimize that goal. From this catalog, a specific micro-directive is selected and communicated to a particular driver (perhaps on a dashboard display) as a precise speed for the specific conditions she <snip>…etc…</snip></quote>
  • positive versus normative (analysis)
  • (legislative) decision-making has
    • errors
    • costs
  • (subject) compliance has
    • cost
    • uncertainty
  • There are economies of scale in compliance
    • frequency of event
    • diversity of events
  • Conceptualize the frequency of the regulated event relative to the specificity of the regulation.
  • The Combination
    • Prediction Technologies
    • Communication Technologies
  • <quote>The wise draftsman . . . asks himself, how many of the details of this settlement ought to be postponed to another day, when the decisions can be more wisely and efficiently and perhaps more readily made?</quote>, attributed to Henry Hart, Albert Sacks.
  • Claim
    • Standards are flexible, broad but uncertain in adjudication;
      so service delivery is tailored
      therefore the salubrious effect obtains.
    • Rules are specific, narrow but certain in adjudication;
      so service delivery is pre-specified, constrained
      therefore mis-applications occur.
    • Technology (cited) removes the distinction between Rules and Standards
  • advance (tax) rulings
  • Private Letter Rulings, IRS
  • No Action Letter, SEC


  • one size fits all
  • bright-line rule
  • over-inclusive (contra under-inclusive)
  • optimal decision rule
  • reasonable care
  • Error Typology (in hypothesis testing)
    • Type I Error
    • Type II Error
  • health surveillance technologies
  • second-order regulation


Sure, it’s a legal-style paper so there are 191 footnotes sprinkled liberally throughout the piece.  Only selected references were developed.


  • <quote>Indeed, some suggest that Moore’s Law is akin to a self-fulfilling prophecy.</quote>
    Harro van Lente & Arie Rip, “Expectations in Technological Developments: an Example of Prospective Structures to be Filled in by Agency”  researchgate, In Getting New Technologies Together: Studies In Making Sociotechnical Order, 206 (Cornelis Disco & Barend van der Meulen, eds. 1998), Amazon:311015630X: paper: $210+SHT.
  • …and more…

Via: backfill.

Digital Market Manipulation | Calo

M. Ryan Calo, Digital Market Manipulation; University of Washington School of Law Research Paper No. 2013-27; 2013-08-15; 53 pages.


Jon Hanson and Douglas Kysar coined the term “market manipulation” in 1999 to describe how companies exploit the cognitive limitations of consumers. Everything costs $9.99 because consumers see the price as closer to $9 than $10. Although widely cited by academics, the concept of market manipulation has had only a modest impact on consumer protection law.

This Article demonstrates that the concept of market manipulation is descriptively and theoretically incomplete, and updates the framework for the realities of a marketplace that is mediated by technology. Today’s firms fastidiously study consumers and, increasingly, personalize every aspect of their experience. They can also reach consumers anytime and anywhere, rather than waiting for the consumer to approach the marketplace. These and related trends mean that firms can not only take advantage of a general understanding of cognitive limitations, but can uncover and even trigger consumer frailty at an individual level.

A new theory of digital market manipulation reveals the limits of consumer protection law and exposes concrete economic and privacy harms that regulators will be hard-pressed to ignore. This Article thus both meaningfully advances the behavioral law and economics literature and harnesses that literature to explore and address an impending sea change in the way firms use data to persuade.


  • Assertions
    1. The digitization of commerce dramatically alters the capacity of firms to influence consumers at a personal level.
    2. Behavioral economics furnishes the best framework by which to understand and evaluate this emerging challenge; qualified as “once BE integrates the full relevance of the digital revolution”
  • Forward Claims
    • <quote>The consumer of the future is a mediated consumer—she approaches the marketplace through technology designed by someone else.</quote>
    • <quote>This permits firms to surface the specific ways each individual consumer deviates from rational decision-making, however idiosyncratic, and
      leverage that bias to the firm’s advantage.</quote>
    • <quote>Firms do not have to wait for consumers to enter the marketplace. Rather, constant screen time and more and more networked or “smart” devices mean that consumers can be approached anytime, anywhere.</quote>
  • Consequences of Mediation
    1. Technology captures and retains intelligence on the consumer’s
      interaction with the firm.
    2. Firms can and do design every aspect of the interaction with the consumer.
    3. Firms can choose when to approach consumers, rather than wait until the
      consumer has decided to enter a market context.
  • The Argument
    • There is nothing new here
      • In every age, in every generation
      • Someone opines about the odious behavior of the grubby trades.
    • Or not; there is something new here, actually
    • Claimed:
      • An exceptionalism argument
      • digital market manipulation is different (exceptional)
      • The combination of three elements
        1. personalization
        2. systemization
        3. mediation
      • The bright line test is: systemization of the personal
        coupled with divergent interests.
      • QED
  • Therefore
    • Someone might get hurt (“there might be harms”).
    • A precautionary principle, as a limiting principle, justifies intervention.
    • Expansively: <quote>What, exactly, is the harm of serving an ad to a
      consumer that is based on her face or that plays to her biases? The
      skeptic may see none. [The case is made] that digital market
      manipulation, [as defined], has the potential to generate economic and privacy harms, and to damage consumer autonomy in a very specific way.</quote>
  • The Harms
    • Without the fancy-speak: <quote>Digital market manipulation presents an easy case: firms purposefully leverage information about consumers to their
      disadvantage in a way that is designed not to be detectable to them.</quote>
    • Ryan Calo, The Boundaries of Privacy Harm; In Indiana Law Journal; Volume 86, No. 3; 2011; available 2010-07-16; 31 pages.

      • either
        • unwanted observation
        • unwanted mental states.
      • “limiting principle”
      • “rule of recognition”
    • Market failure, writ large
      • Externalities generated, writ large
      • Regressive distribution effects, a type of market failure
    • Costs, Burdens
      • Costs-to-avoid by consumers.
      • Differential pricing against unwary customers
        e.g. imputed or estimated ability to pay or perceived willingness to pay as indicated by purchase behavior or visit frequency.
      • Inefficiencies, a type of market failure
    • Privacy [harms to it]
      • Made manifest as differential pricing.
      • Loss of control (whatever that means).
      • Information sharing, between firms.
    • Autonomy [harms to it]
      • Vulnerability
      • Something gauzy-vague about
        • the encroachment upon play or playfulness.
        • the act of being watched changes the behavior of the subject (who is an object, being watched).
      • Threat Model: <quote>[The] concern is that hyper-rational actors armed with the ability to design most elements of the transaction will approach the consumer of the future at the precise time and in the exact way that
        tends to guarantee a moment of (profitable) irrationality.</quote>
  • Concessions, commitments & stipulations
    • Around the Harms, generally
      • No harm exists if the subject has no knowledge of it or concept of it as “harm” as such; the “hidden Peeping Tom” principle.
      • All this could inure to the benefit of the consumer, maybe.
    • Around the harm to Autonomy, specifically
      • Autonomy is defined as the absence of vulnerability; thus adding vulnerability necessarily decreases autonomy.
      • The Law (consumer protection) has an interest in vulnerability and autonomy
        • On a forward-looking, hypothetical & precautionary basis; i.e. with X in hand, it might become possible to Y.
        • On a backward-looking, as-is or as-was basis; i.e. damage Y was done, is being done.
    • Around the notional degree or kind of the behaviors:
    • <quote>We are not talking about outright fraud here―in the sense of a material misrepresentation of fact―but only a tendency to mislead.</quote>
    • <quote>A consumer who receives an ad highlighting the limited supply of the product will not usually understand that the next person, who has not been associated with a fear of scarcity, sees a different pitch based on her biases. Such a practice does not just tend to mislead; misleading is the entire point.</quote>
  • The Free Speech Trump Card
    • No.
    • Classes
      • Political Speech
      • Commercial Speech
    • “The Press” must obey all other laws.
    • Mere data gathering is not “speech”, as such.
    • Data gathering for speech, is still not speech.

The Remedies

  1. Internal: Customer Subject Review Boards
    1. An institutional oversight organ; like an IHRB, an Ombudsman
    2. A body of ethical principles and practical guidelines (like The Belmont Report)
  2. External: Remove the conflict of interest
    1. Avoid the (targeted) advertising business model
    2. Avoid the (personalization) aspect of the user experience
    3. Fee-based, subscription-based services.

M. Ryan Calo; Code, Nudge, or Notice?; University of Washington School of Law Research Paper No. 2013-04; 2013-02-13; 29 pages.

At some point, though, one is just negotiating on price.

  • <quote>For, say, ten dollars a month or five cents a visit, users could opt out of the entire marketing ecosystem.</quote>
  • $120/year to whom?
  • $120/year across what scope?

Follow On

Attributed to Vance Packard

  • When you are manipulating, where do you stop?
  • Who is to fix the point at which manipulative attempts become socially undesirable?


  • Throat Clearing & Contextualization
  • Individuals & Institutions (mostly in order of appearance)
    • Jon Hanson
    • Douglas Kysar
    • Vance Packard, the works of
    • Eli Pariser
    • Joseph Turow
    • Dan Ariely
    • Christine Jolls
    • Cass Sunstein
    • Richard Thaler
    • Lior Strahilevitz
    • Ariel Porat
    • Ian Ayres
    • George Geis
    • Scott Peppet
    • Amos Tversky
    • Daniel Kahneman
    • Herbert Simon
    • Cliff Nass
    • Chris Anderson
    • Alessandro Acquisti
    • Christopher Yoo
    • Maurits Kaptein; persuasion profiling
    • John Hauser
    • Glen Urban
    • John Calfee
    • Dean Eckles, Facebook; persuasion profiling
    • Oren Bar-Gill
    • Russell Korobkin
    • Andrew Odlyzko
    • Neil Richards
    • Tal Zarsky
    • Julie Cohen
    • Jane Yakowitz Bambauer
    • B. J. Fogg; captology, Persuasive Technology
    • Matthew Edwards
    • Peter Swire
    • Richard Craswell
    • Viktor Mayer-Schönberger
    • Kenneth Cukier
    • John Rawls
  • Theory Bodies
    • Behavioral Economics (BE)
    • Prospect Theory
    • Dual Process Theory
      • Fast Thinking (contra Slow Thinking)
    • Neuromarketing
  • Branded Concepts (a tour of the terms)
    • Libertarian Paternalism
    • Nudging
    • Debiasing
    • Predictable Irrationality
    • Disclosure Ratcheting
    • Bounded Rationality
    • The End of Theory
      • Correlation trumping causality (Chris Anderson)
    • Outlier
      • Outlier detection
      • Outlier modeling
    • Information Overload
    • Phenomena “wear out”
    • Personification & anthropomorphization of “Information” to justify irrationality
      • As Villain
      • As Hero
      • As Victim
    • Information [overload] management strategies
      • Visceral Notice
      • Feedback
      • Mapping
    • Anchoring
    • Framing
    • Biases, debiasing
      • A/B Testing
      • General Bias
      • Specific Bias
    • Incentivization
    • Targeting
      • Means-based targeting
      • Morphing
      • Persuasion profiling (motivation discovery & exploitation)
    • Consent, limits of consent
      • contract term can become “unconscionable”
      • enrichments can become “unjust”
      • influence can become “undue”
      • dealings can constitute “fair” dealing, or not
      • strategic behavior can constitute “bad faith”
      • interest rates can become “usurious” (usury)
      • higher prices can become “gouging” (price gouging)
    • Market Failure
      • inefficient markets
      • novel (new) sources of market failure (ah! I found another one!)
    • Behaviorally-informed [contract] drafting techniques.
    • The “hidden Peeping Tom” principle (puzzle, conundrum, stance)
      i.e. the woman doesn’t lose any virtue if she doesn’t know she was watched.
    • Caveat emptor
    • Mandatory Disclosure
    • “facilitation” (contra “friction”)
    • Informed Consent
    • “digital nudging”
    • “publicity principle”
  • Devices, Formulae, Practices
    • Learned Hand Formula (calculus of negligence).
    • Belmont Report
      • principles of “beneficence” and “justice”
      • Beneficence is defined as the minimization of harm to the subject and society while maximizing benefit—a kind of ethical Learned Hand Formula.
      • Justice prohibits unfairness in distribution, defined as the undue imposition of a burden or withholding of a benefit.
    • Institutional Human Review Board (IHRB)
    • Internal “algorithmists”
    • Ombudsmen
  • Teleology & Goals
    • (Online advertising) is “ends-based”
    • Advertisers aspire to match the right ad to the right person [at the right time]
  • Argot (general trade-specific terms)
    • “atmospherics,” the layout and presentation of retail space.
    • “preference marketing,” against a consumer’s stated or volunteered preferences.
    • “behavioral marketing,” against a consumer’s observed preferences.
  • Epithets, Insults & Snidenesses
    • “Kafkaesque”, attributed to Daniel Solove, and concurred in by Calo.
    • “the feel of a zoetrope, spinning static case law in a certain light to
      create the illusion of forward motion” see page 39.
    • “Elephants” vs “mice” with 2x opinements from Peter Swire.
    • “digital divide”
  • Definition of market manipulation
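For reference, the Learned Hand Formula noted above (from United States v. Carroll Towing, 1947) can be written out: a party is negligent when the burden of adequate precautions is less than the expected loss.

```latex
B < P \cdot L
\quad\text{where } B = \text{burden of precaution},\;
P = \text{probability of injury},\;
L = \text{gravity of the loss}
```

The Belmont Report’s “beneficence” principle reads as an ethical analogue: minimize the expected harm to subject and society while maximizing the benefit.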


… of note [it's a legal paper so the thing is largely footnotes].  No order.


All this rests upon the definition of Market Manipulation of Hanson & Kysar.

Taking Behavioralism Seriously, Part I

Jon Hanson, Douglas Kysar; Taking Behavioralism Seriously: The Problem of Market Manipulation; In New York University Law Review; Volume 74; 1999; page 632; Also Harvard Public Law Working Paper No. 08-54; 118 pages.


For the past few decades, cognitive psychologists and behavioral researchers have been steadily uncovering evidence that human decisionmaking processes are prone to nonrational, yet systematic, tendencies. These researchers claim not merely that we sometimes fail to abide by rules of logic, but that we fail to do so in predictable ways.

With a few notable exceptions, implications of this research for legal institutions were slow in reaching the academic literature. Within the last few years, however, we have seen an outpouring of scholarship addressing the impact of behavioral research over a wide range of legal topics. Indeed, one might predict that the current behavioral movement eventually will have an influence on legal scholarship matched only by its predecessor, the law and economics movement. Ultimately, any legal concept that relies in some sense on a notion of reasonableness or that is premised on the existence of a reasonable or rational decisionmaker will need to be reassessed in light of the mounting evidence that humans are “a reasoning rather than a reasonable animal.”

This Article contributes to that reassessment by focusing on the problem of manipulability. Our central contention is that the presence of unyielding cognitive biases makes individual decisionmakers susceptible to manipulation by those able to influence the context in which decisions are made. More particularly, we believe that market outcomes frequently will be heavily influenced, if not determined, by the ability of one actor to control the format of information, the presentation of choices, and, in general, the setting within which market transactions occur. Once one accepts that individuals systematically behave in nonrational ways, it follows from an economic perspective that others will exploit those tendencies for gain.

That possibility of manipulation has a variety of implications for legal policy analysis that have heretofore gone unrecognized. This article highlights some of those implications and makes several predictions that are tested in other work.

Taking Behavioralism Seriously, Part II

Jon Hanson, Douglas Kysar; Taking Behavioralism Seriously: Some Evidence of Market Manipulation; In Harvard Law Review; Volume 112; 1999; page 1420; Also Harvard Public Law Working Paper No. 08-52; 149 pages.


An important lesson of behavioralist research is that individuals’ perceptions and preferences are highly manipulable. This article presents empirical evidence of market manipulation, a previously unrecognized source of market failure. It surveys extensive qualitative and quantitative marketing research and consumer behavioral studies, reviews common practices in settings such as gas stations and supermarkets, and examines environmentally oriented and fear-based advertising. The article then focuses on the industry that has most depended upon market manipulation: the cigarette industry. Through decades of sophisticated marketing and public relations efforts, cigarette manufacturers have heightened consumer demand and lowered consumer risk perceptions. Such market manipulation may justify moving to enterprise liability, the regime advocated by the first generation of product liability scholars.

Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms | Crawford, Schultz

Kate Crawford, Jason Schultz; Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms; In Boston College Law Review; Vol. 55, No. 1, 2014; NYU School of Law, Public Law Research Paper No. 13-64; NYU Law and Economics Research Paper No. 13-36.

  • Kate Crawford, Microsoft Research; MIT Center for Civic Media; University of New South Wales (UNSW)
  • Jason Schultz, New York University School of Law


The rise of “big data” analytics in the private sector poses new challenges for privacy advocates. Unlike previous computational models that exploit personally identifiable information (PII) directly, such as behavioral targeting, big data has exploded the definition of PII to make many more sources of data personally identifiable. By analyzing primarily metadata, such as a set of predictive or aggregated findings without displaying or distributing the originating data, big data approaches often operate outside of current privacy protections (Rubinstein 2013; Tene and Polonetsky 2012), effectively marginalizing regulatory schema. Big data presents substantial privacy concerns – risks of bias or discrimination based on the inappropriate generation of personal data – a risk we call “predictive privacy harm.” Predictive analysis and categorization can pose a genuine threat to individuals, especially when it is performed without their knowledge or consent. While not necessarily a harm that falls within the conventional “invasion of privacy” boundaries, such harms still center on an individual’s relationship with data about her. Big data approaches need not rely on having a person’s PII directly: a combination of techniques from social network analysis, interpreting online behaviors and predictive modeling can create a detailed, intimate picture with a high degree of accuracy. Furthermore, harms can still result when such techniques are done poorly, rendering an inaccurate picture that nonetheless is used to impact on a person’s life and livelihood.

In considering how to respond to evolving big data practices, we began by examining the existing rights that individuals have to see and review records pertaining to them in areas such as health and credit information. But it is clear that these existing systems are inadequate to meet current big data challenges. Fair Information Privacy Practices and other notice-and-choice regimes fail to protect against predictive privacy risks in part because individuals are rarely aware of how their individual data is being used to their detriment, what determinations are being made about them, and because at various points in big data processes, the relationship between predictive privacy harms and originating PII may be complicated by multiple technical processes and the involvement of third parties. Thus, past privacy regulations and rights are ill equipped to face current and future big data challenges.

We propose a new approach to mitigating predictive privacy harms – that of a right to procedural data due process. In the Anglo-American legal tradition, procedural due process prohibits the government from depriving an individual’s rights to life, liberty, or property without affording her access to certain basic procedural components of the adjudication process – including the rights to review and contest the evidence at issue, the right to appeal any adverse decision, the right to know the allegations presented and be heard on the issues they raise. Procedural due process also serves as an enforcer of separation of powers, prohibiting those who write laws from also adjudicating them.

While some current privacy regimes offer nominal due process-like mechanisms in relation to closely defined types of data, these rarely include all of the necessary components to guarantee fair outcomes and arguably do not apply to many kinds of big data systems (Terry 2012). A more rigorous framework is needed, particularly given the inherent analytical assumptions and methodological biases built into many big data systems (boyd and Crawford 2012). Building on previous thinking about due process for public administrative computer systems (Steinbock 2005; Citron 2010), we argue that individuals who are privately and often secretly “judged” by big data should have similar rights to those judged by the courts with respect to how their personal data has been used in such adjudications. Using procedural due process principles, we analogize a system of regulation that would provide such rights against private big data actors.


  • Chain of Reasoning
    • Data creates “judgements”
    • Judgements create “takings”
    • Takings require “due process”
  • Due Process is:
    • a Separation of Powers
    • a Systems Management (discipline)
    • fair and feasible
  • Elements of A Due Process “Hearing” (supporting citations):
    1. an unbiased tribunal;
    2. notice of the proposed action;
    3. the grounds asserted for it;
    4. an opportunity to present reasons why the proposed action should not be taken;
    5. the right to call witnesses;
    6. the right to know the evidence against one;
    7. the right to have the decision based only on the evidence presented;
    8. the right to counsel;
    9. the making of a record;
    10. a statement of reasons;
    11. public attendance;
    12. judicial review.
  • Values of the Due Process; it should preserve (supporting citations):
    1. accuracy;
    2. the appearance of fairness;
    3. equality of inputs into the process;
    4. predictability, transparency, and rationality;
    5. participation;
    6. revelation;
    7. privacy-dignity.



Via: backfill