Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society | Pasquale

Frank A. Pasquale III; Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society; Ohio State Law Journal, Vol. 78, 2017, U of Maryland Legal Studies Research Paper No. 2017-21; 2017-07-14; 13 pages; ssrn:3002546.

tl;dr → A comment for Balkin. To wit:
  1. Balkin should have supplied more context; such correction is supplied herewith.
  2. More expansive supervision is indicated; such expansion is supplied herewith.
  3. Another law is warranted; not a trinity, but perfection plus one more.
Fourth Law

A [machine] must always indicate the identity of its creator, controller, or owner.
<ahem>Like… say… a license to operate or to practice; a permit; as manifest in a license plate, a certificate of operation, a board certification, a driver’s license, a contractor’s license, a Bar Association number, a VIN, a tail number, a hull number.</ahem>
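
In the spirit of attribution-by-design, a minimal sketch (Python; the field names, the identifier scheme, and the whole manifest are hypothetical illustrations, not anything Pasquale specifies):

<code>
# Hypothetical sketch: a machine-readable "license plate" for an algorithmic system.
# Field names and identifier scheme are illustrative assumptions, not from the paper.
from dataclasses import dataclass, asdict
import json

@dataclass
class AttributionManifest:
    system_id: str   # analogous to a VIN, tail number, or hull number
    creator: str     # who built the machine
    controller: str  # who operates it day to day
    owner: str       # who answers for it legally
    license_id: str  # analogous to a bar number or contractor license

def disclose(manifest: AttributionManifest) -> str:
    """Serialize the identity disclosure the Fourth Law would have a machine produce on demand."""
    return json.dumps(asdict(manifest), indent=2)

print(disclose(AttributionManifest(
    system_id="BOT-0001", creator="Acme Robotics",
    controller="Acme Fleet Ops", owner="Acme Holdings LLC",
    license_id="ST-REG-42")))
</code>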

Three Laws, previous:

  1. machine operators are always responsible for their machines.
  2. businesses are always responsible for their operators.
  3. machines must not pollute.

So it is just like planes, trains & automobiles.

Accolade

<quote>Balkin’s lecture is a tour de force distillation of principles of algorithmic accountability, and a bold vision for entrenching them in regulatory principles. <snip>…etc…</snip></quote>

Nostrum

  • Regulators
  • non-functional requirements
    the branded “By Design” theories

    • responsibility-by-design,
    • security-by-design,
    • privacy-by-design,
    • attribution-by-design [traceability-by-design].
  • Audit logs (see the sketch after this list).
  • A Licentiate, the licentia ad practicandum
  • Supervisory Control.
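
A minimal sketch of the audit-log nostrum (Python; the hash-chaining scheme is an assumed design choice for tamper-evidence, not one Pasquale prescribes):

<code>
# Illustrative append-only audit log: each entry binds to its predecessor by hash,
# so after-the-fact tampering is detectable. The scheme is an assumption of this sketch.
import hashlib, json, time

def append_entry(log: list, actor: str, action: str) -> None:
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {"ts": time.time(), "actor": actor, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        body = {k: entry[k] for k in ("ts", "actor", "action", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "BOT-0001", "denied loan application #77")
append_entry(log, "BOT-0001", "escalated to human reviewer")
assert verify(log)
</code>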

Abstract

Jack Balkin makes several important contributions to legal theory and ethics in his lecture, “The Three Laws of Robotics in the Age of Big Data.” He proposes “laws of robotics” for an “algorithmic society” characterized by “social and economic decision making by algorithms, robots, and AI agents.” These laws both elegantly encapsulate, and add new principles to, a growing movement for accountable design and deployment of algorithms. [This] comment aims to

  1. contextualize his proposal as a kind of “regulation of regulation,” familiar from the perspective of administrative law,
  2. expand the range of methodological perspectives capable of identifying “algorithmic nuisance,” a key concept in Balkin’s lecture, and
  3. propose a fourth law of robotics to ensure the viability of Balkin’s three laws.

Mentions

  • Jack Balkin, Knight Professor of Constitutional Law and the First Amendment, Law School, Yale University; via Jimi Wales’ Wiki.

Amplification

[in case it wasn't otherwise clear]

<quote>Balkin’s lecture is a tour de force distillation of principles of algorithmic accountability, and a bold vision for entrenching them in regulatory principles. As he observes, “algorithms

  1. construct identity and reputation through
  2. classification and risk assessment, creating the opportunity for
  3. discrimination, normalization, and manipulation, without
  4. adequate transparency, monitoring, or due process.

[endquote]” They are, therefore, critically important features of our information society which demand immediate attention from regulators. High level officials around the world need to put the development of a cogent and forceful response to these developments at the top of their agendas. Balkin’s “Laws of Robotics” is an ideal place to start, both to structure that discussion at a high level and to ground it in deeply rooted legal principles.

It is rare to see a legal scholar not only work at the deepest levels of policy (in the sense of all those normative considerations that should inform legal decisions outside of the law governing the case), but also recommend in clear and precise language a coherent set of concrete recommendations that both exemplify principles of critical and social theory, and stand some chance of being adopted by current government officials. That is Balkin’s achievement in The Three Laws of Robotics in the Age of Big Data. It is work to cite, celebrate, and rally around, and an auspicious launch for Ohio State’s program in Big Data & Law.</quote>

Referenced

Jack M. Balkin  (Yale); The Three Laws of Robotics in the Age of Big Data; Ohio State Law Journal, Vol. 78, (2017), Forthcoming (real soon now, RSN), Yale Law School, Public Law Research Paper No. 592; 2016-12-29 → 2017-09-10; 45 pages; ssrn:2890965; previously filled, separately noted.

Argot

The Suitcase Words
  • Big Data,
    Age of Big Data
  • laws of robotics
    Three Laws of Robotics
  • algorithmic society
  • social decision-making,
    social decision-making by algorithms
  • economic decision-making,
    economic decision-making by algorithms
  • algorithms
  • robots
  • Artificial Intelligence (AI)
  • AI Agents
  • encapsulate
  • principles
  • accountable design
  • deployment of algorithms
  • contextualize
  • regulation of regulation
  • perspective of administrative law
  • methodological perspectives,
    range of methodological perspectives
  • algorithmic nuisance
  • fourth law of robotics
  • viability, to ensure the viability of
  • three laws

Previously filled.

The Three Laws of Robotics in the Age of Big Data | Balkin

Jack M. Balkin  (Yale); The Three Laws of Robotics in the Age of Big Data; Ohio State Law Journal, Vol. 78, (2017), Forthcoming (real soon now, RSN), Yale Law School, Public Law Research Paper No. 592; 2016-12-29 → 2017-09-10; 45 pages; ssrn:2890965.

tl;dr → administrative laws [should be] directed at human beings and human organizations, not at [machines].

Laws

  1. machine operators are responsible
    [for the operations of their machines, always & everywhere]
  2. businesses are responsible
    [for the operation of their machines, always & everywhere]
  3. machines must not pollute
    [in a sense to be defined later: e.g. by a "tussle"]

None of this requires new legal theory; cf. licensing for planes, trains & automobiles; and on to nuclear plants, steel, unto any intellectual business operation of any kind (ahem: medical, architecture, legal services; and anything at all under the Commerce Clause, no?)

Mentions

  • Isaac Asimov, the stories of
    …and the whole point of the stories was the problematic nature of The Three Laws. They seemed fun and clear, but they were problematized and they don’t work as a supervisory apparatus. Maybe they don’t work at all. Is the same true here? Not shown.
  • Laws of Robotics,
    Three Laws of Robotics.
  • [redefined] the “laws of robotics” are the legal and policy principles that govern [non-persons, unnatural-persons].

Concepts & Principles (HF/SE/IF/AN)

  1. homunculus, a fallacy
  2. substitution, an effect
  3. information fiduciaries, a role
  4. algorithmic nuisance, an ideal (an anti-pattern)

Analysis

A matrix, the cross product, of twelve (12) combinations (enumerated in the sketch below):

Requirements (TAdP)
  1. Transparency
  2. Accountability
  3. due Process
Principles (HF/SE/IF/AN)
  • [the] homunculus fallacy
  • [a] substitution effect
  • information fiduciaries
  • algorithmic nuisance
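
The twelve cells enumerate mechanically; a throwaway sketch (Python; the labels are transcribed from the lists above, nothing more):

<code>
# Enumerate Balkin's 3x4 matrix: requirements (TAdP) crossed with principles (HF/SE/IF/AN).
from itertools import product

requirements = ["transparency", "accountability", "due process"]
principles = ["homunculus fallacy", "substitution effect",
              "information fiduciaries", "algorithmic nuisance"]

for i, (req, prin) in enumerate(product(requirements, principles), 1):
    print(f"{i:2d}. {req} x {prin}")  # twelve (12) combinations
</code>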

Argot

The Suitcase Words
  • Isaac Asimov.
  • three laws of robotics.
  • programmed,
    programmed into every robot.
  • govern.
  • robots.
  • algorithms.
  • artificial intelligence agents.
  • legal principles,
    basic legal principles.
  • the homunculus fallacy.
  • the substitution effect.
  • information fiduciaries.
  • algorithmic nuisance.
  • homunculus fallacy.
  • attribution.
  • human intention.
  • human agency.
  • robots.
  • belief,
    false belief.
  • person
    little person.
  • robot.
  • program.
  • intentions,
    good intentions.
  • substitution effect.
  • social power.
  • social relations.
  • robots.
  • Artificial Intelligence (AI).
  • AI agents.
  • algorithms.
  • substitute,
    algorithms substitute for human beings.
  • operate,
    algorithms operate as special-purpose people.
  • mediated,
    mediated through new technologies.
  • three laws of robotics
    Three Laws of Robotics.
  • Algorithmic Society.
  • robots.
  • artificial intelligence agents.
  • algorithms.
  • governments.
  • businesses.
  • staffed.
  • Algorithmic Society.
  • asymmetries,
    asymmetries of information,
    asymmetries of monitoring capacity,
    asymmetries of computational power.
  • Algorithmic Society.
  • operators,
    operators of robots,
    operators of algorithms,
    operators of artificial intelligence agents.
  • information fiduciaries.
  • special duties,
    special duties of good faith,
    special duties of fair dealing.
  • end-users, clients and customers, data subjects.
  • businesses,
    privately owned businesses.
  • the public,
    the general public.
  • duty,
    central public duty.
  • algorithmic nuisances.
  • leverage utilize use.
  • asymmetries of information,
    asymmetries of monitoring capacity,
    asymmetries of computational power.
  • externalize,
    externalize the costs,
    externalize the costs of their activities.
  • algorithmic nuisance.
  • harms
    harms of algorithmic decision making.
  • discrimination
    intentional discrimination.
  • pollution,
    unjustified pollution
    socially unjustified pollution
    contra (socially-)justified pollution.
  • power
    computational power.
  • obligations,
    obligations of transparency,
    obligations of due process,
    obligations of accountability.
  • obligations flow.
  • requirements,
    substantive requirements,
    three substantive requirements.
  • transparency.
  • accountability.
  • due process.
  • obligation,
    an obligation of.
  • fiduciary relations.
  • public duties.
  • measure,
    a measure,
    a prophylactic measure.
  • externalization,
    unjustified externalization,
    unjustified externalization of harms.
  • remedy,
    remedy for harm.

Previously filled.

Incompatible: The GDPR in the Age of Big Data | Tal Zarsky

Tal Zarsky (Haifa); Incompatible: The GDPR in the Age of Big Data; Seton Hall Law Review, Vol. 47, No. 4(2), 2017; 2017-08-22; 26 pages; ssrn:3022646.
Tal Z. Zarsky is Vice Dean and Professor, Haifa University, IL.

tl;dr → the opposition is elucidated and juxtaposed; the domain is problematized.
and → “Big Data,” by definition, is opportunistic and unsupervisable; it collects everything and identifies something later in the backend.  Else it is not “Big Data” (it is “little data,” which is known, familiar, boring, and of course has settled law surrounding its operational envelope).

Abstract

After years of drafting and negotiations, the EU finally passed the General Data Protection Regulation (GDPR). The GDPR’s impact will, most likely, be profound. Among the challenges data protection law faces in the digital age, the emergence of Big Data is perhaps the greatest. Indeed, Big Data analysis carries both hope and potential harm to the individuals whose data is analyzed, as well as other individuals indirectly affected by such analyses. These novel developments call for both conceptual and practical changes in the current legal setting.

Unfortunately, the GDPR fails to properly address the surge in Big Data practices. The GDPR’s provisions are — to borrow a key term used throughout EU data protection regulation — incompatible with the data environment that the availability of Big Data generates. Such incompatibility is destined to render many of the GDPR’s provisions quickly irrelevant. Alternatively, the GDPR’s enactment could substantially alter the way Big Data analysis is conducted, transferring it to one that is suboptimal and inefficient. It will do so while stalling innovation in Europe and limiting utility to European citizens, while not necessarily providing such citizens with greater privacy protection.

After a brief introduction (Part I), Part II quickly defines Big Data and its relevance to EU data protection law. Part III addresses four central concepts of EU data protection law as manifested in the GDPR: Purpose Specification, Data Minimization, Automated Decisions and Special Categories. It thereafter proceeds to demonstrate that the treatment of every one of these concepts in the GDPR is lacking and in fact incompatible with the prospects of Big Data analysis. Part IV concludes by discussing the aggregated effect of such incompatibilities on regulated entities, the EU, and society in general.

Rebuttal

<snide><irresponsible>Apparently this was not known before the activists captured the legislature and effected their ends with the force of law. Now we know. Yet we all must obey the law, as it stands and as it is written. And why was this not published in an EU-located law journal, perhaps one located in … Brussels?</irresponsible></snide>

Contents

  1. INTRODUCTION AND ROAD MAP
  2. A BRIEF PRIMER ON BIG DATA AND THE LAW
  3. THE GDPR’S INCOMPATIBILITY: FOUR EXAMPLES

    1. Purpose Limitation
    2. Data Minimization
    3. Special Categories
    4. Automated Decisions
  4. CONCLUSION: WHAT’S NEXT FOR EUROPE?

Mentioned

  • Big Data (contra “little data”)
  • personal data
  • Big Data Revolution
  • evolution not revolution
    no really, revolution not evolution
  • The GDPR is a regulation “on the protection of natural persons,”
  • EU General Data Protection Regulation (GDPR)
  • EU Data Protection Directive (DPD)
  • Is the GDPR different from the DPD? Maybe not. Why? cf. page 10.
  • Various attempts at intuiting bright-line tests around the laws are recited.
    It is a law, but nobody knows how it is interpreted or how it might be enforced.
  • statistical purpose
  • analytical purpose
  • data minimization
  • pseudonymization
  • reidentification
  • specific individuals
  • <quote>In the DPD, article 8(1) prohibited the processing of data “revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life,” while providing narrow exceptions. This distinction was embraced by the GDPR.</quote>
  • Article 29 Working Party
  • on (special) category contagion
    “we feel that all data is credit data, we just don’t know how to use it yet.”
    cf. page 19; attributed to Dr. Douglas Merrill, then-founder, ZestFinance, ex-CTO, Google.
  • data subjects
  • automated decisions
  • right to “contest the decision”
  • obtain human intervention
  • trade secrets contra decision transparency
    by precedent, in EU (DE), corporate rights trump decision subject’s rights.
  • [a decision process] must be interpretable
  • right to due process [when facing a machine]

Definitions

Big Data is…

  • …wait for it… so very very big
    …thank you, thank you very much. I will be here all week. Please tip your waitron.
  • The Four, Five “Vs”:
  1. The Volume of data collected,
  2. The Variety of the sources,
  3. The Velocity,
    <quote>with which the analysis of the data can unfold,</quote>,
  4. The Veracity,
    <quote>of the data which could (arguably) be achieved through the analytical process.</quote>,
  5. The Value, yup, that’s five.
    … <quote>yet this factor seems rather speculative and is thus best omitted.</quote>,
Erudition

The Brussels Effect

  • What goes on in EU goes global,
  • “Europeanization”
  • Law in EU is applied world-wide because corporate operations are universal.

Aspects

  • purpose limitation,
  • data minimization,
  • special categories,
  • automated decisions.

References

There are 123 references, across 26 pages of prose, made manifest as footnotes in the legal style. Here, simplified and deduplicated.

Previously filled.

The Death of Rules and Standards | Casey, Niblett

Anthony J. Casey, Anthony Niblett; The Death of Rules and Standards; Coase-Sandor Working Paper Series in Law and Economics No. 738; Law School, University of Chicago; 2015; 58 pages; landing, copy, ssrn:2693826.

tl;dr → because reasons, and the combination of:
  • Prediction Technologies
  • Communication Technologies

Abstract

Scholars have examined the lawmakers’ choice between rules and standards for decades. This paper, however, explores the possibility of a new form of law that renders that choice unnecessary. Advances in technology (such as big data and artificial intelligence) will give rise to this new form – the micro-directive – which will provide the benefits of both rules and standards without the costs of either.

Lawmakers will be able to use predictive and communication technologies to enact complex legislative goals that are translated by machines into a vast catalog of simple commands for all possible scenarios. When an individual citizen faces a legal choice, the machine will select from the catalog and communicate to that individual the precise context-specific command (the micro-directive) necessary for compliance. In this way, law will be able to adapt to a wide array of situations and direct precise citizen behavior without further legislative or judicial action. A micro-directive, like a rule, provides a clear instruction to a citizen on how to comply with the law. But, like a standard, a micro-directive is tailored to and adapts to each and every context.

While predictive technologies such as big data have already introduced a trend toward personalized default rules, in this paper we suggest that this is only a small part of a larger trend toward context-specific laws that can adapt to any situation. As that trend continues, the fundamental cost trade-off between rules and standards will disappear, changing the way society structures and thinks about law.
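
To make the mechanism concrete, a toy sketch along the lines of the paper’s traffic example: the legislature fixes the goal, the machine compiles the catalog, and the driver’s dashboard receives one precise command (Python; every threshold and input here is invented for illustration):

<code>
# Toy micro-directive selector in the spirit of the Casey & Niblett traffic example.
# The legislature states the goal ("drive safely"); the machine emits the
# context-specific command. All thresholds below are invented assumptions.
def micro_directive(weather: str, visibility_m: float, school_zone: bool) -> int:
    """Return the precise speed command (mph) for this driver, in this context."""
    limit = 65                        # baseline for clear conditions
    if weather in ("rain", "snow"):
        limit = min(limit, 45)
    if visibility_m < 100:
        limit = min(limit, 30)
    if school_zone:
        limit = min(limit, 20)
    return limit

# What the dashboard display might be told right now:
print(micro_directive(weather="rain", visibility_m=80, school_zone=False))  # → 30
</code>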

Table of Contents

  1. Introduction
  2. The Emergence Of Micro-Directives And The Decline Of Rules And Standards
    1. Background: Rules and standards
    2. Technology will facilitate the emergence of micro-directives as a new form of law
    3. Demonstrative examples
      • Example 1: Predictive technology in medical diagnosis
      • Example 2: Communication technology in traffic laws
    4. The different channels leading to the death of rules and standards
      1. The production of micro-directives by non-legislative lawmakers
      2. An alternative path: Private use of technology by regulated actors
  3. Feasibility
    1. The feasibility of predictive technology
      1. The power of predictive technology
      2. Predictive technology will displace human discretion
    2. The feasibility of communication technology
  4. Implications And Consequences
    1. The death of judging? Institutional changes to the legal system
    2. The development and substance of policy objectives
    3. Changes to the practice of law
    4. The broader consequences of these technologies on individuals
      1. Privacy
      2. Autonomy
      3. Ethics
  5. Conclusion

Snide

  • heavy-handed use of the metaphor “death of X” in lieu of the more mundane “cessation of use of the technique X.”
  • At least they didn’t use the metaphors of “sea change” or “tectonic shifts” from the respective fields of weather prediction or geology.
  • <GEE-WHIZZ!>As economist Professor William Nordhaus notes, the increase in computer power over the course of the twentieth century was “phenomenal,”</GEE-WHIZZ!>

Mentions

  • catalog of personalized laws.
    “special law for you.”
  • rules and standards
    contra
    captures and benefits
  • micro-benefits
  • predictive technology
  • uncertainty of law
  • <quote>The legislature merely states its goal. Machines then design the law as a vast catalog of context-specific rules to optimize that goal. From this catalog, a specific micro-directive is selected and communicated to a particular driver (perhaps on a dashboard display) as a precise speed for the specific conditions she faces.</quote>
  • positive versus normative (analysis)
  • (legislative) decision-making has
    • errors
    • costs
  • (subject) compliance has
    • cost
    • uncertainty
  • There are economies of scale in compliance
    • frequency of event
    • diversity of events
  • Conceptualize the frequency of the regulated event relative to the specificity of the regulation.
  • The Combination
    • Prediction Technologies
    • Communication Technologies
  • <quote>The wise draftsman . . . asks himself, how many of the details of this settlement ought to be postponed to another day, when the decisions can be more wisely and efficiently and perhaps more readily made?</quote>, attributed to Henry Hart, Albert Sacks.
  • Claim
    • Standards are flexible, broad but uncertain in adjudication;
      so service delivery is tailored
      therefore the salubrious effect obtains.
    • Rules are specific, narrow but certain in adjudication;
      so service delivery is pre-specified, constrained
      therefore mis-applications occur.
    • Technology (cited) removes the distinction between Rules and Standards
  • advance (tax) rulings
  • Private Letter Rulings, IRS
  • No Action Letter, SEC

Argot

  • one size fits all
  • bright-line rule
  • over-inclusive (contra under-inclusive)
  • optimal decision rule
  • reasonable care
  • Error Typology (in hypothesis testing)
    • Type I Error
    • Type II Error
  • health surveillance technologies
  • second-order regulation

References

Sure, it’s a legal-style paper, so there are 191 footnotes sprinkled liberally throughout the piece. Only selected references were developed.

Followup

  • <quote>Indeed, some suggest that Moore’s Law is akin to a self-fulfilling prophecy.</quote>
    Harro van Lente & Arie Rip, “Expectations in Technological Developments: an Example of Prospective Structures to be Filled in by Agency”  researchgate, In Getting New Technologies Together: Studies In Making Sociotechnical Order, 206 (Cornelis Disco & Barend van der Meulen, eds. 1998), Amazon:311015630X: paper: $210+SHT.
  • …and more…

Via: backfill.

Online Ads Roll the Dice declares the Federal Trade Commission (FTC)

Latanya Sweeney (FTC); Online Ads Roll the Dice; In Their Blog; 2014-09-25.
Latanya Sweeney is Chief Technologist at the Federal Trade Commission (FTC)
Teaser: Online ads, exclusive online communities, and the potential for adverse impacts from big data analytics

tl;dr => content targeting is bad, audience targeting is insidious.

Original Sources

Big Data: A Tool for Inclusion or Exclusion?; workshop; Federal Trade Commission (FTC); 2014-09-15;

  • Proceedings & Media
  • Commentariat
  • Speakers
    • Kristin Amerling, Chief Investigative Counsel and Director of Oversight, U.S. Senate Committee on Commerce, Science and Transportation
    • Alessandro Acquisti, Associate Professor of Information Systems and Public Policy, Heinz College, Carnegie Mellon University and Co-director of the CMU Center for Behavioral Decision Research
    • Katherine Armstrong, Senior Attorney, Division of Privacy and Identity Protection, FTC
    • Solon Barocas, Postdoctoral Research Associate, Princeton University Center for Information Technology Policy
    • danah boyd, Principal Researcher, Microsoft Research, Research Assistant Professor, New York University
    • Julie Brill, Commissioner, Federal Trade Commission
    • Christopher Calabrese, Legislative Counsel, American Civil Liberties Union
    • Leonard Chanin, Partner, Morrison Foerster
    • Daniel Castro, Senior Analyst, Information Technology and Innovation Foundation
    • Pamela Dixon, Founder and Executive Director, World Privacy Forum,
    • Cynthia Dwork, Distinguished Scientist, Microsoft Research
    • Mallory Duncan, Senior Vice President and General Counsel, National Retail Federation
    • Patrick Eagan-Van Meter, Program Specialist, Division of Financial Practices, FTC
    • Jeanette Fitzgerald, General Counsel and Chief Privacy Officer, Epsilon
    • Tiffany George, Senior Attorney, Division of Privacy & Identity Protection, FTC
    • Jeremy Gillula, Staff Technologist, Electronic Frontier Foundation
    • Gene Gsell, Senior Vice President, U.S. Retail & CPG, SAS
    • Mark MacCarthy, Vice President for Public Policy, Software Information Industry Association
    • Carol Miaskoff, Assistant Legal Counsel, Office of Legal Counsel, Equal Employment Opportunity Commission
    • Montserrat Miller, Partner, Arnall Golden Gregory LLP,
    • Christopher Olsen, Assistant Director, Division of Privacy and Identity Protection, FTC
    • C. Lee Peeler, President and CEO of the Advertising Self-Regulatory Council and, Executive Vice President, National Advertising Self-Regulation, Council of Better Business Bureaus
    • Stuart Pratt, President and CEO, Consumer Data Industry Association
    • Edith Ramirez, Chairwoman, Federal Trade Commission
    • Jessica Rich, Director, Bureau of Consumer Protection, Federal Trade Commission
    • David Robinson, Principal, Robinson + Yu
    • Michael Spadea, Director, Promontory Financial Group
    • Latanya Sweeney, Chief Technologist, Federal Trade Commission
    • Peter Swire, Professor of Law and Ethics, Scheller College of Business, Georgia Institute of Technology
    • Nicol Turner-Lee, Vice President and Chief Research & Policy Officer, Minority Media and Telecommunications Council
    • Joseph Turow, Professor, Annenberg School for Communication, University of Pennsylvania
    • Christopher Wolf, Senior Partner, Hogan Lovells, Founder and Chair, Future of Privacy Forum, Chair, National Civil Rights Committee, Anti-Defamation League
    • Katherine Worthman, Senior Attorney, Division of Financial Practices, FTC
    • Jinyan Zang, Research Fellow in Technology and Data Governance, Federal Trade Commission

Big Data, a Tool for Inclusion or Exclusion?; Edith Ramirez (FTC), Solon Barocas (Princeton); Workshop Slides; 36 slides.

  • A tutorial on “data mining,” i.e. what is it?
  • Claims:
    • Data mining is always & by definition a form of discrimination, by conferring upon individuals the traits of those similar to them [it is rational, statistically-based stereotyping] (slide 9)
    • Data mining can be wrong; can be skewed, can overcount, can undercount, can mis-label, can mis-classify; there be dragons here. (middle)
    • Data mining unintentionally exacerbates existing inequality; there is no ready answer (slide 25)

Latanya Sweeney, Jinyan Zang (FTC); Digging into the Data; presentation; 30 slides.

  • Subtitles (huge subtitles)
    • If the appropriateness of an advertisement for a publication depends on the nature and character of the publication, then how “appropriate” might big data analytics decisions be when placing ads?
  • Contributors
    • Krysta Dummit, undergraduate, Princeton 2015.
    • Jim Graves, graduate student, Carnegie Mellon University (CMU)
    • Paul Lisker,  undergraduate, Harvard University 2016.
    • Jinyan Zang, Oliver Wyman (a consulting boutique), Harvard University 2013.
  • Mentions
  • Promise:
    • A forthcoming paper: contact Latanya Sweeney for a copy upon release

Response

Referenced

Actualities


Via: backfill

Data Doppelgängers and the Uncanny Valley of Personalization | Sara M. Watson

Sara M. Watson; Data Doppelgängers and the Uncanny Valley of Personalization; In The Atlantic; 2014-06-16.
Teaser: Why customized ads are so creepy, even when they miss their target

Mentioned

  • Masahiro Mori
  • Acxiom
  • Facebook

<quote>Personalization appeals to a Western, egocentric belief in individualism. Yet it is based on the generalizing statistical distributions and normalized curves methods used to classify and categorize large populations. Personalization purports to be uniquely meaningful, yet it alienates us in its mass application. Data tracking and personalized advertising is often described as “creepy.” Personalized ads and experiences are supposed to reflect individuals, so when these systems miss their mark, they can interfere with a person’s sense of self. It’s hard to tell whether the algorithm doesn’t know us at all, or if it actually knows us better than we know ourselves. And it’s disconcerting to think that there might be a glimmer of truth in what otherwise seems unfamiliar. This goes beyond creepy, and even beyond the sense of being watched. </quote>

Referenced

Previously

Actualities


Attributed To: jimi Wales Wiki

Via: backfill

Boston Consulting Group opines on Big Data

In Perspectives of Boston Consulting Group

Seven Ways to Profit from Big Data as a Business
James Platt, Robert Souza, Enrique Checa, Ravi Chabaldas
2014-03-05

tl;dr => mix and match the models; introspection exercise: enough data, enough infrastructure, find a customer, find a partner, foster trust.

Business models (a listicle of 7):

  1. Build to Order
  2. Service Bundle
  3. Plug & Play
  4. Pay per Use
  5. Commission
  6. Value Exchange
  7. Subscription

Data Privacy by the Numbers; 10 slides
John Rose, Christine Barton, Robert Souza, James Platt
2014-02-19

tl;dr => From BCG Global Consumer Sentiment Survey 2013; lots of “how do you feel” questions to streetfolk, N=10,000 worldwide in consulting-capable countries.

The Trust Advantage: How to Win with Big Data
John Rose, Christine Barton, Robert Souza, James Platt
2013-11-06

tl;dr => very short. The trust advantage is: <quote>if [clients] can generate meaningful insights from [consumer CRM-type "personal"] information and execute an effective big-data strategy, the resulting torrent of newly available data could shift market shares and accelerate innovation.</quote>

The Age of Digital Ecosystems: Thriving in a World of Big Data
Tamim Saleh, Jon Brock, Nadjia Yousif, Andrew Luers
2013-07-23

tl;dr => words like: digital native, ecosystem, loyalty, platform, stack, standardization; ecosystem-platform-endorsed (fancy for “vendor lockin”); exemplar products, Apple, Google, Nest.

How to Get Started with Big Data
Robert Souza, Rob Trollinger, Cornelius Kaestner, David Potere, Jan Jamrich
2013-05-29

tl;dr => definition of Big Data (it’s “big” and “made of data”; and totally new); metaphors: volume, variety, velocity; [big] data applications: develop strategy, analytical approaches, new revenue streams; nostrums: right team, right tools, learn.

Big Data: The Next Big Thing for Insurers?
Eric Brat, Stephan Heydorn, Matthew Stover, Martin Ziegler
2013-03-25

tl;dr => [Betteridge's Law]; uses: fraud detection, claims mitigation & prevention, cross selling, pricing (risk assessment); monitoring & telematics; nostrums: (a listicle of 4, um 5) right environment, organizational redesign, learn, leverage data, [plan of] action.

Better Bundling in Technology, Media, and Telecom Markets: Four Simple Rules
Jean-Manuel Izaret, John Pineda
2013-03-12

tl;dr => a listicle of 4; bundling; sector: Technology, Media, Telecommunications (TMT); exemplars: Adobe Creative Suite; Verizon Triple Play; Hewlett-Packard VMware & Microsoft; Symantec Security; nostrum: use high-margin resold products to increase the imputed margin of the proprietary core.

Unleashing the Value of Consumer Data
David Dean, Carl Kalapesi, John Rose
2013-01-02

tl;dr => data is an asset class [c.f. WEF]; data towards efficiency; something about popups & cookies, EU Cookie Law, other regulation; regulatory expertise is intoned

The Value of Our Digital Identity, pdf, 65 pages.
John Rose, Olaf Rehse, Björn Röber
2012-11-20

tl;dr => definition of digital identity, data about persons, threats to privacy, value is created by threats to privacy, Web 2.0, Internet of Things (IoT); the privacy cluster concepts

The Trends (a listicle of 6)

  1. [business process] automation,
  2. self service [user enablement],
  3. personalization,
  4. [just-in-time] delivery,
  5. [consumer product] research,
  6. secondary monetization [the “data exhaust” concept]

Business Model Innovation: When the Game Gets Tough, Change the Game, pdf, 9 pages.
Zhenya Lindgardt, Martin Reeves, George Stalk, Mike Deimler
2009-12-14.

tl;dr => [everyone] needs Business Model Innovation (BMI); defined as a change in business model; exemplar: Apple 1990; value chain, cost model, operating model; listicle of dangers: portfolio bloat, failure to scale, pet ideas, isolated efforts, fixation on ideation, internal focus, historical bias

Data to Die For
Simon Kennedy, Dave Matheson
2007-10-01

tl;dr => data can have value; data can have problems; nostrums: leverage data, invent new techniques, develop closed proprietary information, get others to share with “leadership” (e.g. General Motors, Walmart’s VRM), simplify business processes; get strategy [BCG can help].

Big Privacy: Bridging Big Data and the Personal Data Ecosystem through Privacy by Design | Cavoukian, Reed

Ann Cavoukian, Drummond Reed; Big Privacy: Bridging Big Data and the Personal Data Ecosystem through Privacy by Design; 2013-12; 37 pages.

Promotions

Via: backfill

Mentioned

Terms

  • Big Privacy, contra Big Data.
  • Cloud Service Provider (CSP), contra Internet Service Provider (ISP)

Elements of Big Privacy

  1. Personal Clouds
  2. Semantic Data Interchange
  3. Trust Frameworks
  4. Identity and Data Portability
  5. Data-By-Reference (or Subscription)
  6. Accountable Pseudonyms
  7. Contractual Data Anonymization

Principles of Privacy by Design

  1. Proactive not Reactive; Preventative not Remedial
  2. Privacy as the Default Setting
  3. Privacy Embedded into Design
  4. Full Functionality – Positive-Sum, not Zero-Sum
  5. End-to-End Security – Full Lifecycle Protection
  6. Visibility and Transparency – Keep it Open
  7. Respect for User Privacy – Keep it User-Centric

Big Data Life Cycle

  • Data Harvesting
  • Data Mining
    • correlations
  • Application
    • discovery (to find)
    • prediction
    • value (predict value)
    • recommend

Miscellaneous

  • Foster Provost, Tom Fawcett; Data Science for Business: What you need to know about data mining and data-analytic thinking; O’Reilly Media; 2013-07-27; 414 pages; kindle: $10, paper: $22.
  • information
    • control
    • specificity
    • self-determination
    • consent
    • secrecy
    • purpose specificity
    • limitation on use
  • personal cloud
    • Gartner said, in 2012, so it must be true.
  • Personal Data
    • Suffix
      • Store
      • Locker
      • Vault
    • (Closed) Product Lines
      • Dropbox,
      • Google Drive,
      • Apple’s iCloud.
    • (Open) Source
      • OwnCloud,
      • remoteStorage,
      • Cloud OS,
      • XDI2.
  • Smart Data => is Digital Rights Management (DRM)
    • Magic Pixie Dust
    • Data that “thinks for itself”
    • cloak of intelligence
    • <quote>The goal of the XDI Technical Committee is not just a semantic data format, but a semantic data protocol that enables machines to literally “talk” to each other in a common language</quote>
    • <quote>it can use semantic statements to describe the rights and permissions that apply to a specific set of data in a specific context.</quote>
  • Zooko Wilcox-O’Hearn; Zooko’s Triangle
    A folk theorem:  (digital) identifiers at a distance can be any of Memorable, Secure, Global; but not all (pick at most two).
  • The term “informational self-determination” was first used in a German constitutional ruling concerning personal information collected during Germany’s 1983 census.
  • Respect Network

Table of Contents

  1. Introduction
  2. Big Data, Privacy Challenges, and the Need to Restore Trust
  3. A Definition of Big Privacy
  4. The Seven Architectural Elements of Big Privacy
  5. Exemplar: Respect Network™ and the OASIS XDI Protocol
  6. How Big Privacy Applies the 7 Foundational Principles of Privacy by Design
  7. Conclusion

Referenced

Digital Market Manipulation | Calo

M. Ryan Calo, Digital Market Manipulation; University of Washington School of Law Research Paper No. 2013-27; 2013-08-15; 53 pages.

Abstract

Jon Hanson and Douglas Kysar coined the term “market manipulation” in 1999 to describe how companies exploit the cognitive limitations of consumers. Everything costs $9.99 because consumers see the price as closer to $9 than $10. Although widely cited by academics, the concept of market manipulation has had only a modest impact on consumer protection law.

This Article demonstrates that the concept of market manipulation is descriptively and theoretically incomplete, and updates the framework for the realities of a marketplace that is mediated by technology. Today’s firms fastidiously study consumers and, increasingly, personalize every aspect of their experience. They can also reach consumers anytime and anywhere, rather than waiting for the consumer to approach the marketplace. These and related trends mean that firms can not only take advantage of a general understanding of cognitive limitations, but can uncover and even trigger consumer frailty at an individual level.

A new theory of digital market manipulation reveals the limits of consumer protection law and exposes concrete economic and privacy harms that regulators will be hard-pressed to ignore. This Article thus both meaningfully advances the behavioral law and economics literature and harnesses that literature to explore and address an impending sea change in the way firms use data to persuade.

Argument

  • Assertions
    1. The digitization of commerce dramatically alters the capacity of firms to influence consumers at a personal level.
    2. Behavioral economics furnishes the best framework by which to understand and evaluate this emerging challenge; qualified as “once BE integrates the full relevance of the digital revolution”
  • Forward Claims
    • <quote>The consumer of the future is a mediated consumer—she approaches the marketplace through technology designed by someone else.</quote>
    • <quote>This permits firms to surface the specific ways each individual consumer deviates from rational decision-making, however idiosyncratic, and leverage that bias to the firm’s advantage.</quote>
    • <quote>Firms do not have to wait for consumers to enter the marketplace. Rather, constant screen time and more and more networked or “smart” devices mean that consumers can be approached anytime, anywhere.</quote>
  • Consequences of Mediation
    1. Technology captures and retains intelligence on the consumer’s interaction with the firm.
    2. Firms can and do design every aspect of the interaction with the consumer.
    3. Firms can choose when to approach consumers, rather than wait until the consumer has decided to enter a market context.
  • The Argument
    • There is nothing new here
      • In every age, in every generation
      • Someone opines about the odious behavior of the grubby trades.
    • Or not; there is something new here, actually
    • Claimed:
      • An exceptionalism argument
      • digital market manipulation is different (exceptional)
      • The combination of three elements
        1. personalization
        2. systemization
        3. mediation
      • The bright line test is: systemization of the personal coupled with divergent interests.
      • QED
  • Therefore
    • Someone might get hurt (“there might be harms”).
    • A precautionary principle, a limiting principle; these justify intervention.
    • Expansively: <quote>What, exactly, is the harm of serving an ad to a consumer that is based on her face or that plays to her biases? The skeptic may see none. [The case is made] that digital market manipulation, [as defined], has the potential to generate economic and privacy harms, and to damage consumer autonomy in a very specific way.</quote>
  • The Harms
    • Without the fancy-speak: <quote>Digital market manipulation presents an easy case: firms purposefully leverage information about consumers to their disadvantage in a way that is designed not to be detectable to them.</quote>
    • Ryan Calo, The Boundaries of Privacy Harm; In Indiana Law Journal; Volume 86, No. 3; 2011; available 2010-07-16; 31 pages.
      Mentions:

      • either
        • unwanted observation
        • unwanted mental states.
      • “limiting principle”
      • “rule of recognition”
    • Market failure, writ large
      • Externalities generated, writ large
      • Regressive distribution effects, a type of market failure
    • Costs, Burdens
      • Costs-to-avoid by consumers.
      • Differential pricing against unwary customers
        e.g. imputed or estimated ability to pay or perceived willingness to pay as indicated by purchase behavior or visit frequency.
      • Inefficiencies, a type of market failure
    • Privacy [harms to it]
      • Made manifest as differential pricing.
      • Loss of control (whatever that means).
      • Information sharing, between firms.
    • Autonomy [harms to it]
      • Vulnerability
      • Something gauzy-vague about
        • the encroachment upon play or playfulness.
        • the act of being watched changes the behavior of the subject (who is an object, being watched).
      • Threat Model: <quote>[The] concern is that hyper-rational actors armed with the ability to design most elements of the transaction will approach the consumer of the future at the precise time and in the exact way that tends to guarantee a moment of (profitable) irrationality.</quote>
  • Concessions, commitments & stipulations
    • Around the Harms, generally
      • No harm exists if the subject has no knowledge of it or concept of it as “harm” as such; the “hidden Peeping Tom” principle.
      • All this could inure to the benefit of the consumer, maybe.
    • Around the harm to Autonomy, specifically
      • Autonomy is defined as the absence of vulnerability; thus adding vulnerability necessarily decreases autonomy.
      • The Law (consumer protection) has an interest in vulnerability and autonomy
        • On a forward-looking, hypothetical & precautionary basis; i.e. with X in hand, it might become possible to Y.
        • On a backward-looking, as-is or as-was basis; i.e. damage Y was done, is being done.
    • Around the notional degree or kind of the behaviors:
    • <quote>We are not talking about outright fraud here―in the sense of a material misrepresentation of fact―but only a tendency to mislead.</quote>
    • <quote>A consumer who receives an ad highlighting the limited supply of the product will not usually understand that the next person, who has not been associated with a fear of scarcity, sees a different pitch based on her biases. Such a practice does not just tend to mislead; misleading is the entire point.</quote>
  • The Free Speech Trump Card
    • No.
    • Classes
      • Political Speech
      • Commercial Speech
    • “The Press” must obey all other laws.
    • Mere data gathering is not “speech”, as such.
    • Data gathering for speech, is still not speech.

The Remedies

  1. Internal: Customer Subject Review Boards
    1. An institutional oversight organ; like an IHRB, an Ombudsman
    2. A body of ethical principles and practical guidelines (like The Belmont Report)
  2. External: Remove the conflict of interest
    1. Avoid the (targeted) advertising business model
    2. Avoid the (personalization) aspect of the user experience
    3. Fee-based, subscription-based services.

M. Ryan Calo; Code, Nudge, or Notice?; University of Washington School of Law Research Paper No. 2013-04; 2013-02-13; 29 pages.

At some point, though, one is just negotiating on price

  • <quote>For, say, ten dollars a month or five cents a visit, users could opt out of the entire marketing ecosystem.</quote>
  • $120/year to whom?
  • $120/year across what scope?
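
Worked out: at five cents a visit, the ten-dollar monthly fee breaks even at 200 visits per month (10 ÷ 0.05), roughly seven visits a day; below that, pay-per-visit is the cheaper opt-out.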

Follow On

Attributed to Vance Packard

  • When you are manipulating, where do you stop?
  • Who is to fix the point at which manipulative attempts become socially undesirable?

Mentions

  • Throat Clearing & Contextualization
  • Individuals & Institutions (mostly in order of appearance)
    • Jon Hanson
    • Douglas Kysar
    • Vance Packard, the works of
    • Eli Pariser
    • Joseph Turow
    • Dan Ariely
    • Christine Jolls,
    • Cass Sunstein
    • Richard Thaler
    • Lior Strahilevitz
    • Ariel Porat
    • Ian Ayres
    • George Geis
    • Scott Peppet
    • Amos Tversky
    • Daniel Kahneman
    • Herbert Simon
    • Cliff Nass
    • Chris Anderson
    • Alessandro Acquisti
    • Christopher Yoo
    • Maurits Kaptein; persuasion profiling
    • John Hauser
    • Glen Urban
    • John Calfee
    • Dean Eckles, Facebook; persuasion profiling
    • Oren Bar-Gill
    • Russell Korobkin
    • Andrew Odlyzko
    • Neil Richards
    • Tal Zarsky
    • Julie Cohen
    • Jane Yakowitz Bambauer
    • B. J. Fogg; captology, Persuasive Technology
    • Matthew Edwards
    • Peter Swire
    • Richard Craswell
    • Viktor Mayer-Schönberger
    • Kenneth Cukier
    • John Rawls
  • Theory Bodies
    • Behavioral Economics (BE)
    • Prospect Theory
    • Dual Process Theory
      • Fast Thinking (contra Slow Thinking)
    • Neuromarketing
  • Branded Concepts (a tour of the terms)
    • Libertarian Paternalism
    • Nudging
    • Debiasing
    • Predictable Irrationality
    • Disclosure Ratcheting
    • Bounded Rationality
    • The End of Theory
      • Correlation trumping causality (Chris Anderson)
    • Outlier
      • Outlier detection
      • Outlier modeling
    • Information Overload
    • Phenomena “wear out”
    • Personification & anthropomorphization of “Information” to justify irrationality
      • As Villain
      • As Hero
      • As Victim
    • Information [overload] management strategies
      • Visceral Notice
      • Feedback
      • Mapping
    • Anchoring
    • Framing
    • Biases, debiasing
      • A/B Testing
      • General Bias
      • Specific Bias
    • Incentivization
    • Targeting
      • Means-based targeting
      • Morphing
      • Persuasion profiling (motivation discovery & exploitation)
    • Consent, limits of consent
      • contract term can become “unconscionable”
      • enrichments can become “unjust”
      • influence can become “undue”
      • dealings can constitute “fair” dealing, or not
      • strategic behavior can constitute “bad faith”
      • interest rates can become “usurious” (usury)
      • higher prices can become “gouging” (price gouging)
    • Market Failure
      • inefficient markets
      • novel (new) sources of market failure (ah! I found another one!)
    • Behaviorally-informed [contract] drafting techniques.
    • The “hidden Peeping Tom” principle (puzzle, conundrum, stance)
      i.e. the woman doesn’t lose any virtue if she doesn’t know she was watched.
    • Caveat emptor
    • Mandatory Disclosure
    • “facilitation” (contra “friction”)
    • Informed Consent
    • “digital nudging”
    • “publicity principle”
  • Devices, Formulae, Practices
    • Learned Hand Formula (calculus of negligence); see the sketch after this list.
    • Belmont Report
      • principles of “beneficence” and “justice”
      • Beneficence is defined as the minimization of harm to the subject and society while maximizing benefit—a kind of ethical Learned Hand Formula.
      • Justice prohibits unfairness in distribution, defined as the undue imposition of a burden or withholding of a benefit.
    • Institutional Human Review Board (IHRB)
    • Internal “algorithmists”
    • Ombudsmen
  • Teleology & Goals
    • (Online advertising) is “ends-based”
    • Advertisers aspire to match the right ad to the right person [at the right time]
  • Argot (general trade-specific terms)
    • “atmospherics,” the layout and presentation of retail space.
    • “preference marketing,” against a consumer’s stated or volunteered preferences.
    • “behavioral marketing,” against a consumer’s observed preferences.
  • Epithets, Insults & Snidenesses
    • “Kafkaesque”, attributed to Daniel Solove, and concurred by Calo.
    • “the feel of a zoetrope, spinning static case law in a certain light to create the illusion of forward motion”; see page 39.
    • “Elephants” vs “mice” with 2x opinements from Peter Swire.
    • “digital divide”
  • Definition of market manipulation
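
The Learned Hand Formula flagged in the list above reduces to the inequality B < P × L: negligence attaches when the burden of precaution is less than the probability of harm times the gravity of the loss. A one-liner makes it concrete (Python; the numbers are invented for illustration):

<code>
# Learned Hand Formula (calculus of negligence): liable when burden B < probability P * loss L.
def negligent(B: float, P: float, L: float) -> bool:
    return B < P * L

# Invented numbers: a $1,000 precaution against a 1% chance of a $500,000 loss.
print(negligent(B=1_000, P=0.01, L=500_000))  # True → the precaution was owed
</code>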

References

… of note [it's a legal paper so the thing is largely footnotes].  No order.

Previously

All this rests upon the definition of Market Manipulation of Hanson & Kysar.

Taking Behavioralism Seriously, Part I

Jon Hanson, Douglas Kysar; Taking Behavioralism Seriously: The Problem of Market Manipulation; In New York University Law Review; Volume 74; 1999; page 632; Also Harvard Public Law Working Paper No. 08-54; 118 pages.

Abstract

For the past few decades, cognitive psychologists and behavioral researchers have been steadily uncovering evidence that human decisionmaking processes are prone to nonrational, yet systematic, tendencies. These researchers claim not merely that we sometimes fail to abide by rules of logic, but that we fail to do so in predictable ways.

With a few notable exceptions, implications of this research for legal institutions were slow in reaching the academic literature. Within the last few years, however, we have seen an outpouring of scholarship addressing the impact of behavioral research over a wide range of legal topics. Indeed, one might predict that the current behavioral movement eventually will have an influence on legal scholarship matched only by its predecessor, the law and economics movement. Ultimately, any legal concept that relies in some sense on a notion of reasonableness or that is premised on the existence of a reasonable or rational decisionmaker will need to be reassessed in light of the mounting evidence that humans are “a reasoning rather than a reasonable animal.”

This Article contributes to that reassessment by focusing on the problem of manipulability. Our central contention is that the presence of unyielding cognitive biases makes individual decisionmakers susceptible to manipulation by those able to influence the context in which decisions are made. More particularly, we believe that market outcomes frequently will be heavily influenced, if not determined, by the ability of one actor to control the format of information, the presentation of choices, and, in general, the setting within which market transactions occur. Once one accepts that individuals systematically behave in nonrational ways, it follows from an economic perspective that others will exploit those tendencies for gain.

That possibility of manipulation has a variety of implications for legal policy analysis that have heretofore gone unrecognized. This article highlights some of those implications and makes several predictions that are tested in other work.

Taking Behavioralism Seriously, Part II

Jon Hanson, Douglas Kysar; Taking Behavioralism Seriously: Some Evidence of Market Manipulation; In Harvard Law Review; Volume 112; 1999; page 1420; Also Harvard Public Law Working Paper No. 08-52; 149 pages.

Abstract

An important lesson of behavioralist research is that individuals’ perceptions and preferences are highly manipulable. This article presents empirical evidence of market manipulation, a previously unrecognized source of market failure. It surveys extensive qualitative and quantitative marketing research and consumer behavioral studies, reviews common practices in settings such as gas stations and supermarkets, and examines environmentally oriented and fear-based advertising. The article then focuses on the industry that has most depended upon market manipulation: the cigarette industry. Through decades of sophisticated marketing and public relations efforts, cigarette manufacturers have heightened consumer demand and lowered consumer risk perceptions. Such market manipulation may justify moving to enterprise liability, the regime advocated by the first generation of product liability scholars.

Alex (Sandy) Pentland

Alex (Sandy) Pentland Homepage — Honest Signals, Reality Mining, and Sensible Organizations.

The cited ones

Participation

Very high concept:

Alex (Sandy) Pentland; Reinventing Society in the Wake of Big Data; In The Edge; 2012-10-10.
Summary:

  • timesuck, it’s also a video 24:08
  • Outline
    • Changing the way we design systems
    • Philosophy: Adam Smith and Karl Marx were wrong
    • Creating a data-driven society
    • Who owns the data in the data-driven society
    • Organizations with hard information boundaries will dissolve (the state will wither away)
  • Statistically-Improbable Phrases: vast improvement, tectonic changes, eventually, out of a regulator’s pocket, service-oriented government, transparency and choice, became clear to me, leverage personal data, no longer works, we are going to be operating very much out of our old, familiar ballpark, the incumbents in the Internet are probably the major opposition (e.g. Google, Facebook), law of averages.

Separately