Syllabus for Solon Barocas @ Cornell | INFO 4270: Ethics and Policy in Data Science

INFO 4270 – Ethics and Policy in Data Science
Instructor: Solon Barocas
Venue: Cornell University

Syllabus

Solon Barocas

Readings

A Canon, The Canon

In order of appearance in the syllabus, without the course cadence markers…

  • Danah Boyd and Kate Crawford, Critical Questions for Big Data; In <paywalled>Information, Communication & Society, Volume 15, Issue 5 (A decade in Internet time: the dynamics of the Internet and society); 2012; DOI:10.1080/1369118X.2012.678878</paywalled>
    Subtitle: Provocations for a cultural, technological, and scholarly phenomenon
  • Tal Zarsky, The Trouble with Algorithmic Decisions; In Science, Technology & Human Values, Vol 41, Issue 1, 2016 (2015-10-14); ResearchGate.
    Subtitle: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making
  • Cathy O’Neil, Weapons of Math Destruction; Broadway Books; 2016-09-06; 290 pages, ASIN:B019B6VCLO: Kindle: $12, paper: $10+SHT.
  • Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press; 2016-08-29; 320 pages; ASIN:0674970845: Kindle: $10, paper: $13+SHT.
  • Executive Office of the President, President Barack Obama, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights; The White House Office of Science and Technology Policy (OSTP); 2016-05; 29 pages; archives.
  • Lisa Gitelman (editor), “Raw Data” is an Oxymoron; Series: Infrastructures; The MIT Press; 2013-01-25; 192 pages; ASIN:B00HCW7H0A: Kindle: $20, paper: $18+SHT.
    Lisa Gitelman, Virginia Jackson; Introduction (6 pages)
  • Agre, “Surveillance and Capture: Two Models of Privacy”
  • Bowker and Star, Sorting Things Out
  • Auerbach, “The Stupidity of Computers”
  • Moor, “What is Computer Ethics?”
  • Hand, “Deconstructing Statistical Questions”
  • O’Neil, On Being a Data Skeptic
  • Domingos, “A Few Useful Things to Know About Machine Learning”
  • Luca, Kleinberg, and Mullainathan, “Algorithms Need Managers, Too”
  • Friedman and Nissenbaum, “Bias in Computer Systems”
  • Lerman, “Big Data and Its Exclusions”
  • Hand, “Classifier Technology and the Illusion of Progress” [Sections 3 and 4]
  • Pager and Shepherd, “The Sociology of Discrimination: Racial Discrimination in Employment, Housing, Credit, and Consumer Markets”
  • Goodman, “Economic Models of (Algorithmic) Discrimination”
  • Hardt, “How Big Data Is Unfair”
  • Barocas and Selbst, “Big Data’s Disparate Impact” [Parts I and II]
  • Gandy, “It’s Discrimination, Stupid”
  • Dwork and Mulligan, “It’s Not Privacy, and It’s Not Fair”
  • Sandvig, Hamilton, Karahalios, and Langbort, “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms”
  • Diakopoulos, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”
  • Bertrand and Mullainathan, “Are Emily and Greg more Employable than Lakisha and Jamal?”
  • Sweeney, “Discrimination in Online Ad Delivery”
  • Datta, Tschantz, and Datta, “Automated Experiments on Ad Privacy Settings”
  • Dwork, Hardt, Pitassi, Reingold, and Zemel, “Fairness Through Awareness”
  • Feldman, Friedler, Moeller, Scheidegger, and Venkatasubramanian, “Certifying and Removing Disparate Impact”
  • Žliobaitė and Custers, “Using Sensitive Personal Data May Be Necessary for Avoiding Discrimination in Data-Driven Decision Models”
  • Angwin, Larson, Mattu, and Kirchner, “Machine Bias”
  • Kleinberg, Mullainathan, and Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores”
  • Northpointe, COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity
  • Chouldechova, “Fair Prediction with Disparate Impact”
  • Berk, Heidari, Jabbari, Kearns, and Roth, “Fairness in Criminal Justice Risk Assessments: The State of the Art”
  • Hardt, Price, and Srebro, “Equality of Opportunity in Supervised Learning”
  • Wattenberg, Viégas, and Hardt, “Attacking Discrimination with Smarter Machine Learning”
  • Friedler, Scheidegger, and Venkatasubramanian, “On the (Im)possibility of Fairness”
  • Tene and Polonetsky, “Taming the Golem: Challenges of Ethical Algorithmic Decision Making”
  • Lum and Isaac, “To Predict and Serve?”
  • Joseph, Kearns, Morgenstern, and Roth, “Fairness in Learning: Classic and Contextual Bandits”
  • Barocas, “Data Mining and the Discourse on Discrimination”
  • Grgić-Hlača, Zafar, Gummadi, and Weller, “The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making”
  • Vedder, “KDD: The Challenge to Individualism”
  • Lippert-Rasmussen, “‘We Are All Different’: Statistical Discrimination and the Right to Be Treated as an Individual”
  • Schauer, Profiles, Probabilities, And Stereotypes
  • Caliskan, Bryson, and Narayanan, “Semantics Derived Automatically from Language Corpora Contain Human-like Biases”
  • Zhao, Wang, Yatskar, Ordonez, and Chang, “Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints”
  • Bolukbasi, Chang, Zou, Saligrama, and Kalai, “Man Is to Computer Programmer as Woman Is to Homemaker?”
  • Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions”
  • Ananny and Crawford, “Seeing without Knowing”
  • de Vries, “Privacy, Due Process and the Computational Turn”
  • Zarsky, “Transparent Predictions”
  • Crawford and Schultz, “Big Data and Due Process”
  • Kroll, Huey, Barocas, Felten, Reidenberg, Robinson, and Yu, “Accountable Algorithms”
  • Bornstein, “Is Artificial Intelligence Permanently Inscrutable?”
  • Burrell, “How the Machine ‘Thinks’”
  • Lipton, “The Mythos of Model Interpretability”
  • Doshi-Velez and Kim, “Towards a Rigorous Science of Interpretable Machine Learning”
  • Hall, Phan, and Ambati, “Ideas on Interpreting Machine Learning”
  • Grimmelmann and Westreich, “Incomprehensible Discrimination”
  • Selbst and Barocas, “Regulating Inscrutable Systems”
  • Jones, “The Right to a Human in the Loop”
  • Edwards and Veale, “Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for”
  • Duhigg, “How Companies Learn Your Secrets”
  • Kosinski, Stillwell, and Graepel, “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior”
  • Barocas and Nissenbaum, “Big Data’s End Run around Procedural Privacy Protections”
  • Chen, Fraiberger, Moakler, and Provost, “Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals”
  • Robinson and Yu, Knowing the Score
  • Hurley and Adebayo, “Credit Scoring in the Era of Big Data”
  • Valentino-Devries, Singer-Vine, and Soltani, “Websites Vary Prices, Deals Based on Users’ Information”
  • The Council of Economic Advisers, Big Data and Differential Pricing
  • Hannak, Soeller, Lazer, Mislove, and Wilson, “Measuring Price Discrimination and Steering on E-commerce Web Sites”
  • Kochelek, “Data Mining and Antitrust”
  • Helveston, “Consumer Protection in the Age of Big Data”
  • Kolata, “New Gene Tests Pose a Threat to Insurers”
  • Swedloff, “Risk Classification’s Big Data (R)evolution”
  • Cooper, “Separation, Pooling, and Big Data”
  • Simon, “The Ideological Effects of Actuarial Practices”
  • Tufekci, “Engineering the Public”
  • Calo, “Digital Market Manipulation”
  • Kaptein and Eckles, “Selecting Effective Means to Any End”
  • Pariser, “Beware Online ‘Filter Bubbles’”
  • Gillespie, “The Relevance of Algorithms”
  • Buolamwini, “Algorithms Aren’t Racist. Your Skin Is just too Dark”
  • Hassein, “Against Black Inclusion in Facial Recognition”
  • Agüera y Arcas, Mitchell, and Todorov, “Physiognomy’s New Clothes”
  • Garvie, Bedoya, and Frankle, The Perpetual Line-Up
  • Wu and Zhang, “Automated Inference on Criminality using Face Images”
  • Haggerty, “Methodology as a Knife Fight”
    <snide>A metaphorical usage. Let hyperbole be your guide</snide>

Previously filled.

The Death of Rules and Standards | Casey, Niblett

Anthony J. Casey, Anthony Niblett; The Death of Rules and Standards; Coase-Sandor Working Paper Series in Law and Economics No. 738; Law School, University of Chicago; 2015; 58 pages; landing, copy, ssrn:2693826.

tl;dr → because reasons and
  • Prediction Technologies
  • Communication Technologies

Abstract

Scholars have examined the lawmakers’ choice between rules and standards for decades. This paper, however, explores the possibility of a new form of law that renders that choice unnecessary. Advances in technology (such as big data and artificial intelligence) will give rise to this new form – the micro-directive – which will provide the benefits of both rules and standards without the costs of either.

Lawmakers will be able to use predictive and communication technologies to enact complex legislative goals that are translated by machines into a vast catalog of simple commands for all possible scenarios. When an individual citizen faces a legal choice, the machine will select from the catalog and communicate to that individual the precise context-specific command (the micro-directive) necessary for compliance. In this way, law will be able to adapt to a wide array of situations and direct precise citizen behavior without further legislative or judicial action. A micro-directive, like a rule, provides a clear instruction to a citizen on how to comply with the law. But, like a standard, a micro-directive is tailored to and adapts to each and every context.

While predictive technologies such as big data have already introduced a trend toward personalized default rules, in this paper we suggest that this is only a small part of a larger trend toward context-specific laws that can adapt to any situation. As that trend continues, the fundamental cost trade-off between rules and standards will disappear, changing the way society structures and thinks about law.
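
The selection-and-communication mechanism the abstract describes (a legislated goal, a machine-built catalog of context-specific commands, and per-situation delivery to the regulated actor) can be made concrete with a toy sketch. The Python below is not from the paper; the context features, the harm function, and the numeric threshold standing in for the legislative goal are all invented purely for illustration.

  # Hypothetical sketch of the traffic-law example: the legislature states only a
  # goal (keep predicted harm below a budget); a predictive model maps each driving
  # context to one precise speed command, i.e. the "micro-directive" shown to the driver.
  from dataclasses import dataclass

  @dataclass
  class DrivingContext:
      visibility_m: float        # forward visibility in meters
      road_wet: bool             # precipitation or standing water
      pedestrian_density: float  # people per 100 m of roadway

  def expected_harm(speed_kmh: float, ctx: DrivingContext) -> float:
      """Toy stand-in for the predictive model: estimated harm at a given speed."""
      risk = speed_kmh / max(ctx.visibility_m, 1.0)
      if ctx.road_wet:
          risk *= 1.5
      risk *= 1.0 + ctx.pedestrian_density
      return risk

  def micro_directive(ctx: DrivingContext, candidate_speeds=range(20, 135, 5)) -> int:
      """Pick the highest speed whose predicted harm stays under a fixed budget.

      This plays the role of selecting one command from the 'vast catalog' and
      communicating it to a particular driver (e.g. on a dashboard display).
      """
      HARM_BUDGET = 0.6  # assumed numeric expression of the legislative goal
      allowed = [s for s in candidate_speeds if expected_harm(s, ctx) <= HARM_BUDGET]
      return max(allowed) if allowed else min(candidate_speeds)

  if __name__ == "__main__":
      clear_day = DrivingContext(visibility_m=300, road_wet=False, pedestrian_density=0.0)
      rainy_school_zone = DrivingContext(visibility_m=80, road_wet=True, pedestrian_density=2.0)
      print("Directive (clear highway):", micro_directive(clear_day), "km/h")
      print("Directive (rainy school zone):", micro_directive(rainy_school_zone), "km/h")

The point of the sketch is only the shape of the scheme: the rule/standard choice disappears because the command is computed per context rather than drafted in advance.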

Table of Contents

  1. Introduction
  2. The Emergence Of Micro-Directives And The Decline Of Rules And Standards
    1. Background: Rules and standards
    2. Technology will facilitate the emergence of micro-directives as a new form of law
    3. Demonstrative examples
      • Example 1: Predictive technology in medical diagnosis
      • Example 2: Communication technology in traffic laws
    4. The different channels leading to the death of rules and standards
      1. The production of micro-directives by non-legislative lawmakers
      2. An alternative path: Private use of technology by regulated actors
  3. Feasibility
    1. The feasibility of predictive technology
      1. The power of predictive technology
      2. Predictive technology will displace human discretion
    2. The feasibility of communication technology
  4. Implications And Consequences
    1. The death of judging? Institutional changes to the legal system
    2. The development and substance of policy objectives
    3. Changes to the practice of law
    4. The broader consequences of these technologies on individuals
      1. Privacy
      2. Autonomy
      3. Ethics
  5. Conclusion

Snide

  • heavy-handed use of the metaphor “the death of X” in lieu of the more mundane “cessation of use of the technique X.”
  • At least they didn’t use the metaphors of “sea change” or “tectonic shifts” from the respective fields of weather prediction or geology.
  • <GEE-WHIZZ!>As economist Professor William Nordhaus notes, the increase in computer power over the course of the twentieth century was “phenomenal,”</GEE-WHIZZ!>

Mentions

  • catalog of personalized laws.
    “special law for you.”
  • rules and standards
    contra
    captures and benefits
  • micro-benefits
  • predictive technology
  • uncertainty of law
  • <quote>The legislature merely states its goal. Machines then design the law as a vast catalog of context-specific rules to optimize that goal. From this catalog, a specific micro-directive is selected and communicated to a particular driver (perhaps on a dashboard display) as a precise speed for the specific conditions she faces.</quote>
  • positive versus normative (analysis)
  • (legislative) decision-making has
    • errors
    • costs
  • (subject) compliance has
    • cost
    • uncertainty
  • There are economies of scale in compliance
    • frequency of event
    • diversity of events
  • Conceptualize the frequency of the regulated event relative to the specificity of the regulation.
  • The Combination
    • Prediction Technologies
    • Communication Technologies
  • <quote>The wise draftsman . . . asks himself, how many of the details of this settlement ought to be postponed to another day, when the decisions can be more wisely and efficiently and perhaps more readily made?</quote>, attributed to Henry Hart, Albert Sacks.
  • Claim
    • Standards are flexible, broad but uncertain in adjudication;
      so service delivery is tailored
      therefore the salubrious effect obtains.
    • Rules are specific, narrow but certain in adjudication;
      so service delivery is pre-specified, constrained
      therefore mis-applications occur.
    • Technology (cited) removes the distinction between Rules and Standards
  • advance (tax) rulings
  • Private Letter Rulings, IRS
  • No Action Letter, SEC

Argot

  • one size fits all
  • bright-line rule
  • over-inclusive (contra under-inclusive)
  • optimal decision rule
  • reasonable care
  • Error Typology (in hypothesis testing)
    • Type I Error
    • Type II Error
  • health surveillance technologies
  • second-order regulation

References

Sure, it’s a legal-style paper so there are 191 footnotes sprinkled liberally throughout the piece.  Only selected references were developed.

Followup

  • <quote>Indeed, some suggest that Moore’s Law is akin to a self-fulfilling prophecy.</quote>
    Harro van Lente & Arie Rip, “Expectations in Technological Developments: an Example of Prospective Structures to be Filled in by Agency”; researchgate; In Getting New Technologies Together: Studies In Making Sociotechnical Order, 206 (Cornelis Disco & Barend van der Meulen, eds. 1998); Amazon:311015630X: paper: $210+SHT.
  • …and more…

Via: backfill.

Digital Market Manipulation | Calo

M. Ryan Calo, Digital Market Manipulation; University of Washington School of Law Research Paper No. 2013-27; 2013-08-15; 53 pages.

Abstract

Jon Hanson and Douglas Kysar coined the term “market manipulation” in 1999 to describe how companies exploit the cognitive limitations of consumers. Everything costs $9.99 because consumers see the price as closer to $9 than $10. Although widely cited by academics, the concept of market manipulation has had only a modest impact on consumer protection law.

This Article demonstrates that the concept of market manipulation is descriptively and theoretically incomplete, and updates the framework for the realities of a marketplace that is mediated by technology. Today’s firms fastidiously study consumers and, increasingly, personalize every aspect of their experience. They can also reach consumers anytime and anywhere, rather than waiting for the consumer to approach the marketplace. These and related trends mean that firms can not only take advantage of a general understanding of cognitive limitations, but can uncover and even trigger consumer frailty at an individual level.

A new theory of digital market manipulation reveals the limits of consumer protection law and exposes concrete economic and privacy harms that regulators will be hard-pressed to ignore. This Article thus both meaningfully advances the behavioral law and economics literature and harnesses that literature to explore and address an impending sea change in the way firms use data to persuade.

Argument

  • Assertions
    1. The digitization of commerce dramatically alters the capacity of firms to influence consumers at a personal level.
    2. Behavioral economics furnishes the best framework by which to understand and evaluate this emerging challenge; qualified as “once BE integrates the full relevance of the digital revolution”
  • Forward Claims
    • <quote>The consumer of the future is a mediated consumer—she approaches the marketplace through technology designed by someone else.</quote>
    • <quote>This permits firms to surface the specific ways each individual consumer deviates from rational decision-making, however idiosyncratic, and leverage that bias to the firm’s advantage.</quote>
    • <quote>Firms do not have to wait for consumers to enter the marketplace. Rather, constant screen time and more and more networked or “smart” devices mean that consumers can be approached anytime, anywhere.</quote>
  • Consequences of Mediation
    1. Technology captures and retains intelligence on the consumer’s interaction with the firm.
    2. Firms can and do design every aspect of the interaction with the consumer.
    3. Firms can choose when to approach consumers, rather than wait until the consumer has decided to enter a market context.
  • The Argument
    • There is nothing new here
      • In every age, in every generation
      • Someone opines about the odious behavior of the grubby trades.
    • Or not; there is something new here, actually
    • Claimed:
      • An exceptionalism argument
      • digital market manipulation is different (exceptional)
      • The combination of three elements
        1. personalization
        2. systemization
        3. mediation
      • The bright line test is: systemization of the personal coupled with divergent interests.
      • QED
  • Therefore
    • Someone might get hurt (“there might be harms”).
    • A precautionary principle serves as a limiting principle and justifies intervention.
    • Expansively: <quote>What, exactly, is the harm of serving an ad to a consumer that is based on her face or that plays to her biases? The skeptic may see none. [The case is made] that digital market manipulation, [as defined], has the potential to generate economic and privacy harms, and to damage consumer autonomy in a very specific way.</quote>
  • The Harms
    • Without the fancy-speak: <quote>Digital market manipulation presents an easy case: firms purposefully leverage information about consumers to their disadvantage in a way that is designed not to be detectable to them.</quote>
    • Ryan Calo, The Boundaries of Privacy Harm; In Indiana Law Journal; Volume 86, No. 3; 2011; available 2010-07-16; 31 pages.
      Mentions:

      • either
        • unwanted observation
        • unwanted mental states.
      • “limiting principle”
      • “rule of recognition”
    • Market failure, writ large
      • Externalities generated, writ large
      • Regressive distribution effects, a type of market failure
    • Costs, Burdens
      • Costs-to-avoid by consumers.
      • Differential pricing against unwary customers
        e.g. imputed or estimated ability to pay or perceived willingness to pay as indicated by purchase behavior or visit frequency.
      • Inefficiencies, a type of market failure
    • Privacy [harms to it]
      • Made manifest as differential pricing.
      • Loss of control (whatever that means).
      • Information sharing, between firms.
    • Autonomy [harms to it]
      • Vulnerability
      • Something gauzy-vague about
        • the encroachment upon play or playfulness.
        • the act of being watched changes the behavior of the subject (who is an object, being watched).
      • Threat Model: <quote>[The] concern is that hyper-rational actors armed with the ability to design most elements of the transaction will approach the consumer of the future at the precise time and in the exact way that tends to guarantee a moment of (profitable) irrationality.</quote>
  • Concessions, commitments & stipulations
    • Around the Harms, generally
      • No harm exists if the subject has no knowledge of it or concept of it as “harm” as such; the “hidden Peeping Tom” principle.
      • All this could inure to the benefit of the consumer, maybe.
    • Around the harm to Autonomy, specifically
      • Autonomy is defined as the absence of vulnerability; thus adding vulnerability necessarily decreases autonomy.
      • The Law (consumer protection) has an interest in vulnerability and autonomy
        • On a forward-looking, hypothetical & precautionary basis; i.e. with X in hand, it might become possible to Y.
        • On a backward-looking, as-is or as-was basis; i.e. damage Y was done, is being done.
    • Around the notional degree or kind of the behaviors:
      • <quote>We are not talking about outright fraud here―in the sense of a material misrepresentation of fact―but only a tendency to mislead.</quote>
      • <quote>A consumer who receives an ad highlighting the limited supply of the product will not usually understand that the next person, who has not been associated with a fear of scarcity, sees a different pitch based on her biases. Such a practice does not just tend to mislead; misleading is the entire point.</quote>
  • The Free Speech Trump Card
    • No.
    • Classes
      • Political Speech
      • Commercial Speech
    • “The Press” must obey all other laws.
    • Mere data gathering is not “speech”, as such.
    • Data gathering for speech, is still not speech.

The Remedies

  1. Internal: Consumer Subject Review Boards
    1. An institutional oversight organ; like an IHRB, an Ombudsman
    2. A body of ethical principles and practical guidelines (like The Belmont Report)
  2. External: Remove the conflict of interest
    1. Avoid the (targeted) advertising business model
    2. Avoid the (personalization) aspect of the user experience
    3. Fee-based, subscription-based services.

M. Ryan Calo; Code, Nudge, or Notice?; University of Washington School of Law Research Paper No. 2013-04; 2013-02-13; 29 pages.

At some point, though, one is just negotiating on price

  • <quote>For, say, ten dollars a month or five cents a visit, users could opt out of the entire marketing ecosystem.</quote>
  • $120/year to whom?
  • $120/year across what scope?

Follow On

Attributed to Vance Packard

  • When you are manipulating, where do you stop?
  • Who is to fix the point at which manipulative attempts become socially undesirable?

Mentions

  • Throat Clearing & Contextualization
  • Individuals & Institutions (mostly in order of appearance)
    • Jon Hanson
    • Douglas Kysar
    • Vance Packard, the works of
    • Eli Pariser
    • Joseph Turow
    • Dan Ariely
    • Christine Jolls
    • Cass Sunstein
    • Richard Thaler
    • Lior Strahilevitz
    • Ariel Porat
    • Ian Ayres
    • George Geis
    • Scott Peppet
    • Amos Tversky
    • Daniel Kahneman
    • Herbert Simon
    • Cliff Nass
    • Chris Anderson
    • Alessandro Acquisti
    • Christopher Yoo
    • Maurits Kaptein; persuasion profiling
    • John Hauser
    • Glen Urban
    • John Calfee
    • Dean Eckles, Facebook; persuasion profiling
    • Oren Bar-Gill
    • Russell Korobkin
    • Andrew Odlyzko
    • Neil Richards
    • Tal Zarsky
    • Julie Cohen
    • Jane Yakowitz Bambauer
    • B. J. Fogg; captology, Persuasive Technology
    • Matthew Edwards
    • Peter Swire
    • Richard Craswell
    • Viktor Mayer-Schönberger
    • Kenneth Cukier
    • John Rawls
  • Theory Bodies
    • Behavioral Economics (BE)
    • Prospect Theory
    • Dual Process Theory
      • Fast Thinking (contra Slow Thinking)
    • Neuromarketing
  • Branded Concepts (a tour of the terms)
    • Libertarian Paternalism
    • Nudging
    • Debiasing
    • Predictable Irrationality
    • Disclosure Ratcheting
    • Bounded Rationality
    • The End of Theory
      • Correlation trumping causality (Chris Anderson)
    • Outlier
      • Outlier detection
      • Outlier modeling
    • Information Overload
    • Phenomena “wear out”
    • Personification & anthropomorphization of “Information” to justify irrationality
      • As Villain
      • As Hero
      • As Victim
    • Information [overload] management strategies
      • Visceral Notice
      • Feedback
      • Mapping
    • Anchoring
    • Framing
    • Biases, debiasing
      • A/B Testing
      • General Bias
      • Specific Bias
    • Incentivization
    • Targeting
      • Means-based targeting
      • Morphing
      • Persuasion profiling (motivation discovery & exploitation)
    • Consent, limits of consent
      • contract terms can become “unconscionable”
      • enrichments can become “unjust”
      • influence can become “undue”
      • dealings can constitute “fair” dealing, or not
      • strategic behavior can constitute “bad faith”
      • interest rates can become “usurious” (usury)
      • higher prices can become “gouging”
    • Market Failure
      • inefficient markets
      • novel (new) sources of market failure (ah! I found another one!)
    • Behaviorally-informed [contract] drafting techniques.
    • The “hidden Peeping Tom” principle (puzzle, conundrum, stance)
      i.e. the woman doesn’t lose any virtue if she doesn’t know she was watched.
    • Caveat emptor
    • Mandatory Disclosure
    • “facilitation” (contra “friction”)
    • Informed Consent
    • “digital nudging”
    • “publicity principle”
  • Devices, Formulae, Practices
    • Learned Hand Formula (calculus of negligence); the standard form is sketched after this list.
    • Belmont Report
      • principles of “beneficence” and “justice”
      • Beneficence is defined as the minimization of harm to the subject and society while maximizing benefit—a kind of ethical Learned Hand Formula.
      • Justice prohibits unfairness in distribution, defined as the undue imposition of a burden or withholding of a benefit.
    • Institutional Human Review Board (IHRB)
    • Internal “algorithmists”
    • Ombudsmen
  • Teleology & Goals
    • (Online advertising) is “ends-based”
    • Advertisers aspire to match the right ad to the right person [at the right time]
  • Argot (general trade-specific terms)
    • “atmospherics,” the layout and presentation of retail space.
    • “preference marketing,” against a consumer’s stated or volunteered preferences.
    • “behavioral marketing,” against a consumer’s observed preferences.
  • Epithets, Insults & Snidenesses
    • “Kafkaesque”, attributed to Daniel Solove, with Calo concurring.
    • “the feel of a zoetrope, spinning static case law in a certain light to create the illusion of forward motion” see page 39.
    • “Elephants” vs “mice” with 2x opinements from Peter Swire.
    • “digital divide”
  • Definition of market manipulation
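
Since the notes above lean on it twice (the “calculus of negligence” entry and the Belmont “beneficence” analogy), here is the standard statement of the Learned Hand formula from United States v. Carroll Towing (2d Cir. 1947), as a minimal LaTeX sketch; the symbols are the conventional ones, not notation taken from Calo’s paper.

  % Learned Hand formula (calculus of negligence):
  %   B = burden (cost) of taking the precaution
  %   P = probability of the loss
  %   L = gravity (magnitude) of the loss
  % A party is negligent when the burden of precaution is less than the expected loss:
  \[
    B < P \cdot L
  \]

Read against the Belmont analogy in the notes, beneficence asks the same comparison: weigh the burden imposed on subjects against the expected harm avoided.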

References

… of note [it's a legal paper so the thing is largely footnotes].  No order.

Previously

All this rests upon the definition of Market Manipulation of Hanson & Kysar.

Taking Behavioralism Seriously, Part I

Jon Hanson, Douglas Kysar; Taking Behavioralism Seriously: The Problem of Market Manipulation; In New York University Law Review; Volume 74; 1999; page 632; Also Harvard Public Law Working Paper No. 08-54; 118 pages.

Abstract

For the past few decades, cognitive psychologists and behavioral researchers have been steadily uncovering evidence that human decisionmaking processes are prone to nonrational, yet systematic, tendencies. These researchers claim not merely that we sometimes fail to abide by rules of logic, but that we fail to do so in predictable ways.

With a few notable exceptions, implications of this research for legal institutions were slow in reaching the academic literature. Within the last few years, however, we have seen an outpouring of scholarship addressing the impact of behavioral research over a wide range of legal topics. Indeed, one might predict that the current behavioral movement eventually will have an influence on legal scholarship matched only by its predecessor, the law and economics movement. Ultimately, any legal concept that relies in some sense on a notion of reasonableness or that is premised on the existence of a reasonable or rational decisionmaker will need to be reassessed in light of the mounting evidence that humans are “a reasoning rather than a reasonable animal.”

This Article contributes to that reassessment by focusing on the problem of manipulability. Our central contention is that the presence of unyielding cognitive biases makes individual decisionmakers susceptible to manipulation by those able to influence the context in which decisions are made. More particularly, we believe that market outcomes frequently will be heavily influenced, if not determined, by the ability of one actor to control the format of information, the presentation of choices, and, in general, the setting within which market transactions occur. Once one accepts that individuals systematically behave in nonrational ways, it follows from an economic perspective that others will exploit those tendencies for gain.

That possibility of manipulation has a variety of implications for legal policy analysis that have heretofore gone unrecognized. This article highlights some of those implications and makes several predictions that are tested in other work.

Taking Behavioralism Seriously, Part II

Jon Hanson, Douglas Kysar; Taking Behavioralism Seriously: Some Evidence of Market Manipulation; In Harvard Law Review; Volume 112; 1999; page 1420; Also Harvard Public Law Working Paper No. 08-52; 149 pages.

Abstract

An important lesson of behavioralist research is that individuals’ perceptions and preferences are highly manipulable. This article presents empirical evidence of market manipulation, a previously unrecognized source of market failure. It surveys extensive qualitative and quantitative marketing research and consumer behavioral studies, reviews common practices in settings such as gas stations and supermarkets, and examines environmentally oriented and fear-based advertising. The article then focuses on the industry that has most depended upon market manipulation: the cigarette industry. Through decades of sophisticated marketing and public relations efforts, cigarette manufacturers have heightened consumer demand and lowered consumer risk perceptions. Such market manipulation may justify moving to enterprise liability, the regime advocated by the first generation of product liability scholars.

Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms | Crawford, Schultz

Kate Crawford, Jason Schultz; Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms; In Boston College Law Review; Vol. 55, No. 1, 2014; NYU School of Law, Public Law Research Paper No. 13-64; NYU Law and Economics Research Paper No. 13-36.

  • Kate Crawford, Microsoft Research; MIT Center for Civic Media; University of New South Wales (UNSW)
  • Jason Schultz, New York University School of Law

Abstract

The rise of “big data” analytics in the private sector poses new challenges for privacy advocates. Unlike previous computational models that exploit personally identifiable information (PII) directly, such as behavioral targeting, big data has exploded the definition of PII to make many more sources of data personally identifiable. By analyzing primarily metadata, such as a set of predictive or aggregated findings without displaying or distributing the originating data, big data approaches often operate outside of current privacy protections (Rubinstein 2013; Tene and Polonetsky 2012), effectively marginalizing regulatory schema. Big data presents substantial privacy concerns – risks of bias or discrimination based on the inappropriate generation of personal data – a risk we call “predictive privacy harm.” Predictive analysis and categorization can pose a genuine threat to individuals, especially when it is performed without their knowledge or consent. While not necessarily a harm that falls within the conventional “invasion of privacy” boundaries, such harms still center on an individual’s relationship with data about her. Big data approaches need not rely on having a person’s PII directly: a combination of techniques from social network analysis, interpreting online behaviors and predictive modeling can create a detailed, intimate picture with a high degree of accuracy. Furthermore, harms can still result when such techniques are done poorly, rendering an inaccurate picture that nonetheless is used to impact on a person’s life and livelihood.

In considering how to respond to evolving big data practices, we began by examining the existing rights that individuals have to see and review records pertaining to them in areas such as health and credit information. But it is clear that these existing systems are inadequate to meet current big data challenges. Fair Information Privacy Practices and other notice-and-choice regimes fail to protect against predictive privacy risks in part because individuals are rarely aware of how their individual data is being used to their detriment, what determinations are being made about them, and because at various points in big data processes, the relationship between predictive privacy harms and originating PII may be complicated by multiple technical processes and the involvement of third parties. Thus, past privacy regulations and rights are ill equipped to face current and future big data challenges.

We propose a new approach to mitigating predictive privacy harms – that of a right to procedural data due process. In the Anglo-American legal tradition, procedural due process prohibits the government from depriving an individual’s rights to life, liberty, or property without affording her access to certain basic procedural components of the adjudication process – including the rights to review and contest the evidence at issue, the right to appeal any adverse decision, the right to know the allegations presented and be heard on the issues they raise. Procedural due process also serves as an enforcer of separation of powers, prohibiting those who write laws from also adjudicating them.

While some current privacy regimes offer nominal due process-like mechanisms in relation to closely defined types of data, these rarely include all of the necessary components to guarantee fair outcomes and arguably do not apply to many kinds of big data systems (Terry 2012). A more rigorous framework is needed, particularly given the inherent analytical assumptions and methodological biases built into many big data systems (boyd and Crawford 2012). Building on previous thinking about due process for public administrative computer systems (Steinbock 2005; Citron 2010), we argue that individuals who are privately and often secretly “judged” by big data should have similar rights to those judged by the courts with respect to how their personal data has been used in such adjudications. Using procedural due process principles, we analogize a system of regulation that would provide such rights against private big data actors.

Concept

  • Chain of Reasoning
    • Data creates “judgements”
    • Judgements create “takings”
    • Takings require “due process”
  • Due Process is:
    • a Separation of Powers
    • a Systems Management (discipline)
    • fair and feasible
  • Elements of A Due Process “Hearing” (supporting citations):
    1. an unbiased tribunal;
    2. notice of the proposed action;
    3. the grounds asserted for it;
    4. an opportunity to present reasons why the proposed action should not be taken;
    5. the right to call witnesses;
    6. the right to know the evidence against one;
    7. the right to have the decision based only on the evidence presented;
    8. the right to counsel;
    9. the making of a record;
    10. a statement of reasons;
    11. public attendance;
    12. judicial review.
  • Values of the Due Process; it should preserve (supporting citations):
    1. accuracy;
    2. the appearance of fairness;
    3. equality of inputs into the process;
    4. predictability, transparency, and rationality;
    5. participation;
    6. revelation;
    7. privacy-dignity.

Mentions

Promotions

Via: backfill