Tamara Denning, Yoky Matsuoka, Tadayoshi Kohno; Neurosecurity: security and privacy for neural devices; In Neurosurgical Focus; Volume 27; 2009-07; 4 pages.
An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients’ lives dramatically by offering improved—and in some cases entirely new—forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensures that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define “neurosecurity”—a version of computer science security principles and methods applied to neural engineering—and discuss why neurosecurity should be a critical consideration in the design of future neural devices.
Nita A. Farahany; Incriminating Thoughts; Stanford Law Review, Vol. 64, 351 (2012); Vanderbilt Public Law Research Paper No. 11-17; 2011-04; 59 pages; available at SSRN.
The neuroscience revolution poses profound challenges to current self-incrimination doctrine and exposes a deep conceptual confusion at the heart of the doctrine. In Schmerber v. California, the Court held that under the Self-Incrimination Clause of the Fifth Amendment, no person shall be compelled to “prove a charge [from] his own mouth,” but a person may be compelled to provide real or physical evidence. This testimonial/physical dichotomy has failed to achieve its intended simplifying purpose. For nearly fifty years scholars and practitioners have lamented its impracticability and its inconsistency with the underlying purpose of the privilege. This Article seeks to reframe the debate. It demonstrates through modern applications from neuroscience the need to redefine the taxonomy of evidence subject to the privilege against self-incrimination. Evidence can arise from the identifying characteristics inherent to individuals; it can arise automatically, without conscious processing; it can arise through memorialized photographs, papers, and memories; or it can arise through responses uttered silently or aloud. This spectrum — identifying, automatic, memorialized, and uttered — is more nuanced and more precise than the traditional testimonial/physical dichotomy, and gives descriptive power to the rationale underpinning the privilege against self-incrimination. Neurological evidence, like more traditional evidence, may be located on this spectrum, and thus doctrinal riddles of self-incrimination, both modern and ancient, may be solved.
M. Ryan Calo; Open Robotics; Maryland Law Review, Vol. 70, No. 3, 2011; 2011-11-09; 42 pages; available at SSRN.
With millions of home and service robots already on the market, and millions more on the way, robotics is poised to be the next transformative technology. As with personal computers, personal robots are more likely to thrive if they are sufficiently open to third-party contributions of software and hardware. No less than with telephony, cable, computing, and the Internet, an open robotics could foster innovation, spur consumer adoption, and create secondary markets.
But open robots also present the potential for inestimable legal liability, which may lead entrepreneurs and investors to abandon open robots in favor of products with more limited functionality. This possibility flows from a key difference between personal computers and robots. Like PCs, open robots have no set function, run third-party software, and invite modification. But unlike PCs, personal robots are in a position directly to cause physical damage and injury. Thus, norms against suit and expedients to limit liability such as the economic loss doctrine are unlikely to transfer from the PC and consumer software context to that of robotics.
This essay therefore recommends a selective immunity for manufacturers of open robotic platforms for what end users do with these platforms, akin to the immunity enjoyed under federal law by firearms manufacturers and websites. Selective immunity has the potential to preserve the conditions for innovation without compromising incentives for safety. The alternative is to risk being left behind in a key technology by countries with a higher bar to litigation and a serious head start.
Ivan Martinovic, Doug Davies, Mario Frank, Daniele Perito, Tomas Ros, Dawn Song; On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces; In Proceedings of the USENIX Security Symposium; 2012-08-08; landing page (with video).
Brain-computer interfaces (BCIs) are becoming increasingly popular in the gaming and entertainment industries. Consumer-grade BCI devices are available for a few hundred dollars and are used in a variety of applications, such as video games, hands-free keyboards, or as an assistant in relaxation training. There are application stores similar to the ones used for smartphones, where application developers have access to an API to collect data from the BCI devices. The security risks involved in using consumer-grade BCI devices have never been studied, and the impact of malicious software with access to the device is unexplored. The authors take a first step in studying the security implications of such devices and demonstrate that this upcoming technology could be turned against users to reveal their private and secret information. They use inexpensive electroencephalography (EEG) based BCI devices to test the feasibility of simple, yet effective, attacks. The captured EEG signal could reveal the user’s private information about, e.g., bank cards, PINs, area of residence, and familiarity with known persons. This is the first attempt to study the security implications of consumer-grade BCI devices. They show that the entropy of the private information is decreased on average by approximately 15%–40% compared to random-guessing attacks.
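The entropy-reduction metric in the abstract can be illustrated with a minimal sketch. The distributions below are hypothetical and not taken from the paper: the idea is to compare the Shannon entropy of an attacker's posterior over a candidate set (say, ten PIN digits) against the uniform baseline that models random guessing.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Random guessing over 10 candidate digits: uniform distribution,
# entropy log2(10) ~ 3.32 bits.
uniform = [0.1] * 10
baseline = shannon_entropy(uniform)

# After observing the EEG response, the attacker's posterior is skewed
# toward a few candidates (numbers are hypothetical, for illustration).
posterior = [0.40, 0.20, 0.10, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02]
informed = shannon_entropy(posterior)

# Percentage entropy reduction relative to the random-guessing baseline.
reduction = 100 * (baseline - informed) / baseline
print(f"baseline: {baseline:.2f} bits, "
      f"informed: {informed:.2f} bits, "
      f"reduction: {reduction:.1f}%")
```

With these made-up numbers the reduction lands in roughly the 15%–40% band the paper reports; the paper itself derives its posteriors from measured event-related potentials, not from assumed distributions.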