Mark Weiser, Principal Scientist, Xerox PARC, April 1995
I am honored to offer a few words of commentary on previous "Last Link" columns by David Porush and Stephen Doheny-Farina. First let me explain a little bit about Ubiquitous Computing, or Ubicomp, a technology I helped develop and one about which they expressed concern in their columns.
I weigh in as an engineer, someone whose primary interest is "what should I build next?" Ubicomp is an unusual project for an engineer, for two reasons. First, I took inspiration from anthropology; and second, I knew that whatever we did would be wrong.
The anthropological critique was common in some quarters of Xerox PARC when I arrived in 1987. It went approximately like this: the most profound technologies are those that become embedded in people's lives; current computers force people to separate their machine life from the rest of their lives, so computers in their current form would never become a very significant or profound technology. Now where Doheny-Farina might have breathed a sigh of relief, I took this as a challenge. Could I design a radically new kind of computer that could more deeply participate in the world of people?
This led me quickly to the second conclusion, that I would get it wrong. As I began to glimpse what such an information appliance might look like, I saw that it would be so different from today's computer that I could not begin to understand or build it. So I set out, instead, to build some things that my colleagues and I could put to use, things as different as we could imagine from today's computers, yet using technology that could be made solid today. Using these things would then change us. From that new perspective, I would then again try to glimpse our new kind of computer and try again. Thus PARC launched projects to build inch-, foot-, and yard-sized computers--the Tabs, Pads, and Boards of Ubiquitous Computing, phase 1.
Interestingly, the active badges that track location, that have Doheny-Farina so concerned, were not invented at PARC at all, but by Olivetti and Digital Equipment Corporation. However, we gladly used them as a way to help our machines know more about us, so they could better serve us. And thus we found ourselves in the middle of a controversy.
Before long, comments in the press and from our colleagues indicated that we were doing something dangerous: a technology that could be used for the tracking and control of people. What if Nazi Germany had had this? But we also believed that we were creating a technology with significant potential for good. The excesses of those proposing that we all don goggles to live in Virtual Reality, on the one hand, and those proposing that cute little computer personalities would be at our beck and call (e.g. Microsoft), on the other, seemed to push our program forward like a melon seed squeezed between two absurd fingers. What to do?
One possibility was to stop all such work immediately. Since we could see the potential for evil applications, why do it? And some individuals in the lab took this road. Others, including myself, continued the debate. Could we at PARC bring some special value to the work that others might not? Well, at PARC we have philosophers, social scientists, and anthropologists to offset the engineers; perhaps we could proceed with the work while maintaining a dialogue about its uses. This would at least be an improvement over the naive optimism of a pure engineering lab (for instance, Olivetti Research is comfortable broadcasting the locations of all its employees to anyone on the Internet).
Gradually, we evolved a two-part guide to action for Ubiquitous Computing research. These two parts might be useful to others working on technologies that evoke the suspicion of people like Doheny-Farina.
1. Build it as safely as you can, and build into it all the safeguards for personal values that you can imagine.
2. Tell the world at large that you are doing something dangerous.
Principle 1 ensures that, as an engineer, you have demonstrated to all concerned that it is possible to construct your system with appropriate safeguards. Unfortunately, it is rarely possible to build them in such a way that they cannot be removed. And so, as Doheny-Farina clearly questions, what prevents "organizations less enlightened" than PARC from removing those safeguards?
Principle 2 comes to the rescue then by providing a basis for informed discussion and action by anyone. Most engineers will defend as strongly as possible the value of their work and leave it to others to find fault. But that is not enough if one is doing something that one knows has possibly dangerous consequences. The responsible engineer in this case must proactively begin debate about how the technology should be used. For one thing, he or she may learn of further safeguards to apply under Principle 1. And for another, informed people are less likely to let safeguards be removed.
In practice, as we learned, Principle 2 sometimes becomes Principle 2-A: Cooperate with overblown and distorted media stories about your work. I have heard NBC Nightly News describe Xerox PARC as being at the forefront of "big brother technology," as though the world were sprinkled with surveillance laboratories, but we happened to be the best. And so that leads to Principle 2-B: Better that people are too scared about what you are doing than they not find out at all until it's too late. As we write (and say) in the computer biz: "sigh."
Principle 2 is far from a guarantee that evil will not be done. But I know of no way to provide such a guarantee for any technology. Refusing to work on such technology is the approach of the ostrich. However, I am an optimist. I think that people will eventually figure out how to use technology for their benefit, including, if necessary, passing laws or establishing social conventions to avoid its worst dangers. It should be every engineer's role to provide as much information as possible in the debate leading to these new laws and conventions.
Let me take a moment now to respond to some of the specific commentary of Doheny-Farina. Interestingly, he appears to be extremely optimistic about both technology and human nature. For instance, Doheny-Farina says at one point ". . . as I read on I expected to be assured that any Orwellian nightmares would be unjustified because of the way the technology is being designed." In other words, he expected technology to dissolve his nightmare. Now that is putting power into the hands of the engineers! But it is power we should not have, and fortunately do not have.
Doheny-Farina's expectation amounts in essence to the expectation that technology could contain inherent values. One culture's Orwellian nightmare might be another's answered prayers; a technology which could -- on the basis of technology alone -- prevent the one would also prevent the other. I don't believe there can ever be such a technology: a law of physics for morality. Yes, safeguards can be built into any system, such as the checks and balances in a good accounting system. But what keeps them in place is not the technology, but people's commitment to keeping them.
We cannot expect technology alone to solve ethical dilemmas. Technology is a tool made by people to meet people's needs. Like all tools, it can be used in ways undreamed of by the inventor. Like all tools, it will change the user in unexpected and profound ways.
Doheny-Farina makes much of my use of the word "dissent" to describe those in my lab who felt ubicomp might be wrong. I think dissent is a good word. It does not imply that the dissenter is wrong -- in fact, quite often in Western history the dissenter is proven right. In the creation and application of new technology there will always be disagreement about what to do.
As Porush says very well, technology cannot be turned on and off at will, nor escaped simply by choosing to ignore it. Doheny-Farina is quite the optimist if he thinks that it will ever be easy for people to do what is right instead of what is popular. It takes courage and intelligence to oppose technology when necessary. It is vital, I believe, to have a tradition of honorable dissent, of supporting those who cry "No!" when everyone else is swept along.
No set of principles can take the place of engaging in discussion -- pulling, pushing, and throwing one's weight into composing the life and culture we lead and will lead in the future. Engineers like myself are just a part of the dialogue, perhaps the smallest part (although one over-emphasized in the late-twentieth-century counter-revolution against the death of modernism). The main discussion is the one that happens day to day among all the individuals of our culture as they choose to go along or dissent.
Thank you everyone for letting me participate in this one.
I find I cannot end without addressing Porush's puzzling remark that "It is impossible for any mechanical system to be blind to any of its own states." What could this possibly mean? Most mechanical systems I know are blind, not just to their own states, but to the state of everything around them. Automobiles, light switches, and paper clips are all pretty darn blind to pretty much everything.
I think a truer statement is more like the opposite: every system must have state which is not modeled in the system. Otherwise there is infinite regress: the model itself must be modeled, and the model of the model, and then the model of the model of the model. The escape is for each model to be less detailed than what it models (as our knowledge of the world is always less than the world itself) -- but this means that there is some aspect of our own state to which we are blind.
It may not be the same state all the time, but it is always some state.
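To make the regress concrete, here is a toy sketch (in present-day Python, purely illustrative, not anything we built at PARC): a program that tries to hold a complete model of its own state. Because the model is itself part of that state, a "complete" model must contain a model of the model, and so on without end.

```python
# Toy illustration of the self-model regress (hypothetical example):
# a system that tries to build a complete model of its own state.
# Since the model is part of that state, completeness demands a
# model of the model, ad infinitum.

class System:
    def __init__(self):
        self.sensor_reading = 42   # ordinary state the system can model
        self.model = None          # the system's model of its own state

    def build_complete_self_model(self):
        # A truly complete model must include every piece of state,
        # including the model itself -- hence the infinite regress.
        return {
            "sensor_reading": self.sensor_reading,
            "model": self.build_complete_self_model(),
        }

s = System()
try:
    s.model = s.build_complete_self_model()
except RecursionError:
    # The regress never bottoms out: any finite model must omit something.
    print("Every model must be less detailed than what it models.")
```

The only escape, as in the sketch, is a model that leaves some state out -- which is exactly the blindness I am claiming.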
Mark Weiser was director of the Computer Science Lab (CSL) at Xerox PARC for six years until he fired himself in December 1994, at which time he hired Craig Mudge to replace him. Mark is now on "sabbatical from management," working on the next steps in Ubiquitous Computing. He plans to head CSL again in one or two years, assuming Craig hires him back.
Originally published in Computer-Mediated Communication Magazine / Volume 2, Number 4 / April 1, 1995 / Page 17
Copyright ©1995 by Mark Weiser. All Rights Reserved.