Dr Anders Sandberg
The Future of Humanity Institute, University of Oxford

Technology has greatly advanced the enhancement of human welfare, particularly for people with disabilities and in organ transplantation. The Future of Humanity Institute at the University of Oxford is a leading research centre that examines the risks and opportunities arising from technological change, weighs ethical dilemmas, and evaluates global priorities.

In an exclusive interview with LSW–LifeScienceWorld, Dr Anders Sandberg, Research Scientist at the Future of Humanity Institute, discusses the role of technological research in human welfare.

LSW: When it comes to devices implanted in the human body, we all know of the heart pacemaker and the cochlear implant, or bionic ear, which came into existence long ago. What other devices have proved effective and been successfully implanted?

The most successful implants are passive – titanium bones, artificial teeth, Teflon joints. They lack the glamour and drama of the functional implants that interact with the body, but they are widely used and very useful.

Among the active implants, pacemaker-like implants for deep brain stimulation are used against Parkinson’s disease, depression, chronic pain and some other conditions. Neural interfaces allow paralysed people to communicate with computers, and a few cases of brain-interface prosthetics have been demonstrated.

LSW: How successful are bionic devices with regard to safety?

Bionics is not risk-free: it requires surgery and carries risks of rejection and infection. It is no doubt going to remain in the realm of professional medicine for a long while. However, some implants, like RFID tracking chips and magnets for sensing magnetic fields, are simple enough to be implanted by amateurs.

LSW: Can technological advances go against nature’s law?

Natural law can denote the rules of how the world functions, in which case it is of course impossible to do anything against them – technology is part of nature just as much as trees and humans.

Natural law can also denote universal moral rules inherent in the world. Whether they actually exist is debated by ethicists; many modern philosophers are deeply sceptical about it. Many religious views hold opinions about such moral rules, but in a democratic society decisions cannot be based on what some but not other people believe: decisions must be based on principles that everybody can agree on.

It is common for lay people to say that if something is natural it must be good or acceptable, but this is not true: illness, violence, and ignorance are all natural but not good. I think we should carefully consider how technology can affect us and our environment, but we should not think that just because something is new it is against nature, nor should we think the old ways are always good. Many things have changed for the better: it is a good thing that we live longer and healthier lives thanks to technology, or that we have changed our values to become more tolerant and less prejudiced.

LSW: FHI – Your institute's research covers Human Enhancement and Future Technologies. Can you briefly mention some of the activities in both these areas?

We study which technologies are likely to change what it means to be human. An obvious area is technologies that could make us more intelligent, let us live far longer, change our memories or emotions, or allow us to communicate with machines. We are interested in mapping their progress and analyzing their likely moral and social implications. We are also examining brain emulation, a possible future technology for copying the human brain into machines, which, if it ever happens, would change the world radically. Similarly, we are working hard on understanding the threats and possibilities of entirely artificial intelligence.

Our second big area is global catastrophic risks and existential risk – threats to the survival of humanity as a whole. We try to understand not just individual existing risks like pandemics or nuclear war, but also possible future risks such as artificial diseases or dangerous super intelligence. We study their likelihood, how they interact, and the moral imperative to act against them.

Our final area of research is applied rationality: how can we think well about things in the future – especially when we know we do not know everything – or well in the present, when we know our minds are limited and biased? We study how to think about huge, uncertain risks or how to make smarter decisions.

LSW: What is FHI's objective with regard to human welfare?

Humans are bad at thinking about low-probability, high-impact risks. So we tend to underestimate the most extreme risks, the ones we should actually care most about. Conversely we suffer from status quo bias: we think the present is normal, unchanging and the best, and hence do not recognize the great value of many new technologies.

LSW: Finally, what are the challenges for technology when it comes to defying nature?

