Wednesday, 23 December 2015

Artificial Intelligence: what does the future hold?

By Sam Firminger 

(Contains ‘2001: A Space Odyssey’ spoilers)

“I’m sorry Dave, I’m afraid I can’t do that.”

This memorable line from Stanley Kubrick’s film 2001: A Space Odyssey comes from a super-fast sentient computer called HAL 9000 (Heuristically programmed ALgorithmic computer) on board a spaceship travelling to Jupiter. HAL was programmed to look after its crew and ensure the success of the mission. However, after discovering that the human crew intend to shut it down, it decides to silently kill one of the astronauts. When Dave attempts a rescue, HAL calmly tells him that it is unable to open the ship’s doors to let him dock, effectively trying to kill him too. A single, unwavering red eye stares down Dave while HAL shuts off the life support of the remaining crew members.

It’s partly thanks to films like this that the possibility of artificial intelligence turning against us has become solidified in the public consciousness. Kubrick’s film is not alone; numerous other high-profile works portray the same dystopian future: the famous Skynet system from the Terminator series, the Machines from the Matrix trilogy, and GLaDOS from the Portal games, to name a few.

However, the idea of a creation intended for good turning on its creator can be traced back much further than big Hollywood blockbusters. Literature is littered with stories of accidental monsters. In Christian tradition, Satan himself is an angel gone wrong. Frankenstein’s monster, from Mary Shelley’s 1818 novel (one of the first works of science fiction), is another classic example. The thought of malevolent, uncontrollable artificial intelligence is a terrifying and common one, but how likely is it?

Ethics of AI

Perhaps it would be wise to start with the question of how artificial intelligence should be programmed. One of the first tentative answers comes from the Three Laws of Robotics, written by the sci-fi author Isaac Asimov over seven decades ago, in 1942. These are the rules by which robots are built within his novels. They state that no robot shall harm a human being, directly or through inaction; that a robot must obey human orders unless they conflict with the first law; and finally that a robot should protect its own existence as long as doing so does not conflict with the previous two laws. Though from an outsider’s perspective these seem practical, even implementable, they were only ever intended as a literary tool for creating dynamic sci-fi stories, not as a framework designed by a scientist with knowledge of AI.
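
To make the ordering of the Laws concrete, here is a toy sketch in Python, invented for this article rather than taken from Asimov or any real robotics system: the Laws become a priority-ordered tuple of constraints, and the action names and fields are entirely hypothetical.

```python
# A toy sketch (invented for this article, not from Asimov or any real
# robotics system) of the Three Laws as an ordered list of constraints,
# where an earlier law always outranks a later one.

def law_violations(action):
    """Violations as a tuple, most important law first; lower is better."""
    return (action["harms_human"],      # First Law
            action["disobeys_order"],   # Second Law
            action["endangers_self"])   # Third Law

def choose(actions):
    """Pick the candidate action that best respects the Laws' priority order."""
    return min(actions, key=law_violations)

# A robot ordered to do something that endangers itself: obeying violates
# only the Third Law, refusing violates the Second, so it must obey.
obey = {"harms_human": False, "disobeys_order": False, "endangers_self": True}
refuse = {"harms_human": False, "disobeys_order": True, "endangers_self": False}
print(choose([obey, refuse]) is obey)  # True
```

Even this trivial encoding hints at the trouble: everything hinges on the program being able to decide what counts as “harm” in the first place.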

It’s worth noting that when the ethics of AI are discussed, it is the ethics of robots, usually humanoid robots, that people tend to think of immediately. Although there is huge investment in humanoid robotics, AI is far more prevalent in fields you might not expect, such as targeted Facebook ads and the data mining of metadata collected by governments. Ethical frameworks concerned purely with physical harm or death are useless in situations like these, and when it comes to other kinds of harm, such as breaches of privacy, it is difficult to decide what constitutes ‘harm’ and what does not.

AI of the future

The next generation of AI, however, lies in a process called machine learning. This is where an AI is able to look at a set of data, learn from it and use that knowledge to change its future actions without being explicitly programmed to do so: the AI teaches itself. This allows the program to become more sophisticated over time as it experiences more data. There are many examples of this already in the tech world, including speech recognition software, self-driving cars, data mining for personalised ads, personal assistants like Cortana (Microsoft) and Siri (Apple) and even Netflix movie recommendations. It’s on the rise too; deep learning, a branch of machine learning, has seen a huge surge in interest in recent years. Deep learning systems use many-layered artificial neural networks, loosely modelled on biological ones, to learn from vast numbers of examples and make accurate predictions in new situations.
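
The core idea can be shown in a few lines of plain Python. This is a minimal sketch with invented data: the program is never told the rule relating input to output; it estimates one from examples and then applies it to input it has never seen.

```python
# Toy supervised learning (data invented for illustration): fit y ≈ w*x
# from example pairs, then apply the learned rule to unseen input.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

# Closed-form least-squares estimate of the weight for a line through the origin.
w = sum(x * y for x, y in examples) / sum(x * x for x, _ in examples)

def predict(x):
    """The behaviour was learned from the data, not explicitly programmed."""
    return w * x

print(w)             # close to 2: the underlying pattern in the examples
print(predict(5.0))  # a prediction for an input the program has never seen
```

Real machine learning systems fit millions of such weights rather than one, but the principle is the same: more data input refines the learned behaviour.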

This boom in machine learning brings the debate back to an ethical framework for the development of AI. Should we be scared of AI being capable of teaching itself? It’s possible that, over generations, an AI program could improve far more efficiently than natural evolution would allow. It would not only improve from generation to generation; it could also specifically design what the next generation of program should look like, speeding up the rate of change. This is a process unconstrained by biological factors, and given enough computing power such a runaway process could eventually produce a program many times more intelligent than a human.
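
The arithmetic behind this worry is simple compounding. The numbers below are invented purely for illustration, but they show why a process that improves its own rate of improvement pulls away from one that merely gains a fixed amount each generation.

```python
# Invented numbers purely to illustrate compounding self-improvement:
# one process improves its own improver (multiplicative), the other
# gains a fixed amount per generation with no feedback (additive).
compounding = 1.0
linear = 1.0
for generation in range(100):
    compounding *= 1.10  # each generation builds a 10% better successor
    linear += 0.10       # steady improvement, no feedback loop

print(compounding > 1000 * linear)  # True: the runaway process dwarfs the steady one
```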

It’s for reasons like this that some of the greatest minds alive today have warned against machine learning AI. Professor Stephen Hawking has said: “The development of full artificial intelligence could spell the end of the human race.” Elon Musk, CEO of Tesla and SpaceX, has echoed similar sentiments: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” He has even said rogue AI is more dangerous than nuclear weapons. It’s clearly a big issue that demands action.

The ethics of future AI

With the rapid recent development of machine learning, it seems certain safeguards are essential and need to be put into place. Along with other tech leaders, Elon Musk has taken it upon himself to start this process. On 11 December 2015 the group pledged $1bn and launched a non-profit organisation called OpenAI, whose stated goal is to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They have also pledged to share their research with other AI firms and to open-source their work, to make sure the whole sector is on the same page.

It seems OpenAI is the first step towards a universal ethical code for future AI projects. By providing its research for all to use, it should help ensure that no one company becomes too powerful or dominates the market. In a blog post the founders said: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.” Their aim is to unite AI into a ‘common intelligence’. Up to now, companies have largely kept the data from their different projects to themselves, but OpenAI wants to encourage the sharing of that intelligence, spreading the huge value locked up in data sets across the sector. The emphasis on OpenAI being independent of financial return ensures that whoever gets to ‘prioritize the outcome’ of AI is not doing so for selfish financial gain.

Future AI and open-source data

The question one may then ask is: what incentive is there for companies to share any data at all? Google has recently announced the open-sourcing of its machine learning library TensorFlow, in the hope that knowledgeable outsiders can access its algorithms and improve them – an ‘I’ll scratch your back if you scratch mine’ situation. Google is not alone; Facebook and Microsoft have also announced the open-sourcing of AI hardware and software.

It seems the future of AI lies in cooperation and the continual open-sourcing of data from the likes of Google, Facebook, Microsoft and OpenAI. It’s in this way that dystopian futures of rogue AI can be avoided. Perhaps instead we’ll soon be seeing extremely advanced but sarcastic robots, like TARS from Interstellar, floating around the International Space Station.

Links to more information about the ideas discussed can be found below:

Ethics of Robotic Intelligence regarding Lethal Autonomous weapons

Sunday, 20 December 2015

What Came First, Sponges or the Comb Jellies?

By Sam Firminger

The well-known chicken-or-the-egg conundrum, first posed by classical philosophers such as Aristotle, asks a simple biological question. Reduced to its essence – which came first? – it is a question often asked in the field of phylogenetics, where biologists use a range of tools to study evolutionary history and relationships across the entire biological scale, from genes to species and phyla. In the past these tools relied mainly on morphological data, but with recent progress in gene sequencing and evolutionary modelling, genetic data is swiftly changing the landscape of phylogenetics. With the advent of these new technologies, the phylogeny of the earliest complex multicellular organisms, the basal Metazoans (Metazoa being better known as the animal kingdom, Animalia), has been turned on its head, with contentious results being presented to the scientific community.

The Basal Metazoans

The Cambrian Period (541 to 485.5 Mya) is well known for the ‘Cambrian explosion’, a remarkable event encompassing the explosive radiation of organisms into most of the animal phyla we know and love today. Immediately predating this period, however, is the Ediacaran, also commonly referred to as the Vendian (635-542 Mya). Fossil records show that this is when soft-bodied organisms first appeared on Earth. It is here that the Porifera (sponges), Ctenophora (comb jellies), Cnidaria (jellyfish and corals) and Placozoa first emerged in the murky depths of the ancient oceans. Ctenophores could easily be mistaken for jellyfish with their layered, jelly-like bodies, but they move instead by beating rows of ‘combs’, or cilia, running along the body, and they lack the stinging cells for which jellyfish are famous. Species found at depth often display wonderfully striking, multi-coloured, LED-like ripples along their bodies, caused by light scattering off the moving combs. Most (but not all) ctenophore species are also capable of bioluminescence, produced by light-emitting proteins: you may well have seen these curious creatures in a documentary or two.

Conflicting Phylogenies

The traditional phylogeny of these taxa places the Porifera as the most basal group, with the rest splitting off from this lineage and evolving later. This is probably the view a layman would adopt too, simply by looking at the organisms: sponges are sessile and look relatively simple, even plant-like, lacking the features you might typically associate with animals, such as limbs, eyes or muscles. However, multiple academic groups have in recent years challenged this view using transcriptome data (sequence data from all the types of RNA found in a cell). They suggest that it was actually the ctenophores that evolved first, placing them as the sister group, most distant to all other animals. This controversial reading of these animals’ evolutionary history immediately sent ripples through the scientific community, as it went against previous textbooks and published papers; the ctenophore-first claims included a 2014 paper published in the highly revered journal Nature.
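
To see the flavour of the simplest end of this toolkit, here is a toy distance-based sketch in Python. The three mini ‘sequences’ are entirely made up, and the result says nothing about real animal phylogeny; genuine studies of this question fit probabilistic evolutionary models to transcriptome-scale data rather than counting raw differences.

```python
# A toy distance-based sketch with invented mini 'sequences'; real studies
# fit probabilistic evolutionary models to transcriptome-scale data.
seqs = {
    "sponge":     "ACGTACGTAA",
    "ctenophore": "TCGATCGTTA",
    "cnidarian":  "ACGAACGTAA",
}

def hamming(a, b):
    """Count the positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Pairwise distances between every pair of taxa.
names = list(seqs)
pairs = {(p, q): hamming(seqs[p], seqs[q])
         for i, p in enumerate(names) for q in names[i + 1:]}

# A naive method groups the closest pair first, leaving the remaining
# taxon as the earliest branch on this toy tree.
closest = min(pairs, key=pairs.get)
print(closest)  # ('sponge', 'cnidarian') for these made-up sequences
```

The controversy described above is precisely about how such inferences change when more realistic models of sequence evolution are used.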

If one runs with this hypothesis, some things need explaining. Ctenophores have relatively complex epithelial nerve nets, along with muscles and a gut. These characteristics are absent in the Porifera, so if the ctenophores did in fact evolve first, there would have had to be a secondary loss of these features down to the simpler body plan found in the sponges, followed by the fresh evolution of a nervous system like those found in the Cnidaria (jellyfish). An alternative explanation is that the ctenophores evolved their nervous system independently of the other phyla. However, this has been shown to be unlikely, because ctenophores and cnidarians share specific common features, including neuronal fate patterning genes and vital components of synaptic function. This provocative claim that sponges should no longer be viewed as the ancestral phylum led to widespread questioning of whether it was time for the history books to be rewritten, completely changing our understanding of evolution as we know it. Are they right?

The Recent Research

A team at the University of Bristol led by Dr Davide Pisani, with colleagues from around the world, published a paper in December 2015 using genomic data to tackle these controversial claims. The team reanalysed the data sets from the previous experiments suggesting the ctenophore-first hypothesis and showed that the choice of evolutionary model applied to the data is crucial for obtaining correct results, and that in the previous papers these models had been inappropriately chosen. They discuss how the previously used models failed to take into account important biological factors that affect the rate at which genes change, such as the hydrophobicity of amino acids. Subsequent analyses by Dr Pisani and his team, using more appropriate models along with powerful statistical methods, led to the conclusion that it was in fact the sponges that came first, not the ctenophores, supporting and reinforcing the classically held hypothesis: a sigh of relief for many scientists.

Dr Davide Pisani told the University of Bristol press team: “Knowing whether sponges or comb jellies came first is fundamental to our understanding of evolution.  Take the nervous system for example; this is the fundamental organ system that mediates our own perception of self.  It is what makes us human, so is pretty important!  Depending on whether sponges or comb jellies came first underpins entirely different evolutionary histories for this organ system.  If comb jellies came first, then the last common ancestor of all the animals might have had a nervous system, and as all comb jellies are predators this ancestor might have even been a predator.”

These results highlight the problem with the revolutionary claims that all too often surface in the scientific community: upon closer inspection with rigorous testing and analysis, they are not always what they seem. They underline the need for proper methodology and thorough self-scrutiny before publishing a paper. It seems the history books are safe, for now.

Dr Pisani’s paper can be found in full at:

Monday, 14 December 2015

Women in Science: Dr Georgina Meakin

By Amy Newman

Dr Georgina Meakin, a forensic science researcher at University College London, recently came to give a guest lecture at Bristol as part of a series of events run by the Women in Science society. She spoke about her experience as a woman in forensic science, and fascinated us with some examples of cases and pieces of her research along the way.

Dr Meakin took a somewhat unconventional route into her current career in forensics, choosing to return to university to study for a Masters after already completing a PhD. Hearing about the variety of jobs she’d had in the past was very reassuring: just as the careers service might tell you, there are many possibilities following on from a science degree!

She also spoke of the #distractinglysexy trend that was sparked by the recent controversy surrounding Nobel Prize winner Tim Hunt, and how forensic science isn’t as glamorous as TV shows such as CSI might have us believe. The pictures she showed of the full Personal Protective Equipment everyone in her lab has to wear definitely brought home her point!

Dr Meakin’s work at UCL focuses on “trace DNA”: DNA recovered from a crime scene that is present in tiny amounts and of unknown biological origin. Lab methods have advanced so far that we are now able to obtain reliable DNA profiles from just a few cells’ worth of material, which means a detectable bodily fluid no longer has to be present to obtain possible DNA evidence. However, as Dr Meakin explained, it can be hard to determine whether trace DNA found at a crime scene actually came from one of the perpetrators, as DNA can be spread through actions such as coughing as well as by direct physical contact. Current work from her lab has even found that, for example, if you were to shake someone’s hand and they were then to use a knife in a crime, small traces of your DNA would probably be found on the weapon!
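
The comparison step itself can be pictured with a small sketch: treat a profile as sets of allele labels at a handful of markers, and ask whether everything seen in the trace also appears in a person’s profile. The marker names and numbers below are invented; real forensic interpretation is probabilistic and far more careful than this.

```python
# Invented example of comparing a partial trace profile against a person's
# profile: marker name -> set of allele labels (all values made up).
suspect = {"D1": {12, 14}, "D2": {9}, "D3": {7, 11}}
trace   = {"D1": {12}, "D3": {7, 11}}  # partial, as trace DNA often is

def consistent(trace, person):
    """True if every allele seen in the trace also appears in the person's profile."""
    return all(alleles <= person.get(marker, set())
               for marker, alleles in trace.items())

print(consistent(trace, suspect))  # True: nothing in the trace excludes the suspect
```

Note what this does and does not show: consistency fails to exclude a person, but, as Dr Meakin’s transfer experiments illustrate, it cannot by itself say how their DNA got there.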

The talk was a great opportunity to hear about a field of science I know I for one hadn’t really thought about as a future option, and Dr Meakin’s enthusiasm both for forensics and promoting women in science really shone through.  

Monday, 7 December 2015

What are Chemists Doing to Fight Ebola?

By David Morris  

Since the beginning of 2014, West Africa has been battling an outbreak of Ebola haemorrhagic fever (EHF), or simply ‘Ebola’. The epidemic caused by the Ebola virus has been largely contained, but it has claimed more than ten thousand lives. To understand what chemists have been doing to combat the virus, some background knowledge of it is required.

An electron micrograph of an Ebola virus 

The Ebola virus is a filamentous virus composed of a strand of genetic code known as RNA encapsulated by a protein membrane. It can survive in a multitude of bodily fluids for up to several months, making transmission between hosts very feasible. The fruit bat, native to parts of West Africa, serves as a ‘natural reservoir’ for the virus: the virus can survive and replicate within a fruit bat without killing it, allowing it to thrive in countries where fruit bats are prevalent. RNA acts as a code for the expression of specific proteins. After the Ebola virus enters the human body, it expresses a protein that binds to human ‘interferons’, proteins that call for an immune response when necessary. This binding stops the interferons from calling for antibodies to destroy the virus, rendering the immune system largely redundant.

The exterior membrane of the virus presents pendant proteins called glycoproteins to the surface of a healthy cell. The Ebola glycoprotein hijacks the cholesterol influx receptors of healthy cells, dragging the virus into them and giving it easy access to the cell interior, where it is free to replicate.

The Ebola virus also expresses a disordered protein called VP24 that interacts strongly with collagen in the body. Collagen separates connective tissues and acts as a barrier preventing unwanted materials from entering organs and tissues. When VP24 interacts with collagen, it can distort it until it is denatured and useless. After this, there is little to stop blood pouring into organs and to the surface of the skin; the resulting widespread internal and external bleeding causes fatal problems in the body.

The many ways in which the Ebola virus acts on the body have given chemists just as many platforms from which to stop it. Upon entering the body, the virus is very quick to shut down the immune system, and the body tries to respond by expressing a specific antibody to combat the virus. Scientists have noticed this and developed an effective way to detect the rapid expression of this antibody, making early diagnosis, and hence recovery, much more likely.

Many contemporary EHF treatments are derived from molecules that are structurally very similar to those the virus uses in protein expression and replication. Sarepta Therapeutics have developed a modified strand of RNA that the virus encounters in the body and mistakes for its own genetic code during replication. Because of the modified code, the daughter virus goes on to express a dysfunctional VP24 that can’t bind to interferons properly, allowing them to signal the immune response to destroy the virus. Similarly, Tekmira Pharmaceuticals have developed ‘small interfering’ RNA drugs that the virus again mistakes for its own RNA; this modified RNA prevents the daughter virus from replicating itself at all. Mapp Biopharmaceutical developed ZMapp, a ‘cocktail’ of several antibodies with a high affinity for the Ebola glycoprotein. The cell does not mistake the glycoprotein-ZMapp complex for cholesterol, and so isn’t tricked into letting the virus in; unable to infect new cells, the virus eventually perishes. These techniques have produced a plethora of antiviral drugs that can be used to treat EHF, and applying them in conjunction with one another greatly increases the likelihood of effective treatment, as the virus is stopped at several points at once.

These types of drugs work well because they are structurally very similar to the virus’s own machinery, giving them a much higher affinity for the virus than for human cells. This allows a drug to selectively interrupt the virus’s processes rather than bodily processes, limiting its potential negative effects on the patient.

Chemists are currently developing methods for making EHF drugs in a quick, scalable and economically viable fashion with few toxic impurities, so they can pass through clinical trials and be used at scale. Recovery from Ebola is now becoming more and more common. With the research effort the pharmaceutical industry is pouring into fighting the virus, it is very realistic that the epidemic will be brought fully under control, and the disease effectively eliminated, within the coming years.