
Tuesday, 5 February 2013

The AI Lab: Brain-Computer Interfaces - The Future of Collaborative Mind-Control Systems Shaping Up

Alfred Omachar



One of the most challenging advances in human-machine interfaces is the brain-computer interface (BCI), which communicates a user's intentions to a computer directly, bypassing classical manual input devices such as the keyboard, mouse and touchpad.

However, recent research in BCI has demonstrated impressive capabilities for controlling mobile robots, virtual avatars and even humanoid robots. For example, one study demonstrated control of a humanoid robot with a BCI: users were able to select an object in the robot's environment, seen through the robot's cameras, and place it in a desired area, seen through an overhead camera. Similarly, BCIs have also helped people with disabilities to control, for example, a wheelchair, a robotic prosthesis or a computer cursor.

So how do BCIs work (in a nutshell)?

A BCI system records the brain's electrical activity using electroencephalography (EEG). The signals can be recorded invasively, from electrodes implanted inside the brain, or non-invasively, from electrodes on the scalp. A non-invasive BCI picks up signals that are present at microvolt levels on the scalp and amplifies them; the amplified signals are then digitised so that they can be processed by a computer. Finally, machine learning algorithms are used to build software that learns to recognise the patterns a user generates when thinking of a certain concept, for example “up” or “down”.
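To give a flavour of that last pattern-recognition step, here is a minimal sketch in Python. Everything in it is illustrative: real systems extract far richer features from the EEG, and the "band-power" numbers below are synthetic stand-ins, with a simple nearest-centroid classifier standing in for the machine learning stage.

```python
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Assign a sample to the label whose centroid is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

random.seed(0)
# Synthetic training features: imagining "up" vs "down" shifts the mean pattern.
train = {
    "up":   [[1.0 + random.gauss(0, 0.2), 0.2 + random.gauss(0, 0.2)] for _ in range(50)],
    "down": [[0.2 + random.gauss(0, 0.2), 1.0 + random.gauss(0, 0.2)] for _ in range(50)],
}
centroids = {label: vs for label, vs in ((k, centroid(v)) for k, v in train.items())}

print(classify([0.9, 0.3], centroids))  # a new sample that looks like "up"
```

The point is simply that after digitisation, "recognising a thought" becomes an ordinary classification problem over feature vectors.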

A Promising Future for Collaborative BCIs

Now researchers are discovering that they get even better results in some tasks by combining the signals from multiple BCI users. For instance, a team at the University of Essex developed a simulator in which pairs of BCI users had to steer a spacecraft towards the centre of a planet by thinking about one of eight directions in which they could fly. Brain signals representing the users' chosen direction were merged in real time and the spacecraft followed that path.

The results showed that two-brain navigation outperformed single-brain navigation: simulated flights were 67% accurate when controlled by a single user but 90% on target when controlled by two. In addition, random noise in the combined EEG signal was significantly reduced, and dual-brain navigation could compensate for a lapse in attention by either user. In fact, NASA's Jet Propulsion Laboratory in Pasadena, California, has been observing this study while itself investigating the potential of BCIs for controlling planetary rovers, among other space applications. For now, though, the idea of remote-controlling a planetary rover by thought remains speculative, as most work in the field is still at the research stage.
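Why does merging two brains help? A large part of the answer is plain statistics: averaging two independent noisy estimates of the same intended direction cuts the noise. The toy simulation below makes the point with made-up numbers (the noise level and angles are assumptions, not values from the Essex study).

```python
import random

random.seed(1)
TRUE_ANGLE = 45.0  # the direction both users intend, in degrees

def decoded_estimate(noise_sd=20.0):
    """One user's decoded direction: the true intent plus decoding noise."""
    return TRUE_ANGLE + random.gauss(0, noise_sd)

trials = 10_000
solo_err = combined_err = 0.0
for _ in range(trials):
    a, b = decoded_estimate(), decoded_estimate()
    solo_err += abs(a - TRUE_ANGLE)                # one brain steers alone
    combined_err += abs((a + b) / 2 - TRUE_ANGLE)  # two brains, merged in real time

print(f"mean error, one brain:  {solo_err / trials:.1f} deg")
print(f"mean error, two brains: {combined_err / trials:.1f} deg")
```

Averaging two independent estimates shrinks the noise by a factor of roughly the square root of two, which is consistent with the accuracy jump the study reported.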

Saturday, 12 January 2013

Relics amongst us

Tom Stubbs


Meet the organisms that have outlived the Egyptian pyramids, the Roman Empire and every human who has ever lived.

As humans we are familiar with lifespans on a decadal timescale. Human life expectancy varies globally from 32 to 83 years, and the oldest person ever officially recorded was a whopping 122 years old. It is amazing to think that animals such as the giant tortoise, like the legendary Lonesome George, can live past the age of one hundred. Nevertheless, these lifespans are truly eclipsed by representatives of the plant kingdom.
Methuselah

The oldest individual living organism on Earth is a bristlecone pine, aptly named Methuselah after the longest-lived figure in the Hebrew Bible. This individual, hidden away in the ‘Forest of Ancients’ in the Inyo National Forest of California, is an incredible 4,800 years old. To put that into perspective, the tree must have sprouted around 2800 BC! It was already a centenarian before the first Egyptian pyramids were built, and the Mayan civilization would not appear for another 800 years. It has existed through wars and the rise and fall of civilisations, yet it still sits humbly in the mountains of California. Bristlecone pines are not particularly large, reaching around 50 feet, and they grow very slowly, taking around 700 years to grow 3 feet! At first glance the plant appears rather drab, but so would you if you had outlasted every other organism on the planet.

Believe it or not, Methuselah is not the oldest recorded individual tree: a member of the same species was older. This was Prometheus, which may have been 5,000 years old. Unfortunately, Prometheus was felled by an enthusiastic graduate student in 1964! There is a chance that Methuselah may overtake its rival and continue to live past our great-great-grandchildren. Who knows, scientists might be blogging about a 6,000-year-old tree in the very distant future.

Sarv-e-Abarkooh
Bristlecone pines are not the only primeval trees living amongst us. There is also the giant 82-foot-high cypress named Zoroastrian Sarv (or Sarv-e-Abarkooh). This individual evergreen, found in Abarkooh, Iran, is between 4,000 and 4,500 years old, around the same age as Stonehenge!

So why do some trees live so long? Their compartmentalised vascular system helps considerably, allowing sections of the tree to deteriorate while the individual survives. They also have the ability to synthesise defensive compounds to protect against parasites and bacteria. An underlying physiological mechanism prevents genetic mutations from accumulating in their cells to the same extent as other organisms. Longevity is naturally selected as it increases the organism’s reproductive opportunities.

We have trees that have existed for thousands of years, but how would you feel if I told you there are plants that may have lived for tens or even hundreds of thousands of years? The exceptional trees described above are all individual units, each with a single stem and root system. There is, however, a group of plants that have evolved a clonal mode of life. These plants produce many genetically identical cloned stems that, to the untrained eye, appear to be individual trees, but beneath the surface they are all connected in a massive network of roots. This allows them to defy time: the loss of a single stem or ‘tree’ does not mean the death of the overall organism, and clonal colonies can live for incredibly long periods.

Part of the 'Pando' colony
Perhaps the most famous ancient clonal colony is ‘Pando’, a colony of Quaking Aspen in Utah. This colony is 80,000 years old, so compared to this Methuselah looks like a spring chicken! An age of 80,000 years is difficult to comprehend, but for most of that time our ancestors were still confined to Africa. Remarkably, some reputable estimates suggest the colony could be as old as one million years. If so, Pando would be 800,000 years older than the earliest humans. Also known as ‘The Trembling Giant’, Pando is made up of around 47,000 stems that are clones of a single male aspen; when a stem dies it is simply replaced. Together this colossus weighs around 6,000,000 kg, making Pando the heaviest living organism on Earth.

Old Tjikko
If you consider Pando a cheat for being made up of multiple stems, then check out Old Tjikko. This ancient spruce tree from Sweden is 9,550 years old, twice the age of Methuselah. Unlike Pando, it has only a single visible stem, so it looks like a normal tree. However, that stem is merely the latest of many, and is itself only around 600 years old: a clone continuously replaced from an ancient root stock.

In February 2012 a new contender for the title of oldest colonial organism was announced. To find it we have to venture into the marine realm. Reports suggest that a meadow of the seagrass Posidonia oceanica along the Mediterranean coast is between 80,000 and 200,000 years old. It looks like a meadow, but, as with other clonal colonies, it is all one genetic individual. Ironically, this ancient seagrass now faces its greatest threat: humanity. Human-induced climate change in the Mediterranean is causing P. oceanica meadows to decline by around 5% each year. Remember, too, that it was human error that led to the felling of Prometheus, and that ‘The Senator’, previously the fifth-oldest living tree, was burnt down by a woman in Florida in 2012! As a species we must be careful that we do not destroy these wonderful relics.

Wednesday, 28 November 2012

Graphene: The Future of Computers?

Hannah Bruce Macdonald


Graphene is a one-atom-thick sheet of pure carbon arranged in a hexagonal pattern, much like a single layer of graphite. This structure allows electrons to move very rapidly across the sheet, and it is this property that has caused graphene to be flagged for use in ultra-fast computers. However, transistors need to be made from semiconductors, like silicon, so that the circuit can be switched on and off, and this switching relies on the material having a band gap. A band gap is the energy difference between the states electrons normally occupy and the states in which they can conduct, and it is bridged only when the right amount of external energy is applied. Graphene has no band gap and therefore does not behave as a semiconductor.

Computer chip with over 1 billion transistors
A band gap has been introduced into graphene before, opening the way for graphene transistors. This has been done using techniques such as adding an insulating layer to the graphene, reducing its conductivity, or carving the graphene into ribbons, whose altered structure allows the current to be turned off more easily. Both techniques have been successful, but they only work in transistors above a certain size: at small sizes the edges of the ribbons become roughly cut, causing the band gap to disappear.

New research carried out at the Georgia Institute of Technology has found that graphene sheets designed with a rippled surface could be used for transistors. The troughs in the rippled surface mimic the ribbons, but because the sheet is continuous, the problem of rough ribbon edges is bypassed. The parallel trenches are 18 nm deep and produce a band gap of 0.5 electron volts. This development has opened up the possibility of graphene replacing silicon in transistors, though plenty of research remains to be done into the band gap to produce the ideal size of graphene ripple. Such a shift from silicon to graphene could help sustain Moore's Law, which observes that the number of transistors on a computer chip, and with it processing power, doubles roughly every two years.
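To see why the size of the band gap matters so much for switching a transistor off, a standard back-of-the-envelope estimate from semiconductor physics is the Boltzmann factor exp(-Eg / 2kT), which governs how readily electrons are thermally excited across a gap Eg at temperature T. The calculation below is illustrative and not from the article; the 0.5 eV figure is the reported ripple gap, and 1.1 eV is silicon's well-known gap.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV per kelvin
T = 300.0       # room temperature in kelvin

def thermal_leakage(gap_ev):
    """Boltzmann factor exp(-Eg / 2kT): a rough measure of how easily
    electrons are thermally excited across a band gap of gap_ev."""
    return math.exp(-gap_ev / (2 * K_B * T))

for name, gap in [("gapless graphene", 0.0),
                  ("rippled graphene", 0.5),
                  ("silicon", 1.1)]:
    print(f"{name:16s}  gap {gap:.1f} eV  ->  leakage factor {thermal_leakage(gap):.2e}")
```

With no gap the factor is 1 (the "transistor" can never turn off), while even a 0.5 eV gap suppresses thermal leakage by several orders of magnitude at room temperature, which is what makes the rippled sheets interesting.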

Will rippling the surface of graphene make it the ultimate transistor material?

Thursday, 15 November 2012

The AI Lab: A Look At iPhone Siri's Future

Alfred Omachar


Ever thought how great it would be if you could use voice control on your iPhone without having to lay your hands on it?

Well, the future looks very promising for Siri, as speech recognition company Nuance is working on technology that would allow users to speak to a mobile device without ever touching it, even while it is in sleep mode. Nuance Communications, which made the virtual assistant app Dragon Go! and is widely believed to be the voice provider for Apple's Siri, believes that you will soon be able to talk to your smartphone while it lies idle. Smartphones are also expected to gain the ability to listen to an ongoing stream of sound and pick out their user's voice from background chatter. The company is currently working with several chip designers on a persistent, low-power way for devices to listen for voice commands from the user. Vlad Sejnoha, Nuance's CTO, expects this to be achieved in just a few years.
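The usual trick behind low-power listening is a cheap first-stage gate that runs constantly and only wakes the expensive speech recogniser when something worth hearing happens. The sketch below is a toy version of that idea, a short-time-energy voice activity detector, and is purely illustrative: Nuance's actual approach is not public, and the threshold and frame sizes here are made up.

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def detect_voice(frames, threshold=0.01):
    """Return the indices of frames whose energy exceeds the threshold --
    the cheap 'is anyone talking?' gate that runs before full recognition."""
    return [i for i, f in enumerate(frames) if frame_energy(f) > threshold]

# Toy audio: three near-silent frames and one 'speech' frame.
quiet = [0.001] * 160
loud = [0.3, -0.2, 0.25, -0.3] * 40
print(detect_voice([quiet, quiet, loud, quiet]))  # -> [2]
```

Only the frames flagged by the gate would be handed to the power-hungry recogniser, which is what makes always-on listening feasible on a battery.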

However, it was noted that a couple of challenges will have to be dealt with before these capabilities become available. These include accidental triggering of the voice assistant, and the privacy and security implications of software that is constantly listening in the background. Foolproof voice identification would also be needed to prevent the smartphone from releasing personal information to whoever asks for it. But the biggest challenge, of course, will be convincing users to be comfortable with a device that is always paying attention.

Looking at how Apple has approached technology over the past few years, we could expect anything to happen to Siri. Either way, however enthusiastic you are about voice technology, it is clearly moving closer and closer to the world of sci-fi, one that many techies have always dreamt of. Could we see Siri moving to the Mac? Could we soon be able to say, “Siri, book me on the next flight to London”? Where do you see things going from here?

Saturday, 3 November 2012

The AI Lab: Watson, IBM's Supercomputer Genius, Could Be Your New Doctor!

Alfred Omachar



Watson, widely remembered for making headlines last year as the first ever cognitive system to win the TV quiz show Jeopardy!, is now training for a new job as a doctor. As recently announced by IBM, Watson is to become an advisor and assistant to all kinds of professional decision-makers, starting with healthcare and then moving on to other areas such as finance. The company, together with the Memorial Sloan-Kettering Cancer Center (MSKCC), plans to use the system for cancer research and treatment. Using clinical knowledge combined with genomic and molecular data from MSKCC, Watson will help oncologists diagnose and treat individual cancer patients. In principle, computers should already be able to help with this, but the limitations of current systems, for instance in dealing with natural language, have prevented real advances.

So what is it that actually makes Watson different from other intelligent systems?

A combination of three modern computing techniques makes Watson's smart learning software unique:

1. Natural language processing – to help in comprehending unstructured data

2. Hypothesis generation and evaluation – providing a list of responses based on relevant evidence

3. Evidence-based learning – improving its performance based on its outcomes so as to make it smarter with each interaction.

These capabilities enabled it to perform well on Jeopardy!, which depends mainly on the ability to decipher double meanings, puns, rhymes and hints, as well as the ability to process large amounts of information and make complex logical connections. Check out the video below for more information.



It's more than Just Game Shows
Although Watson has accomplished its first task of winning on Jeopardy!, IBM wants it to be more than just a professional game show contestant. But how exactly would Watson help out in healthcare?

  • First, the doctor poses a question to the system, providing it with the symptoms of an illness. Watson then mines personal data from the patient and his or her medical records.
  • It combines this information with findings from medical research, then examines all the data sources to form hypotheses and test them.
  • Finally, Watson lists potential diagnoses along with a confidence level for each, helping the doctor make a more informed decision.
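The workflow above can be caricatured in a few lines of Python. This is emphatically not Watson's algorithm, just a toy illustration of the "score every hypothesis against the evidence, then rank by confidence" shape of the process; the mini knowledge base and the overlap-based scoring rule are both invented for the example.

```python
def rank_diagnoses(symptoms, knowledge_base):
    """Score each candidate condition by the fraction of its known
    symptoms present in the patient, and rank by that confidence."""
    results = []
    for condition, known_symptoms in knowledge_base.items():
        matched = symptoms & known_symptoms
        confidence = len(matched) / len(known_symptoms)
        results.append((condition, round(confidence, 2)))
    return sorted(results, key=lambda r: r[1], reverse=True)

# Hypothetical mini knowledge base -- a real system mines thousands of sources.
kb = {
    "flu":       {"fever", "cough", "fatigue", "aches"},
    "cold":      {"cough", "sneezing", "sore throat"},
    "allergies": {"sneezing", "itchy eyes"},
}
print(rank_diagnoses({"fever", "cough", "fatigue"}, kb))
```

The output is a ranked list of hypotheses with confidence scores, which is exactly the form in which Watson presents its suggestions: the doctor, not the machine, makes the final call.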

A Promising Technology for Cancer Treatment?
Without any doubt, good doctors are notable for their ability to detect patterns and apply relevant medical knowledge to their patients. However, no physician can keep up with the amount of medical information available, which is reportedly doubling every year and is largely unstructured. Watson can mine the wide array of clinical data and medical cases that is accessible electronically, uncover patterns in these data and back treatment decisions with firmer evidence. IBM and MSKCC hope that their collaborative effort to adapt Watson to cancer research and treatment will produce notable results. Not only is Watson expected to improve the cancer diagnosis process, it may in future become as common a part of a doctor's toolkit as the stethoscope or the blood pressure monitor.

Curious about Watson's Intelligence? Take a look at Watson as it competes against Ken Jennings and Brad Rutter, two of Jeopardy!'s most successful contestants.

Tuesday, 23 October 2012

The AI Lab: Can Machines Finally Think?

Alfred Omachar


When the English mathematician Alan Turing first questioned whether machines could think, back in the 1950s, the idea of building such computers looked difficult but possible. The computer would have to process language, learn from the conversation, remember what had been said, respond to the human and display common sense.
Alan Turing

Turing went on to evaluate this possibility by proposing a method that would come to be known as the Turing test, based on an imitation game. The idea is that if a computer can imitate a human well enough that a suspicious judge cannot tell the difference between the machine and an intelligent human, then the machine can, for practical purposes, be said to think. Over the past 60 years the test has proved highly influential among AI researchers, resulting in the development of a multitude of AI programs, all attempting to pass it.

Recently, software called 'Cleverbot', created by AI scientist Rollo Carpenter, was claimed to have passed the Turing test. Cleverbot uses an AI algorithm to chat with humans by searching through previously recorded conversations and selecting an appropriate response. The test took place at the Indian Institute of Technology Guwahati, India, where a group of participants conducted a conversation with an unknown respondent: half were communicating with a human, the other half with Cleverbot. The software was voted 59% human, scoring higher than in any previous Turing test, while the humans themselves were voted 69% human. A score of 50% or higher is considered a pass. By that measure Cleverbot passed, but whether it can really think remains debatable.
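The "search previously recorded conversations" idea can be sketched very simply: match the user's input against a log of past prompts and return the reply that followed the best match. The version below is a toy, with a hypothetical three-line conversation log and crude word-overlap matching; Cleverbot's real corpus and matching algorithm are far more sophisticated.

```python
def word_overlap(a, b):
    """Count the words two utterances share (case-insensitive)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(user_input, history):
    """Return the stored reply whose recorded prompt best matches the input."""
    best_prompt = max(history, key=lambda p: word_overlap(user_input, p))
    return history[best_prompt]

# A hypothetical log of past (prompt, reply) pairs.
history = {
    "are you human or machine": "Human.",
    "do you like going out": "Sometimes.",
    "what is your name": "Cleverbot.",
}
print(reply("So, are you a human?", history))  # -> "Human."
```

Because every reply was originally written by a person, the bot's answers sound human, which is precisely what makes retrieval-based chatbots such strong Turing test contestants without doing anything we would naturally call thinking.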

Here's a part of my conversation with Cleverbot from its website. Clearly it didn't end well!

User: So cleverbot, are you human or machine?
Cleverbot: Human.

User: Boy or girl?
Cleverbot: Girl.

User: Do you like going out?
Cleverbot: Sometimes.

User: Me too. Want to meet sometime?
Cleverbot: No.

User: But why?
Cleverbot: Because I don't know you.

So can Cleverbot think?
Well, some researchers have argued that passing the test only demonstrates the ability to 'imitate intelligence' and does not mean that a program is actually 'thinking'. Either way, the result is solid evidence that the evolution of intelligent programs is nowhere near its endpoint.

Want to chat with Cleverbot?
Have a chat with Cleverbot by clicking here; you may be surprised by the results!