Showing posts with label The AI lab. Show all posts

Tuesday, 5 February 2013

The AI Lab: Brain-Computer Interfaces - The Future of Collaborative Mind-Control Systems Is Shaping Up

Alfred Omachar



One of the most challenging advances in human-machine interfaces is the brain-computer interface (BCI), which communicates a user's intention to a computer directly, bypassing classical hand-operated input devices such as the keyboard, mouse and touchpad.

Despite the difficulty, recent research in BCI has shown impressive capability for controlling mobile robots, virtual avatars and even humanoid robots. For example, one study demonstrated the ability to control a humanoid robot with a BCI: users were able to select an object in the robot's environment, seen through the robot's cameras, and place it in a desired area, seen through an overhead camera. BCIs have also helped people with disabilities to control, for example, a wheelchair, a robotic prosthesis or a computer cursor.

So how do BCIs work (in a nutshell)?

A BCI system records the brain's electrical activity as electroencephalography (EEG) signals, which can be taken invasively from inside the brain or non-invasively from the scalp. A non-invasive BCI picks up signals that are present at microvolt levels on the scalp and boosts them with an EEG amplifier. These signals are then digitised so that they can be processed by a computer. Finally, machine learning algorithms are used to train software that learns to recognise the patterns a user generates while thinking of a particular concept, for example, “up” or “down”.
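To give a feel for that last step, here is a minimal pattern-recognition sketch in Python. Everything in it is assumed for illustration: the "EEG" trials are synthetic sine waves plus noise, and the frequency bands and nearest-centroid classifier are drastic simplifications of what a real BCI pipeline would use.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 250  # assumed sampling rate in Hz, typical for EEG amplifiers

def make_trial(freq, n_samples=FS):
    """Synthetic one-second 'EEG' trial: a dominant oscillation plus noise."""
    t = np.arange(n_samples) / FS
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(n_samples)

def band_power(trial, lo, hi):
    """Mean spectral power of the trial within the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(trial)) ** 2
    freqs = np.fft.rfftfreq(trial.size, d=1 / FS)
    return spectrum[(freqs >= lo) & (freqs <= hi)].mean()

def features(trial):
    # Two band-power features; the bands are illustrative choices
    return np.array([band_power(trial, 8, 12), band_power(trial, 18, 25)])

# Training: pretend "up" produces a 10 Hz rhythm and "down" a 22 Hz rhythm
train = {"up":   [features(make_trial(10)) for _ in range(20)],
         "down": [features(make_trial(22)) for _ in range(20)]}
centroids = {label: np.mean(feats, axis=0) for label, feats in train.items()}

def classify(trial):
    """Nearest-centroid classifier over the band-power features."""
    f = features(trial)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

print(classify(make_trial(10)))  # up
print(classify(make_trial(22)))  # down
```

Real systems replace the synthetic trials with amplified, digitised scalp recordings and use far more robust feature extraction and classifiers, but the train-then-recognise structure is the same.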

A Promising Future for Collaborative BCIs

Researchers are now discovering that they can get even better results in some tasks by combining the signals from multiple BCI users. For instance, a team at the University of Essex developed a simulator in which pairs of BCI users had to steer a craft towards the centre of a planet by thinking about one of eight directions in which they could fly. Brain signals representing the users' chosen direction were merged in real time and the spacecraft followed that path.
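The merging step can be illustrated with a toy computation. In the sketch below all values are synthetic and assumed purely for illustration: two noisy copies of the same "chosen direction" signal are averaged, and the signal-to-noise ratio shows why combining independent recordings suppresses random noise.

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 250                                 # assumed sampling rate (Hz)
t = np.arange(FS) / FS
intent = np.sin(2 * np.pi * 10 * t)      # the shared "chosen direction" signal

# Each user's recording: the same intent buried in independent noise
user_a = intent + rng.standard_normal(FS)
user_b = intent + rng.standard_normal(FS)
merged = (user_a + user_b) / 2           # the real-time merge as a simple average

def snr_db(x):
    """Signal-to-noise ratio of a recording, in decibels."""
    noise = x - intent
    return 10 * np.log10(np.mean(intent ** 2) / np.mean(noise ** 2))

print(f"single user SNR: {snr_db(user_a):.1f} dB")
print(f"two-user merge:  {snr_db(merged):.1f} dB")  # roughly 3 dB higher:
# averaging halves the power of independent noise while preserving the signal
```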

According to the results of this study, two-brain navigation performed better than single-brain navigation. Simulated flights were 67% accurate when controlled by a single user but 90% on target when controlled by two users. In addition, random noise in the combined EEG signals was significantly reduced, and the dual-brain navigation could compensate for a lapse in attention by either of the two users. In fact, NASA's Jet Propulsion Laboratory in Pasadena, California, has been observing this study while itself investigating the potential of BCIs for controlling, for example, planetary rovers, among other space applications. For now, though, the idea of planetary rover remote control remains speculative, as most work in the field of BCI is still at the research stage.

Thursday, 15 November 2012

The AI Lab: A Look At iPhone Siri's Future

Alfred Omachar


Ever wondered how great it would be if you could use voice control on your iPhone without having to lay a hand on it?

Well, the future looks very promising for Siri: speech recognition company Nuance is working on technology that would let users speak to a mobile device without ever touching it, even while it is in sleep mode. Nuance Communications, which made the virtual assistant app Dragon Go! and is widely believed to be the voice provider for Apple's Siri, believes that you will soon be able to talk to your smartphone while it lies idle. Smartphones are also expected to gain the ability to listen to an ongoing stream of noise and pick out their user's voice from background chatter. The company is currently working with several chip designers on a persistent, low-power way for devices to listen for voice commands. Vlad Sejnoha, Nuance's CTO, expects this to be achieved in just a few years.

However, a couple of challenges would have to be dealt with before these capabilities become available: accidental triggering of the voice assistant, and the privacy and security implications of software that is constantly listening in the background. Foolproof voice identification would also be needed to prevent the smartphone from releasing personal information to whoever asks for it. But the biggest challenge, of course, would be convincing users to be comfortable with a device that is always paying attention.

Looking at how Apple has approached technology over the past few years, we could expect anything to happen to Siri. Either way, no matter how enthusiastic you are about voice technology, it seems to be moving ever closer to the world of sci-fi, one that many techies have long dreamt of. Could we also see Siri moving to the Mac? Or could we soon be able to say, “Siri, book me on the next flight to London”? Where do you see things going from here?

Saturday, 3 November 2012

The AI Lab: Watson, IBM's Supercomputer Genius, Could Be Your New Doctor!

Alfred Omachar



Watson, widely remembered for making headlines last year as the first cognitive system ever to win the TV quiz show Jeopardy!, is now training for a new job as a doctor. As recently announced by IBM, Watson is going to become an advisor and assistant to all kinds of professional decision-makers, starting with healthcare and then moving on to other areas such as finance. The company, together with the Memorial Sloan-Kettering Cancer Center (MSKCC), plans to use the system for cancer research and treatment. Using clinical knowledge based on genomic and molecular data from MSKCC, Watson will help oncologists diagnose and treat individual cancer patients. In principle, computers should be able to help with such tasks, but the limitations of current systems, for instance in dealing with natural language, have prevented real advances.

So what is it that actually makes Watson different from other intelligent systems?

A combination of three modern computing techniques makes Watson's smart learning software unique:

1. Natural language processing – to help in comprehending unstructured data

2. Hypothesis generation and evaluation – providing a list of responses based on relevant evidence

3. Evidence-based learning – improving its performance based on its outcomes so as to make it smarter with each interaction.

These capabilities enabled it to perform well on the Jeopardy! show, which mainly depends on the ability to work out double meanings of words, puns, rhymes and hints, as well as the ability to process large amounts of information to make complex logical connections. Check out the video below for more information.



It's More than Just Game Shows
Having accomplished its first task of winning on Jeopardy!, IBM wants Watson to be more than just a professional game show contestant. But how exactly would Watson help out in healthcare?

  • First, the doctor poses a question to the system, providing it with the symptoms of an illness. Watson then mines personal data from the patient and his/her medical records.
  • It combines this information with findings from medical research, then examines all data sources to form hypotheses and tests them.
  • Finally, Watson lists potential diagnoses along with a level of confidence for each one, helping the doctor make a more informed decision.
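The hypothesis-and-confidence idea in the steps above can be sketched in a few lines of Python. This is not Watson's method, just a toy evidence accumulator: the symptoms, diagnoses and weights are all invented for the example.

```python
# Hypothetical symptom -> diagnosis evidence weights; all values invented
EVIDENCE = {
    "persistent cough": {"bronchitis": 0.6, "pneumonia": 0.4, "flu": 0.3},
    "fever":            {"pneumonia": 0.5, "flu": 0.7},
    "chest pain":       {"pneumonia": 0.6, "bronchitis": 0.2},
}

def rank_hypotheses(symptoms):
    """Accumulate evidence per diagnosis, then report normalised confidences."""
    scores = {}
    for symptom in symptoms:
        for diagnosis, weight in EVIDENCE.get(symptom, {}).items():
            scores[diagnosis] = scores.get(diagnosis, 0.0) + weight
    total = sum(scores.values()) or 1.0
    ranked = [(d, round(w / total, 2)) for d, w in scores.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

print(rank_hypotheses(["persistent cough", "fever"]))
# [('flu', 0.4), ('pneumonia', 0.36), ('bronchitis', 0.24)]
```

The real system mines patient records and medical literature instead of a hard-coded table, but the output has the same shape: a ranked list of candidate diagnoses, each with a confidence level.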

A Promising Technology for Cancer Treatment?
Without any doubt, good doctors are notable for their ability to detect patterns and apply relevant medical knowledge to their patients. However, no physician can keep up with the amount of medical information available, which is reportedly doubling every year and is largely unstructured. Watson can mine the wide array of clinical data and medical cases that are accessible electronically, uncover patterns in these data and ground treatment decisions more firmly in evidence. IBM and Memorial Sloan-Kettering hope that their collaborative effort to adapt Watson to cancer research and treatment will produce notable results. Not only is Watson expected to improve the cancer diagnosis process, but in future it may become as common a part of a doctor's toolkit as the stethoscope or the blood pressure monitor.

Curious about Watson's intelligence? Take a look at Watson as it competes against Ken Jennings and Brad Rutter, two of Jeopardy!'s most successful contestants.

Tuesday, 23 October 2012

The AI Lab: Can Machines Finally Think?

Alfred Omachar


When English mathematician Alan Turing first questioned the ability of machines to think back in the 1950s, the idea of building such computers looked difficult but possible. The computer would have to process language, learn from the conversation, remember what had been said, respond to the human and display common sense.
Alan Turing

Turing went on to evaluate this possibility by proposing a method that would come to be known as 'the Turing test', based on an imitation game. If a computer can imitate a human well enough that a suspicious judge cannot tell the difference between an intelligent human and a machine, Turing argued, the machine can reasonably be said to think. Over the past 60 years, the test has proved quite influential among AI researchers, resulting in the development of a multitude of AI programs, all in an attempt to pass it.

Recently, software called 'Cleverbot', created by AI scientist Rollo Carpenter, was claimed to have passed the Turing test. Cleverbot uses an AI algorithm to chat with humans by searching through previously recorded conversations and providing an appropriate response. The Cleverbot test took place at the Indian Institute of Technology Guwahati, India, where a group of participants conducted a conversation with an unknown respondent. Half of the participants were communicating with a human while the other half were communicating with Cleverbot. The software was voted 59% human, scoring higher than in any previous Turing test, while the humans themselves were voted 69% human. A score of 50% or higher is considered a pass. So Cleverbot is undoubtedly clever, but whether it can really think remains debatable.
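To give a feel for the retrieval idea, here is a minimal sketch of a Cleverbot-style responder in Python. It only illustrates the general approach: the tiny corpus is invented, and a real system like Cleverbot draws on vastly larger conversation logs and far more sophisticated matching than this word-overlap heuristic.

```python
# Tiny corpus of previously recorded exchanges (invented for illustration)
CORPUS = [
    ("hello", "Hi there!"),
    ("are you human or machine", "Human."),
    ("do you like going out", "Sometimes."),
    ("want to meet sometime", "No."),
]

def tokens(text):
    """Lower-cased word set, ignoring basic punctuation."""
    return set(text.lower().replace("?", " ").replace(",", " ").split())

def reply(prompt):
    """Return the stored response whose recorded prompt shares the most words."""
    overlap = lambda pair: len(tokens(prompt) & tokens(pair[0]))
    best = max(CORPUS, key=overlap)
    return best[1] if overlap(best) > 0 else "I don't understand."

print(reply("So, are you a human or a machine?"))  # Human.
```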

Here's a part of my conversation with Cleverbot from its website. Clearly it didn't end well!

User: So cleverbot, are you human or machine?
Cleverbot: Human.

User: Boy or girl?
Cleverbot: Girl.

User: Do you like going out?
Cleverbot: Sometimes.

User: Me too. Want to meet sometime?
Cleverbot: No.

User: But why?
Cleverbot: Because I don't know you.

So can Cleverbot think?
Well, some researchers have argued that passing the test only implies the ability to 'imitate intelligence' and does not mean that a program is actually 'thinking'. Nonetheless, beyond the question of the test itself, Cleverbot is solid proof that the evolution of intelligent programs is nowhere near its endpoint.

Want to chat with Cleverbot?
Have a chat with Cleverbot by clicking here; you may be surprised by the results!