How an MND sufferer became the world’s first human cyborg
Peter Scott-Morgan was determined to both survive and thrive with the help of revolutionary technology after suffering a terminal diagnosis. Allie Nawrat digs deeper into this unique project.
Scientist and robotics expert Peter Scott-Morgan was diagnosed with motor neurone disease (MND) in 2017. Like fellow MND sufferer Professor Stephen Hawking, Scott-Morgan was given two years to live.
However, Scott-Morgan was determined not to give in to this terminal degenerative disease, which often causes muscle paralysis. As he puts it on his website, Scott-Morgan chose not just to survive, but to thrive with the help of cutting-edge technology.
To achieve this, Scott-Morgan is working with a range of technology companies, including household names like Intel and DXC as well as smaller specialist firms such as speech synthesis company CereProc. The vision is for Scott-Morgan to combine himself with artificial intelligence (AI) and robotics to both lengthen and improve the quality of his life.
Becoming the first human cyborg
Termed Peter 2.0, Scott-Morgan’s vision to become the first human cyborg was the subject of a recent Channel 4 documentary and hinges on seven pillars. Two of these are exclusively medical – one to replumb his stomach, the other to remove his larynx (or voice box) – while the remaining five leverage innovative technology.
Since “there are a number of high-profile companies rallying around what Peter's doing, DXC acts as an integrator” for these pillars, explains DXC fellow and director of AI Jerry Overton. This is particularly important given how ambitious Scott-Morgan’s vision is: to use technology to give him the means to express himself in his own voice and move around autonomously, two functions affected by MND.
To achieve this, the companies involved have developed a synthetic voice that is the same as Scott-Morgan's, an animated avatar that talks and expresses his emotions, and a word predictor for verbal spontaneity in conversations. CereProc chief scientific officer Matthew Aylett notes that pulling this off was a real “world first” and testament to the commitment of all parties.
There are a number of high-profile companies rallying around what Peter's doing
DXC will also support the final two pillars of a self-driving wheelchair and an exoskeleton, the latter of which leverages Permobil’s F5 VS powerchair.
Building a synthetic voice
After realising he would inevitably lose his voice because of MND and then deciding to have his voice box removed to prevent pneumonia, Scott-Morgan was on a mission to find a way to create a synthetic voice that sounded like him.
He was initially disappointed with much of the technology on offer, most of which is created for call centres rather than for those who have lost their voices for medical reasons. However, he then came across Edinburgh-based CereProc. Their previous work includes rebuilding the voice of famous US film reviewer Roger Ebert who had his larynx removed because of throat cancer, Aylett notes.
CereProc worked with Scott-Morgan to record more than 15 hours of audio and more than 1,000 individual phrases. Aylett explains this was plugged into the company’s neural text-to-speech (TTS) technology to create a voice that sounds like Scott-Morgan’s.
Importantly, CereProc’s TTS technology allows for personality and emotion to be added back into the synthetic voice. This means that Scott-Morgan has a synthetic voice with “an intimate voice style, an enthusiastic, presentational voice style, a neutral voice style and a conversational voice style that can be used”.
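To illustrate the idea of a multi-style voice, here is a minimal sketch of how a request to such a system might be assembled. The four style names come from the article; the markup format and the `render_tts_request` helper are invented for illustration and are not CereProc's actual API.

```python
# Hypothetical sketch: selecting one of several voice styles when
# synthesising speech. The style names are from the article; the
# SSML-like markup is an assumption, not CereProc's real interface.

VOICE_STYLES = {"intimate", "enthusiastic", "neutral", "conversational"}

def render_tts_request(text: str, style: str = "neutral") -> str:
    """Wrap text in markup selecting one of the available voice styles."""
    if style not in VOICE_STYLES:
        raise ValueError(f"unknown voice style: {style}")
    return f'<voice style="{style}">{text}</voice>'

print(render_tts_request("Good morning!", style="enthusiastic"))
```

The point is simply that style is a parameter of each utterance, so the same sentence can be delivered intimately, enthusiastically, neutrally or conversationally.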
As his MND progresses, Scott-Morgan will be unable to form facial expressions. Because of this, alongside having a synthetic voice, Scott-Morgan was keen to have a 3D animated avatar that would express emotions and appear to be talking. This was developed by Embody Digital and will be attached to his wheelchair.
Intel and word prediction
After developing a synthetic voice and an avatar, the next challenge was enabling Scott-Morgan to use them to interact fast enough to hold a conversation. Aylett notes: “We really wanted that interaction to be able to make use of all the great knobs and dials we had put on the voice.”
Reducing the conversation silence gap and driving verbal spontaneity was where Intel got involved. Intel fellow and director of the anticipatory computing lab Lama Nachman was interested in how Scott-Morgan used his eyes and gaze to type out the words he wanted to say.
Intel improved upon the Assistive Context-Aware Toolkit (ACAT) it had developed for Professor Hawking, so it could support a gaze tracker from Swedish technology company Tobii. Nachman explains gaze control is significantly quicker than the cheek trigger method used by Professor Hawking, making holding a conversation without lengthy silences easier.
We really wanted that interaction to be able to make use of all the great knobs and dials we had put on the voice
To further reduce the silence gap, Intel also worked to develop an AI-enhanced word predictor, which can predict what the next word may be based on what Scott-Morgan is typing with his gaze. By doing this, he “only needs to type around 10% of all the letters before the word predictor kicks in and figures out what it is that he wants to say,” says Nachman.
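The idea of completing a word from a short prefix can be sketched very simply. ACAT's real predictor is AI-enhanced and context-aware; this toy version, with an invented frequency table, just ranks known words that match the letters typed so far.

```python
# Toy sketch of prefix-based word prediction. The frequency table is
# invented; Intel's actual predictor uses AI models and conversational
# context rather than a static word list.

WORD_FREQUENCIES = {  # hypothetical usage counts
    "the": 500, "technology": 120, "terminal": 40,
    "thrive": 80, "thank": 60, "think": 90,
}

def predict(prefix: str, top_n: int = 3) -> list[str]:
    """Return the most frequent known words starting with `prefix`."""
    candidates = [w for w in WORD_FREQUENCIES if w.startswith(prefix)]
    return sorted(candidates, key=lambda w: -WORD_FREQUENCIES[w])[:top_n]

print(predict("th"))  # likely completions after typing just two letters
```

Even this crude version shows the economy involved: two typed letters can stand in for a ten-letter word, which is how the predictor cuts typing down to around a tenth of the letters.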
The next phase for the predictor is for it to listen to the conversations Scott-Morgan has and use learnings from previous conversations to predict how he might want to respond. If Scott-Morgan is not happy with any of the predictor’s options, he can begin to type out his preferred response instead.
Nachman notes that this capability is currently in development, but Intel is hoping Scott-Morgan can begin working with it within the next year.
DXC and the Cyborg Universe
Linked with its role as the integrator, DXC has been building a core interface for Scott-Morgan to use across these seven pillars. This interface is known as the Cyborg Universe and is accessed via Unity software and Microsoft HoloLens, a mixed reality headset.
While the capabilities that Intel has been working on in ACAT are not currently available through the HoloLens, DXC is currently in the process of deploying that technology with Scott-Morgan. Nachman explains that the HoloLens has a less extensive set of capabilities than ACAT as it is typically used for gaming.
However, as the Cyborg Universe moves forward, the priority is to find a way for the two to work side by side, either through integration or a consistent interface so that Scott-Morgan can take advantage of muscle memory when using either system.
Importantly, CereProc’s synthetic voice and avatar are compatible with both ACAT and the HoloLens. In addition, DXC’s work to build Scott-Morgan an autonomous wheelchair is linked to the Cyborg Universe. Overton notes that DXC has successfully developed a semi-autonomous mode for Scott-Morgan’s wheelchair with automatic obstacle avoidance. The next version will include even more advanced capabilities and will respond to voice commands such as "take me to kitchen".
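The voice-command step can be pictured as a simple mapping from a spoken phrase to a navigation goal. The command grammar and the waypoint coordinates below are invented for illustration; DXC's actual navigation stack is not described in detail.

```python
# Hypothetical sketch: mapping a voice command such as "take me to kitchen"
# to a navigation waypoint for a semi-autonomous wheelchair. Destinations
# and coordinates are invented for illustration.

WAYPOINTS = {"kitchen": (4.0, 1.5), "bedroom": (8.2, 3.0), "garden": (0.0, 9.5)}

def parse_command(command: str) -> tuple[float, float]:
    """Find a known destination word in the command and return its waypoint."""
    for word in command.lower().split():
        if word in WAYPOINTS:
            return WAYPOINTS[word]
    raise ValueError(f"no known destination in: {command!r}")

print(parse_command("take me to kitchen"))  # -> (4.0, 1.5)
```

The waypoint would then be handed to the obstacle-avoiding drive system, so the user issues a goal rather than steering continuously.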
DXC is also working on adding additional capabilities into the Cyborg Universe that go beyond Scott-Morgan’s original vision. These include Cyborg Artist, which allows the user to create original works of art, as well as medical monitoring and diagnostics.
Beyond Peter 2.0
Importantly for Scott-Morgan, his vision does not only apply to himself. He wants it to empower everyone with disabilities to both survive and thrive with the help of technology.
To achieve this, he has set up the Scott-Morgan Foundation with his husband Francis. Further to this, Scott-Morgan is committed to the principle of open source innovation so that individuals can develop additional capabilities on top of existing software to suit their needs.
Overton notes that DXC is committed to making as much as possible of the software developed for the project open source. It will support the foundation in allowing anyone to download and use the software associated with the Peter 2.0 project.
Nachman adds that ACAT was already open source, and Intel is committed to continuing this with the new capabilities baked into the toolkit. She concludes: “[Open source] is something that we feel very strongly about, it is something that we want to enable the whole community with. Frankly this is a community that gets left out time and time again.”
[Open source] is something that we feel very strongly about, it is something that we want to enable the whole community with
CereProc’s contribution to the project is slightly different since it is bespoke for Scott-Morgan’s voice. However, Aylett emphasises “we are really happy to support as many people as possible to build these synthetic voices” and then plug and play that into open source software.
For Aylett, Scott-Morgan’s technological vision and project will empower “people with severe disabilities and with communication difficulties to say I am not happy with that, I want something better.”
Main image credit: Copyright 2020 Channel 4