Hands, vision, brain… Start-ups have presented technologies that use AI to improve the human body’s capabilities or correct its deficiencies.
After disrupting many economic sectors, artificial intelligence is now turning to the human body. At the Consumer Electronics Show (CES) in Las Vegas, which opened its doors on January 7, start-ups are using advances in AI to create, with the help of connected devices, the kind of human-machine interfaces that were once confined to laboratories.
The American-Chinese start-up BrainCo has arguably presented one of the most accomplished projects on the subject. Spun out of the Harvard Innovation Lab and founded three years ago, the company has developed machine learning algorithms capable of interpreting the activity of the brain and nerve endings. This has allowed it to design a robotic artificial hand that can be used by any amputee (up to the elbow) and controlled almost instinctively. About twenty minutes of calibration during setup are enough to master it, the company assures. "Amputees still have the nerves that control the movement of the hand in the rest of their arm," says Molei Wu, an engineer at BrainCo. The company's algorithms classify the signals emitted by these nerves according to the intended movement and translate them into commands for the robotic hand.
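BrainCo has not published its pipeline, but the paragraph above describes a classic pattern-recognition loop: extract features from short windows of nerve or muscle signal, classify each window as a gesture, and map the gesture to a motor command. The sketch below illustrates that generic loop; the electrode count, window length, gesture set and random-forest classifier are all illustrative assumptions, trained here on synthetic data.

```python
# Minimal sketch of the general approach described above: classify windows
# of nerve/muscle signal into gestures, then map gestures to hand commands.
# BrainCo's actual pipeline is not public; everything below is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GESTURES = ["rest", "grasp", "pinch", "point"]  # hypothetical gesture set
WINDOW = 200       # assumed 200-sample analysis window
N_CHANNELS = 8     # assumed number of electrodes on the residual arm

def extract_features(window):
    """Classic surface-signal features per channel: mean absolute value,
    root-mean-square amplitude, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, rms, zc])

# "Calibration": record labeled windows while the user attempts each gesture.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(len(GESTURES)):
    for _ in range(50):
        # Synthetic stand-in for a recorded signal window
        window = rng.normal(scale=1.0 + label, size=(WINDOW, N_CHANNELS))
        X.append(extract_features(window))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Online use: classify each incoming window and emit a command for the hand.
new_window = rng.normal(scale=2.0, size=(WINDOW, N_CHANNELS))
gesture = GESTURES[clf.predict([extract_features(new_window)])[0]]
print(f"send command to prosthesis: {gesture}")
```

The twenty-minute calibration mentioned by the company would correspond to collecting the labeled windows in the middle section before training the classifier for that specific user.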
The JDN attended a demonstration with amputee users of the technology. They were able to perform all sorts of elaborate, albeit somewhat slow, movements: shaking hands, grasping an object, lifting a finger of their choice, touching thumb and index finger together… Sold for between $10,000 and $15,000, the hand has been approved by the Chinese health authorities and is in the process of being approved in the United States, with sales expected to begin in June.
BrainCo is developing another product for the general public, still based on AI, but this time aimed at the brain. It is a kind of headband that detects whether the user is concentrated, relaxed or stressed, accompanied by an app that displays meditation tips and a history of past sessions. During the R&D phase, the electrical activity of the brain (electroencephalography) associated with these different states of mind was recorded in several thousand people in order to train machine learning algorithms to recognize them. The algorithms also refine their detection for each user as the headband is worn. BrainCo has entered into a partnership with Formula Medicine, which provides medical follow-up for Formula 1 drivers, to integrate the headband into its training programs.
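BrainCo's model is proprietary, but a common approach to this kind of mental-state detection is to compute spectral band-power features from the EEG and train a classifier on labeled sessions. The sketch below shows that generic approach, not BrainCo's method: the channel count, frequency bands and linear discriminant classifier are assumptions, and the training data are synthetic stand-ins for the labeled recordings described above.

```python
# Illustrative sketch only: recognize mental states from EEG band powers.
# Bands, channel count and classifier are assumptions, not BrainCo's design.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

STATES = ["concentrated", "relaxed", "stressed"]
FS = 256                       # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg):
    """Average spectral power in each band, per channel (eeg: samples x channels)."""
    freqs, psd = welch(eeg, fs=FS, axis=0, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[mask].mean(axis=0))
    return np.concatenate(feats)

# Synthetic stand-in for the labeled recordings from the R&D phase
rng = np.random.default_rng(1)
X = [band_powers(rng.normal(scale=1 + s, size=(FS * 2, 4)))  # 2 s, 4 channels
     for s in range(len(STATES)) for _ in range(40)]
y = [s for s in range(len(STATES)) for _ in range(40)]

model = LinearDiscriminantAnalysis().fit(X, y)
live = band_powers(rng.normal(scale=2.0, size=(FS * 2, 4)))
print("detected state:", STATES[model.predict([live])[0]])
```

The per-user refinement mentioned above would amount to continuing to update such a model with windows recorded from that user's own sessions.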
Staying with the brain, the French start-up NextMind focuses on detecting what the user sees, in order to trigger physical or digital actions without batting an eyelid. NextMind grew out of cognitive neuroscience research at the CNRS, where Sid Kouider, founder of the start-up, was a research director. NextMind has designed a headband that places a sensor on the back of the skull, where the visual cortex is located. By analyzing the electrical signal from this area of the brain, the device can understand what someone is looking at and trigger actions based on where the person is looking.
NextMind sensor. © JDN
The company has integrated its headband into virtual reality headsets so that gaze can be used as a control alongside the hand controllers. It also imagines applications in the transport sector, for example to control the dashboard of a car while keeping one's hands on the steering wheel, or to monitor the attention level of airplane pilots. The start-up, which raised €4 million at the end of 2018, will start selling its device to developers in the second quarter of 2020 and will open it to the general public once a satisfactory ecosystem of applications has emerged. Its Chinese competitor BrainUp, also present at the show, is developing a similar device and functionalities.
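NextMind has not detailed its decoder. A classic technique in the same family is SSVEP (steady-state visually evoked potential) decoding: each selectable target flickers at its own frequency, and the visual cortex echoes the frequency of whichever target the user fixates. The sketch below shows that textbook idea, not NextMind's actual method; the sampling rate and target frequencies are assumptions.

```python
# SSVEP-style gaze decoding sketch: each on-screen target flickers at a
# distinct frequency; the target the user fixates dominates the spectrum
# of the occipital (visual cortex) signal. All parameters are assumed.
import numpy as np

FS = 250                                              # assumed sampling rate (Hz)
TARGETS = {"play": 8.0, "pause": 10.0, "next": 12.0}  # flicker frequency per target

def decode_gaze(signal):
    """Return the target whose flicker frequency dominates the signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    spectrum = np.abs(np.fft.rfft(signal))
    scores = {name: spectrum[np.argmin(np.abs(freqs - f))]
              for name, f in TARGETS.items()}
    return max(scores, key=scores.get)

# Simulate 2 s of occipital signal while the user looks at the "pause" target
t = np.arange(0, 2, 1 / FS)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10.0 * t) + rng.normal(scale=0.8, size=t.size)
print("user is looking at:", decode_gaze(eeg))   # -> pause
```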
A Ctrl+F for blind people
Unlike NextMind, Orcam is aimed at those who cannot see. This Israeli company, created by the founders of Mobileye, uses the same technology as the giant of driving aids, computer vision, but puts it at the service of the blind and visually impaired. It designed Orcam My Eye, a miniature camera that attaches to the temple of a pair of glasses. Equipped with a small loudspeaker and connectable to Bluetooth headphones, it can read any text aloud in 35 languages. A visually impaired user can point a finger at a text and the device will start reading it a few seconds later. Searching for words in a text is also available via a voice command, which starts playback from the sentence containing the keyword. For a totally blind person, a button triggers the reading of any text in front of them, from newspapers to road signs.
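That voice search is essentially the "Ctrl+F" of the section title. Orcam's embedded pipeline is proprietary; as a minimal sketch of the behaviour described, assuming the OCR output is already available and treating text-to-speech as a stub, the logic could look like this:

```python
# Sketch of the "Ctrl+F" behaviour: given text recovered by OCR and a spoken
# keyword, start reading aloud from the sentence containing the keyword.
# OCR and text-to-speech are stubbed out; Orcam's pipeline is proprietary.
import re

def read_from_keyword(ocr_text: str, keyword: str) -> str:
    """Return the text to be spoken, starting at the first sentence
    that contains the keyword (case-insensitive)."""
    sentences = re.split(r"(?<=[.!?])\s+", ocr_text)
    for i, sentence in enumerate(sentences):
        if keyword.lower() in sentence.lower():
            return " ".join(sentences[i:])
    return ""  # keyword not found: say nothing

page = ("Menu of the day. Starters: onion soup. "
        "Main course: roast chicken with vegetables. Dessert: apple tart.")
print(read_from_keyword(page, "dessert"))
# -> "Dessert: apple tart."
```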
The device, which operates completely offline, also provides information about the user's surroundings. During a demonstration, for example, it told us: "One man and six people are standing in front of you." In addition to people (and their gender, if they are close enough), Orcam recognises colours, doors, glass and stairs. As in the children's game of hot and cold, the device emits a different sound signal depending on whether the user is getting closer to or further from an object they are trying to grasp. Orcam has started to develop another device, Orcam Ear, this time for people with hearing loss whose hearing aids struggle in noisy environments. A small camera worn around the neck synchronizes with the hearing aids to match the lip movements it films in front of it with the voices producing them, while filtering out the ambient noise that interferes with hearing. Valued at more than a billion dollars after its latest funding round at the end of 2018, Orcam claims to have sold several tens of thousands of devices.
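The hot-and-cold feedback is easy to picture in code: the closer the estimated distance to the target, the faster the audio cue repeats. A hypothetical sketch, where the distance estimate that would come from the camera is simply a parameter:

```python
# Sketch of "hot and cold" audio feedback: beeps accelerate as the estimated
# distance to the target object shrinks. Distances would come from the
# camera's depth estimate; here they are hard-coded for illustration.
import time

def proximity_beeps(distances_m):
    """Emit a beep whose repetition interval scales with distance:
    close object -> rapid beeps, far object -> slow beeps."""
    for d in distances_m:
        interval = min(1.0, max(0.05, d / 2))  # clamp between 50 ms and 1 s
        print(f"BEEP (object at ~{d:.1f} m, next beep in {interval:.2f} s)")
        time.sleep(interval)

# User's hand moving toward a cup: beeps accelerate
proximity_beeps([1.2, 0.9, 0.6, 0.3, 0.1])
```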
These types of technologies have existed for years, but they used to be experimental, very expensive or impractical because they required cumbersome measurement and collection devices. "Advances in machine learning and the miniaturization of hardware make the business viable and allow us to do more, for example real-time measurement of brain activity, which used to be impossible," analyzes Sid Kouider, founder of NextMind. Augmented and optimized, the body no longer has the right to glitch.