Ambrosia: A Novel, Low-Cost, Machine Learning and Signal Processing Based Artificial Spinal Cord Framework for Motor Disabilities

Summary

Over 5.4 million individuals within the United States are currently diagnosed with some form of paralysis. Although these patients have weakened brain-to-muscle connections that often prevent their desired movements, their brains still emit electrical signals through the scalp, voltages that can be read with electroencephalography (EEG) headsets. Our solution to the widespread, expensive nature of paralysis comes in the form of a $300, 3D printed upper body exoskeleton and a 3D printed EEG headset. Using novel signal-segmentation selective search algorithms, Convolutional Neural Network classification architectures, and sequence-to-sequence translation encoders, we have achieved industry-leading accuracy in associating specific electrical potentials with their respective motor images and motions. Specifically, we have created and optimized an EEG-to-EMG signal translation system, one that accommodates both rotational and translational joint movements while preserving the desired rate of motion. On top of these sensor optimizations, we integrated a Transcutaneous Electrical Nerve Stimulation (TENS) device into our software and hardware stack to externally stimulate motion in a human patient at the most fundamental level. With our three-step machine learning algorithm and lightweight hardware framework, our system has applications in physical therapy and would permit paraplegics, and physically disabled patients in general, to actively partake in routine daily tasks.

Question / Proposal

Given the noisy and unstructured nature of brain-wave (EEG) data and muscle-wave (EMG) data, can we build a machine learning model that maps these two continuous datasets more effectively than existing approaches, and can we build a non-invasive device from off-the-shelf components that recreates trained, intended motions at a low price point?

Signal processing techniques can be utilized to fill the role of the spinal cord, replicating a complex EEG-to-EMG signal decomposition process. The specific mapping of EEG waves to EMG waves has not been studied computationally in past literature; however, some components of our proposed approach (segmentation and classification) have been studied extensively for time-series data. We also have reason to believe that our proposed translation system, a sequence-to-sequence encoder, will work well, because it builds on recurrent neural networks, which handle time-series data well.

At the same time, because paraplegics’ damaged spinal cords do not affect the biological functionality of their nerves and muscles, their nerves can be electrically stimulated to artificially provoke motion. Devices such as the TENS (Transcutaneous Electrical Nerve Stimulation) unit have been used to trigger patients’ nerves and muscles to aid in potential recovery during physical rehabilitation.

Thus, we hypothesize that a computationally and physically robust medical device module can be used to demonstrate a more accurate, mobile, and practical method of re-creating motion within the disabled population. 

Research

In the past, SVM classifiers utilized solely the "C3" and "C4" regions to classify input signals (Costantini, Casali, & Todisco, 2010; Mirnaziri, Rahimi, Alavikakhaki, & Ebrahimpour, 2013). However, modern studies describe several approaches for more accurate classification. For example, Alomari, Samaha, & AlKamha (2013) used a powerful heuristic that ignored a patient's intrinsic thoughts and paid attention only to the regions of the brain associated with desire; the SVM classified a signal only when the patient wanted to move. Furthermore, Li, Xu, & Zhu (2015) tracked eyeball movement through the EEG to narrow down SVM classifications: when one's desire is to move, the eyes tend to shift toward the direction of the desired motion. According to Li, Xu, & Zhu (2015), the brain regions labelled "F7" and "F8" served as indicators of eyeball movement, and with this additional factor a classification accuracy of over 90% was obtained. The key takeaway, however, is that these studies all revolve around basic classification between a few classes, such as left versus right hand. They do not focus on finding a correlation between two forms of continuous data, such as the brain signal and the respective body-stimulating signal, which would allow the best imitation of the user's desired intent for daily tasks.

Sequence-to-sequence learning has generally been used for applications such as language translation and speech recognition, where one continuous input sequence is mapped to one continuous output sequence. Studies by Sutskever et al. (2014) and Nallapati et al. (2016) showed only moderate accuracy when relying on Long Short-Term Memory (LSTM) models, even for 1-to-1 sequence-to-sequence encoding and decoding. Hence, we proposed a new approach that relies on Convolutional Neural Networks to compute the intermediary encoder and decoder weights instead of the recurrent layers used in LSTM models.

Furthermore, using these computational systems, researchers are developing robotic exoskeletal frameworks to help the physically disabled regain motion (Brewster, 2016). However, these solutions tend to be extremely immobile and costly; for example, the Phoenix exoskeleton costs around $40,000 and is considered one of the industry's cheapest solutions. Hence, throughout our studies we focused on developing a lightweight, 3D printed framework that uses off-the-shelf, FDA-approved products such as a Transcutaneous Electrical Nerve Stimulation (TENS) module to non-invasively help the physically disabled regain motion.

Pictured above are current solutions available to the public. This visual conveys how modern solutions are expensive, heavy, and limited.

Method / Testing and Redesign

Computational Procedure:

Data Collection:

To construct the machine learning model we desired, we created a dataset that mapped labeled EEG activity to labeled EMG activity. The dataset was created using three able-bodied people (the two researchers and one immediate family member).

When collecting data, a simple computer prompter would tell the user which motion to think of, and the EEG and EMG data would be recorded simultaneously for that prompt. This was repeated to construct a dataset spanning over 32 hours at a sampling rate of 128 Hz.
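For concreteness, the loop below is a minimal sketch of how such a prompt-and-record session could be scripted; the read_eeg_frame/read_emg_frame functions, the channel counts, and the motion list are hypothetical placeholders for whatever headset and electrode API is actually used.

```python
# Minimal sketch of the prompt-and-record loop (hypothetical sensor reads).
import time
import random
import numpy as np

MOTIONS = ["left_arm", "right_arm", "left_leg", "right_leg"]  # example prompt set
FS = 128           # sampling rate in Hz, as used in our recordings
TRIAL_SECONDS = 5  # length of each prompted trial

def read_eeg_frame():
    """Placeholder for one multi-channel EEG sample from the headset."""
    return np.random.randn(14)  # e.g. 14 electrode channels

def read_emg_frame():
    """Placeholder for one EMG sample from the surface electrodes."""
    return np.random.randn(4)   # e.g. 4 muscle channels

def record_trial(motion):
    print(f"Think about moving your {motion} now...")
    eeg, emg = [], []
    for _ in range(FS * TRIAL_SECONDS):
        eeg.append(read_eeg_frame())
        emg.append(read_emg_frame())
        time.sleep(1.0 / FS)
    return {"label": motion, "eeg": np.array(eeg), "emg": np.array(emg)}

dataset = [record_trial(random.choice(MOTIONS)) for _ in range(10)]
```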

Computational Processing Approach:

The overall data-processing pipeline involves first segmenting the signal into pieces that could possibly represent motion classes (e.g., moving a leg, moving an arm), then classifying these segments into motion classes, and finally translating the motion data into a continuous muscle (EMG) signal and then into a stimulation signal.

1. Ambrosia Comprehensive Search (ACS) Segmentation

The ACS segmentation algorithm is a custom segmentation algorithm that builds on an idea from computer vision called selective search and on a signal segmentation algorithm called twin peaks. The resulting algorithm achieved higher recall than all other algorithms we tested while maintaining a reasonable speed, allowing us to capture all actions that are passed in.
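The exact ACS implementation is not reproduced here, but the sketch below illustrates the general idea of proposing candidate windows around activity peaks (a twin-peaks-style heuristic) and greedily merging overlapping proposals, loosely analogous to selective search's region merging; the thresholds and window lengths are assumptions.

```python
# Illustrative sketch of peak-driven candidate segmentation on a 1-D signal.
# This is NOT the exact ACS algorithm; thresholds and window sizes are assumed.
import numpy as np
from scipy.signal import find_peaks

def propose_segments(signal, fs=128, min_len=0.5, max_len=3.0):
    """Propose candidate (start, end) windows, in samples, around activity peaks."""
    envelope = np.abs(signal - signal.mean())
    peaks, _ = find_peaks(envelope, height=envelope.std())  # activity peaks
    proposals = []
    for p in peaks:
        for seconds in (min_len, (min_len + max_len) / 2, max_len):
            half = int(seconds * fs / 2)
            proposals.append((max(0, p - half), min(len(signal), p + half)))
    return merge_overlaps(sorted(proposals))

def merge_overlaps(segments):
    """Greedily merge overlapping candidate windows, selective-search style."""
    merged = []
    for start, end in segments:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

sig = np.random.randn(128 * 10)  # 10 s of synthetic single-channel EEG
print(propose_segments(sig))
```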

2. Classification via Convolutional Neural Network

Our Convolutional Neural Network was scaled down to a 1-dimensional kernel, which allowed us to apply the kernel function directly to the 1D EEG data. The network generates the output layer nodes, with each node representing a motion class and its associated weight representing the probability of that class being chosen for the input wave.
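A minimal sketch of such a 1D CNN classifier is shown below, written with Keras for illustration; the window length, channel count, and layer sizes are assumptions rather than the exact trained architecture.

```python
# Minimal 1-D CNN classifier sketch (layer sizes are illustrative, not the trained model).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 256    # samples per segmented EEG window (assumed)
CHANNELS = 14   # EEG electrode channels (assumed)
N_CLASSES = 8   # motion classes supported by the system

model = keras.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),  # 1-D kernel over time
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),         # per-class confidence weights
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Toy training call on random data, just to show the expected shapes.
x = np.random.randn(64, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=64)
model.fit(x, y, epochs=1, verbose=0)
```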

3. Translation using Sequence to Sequence Encoder Architecture to EMG

Instead of relying on RNNs, we used the weights obtained from CNNs trained on both EEG and EMG data to generate the respective encoder and decoder states. Because the architecture was trained to correlate EEG data with the respective muscle-movement data, it converts an input EEG sequence into the corresponding EMG output sequence.
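The sketch below illustrates one way such a convolutional encoder-decoder could be wired up, assuming fixed-length EEG and EMG windows; the Conv1DTranspose decoder and all dimensions are illustrative assumptions, not the exact model.

```python
# Sketch of a convolutional encoder-decoder mapping EEG windows to EMG windows.
# Layer choices (Conv1DTranspose decoder, sizes) are assumptions for illustration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, EEG_CH, EMG_CH = 256, 14, 4  # assumed window length and channel counts

inputs = keras.Input(shape=(WINDOW, EEG_CH))
# Encoder: convolutions downsample the EEG sequence into a compact state.
x = layers.Conv1D(32, 7, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv1D(64, 5, strides=2, padding="same", activation="relu")(x)
# Decoder: transposed convolutions upsample back to an EMG-length sequence.
x = layers.Conv1DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
x = layers.Conv1DTranspose(32, 7, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv1D(EMG_CH, 3, padding="same")(x)  # predicted EMG channels

seq2seq = keras.Model(inputs, outputs)
seq2seq.compile(optimizer="adam", loss="mse")  # RMSE-style objective on EMG amplitude

eeg = np.random.randn(32, WINDOW, EEG_CH).astype("float32")
emg = np.random.randn(32, WINDOW, EMG_CH).astype("float32")
seq2seq.fit(eeg, emg, epochs=1, verbose=0)
```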

4. Obtaining a Transfer Function to Translate to TENS

The precise stimulation signal for the electrical stimulation module was derived by applying our transfer function, a least-squares regression relating EMG amplitude to joint torque, to the sequence-to-sequence encoder's output signal.
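As an illustration only, the snippet below fits a simple linear transfer function by least squares on hypothetical calibration pairs of EMG amplitude and measured joint torque, then clips the result to a bounded stimulation intensity; the calibration values and the linear form are assumptions, not our actual calibration data.

```python
# Sketch of fitting the EMG-to-stimulation transfer function by least squares.
# The calibration data and the linear form below are assumptions for illustration.
import numpy as np

# Hypothetical calibration pairs: rectified EMG amplitude vs. measured joint torque.
emg_amplitude = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70])
joint_torque  = np.array([0.4, 0.9, 1.8, 3.1, 4.3, 6.0])  # N*m

# Solve torque ~ a * emg + b in the least-squares sense.
A = np.column_stack([emg_amplitude, np.ones_like(emg_amplitude)])
(a, b), *_ = np.linalg.lstsq(A, joint_torque, rcond=None)

def tens_intensity(predicted_emg, max_intensity=255):
    """Map predicted EMG amplitude to a bounded stimulation intensity value."""
    torque = a * predicted_emg + b
    return int(np.clip(torque / joint_torque.max() * max_intensity, 0, max_intensity))

print(tens_intensity(0.3))
```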

Device Engineering Approach:

1. Socket Module for Framework

The 3D printed framework guides electrode placement, and the socket module pictured was designed so that the maximal number of degrees of freedom can be accommodated.

2. TENS Control Using an Arduino Shield

The TENS module was controlled by an Arduino that read the network's output signal from a serial port and translated it to the hardware in near real time.
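A host-side sketch of this serial link is shown below using pyserial; the port name, baud rate, and one-byte-per-intensity protocol are assumptions about the actual firmware rather than the exact interface we used.

```python
# Host-side sketch: stream stimulation intensities to the Arduino over serial.
# The port name and one-byte-per-intensity protocol are assumptions.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # adjust for the actual Arduino port
BAUD = 115200

def stream_intensities(intensities, frame_rate=50):
    """Send one intensity byte per frame so the shield can update the TENS output."""
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        time.sleep(2)  # give the Arduino time to reset after the port opens
        for value in intensities:
            link.write(bytes([max(0, min(255, int(value)))]))
            time.sleep(1.0 / frame_rate)

# Example: ramp the stimulation up and back down.
stream_intensities(list(range(0, 200, 5)) + list(range(200, 0, -5)))
```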

Testing Methodology:

Multiple layers of testing were conducted to ensure that performance was optimal. We recorded speed and recall for segmentation, AUC for classification, and RMSE for the sequence-to-sequence encoder, as sketched below. Furthermore, the system as a whole was tested by having one user wear the headset and the other user wear the electrical stimulation module, minimizing bias throughout testing.
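The snippet below sketches, on toy data, how these metrics can be computed with NumPy and scikit-learn; the numbers shown are placeholders, not our results.

```python
# Sketch of the evaluation metrics used across the pipeline (toy data shown).
import numpy as np
from sklearn.metrics import roc_auc_score

# Segmentation: recall of true motion segments recovered by the segmenter.
true_segments_found, true_segments_total = 46, 50
segmentation_recall = true_segments_found / true_segments_total

# Classification: AUC from per-class confidence scores (binary toy example).
y_true  = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.2, 0.8, 0.7, 0.3, 0.9, 0.4, 0.6, 0.85])
auc = roc_auc_score(y_true, y_score)

# Translation: RMSE between predicted and recorded EMG sequences.
emg_true = np.random.randn(256)
emg_pred = emg_true + 0.1 * np.random.randn(256)
rmse = np.sqrt(np.mean((emg_pred - emg_true) ** 2))

print(segmentation_recall, auc, rmse)
```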

Results

This table emphasizes the success of our novel ACS segmentation algorithm when compared to other segmentation algorithms, such as the Modified Varri (MV) segmentation algorithm and the Standard Deviation (SD) segmentation algorithm. The ACS algorithm is faster than the SD algorithm but still somewhat slower than the MV algorithm; however, its recall is significantly higher than both of the other algorithms, which is critical because we aim to segment the signal so that all potential movements are represented. Our reasoning is that we are willing to sacrifice a little speed to ensure the patient can move their body effectively without misclassifications caused by poor segmentation.

This second figure is an ROC curve with an Area Under the Curve (AUC) of over 86%, which indicates strong classification performance in terms of true positive and false positive rates. As pictured, the curve rises quite rapidly toward a true positive rate of 1 without incurring a large false positive rate, which is an extremely positive sign for the 1-dimensional convolutional neural network architecture.

This figure is a visualization created when testing the accuracy of the classifications for 8 motion classes. The x-axis represents the number of sequence steps processed by the computational architecture, while the intensity represents the system's confidence in a given action at that step; these confidences are the weights output by the CNN in Phase II of our layered algorithmic system. The visualization illustrates how the system initially stutters between all of the possible classes but quite rapidly singles out the correct motion classification, at which point the confidence for that class is extremely high and all other classes have extremely low confidence weights.

Finally, the last figure is an aggregate graph of the efficiency of the sequence-to-sequence learning model. It indicates that the average deviation for the entire model was around 23%, meaning that the average quality of analysis for the input signals was 77%. Furthermore, we plotted the RMSE curves for both training and testing and observed that the testing set does not have a much higher RMSE than the training set, indicating that the model has not overfit the data.


Conclusion

The system that we have built throughout this process is one that can be very helpful and influential in saving lives, and our hypothesis was clearly supported. We were indeed able to create a system that maps EEG waves to EMG waves, and we were also able to construct the hardware apparatus needed for it to function effectively. The work we have done in this study is very extendable: we currently support eight actions, and this could theoretically be extended to an arbitrary number of actions given enough data and people willing to participate in training sessions for the overall computational model. Furthermore, because the system can be extended at an extremely low cost, our research has multiple potential avenues, including a commercial avenue through which a majority of the paralyzed population would have access to our solution.

Furthermore, there are notable limitations on our results, as we were only able to test on ourselves, able-bodied humans, rather than on paralyzed patients. Although we are confident that our system would work for the physically impaired, we know that obtaining approval to test on paralyzed and physically disabled patients would add more credibility in terms of functionality.

There are a number of future steps that we could take with our work. We are currently working on building a custom EEG headset and hardware to lower the cost of the product to possibly around $50. This would ensure that we only use the parts we need and that no extraneous modules are included. We are also looking into modifying the digital TENS unit to take a fully analog EMG signal, which could help us introduce rotational motion through the use of a single electrode pad.

About me

All we wanted to do was save the world.

We have been close friends since early 2004 (4 years old at the time) and we first met outside of our community swimming pool. With our competitiveness, our first interaction took place near the diving board where each of us judged the other's "flips" and "dives," neither of which were impressive. This rivalry helped develop many of our shared interests—ranging from competitive dancing to competitive programming to competitive gaming.  However, throughout all of this competitiveness, we created one of the strongest friendships we could have imagined—one that has followed us for 14 years.

Ever since, we have been teaching ourselves increasingly difficult concepts using resources such as our role model Tushar Roy's algorithmic YouTube channel and by browsing r/programming. We started off with data structures, shifted toward algorithms, and finally began conducting our own computer science research, whether it was flying drones around our neighborhood to identify mosquito breeding sites or scraping historical weather sites for hurricane-prevention data. These experiences have also exposed us to the interdisciplinary aspects of computer science, whether in neuroscience or epidemiology.

Winning to us would mean everything. It would validate our work on an international stage and bring light to the problem and the approach we are using. While we have some science fair experience, this is our biggest fair and our most impactful project yet.

Hopefully, we will save the world together.

Health & Safety

Throughout our procedures, we conducted our research in a variety of locations, ranging from outdoor spaces to our garages. Even though we never had access to a registered laboratory, we were able to use our local schools and libraries for basic supplies, including prototyping tools such as 3D printers and hot glue, for the overall design of the framework. Furthermore, we maintained proper experimentation etiquette by wearing gloves and safety glasses during our soldering procedures and the hardware assembly of the exoskeletal framework.

All technologies that we utilized, such as the transcutaneous electrical nerve stimulation module and the 3D printed EEG headset, were already FDA approved and available for purchase as off-the-shelf products. We took great pride in this non-invasive, safe, off-the-shelf approach, since many other research efforts rely on invasive methods such as surgically implanting electrodes to obtain EEG data and potentially stimulate a part of the body.

When dealing with the electrical stimulation module, we were sure to follow safety guidelines to the highest extent possible. We made sure that all electrode contacts were properly placed and that those using the system were grounded and free of static electricity.

Bibliography, references, and acknowledgements

Alomari, M. H., Samaha, A., & AlKamha, K. (2013). Automated classification of L/R hand movement EEG signals using advanced feature extraction and machine learning. arXiv preprint arXiv:1312.2877.

Brewster, S. (2016, February). This $40,000 robotic exoskeleton lets the paralyzed walk. MIT Technology Review. www.technologyreview.com/s/546276/this-40000-robotic-exoskeleton-lets-the-paralyzed-walk/.

Costantini, G., Casali, D., & Todisco, M. (2010, July). An SVM based classification method for EEG signals. In Proceedings of the 14th WSEAS International Conference on Circuits, Corfu Island, Greece (Vol. 2224).

Li, Z., Xu, J., & Zhu, T. (2015). Recognition of brain waves of left and right hand movement imagery with portable electroencephalographs. arXiv preprint arXiv:1509.08257.

Mirnaziri, M., Rahimi, M., Alavikakhaki, S., & Ebrahimpour, R. (2013). Using Combination of µ, β and γ Bands in Classification of EEG Signals. Basic and clinical neuroscience, 4(1), 76.

Nallapati, R., Zhou, B., Gulcehre, C., & Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023.

Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems (pp. 3104-3112).