PLANET-NOONS: PipeLine for Analyzing New Exoplanet Transits using Neural-networks to Operate and Observe uNiversal Surveys


As I looked through research in observational astronomy, I realized that neural-network astronomy research up to that point had only asked how to apply neural-network techniques from other fields to astronomy. I saw a perfect niche to work in: since observational astronomy depends on ongoing observations for data, any neural network used should be able to update automatically, which is not a prerequisite in many other fields. I benchmarked my self-updating prototype against NASA’s Robovetter and Autovetter, as well as against Astronet, the neural network that had achieved the highest accuracy on tests. I predicted that my neural network would obtain the same or better accuracy while significantly cutting training and update times, and was validated when my prototype’s accuracy was as high or higher and training times were cut by up to 90%. I realized that using my prototype across several projects could yield mutual benefits, as its handling of low-quality, high-noise targets would continue to improve with increased exposure. As proof of this concept, I reran the prototype, this time to identify new transit signals, and was able to confirm 26 new exoplanet candidates. In the future, I hope to improve this prototype with data from different civilian and professional photometry projects and to incorporate this neural network into others as well.

Question / Proposal

Can neural-networks help us discover exoplanets more efficiently?

Despite the rapid growth in machine learning applications, there has been no attempt so far to create a perpetually expanding neural network. There is little need for one in most fields, where research is based on calculations, simulations, and experimentation; observational astronomy, however, revolves around outward observation, so its datasets constantly expand with data from space missions and terrestrial telescopes. Without a perpetually expanding neural network, virtually every previous machine learning approach to the Kepler data has relied on simulations and already-collected data, and would suffer drastic costs in time and processing power from repeated, inefficient preprocessing if applied to ongoing observational astronomy. At the same time, observational astronomy is perfectly poised for the development of neural networks, as its information is already processed into easy-to-parse data values and sample images. Because of this uniformity, all research done on transiting exoplanets could be assimilated into a single, universal neural network, if that network were self-updating and maintained consistent accuracy. Creating such a self-updating neural network should be possible, and should cut out around 50% of the training process, since one sub-process of training, preprocessing, does not have to be redone for each successive iteration. The results should demonstrate the clear benefits of using such a neural network within observational astronomy: much better integration into ongoing projects without losing time or accuracy.


Research applying machine learning to the Kepler dataset and similar transit photometry datasets is relatively sparse. Most work on this subject was conducted in just the past few years, and has focused on bringing techniques and methods from other fields into photometry. Such scholarship has used Kepler tangentially with machine learning to identify stars (McCauliff et al. 2015), calculate the probability of planet occurrence (Catanzarite et al. 2015), and study planet orbits (Thompson et al. 2018). Exoplanet research with applied machine learning has followed trends observed in other fields of science, for example utilizing the random forest algorithm (McCauliff et al. 2015) and seeking to establish more accurate results than decision trees or non-machine-learning algorithms. NASA currently uses one decision-tree and one neural-network algorithm for its sorting phase: Robovetter and Autovetter, respectively. While both have become very accurate at analyzing the Kepler data, they are seriously flawed in their analysis of low signal-to-noise transit signals, because they automatically disqualify transits that have too much interference. Studies have shown that Robovetter has consistently been more accurate than Autovetter. The culmination of previous work in the field was the Google Astronet project by Shallue and Vanderburg. While it was the most accurate in the field, its long training time is a serious disadvantage given the nature of astronomy data, as stated previously. My research into previous projects helped me single out Google’s TensorFlow as my starting framework, even though my final project can work with any neural network in any framework, because TensorFlow’s adaptability drastically reduces the amount of coding needed to prepare it for different projects.
My final project would therefore be adaptable both in the projects it could interact with and in the neural-network framework or programming language used in the main project. In addition, my research helped me set goals and determine that the neural networks to beat were Robovetter, Autovetter, and Astronet: the first two because they are the government benchmark, and the last because of its position as both the pinnacle of transiting-exoplanet neural-network efforts and the most accurate neural network currently in the field. My research also surfaced an additional obstacle to overcome: the ultimate goal of any Kepler neural-network project is not only to beat both the training time of Autovetter and the accuracy of Robovetter, but also to improve low signal-to-noise analysis. Thus my project is the next logical step in the integration of machine learning into observational astronomy. Through the development of a prototype, many projects in the field can be immediately consolidated into a single streamlined neural network more accurate than the rest, greatly increasing both the speed and accuracy of future pursuits within stellar observations.

Method / Testing and Redesign

In order to maintain integrity, I ran the three algorithms (NASA’s Autovetter/Robovetter, Astronet, and PLANET-NOONS) on the same dataset, taken from the Mikulski Archive for Space Telescopes (MAST), 10 times each with both step-wise and full trainings to keep consistency. The experiment was also run on the same computer to control results. The light curves from MAST underwent preprocessing to provide a clearer image of each TCE; I chose how this preprocessing would take place through my research of past experiments in the same field. Preprocessing removed the in-transit points from each TCE before flattening the remaining points. I calculated the Bayesian information criterion (BIC), a way of selecting the best-fit model for a given set of data, and chose the interval that minimized the BIC (Schwarz 1978). I then created a one-dimensional vector by folding each light curve on its TCE period. I created a local view of each transit using the techniques of Armstrong et al. (2017), restricting it to a narrow window around the transit itself, while the program also created a global view, a representation spanning the full orbital period, as detailed by Thompson et al. (2018). The local view weights short and long TCE periods equally, but may miss secondary eclipses and therefore misclassify a candidate. Meanwhile, the global view consolidates the entire folded signal, but may erase small transits. Using both local and global views strikes a balance between the two, so that no curves are underrepresented or overlooked. Since data came in both local and global representations, the program fed them through separate paths, plus a third feature column that combined the two representations.
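The folding and view-building steps above can be sketched as follows. This is a simplified illustration, not the project's actual preprocessing code: the bin counts and window widths are assumptions for demonstration.

```python
import numpy as np

def phase_fold(time, flux, period, t0):
    """Fold a light curve on the TCE period, centered on the transit epoch t0."""
    phase = (time - t0 + 0.5 * period) % period - 0.5 * period
    order = np.argsort(phase)
    return phase[order], flux[order]

def median_bin(phase, flux, num_bins, width):
    """Median-bin the folded curve into a fixed-length view."""
    edges = np.linspace(-width / 2, width / 2, num_bins + 1)
    view = np.zeros(num_bins)
    for i in range(num_bins):
        mask = (phase >= edges[i]) & (phase < edges[i + 1])
        view[i] = np.median(flux[mask]) if mask.any() else 0.0
    return view

def make_views(time, flux, period, t0, duration):
    """Global view spans the full period; local view zooms in on the transit.
    The bin counts and the 4x-duration window are illustrative choices."""
    phase, f = phase_fold(time, flux, period, t0)
    global_view = median_bin(phase, f, num_bins=201, width=period)
    local_view = median_bin(phase, f, num_bins=61, width=4 * duration)
    return global_view, local_view
```

Feeding a simulated dipping light curve through `make_views` yields one fixed-length vector per view, which is what lets both representations enter the network as parallel input columns.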
t-SNE, shown in Figs. 1, 2, and 3 with three different parameter settings and used to analyze the neural network, places packets of data that the network considers similar close to one another.

Fig 1

Fig 2

Fig 3
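The t-SNE analysis behind the figures above can be sketched with scikit-learn on stand-in feature vectors. The clusters and the perplexity value below are illustrative assumptions, not the network's actual learned embeddings.

```python
import numpy as np
from sklearn.manifold import TSNE

# Synthetic stand-ins for the network's learned feature vectors:
# two loose clusters mimicking "planet" vs "false positive" signals.
rng = np.random.default_rng(0)
planets = rng.normal(loc=0.0, scale=0.5, size=(50, 16))
false_pos = rng.normal(loc=3.0, scale=0.5, size=(50, 16))
features = np.vstack([planets, false_pos])

# Perplexity is the kind of parameter one would vary across Figs 1-3.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedding.shape)  # (100, 2)
```

Plotting the two columns of `embedding` and coloring points by label shows whether the network is separating the two classes internally.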

Outputs came as values from 0 to 1, indicating low-to-high confidence that a signal is a transiting planet. As with most algorithms deciding between a true and a false result, I trained the model to minimize the cross-entropy cost function, given by:

D(S, L) = -Σᵢ Lᵢ log(Sᵢ)

where S is the model’s predicted distribution and L is the one-hot true label.
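As a minimal sketch of the cost function, with a one-hot label only the true class's term survives the sum, so a confident correct prediction costs little and a confident wrong one costs a lot:

```python
import math

def cross_entropy(softmax_out, one_hot_label):
    """D(S, L) = -sum_i L_i * log(S_i); only the true-class term is nonzero."""
    return -sum(l * math.log(s)
                for s, l in zip(softmax_out, one_hot_label) if l > 0)

# Confident and correct: small loss.
low = cross_entropy([0.9, 0.1], [1, 0])
# Confident and wrong: large loss.
high = cross_entropy([0.1, 0.9], [1, 0])
```

Minimizing this quantity over the training set pushes the 0-to-1 output toward the correct label.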

In designing the neural network, I had to redesign how the model processed data multiple times in order to maximize the time saved. At first I sought only to mitigate training time, creating successive iterations that required less and less reprocessing. Finally, I realized that reprocessing and training were in some cases extraneous, and found a way to cut those processes completely while making sure new data wasn’t ignored by the model. Although I ran full tests on all of the iterations, only the final model’s data is shown.


My final prototype consisted of three parts:

1. a neural network that takes in raw light curves and Robovetter data and outputs a confidence that a signal is a candidate;

2. an update program, which packages new input data in the same format as the original dataset, skipping the original preprocessing stage altogether, and further trains the model;

3. a reverse-flow pipeline, meant for incorporation into other transit photometry projects, which can digest different types of photometry (comparative, non-visible, typical) to better train the model on high-interference images.

Ultimately, my prototype’s accuracy differed negligibly from the benchmark, which is significant considering how accurate the benchmark is. Averaged over 10 trials each, with no outliers, these results are even more robust. A cross-section of the metrics I collected for each model is shown in Fig 1.

Fig 1

Meanwhile, I found that preprocessing accounted for roughly nine-tenths of the original neural network’s training time. Thus, my reversible self-updating software could reduce training time on future runs to as little as one-tenth of the original. Whereas the original would have to reprocess all of its previous data as well as the new data on every future run to maintain consistency and integrity of code and results, my newly designed format cut that time down by 82 percent on average. In addition, training graphs for the PLANET-NOONS runs (Fig 2) closely resemble those of the Shallue and Vanderburg runs, which means the different packing method did not significantly interfere with training or training logs, important for transparency and troubleshooting. My results indicate that the current program for detecting exoplanets would greatly benefit from lateral integration of a neural network containing my module throughout ongoing transit identification projects.

Fig 2

In addition, to demonstrate the effectiveness of my model on additional data, I ran it on raw Kepler data, searching for possible exoplanet candidates. By doing this, I confirmed 26 exoplanet candidates which Shallue & Vanderburg (2018) had discarded, while also using the investigation techniques detailed in Jenkins (2017) to further confirm samples (Fig 3).

Fig 3

I also found other possible candidates, but out of an abundance of caution I discarded them in favor of targets with larger numbers of already-confirmed planets. To be doubly cautious about confirmation, I also enlisted the help of University of Florida researchers and professors, in addition to my personal findings, the findings of my neural network, and the limited findings in Shallue & Vanderburg (2018). Through their own investigation and a neural network of their own design, these researchers found that the reported signals are indeed candidates, and I am currently working to further verify these exoplanets.


My research demonstrated the usefulness of a self-updating neural network in identifying exoplanets, comparing its advantages and disadvantages with those of Robovetter, the current sorting algorithm, which is not self-updating. Its main advantages over Robovetter and other neural-network solutions such as Autovetter are as follows:

1. Other neural networks have focused on individual targets, but my module allows for a wide-binned acceptor, which makes it as well suited to the Kepler processing pipeline as Robovetter.

2. By using a shuffled injection system, I am able to minimize the effect on the neural-network architecture development while still upgrading the system to become self-adapting and continuously growing. At the same time, the injection system massively reduces the downtime between data injection and adaptation.

3. Finally, the module makes the entire network reverse-feedable, so that non-neural-network and non-TensorFlow programs can still feed their data into the system once the columnar features have been calculated, allowing and suggesting the creation of a universal database of expanding light-curve datasets as an investment in the extension.
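The reverse-feed idea in point 3 can be pictured as a small adapter that converts an external project's photometry, whatever its origin, into the shared column format before injection. The column names below are hypothetical, chosen only for illustration.

```python
def to_feature_columns(flux, period, duration, source):
    """Normalize an external light curve into a shared column format.

    `source` tags where the data came from (e.g. "comparative",
    "non-visible", "typical") so provenance is preserved downstream.
    """
    mean = sum(flux) / len(flux)
    return {
        "flux_norm": [f / mean for f in flux],  # unit-mean flux
        "period": period,
        "duration": duration,
        "source": source,
    }
```

Once every contributing project emits the same columns, their datasets can be pooled into the universal light-curve database the module suggests.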

In addition, as a proof of process, I established 26 new candidates that have a high chance of being exoplanets but that were overlooked by Robovetter and previous research into this topic. Continued improvements might be able to recontextualize data that was automatically rejected by the Robovetter, as well as prove extremely useful as a tool for both future civilian and academic research.

With the understanding that the self-updating algorithm does in fact work and attaches to the Kepler output pipeline efficiently, I propose that the neural network be used as a universal feed tray, with both academic and civilian research that passes a quality threshold creating a near-constant feed of information to continually strengthen the network for use in both academic and civilian research projects. My design greatly expands the uses of a neural network in observational astronomy while preserving the accuracy and integrity of data found in previous studies. It also serves as a bridge between civilian and academic research projects, with the potential to greatly improve the model’s understanding and accuracy in diverse situations, such as low signal-to-noise scenarios.

The next step is to begin adapting the algorithm to different forms of transit detection. One example is comparative photometry, in which two stars similar in brightness and variability are observed simultaneously, so that a change in one that is not echoed in the other may mark a transit. This self-updating neural network could also be applied in fields such as taxonomy, where neural networks might classify species or even bridge evolutionary trees. Group sociology and population biology could also benefit from constantly updating neural networks, as estimates of human behavior and genetic traits could become better honed as the human population grows and more data becomes available.
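The comparative-photometry check described above can be sketched as a differential test: dividing the target's flux by the comparison star's cancels variation shared by both (weather, airmass), leaving only changes unique to the target. The threshold below is an arbitrary illustrative choice.

```python
def differential_dips(target_flux, comparison_flux, threshold=0.02):
    """Return indices where the target dims relative to the comparison star."""
    ratios = [t / c for t, c in zip(target_flux, comparison_flux)]
    baseline = sum(ratios) / len(ratios)
    return [i for i, r in enumerate(ratios) if baseline - r > threshold]

# Both stars brighten together at index 1 (a shared effect), but only
# the target dips at index 2, which is what a transit would look like.
target =     [1.00, 1.10, 1.04, 1.10, 1.00]
comparison = [1.00, 1.10, 1.10, 1.10, 1.00]
```

Here only index 2 is flagged, since the shared brightening at index 1 divides out of the ratio.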

About me

Over the past three years for the National History Day Contest, I’ve documented the creation of the nuclear bomb and the birth of the scientific method, both of which required me to get inside the minds of scientists to understand how they thought. I quickly realized that trying to think like a scientist came naturally to me. Just like the scientists who had envisioned a world powered by splitting atoms, I’ve come to realize that what I want most from life is a chance to be at the forefront of the field, to see things that literally need to be seen to be believed, and to introduce them to humanity.

The scientist who has had the greatest influence on me is Professor Smecker-Hane at the University of California, Irvine, whose work increasing minority and female participation in astronomy resonated with me as the perfect intersection of interest and social justice. I was able to meet Dr. Smecker-Hane, and it was in our conversations that she inspired me to continue with my research project, even though the astronomy research camp I had been scheduled to attend had been cancelled at the last minute due to funding problems. I have already greatly benefitted from my project, gaining first-hand professional experience, but winning the Google Science Fair would mean that all of the extra work had been worth it, and I would use the prizes to further my research career and continue the project.

Health & Safety

I did not work under a registered laboratory, and did not utilize university resources throughout my project. I also did not have a mentor to guide me through my project.

Bibliography, references, and acknowledgements

Abadi, M., Barham, P., Chen, J., et al. 2016, TensorFlow: A System for Large-Scale Machine Learning, in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (Berkeley, CA: USENIX Association), 265–283.


Akeson, R. L., Chen, X., Ciardi, D., et al. 2013, Publications of the Astronomical Society of the Pacific, 125, 989.


Armstrong, D. J., Pollacco, D., & Santerne, A. 2017, Monthly Notices of the Royal Astronomical Society, 465, 2634.


Bellinger, E. P., Angelou, G. C., Hekker, S., et al. 2016, The Astrophysical Journal, 830, 31,  ArXiv: 1607.02137.


Benedict, G. F., McArthur, B. E., Forveille, T., et al. 2002, 581, 4.


Bond, I. A., Udalski, A., Jaroszynski, M., et al. 2004, The Astrophysical Journal, 606, L155, arXiv: astro-ph/0404309.


Boyle, W. S., & Smith, G. E. 1970, The Bell System Technical Journal, 49, 587.


Brock, A., Lim, T., Ritchie, J. M., & Weston, N. 2017, arXiv:1706.04983 [cs, stat], arXiv: 1706.04983.


Bryson, S. T., Jenkins, J. M., Gilliland, R. L., et al. 2013, Publications of the Astronomical Society of the Pacific, 125, 889.


Carleo, I., Benatti, S., Lanza, A. F., et al. 2018, Astronomy & Astrophysics, 613, A50, arXiv: 1805.01281.


Catanzarite, J., Jenkins, J. M., McCauliff, S. D., et al. 2015, IAU General Assembly, 29, 2255463. https://ui.adsabs.


Charbonneau, D., Brown, T. M., Latham, D. W., & Mayor, M. 2000, The Astrophysical Journal, 529, L45, arXiv: astro-ph/9911436.


Chauvin, G., Lagrange, A.-M., Dumas, C., et al. 2004, Astronomy & Astrophysics, 425, L29, arXiv: astro-ph/0409323.


Dünner, C., Parnell, T., Sarigiannis, D., et al. 2018, arXiv:1803.06333 [cs], arXiv: 1803.06333.


Funahashi, K.-I. 1989, Neural Networks, 2, 183.


Hinners, T. A., Tat, K., & Thorp, R. 2018, The Astronomical Journal, 156, 7.


Huang, G., Sun, Y., Liu, Z., Sedra, D., & Weinberger, K. 2016, arXiv:1603.09382 [cs], arXiv: 1603.09382.


Goodfellow, I., Bengio, Y., & Courville, A. 2016, Deep Learning (MIT Press).


Jenkins, J. M. 2017.


Kingma, D. P., & Ba, J. 2014, arXiv:1412.6980 [cs], arXiv:1412.6980.


Longo, G., Donalek, C., Raiconi, G., et al. 2004, in Toward an International Virtual Observatory, ESO ASTROPHYSICS SYMPOSIA (Springer, Berlin, Heidelberg), 202–213.


Mayor, M., & Queloz, D. 1995, IAUC 6251: 51 Peg; C/1995 Q1


McCauliff, S. D., Jenkins, J. M., Catanzarite, J., et al. 2015, The Astrophysical Journal, 806, 6.


NASA Exoplanet Science Institute. 2018.


Brennen, P. 2018.


Schwarz, G. 1978, The Annals of Statistics, 6, 461.


Shallue, C. J., & Vanderburg, A. 2018, The Astronomical Journal, 155, 94.


Thompson, S. E., Mullally, F., Coughlin, J., et al. 2015, The Astrophysical Journal, 812, 46.


Thompson, S. E., Coughlin, J. L., Hoffman, K., et al. 2018, The Astrophysical Journal Supplement Series, 235, 38, arXiv: 1710.06758.


Udalski, A., Zebrun, K., Szymanski, M., et al. 2002, Acta Astronomica, 52, 115.


Widrow, B., & Hoff, M. E. 1988 (Cambridge, MA, USA: MIT Press), 123–134.


I’d like to thank a few people who have been indispensable in their help to my project.

Thank you to Dr. Susan Mullally at the SETI Institute for helping me find the signals and simulated signals that I was looking for, as well as telling me what is and isn’t possible to reconstruct from the NASA public archives.


Thank you to Professor Smecker-Hane at University of California, Irvine, for allowing me to sit in on your classes and talk with you about your research experience and your experience with your diversity programs. You really inspired me to keep going on my project, as well as to grow up to be a scientist that stands up for social justice.


Thank you to Professor Jian Ge at the University of Florida, for following up on the candidates I suggested and independently confirming them, establishing 26 new exoplanet candidates even under an abundance of caution.


Thank you to my entire family for bearing with me after countless long nights and listening to all my crazy ideas.