High-beam - a Major Concern
Driving at night has become highly stressful for most drivers, especially in India. I conducted an online survey of 189 people in Bangalore, who indicated that high beam from an oncoming vehicle was their biggest road-safety concern at night. The sudden intensity of light blinds a driver for a few moments, which causes most of these accidents. There were around 5,000 registered cases in the last six months alone in my state. A little care could have reduced this number, making roads a lot safer.
Rephrasing an old quote - "If We Can Detect It, We Can Control It". I took this as my guiding principle and worked on improving accuracy along the way, while keeping the solution viable for practical use.
LightSafe analyses vehicular traffic on the road using computer-vision algorithms, and makes sure a car switches to low beam when light from an oncoming vehicle is detected. When this is replicated across vehicles, the road becomes safer for everyone. I am approaching the concerned authorities to help deploy this.
To make adoption easier, I have designed a "retrofittable solution" that any manufacturer can support, irrespective of a car's year of manufacture, while letting drivers simply plug in and use it.
I started with a mechanical prototype, built from a Raspberry Pi, a camera, a servo motor and a few Lego bricks, to prove that the concept works. I then enhanced it to run on an Android smartphone, which communicates wirelessly with the car's controller to switch the light beam electronically.
Can we make night driving safer under high-beam conditions, with a solution that everyone on the road could adopt?
Context & Problem
Night driving is becoming risky for most people, raising their stress levels on the road today. The major factor responsible is the inappropriate use of high beam on vehicle headlights. The sudden intensity of light blinds a driver for a few moments, causing most of these accidents.
Although car manufacturers are becoming aware of this problem, only a few have started offering a solution recently, and only in the high-end car segment. Unfortunately that does not help much, as it is generally the car coming from the opposite direction that causes the accident. Hence, we need a solution that can be adopted by "everyone" driving on the road to make driving safer for all.
To make adoption easy, one needs an affordable solution that is easy to install on any car, irrespective of its manufacturer or year of manufacture. Thus I developed LightSafe, which uses a smartphone (which most drivers already have) to detect and analyze the presence of an oncoming vehicle and control the car's light beam accordingly. To make sure it works consistently, I tested it on different roads with varied conditions.
Leading car manufacturers like BMW and Toyota have started research on similar solutions and have fitted them on a few of their high-end models. These solutions are sophisticated and quite reliable, and the technology helps to a certain extent. But it doesn't resolve the basic issue - an oncoming car's high beam - as that is not within their control.
To make it safer for all, we need a solution that every car on the road could easily adopt, irrespective of its manufacturer or year of manufacture. Even then, adoption could be limited unless the solution is affordable for everyone, without compromising on precision: it must control the light beam when needed, do so automatically without any human intervention, and revert once the oncoming car has passed.
I took a simple 3-step process to solve this problem:
But the challenge was to identify the right technologies for the purpose, so that everyone could afford it easily.
I interviewed a few cab drivers, to learn from people who spend most of their time driving on the road. 9 out of 13 stated that high beam was the most stressful aspect of driving at night, and one that put their lives at risk. I followed this with an online survey in school (results given in the Summary), which convinced me that I was addressing a much-needed problem.
I researched different types of sensors:
On-road sensors had the following issues:
For instance, magnetic sensors would need a huge investment from the government to install, and the time required to cover the various roads would be too long.
On-board sensors offered a comparatively more viable solution. I explored the different sensors to understand their pros and cons. Light sensors were the first choice for obvious reasons, but they had their own drawbacks. Microphones were my next choice, but I soon realized that, due to the ambient noise around, they could not isolate the sound of an oncoming vehicle. I also tried options like passive infrared sensors. The results obtained from a camera encouraged me to develop the solution on top of it.
From here on, my focus was on vision algorithms that execute quickly even on a basic smartphone. Pixel-detection models weren't accurate, so I started exploring possible alternatives. Contour detection finally gave the most accurate results, after some optimization - such as accurate detection of high beam. I also realized that a car only needs to maintain low beam, irrespective of the state of the opposite car's beam, which further improved my results. And by removing objects unrelated to vehicles - limiting my processing to the bottom-right quadrant of the frame (for right-hand-drive vehicles in India) - I could eliminate many false positives that were occurring due to light emitted from various objects on the road.
Light sensor: I used Google's Science Journal (GSJ) app for testing. I observed that light from the headlights was scattered, so the sensor did not produce accurate results most of the time. On further testing, I found that the values only went up when the sensor was very close to the light source (the headlight).
Microphone: I used the GSJ app for testing. The results showed that a regular microphone picked up various external noises apart from the car's own, which made the microphone infeasible.
Camera: When I tested it, the results were fairly encouraging: I could process the video stream from an IP webcam and detect blotches of light to identify a beam. It did this consistently, giving accurate results.
I started with detection of high beam from oncoming vehicles, so that the driven car switches its beam to low when one is detected. On further analysis, I realized this could be simplified: switch to low irrespective of the opposite vehicle's light condition. Hence, I changed the algorithm to detect any kind of vehicular light, and to revert to the previous state when no vehicle is detected.
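The switching logic described above can be sketched as a small state machine. This is an illustrative reconstruction, not my exact implementation; in particular, the number of clear frames required before reverting is an assumed value.

```python
class BeamController:
    """Keep the beam low while any oncoming light is detected, and
    revert to the previous beam state only after the road has been
    clear for several consecutive frames (to avoid flicker)."""

    def __init__(self, clear_frames_needed=15):
        # Assumed value: roughly half a second of clear road at 30 fps.
        self.clear_frames_needed = clear_frames_needed
        self.clear_count = 0
        self.beam = "high"           # current beam state
        self.previous_beam = "high"  # state to revert to once clear

    def update(self, vehicle_detected):
        """Feed one detection result per frame; returns the beam state."""
        if vehicle_detected:
            self.clear_count = 0
            if self.beam != "low":
                self.previous_beam = self.beam
                self.beam = "low"
        else:
            self.clear_count += 1
            if self.clear_count >= self.clear_frames_needed:
                self.beam = self.previous_beam
        return self.beam
```

The hold-off counter matters because a headlight can momentarily disappear behind glare or a dip in the road; reverting on a single clear frame would make the beam oscillate.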
Computer Vision Algorithm
For processing the video stream, I used OpenCV, programming in Python. I started by setting up an IP camera, which gave access to the captured stream. This output was fed into the processing unit, where my algorithm runs to detect contours around beaming headlights, which appear as white blotches.
I used a Raspberry Pi for processing initially, and built the complete vision algorithm on it. Once I was sure it worked, I ported it to an Android smartphone with minor modifications, as most users already possess that device.
During testing, I found a few false positives where the system would wrongly infer the presence of a car:
To fix them, I restricted detection to light coming from the bottom-right quadrant of the frame (for right-hand-drive cars in India), which considerably improved accuracy, to approximately 91% under city-traffic conditions.
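Restricting detection to that quadrant is a simple crop of the frame before any further processing. A sketch, assuming the frame arrives as a NumPy array as OpenCV provides it:

```python
import numpy as np

def bottom_right_quadrant(frame):
    """Crop a frame (H x W x channels array) to its bottom-right
    quadrant, where oncoming traffic appears for right-hand-drive
    cars in India."""
    h, w = frame.shape[:2]
    return frame[h // 2:, w // 2:]
```

Because slicing drops three quarters of the pixels before contour detection runs, it also cuts processing time, which matters on a basic smartphone.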
In the beginning my focus was on the detection algorithm, with a simpler control mechanism for the car's lights. When an opposite car's light was detected, the algorithm pushed the car's light lever downwards - using a mechanical Lego arm - to set the beam to low.
When I realized that cars are electronically controlled, I started exploring that approach, as it makes controlling the car's lights seamless, with no moving parts around the driver. All it required was a wireless OBD (On-Board Diagnostics) controller plugged into the car's OBD interface. After processing, the smartphone sends a specific control code securely over Bluetooth to switch the light beam accordingly. These control codes are highly restricted, so I could only test this within an auto manufacturer's office premises.
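Since the real control codes are manufacturer-specific and restricted, the sketch below uses stand-in values; it only illustrates the shape of the exchange. The framing scheme, the code value and the transport interface are all assumptions, not the actual protocol.

```python
def frame_command(control_code: int) -> bytes:
    """Wrap a (hypothetical) light-beam control code in a simple
    frame: start byte, 2-byte code, one-byte XOR checksum, end byte."""
    payload = control_code.to_bytes(2, "big")
    checksum = 0
    for b in payload:
        checksum ^= b
    return b"\x02" + payload + bytes([checksum]) + b"\x03"

def send_low_beam(transport, low_beam_code=0x0101):
    """Send the low-beam command over an already-paired Bluetooth
    transport (anything exposing a .write() method). The code value
    here is a placeholder, not a real manufacturer code."""
    transport.write(frame_command(low_beam_code))
```

On Android the transport would be a Bluetooth socket to the OBD dongle; the structure above just shows how the detection result turns into a single framed write.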
With proper approvals from the government, auto manufacturers could use these codes to support the solution. The codes are also unique to each manufacturer.
The project depends heavily on detection of the oncoming car, so I wanted to optimize it for:
I went through multiple iterations of development (around 9 in all), and each iteration raised a set of interesting challenges. I fine-tuned my algorithm - improving cropping, blurring, etc. - to detect light objects accurately, and handled subtle false positives such as a tail lamp, building light or street lamp being identified as a car light. Control was comparatively simpler (as described in the Method/Testing and Redesign section). I was finally able to achieve an accuracy of approximately 91% in city-traffic conditions. A few of the major observations are given below.
Handling light using a basic camera
Since I was targeting mass-market adoption, the solution had to work on a smartphone with a basic camera. Hence the (rear) camera I tested on was 12 megapixels - the average resolution of a smartphone camera available in the market today. After multiple observations, I identified the thresholds within which a car's headlight could be detected accurately, and fixed them to the RGB range [240, 240, 240] to [255, 255, 255]. All processing was performed using the OpenCV library; beyond that, contour formation was the most critical aspect.
Accuracy vs Distance
Graph in Fig.13 shows the change in accuracy with respect to distance from the vehicle.
Accuracy vs Camera Resolution
Graph in Fig.14 shows the accuracy with respect to different camera resolutions.
Camera orientation and its impact on accuracy
The angle of the camera made a difference in both the horizontal and vertical directions. Horizontally, I was filtering light from a specific area of the video frame - the bottom-right quadrant - to detect the oncoming car's light accurately. For this purpose the steering wheel was my reference, and I stuck a smartphone holder on the car's dashboard to keep the phone in place. The other aspect was the vertical angle relative to the car's height, which affects how well the camera covers objects in that quadrant while avoiding the car's bonnet. This helped capture the video stream accurately without hampering its viewing angle.
Effect of contour size/threshold on the car's distance
The size of the contour and the RGB threshold values gave me some insight into the distance of the car. A smaller contour with lower RGB values indicated that the car was at a distance, while a larger contour with stronger RGB values indicated a car closer to the camera (and hence to the car it is mounted in). This was a good observation, but I wanted to study its consistency before incorporating it into a future implementation.
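This observation suggests a rough proximity heuristic. The area and intensity cut-offs below are illustrative assumptions, not calibrated values from my testing:

```python
def estimate_proximity(contour_area, mean_intensity):
    """Bucket an oncoming car's distance from its headlight blotch.
    A large, intense blotch means the car is near; a small, dimmer
    one means it is far. Cut-offs are assumed illustrative values."""
    if contour_area > 500 and mean_intensity > 250:
        return "near"
    if contour_area > 100:
        return "mid"
    return "far"
```

A proximity estimate like this could later be used to switch the beam earlier for fast-approaching cars, once the consistency of the contour-size relationship is confirmed.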
These observations helped enhance the system's speed and improve its accuracy, making it usable for practical purposes.
I started with a problem commonly encountered by a large number of people - the high-beam menace - which causes the highest number of casualties in night-time driving. My aim was to create a solution that could be easily adopted by "everyone". The reason is simple: we can have a true solution in place only when all of us use it. The problem being one of light, a light sensor was my obvious choice. But I soon realized the problem wasn't as simple as I had initially thought, and it needed more work to obtain better accuracy while remaining practically affordable.
LightSafe, being a solution based on computer vision, provides far more accurate results than the basic-sensor approaches. Using the processing power available in today's smartphones, I came up with an algorithm that processes the visual information fast enough to conclude the presence of a vehicle in the opposite direction and successfully switch the state of the light in the moving car. Though it took a few iterations to refine the solution, I achieved a decent accuracy of around 91% under normal city-driving conditions, verified through extensive testing of around 10 hours of driving on the road.
As I started on this problem, I soon realized that it also needed the cooperation of various other parties to be successful. Here are some of the key factors I focused on for each of them:
I started with simple solutions, but soon realized that, to get the results I desired, I had to dig deep into computer vision. Understanding these concepts and using OpenCV was my biggest learning; I also understand auto electronics better now. Testing for long hours has made me understand why autonomous cars take so long to develop.
The major work going ahead is to get the cooperation of various car manufacturers, convince them of the solution's importance, and test it on each of their vehicles. Secondly, I am trying to connect with government officials through a few organizations concerned with road safety, and I will continue demonstrating the solution and discussing it with them in the coming months. After further testing, I would like to make this an open platform so that manufacturers can embed their own codes securely, helping faster adoption and making our roads safer.
I'm Rishank, a 9th grader at the National Public School, Indiranagar, in Bangalore, India. I love coding, building robots and solving puzzles. My interest in STEM started when I was a kid: my dad used to bring me various Lego sets, and I spent long hours constructing them to create interesting models. Later my brother motivated me to participate in a few robotics competitions such as WRO and FLL, using the Lego Mindstorms EV3 robotics set. I have also participated in various science and robotics competitions, winning quite a few.
My hobbies include cycling, playing the piano, origami and playing with a yoyo. I cycle regularly and go on long rides. I recently completed my grade-5 piano exam with distinction from Trinity. I continuously try out newer origami models and create a few of my own. Yoyo-ing is a fun activity I share with my friends, showing them new tricks.
My idols include Elon Musk and Steve Jobs, who have innovated and revolutionized people's lifestyles, and I want to innovate simple things that can change the world for the better. That's how I got interested in the Google Science Fair and developed this solution. Winning this competition would hugely help in reaching more people, saving them from the stress of night driving and making our roads a lot safer. I am continuously trying to find ways to enhance this solution, and reaching out to more people who can help me deploy it practically.
Only dry hands were to be used on the Raspberry Pi
Insulated rubber gloves were to be used on the Raspberry Pi
No liquids were to be brought next to LightSafe
I would sincerely like to thank my parents for their support, without which this project would not have been possible. I would also like to thank my brother for helping me with testing and proofreading.