Sometimes, a driver is distracted and fails to spot a red light, blowing right through it.
It could be that the driver isn't distracted at all, but is instead in an unfamiliar locale and fails to spot where the traffic signal has been placed.
Another possibility is that the red light is somehow obscured, perhaps by heavy rain or falling snow, either of which can make a traffic signal harder for the driver to see.
Scarily, some drivers are intoxicated and thus might or might not detect a red light, and even if they do see it, their impaired state might keep them from taking the proper action and coming to a stop.
Some drivers like to play a game of chicken with red lights: upon seeing a red light up ahead, the driver stays in motion when they really ought to be slowing down, believing they can time things so they comfortably arrive at the light once it turns green and won't need to come to a stop (a daily judgment call, for many).
In some cases they judge wrong, and end up either partway into the intersection or deciding to just go for it and drive through the red light entirely.
Then there are the outright scofflaws.
These drivers don’t care that the light is red.
They are willing to rush through a red light as though it were a green light.
One popular excuse is that there wasn't any other cross-traffic (that they could see), so why come to a stop, they exhort; besides, it wastes energy to stop and then get underway again (those crying tears over such wasted energy are unlikely to be devoted savers of energy in any other respect of their existence, by the way).
You can add to the list of reasons for knowingly not stopping at a red light the driver's fervent belief that they won't get caught.
In other words, it's one thing to run a red light, break the law, and get caught doing so; it's quite another when you're fairly sure you won't be caught committing the illegal act.
Sadly, some people are guided more by their perceived probability of being caught than by whether their driving generates ill-advised risks that could demonstrably harm or kill themselves or others (including their passengers, nearby pedestrians, and the drivers and passengers of other cars).
You might be the safest driver out there, and yet you know that at any time, at any place where there’s a traffic signal, other drivers might be misjudging or purposely flouting a red light, and could readily smash into your car.
There's not much you can do, other than endlessly watch the actions of other drivers and hope you'll be lucky enough, and quick enough, to spot a red-light hoodlum and avoid their adverse driving antics.
Oddly enough, there aren't as many red-light deadly outcomes per year as you might naturally assume.
On average, about 2 to 3 people are killed per day in red-light running incidents, so picture that a loved one or someone you know could be caught up in a red-light fatality.
I am loath to say it, but the odds of being killed by a red-light thug are relatively low (my hesitation is that I don't want the idiots running red lights to somehow interpret that stat as license to carry on with their terrifying and dastardly acts of red-light destruction).
All in all, given the number of miles we collectively drive and the number of traffic signals you might encounter on a daily trip, there aren't as many red-light running wipe-outs as there could be.
Once again, be careful in interpreting that aspect.
I'm betting that we all see or experience a dangerous red-light act fairly often.
By luck of the draw, or maybe due to other circumstances, those red-light crazies aren't continually producing deadly roadway incidents.
In fact, apparently about 40% of all drivers believe that if they did run a red light, it would be unlikely they'd get caught.
That last statistic makes sense, since getting caught running a red light typically requires a police car to be at the same intersection at the same moment, along with the officer realizing you've done so, which can be tricky to spot at times.
If you've ever wondered why some cities use red-light cameras to try to catch red-light evildoers, perhaps the fact that some 40% of drivers believe they won't get caught showcases the logic: if someone realizes a camera might catch them, it could deter those malcontents contemplating such an action (though oftentimes these miscreants don't even notice the camera, and thus it becomes an after-the-fact lesson rather than a preventative cure per se).
Estimates are that you might end up sitting at red lights for about 60 hours per year (adding up all the times in a year that you stop at a red light).
The typical traffic signal has a cycle time of roughly two minutes, meaning it goes through the full sequence of green, yellow, and red in about that span (this varies quite a bit; some locations have cycles of 90 seconds, or perhaps three minutes, four minutes, and so on).
A rule of thumb is that the yellow light usually lasts around 3 to 6 seconds (again, this varies).
Thus, in theory, the green light and red light split the rest of the two minutes or so cycle time.
Sometimes, the green light gets the greater proportion, while in certain intersections or particular times of the day, the red light gets the larger proportion of the cycle time.
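The timing arithmetic above can be sketched in a few lines; note that these are illustrative numbers only, since, as mentioned, real cycle lengths and phase splits vary widely by intersection and time of day.

```python
# Back-of-the-envelope traffic signal timing math (illustrative only;
# actual cycle lengths and phase splits vary by intersection).

def phase_split(cycle_s: float, yellow_s: float, green_fraction: float):
    """Split the non-yellow portion of a cycle between green and red."""
    remainder = cycle_s - yellow_s
    green = remainder * green_fraction
    red = remainder - green
    return green, yellow_s, red

# A nominal 120-second cycle with a 4-second yellow, split evenly:
green, yellow, red = phase_split(120, 4, 0.5)
print(green, yellow, red)  # 58.0 4 58.0

# Rough annual wait: say 10 red-light stops a day averaging a minute
# each comes to about 60 hours per year, matching the estimate above.
hours_per_year = 10 * 1 * 365 / 60
print(round(hours_per_year, 1))  # 60.8
```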
Why all this discussion about the nature of red lights?
The latest news reports claim that Tesla is readying an Autopilot update that will include a red-light auto-stopping feature.
But real-life is not always so easily swayed or overcome.
There are a lot of gotchas, and this coming update, if indeed it is on the verge of being released, could be a bad deal.
There are lots of ways that this can go wrong, horrifically so.
Not only could this harm people, it also has the potential to create a backlash against Tesla cars, and that backlash could spill over onto all efforts underway to craft and field AI-based true self-driving cars.
Here’s today’s question: “Will the advent of a Tesla Autopilot update that includes a red-light auto-stopping feature have potentially adverse consequences and what might those be precipitated by?”
True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Auto-Stopping
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Existing Teslas are not Level 4, nor are they Level 5.
With a true self-driving car (Level 4 or Level 5), one driven solely by the AI, there is no need for a human driver and indeed no co-sharing of the driving between the AI and a human.
The twist that's going to mess everyone up is that the AI might seem able to drive the Level 2 car when in fact it cannot, and thus the human driver must remain attentive and act as though they are driving the car.
Consider how this applies to red lights.
You are driving a car and there is a red light up ahead.
The smiley-face version of this scenario is that the car detects the red light and, upon detection, brings itself to a smooth and proper stop with time to spare.
The human driver didn’t have to take any action.
Score one point.
Imagine though that the car fails to detect the red light.
Presumably, the human driver is paying rapt attention, realizes that the red-light detection has gone awry for whatever reason, and now brings the car to a stop for the red light.
Will human drivers actually maintain that required rapt attention, though?
If you relied upon the red-light auto-stopping feature and it successfully worked say ten times in a row, what impulse or reaction might you have as a human driver on subsequent red lights?
Of course, you’d begin to assume that the red-light auto-stopper will always save your bacon.
You can bet that some drivers will believe so resolutely in the auto-stopper that they'll readily take their eyes off the road, their hands off the wheel, and their feet off the pedals.
Maybe this will be sufficient for, say, 90% of the time, or for those staunch believers let's say even 99% of the time; you have to ask, what about the 10% or the 1% of the time that the auto-stopper doesn't work right?
Time to score a minus point, likely make it several minus points.
I'm sure some will retort that there's no reason to believe the auto-stopper won't work all of the time.
Consider, though, what can go wrong: suppose the red light itself is obscured in some manner and not readily detected by the car?
Or suppose there is a red light, but the car's system fails to realize that it is the red light of a traffic signal.
Keep in mind that when driving on a busy road that is in a downtown area, there are a lot of other competing red lights that have nothing to do with the traffic signals.
You might wonder: if that's the case, why would a true self-driving car be any better, since it would presumably have the same chances of fouling up (and with a Level 2 car, at least the human driver is there as a means to step in)?
First, this is exactly why progress toward public-roadway-ready Level 4 and Level 5 self-driving cars is slow going and a slugfest (by and large, a human safety driver currently sits in the driver's seat, purposely monitoring the car, presumably ready to take over, and in theory alert at all times, unlike a conventional human driver).
The true self-driving car needs to be right, all of the time.
That’s a high bar.
Secondly, many anticipate that true self-driving cars will drive only in designated areas, called an Operational Design Domain (ODD), which basically defines the scope of where the self-driving car is able to drive.
Thus, you might have a Level 4 self-driving car that is set up to drive in a downtown area, during daylight, and not in inclement weather.
If those conditions aren’t met, the self-driving car won’t try driving, since it would be doing so outside of its allowed bounds.
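The gating logic just described can be sketched as a simple permission check; the condition names here are hypothetical stand-ins, and a real ODD definition is far richer (road classes, speed limits, geofences, and more):

```python
# A minimal sketch of ODD gating for the hypothetical Level 4 example
# above (downtown, daylight, no inclement weather). Condition names
# are illustrative, not from any actual ODD specification.

from dataclasses import dataclass

@dataclass
class Conditions:
    area: str
    is_daylight: bool
    inclement_weather: bool

def within_odd(c: Conditions) -> bool:
    """The vehicle is permitted to drive only inside its design domain."""
    return (c.area == "downtown"
            and c.is_daylight
            and not c.inclement_weather)

print(within_odd(Conditions("downtown", True, False)))  # True
print(within_odd(Conditions("downtown", True, True)))   # False: bad weather
```

Outside the domain, the vehicle simply declines to drive, which is precisely how Level 4 sidesteps conditions it can't yet handle.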
In addition, many self-driving car developers aim to have detailed pre-mapped indications of where all the traffic signals are in the designated locale, which increases the chances of detecting a traffic signal and reduces the risk of mistaking something else for one.
Furthermore, via V2I (vehicle-to-infrastructure) capabilities, roadway infrastructure such as traffic signals, bridges, and railroad crossings will eventually be equipped with electronic devices that broadcast their status, and self-driving cars will be equipped with corresponding V2I features to pick up those signals and use the information accordingly.
Thus, in the future, a traffic signal will likely emit an electronic signal saying it is red, or green, or yellow, and the self-driving car won’t necessarily need to visually detect the traffic signal (or, do both, double-checking the V2I with a visual look-see).
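That double-checking idea can be sketched as a conservative fusion rule; the state names and fields here are hypothetical illustrations, not any actual V2I message format:

```python
# A sketch of fusing a (hypothetical) V2I broadcast with the camera's
# visual classification of a traffic signal. Disagreement or missing
# data is treated as red, i.e., the fail-safe choice is to stop.

from typing import Optional

def signal_state(v2i_state: Optional[str],
                 visual_state: Optional[str]) -> str:
    """Return the fused signal state, stopping when sources conflict."""
    if v2i_state is not None and v2i_state == visual_state:
        return v2i_state      # both sources agree
    if v2i_state is None and visual_state is not None:
        return visual_state   # no broadcast nearby; rely on vision
    return "red"              # conflict or no data: fail safe

print(signal_state("green", "green"))  # green
print(signal_state("green", "red"))    # red (conservative on conflict)
```

The design choice worth noting is the asymmetry: the fused answer only ever errs toward stopping, since a wrong "green" is far costlier than a wrong "red".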
All of that is going to bolster the advent of self-driving cars.
That’s not where things are today.
So, let’s get back to the Level 2 cars.
A Level 2 car that gets equipped with a red-light auto-stopping feature is asking for trouble.
And, in case you doubt that assertion, here’s something you can bet your bottom dollar on.
When an incident happens in which a Level 2 car fails to stop for a red light and someone gets harmed or killed, the maker of the Level 2 car is going to say that it was unfortunate, but that in the end the human driver was at fault.
Some really vigorous fans of a Level 2 car might say, hey, if a driver of a Level 2 wants to take a chance and use the red-light auto-stopper, and they get killed, it’s on them.
Recall that a red-light incident isn’t going to only endanger the driver of the car, it also endangers the passengers, and pedestrians nearby, and other drivers and their passengers.
Of the red-light deadly incidents involving our everyday conventional cars, about one-third of those killed were drivers of the offending car, while roughly two-thirds were others who got swept up in the matter.
There are some other facets to consider.
Suppose a Level 2 car with a red-light auto-stopper detects a red light that isn't a traffic signal and inadvertently classifies it as though it were.
What would the auto-stopper do?
Presumably, it will do its thing, namely, it will try to bring the car to a stop.
If this happens, and say there’s another car behind the stopping vehicle, the other driver (presumably a human) might get caught off-guard and ram into the Level 2 car that is unexpectedly coming to a halt.
Would the human driver of the car with the auto-stopper realize that the system has falsely opted to come to a stop, and if so, would the human driver be astute enough to override the action in time?
The bottom line is whether human drivers can really co-share the driving task with a red-light auto-stopper, such that the human driver will always be on their toes and able to course correct for the auto-stopper.
Plus, any such course correction has to be done on a timely basis, giving the human driver perhaps just a few seconds or a split second to decide what to do.
And if the auto-stopper hasn't done its thing, the human driver might be overly concentrating on why it didn't act, rather than resolving the red-light situation at hand.
Finally, in addition to the human-lives question, some pundits suggest that if a Level 2 car does end up implicated in causing human harm via an auto-stopper feature, the public and regulators might not comprehend why things went awry, and might instead try to put the kibosh on all efforts to craft and adopt self-driving cars altogether.
Accordingly, those worried about these potential adverse outcomes are apt to argue that a red light ought to be shined toward stopping the roll-out of such a red-light auto-stopping feature.