Optical adversarial attack can change the meaning of road signs
XorNot 2021-08-17 00:03:11 +0000 UTC [ - ]
This is a lot more interesting, though, for how it highlights the difference between current machine interpretation and human interpretation of road signs: which is to say, explainable AI is still incredibly important. The ability to slightly discolor a sign and have a black-box AI read something entirely different means the systems and training processes we have are not cueing off a "safe" set of inputs to deliver their interpretations.
From that context, this is very interesting and important work - and I would argue also points the way to at least some basic road safety standards for self-driving systems (they should be resistant to this sort of non-human perceptible visual distortion).
Sebb767 2021-08-17 00:34:20 +0000 UTC [ - ]
But most of them are easy to spot, at least in the aftermath. If you sprayed over a sign, people would notice. If you removed one, a driver would probably not directly notice, but he might notice the situation (e.g. you should not go fast in front of a school, even when the sign is missing), or you could find evidence if an accident happens.
The scary thing about these signs is that they could easily go unnoticed, especially if placed well (for example at a crossroad with already sun-bleached signs). Neither a driver (who could possibly intervene) nor an investigator would be able to spot this, unless they're specifically looking for it.
Things like this are, in my opinion, also exactly why people put the bar of trust so high for autonomous vehicles: They might fail in ways totally opaque to us. You can at least somewhat estimate what a person might do [0]; for autonomous cars, this is hard or even impossible.
[0] I know someone could be extremely drunk or on drugs and possibly also mistake a stop sign like this, but you'd usually see some warning signs. An autonomous car might go from totally normal to batshit insane without any prior indication or apparent reason.
Retric 2021-08-17 00:53:37 +0000 UTC [ - ]
Self driving cars don't put that much emphasis on street signs; they're not, for example, going to drive into a brick wall because of an arrow.
Altered speed limit signs in front of a speed trap might hit a lot of people, but the cars are unlikely to try to take a sharp turn at 70 MPH or anything.
Sebb767 2021-08-17 01:24:04 +0000 UTC [ - ]
They have to. Even the most accurate maps of speed limits have flaws and cannot adapt to dynamic limits. I'm pretty sure that if car makers could avoid the trouble of parsing signs, they would do so.
> their not for example going to drive into a brick wall because of an arrow.
Tesla's first driver death was due to the car not recognizing a white truck. It would be easy to add a fake increase in speed limit just before a white enough wall, so that the car would not be able to brake in time once the LIDAR picks the obstacle up. To the casual observer it would look like the car just decided to accelerate into a wall.
lazide 2021-08-17 02:52:20 +0000 UTC [ - ]
asdff 2021-08-17 03:34:22 +0000 UTC [ - ]
XorNot 2021-08-17 10:09:27 +0000 UTC [ - ]
I kind of understand where they're coming from: a sensor whose input you have to totally disregard under some conditions is a really bad sensor to have, because your model will get disproportionately good using the "good" input, and then you're still left with "and in a medium drizzle we need to also work off optics only".
lazide 2021-08-17 17:12:31 +0000 UTC [ - ]
We've evolved to do that because it's necessary for survival - sometimes we get dust in our eyes, or there is glare, or it's too dark to see and we don't have a light, or we're picking up on an impending problem with something around a corner (a loud fight, or whistling wind), etc.
If visual sensors and LiDAR suck in the rain, radar is great. If radar sucks in crowded road or RF noisy environments, that’s when LiDAR or visual sensors are awesome.
Even on the road this is true - rumble strips, mechanical noise, shaking, etc.
XorNot 2021-08-17 23:03:31 +0000 UTC [ - ]
RADAR isn't an adequate replacement, as several crashes have demonstrated: it can't see non-metallic objects.
That's the problem I think you're glossing over: under a whole bunch of circumstances your model is operating without an entire sense.
lazide 2021-08-18 00:58:56 +0000 UTC [ - ]
Sensor fusion weighs inputs from multiple sensors, determining (roughly) which ones are ‘more wrong’ and ignoring them, to various degrees.
LiDAR also doesn't just 'not work' at all in rain; it degrades as the rain gets heavier until it stops working, like vision, etc. Radar can also pick up non-metallic objects, and does all the time, including geese, humans, clouds, etc., depending on frequency; there are variable-frequency radars out there that can do all sorts of cool stuff, including frequency sweeping, so it's not just going to 'miss' something. There are of course issues like multipathing, but that is why you use sensor fusion and similar techniques.
All of this adds up to something much more accurate and less prone to failure than basic vision, which is what Tesla is running now. It also costs more.
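As a rough illustration of the weighting described above (a toy sketch, not any vendor's actual stack; all names and numbers are invented), inverse-variance weighting is the textbook way to down-weight a degraded sensor without switching it off entirely:

    import math

    def fuse_range_estimates(readings):
        # readings: list of (distance_m, std_dev_m), one per sensor
        # (camera, radar, lidar). A sensor degraded by rain reports a
        # larger std_dev and is automatically down-weighted; it never
        # has to be discarded outright.
        weights = [1.0 / (sigma ** 2) for _, sigma in readings]
        total = sum(weights)
        fused = sum(w * d for (d, _), w in zip(readings, weights)) / total
        return fused, math.sqrt(1.0 / total)

    # Camera nearly blind in heavy rain (huge sigma), radar barely affected:
    print(fuse_range_estimates([(42.0, 15.0), (37.5, 0.8), (38.2, 2.5)]))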
jazzyjackson 2021-08-17 03:33:09 +0000 UTC [ - ]
I honestly would push back on this: the car is more likely to recognize the arrows/lane markers and might not understand a brick wall. From what I can tell from watching the FSD ride-alongs, if the car doesn't understand something, it just ignores it.
sillycross 2021-08-17 01:27:14 +0000 UTC [ - ]
If the self driving car misinterpreted a "NO SWIMMING" sign to a "MUST TURN RIGHT" sign, I assume it's going to take the turn and drive into a lake?
NavinF 2021-08-17 10:57:49 +0000 UTC [ - ]
sillycross 2021-08-18 07:59:05 +0000 UTC [ - ]
It's common that, due to construction, temporary road signs are placed to re-route traffic. It would be a disaster if the self driving car simply ignored them because they are not on the map, I assume?
shagie 2021-08-17 01:01:26 +0000 UTC [ - ]
This is the subject of the story Car Wars by Cory Doctorow - https://web.archive.org/web/20170105065118/http://this.deaki...
dheera 2021-08-17 02:08:07 +0000 UTC [ - ]
I mean, walking up to a stop sign dangling a projector on a stick is pretty damn easy to notice.
It's also much easier to just hold up a bogus actual sign than to go through this mess, and it would be highly illegal as well.
jazzyjackson 2021-08-17 03:35:07 +0000 UTC [ - ]
(Disclaimer, I doubt this works because the adversarial image is probably based on a single frame, not the actual object in context)
Dylan16807 2021-08-17 00:46:50 +0000 UTC [ - ]
The latter could easily be part of a dumb prank, "doing nothing wrong", but cause serious damage.
This piece of tape shouldn't kill anyone: https://www.extremetech.com/wp-content/uploads/2020/02/tesla...
jcims 2021-08-17 00:18:17 +0000 UTC [ - ]
Even with humans in the loop in the training process, there's only so much context they can bring to override data that appears authentic.
asdff 2021-08-17 03:39:09 +0000 UTC [ - ]
jcims 2021-08-17 06:43:37 +0000 UTC [ - ]
hirundo 2021-08-17 00:33:29 +0000 UTC [ - ]
A suboptimal metaphor in context.
xwdv 2021-08-17 04:11:04 +0000 UTC [ - ]
newpavlov 2021-08-17 01:15:46 +0000 UTC [ - ]
I think that any massively deployed true autopilot system will maintain a global semantic map of roads. Most sudden traffic sign changes, especially those which may influence car behavior, will be manually verified and committed to the global map. Any consistent discrepancies between the model and reality will be investigated as well, especially if it causes customer complaints. And it can be followed by releasing map patches telling software to prefer the model over detection for the problematic sign. Considering that at this stage such maps will be probably integrated with government services (i.e. the map will be compared against official road data), I think such attacks on the autopilot systems will be found quite fast.
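A minimal sketch of what that reconciliation step could look like, assuming a hypothetical map format and patch list (none of this is any real autopilot's API):

    # (road, segment) -> mph; curated and government-verified, per the above.
    MAP_SPEED_LIMITS = {("I-80", 1242): 65}
    # Segments patched after a verified attack: trust the map, not the camera.
    PREFER_MAP = {("I-80", 1242)}

    def effective_speed_limit(road, segment, detected_mph, report):
        mapped = MAP_SPEED_LIMITS.get((road, segment))
        if (road, segment) in PREFER_MAP and mapped is not None:
            return mapped  # detection is known-bad at this location
        if mapped is not None and detected_mph not in (None, mapped):
            report(road, segment, detected_mph, mapped)  # queue human review
            return min(detected_mph, mapped)  # fail toward the safer value
        return detected_mph if detected_mph is not None else mapped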
barryvan 2021-08-17 01:29:23 +0000 UTC [ - ]
I'm not sure that governments would be quick enough to keep this "central roadmap" correct up-to-the-minute (to be honest, I suspect that "up-to-the-month" would be a stretch!).
newpavlov 2021-08-17 01:40:05 +0000 UTC [ - ]
>Here in AU, we also have "school zones" with speed limit signs that turn on only during certain times and dates.
Such conditions are usually fixed and do not suddenly change. Even OSM handles them today more or less fine.
barryvan 2021-08-17 01:48:52 +0000 UTC [ - ]
Smart signage would (likely) be more fragile, (likely) require charging or similar infrastructure, and would require configuration -- so that the sign can communicate in which direction of travel it applies and so on. Certainly not impossible hurdles to overcome, but the costs associated seem very high -- in terms of manufacture, maintenance, and training.
For what it's worth, I suspect that we will see FSD cars operating with infrastructure likely along the lines you outline in the not-terribly-distant future -- but I suspect that they will operate within very clearly prescribed zones, like freeways and highways.
jdavis703 2021-08-17 04:17:19 +0000 UTC [ - ]
And pranksters, vandals and petty thieves won’t make off with these cryptographically signed signage? Because this already happens for our existing dumb infrastructure.
newpavlov 2021-08-17 04:56:53 +0000 UTC [ - ]
mcosta 2021-08-17 11:09:30 +0000 UTC [ - ]
belorn 2021-08-17 09:22:11 +0000 UTC [ - ]
Cutouts of pedestrians are extremely common, especially in country areas where they want to remind drivers to slow down next to homes. Sometimes they wear vests to look a bit like traffic police, though that practice is a bit (more?) illegal.
If I remember right, when that comic came out there was a HN thread discussing exactly those things. Those tricks are not done in order to murder people, but they might confuse a computer.
dheera 2021-08-17 02:11:20 +0000 UTC [ - ]
Also, if you want to mess with autonomous cars it's much easier to just buy this $15 T-shirt instead
https://www.amazon.com/Novelty-Signs-Halloween-Outfit-T-Shir...
vntok 2021-08-17 09:57:00 +0000 UTC [ - ]
Causality1 2021-08-17 01:03:35 +0000 UTC [ - ]
morpheos137 2021-08-17 01:40:29 +0000 UTC [ - ]
See for example
https://slazebni.cs.illinois.edu/fall18/lec12_adversarial.pd...
jdavis703 2021-08-17 04:20:53 +0000 UTC [ - ]
Yes, but pranksters frequently vandalize stop signs with messages like “stop driving” or “stop eating animals” and human drivers are smart enough to figure out what to do. I’ve never seen someone throw their car into park or toss their burger out the window upon seeing a confusing stop sign.
vntok 2021-08-17 09:52:35 +0000 UTC [ - ]
If you've ever travelled by plane, surely you've seen people at the airport trash their perfectly edible food (snacks, etc.) upon seeing the confusing signs just before the security checks, even though in many cases they would have been allowed to take it through the gates.
SuchAnonMuchWow 2021-08-17 09:53:30 +0000 UTC [ - ]
We are so used to the existence of optical illusions for humans that we fail to consider them as a possible attack vector. Adversarial attacks on neural networks are simply that: optical illusions, but designed for neural networks instead of humans.
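For the curious, the classic recipe for crafting such an illusion is the fast gradient sign method. A minimal PyTorch sketch, assuming a batched image classifier `model` (a toy illustration, not the attack from the article):

    import torch
    import torch.nn.functional as F

    def fgsm_illusion(model, images, labels, eps=0.03):
        # Nudge every pixel by +/-eps in whichever direction most
        # increases the classifier's loss. Near-invisible to a human;
        # can flip the network's output label entirely.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adversarial = images + eps * images.grad.sign()
        return adversarial.clamp(0, 1).detach()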
avian 2021-08-17 12:29:49 +0000 UTC [ - ]
SuchAnonMuchWow 2021-08-18 17:15:47 +0000 UTC [ - ]
qayxc 2021-08-17 09:25:18 +0000 UTC [ - ]
skybrian 2021-08-17 05:05:46 +0000 UTC [ - ]
I’m not sure how much a difference that makes in practice?
abrookewood 2021-08-17 05:57:00 +0000 UTC [ - ]
skybrian 2021-08-17 06:17:59 +0000 UTC [ - ]
The hotel room thing seems like a lot more work than quickly done vandalism, and I’m wondering how close to the sign they would need to be.
chaircher 2021-08-17 10:14:18 +0000 UTC [ - ]
kgeist 2021-08-17 05:58:08 +0000 UTC [ - ]
ragebol 2021-08-17 07:48:34 +0000 UTC [ - ]
mnd999 2021-08-17 08:39:55 +0000 UTC [ - ]
delusional 2021-08-17 10:28:11 +0000 UTC [ - ]
ragebol 2021-08-17 11:11:26 +0000 UTC [ - ]
BoxOfRain 2021-08-17 13:29:36 +0000 UTC [ - ]
teddyh 2021-08-17 09:42:15 +0000 UTC [ - ]
Freak_NL 2021-08-17 10:28:24 +0000 UTC [ - ]
Great single point of failure there, and now any sign placed temporarily because of unforeseen events (flooding, bridge collapse, etc.) won't be acknowledged by the autonomous car because it lacks a valid QR-code.
jjk166 2021-08-17 14:21:56 +0000 UTC [ - ]
If you really wanted a secure system you'd just have a database of all the signs in an area that cars could access (for example if I know what road i'm on, and my approximate location on that road, I should be able to look up what the speed limit on this stretch of road is).
The fact is if you want to be able to stop a car from going over a bridge that's collapsed by placing a sign in front of it, inevitably the same system can prevent a car from going over a bridge that hasn't collapsed. Realistically, unexpected road signs should prompt manual operation.
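A toy sketch of both halves of that argument, with a made-up table format: look the limit up by road and milepost, and treat a contradicting sign as a handoff to the human:

    import bisect

    # Hypothetical table: sorted mileposts and the limit that starts at each.
    SPEED_TABLE = {"US-101": ([0.0, 12.4, 30.1], [65, 55, 65])}

    def lookup_limit(road, milepost):
        starts, limits = SPEED_TABLE[road]
        return limits[bisect.bisect_right(starts, milepost) - 1]

    def handle_detected_sign(road, milepost, detected_mph):
        expected = lookup_limit(road, milepost)
        if detected_mph == expected:
            return expected
        # Unexpected sign (construction, collapsed bridge, vandalism):
        # don't guess. Hand control back to the human, as argued above.
        raise RuntimeError("unexpected signage: prompt manual operation")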
todd8 2021-08-17 15:33:53 +0000 UTC [ - ]
Forging isn't necessary, just copying one that is inappropriate in a particular setting.
Forgery isn't difficult either if the QR code isn't signed. QR codes are standardized and software to generate any QR code is reasonably straightforward to write (one has to know a bit about Reed-Solomon error correction), but writing software isn't necessary because there are a number of easily available libraries that can be used to generate QR codes.
Cryptographically signed QR codes (which I believe are what you are talking about) would need to include location data that could be verified by the vehicle, such as GPS coordinates and the sign's facing direction. This might work, but of course the logistics could be difficult in practice. Since signs aren't on the internet, it would be impossible to update them whenever a signing key was compromised. Key compromises would probably happen because signs are manufactured all over the world.
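For what it's worth, the signing half is straightforward with off-the-shelf crypto. A hedged sketch using the Python `cryptography` library, with a made-up payload format (and note it does nothing about the key-revocation problem above):

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Issuer side (road authority); key distribution is hand-waved away.
    issuer_key = Ed25519PrivateKey.generate()
    payload = json.dumps({"type": "STOP", "lat": 40.7484,
                          "lon": -73.9857, "facing_deg": 270}).encode()
    signature = issuer_key.sign(payload)
    # payload + signature is what would be encoded into the QR code.

    # Vehicle side: check the signature AND that the claimed location
    # matches where the sign was actually observed; otherwise a valid
    # sticker could simply be copied onto the wrong intersection.
    def verify_sign(pubkey, payload, signature, obs_lat, obs_lon):
        try:
            pubkey.verify(signature, payload)
        except InvalidSignature:
            return False
        claimed = json.loads(payload)
        return (abs(claimed["lat"] - obs_lat) < 0.0005 and
                abs(claimed["lon"] - obs_lon) < 0.0005)

    print(verify_sign(issuer_key.public_key(), payload, signature,
                      40.7484, -73.9857))  # True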
asah 2021-08-17 09:25:01 +0000 UTC [ - ]
ajsnigrutin 2021-08-17 10:57:42 +0000 UTC [ - ]
Theoretically, if someone were to 'borrow' a 20 km/h sign from somewhere, take it to a 130 km/h highway, and attach it to a random spot, would the car slow down until the driver intervened? Or would it "guess" there's something wrong with such a drastic speed change, and beep/ignore/alert the driver?
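Something like this heuristic would be cheap to add (purely speculative; the thresholds are invented):

    def plausible_limit_change(prev_kmh, detected_kmh, speed_kmh):
        # A 20 km/h sign on a 130 km/h motorway is almost certainly
        # wrong (stolen sign, misread side-street sign): alert the
        # driver instead of braking hard.
        if detected_kmh < prev_kmh * 0.4 and speed_kmh > 100:
            return False  # flag for the driver rather than obey blindly
        return True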
tyfon 2021-08-17 11:48:24 +0000 UTC [ - ]
There have however been issues with e-trons and id.X cars reading side street signs here in Norway and braking hard on the highway.
watermelon0 2021-08-17 11:21:36 +0000 UTC [ - ]
mschuster91 2021-08-17 00:00:31 +0000 UTC [ - ]
How would one go and attack a real-world vehicle? Essentially you'd need to gut a vehicle entirely, mock every sensor and actuator to fool the car's control unit to believe it is still driving a car... a lot of effort.
XorNot 2021-08-17 00:04:40 +0000 UTC [ - ]
mschuster91 2021-08-17 00:14:52 +0000 UTC [ - ]
elihu 2021-08-17 03:26:47 +0000 UTC [ - ]
I thought that actually turned out not to be the case quite a bit of the time. An attack that's trained against one model will often work against different models as well, because the various models, even if trained on different data, will often end up looking at similar features.
I'm not an ML expert though, so maybe someone with more knowledge can chime in.
asdff 2021-08-17 03:41:42 +0000 UTC [ - ]
smoldesu 2021-08-17 00:05:59 +0000 UTC [ - ]
Stuff like this is why we can't trust AI. That doesn't mean we can't use it, but most of our models are black boxes that can only be "debugged" as much as a pizza can be "decooked".
aaron695 2021-08-17 00:04:09 +0000 UTC [ - ]
The following article wouldn't have happened if the vehicle had been automated: it would have seen something was wrong, and would also already have had the correct signage in its database -
3 Are Sentenced to 15 Years in Fatal Stop Sign Prank https://www.washingtonpost.com/archive/politics/1997/06/21/3...
What does need to stop is moving vehicles having uncovered street signs on them; if a sign is not in use, cover it up.
ec109685 2021-08-17 00:20:24 +0000 UTC [ - ]
thaumasiotes 2021-08-17 04:16:19 +0000 UTC [ - ]
That's not what the link says:
> In March 2001, the Florida Court of Appeals reversed the manslaughter convictions of all three defendants. The court held that the prosecution had made improper comments during closing argument. The court did not reverse the grand theft convictions.
So nobody believed they'd committed the crime, and this was corrected by expunging their manslaughter conviction; but officially they still stole the sign, even though, if that were true, they would also have committed the manslaughter.
aaron695 2021-08-17 00:47:07 +0000 UTC [ - ]
"At a hearing on the motion, several witnesses testified that they had driven by the intersection days before the accident and the stop sign was already down"
"A man who was driving to a luncheon the day before the crash testified that as he pulled up to the intersection, he saw a semi-truck backing up across the road toward the sign. He said that when he came back after lunch, the stop sign was on the ground."
dkdbejwi383 2021-08-17 11:14:23 +0000 UTC [ - ]
In the UK they are a different shape and colour; you don't need the text to know a STOP sign is what it is.
dagw 2021-08-17 11:20:35 +0000 UTC [ - ]
No, but some of these sign reading algorithms probably don't take color and shape into account. A naive implementation might first check the sign for something that can be OCRed, and if it finds that it is a two-digit number that matches known speed limits, then it's a speed limit.
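Roughly, such a naive pipeline looks like this (illustrative only):

    def classify_sign_naive(ocr_text, shape, color):
        # OCR first, shape/color only as a fallback: a stop sign
        # defaced with "45" hits the first branch and comes back as
        # a 45 limit despite being a red octagon.
        if ocr_text.isdigit() and len(ocr_text) == 2:
            return ("SPEED_LIMIT", int(ocr_text))
        if shape == "octagon" and color == "red":
            return ("STOP", None)
        return ("UNKNOWN", None)

    print(classify_sign_naive("45", "octagon", "red"))  # ('SPEED_LIMIT', 45)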
dkdbejwi383 2021-08-17 14:48:18 +0000 UTC [ - ]
dagw 2021-08-17 15:40:55 +0000 UTC [ - ]
The training set had red octagonal signs and signs with numbers, but no red octagonal signs with numbers. Since the model had never seen a sign with just two big digits that wasn't a speed limit sign, it decided that the numbers were more significant than the shape or color and went with that.
What this shows is how important it is to include intentionally adversarial data in your training set if you want your model to be robust in 'unexpected' situations.
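In code, that amounts to something like adversarial training: perturb each batch and train on both copies. A hedged PyTorch sketch (not any particular paper's exact recipe):

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, eps=0.03):
        # Craft FGSM-perturbed copies of the batch...
        images = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images), labels).backward()
        adv = (images + eps * images.grad.sign()).clamp(0, 1).detach()
        # ...then train on clean and perturbed examples together, so
        # "a red octagon with two big digits" is something the model
        # has actually seen during training.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(torch.cat([images.detach(), adv])),
                               torch.cat([labels, labels]))
        loss.backward()
        optimizer.step()
        return loss.item()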
keiferski 2021-08-17 08:30:56 +0000 UTC [ - ]
https://en.m.wikipedia.org/wiki/Operation_Greif
...German soldiers, wearing captured British and U.S. Army uniforms and using captured Allied vehicles, were to cause confusion in the rear of the Allied line.
Reconnaissance patrols of three or four men were to reconnoiter on both sides of the Meuse river and also pass on bogus orders to any U.S. units they met, reverse road signs, remove minefield warnings, and cordon off roads with warnings of nonexistent mines.
As a result, U.S. troops began asking other soldiers questions that they felt only Americans would know the answers to in order to flush out the German infiltrators, which included naming certain states' capitals, sports and trivia questions related to the U.S., etc. This practice resulted in U.S. brigadier general Bruce Clarke being held at gunpoint for some time after he incorrectly said the Chicago Cubs were in the American League[7][8][9][10] and a captain spending a week in detention after he was caught wearing German boots. General Omar Bradley was repeatedly stopped in his staff car by checkpoint guards who seemed to enjoy asking him such questions.
Is there a programmatic equivalent of asking what league the Chicago Cubs are in?
yosamino 2021-08-17 09:34:20 +0000 UTC [ - ]
teddyh 2021-08-17 09:34:07 +0000 UTC [ - ]
throw0101a 2021-08-17 10:43:50 +0000 UTC [ - ]
elygre 2021-08-17 11:08:48 +0000 UTC [ - ]
an_ko 2021-08-17 09:06:42 +0000 UTC [ - ]
jareklupinski 2021-08-17 18:01:09 +0000 UTC [ - ]
nightcracker 2021-08-17 08:41:41 +0000 UTC [ - ]
sokoloff 2021-08-17 10:21:22 +0000 UTC [ - ]
(a somewhat forced acronym of “Completely Automated Public Turing test to tell Computers and Humans Apart”)