Optical adversarial attack can change the meaning of road signs

keiferski 2021-08-17 08:30:56 +0000 UTC [ - ]

This reminded me of Operation Greif, a German covert mission during WW2.

https://en.m.wikipedia.org/wiki/Operation_Greif

...German soldiers, wearing captured British and U.S. Army uniforms and using captured Allied vehicles, were to cause confusion in the rear of the Allied line.

Reconnaissance patrols of three or four men were to reconnoiter on both sides of the Meuse river and also pass on bogus orders to any U.S. units they met, reverse road signs, remove minefield warnings, and cordon off roads with warnings of nonexistent mines.

As a result, U.S. troops began asking other soldiers questions that they felt only Americans would know the answers to in order to flush out the German infiltrators, which included naming certain states' capitals, sports and trivia questions related to the U.S., etc. This practice resulted in U.S. brigadier general Bruce Clarke being held at gunpoint for some time after he incorrectly said the Chicago Cubs were in the American League and a captain spending a week in detention after he was caught wearing German boots. General Omar Bradley was repeatedly stopped in his staff car by checkpoint guards who seemed to enjoy asking him such questions.

Is there a programmatic equivalent of asking what league the Chicago Cubs are in?

yosamino 2021-08-17 09:34:20 +0000 UTC [ - ]

There is an authentication system named after the concept, if that is what you mean: https://en.wikipedia.org/wiki/Shibboleth_Single_Sign-on_arch...

teddyh 2021-08-17 09:34:07 +0000 UTC [ - ]

I am reminded of the questions asked by the original Leisure Suit Larry game in order to keep children from playing; the questions were designed to only be answerable by adults.

elygre 2021-08-17 11:08:48 +0000 UTC [ - ]

Ken sent me.

an_ko 2021-08-17 09:06:42 +0000 UTC [ - ]

If you mean "a question that sounds basic, but which many professional programmers wouldn't be able to answer at gunpoint": how do the prefix and postfix increment operators (++x vs x++) differ in functionality?
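
For the record: in C and C++, the prefix form increments and then yields the new value, while the postfix form yields the old value and increments as a side effect. A minimal illustration:

    #include <iostream>

    int main() {
        int x = 5;
        std::cout << x++ << "\n"; // prints 5: postfix yields the old value, then increments
        std::cout << x << "\n";   // prints 6
        int y = 5;
        std::cout << ++y << "\n"; // prints 6: prefix increments first, then yields the new value
    }

(For overloaded operator++ in C++, the convention also differs: the postfix form returns a copy, so the prefix form can be cheaper.)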

jareklupinski 2021-08-17 18:01:09 +0000 UTC [ - ]

"Is CloudWatch in Google Cloud or AWS?"

nightcracker 2021-08-17 08:41:41 +0000 UTC [ - ]

When used to filter out computers from humans such a procedure is known as a Turing test.

sokoloff 2021-08-17 10:21:22 +0000 UTC [ - ]

See also CAPTCHA

(a somewhat forced acronym of “Completely Automated Public Turing test to tell Computers and Humans Apart”)

XorNot 2021-08-17 00:03:11 +0000 UTC [ - ]

The problem with most of these types of "attacks" is that they're about the same as any other messing with road signs or whatnot. There's no end of things you could do around the roads to cause crashes, but they're all illegal and ultimately low impact.

This is a lot more interesting, though, for how it highlights the difference between current machine interpretation and human interpretation of road signs: which is to say, explainable AI is still incredibly important. The ability to slightly discolor a sign and have a black-box AI read something entirely different means the systems and training we have are not cueing off a "safe" set of inputs to deliver their interpretations.

From that context, this is very interesting and important work - and I would argue it also points the way to at least some basic road safety standards for self-driving systems (they should be resistant to this sort of visual distortion that humans cannot perceive).

Sebb767 2021-08-17 00:34:20 +0000 UTC [ - ]

> There's no end of things you could do around the roads to cause crashes, but they're all illegal and ultimately low impact.

But most of them are easy to spot, at least in the aftermath. If you sprayed over a sign, people would notice. If you removed one, a driver would probably not notice directly, but he might notice the situation (e.g. you should not go fast in front of a school, even when the sign is missing), or you could find evidence if an accident happens.

The scary thing about these signs is that they could easily go unnoticed, especially if placed well (for example at a crossroad with already sun-bleached signs). Neither a driver (who could possibly intervene) nor an investigator would be able to spot this, unless they're looking for exactly this.

Things like this are, in my opinion, also exactly why people put the bar of trust so high for autonomous vehicles: They might fail in ways totally opaque to us. You can at least somewhat estimate what a person might do [0]; for autonomous cars, this is hard or even impossible.

[0] I know someone could be extremely drunk or on drugs and possibly also mistake a stop sign like this, but you'd usually see some warning signs. An autonomous car might go from totally normal to batshit insane without any prior indication or apparent reason.

Retric 2021-08-17 00:53:37 +0000 UTC [ - ]

> from totally normal to batshit insane

Self-driving cars don't put that much emphasis on street signs; they're not, for example, going to drive into a brick wall because of an arrow.

Altering speed limit signs in front of a speed trap might hit a lot of people, but they're unlikely to try to take a sharp turn at 70 MPH or anything.

Sebb767 2021-08-17 01:24:04 +0000 UTC [ - ]

> Self-driving cars don't put that much emphasis on street signs;

They have to. Even the most accurate maps of speed limits have flaws and cannot adapt to dynamic limits. I'm pretty sure that if car makers could avoid the trouble of parsing signs, they would do so.

> they're not, for example, going to drive into a brick wall because of an arrow.

Tesla's first driver death was due to the car not recognizing a white truck. It would be easy to add a fake increase in speed limit just before a white enough wall, so that the car would not be able to brake in time once the LIDAR picks the obstacle up. To a casual observer it would look like the car just decided to accelerate into a wall.

lazide 2021-08-17 02:52:20 +0000 UTC [ - ]

Tesla cars don’t have LiDAR right? That’s why they are so susceptible to this problem. Visual sensors only.

asdff 2021-08-17 03:34:22 +0000 UTC [ - ]

It feels borderline irresponsible that Tesla is visual only. Why not collect another datapoint in the form of lidar? It's human lives at stake, don't tell me lidar is too costly: M robot vacuum uses Lidar and the thing cost 1/10th the price of putting leather seats on most cars today. It's excellent, the vacuum has no issues with my white walls and it paints beautiful maps of my apartment. I'd strap myself to it and go around town right now if it could only support the weight.

XorNot 2021-08-17 10:09:27 +0000 UTC [ - ]

LIDAR surely wouldn't work very well in rain though? Same problem with radar - it can't see radar-transparent objects.

I kind of understand where they're coming from: a sensor whose input you have to totally disregard under some conditions is a really bad sensor to have, because your model will get disproportionately good using the "good" input, and then you're still left with "and in a medium drizzle we need to also work off optics only".

lazide 2021-08-17 17:12:31 +0000 UTC [ - ]

Not really, though, right? Sensor fusion is used, and it's a generally solved problem. We do the same thing - we have sound, touch (which can also pick up changes in air pressure, humidity, temperature, wind), vision, and an internal model/balance.

We’ve evolved to do that because it’s necessary for survival - sometimes we get dust in our eyes, or there is glare, or it’s too dark to see and we don’t have a light, or we’re picking up on an impending problem with something around a corner (a loud fight, or whistling wind), etc.

If visual sensors and LiDAR suck in the rain, radar is great. If radar sucks in crowded road or RF noisy environments, that’s when LiDAR or visual sensors are awesome.

Even on the road this is true - rumble strips, mechanical noise, shaking, etc.

XorNot 2021-08-17 23:03:31 +0000 UTC [ - ]

How though? If LIDAR is your only depth ranging input, then rain means it's gone - there's nothing to fuse.

RADAR isn't an adequate replacement, as several crashes have demonstrated - it can't see non-metallic objects.

That's the problem I think you're glossing over: under a whole bunch of circumstances your model is operating without an entire sense.

lazide 2021-08-18 00:58:56 +0000 UTC [ - ]

I take it you have no idea how sensor fusion works?

Sensor fusion weighs inputs from multiple sensors, determining (roughly) which ones are ‘more wrong’ and discounting them to various degrees.

LiDAR also doesn’t just ‘not work’ at all in rain; it degrades as the rain gets heavier, until it stops working. Like vision, etc. Radar can also pick up non-metallic objects, and does all the time - including geese, humans, clouds, etc., depending on frequency - and there are variable-frequency radars out there that can do all sorts of cool stuff, including frequency sweeping, so it’s not just going to ‘miss’ something. There are of course issues like multipathing. But that is why we have sensor fusion and similar techniques.

All of this adds up to something much more accurate and less prone to failure than basic vision, which is what Tesla is running now. It also costs more.
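
To make the weighting concrete, here's a minimal sketch of one common scheme - inverse-variance weighting - with a made-up error model; production stacks use Kalman-style filters:

    #include <vector>

    struct Reading {
        double value;    // e.g. estimated distance to an obstacle, in meters
        double variance; // sensor's current error estimate; grows with rain, glare, RF noise
    };

    // Inverse-variance weighting: noisier sensors contribute less, and a degrading
    // sensor (variance -> infinity) fades out smoothly instead of being a hard
    // on/off cutoff. Assumes at least one reading with finite, nonzero variance.
    double fuse(const std::vector<Reading>& readings) {
        double num = 0.0, den = 0.0;
        for (const auto& r : readings) {
            double w = 1.0 / r.variance;
            num += w * r.value;
            den += w;
        }
        return num / den;
    }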

jazzyjackson 2021-08-17 03:33:09 +0000 UTC [ - ]

> drive into a brick wall because of an arrow.

I honestly would push back on this: the car is more likely to recognize the arrows/lane markers and might not understand a brick wall. From what I can tell from watching the FSD ride-alongs, if the car doesn’t understand something it just ignores it.

sillycross 2021-08-17 01:27:14 +0000 UTC [ - ]

> not, for example, going to drive into a brick wall because of an arrow.

If the self-driving car misinterpreted a "NO SWIMMING" sign as a "MUST TURN RIGHT" sign, I assume it's going to take the turn and drive into a lake?

NavinF 2021-08-17 10:57:49 +0000 UTC [ - ]

Why would the car follow a sign instead of Google maps?

sillycross 2021-08-18 07:59:05 +0000 UTC [ - ]

Because maps can always be out of date?

It's common that, due to construction, temporary road signs are placed to re-route traffic. It would be a disaster if the self-driving car simply ignored them because they are not on the map, I assume?

shagie 2021-08-17 01:01:26 +0000 UTC [ - ]

> Things like this are, in my opinion, also exactly why people put the bar of trust so high for autonomous vehicles: They might fail in ways totally opaque to us. You can at least somewhat estimate what a person might do [0]; for autonomous cars, this is hard or even impossible.

This is the subject of the story Car Wars by Cory Doctorow - https://web.archive.org/web/20170105065118/http://this.deaki...

dheera 2021-08-17 02:08:07 +0000 UTC [ - ]

> But most of them are easy to spot

I mean, walking up to a stop sign dangling a projector on a stick is pretty damn easy to notice.

It's also much easier to just hold up a bogus actual sign than to go through this mess, and it would be highly illegal as well.

jazzyjackson 2021-08-17 03:35:07 +0000 UTC [ - ]

Could etch the pattern onto a mirror that reflects sunlight onto the sign at the chosen time of day that your chosen victim drives by everyday

(Disclaimer, I doubt this works because the adversarial image is probably based on a single frame, not the actual object in context)

Dylan16807 2021-08-17 00:46:50 +0000 UTC [ - ]

I feel like there's a categorical difference between actually changing a road sign to say the wrong thing, versus adding some kind of smudge to it that certain computers read in an objectively wrong way.

The latter could easily be part of a dumb prank, "doing nothing wrong", but cause serious damage.

This piece of tape shouldn't kill anyone: https://www.extremetech.com/wp-content/uploads/2020/02/tesla...

jcims 2021-08-17 00:18:17 +0000 UTC [ - ]

I do wonder if there could be some kind of 'adversarial training' of large networks like Tesla's, in which drivers in an area collude to normalize dangerous behavior (e.g. driving in the oncoming lane, stopping at an overpass on a major highway, etc.).

Even with humans in the loop in the training process, there's only so much context they can bring to override data that appears authentic.

asdff 2021-08-17 03:39:09 +0000 UTC [ - ]

I'd be more worried about adversarial employees. Tesla has a reputation for working people to the bone for not much comp compared to the competition. I can't imagine they are connecting their stressed workers to psychologists. If one engineer just cracked one day, they could potentially push an update to all cars that does something terrible.

jcims 2021-08-17 06:43:37 +0000 UTC [ - ]

Sure. I'd guess someone could also sneak an easter egg in there to cause the battery packs to self-immolate.

hirundo 2021-08-17 00:33:29 +0000 UTC [ - ]

> ...and ultimately low impact.

A suboptimal metaphor in context.

arthurcolle 2021-08-17 00:34:26 +0000 UTC [ - ]

chef's kiss

xwdv 2021-08-17 04:11:04 +0000 UTC [ - ]

The problem is not that you can cause crashes, it’s that you can cause crashes at scale.

newpavlov 2021-08-17 01:15:46 +0000 UTC [ - ]

Realistic impact of such issues is usually quite exaggerated: https://xkcd.com/1958/

I think that any massively deployed true autopilot system will maintain a global semantic map of roads. Most sudden traffic sign changes, especially those which may influence car behavior, will be manually verified and committed to the global map. Any consistent discrepancies between the model and reality will be investigated as well, especially if it causes customer complaints. And it can be followed by releasing map patches telling software to prefer the model over detection for the problematic sign. Considering that at this stage such maps will be probably integrated with government services (i.e. the map will be compared against official road data), I think such attacks on the autopilot systems will be found quite fast.
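
A minimal sketch of that "prefer the model over detection" patching logic - all names hypothetical:

    struct SignInfo { int speed_limit; };

    // Hypothetical reconciliation between the camera pipeline and the global map.
    SignInfo resolve(const SignInfo& detected, const SignInfo& mapped,
                     bool map_entry_verified) {
        if (map_entry_verified && detected.speed_limit != mapped.speed_limit) {
            // Log the discrepancy for manual review / a future map patch,
            // and trust the verified map entry in the meantime.
            return mapped;
        }
        return detected; // no verified map data, or both agree
    }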

barryvan 2021-08-17 01:29:23 +0000 UTC [ - ]

How would that work where street signs _do_ change suddenly? A couple of examples I see every day are roadworks (new fixed signage for the duration of the works, or even a person flipping a sign between "stop" and "slow") and "smart" freeways/signage (screen-based speed limit signs that change, with additional speed-limit signs for when all screens are off). Here in AU, we also have "school zones" with speed limit signs that turn on only during certain times and dates.

I'm not sure that governments would be quick enough to keep this "central roadmap" correct up-to-the-minute (to be honest, I suspect that "up-to-the-month" would be a stretch!).

newpavlov 2021-08-17 01:40:05 +0000 UTC [ - ]

I would assume that "smart" signs would transmit their state via a Vehicle-to-Infrastructure (V2I) network, same for roadworks (i.e. workers would mark the affected zone and this state would be transmitted via a mobile battery-powered beacon). Such transmitted states will be signed, with a road-authority key at the root of the certificate tree, so it will be hard to spoof them (at the very least, it will be easy to identify who is responsible for a key leak). Of course, this assumes a proper digitalization of infrastructure and appropriate regulation changes, which is not a given... But in my opinion massive adoption of full self-driving cars will be difficult without them.

>Here in AU, we also have "school zones" with speed limit signs that turn on only during certain times and dates.

Such conditions are usually fixed and do not suddenly change. Even OSM handles them today more or less fine.

barryvan 2021-08-17 01:48:52 +0000 UTC [ - ]

Hmm, that would certainly be a solution. However, I'm a little dubious as to how this would work in practice. The current signage can be thrown on the back of a truck, bashed about, subjected to dirt and weather, and is really easy to use.

Smart signage would (likely) be more fragile, (likely) require charging or similar infrastructure, and would require configuration -- so that the sign can communicate in which direction of travel it applies and so on. Certainly not impossible hurdles to overcome, but the costs associated seem very high -- in terms of manufacture, maintenance, and training.

For what it's worth, I suspect that we will see FSD cars operating with infrastructure likely along the lines you outline in the not-terribly-distant future -- but I suspect that they will operate within very clearly prescribed zones, like freeways and highways.

jdavis703 2021-08-17 04:17:19 +0000 UTC [ - ]

> Such transmitted states will be signed

And pranksters, vandals and petty thieves won’t make off with this cryptographically signed signage? Because this already happens with our existing dumb infrastructure.

newpavlov 2021-08-17 04:56:53 +0000 UTC [ - ]

I don't quite follow you. Yes, we can't prevent keys from leaking here and there, which could allow malicious actors to cause some damage, but we can reduce the impact by making keys short-term (as we do today with TLS certificates) and capability-restricted, i.e. you would not be able to change a highway speed limit with a road worker key, and you would not be able to use a key generated for city A in a neighboring city B. And you would probably be able to use a leaked key only once, since after an illegal usage is detected it would be quickly revoked. Plus it would be clear whose negligence caused the leak, so the responsible people/organizations can be held accountable, which eventually should improve the level of cybersecurity (at least a bit).
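
Roughly, a beacon message would only be accepted if it is fresh and scoped to the right place and capability. A sketch, with invented field names (signature verification against the road authority's certificate tree is assumed to happen elsewhere):

    #include <cstdint>
    #include <string>

    // Hypothetical V2I beacon payload, already signature-checked.
    struct SignedSignState {
        std::string region;       // e.g. "city-A": a key for city A is useless in city B
        std::string capability;   // e.g. "roadwork-temporary": can't set highway limits
        std::uint64_t not_after;  // unix time; short-lived, like TLS certificates
        int speed_limit;
    };

    bool accept(const SignedSignState& msg, const std::string& current_region,
                const std::string& required_capability, std::uint64_t now) {
        if (now > msg.not_after) return false;                   // expired
        if (msg.region != current_region) return false;          // out of area
        if (msg.capability != required_capability) return false; // out of scope
        return true;
    }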

mcosta 2021-08-17 11:09:30 +0000 UTC [ - ]

This doesn't work with banks. How is it going to work with governments?

belorn 2021-08-17 09:22:11 +0000 UTC [ - ]

That comic is fun and all, but the specific examples are things people actually do to reduce traffic speed. Painting lines that turn a two way road into a single lane for a single car length is common for low speed roads in order to force cars to take turn passing it, thus reducing speed.

Cutouts of pedestrians are extremely common, especially in country areas where they want to remind drivers to slow down next to homes. Sometimes they wear vests to look a bit like traffic police, though that practice is a bit (more?) illegal.

If I remember right, when that comic came out there was a HN thread discussing exactly those things. Those tricks are not done in order to murder people, but they might confuse a computer.

dheera 2021-08-17 02:11:20 +0000 UTC [ - ]

Right.

Also, if you want to mess with autonomous cars it's much easier to just buy this $15 T-shirt instead

https://www.amazon.com/Novelty-Signs-Halloween-Outfit-T-Shir...

vntok 2021-08-17 09:57:00 +0000 UTC [ - ]

You'd have to sit perfectly still for the car not to detect and react to your very pedestrian-like walking pattern, at which point you're not really messing with the cars but rather sitting near or in the middle of traffic and trying to make them... not run you over?

Causality1 2021-08-17 01:03:35 +0000 UTC [ - ]

Something I didn't find terribly clear: did the researchers manage to fool the software into seeing a shape other than an octagon, or is the software so bad it thinks an octagonal sign could be anything other than a stop sign?

morpheos137 2021-08-17 01:40:29 +0000 UTC [ - ]

The latter. The software doesn't think. That is why neural networks are dumb unless allied with expert systems. As a matter of fact, that is more or less how human cognition works: the fusion of a pattern-recognition system with multimodal sensory inputs, filtered through an expert system looking for meaning in what is sensed.

See for example

https://slazebni.cs.illinois.edu/fall18/lec12_adversarial.pd...

jdavis703 2021-08-17 04:20:53 +0000 UTC [ - ]

> that is more or less how human cognition works

Yes, but pranksters frequently vandalize stop signs with messages like “stop driving” or “stop eating animals” and human drivers are smart enough to figure out what to do. I’ve never seen someone throw their car into park or toss their burger out the window upon seeing a confusing stop sign.

vntok 2021-08-17 09:52:35 +0000 UTC [ - ]

> I’ve never seen someone throw their car into park or toss their burger out the window upon seeing a confusing stop sign.

If you've ever travelled by plane, surely you've seen people at the airport trash their perfectly edible food (snacks, etc.) upon seeing the confusing signs just before the security checks, even though in many cases they would have been allowed to take it through.

SuchAnonMuchWow 2021-08-17 09:53:30 +0000 UTC [ - ]

There are cases where pranksters painted a road on a wall, and drivers were fooled by it. So it definitely happens with humans too.

We are just so used to the existence of optical illusions for humans that we fail to consider them as a possible attack. Adversarial attacks on neural networks are simply that: optical illusions, but designed for neural networks instead of humans.

avian 2021-08-17 12:29:49 +0000 UTC [ - ]

SuchAnonMuchWow 2021-08-18 17:15:47 +0000 UTC [ - ]

Damn, I was wrong - those are indeed the pictures I had in mind!

qayxc 2021-08-17 09:25:18 +0000 UTC [ - ]

I have the same question. If it's the latter then systems like that cannot ever be allowed on roads.

skybrian 2021-08-17 05:05:46 +0000 UTC [ - ]

In case you’re thinking you’ve seen this sort of research before, the difference is that they don’t touch the sign, just illuminate it.

I’m not sure how much of a difference that makes in practice?

abrookewood 2021-08-17 05:57:00 +0000 UTC [ - ]

It makes a massive difference. To the human eye, the altered sign is not really distinguishable from a normal one, and the attackers don't need to approach the sign in order to alter it. That makes a remote attack scenario quite achievable (depending on the signal strength required for the laser, etc.). Think people camped out in a hotel room overlooking a freeway, causing Teslas to crash en masse.

skybrian 2021-08-17 06:17:59 +0000 UTC [ - ]

Modifying street signs isn’t actually hard, though, and whoever did it could leave right away. I’m not sure how noticeable to humans the modifications need to be? A minor disadvantage is that it leaves evidence.

The hotel room thing seems like a lot more work than quickly done vandalism, and I’m wondering how close to the sign they would need to be.

chaircher 2021-08-17 10:14:18 +0000 UTC [ - ]

To use a very clunky metaphor I suppose it's like the difference between gun crime and knife crime.

kgeist 2021-08-17 05:58:08 +0000 UTC [ - ]

What about adding signed QR codes to road signs? It's practically impossible to forge those, it doesn't require complex vision systems, and you don't have to revamp your whole road infrastructure, just gradually replace existing signs.

ragebol 2021-08-17 07:48:34 +0000 UTC [ - ]

The reason for reading the existing traffic signs is that they already exist. If you were designing roads for autonomous cars, you'd replace the signs with e.g. radio beacons and the like.

mnd999 2021-08-17 08:39:55 +0000 UTC [ - ]

Let’s do that - on highways, where it’s worth the investment. And let’s stop pretending that self-driving is in any way viable on normal roads. Outside of that relatively controlled environment, there are too many inputs and the reasoning required is too complex.

delusional 2021-08-17 10:28:11 +0000 UTC [ - ]

I think you'd agree that putting QR codes on signs is cheaper and less intrusive than replacing them with radio beacons.

ragebol 2021-08-17 11:11:26 +0000 UTC [ - ]

I do agree that it would be cheaper, but they're ugly and quite intrusive, as I can't read them.

BoxOfRain 2021-08-17 13:29:36 +0000 UTC [ - ]

Yeah I don't know what it is about QR codes but I'm really not a fan of the aesthetic they have. There's something cheap, mechanistic, institutional about them that I can't quite put my finger on.

teddyh 2021-08-17 09:42:15 +0000 UTC [ - ]

They would have to be signed including GPS coordinates, otherwise they would be vulnerable to a replay attack. And since GPS is not that precise, you would still be able to, say, take two nearby signs and switch them.

Freak_NL 2021-08-17 10:28:24 +0000 UTC [ - ]

And of course the obvious problem of how you would keep the signing key private over time. Road signs are made by many companies working for every level of government from the national level down to individual municipalities. Centralise the generation of QR-codes?

Great single point of failure there, and now any sign placed temporarily because of unforeseen events (flooding, bridge collapse, etc.) won't be acknowledged by the autonomous car because it lacks a valid QR-code.

jjk166 2021-08-17 14:21:56 +0000 UTC [ - ]

The QR code is just a hash of the information on the sign to confirm that the system is reading the sign correctly. If someone actually puts up an incorrect sign, it would be immediately apparent to humans.

If you really wanted a secure system you'd just have a database of all the signs in an area that cars could access (for example, if I know what road I'm on and my approximate location on that road, I should be able to look up the speed limit on this stretch of road).

The fact is if you want to be able to stop a car from going over a bridge that's collapsed by placing a sign in front of it, inevitably the same system can prevent a car from going over a bridge that hasn't collapsed. Realistically, unexpected road signs should prompt manual operation.
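
In sketch form, both ideas look something like this - std::hash stands in for a real cryptographic hash, and the "database" is just a map keyed on a road-segment id (all hypothetical):

    #include <cstddef>
    #include <functional>
    #include <map>
    #include <string>

    // Cross-check: does the QR payload's digest match what OCR read off the sign?
    bool ocr_matches_qr(const std::string& ocr_text, std::size_t qr_digest) {
        return std::hash<std::string>{}(ocr_text) == qr_digest;
    }

    // Lookup: known speed limit for a stretch of road, with a conservative fallback.
    int speed_limit_for(const std::map<std::string, int>& db,
                        const std::string& road_segment, int fallback) {
        auto it = db.find(road_segment);
        return it != db.end() ? it->second : fallback;
    }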

todd8 2021-08-17 15:33:53 +0000 UTC [ - ]

> It's practically impossible to forge those,...

Forging isn't necessary, just copying one that is inappropriate in a particular setting.

Forgery isn't difficult either if the QR code isn't signed. QR codes are standardized and software to generate any QR code is reasonably straightforward to write (one has to know a bit about Reed-Solomon error correction), but writing software isn't necessary because there are a number of easily available libraries that can be used to generate QR codes.

Cryptographically signed QR codes (which I believe are what you are talking about) would need to include location data that could be verified by the vehicle, such as GPS coordinates and the sign's facing direction. This might work, but of course the logistics could be difficult in practice. Since signs aren't on the internet, it would be impossible to update them whenever a signing key was compromised. Key compromises would probably happen because signs are manufactured all over the world.
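
The vehicle-side check might look like this sketch: verify that the signed payload's coordinates and facing match where the sign was actually observed. Field names and tolerances are invented:

    #include <cmath>

    // Hypothetical QR payload, already decoded and signature-verified.
    struct SignPayload {
        double lat, lon;   // where the sign is supposed to stand
        double facing_deg; // direction the sign face points
        int speed_limit;
    };

    bool location_plausible(const SignPayload& p, double obs_lat, double obs_lon,
                            double obs_facing_deg) {
        // Crude flat-earth distance, fine at ~100 m scales.
        const double deg_to_rad = 3.14159265358979 / 180.0;
        double dlat = (p.lat - obs_lat) * 111000.0; // meters per degree of latitude
        double dlon = (p.lon - obs_lon) * 111000.0 * std::cos(p.lat * deg_to_rad);
        // Ignores 360-degree wrap-around for brevity.
        double heading_err = std::fabs(p.facing_deg - obs_facing_deg);
        return std::hypot(dlat, dlon) < 50.0 && heading_err < 30.0;
    }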

asah 2021-08-17 09:25:01 +0000 UTC [ - ]

Clever! And it reuses the existing (low) tech of painted signage.

ajsnigrutin 2021-08-17 10:57:42 +0000 UTC [ - ]

I've never driven a tesla or any other car with "autopilot" features..

Theoretically, if someone were to 'borrow' a 20 km/h sign from somewhere, take it to a 130 km/h highway, and attach it to a random spot, would the car slow down until the driver intervened? Or would it "guess" there's something wrong with such a drastic speed change and beep/ignore/alert the driver?

tyfon 2021-08-17 11:48:24 +0000 UTC [ - ]

Luckily, the Tesla Autopilot ignores speed limit signs on the highway; it only follows them (when on Autopilot) on slower roads.

There have, however, been issues with e-trons and id.X cars reading side-street signs here in Norway and braking hard on the highway.

watermelon0 2021-08-17 11:21:36 +0000 UTC [ - ]

If you have a car with speed sign recognition, and activated cruise control, it would definitely slow down, albeit slowly.

mschuster91 2021-08-17 00:00:31 +0000 UTC [ - ]

Academically certainly interesting, but everything I've seen on adversarial attacks against ML models relies on knowing the model or having a way to run the model against attacker-determined input.

How would one go about attacking a real-world vehicle? Essentially you'd need to gut a vehicle entirely and mock every sensor and actuator to fool the car's control unit into believing it is still driving a car... a lot of effort.

XorNot 2021-08-17 00:04:40 +0000 UTC [ - ]

Dynos for vehicle performance testing are common, and GPS spoofing is not hard. In a world where AI driving systems are a couple of decades old and on the used-car market, obtaining the source hardware would be very straightforward.

mschuster91 2021-08-17 00:14:52 +0000 UTC [ - ]

I'd seriously hope that any self-driving AI would not just use GPS and motor-load sensors to learn its place in the world. You'd also need to spoof accelerometers, gyroscopes, multiple cameras and, in the case of everything except Tesla, LIDAR - and convincingly enough that no fail-safe is triggered.

elihu 2021-08-17 03:26:47 +0000 UTC [ - ]

> everything I've seen on adversarial attacks against ML models relies on knowing the model or having a way to run the model against attacker-determined input

I thought that actually turned out not to be the case quite a bit of the time. An attack that's trained against one model will often work against different models as well, because the various models, even if trained on different data, will often end up looking at similar features.

I'm not an ML expert though, so maybe someone with more knowledge can chime in.

asdff 2021-08-17 03:41:42 +0000 UTC [ - ]

Or, you could just pull a page from intelligence agencies. All you would need to do is bribe or radicalize a disgruntled employee, then you'd know the model.

smoldesu 2021-08-17 00:05:59 +0000 UTC [ - ]

This title doesn't even begin to scratch the surface of what this is capable of. A much more viable (and damaging) attack would be encoding "invisible" data onto a widely-shared photo and then manipulating it to trip the CSAM filters of the respective platforms. These attacks are designed to be invisible in real-world applications, but I can imagine a similar implementation designed to be invisible to humans (e.g. encoding important data as two seemingly identical colors, or manipulating dither patterns).

Stuff like this is why we can't trust AI. That doesn't mean we can't use it, but most of our models are black-boxes that can only be "debugged" as much as a pizza can be "decooked"
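
The "two seemingly identical colors" trick is essentially least-significant-bit steganography. A toy sketch of that encoding idea (not the optical attack from the article):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hide one bit per pixel in the LSB of a color channel. Values differing by
    // 1/255 are indistinguishable to humans but trivially machine-readable.
    void embed_bits(std::vector<std::uint8_t>& channel, const std::vector<bool>& bits) {
        for (std::size_t i = 0; i < bits.size() && i < channel.size(); ++i)
            channel[i] = static_cast<std::uint8_t>((channel[i] & 0xFE) | (bits[i] ? 1 : 0));
    }

    bool extract_bit(const std::vector<std::uint8_t>& channel, std::size_t i) {
        return (channel[i] & 0x01) != 0;
    }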

aaron695 2021-08-17 00:04:09 +0000 UTC [ - ]

Still stupid.

The following article wouldn't have happened if the vehicle had been automated: it would have seen something was wrong, and would already have had the correct signage in its database -

3 Are Sentenced to 15 Years in Fatal Stop Sign Prank https://www.washingtonpost.com/archive/politics/1997/06/21/3...

What does need to stop is moving vehicles carrying uncovered street signs; if a sign isn't installed in place, cover it up.

ec109685 2021-08-17 00:20:24 +0000 UTC [ - ]

As an aside, those defendants were eventually exonerated: https://www.law.umich.edu/special/exoneration/Pages/casedeta...

thaumasiotes 2021-08-17 04:16:19 +0000 UTC [ - ]

> those defendants were eventually exonerated

That's not what the link says:

> In March 2001, the Florida Court of Appeals reversed the manslaughter convictions of all three defendants. The court held that the prosecution had made improper comments during closing argument. The court did not reverse the grand theft convictions.

So nobody believed they'd committed the crime, and this was corrected by expunging their conviction for manslaughter, but officially they still stole the sign, even though, if that were true, they would also have committed the manslaughter.

aaron695 2021-08-17 00:47:07 +0000 UTC [ - ]

I like these totally contradictory witness statements. This is why you can't trust witnesses - and the article doesn't even seem to notice:

"At a hearing on the motion, several witnesses testified that they had driven by the intersection days before the accident and the stop sign was already down"

"A man who was driving to a luncheon the day before the crash testified that as he pulled up to the intersection, he saw a semi-truck backing up across the road toward the sign. He said that when he came back after lunch, the stop sign was on the ground."

eutectic 2021-08-17 02:23:02 +0000 UTC [ - ]

Maybe someone put it back up in between?

aaron695 2021-08-17 09:45:35 +0000 UTC [ - ]

I hope one day you are my jury.

eutectic 2021-08-17 15:12:20 +0000 UTC [ - ]

Not sure if this was intended as a compliment or an insult :)

dkdbejwi383 2021-08-17 11:14:23 +0000 UTC [ - ]

Is a stop sign really the same shape and colour as a speed limit sign in the USA?

In the UK they are a different shape and colour, you don't need the text to know a STOP sign is what it is.

dagw 2021-08-17 11:20:35 +0000 UTC [ - ]

> Is a stop sign really the same shape and colour as a speed limit sign in the USA?

No, but some of these sign-reading algorithms probably don't take color and shape into account. A naive implementation might first check the sign for something that can be OCRed, and if it finds a two-digit number that matches known speed limits, conclude it's a speed limit sign.
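
In sketch form, the kind of naive pipeline being described - the OCR text is assumed to come from some upstream detector, and note that shape and color are never consulted:

    #include <cctype>
    #include <optional>
    #include <string>

    // Deliberately naive: if the legend OCRs as a plausible two-digit number,
    // call it a speed limit -- even if the sign is a red octagon.
    std::optional<int> classify_speed_limit(const std::string& ocr_text) {
        if (ocr_text.size() == 2 &&
            std::isdigit(static_cast<unsigned char>(ocr_text[0])) &&
            std::isdigit(static_cast<unsigned char>(ocr_text[1]))) {
            int limit = std::stoi(ocr_text);
            if (limit % 5 == 0 && limit >= 10) // "matches known speed limits"
                return limit;
        }
        return std::nullopt; // not recognized as a speed limit sign
    }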

dkdbejwi383 2021-08-17 14:48:18 +0000 UTC [ - ]

Seems like a bad idea in hindsight! Any system like this should consider multiple factors, including shape and size, as well as evaluating the behaviour of other drivers to see if the perceived instruction seems reasonable. After all, that's what we do as humans.

dagw 2021-08-17 15:40:55 +0000 UTC [ - ]

One of the core problems with machine learning is that we often don't really know why the model does what it does. We train a neural network on 100,000 images of road signs, 5,000 of which are stop signs, and we know it will identify stop signs, but we don't know what it is that makes the algorithm think something actually is a stop sign. Is it the color, the shape, the word "Stop"?

The training set had red octagonal signs and signs with numbers, but no red octagonal signs with numbers. Since the model had never seen a sign with just two big digits that wasn't a speed limit sign, it decided that the numbers were more significant than the shape or color and went with that.

What this shows is how important it is to include intentionally adversarial data in your training set if you want your model to be robust in 'unexpected' situations.