Hackers have demonstrated some worrisome ways to manipulate and confuse the various systems on a Tesla Model S.

Their most dramatic feat: sending the car careening into the oncoming traffic lane by placing a series of small stickers on the road.

Attack vector: This is an example of an “adversarial attack,” a way of manipulating a machine-learning model by feeding it a specially crafted input. Adversarial attacks could become more common as machine learning is deployed more widely, especially in areas like network security.
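To make the idea concrete, here is a minimal sketch of the fast-gradient-sign style of adversarial attack on a toy linear classifier. The weights, input, and perturbation budget are illustrative assumptions, not anything from Tesla's actual model:

```python
import numpy as np

# Hypothetical trained weights and a "clean" input the model gets right.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.5, -0.5, 1.0])
y = 1.0  # true label (+1); a positive score means class +1

score = w @ x  # clean prediction: 2.0, correctly positive

# For loss L = -y * (w @ x), the gradient with respect to the input is -y * w.
grad_x = -y * w
eps = 1.0  # attacker's perturbation budget

# Fast-gradient-sign step: nudge every input feature in the direction
# that most increases the loss.
x_adv = x + eps * np.sign(grad_x)
score_adv = w @ x_adv  # the small, structured perturbation flips the class

print(score, score_adv)
```

The stickers on the road play the same role as `eps * np.sign(grad_x)` here: a small, deliberately structured change to the input that pushes the model's output across a decision boundary.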

Blurred lines: Tesla’s Autopilot is vulnerable because it recognizes lanes using computer vision. In other words, the system relies on camera data, analyzed by a neural network, to tell the vehicle how to keep centered within its lane.
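A rough sketch of why misplaced lane markings matter: a lane-keeping controller turns detected lane-line positions into a steering correction, so shifting what the camera perceives shifts where the car steers. The function, pixel values, and gain below are illustrative assumptions, not Autopilot's actual logic:

```python
def steering_correction(left_x, right_x, image_width, gain=0.01):
    """Toy proportional steering from lane-line pixel positions.

    left_x, right_x: x-pixel positions of the detected left/right lane lines.
    Returns a signed correction: negative steers left, positive steers right.
    """
    lane_center = (left_x + right_x) / 2.0
    vehicle_center = image_width / 2.0
    offset = lane_center - vehicle_center  # pixels off-center
    return gain * offset

# A symmetric lane yields no correction...
print(steering_correction(340, 940, 1280))
# ...but if stickers make the vision model "see" the right line shifted
# outward, the computed lane center moves and the car steers toward it.
print(steering_correction(340, 1140, 1280))
```

The researchers' stickers exploit exactly this dependency: corrupt the perceived lane geometry, and the downstream control math faithfully steers the car to the wrong place.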

Whoops! Hackers Successfully Fool Model S And Steer Into Oncoming Traffic

About the Author

Agent009

