
Hackers have demonstrated some worrisome ways to manipulate and confuse the various systems on a Tesla Model S. Their most dramatic feat: sending the car careening into the oncoming traffic lane by placing a series of small stickers on the road.

Attack vector: This is an example of an “adversarial attack,” a way of manipulating a machine-learning model by feeding it a specially crafted input. Adversarial attacks could become more common as machine learning is deployed more widely, especially in areas like network security.
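
To make the idea concrete, here is a minimal, generic sketch of an adversarial perturbation in the fast-gradient-sign-method (FGSM) style. This is not the technique the researchers used against Autopilot; the classifier, tensor shapes, and epsilon value are assumptions for illustration only.

```python
# Minimal FGSM-style adversarial perturbation (illustrative sketch only).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a perturbed copy of `image` ([1, 3, H, W]) that pushes the model toward a wrong answer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage with a random stand-in image; a real attack would start from a camera frame.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
```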

Blurred lines: Tesla’s Autopilot is vulnerable because it recognizes lanes using computer vision. In other words, the system relies on camera data, analyzed by a neural network, to keep the vehicle centered in its lane.
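
For context, a highly simplified lane-centering loop might look like the sketch below. The `lane_net` architecture, the offset convention, and the proportional gain are hypothetical stand-ins, not Tesla’s implementation; the point is only that the steering decision flows directly from a neural network’s reading of the camera frame, which is why a crafted pattern on the road surface can redirect the car.

```python
# Toy camera-to-steering pipeline (illustrative sketch only, not Autopilot).
import torch
import torch.nn as nn

lane_net = nn.Sequential(              # stand-in for a real lane-detection network
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),                   # predicted lateral offset from lane center, in meters
)

def steering_command(frame: torch.Tensor, gain: float = 0.5) -> float:
    """Proportional steering correction from a [1, 3, H, W] camera frame."""
    offset = lane_net(frame).item()    # positive = drifting right, negative = drifting left
    return -gain * offset              # steer back toward the lane center

# Usage with a random stand-in frame.
print(steering_command(torch.rand(1, 3, 120, 160)))
```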


Read the full article: “Whoops! Hackers Successfully Fool Model S And Steer Into Oncoming Traffic”
