Asymptotically Typeable

Context, folks

Mon, 14 Mar 2022

In Nautilus' Deep Learning Is Hitting a Wall, Gary Marcus elaborates on his bone of contention with deep learning. Here I'll jot down my own with AI at large, as applied to social situations.

Let's talk about self-driving cars.

I want to focus on two videos about autonomous cars. The first was filmed in Switzerland. It's an idyllic, lightly windy day. The Tesla's passenger points his phone at the dashboard, recording an irregularity, while crappy music fills the interior of the car and the entirety of the video. The Tesla is parked in front of a grocery shop called Coop. Its banner is white and vertical, with the text in two colors: CO in red and OP in yellow. The flag flails in the wind, and on the Tesla's screen a traffic light flickers between red and yellow. The car thinks the flag is a traffic light. The second video is of a Tesla insisting on maneuvering through a worksite, around a man holding a stop sign in the middle of the road. It did not take the man's stop sign seriously. It treated him like fallen debris.

What is a traffic light? Or, simpler still, what is a stop sign?

Answer 1: it is a red octagon with a white border and white script screaming STOP.
Problem: So a Tesla ought to halt if it sees a stop-sign sticker pasted on some prankster's bumper.

Answer 2: same as answer 1, but a stop sign sits on a straight, tall-ish pole.
Problem: Accidents happen, and a car could catapult itself into a stop sign and leave it bowing at an arbitrary angle. So, soon after such an accident, a Tesla ought to ignore the mutilated stop sign.

Answer 3: same as answer 1, but it's standing on the pavement or hanging way above the street.
Problem: Axiom: humans wear exotic t-shirts. Ergo, facing someone in a stop-sign t-shirt standing still on the pavement by a bus station, a Tesla ought to stop.

Answer 4: same as answer 3, but a stop sign must not move.
Problem: On a particularly windy day in a particularly disreputable neighborhood with unreliable infrastructure, a real stop sign sways on its pole, and a Tesla ought to keep going.
Problem: Confronting a miserable man holding and rocking his stop sign, exactly as in the second video, the Tesla ought to swerve.
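
These four answers amount to stacking predicates onto a rule-based classifier, and every new predicate opens a fresh failure mode. Below is a minimal sketch of that stacking in Python; every field is a hypothetical stand-in for some perception output, not any real detector's API.

    # A toy rule-stacking classifier mirroring answers 1 through 4.
    # Every predicate is hypothetical: a stand-in for perception output.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        is_red_octagon: bool           # answer 1: shape, border, STOP script
        on_straight_pole: bool         # answer 2: straight, tall-ish pole (dropped by answer 3)
        on_pavement_or_overhead: bool  # answer 3: placement
        is_moving: bool                # answer 4: stillness

    def is_stop_sign(d: Detection) -> bool:
        return (
            d.is_red_octagon
            and d.on_pavement_or_overhead
            and not d.is_moving
        )

    # False positive: prankster in a stop-sign t-shirt, standing still by a bus station.
    tshirt = Detection(True, False, True, False)
    assert is_stop_sign(tshirt)       # the rules say stop; context says keep going

    # False negative: a worksite flagger rocking a real stop sign.
    flagger = Detection(True, False, True, True)
    assert not is_stop_sign(flagger)  # the rules say go; context says stop

Both asserts pass, and both verdicts are exactly backwards. That is the point: each rule is locally sensible and globally blind.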

We could go on like this forever, conjuring hypothetical yet probable edge cases. This is not mental masturbation. False negatives (racing past a stop sign) and false positives (screeching to a halt) can cause accidents. Someone's got to pay! Should the driver pay? But Tesla advertised an autonomous vehicle, nudging drivers to believe, although the fine print says the contrary, that the car is well able to take control. Masquerading. Manipulating. Marketing. So, should Tesla pay? Or should the guy in the stop-sign t-shirt pay? When such accidents occur and remain unresolved, they undermine people's confidence in a fair justice system and in safe roads.

So why is it okay to apply machine learning to protein folding, board games, and data analysis, but not to autonomous cars, judicial rulings, and writing public policy?

Would I be a cretin if I called out the obvious conclusion that context is key? Context, folks. There are cultural contexts that no computer can conjure in its measly mind. At first, your eyes catch this stop-sign-straddling sport in a bright yellow or orange jacket. Then you think they're standing aimlessly in the middle of the road next to a construction site. Then you realize they're eyeing cars and their drivers, avoiding their gaze when they appear frustrated, and reciprocating a small smile with a similar one of their own. Tiny cultural cues allow you to conclude that the creature is not crazy. They're a human with a purpose. They care about the drivers, and the drivers should care about their stop sign. Their stop sign says stop.

Context reveals intention. Intention informs judgment. And judgment guides decisions.

I hear the rebukes: a large enough and deep enough neural network will extrapolate this cultural context. Sure. Bonini's paradox aside, nothing stops it, as long as the training data comes only from the geographical and temporal spot where the car is expected to drive. Two regions; two networks. Crossing from one to the other with the same car requires retraining. Just as you must retake a driver's exam, the car must be reprogrammed.
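
To make "two regions; two networks" concrete, here is a toy sketch; the regions, file paths, and loader are all hypothetical, not any real autopilot's architecture.

    # Toy illustration of one-network-per-region. Everything here is hypothetical.
    REGION_WEIGHTS = {
        "CH": "weights/switzerland.pt",  # trained only on Swiss roads and signs
        "LB": "weights/lebanon.pt",      # trained only on Lebanese roads and signs
    }

    def load_driving_policy(region: str) -> str:
        """Each region gets its own separately trained network; there is no
        shared cultural context to fall back on."""
        if region not in REGION_WEIGHTS:
            # Crossing into an unseen region: the car must be retrained
            # (its version of retaking the driver's exam) before it may drive.
            raise RuntimeError(f"no network trained for {region!r}; retrain first")
        return REGION_WEIGHTS[region]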

But these vehicles are on the road now. The singular plan to make them safe is to push human drivers off the roads, sculpting a schism where some roads put you behind the wheel and others behind the invisible driver. Yesterday horse-drawn chariots were banned from highways; tomorrow, the meaty man.

But in the age of fast internet and fast food, we must squash these bugs now. The rank and file, the rich, and the revenue depend on it. So how will we do it? My prediction is in line with Clive Thompson's observation: "more jobs; worse jobs". Shit jobs will be created to assist autonomous cars.

Manufacturers will hire flesh-and-blood drivers to meander through the streets, alleys, and boulevards of developed countries adorned with strict driving laws (forget the streets of Beirut). You needn't look any further than the internet to extrapolate my conclusion. Every time a robot believes you to be of its kin, you must scrutinize blurry, pixelated photos for stairs, buses, crosswalks, and traffic lights. You are already assisting the machine by virtually meandering the streets from your armchair. When the cars need live data, these carbon chauffeurs will gather it and stream it into the psyche of the silicon cruising the vicinity. Street Popularurbanway is home to a crowd of foremen and workers birthing a building from these hours to those hours for the next five months. Alley Famousdudestreet, where Famousdude School is, is busy tonight because of a prom. The turn east at the intersection of Waytonether Street and Hellhighway is blocked by a boulder. Etc. The misfortune of these drivers is that an AI specter will dictate to them the routes to drive.

Robots at the ends of a production line and humans sandwiched in between. Faceless humans toil for the appliance.