I’m not sure what’s gained by putting ride-share drivers out of work, but driverless cars have become a common sight in Los Angeles. I seldom drive any distance without encountering one of Waymo’s white Jaguar SUVs, its roof and fenders barnacled with spinning sensors and its front seats empty. I’ve followed them in traffic for miles—they drive well. I’ve tried to book a ride, but the hilly neighborhood of narrow, winding streets in which I live lies just outside their comfort zone. Give them another six months.
A few years ago, a survey found almost 80 percent of respondents unwilling to ride in a driverless car. I don’t know what the fraction would be today, but I’ll bet it’s shrunk. One of my least adventurous friends reported that after a few minutes of a Waymo ride she felt perfectly comfortable—a triumph of experience over imagination.
So, once we’ve accepted driverless cars, what about pilotless airliners? Is that a bridge too far?
Pilotless aircraft are already common in war. Pilotless cargo carriers are being developed. There’s a difference between remotely piloted aircraft and completely autonomous ones, but complex autonomous functions, like autoland systems, have been in use for a long time, and autonomous navigation is no trouble at all.
Even carrier landings, whose mastery is the pride of a Navy aviator, have been automated. Autonomous VTOL air taxis are on the horizon—particularly the western horizon, because a Chinese firm, eHang, is already edging into that business. Theoretical papers on integrating pilotless aircraft into the air traffic system hit my inbox every week.
When people argue against pilotless airliners, they invariably bring up the MacGyver element that the alert, creative human mind brings to problems. Cases of human pilots coping with seemingly hopeless situations are easy to find. The first to come to everyone’s mind, because it is the most recent, is the “Miracle on the Hudson.” Chesley “Sully” Sullenberger made a smart decision and then executed it very well. Many other pilots might not have done as well as he did. But a computer might.
An even more remarkable instance, now more distant, was the semisuccessful landing of a crippled DC-10 at Sioux City, Iowa, in 1989. An uncontained failure of the No. 2 engine had destroyed all three of the airplane’s hydraulic systems, disabling its flight controls. A deadheading pilot named Dennis Fitch came up to the cockpit to help. Kneeling on the floor, he controlled the airplane by manipulating the thrust of the two remaining engines. That approach and landing really deserved to be called a miracle. Efforts by other pilots to reproduce it on simulators invariably failed.
Stories of bold, resourceful action and uncanny skill appeal to our imagination, but cases in which crews were confronted with equally, or sometimes less, disastrous problems and failed to cope successfully with them are more common. Air France Flight 447, for example: The first officer stalled the airplane after all three pitot tubes iced up, and it fell, still stalled, for a minute and a half without anyone figuring out what was happening. Or Eastern Airlines Flight 401, which gradually descended into the Florida Everglades while all three crew members puzzled over a landing gear problem. For every example of a crew that rescued a flight you can find several that doomed one.
One question posed by a hypothetical pilotless airliner—apart from whether anybody would be willing to ride in it—is whether a nonhuman control system could surpass the performance of human crews in emergencies. Another is whether it could achieve human-level judgment about the wider environment—weather, geography, and traffic—in a messy human world.
I suspect that the answer to both questions is yes.
The key is that ubiquitous blessing and bugbear of our time, artificial intelligence (AI). I think most people accept that computers can match the ordinary, day-in-day-out performance of human crews under routine conditions. They already do. But that’s not enough. Computers would have to deal equally well with the unforeseen—and preferably a little better.
People often imagine an AI as a huge collection of deterministic rules: if…then…else… Indeed, such so-called “expert systems” are appropriate where answers are clear cut. The problem with such a system in a dynamic environment is that there is sure to be some overlooked eventuality that will leave it helpless. But that is not how AI works today. It’s more flexible, indeterminate, probabilistic—more similar to human thought.
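To see the difference in miniature, here is a toy sketch in Python—with made-up events and responses, not anything from a real flight computer—of how a purely rule-based “expert system” behaves: it answers instantly when the situation is on its list and goes mute when it isn’t.

```python
# Toy illustration only: an "expert system" is a fixed lookup of rules,
# so any input nobody anticipated simply falls through.

RULES = {
    ("engine_fire", "in_flight"): "shut down engine, discharge extinguisher",
    ("depressurization", "in_flight"): "don masks, descend",
    ("gear_unsafe", "on_approach"): "go around, recycle gear",
}

def expert_system(event: str, phase: str) -> str:
    # Deterministic if...then...else: decisive when the case is on the list,
    # helpless when it is not.
    return RULES.get((event, phase), "no rule found -- system is stuck")

print(expert_system("engine_fire", "in_flight"))
print(expert_system("volcanic_ash_flameout", "in_flight"))  # the overlooked eventuality
```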
The essence of modern AI is “machine learning.” Machines—meaning, in this case, computers—are learning the world in the same way that babies do, by ingesting floods of data and learning by trial and error. The programming strategy that achieves this is called a “neural network”—the name highlights its parallels with the human brain.
Flying an airplane is largely a rule-based activity. The proper courses of action in most situations are laid out in the operator’s handbook and in the shared lore of piloting. But the manner in which actions are carried out comes from the pilot, and it is here that machine learning allows a computer to match the performance of even a very skilled human pilot, just as computers have overtaken the best chess and go players.
The behaviors of airplanes are governed by known laws and are highly predictable by computer simulation. An AI pilot would be trained by thousands of repetitions of both boringly routine and freakishly rare events and situations.
Each training cycle would be scored on its outcome—safe landing, crash and burn, or something in between—and the system would eventually find its way to the most effective course of action. When all engines have choked on volcanic ash, all hydraulic systems have failed, or the left wing is on fire, AI would at least have the advantage over a human pilot of having already experienced that emergency thousands of times. And, as with humans, the learning never stops. Each flight adds to the AI’s data stream.
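For the curious, here is a toy sketch of that training idea—a made-up one-parameter “flight model” and arbitrary numbers standing in for a real simulator, not any actual trainer: each candidate behavior is flown repeatedly, scored, and kept only if it outperforms what came before.

```python
import random

# Hypothetical numbers throughout; the "simulator" is a one-line stand-in.

def simulate_landing(flare_gain: float) -> float:
    """Fake physics: higher score means a gentler touchdown. The 'right'
    gain is arbitrarily 0.8, and the noise plays the role of gusts."""
    return -abs(flare_gain - 0.8) + random.gauss(0.0, 0.05)

def average_score(flare_gain: float, trials: int = 20) -> float:
    # Score each candidate over many repetitions of the same situation.
    return sum(simulate_landing(flare_gain) for _ in range(trials)) / trials

best_gain, best_score = 0.0, average_score(0.0)
for episode in range(2000):                          # thousands of repetitions
    candidate = best_gain + random.gauss(0.0, 0.1)   # try a small variation
    score = average_score(candidate)                 # safe landing, crash, or in between
    if score > best_score:                           # keep what worked
        best_gain, best_score = candidate, score

print(f"learned flare gain is about {best_gain:.2f}")  # settles near 0.8
```

Real systems are vastly more elaborate, but the loop is the same: simulate, score, keep what works, repeat.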
That computers can learn to be good pilots seems obvious. They are quicker-witted and more precise than people and do not become rattled when faced with the prospect of their own extinction. As military and cargo users adopt pilotless aircraft, the technology will evolve, and people will become increasingly accustomed to the idea that airplanes, like elevators, don’t need human operators. Someday, the robot airplane will surprise no one. Passengers may even come to feel relieved that they do not have to rely on mere humans to keep them safe.
And then, what will become of us? Will we puzzle our grandchildren with nostalgic tales of the days when there were windows on the front ends of planes, and people called pilots held a sort of steering wheel in their hands?
“A steering wheel,” they’ll say. “What’s that?”
When people say AI is a mortal threat to our world, that must be what they mean.
This column first appeared in the June Ultimate Issue 959 of the FLYING print edition.