Robo-Plane?

October 13, 2010 by Bruce Landsberg

Networks and blogdom are abuzz about Google’s “Robo-Car” being tested in the automotive purgatory known as Southern California. The auto world is in the early stages of deciding that machines may drive better than humans, and the ineptitude on the nation’s highways makes the theory plausible. Between cellphones, iPods, iPads, nail polishing, shaving, and lunch, many vehicles are already self-piloting without the benefit of technology, and they have the accident rate to prove it.

We in the aviation business are much too sophisticated for this foolishness. We’ve had autopilots for at least half a century, and the latest models can handle about 98% of the flight. Could we go that last 2% and make it totally automatic?

There’s a new moniker: the optionally piloted vehicle. (I’ve often felt that way after some of my less-than-stellar flights. Perhaps a better description is marginally piloted vehicle, or MPV.) The Army and Navy have been interested in the concept for some time. Helicopters and Cessna Skymasters (!) are in testing. Aurora Flight Sciences is said to be working with two GA aircraft, the Diamond DA40 Diamond Star and the DA42 Twin Star. They can be flown conventionally or controlled from the ground with a remote-control module added to the standard system. (Wonder if a gunship option could be added to deal with traffic pattern misbehavior. I digress.)

In military or commercial applications (not necessarily involving passengers), think of the productivity gains. Fatigue ceases to be an issue, except possibly for the mechanics who have to maintain the beastie.

Now take this a step further. Suppose the typical GA aircraft were so equipped and a marginally qualified pilot got into weather beyond their ability. The pilot pushes a button and says to the aircraft, “OK, you’ve got the controls.” Then one of two things might happen. The first: the aircraft, being fully self-contained and knowing most things about most things, would look for the nearest suitable airport, taking into account runway, approach procedures, available fuel, surface wind, etc., then go there and land.

The second way might be for the pilot to hand control over to a ground-based UAS (unmanned aircraft systems) pilot who would guide the machine remotely to a safe landing.
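
For the curious, the first of those options boils down to a fairly ordinary selection problem. Here is a minimal sketch in Python of how such divert logic might pick a field; the airport fields, the minimum runway length, and the reachability test are purely hypothetical and not drawn from Aurora’s or anyone else’s actual system.

    # Hypothetical divert-airport picker -- illustration only, not any vendor's logic.
    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Airport:
        ident: str
        lat: float
        lon: float
        longest_runway_ft: int
        has_instrument_approach: bool

    def distance_nm(lat1, lon1, lat2, lon2):
        """Great-circle distance in nautical miles (haversine)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 3440.1 * asin(sqrt(a))  # 3440.1 nm is roughly Earth's mean radius

    def pick_divert(airports, lat, lon, fuel_range_nm, min_runway_ft=2500):
        """Return the nearest airport that is reachable on remaining fuel, has
        enough runway, and offers an instrument approach; None if nothing fits."""
        usable = [a for a in airports
                  if a.longest_runway_ft >= min_runway_ft
                  and a.has_instrument_approach
                  and distance_nm(lat, lon, a.lat, a.lon) <= fuel_range_nm]
        return min(usable, key=lambda a: distance_nm(lat, lon, a.lat, a.lon), default=None)

A real system would also weigh weather, surface wind against runway alignment, terrain, and airspace before committing, but the skeleton of “filter out the unsuitable, then take the nearest” is the easy part. Certifying it is another matter.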

Both Garmin and Avidyne have recently introduced autopilots that sense unusual attitudes and recover the aircraft to straight-and-level flight, so this is technically quite feasible. Given how high the skill level for IFR flight is, and how much of a challenge MPVs pose from the utility, safety, and training standpoints, is this a reasonable concept?
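
To give a feel for why the recovery piece is within reach (this is not Garmin’s or Avidyne’s actual logic, which is proprietary; the gains and limits below are invented), a bare-bones wings-leveling command can be as simple as proportional feedback on bank and pitch:

    # Toy attitude-recovery loop -- purely illustrative, with made-up gains and limits.
    def level_command(bank_deg, pitch_deg, k_roll=0.5, k_pitch=0.4, max_rate_dps=10.0):
        """Return (roll_rate_cmd, pitch_rate_cmd) in deg/sec that gently drive
        the aircraft back toward zero bank and zero pitch."""
        clamp = lambda x: max(-max_rate_dps, min(max_rate_dps, x))
        return clamp(-k_roll * bank_deg), clamp(-k_pitch * pitch_deg)

    # Example: 60 degrees of bank, 15 degrees nose down
    print(level_command(60.0, -15.0))  # -> (-10.0, 6.0)

The hard part, as the certified products show, is not the control law itself but the sensing, the failure modes, and deciding when the machine should take over at all.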

Space Odyssey fans will recall that the HAL 9000 computer has never maid a misteak.

There was a request for more information on last week’s blog about the EASA-U.S. discussion regarding pilot and maintenance certification procedures: http://www.flightglobal.com/articles/2009/03/16/323783/us-proposals-could-hit-easa-faa-pact.html


Bruce Landsberg
Senior Safety Advisor, Air Safety Institute

ASI Online Safety Courses  |  ASI Safety Quiz  |  Support the AOPA Foundation

10 Responses to “Robo-Plane?”

  1. Thomas Boyle Says:

    I predict that such systems will make far fewer misteaks than the humans do now. For that matter, within the limited domain that GPS-coupled autopilots operate in today, they already do.

  2. E.J. Gonzalez Says:

    I think in the end, certain operations such as passenger operations will always require a human pilot. At least in the near future. Being a software engineer, I know that a computer is limited to what is programmed into it. You can test things for years but you’ll probably never account for every possible glitch. As odd as it sounds, I think a human pilot will be the ultimate safety device/backup as Sully proved when he landed his Airbus on the Hudson.

    Then again, it could just be the pilot in me bristling at the thought of planes not needing pilots. ;-)

  3. Nickolaus Leggett Says:

    It would be wise to consider the sociological consequences of taking humans out of the driver’s seat and the pilot’s seat. Many of us like to drive and fly, and we would greatly resent being forced out of these activities. Just being a consumer without actually doing things is not satisfying. Over time these resentments would grow into various forms of anger as human achievements and adventures were placed on the sidelines or restricted to “harmless” hobbies or entertainment parks. Eventually a substantial Luddite movement would arise uniting people who want to do things and make things for themselves. This movement would unite with other movements of dissidents and disaffected persons. My recommendation is to avoid this track of historical consequences. Nick Leggett, Social Scientist, Inventor, and Technology Analyst.

  4. Bruce Landsberg Says:

    Nick…

    Interesting thoughts. What you have stated is perhaps the impetus for why many people learned to fly in the first place. Thanks for your comments.

  5. Don Arnold Says:

    Some think that air terrorism would be thwarted by autonomous aircraft, as there is no one to coerce or overpower. I think that a roboplane could be brought down by an inventive terrorist, but to hit a high value target, you have to get your hands on the controls.

  6. Thomas Boyle Says:

    I agree that, beyond a certain level of complexity, all software is subject to Lowery’s Law: “There is always one more bug.”

    However, that law applies to humans, too. The Colgan Air crash appears to have been a case where even the most basic modern autopilot would have done better than the “ultimate safety device/backup.”

    Capt. Sullenberger, correctly realizing he did not have the computational precision required to determine whether he could glide back to LGA, made the conservative decision to go into the river – something no current (or foreseeable) autopilot could have considered. On the other hand, a good soaring computer would have pointed out that he did, indeed, have (barely) enough altitude to get back to LGA and avoid the considerable risk of a water landing.

    I agree that human pilots are important and useful for their executive judgment. Even with a flight computer suggesting LGA, it might have made sense to go for the river anyway, for several reasons. But when it comes to precision flying, the computers are already better at it.

    I would welcome a day when people could enjoy personal air transportation without having to engage in quite such extensive training, and when they could do it with a much-reduced overall accident rate. And, if such systems make medical certification obviously irrelevant (instead of just expensive and almost entirely ineffective, as it already is), I imagine more than a few people would welcome that, too.

  7. Divad Vermiculo Says:

    Having read about the software glitches with HAL 9000, I think that you likely ON PURPOSE made 2 spelling mistakes when you said “Space Odyssey fans will recall that the HAL 9000 computer has never maid a misteak.”

    So, even with the “seemingly invincible computer”, there have been, are, and will be errors ranging from missing the touch-down point by 0.1357 mm, to crashing nose-down into terrain at 0.25 knots below Vne.

    I do computer maintenance for a small non-profit, and am amazed at just how fast and furious hard drives and computers get corrupted.

    There would have to be some really good designers of computers, and really good programmers in order to successfully make a decent auto-pilot that could take over 100% of flying duties.

  8. Jennifer Christiano Says:

    Robo plane? Count me out!!

    But why stop there? Why not just hand our entire lives over to robots, who can do everything “better” than we can? Heck, why even have natural reproduction any more, as soon as medical research can figure out how to gestate genetically perfect babies that meet their parents’ every desire? “What a wonderful world it will be” (as the old song goes) when our world is made totally safe, clean, controlled and devoid of all disappointments and the need to rise to challenges, difficulties and surprises, except those we choose beforehand to program into our experience… with the safe and sanitized outcome always predetermined, of course.

    Life is all about risks and risk management. Sooner or later, we all experience problems, and death. Sorry, but that’s the way it is, and we cannot continue to allow society to be driven by those who cannot come to terms with that fact. The reality is that, as we make our machines “smarter” in the race to insulate ourselves from all possible risk and discomfort, we become more stupid and find ever more new ways to screw up. And genuine, simple, cost-effective and growth-oriented solutions become more difficult to employ because individuals are moved further and further from any potential for control. Multiple smaller disasters become replaced with fewer, but much larger, tragedies that are more difficult to avoid. The lesson is that there are no such things as free lunches.

    I vehemently dislike the fact that we are rarely allowed to seriously and honestly examine the difficult and long-term costs of “progress” until long after the (largely predictable) problems become apparent. Certainly new things need to be explored and tried, but I’d like to see the tyranny of “techno-correctness” that imposes a blanket gag on our ability to question blind beliefs such as “newer is always better” or “we should just because we can” be lifted.

    Re: the robo-plane, Fuggeddaboutdit!!

  9. Thomas Boyle Says:

    Count you out?

    I’m sorry, I don’t think I can do that, Jennifer.

    But, perhaps I could sing you a song?

    Daisy, Daisy, give me your answer, do…

  10. Dan Winkelman Says:

    I work for a UAS manufacturer and previously worked for a large avionics maker (in both defense and commercial markets). My job was and is Systems Engineering… which entails working with engineers from all disciplines. One of my specialties was digital interface design: the art of getting all the different boxes to talk to each other.

    In my experience, software engineering is not anywhere near the technological maturity to take it the “last 2%”. One major issue not recognized by this article is that the “last 2%” of the flight regime not controlled by computer is also the most difficult to control. There’s a saying in engineering: “The last 25% of the requirements will take 75% of the effort.” Additionally… humans are the ones writing code, and they love to tinker with code, and many of them don’t play well with others (or are too lazy to care what everyone else coded). I had an incredibly difficult time trying to make sure all the boxes in one avionics suite worked together, that everyone’s code played nice with everyone else’s, that nothing got lost in translation, that one party sent exactly what another party was expecting, et cetera.

    Then you have the whole problem of V&V (Verification and Validation). Engineers will V&V a requirement with the minimum possible amount of evidence that is considered acceptable. We in the engineering world (or most of us, anyway) understand that none of us can possibly dream up every possible contingency and test to that. So, we narrow the scope of our requirements until they can be verified in a timely and cost-effective manner.

    In all phases of design and V&V there is an understanding: Ultimately, the last line of defense is a human pilot or operator. UAVs have ground-based operators to intervene when things go awry. Autopilots have humans to take over when they fail. We are going to need that for a long time to come.

    We do not have the discipline, technical expertise, or unlimited foresight required to make 100% unmanned/unpiloted aviation a reality. Not now, not in the near future. I won’t say “never”, but I will say that it’s not going to happen any time soon.

    What I would like to see more of in these discussions is a recognition of the limitations of computerized systems, and subsequent definition of an appropriate role for the human operator. If we treated the humans and the computers as an integrated system, we would have fewer problems. Right now, most people argue that either humans are the be-all-end-all, or computers can do everything. The reality is: we need both.
