How to Listen
In this Episode
Join us on a road trip. Our destination? The future of driving. Our guest is Dr. Steve Dellenback, vice president of the Intelligent Systems Division at Southwest Research Institute and an expert in the field of automated vehicles, or self-driving cars. The technology is in the fast lane and is already in limited use in commercial and military settings, but there are also plenty of roadblocks. For instance, you program an automated car to obey traffic laws. So, how do you teach it to react in unexpected situations that require it to break the law, let's say, to save a life? If the technology fails and results in an accident, who's to blame, the passenger, the manufacturer, or the software designer?
Along with the accelerated advancements come the ethical speed bumps. Listen now as we merge into automated traffic and hit the highways of self-driving vehicles.
Below is a transcript of the episode, modified for clarity.
Lisa Peña (LP): Let's take a drive. But on this drive, you are not steering, pumping the gas, or braking, and neither am I. The car is doing all of that and much more by itself. This self-driving car already exists.
So, when can we hop in and enjoy the ride? Buckle up, as our guest today takes us through the twists and turns of automated driving.
We live with technology, science, engineering, and the results of innovative research every day. Now, let's understand it better. You're listening to the new Technology Today Podcast, presented by Southwest Research Institute.
Hello, and welcome to Technology Today. I'm your host, Lisa Peña. Our guest today is developing automated vehicle technology and navigating the road to self-driving vehicles. It's a road that also has some speed bumps.
Dr. Steve Dellenback is Vice President of the Intelligent Systems Division at Southwest Research Institute and an expert in the field of automated driving. Thanks for joining us, Steve.
Dr. Steve Dellenback (SD): Happy to be here.
LP: Well, let's start at the beginning. For those of us new to this concept, what is automated driving? How do you define it?
SD: Well, it's installing computers in your car that can control the acceleration, the braking, and the steering, and then having sensors to perceive the environment. It can essentially replace the human driver by using computer commands to control the vehicle to go where you wish to go.
LP: So the car can do everything a human driver does, it's just on its own. Is that how this works?
SD: The ultimate scenario is that you may not even have a steering wheel, accelerator, or brake in your car. You simply have a vehicle and you program in that I want to go to the local grocery store, or you want to go to a sporting event, a kid's tennis match, or something like that, and the car would just take you there.
LP: So it's kind of like being in a box that's like just a computer box almost.
SD: Yeah. Exactly. And there's a lot of different discussions about this. Today, we think that everybody owns cars. And if you look at the number of cars every household owns, it's a pretty high number.
And when a lot of people talk about the future of automated vehicles, there's the concept that the personal auto eventually disappears in this scenario, where you would actually think more of the Uber, the ride-sharing concept. When you needed transport somewhere, you would dial up an automated vehicle, and it would come and pick you up. I'm not sure I know exactly where that reality is going to fall, because it really becomes the economic model and what works best.
LP: So let's break down this technology a little bit. We're talking about software and sensors, but how does all of this kind of work together? How does it communicate?
SD: When you have an automated vehicle, there are actually multiple sensors. Just like you use your eyes to perceive the environment, you also use your ears to hear different things in the environment. For example, you may hear an emergency vehicle approaching.
To fully automate a car, though, you have sensors that have very short-range capabilities, on the order of one to three feet. Then you have sensors that might work out to 50 or 60 feet. But then you also need long-range sensors, because when you're going quite fast, what's three feet away from you is really not important. What's important is what's 100 feet, 200 feet down the road. So it's a combination of different sensors that we fuse together to provide the computer information about what's coming up in the environment.
You then have to couple that with some type of map. Because if you want to go from location A to location B, you have to know the roads. And just like you as a human, if you're starting at home and you're going to the local grocery store, you know you need to go down your local street and turn onto a major thoroughfare, and then you have to pull into the parking lot.
So the computers, to do this driving, also have to have some form of a map so they know how to get to their destination. Then we take all that sensor information, we fuse it with the desired route, and then we have to start executing that. In other words, we have to tell the car to accelerate, but you have to be watching for objects, et cetera, and different things that might come up.
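To make the sense-fuse-execute loop described here concrete, here is a toy sketch in Python. All of the names (`Detection`, `fuse`, `next_command`), the tiers, and the distance thresholds are illustrative assumptions, not SwRI's actual software.

```python
# Toy sketch of sensor fusion plus route execution for an automated vehicle.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_ft: float   # how far ahead the detected object is
    kind: str            # e.g. "vehicle", "pedestrian", "debris"

def fuse(short_range, mid_range, long_range, speed_mph):
    """Merge detections from all sensor tiers into one picture.
    At higher speeds we look further ahead, since what's 3 feet
    away matters less than what's 200 feet down the road."""
    horizon_ft = max(100.0, speed_mph * 4.0)  # assumed lookahead rule
    relevant = [d for d in short_range + mid_range + long_range
                if d.distance_ft <= horizon_ft]
    return sorted(relevant, key=lambda d: d.distance_ft)

def next_command(detections, route_step):
    """Simplest possible policy: brake for anything close,
    otherwise execute the next step of the planned route."""
    if detections and detections[0].distance_ft < 30:
        return "brake"
    return route_step  # e.g. "accelerate", "turn_left"

obstacles = fuse([Detection(8, "debris")], [], [Detection(250, "vehicle")], 60)
print(next_command(obstacles, "accelerate"))  # debris 8 ft ahead -> "brake"
```

A real vehicle replaces each of these stubs with perception, mapping, and planning subsystems, but the shape of the loop is the same: fuse sensors, compare against the route, emit a control command.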
A child may be on a bicycle, coming up to the curb. You have to decide, is that child going to truly stop? Should I start taking evasive action, or do I trust that the child's going to stop and just continue on my path? So there's a wide variety of decisions that have to be made to do this.
Driving is actually far more complex than most of us think. When you first learn how to drive it's a bit overwhelming. And you probably tend to go a little bit slower, and you're very cautious.
As you drive more, you become very comfortable in it. It really becomes almost like a rote memory. You don't necessarily remember how you got to some place, you simply execute it as you went along. But there's a lot of processing going on in your head as you drive from point A to point B.
LP: So, it really sounds like this technology has really been developed and really well thought out, but we're not seeing it on the roads every day. But it is being used. It is being deployed in certain scenarios. So how is this technology being used right now?
SD: To your point of whether or not it's been developed, I would argue that it's still in development. And although a tremendous amount of capability has already been implemented and we can do a lot of things, there's still a lot left to be done. There's some commercial companies, companies like Waymo, which some people may have heard of. The Google car that's now Waymo is the subsidiary that does that. Uber, they're typically in the news for having vehicles that do self-driving.
The other thing that people hear about are, for example, you may have heard that General Motors has something called Super Cruise. Tesla has its Autopilot system. And some people confuse those with fully automated cars.
Those are what we call ADAS systems. That stands for Advanced Driver Assistance Systems. For example, you can be going along on a freeway and engage an automated system. It'll maintain your speed, it'll keep you between the lines of the freeway. And so, yes, it's sort of self-driving. But the point is, it can't pull into the gas station for you. It's simply designed to work in very constrained types of environments.
The Society of Automotive Engineers has a scale from zero to five to describe how automated a car is. And there's a large number of cars out there that have level two and three automation. That's the Teslas of the world. You may have adaptive speed control on your car.
Some people have what we call a "lane nanny" - it'll remind you if you start drifting out of your lane. The car will either beep or give you a haptic cue, like a vibration in your seat. Those are level two and level three, and they're designed to be driver assist systems to help you drive more safely.
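The SAE automation scale mentioned above can be summarized in a small lookup. The level descriptions below paraphrase the SAE J3016 taxonomy; the helper function is an illustrative assumption, not an official classifier.

```python
# Paraphrased summary of the SAE J3016 driving automation levels.
SAE_LEVELS = {
    0: "No automation: the human does everything",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed support, driver supervises",
    3: "Conditional automation: system drives in limited conditions, driver on standby",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: no driver needed anywhere",
}

def is_driver_assist(level: int) -> bool:
    """The 'lane nanny' and Super Cruise-style features discussed here
    sit at levels 1-3: they assist, but still assume a human driver."""
    return 1 <= level <= 3

print(is_driver_assist(2))  # True  - ADAS territory
print(is_driver_assist(5))  # False - fully self-driving
```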
The full-up automated systems are really the concept that you're sitting in your driveway at home, you get in the car, and you verbally or somehow tell the car, "take me to the grocery store," and you don't ever touch anything. Those are fully self-driving cars.
We're not there yet. People are spending a lot of money working on this. Literally billions of dollars are being spent on R&D [research and development] in this space. And we're not done yet.
LP: So those are some of the commercial applications. Are there also military applications being developed? Can you talk about that?
SD: Sure. There are definitely various military applications that have been developed. Think, for example, of the recent wars over the last 20 years - we've had issues in convoy operations. Insurgents will plant IEDs, Improvised Explosive Devices, along convoy routes. It can be a very dangerous type of operation.
So there's a natural desire to want to automate convoy operations, and the military has been working on that for 20-some-odd years. They're actually planning some of the first deployments of automated convoys in either 2020 or 2021.
Yet they're still going to have the first truck be man-driven, then the following trucks will be automated. So the military's not quite yet to full-up automation.
LP: So what do we do here specifically at Southwest Research Institute to develop this technology, to take it further?
SD: We work both in the commercial side of the business and the military side of the business. And in the commercial side of the business, we do a variety of work for original equipment manufacturers. Those are the car companies who actually build your cars.
And we build subsystems that they can integrate into their vehicle platform as one of the tools for automated driving. For example, somebody might want a better perception system to pick up, say, the side of the road, or traffic signs, or traffic signals, and we have technology that we've developed here that we can use to do that.
Another area that we're very good at is something called localization - knowing where my car is. Everybody's familiar with GPS at some level. You have it on your phone, and you may have it in your car. And it gives you a crude approximation of where you are.
If you're walking down the sidewalk and it's accurate within ten feet, that's perfectly fine, because you have your own sensors. You have your eyeballs that you can use to position yourself better on the sidewalk. But if you think of a vehicle that doesn't have a human brain in it, that localization is very important. If you're off by ten feet driving down [Interstate Loop] 410, that's not a good thing, because you have to be between the lines. So we've developed some very advanced technology using a wide variety of algorithms and a wide range of sensors to accurately place our vehicle in different locations.
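As a rough illustration of why fusing multiple sensors beats raw GPS for localization, here is a toy one-dimensional example: two independent position estimates are blended by inverse-variance weighting, so the more trusted source dominates. This is a textbook minimum-variance estimate, not SwRI's actual algorithm, and the numbers are made up.

```python
# Toy 1-D localization: blend a noisy GPS fix with a more precise
# wheel-odometry estimate, weighting each by its confidence.
def fuse_position(gps_pos, gps_var, odo_pos, odo_var):
    """Minimum-variance blend of two independent position estimates
    (positions in meters from lane center, variances in m^2)."""
    w_gps = 1.0 / gps_var
    w_odo = 1.0 / odo_var
    return (w_gps * gps_pos + w_odo * odo_pos) / (w_gps + w_odo)

# GPS says we're 3 m off lane center (variance 9); odometry says
# 0.2 m (variance 0.09). The fused estimate sticks close to odometry.
est = fuse_position(gps_pos=3.0, gps_var=9.0, odo_pos=0.2, odo_var=0.09)
print(round(est, 2))  # 0.23
```

Production localization stacks do this recursively over time (e.g. with Kalman-style filters) and over many sensors, but the principle of confidence-weighted fusion is the same.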
And on the military side, we've been doing work for the U.S. Army, the Marines, and the Navy for about the last ten years. And that's a whole different world of technology, because if you think about what the Waymo vehicle or the Uber vehicle does, they work on paved roads.
In the United States, approximately 65% of the road network is paved. That means 35% of the roadway network is not paved. So we're developing technologies that can drive in these unstructured environments. Because again, if you go back to a Tesla, or GM Super Cruise, or Waymo, they expect a paved road that has stripes on it. And they use that to aid their localization, the placement of the vehicle.
When you're going down a dirt road, you don't have those markers. Dirt roads change. It rains and the road's structure changes.
After a month, if you're in most climates, grass actually grows. Sometimes in Texas it seems to die in the summer. But in many areas, the grass at the beginning of the year is six inches tall. At the end of the year, it's two feet tall. So it's a very different type of structure.
So in the military space, we've been working on various techniques to improve how you position, how you localize, a vehicle in this space. And then perception is very different. If you look at commercial vehicles, most of them have a device called a lidar. A lidar is like a spinning laser that can go out and give back to the computer information about objects.
It doesn't give you the details, for example, that you have brown eyes or eyebrows. What it tells you is, here's the shape of the human being. It doesn't give you features of the human. You then use vision to determine the specific features.
In an off-road environment, a lidar is not nearly as effective because the environment changes so quickly. The commercial people actually pre-map their routes. For example, before an Uber vehicle or a Waymo vehicle drives from point A to point B, they actually drive the route manually, collecting with lidar all of the different shapes of the buildings and the signs, and then they use that information when they come back in automated mode to help localize, or position, themselves.
In an off-road environment, you can't do that. Because number one, we're not necessarily driving on a road, so we're not taking a fixed route between point A and point B. We may be in a battle environment, a theater environment, and we're simply saying we want to go a couple kilometers to the north and then one kilometer to the east. And we're driving across rocks, and gravel, and what have you. So those are the type of techniques that Southwest Research has been working on for the military over the last ten years.
LP: So how do you envision self-driving cars working with entire transportation systems, basically, maybe talking to each other, or talking to a homebase, just communicating what's happening on the roads?
SD: Well, you actually opened up an interesting debate that's rampant in the transportation business right now. For about the last 15 years, the U.S. government has been working on technology called connected vehicles. And the concept of connected vehicles is that two cars can communicate information to each other as they are moving along. And also, the car can communicate with the infrastructure. What I mean by that is, for example, if you're approaching a traffic light, rather than having to look at what the light is, the infrastructure could broadcast to your car, 'you currently have a green light that's going to turn red in ten seconds.'
The concept of what USDOT (U.S. Department of Transportation) is trying to do is essentially a driver assistance system. So let's say you're coming up to an intersection and you have a green light, and it detects somebody coming on the cross street who's not slowing down. It may broadcast a message to your car that says, 'even though you have a green light, someone's going to run the red light, and you need to take evasive action before you get to the intersection.'
And so this whole concept of connected cars is very important to the discussion of automated vehicles. What we're seeing is that many of the private technology companies - the Waymo/Googles of the world, Uber - are not using connected vehicle technology right now, because it's really been led by the U.S. government and the auto industry here in the United States.
So the debate is, do automated vehicles need connected vehicle technology to be successful? And most of us think that they do. Because what you're essentially doing is extending your sensor range. Because if my car can talk to your car, and you can provide me data that you see as well as the data that my car sees, I'm going to have a more effective automated vehicle.
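The "extending your sensor range" idea, and the trust question that comes with it, can be sketched as follows. The data shapes and the single trust weight are hypothetical simplifications of what a real vehicle-to-vehicle protocol would carry.

```python
# Sketch of V2V "extended sensing": merge my own detections with
# detections broadcast by a connected peer vehicle, discounted by
# how much I trust that peer's (possibly competitor-built) sensors.
def merged_view(own, peer, peer_trust):
    """own/peer: {object_name: confidence in [0, 1]}.
    peer_trust: how much we believe the peer's data (0 = ignore)."""
    view = dict(own)
    for obj, conf in peer.items():
        discounted = conf * peer_trust
        # Keep the more confident of my reading and the peer's.
        view[obj] = max(view.get(obj, 0.0), discounted)
    return view

mine = {"car_ahead": 0.9}
# The peer can see around the bend; I can't.
theirs = {"car_ahead": 0.95, "stalled_truck_around_bend": 0.8}
print(merged_view(mine, theirs, peer_trust=0.5))
# The stalled truck now appears in my world model at confidence 0.4 -
# my effective sensor range grew, bounded by how much I trust the peer.
```

Setting `peer_trust` is exactly the open question the discussion raises: a manufacturer taking liability for its decisions has to decide how much weight a competitor's broadcast deserves.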
But that is yet another technology leap, because you have communications that you have to worry about. And frankly, a really big question is, how do I know your data is good? For example, say you're from manufacturer A, and you have your own proprietary algorithm that you're using. And you're broadcasting data to manufacturer B, who has some liability in the product they're developing. Can they trust that what you're sending them is correct?
In other words, do they want to make a decision on their car based on data from a competitor's car? It's a very interesting debate that's out there. And there's not one answer for it all. I think we're going to continue to see it evolve over the next 20 to 30 years.
LP: Are there any privacy issues with that? Meaning...
SD: There are huge privacy issues behind connected vehicles. And there are really two issues that have slowed down the connected vehicle. One is the issue of spectrum - whether there are enough airwaves, if you will, to share, because there are other uses for those airwaves. Communications companies and internet service providers would like to get the same airwaves that the auto industry is using. So that's issue number one.
But number two, yes, there's always this concern about "Big Brother" watching me. Because if you start thinking about this technology, when you start having automated technology, the car knows a lot about its occupants. It knows where they're going. It may not know what they're doing in the car, but it knows what their destinations are.
And so the whole concept of, are people comfortable knowing that their car could be easily tracked - I'm not here to solve that debate. But what I can tell you is, a lot of people on their phones are already broadcasting an unbelievable amount of information about themselves - what their purchasing habits are, et cetera. And so, while it is a roadblock to the connected vehicle program, people are going to have to decide what they want from their technology. Because it's frankly going to be impossible not to know where people are if you're using technology to move them from point A to point B.
LP: I'm thinking about what it must have been like in those beginning days of the internet, before there was this World Wide Web. And I'm hearing you talk, and I'm kind of feeling like you're on the ground floor of something really huge. Do you feel that way? Do you feel that decades from now this may be mainstream, but right now, you're really having to work out the kinks?
SD: Yeah, I really do. And I think that's what makes it so exciting to work in this field. I think that right now we have a lot of media hype in this field.
There's been a lot of discussion about how automated vehicles are here. You're going to see them next year, you're going to see them in two years, you're going to see them in three years. If you go out and search the internet, you'll be astounded at the number of articles that were hypothesizing or prophesying that in 2020 there would be all types of automated vehicles out on the road.
And that's not going to happen in 2020. And the real issue, I think, that we're struggling with is, how do you put them in mixed traffic? And what I mean by that is, if you could overnight wiggle your nose and say, all manned vehicles go away and everything becomes unmanned, the problem is solvable much sooner.
The problem is, how do you put a computer-based car into the randomness of human-driven cars? And my favorite analogy is, go drive on [Interstate Loop] 410 at 8:00 in the morning or at 5:00 at night, and there is not much order out there.
It's more the law of the jungle. How many times can you change lanes? How fast can you go? How can you squeeze in?
An automated car has to be very orderly. It has to think logically. And it's not going to drive like a human driver. It's not going to weave and bob in and out of traffic.
And so that is, I think, one of the most significant challenges left in front of this industry from a technology perspective. There are a number of social issues we could talk about. But from a technology perspective, how are we going to handle mixing automated cars and manned cars?
And let me give you some examples. In the agriculture industry, unmanned vehicles are very common because they can work out in fields and do different operations, and Southwest [Research Institute] is actually involved with a number of agricultural companies, helping them with their automation programs.
But there's no other drivers out there and they also work in a constrained environment. If you have a tractor that's plowing a field, or you have a tractor that's working in a vineyard, it's very well-defined and there's not a new object every day. Humans are not walking through there. There's no traffic. So it's a very predictable environment.
The mining industry - again, another area where Southwest Research Institute is working - has very large open-pit mines where trucks can drive in and drive out. There's no mixed-mode traffic. So the automation problem - I'm not going to call it simple, but it's not nearly as complex as trying to drive down an interstate during rush hour.
LP: So one of the challenges then, as you mentioned, is to integrate this automated vehicle technology in with the manned vehicles. As someone who got rear-ended yesterday, I wouldn't have minded a little bit more rhyme and reason to the 5 o'clock traffic. So that brings us to the safety of automated vehicles. Are they naturally safer?
SD: This is a tough question to answer, because anytime an engineer answers a question without data, they can get in trouble. Each year in the United States right now, we're losing between 35,000 and 40,000 souls to traffic accidents. And the engineers who build cars have done an amazing job in making cars crash-survivable. Crashes that would have killed people 20 years ago, people routinely walk away from almost unscathed now. That's what the industry has achieved.
Are we going to become crashless? No. Again, you can find some articles, if you search the internet, that talk about the crashless cars of the future. Nobody who's actually building the technology really believes that; cars are still going to crash.
So the question that's really in front of us is, can we make them crash less often than man-driven vehicles? Because if we're going to lose 40,000 souls a year to man-driven vehicles, and we can't get better than that number with automation - I think better means 20,000 or less, or maybe 10,000 or less - then you wonder, what's the real purpose of the technology? Is it to save lives? Yes.
To make driving safer? There are literally millions of accidents. And you had the unfortunate experience of being rear-ended.
For example, my car. I have an interesting import car that's got all types of technology packages. And if I'm not watching, it'll actually apply the brakes and take me to zero miles per hour if it detects something in front of me. It'll actually essentially auto stop.
Those are really nice. And these are in that category of advanced driver assistance systems that I talked about - that level two, level three automation. And so again, in general, we think that automated cars should be safer because they shouldn't get tired, they shouldn't get sleepy, and we hope they don't drink or have one too many.
I say that with a bit of a chuckle, but technology also fails. I'm not sure if you've ever had your computer get a blue screen.
LP: Of course.
SD: Or if you've ever had your phone just stop working. That's one of the things I lose sleep over sometimes: how do you build enough redundancy into your vehicle, because we know technology's not infallible? And so the real question becomes, when it fails, what's acceptable? And what is acceptable to you might be different than what's acceptable to me.
My point is, if it's your family that's at risk, you may have a different perspective than if you say, oh, well, it was somebody in another country - then you may be less concerned. Again, that's this whole argument about how we put a value on all of that.
LP: So let's explore the possibility of failure for a moment. Have you looked into what happens when an automated vehicle fails? What does that look like?
SD: That's been one of the interesting challenges that people have really been trying to work on. In the military space, they continue to have a requirement, a hard requirement, that any automated vehicle must be capable of being driven by a human. It's an absolute requirement. So every one of our military vehicles has a steering wheel, an accelerator, and a brake, because they don't think that we engineers can solve 100% of the scenarios.
One of the things I want to point out, which people don't always know, is that the 40,000 traffic deaths we get a year happen in 0.0001% of the driving that occurs. So if somebody tells you, well, my car is 99% safe, they haven't even gotten to the level where people are dying. The point is, even if your car is reliable 99.9% of the time, that's still not close to where you need to be.
And so, yes, we are worrying quite a bit about this issue. You start looking at systems so that if you lose your technology, your vehicle can auto shut down. And how can you safely pull off to the side of the road?
The question is, if you have a total, complete, catastrophic computer failure, you're doomed. But if you start losing sensors, you start losing capability, maybe you find a way to move your car off to the side of the road, and you park and call for help.
Something else some of the companies are doing - the Waymos and Ubers of the world - is exploring teleoperation. For example, your automated vehicle is coming along, and let's say there's a construction zone, or something really odd happened - maybe the wind blew over a big tree and the road is totally blocked - and your vehicle simply says, 'I give up, I don't know what to do.'
They can log in from a remote control center, use the cameras of the car, and essentially - they have a joystick, if you will, I'm not sure technically what it would be - they could drive your car and put it back into a position where the automation can resume.
And you start thinking about the scale of that in the millions and millions of cars that are on the highway and how many of these control centers would we need? How many people would you need? And so, yes, engineers are considering this.
The other issue is how you build your car. How many redundant sensors do you put on? You can build an automated car with one or two sensors. But if one of those sensors goes out, what do you do? And again, if you're going three miles an hour in a parking lot, it's probably not nearly as challenging a problem as if you're going 70 miles an hour down a freeway and you lose capability.
LP: As we were discussing the integration of the automated vehicle with the manned vehicle, you have mentioned the concept of managed technology lanes. They don't exist, but what are they and what would that look like? And how would that aid the crossover?
SD: Most of our listeners are probably have been on an HOV lane, a high-occupancy vehicle lane, HOV. Whether you're going to Houston or Dallas, if you're in Texas, but almost every major city in the country has lanes that if you put certain number of people in them, you can drive during key traffic periods. Over time, we've seen those evolve to sometimes having charges evolve, so they basically become a toll lane.
Well, there's been an evolving concept in the last ten years in the transportation business called managed lanes and the idea behind a managed lane is that you're guaranteeing a level of service. For example, the tolls will vary.
The tolls will actually go up as traffic gets heavier, because they're trying to give you a guaranteed level of service. Sometimes people call those, quote, "Lexus lanes." In other words, are they lanes for the privileged, the people who can afford them?
Fast-forward to your question: what if we could go out there and put in technology lanes? And say, this lane is only good for a level five automated car? I would actually argue we could probably start deploying those things in the next two to five years if we had sections of roadway that had no human-driven vehicles in them.
But again, how do you keep people out - how do you separate that? We do what's called grade separation in the transportation business, where you actually have a Jersey barrier - those are those concrete walls - so that people can't cross over. But if you have a lane that, in theory, is dedicated to automated technology, and humans can get their vehicles driven in there - and we know humans will do stupid things under extreme situations, typically high-stress, rush-hour-type activities - that's where those things fall down.
But the real issue behind it is, then people start arguing, well, these are lanes for the wealthy. And that raises a whole other issue with automated vehicles: the cost of these things. They're not $35,000 cars. Right now, if you went out and looked at the average price of a new vehicle in the United States, it's around $35,000.
Anybody who's working in this business is taking a base vehicle platform and adding in anywhere between $100,000 and $200,000 of technology to that platform to make it an automated vehicle. And I think we all know in this country, economics is what drives change, especially in the technology space. Are we prepared to spend that much money to have this capability?
LP: One article I read says automated driving is putting us on the cusp of a transportation revolution. Do you think this is true? How far off is this?
SD: I do think it's putting us on the cusp of a revolution. I believe that cusp is maybe 30 to 50 years out. Again, different people have different opinions on this.
And you can go out, and some of the very large auto companies will tell you 2050, 2060 is the cusp that you're describing. If you go talk to the technology companies, they give you a much earlier year. Who's right? I don't know.
SD: This really raises the point that there's a whole number of societal issues we haven't talked about. And the number one issue is that if you've never ridden in an automated car, you probably should wait until you ride in one at 70 miles an hour before you say you're ready to use them. It's an unnerving experience if you like having any level of control.
I'm a control freak, I'm an engineer, I like to be in control and it's very uncomfortable sitting in the back seat of a car with nobody in the front seat driving 70 miles an hour.
The other issue in all of this that really hasn't been resolved completely is the liability issue on this and this is another of those societal issues. When an automated vehicle crashes, who is responsible?
You think of this scenario. Let's say your grandmother is blind and the car is taking her to bingo. So she gets in the automated vehicle, but on the way it hits somebody. Maybe it hits a pedestrian.
Should your grandmother, who can't see, be held responsible for that automation? Right now, different states have different solutions. We're going to have to do something to settle once and for all what we're going to do.
LP: So it seems that there are quite a few ethical debates surrounding automated driving. Why is this? There's so much to look at, so many perspectives to explore.
SD: Well, absolutely. As somebody who drives a car, you have to make decisions - probably not every day, but my guess is, once or twice a year you have to make an ethical decision. You come up to something and you may choose to go around a barrier, or you may choose to actually cross the yellow lines because a mattress fell into the right-hand lane.
So how do you teach an automated car when it's OK to break the law, number one? And that's actually more serious than you might think. How many times on [Interstate Loop] 410 - I'd say at least weekly - I see a mattress lying in the middle of the road. And there are times that I actually have to go on the shoulder to avoid that mattress. What we've taught our automated vehicle is, no, you don't drive on the shoulder, because that's not a legal thing to do.
Another aspect, and this is one people struggle with also, is that if you've ever taken a class in philosophy, or a law class, they talk about something called the trolley problem. The trolley problem is a classic ethics problem: a trolley is coming down the track and the brakes have failed, and you're the person who has to throw the switch that sends it to the left or to the right.
And on the left-hand side, there's five people stuck on the track. And on the right-hand side, there's one person on the track. So the question is, you have to throw the switch. What decision do you make? Most people will quickly say, well, I would throw it towards the one because you'd rather save five and sacrifice one.
Then comes the challenge. Let's say, well, what if that one person were the president of the United States? The Secret Service would tell you, you take out the five rather than the one. The point of this discussion is that you have to make a value decision.
So now we put a value on human life. The Secret Service would tell you, the president is worth more than the group of five. Well, if that group of five is your family, my guess is, you think the family's worth as much or more than the president.
How do we start building this into cars? Are we going to have a nationally-approved value scale where we say it's OK to hit this but not this? And again, that's not a trivial discussion.
Because the reality is, if you're driving down the roadway at 50 miles an hour and a dog runs in front of you, frankly, it's better to hit the dog than to swerve off the roadway and potentially roll your car. Now if it's a person who walks out in front of you, you're probably going to try to swerve off the roadway to save that human, because that's what we as humans do. But then if you drive off the road, roll the car, and it kills you, think about what happens in a court of law when the lawyers ask the software person, well, how did you program the car? And you say, well, since a person was there, I drove off the road. So the lawyer is probably going to say, then you intentionally killed the passenger - you decided the passenger was worth less than the person you might hit.
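[Editor's note: To make the dilemma Dr. Dellenback describes concrete, here is a deliberately simplified sketch of what encoding a "value scale" into collision-avoidance logic might look like. It is purely illustrative; every weight, name, and probability below is a hypothetical assumption, and real automated-driving stacks are not built this way.]

```python
# Purely illustrative toy model: a hard-coded "value scale" for
# choosing between risky maneuvers. The point is that every number
# here is a value judgment somebody would have to defend in court.

# Hypothetical harm weights -- who gets to pick these?
HARM_COST = {
    "pedestrian": 100.0,
    "passenger": 100.0,  # is the passenger worth less? more?
    "animal": 5.0,
}

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm.

    `options` maps a maneuver name to a list of
    (harm_type, probability) pairs that maneuver risks causing.
    """
    def expected_harm(risks):
        return sum(HARM_COST[harm] * prob for harm, prob in risks)
    return min(options, key=lambda name: expected_harm(options[name]))

# A dog runs into the lane at 50 mph: braking in-lane risks the
# animal, while swerving risks rolling the car and the passenger.
decision = choose_maneuver({
    "brake_in_lane": [("animal", 0.9)],
    "swerve_off_road": [("passenger", 0.3)],
})
print(decision)  # brake_in_lane (0.9 * 5.0 = 4.5 < 0.3 * 100.0 = 30.0)
```

Swapping the dog for a pedestrian in this toy model flips the decision toward swerving, which is exactly the kind of programmed trade-off the transcript says lawyers and regulators would scrutinize.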
And again, those are just all illustrative. But if you really try to translate that to when you do human driving, it's the same thing that you're doing. You're trying to do the right thing. And that's very difficult to build into a computer.
LP: Yeah. You have to essentially build in a conscience: decision-making tools on the fly.
SD: Conscience, that's a wonderful word. And how do you put that into an automation? Because again, we know as human beings here in this country, maybe even among your co-workers, you have different opinions than they do. We have different political aspirations.
We have all types of different ideals in this country, so how do you make a model that works for everybody? And I think equally challenging, and this is back to the societal thing, is, do you have to standardize that across all automated vehicles? In other words, should a Waymo vehicle have the same values, the same conscience, as an Uber? Or is that a product differentiation feature?
There's this big debate about whether the federal government needs to get involved and build in certain standards, and nobody really knows. The government is not doing anything about it, and I'm not saying they should or shouldn't, but I think those are the questions we've got to get settled before we're going to see widespread adoption of automated vehicles on public motorways.
LP: Yeah. So there's a lot to settle before I can hop in my car, program it, and take a nap. I'm really excited that experts like you are sorting all this out, looking at every facet of it, and trying to figure it out, so that perhaps one day this is a widespread reality rather than a here-and-there option.
So let's talk on a personal level. What inspires you to continue to pursue this technology?
SD: Part of why I'm at Southwest Research Institute is that pushing the state of the art is a wonderful thing we get to do here. And I firmly believe that, in the long run, we'll make our transportation ecosystem better and safer by automating cars.
I've been working in the transportation space for about 25 years of my career at the Institute, and there are just some people who simply don't want to drive. I have four daughters, and driving is not to them what it is to me.
I really enjoy driving. I like a car with a manual transmission. They view a car as transportation.
So if society is simply going to want transportation, automation is a super way to provide it. It's exciting to be a part of that, and it's fun to be part of the national and international debate about how we actually make these types of technologies work.
The thing I can't underscore enough is how complex this is. We talk about how complex it was to send a man to the moon, and it was amazingly complex. Making an automated car is also amazingly complex. A vehicle's got a lot of moving parts to it.
LP: That's true. And this conversation has really felt like a glance into the not-so-distant future. Definitely, so much exciting information here. So thank you, Steve, for enlightening us and sharing your vision of automated driving with us today. It's been great having you here.
SD: Well, thanks for the opportunity. It's an exciting topic, and there's a lot to read out there. I encourage people to keep reading, but put a filter on what you read and recognize that there's a marketing aspect to this as well as the technology. We're going to see a safer and better transportation space because of automated vehicles, and I'm just excited to be a minor part of moving that forward.
LP: We'll all be watching. Thank you. And that wraps up this episode of Technology Today.
Subscribe to the Technology Today Podcast to hear in-depth conversations with people like Dr. Steve Dellenback changing our world and beyond through science, engineering, research, and technology.
SwRI is a leading provider of algorithms and component technologies for automated driving for both commercial and military vehicles.