Status
You're currently viewing only Stainless's posts.
nummycakes wrote:
Code:
65 miles/hr ÷ (5 × 2,200 cars/hr) ≈ 9.5 m

Your average car is about 4 m long, give or take, so that's a bit over a car length between cars with essentially zero reaction time. I don't see why not, even if a self-driving car couldn't see more than the back of the car in front of it (or the front of the car behind it; you don't want to brake faster than they can). More distant hazards could be detected at as great a distance as humans manage, if not greater, and reacted to: there's no reason to keep cameras where the eyes of a human driver would be, or in just one spot.
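The arithmetic in the quoted post checks out; a quick sketch (the 65 mph, 5 lanes and 2,200 cars/hr figures are taken straight from the quote, pooling all five lanes' flow into one stream):

```python
# Spacing estimate from the quoted post's figures.
MILE_M = 1609.344                     # metres per mile

speed_m_per_hr = 65 * MILE_M          # ~104,607 m/hr
flow_cars_per_hr = 5 * 2200           # 11,000 cars/hr, all lanes pooled

spacing_m = speed_m_per_hr / flow_cars_per_hr
print(f"{spacing_m:.1f} m per car")   # ~9.5 m, as in the post

# At ~4 m per car that leaves only a few metres of clear gap:
print(f"{spacing_m - 4.0:.1f} m gap")
```

Note this divides speed by the *total* five-lane flow, i.e. it imagines all that traffic packed into a single stream, which is the close-packing scenario the post is describing.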

This came up in the previous thread. In short, while you could pack cars close(r) together, the practical spacing limit might turn out to be higher than you'd expect.

A partial list of issues with packing cars truly close, from memory: differing software, differing hardware, varying levels of maintenance, networking failures (if cars are networked), and all the random events that go with driving.

Even so I can see good implementations having a big impact on congestion.

What would be interesting is how automakers would deal with fully self-driving cars. For example, if I'm not behind the wheel, the carefully stratified differences in the current line-up for some companies (say BMW) stop making much if any difference. The difference between a 3 and 5 Series BMW I can see, but between a 320, 328 and 335?
 
dh87 wrote:
ChrisG wrote:
Actually, I think much of this thread is pure fantasy. Self-driving cars aren't going to start appearing on roads in significant numbers for many decades yet.

From NHK:

Nissan to test self-drive car on public roads

...

Nissan is hoping to put the self-driving car on sale in 2020.

6 years might be a bit optimistic, but "decades" is very likely wrong.

I think that a huge benefit of s-d cars will be their fuel economy. There's no reason for an s-d car to go from 0 to 60 at any appreciable rate. In fact, if I'm reading the newspaper or taking a nap, I'd prefer 0 to 60 to be as gentle as possible. Hence, engines can all be low-power hybrids or electrics. That's how all the carmakers are planning to meet the 2025 standard of 55mpg fleet average.

Electronic ABS was available in the early '70s, but ABS wasn't really widespread in new cars until the early '90s, and that's when the clock really started running for on-the-road fleet turnover. For the USA, the last figure I saw put the average car age at a bit over 10 years.

Self-drive isn't cheap and is quite limited in its currently available forms, so for significant numbers (admittedly a poorly defined mark), decades is probably not that bad a guess.
 
Deus Casus wrote:
Alamout wrote:
dh87 wrote:
My view is that if s-d cars meet 1/3 of realistic expectations of what they can do, everyone will rush to buy one.
Most people can't buy a new car regardless of how nice it is. Cars are really expensive! That's why turnover is so slow--because lots of people buy used cars and keep them for as long as possible. It's not like 12-year-old cars are really nice, but they're all over the road anyway.

and dh87 wrote:
The replacement cycle won't really play into it, unless you're thinking about complete replacement of the cars.
What other replacement cycle is there? I'm not talking about new-car-buyers buying new cars every few years. I'm talking about how long it takes for a current-year model to get off the road, and it takes 15-20 years. As cars get even more reliable, the cycle can last even longer.

We were discussing how long it takes for a large number of cars on the road to be self-driving. Unless used cars don't count as cars, that means you have to consider the entire cycle.
Depending on how well self-driving cars do, the cycle can be subverted by the government: just make some high percentage of freeways self-drive only. Watch those self-drive cars sell. Or, depending on how complex the systems end up being, it could be done aftermarket by certified techs. I wouldn't want some self-driving kit that Joe Schmoe installed on the road, but I could see retrofitting working if done under the right conditions.

That said, I don't really see retrofitting working, due to the scale involved.

Retrofitting is not likely to happen in any real volume, for the simple reasons that sales would be poor (paying thousands of dollars to retrofit a car?), no major carmaker would pursue it since self-drive is an obvious way to make new models stand out, and the technical issues would be larger and more varied in a retrofit scenario.

New technology adoption simply takes a long time to spread through the working fleet of cars. ABS and electronic fuel injection are both examples of relatively cheap and very workable systems (compared to a self-drive system of any level) that took a long time to spread across the fleet, and EFI was pretty much mandated by emissions requirements.

Let's say Nissan brings a good system (hands-off on freeways in summer, say) to market in 6 years. It won't be across the line; it would probably only be on the top trims of, say, the Altima and up, and right there you've cut the eligible sales figures to 25% at most. If all makers did the same, you're looking at ~4 million cars per year. So that's 6 years gone, plus another 4 years of a model cycle before it trickles down another step: now we're a decade in with ~6% penetration under pretty favorable conditions. And we haven't even considered an "unintended acceleration" sort of debacle occurring in the USA yet.
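The penetration figure above follows from a toy model. Assumptions (mine, extrapolated from the post): ~4 million equipped cars sold per year against a working US fleet of roughly 250 million vehicles, with scrappage of old cars ignored:

```python
# Toy fleet-penetration model. The 4M/yr and 250M figures are
# illustrative assumptions matching the post, not official stats;
# ignoring scrappage slightly understates how fast the share grows.
FLEET = 250_000_000
EQUIPPED_PER_YEAR = 4_000_000

for year in range(1, 5):
    share = EQUIPPED_PER_YEAR * year / FLEET
    print(f"year {year}: {share:.1%} of fleet equipped")
```

After 4 years of sales that comes to about 6.4% of the fleet, which, counting the 6 years before the system reaches market at all, is the "~6% a decade in" figure.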

The replacement cycle doesn't influence the first adopters. They can just buy a new car if the benefits are greater than the costs. So I expect that there will be a "significant number" of self-driving cars in a hurry.

The problem is that "significant number" isn't a defined number. Somebody might call 30K cars significant, others would call that a pilot project.
 
Control Group wrote:
ChrisG wrote:
Frennzy wrote:
Of course there is.

Cruise control would be one example.

Guess again. You're still very much in control; cruise control merely lets you take your foot off a pedal. You're still steering, required to be prepared to brake or maneuver suddenly, or you may even be required to adjust the cruise control.

Ergo you're still actually driving.
Adaptive cruise control, automatic braking, lane keeping assist, blind spot alerting, automatic parallel parking - each of them does something for the driver that the driver had to do manually before. How is this not a continuum between full manual control and fully automatic driving? Who's liable if the automatic parallel parking runs over an orphan lying on the pavement?
That is a continuum, but at some point a line gets crossed.

The orphan is the driver's fault. As far as I can tell, in the USA, anybody with money and a link to the incident is liable. An insurance rider is the best answer to this question I've seen so far.
 
Chuckstar wrote:
Alamout wrote:
Another analogy is something like UDP streaming for video. While it will drop packets every once in a while, it's not worth trying to correct for that (like TCP would). You just move on to the next packet and ignore the error, because it's a tiny percentage of the data you're receiving.

The car is using a similar protocol for mapping out the world around it--it doesn't need to correct every bad signal, it can just smooth over it because errors are very rare.
Exactly. But I think "errors are very rare" is somewhat of a simplification: the errors just have to be rare enough. How rare depends on how quickly your sensor is updating its view of the world. If you're updating the world view 100 times a second, you can probably afford to smooth over a couple of large errors per second. If you're updating it 5 times a second, then errors really need to be pretty rare. Off the top of my head, I don't know how many full frames per second the Google LIDAR gives them. Note that for close-in things, like the car in front slamming on its brakes, the Google car also uses radar (IIRC), so you're not at the mercy of the LIDAR for really time-critical things like emergency braking.

I think they are using Velodyne, though I'm not searching for confirmation right now. Looking at the Velodyne 64E datasheet, it lists an adjustable "frame rate" of 5 to 15 Hz and a fairly typical 1.3 million points per second; it also seems to be time-of-flight based (not continuous wave).

I put "frame rate" in quotes because this isn't like a camera: what you're really adjusting is the rate at which the laser head spins. So you're mainly trading off horizontal angular resolution against how often you scan a given space, but it's a convenient label.
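That trade-off can be roughed out from the datasheet numbers. A back-of-envelope sketch (64 lasers and ~1.3M points/s are HDL-64E datasheet figures; the per-laser firing rate derived from them is my estimate, not a spec value):

```python
# Spin rate vs horizontal angular resolution for a 64-laser
# scanner at ~1.3M points/s.
POINTS_PER_SEC = 1_300_000
LASERS = 64

per_laser = POINTS_PER_SEC / LASERS       # firings/s per laser

for hz in (5, 10, 15):
    points_per_rev = per_laser / hz
    res_deg = 360 / points_per_rev
    print(f"{hz:>2} Hz: {res_deg:.2f} deg between points")
```

Spinning faster refreshes any given direction more often, but spreads the same point budget over more revolutions, so horizontal resolution coarsens from roughly 0.09° at 5 Hz to roughly 0.27° at 15 Hz.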

LIDAR data always has some noise; even on a calm day you'll get spurious points from dust and insects, and probably the worst kind comes from a reflective surface at an oblique angle, as those can look like a valid object. And any kind of precipitation can effectively blind a LIDAR system, as the returns from particles in the air can outnumber and completely obscure the returns you're interested in. It takes far less rain or snow than you would think to be a real problem: barely into "use the wipers" territory. I don't have access to a sample right now, but in rain it's pretty easy to have over half of the points be useless noise.

As for braking: a roof-mounted LIDAR system will have a 360-degree blind spot centred on the laser head, since anything too close to the car will be in the shadow of the car and roof, which is why it's mounted as high as possible. Time-of-flight systems usually have a minimum distance they'll measure, but a typical minimum would fall very close to or inside the vehicle footprint anyway.

Anyway, back to the point: the update rate is in the low Hz range, but atmospheric conditions are a much bigger factor than the update rate in whether the error rate is reasonable.
 
Removing elevation data completely from a LIDAR scanner is a pretty amusing idea.

A typical mobile LIDAR setup works roughly like this (for time of flight):

1: A laser pulse is sent out; the beam widens with distance, though Velodyne's range is pretty short. The horizontal and vertical angles of the pulse, along with the position and attitude of the scanner, are recorded (i.e. GPS and INS: accelerometers and gyros).

2: Reflections from the pulse return to the scanner and are recorded along with the info from when it was sent. Depending on the system, we get anywhere from 1 measurement (a point, or pixel if you like) to 10+ from that one original pulse. This is actually an important distinction: a system can claim up to 1 million points a second while only pulsing 100,000 times a second, if it can distinguish up to 10 points per pulse.

3: Using the time between send and receive, plus the location of the system and all the angles recorded, you get a location for whatever the laser reflected back from.

If you try to just sweep a big vertical beam, the data becomes just about useless. One pulse will reflect back from a massive number of objects, and the only things you'd be told are distance, horizontal angle and maybe how reflective they are. Since this is a vehicle, the beam won't ever be perfectly vertical, so all your points end up in the wrong horizontal position anyway, because you intentionally discarded the information needed to correct for tilt. Yes, you can know that the vehicle is tilted, but you don't know where in the beam the return came from, so that doesn't help. On a banked corner you won't even be able to tell if a return is beside the road or on it.
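Steps 1-3 can be sketched in a few lines; a minimal illustration (the numbers are made up for the example, and this is not any vendor's actual pipeline) of what the elevation angle buys you:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def tof_point(round_trip_s, azimuth_deg, elevation_deg):
    """Range from time of flight, then the two recorded angles
    turn it into a 3D point in the scanner's frame."""
    r = C * round_trip_s / 2.0                  # one-way distance
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),    # x: forward
            r * math.cos(el) * math.sin(az),    # y: left/right
            r * math.sin(el))                   # z: up/down

# Two returns with identical time of flight and azimuth:
barrier = tof_point(2e-7, 0.0, -1.0)   # ~30 m out, near road level
bridge  = tof_point(2e-7, 0.0, 20.0)   # ~30 m out, ~10 m overhead
```

Discard the elevation angle and both returns collapse to "something ~30 m dead ahead at azimuth 0": the bridge you pass under and the barrier you'd hit become the same measurement.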

Also, as soon as you exceed your ability to handle points, you risk missing objects at random. I'm not even sure there's a reliable way to handle the returns that the road alone will generate.

You can't pick out most objects, because you have no idea of their position, shape or size. Protruding manhole or light pole, plastic bag or person, bridge you're about to pass below or concrete barrier: you can't tell them apart reliably.

This is leaving out a number of practical problems with building the system in the first place.
 
Stainless wrote:
2: Reflections from the pulse return to the scanner and is recorded along with the info from when it was sent. Depending on the system we get anywhere from 1 measurement (a point or pixel if you want) to 10+ from that one original pulse. This is actually an important distinction - a system can claim up to 1 million points a second, but only pulse 100,000 times a second if it can distinguish up to 10 points from a pulse.


This is why you don't want to have elevation information unless you must. Fast detectors aren't super expensive, but the detection electronics and optics can be much simpler if you have fewer channels.

etc.

I think you need to reread the post.
You want to take a system based on polar coordinates and just throw out one of the angles.
 
redleader wrote:
Stainless wrote:
I think you need to reread the post.

You're assuming these systems work in a mode where each reflection is a discrete event, and while you can build systems like this (so-called Geiger-mode detection), you would not want to for this application.

No offense, but you've thought this through about 0.00001% as much as I have, so if you think I'm overlooking something obvious, it's more likely that you're just making bad assumptions.

You've quoted the text where I infer that each reflection is not a discrete item - hence why systems can distinguish more than 1 point from a pulse.
 
redleader wrote:
Stainless wrote:
You've quoted the text where I infer that each reflection is not a discrete item - hence why systems can distinguish more than 1 point from a pulse.

Ok, but that's not what I tried to explain. I'm not saying each pulse is a discrete measurement, I'm saying each reflection is a discrete item. That is what is meant by Geiger mode (basically a detector hooked up to a discriminator so that it generates TTL pulses when the signal exceeds a threshold). Geiger-mode devices have a limited count rate (because TTL pulses have a fall time), which is what I think you're describing above when you say "distinguish up to 10 points from a pulse".

Non-Geiger-mode devices have no such limit (e.g. the maximum reflections per pulse is just the detector saturation power), hence I was suggesting to you that if you find the count rate is a problem, it's because you've picked the wrong detection scheme. If that's not what you meant, feel free to explain yourself.

What I'm saying is that you get back a waveform from the pulse. Aside from very simple systems, you take that waveform and use it to determine how many objects the pulse hit. There are of course trade-offs in the method you use, in the turnaround time and the number of objects you can deal with.

I suppose you could attempt to build a more vector-based system that uses the waveform directly or nearly so, which might be interesting. Really, though, I just meant this point as background, and this is slightly missing my point.

You've taken a system that measures a 3D world in polar coordinates and tossed one of the angles. Right off the bat the positional data is just about useless. For example, what's the difference between a bridge above the road and concrete barriers used to close a road temporarily? Both will have the same range of reflectivity and can be equally wide; the difference is in elevation. Overhead sign or back of a commercial vehicle? There are a couple of ways to guess, but none nearly as reliable and simple as the fact that the sign is 10 meters up.

Also, the horizontal position is off: you can't compensate for the tilt of the vehicle, as you mention, because you've thrown out a critical piece of data you'd need to do it. Say the vehicle is tilted to the right: anything scanned above the plane perpendicular to the beam is offset to the right of the position you'd calculate, and anything below it to the left. Easy to fix if you know whether a return is above or below the plane and by how much, except that's exactly the data you just tossed.
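The size of that tilt error is easy to put numbers on. A small-angle sketch of the claim above (the 5 m and 3° figures are purely illustrative, and this is not a calibration formula):

```python
import math

def lateral_error_m(height_above_plane_m, roll_deg):
    """Cross-track error when a return that is actually
    `height_above_plane_m` above the scan plane is treated as if it
    were in the plane, while the vehicle is rolled by `roll_deg`."""
    return height_above_plane_m * math.sin(math.radians(roll_deg))

# Overhead return 5 m above the scan plane, vehicle rolled 3 degrees:
print(f"{lateral_error_m(5.0, 3.0):.2f} m sideways error")   # ~0.26 m
```

Knowing the roll angle alone doesn't help: correcting for it needs the return's height above or below the plane, which is exactly the quantity that was discarded.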
 
Dmytry wrote:
Speaking of traffic lights, cars can speed up and slow down a bit so that they arrive at the traffic light when it is green.

This doesn't even need automatic cars, just some software that tells you that you can either speed up to X or slow down to Y to hit green.

Software doesn't even need to be in the car - I've seen this done effectively with roadside signs that display the speed you need at that point to hit the next green. People actually paid attention because it worked.

This was on a medium-speed road, though, not an inner-city grid with lots of closely spaced intersections.
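The logic behind such a sign can be sketched simply. A hypothetical helper (the function name, the speed limits and the timing figures are all my illustrative assumptions, not any deployed system's algorithm):

```python
def advisory_speeds(distance_m, green_start_s, green_end_s,
                    v_min=8.0, v_max=16.7):
    """Band of speeds (m/s) that arrive during the green window,
    clamped to assumed road limits (16.7 m/s is ~60 km/h)."""
    # Arriving just as the light turns green needs the highest speed;
    # arriving just before it turns red again allows the lowest.
    fastest = distance_m / green_start_s if green_start_s > 0 else v_max
    slowest = distance_m / green_end_s
    lo, hi = max(slowest, v_min), min(fastest, v_max)
    return (lo, hi) if lo <= hi else None   # None: can't make this green

# 300 m from a light that turns green in 20 s and red again in 40 s:
print(advisory_speeds(300, 20, 40))   # (8.0, 15.0) m/s, ~29-54 km/h
```

A roadside sign would just display one speed from that band; an in-car version could do the same continuously as the distance closes.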
 
blargh wrote:
While a rather different environment from general roads, self-driving trucks are coming to the mining industry, and we're not talking about pickups here.

Job-wise:
“That will take 800 people off our site,” Cowan said of the trucks. “At an average (salary) of $200,000 per person, you can see the savings we’re going to get from an operations perspective.”

It's not coming to the mining industry; it's already there, and now it's spreading. Open-pit mines are probably one of the best early-adopter cases, since the company has a high degree of control over the environment, the work is repetitive, and labour cost is high.
 
Chuckstar wrote:
redleader wrote:
Stainless wrote:
To be fair, stereoscopic vision and a sense of acceleration (in 3D) is pretty much all people use to drive a car - so if it has stereoscopic cameras (depending on the field of view) and accelerometers (which pretty much every car does) it's a fair claim.

3D radar/lidar is a lot easier to work with, which is why all the early prototype devices use it (it literally gives you a 3D map of what's around you), but you're right that stereoscopic cameras may work just fine in place of range finding.
We engage in quite sophisticated processing to pull all the necessary 3D data out of our stereoscopic vision, especially considering the limitations of binocular vision for determining depth past a certain distance, where we fall back on motion parallax and contextual data (such as knowing how big common items usually are). Using lidar would cut out a lot of the processing between sensor data and a local world map.

Also, we can swivel our stereoscope around, while fixed camera positions might need to be augmented for any detailed range data that might be needed to the sides/rear.

It does, and that's probably why the first systems use it: better resolution than SONAR, easier than RADAR in some ways, and much lower processing requirements than cameras.

But LIDAR performs horribly in precipitation of almost any kind. Depending on the wavelength you use, water absorbs, reflects or refracts, while snow, dust, smoke etc. either reflect or absorb. All of those are clearly bad outcomes, and to make matters worse, high-speed scans will pick up the same airborne particles/drops multiple times. For example, this can turn a very light shower that would not give a human pause into a veritable cloud of points obscuring the actual scene. While you can filter to some degree (and that can become a processing issue as well), it just isn't as all-weather capable as some other sensors.

I'm not saying LIDAR doesn't have its place; even a vehicle that is only fully self-driving in clear weather is pretty damn useful, and in concert with other sensors LIDAR is terribly useful a lot of the time. Just that if you want to (and I wouldn't) bet on a very limited set of sensor types, cameras and accelerometers are the obvious choice, as they replicate the two main sets of data we use to drive.

To be fair, good LIDAR is relatively immune to lighting problems, can read most signs, can pick up lane markings, and actually does a very good job of finding the tire grooves in pavement, all of which could be very useful (no lane markings, sunset/sunrise, unlit roads and signs).
 
1) 2D is probably enough. You don't really need elevation range finding in general (it's not like cars have to jump over things or slip under limbo bars). Furthermore, if you also have stereo optical imaging you can probably get elevation well enough from that by fusing images with ranging.

Pure 2D isn't really enough; cars drive under a surprisingly high number of things. Bridges, tunnels, overpasses, overhead fixtures, overhanging trees etc. would all be a pain to deal with reliably. I'd think you'd wind up restricting RADAR to just very close-range info.
 
redleader wrote:
irenic wrote:
I think in theory that might be the case, but from I've seen of beamforming in medical imaging (esp 3d) it's easier said than done and getting enough volume can be a problem.

If you look at medical imaging, basically 100% of 2D and now 3D ultrasound systems are beamformed, and that's a much smaller volume than automotive. Radar is higher bandwidth, but not tremendously so. It turns out it's just cheaper in the long run to avoid mechanical scanning if you at all can.


Stainless wrote:
1) 2D is probably enough. You don't really need elevation range finding in general (its not like cars have to jump over things or slip under limbo bars). Furthermore if you also have stereo optical imaging you can probably get elevation well enough off of that by fusing images with ranging.

Pure 2D isn't really enough cars drive under a surprising high number of things - bridges, tunnels, overpasses, overhead fixtures, overhanging trees etc would all be a pain to deal with reliably.

You need to be able to see these things, but do you really need to be able to resolve them along the elevation axis? Probably not. Knowing if you are going to collide with something is enough, you don't need to know exactly where along the bumper it will hit.

Yes, you need to resolve along the elevation axis, maybe not as finely as on the horizontal axes, but you can't just chuck the entire z axis and still get very useful information.

If you have a 2D system you can't tell how high something is, so, for example, the back of a truck sticking out across the road and the arm of a single-arm sign like the one below will look the same to RADAR.
[attached image: single-arm overhead sign]


You could restrict the vertical beam spread and the horizontal distance at which you're willing to use it, in order to make reasonable assumptions, and what you'll wind up with is a very short-range system that can't replace the role LIDAR holds, but is good for parking and last-minute collision warning.
 
You can't distinguish between the crossing sign and a trailer based on reflectivity since you have no idea how the trailer or the sign is constructed - could be any mix of materials. Could even be a signpost beam sitting on pup trailer wheels or lying in the street!

If you can't distinguish elevation and have a wide vertical beam, then overhead objects far enough down even a flat road will appear on RADAR; since you can't tell the elevation, you can't tell if it's something you'll pass below or hit. So now you need to tighten the beam and/or restrict how far out you look. Throw in roads that aren't flat and it gets worse.
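The geometry of "far enough down a flat road" is straightforward. A sketch with illustrative numbers (the sign height, sensor height and divergence angles below are assumptions for the example, not real radar specs):

```python
import math

def ambiguity_range_m(obj_height_m, sensor_height_m, half_div_deg):
    """Range beyond which an overhead object sits inside a radar
    beam with the given vertical half-divergence, assuming a
    perfectly flat road and a horizontally aimed beam."""
    rise = obj_height_m - sensor_height_m
    return rise / math.tan(math.radians(half_div_deg))

# 5 m overhead sign, sensor mounted at 0.5 m:
for half_div in (10, 5, 2):
    r = ambiguity_range_m(5.0, 0.5, half_div)
    print(f"+/-{half_div} deg beam: sign is in-beam beyond ~{r:.0f} m")
```

Tightening the beam pushes the sign-vs-obstacle ambiguity farther out (roughly 26 m at ±10°, 51 m at ±5°, 129 m at ±2° for these numbers), but any grade change or banking breaks the flat-road assumption the calculation rests on.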

Thanks for reminding me, I thought the argument seemed familiar.
 
redleader wrote:
Stainless wrote:
You can't distinguish between the crossing sign and a trailer based on reflectivity since you have no idea how the trailer or the sign is constructed - could be any mix of materials. Could even be a signpost beam sitting on pup trailer wheels or lying in the street!

Like I explained to you last time, you can distinguish them by correctly choosing your elevation and divergence angles.

Stainless wrote:
If you can't distinguish elevation and have a wide vertical beam then objects far enough down even a flat road will appear on RADAR - since you can't tell the elevation you can't tell if it's something you'll pass below or hit.

Yes, radar has a diffraction-limited maximum effective range, but it's quite large. I suggest you try calculating it. Probably you will be surprised.

Stainless wrote:
So now you need to tighten the beam and/or restrict how far out you look.

I think we are using different terms for the same thing and therefore missing each other's points.

You reduce your beam divergence, which, rather than reducing range, actually extends it.

Stainless wrote:
Thanks for reminding me, I thought the argument seemed familiar.

Next time, just reread the explanation, so that everyone else doesn't have to listen to me repeat myself because you've forgotten.

You're right on the terms for the same items. I'll use yours.

A nice narrow vertical divergence angle and a carefully chosen (but fixed) elevation angle extend the range of the RADAR, but you still can't distinguish between the overhead sign and the truck; at best you just don't see the overhead sign at all.

You end up limiting how far out you can look, not necessarily because the system can't see that far, but because your assumptions about where an object sits vertically become increasingly less valid, and possibly completely invalid at times if you don't have detailed topography data (and no random changes to the road surface and surroundings have occurred since the data was gathered).

Might be cheap and useful, but it isn't a good replacement for the sort of data you can get out of a 3D active sensing system with good range.
 
Chuckstar wrote:
Megalodon wrote:
The more I read about this the more I think highway driving is the closest to being solved, and ambiguous/complex situations in urban environments are the bigger challenge.
I think that comment is mostly right, but you have to include bad-weather driving in there somewhere. I'm remembering driving with an inch of snow on the ground, where I've had to pretty much just intuit where the lane must be. More important than differentiating lanes is differentiating where the edge of the road must be. I know I've occasionally run into curbs on familiar roads where the snow was obscuring the transition between road and curb. And even if you have a fine-resolution map, GPS simply is not accurate enough to keep you from hitting the curb without help from some kind of sensor system. Not sure how well radar and lidar work in heavy rain or snow, either.

Don't get me wrong. I'm sure it's possible for developers to figure out how to get the computer to drive safely in snow. I just suspect they will tend to fully tackle that issue after tackling other scenarios first.

Technically, GPS can be accurate enough to keep a car in a lane (you can get single-digit-centimetre horizontal accuracy), but two problems make it a less-than-great solution: coverage drops in urban canyons (surrounded by tall buildings), tunnels etc., and using GPS means working from a fixed map of the lanes, which at some point will go out of date and be wrong.

Probably the worst thing about snow and lanes is that the location and number of lanes pre-snow is fairly often not at all the same as during and after the snowfall, plus the ruts that form over time. I've seen two-lane roads become 3 ruts, so two-way traffic is sharing one rut; 4-lane roads become 5; and all sorts of goat-path randomness. Like you say, probably doable, but certainly not low-hanging fruit.
 
As for passengers, do you have anything to back this up? Why should we expect passengers to be sufficiently aware of surrounding events when we know that the driver, the person specifically assigned that task, often is not?

I've heard this referenced in the past. Though I don't have any links to the study, I recall the reasoning to be that the passenger in the car is more likely to pick up on cues that the driver is busy at a particular moment. Basically, a passenger is more likely to pause conversation or allow the driver to pause (i.e., not prompting them for an immediate response) when appropriate, say while turning left or merging etc., whereas a remote person will tend to set a more relentless pace of conversation since they're oblivious to most of those cues.

Edit: Found two links, though both are typical articles referencing a study (two different studies) with little to no hard numbers.

http://well.blogs.nytimes.com/2008/12/01/chatty-driving-phones-vs-passengers/
http://www.sciencedaily.com/releases/2008/12/081201081917.htm
 
[url=http://meincmagazine.com/civis/viewtopic.php?p=30269515#p30269515:2mbaj6ci said:
Tom the Melaniephile[/url]":2mbaj6ci]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30239435#p30239435:2mbaj6ci said:
Pont[/url]":2mbaj6ci]

They'll be so damn convenient and desirable that even places like rural Arkansas will quickly be putting up e-signs and embedding simple RF tags in the road to help them along. All of the "major issues" with self-driving cars are really not that hard to solve, and there is a lot of incentive for everyone to cooperate in solving them.

DOTs are already discussing embedding RF tags in road signs and in the roads. Costs have gotten plausible, they weren't plausible a decade ago when there was an earlier push for "smart" roads with tags.

Major problem with tagging in the road itself is that the easiest way to tag is in the striping or pavement markers - but in states with snow, those get destroyed pretty fast by plows. Tagging in the road matrix itself is notably more difficult/expensive.

Tagging signs is easy - if you do it as new signs are deployed.

However: Expected lifespan of a sign is 10+ years, and it's expensive/time consuming to go visit each existing sign to slap on an appropriate tag. Accessing overhead highway signs is a noticeably more involved operation than roadside.

What's the point of tagging signs?
Wouldn't a very large percentage of permanent signs be covered via OCR and generally have the same info represented in already existing navigation data products?
 
Edit - started typing this a while ago, before Chuckstar and Alamout posted much the same slant.

[url=http://meincmagazine.com/civis/viewtopic.php?p=30272031#p30272031:y73b6u86 said:
Happysin[/url]":y73b6u86]
What's the point of tagging signs?
Wouldn't a very large percentage of permanent signs be covered via OCR and generally have the same info represented in already existing navigation data products?

Obscured signs, snow-covered signs, signs behind another moving vehicle. Both/and seems a much better option than relying solely on OCR.

Obscured signs - move the sign or trim the vegetation; sending a crew to tag a sign instead of fixing the basic problem is an odd approach.
Snow-covered signs - I live in an area with snow, and a snow-covered sign is a very rare thing - and even then odds are the info on the sign is in a nav database. And I'd note that yield & stop signs are still very recognizable due to their shapes.

Moving vehicles - sure, that'd be an improvement, but really there's an argument to be made that a sign is poorly placed if traffic obscures it long enough for a significant number of cars to pass by without being able to see it.

Sure, it might be better than just OCR, but that's ignoring how static signs are, how much of that info is already in databases, and the cost of replacing signs. I can see uses for RF tags or even simple optical/RF reflectors for road markings, but just generally RF tagging every new sign doesn't really make sense.
 
[url=http://meincmagazine.com/civis/viewtopic.php?p=30386919#p30386919:3rtm4usq said:
ZnU[/url]":3rtm4usq]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30386821#p30386821:3rtm4usq said:
irenic[/url]":3rtm4usq]
Can't remember if I mentioned that self-driving taxi fleets make no sense. It's using a very complicated machine with unlimited downside in legal liability to replace a minimum-wage job.

Maybe if you assume autonomous vehicles get in accidents more often than minimum wage human drivers. But everyone who has anything to do with autonomous vehicle development believes the opposite will be true. By a wide margin.

You might also get to sidestep most or all controls on the taxi industry. No need to buy taxi medallions (or the local equivalent) from a limited pool when you can run an automated car sharing service. Basically the Car2go model, but the car does the driving. It also opens up the possibility of gaining a near monopoly for the first entrants to the market.

Not saying that legal liability isn't a problem, just that it's more than replacing a minimum wage job.
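The "replacing a minimum-wage job" framing can be put into rough numbers. A toy break-even sketch (every figure below is an illustrative assumption I've picked, not industry data):

```python
# Toy comparison: annual cost of human drivers vs. an assumed
# autonomy package for one taxi. All numbers are assumptions.

DRIVER_WAGE_PER_HR = 10.0    # near minimum wage, benefits ignored
HOURS_PER_DAY = 16           # two shifts keeping one cab on the road
DAYS_PER_YEAR = 365

annual_driver_cost = DRIVER_WAGE_PER_HR * HOURS_PER_DAY * DAYS_PER_YEAR

AUTONOMY_KIT_COST = 75_000.0  # hypothetical sensor/compute package
AMORTIZATION_YEARS = 5

annual_autonomy_cost = AUTONOMY_KIT_COST / AMORTIZATION_YEARS

print(f"Human drivers: ~${annual_driver_cost:,.0f}/yr")
print(f"Autonomy kit:  ~${annual_autonomy_cost:,.0f}/yr")
```

Under these made-up numbers the drivers cost roughly four times the amortized hardware, before counting the medallion/regulatory angle mentioned above; the real open question remains the liability cost, which this sketch deliberately leaves out.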
 