Observatory Version of Misc Musings, Ravings, and Random Thoughts


Dmytry

Ars Legatus Legionis
11,443
Kalessin said:
Dmytry said:
bluloo said:
As you note, there's a greater certainty in the observations found in physics experiments, so replication studies aren't often necessary.
And yet there are always replication studies in physics, as it is a quantitative field where new experiments improve accuracy or are necessary for practical applications.
Are the two of you familiar with Feynman's "Cargo Cult Science" commencement address at Caltech and the subsequent adapted essay?
Yeah... I think his point on scientific integrity may be the most spot-on - that you need to list all the other explanations you can honestly think of (rather than just coming up with a few explanations that your findings would contradict). Take for example those priming studies which fail to replicate.

The excuse from folks doing those priming studies: there are so many factors, maybe something is missing in a replication.

Well, that's the whole point - in the original findings there were even more unknown factors involved (with a stopwatch and everything, whereas a replication uses IR sensors), so you don't know that the priming idea is the cause (even if we take it at face value that there was a difference between the groups in the first place). If it fails to replicate, and the replication tried to recreate the priming (but not the unknowns), then the cause is somewhere within the unknowns.

If a chemist uses dirty glassware and finds that, surprisingly, X catalyses a reaction a little bit (he was testing X's efficacy as a catalyst), and others fail to replicate, well, it's his fault he didn't write in his paper that the bottles were so-and-so dirty and the catalyst could be any one of the compounds in the "dirt" (and to actually make a contribution he'd have to painstakingly find which compound it was). There was "form hypothesis" and "test hypothesis", but "actually figure something out" was absent. (That sometimes happens in physics too, and guess what, physics can't get away with it and neither can any other field.)

edit: cite on the priming story.

Now what I think is true, though, is that the priming findings are probably no worse than most other findings - just under greater scrutiny (the physics equivalent would be, I dunno, a special paint that makes airplanes go faster - of course that's gonna be replicated, because it is practically useful - it is often useful to get people to walk faster or score better on memory tasks).
 

Dmytry

Ars Legatus Legionis
11,443
What's up with all the attempts to explain first-world obesity with literally anything other than "over-eating and under-exercising" (mostly the former)? I just don't get it. It's like trying to explain lung cancer with the use of cheaper cigarettes. Yeah, the cheaper cigarettes might (or might not) cause a slightly higher cancer rate than expensive cigars (and it would be really difficult to find out, given that the price difference is a huge confounder), but the message should still be "don't smoke" (or "don't overeat"). Yet the message from the ads is always "do over-eat"; there's no regulation of portion sizes at your fast food, nothing like, say, mandatory stickers on packaging that you could stick on a card to count how much you ate.
 

Dmytry

Ars Legatus Legionis
11,443
So I go to an ice cream place and I see an extremely obese family getting their obese kids even more obese (all of them white, they drove here, it's not cheap ice cream). They'd never let their kids smoke, yet they are causing even more harm than if they did. It's just mind-blowing. People who have everything are inflicting damage comparable to that of the WHO's "moderate starvation" at some third-world $1/day standard of living. It's not self-inflicted anymore either, it's idiot-inflicted upon a child.

edit: and no, I don't think it's entirely "individual moral failing". There's a huge collective component to the failure. Anything else of similar health impact would get regulated as a hazardous material. edit2: and it doesn't really matter what the cause of that kid's increased appetite is; maybe he got that virus, it doesn't matter - the parent is doing an unacceptably bad thing and it's seen as acceptable. edit3: and Charles Stross is completely off base with regard to "and an example of our expectations of health being infected by misplaced moralizing". Imagine if they let their kid smoke or drink alcohol - CPS would have been called immediately (as it should be). There's almost no moralization even when a child is involved, which, if you think about it, is highly unusual for Western culture.
 

Dmytry

Ars Legatus Legionis
11,443
Alamout said:
...that was exactly his point.
I thought he was talking about the possibility that obesity was caused directly by a virus rather than coming up with explanations for over-eating and under-exercising. (I dunno how much the junk food changed anything though, the calorie intake is generally regulated in the long term based on the actual calorie amount, and very calorie dense foods existed for a long time).
 

Dmytry

Ars Legatus Legionis
11,443
Apteris said:
Dmytry said:
Alamout said:
...that was exactly his point.
I thought he was talking about the possibility that obesity was caused directly by a virus rather than coming up with explanations for over-eating and under-exercising. (I dunno how much the junk food changed anything though, the calorie intake is generally regulated in the long term based on the actual calorie amount, and very calorie dense foods existed for a long time).
Alamout has it, I was arguing that systemic factors are most likely to blame, be they aforementioned virus, sedentarism and sugar-filled foods, or others.
Well, yeah - if that's a general trend then there's systemic factors at play.
Though I'm perfectly ready to agree ice cream is not the best dietary choice if you're obese.
I don't think ice cream is necessarily a bad choice if it's eaten in moderation. One thing that seems kind of ridiculous to me in the US is the obsession with eating something different instead of eating less. Especially back in the day, trying to avoid fats (on the grounds of caloric density). Didn't work. Now it's heading straight towards eating too much protein.
Dmytry said:
Imagine if they let their kid smoke or drink alcohol, CPS would have been called immediately (as it should be). There's almost no moralization even when a child is involved, which if you think about it, is highly unusual for the western culture.
I'd guess it was because for the vast majority of human history, more food was always better. We haven't really come to grips--accepted on a deep level--either with the fact that there is such a thing as too much food, or with there being tasty food that's best left uneaten.
Yeah, that's probably it.

I think the ideal diet would be quite calorie-dense (and micronutrient-complete, but that's pretty easy to attain), eaten in moderation. Not going too high above the RDA on protein (i.e. it'll be mostly fats and carbs). A diet so calorie-poor that you run out of time / your stomach is full before you can overeat would be highly abnormal and waste too much time, even though it may still be preferable to calorie excess.
 

Dmytry

Ars Legatus Legionis
11,443
dragonlord said:
The other thing is that it's not the ice-cream that's bad for the children. That's just a treat. It also isn't either over-eating or the lack of exercise that's causing the problems, it's the combination of the 2 on a daily basis that's causing problems.

The problem with the over-eating isn't that these families start out massively over-eating, it's that they get their portion sizes slightly screwed on the more side and then it snowballs from there.

From an education point of view, the authorities need to be careful that they don't go from over-eating/not enough exercise to under-eating. I think that this is because the problems aren't just McDonalds and co, but they're also tied to things like the clean plate policy that many of us grew up under.
It's not even just that; there's also a complete lack of reasonable response to the kid's BMI being too high. Under-eating (with adequate vitamins, minerals, and electrolytes) is precisely what that kid must be doing until the kid's BMI is closer to normal. I don't know if the doctor doesn't tell the parents the kid needs to plain eat less, or the parents don't listen. I would guess they're trying everything (except eating less) and blaming genetics. Some parents also force overweight kids to eat - I've seen that with my own eyes (although not with the above-mentioned extreme family). edit: apparently a lot of parents outright don't realize there's a problem in the first place. So if a parent describes severe childhood obesity as "about the right weight", it's not hard to guess what the parent was doing when their kid was "underweight".
 

Dmytry

Ars Legatus Legionis
11,443
Exordium01 said:
PerpetualMind said:
This seems promising: How a Microscopic Supercapacitor Will Supercharge Mobile Electronics
"Laser-etched graphene brings Moore's Law to energy storage". I'm bemused at how inexpensive the production technique is.

They've been talking about this exact thing since 2012. New articles get run periodically.
It's really inexpensive to make supercapacitors in general. Just make charcoal out of some coconut shells, make a paste from the charcoal and salt, smear it on foil, put paper in between, put it in a little jar of salt water, and voila, you've got yourself a supercapacitor. It'd be pretty hard to beat natural materials in farads per dollar.

Attaining high energy density and low resistance is what's difficult. They don't make any specific claims about energy density or anything there, which makes me doubt they even managed to beat coconut shells. A very tiny battery with very low energy density would have very few, if any, uses (due to the low energy density you'd rather want it to be big), and especially not in something like a pacemaker, which needs high energy density. If the resistance is low enough it could be used as a small capacitor (which would be pretty good - putting capacitors on the chip, eliminating external electrolytic capacitors altogether), but that's about it.
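To put rough numbers on the farads-versus-energy-density point: the stored energy in a capacitor is E = ½CV². The cell voltage, capacitance, and masses below are illustrative assumptions, not figures from the article:

```python
# Rough comparison of supercapacitor vs. battery energy density.
# All numbers here are illustrative assumptions, not measurements.

def supercap_energy_wh(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 1/2 * C * V^2, converted J -> Wh."""
    return 0.5 * capacitance_f * voltage_v ** 2 / 3600.0

# A hypothetical 100 F supercapacitor charged to 2.7 V (a typical cell voltage):
cap_wh = supercap_energy_wh(100, 2.7)   # ~0.1 Wh
cap_mass_kg = 0.02                      # assume a ~20 g cell
cap_density = cap_wh / cap_mass_kg      # ~5 Wh/kg

# A lithium-ion cell for comparison (assumed ~200 Wh/kg):
li_ion_density = 200.0

print(f"supercap: ~{cap_density:.0f} Wh/kg, li-ion: ~{li_ion_density:.0f} Wh/kg")
```

Lots of farads, but the low voltage squared keeps the stored energy tiny - which is why "cheap farads" and "useful energy density" are such different claims.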
 

Dmytry

Ars Legatus Legionis
11,443
RAOF said:
Plus, humans have evolved¹ since the paleolithic. Even if a typical paleolithic diet existed, we knew what it was, and it happened to be the healthiest diet for paleolithic humans, there's still no particular reason to expect that it's the healthiest diet for modern humans.

¹: Widespread adult lactose digestion says hi!
Yeah. I very much doubt that whatever they were eating would have been the healthiest diet for them anyway. The idea of "healthy" and "unhealthy" foods in the developed countries is quite divorced from the reality of eating slightly rotten meat and just living with the effects of the bacterial toxins - or having to choose between scurvy and parasites if you only got meat: cook it, you kill the parasites and destroy the vitamin C; don't cook it, you get some parasites. That's the real unhealthy, not the first-world-problem "Bet you can't eat just one!" unhealthy.

edit: It's still crazy to me that one could be so well insulated from real food problems as to just assume that cavemen ate some good diet.
 

Dmytry

Ars Legatus Legionis
11,443
Well, isn't it like ~3%? There could be plenty of groups of Cro-Magnons from back then that ended up contributing less than 3% to us (if you could separate them and measure the contribution). A few consecutive instances of more people coming over than already living there can dilute genes by a factor of 33 easily. E.g. if you do five dilutions by a factor of 2, you dilute down to 0.03125.

edit: that is not to say there necessarily was a lot of interbreeding, but that genes can get diluted quite easily when a population is living in very harsh conditions with a low population growth rate, and people keep coming over from more populous regions.
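The dilution arithmetic above can be sketched directly (a toy model, assuming five successive events where the newcomers equal the existing population):

```python
# Each generation where as many newcomers arrive as there are locals
# halves the share of the original genes in the mixed population.

original_share = 1.0
for _ in range(5):       # five successive 1:1 dilution events
    original_share /= 2

print(original_share)    # 0.03125, i.e. ~3%
```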
 

Dmytry

Ars Legatus Legionis
11,443
Speaking of weird shower thoughts, I was thinking the other day, regarding anisotropies in the cosmic microwave background: if the laws of physics were exactly rotationally symmetric, and we weren't starting with an asymmetrical starting state, that wouldn't be reconcilable with me personally observing that the universe is not the same in all directions at any scale, be it the cosmic microwave background not being uniform or me seeing a bright rectangle with a bunch of text in front of me. So one could conclude something about the fundamental laws (that there's some random element to them, or that they're not symmetric) from just one glance at the world. Which is of course quite trivial, but it sort of made me think of the ancient Greeks and their approach to physics. They concluded there should be atoms of some kind from how salt water dries into crystals.
 

Dmytry

Ars Legatus Legionis
11,443
Interactive Civilian said:

1995 was worlds different from today, and I'd say it's at least as different from today as 1995 was different from 1975 (though I can't speak to 1975 from direct experience as I was only born in 1978).

Am I missing something?
I dunno how to even compare without too much subjectivity. Space-wise there was big progress from 1955 to 1975, with an inarguably less impressive 1975 to 1995, and 1995 to 2015 is, well, still the same stuff - space telescopes, rovers (Curiosity looks a lot like Lunokhod, although the durability is much better), automated landers taking a couple of pictures from a hostile place and going silent (Venus and Titan). The worldwide space effort basically got cut in half for at least 10 years, though, which had nothing to do with technology.

Electronics-wise, you're just comparing refinement of silicon to refinement of silicon, and there's only a minor slowdown in the last 5..10 years or so. It's something where we've got to wait and see.

Software wise, when it comes to inner workings there's definitely nothing really interesting going on. Fundamental stuff - 1975 to 1995, some (but not as much as previous 20 years), 1995 to today, almost nothing. Still programming in C++ and languages that differ minimally from C++. Other languages still promise and completely fail to deliver 10x gains in productivity. If there will ever be another leap similar to assembly language to high level, it's waaay off to when compiler-ish software is picking solutions from immense number of alternatives instead of doing precisely what it is told to do. (As it is now there's diminishing returns on reducing the most tedious parts of work).
 

Dmytry

Ars Legatus Legionis
11,443
MilleniX said:
Dmytry said:
Software wise, when it comes to inner workings there's definitely nothing really interesting going on. Fundamental stuff - 1975 to 1995, some (but not as much as previous 20 years), 1995 to today, almost nothing. Still programming in C++ and languages that differ minimally from C++. Other languages still promise and completely fail to deliver 10x gains in productivity. If there will ever be another leap similar to assembly language to high level, it's waaay off to when compiler-ish software is picking solutions from immense number of alternatives instead of doing precisely what it is told to do. (As it is now there's diminishing returns on reducing the most tedious parts of work).
Yeah, no.

Did you forget the several entire overhauls of 'how web sites get built'? There's been a pretty steady progression, roughly CGI in C, Perl, PHP, JSP & ASP, MVC frameworks (Ruby on Rails, etc), JavaScript doing more than image roll-overs, and AJAX-y responsive/incremental front-ends?
And none of that is any sort of fundamental difference compared to having websites at all. I actually work making HTML5 websites now, mostly with node.js. So what.
How about the plummeting cost and soaring availability of general-purpose single-board computers like Raspberry Pi?
Not software.
The need for security in a networked world, and the tooling (though painfully under-used) to make it a reality?
If anything, that's a great example of how little it has changed - people knew buffer overruns were bad and executing untrusted code was bad, and that's still the number one problem.
The drastically increasing spread of explicit parallel programming as a means to attain performance?
Supercomputers have been parallel almost forever. Hell, even before digital computers, with a lot of people in a room and a lot of tabulators doing nuke calculations. It is still the same split-the-task-into-subtasks, run-the-subtasks-in-parallel, combine model that offers the only remotely good way of doing things in parallel.
And if you really particularly want to see what people are doing with compiler technology, look at Haskell and Scala, and embedded DSLs written in each. The source code looks nothing like what the processor will ultimately run.
Haskell: first version in 1990, still almost entirely irrelevant, the community still very vocal.
Finally, there is a huge renaissance in new programming language development. You literally can't open the Programming sub-reddit without tripping over a new language post.
And none of them is making a change even remotely similar in impact to assembly->C. The Haskell guys still say they are, but obviously, given that adoption is not in any way similar to the transition from assembly to C or other high-level languages, they're not.
 

Dmytry

Ars Legatus Legionis
11,443
UserJoe said:
Dmytry said:
Software wise, when it comes to inner workings there's definitely nothing really interesting going on. Fundamental stuff - 1975 to 1995, some (but not as much as previous 20 years), 1995 to today, almost nothing. Still programming in C++ and languages that differ minimally from C++. Other languages still promise and completely fail to deliver 10x gains in productivity. If there will ever be another leap similar to assembly language to high level, it's waaay off to when compiler-ish software is picking solutions from immense number of alternatives instead of doing precisely what it is told to do. (As it is now there's diminishing returns on reducing the most tedious parts of work).
The one area where I would disagree is programming and analysis environments like Mathematica for scientific/engineering programming and problem solving. There are huge gains in using an environment like that compared to programming in a standard language because so much is built in and the mathematical objects can be manipulated at a such a high level.
I guess. Still, I'd put that in the same bucket as "there are more libraries", simply because people have been writing libraries during that time.

I think it is actually quite remarkable how little programming has changed - security has the same issues, the widely used programming languages are not very different (C# doesn't do anything radically new), etc. I guess it's because the field matured to some extent, so no assembly -> C or even C -> Smalltalk changes happened. For the end user there were some improvements in usability, but then it pretty much hit a brick wall in the last 5..10 years, where the changes are just made for the sake of change, and people don't want to upgrade, don't like new things, etc.

Maybe at some point in the future there will be huge changes again, when an under-specified program can be built (with the "compiler" dipping into this enormous combinatorial space of possible solutions). E.g. where you can specify that the array must become sorted, and the compiler invents quicksort, or another sorting algorithm as appropriate, on its own. Not cheats like the Haskell "quicksort" example, which is O(n^2) on an already-sorted array, and far more complicated and more verbose than even C if you make it not suck.
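The quadratic behaviour of the famous two-line "quicksort" is easy to demonstrate. Here is a sketch of the same first-element-pivot scheme in Python (an illustration of the pattern, not the Haskell code itself), counting partition tests:

```python
def quicksort(xs, counter):
    """Naive first-element-pivot quicksort, like the famous Haskell one-liner."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # one partition test per remaining element
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller, counter) + [pivot] + quicksort(larger, counter)

# On an already-sorted input every partition is maximally lopsided:
n = 100
c = [0]
assert quicksort(list(range(n)), c) == list(range(n))
print(c[0])  # n*(n-1)/2 = 4950 partition tests: the quadratic worst case
```

A serious in-place quicksort avoids this with pivot selection (median-of-three, randomization), which is exactly the part the elegant one-liner leaves out.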
 

Dmytry

Ars Legatus Legionis
11,443
It just doesn't seem like nearly as big of a deal as the changes from 1975 to 1995 . It's even less impressive if we focus on innovation rather than mass adoption. It happens when some field becomes "mature", it's harder to improve things until there's another huge breakthrough.

It's interesting to speculate what it may be like after the next breakthrough. What I picture is, for example you specify that array must be re-arranged so that each next array member must be greater than the previous, and the "compiler" sits and works on it - for however long you want it to - and comes up with increasingly refined sorting algorithms (which it remembers so next time you do sorting it isn't re-inventing the wheel). That would be a huge breakthrough if it would be generally applicable, to problems that are genuinely difficult to solve (unlike sorting). The next breakthrough after that, you could be somewhat vague about the description and the most-probably-acceptable solution is found. (It already exists in form of recruiting a human to write a program and telling him what you want, so we know it is physically possible to do).
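As a toy version of that "spec in, algorithm out" idea, here is a deliberately dumb sketch (all names made up for illustration): the specification only says each member must be greater than the previous, and the "compiler" satisfies it by brute-force search over the n! arrangements - which is exactly why discovering something like quicksort, rather than enumerating, is the hard part:

```python
from itertools import permutations

def spec(xs):
    """The full specification: each member greater than the previous."""
    return all(a < b for a, b in zip(xs, xs[1:]))

def synthesize_sort(xs):
    """Dumbest possible 'compiler': search the n! candidate arrangements
    for one that satisfies the spec. Correct, but hopeless beyond tiny n
    (and the strict spec rejects inputs with duplicates)."""
    for candidate in permutations(xs):
        if spec(candidate):
            return list(candidate)
    raise ValueError("no arrangement satisfies the spec")

print(synthesize_sort([3, 1, 4, 2]))  # [1, 2, 3, 4], after up to 4! tries
```

The refinement step imagined above would be the synthesizer noticing structure in the spec and collapsing this factorial search into an n log n procedure.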
 

Dmytry

Ars Legatus Legionis
11,443
LordFrith said:
The next shift will be when we see lots of commercial GPGPU deep-learning and "compile-to-hardware" systems.

However, you could look at the prevalence of industry going to remote-IT infrastructure as a pretty large change as well.

How about the fact we have applications that can translate language in speech, or even cooler, overlay a visual translation?
Ghmm, last time I tried automatic captions they sucked big time. As for remote infrastructure and the cloud, I think they used to be called a "mainframe"... :D
 

Dmytry

Ars Legatus Legionis
11,443
MilleniX said:
Dmytry said:
It just doesn't seem like nearly as big of a deal as the changes from 1975 to 1995 . It's even less impressive if we focus on innovation rather than mass adoption. It happens when some field becomes "mature", it's harder to improve things until there's another huge breakthrough.

It's interesting to speculate what it may be like after the next breakthrough. What I picture is, for example you specify that array must be re-arranged so that each next array member must be greater than the previous, and the "compiler" sits and works on it - for however long you want it to - and comes up with increasingly refined sorting algorithms (which it remembers so next time you do sorting it isn't re-inventing the wheel). That would be a huge breakthrough if it would be generally applicable, to problems that are genuinely difficult to solve (unlike sorting). The next breakthrough after that, you could be somewhat vague about the description and the most-probably-acceptable solution is found. (It already exists in form of recruiting a human to write a program and telling him what you want, so we know it is physically possible to do).
You may find the papers 1, 2 on Macho interesting.
Yeah, I read of it... the problem is that there is a huge solution space and we just don't know how to deal with that in general. You can have a formal description of what it means for an array to be sorted, and yet we are unable to get from there to quicksort or another non-shitty sorting algorithm (there are a great many possible algorithms). It is also possible for a human to understand the specifications perfectly but be unable to implement them, unless the specifications spell out what to do in great detail.

edit: I mean, the main reason why it is easier to tell a human what to do is that the human will perform a what-->how conversion. You tell the human you want books arranged alphabetically on the shelf, and he'll come up with insertion sort. If you had to explain it like: pick a book, go through the books on the shelf until you find a subsequent book, insert the book before that - and by the way, "subsequent book" means go through the letters in the title, and by the way do this when you fill one shelf... you're just doing C programming, but more verbose and error-prone.
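The what-->how conversion the human performs for "books in alphabetical order" is essentially insertion sort; a minimal sketch (function name hypothetical):

```python
def shelve_books(titles):
    """Insertion sort, the algorithm a human invents for shelving books:
    take each book and walk along the shelf until you find its place."""
    shelf = []
    for title in titles:
        i = 0
        while i < len(shelf) and shelf[i] < title:  # find the first "subsequent" book
            i += 1
        shelf.insert(i, title)                      # insert before it
    return shelf

print(shelve_books(["Ulysses", "Dune", "Solaris"]))  # ['Dune', 'Solaris', 'Ulysses']
```

Note that nothing in "arrange these alphabetically" mentions a scan or an insert position - the algorithm is invented by the listener, which is the whole point.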

It's not merely that the human understands what you wanted, it's that the human can come up with an algorithm for attaining that state. The computer can presently neither understand what you want nor come up with an algorithm for attaining it (this deficiency can be cheated around in edge cases because common operations have human-written code that you can leverage, hence Macho. You can match the words in the description to comments and names in human-written sources, and with a bit of luck you'll find something relevant without actually making a dent in any hard problems).
 

Dmytry

Ars Legatus Legionis
11,443
Something I was thinking about lately: C. elegans simulations. We know it has 302 neurons and we know how those neurons connect, but we can't get it to move around like the real thing in a simulator, because we don't have synaptic data, and it's not very useful to know how everything is connected if we don't know what happens at the junctions. We need better microscopy.

Meanwhile, when it comes to simulating mammalian brains, about which we know even less, we somehow have huge successes simulating a 100x larger neural network - a piece of a rat's brain. I think it's only successful because we don't know enough about what a tiny piece of mammalian cortex must actually do to tell apart a successful simulation from a failure. As long as it oscillates, it's not obviously a failure. We know what C. elegans does, so we know when we fail; we don't know what a 30k-neuron piece of a rat's brain must be doing for the rat to be a rat. So if it's oscillating, who's to say it's a failure? I'm not sure, but I think we don't even know if the real thing would oscillate with the same "boundary conditions".
 

Dmytry

Ars Legatus Legionis
11,443
WM314 said:
What're you thinking about by "synaptic data"? Do we not have electrophysiological data from C elegans synapses?
Apparently, it's not even remotely good enough to simulate the worm and replicate worm's behaviours in the simulator. I mean, just a quick search, in 2009 they were just about determining if the response is graded or all-or-nothing at the neuromuscular junctions. It's exactly like having robot schematics where you don't know what's a couple resistors and what's a transistor, you only see the wires, and you're just beginning to determine that, yes, this one thing isn't a tunnel diode.

edit: I think to get a neuron network to do what ever makes c elegans be c elegans (or what makes a rat be a rat) you need quite precise quantitative information about how synapses behave. And then in dendrites themselves you have vesicles being transported in progress, you have mitochondria going into a dendrite who knows why and doing who knows what, affecting conductivity...
 

Dmytry

Ars Legatus Legionis
11,443
shread said:
Dmytry said:
WM314 said:
What're you thinking about by "synaptic data"? Do we not have electrophysiological data from C elegans synapses?
Apparently, it's not even remotely good enough to simulate the worm and replicate worm's behaviours in the simulator. I mean, just a quick search, in 2009 they were just about determining if the response is graded or all-or-nothing at the neuromuscular junctions. It's exactly like having robot schematics where you don't know what's a couple resistors and what's a transistor, you only see the wires, and you're just beginning to determine that, yes, this one thing isn't a tunnel diode.

edit: I think to get a neuron network to do what ever makes c elegans be c elegans (or what makes a rat be a rat) you need quite precise quantitative information about how synapses behave. And then in dendrites themselves you have vesicles being transported in progress, you have mitochondria going into a dendrite who knows why and doing who knows what, affecting conductivity...
I would guess the mitochondria are simply ATP factories to power neuron functions. The complexity would be in the various types of synapses with attendant multiplexing of signal transmission and ability to change synapse type and number, not to mention growing to establish novel synaptic connections with other neurons.
I mean, it's doing who knows what to signal propagation through the dendrite. It's occluding most of the dendrite. I briefly worked on software for visualizing those things, had some serial block face microscopy datasets to play with, you could actually fly down a dendrite or an axon and see a mitochondrion...

Simulating this of course wouldn't matter for something like the fake neurons used in various AI projects (which don't really work very well), but when you're trying to simulate an actual organism you probably also have to simulate this, as well as chemical diffusion and vesicle transport, because it would affect the electrical parameters, and there would be a lot of functionality that evolved around the energy and transport limitations. Of course, as it is, we don't even know the electrical properties of synapses or how those are modified as the animal learns/remembers, never mind the growth and pruning and the rest.

I don't expect a truly successful simulation of even a small piece of mammalian brain in my lifetime (unless enabled by some highly unexpected and extremely disruptive breakthrough in something else). And obviously all that chemical complexity will be incredibly expensive to simulate per neuron - far more so than the various "estimates" by the likes of Kurzweil, who just equate a real neuron from the brain to the extremely simplistic neuron model from machine learning. Seriously, they just need to look at the behaviour of, say, an amoeba, which would take an entire microcontroller with at least a few kilobytes of RAM to merely imitate (let alone simulate; that'd take a serious CPU). And that's a single cell which is not even specialized for "thinking" the way a neuron is.
 

Dmytry

Ars Legatus Legionis
11,443
I think you may be able to get yourself going with a controlled leak, just not like in the movie. You'd need to bend so as to shift your centre of mass, and hold your hand as close to the centre of mass as possible. That decreases the torque (which is proportional to the closest distance between the line of thrust and the centre of mass). If you got yourself spinning fast it'd probably be all over, because stopping the spin would require much finer control.

The delta-V wouldn't be any good either: 340 m/s exhaust velocity in the ideal case, probably more like 100 m/s in practice.
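Back-of-envelope with the rocket equation (all the figures below are my own assumptions, not from the film: ~120 kg for astronaut plus suit, ~0.1 kg of air available to vent, and the ideal 340 m/s exhaust velocity):

```python
import math

# Tsiolkovsky: dv = ve * ln(m0 / m1). All numbers are assumptions.
ve = 340.0    # m/s, roughly sonic exhaust - the ideal case from the post
m0 = 120.0    # kg, astronaut + suit (assumed)
m_air = 0.1   # kg of vented air (assumed: ~80 litres at 1 bar)
dv = ve * math.log(m0 / (m0 - m_air))
# dv comes out well under 1 m/s: enough to start a slow drift, not to fly around
```

So even before nozzle losses drop the effective exhaust velocity toward 100 m/s, a glove leak buys you fractions of a metre per second.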
 

Dmytry

Ars Legatus Legionis
11,443
Yeah, my understanding was that at 0.21 bar of pure oxygen you maybe just get slow chronic lung problems if you keep breathing it all day every day for a while (although I thought it was safe enough? Didn't some spacecraft use pure oxygen at 0.21-ish bars? edit: Skylab, that was it). Whereas under high pressure it's not what you'd expect: it's CNS toxicity. You need a higher-than-normal partial pressure to get CNS toxicity, of course, because it comes from oxygen that's literally dissolved in the blood.

I had a random thought on another topic (relating to partial pressure of oxygen): humans could live in conditions where it would be very difficult to start a fire. In 5% oxygen at 4 bar, most things won't burn (because the nitrogen takes all the heat away). Aliens on such a planet might have to evolve further before building a technological civilization (ditto for aliens that don't start out with highly dexterous hands). So if there are aliens, there would be aliens that are naturally smarter than us and aliens that are naturally dumber than us.
Also, our variation in intelligence is not something universal either. Aliens may have greater or lower variation in intelligence (imagine ones that can only build, say, a nuke when most of them are nearly as smart as their top scientists), or intelligence that can be boosted immensely by the right growth hormone early on (at an energy trade-off).

That's why I don't really buy the narrative where a lot of planets reach our technological level but then most kill themselves. That's the universe of old scifi, where all aliens have to be highly anthropomorphic because they're humans in costumes. There's also no reason why, in a multi-billion-year lineage, the bottleneck would sit at some specific spot of your choosing. More probably it's just a bunch of moderately low probabilities: things never going too wrong over a multi-billion-year timespan.
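The 5%-oxygen-at-4-bar arithmetic above is just Dalton's law; a quick sketch (the planet's parameters are the invented example from the post):

```python
# Dalton's law: partial pressure = mole fraction * total pressure.
total_pressure = 4.0   # bar, the assumed surface pressure from the post
o2_fraction = 0.05     # 5% oxygen by moles
p_o2 = o2_fraction * total_pressure   # 0.2 bar - close to Earth's ~0.21

# Breathing feels roughly normal, but each O2 molecule arrives with
# ~19 inert molecules that soak up combustion heat, suppressing fire.
inert_per_o2 = (1.0 - o2_fraction) / o2_fraction
```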
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=30385289#p30385289:1k26h91a said:
Barmaglot[/url]":1k26h91a]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30382193#p30382193:1k26h91a said:
Dmytry[/url]":1k26h91a]Didn't some spacecraft use pure oxygen at 0.21-ish bars? edit: skylab that was it

Not just Skylab; all NASA manned spacecraft up to and including Apollo used a pure oxygen atmosphere. Among other things, this reduced spacecraft weight and removed the need for pre-breathing in EVAs. However, they also used pressure slightly above atmospheric on pure oxygen (!) during launch in order to prevent atmospheric nitrogen from getting in, which eventually caught up to them and killed Grissom, White and Chaffee on Apollo 1, after which they switched to a nitrox mixture during launch procedures and a gradual shift to pure oxygen at more normal partial pressure once in orbit. This also caused problems in Apollo-Soyuz Test Project, as the Soyuz craft used regular air at sea level pressure, as opposed to Apollo's pure oxygen at 0.35 bar.
Yeah. I didn't know that so many US launches used pure oxygen. It seems to be a common misconception that things burn the same if the partial pressure is the same; in fact things burn better without having to heat up nitrogen, so even at 0.21 bar of pure oxygen there's a greater risk of fire than in air.

So, for a Mars habitat, would pure oxygen at reduced pressure be medically safe long term? The decompression every time you go out would really be a problem, and 1 bar of atmospheric air inside a space suit wouldn't work well at all... those things are ****ng stiff as it is.

Sidenote: it rather annoys me how there's this stereotype that the Russian space programme is reckless.

On a completely offtopic note: does anyone know a scifi story where people or aliens live on a planet with a lower oxygen concentration (under higher pressure, maybe) so things don't burn? Or did I just come up with a new premise? Say, make it 4 bars with 5% oxygen, with everyone having slight nitrogen narcosis all the time just for shits and giggles.
Going back to The Martian, while the use of oxyliquit with sugar fuel was sheer brilliance (fun fact: there were instances of oxyliquit with sawdust fuel being used to fill aerial bombs during the Siege of Leningrad), there is absolutely no way they would've achieved the desired effect.
Yeah, and it's realistic as something they'd come up with - any scientist dealing with cryogenic gases would probably know the risk of a liquid-oxygen-related kaboom.
The velocity of the escaping air jet wouldn't be anywhere close to a 'normal' rocket engine, and in order to generate 40m/s of delta-V, the air would've needed to mass a double digit percentage of the entire craft - clearly implausible for anything less than a blimp.
Yeah, I noticed that too. The exhaust velocity would be ~340m/s at most, and without a proper nozzle it'd go in all directions.
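You can check the "double digit percentage" claim with the rocket equation, taking the 40 m/s of delta-V quoted above and the ~340 m/s ideal exhaust velocity:

```python
import math

dv = 40.0    # m/s, the delta-V quoted above
ve = 340.0   # m/s, ideal (roughly sonic) exhaust velocity for vented air
# Tsiolkovsky rearranged: propellant mass fraction = 1 - exp(-dv/ve)
mass_fraction = 1.0 - math.exp(-dv / ve)   # ~0.11, i.e. ~11% of the craft
```

And that's with a perfect nozzle; with air escaping in all directions it would be several times worse.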
Also, Pathfinder getting buried in the sand like it was shown is, while maybe not completely impossible, then at least very highly improbable - Martian dust storms don't behave like that. IIRC, this is something specific to the movie, not to the book.

Lastly, and this is the biggest one of all - why, oh why did they completely gut the character of Annie Montrose? They even went as far as casting Kristen Wiig for the role, and then they just had her stand in the background, which was positively criminal. She had some of the best lines in the book - (re: Council of Elrond) 'Jesus, none of you got laid in high school, did you?', and, 'I need something, Venkat. You've been in contact for twenty-four hours and the media is going ape shit ... The press is crawling down my throat for this. And up my ass. Both directions, Venkat! They're gonna meet in the middle!'. Maybe in the extended cut?
Hmm, this makes me want to read the book.
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=30405155#p30405155:xg4v07a2 said:
truth is life[/url]":xg4v07a2]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30404411#p30404411:xg4v07a2 said:
Dmytry[/url]":xg4v07a2]Sidenote: it rather annoys me how there's this stereotype that Russian space programme is reckless.
They worked really hard to earn that reputation, though. Between the Voskhods, a bunch of incidents on the Vostoks, Komarov (and, for that matter, half of the early Soyuz flights),
Well, the US also worked really hard to earn that reputation, with two shuttles lost, both losses predictable by mere extrapolation of anomalies observed on earlier flights. Early spaceflight is inherently reckless to some extent.
the early Proton, practically the whole of their planetary program except for the Venus stuff for some reason...there was a lot of slinging shit around to see if it stuck regardless of whether it had been properly tested or proven, i.e. recklessness.
Better to have 3 probes with a 50% success rate than 1 with 90%. Unmanned flights demand a very different approach (SpaceX's re-usability is a step in the right direction there).
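A rough sketch of that probe-count arithmetic (using the probabilities quoted above; per-probe costs ignored):

```python
# Three cheap probes at 50% each vs one expensive probe at 90%.
p_cheap, n = 0.5, 3
p_good = 0.9
at_least_one = 1.0 - (1.0 - p_cheap) ** n   # 0.875
expected_successes = n * p_cheap            # 1.5 vs 0.9 for the single probe
```

The chance of getting at least one success is about the same (0.875 vs 0.9), but the cheap fleet's expected number of successful probes is much higher, which is what matters if the probes carry different instruments or visit different sites.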
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=30406037#p30406037:jokel8mx said:
truth is life[/url]":jokel8mx]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30405227#p30405227:jokel8mx said:
Dmytry[/url]":jokel8mx]
Well, US also worked really hard to earn that reputation, with 2 shuttles lost, both losses predictable with mere extrapolation of anomalies observed on earlier flights. Early spaceflight is inherently reckless to some extent.
I knew you were going to bring up the Shuttle, but the difference between the attitude and situations of NASA in the 1980s and 2000s relative to the Soviet Union in the 1960s was extreme. The Russians were flying vehicles that were known to have serious and uncorrected flaws
Shuttle had serious and uncorrected flaws.
, not merely flaws that seemed to be under control.
Ahh, so deceiving oneself is the key here. Let's recap the Shuttle again. Challenger: o-ring failure due to the combination of ignoring erosion on o-rings in previous flights (as much as 2/3 of the way through) where no erosion was supposed to happen at all, launching at a much colder temperature than the engineers said was safe, and reusing solid rocket booster casings (smashed against the ocean and re-straightened). Columbia: there were previous instances of this damage, of uncontrolled magnitude (you can't control how big the chunks falling off are).

To top it off, pre-Challenger they were claiming 1 in 100 000 failure rate.

Having to deceive oneself before doing something highly reckless is not safety, it's just a slightly different form of recklessness.
For example, prior to Soyuz 1 the Soyuz vehicle had two test flights (it would have had three, but a flaw in the design of the emergency escape system had caused the second test flight to be destroyed prior to flight). From Siddiqi, during these flights the vehicle suffered numerous problems with its attitude control systems that caused it to repeatedly exhaust all of its propellant attempting to orient, without actually successfully doing so. Only through great cleverness and the use of backup systems (and, in the case of the first test flight, sheer bloody-mindedness with repeatedly firing the main engine despite that rocket cutting off after just a few seconds of thrust each time) was either returned to Earth. In the first case, the vehicle was destroyed by an onboard booby trap (which, fortunately, was never installed on a crewed vehicle despite some KGB pressure); in the second, it turned out that there was a hole in the heat shield that, among other issues, would have caused a cabin depressurization. In drop tests, the parachute system had repeatedly failed, particularly significantly considering how Komarov died.

Does this sound like a vehicle which ought to be launched with a human onboard? Yet that's what they did at the very next flight, with no attempts made to have at least one wholly successful test flight beforehand, and it killed Komarov. And they never did fully debug the 7K-OK; it suffered more or less serious faults on practically every mission it undertook, culminating in the tragedy of Soyuz 11. To their credit, though, OKB-1 implemented significant redesigns afterwards (actually the process started earlier, but the redesigned craft only entered service afterwards) and implemented much better quality control procedures so that later versions were far better behaved, but their attitude towards the first flight can hardly be defended, and is far worse than anything NASA ever did except Apollo 1.
I'd love to see you rail about the Shuttle if it were Russian. No launch abort system (and that's in 1981), not switching to something safer for, what, 30 years, and taking entirely unnecessary risks due to a politically motivated "space plane" concept. It killed 14, you know.
Early spaceflight is inherently dangerous, but it is not inherently reckless, and the Soviets were much worse than NASA about heedlessly taking on unnecessary risks.
Yeah, that must be why 4 cosmonauts got killed, compared to 14 astronauts (excluding the Apollo 1 fire, which killed 3 more).

The thing is, doing a bit better on the first flight won't save lives if you then just keep flying with the flaws until someone dies. Any minor reduction in risk is easily overcome by risk-compensation behaviours. Ignoring a single uncontrolled anomaly for just a few years kills more people than launching some untested "we promise it'll work this time" rocket with a person inside.

At the end of the day, the cause of death is the same for astronauts and cosmonauts: stupid political pressure, misinterpreted earlier successes, and flying flawed vehicles until someone dies (which kills everyone inside).

edit: as for the planetary programme, that has most to do with electronics in extreme cold.
 

Dmytry

Ars Legatus Legionis
11,443
I don't think the planetary/robotic programme was much worse overall (allowing for some funding disparity). There's the first entry into the atmosphere of another planet, the first soft landing on another planet, the first images transmitted from the surface of another planet, balloon probes. Why Mars didn't go as well, I dunno; the sample sizes get small once you start focussing on just Mars or just Venus.

Also, there was nothing impossible about automated sample return from Mars in the 1970s. It's a darned shame nobody did it.
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=30444307#p30444307:3mivcrqr said:
truth is life[/url]":3mivcrqr]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30413883#p30413883:3mivcrqr said:
Dmytry[/url]":3mivcrqr]I don't think the planetary/robotic programme was much worse overall (allowing for some funding disparity). There's the first entry into atmosphere of another planet, first soft landing on another planet, first to transmit images from the surface of another planet, balloon probes. Why Mars didn't do as well, I dunno, the sample sizes get small once you start focussing on just Mars or just Venus.
Not so small, actually; the Soviets tried to launch a similar number of spacecraft to Mars as they did to Venus, and in both cases they account for a substantial fraction of all of the spacecraft ever sent or attempted to be sent to either planet (more so in the case of Venus, of course, but the Soviets are responsible for around half of all Mars missions ever attempted). Many of their Mars spacecraft were doomed by launch vehicle failures, but the others invariably suffered serious problems before succeeding. Even the tiny number of "successful" missions they had were only "successful" in that they didn't fail before they actually got to Mars. For the most part, based on my readings, the problem seems to have been poor quality control in the spacecraft and an acceptance of a relatively low probability of success. Since Venera program missions were very much shorter than Mars program missions (for instance, they first attempted Venus orbiters for Venera 15/16) this was less of a problem (though still a problem; many early Venera missions suffered failures similar to those that plagued the Mars program), so they were more successful.

I think the Russians could most reasonably have attempted Mars missions in the late 1970s/early 1980s timeframe; at this point, their launchers were reliable enough that they wouldn't just explode, they weren't suffering from the effects of the Soviet collapse yet, and they had developed technology that was reliable enough to actually last long enough to reach Mars and operate for months or years afterwards. Not coincidentally, this was also the time period when the Venera program was most successful, as a proportion of missions launched.
I mean, even though the sample size may seem big, the data points are not at all statistically independent. A couple of issues with alloys or electronics in the extreme cold and they all fail for the same reason; a Mars mission is a single wrong alloy away from failure. Keep in mind too that the launch vehicles required testing, and a lot of payloads just got piggy-backed onto a test.
But they were obsessed with trying to launch a sample return mission which was far too complex for the technology they had, so they didn't take advantage of it.
The problems of the earlier missions weren't with complexity, they were with material degradation in transit - e.g. a wrong alloy recrystallizing. If someone has one probe that lands and sends back video, and another that does something highly complicated, the latter doesn't necessarily use any electronic components or alloys that aren't present in the former. So the former doesn't have a dramatically higher chance of success than the latter, even though it may be dramatically simpler.

Simplicity would be mostly a cost-saving measure, which is less important when engineering is a: cheap and b: you have trouble finding other things for your educated people to do.
[url=http://meincmagazine.com/civis/viewtopic.php?p=30413883#p30413883:3mivcrqr said:
Dmytry[/url]":3mivcrqr]Also, nothing impossible about automated sample return from Mars in the 1970s. It's a darned shame nobody did it.
Not impossible, maybe, but the Russian scheme was far too ambitious for the state of their program. The contemporary JPL proposal was to use a modified Viking lander carrying a large booster stage to lift a return capsule into space and then back to Earth, which meant that it would be taking advantage of the already-developed Viking entry profile, the Viking heat shield design, Viking descent technology, and much Viking hardware, which of course had been shown to work pretty well. By contrast, Lavochkin wanted to develop an entirely new spacecraft that would be launched in multiple parts atop a Proton, dock together, and fly to Mars, then carry out the sample return mission. This introduced a number of points where things could go wrong while not at all benefiting from earlier missions demonstrating the necessary technology for critical mission phases like entry, descent, and landing. The sensible thing to do would have been to look at Mars 2/3 and Mars 4/5/6/7, try to identify where things went wrong, fix them, and launch new missions to show that they could, in fact, land things on Mars and put things in orbit around Mars. Then they could start working on a sample return mission, to follow on from those. The sample return mission was a perfectly fine long-term goal, but it made no sense at all as the next mission for them to do, which was precisely what they decided to make it (then the Igla rendezvous system ran into trouble and it was cancelled).
Well, that's how it would've served a long-term goal had the country not collapsed. The "automatically dock together" part probably had some military purpose to begin with anyway; this just screams "trying to piggy-back some solid science onto tests and demonstrations of the 'steal a satellite' tech we've been making".

The Space Shuttle also looks like a "steal a satellite for ground analysis" vehicle more than anything else. I mean, why else would you even want that much space-to-Earth cargo capacity?
I don't really think it's a shame that nobody did sample return in the 1970s. If you look at most planetary missions, particularly in the United States, they are usually the products of decades of gestation and development, where mission advocates and customers (engineers and scientists, in other words) negotiate with policymakers and funding agencies to work out precisely what is desired from a scientific perspective and what can be afforded from a budgetary perspective. From a scientific perspective, the disadvantage of the 1970s proposals, at least in the United States, was that they would only provide a so-called "grab sample," that is whatever rocks or regolith happened to be around the lander when it touched down, while landing at that time was very imprecise. Together, this meant that you would probably get relatively boring material, not really worth the cost of the mission
On Earth you can find signs of life no matter how bad a sample you pick. If there's life, we can expect spores, and spores get everywhere. A lot of science was done on those little samples.
compared to other possibilities. That's why all later JPL proposals have prominently included large rovers meant to collect samples from a wide area and bring them back to the launcher (in turn, that's driven multi-launch designs to reduce individual mission launch mass and spread out mission costs, which is why Mars 2020 is going to be a sample-collection rover with no sample-return rocket...). There are Mars opportunities every 26 months, so you can go do it whenever you happen to get all your ducks in a row. That's taken forty years, more or less, but that's fairly typical for space probes...the first Mercury orbiter proposals date back to the early 1960s, for instance...
Moon proposals didn't take that long.
The big loss, to my mind, were the many proposals around taking advantage of the "Grand Tour" opportunity of the late 1970s. In reality the Voyagers flew this, but there were proposals for many more spacecraft to take advantage of the gravitational assist opportunities of the time for various missions, like entry probes to several of the giants (not just Jupiter) and outer planet orbiters. Similarly there was the failure of the United States to do anything about Halley's return; there were some really beautiful designs out there that would have blown any of the spacecraft that actually flew out of the water. These hurt more because unlike Mars sample return you can't just do them (more or less, given spacecraft development times) any time, you have that one specific opportunity and then they're gone basically forever.
Yeah, that's a good point, although some info about Mars could be closed off forever if some Earth extremophile carried along on a spacecraft manages to take hold there.
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=30446287#p30446287:2p3quqp0 said:
truth is life[/url]":2p3quqp0] The simple spacecraft that just sends back videos has many fewer points that can suffer failure
But the points are not statistically independent. If a complex device fails from, e.g., some type of transistor failing, a simple device which nonetheless employs the same transistor will be far more likely to fail than a naive "points of failure" analysis would indicate. It's true that the complex device's failure risk would be higher, but not as much higher as you'd expect without accounting for the very strong correlation between the points of failure.
[url=http://meincmagazine.com/civis/viewtopic.php?p=30445527#p30445527:2p3quqp0 said:
Dmytry[/url]":2p3quqp0]
Well, that's how it would've served a long term goal had the country not collapsed.
No, what happened was that Lavochkin decided around 1974 that this was the next Mars mission they were going to try to do, then they abandoned it in 1978 because they couldn't get the automatic rendezvous system to work reliably (on Soyuz/Almaz), long before the collapse. That had nothing to do with it, they would have needed clairvoyance to predict it would be an issue.
[/quote]
What I'm saying is that if not for the collapse, which they couldn't have predicted, that early design could conceivably have been reused later, making it just early planning.

In the short term, i.e. the 1970s, they should have been taking the lessons learned from previous probes and developing a new generation of relatively simple spacecraft that showed that they could actually undertake many of the necessary steps for the sample-return mission (conveniently, the United States was just at this point deciding it didn't care about Mars exploration for a while, so even simple missions would look good). Then they could develop those spacecraft into a sample return vehicle for the 1990s. Now, of course in reality the country didn't exist any more then, but they didn't know that. In 1974 it was completely reasonable to expect the Soviet Union to be around for a lot longer, and therefore for a long-range plan with a sample return in the late 1980s or 1990s to be feasible.
Yeah, but that's what they would've fallen back to when Igla wasn't working. Everyone reuses old plans all the time, even across much greater technological changes.
[url=http://meincmagazine.com/civis/viewtopic.php?p=30445527#p30445527:2p3quqp0 said:
Dmytry[/url]":2p3quqp0]The "automatically dock together" part probably had some military purpose to begin with anyway.
The system they were planning to use, Igla, was also the one used by the Soyuz spacecraft used to supply the Almaz stations. So I suppose you could technically say that it did! But they had already developed it for that, going to Mars wasn't a roundabout (and very silly) way of developing a military technology. I mean, this is the Soviet space program! It was run by the military. If they wanted some technology developed, they just told people to develop it, and they did.
No, I'm saying the military tech was a roundabout way to do science missions. I.e. when you want to do something cool with Mars, you look at what the military is making that you can use.

[url=http://meincmagazine.com/civis/viewtopic.php?p=30445527#p30445527:2p3quqp0 said:
Dmytry[/url]":2p3quqp0]
On Earth you can find signs of life no matter how bad of a sample you pick. If there's some life, we can expect spores, and spores will get everywhere. A lot of science was done on those little samples.
Yes, which also reduced demand for just going and grabbing whatever you come across for billions of dollars (that you could use to build many other spacecraft that could do useful science).
Those rocks have been in space for millions of years, if not billions, and have been sitting in dirt on Earth for a long time too.

Also, they weren't expecting life, due to the Viking results (I don't think anyone seriously expects to find life in returned samples; if life exists, it must be very rare and located in hard-to-access places).
If there's active life in hard-to-access places, you can still find dried, dead spores in easier-to-access places. Spore formation is incredibly common.

The interest was geology, and just grabbing a bit of rock from one place has limited value for that. For instance, none of the Martian meteorites, as the featured story points out, are sedimentary rocks. And sedimentary rocks probably weren't going to be anywhere that a Viking-type lander could realistically land due to its landing ellipse size.

Of course, there were also budget issues. NASA at that time was very heavily involved in developing Shuttle, which didn't leave a lot of money left over for an expensive sample return mission. That particular design also required two Shuttle launches and in-orbit rendezvous and vehicle construction, which was never developed, and couldn't have been launched earlier than the 1980s, in any case. By that time, JPL had moved on to more capable (though more expensive) probes that only required one or sometimes two launches (they were trading different possibilities).

[url=http://meincmagazine.com/civis/viewtopic.php?p=30445527#p30445527:2p3quqp0 said:
Dmytry[/url]":2p3quqp0]Moon proposals didn't take that long.
Sure they did. The modern wave of Moon missions since the 1990s was based on proposals dating back to the 1960s. The Apollo missions were based on proposals as far back as Goddard in the 1920s and earlier (Tsiolkovsky being relatively unknown in the United States). There's always decades of research behind any mission, if you scratch the surface. No one proposes a mission and then just goes right away.
Well, if that's what you mean by "proposals", then you can find plenty of Mars proposals from the 1920s too.

[url=http://meincmagazine.com/civis/viewtopic.php?p=30445527#p30445527:2p3quqp0 said:
Dmytry[/url]":2p3quqp0]
Yeah, that's a good point, although some info about the Mars can be closed forever if some Earth extremophile brought about on the spacecraft manages to take hold there.
That's extremely unlikely due to the surface conditions. You could avoid that just by never going, anyways.[/quote]
Well, some organisms are fucking hardy, and it gets above freezing in places.
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=30446899#p30446899:i28fzfyo said:
truth is life[/url]":i28fzfyo]
[url=http://meincmagazine.com/civis/viewtopic.php?p=30446637#p30446637:i28fzfyo said:
Dmytry[/url]":i28fzfyo]Yeah, but that's what it would've fallen back to when Igla was not working. Everyone's reusing old plans all the time, even across much greater technological changes.
A long-term goal? No, they dropped it entirely, focused on Venus for a little while, then started to play up Phobos. Then they fell apart.
They would've tried that eventually; Phobos is easier than Mars, so why not try it first?
[url=http://meincmagazine.com/civis/viewtopic.php?p=30446637#p30446637:i28fzfyo said:
Dmytry[/url]":i28fzfyo]
Those rocks been in space for millions of years if not billions, and been sitting in dirt for a long time on Earth too.
Yes, but you can still do a lot of science with them and they're about as good quality, scientifically speaking as a grab sample: a random sample of part of the Martian crust. If you, the planetary geologist, are thinking about spending billions of dollars instead of hundreds of thousands, you're going to want to make the case that this is really a lot better scientifically, which the grab samples are not.
Never? What about a grab sample from Earth vs a piece of Earth rock that had been ejected into space (if it could even be ejected)?
[url=http://meincmagazine.com/civis/viewtopic.php?p=30446637#p30446637:i28fzfyo said:
Dmytry[/url]":i28fzfyo]
If there's active life in hard to access places, you can find dried and dead spores in easier to access spaces. Spore formation is incredibly common.
Depends on where the "hard to access" place is. It also depends on whether there are processes going on that destroy spores, which seems rather likely given what we know of the Martian surface.
They wouldn't "destroy" spores, they'd inactivate them. You can put stuff in agar without doing a sample return, but you can't do electron microscopy or the like.
[url=http://meincmagazine.com/civis/viewtopic.php?p=30446637#p30446637:i28fzfyo said:
Dmytry[/url]":i28fzfyo]
Well if that's what you mean by "proposals", then you can find a plenty of Mars proposals from the 1920s too.
And that just strengthens my point (it's also not really true because the Moon was a far better-known quantity, so you had some fairly detailed engineering analysis going on, e.g. the famous BIS study, while Mars didn't receive that until the 1950s). Any mission has many years of study behind it hashing out the best way to do things. That's a big reason why Apollo could go from speech to the Moon between 1961 and 1969, the principals had already figured out most of the underlying things that needed to be done. In the 1970s the technology needed to conduct automated sample return was so new that Mars return mission proposals didn't have that kind of heritage, so they weren't launched; they were either started and abandoned (Mars 5M) or sent back for further study and development.
There had already been automated sample return from the Moon; I'd think you could describe that as "most of the underlying things" too. Automatic docking is just one piece, like the giant engines on the Saturn V, which were tricky because of combustion instabilities.

Actually, unlike the engine issue, all the underlying mathematics of docking is well understood; it could even be implemented in a purely mechanical device (albeit with severe difficulty).

Although yes, I'll agree it was highly ambitious. The thing is, the performance of research teams, even those consisting largely of very brilliant people, was sometimes severely undercut by nepotism. With Igla, though, I can't see how you could call it overly ambitious without the benefit of hindsight.

edit: and the first automated unmanned docking was in 1967.
[url=http://meincmagazine.com/civis/viewtopic.php?p=30445527#p30445527:i28fzfyo said:
Dmytry[/url]":i28fzfyo]
Well, some organisms are fucking hardy, and it gets above freezing in places.
Hardy, yes, but Mars combines many "problems" for organisms into a single set, which means there's a very low chance any one organism will be adapted to survive all of them.
Maybe, maybe not; still worth sterilizing everything (not to kill everything, but to decrease the chances).
 

Dmytry

Ars Legatus Legionis
11,443
I was wondering: can you determine which areas of a phone's screen are commonly touched, even if the screen was thoroughly cleaned, by examining the glass with an atomic force microscope?

I would think yes, because fingers do erode glass (however little), and some history information should be there too (patterns produced in the pocket being smoothed out more in the recently used areas).

Also, I wonder if smudges usually contain good information about the order of presses (the finger movement in a press has to depend on the prior position).

This is for a hypothetical strawman case where someone had been using a strong password on their phone and wiping the screen after every entry, while doing something really evil but trusting technology too much.

Other question: electromigration in chips. Would transistors in static memory cells or other circuitry exhibit any long-term changes proportional to the charge that passed through them over their lifetime, similar to burn-in of displays? This is more of a theoretical interest: I wonder whether specialized secure hardware should move things around in memory to equalize the "burn-in", or perhaps simply XOR every cell with a random value that keeps changing, or whether that would just be voodoo-chicken security.
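The XOR idea could look something like this minimal sketch (a hypothetical `MaskedSecret` wrapper, purely illustrative; a real implementation would also need constant-time operations and secure wiping of old masks):

```python
import os

class MaskedSecret:
    """Stores a secret XOR-masked and periodically re-masks it, so no
    memory cell holds the same data-dependent bit for its whole lifetime."""

    def __init__(self, secret: bytes):
        self._mask = os.urandom(len(secret))
        self._masked = bytes(s ^ m for s, m in zip(secret, self._mask))

    def remask(self) -> None:
        # Swap in a fresh random mask. Over many rotations each stored
        # bit spends roughly half its time as 0 and half as 1, so any
        # data-dependent "burn-in" should average out.
        new_mask = os.urandom(len(self._mask))
        self._masked = bytes(v ^ old ^ new for v, old, new in
                             zip(self._masked, self._mask, new_mask))
        self._mask = new_mask

    def reveal(self) -> bytes:
        return bytes(v ^ m for v, m in zip(self._masked, self._mask))
```

Whether this actually defeats an electromigration-style readout or is just the voodoo chicken is exactly the open question.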

edit: it could be interesting to write a modern Sherlock Holmes who uses real tech to solve cases - e.g. he has a scanning electron microscope and an atomic force microscope, which probably cost about the same, adjusted for inflation, as a good regular microscope did back in Sherlock Holmes's day. Not the CSI "enhance" cybershit but good ole physical evidence. edit2: without over-focusing on just the genetics.

edit3: electromigration seems to be a big enough issue to affect the longevity of circuits... interesting. That would suggest to me that in some crypto hardware one may be able to determine the key by examining the memory holding that key, if the key is always in the same location. It may be non-trivial to fully protect against such attacks, as the computing hardware itself may experience burn-in.
 

Dmytry

Ars Legatus Legionis
11,443
Yeah, I wasn't thinking of traversing the whole screen, more of taking samples the size of the working area at multiple locations (e.g. a 100x100 grid, if that can be automated), to determine surface roughness, which would presumably differ between the most and least touched areas. There would also be some anisotropy. edit: also, I think you're low by 1000 :)
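The grid-sampling idea, as a sketch on hypothetical data (it assumes touched areas read smoother, which is of course the very thing the measurement would have to establish):

```python
import numpy as np

def roughness_map(height_maps: np.ndarray) -> np.ndarray:
    """height_maps: shape (ny, nx, h, w) -- one small AFM height scan
    (h x w, heights in nm) per grid site. Returns per-site RMS roughness."""
    # Subtract each scan's mean height and take the RMS of the residual
    # as a crude roughness measure (a real analysis would detrend properly).
    means = height_maps.mean(axis=(-2, -1), keepdims=True)
    return np.sqrt(((height_maps - means) ** 2).mean(axis=(-2, -1)))

# Hypothetical data: a 20x20 grid of 32x32 scans (the 100x100 grid from
# the post works the same way), with a "worn" centre region.
rng = np.random.default_rng(0)
scans = rng.normal(0.0, 1.0, size=(20, 20, 32, 32))  # ~1 nm RMS everywhere
scans[8:12, 8:12] *= 0.3                              # touched area, smoothed
rmap = roughness_map(scans)
print(rmap[10, 10] < rmap[0, 0])  # True: worn region reads smoother
```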
 

Dmytry

Ars Legatus Legionis
11,443
I pondered the same question in an earlier thread. It seems to me that you would be able to hear it at distances where you can survive the tidal forces, mostly because you can hear pressure differences down to parts per billion. But the effect is complicated, as gravitational waves preserve volume, so you probably need solids to be able to detect them. I mostly considered the case where you are standing in the air close to a solid object.
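Quick check on the parts-per-billion figure, using the standard threshold-of-hearing numbers:

```python
# Threshold of hearing (0 dB SPL) is about 20 micropascals RMS;
# atmospheric pressure is about 101,325 Pa.
p_hearing = 20e-6       # Pa
p_atm = 101_325.0       # Pa
ratio = p_hearing / p_atm
print(f"{ratio:.1e}")   # ~2e-10, i.e. a fraction of a part per billion
```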
 

Dmytry

Ars Legatus Legionis
11,443
How does the "VRC4 card" on this page work? It somehow glows with visible light when exposed to IR (and it's not just a pre-charged phosphorescent layer getting activated by IR). Is it multi-photon fluorescence? Or some nonlinear optics? edit: apparently some nanocrystals can do multi-photon fluorescence at reasonably low light levels, interesting.
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=31743935#p31743935:2u0juv9e said:
MilleniX[/url]":2u0juv9e]
[url=http://meincmagazine.com/civis/viewtopic.php?p=31741479#p31741479:2u0juv9e said:
.劉煒[/url]":2u0juv9e]
[url=http://meincmagazine.com/civis/viewtopic.php?p=31739551#p31739551:2u0juv9e said:
Dmytry[/url]":2u0juv9e]I always liked the idea of fission sail or alpha decay sail. Deposit an isotope with high spontaneous fission rate onto a thin film, which is spin-stabilized.
That sounds like a limited power source, the faster the decay rate the shorter the half lives.
If most of your isotope spontaneously fissions then you get as much energy out per kg as you do with a nuclear reactor (but far better efficiency converting it to dv), and with the kind of mission timespan involved it really doesn't matter much that it'll take longer to burn through the fuel.

Dunno if there are isotopes with high enough spontaneous fission rates (over the decay chain) and long enough half-lives for the other decay modes, so that most of the energy comes in the form of spontaneous fission.
Also, once you get going with it, how do you turn around and stop at the other end? Unlike a rocket, you can't turn around and retro-thrust. Unlike a light sail, you can't turn around and reflect light coming from the destination. A decay sail is going to keep decaying continuously. Maybe it's built with enough material to last all the way to the destination?
Well the discussion was about how fast you can go.

When you're stopping you have the advantage that you're moving awfully fast against the stellar wind, so you can e.g. have some kind of huge magnetic sail that powers itself. There are plenty of proposals which use something like that for deceleration. edit: details would really depend on how fast this thing is going and how heavy it is.
 

Dmytry

Ars Legatus Legionis
11,443
It wouldn't be easy to get the fuel to be critical when it's a fraction of a micrometre thick at any point and has enough spacing around it that the fission fragments don't just collide with the rest of the fuel but can actually get out. I'm rather dubious it can work on a reasonable scale; you're reducing density enormously, raising the critical mass.

edit: namely, if you decrease the fuel density by X, the mean free path of neutrons increases by X, so my understanding is that the size has to increase by ~X as well, the volume by ~X^3, and the mass by ~X^2. I'm not sure how much you'd need to decrease the fuel density, but I imagine by a very large factor if the fuel is in layers so thin that a substantial fraction of the highly charged fragments escape, and the layers are spaced far enough apart that said fragments could somehow be redirected or otherwise used before they collide with the fuel. I think it just ain't going to work, with a very large value of "ain't". Hence why I'm talking about spontaneous fission or alpha decay.
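The scaling in numbers, as a trivial sketch of that argument:

```python
# If fuel density drops by a factor X, the neutron mean free path grows
# by X, so the critical radius grows ~X, the volume ~X^3, and the mass
# (lower density times larger volume) ~X^2.
def critical_mass_factor(x: float) -> float:
    density_factor = 1.0 / x
    volume_factor = x ** 3
    return density_factor * volume_factor  # = x**2

print(critical_mass_factor(100.0))  # density down 100x -> mass up ~10,000x
```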
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=31860875#p31860875:3t8260wo said:
UserJoe[/url]":3t8260wo]
[url=http://meincmagazine.com/civis/viewtopic.php?p=31860611#p31860611:3t8260wo said:
Dmytry[/url]":3t8260wo]It wouldn't be easy to get the fuel to be critical when it's a fraction of a micrometre thick at any point and has enough spacing around it that the fission fragments don't just collide with the rest of the fuel but can actually get out. I'm rather dubious it can work on a reasonable scale; you're reducing density enormously, raising the critical mass.

edit: namely, if you decrease the fuel density by X, the mean free path of neutrons increases by X, so my understanding is that the size has to increase by ~X as well, the volume by ~X^3, and the mass by ~X^2. I'm not sure how much you'd need to decrease the fuel density, but I imagine by a very large factor if the fuel is in layers so thin that a substantial fraction of the highly charged fragments escape, and the layers are spaced far enough apart that said fragments could somehow be redirected or otherwise used before they collide with the fuel. I think it just ain't going to work, with a very large value of "ain't". Hence why I'm talking about spontaneous fission or alpha decay.
It's actually been looked at enough that there is a Wiki page about the concept. The physical size and mass of fissile materials was reasonable. The reflector mass would be large compared to the mass of the core.
I looked at the first cited paper; it starts with 18.5 tons for the moderator-reflector... then proceeds to discuss operating it at 10 gigawatts, where my favourite phrase is "for example, if the wheel diameter is 200 meters, and the wheel rim has a velocity of 1km/s".

The problem is reactors don't scale down very well.

edit: here's the spontaneous fission list on wikipedia. Take Cm-250: it mostly fissions spontaneously, and its half-life is 6,900 years, i.e. about 1% of it decays in 100 years, which is going to be a lot better on a 100-year mission than carrying an enormous mass of non-fuel as per above. Plus it scales all the way down to a micro probe. I don't know if there are any better isotopes.
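Quick check of the ~1% figure, using the half-life quoted above:

```python
# Fraction of Cm-250 decaying over a 100-year mission, with the
# half-life figure from the post (~6,900 years): 1 - 2**(-t/T).
T_half = 6900.0  # years
t = 100.0        # years
fraction = 1.0 - 2.0 ** (-t / T_half)
print(f"{fraction:.2%}")  # roughly 1%
```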

edit2: wait, this is the misc musings thread! I'm free to go off on another tangent!

So, I was wondering: what is the theoretical lower limit on the probe payload size where the probe could eventually send its own progeny to the stars? (Assuming any stable arrangement of atoms can be manufactured for the first probe, by an advanced civilization.) I'm thinking it would be a few micrometres - enough space to contain the smallest self-replicator, with extra code for increasingly larger structures. That would seem to dramatically cut down the time it takes to colonize a galaxy, to where the prevalent way of detecting Dyson spheres is "is there one around the Sun?".

edit3: another observation: given the extreme versatility of carbon by itself (e.g. carbon nanotubes can be more conductive than copper, or semiconducting, depending on the chiral indices, and it appears possible to dope said semiconductor with nothing but lattice irregularities), and its high chemical stability, it may be theoretically possible to build a self-replicator that uses only carbon, or carbon with some hydrocarbons as the carrier liquid, using carbon-only photovoltaics for power.

It's hard to overstate, though, just how extremely advanced such technology would be - it is far beyond "mere" self-replication in a vat of highly purified feed fluid, it is beyond most things you could do with said nanobots in said vat (extremely powerful computers, scanning of brains into said computers, and other scifi), and it's well beyond "mere" self-replication with almost just carbon but using half the periodic table as catalysts, etc.
 

Dmytry

Ars Legatus Legionis
11,443
[url=http://meincmagazine.com/civis/viewtopic.php?p=31888627#p31888627:1bd8mtj1 said:
redleader[/url]":1bd8mtj1]
[url=http://meincmagazine.com/civis/viewtopic.php?p=31888019#p31888019:1bd8mtj1 said:
Arbelac[/url]":1bd8mtj1]
[url=http://meincmagazine.com/civis/viewtopic.php?p=31887805#p31887805:1bd8mtj1 said:
Arbelac[/url]":1bd8mtj1]
All the infrastructure would still exist, except it would need to be replaced. No wireline communications, massive power failures (grid would likely be destroyed). Food distribution would instantly become an issue as most vehicles would be dead, and producers equipment would be dead as well. Riots and violence would escalate quickly in urban centers as starvation sets in.

Damaging a vehicle from space would require unimaginably strong fields because of the very small cross-section of the electronics (a meter or two at most). The problem you have is that cars and trucks still work, but you'll need some way to pump and distribute fuel without power. It can be done, but it would be a mess organizationally.

I was thinking along the lines of the Carrington Event of 1859.

I think that is quite a few orders of magnitude weaker than you would need to damage compact objects like cars or even homes. The Carrington Event generated sparks across telegraph lines because telegraph lines are hundreds or thousands of miles long. To damage an object thousands or millions of times shorter requires thousands or millions of times stronger fields. That isn't very likely, and if it did happen, the damage to electrical equipment would probably be the least of our concerns.
A much simpler way to put it: a car can withstand fields up to and beyond what induces breakdown in air. I.e., it can get struck by lightning and the metal shell will conduct all the current with no effect on the electronics (except maybe the radio). The tires would arc over and burn up before the electronics failed.
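Rough numbers for the length-scaling point (the geoelectric field value here is an assumed order of magnitude for illustration, not a measured Carrington figure):

```python
# Induced voltage scales with conductor length; assume a storm-driven
# geoelectric field of a few V/km.
E = 5.0                        # V/km, assumed order of magnitude
telegraph_line_km = 1000.0     # long telegraph line
car_m = 2.0                    # electronics cross-section of a car

v_line = E * telegraph_line_km      # thousands of volts: sparks on lines
v_car = E * (car_m / 1000.0)        # ~10 millivolts: nothing for a car
print(v_line, v_car)
```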

edit: basically, the damage would be to the grid and possibly to the communication satellites (no idea how tough those are against CMEs - would it simply shorten their operational lifespan or fry them outright?), as well as other communication gear. edit2: I wonder though, could it end up starting residential fires etc.? US construction is basically a tinderbox.

The only way it could end civilization is if it somehow triggers a nuclear war due to something stupid, e.g. a submarine or a remote missile silo thinking that a nuclear war happened, or nuclear-detonation detectors glitching.
 