Recipes to play with “No Deco Limits”

For technical, philosophical, and every other view on no-deco limits, go to ScubaBoard.

First let me elaborate on why the mathematical model is important and how to play with it.

Models allow reverse engineering (fitting)
This post is about understanding the mathematical model in terms of the NDLs we know and use. One of the most important things about any model is that once you have verified it in one direction (how much deco must I do for a specific dive?), you can run it in the other direction (how long can I dive before I have mandatory deco?). You can then work out what parameters, adjustments, corrections, and variations other people were using when they came up with the numbers they gave you.

This is a subtle point, and the one that excites me the most. What this means is, if someone said to me, “Let’s dive to 100 feet, for 40 minutes, on 32%, and ascend while stopping two minutes every ten feet,” I now have the tools to guess their parameters.

Suppose they were using VPM; then I can reverse-engineer things like what bubble sizes they considered “critical”, what their perfusion rates were assumed to be, and so on. If they were using Buhlmann, I can reverse-engineer their gradient factors.
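To make this concrete, here is a minimal sketch of running the model backwards, written against the same dive library used in the snippets below. It simplifies by treating the quoted 40 minutes as the diver’s NDL (their plan also included stops), and the “EAN32” label is just a name I picked; the idea is to sweep gradient factors until the model first permits the dive.

var quotedBottomTime = 40; // "100 feet, for 40 minutes, on 32%"
var fittedGf = null;

for (var gf = 0.5; gf <= 1.3; gf += 0.05) {
  var buhlmann = dive.deco.buhlmann();
  var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
  plan.addBottomGas("EAN32", 0.32, 0.0);

  // Higher gradient factors are more permissive, so the first one whose
  // NDL covers the quoted bottom time is the most conservative setting
  // consistent with their dive.
  if (plan.ndl(dive.feetToMeters(100), "EAN32", gf) >= quotedBottomTime) {
    fittedGf = gf;
    break;
  }
}

console.log("Their gradient factor was at least: " +
  (fittedGf === null ? "outside the sweep" : fittedGf.toFixed(2)));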

This is awesome because it allows me to break down the black box a little – instead of whining about “My computer said 10 minutes, and yours said 20 minutes”, I can whine in a far more specific and deeply annoying way – “Oh I see, my computer is assuming critical He bubble size to be 0.8 microns, but yours is assuming 0.5 microns.” Remember kids – you shouldn’t always whine, but when you do, make it count!

When your computer has a “conservatism factor”, what does that mean? Is it simply brute-multiplying each stop by that factor? Is it multiplying only the shallow stops? Is it a coefficient in some curve-fitting scheme, say one fitting a spline or Bezier to “smoothen out the ascent”? Being told the conservatism factor is “4” makes you no more intelligent about what’s going on than being told, “These are the adjustments/corrections I make.”
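As a purely hypothetical illustration (not how any particular computer actually works), here are two readings of the same knob that produce different ascents:

// Two ways a dive computer might apply a "conservatism factor" to a
// list of stop times, in minutes, ordered deepest stop first.
function scaleAllStops(stops, factor) {
  // Interpretation 1: brute-multiply every stop.
  return stops.map(function (t) { return t * factor; });
}

function padShallowStop(stops, factor) {
  // Interpretation 2: lengthen only the shallowest (last) stop.
  var padded = stops.slice();
  padded[padded.length - 1] *= factor;
  return padded;
}

// The same "conservatism 1.5" yields two different schedules:
console.log(scaleAllStops([1, 2, 8], 1.5)); // [1.5, 3, 12]
console.log(padShallowStop([1, 2, 8], 1.5)); // [1, 2, 12]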

While these ARE “just models”, models are nothing if they are not properly parameterized.

Here again, existing software fell short of what I could do with it. A GUI is a great tool for a narrow, specific task, but when exploring a model, nothing is more useful and powerful than being able to play with it programmatically. Once I begin posting recipes you’ll see what is so fascinating about “playing with it”.

If you’re a fan of MythBusters, you’ll have seen them refer to this as “producing the result”.

Models allow study of rates-of-change (sensitivity analysis)

The other very important aspect of a model, even if the constants are wrong, is the overall rate of change, or growth. This is also called sensitivity analysis (how sensitive is my model to which parameters?).

Let us say we had a few things in our control: ppO2, ppN2, ppHe, bottom time, ascent rate, descent rate, and stops.

What a mathematical model allows us to learn (and should help us learn) is how sensitive the plans are to each of these parameters, even if the specific constants are wrong.

Let me put it this way: if you wanted to gauge the sensitivity of a “car” to things like weight, number of gear shifts, size of wheels, and so on, and you had a Hummer to study with but had to somehow extend that knowledge to a sedan, how would you do it?

The “constants” are different in both. But the models aren’t. An internal combustion engine has an ideal RPM at which it provides maximum torque for minimum fuel consumption. The specific rev rate will differ, and you can’t account for that. However, the “speed at which inefficiency changes” is a commonality across all internal combustion engines. Unless the sedan is using a Wankel engine, the rate-of-change characteristics still apply. Even if the Hummer’s ideal RPM is 2000 and the sedan’s is 1500, we can still ask: when I deviate 10% from the ideal, how does that affect fuel consumption and torque?

So even if the software/constants I wrote are entirely wrong (which they probably are), they still serve as a valuable tool for studying these changes.
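Here’s a minimal sketch of that idea, written against the same dive library used in the snippets below: nudge one input by 10% and watch how the output moves. The specific minutes don’t matter; the shape of the response does.

// Finite-difference sensitivity: how does the Buhlmann NDL respond to a
// 10% change in depth?
function ndlAtFeet(feet) {
  var buhlmann = dive.deco.buhlmann();
  var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
  plan.addBottomGas("Air", 0.21, 0.0);
  return plan.ndl(dive.feetToMeters(feet), "Air", 1.0);
}

var base = ndlAtFeet(100);
var nudged = ndlAtFeet(110); // depth +10%
console.log("NDL at 100 ft: " + base + " min; at 110 ft: " + nudged + " min");
console.log("Change: " + (100 * (nudged - base) / base).toFixed(0) + "%");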

A study in NDLs

Naturally, one of the first unit tests I wrote for the algorithm was against the PADI dive tables: https://github.com/nyxtom/dive/blob/master/test/dive_test.js#L646

The point here was to recreate an approximation of the dive tables. What fascinated me, though, was how much subtle understanding there is behind that number.

First, let’s define an NDL as: the maximum time at a depth with an ascent ceiling of zero.

What this means is, whether you use Buhlmann, or VPM, or whatever model you like, the NDL is the longest time you can stay down and still ascend straight to the surface (a depth of zero meters).

So what happens when we run pure Buhlmann without a gradient factor?

(This snippet is meant to be executed here: http://deco-planner.archisgore.com/)

var buhlmann = dive.deco.buhlmann();
var plan = new buhlmann.plan(buhlmann.ZH16BTissues); // ZH-L16B tissue compartments
plan.addBottomGas("Air", 0.21, 0.0); // 21% O2, 0% He
plan.ndl(dive.feetToMeters(100), "Air", 1.0); // gradient factor 1.0, i.e. pure Buhlmann

// Result is 16

That’s a bit strange, isn’t it? The official NDL on air is closer to 19 or 20 minutes (with a “mandatory safety stop”).

Does it mean my model is wrong? My software is wrong? Compare it with different depths, and you’ll find it gives consistently shorter NDLs. What gives?
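If you want to see this for yourself, here’s a quick sweep over depths using the same ndl call; compare the output against whatever table you trust.

var depths = [60, 70, 80, 90, 100, 110]; // feet
for (var i = 0; i < depths.length; i++) {
  var buhlmann = dive.deco.buhlmann();
  var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
  plan.addBottomGas("Air", 0.21, 0.0);
  console.log(depths[i] + " ft: " +
    plan.ndl(dive.feetToMeters(depths[i]), "Air", 1.0) + " min");
}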

Let’s try fudging the conservatism factor a bit.

var buhlmann = dive.deco.buhlmann();
var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
plan.addBottomGas("Air", 0.21, 0.0);
plan.ndl(dive.feetToMeters(100), "Air", 1.1); // gradient factor raised to 1.1

// Result is 19

That’s just about where we expect it to be. This tells me that the NDL could have been computed with a less conservative factor. But is there something I’m missing?

Wait a minute, this assumes you literally teleport to the surface. That’s not usually the case. Let’s run the same NDL with a 30-feet-per-minute ascent (this time we have to use the getCeiling method).

for (var bottomTime = 1; bottomTime <= 120; bottomTime++) {
  var buhlmann = dive.deco.buhlmann();
  var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
  plan.addBottomGas("Air", 0.21, 0.0);
  plan.addFlat(dive.feetToMeters(100), "Air", bottomTime); // bottom segment
  plan.addDepthChange(dive.feetToMeters(100), 0, "Air", 3); // ascend to the surface over 3 minutes (~30 ft/min)

  // The first bottom time that leaves a ceiling below the surface is one
  // minute past the NDL.
  if (plan.getCeiling(1.0) > 0) {
    console.log("NDL for 100 feet is: " + (bottomTime - 1));
    break;
  }
}

// Output: NDL for 100 feet is: 19

That’s interesting. For the same parameters, if we assume a three-minute ascent, our NDL went up: we can stay down longer if we are ASSURED of a 30-feet-per-minute ascent at the end.

Now remember, these numbers are entirely made up. My constants are probably helter-skelter. You shouldn’t use the SPECIFIC numbers from this model. But we’ve discovered something intuitive.

Let’s try it again with a 3 minute safety stop at 15 feet:

for (var bottomTime = 1; bottomTime <= 120; bottomTime++) {
  var buhlmann = dive.deco.buhlmann();
  var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
  plan.addBottomGas("Air", 0.21, 0.0);
  plan.addFlat(dive.feetToMeters(100), "Air", bottomTime); // bottom segment
  plan.addFlat(dive.feetToMeters(15), "Air", 3); // 3-minute safety stop at 15 feet

  if (plan.getCeiling(1.0) > 0) {
    console.log("NDL for 100 feet is: " + (bottomTime - 1));
    break;
  }
}

// Output: NDL for 100 feet is: 22

Once again these numbers make sense – if we are ASSURED of a 3 minute stop at 15 feet, our NDL goes up. How interesting.

This gives you a better idea of a “dynamic” dive. You aren’t exactly teleporting from depth to depth, and those ascents and descents matter. Try this for different gases, as in the sketch below.
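For example, here is the same ceiling-based loop swept over a few standard nitrox mixes (the names are just labels; only the O2 fraction matters to the model, and I’m ignoring O2-exposure limits here):

var gases = [["Air", 0.21], ["EAN32", 0.32], ["EAN36", 0.36]];
for (var g = 0; g < gases.length; g++) {
  var name = gases[g][0];
  var fO2 = gases[g][1];
  for (var bottomTime = 1; bottomTime <= 120; bottomTime++) {
    var buhlmann = dive.deco.buhlmann();
    var plan = new buhlmann.plan(buhlmann.ZH16BTissues);
    plan.addBottomGas(name, fO2, 0.0);
    plan.addFlat(dive.feetToMeters(100), name, bottomTime);
    plan.addDepthChange(dive.feetToMeters(100), 0, name, 3); // 3-minute ascent
    if (plan.getCeiling(1.0) > 0) {
      console.log("NDL at 100 ft on " + name + ": " + (bottomTime - 1) + " min");
      break;
    }
  }
}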
