Focusing on Michael's G7 BC results with the 178 Amax:
.248 from a lower velocity round (.308 Win, I think), and
.290 from a 300 RUM
.248 is only +3% from my value of .240. This is essentially a match, considering that the error only amounts to ~10" of difference in predicted trajectory at 1000 yards. That's about as accurate as a drop test can be expected to be given all the uncertainties mentioned (chrono error, range error, etc.) as well as unaccounted-for effects like vertical wind and Coriolis (which could be up to ~3" of vertical in this case).
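For anyone who wants to check that Coriolis number, here's a rough Python sketch using the standard flat-fire correction, where the gravity drop is scaled by 2*Omega*V0/g * cos(latitude) * sin(azimuth). The drop, muzzle velocity, latitude, and azimuth below are hypothetical values for a 1000 yard .308-class shot, not Michael's actual conditions.
Code:
import math

OMEGA = 7.292e-5   # earth's rotation rate, rad/s
G = 32.174         # gravity, ft/s^2

def vertical_coriolis_inches(drop_in, mv_fps, lat_deg, az_deg):
    # Flat-fire approximation: the change in drop is the gravity drop
    # scaled by 2*Omega*V0/g * cos(latitude) * sin(azimuth).
    # Positive = strikes high (e.g. firing east in the northern hemisphere).
    scale = (2.0 * OMEGA * mv_fps / G
             * math.cos(math.radians(lat_deg))
             * math.sin(math.radians(az_deg)))
    return drop_in * scale

# Hypothetical: ~380" of gravity drop at 1000 yd, 2600 fps MV, 45 deg N, firing due east.
print(round(vertical_coriolis_inches(380.0, 2600.0, 45.0, 90.0), 1))   # ~3.2 inches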
Now the same bullet design is fired from a different, higher velocity rifle, and it appears to produce a G7 BC of .290. Since the weight and caliber of the bullet are the same, the only thing that could account for the different BC is drag. Going from .248 to .290 implies that the drag (form factor) was reduced by ~15%. That's a huge amount. That's the difference between a bullet profiled like the 178 Amax (which is a relatively high drag bullet) and something even sleeker than a 7mm 162 Amax (which is an exceptionally low drag bullet).
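To put a number on that (a quick sketch; the only inputs are the two BCs themselves): the bullet's weight and caliber, and therefore its sectional density, are fixed, and BC = SD / form factor, so the implied drag change falls straight out of the BC ratio.
Code:
bc_low, bc_high = 0.248, 0.290   # G7 BCs implied by the two rifles

# Same bullet -> same sectional density, and BC = SD / form_factor,
# so form_factor(high-BC rifle) / form_factor(low-BC rifle) = bc_low / bc_high.
drag_reduction = 1.0 - bc_low / bc_high
print(f"implied reduction in drag (form factor): {drag_reduction:.1%}")   # ~14.5%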
When groper insists that the G7 BC can't be that high (.290), it's the same as saying the drag of the 178 Amax can't be as low as that of a 7mm 162 Amax. And he's right. That's a 15% difference, and bullets don't magically fly with 15% less drag from one rifle to another. You don't ever have to pull a trigger to know that.
I think Michael and the others know that the bullet isn't flying with such a different BC, but they're still left wondering why their tests imply such a drastic difference. That is the question I'd like to focus on, because asking why the BC really is 15% different between rifles is a waste of time; it isn't that different.
Michael provided information that describes a fairly detailed and well-executed test. Some basic questions I have about the test are:
1) Wind conditions. Is there any terrain that would generate vertical wind over the range you're testing? We all know what the smallest amount of wind can do to horizontal displacement. It's just as bad in the vertical plane if it's there.
2) The test that produced a BC of .290; was it ever repeated? If so, how closely was the .290 repeated?
3) Are you accounting for the velocity drop between the muzzle and chrono? A 10 fps drop in 5 yards (from muzzle to chrono) correlates to ~4" difference in predicted drop at 1000 yards.
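On #3, here's a minimal sketch of the kind of back-correction I mean. The 2 fps-per-yard loss rate is only a placeholder for a .308-class load; get the actual rate from a ballistics program, or from chronographs at two distances.
Code:
def muzzle_velocity(chrono_fps, chrono_dist_yd, loss_fps_per_yd=2.0):
    # Add back the velocity the bullet sheds between the muzzle and the
    # chronograph screens. loss_fps_per_yd is an assumed placeholder here.
    return chrono_fps + loss_fps_per_yd * chrono_dist_yd

# Chrono reading 2790 fps at 5 yards -> ~2800 fps at the muzzle.
print(muzzle_velocity(2790.0, 5.0))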
Michael and most everyone else are deriving BCs from drop tests. One of the most critical measurement instruments in that experiment is the scope, so I'd like to ask some questions about that.
I'll question the assumption that a scope that's calibrated (meaning its click values have been measured) while mounted on one rifle has the same calibration factor when mounted on a different rifle. Here are my reasons:
1) I suspect that when a scope is mounted with different hardware (rings, bases, etc.) and clamped in different places on the tube, it can affect the amount of internal deflection of the 'mechanism' that moves the reticle in response to clicking the turret.
2) Let's say on rifle 'A' the wind zero for the scope is at the mechanical and optical center of the scope. You establish a calibration factor for this wind zero. Now mount the same scope on rifle 'B', where it happens that the wind zero is 10 MOA (for example) to the right. Now, when the plunger pushes on the ROUND erector tube that houses the reticle, it's not pushing directly on the bottom of the ROUND cylinder, but instead is pushing up on the edge of it. I think this might result in a different amount of reticle movement per click compared to the situation where the plunger pushes directly on the center of the round tube.
The above two possibilities are only my speculation; I'm grasping for possible causes as to why a scope's vertical adjustment calibration might not be the same when mounted to different rifles. Keep in mind I'm not an optics expert.
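What I mean by a calibration factor is something like the tall-target check sketched below (the numbers are hypothetical): dial a known amount, measure how far the group actually moves, and scale your dialed come-ups by that ratio before deriving a BC from the drop data.
Code:
def click_calibration_factor(dialed_moa, measured_in, range_yd):
    # Ratio of actual reticle movement to dialed adjustment.
    # 1 MOA subtends ~1.047" per 100 yards.
    intended_in = dialed_moa * 1.047 * (range_yd / 100.0)
    return measured_in / intended_in

# Hypothetical tall-target test: dial 30 MOA at 100 yards, group moves 30.4".
cal = click_calibration_factor(30.0, 30.4, 100.0)
print(round(cal, 3))        # 0.968: each 'MOA' on this turret is really ~0.968 MOA

# So a dialed come-up of 28.5 MOA on this rifle/scope combo is really:
print(round(28.5 * cal, 1))   # ~27.6 MOA of actual angle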
Back to the BCs. I'm not here to stomp my foot and declare that anyone's wrong because they got different numbers than mine and my numbers are the accurate ones. In this case, we can throw my numbers away completely and focus on the fact that someone did testing which implied the same bullet had a BC that was 15% different when fired from two different rifles. I think we all accept that that's not actually the case, but the question we all want answered is: how and why would a test imply that?
Could it be velocity effects? I don't think so, and here's why. Using my latest test results, I've compiled the following correlation between G7 BC and velocity for the 178 Amax. Note that the BC values given are the instantaneous values at the corresponding speed; to get the average G7 BC for a trajectory, you average the BCs over the range of flight speeds of interest.
Code:
velocity (fps)   G7 BC
3300 0.255
3200 0.254
3100 0.253
3000 0.252
2900 0.250
2800 0.249
2700 0.247
2600 0.245
2500 0.243
2400 0.241
2300 0.239
2200 0.236
2100 0.234
2000 0.232
1900 0.231
1800 0.230
1700 0.230
1600 0.231
1500 0.233
1400 0.243
1300 0.242
1200 0.223
The average G7 BC from 3300 to 1500 fps (roughly the range of flight speeds over 1000 yards for the 300 RUM) is .241. The average for the lower velocity round (2800 to 1200 fps) is .237. So everyone who's been looking to velocity effects to explain the difference in BC is right that there is a difference, but according to the numbers above, the difference is only about 1.7% in average BC, not the 15% implied by the test. Could my numbers be off a little? Sure. But I don't think they're off that much. The implied .290 BC is coming from somewhere else.
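For anyone who wants to reproduce those two averages, here's a straight (unweighted) average of the table above over each velocity band, which is where the .241 and .237 come from:
Code:
# Instantaneous G7 BC vs velocity (fps), from the table above.
bc_table = {
    3300: 0.255, 3200: 0.254, 3100: 0.253, 3000: 0.252, 2900: 0.250,
    2800: 0.249, 2700: 0.247, 2600: 0.245, 2500: 0.243, 2400: 0.241,
    2300: 0.239, 2200: 0.236, 2100: 0.234, 2000: 0.232, 1900: 0.231,
    1800: 0.230, 1700: 0.230, 1600: 0.231, 1500: 0.233, 1400: 0.243,
    1300: 0.242, 1200: 0.223,
}

def avg_bc(v_high, v_low):
    # Straight average of the instantaneous BCs in the velocity band.
    vals = [bc for v, bc in bc_table.items() if v_low <= v <= v_high]
    return sum(vals) / len(vals)

print(round(avg_bc(3300, 1500), 3))   # 0.241 -> 300 RUM band
print(round(avg_bc(2800, 1200), 3))   # 0.237 -> lower velocity round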
Some more context. I've tested the 155, 168 and 178 Amax. These bullets have essentially the same ogive and boat tail, which dictates that they should all have close to the same G7 form factor. The measured form factors for these three bullets are 1.100 for the 155, 1.101 for the 168, and 1.118 for the 178. In other words, bullets that have the same geometry as it relates to drag all have essentially the same measured form factor (drag), within 2%. Note the 178 has a little more drag, which might be due to the longer bearing surface (more skin friction drag). The G7 BCs of these bullets are calculated by dividing the sectional density by the form factor. A G7 BC of .290 for the 178 Amax implies a G7 form factor of .924! This gets back to the notion that the .290 BC implies a level of drag that's simply unrealistic for a bullet shaped like this (and the 155, 168, etc.).
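Here's the arithmetic behind that, as a sketch (sectional density in lb/in^2; G7 BC = SD / G7 form factor):
Code:
def sectional_density(weight_gr, caliber_in):
    # SD in lb/in^2: bullet weight in pounds (7000 grains per pound) over caliber squared.
    return (weight_gr / 7000.0) / caliber_in ** 2

sd_178 = sectional_density(178.0, 0.308)    # ~0.268 lb/in^2

# BC = SD / form_factor, so a claimed BC implies a form factor:
print(round(sd_178 / 0.240, 3))   # ~1.117 -> essentially my measured 1.118
print(round(sd_178 / 0.290, 3))   # 0.924  -> what the .290 drop test implies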
I can think of many scenarios that would cause a bullet to fly with more drag than it should (stability problems, rough/worn barrels, bullets deforming in the barrel, etc.). However, there is no way a bullet flies with less than its minimum drag.
So we're back to the testing. I'm interested in Michael's answers to the questions above. My interest lies in understanding (for myself, and in helping others to understand) the science of ballistics, and how and why some observations appear to contradict what's known about it.
Those who deliberately enter false values into a ballistics program in order to get it to match what they shot in the field will get an answer that's more or less useful for those specific conditions. But those who go to the trouble of verifying that all the inputs are correct will get a prediction that is much closer to reality, much more accurate over a wider range of conditions and distances, and more accurate on the other outputs (retained velocity, time of flight, wind drift, etc.). If you're happy to knowingly enter false values, so be it. It's not my job to convince everyone that science works. But for those who are interested in getting to the bottom of things and doing it right, let's talk.
-Bryan