Flat Earth Follies: High Altitude Balloon footage PROVES Flat Earth
Well... no, not even a little bit...

Not So Fast
Almost all the footage from high-altitude balloons shows the apparent curvature fluctuating from
...concave (when the horizon line is below center of the lens)

Figure 1. Horizon below lens center bows the horizon down, would flatten any actual curvature

...to convex (when the horizon line is above center)

Figure 2. Horizon above lens center bows horizon up

...to flat?

Figure 3. Horizon through lens center is most accurate
This effect is called 'barrel distortion' and is almost unavoidable when you are using the wider-angle lenses common for this purpose, especially on cameras such as the popular GoPro series. Lenses that try to compensate for this are called Rectilinear lenses, but it's more difficult, and more expensive, to make a wide-angle Rectilinear lens.
I marked a big X on the last frame to help us find the center of the image (the other two are obviously far from center). What we find is that the closer the horizon line is to the center of the lens the less distortion there is.
All three of these images are from the same video, the same camera, and the same lens. They are all distorted exactly the same way. This is just to demonstrate that the area in the center of the lens has less distortion and to show clearly the direction of that distortion.
So it should be clear that you cannot just pick any frame from a video and declare the Earth to be flat or curved. You have to analyze what you see so you can draw an accurate conclusion.
If we were looking through such a lens straight at a square grid of lines the resulting image would look something like this:

Figure 4. Curvilinear distortion, credit Wikimedia Commons
The middle portions (vertically and horizontally) are expanded outwards from the center so our straight lines appear to be bowed the further from the center you get, exactly as we saw in the first frame when the horizon was below center and it was bowed downwards.
This means that when the horizon is at or below the center line and you can still see positive curvature, then you can be sure you are seeing actual curvature.
And, with the horizon below the center line, we would also expect the measured "apparent curvature" to be slightly less than the actual curvature due to this lens distortion. However, the edges are less distorted than the middle portion so under normal conditions the distortion should be small.
We really only care about two points, the peak of the horizon and where the horizon meets the edge of the frame.
We can also use the properties of the lens to reverse the distortion introduced, as I've done here with the first two images.

Figure 5. Frame from Robert Orcutt footage, corrected for lens distortion

Figure 6. Frame from Robert Orcutt footage, corrected for lens distortion

How to proceed?
One thing we can do is try to find a frame where the peak of the horizon goes through the center of the lens and is fairly level, to minimize distortion. And since the downward curved horizon would entirely fall in the lower half of the frame, the distortion here is actually making the curvature appear flatter rather than exaggerating it. Once you understand that you are already seeing less curvature than is actually present in such an image, you really shouldn't need any further analysis to know that this shows positive curvature.
At the lower altitudes we all know that it will at least appear very flat and be difficult to measure¹, as it should be given the size of the Earth. The visual curvature at this point would be mere fractions of a degree so we're going up to around 100,000 feet to get a better view.
For this detailed analysis I'm using the raw RotaFlight Balloon Footage from around 3:22:50 because they documented the camera details, the horizon is pretty sharp, and this footage is available at 1080p (an earlier version of this page used the Robert Orcutt footage but it was 720p, I didn't know which camera or settings he used, and the horizon wasn't very sharp in that footage). The camera used in the RotaFlight footage was a GoPro Hero3 White Edition in 1080p mode. From the manual I know that 1080p on this camera only supports Medium Field of View which is ~94.4°. The balloon pops almost 4 minutes later (at 3:26:29) so we should be up around 100,000' in this frame.
I used a Chrome plugin called 'Frame by Frame for YouTube™' to find a frame where the horizon is about one pixel below dead center and it is very level. I then captured that frame at 1920x1080:

Figure 7. RotaFlight Raw Weather Balloon Footage, ~3:22:50
I marked the center of the frame with a single yellow pixel, so you can see the horizon is either at or maybe 1 pixel below that mark. There are 38 pixels between the two red lines (both of which are 1 pixel high and absolutely straight across the frame) which mark the extents of the peak of our horizon and where the horizon hits the edge of the frame (±2 pixels as the horizon isn't perfectly sharp due to the thick atmosphere along the edge).
And again, since the limbs of the Earth shown here are below the center of the lens the distortion is, if anything, reducing the apparent curvature into a slightly flatter curve.
If you wanted a picture you could look at and verify the positive curvature of the Earth, this is that picture.
[You can also use the FEI horizon calculator to do the calculations shown below]
How close is that to what we EXPECT to see?

Common sense should tell you that, because of the enormous size of the Earth compared to our view, we would expect the visible curvature to still be very slight, even at 100,000 feet, especially with a more narrow Field of View as shown when we crop this image:

Figure 8. Why Field of View (FOV) matters
What we need to do is calculate how much of a visual 'bump' the peak of our horizon should have over where the horizon meets the edge of the frame given our estimated altitude of 100,000', camera horizontal Field of View of 94.4°, and an image 1920 pixels across.
But why is there a visual bump?
Perspective

One thing we know from the study of Perspective is that a circle, when viewed at an angle other than 90°, is going to appear as an ellipse that is more and more 'smushed' the steeper the angle at which we view it. Viewed on edge from the middle it's going to look 'flatter and flatter'.

Figure 9. Effect of perspective on a circle
In our case we are in the middle of this circle and we're viewing the edge at an angle of about 5.6° at this altitude, so it's going to be very squished, but it would still have a definite bump in the middle.
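To put a rough number on that 'squish', here is a quick sketch (a hypothetical illustration, not part of the derivation below): a circle tilted toward edge-on projects to an ellipse whose apparent height shrinks roughly with the sine of the angle between the line of sight and the circle's plane.

```python
import math

# A circle viewed at a shallow tilt angle appears (to first order) as an
# ellipse whose minor axis is squashed by sin(tilt).
def squish_factor(tilt_deg: float) -> float:
    """Ratio of apparent minor axis to true diameter for a tilted circle."""
    return math.sin(math.radians(tilt_deg))

# At the ~5.6° viewing angle from 100,000', the horizon circle is
# compressed to under a tenth of its face-on height:
print(round(squish_factor(5.6), 4))   # ~0.0976
print(round(squish_factor(90.0), 4))  # face-on view: no squish
```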
And to find that we need...

Geometry
NOTE: we are using a spherical approximation of the Earth and ignoring atmospheric refraction in order to keep our calculations manageable, so for greater heights our estimates will diverge slightly from actual observations. Keep in mind there is also a small margin of error because Earth's radius differs slightly by location, though it only varies by ~0.34%. I am also not using the same method as the Lynch paper, as I could not find his equation for X at the time of this writing; I get only slightly different results from Lynch using this approach. So to be clear, this method is only a way to get a very good approximation of what you would expect to see, not a perfect calculation.
To help us get our geometry and nomenclature down I made some diagrams.
We are at point O (the observer) at about h=100000' (~18.9 miles) above an Earth with a radius R of about 3959 miles. When we look out to a point just tangent on the surface of the Earth, our horizon peak at point P will appear "a little higher" in our field of vision than the edges of the horizon do, at points B/C. This is easy to tell because we can draw lines OP and OK and there is clearly an angle between them. Point G is ground-level and point A is the center of our horizon circle.
Because line D is a tangent to the surface of the Earth, we know that it forms a right angle with the radius drawn to the tangent point, and this allows us to easily calculate all the distances and angles involved:

Figure 10. NOTE: 100,000' would only be ~2 pixels high at this scale! As shown, h is ~900 miles up.
Here is our view from above (this one is to scale), showing the horizon circle in green, the edges of the horizon at points B & C, and the distance KP is our horizon circle Sagitta:

Figure 11. Overhead View of Horizon Circle - in GeoGebra
And here are the calculations from my GeoGebra Calculator showing the side view:

Figure 12. Side View of Geometry - in GeoGebra
So in an ideal rectilinear lens we would expect to see approximately 41 pixels between the edges of our horizon circle (points B & C) and our horizon peak (at point P).
This is very good agreement with the image above where we have ~38±2 pixels using the Curvilinear lens which has slightly squished our curvature.
The Math

In this section we will look in detail at the mathematics for the above geometry and we will use this to then estimate what we expect to see in our camera.
We are given only a few values to start with but we can find all the rest using just the Pythagorean theorem using our two right triangles. After that we will step through the projection and calculate how many pixels we should observe which requires us to map our view into a 2D frame like our picture is. If you need help with some of the right triangle formulas here you can use this Right Triangle Calculator which will explain it in more detail (remember to put the 90° angle in the correct position and select the element you want to solve for).
So here are the solutions to the various line segments in our diagram. These are all fairly straightforward right triangle solutions. We mainly need D, H, & Z which we get from simple geometry. I've included some of the other calculations for reference only. Values for a, p, h, and R are given values.

Variable | Equation | Value | Description
---|---|---|---
a | 94.4° | 94.4° / 1.647591 rad | Horizontal Field of View
p | 1920 pixels | 1920 pixels | Horizontal Resolution (pixels)
h | 100000 ft | 18.9394 mi | Observer Height (in miles)
R | 3959 mi | 3959 mi | Earth Radius (approximate)
ß | \(\arcsin(R/(h+R))\) | 84.408° / 1.4732 rad | angle at XOD; (90°-ß) is the angle from level to horizon point P
D | \(\sqrt{h(h+2R)}\) | 387.7123 mi | distance to the horizon (OP)
Z | \((D \cdot R)/(h+R)\) | 385.8664 mi | radius of horizon circle; also AP = \(D \sin(β)\) = \((R \sqrt{h(h+2R)})/(h+R)\)
S | \((h \cdot R)/(h+R)\) | 18.8492 mi | distance to horizon plane from Ground; also AG = \(R-\sqrt{R^2-Z^2}\)
H | \(S+h\) | 37.7886 mi | Observer Height above horizon plane; also OA = \(D \cos(β)\) = \(((h \cdot R)/(h+R))+h\)
S₁ | \(Z (1-\cos(a/2))\) | 123.6928 mi | height of the chord made by BC, given by KP; this is where our Field of View is used to find point K
Horizon Dip | \(\arctan(H/Z)\) | 5.593° | angle from level to point P (the slope of H over Z; also \(90°-ß\))
Chord Dip | \(\arctan(H/(Z-S₁))\) | 8.202° | angle from level to point K (also \(\arctan(H/(Z \cos(a/2)))\), the slope of OK)
Horizon Sagitta Angle | \(|\)Chord Dip − Horizon Dip\(|\) | 2.6086° | True/geometric angle between horizon peak and edges
Sagitta Pixel Height | discussed below | ~40 pixels | Estimate in pixels
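The line-segment solutions above can be recomputed in a few lines of Python (a sketch using the given h, R, and Field of View; variable names follow the text):

```python
import math

# Spherical-Earth horizon geometry, no refraction.
h = 100000 / 5280       # observer height in miles (100,000 ft)
R = 3959.0              # Earth radius in miles (approximate)
a = math.radians(94.4)  # horizontal field of view

D = math.sqrt(h * (h + 2 * R))   # distance to horizon point P (tangent line)
Z = D * R / (h + R)              # radius of the horizon circle (AP)
S = h * R / (h + R)              # drop from ground level to horizon plane (AG)
H = S + h                        # observer height above horizon plane (OA)
S1 = Z * (1 - math.cos(a / 2))   # sagitta KP of the chord BC

horizon_dip = math.degrees(math.atan(H / Z))        # angle down to P
chord_dip = math.degrees(math.atan(H / (Z - S1)))   # angle down to K
sagitta_angle = chord_dip - horizon_dip             # the visual 'bump'

print(round(D, 2), round(Z, 2), round(H, 2))  # ≈ 387.71 385.87 37.79
print(round(horizon_dip, 3), round(sagitta_angle, 4))  # ≈ 5.593 2.6086
```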
So even at 100,000' our horizon is a mere 386 mile radius circle and we're just 19 miles above the center. So we are still only seeing a fraction of the Earth at a fairly slight angle.
And a large portion of that distant horizon is actually compressed by perspective and tilted away from our viewpoint, making it virtually impossible to make out detail there. The area in which you can actually distinguish features extends over a distance considerably smaller than the full horizon distance.
Given this configuration we should then expect to see only a 2.6° difference between the horizon peak and the line formed between the edges of the horizon. That is slight but measurable.
But now we want to know if this matches our image.
3D Projection

Since we want to compare this to our photograph (Sagitta Pixel Height) we also need to know how many pixels high that is and this is where things get a little bit tricky.
We will run through this process twice. The first time looking straight out and the second time we will rotate our view to look slightly down directly toward our horizon peak (P) which is fractionally more accurate for the image we are looking at but also more complex.
For our 3D to 2D projection we can take 3D coordinates [x,y,z] and transform them by dividing x and y by the z value [x/z, y/z] which projects onto the plane z=1. We can show that this preserves straight lines by considering two points [0,100,100],[50,100,100] which trivially transform into [0,1],[0.5,1] - so we can see they remain along the same y value, therefore remain in a straight line - we've simply scaled them down proportional to their distance (which is how perspective works). Geometrically, think about being in the middle of 4 equally spaced parallel lines which run out in front of you, as the distance (z) increases the points at different z values would simply get closer and closer to the center in the 2D projection. It is easy to see how [x/z, y/z] accomplishes this.
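A minimal sketch of that projection, using the example points from the text:

```python
# Perspective projection onto the plane z=1: [x, y, z] -> [x/z, y/z].
def project(point):
    x, y, z = point
    return [x / z, y / z]

# Points along a straight line remain in a straight line after projection;
# they are simply scaled down in proportion to their distance z.
print(project([0, 100, 100]))   # [0.0, 1.0]
print(project([50, 100, 100]))  # [0.5, 1.0]
```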
The second step then is finding the 'edge' of our frame. Since we know our Field of View (FOV) we can find that by locating the x-axis extent of a point lying along our Field of View (conveniently provided by finding the edge of our horizon).
We make the origin be our camera at [0,0,0] and we are looking out parallel to the plane of our horizon circle along the z-axis (towards [0,0,1]) and our axes are oriented as follows:

x-axis = left(-)/right(+)
y-axis = up(-)/down(+)
z-axis = behind(-)/forward(+)

Locating our Points
Our horizon peak (P) is therefore found directly ahead (x=0), down at the horizon circle plane (y=H), and at a distance Z from our camera. This is not the total distance (D) from camera to point P but rather only the z-axis distance (Z).

\(P = [0, H, Z]\)

Which projects into 2D as:

\(P" = [0/Z, H/Z] = [0, H/Z]\)
So far, very simple.
Where is point B? It is to the right at 1/2 the FOV angle (\(a/2\)) on our horizon circle. We can convert an angle and distance (radius of our horizon circle = Z) into coordinates using \(x=Z \sin(a/2), z=Z \cos(a/2)\), and of course our y value is the same as P, in the plane of the horizon circle at y=H. Here is a diagram showing our horizon circle and where the points are located on it.
This places Point B at

\(B = [Z \sin(a/2), H, Z \cos(a/2)]\)

and we again divide x & y by z to project into the 2D plane:

\(B" = [\tan(a/2), H/(Z \cos(a/2))]\)

Now that we have both of our points mapped into 2D coordinates:

\(P" = [0, H/Z]\)
\(B" = [\tan(a/2), H/(Z \cos(a/2))]\)

we just need to find the vertical (y-axis) difference between the 2D y-axis values, which is simply:

\(\Delta y = H/(Z \cos(a/2)) - H/Z\)
Simplified:

\(\Delta y = (H/Z)(1/\cos(a/2) - 1)\)
We can also further reduce the expression H/Z into terms of h and R by substitution:

\(H/Z = \sqrt{h(h+2R)}/R\)

Giving us:

\(\Delta y = (\sqrt{h(h+2R)}/R)(1/\cos(a/2) - 1)\)
Next we need to find the extents of our frame. Since the edge of the horizon is also the edge of our photo, that rightmost point gives us our greatest x extent, and we need to double that (to account for the left side) - we then divide by this quantity to scale our Δy value into a ratio of the whole frame. So 2 times the x value of B" would be:

\(2 \cdot (Z \sin(a/2))/(Z \cos(a/2))\)

The Z cancels out leaving us with:

\(2 \tan(a/2)\)
So we can now divide Δy by this extent value so we get a ratio

\(\Delta y / (2 \tan(a/2))\)

giving us a final equation:

\(\text{ratio} = \frac{(H/Z)(1/\cos(a/2) - 1)}{2 \tan(a/2)}\)

We finally only need to multiply our ratio by the number of horizontal pixels (assuming the pixels are square, or you would need to adjust for the pixel ratio) to give our final value:

\(1920 \times 0.0214 \approx 41\) pixels
Therefore, in an undistorted, rectilinear 1920 pixel horizontal resolution image with a Field of View of 94.4° we should expect to see approximately 41 pixels of curvature from 100,000'.
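The whole straight-ahead calculation collapses into a few lines (a Python sketch using the values from the text):

```python
import math

# Expected horizon 'bump' in pixels for an ideal rectilinear lens,
# camera level (pointing straight out, not tilted down).
h = 100000 / 5280       # observer height, miles
R = 3959.0              # Earth radius, miles
a = math.radians(94.4)  # horizontal field of view
p = 1920                # horizontal resolution, pixels

H_over_Z = math.sqrt(h * (h + 2 * R)) / R     # slope down to horizon peak P
dy = H_over_Z * (1 / math.cos(a / 2) - 1)     # vertical gap between P" and B"
ratio = dy / (2 * math.tan(a / 2))            # as a fraction of frame width
pixels = p * ratio
print(round(pixels, 1))  # ≈ 41.1
```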
Already very close, but we're looking right at the Horizon rather than straight out. Does that change our calculation? Well, yes, but only a little bit, because in tilting the camera we've slightly changed the point on the horizon which hits the edge of our frame.
Now let's rotate our camera so it points right at the horizon peak (P), as it does in our example image. To do this we will need to rotate our coordinates by our Horizon Dip angle along our x-axis (x coordinates remain unchanged but y and z should rotate) so that point P becomes directly ahead [0, 0]. Then we can repeat the calculations above.
The rotational matrix multiplication for x-axis rotation is:

\[\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos u & -\sin u \\ 0 & \sin u & \cos u \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}\]

Given an angle u the equations to transform x, y, and z coordinates are:

\(x' = x\)
\(y' = y \cos(u) - z \sin(u)\)
\(z' = y \sin(u) + z \cos(u)\)
Since we are 'looking down' we need to rotate 'up' so our angle will be positive and equal to Horizon Dip to bring P right into the center of our frame at \([0,0]\).
Our point P starts at \([0, H, Z]\) and we need to rotate point P to be straight ahead, so our u angle is the Horizon Dip, \(u = \arctan(H/Z)\):

\(P' = [0, H \cos(u) - Z \sin(u), H \sin(u) + Z \cos(u)] = [0, 0, D]\)

and project into 2D:

\(P" = [0/D, 0/D] = [0, 0]\)
That was easy because the rotation exactly cancels out the previous \(H/Z\) slope. But it's a good check that our rotation was in the desired direction.
And next we need to rotate point B, we start out in the same place as before:

\(B = [Z \sin(a/2), H, Z \cos(a/2)]\)
And we rotate by matrix multiplication; for our small angle the cos(u) terms carry the majority of the value, with only a tiny bit shifted into the sin(u) terms:

\(B' = [Z \sin(a/2), H \cos(u) - Z \cos(a/2) \sin(u), H \sin(u) + Z \cos(a/2) \cos(u)]\)

and project into 2D by dividing x and y by z, as before:

\(B" = \left[\frac{Z \sin(a/2)}{H \sin(u) + Z \cos(a/2) \cos(u)}, \frac{H \cos(u) - Z \cos(a/2) \sin(u)}{H \sin(u) + Z \cos(a/2) \cos(u)}\right]\)
This is why you need special video cards to play games at high frame rates... and we've only rotated one axis.
Since point P's y value is 0 our Δy is therefore just our y value from B":

\(\Delta y = \frac{H \cos(u) - Z \cos(a/2) \sin(u)}{H \sin(u) + Z \cos(a/2) \cos(u)}\)
Divide Δy by 2 times B" rotated x-extent (x/z) to get our ratio (I multiply by the inverse here):

\(\text{ratio} = \frac{H \cos(u) - Z \cos(a/2) \sin(u)}{H \sin(u) + Z \cos(a/2) \cos(u)} \cdot \frac{H \sin(u) + Z \cos(a/2) \cos(u)}{2 Z \sin(a/2)}\)

The first denominator and the second numerator cancel out leaving:

\(\text{ratio} = \frac{H \cos(u) - Z \cos(a/2) \sin(u)}{2 Z \sin(a/2)}\)

and multiply by our horizontal pixel count to scale back to pixels, giving our formula:

\(\text{Sagitta Pixel Height} = p \cdot \frac{H \cos(u) - Z \cos(a/2) \sin(u)}{2 Z \sin(a/2)}\)
In this case we get 40.9 pixels.
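The tilted-camera version can be sketched the same way (Python; the rotation follows the x-axis rotation described above):

```python
import math

# Same calculation, but with the camera tilted down by the horizon dip
# so that P sits at the exact center of the frame.
h = 100000 / 5280       # observer height, miles
R = 3959.0              # Earth radius, miles
a = math.radians(94.4)  # horizontal field of view
p = 1920                # horizontal resolution, pixels

D = math.sqrt(h * (h + 2 * R))
Z = D * R / (h + R)          # radius of the horizon circle
H = h * R / (h + R) + h      # observer height above the horizon plane
u = math.atan(H / Z)         # horizon dip; rotate 'up' by this angle

# Point B before rotation, then rotated about the x-axis.
bx, by, bz = Z * math.sin(a / 2), H, Z * math.cos(a / 2)
by_r = by * math.cos(u) - bz * math.sin(u)
bz_r = by * math.sin(u) + bz * math.cos(u)

# Dividing the projected Δy by the projected frame width cancels the
# rotated z values, leaving y'/(2x).
ratio = by_r / (2 * bx)
print(round(p * ratio, 1))   # ≈ 40.9
```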
With the details of the lens "fish-eye" or curvilinear properties we could further transform our image to match more exactly. Since the horizon is near the center of the lens, the distortion should be slight and working against the visible curvature, so I'll leave the lens correction as an exercise for the reader. The simplest method would be to process the image using the Lightroom/Photoshop plugins that can do the lens correction automatically and then just measure the pixels.
Because the horizon is at or below the center line, which would flatten the curvature not exaggerate it, we can easily be certain that this footage shows that the horizon is convexly curved. We have also calculated what we would expect to see under ideal conditions and find a good match with our observation once we account for the actual lens distortions.
We can see that our lens, altitude, and FOV are all very important to figuring out what we expect to see and knowing what to expect is critical in evaluating what we actually see - especially through the eye of a camera lens. And that just because the camera image has some distortion it doesn't mean the photo tells us nothing about the scene. Indeed, this image very clearly shows the curvature despite the distortion working against it.
If you want to measure the curvature you have to be very careful and pretty high up.
And while 35,000' sounds pretty high up, here is what that looks like:
One approach is to apply lens distortion correction to the image. There are a number of tools for this. The GNU Image Manipulation Program (GIMP) has lens distortion/correction capabilities. There is also a (limited) online lens correction tool in photo-kako. There is a YouTube video discussing how to remove HERO3 Lens Distortion. There is a dedicated tool called PTLens. Or you can use the ImageMagick toolsuite; see Correcting Lens Distortions in Digital Photographs.
Here is our image with correction applied to 'defish' it, and you can clearly see the inverse effect in the 'pincushion' appearance of the previously straight lines (the progress bar and text). But as you can see, our curvature remains clearly pronounced, because there is very little distortion in the center of the lens and because, with all of our horizon below center, the evil 'fish-eye' was actually slightly flattening the real curvature.
There it is folks -- the curvature of the horizon at ~100,000'
There is also a good YouTube video by Frank de Brouwer where he does a similar analysis on High-Altitude GoPro camera footage:
Of course, this only demonstrates that our horizon is a circle of a radius entirely consistent with the oblate spheroid model of the Earth. What we have done here is calculate how big we would expect that circle to be given our altitude above a spheroid and how that should appear in a camera lens given some field of view.
Measurements and observations can only lend weight to any model, they never "prove" it, as Flat Earthers are so fond of claiming. Rarely is this more clear than the observation that just because a photo makes it look flat (or curved), doesn't mean it is actually flat (or curved). It just means the curvature is hard to measure, especially at lower altitudes (in this case, due to the immense size of the Earth) and in the face of lens distortion. But we have taken these into account in this analysis so we can be fairly sure we are seeing actual curvature of the horizon.
But I see no way that this view would be consistent with a Flat Earth model, there is no known physics that would explain the observed size of the horizon on a Flat Earth. We are well above the majority of the atmosphere here so we would be able to see MUCH further if the plane below us were flat.
Thanks to doctorbuttons for pointing out the disagreement with Lynch in my original calculator and to Mick for his contribution of the 3D to 2D transforms.
[¹] Lynch, David K. "Visually Discerning the Curvature of the Earth." Applied Optics 47.34 (2008): n. pag. Web. 09 Sept. 2016.
Another lower-altitude shot:
and at about 13km up we can start to see definite curvature; see further analysis on where this is and why that isn't some kind of "Hotspot" due to a local sun.
Not So Fast
Almost all the footage from high-altitude balloons shows the apparent curvature fluctuating from
...concave (when the horizon line is below center of the lens)
Figure 1. Horizon below lens center bows the horizon down, would flatten any actual curvature |
...to convex (when the horizon line is above center)
Figure 2. Horizon above lens center bows horizon up |
...to flat?
Figure 3. Horizon through lens center is most accurate |
This effect is called 'Lens Barrel' and is almost unavoidable when you are using the wider-angle lenses common for this purpose, especially on cameras such as the popular GoPro series. Lenses that try to compensate for this are called Rectilinear lenses but it's more difficult, and more expensive, to make a wide-angle Rectilinear lens.
I marked a big X on the last frame to help us find the center of the image (the other two are obviously far from center). What we find is that the closer the horizon line is to the center of the lens the less distortion there is.
All three of these images are from the same video, the same camera, and the same lens. They are all distorted exactly the same way. This is just to demonstrate that the area in the center of the lens has less distortion and to show clearly the direction of that distortion.
So it should be clear that you cannot just pick any frame from a video and declare the Earth to be flat or curved. You have to analyze what you see so you can draw an accurate conclusion.
If we were looking through such a lens straight at a square grid of lines the resulting image would look something like this:
Figure 4. Curvilinear distortion, credit Wikimedia Commons |
The middle portions (vertically and horizontally) are expanded outwards from the center so our straight lines appear to be bowed the further from the center you get, exactly as we saw in the first frame when the horizon was below center and it was bowed downwards.
This means that when the horizon is at or below the center line and you can still see positive curvature, then you can be sure you are seeing actual curvature.
And, with the horizon below the center line, we would also expect the measured "apparent curvature" to be slightly less than the actual curvature due to this lens distortion. However, the edges are less distorted than the middle portion so under normal conditions the distortion should be small.
We really only care about two points, the peak of the horizon and where the horizon meets the edge of the frame.
We can also use the properties of the lens to reverse the distortion introduced, as I've done here with the first two images.
Figure 5. Frame from Robert Orcutt footage, corrected for lens distortion |
Figure 6. Frame from Robert Orcutt footage, corrected for lens distortion |
How to proceed?
One thing we can do is try to find a frame where the peak of the horizon goes through the center of the lens and is fairly level, to minimize distortion. And since the downward curved horizon would entirely fall in the lower half of the frame, the distortion here is actually making the curvature appear flatter rather than exaggerating it. Once you understand that you are already seeing less curvature than is actually present in such an image, you really shouldn't need any further analysis to know that this shows positive curvature.
At the lower altitudes we all know that it will at least appear very flat and be difficult to measure¹, as it should be given the size of the Earth. The visual curvature at this point would be mere fractions of a degree so we're going up to around 100,000 feet to get a better view.
For this detailed analysis I'm using the raw RotaFlight Balloon Footage from around 3:22:50 because they documented the camera details, the horizon is pretty sharp, and this footage is available at 1080p (an earlier version of this page used the Robert Orcutt footage but it was 720p, I didn't know which camera or settings he used, and the horizon wasn't very sharp in that footage). The camera used in the RotoFlight footage was a GoPro Hero3 White Edition in 1080p mode. From the manual I know that 1080p on this camera only supports Medium Field of View which is ~94.4°. The balloon pops almost 4 minutes later (at 3:26:29) so we should be up around 100,000' in this frame.
I used a Chrome plugin called 'Frame by Frame for YouTube™' to find a frame where the horizon is about one pixel below dead center and it is very level. I then captured that frame at 1920x1080:
Figure 7. RotaFlight Raw Weather Balloon Footage, ~3:22:50 |
I marked the center of the frame with a single yellow pixel, so you can see the horizon is either at or maybe 1 pixel below that mark. There are 38 pixels between the two red lines (both of which are 1 pixel high and absolutely straight across the frame) which mark the extents of the peak of our horizon and where the horizon hits the edge of the frame (±2 pixels as the horizon isn't perfectly sharp due to the thick atmosphere along the edge).
And again, since the limbs of the Earth shown here are below the center of the lens the distortion is, if anything, reducing the apparent curvature into a slightly flatter curve.
If you wanted a picture you could look at and verify the positive curvature of the Earth, this is that picture.
[You can also use the FEI horizon calculator to do the calculations shown below]
How close is that to what we EXPECT to see?
Common sense should tell you that, because of the enormous size of the Earth compared to our view, we would expect the visible curvature to still be very slight, even at 100,000 feet, especially with a more narrow Field of View as shown when we crop this image:
Figure 8. Why Field of View (FOV) matters |
What we need to do is calculate how much of a visual 'bump' the peak of our horizon should have over where the horizon meets the edge of the frame given our estimated altitude of 100,000', camera horizontal Field of View of 94.4°, and an image 1920 pixels across.
But why is there a visual bump?
Perspective
One thing we know from the study of Perspective is that a circle, when viewed at angle other than 90°, is going to appear as an ellipse that is more and more 'smushed' the steeper the angle at which we view it. Viewed on edge from the middle it's going to look 'flatter and flatter'.
Figure 9. Effect of perspective on circle (image credit) |
In our case we are in the middle of this circle and we're viewing the edge at an angle of about 5.6° at this altitude, so it's going to be very squished, but it would still have a definite bump in the middle.
And to find that we need...
Geometry
NOTE: we are using a spherical approximation of the Earth and ignoring atmospheric refraction in order to make our calculations manageable. However, this means that for larger heights our estimates will diverge slightly from actual observations. Not by very much but keep in mind there is a small margin of error here as Earth's Radius will be slightly different by location, but it only varies by ~0.34%. I am also not using the same method as the Lynch paper as I could not find his equation for X at the time of this writing, I get only slightly different results from Lynch using this approach. So to be clear, this method here is only a way to get a Very Good approximation of what you would expect to see, not a perfect calculation.
To help us get our geometry and nomenclature down I made some diagrams.
We are at point O (the observer) at about h=100000' (~18.9 miles) above an Earth with a radius R of about 3959 miles. When we look out to a point just tangent on the surface of the Earth, our horizon peak at point P will appear "a little higher" in our field of vision than the edges of the horizon do, at points B/C. This is easy to tell because we can draw a line from point OP and OK and there is clearly an angle between them. Point G is ground-level and point A is the center of our horizon circle.
Because line D is a tangent to the surface of the Earth, we know that it forms a right angle with the center of our sphere, this allows us to easily calculate all the distances and angles involved:
Figure 10. NOTE: 100,000' would only be ~2 pixels high at this scale! As shown, h is ~900 miles up. |
Here is our view from above (this one is to scale), showing the horizon circle in green, the edges of the horizon at points B & C, and the distance KP is our horizon circle Sagitta:
Figure 11. Overhead View of Horizon Circle - in GeoGebra |
And here are the calculations from my GeoGebra Calculator showing the side view:
Figure 12. Side View of Geometry - in GeoGebra |
So in an ideal rectilinear lens we would expect to see approximately 41 pixels between the edges of our horizon circle (points B & C) and our horizon peak (at point P).
This is very good agreement with the image above where we have ~38±2 pixels using the Curvilinear lens which has slightly squished our curvature.
The Math
In this section we will look in detail at the mathematics for the above geometry and we will use this to then estimate what we expect to see in our camera.
We are given only a few values to start with but we can find all the rest using just the Pythagorean theorem using our two right triangles. After that we will step through the projection and calculate how many pixels we should observe which requires us to map our view into a 2D frame like our picture is. If you need help with some of the right triangle formulas here you can use this Right Triangle Calculator which will explain it in more detail (remember to put the 90° angle in the correct position and select the element you want to solve for).
So here are the solutions to the various line segments in our diagram. These are all fairly straight forward right triangle solutions. We mainly need D, H, & Z which we get from simple geometry. I've included some of the other calculations for reference only. Values for a, p, h, and R are given values.
Variable | Equation | Value | Description |
---|---|---|---|
a | 94.4° | 94.4° / 1.647591 rad | Horizontal Field of View |
p | 1920 pixels | 1920 pixels | Horizontal Resolution (pixels) |
h | 100000 ft | 18.9394 mi | Observer Height (in miles) |
R | 3959 mi | 3959 mi | Earth Radius (approximate) |
ß | \(\arcsin(R/(h+R))\) | 84.408° / 1.4732 rad | angle at XOD (90°-ß) is angle from level to horizon point P |
D | \(\sqrt{h(h+2R)}\) | 387.7123 mi | distance to the horizon (OP) |
Z | \((D \cdot R)/(h+R)\) | 385.8664 mi | radius of horizon circle also AP = \(D sin(β)\) = \(( R \sqrt{h(h+2R)} )/(h+R)\) |
S | \((h \cdot R)/(h+R)\) | 18.8492 mi | distance to horizon plane from Ground also AG = \(R-\sqrt{R^2-Z^2}\)) |
H | \(S+h\) | 37.7886 mi | Observer Height above horizon plane also OA = \(D \cos(β)\); (\(((h \cdot R)/(h+R))+h\)) |
S₁ | \(Z (1-\cos(a/2))\) | 123.6928 mi | height of the chord made by BC is given by KP, this is where our Field of View is used to find point K |
Horizon Dip | \(arctan(H/Z)\) | 5.593° | angle from level to point P (this is the angle for OP, from slope of H over Z, also \(90°-ß\)) |
Chord Dip | \(\arctan(H/(Z-S₁))\) | 8.202° | angle from level to point K (also \(\arctan(H/(Z \cos(a/2))\)), again, angle for the slope of OK) |
Horizon Sagitta Angle | |Chord Dip-Horizon Dip| | 2.6086° | True/geometric angle between horizon peak and edges |
Sagitta Pixel Height | *discussed below | ~40 pixels | Estimate in pixels |
So even at 100,000' our horizon is a mere 386 mile radius circle and we're just 19 miles above the center. So we are still only seeing a fraction of the Earth at a fairly slight angle.
And a large portion of that distant horizon is actually compressed by perspective and tilted to our viewpoint, making it virtually impossible to see. The majority of the visible area where you can make anything out is considerably smaller than the full horizon distance.
Given this configuration we should then expect to see only a 2.6° difference between the horizon peak and the line formed between the edges of the horizon. That is slight but measurable.
But now we want to know if this matches our image.
3D Projection
Since we want to compare this to our photograph (Sagitta Pixel Height) we also need to know how many pixels high that is and this is where things get a little bit tricky.
We will run through this process twice. The first time looking straight out and the second time we will rotate our view to look slightly down directly toward our horizon peak (P) which is fractionally more accurate for the image we are looking at but also more complex.
For our 3D to 2D projection we can take 3D coordinates [x,y,z] and transform them by dividing x and y by the z value [x/z, y/z] which projects onto the plane z=1. We can show that this preserves straight lines by considering two points [0,100,100],[50,100,100] which trivially transform into [0,1],[0.5,1] - so we can see they remain along the same y value, therefore remain in a straight line - we've simply scaled them down proportional to their distance (which is how perspective works). Geometrically, think about being in the middle of 4 equally spaced parallel lines which run out in front of you, as the distance (z) increases the points at different z values would simply get closer and closer to the center in the 2D projection. It is easy to see how [x/z, y/z] accomplishes this.
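The projection step is tiny in code; here is a minimal Python sketch of the same example:

```python
# A minimal sketch of the [x/z, y/z] perspective projection described above.
def project(point):
    """Project a 3D point [x, y, z] onto the plane z=1."""
    x, y, z = point
    return [x / z, y / z]

# The two example points from the text stay on the same horizontal line:
print(project([0, 100, 100]))   # [0.0, 1.0]
print(project([50, 100, 100]))  # [0.5, 1.0]
```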
The second step then is finding the 'edge' of our frame. Since we know our Field of View (FOV) we can find that by locating the x-axis extent of a point lying along our Field of View (conveniently provided by finding the edge of our horizon).
We make the origin be our camera at [0,0,0] and we are looking out parallel to the plane of our horizon circle along the z-axis (towards [0,0,1]) and our axes are oriented as follows:
x-axis = left(-)/right(+)
y-axis = up(-)/down(+)
z-axis = behind(-)/forward(+)
Locating our Points
Our horizon peak (P) is therefore found directly ahead (x=0), down at the horizon circle plane (y=H), and at a distance Z from our camera. This is not the total distance (D) from camera to point P but rather only the z-axis distance (Z).
\(P = [0, H, Z]\)
Which projects into 2D as:
\(P" = [0/Z, H/Z]\)
\(P" = [0, H/Z]\)
So far, very simple.
Where is point B? It is to the right at 1/2 the FOV angle (\(a/2\)) on our horizon circle. We can convert an angle and distance (radius of our horizon circle = Z) into coordinates using \(x=Z \sin(a/2), z=Z \cos(a/2)\), and of course our y value is the same as P, in the plane of the horizon circle at y=H. Here is a diagram showing our horizon circle and where the points are located on it.
Figure 13. Overhead View of Horizon Circle with Equations for Dimensions
This places Point B at
\(B = [Z \sin(a/2), H, Z \cos(a/2)]\)
and we again divide x & y by z to project into the 2D plane:
\(B" = [\color{Orange}{(Z \sin(a/2))/(Z \cos(a/2))}, \color{ForestGreen}{H/(Z \cos(a/2))}]\)
Finding the y-axis delta
Now that we have both of our points mapped into 2D coordinates:
\(P" = [0, \color{Blue}{H/Z}]\)
\(B" = [\color{Orange}{(Z \sin(a/2))/(Z \cos(a/2))}, \color{ForestGreen}{H/(Z \cos(a/2))}]\)
we just need to find the vertical (y-axis) difference between the 2D y-axis values, which is simply:
\(Δy = |\color{Orange}{y₁} - \color{Blue}{y₀}|\)
\(Δy = [\color{Orange}{H/(Z \cos(a/2))}] - [\color{Blue}{H/Z}]\)
Simplified:
\(Δy = H/Z (1/\cos(a/2)-1)\)
We can also further reduce the expression H/Z into terms of h and R by substitution:
\(H/Z = (S+h) / ((D \cdot R)/(h+R))\)
\(H/Z = (((h \cdot R)/(h+R))+h) / ((\sqrt{h(h+2R)} \cdot R)/(h+R))\) [wolfram|alpha]
\(H/Z = \sqrt{h(h+2R)} / R\)
\(H/Z = D/R\)
Giving us:
\(Δy = D/R \cdot (1/\cos(a/2)-1)\)
Finding the Frame
Next we need to find the extents of our frame. Since the edge of the horizon is also the edge of our photo, the rightmost point gives us our greatest x extent, and we need to double that (to account for the left side). We then divide by this quantity to scale our Δy value into a ratio of the whole frame. So 2 times the x value of B" would be:
\(\require{cancel}\Large{2 {\frac{\cancel{Z} \sin(a/2)}{\cancel{Z} \cos(a/2)}}}\)
The Z cancels out leaving us with:
\(\Large{2 \frac{\sin(a/2)}{\cos(a/2)}} \normalsize{= 2 \tan(a/2)}\)
We can now divide Δy by this extent value to get a ratio:
\( \Large{\frac{Δy}{2 \tan(a/2)} = \frac{D}{R} \left(\frac{1/\cos(a/2)-1}{2 \tan(a/2)}\right) = \frac{D}{R} \left(\frac{1}{2} \tan(a/4)\right)} \)
giving us a final equation:
\(\Large{\frac{D}{R} \cdot \frac{\tan(a/4)}{2}}\)
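The simplification of the bracketed factor follows from the half-angle identity; with \(θ = a/2\):

\( \Large{\frac{1/\cos(θ) - 1}{2 \tan(θ)} = \frac{1 - \cos(θ)}{2 \sin(θ)} = \frac{1}{2} \tan(θ/2)} \)

and substituting \(θ = a/2\) gives the \(\tan(a/4)/2\) factor.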
Find the Pixels
Finally, we only need to multiply our ratio by the number of horizontal pixels (assuming the pixels are square; otherwise you would need to adjust for the pixel aspect ratio) to give our final value:
Sagitta Pixel Height = \(\Large{p \cdot {\frac{D}{R} \cdot \frac{\tan(a/4)}{2}}}\) = 41.07
Therefore, in an undistorted, rectilinear 1920 pixel horizontal resolution image with a Field of View of 94.4° we should expect to see approximately 41 pixels of curvature from 100,000'.
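As a numeric check, the straight-ahead formula can be evaluated directly; a minimal Python sketch, again assuming R = 3959 miles:

```python
import math

R = 3959.0              # assumed mean Earth radius, in miles
h = 100_000 / 5280      # 100,000 ft converted to miles
p = 1920                # horizontal pixel count
a = math.radians(94.4)  # horizontal Field of View

D = math.sqrt(h * (h + 2 * R))  # distance to the horizon
sagitta_px = p * (D / R) * math.tan(a / 4) / 2
print(f"{sagitta_px:.2f} pixels")  # approximately 41 pixels
```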
Already very close, but we're looking right at the Horizon rather than straight out. Does that change our calculation? Well, yes, but only a little bit, because in tilting the camera we've slightly changed the point on the horizon which hits the edge of our frame.
Rotated Coordinates
Now let's rotate our camera so it points right at the horizon peak (P), as it does in our example image. To do this we will need to rotate our coordinates by our Horizon Dip angle along our x-axis (x coordinates remain unchanged but y and z should rotate) so that point P becomes directly ahead [0, 0]. Then we can repeat the calculations above.
The rotational matrix multiplication for x-axis rotation is:
\( \begin{bmatrix} 1 & 0 & 0 \\[0.3em] 0 & \cos(u) & -\sin(u) \\[0.3em] 0 & \sin(u) & \cos(u) \end{bmatrix} \begin{bmatrix} x \\[0.3em] y \\[0.3em] z \end{bmatrix} \)
Given an angle u the equations to transform x, y, and z coordinates are:
\(x' = x\)
\(y' = y \cos(u) - z \sin(u)\)
\(z' = y \sin(u) + z \cos(u)\)
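These three transform equations can be sketched as a small Python helper (with a 90° rotation as a sanity check):

```python
import math

# A minimal sketch of the x-axis rotation equations above.
def rotate_x(point, u):
    """Rotate a 3D point [x, y, z] about the x-axis by angle u (radians)."""
    x, y, z = point
    return [x,
            y * math.cos(u) - z * math.sin(u),
            y * math.sin(u) + z * math.cos(u)]

# Sanity check: a 90° rotation carries the y-axis onto the z-axis.
print(rotate_x([0, 1, 0], math.pi / 2))
```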
Since we are 'looking down' we need to rotate 'up' so our angle will be positive and equal to Horizon Dip to bring P right into the center of our frame at \([0,0]\).
Our point P starts at \([0, H, Z]\) and we need to rotate point P to be straight ahead, so our u angle:
\(u = \arctan(H/Z) = \arctan(D/R)\)
\(x' = 0\)
\(y' = H \cos(u) - Z \sin(u) = 0\)
\(z' = H \sin(u) + Z \cos(u) = D\)
\(P' = [0, 0, D]\)
and project into 2D:
\(P" = [0/D, 0/D]\)
\(P" = [0, 0]\)
That was easy because the rotation exactly cancels out the previous \(H/Z\) slope. But it's a good check that our rotation was in the desired direction.
Next we need to rotate point B; we start out in the same place as before:
\(B = [Z \sin(a/2), H, Z \cos(a/2)]\)
And we rotate by matrix multiplication; for our small angle the cos(u) term will carry the majority of the value, and we shift a tiny bit into sin(u).
\(x' = Z \sin(a/2)\)
\(y' = H \cos(u) - (Z \cos(a/2)) \sin(u)\)
\(z' = H \sin(u) + (Z \cos(a/2)) \cos(u)\)
\(B' = \begin{bmatrix} Z \sin(a/2) \\ H \cos(u) - Z \cos(a/2) \sin(u) \\ H \sin(u) + Z \cos(a/2) \cos(u) \end{bmatrix} \)
and project into 2D by dividing x and y by z, as before:
\( B" = \begin{bmatrix} (Z \sin(a/2))/(H \sin(u) + Z \cos(a/2) \cos(u)) \\ (H \cos(u) - Z \cos(a/2) \sin(u))/(H \sin(u) + Z \cos(a/2) \cos(u)) \end{bmatrix} \)
This is why you need special video cards to play games at high frame rates... and we've only rotated one axis.
Since point P's y value is 0 our Δy is therefore just our y value from B":
\( \Large{Δy = \frac{H \cos(u) - (Z \cos(a/2)) \sin(u)}{H \sin(u) + (Z \cos(a/2)) \cos(u)}} \)
Divide Δy by 2 times B" rotated x-extent (x/z) to get our ratio (I multiply by the inverse here):
\( \Large{ \frac{H \cos(u) - Z \cos(a/2) \sin(u)}{\cancel{H \sin(u) + Z \cos(a/2) \cos(u)}} \cdot \frac{\cancel{H \sin(u) + Z \cos(a/2) \cos(u)}}{2 \cdot Z \sin(a/2)} } \)
The first denominator and the second numerator cancel out leaving:
\( \Large{\frac{H \cos(u) \; - \; (Z \cos(a/2)) \sin(u)}{2 \cdot Z \sin(a/2)}} \)
and multiply by our horizontal pixel count to scale back to pixels, giving our formula:
Sagitta Pixel Height = \(\Large{p \cdot \frac{H \cos(u) \; - \; (Z \cos(a/2)) \sin(u)}{2 \cdot Z \sin(a/2)}}\)
In this case we get 40.9 pixels.
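Putting the rotated case together numerically, with the same assumed values as before (R = 3959 mi, h = 100,000', 94.4° FOV, 1920 horizontal pixels), a minimal Python sketch:

```python
import math

R = 3959.0              # assumed mean Earth radius, in miles
h = 100_000 / 5280      # 100,000 ft converted to miles
p = 1920                # horizontal pixel count
a = math.radians(94.4)  # horizontal Field of View

D = math.sqrt(h * (h + 2 * R))
Z = D * R / (h + R)           # radius of the horizon circle
H = h * R / (h + R) + h       # observer height above the horizon plane (S + h)
u = math.atan(H / Z)          # Horizon Dip: the camera tilts down by this angle

# Rotate point B about the x-axis by u, then project by dividing by z':
Bx = Z * math.sin(a / 2)
By = H * math.cos(u) - Z * math.cos(a / 2) * math.sin(u)
Bz = H * math.sin(u) + Z * math.cos(a / 2) * math.cos(u)

sagitta_px = p * (By / Bz) / (2 * Bx / Bz)  # the Bz factors cancel
print(f"{sagitta_px:.1f} pixels")
```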
With the details of the lens "fish-eye" or curvilinear properties we could further transform our image to match more exactly. Since the horizon is near the center of the lens, the distortion should be slight (and working against the visible curvature), so I'll leave the lens correction as an exercise for the reader. The simplest method would be to process the image using the Lightroom/Photoshop plugins that can do the lens correction automatically and then just measure the pixels.
Conclusion
Because the horizon is at or below the center line, which would flatten the curvature not exaggerate it, we can easily be certain that this footage shows that the horizon is convexly curved. We have also calculated what we would expect to see under ideal conditions and find a good match with our observation once we account for the actual lens distortions.
We can see that our lens, altitude, and FOV are all very important to figuring out what we expect to see and knowing what to expect is critical in evaluating what we actually see - especially through the eye of a camera lens. And that just because the camera image has some distortion it doesn't mean the photo tells us nothing about the scene. Indeed, this image very clearly shows the curvature despite the distortion working against it.
If you want to measure the curvature you have to be very careful and pretty high up.
And while 35,000' sounds pretty high up, here is what that looks like:
Figure 14. Calculations from 35,000'
One approach is to apply lens distortion correction to the image. There are a number of tools for this. The GNU Image Manipulation Program (GIMP) has lens distortion/correction capabilities. There is also a (limited) online lens correction tool in photo-kako. There is a YouTube video discussing how to remove HERO3 Lens Distortion. There is a dedicated tool called PTLens. Or you can use the ImageMagick toolsuite, see Correcting Lens Distortions in Digital Photographs.
Here is our image with correction applied to 'defish' it, and you can clearly see the inverse effect it has in the 'pincushion' appearance of the previously straight lines (the progress bar and text). But as you can see, our curvature remains clearly pronounced, because there is very little distortion in the center of the lens and because, with all of our horizon placed below center, the evil 'fish-eye' was actually slightly flattening out the real curvature.
Figure 15. Our Final Image, corrected for lens distortion
There it is folks -- the curvature of the horizon at ~100,000'
There is also a good YouTube video by Frank de Brouwer where he does a similar analysis on High-Altitude GoPro camera footage:
The Bad News/Good News
Of course, this only demonstrates that our horizon is a circle of a radius entirely consistent with the oblate spheroid model of the Earth. What we have done here is calculate how big we would expect that circle to be given our altitude above a spheroid and how that should appear in a camera lens given some field of view.
Measurements and observations can only lend weight to any model, they never "prove" it, as Flat Earthers are so fond of claiming. Rarely is this more clear than the observation that just because a photo makes it look flat (or curved), doesn't mean it is actually flat (or curved). It just means the curvature is hard to measure, especially at lower altitudes (in this case, due to the immense size of the Earth) and in the face of lens distortion. But we have taken these into account in this analysis so we can be fairly sure we are seeing actual curvature of the horizon.
But I see no way that this view would be consistent with a Flat Earth model, there is no known physics that would explain the observed size of the horizon on a Flat Earth. We are well above the majority of the atmosphere here so we would be able to see MUCH further if the plane below us were flat.
Thanks to doctorbuttons for pointing out the disagreement with Lynch in my original calculator and to Mick for his contribution of the 3D to 2D transforms.
[¹] Lynch, David K. "Visually Discerning the Curvature of the Earth." Applied Optics 47.34 (2008): n. pag. Web. 09 Sept. 2016.
Follow-up showing a couple of frames from lower altitudes
Jeff Rhoades said I didn't "test whether there was curvature of the horizon at a low altitude when the camera is at dead center". I actually did show a shot earlier in the article which does show that -- but it is from different footage and it isn't nicely horizontal, so it is a fair request.
And by comparing these images to the higher altitude images we can see the curvature is profoundly increasing. Good call Jeff and thank you for the feedback!
Since the Earth is very large, it is EXPECTED that the amount of horizon sagitta we can see at lower altitudes is almost unmeasurable. The horizon is too hazy, the camera resolution is too low, and the field of view is too small. These are not insurmountable issues however. If you DO want to measure the horizon sagitta at lower altitudes some of the things you can do are: put a giant straight edge RIGHT in the center of the camera frame so that any lens distortion is also applied to the straight edge, get up to 140 degrees Field of View, shoot in infrared to minimize haze, and get an 8K camera.
Another lower-altitude shot:
and at about 13km up we can start to see definite curvature; see further analysis on where this is and why that isn't some kind of "Hotspot" due to a local sun.