Earlier this month, I promised to introduce you to a few of our newer images that had been overlooked on the blog. The oldest of those is “Wild Stallion”, which was added to our regular collection in January 2017, just in time for a festival in Wellington. Wellington is known for its polo and its equestrian community.
But this isn’t the first time I’ve mentioned this image in our blog. The photograph was taken eight Januarys earlier at Paynes Prairie Preserve State Park near Gainesville. And in 2017, I chose it to demonstrate how to use vanishing points to adjust an object’s size as you move it around in your image (see Use Vanishing Point To Resize Animals You Move Around In Post-Processing). Nancy liked the results and decided to add it to our collection.
You can learn more about this image on its webpage (Wild Stallion).
Our friend, Ibis Hillencamp (whom you may remember for the advice she gave on our FAQ page about becoming a better photographer) thought people might need an explanation of a photograph’s aspect ratio and why you need to consider it when enlarging or cropping your images.
When you enlarge a picture, unless you want distortion, you have to increase the width by the exact same ratio as the height. For example, a 4″ by 6″ image might be enlarged into an 8″ by 12″ image, or a 10″ by 15″, and so forth. For each of these examples, the aspect ratio, which is the height divided by the width (or vice versa, as long as you are consistent), remains the same (here, 4/6 = 2/3, or about 0.667). Mathematicians would call the three rectangles in this example, and all others with the same aspect ratio, “similar”. When placed at the right distances, you would not be able to tell them apart. SLR cameras, starting with the analog 35mm and continuing to the digital versions, have an aspect ratio of 2:3 and can make prints the size of any of the above examples with no problem. Other cameras have different aspect ratios. If you haven’t already done so, learn your camera’s aspect ratio.
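To make the arithmetic concrete, here is a tiny Python sketch (purely illustrative, not part of any workflow we use) that checks that the three print sizes above all share the same aspect ratio:

```python
def aspect_ratio(width, height):
    """Return the aspect ratio as height divided by width."""
    return height / width

# (width, height) in inches for the three example prints
prints = [(6, 4), (12, 8), (15, 10)]
ratios = [aspect_ratio(w, h) for w, h in prints]

# All three are "similar" rectangles: the ratio never changes.
assert all(abs(r - 2 / 3) < 1e-9 for r in ratios)
print(ratios)
```

Run it with your own camera’s pixel dimensions to find your native aspect ratio.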
And Now The Bad News
The problem starts when you try to put your picture in a standard-sized frame. Standard frames routinely have a different aspect ratio. If you want an 8″ by 10″ print, for example, you will be changing the aspect ratio to 0.8. An 11″ by 14″ print has an aspect ratio of about 0.786. The simple answer would be to crop your original image, which means you are going to lose part of the picture. That could be a problem. The other option is to fill in any missing parts. That is almost always a problem. Let me show you.
For those of you who do not recognize her, the above picture is of my wife, Nancy, the nature and wildlife photographer (no, this is not a selfie). This image has an aspect ratio of 4:3. Suppose we want to put her picture in a mat with a 3:2 aspect ratio. The easiest thing would be to crop to the red rectangle, which is the largest such rectangle we can get from the given material. But as you can see, there is no breathing space around the hat. So we could instead enlarge to the orange rectangle to use the original picture’s entire width, but we will need to get creative and fill in some material along the top and bottom edges (by the way, can you guess why the top and bottom voids created by the orange rectangle are not the same size?). While the techniques to fill those voids are beyond the scope of this article, I would like to share a few thoughts. These thoughts apply not only to the case where you need to add material to change the aspect ratio but also to other situations, like when you inadvertently cut off some body part.
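If you like to see the geometry as code, here is a small Python sketch of the two options above: the largest crop with the new aspect ratio (the red rectangle) and the full-width canvas you would have to fill (the orange rectangle). The 3000 x 4000 pixel size is a made-up stand-in for the portrait:

```python
def largest_crop(src_w, src_h, target_w, target_h):
    """Largest crop of a src_w x src_h image with a target_w:target_h
    aspect ratio (the 'red rectangle' approach)."""
    target = target_w / target_h
    if src_w / src_h >= target:
        return (src_h * target, src_h)   # trim the sides
    return (src_w, src_w / target)       # trim top and bottom

def full_width_canvas(src_w, src_h, target_w, target_h):
    """Canvas size that keeps the whole source width (the 'orange
    rectangle'), leaving voids above and below to fill in."""
    return (src_w, src_w * target_h / target_w)

# A 3:4 (width:height) portrait going into a 2:3 mat:
crop = largest_crop(3000, 4000, 2, 3)
canvas = full_width_canvas(3000, 4000, 2, 3)
print(crop)    # roughly (2666.7, 4000): we lose width around the hat
print(canvas)  # (3000, 4500.0): 500 rows of new material to invent
```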
The first moral of this dilemma is: don’t frame too tightly on your subject while shooting. Start leaving yourself a little more edge room when you take your pictures. Besides not inadvertently cutting off body parts, which are hard to bring back after the fact, you might actually capture something extra, like the subject’s whole reflection, which you didn’t even notice in the excitement of getting this unique subject.
The first step in processing this change in aspect ratio is to go back and check the original file. Maybe you had previously cropped the image for compositional purposes and the original might still have at least part of the now-missing material that you need.
Small, uncomplicated additions are easy enough with Photoshop’s Clone Stamp tool (and although I’m not a huge fan, sometimes Content-Aware Fill might even work), but it gets trickier as the size of the addition increases. It would be no problem to fill the new space above Nancy’s head with sky, and maybe even throw in an extra cloud or two, but if for some reason, we had wanted to extend the left edge of this image an inch or so, finding enough water to fill the gap without people noticing repetitions could be an issue.
Sometimes you can create more usable material from within the image itself by copying some of the waves, for example, and flipping them, or rotating them, etc. But you will have to judge the effectiveness of these actions on a case-by-case basis.
Look at the photograph you took just before this one and just after this one for more material. Especially if you are shooting wildlife, I know you had your camera on rapid-shoot. The neighboring shot that you didn’t select for this image may have ‘new’ material that would be useful for your current extension project.
Continue to expand your search area. Even if you didn’t get another picture of your subject squirrel that day, you might have other squirrel pictures you can use to replace that missing body part.
Send Your Ideas
Well, that’s all I have for now. Although I have no intentions yet of following this article with more detailed information on the Clone Stamp or other tools, I am pretty sure there are plenty of tutorials out there, both by Adobe and by several third parties. If you do have your own hard-earned techniques or suggestions on any of the material I’ve just discussed or even a horror story that’s relevant, I’m sure my readers would love to see your comments below. Thanks.
I’ve been using Adobe Photoshop since before we started this endeavor but didn’t buy Lightroom 3 until December 2010. I bought Lightroom so I could catalog and keep track of Nancy’s growing collection of photographs (I still do virtually all of our photo editing in Photoshop). At the time, Nancy already had about 15 thousand digital images (we won’t even talk about all of her slides and negatives). After a number of unsuccessful campaigns, I can now report that essentially all of THOSE photographs have been entered into Lightroom. In the process, I have also cataloged some of her more recent work. Altogether, I’ve now cataloged 29 thousand of her 64 thousand digital photographs (and counting). I’ve identified 360 species of birds, 45 species of butterflies and moths, and about 100 mammals (including eleven types of squirrels). In her digital era, we have made eleven trips out of the country to five different continents (the other two continents haven’t been visited since the days of film). So you are probably wondering what I learned about cataloging.
Well, I’m still learning. Part of the problem is that we cater to some discriminating classes of consumer, like birders (and others), who want to know about the specific type of bird or butterfly. But not being an expert, I’ve not always been successful at identifying those subjects, even after spending quite some time doing research. This is part of the reason you may have noticed I have actually lost ground so far (if you’ve done the math). But I have learned a few things.
In The Beginning
First, some background. Before Lightroom, I thought it would be good, as some experts had suggested, to put our photographs in folders based on content. I think I had a folder (they may have called them directories back then) for dogs and another for people, each of which had subfolders, but it soon became apparent that some images had both dogs and people and the whole system became a bit of a mess before I realized a need to move on to a better system.
Now, at the end of a day of photographing, I upload the pictures to the computer into a folder labeled with the date, which sits inside another folder labeled with the year. It’s simpler this way. If we are away from home, they get uploaded onto the laptop and immediately backed up onto an external hard drive before I format the camera’s memory card to be ready for the next day. Then when I get home, I transfer the laptop copies to my desktop. I also back up our whole portfolio to one of our larger external hard drives on a basis that is never quite as regular as it should be, but that part is beyond the scope of this article.

Then, when I get around to it, I sit down and import the pictures into Lightroom one daily folder at a time. I already have a metadata preset with Nancy’s contact information, but before each import I update the preset’s location. We don’t religiously record GPS data, but we at least try to add sublocation, city, and state. After importing, I go through and add keywords. It is the keywords I’m relying on to find the pictures I’m looking for years later.

As for the other parts of the workflow that people write about, like rating and weeding, that’s Nancy’s job; she will look through a day’s work and together we will evaluate how to handle each picture. She has “the eye”; generally I’m there just to remind her of what is possible and what isn’t feasible, and to take notes on how (or if) she wants each one edited. But when I’m cataloging, I only cull the obvious: the hopelessly out of focus or those with a cut-off (or missing) subject, for example. There are good reasons for not being too aggressive with the delete button at this stage (which I may get a chance to comment on in the near future, so stay tuned).
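For the curious, the date-based folder scheme can be sketched in a few lines of Python. This is only an illustration of the layout, not a tool we actually use, and the YYYY-MM-DD folder-name format is my assumption; use whatever date style you prefer, as long as you stay consistent:

```python
import shutil
from datetime import date
from pathlib import Path

def import_shoot(files, root, shoot_date):
    """Copy a day's shots into <root>/<year>/<YYYY-MM-DD>/."""
    dest = Path(root) / str(shoot_date.year) / shoot_date.isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    for f in files:
        # Copy (never move) so the card stays intact until it's backed up.
        shutil.copy2(f, dest / Path(f).name)
    return dest

# import_shoot(["IMG_0001.CR2"], "Photos", date(2017, 1, 15))
# would land the file in Photos/2017/2017-01-15/
```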
My Lessons On Keywording
So here are my current thoughts:
Embrace hierarchical cataloging. If somebody is looking for just any butterfly picture, that’s fine; asking for ‘butterfly’ will bring up all subcategories. But if they specifically want a giant swallowtail, you can search for it directly.
Your categories should follow your own needs, not official scientific classifications. Under ‘woodpecker’ (which is under ‘bird’) I have seven different species, but ‘northern yellow-shafted flicker’ is listed separately (under ‘bird’). If someone looking through the results of a search for ‘woodpecker’ could be expected to ask “where are the flickers?”, then I made a mistake. But it is easy to move things around. Which brings us to the next point:
Develop your hierarchy organically, or as needed. Start with simple categories, like ‘amphibian’ maybe, and subdivide as the number of amphibians makes searching for your favorite species of frog more time-consuming. Or if flowers are your specialty and you listed each individual species under ‘flower’ (or even if you didn’t start with the ‘flower’ keyword), combining all roses into their own subgroup of ‘flower’ (and/or supergroup of the individual varieties) might someday be appropriate. Being too detailed may be overkill at first, but those details can become more critical when you are searching through tens of thousands of pictures. Although we have a ‘bird’ category, which is well developed with many levels of subcategories, I don’t yet have ‘mammal’ as a separate category. As I mentioned, we do have ‘squirrel’, which has 11 subcategories and other things like giraffe are also subdivided. I don’t expect somebody to ask to see all of our mammal pictures, but if it does happen I can adjust.
Not all of my subcategories of ‘bird’ are individual species (or genus, or family, etc). Some of the subgroups are based on the type of bird or likely habitat; I group them with other birds they are likely to be confused with. For example, I have ‘shorebird’, which to me means all those little birds that run back and forth at the beach just ahead of the waves to feed in the sand (and includes a number of scientific families). This way if I use up my allotted time without identifying the species I can throw it in the ‘shorebird’ class and maybe identify it later (perhaps as a bonus when identifying another bird in that class). Things like moorhens or spoonbills that would never be confused with those guys would not be part of the class. Sometimes even when you cannot identify the particular species, it helps to narrow it down.
As another example of mixed classification types, under ‘people’ I have individual names. If I have pictures of related people, I might throw them together in a group by their last or family name, or add the last name as an intermediate group between ‘people’ and the individuals. But maybe more important for search purposes, I have other ‘people’ subclasses based on what they are doing, like ‘surfer’, ‘cowboy’, or ‘tourist’.
Your strict hierarchy alone may not always be the best answer. You may well wind up with a hybrid scheme. Sometimes within a species, if I have a lot of pictures (or if I expect people to ask for a particular subset of the group), I may subclassify. For example, I have both ‘male painted bunting’ and ‘female painted bunting’ under ‘painted bunting’, and for some animals, we have another subgroup for ‘immature’. But both butterflies and moths, which are separate classes in my scheme, have caterpillars. I could have ‘Species A caterpillar’, ‘Species B caterpillar’, etcetera as subcategories of every species for which I have caterpillar pictures, but this makes it difficult if someone wants to see all of my caterpillars. In this case, I made ‘caterpillar’ its own independent category and I add it to the keywords of both butterflies and moths.
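To illustrate how hierarchical searching behaves, here is a toy Python model. The keyword tree below is a made-up fragment, not our actual catalog, but it shows why asking for a parent term brings up everything beneath it while a sibling like the flicker stays out of the woodpecker results:

```python
# A tiny nested-dict stand-in for a hierarchical keyword list.
tree = {
    "bird": {
        "woodpecker": {"pileated woodpecker": {}, "downy woodpecker": {}},
        "northern yellow-shafted flicker": {},
    },
    "butterfly": {"giant swallowtail": {}},
}

def descendants(tree, term):
    """Return term plus every keyword nested beneath it, or [] if absent."""
    for key, sub in tree.items():
        if key == term:
            result = [key]
            for child in sub:
                result += descendants(sub, child)
            return result
        found = descendants(sub, term)
        if found:
            return found
    return []

# Searching the parent term sweeps in all subcategories...
print(descendants(tree, "woodpecker"))
# ...but the flicker, filed directly under 'bird', is not among them.
```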
Of course to complicate things, the caterpillar of some moths, like the Carolina sphinx moth, have their own distinct name (e.g. tobacco hornworm), so in those cases, I kept the hornworm keyword and still added ‘caterpillar’ to the picture’s keyword collection (even though it seemed redundant).
Another nice thing about keywording is the synonym list for each keyword so that one can add scientific names, or other local/common names to all of your animals, or strange nicknames to crewmates (those are the ones you will most likely remember when you do try to dig them up later). Each of those synonyms is searchable.
Keep in mind, the main purpose of cataloging and keywording is to be able to find that picture years later. The first secret is to have a good idea of what characteristics you will need to search for (and to hope those requirements don’t change over the intervening years).
A secondary purpose is to record notes that would be useful in those later years. For example, having a keyword for everyone on your cruise that happened to find their way in front of your camera might not seem important now (since you won’t likely be searching specifically for them later), but if they do wind up in front of your favorite humpback whale you may need their name later and it’s best to get it down while it is still fresh.
These comments just show my current method for this process. My scheme will probably continue to evolve, and even if it doesn’t, I give no guarantee that is the best plan for you. I hope I’ve given some ideas that will be useful and maybe even save you the time of learning everything the hard way, but in the end, the most efficient cataloging scheme is probably the one that most closely matches the workings of your own brain. Whether you list individual species of plants under ‘purple flower’ or just add ‘purple flower’ as an independent subcategory of ‘flower’ (or ‘plant’) depends on how your brain normally processes these attributes. Thinking about and/or understanding how you think could be the hardest part of this process.
I’ve had a few opportunities lately to help people edit their photographs where they wanted to combine two photos into a composite and were worried about the relative sizes being proper, especially when the camera settings and/or the scene were not identical. Based on these experiences, I’ve created a couple of scenarios to introduce certain concepts.
A Safe Selfie
Suppose you need to add part of a wild animal behind you – to make a safe selfie, if you will. Most of your composite shots, where two objects are moved around in an image with plenty of other size reference points, fall in this category. Generally, combining subjects is a two-part process:
Resize One Picture To Match Pixels Per Inch For The Two Subjects
First, you must know the physical size of the two objects. In the selfie case, you probably already know your own size, but suppose you want to place the head of an animal (whose picture you took from a safe distance) right behind you. In one case, I Googled an animal to get size information but could not find the head size of an adult male of that species. The sources did list shoulder height, however. So I found a picture of that type of animal online showing both the head and enough of the animal to measure its shoulder height (since my client’s picture of the animal did not show all of these features), and by comparing the two measurements in that picture, found the size of the head. Fortunately, it’s not always that hard.

Now, measure the subjects in your two pictures in pixels and divide each measurement by the subject’s length in inches. Resize one of the images so that the pixels per inch you just calculated are the same in both pictures. For example, let’s say your 6-foot height (72″) measures 792 pixels in the first picture. That’s 11 pixels per inch. The alligator or bear’s head, which you found to be 24″ long, measures 192 pixels in the second picture, for 8 pixels per inch. You can either enlarge the ferocious animal or downsize your likeness.

If you want to reduce your size, open the first picture in Photoshop. Click on “Image” in the menu, and then “Image Size…”. Make sure the “Resample” box is checked. Multiply the picture’s existing resolution (say 300 pixels/inch) by the target pixels per inch calculated above (in this case 8, to match your animal) and divide by your starting pixels per inch (11). That gives you 218.182, which replaces the existing 300 in “Resolution”. Hit “OK”. Now you and your animal head are the appropriate sizes, if you plan to put them side by side in your picture. If you want to move one in front of or behind the other, its size will have to change.
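The arithmetic above can be sketched in a few lines of Python, using the same example numbers (72″ and 792 pixels for you; 24″ and 192 pixels for the animal’s head):

```python
def pixels_per_inch(measured_px, real_inches):
    """Subject's scale in its photo: measured pixels per real-world inch."""
    return measured_px / real_inches

def new_resolution(current_resolution, my_ppi, target_ppi):
    """Value to enter in Photoshop's Image Size 'Resolution' field so
    this picture's scale matches the other picture's pixels per inch."""
    return current_resolution * target_ppi / my_ppi

me = pixels_per_inch(792, 72)         # 11.0 px/inch for the person
head = pixels_per_inch(192, 24)       # 8.0 px/inch for the animal head
print(new_resolution(300, me, head))  # about 218.182, replacing the 300
```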
Use A Vanishing Point To Resize One Subject For Changing Distance
Now you can use vanishing points to maintain the correct sizes as you move your objects into place. We’ve already explained that process in Using The Vanishing Point To Keep The Size Right When Moving Wildlife Around. I would like to point out that as long as your object stays the same distance from the camera, in the same focal plane, you can move it up, down, and all around without changing its size. If you move it closer to the camera, it should get larger; when you move it away, make it smaller. Once you resize it for its new distance, you can again move it up, down, and all around within that new focal plane at no extra cost. Also, once you find the horizon in your picture, it doesn’t matter which point along that horizon line you use as the vanishing point; all of them will resize your object correctly. Pick a point far enough off to one side to make construction lines long enough to give you some precision when changing size.
A Beach Scene
I also helped somebody with a beach scene that invoked two simpler special cases of the resize problem. The base or background image was a wide-angle beach scene and the photographer wanted to add objects that they took with a zoom lens at the same scene that same day.
The photographer’s intent, in this case, was to shoot objects floating on the water near the horizon with a strong zoom lens and add them to the picture so that they looked closer. An object floating in the water is restricted to a specific plane in such a way that its distance from the horizon is directly related to its distance from the camera (within a camera’s normal field of view), and that distance is the determining factor in the object’s relative size. As long as the horizon is in the picture, this is no problem. Whether you add that object at its original pixel size (as magnified by a telephoto lens), or even if you scale it further in Photoshop (by holding down the Shift key to preserve the aspect ratio as you drag a corner of the selection border with the Move tool, for example), as long as you keep the horizon of the added object directly on the same line as the horizon of the background, the invisible construction lines from an invisible vanishing point will ensure the size and placement of your object are in agreement. If the horizon is not in the picture, then you need to look for other size references and handle it as in the first general case discussed above.
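For those who want the floating-object rule as a formula, here is a simplified Python sketch. It assumes an idealized pinhole camera, a level horizon, and an object small relative to its distance (so its apparent size is roughly proportional to how far below the horizon line it sits); all the pixel values are invented for the example:

```python
def scale_for_new_position(orig_y, new_y, horizon_y):
    """Factor to rescale an object on the water moved from pixel row
    orig_y to row new_y (rows measured downward from the top of the
    frame; horizon_y is the row of the horizon line)."""
    return (new_y - horizon_y) / (orig_y - horizon_y)

# An object pasted just under the horizon (row 1020, horizon at row
# 1000) and then moved down to row 1400 must be drawn 20x larger:
print(scale_for_new_position(1020, 1400, 1000))  # 20.0
```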
Birds (or other airborne objects) are even easier. Their position is unrestricted and, more importantly, there are no other size references in view, so there is really no way of knowing how large the object really is or how far away; if it were a ball, there would be no way for you to tell whether it is a large ball far away or a smaller ball up close. If you are familiar with the object and know how large Cooper’s hawks are, for example, then your brain will automatically assign the hawk an appropriate distance based on its size when trying to make sense of the picture. You can put that hawk just about anywhere and the viewer won’t know the difference. Obviously, if you put a pigeon in a hawk’s talons, then each would act as a size reference for the other and at least their relative sizes would have to match. If they were not touching (or near enough to imply an interaction), there would be no such restriction.
There are other positional clues besides size to think about. On a sunny day, an object’s shadow provides positional information, namely the object’s relationship to the sun, which must be consistent throughout the image (for best results).
When you shoot someone’s face with a wide-angle lens from a close distance, it will not look the same as when you shoot the same face from far away with a telephoto lens. An example of this is shown in the fourth image from the top at Choose the Right Lens to Make Flattering Portraits (the only image that’s in color). I’ve seen some experts blame this on lens distortion (as does the guy in this otherwise great video at Focal Length for Storytelling – How Lens Choice Affects Your Images), but I don’t consider that lens distortion. There is such a thing as lens distortion, but in this case, the subject’s nose really does look bigger and the ears really do disappear behind the cheeks if you close one eye and look at that person from 3″ in front of their face. I call that a perspective shift, and it is strictly a matter of angles and geometry, not lens issues. The “distortion” occurs when you take that image out of context by changing the perspective, which happens quite noticeably when you move an object from very far away to very close (or vice versa) in your image, or if you take a 180° panorama, for example, and print or display it small enough to cover only 15° of your view. This perspective shift is virtually impossible to correct in Photoshop, so don’t go too wild while moving things around in your picture. (Interestingly, it is by the lack of any perspective shift that you can catch somebody who created a reflection in their picture by just adding a flipped subject in post-processing. The explanation of this tangent to today’s topic would require a separate article, however.)
Well, that’s about everything I know on this subject. Please feel free to contribute your own hard-earned understanding of this issue for the betterment of photography in the comment section below. Thanks.
As we discussed on the Services page of our website, we digitally “stretch” our image before wrapping it around the edge of our gallery-wrapped canvas images. Here’s how we do that:
Our gallery wraps are either 3/4″ thick or 1-1/2″. On the thin ones, I usually take the 1/4″ strip along the edges and stretch it to 1″, thus having an extra 1/4″ to wrap around to the back side to cover for variations in the printing and stretching processes. On the larger ones, I take 1/2″ and stretch it to 2″ (thus leaving 1/2″ on the back). I wouldn’t stretch the image to more than four times its original width, but you could stretch less; to do that, you would effectively take a wider margin to wrap around the side.
As an example, if I want a 12″ x 18″ image stretched around a 1-1/2″ frame, I would crop the image to 13″ x 19″. Then, after putting guides 1/2″ in from each edge and another guide right on each edge, I would increase the canvas size 3″ in both dimensions to get 16″ x 22″ with the image centered.
1. Click Image ⇨ Canvas Size…
2. Put a check in the Relative box
3. Make Width and Height 3 inches
4. Make sure the Anchor dot is in the center of the grid
I would then use a scale transform to digitally stretch the outermost 1/2” to 2” wide, filling the canvas.
1. Make sure Snap is checked in the View menu
2. Use the Rectangular Marquee tool to select the 1/2″ strip between the guides along one of the edges
3. Click Edit ⇨ Transform ⇨ Scale
4. Place the mouse cursor over the little square in the middle of the outer edge of the selected area and drag to the edge of the canvas
5. Hit the check mark to finish the transform
6. Repeat Steps 2 through 5 with the 1/2″ strips along the other three edges
(Actually, I first do the four corner squares separately, but since only a small bit along the edge of those squares has any chance of being seen, you could include them in either the horizontal or vertical strips (or even both)).
Then I add a blank (transparent) edge around the image, representing the canvas I need for stretching it around the frame, by increasing the canvas size by double the required margin in both dimensions, the same way we did above. That margin would be at least the width of the moulding along the bottom (1″ for the 1-1/2″ moulding we are using now) plus enough extra to get a grip with the canvas pliers (for me, that’s at least 3/4″). That makes the image’s final dimensions at least 19-1/2″ x 25-1/2″. When I am finished, I add layers with cut lines, fold lines, staple lines, positioning marks for the hanging hardware, etcetera, but that is a personal matter beyond the scope of this article.
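Since there are a few numbers flying around, here is the whole gallery-wrap calculation as a small Python sketch, using the 12″ x 18″ example above. The parameter names are mine; the defaults match the 1-1/2″-moulding numbers in the text (a 1/2″ edge strip stretched four times wider, a 1″ blank margin per side, and a 3/4″ pliers grip):

```python
def wrap_layout(face_w, face_h, edge=0.5, stretch_factor=4,
                blank=1.0, grip=0.75):
    """Return (crop, stretched, final) canvas sizes in inches.
    edge: image strip per side that gets stretched over the moulding;
    blank: extra transparent margin per side for the stretch;
    grip: canvas-pliers allowance per side."""
    # Crop the image slightly larger so there is a strip to stretch.
    crop = (face_w + 2 * edge, face_h + 2 * edge)
    # Each edge strip grows from `edge` to `edge * stretch_factor`.
    grow = 2 * edge * (stretch_factor - 1)
    stretched = (crop[0] + grow, crop[1] + grow)
    # Blank margin plus pliers grip on every side.
    final = (stretched[0] + 2 * (blank + grip),
             stretched[1] + 2 * (blank + grip))
    return crop, stretched, final

print(wrap_layout(12, 18))
# ((13.0, 19.0), (16.0, 22.0), (19.5, 25.5)) -- the sizes from the text
```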
That’s about it. Feel free to leave comments or questions.
Our latest image, Eclipse Over Long Pine Key, of the August solar eclipse in the Everglades, was by far our most complicated yet. While spending hours and hours overcoming challenges in post-processing, I wondered whether I was wasting my time: would anybody even be interested in the results, and was each of these steps really necessary, or was I just over-thinking a problem again? You be the judge.
Another Brilliant Idea
Although I wrote a blog article with suggestions for taking eclipse photographs with either your smartphone or camera, we had no plans to take any ourselves. Then, about twenty hours before the start of the event, I got another ‘brilliant’ idea – the concept for a multi-Gigapan panorama and time-lapse image of the solar eclipse.
What I pictured were images of the eclipsing sun as it soared barely over the tops of the skyscrapers in downtown Miami. This image would be taken from a balcony somewhere between the tenth and twentieth floors so you could capture an interesting street view in the foreground. Of course, it would be a Gigapan to give plenty of detail (we’ve been influenced and inspired by fellow Miami photographer Robert Holmes), but one panorama would not be enough. Ideally, as you took each of your carefully placed sun shots, you would need to shoot an area of the city directly beneath it so that the constantly changing light intensity and the buildings’ shadows would change in synchrony with the sun. The sun photos would be taken with a camera behind about 16 stops of neutral density filter. For ease of execution, it might be better to have a second camera for the Gigapan unit. Each Gigapan panorama would need enough overlap with its neighbors to be stitched together, would need to reach high enough to capture the sun’s position so the better sun shot could be added in correctly later, and would need to leave enough headroom for the sun’s path.
Reality Sets In
Some of the challenges of this project were unanticipated and, as you will see, some were self-inflicted. Although this was our most ambitious project yet and would turn out to be tremendously challenging for the technical support crew (me), Nancy, with her artistic eye, still made the aesthetic decisions.

First of all, we don’t do cityscapes. We had less than a day to get ready for this shoot, and we already had a doctor’s appointment for Nancy’s mom scheduled for the morning of the eclipse. We brainstormed and searched Google Earth to make a list of possible sites. Then, after bringing Nancy’s mom home from the doctor the next day, we headed to the Everglades.

The pines on the island in the lake at the Long Pine Key Campground were our first choice. We got there around one o’clock, but I was disappointed in the height of the trees, so even though the equipment takes over half an hour to set up (and the eclipse started at around 1:30), we headed to our second choice, Pine Glades Lake, about six miles away. It proved completely inadequate for our needs, and the side trip ate up another hour of valuable time. By the time we returned to Long Pine Key, found the ideal location, and set up the Gigapan, the eclipse was already near its peak (around 2:45).

This may still be doable, I told myself; for the sun I can just flip a copy of the pictures we get in the second half of the event. I set up the first panorama to combine all the areas we had missed, and then took six more Gigapans after that. Since I had packed only one tripod, while I operated the Gigapan Nancy had to lie on her back, pull the 100-400mm lens back to 100mm so she could find the sun on the LCD monitor in Live View, zoom in to 400mm, focus, and push the button, every five minutes. Shortly after 4 o’clock, after shooting our seven Gigapans and twenty sun shots, we packed up and went home.
The Real Work Begins
Stitching The Panoramas
Photoshop does pretty well putting together smaller panoramas but bogs down as the number of images grows (for small panoramas, Canon’s PhotoStitch, which came with the camera, does an even better job). Gigapan Stitch, which comes with their motor drives, does pretty well on the larger panos, but I usually use Kolor’s Autopano Giga because it has more projection choices, better control options, and does a better job of eliminating ghosting (which happens when things, even trees and branches, move between shots). I’m still climbing the learning curve, which added to the time needed to stitch together all seven panoramas. I believe there must be a way to combine the stitched panoramas into one large image within Autopano, but I couldn’t find it in time for this project, so I had to warp and stitch the individual panoramas together by hand. Unlike in my imagined cityscape, the ground-level location of the camera and the intervening lake made the concerns about shadows much less significant. Because of that, and the fact that each panorama was larger than strictly required, I was able to cover the field with only two of the seven panoramas, but I then added the last panorama, the one with clouds.
And Now For The Hard Parts
First, The Bad News
When I saw the first of the sun photos on my computer screen, I was amazed that one could capture such details as sunspots with a regular camera. And then it occurred to me that this meant I wouldn’t be able to simply flip the sun shots to re-create the shots we missed, and that, since we didn’t have the sun camera on a tripod, I would have to find a way to make sure each sun had the right orientation with respect to the horizon if I wanted this image to be anatomically, or should I say astronomically, correct. Also, although I expected the sun in the panoramas to be blown out, I thought I’d still be able to use its position to place the new sun. Wrong! The blown-out area was much too large to be useful.
Is There An Astronomer In The House?
But how would I determine the proper position and orientation? For position, I used “The Photographer’s Ephemeris” (TPE) app on my phone (cost: $3) to find the azimuth (compass bearing) and altitude (angle of elevation) of the sun from the camera’s location at any time during the eclipse. I had also taken the compass bearing and/or angle of elevation of a few of the features in the image during the shoot. With this information, I mapped out a grid on a separate Photoshop layer, placed a small circle at the location of each sun, and then used the Pen Tool to mark the sun’s whole trajectory.
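If you want to build a similar grid, the arithmetic is simple once you know your panorama’s geometry. Here is a minimal sketch of the mapping, assuming (hypothetically) that azimuth runs linearly left-to-right across the frame and altitude runs linearly up from the horizon line – a reasonable approximation for an equirectangular-style stitch; the function name and all the numbers are mine, not from TPE:

```python
def sun_to_pixel(azimuth_deg, altitude_deg,
                 az_left_deg, az_right_deg,
                 horizon_y, pixels_per_degree, image_width):
    """Map the sun's azimuth/altitude (e.g. read off TPE) to pixel
    coordinates on a panorama, assuming azimuth varies linearly
    across the frame and altitude linearly above the horizon."""
    x = (azimuth_deg - az_left_deg) / (az_right_deg - az_left_deg) * image_width
    y = horizon_y - altitude_deg * pixels_per_degree  # up is smaller y
    return x, y

# Example: a 10000-px-wide pano spanning azimuths 120-240 degrees,
# horizon at y = 6000, and 80 px per degree of altitude.
x, y = sun_to_pixel(180.0, 45.0, 120.0, 240.0, 6000, 80.0, 10000)
# x lands mid-frame (5000.0); y sits 3600 px above the horizon (2400.0)
```

A couple of known landmark bearings (like the ones I noted during the shoot) are enough to calibrate `az_left_deg`, `az_right_deg`, and `pixels_per_degree`.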
For the orientation, I checked the web and even Facebook for pictures or information showing the orientation of the sun and the moon’s path across it, but could not find what I needed. By then it was several weeks after the eclipse, but I thought I could just go out and take new photographs of the sun with the camera on a tripod to get its orientation. My first image, at around 9 am, showed a different sunspot pattern than during the eclipse. A photograph taken around 1:30 pm showed that same pattern, but the sun had rotated about 66° clockwise relative to the horizon since the first shot. This mission wasn’t going to be easy.
I opened each sun photo in Photoshop and erased the black background. On new layers, I found the center (which became slightly more challenging as the missing piece became larger), placed a circle on the sun’s edge, and drew horizontal and vertical crosshairs over it. I duplicated all of those construction lines and moved them to represent the moon. I placed all the layers into a group labeled with the time of the shot so it would be easier to combine these images into one Photoshop file. In this master sun file, I started with the shot taken at the peak of the eclipse and rotated that group so that the moon’s center was directly over the sun’s. As I rotated the other suns to align their spots with the first, the moons’ centers formed a horizontal line about half a radius above the sun’s center. Even better, the distance of each moon’s center from that of the peak moon along that line was basically proportional to the time difference between the two. Because of that, I was able to find the moon’s position at any time and replace the sun photographs that I missed.
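That proportionality is what made re-creating the missed shots possible: once the suns are aligned by their spots, the moon’s center just slides along a horizontal line at a rate proportional to elapsed time. A tiny sketch of that interpolation, with made-up calibration numbers (the peak time, peak center, and pixels-per-minute value here are hypothetical placeholders, not my actual measurements):

```python
def moon_center(t_minutes, peak_t_minutes, peak_center, px_per_minute):
    """Estimate the moon's center at time t, given that the moons'
    centers fall on a horizontal line and their offset from the
    peak-eclipse moon is proportional to the time difference."""
    x0, y0 = peak_center
    dx = (t_minutes - peak_t_minutes) * px_per_minute
    return (x0 + dx, y0)  # y stays on the horizontal line

# Example: peak at 785 min after midnight; re-create a shot 20 min later.
center = moon_center(805, 785, (2048.0, 1536.0), 4.0)
# center -> (2128.0, 1536.0): 80 px to the right of the peak moon
```

With the estimated center in hand, you can position a moon-sized circle over any aligned sun photo to stand in for a missed frame.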
The Sun’s Trajectory
I noticed that the sun’s 66° change in orientation relative to the horizon between my two later test shots seemed to match the change in the angle of the sun’s trajectory relative to the horizon over that interval. I’m still not confident that I’ve got my mind wrapped around all of the intricacies of these three moving celestial bodies, so this could be a coincidence, but I decided to run with this notion. After plotting the sun’s trajectory on its separate layer, we decided for aesthetic reasons (call this artistic license) to compensate for the disappointing tree height by compressing that layer downward. (This could possibly have been an accurate representation if the declinations of the sun and moon – the latitudes directly below them – had been somewhat less than the 12½° they were at the time.) To do this, I simply made a selection with the Rectangular Marquee Tool whose bottom edge sat on the horizon and whose other three sides were large enough to include the whole trajectory line. Then, using a scale transform, I just lowered the top edge of the selected area to taste.
The Moon’s Path
I was pretty sure that the moon’s path was not parallel to the sun’s but didn’t know how much to tilt it. I had heard that the plane of the moon’s orbit differs by about 5 degrees from the earth’s, but didn’t know how that related to the problem. In the master sun file, I saw that each sun had to be tilted anywhere from 0 to over 27 degrees, with an average of 10°. I chose to rotate everything in the master sun file 8 degrees.
After deriving a trajectory for the sun, I first selected and placed the appropriate photographs at five-minute intervals. Because the eclipse started just after local apparent noon (when the sun crossed our meridian of longitude and was at its closest to directly overhead), the sun was moving fastest along its trajectory, and the five-minute suns were much farther apart at the start than at the end of the eclipse, three hours and 77 degrees of azimuth later. Nancy wasn’t happy. Instead of equal time, I next considered an equal-azimuth-change approach, but that would have had the suns getting farther apart at the end of the trajectory instead of the beginning. So Nancy decided on having the suns the same distance apart on the image – the third easiest of the three approaches, because the curved trajectory requires use of the Pythagorean Theorem.
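The equal-spacing-on-the-image approach boils down to measuring arc length along the trajectory. Here is one way to sketch it, assuming the Pen Tool path has been sampled into a list of (x, y) points (the function and sample numbers are my own illustration, not a record of the actual Photoshop file):

```python
import math

def equally_spaced_points(path, n):
    """Return n points spaced at equal distances along a polyline,
    using the Pythagorean Theorem (math.hypot) for segment lengths."""
    # Cumulative arc length at each vertex of the path.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    out = []
    for i in range(n):
        target = total * i / (n - 1)  # desired arc length for sun i
        # Find the segment containing this arc length and interpolate.
        for j in range(len(cum) - 1):
            if cum[j + 1] >= target:
                seg = cum[j + 1] - cum[j]
                f = 0.0 if seg == 0 else (target - cum[j]) / seg
                (x0, y0), (x1, y1) = path[j], path[j + 1]
                out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
                break
    return out

# Example: 3 equally spaced suns along a simple two-segment path
# whose segments are each a 3-4-5 triangle (length 5).
pts = equally_spaced_points([(0, 0), (3, 4), (6, 8)], 3)
# pts -> [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
```

Each returned point can then be matched back to a time (and hence a sun photo) by looking up where that arc length falls on the plotted trajectory.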
Another artistic decision was the sun’s size. After re-creating the missing sun shots for the times dictated by the new spacing strategy, I copied all the necessary sun folders into the finished panorama file. As I started moving them into position, we decided they were not large enough, so I deleted all of them from the panorama file, increased the number of pixels in each direction of the master sun file by 25%, and recopied the appropriate folders into the panorama file. As I moved each sun into position, I rotated it so that the bottom edge of that group was parallel to the tangent (slope) of the trajectory path at that point. One could possibly ‘justify’ these actions by arguing that the same effect could have been achieved by just moving further away from Long Pine Key.
Finally, we decided to end the string of suns as it went behind a cloud rather than continue to the trees. The problem was that the cloud and the sun were in their respective positions at different times, meaning the cloud was not properly backlit as the juxtaposed position of the last sun dictated. It was my younger brother who first pointed out that “flaw”.
As I was setting up the first panorama near the peak of the eclipse, I didn’t notice that the sun was almost three f-stops dimmer than normal (or almost one-eighth as bright). I think this shows the eye’s and mind’s ability to compensate for different conditions (although it could just show how oblivious I can be to my surroundings when I’m focused on a challenge). I determined the exposure level of the camera in the usual way. As I took the later panoramas, however, the camera noticed the change in light intensity. I started getting more and more blinkies, and before the fifth panorama, I added our last (3-stop) neutral density filter.
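If the stops-to-brightness conversion above seems like a leap, it’s just powers of two – each f-stop halves the light. A one-liner makes the relationship explicit:

```python
def stops_to_ratio(stops):
    """Each f-stop halves the light, so a scene n stops dimmer
    is 1 / 2**n as bright as normal."""
    return 1 / 2 ** stops

# Near the eclipse peak the sun was almost 3 stops dimmer:
ratio = stops_to_ratio(3)  # 0.125, i.e. one-eighth as bright
```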
Since the sky of each panorama was blown out in a large area around the sun, I had to restore its color uniformly across the image and then darken the sky around each sun appropriately. To do that, I used Photoshop again to find the relative area of each sun, and then used that area to determine how much darker its part of the sky should be.
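The area-to-darkness step can be sketched with the same powers-of-two logic as f-stops, under the simplifying assumption that sky brightness scales with the visible area of the sun (the function name and numbers here are my illustration, not the exact adjustment I used in Photoshop):

```python
import math

def darkening_stops(visible_area, full_area):
    """Estimate how many stops darker the sky around a partly
    eclipsed sun should be, assuming sky brightness is
    proportional to the sun's visible area."""
    return math.log2(full_area / visible_area)

# Example: a sun with only a quarter of its disk still visible
stops = darkening_stops(0.25, 1.0)  # 2.0 stops darker than full sun
```

In practice you would measure `visible_area` with the area technique mentioned above and feed the resulting stop value into an exposure adjustment layer.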
Well, that’s about everything we considered. You’ve got a little over six years to get ready for the next solar eclipse in this country (April 8, 2024), so don’t wait ’til the last minute to prepare (like some people I know). Hopefully, by learning from the tribulations and mistakes of others, you can make your life easier while still making better pictures. Good luck! And feel free to leave comments (or questions).
There are mathematical and drafting programs that may do a better job of finding the areas of all sorts of seemingly random two-dimensional shapes, and I may have used one or two of them as a student, but I haven’t had any of them on my computer for many moons. So when I recently needed to compare the size of the visible sun at different times during a solar eclipse so I could compare exposure levels, I was out of luck. But then there was Photoshop. I just finished an article about how to find an object’s area and put it on our website at www.beehappygraphics.com/find-area.html, mainly because I mentioned the technique in an earlier blog post and was about to mention it again in an article I promised about the challenges of our newest eclipse image. This probably isn’t the most common task you will be doing, but when you need it, it can be handy. Enjoy!