In case you want a progress update, I am currently working on an important new feature that hopefully will solve at least two related, recurring problems.
The problems it aims to solve (or at least reduce) all relate to separating foreground from background.
1) If you have, for example, a landscape with a complex tree line against the sky and want to darken the sky and lighten the trees, pixels at the boundary between sky and trees (which are typically a combination of both) often come out too light or too dark, no matter how carefully you create the mask. Here is an image showing the result of naively trying to darken the sky and lighten the trees with a mask, and the result of the new algorithm. The new algorithm still has several kinks to work out and room for further refinement.
2) When compositing an object from one image into another, it is difficult to create a mask that follows a tricky boundary (e.g. wisps of hair, fur, etc.), and even if you do, you end up compositing some of the original background along with the foreground.
The goal is to produce three images from an input image: one with only the background, one with only the foreground, and a mask that identifies the boundary between them. To do this, you need to identify the boundary region where background and foreground blend, estimate the background and foreground colors within that region, and find the blend of those colors that comes closest to the input image at each point.
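The post does not say exactly how that closest blend is found, but the description matches the standard compositing model in which each boundary pixel is treated as alpha*F + (1-alpha)*B. Assuming that model, a per-pixel least-squares estimate of the blend factor might look like the following Python/NumPy sketch; the function name and array conventions are mine, not taken from the actual implementation.

import numpy as np

def estimate_alpha(image, fg, bg, eps=1e-6):
    # Per-pixel alpha that best explains image as a blend of fg and bg,
    # i.e. the alpha minimizing |image - (alpha*fg + (1-alpha)*bg)|^2.
    # All arrays are float with shape (H, W, 3) and values in [0, 1].
    diff_fb = fg - bg                    # direction from background to foreground
    diff_ib = image - bg                 # where the observed pixel lies along it
    num = np.sum(diff_ib * diff_fb, axis=2)
    den = np.sum(diff_fb * diff_fb, axis=2) + eps
    return np.clip(num / den, 0.0, 1.0)

Pixels that are pure background come out near 0, pure foreground near 1, and mixed boundary pixels fall somewhere in between.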
The idea is to have the user locate subject areas within an image (such as sky vs trees or flower vs background or person vs background) by identifying representative parts of the image that are clearly only foreground and clearly only background. Using these areas as a starting point, the foreground and background regions are expanded outward until reaching pixels that are no longer clearly foreground or clearly background. At this point, the foreground and background areas continue to grow into the boundary region by diffusing the colors from nearby pixels in the region, producing separate foreground and background images that fill the boundary area. Once the entire boundary area has been filled, the optimal mask image is computed.
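The color-diffusion step is only outlined above, so the following is just one plausible way it could be done, not the actual code: a simple Jacobi-style iteration that grows the known foreground (or background) colors into the boundary band, averaging each band pixel only from neighbours that already have a color. The fixed iteration count, the 4-neighbour stencil and the wrap-around borders of np.roll are all simplifications for the sketch.

import numpy as np

def diffuse_colors(image, known_mask, unknown_mask, iters=500):
    # image:        (H, W, 3) float array in [0, 1]
    # known_mask:   True where the pixel is clearly foreground (or background)
    # unknown_mask: True inside the boundary band that needs to be filled
    # Returns a copy of image with the boundary band replaced by diffused colors.
    out = np.where(known_mask[..., None], image, 0.0)
    weight = known_mask.astype(float)
    for _ in range(iters):
        acc = np.zeros_like(out)
        wsum = np.zeros_like(weight)
        # gather the 4-neighbours; only pixels that already have a color
        # (weight 1) contribute to the average
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            acc += np.roll(out * weight[..., None], (dy, dx), axis=(0, 1))
            wsum += np.roll(weight, (dy, dx), axis=(0, 1))
        filled = acc / np.maximum(wsum[..., None], 1e-6)
        # grow into the band; clearly-known pixels stay pinned to their colors
        out[unknown_mask] = filled[unknown_mask]
        weight[unknown_mask] = (wsum[unknown_mask] > 0).astype(float)
    return out

Running something like this once from the foreground seed areas and once from the background seed areas would give the two filled color images, and feeding those into the alpha estimate above would give the mask.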
To solve problem 1, you apply different curves (or other processing) to the foreground and background images and then blend the results using the computed mask. To solve problem 2, you overlay the foreground image (with the computed mask) on the base image.
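Both fixes come down to ordinary alpha compositing with the computed mask. A minimal sketch under that assumption (the function names are mine; alpha is the (H, W) mask and the other arguments are NumPy float images):

def blend_with_mask(fg_processed, bg_processed, alpha):
    # Problem 1: recombine the separately processed layers using the computed mask.
    a = alpha[..., None]               # (H, W) -> (H, W, 1) so it broadcasts over RGB
    return a * fg_processed + (1.0 - a) * bg_processed

def composite_over(fg, alpha, base):
    # Problem 2: lay the extracted foreground over a different base image.
    a = alpha[..., None]
    return a * fg + (1.0 - a) * base

Because the curves are applied to clean foreground and background colors before blending, boundary pixels receive the correct mixture of both adjustments instead of coming out too light or too dark.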
Ultimately I am thinking of integrating this feature into transformations such as Brightness Curve, Blend and Composite. This is a fairly ambitious project that will take some time to complete, and I also have some travel planned this summer which will slow things down.
Work in Progress
Moderator: jsachs
Jonathan Sachs
Digital Light & Color
Re: Work in Progress
Very interesting and useful!
Re: Work in Progress
This sounds very useful - I hope it goes smoothly.
John P
Re: Work in Progress
Impressive! I look forward to seeing the results. Of course, if you're looking for beta testers...
Re: Work in Progress
Deep human thought beats AI solutions.
Re: Work in Progress
While I have gotten quite nice results in some cases (see below), the general form of this problem is turning out to be significantly harder than I anticipated, so, after completely rewriting the code for the fourth or fifth time, I am putting this feature on the back burner for a while until I get some new ideas.
Jonathan Sachs
Digital Light & Color