QUESTION
How do you programmatically and accurately determine the best preview size for an application that is displaying the camera's preview at the device's screen size? (Or inside any view of variable dimensions, really).
BEFORE YOU TRY TO MARK THIS AS A DUPLICATE OF ONE OF THE MILLIONS OF OTHER ASPECT RATIO RELATED QUESTIONS ON SO, please understand that I am searching for a different solution than what is generally available. I ask this question because I have read so many "answers", but they all point to a solution that I feel is incomplete (and potentially flawed, as I will describe here). If it's not a flaw, then please help me understand what I am doing wrong.
I have read a lot of different implementations of how applications choose preview sizes, and most of them take an approach that I am calling the "close enough" approach: you decide which option is best by subtracting the screen's aspect ratio from the aspect ratio of each size option and picking the option with the lowest difference. This approach doesn't seem to ensure that the best option is picked; it only ensures that the worst option isn't.
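To be explicit about what I mean by the "close enough" approach, here is a minimal sketch of it as I understand it from those answers (the method and its name are just my illustration, not code taken from any specific answer):

    // Minimal sketch of the widely-posted "close enough" approach:
    // pick the supported preview size whose aspect ratio has the smallest
    // absolute difference from the target ratio. Illustrative only.
    private static Camera.Size closestRatioSize(Camera.Parameters parameters,
                                                double targetRatio) {
        Camera.Size best = null;
        double bestDelta = Double.MAX_VALUE;

        for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
            double ratio = (double) size.width / size.height;
            double delta = Math.abs(ratio - targetRatio);

            if (delta < bestDelta) {
                bestDelta = delta;
                best = size;
            }
        }
        return best;
    }

On both of my devices a pure minimum-delta pick like this lands on 176x144 or 352x288 (whichever comes first), which the tables below show as vertically stretched; that is part of what prompted this question.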
For example, if I iterate through each available preview size on a device with a screen resolution of 720x1184 and display the preview full screen (720x1184), here are the results of the camera preview, sorted by how close each option's ratio is to the screen's ratio, in the format abs(size option ratio - screen resolution ratio). All sizes come from getSupportedPreviewSizes(). (A sketch of the logging loop that produced the ratio and delta columns follows the table.) The "Result" column comes from visual observation in a test case that places a static circle in the camera's viewfinder; it is not programmatic.
720.0/1184.0 = 0.6081081081081081
Res Ratio Delta in ratios Result
----------------------------------------------------
176/144 = 1.22222222222 (0.614114114114) (vertically stretched)
352/288 = 1.22222222222 (0.614114114114) (vertically stretched)
320/240 = 1.33333333333 (0.725225225225) (vertically stretched)
640/480 = 1.33333333333 (0.725225225225) (vertically stretched)
720/480 = 1.5 (0.891891891892) (vertically stretched)
800/480 = 1.66666666667 (1.05855855856) (looks good)
640/360 = 1.77777777778 (1.16966966967) (horizontally squashed)
1280/720 = 1.77777777778 (1.16966966967) (slight horizontal squash)
1920/1080 = 1.77777777778 (1.16966966967) (slight horizontal squash)
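For reference, the ratio and delta columns in these tables come from a logging loop roughly like the following sketch. screenWidth, screenHeight, and the log tag are placeholders from my test code, and the "Result" column is from visual observation, not from this loop:

    // For each supported preview size, compute its ratio and the delta
    // from the screen's ratio, and log both.
    double screenRatio = (double) screenWidth / screenHeight;   // e.g. 720.0 / 1184.0

    for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
        double ratio = (double) size.width / size.height;
        double delta = Math.abs(ratio - screenRatio);
        Log.d("PreviewSizes", String.format("%dx%d = %.11f (%.12f)",
                size.width, size.height, ratio, delta));
    }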
It wouldn't be a proper Android code test without running it on another device. Here are the results of displaying a camera preview on a device with a screen resolution of 800x1216, with the preview displayed at the same resolution (800x1216).
800/1216.0 = 0.657894736842
Res Ratio Delta in ratios Results
------------------------------------------------
176/144 = 1.22222222222 (0.56432748538) (looks vertically stretched)
352/288 = 1.22222222222 (0.56432748538) (looks vertically stretched)
480/368 = 1.30434782609 (0.646453089245) (looks vertically stretched)
320/240 = 1.33333333333 (0.675438596491) (looks vertically stretched)
640/480 = 1.33333333333 (0.675438596491) (looks vertically stretched)
800/600 = 1.33333333333 (0.675438596491) (looks vertically stretched)
480/320 = 1.5 (0.842105263158) (looks good)
720/480 = 1.5 (0.842105263158) (looks good)
800/480 = 1.66666666667 (1.00877192982) (looks horizontally squashed)
960/540 = 1.77777777778 (1.11988304094) (looks horizontally squashed)
1280/720 = 1.77777777778 (1.11988304094) (looks horizontally squashed)
1920/1080 = 1.77777777778 (1.11988304094) (looks horizontally squashed)
864/480 = 1.8 (1.14210526316) (looks horizontally squashed)
The "close enough" approach (assuming that any delta in ratio is equal to or less than 1.4d
is acceptable) would return 1920x1080
on both devices if iterating through lowest to highest values. If iterating through highest to lowest values it would pick 176x144
for DeviceA and 176x144
for DeviceB. Both of those options (although "close enough") are not the best options.
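To make that concrete, here is a sketch of that behavior, assuming "last acceptable size wins" semantics (lastAcceptableSize is my own illustrative name). With a cutoff as loose as 1.4, nearly every size is "acceptable", so the order of the list effectively decides the result:

    // Sketch of a pure threshold filter: any size whose ratio delta is within
    // closeEnough is "acceptable", and the last acceptable size in iteration
    // order wins. The ratios themselves barely constrain the outcome.
    private static Camera.Size lastAcceptableSize(List<Camera.Size> sizes,
                                                  double targetRatio,
                                                  double closeEnough) {
        Camera.Size chosen = null;

        for (Camera.Size size : sizes) {
            double delta = Math.abs((double) size.width / size.height - targetRatio);
            if (delta <= closeEnough) {
                chosen = size;   // keep overwriting; list order decides the winner
            }
        }
        return chosen;
    }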
QUESTION
Studying the results above, how can I programmatically derive the values that "look good"? I can't get those values with the "close enough" approach, so I must be misunderstanding the relationship between the screen size, the view I am displaying the preview in, and the preview sizes themselves. What am I missing?
Screen dimensions = 720x1184
View dimensions = 720x1184
Screen and View Aspect Ratio = 0.6081081081081081
Best Preview Size ratio (800x480) = 1.66666666667
Why are the best options not the values that have the lowest delta in ratios? The results are surprising, since everyone else seems to think the best option is found by calculating the smallest difference in ratios, but what I'm seeing is that the best option tends to sit in the middle of all of the options, and that its width is closer to the width of the view that will display the preview.
Based on the above observations (that the best option is not the value with the lowest delta in ratios), I have developed this algorithm: it iterates through all of the possible preview sizes, checks whether each one meets my "close enough" criteria, stores the sizes that meet that criteria, and finally finds a value whose width and height are both at least the provided width.
public static Camera.Size getBestAspectPreviewSize(int displayOrientation,
                                                   int width,
                                                   int height,
                                                   Camera.Parameters parameters,
                                                   double closeEnough) {
    double targetRatio = (double) width / height;
    Camera.Size bestSize = null;

    if (displayOrientation == 90 || displayOrientation == 270) {
        targetRatio = (double) height / width;
    }

    List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
    // map of ratio delta -> all sizes sharing that delta, sorted by delta ascending
    TreeMap<Double, List<Camera.Size>> diffs = new TreeMap<Double, List<Camera.Size>>();

    for (Camera.Size size : sizes) {
        double ratio = (double) size.width / size.height;
        double diff = Math.abs(ratio - targetRatio);

        if (diff < closeEnough) {
            if (diffs.containsKey(diff)) {
                // another size shares this delta; add it to the existing list
                diffs.get(diff).add(size);
            } else {
                List<Camera.Size> newList = new ArrayList<Camera.Size>();
                newList.add(size);
                diffs.put(diff, newList);
            }
            Logging.format("usable: %sx%s %s", size.width, size.height, diff);
        }
    }

    // diffs now contains all of the usable sizes;
    // now let's see which one has both dimensions at least the provided width
    for (Map.Entry<Double, List<Camera.Size>> entry : diffs.entrySet()) {
        for (Camera.Size s : entry.getValue()) {
            if (s.width >= width && s.height >= width) {
                bestSize = s;
            }
            Logging.format("results: %s %sx%s", entry.getKey(), s.width, s.height);
        }
    }

    // if we don't have a bestSize then just use whatever the default was to begin with
    if (bestSize == null) {
        if (parameters.getPreviewSize() != null) {
            return parameters.getPreviewSize();
        }

        // pick the smallest difference in ratio? or pick the largest resolution?
        // right now we are just picking the lowest ratio difference
        for (Map.Entry<Double, List<Camera.Size>> entry : diffs.entrySet()) {
            for (Camera.Size s : entry.getValue()) {
                if (bestSize == null) {
                    bestSize = s;
                }
            }
        }
    }

    return bestSize;
}
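For context, this is roughly how the helper gets called in my test app. camera, surfaceHolder, viewWidth, and viewHeight come from my test setup (the preview fills the screen), and 90 is the display orientation I use in portrait:

    // Rough sketch of wiring the helper into preview configuration.
    Camera.Parameters parameters = camera.getParameters();
    Camera.Size best = getBestAspectPreviewSize(90, viewWidth, viewHeight,
                                                parameters, 1.4d);

    if (best != null) {
        parameters.setPreviewSize(best.width, best.height);
        camera.setParameters(parameters);
    }

    camera.setDisplayOrientation(90);
    try {
        camera.setPreviewDisplay(surfaceHolder);
    } catch (IOException e) {
        Log.e("Preview", "Could not attach preview surface", e);
    }
    camera.startPreview();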
Obviously this algorithm doesn't know how to pick the best option; it only knows how to pick an option that is not the worst, like every other implementation I have seen out there. I need to understand the relationship between the sizes that actually look good and the dimensions of the view the preview will be displayed in before I can improve my algorithm to actually pick the best option.
I have looked at how CommonsWare's cwac-camera project deals with this and it appears to also use the "close enough" algorithm. If I apply that same logic to my project then I get back values that are decent, but not the "perfect" size: I get 1920x1080 back for both devices. Although that value is not the worst option, it is also slightly squished. I am going to run his code in my test app with the same test cases to determine whether it also squishes the image slightly, since I already know it will return a size that isn't as optimal as it could be.