I am having a strange problem in my project. A user paints or draws with swipes over an image (as an overlay), and I need to crop the area of the image that lies below the painted region. My code works correctly only when the UIImageView below the painted region is 320 pixels wide, i.e. the width of the iPhone screen. But if I change the width of the image view, I don't get the desired result.
I am using the following code to construct a CGRect around the painted part:
- (CGRect)detectRectForFaceInImage:(UIImage *)image {
    int l, r, t, b;
    l = r = t = b = 0;

    // Raw bitmap data backing the image
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8 *data = CFDataGetBytePtr(pixelData);
    BOOL pixelFound = NO;

    // Left edge: scan columns from leftX towards rightX,
    // stop at the first column containing a painted pixel.
    for (int i = leftX; i < rightX; i++) {
        for (int j = topY; j < bottomY + 20; j++) {
            // Index assumes 4 bytes per pixel and a row stride of width * 4
            int pixelInfo = ((image.size.width * j) + i) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Left %d", alpha);
                l = i;
                pixelFound = YES;
                break;
            }
        }
        if (pixelFound) break;
    }

    pixelFound = NO;
    // Right edge: scan columns from rightX back towards l.
    for (int i = rightX; i >= l; i--) {
        for (int j = topY; j < bottomY; j++) {
            int pixelInfo = ((image.size.width * j) + i) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Right %d", alpha);
                r = i;
                pixelFound = YES;
                break;
            }
        }
        if (pixelFound) break;
    }

    pixelFound = NO;
    // Top edge: scan rows from topY towards bottomY.
    for (int i = topY; i < bottomY; i++) {
        for (int j = l; j < r; j++) {
            int pixelInfo = ((image.size.width * i) + j) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Top %d", alpha);
                t = i;
                pixelFound = YES;
                break;
            }
        }
        if (pixelFound) break;
    }

    pixelFound = NO;
    // Bottom edge: scan rows from bottomY back towards t.
    for (int i = bottomY; i >= t; i--) {
        for (int j = l; j < r; j++) {
            int pixelInfo = ((image.size.width * i) + j) * 4;
            UInt8 alpha = data[pixelInfo + 2];
            if (alpha) {
                NSLog(@"Bottom %d", alpha);
                b = i;
                pixelFound = YES;
                break;
            }
        }
        if (pixelFound) break;
    }

    CFRelease(pixelData);
    return CGRectMake(l, t, r - l, b - t);
}
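For context, once I have this rect I crop the image with CGImageCreateWithImageRect, roughly like this (simplified):

// Simplified: crop the original image to the detected rect
// (the rect is in the image's pixel coordinates).
CGRect cropRect = [self detectRectForFaceInImage:self.imageView.image];
CGImageRef croppedRef = CGImageCreateWithImageRect(self.imageView.image.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);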
In detectRectForFaceInImage: above, leftX, rightX, topY and bottomY are the extreme values (floats, taken from the touch CGPoints) recorded while the user swipes a finger over the screen to paint; together they describe a rectangle that contains the painted area within its bounds (to minimize the loops):
leftX - minimum on the X-axis
rightX - maximum on the X-axis
topY - minimum on the Y-axis
bottomY - maximum on the Y-axis
Here l, r, t and b are the computed edges of the actual painted rectangle.
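For reference, I record these extremes in touchesMoved:, roughly like this (a simplified sketch; leftX, rightX, topY and bottomY are float ivars matching the names above):

// Simplified sketch: the four extreme ivars are reset to the
// first touch point in touchesBegan:, then widened while painting.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    leftX   = MIN(leftX, p.x);   // min on the X-axis
    rightX  = MAX(rightX, p.x);  // max on the X-axis
    topY    = MIN(topY, p.y);    // min on the Y-axis
    bottomY = MAX(bottomY, p.y); // max on the Y-axis
    // ... the stroke itself is drawn elsewhere ...
}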
As mentioned earlier, this code works well when the image view being painted on is 320 pixels wide and spans the whole screen width. But if the image view is narrower, say 300 pixels, and placed in the center of the screen, the code gives a wrong result.
Note: I am scaling the image to the image view's width.
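The scaling itself is nothing special; it is roughly this (a simplified sketch, scaleImage:toWidth: is just an illustrative helper name):

// Simplified sketch of how I scale the image to the
// image view's width before painting on it.
- (UIImage *)scaleImage:(UIImage *)image toWidth:(CGFloat)width {
    CGFloat ratio = width / image.size.width;
    CGSize newSize = CGSizeMake(width, image.size.height * ratio);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}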
Below is the NSLog output.
When the image view's width is 320 pixels (these are the values of the color component at the matched, i.e. non-transparent, pixel):
2013-05-17 17:58:17.170 FunFace[12103:907] Left 41
2013-05-17 17:58:17.172 FunFace[12103:907] Right 1
2013-05-17 17:58:17.173 FunFace[12103:907] Top 73
2013-05-17 17:58:17.174 FunFace[12103:907] Bottom 12
When the image view's width is 300 pixels:
2013-05-17 17:55:26.066 FunFace[12086:907] Left 42
2013-05-17 17:55:26.067 FunFace[12086:907] Right 255
2013-05-17 17:55:26.069 FunFace[12086:907] Top 42
2013-05-17 17:55:26.071 FunFace[12086:907] Bottom 255
How can I solve this problem? I need the image view centered, with padding on both sides.
EDIT: It looks like my problem is due to the image orientation of JPEG images (from the camera). PNG images work fine and are not affected by a change in the image view's width. But JPEGs still don't work, even though I am handling the orientation.
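For reference, the orientation handling I mentioned is roughly this (a simplified sketch that redraws the image so its bitmap is upright; normalizeOrientation: is just an illustrative name, and I call it before scanning the pixels):

// Simplified sketch: redraw the image so imageOrientation is
// baked into the bitmap and becomes UIImageOrientationUp.
- (UIImage *)normalizeOrientation:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) {
        return image; // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}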