iOS: How to trim an image to the useful parts (remove transparent border)

I'm trying to automatically show the useful part of a largely transparent PNG in an iPhone app. The image may be, say, 500x500, but it is mostly transparent. Somewhere within that image is a non-transparent part that I want to display to the user as large as possible, so I want to trim off as much as I can from each side (or make it appear that way by stretching and moving the image within the UIImageView). Any ideas?

Wain answered 2/10, 2011 at 7:21 Comment(2)
Additional: Could the person who marked down this question explain why? Was the answer too obvious, or did I not ask politely enough? I don't understand what's wrong with it. – Wain
Voted up. I think it is a great question. – Remus

Using Quartz, convert the image to a bitmap and examine the alpha channel values to find the bounds of the non-transparent part of the image.

Here is an Apple Tech Note that covers this: Getting the pixel data from a CGImage object. You can get a CGImage from a UIImage with:

CGImageRef imageRef = [uiImage CGImage];
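
As a rough sketch of that approach (not the Tech Note's exact code; the function name visibleBoundsOfImage and the premultiplied RGBA bitmap format are my own assumptions), you can draw the CGImage into a bitmap context and scan the alpha bytes for the smallest rect containing any visible pixel:

// Sketch only: returns the bounding box (in pixels) of all pixels with non-zero alpha.
// Assumes an 8-bit RGBA bitmap; visibleBoundsOfImage is an illustrative name, not an API.
static CGRect visibleBoundsOfImage(CGImageRef imageRef) {
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bytesPerRow = width * 4;
    UInt8 *pixels = calloc(bytesPerRow, height);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);

    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            UInt8 alpha = pixels[y * bytesPerRow + x * 4 + 3]; // alpha is the 4th byte of each pixel
            if (alpha > 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    if (minX > maxX || minY > maxY) return CGRectZero; // fully transparent image
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}

The returned rect can then be passed to CGImageCreateWithImageInRect to crop the CGImage, and the result wrapped back into a UIImage with +[UIImage imageWithCGImage:].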
Telic answered 2/10, 2011 at 11:47 Comment(0)

I made a method that scans the image's pixels from each edge inward, looking for the first row or column that contains a non-transparent pixel (alpha above a 0.01 tolerance), and then trims the image to those bounds.

///crops image by trimming transparent edges
-(UIImage *)trimImage:(UIImage *)originalImage {

    // raw image reference
    CGImageRef rawImage = originalImage.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer – stores the pixel components as (r0, g0, b0, a0, r1, g1, b1, a1, ..., rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    float topTrim = 0;
    float bottomTrim = 0;
    float leftTrim = 0;
    float rightTrim = 0;

    @autoreleasepool {

        int pixelPosition = 0;

        // scan down from the top for the first row that contains a visible pixel

        NSInteger row = 0;
        NSInteger column = 0;
        BOOL found = NO;
        while (row < height) {
            while (column < width) {
                pixelPosition = row*width+column;
                NSInteger pixelIndex = 4*pixelPosition;
                float alphaValue = data[pixelIndex+3]/255.0f;
                if (alphaValue > 0.01f) {
                    found = YES;
                    break;
                }
                column++;
            }
            if (found) {
                break;
            }
            column = 0;
            row++;
        }
        topTrim = row;

        // scan up from the bottom for the last row that contains a visible pixel

        row = height-1;
        column = 0;
        found = NO;
        while (row >= 0) {
            while (column < width) {
                pixelPosition = row*width+column;
                NSInteger pixelIndex = 4*pixelPosition;
                float alphaValue = data[pixelIndex+3]/255.0f;
                if (alphaValue > 0.01f) {
                    found = YES;
                    break;
                }
                column++;
            }
            if (found) {
                break;
            }
            column = 0;
            row--;
        }
        bottomTrim = row;

        // scan right from the left edge for the first column that contains a visible pixel

        row = 0;
        column = 0;
        found = NO;
        while (column < width) {
            while (row < height) {
                pixelPosition = row*width+column;
                NSInteger pixelIndex = 4*pixelPosition;
                float alphaValue = data[pixelIndex+3]/255.0f;
                if (alphaValue > 0.01f) {
                    found = YES;
                    break;
                }
                row++;
            }
            if (found) {
                break;
            }
            row = 0;
            column++;
        }
        leftTrim = column;

        // scan left from the right edge for the last column that contains a visible pixel

        row = 0;
        column = width-1;
        found = NO;
        while (column >= 0) {
            while (row < height) {
                pixelPosition = row*width+column;
                NSInteger pixelIndex = 4*pixelPosition;
                float alphaValue = data[pixelIndex+3]/255.0f;
                if (alphaValue > 0.01f) {
                    found = YES;
                    break;
                }
                row++;
            }
            if (found) {
                break;
            }
            row = 0;
            column--;
        }
        rightTrim = column;

    }

    // clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    // overall size of the non-transparent region (inclusive of both edge rows/columns)

    float trimWidth = rightTrim-leftTrim+1;
    float trimHeight = bottomTrim-topTrim+1;

    UIView *trimCanvas = [[UIView alloc] initWithFrame:CGRectMake(0, 0, trimWidth, trimHeight)];
    trimCanvas.backgroundColor = [UIColor clearColor];

    UIImageView *trimImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, width, height)];
    trimImageView.image = originalImage;
    trimImageView.contentMode = UIViewContentModeScaleToFill;
    trimImageView.backgroundColor = [UIColor clearColor];

    [trimCanvas addSubview:trimImageView];

    // shift the image view so the non-transparent region starts at the canvas origin

    trimImageView.center = CGPointMake(trimImageView.center.x-leftTrim, trimImageView.center.y-topTrim);

    // render the canvas into a new, trimmed image

    CGRect __rect = [trimCanvas bounds];
    UIGraphicsBeginImageContextWithOptions(__rect.size, (NO), (originalImage.scale));
    CGContextRef __context = UIGraphicsGetCurrentContext();
    [trimCanvas.layer renderInContext:__context];
    UIImage *__image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //

    return __image;

}
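
As a usage sketch (the image name "artwork" and the imageView property are placeholders, not part of the method above):

// hypothetical usage (placeholders: "artwork", self.imageView)
UIImage *original = [UIImage imageNamed:@"artwork"];
UIImage *trimmed = [self trimImage:original];
self.imageView.contentMode = UIViewContentModeScaleAspectFit;
self.imageView.image = trimmed;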
Steroid answered 9/6, 2019 at 6:26 Comment(0)
