Correct crop of CIGaussianBlur

As I noticed, when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it correctly to avoid having transparent edges on the image. But how do I calculate how much I need to crop, depending on the blur amount?


Example:

Original image:

Image with 50 inputRadius of CIGaussianBlur (blue color is background of everything):

Demisec answered 11/10, 2012 at 12:38 Comment(1)
Just dropping in to say that this Lion is super badass. – Kinase

Take the following code as an example...

CIContext *context = [CIContext contextWithOptions:nil];

CIImage *inputImage = [[CIImage alloc] initWithImage:image];

CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];

[filter setValue:inputImage forKey:kCIInputImageKey];

[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];

CIImage *result = [filter valueForKey:kCIOutputImageKey];

CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];

This results in the images you provided above. But if I instead use the original image's rect to create the CGImage from the context, the resulting image is the desired size.

CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
Tedman answered 21/11, 2012 at 17:9 Comment(1)
In order to achieve the desired result, you need to apply the clamp filter first, then the blur. See the solution provided by @orj. – Linkoski

There are two issues. The first is that the blur filter samples pixels outside the edges of the input image, and those out-of-bounds pixels are transparent; that is where the transparent edges come from. The trick is to extend the edges before you apply the blur filter. This can be done with a clamp filter, e.g. like this:

CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];

CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                           objCType:@encode(CGAffineTransform)]
                     forKey:@"inputTransform"];

This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.

The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter and you must adapt the origin of the resulting CIImage by some offset e.g. like this:

CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];

The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
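That last idea can be sketched: since the blur pads the output symmetrically, half the width difference between the output and input extents would give the per-side offset. A minimal Swift sketch of the geometry, assuming symmetric padding (the function names are mine, and this heuristic was not verified against a real renderer):

```swift
import Foundation  // CGRect/CGFloat come via Foundation

// Estimate the per-side padding a renderer added around the blurred output.
// Assumes the blur pads symmetrically, so half the width difference is the offset.
func estimatedBlurOffset(inputExtent: CGRect, outputExtent: CGRect) -> CGFloat {
    return (outputExtent.width - inputExtent.width) / 2.0
}

// The rect to pass to createCGImage(_:from:), mirroring the CGRectOffset call above.
func renderRect(inputExtent: CGRect, offset: CGFloat) -> CGRect {
    return inputExtent.offsetBy(dx: offset, dy: offset)
}
```

For example, a 100×100 input whose blurred output extent is 160×160 would yield a per-side offset of 30.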

Flux answered 9/8, 2013 at 1:14 Comment(1)
This works perfectly. I just wanted to point out that the CGAffineTransform assigned in your example is the default one for the filter, so you can omit all that code, or maybe just call [affineClampFilter setDefaults]. – Glennglenna

To get a nice blurred version of an image with hard edges, you first need to apply a CIAffineClamp to the source image to extend its edges out, and then you need to ensure that you use the input image's extent when generating the output image.

The code is as follows:

CIContext *context = [CIContext contextWithOptions:nil];

UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];

CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];

CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];

CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];

CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];

CGImageRelease(cgImage);

Note this code was tested on iOS. It should be similar for OS X (substituting NSImage for UIImage).

Raina answered 24/6, 2015 at 8:45 Comment(0)

I saw some of the solutions and wanted to recommend a more modern one, based off some of the ideas shared here:

private lazy var coreImageContext = CIContext() // Re-use this.

func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
    let blurredImage = image
        .clampedToExtent()
        .applyingFilter(
            "CIGaussianBlur",
            parameters: [
                kCIInputRadiusKey: radius,
            ]
        )
        .cropped(to: image.extent)

    return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}

If you need a UIImage afterward, you can of course get it like so:

let image = UIImage(cgImage: cgImage)

... For those wondering, the reason for returning a CGImage is that, as noted in Apple's documentation, Core Image's coordinate system mismatches UIKit's, so this filtering approach can yield unexpected results when the image is displayed in a UIImageView with a contentMode set. Backing it with a CGImage ensures contentMode is handled properly.

If you need a CIImage you could return that, but in this case if you're displaying the image, you'd probably want to be careful.

Brammer answered 19/12, 2018 at 9:56 Comment(1)
Thanks Ben, this is the best answer. – Niven

This works for me :)

CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f;          // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"];    // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent;    // Create Rect
rect.origin.x += blurLevel;         // and set custom params
rect.origin.y += blurLevel;         // 
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f;  //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect];    // Then apply new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
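The rect arithmetic above amounts to insetting the input extent by the blur level on every side. A minimal Swift sketch of that calculation (the function name is mine):

```swift
import Foundation  // CGRect/CGFloat come via Foundation

// Equivalent to the manual origin/size arithmetic above:
// move the origin in by blurLevel and shrink each dimension by 2 * blurLevel.
func cropRect(for inputExtent: CGRect, blurLevel: CGFloat) -> CGRect {
    return inputExtent.insetBy(dx: blurLevel, dy: blurLevel)
}
```

For a 100×100 extent and a blur level of 20, this yields the rect (20, 20, 60, 60).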
Preengage answered 7/11, 2013 at 16:2 Comment(0)

Here is the Swift 5 version of blurring the image. Set the clamp filter to its defaults so you won't need to supply a transform.

func applyBlurEffect() -> UIImage? {

    let context = CIContext(options: nil)
    guard let imageToBlur = CIImage(image: self) else { return nil }
    let clampFilter = CIFilter(name: "CIAffineClamp")!
    clampFilter.setDefaults()
    clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)

    // The CIAffineClamp filter sets the extent to infinite, which then confounds
    // the context. Save the pre-clamp extent and supply it to createCGImage(_:from:).
    let inputImageExtent = imageToBlur.extent

    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
    currentFilter.setValue(10, forKey: "inputRadius")
    guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: inputImageExtent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)

}
Desulphurize answered 6/9, 2019 at 19:41 Comment(1)
Where does the extent come from? I'm trying this solution in a current project, and the image I'm using for input has size 60.33 by 70.0, but the extent produced is 149 by 188, which is a different aspect ratio. Perhaps it's related to the fact that I'm using a UIImageView with a frame that is square (using aspect fit), but the function knows nothing about the UIImageView as far as I can tell. Why are these aspect ratios different? I expect them to be the same, perhaps only slightly more square from extending the edges for the blur. The blurred image is less square, though. – Sidman

Here is a Swift version:

func applyBlurEffect(image: UIImage) -> UIImage {
    let context = CIContext(options: nil)
    let imageToBlur = CIImage(image: image)
    let blurfilter = CIFilter(name: "CIGaussianBlur")
    blurfilter!.setValue(imageToBlur, forKey: "inputImage")
    blurfilter!.setValue(5.0, forKey: "inputRadius")
    let resultImage = blurfilter!.valueForKey("outputImage") as! CIImage
    let cgImage = context.createCGImage(resultImage, fromRect: resultImage.extent)
    let blurredImage = UIImage(CGImage: cgImage)
    return blurredImage

}
Polard answered 24/10, 2015 at 10:18 Comment(0)

See below two implementations for Xamarin (C#).

1) Works for iOS 6

public static UIImage Blur(UIImage image)
{   
    using(var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;

        using(CIImage output = blur.OutputImage)
        using(CIContext context = CIContext.FromOptions(null))
        using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(cgimage);
        }
    }
}

2) Implementation for iOS 7

The approach shown above no longer works properly on iOS 7 (at least at the moment, with Xamarin 7.0.1), so I decided to add cropping another way (the measures may depend on the blur radius).

private static UIImage BlurImage(UIImage image)
{   
    using(var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;

        using(CIImage output = blur.OutputImage)
        using(CIContext context = CIContext.FromOptions(null))
        using(CGImage cgimage = context.CreateCGImage (output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
        }
    }
}

private static CIImage Crop(CIImage image, float width, float height)
{
    var crop = new CICrop
    { 
        Image = image,
        Rectangle = new CIVector(10, 10, width - 20, height - 20) 
    };

    return crop.OutputImage;   
}
Carincarina answered 26/9, 2013 at 15:0 Comment(0)

Try this: let the input's extent be -createCGImage:fromRect:'s parameter:

-(UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *input = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(radius) forKey:kCIInputRadiusKey];
    CIImage *output = [filter valueForKey:kCIOutputImageKey];
    CGImageRef imgRef = [context createCGImage:output
                                      fromRect:input.extent];
    UIImage *outImage = [UIImage imageWithCGImage:imgRef
                                            scale:UIScreen.mainScreen.scale
                                      orientation:UIImageOrientationUp];
    CGImageRelease(imgRef);
    return outImage;
}
Adachi answered 16/10, 2016 at 13:43 Comment(0)
