Is this code drawing at the point or pixel level? How to draw retina pixels?
Consider this admirable script which draws a (circular) gradient,

https://github.com/paiv/AngleGradientLayer/blob/master/AngleGradient/AngleGradientLayer.m

int w = CGRectGetWidth(rect);
int h = CGRectGetHeight(rect);

and then

angleGradient(data, w, h ..

and then it loops over all of those:

for (int y = 0; y < h; y++)
for (int x = 0; x < w; x++) {

basically setting the color

    *p++ = color;

But wait - wouldn't this be working by points, not pixels?

How, really, would you draw to the physical pixels on dense screens?

Is it a matter of:

Let's say the scale is 4 on the device. Draw just as in the above code, but on a bitmap four times as big in each dimension, and then put it in the rect?

That seems messy - but is that it?

Flowerdeluce answered 2/12, 2017 at 16:57 Comment(1)
Note: If your problem is literally drawing a circular gradient, there are a number of simple approaches, for example: https://mcmap.net/q/196877/-swift-rainbow-colour-circle I was using "circular gradient" merely as an example of pixel drawing. – Flowerdeluce
[Note: The code in the github example calculates the gradient on a points basis, not on a pixel basis. -Fattie]

The code is working in pixels. First, it fills a simple raster bitmap buffer with the pixel color data. That obviously has no notion of an image scale or unit other than pixels. Next, it creates a CGImage from that buffer (in a bit of an odd way). CGImage also has no notion of a scale or unit other than pixels.

The issue comes in where the CGImage is drawn. Whether scaling is done at that point depends on the graphics context and how it has been configured. There's an implicit transform in the context that converts from user space (points, more or less) to device space (pixels).

The -drawInContext: method ought to convert the rect using CGContextConvertRectToDeviceSpace() to get the rect for the image. Note that the unconverted rect should still be used for the call to CGContextDrawImage().

So, for a 2x Retina display context, the original rect will be in points. Let's say 100x200. The image rect will be doubled in size to represent pixels, 200x400. The draw operation will draw that to the 100x200 rect, which might seem like it would scale the large, highly-detailed image down, losing information. However, internally, the draw operation will scale the target rect to device space before doing the actual draw, and fill a 200x400 pixel area from the 200x400 pixel image, preserving all of the detail.
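The arithmetic above can be sketched in plain Swift. Note this is an illustration only: the hypothetical `toDeviceSpace` helper stands in for `CGContextConvertRectToDeviceSpace`, which in reality reads the scale out of the context's user-to-device transform, and the 2x scale is assumed.

```swift
// Hypothetical stand-in for CGContextConvertRectToDeviceSpace; the real
// function derives the factor from the context's transform, not a parameter.
struct PointSize { var w: Double; var h: Double }

func toDeviceSpace(_ s: PointSize, scale: Double) -> PointSize {
    PointSize(w: s.w * scale, h: s.h * scale)  // points * scale = pixels
}

let points = PointSize(w: 100, h: 200)        // rect in points (user space)
let pixels = toDeviceSpace(points, scale: 2)  // 2x Retina display
print(pixels.w, pixels.h)  // 200.0 400.0
```

So the 100x200-point rect becomes a 200x400-pixel image, exactly as described above.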

Gorham answered 2/12, 2017 at 17:47 Comment(3)
Please, if you glance at the github code, line 96. Note that Pavel in fact just gets the points size: so that would be 100x200 in the example you gave in your answer. Shouldn't Pavel be doubling that size, so it would be 200x400 as in your example? (Assume a scale of "2".) Maybe I'm missing something: if I'm not mistaken, Pavel is doing precisely NOT what you describe? Am I missing something?? – Flowerdeluce
I did look at the AngleGradientLayer code. That's what I based my answer on. That's what I was talking about when I mentioned the -drawInContext: method. As I said, to properly support high resolution, that code ought to convert the rect it gets from the context to device space when determining the size of the image it creates. I guess that corresponds to your "Method M". – Gorham
Gotcha, @kenthomases! The code ought to convert the rect. Sorry, I misread your answer. Totally awesome, epic! :) Thanks again. Bounty coming.... – Flowerdeluce
So, based on the magnificent answer of KenThomases, and a day of testing, here's exactly how you draw at physical pixel level. I think.

class PixelwiseLayer: CALayer {

    override init() {

        super.init()
        // SET THE CONTENT SCALE AT INITIALIZATION TIME
        contentsScale = UIScreen.main.scale
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override open func draw(in ctx: CGContext) {

        let rectDEVICESPACE = ctx.convertToDeviceSpace(bounds).size
        // convertToDeviceSpace >>KNOWS ABOUT CONTENT SCALE<<
        // and YOU have CORRECTLY SET content scale at initialization time

        // write pixels to DEVICE SPACE, BUT ...
        let img = pixelByPixelImage(sizeInDeviceSpace: rectDEVICESPACE)

        // ... BUT the draw# call uses only the NORMAL BOUNDS
        ctx.draw(img, in: bounds)
    }

    private func pixelByPixelImage(sizeInDeviceSpace: CGSize) -> CGImage {

        let wPIXELS = Int(sizeInDeviceSpace.width)
        let hPIXELS = Int(sizeInDeviceSpace.height)
        // !!!THAT IS ACTUAL PIXELS!!!

        // you !!!DO NOT!!! need to multiply by UIScreen.main.scale,
        // as is seen in much example code.
        // convertToDeviceSpace does it properly.

        let bitsPerComponent: Int = MemoryLayout<UInt8>.size * 8
        let bytesPerPixel: Int = bitsPerComponent * 4 / 8
        let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

        var data = [RGBA]()

        for y in 0..<hPIXELS {
            for x in 0..<wPIXELS {

                let c = yourPixelColor(x: x, y: y)  // your per-pixel color function
                data.append(c)
            }
        }

        // the context ... use actual pixels!!!!
        // (keep the buffer pointer valid for the whole lifetime of the context;
        // passing &data directly could leave the pointer dangling after the init call)
        let img: CGImage? = data.withUnsafeMutableBytes { buf -> CGImage? in
            let ctx = CGContext(data: buf.baseAddress,
                        width: wPIXELS, height: hPIXELS,
                        bitsPerComponent: bitsPerComponent,
                        bytesPerRow: wPIXELS * bytesPerPixel,
                        space: colorSpace,
                        bitmapInfo: bitmapInfo.rawValue)
            return ctx?.makeImage()
        }
        return img!  // return a CGImage in actual pixels!!!!!!
    }

    // (PS, it's very likely you'll want needsDisplayOnBoundsChange as with most layers.
    // Set it true in init(), needsDisplayOnBoundsChange = true )

}

fileprivate struct RGBA { // (build raw data like this)
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}
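For completeness, here is one hypothetical `yourPixelColor` (it is not part of the answer; any per-pixel function works). It shades each pixel by its angle around the bitmap center, loosely mimicking the angle gradient in the linked project. The `RGBA` struct is repeated, and explicit width/height parameters are added, purely so the sketch compiles on its own:

```swift
import Foundation

struct RGBA { // same layout as the struct above, repeated for a standalone sketch
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

// Hypothetical per-pixel color: grayscale by angle around the bitmap center.
func yourPixelColor(x: Int, y: Int, width: Int, height: Int) -> RGBA {
    let dx = Double(x) - Double(width) / 2
    let dy = Double(y) - Double(height) / 2
    let t = (atan2(dy, dx) + .pi) / (2 * .pi)  // normalize -π...π to 0...1
    let v = UInt8(max(0, min(255, t * 255)))
    return RGBA(r: v, g: v, b: v, a: 255)
}

// e.g. the rightmost pixel on the center row of a 252x252-pixel bitmap:
let c = yourPixelColor(x: 252, y: 126, width: 252, height: 252)
print(c.r)  // 127
```

Because this runs once per physical pixel, even a 3x device gets a smooth gradient with no interpolation blur.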

The critical elements:

first ...

        super.init()
        // SET THE CONTENT SCALE >>>>AT INITIALIZATION TIME<<<<
        contentsScale = UIScreen.main.scale

second ...

    override open func draw(in ctx: CGContext) {

        realPixelSize = ctx.convertToDeviceSpace(bounds).size
        ...
    }

third ...

    override open func draw(in ctx: CGContext) {

        ...
        your image = yourPixelDrawingFunction( realPixelSize ) // NOT BOUNDS
        ctx.draw(img, in: bounds)  // NOT REALPIXELSIZE
    }

Example ...

console:
contentsScale 3.0
UIScreen.main.scale 3.0
bounds (0.0, 0.0, 84.0, 84.0)
rectDEVICESPACE (252.0, 252.0)
actual pixels being created as data: w, h 252, 252

It's absolutely critical to set contentsScale at initialization time.

I tried several OS versions, and it seems that, for better or worse, the default contentsScale for layers is unfortunately 1 rather than the screen scale, so do not forget to set it!!! (Note that other systems in the OS will also use it to know how to handle your layer efficiently, etc.)

Flowerdeluce answered 11/12, 2017 at 19:56 Comment(3)
I don't believe you should be multiplying by UIScreen.main.scale in yourPixelDrawingFunction(). – Gorham
You're setting the layer's contentsScale too late. Doing it during draw() means it can't affect the context that's passed in to draw(). Also, if you were writing a custom view class instead of a custom layer class, the context and its transform between user and device space would be set up for you for the view's draw() call. – Gorham
YOU ARE TOTALLY CORRECT, @KenThomases – I see the mechanism now. – Flowerdeluce
What it sounds like you are looking for is the scale property on UIScreen:

https://developer.apple.com/documentation/uikit/uiscreen/1617836-scale

This tells you the number of pixels the coordinate system gives you per point. iOS devices basically work in non-Retina coordinates. An old link explaining what is going on here:

http://www.daveoncode.com/2011/10/22/right-uiimage-and-cgimage-pixel-size-retina-display/

Don't use his macros, as some devices are now scale of 3.0, but the post explains what is going on.
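The hard-coded 2x in those old macros can be replaced by the actual scale. A hypothetical helper, shown with the 3x console numbers from the answer above (on a device you would pass UIScreen.main.scale rather than a literal):

```swift
// Hypothetical helper: pixel dimensions for a given point size and scale.
func pixelDimensions(pointWidth: Double, pointHeight: Double, scale: Double) -> (w: Int, h: Int) {
    (Int(pointWidth * scale), Int(pointHeight * scale))
}

// An 84x84-point layer on a 3x device:
let dims = pixelDimensions(pointWidth: 84, pointHeight: 84, scale: 3)
print(dims.w, dims.h)  // 252 252
```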

Metronymic answered 6/12, 2017 at 16:50 Comment(0)
