Efficient method to draw a line with millions of points

I'm writing an audio waveform editor in Cocoa with a wide range of zoom options. At its widest, it shows a waveform for an entire song (~10 million samples in view). At its narrowest, it shows a pixel accurate representation of the sound wave (~1 thousand samples in a view). I want to be able to smoothly transition between these zoom levels. Some commercial editors like Ableton Live seem to do this in a very inexpensive fashion.

My current implementation satisfies my desired zoom range, but is inefficient and choppy. The design is largely inspired by this excellent article on drawing waveforms with Quartz:

http://supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/

I create multiple CGMutablePathRefs for the audio file at various levels of reduction. When I'm zoomed all the way out, I use the path that's been reduced to one point per x-thousand samples. When I'm zoomed all the way in, I use the path that contains a point for every sample. I scale a path horizontally when I'm in between reduction levels. This gets it functional, but is still pretty expensive, and artifacts appear when transitioning between reduction levels.
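To give an idea of the reduction step, here's a rough sketch of one way to build a level (not my exact code; `PeakPair` and `buildMinMaxReduction` are made-up names, it assumes 16-bit mono samples, and it keeps a min/max pair per bucket):

#include <stdint.h>
#include <stdlib.h>

// One entry per bucket: the minimum and maximum sample in that bucket.
typedef struct { int16_t min; int16_t max; } PeakPair;

// Reduce `count` samples to one min/max pair per `samplesPerBucket` samples.
static PeakPair *buildMinMaxReduction(const int16_t *samples, size_t count,
                                      size_t samplesPerBucket, size_t *outBucketCount)
{
    size_t buckets = (count + samplesPerBucket - 1) / samplesPerBucket;
    PeakPair *peaks = malloc(buckets * sizeof(PeakPair));
    for (size_t b = 0; b < buckets; b++) {
        size_t start = b * samplesPerBucket;
        size_t end = start + samplesPerBucket;
        if (end > count) end = count;
        int16_t lo = samples[start], hi = samples[start];
        for (size_t i = start + 1; i < end; i++) {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
        }
        peaks[b].min = lo;
        peaks[b].max = hi;
    }
    *outBucketCount = buckets;
    return peaks;
}

Each reduction level uses a different samplesPerBucket, and the path for a level is built from those pairs.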

One thought on how I might make this less expensive is to take out anti-aliasing. The waveform in my editor is anti-aliased while the one in Ableton is not (see comparison below). [screenshots: my anti-aliased waveform vs. Ableton's non-anti-aliased waveform]

I don't see a way to turn off anti-aliasing for CGMutablePathRefs. Is there a non-anti-aliased alternative to CGMutablePathRef in the world of Cocoa? If not, does anyone know of some OpenGL classes or sample code that might set me on course to drawing my huge line more efficiently?

Update 1-21-2014: There's now a great library that does exactly what I was looking for: https://github.com/syedhali/EZAudio

Scopoline answered 31/1, 2011 at 9:1 Comment(0)

i use CGContextMoveToPoint+CGContextAddLineToPoint+CGContextStrokePath in my app. i draw one point per onscreen point, using a pre-calculated backing buffer for the overview. the buffer contains the exact points to draw, and uses an interpolated representation of the signal (based on the zoom/scale). although it could be faster and look better if i rendered to an image buffer, i've never had a complaint. you can calc and render all of this from a secondary thread, if you set it up correctly.

anti-aliasing pertains to the graphics context.

CGFloat (the native input for CGPaths) is overkill as an intermediate representation and for calculating the waveform overview. 16 bits should be adequate. of course, you'll have to convert to CGFloat when passing to CG calls.
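as a rough sketch of what i mean (illustrative only -- the buffer name, the scaling and the function are made up; it assumes a pre-calculated int16 buffer with one value per onscreen point):

#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>

// stroke one polyline with one point per onscreen x position.
// `backing` holds pre-interpolated 16-bit values for the current zoom;
// conversion to CGFloat happens only at the CG call site.
static void drawOverview(CGContextRef ctx, const int16_t *backing,
                         size_t pointCount, CGFloat height)
{
    if (pointCount == 0) return;
    CGFloat midY = height * 0.5;
    CGFloat yScale = height / (CGFloat)UINT16_MAX; // map the full 16-bit range to the view height

    CGContextMoveToPoint(ctx, 0, midY + backing[0] * yScale);
    for (size_t x = 1; x < pointCount; x++)
        CGContextAddLineToPoint(ctx, (CGFloat)x, midY + backing[x] * yScale);
    CGContextStrokePath(ctx);
}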

you need to profile to find out where your time is spent -- focus on the parts that take the most time. also, make sure you only draw what you must, when you must, and avoid overlays/animations where possible. if you need overlays, it's better to render to an image/buffer and update that as needed. sometimes it helps to break up the display into multiple drawing surfaces when the surface is large.
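for example, a CGLayer can act as that cache (just a sketch -- the static variable and the names are illustrative, and you'd need to release and rebuild the layer whenever the zoom or size changes):

#include <CoreGraphics/CoreGraphics.h>

// render the expensive waveform once into a layer, then just composite
// the cached layer on every redraw.
static CGLayerRef waveformLayer = NULL;

static void drawCachedWaveform(CGContextRef ctx, CGRect bounds)
{
    if (waveformLayer == NULL) {
        waveformLayer = CGLayerCreateWithContext(ctx, bounds.size, NULL);
        CGContextRef layerCtx = CGLayerGetContext(waveformLayer);
        // ... stroke the waveform into layerCtx here ...
    }
    CGContextDrawLayerInRect(ctx, bounds, waveformLayer);
}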

semi-OT: ableton's using s+h values. this can be slightly faster, but... i much prefer it as an option. if your implementation uses linear interpolation (which it may, based on its appearance), consider a more intuitive approach. linear interpolation is a bit of a cheat, and really not what the user would expect if you're developing a pro app.
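for reference, an s+h (step) drawing just holds each value with a horizontal segment before stepping to the next one -- something along these lines (names and scaling parameters are made up, purely a sketch):

#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>

// sample-and-hold: hold each value as a horizontal run, then step
// vertically to the next value, instead of connecting samples diagonally.
static void drawStepWaveform(CGContextRef ctx, const int16_t *values, size_t count,
                             CGFloat pxPerSample, CGFloat midY, CGFloat yScale)
{
    if (count == 0) return;
    CGContextMoveToPoint(ctx, 0, midY + values[0] * yScale);
    for (size_t i = 1; i < count; i++) {
        CGFloat x = i * pxPerSample;
        CGContextAddLineToPoint(ctx, x, midY + values[i - 1] * yScale); // hold the previous value
        CGContextAddLineToPoint(ctx, x, midY + values[i] * yScale);     // step to the new value
    }
    CGContextAddLineToPoint(ctx, count * pxPerSample, midY + values[count - 1] * yScale);
    CGContextStrokePath(ctx);
}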

Oriental answered 31/1, 2011 at 10:25 Comment(4)
Thanks for the detailed response! Sounds like you've been here too. Indeed I'm drawing the waveform more often than I might need to. I'll look at substituting a full re-draw with an overlay. Regarding Ableton using s+h values, what are those? You're correct to assume that my implementation uses linear interpolation. Looks like Ableton's squared lines could be more efficient to draw, but not sure how I'd go about doing that.Scopoline
@Scopoline you're welcome. by 's+h', i meant the 'sample and hold' values of the signal. this just draws the waveform steps exactly as they are represented in the time domain. it's better (imo) to interpolate and construct the signal's representation. s+h is ok, but it should be an option, with an interpolated representation as the default. when i say interpolation, i am referring to an approach which is more representative of an analog signal than s+h or linear interpolation. for that, it's probably going to require that you balance performance with accuracy. a sinc would be good, (cont)Oriental
but you can likely get away with a higher order spline (as one example). interpolating/reconstructing the waveform correctly can add a lot of cpu to your current implementation.Oriental
using a draw line approach, the s+h drawing implementation is pretty simple - you'd actually have more points to draw, but the math is simpler and could be performed in int (or short), saving a bunch of memory (in some cases) as well as cpu.Oriental

Regarding the specific question of anti-aliasing: in Quartz, anti-aliasing is applied to the context at the moment of drawing. The CGPathRef is agnostic of the drawing context, so the same CGPathRef can be rendered into either an anti-aliased or a non-anti-aliased context. For example, to disable anti-aliasing during animations:

CGContextRef context = UIGraphicsGetCurrentContext();
CGMutablePathRef fill_path = CGPathCreateMutable();
// Fill the path with the wave
...

CGContextAddPath(context, fill_path);
// Anti-aliasing is a property of the context, not of the path
if ([self animating])
    CGContextSetAllowsAntialiasing(context, NO);
else
    CGContextSetAllowsAntialiasing(context, YES);
// Do the drawing
CGContextDrawPath(context, kCGPathStroke);
CGPathRelease(fill_path);
Adapt answered 31/1, 2011 at 9:58 Comment(1)
Ah, thanks for pointing out the context anti-aliasing setting. Unfortunately, it doesn't seem to have led to a significant performance increase. Apparently I have too many other inefficient processes in my current implementation.Scopoline
