Core Image filters can be chained together, one after the other. I find the code easier to read (and write) when it is structured around that idea.
CIFilter *hueFilter;
CIFilter *exposureFilter;
CIImage *inputImage; // assume this has already been created
CIImage *outputImage;

// First block: create the hue filter and attach the input image.
hueFilter = [CIFilter filterWithName:@"CIHueAdjust"];
[hueFilter setValue:inputImage forKey:kCIInputImageKey];
[hueFilter setValue:[NSNumber numberWithFloat:5] forKey:@"inputAngle"]; // hue rotation, in radians
outputImage = [hueFilter valueForKey:kCIOutputImageKey];

// Second block: feed the hue filter's output into the exposure filter.
exposureFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[exposureFilter setValue:outputImage forKey:kCIInputImageKey];
[exposureFilter setValue:[NSNumber numberWithFloat:5] forKey:@"inputEV"]; // exposure adjustment, in f-stops
outputImage = [exposureFilter valueForKey:kCIOutputImageKey];
In the first block above, the hue filter is created. Note the use of the key constants where they are available (the remaining parameter names are passed as literal strings). By the end of the block the filter has been attached to the image, but no pixel processing has actually happened yet: Core Image defers the calculations until the image is rendered, and any further filters you apply are combined so the work can be done as one efficient operation. Rendering is typically done through a CIContext, sketched below.
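As a minimal sketch of that final rendering step (not part of the listing above; it assumes you want a CGImage and that a context created with nil options is acceptable for your app):

CIContext *context = [CIContext contextWithOptions:nil]; // contexts are expensive; create once and reuse
// Only here does Core Image actually run the filter chain.
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:[outputImage extent]];
// ... wrap cgImage in a UIImage/NSImage or draw it ...
CGImageRelease(cgImage); // createCGImage:fromRect: follows the Create rule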
Back in the listing, the second block applies the exposure filter, using the output of the first filter as the input to the second. This can be repeated as many times as needed. Writing the code this way makes it easy to switch individual filters on and off, or to reorder them when you have several; one way to structure that is sketched below.
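For example, you might drive the chain from an array (a sketch; `filters` is a hypothetical NSArray of pre-configured CIFilter objects, not something from the listing above):

// Apply each filter in order; removing or reordering array entries
// turns effects off or changes their order.
CIImage *image = inputImage;
for (CIFilter *filter in filters) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter valueForKey:kCIOutputImageKey];
}
// `image` is now the (still lazily evaluated) output of the whole chain.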
Apple’s documentation is very good and has many examples: Core Image Programming Guide.