Is it possible in Objective-C to take a screenshot of the screen and store the result in a UIImage?
You need to create a bitmap context of the size of your screen and use
[self.view.layer renderInContext:c]
to copy your view in it. Once this is done, you can use
CGBitmapContextCreateImage(c)
to create a CGImage from your context.
Elaboration :
CGSize screenSize = [[UIScreen mainScreen] applicationFrame].size;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, screenSize.width, screenSize.height, 8, 4 * (size_t)screenSize.width, colorSpaceRef, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpaceRef);

// Flip the coordinate system so the layer is not rendered upside down.
CGContextTranslateCTM(ctx, 0.0, screenSize.height);
CGContextScaleCTM(ctx, 1.0, -1.0);

[self.view.layer renderInContext:ctx];

CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);

// Write into the Documents directory; a bare relative path like @"screen.jpg" will not work in the sandbox.
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[UIImageJPEGRepresentation(image, 1.0) writeToFile:[documentsDir stringByAppendingPathComponent:@"screen.jpg"] atomically:NO];
Note that if you run this code in response to a tap on a UIButton, the resulting image will show that button in its pressed (highlighted) state.
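If you do not want the highlighted state in the capture, one option is to defer the rendering until the next run-loop pass, by which point the button has usually returned to its normal state. A minimal sketch, assuming a hypothetical takeScreenshot helper that wraps the snippet above:

- (IBAction)captureTapped:(UIButton *)sender
{
    // Let UIKit finish handling the current touch before rendering the layer.
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *shot = [self takeScreenshot]; // hypothetical wrapper around the code above
        NSLog(@"Captured image of size %@", NSStringFromCGSize(shot.size));
    });
}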
What about window transforms? Do you know what will happen when the keyboard or alerts are shown? – Particularism
The previous code assumes that the view to be captured lives on the main screen... it might not.
Would this work to always capture the content of the main window? (Warning: typed straight into Stack Overflow, not actually compiled.)
- (UIImage *)captureScreen {
    UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
    CGRect rect = [keyWindow bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [keyWindow.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
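A hypothetical call site for the method above, saving the result to the photo album:

// Note: while a system alert is on screen, the key window may be the alert's own window.
UIImage *windowShot = [self captureScreen];
UIImageWriteToSavedPhotosAlbum(windowShot, nil, nil, nil);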
Technical Q&A QA1703: Screen Capture in UIKit Applications
http://developer.apple.com/iphone/library/qa/qa2010/qa1703.html
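The technote composites every window attached to the main screen into one context, applying each window's transform and anchor point so keyboards and alerts end up in the right place. A sketch of that approach (paraphrased, not the technote's code verbatim):

- (UIImage *)screenshotOfAllWindows
{
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        // Only render windows that belong to the main screen.
        if (![window respondsToSelector:@selector(screen)] || window.screen == [UIScreen mainScreen])
        {
            CGContextSaveGState(context);
            // Move to the window's anchor point and apply its transform,
            // so transformed windows (keyboard, alerts, rotation) land in the right place.
            CGContextTranslateCTM(context, window.center.x, window.center.y);
            CGContextConcatCTM(context, window.transform);
            CGContextTranslateCTM(context,
                                  -window.bounds.size.width * window.layer.anchorPoint.x,
                                  -window.bounds.size.height * window.layer.anchorPoint.y);
            [window.layer renderInContext:context];
            CGContextRestoreGState(context);
        }
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}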
try this...
- (UIImage *)captureView:(UIView *)view
{
    CGRect rect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

- (void)saveScreenshotToPhotosAlbum:(UIView *)view
{
    // Capture the view that was passed in and write it to the photo album.
    UIImageWriteToSavedPhotosAlbum([self captureView:view], nil, nil, nil);
}
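A hypothetical call site, e.g. wired up to a button; note that writing to the photo album prompts the user for permission on recent iOS versions:

- (IBAction)shareButtonTapped:(id)sender
{
    [self saveScreenshotToPhotosAlbum:self.view];
}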
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Write into the Documents directory; a bare relative path like @"foo.png" will not work in the sandbox.
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSData *data = UIImagePNGRepresentation(image);
[data writeToFile:[documentsDir stringByAppendingPathComponent:@"foo.png"] atomically:YES];
UPDATE April 2011: for Retina displays, change the first line to this:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
    UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
}
else
{
    UIGraphicsBeginImageContext(self.window.bounds.size);
}
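If your deployment target is iOS 4 or later you can skip the respondsToSelector check entirely; passing 0.0 as the scale tells UIKit to use the main screen's scale:

UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, 0.0);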
// Works reliably (iOS 7 and later, since it uses drawViewHierarchyInRect:afterScreenUpdates:)
- (UIImage *)screenshot
{
    // Note: the scale is hard-coded to 2.0f; pass 0.0f to use the device's native scale instead.
    UIGraphicsBeginImageContextWithOptions(self.main_uiview.bounds.size, NO, 2.0f);
    [self.main_uiview drawViewHierarchyInRect:self.main_uiview.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
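A hypothetical call site, assuming main_uiview is a view outlet on the same controller:

UIImage *shot = [self screenshot];
UIImageWriteToSavedPhotosAlbum(shot, nil, nil, nil);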
- (void)SnapShot {
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(self.view.bounds.size);
    }
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *data = UIImagePNGRepresentation(image);
    // Write into the Documents directory; a bare relative path like @"snapshot.png" will not work in the sandbox.
    NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    [data writeToFile:[documentsDir stringByAppendingPathComponent:@"snapshot.png"] atomically:YES];
    UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:data], nil, nil, nil);
}
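If you want to know whether the photo-album write succeeded, change the UIImageWriteToSavedPhotosAlbum call above to pass a target and completion selector; a minimal sketch:

// Replace the call inside SnapShot with:
UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:data], self,
                               @selector(image:didFinishSavingWithError:contextInfo:), NULL);

// And implement the completion callback UIKit expects:
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error) {
        NSLog(@"Saving screenshot failed: %@", error);
    } else {
        NSLog(@"Screenshot saved to the photo album");
    }
}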
Why [UIImage imageWithData:data] if you already have the image object above? Also, use camel case for the method name. – Nickolai
Use this code to take the screenshot:
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    UIGraphicsBeginImageContext(self.webView.bounds.size);
    [self.webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(screenshotImage, nil, nil, nil);

    NSString *docDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *str1 = [NSString stringWithFormat:@"%d", myInt]; // myInt: a page counter assumed to be defined elsewhere
    NSString *pngFilePath = [NSString stringWithFormat:@"%@/page_%@.png", docDir, str1]; // any name you want for the image
    NSLog(@"%@", pngFilePath);

    NSData *data1 = [NSData dataWithData:UIImagePNGRepresentation(screenshotImage)];
    [data1 writeToFile:pngFilePath atomically:YES];
}
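For the delegate callback above to fire, the web view's delegate has to be set; a hypothetical setup, assuming the controller adopts UIWebViewDelegate:

// e.g. in viewDidLoad
self.webView.delegate = self;
[self.webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com"]]];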
No need for [NSData dataWithData:...] around UIImagePNGRepresentation, it's already NSData. – Nickolai
A more modern approach:
Obj-C
@interface UIView (Snapshot)
- (UIImage * _Nullable)snapshot;
@end

@implementation UIView (Snapshot)
- (UIImage * _Nullable)snapshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, UIScreen.mainScreen.scale);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
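A hypothetical call site for the category (the header name below is made up):

#import "UIView+Snapshot.h" // hypothetical header name

UIImage *snapshotImage = [self.view snapshot];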
Swift
extension UIView {
    func snapshot() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.main.scale)
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
Yes, here's a link to the Apple Developer Forums https://devforums.apple.com/message/149553