Is it possible to convert a RenderTexture to a Texture2D? I need to create textures on the fly to be used later. I would guess a RenderTexture is less performant than a Texture2D.
Thanks.
The trick here is to create a new Texture2D, and then use the ReadPixels method to read the pixels from the RenderTexture to the Texture2D, like this:
RenderTexture.active = myRenderTexture;
myTexture2D.ReadPixels(new Rect(0, 0, myRenderTexture.width, myRenderTexture.height), 0, 0);
myTexture2D.Apply();
The above code assumes that you've created a new Texture2D object at the appropriate width and height to copy from the render texture.
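Putting those steps together, a minimal helper might look like the sketch below. The method name, its placement as an extension method, and the choice of RGB24 are this example's own; the save/restore of `RenderTexture.active` is just a precaution so the copy doesn't disturb whatever else is rendering:

```csharp
using UnityEngine;

public static class RenderTextureExtensions
{
    // Copies the contents of a RenderTexture into a new Texture2D.
    // (Illustrative sketch; name and format choice are assumptions.)
    public static Texture2D ToTexture2D(this RenderTexture rt)
    {
        // remember whatever was active so we can restore it afterwards
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = rt;

        Texture2D tex = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0); // reads from the active RT
        tex.Apply();

        RenderTexture.active = previous;
        return tex;
    }
}
```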
Here’s some really typical code - it may help someone, cheers
public void MakeSquarePngFromOurVirtualThingy()
{
    // capture the virtuCam and save it as a square PNG.
    int sqr = 512;

    virtuCamera.camera.aspect = 1.0f;
    // recall that the height is now the "actual" size from now on
    // the .aspect property is very tricky in Unity, and bizarrely is NOT shown in the editor
    // the editor will still incorrectly show the frustum as screen-shaped

    RenderTexture tempRT = new RenderTexture(sqr, sqr, 24);
    // the "24" is the depth buffer and can be 0, 16 or 24; a fourth argument
    // can specify a format such as RenderTextureFormat.Default, ARGB32, etc.

    virtuCamera.camera.targetTexture = tempRT;
    virtuCamera.camera.Render();

    RenderTexture.active = tempRT;
    Texture2D virtualPhoto = new Texture2D(sqr, sqr, TextureFormat.RGB24, false);
    // false, meaning no need for mipmaps
    virtualPhoto.ReadPixels(new Rect(0, 0, sqr, sqr), 0, 0); // you get the center section

    RenderTexture.active = null; // "just in case"
    virtuCamera.camera.targetTexture = null;
    // Destroy(tempRT); ... tricky on Android and other platforms, take care

    byte[] bytes = virtualPhoto.EncodeToPNG();
    System.IO.File.WriteAllBytes(OurTempSquareImageLocation(), bytes);
    // virtualCam.SetActive(false); ... not necessary, but take care

    // now use the image somehow...
    YourOngoingRoutine(OurTempSquareImageLocation());
}

private string OurTempSquareImageLocation()
{
    string r = Application.persistentDataPath + "/p.png";
    return r;
}
Next - or rather, before that.
Very often you have the nightmare of setting the plane and camera sizes correctly. This may help. It's a completely typical example when you are using Prime31's camera plugin; you can adapt it to your own uses.
public void PutCameraImageOnOurVirtualCanvas(string imagePath)
{
    // one way or another, put the image on the virtual plane.
    // in this example it's from a device camera. so, put the image on...
    // (NOTE - inevitably you will have rotated/twisted the plane in the editor, since cameras suck)
    virtuCanvas.renderer.material.mainTexture =
        EtceteraAndroid.textureFromFileAtPath(imagePath);

    //
    // make the canvas SHAPE correct, given the camera data shape
    //
    Vector2 ActualP31ImageSize = EtceteraAndroid.getImageSizeAtPath(imagePath);
    float heightIsBiggerBy = ActualP31ImageSize.y / ActualP31ImageSize.x;
    virtuCanvas.transform.localScale = new Vector3(1f, heightIsBiggerBy, 1f);

    //
    // make the canvas fit to the same SIZE as the virtual camera
    // (so we'll be taking the middle chunk of the canvas)
    //
    float virtCamHeight = virtuCamera.camera.ScreenHeight();
    // and that's basically also the camera width as it concerns us,
    // since we set the camera to .aspect = 1 earlier.
    // you seem to have to use Height as that measure (not width)
    // due to the way .aspect works. .aspect is super-flakey in Unity, take care

    float imWidth = virtuCanvas.renderer.bounds.size.x;
    float imHeight = virtuCanvas.renderer.bounds.size.y;

    float shouldBeBiggerBy;
    if (imWidth < imHeight)
        shouldBeBiggerBy = virtCamHeight / imWidth;
    else
        shouldBeBiggerBy = virtCamHeight / imHeight;

    Vector3 imScale = virtuCanvas.transform.localScale;
    imScale.x = imScale.x * shouldBeBiggerBy;
    imScale.y = imScale.y * shouldBeBiggerBy;
    virtuCanvas.transform.localScale = imScale;
}
public static float ScreenHeight(this Camera someOrthoCamera)
{
    // utility to get the height (real-world meters) of any ortho camera.
    // (being an extension method, this must live in a static class)
    return someOrthoCamera.ViewportToWorldPoint(new Vector3(0, 1, 10)).y
         - someOrthoCamera.ViewportToWorldPoint(new Vector3(0, 0, 10)).y;
}
And just the final piece of the puzzle that may help someone:
Don't forget, when you do "GetPixels inside SetPixels", to take the correct shape.
In the example the size of the camera would almost certainly be 4:3, say, but we want only a square shape. Don't forget that even though we "rescale the plane" as above to present a square shape, as you'll see from the first log statements the whole 4:3 image is still actually there. So just use the trick of passing arguments to GetPixels to get the shape you want; there's no need to tediously traverse the arrays manually, or anything like that.
public string CreateAndSaveImageNow()
{
    Debug.Log("making image -- now using camTexture");
    Debug.Log("camTexture width height is " + camTexture.width + " " + camTexture.height);
    Debug.Log("renderer.material.mainTexture width height is "
        + renderer.material.mainTexture.width + " " + renderer.material.mainTexture.height);

    // it's very likely the camera texture is, say, 640x480, but we want ONLY a square.
    Texture2D virtualPhoto = new Texture2D(480, 480, TextureFormat.RGB24, false);

    // so be sure to use the size arguments of GetPixels to get only the square you want.
    virtualPhoto.SetPixels(camTexture.GetPixels(0, 0, 480, 480));
    virtualPhoto.Apply();

    byte[] bytes = virtualPhoto.EncodeToPNG();
    System.IO.File.WriteAllBytes(OurTempSquareImageLocation(), bytes);
    return OurTempSquareImageLocation();
}
And finally! In that last example you'd typically have to rotate the image. Here's the same routine with some simple, easy-to-understand code which will do that.
int desiredSize;

public string CreateAndSaveImageNow()
{
    Debug.Log("needsToBeRotatedDegrees was here ... " + needsToBeRotatedDegrees);

    if (camTexture.width < camTexture.height)
        desiredSize = camTexture.width;
    else
        desiredSize = camTexture.height;
    Debug.Log("DeviceCamera, CreateAndSaveImageNow, size " + desiredSize);

    Texture2D virtualPhoto = new Texture2D(desiredSize, desiredSize, TextureFormat.RGB24, false);

    Color[] origPixels = camTexture.GetPixels(0, 0, desiredSize, desiredSize);
    // GetPixels has the handy 'four argument' form; GetPixels32 does not have that.
    // here for simplicity, using GetPixels. so first take a square block.

    // now rotate as needed, either -90, 0 or 90
    Color[] rotPixels = new Color[origPixels.Length];
    for (var x = 0; x < desiredSize; x++)
        for (var y = 0; y < desiredSize; y++)
        {
            if (needsToBeRotatedDegrees == 0)
                rotPixels[XY(x, y)] = origPixels[XY(x, y)];
            if (needsToBeRotatedDegrees == 270 || needsToBeRotatedDegrees == -90)
                rotPixels[XY(x, y)] = origPixels[XY(y, (desiredSize - 1 - x))];
            if (needsToBeRotatedDegrees == 90)
                rotPixels[XY(x, y)] = origPixels[XY((desiredSize - 1 - y), x)];
        }

    virtualPhoto.SetPixels(rotPixels);
    virtualPhoto.Apply();

    byte[] bytes = virtualPhoto.EncodeToPNG();
    System.IO.File.WriteAllBytes(OurTempSquareImageLocation(), bytes);
    return OurTempSquareImageLocation();
}

private int XY(int x, int y)
{
    // this trivial routine just returns the "1d array" index for x,y,
    // using the global width (== height) of the arrays in question
    return (x + y * desiredSize);
}
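To see what that index arithmetic does, here is a tiny standalone sketch (plain C#, no Unity types; the 2x2 grid and its letter labels are this example's own invention) applying the same mapping as the -90 degree case to a row-major square array:

```csharp
using System;

class RotateDemo
{
    const int Size = 2;

    // same "1d array" index helper as above, with a fixed size
    static int XY(int x, int y) => x + y * Size;

    static void Main()
    {
        // original 2x2 grid, stored row by row: (0,0)=A (1,0)=B (0,1)=C (1,1)=D
        string[] orig = { "A", "B", "C", "D" };
        string[] rot = new string[orig.Length];

        // same index arithmetic as the -90 degree branch above
        for (int x = 0; x < Size; x++)
            for (int y = 0; y < Size; y++)
                rot[XY(x, y)] = orig[XY(y, Size - 1 - x)];

        Console.WriteLine(string.Join(" ", rot)); // C A D B
    }
}
```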
Again hope it helps someone save some time, cheers…
You don't call ReadPixels on the RenderTexture itself, you call it on the destination Texture2D, while the RenderTexture is active. ReadPixels automatically reads from the currently active render texture. – Pinhole
Hey, this is exactly what I want for a little test, just failing to get it working. Hopefully it's just something simple I'm doing wrong, if you could take a quick look? var myRenderTexture : Texture; function Start () { var photo = new Texture2D (256, 256); photo.ReadPixels(new Rect(0, 0, myRenderTexture.width, myRenderTexture.height), 0, 0); photo.Apply(); renderer.material.mainTexture = photo; } – Maas
You also have to set the active render texture before you read pixels, so ReadPixels() knows what it's reading from. So the full code would be like: RenderTexture.active = myRenderTexture; myTexture2D.ReadPixels(new Rect(0, 0, myRenderTexture.width, myRenderTexture.height), 0, 0); myTexture2D.Apply(); – Government
I get "cannot cast from source type to render type" for RenderTexture.active = myRenderTexture; ... can you assign a renderer.material.mainTexture to the .active? – Caboodle
In my case I have one single RenderTexture that is always assigned to a camera. Why do I still have to set RenderTexture.active to it when reading the texture? I don't understand this. – Dockery