Rendering to a texture with the iOS 5 texture cache API.

There are a couple of examples of how to take image input, perhaps from the iPhone camera or from your own image, quickly map those images to OpenGL textures, and then render them with an OpenGL shader. This post is for those of you who don’t necessarily want to render the image to the screen, but want to perform some OpenGL operations and then read the image back out. Luckily, the same API that lets you map images to textures also lets you read data back out of those textures without resorting to glReadPixels(…) or some other method that takes a long time. Here’s how it’s done…

I’m going to skip all the boilerplate code to set up an OpenGL context and instantiate your shaders, and focus on the important bits that aren’t readily available in the existing examples provided by Apple.
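
(If you need a refresher, that setup looks roughly like the following sketch. The variable names, like textureCache, are just placeholders, and you’ll need the OpenGLES and CoreVideo frameworks linked in.)

  EAGLContext* context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
  [EAGLContext setCurrentContext:context];

  // the texture cache used for every CVOpenGLESTextureCacheCreateTextureFromImage call below
  CVOpenGLESTextureCacheRef textureCache;
  CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
      NULL,                      // cache attributes
      (__bridge void *)context,  // the iOS 5 SDK types this parameter as void *, hence the cast under ARC
      NULL,                      // texture attributes
      &textureCache);
  // err should be kCVReturnSuccess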

First, to render to a texture, you need an image that is compatible with the OpenGL texture cache. Images created with the camera API are already compatible, and you can immediately map them as inputs. Suppose, though, that you want to create an image to render to and later read out for some other processing. You have to create the image with a special property: the attributes dictionary for the image must include kCVPixelBufferIOSurfacePropertiesKey as one of its keys.

  // an empty IOSurface properties dictionary to use as the attribute value
  CFDictionaryRef empty;
  CFMutableDictionaryRef attrs;
  empty = CFDictionaryCreate(kCFAllocatorDefault,
      NULL,
      NULL,
      0,
      &kCFTypeDictionaryKeyCallBacks,
      &kCFTypeDictionaryValueCallBacks);
  attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
      1,
      &kCFTypeDictionaryKeyCallBacks,
      &kCFTypeDictionaryValueCallBacks);

  CFDictionarySetValue(attrs,
      kCVPixelBufferIOSurfacePropertiesKey,
      empty);

There. Now you can create the CVPixelBuffer that you’ll render to.

  // for simplicity, let's just say the image is 640x480
  CVPixelBufferRef renderTarget;
  CVPixelBufferCreate(kCFAllocatorDefault, 640, 480, 
    kCVPixelFormatType_32BGRA,
    attrs,
    &renderTarget);
  // in real life check the error return value of course.
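
  // once the pixel buffer exists you can release the attribute dictionaries
  // (CVPixelBufferCreate doesn't take ownership of your references to them)
  CFRelease(empty);
  CFRelease(attrs);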

OK, you have a pixel buffer with the correct attributes to work with the texture cache; let’s render to it.

  // first create a texture from our renderTarget
  // textureCache will be what you previously made with CVOpenGLESTextureCacheCreate
  CVOpenGLESTextureRef renderTexture;
  CVOpenGLESTextureCacheCreateTextureFromImage(
      kCFAllocatorDefault,
      textureCache,
      renderTarget,
      NULL, // texture attributes
      GL_TEXTURE_2D,
      GL_RGBA, // opengl internal format
      640,
      480,
      GL_BGRA, // source pixel format (native iOS layout)
      GL_UNSIGNED_BYTE,
      0, // plane index
      &renderTexture);
  // check the error return value

  // set the texture up like any other texture
  glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
      CVOpenGLESTextureGetName(renderTexture));
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

  // bind the texture to the framebuffer you're going to render to
  // (boilerplate code to make a framebuffer not shown)
  glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
      GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
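  // it doesn't hurt to verify the attachment:
  // glCheckFramebufferStatus(GL_FRAMEBUFFER) should return GL_FRAMEBUFFER_COMPLETE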

 // great, now you're ready to render to your image. 

After you’ve got your render target set up, you just render with your OpenGL shaders like all the examples show. You might use another camera image as input: create another texture with CVOpenGLESTextureCacheCreateTextureFromImage, bind that texture to your shader (glActiveTexture and glBindTexture), use your program (glUseProgram), and so on. A rough sketch of that input path is below.
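
Something like this (cameraPixelBuffer stands in for whatever CVPixelBufferRef your capture callback hands you, and program, inputTextureUniform, positionAttribute, and texCoordAttribute are placeholders for your own shader handles):

  // map the camera pixel buffer as an input texture, same API as before
  CVOpenGLESTextureRef cameraTexture;
  CVOpenGLESTextureCacheCreateTextureFromImage(
      kCFAllocatorDefault,
      textureCache,
      cameraPixelBuffer,
      NULL,
      GL_TEXTURE_2D,
      GL_RGBA,
      640, 480,
      GL_BGRA,
      GL_UNSIGNED_BYTE,
      0,
      &cameraTexture);

  glUseProgram(program);

  glActiveTexture(GL_TEXTURE0);
  glBindTexture(CVOpenGLESTextureGetTarget(cameraTexture),
      CVOpenGLESTextureGetName(cameraTexture));
  // non-mipmapped textures need a non-mipmap min filter or they sample as black
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glUniform1i(inputTextureUniform, 0); // the sampler reads from texture unit 0

  // a full-screen quad drawn as a triangle strip
  static const GLfloat squareVertices[] = { -1,-1,   1,-1,   -1,1,   1,1 };
  static const GLfloat textureCoords[]  = {  0, 0,   1, 0,    0,1,   1,1 };
  glViewport(0, 0, 640, 480);
  glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
  glEnableVertexAttribArray(positionAttribute);
  glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE, 0, textureCoords);
  glEnableVertexAttribArray(texCoordAttribute);
  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);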

When you’re done rendering, your render target contains the output image, and you don’t need glReadPixels to retrieve it. Instead, you just lock the memory and keep processing it. Here is an example of something you might do…

  if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(renderTarget,
      kCVPixelBufferLock_ReadOnly)) {
    uint8_t* pixels=(uint8_t*)CVPixelBufferGetBaseAddress(renderTarget);
    // process pixels how you like!
    CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
  }
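
Two practical notes: make sure the GPU has actually finished rendering before you read on the CPU (glFinish() is the blunt way to guarantee that), and the buffer’s rows may be padded, so use CVPixelBufferGetBytesPerRow rather than assuming width * 4. Something like this:

  glFinish(); // make sure rendering has completed before touching the pixels on the CPU

  if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(renderTarget,
      kCVPixelBufferLock_ReadOnly)) {
    uint8_t* base = (uint8_t*)CVPixelBufferGetBaseAddress(renderTarget);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget); // rows may be padded

    // example: average the red channel over the whole image
    uint64_t redSum = 0;
    for (size_t y = 0; y < 480; y++) {
      const uint8_t* row = base + y * bytesPerRow;
      for (size_t x = 0; x < 640; x++) {
        redSum += row[4 * x + 2]; // BGRA byte order: B, G, R, A
      }
    }
    float averageRed = redSum / (640.0f * 480.0f);
    NSLog(@"average red channel: %f", averageRed);

    CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
  }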

Have fun! Sorry, there’s no downloadable iOS project at this time.


60 Responses to Rendering to a texture with the iOS 5 texture cache API.

  1. infrid says:

    Can someone please look at this?

    I’ve been trying on and off all week, and reading up on opengl too… I just get a blank screen, or the clear color (if I do a clear). I never get any geometry..

    http://stackoverflow.com/questions/16224598/why-cant-i-write-out-my-opengl-fbo-to-an-avassetwriter

  2. scott says:

    The docs at http://developer.apple.com/library/ios/#documentation/CoreVideo/Reference/CVOpenGLESTextureCacheRef/Reference/reference.html seem to imply that the correct way to write to a CVPixelBuffer using OpenGLES is by creating a renderbuffer (not a texture) using CVOpenGLESTextureCacheCreateTextureFromImage:

    //Mapping a BGRA buffer as a renderbuffer:
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, GL_RENDERBUFFER, GL_RGBA8_OES, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0, &outTexture);

    The comments similarly imply that a texture created with CVOpenGLESTextureCacheCreateTextureFromImage is only supposed to be used as a source (despite the fact that using it as a target clearly works, at least in most cases):

    //Mapping a BGRA buffer as a source texture:
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0, &outTexture);

    infrid, if you modify your code to create a renderbuffer instead of a texture, is AVAssetWriter happier? Also, do you see different behavior between simulator and device?

  3. Imsdal says:

    Can someone please post a downloadable (simple) working example of this? Such does not seem to be available anywhere on the web.

  4. Doug says:

    @Duncan Hi Duncan, would you mind sending me the ChromaKey source code from WWDC 2011, my email is douglasallen27(at)googlemail.com

    I’ve been trying to get it running for the last few hours with no luck.

  5. howard says:

    Hi,

    I am also interested in the source code. Can you please send it to me as well?

    It’s a pity that the Apple guy does not want to share his code.

    Regards,
    Howard

  6. howard says:

    Hi,

    I am also interested in the source code. Can you please send it to me as well? My email is: wminghao(at)gmail.com

    It’s a pity that the Apple guy does not want to share his code.

    Regards,
    Howard

  7. dustypixels says:

    @Duncan – Duncan if you can mail me ChromaKey example I will greatly appreciate it. My email is ryanharris@outlook.com
    Thank you. Ryan

  8. Pingback: Recording from the iPad screen | LiquidSketch

  9. Tjaž says:

    This is so helpful!!! Thank you a lot!

    I have managed to compile and run it on my iPad mini, and it works like a charm.

    At the moment I am using it to implement my own physics for my game using a GPU for parallel computations.

    This API should be known to more developers! On the internet I have found barely any information regarding the iOS texture cache API.

  10. Patrick says:

    @Duncan source code would be incredibly helpful, could you please send it to me?
    pat@acmeaom.com

Comments are closed.