There are a couple of examples out there showing how to take image input, perhaps from the iPhone camera or from your own image, quickly map those images to an OpenGL texture, and then render them with an OpenGL shader. This post is for those of you who don't necessarily want to render the image on the screen, but want to perform some OpenGL operations and then read the image back out. Luckily, the same API that allows you to map images to textures also allows you to read data back out from the textures without having to use glReadPixels(...) or some other slow method. Here's how it's done...

I'm going to skip all the boilerplate code to set up an OpenGL context and instantiate your shaders, and simply focus on the important bits that aren't readily available in the existing examples provided by Apple.
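One piece of that setup is worth calling out, though, since everything below depends on it: the texture cache itself. Here's a minimal sketch of creating one, assuming you already have an EAGLContext named context (on older SDKs you may need a bridge cast for the context argument):

    CVOpenGLESTextureCacheRef textureCache;
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                                NULL,          // cache attributes
                                                context,       // your EAGLContext
                                                NULL,          // texture attributes
                                                &textureCache);
    // err should be kCVReturnSuccess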

First, to render to a texture, you need an image that is compatible with the OpenGL texture cache. Images that were created with the camera API are already compatible, and you can immediately map them as inputs. Suppose you want to create an image to render to and later read out for some other processing, though. You have to create the image with a special property: the attributes dictionary for the image must include kCVPixelBufferIOSurfacePropertiesKey as one of its keys.

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                      1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(attrs,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);

There. Now you can create a CVPixelBuffer that you'll render to...

    // for simplicity, let's just say the image is 640x480
    CVPixelBufferRef renderTarget;
    CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                        kCVPixelFormatType_32BGRA,
                        attrs,
                        &renderTarget);
    // in real life check the error return value of course.
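Speaking of checking the return value, CVPixelBufferCreate returns a CVReturn, so a sketch of that check plus cleanup of the two dictionaries from earlier might look like this:

    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                                       kCVPixelFormatType_32BGRA,
                                       attrs,
                                       &renderTarget);
    if (err != kCVReturnSuccess) {
        NSLog(@"CVPixelBufferCreate failed: %d", (int)err);
    }
    // the pixel buffer keeps what it needs from the attributes,
    // so the dictionaries can be released now
    CFRelease(attrs);
    CFRelease(empty);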

OK, you have a pixel buffer with the correct attribute to work with the texture cache; let's render to it.

    // first create a texture from our renderTarget
    // textureCache will be what you previously made with CVOpenGLESTextureCacheCreate
    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        textureCache,
        renderTarget,
        NULL, // texture attributes
        GL_TEXTURE_2D,
        GL_RGBA, // opengl format
        640,
        480,
        GL_BGRA, // native iOS format
        GL_UNSIGNED_BYTE,
        0,
        &renderTexture);
    // check err value

    // set the texture up like any other texture
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
                  CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // bind the texture to the framebuffer you're going to render to
    // (boilerplate code to make a framebuffer not shown)
    glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    // great, now you're ready to render to your image.
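Before you start drawing, it's worth a quick sanity check that the attachment actually worked:

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"framebuffer incomplete: 0x%x", status);
    }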

After you've got your render target set up, you just render with the OpenGL shaders like all the examples show. You might use another camera image as input, create another texture with CVOpenGLESTextureCacheCreateTextureFromImage, and then bind that texture to your shader (glActiveTexture and glBindTexture). Use your program (glUseProgram), etc.
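For instance, inside a capture callback the input side might look something like this. This is just a sketch: cameraPixelBuffer, program, and the uniform name "inputTexture" are placeholders, not anything from the examples above.

    // cameraPixelBuffer is the CVPixelBufferRef from your capture callback
    size_t width  = CVPixelBufferGetWidth(cameraPixelBuffer);
    size_t height = CVPixelBufferGetHeight(cameraPixelBuffer);

    CVOpenGLESTextureRef cameraTexture;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        textureCache,
        cameraPixelBuffer,
        NULL,
        GL_TEXTURE_2D,
        GL_RGBA,
        (GLsizei)width,
        (GLsizei)height,
        GL_BGRA,
        GL_UNSIGNED_BYTE,
        0,
        &cameraTexture);

    if (err == kCVReturnSuccess) {
        glUseProgram(program); // your compiled and linked shader program
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(CVOpenGLESTextureGetTarget(cameraTexture),
                      CVOpenGLESTextureGetName(cameraTexture));
        glUniform1i(glGetUniformLocation(program, "inputTexture"), 0);
        // ... issue your draw call, then CFRelease(cameraTexture) when done
    }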

When you're done rendering, your render target contains the output image, and you don't need glReadPixels to retrieve it. Instead, you just lock the memory and continue processing it. Here is an example of something you might do...

    glFinish(); // make sure the GPU has finished rendering before reading

    if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(renderTarget,
                                                         kCVPixelBufferLock_ReadOnly)) {
        uint8_t* pixels = (uint8_t*)CVPixelBufferGetBaseAddress(renderTarget);
        // process pixels how you like!
        CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
    }
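One gotcha when you "process pixels how you like": the buffer's rows may be padded, so step through it using CVPixelBufferGetBytesPerRow rather than assuming width * 4. A sketch that averages the blue channel, just to show the traversal:

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);
    size_t width  = CVPixelBufferGetWidth(renderTarget);
    size_t height = CVPixelBufferGetHeight(renderTarget);

    uint64_t blueSum = 0;
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = pixels + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            blueSum += row[x * 4]; // BGRA: byte 0 of each pixel is blue
        }
    }
    uint64_t avgBlue = blueSum / (width * height);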

Have fun! Sorry, no downloadable iOS project at this time.