There are a couple of examples of how to take image input, perhaps from the iPhone camera or from your own image, quickly map those images to an OpenGL texture, and then render them with an OpenGL shader. This post is for those of you who don't necessarily want to render the image to the screen, but want to perform some OpenGL operations and then read the image back out. Luckily, the same API that allows you to map images to textures also allows you to read data back out of those textures without having to use glReadPixels(…) or some other method that takes a long time. Here's how it's done…
I'm going to skip all the boilerplate code to set up an OpenGL context and instantiate your shaders, and simply focus on the important bits that aren't readily available in the existing examples provided by Apple.
First, to render to a texture, you need an image that is compatible with the OpenGL texture cache. Images created with the camera API are already compatible and you can immediately map them as inputs. Suppose, though, that you want to create an image to render into and later read out for some other processing. You have to create the image with a special property: the attributes dictionary for the image must include kCVPixelBufferIOSurfacePropertiesKey as one of its keys.
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
kCVPixelBufferIOSurfacePropertiesKey,
empty);
Now you can create the CVPixelBuffer that you'll render to.
// for simplicity, let's just say the image is 640x480
CVPixelBufferRef renderTarget;
CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
kCVPixelFormatType_32BGRA,
attrs,
&renderTarget);
// in real life check the error return value of course.
OK, you have a pixel buffer with the correct attributes to work with the texture cache. Let's render to it.
// first create a texture from our renderTarget
// textureCache will be what you previously made with CVOpenGLESTextureCacheCreate
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage (
kCFAllocatorDefault,
textureCache,
renderTarget,
NULL, // texture attributes
GL_TEXTURE_2D,
GL_RGBA, // opengl format
640,
480,
GL_BGRA, // native iOS format
GL_UNSIGNED_BYTE,
0,
&renderTexture);
// check err value
// set the texture up like any other texture
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// bind the texture to the framebuffer you're going to render to
// (boilerplate code to make a framebuffer not shown)
glBindFramebuffer(GL_FRAMEBUFFER, renderFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
// great, now you're ready to render to your image.
After you've got your render target set up, you just render with your OpenGL shaders like all the examples show. You might use another camera image as input: create another texture with CVOpenGLESTextureCacheCreateTextureFromImage, bind that texture for your shader (glActiveTexture and glBindTexture), use your program (glUseProgram), and so on.
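For reference, the input side might look roughly like this. Treat it as a sketch rather than tested code; cameraPixelBuffer, textureCache, program and inputTextureUniform are placeholder names for objects you would already have set up elsewhere.
// sketch: map a camera frame to an input texture and bind it for the shader
// (cameraPixelBuffer, textureCache, program and inputTextureUniform are assumed to exist)
CVOpenGLESTextureRef inputTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
textureCache,
cameraPixelBuffer, // BGRA buffer from the capture callback
NULL, // texture attributes
GL_TEXTURE_2D,
GL_RGBA,
640,
480,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&inputTexture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(inputTexture),
CVOpenGLESTextureGetName(inputTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glUseProgram(program);
glUniform1i(inputTextureUniform, 0); // point the sampler uniform at texture unit 0
// ...set up vertex attributes and call glDrawArrays as in the Apple samples...
// when the frame is done, flush the cache and release the texture
CVOpenGLESTextureCacheFlush(textureCache, 0);
CFRelease(inputTexture);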
When you're done rendering, your render target now contains your rendered output, and you don't need to use glReadPixels to retrieve it. Instead, you just lock the memory and continue processing it. Here is an example of something you might do…
if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(renderTarget,
kCVPixelBufferLock_ReadOnly)) {
uint8_t* pixels=(uint8_t*)CVPixelBufferGetBaseAddress(renderTarget);
// process pixels how you like!
CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
}
Have fun! Sorry no downloadable iOS project at this time.
Hi,
Thank you for your nice post on rendering to a texture with the iOS 5 texture cache API. I like it.
Thanks
Hi,
Do you think it's possible to use this with a different CVPixelFormatType such as kCVPixelFormatType_420YpCbCr8BiPlanarFullRange? I guess one should change the parameters of CVOpenGLESTextureCacheCreateTextureFromImage(), but I can't find how…
Thanks
It’s easy to get YUV420 as in input texture but I don’t know how to get that as an output.
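For the input side, the biplanar formats get mapped one plane at a time, roughly the way Apple's GLCameraRipple sample does it. A sketch only, not tested here; videoTextureCache, pixelBuffer, width and height are placeholders:
// sketch: map the two planes of a 420f/420v pixel buffer as separate input textures
CVOpenGLESTextureRef lumaTexture = NULL;
CVOpenGLESTextureRef chromaTexture = NULL;
// Y plane (plane index 0)
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_LUMINANCE,
width,
height,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
0, // plane index
&lumaTexture);
// interleaved CbCr plane (plane index 1), half resolution
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_LUMINANCE_ALPHA,
width / 2,
height / 2,
GL_LUMINANCE_ALPHA,
GL_UNSIGNED_BYTE,
1, // plane index
&chromaTexture);
// the shader then samples both textures and does the YUV -> RGB conversion itself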
I have a problem with rendering to a texture. I get a strange result:
https://devforums.apple.com/message/598513#598513
I think that something is wrong with the texture size, or with rendering a non-power-of-two texture.
Maybe you have had to deal with a similar problem?
Hi Dennis,
thank you for your information and code pieces.
I have a question: in the call to CVOpenGLESTextureCacheCreateTextureFromImage you use a texture named “rgbTexture”, though in the rest of your code you use “renderTexture”. I assume that it is a mistake and only one texture should be used. Or is it on purpose?
Corrected that and another typo, thanks.
Hi Dennis,
I have tried to use your post nevertheless the binding that I create between pixel buffer and texture works for me only in the direction from pixel buffer to texture.
I set up an example in which I have a CMSampleBuffer. I extract from it a CVPixelBuffer – the renderTarget in your terminology. Then I create a renderTexture as you show, or as it is done in Apple's RosyWriter example. When I bind the texture to the framebuffer I can check using
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, rawPixels) that the content of renderTarget is copied to rawPixels.
Now I want to use a shader to modify the texture. To make everything simple I just want to turn each texel to red – gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0). I render to the renderTexture and using glReadPixels I check that all was turned red.
And now comes my problem, where I probably do something wrong. My understanding is that renderTarget (the pixelBuffer) should mirror the content of renderTexture. Unfortunately when I check the content of renderTarget the original content is unchanged.
Do you have any advice for me what I do wrong?
Thanks.
Jaras
Did you remember to create your renderTarget with the correct attributes?
I believe so. I derive the renderTarget from CMSampleBuffer so it is created with Camera API.
Have you tried with a CVPixelBufferRef like in the example here?
Here is the relevant piece of code that I used. Regarding your question, I also tried to create a CVPixelBufferRef as in your post and then color it red as I do in the attached code, but it did not work either.
//
// FrameConverterView.m
// JSReaderWriter2
//
// Created by Jaromir Siska on 11.01.12.
// Copyright (c) 2012 siskaj@me.com. All rights reserved.
//
#import "FrameConverterView.h"
#include "ShaderUtilities.h"
#import
enum {
ATTRIB_VERTEX,
ATTRIB_TEXTUREPOSITON,
NUM_ATTRIBUTES
};
const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat textureVertices[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
const GLubyte Indices[] = {
0,1,2,
2,3,0,
};
const GLsizei texWidth = 128;
const GLsizei texHeight = 128;
@implementation FrameConverterView
- (const GLchar *)readFile:(NSString *)name
{
NSString *path;
const GLchar *source;
path = [[NSBundle mainBundle] pathForResource:name ofType: nil];
source = (GLchar *)[[NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil] UTF8String];
return source;
}
+ (Class)layerClass {
return [CAEAGLLayer class];
}
- (void)setupLayer {
_eaglLayer = (CAEAGLLayer*) self.layer;
_eaglLayer.opaque = YES;
}
- (void)setupContext {
EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
_context = [[EAGLContext alloc] initWithAPI:api];
if (!_context) {
NSLog(@"Failed to initialize OpenGLES 2.0 context");
exit(1);
}
if (![EAGLContext setCurrentContext:_context]) {
NSLog(@"Failed to set current OpenGL context");
exit(1);
}
}
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
rawPixels = (GLubyte *)calloc(texWidth * texHeight * 4, sizeof(GLubyte));
// [self setupLayer];
// [self setupContext];
}
return self;
}
-(BOOL)createBuffers {
BOOL success = YES;
[self setupLayer];
[self setupContext];
glDisable(GL_DEPTH_TEST);
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
//====================================================
/* glGenRenderbuffers(1, &colorBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, colorBufferHandle);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBufferHandle);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
NSLog(@"Failure with framebuffer generation");
success = NO;
}
*/
//============================================
// Create a new CVOpenGLESTexture cache
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &videoTextureCache);
if (err) {
NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
success = NO;
}
// Load vertex and fragment shaders
const GLchar *vertSrc = [self readFile:@"Shader.vsh"];
const GLchar *fragSrc = [self readFile:@"Shader.fsh"];
// attributes
GLint attribLocation[NUM_ATTRIBUTES] = {
ATTRIB_VERTEX, ATTRIB_TEXTUREPOSITON,
};
GLchar *attribName[NUM_ATTRIBUTES] = {
"position", "textureCoordinate",
};
glueCreateProgram(vertSrc, fragSrc,
NUM_ATTRIBUTES, (const GLchar **)&attribName[0], attribLocation,
0, 0, 0, // we don’t need to get uniform locations in this example
&_program);
if (!_program) {
NSLog(@"Problem with shaders!");
success = NO;
}
return success;
}
- (void)renderWithSquareVertices:(const GLfloat*)squareVertices textureVertices:(const GLfloat*)textureVertices
{
glClearColor(0, 104.0/255.0, 55.0/255.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
// Use shader program.
glUseProgram(_program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(texture));
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
// Update uniform values if there are any
// Validate program before drawing. This is a good check, but only really necessary in a debug build.
// DEBUG macro must be defined in your debug configurations if that’s not already the case.
#if defined(DEBUG)
if (glueValidateProgram(_program) == 0) {
NSLog(@"Failed to validate program: %d", _program);
return;
}
#endif
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
- (void)convertSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
if (frameBufferHandle == 0) {
BOOL success = [self createBuffers];
if ( !success ) {
NSLog(@"Problem initializing OpenGL buffers.");
}
}
if (videoTextureCache == NULL)
return;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)imageBuffer;
// Create a CVOpenGLESTexture from the CVImageBuffer
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
texture = NULL;
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// Set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(texture), 0);
NSLog(@"GL error15: %d", glGetError());
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
// glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);
//
//
// if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly)) {
// uint8_t *pixels =(uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
// CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
// }
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(@"Incomplete FBO: %d", status);
// exit(1);
}
[self renderWithSquareVertices:squareVertices textureVertices:textureVertices];
// Here I check rawPixels in order to see whether shaders were applied properly; content is FF0000FF – rgba, as my shader is simple
// red – gl_FragColor = vec4(1.0,0.0,0.0,1.0)
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);
// I inspect the pixels to see the outcome; unfortunately they contain unmodified content
if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly)) {
uint8_t *pixels =(uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), 0);
// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
CFRelease(texture);
}
@end
Are you trying to render onto the input image? I think you'll have more luck creating a new pixel buffer to render onto, as in the post. From a quick glance I didn't see why it isn't working for you, but I do know of a number of people (including myself) for whom this is working.
One other difference though: I bind the framebuffer first and then call glFramebufferTexture2D. Maybe your texture isn't actually bound to the framebuffer.
Hi Dennis,
I have tried to simplify the whole problem as much as possible. I set up the code the way you suggest, i.e. I bind the framebuffer first and then call glFramebufferTexture2D. I also create a new pixel buffer to render onto, as you suggest. No luck. I attach the relevant code; I think it now just copies your post, with a piece of code for rendering to the texture added.
Can you post the code that works for you?
Here is my code:
//
// OpenGLView.m
// PixelBufferWriter
//
// Created by Jaromir Siska on 13.01.12.
// Copyright (c) 2012 siskaj@me.com. All rights reserved.
//
#import "OpenGLView.h"
#include "ShaderUtilities.h"
#include
#include
#import
enum {
ATTRIB_VERTEX,
ATTRIB_TEXTUREPOSITON,
NUM_ATTRIBUTES
};
const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
const GLsizei texWidth = 128;
const GLsizei texHeight = 128;
@implementation OpenGLView
- (const GLchar *)readFile:(NSString *)name
{
NSString *path;
const GLchar *source;
path = [[NSBundle mainBundle] pathForResource:name ofType: nil];
source = (GLchar *)[[NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil] UTF8String];
return source;
}
+ (Class)layerClass {
return [CAEAGLLayer class];
}
- (void)setupLayer {
_eaglLayer = (CAEAGLLayer*) self.layer;
_eaglLayer.opaque = YES;
}
- (void)setupContext {
EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
_context = [[EAGLContext alloc] initWithAPI:api];
if (!_context) {
NSLog(@"Failed to initialize OpenGLES 2.0 context");
exit(1);
}
if (![EAGLContext setCurrentContext:_context]) {
NSLog(@"Failed to set current OpenGL context");
exit(1);
}
}
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
rawPixels = (GLubyte *)calloc(texWidth * texHeight * 4, sizeof(GLubyte));
[self setupLayer];
[self setupContext];
[self convertPixelBuffer];
}
return self;
}
-(BOOL)createBuffers {
BOOL success = YES;
glDisable(GL_DEPTH_TEST);
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
// Create a new CVOpenGLESTexture cache
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &videoTextureCache);
if (err) {
NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
success = NO;
}
// Load vertex and fragment shaders
const GLchar *vertSrc = [self readFile:@"Shader.vsh"];
const GLchar *fragSrc = [self readFile:@"Shader.fsh"];
// attributes
GLint attribLocation[NUM_ATTRIBUTES] = {
ATTRIB_VERTEX, ATTRIB_TEXTUREPOSITON,
};
GLchar *attribName[NUM_ATTRIBUTES] = {
"position", "textureCoordinate",
};
glueCreateProgram(vertSrc, fragSrc,
NUM_ATTRIBUTES, (const GLchar **)&attribName[0], attribLocation,
0, 0, 0, // we don’t need to get uniform locations in this example
&_program);
if (!_program) {
NSLog(@"Problem with shaders!");
success = NO;
}
return success;
}
- (void)render
{
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
// Use shader program.
glUseProgram(_program);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
// Validate program before drawing. This is a good check, but only really necessary in a debug build.
// DEBUG macro must be defined in your debug configurations if that’s not already the case.
#if defined(DEBUG)
if (glueValidateProgram(_program) == 0) {
NSLog(@"Failed to validate program: %d", _program);
return;
}
#endif
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
- (void)convertPixelBuffer
{
if (frameBufferHandle == 0) {
BOOL success = [self createBuffers];
if ( !success ) {
NSLog(@"Problem initializing OpenGL buffers.");
}
}
if (videoTextureCache == NULL)
return;
CFDictionaryRef empty;
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
kCVPixelBufferIOSurfacePropertiesKey,
empty);
// for simplicity, lets just say the image is 640×480
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
kCVPixelFormatType_32BGRA,
attrs,
&pixelBuffer);
// Create a CVOpenGLESTexture from the CVImageBuffer
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
// texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// Set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(texture), 0);
NSLog(@"GL error15: %d", glGetError());
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(@"Incomplete FBO: %d", status);
// exit(1);
}
[self render];
// Here I check rawPixels in order to see whether shaders were applied properly; content is FF0000FF – rgba, as my shader is simple
// red – gl_FragColor = vec4(1.0,0.0,0.0,1.0)
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);
// I inspect the pixels to see the outcome; unfortunately they contain unmodified content
if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly)) {
uint8_t *pixels =(uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), 0);
// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
CFRelease(texture);
}
@end
I've been meaning to put something together and stick it on GitHub. Been busy with some other contract work though. I think there is another GitHub project someone started already to do this. Not sure if they got it to work yet or not.
I have the same problem as jaras. After rendering, the pixel buffer has the same values as before.
Dennis, it would be great if you could put a simple example on GitHub. I have searched the web quite extensively and I have not been able to find any such project. There are some discussions on the Apple Developer Forums – people can go from pixel buffer to texture and ask how to do the reverse binding, texture -> pixel buffer.
Here is one project that was started. The author emailed that he doesn't quite have it working yet either. I think he forgot to create a second framebuffer and bind the renderTexture to it. Anyway, I hope to be able to come back to this and provide a good example, but I'm pretty swamped with another project right now.
https://github.com/clarkeaa/iOS-Fast-Map-Texture-Editing
Jaras,
I had the same problem as you, and after trying things for days I think I have found the problem:
You have to call CFRelease(texture) BEFORE you read out the rawPixels.
Apparently the texture is not mirrored back to the pixel buffer until CFRelease(texture) is called.
Let me know if this works for you!
Dennis, thank you for this great post, it is really helpful, since there seems to be no documentation about the API. Nevertheless, I think it would be worth to mention that you have to call CFRelease before you read out the rendered texture.
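In code, the ordering being suggested is roughly this (a sketch using the names from the post, not independently verified here):
// render into renderTexture first (glDrawArrays etc.), then:
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), 0);
CVOpenGLESTextureCacheFlush(textureCache, 0);
CFRelease(renderTexture); // release BEFORE reading the pixel buffer back
if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(renderTarget,
kCVPixelBufferLock_ReadOnly)) {
uint8_t* pixels = (uint8_t*)CVPixelBufferGetBaseAddress(renderTarget);
// pixels should now reflect what was rendered
CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
}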
Good catch, I hope a few others find that helps as well and are able to get working examples. I’ll try to verify that is required in my own code when I get a chance and if that turns out to be the issue I’ll update the post accordingly.
Hi Sveno,
I have tried your advice but without success. In the code I posted above I tried to move CFRelease(texture) before the comment
“// I inspect pixels to see outcome…”, but without success. I also tried moving CVOpenGLESTextureCacheFlush earlier, with the same result.
If you have functional example can you put it on Github or display here the relevant part of the code?
Here is the relevant part of the code that works for me. Hope it helps to fix your problem. By the way, have you checked your shaders? You have to define a precision for all float variables you use in the shaders. This could also be the reason that your pixels are not updated.
- (BOOL) createOffscreenFramebuffer:(GLuint *)framebufferHandle textureCache:(CVOpenGLESTextureCacheRef *)textureCache width:(int)width height:(int)height
{
BOOL success = YES;
glDisable(GL_DEPTH_TEST);
glGenFramebuffers(1, framebufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, *framebufferHandle);
// Offscreen framebuffer texture cache
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void*) oglContext , NULL, &grayscaleTextureCache);
if (err) {
NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
success = NO;
}
// Load vertex and fragment shaders
const GLchar *vertSrc = [self readFile:@"grayscale.vsh"];
const GLchar *fragSrc = [self readFile:@"grayscale.fsh"];
// attributes
GLint attribLocation[NUM_ATTRIBUTES] = {
ATTRIB_VERTEX, ATTRIB_TEXTUREPOSITON,
};
GLchar *attribName[NUM_ATTRIBUTES] = {
"position", "textureCoordinate",
};
glueCreateProgram(vertSrc, fragSrc,
NUM_ATTRIBUTES, (const GLchar **)&attribName[0], attribLocation,
0, 0, 0, // we don’t need to get uniform locations in this example
&grayscaleProgram);
return success;
}
#pragma mark Processing
- (void) convertImageToGrayscale:(CVImageBufferRef)pixelBuffer width:(int)width height:(int)height
{
// Create oglContext if it doesn't exist
if (!oglContext) {
oglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!oglContext || ![EAGLContext setCurrentContext:oglContext]) {
NSLog(@"Problem with OpenGL context.");
return;
}
}
if (!grayscaleFramebuffer) {
[self createOffscreenFramebuffer:&grayscaleFramebuffer textureCache:&grayscaleTextureCache width:width height:height];
}
CVOpenGLESTextureRef grayscaleTexture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
grayscaleTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
width,
height,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&grayscaleTexture);
if (!grayscaleTexture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(grayscaleTexture), CVOpenGLESTextureGetName(grayscaleTexture));
// Set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindFramebuffer(GL_FRAMEBUFFER, grayscaleFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(grayscaleTexture), 0);
glViewport(0, 0, width, height);
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat textureVertices[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
// Use shader program.
glUseProgram(grayscaleProgram);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
// Update uniform values if there are any
// Validate program before drawing. This is a good check, but only really necessary in a debug build.
// DEBUG macro must be defined in your debug configurations if that’s not already the case.
#if defined(DEBUG)
if (glueValidateProgram(grayscaleProgram) == 0) {
NSLog(@"Failed to validate program: %d", grayscaleProgram);
return;
}
#endif
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindTexture(CVOpenGLESTextureGetTarget(grayscaleTexture), 0);
// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(grayscaleTextureCache, 0);
CFRelease(grayscaleTexture);
}
Hi Sveno,
thank you for the code. Actually it is quite similar to mine. The example still does not work for me. I checked my shaders; they are OK and they do work. I inspect the content of the texture with glReadPixels and the changes are written to the texture. I will continue experimenting and let you know if I find what makes the code work.
Thank you.
J.
Sorry I haven't got around to posting an example. I've been very busy with a couple of other projects, so this is lower on the priority list. (I do, however, do consulting work if it's a large problem for someone.)
Gents,
Have you found a solution for this issue? I got the same results – the data read via glReadPixels looks OK but there is nothing in the pixelBuffer. Looks like the data was never synced.
Interestingly, when I put my code into Apple's RosyWriter everything seems to be working, but not as standalone code.
OK, this bloody thing works, but only on the device. In the simulator I got nothing, while on the device it works like a charm…
Glad to hear it's working for you. Perhaps I should mention in the article that it only works on a device. I think I actually knew that but hadn't called it out.
Poopi is right. On the device it really works; in the simulator it does not. I should have tried sooner – I know that the CIContext method render:toCVPixelBuffer: only works on the device.
Folks,
Session 419 from the 2011 WWDC covers this in some detail. There is a fully functioning sample app, ChromaKey, that shows an end-to-end solution. It takes 1024×768 video from an iPad 2, renders it in OpenGL, uses a shader to “knock out” the background color, and then displays the result to the screen, with an option to save it to a video file in your photo library. The video file is also at 1024×768, at 30 fps.
The app isn't publicly available, but I emailed Eryk Vershen and asked him, and he sent it to me. He specifically asked me not to post it online, but if you PM me I can send it to you. I haven't posted to this blog before, so I'm not sure how to get replies. I'll try to check in, or do a search for WareTo dot com and send me an email there. (I'm not going to post my email because I don't want the spam bots to find me.)
Duncan C
@Duncan – Duncan if you can mail me ChromaKey example I will greatly appreciate it. My email is siska10@me.com.
Thank you. jaras
Have you gotten anything working using anything other than NULL for the “// texture attributes” parameter?
I can't find a documented case of how to use them anywhere.
Good question. All the documentation I ever read always used NULL there when creating the texture from the CVImage. I’ve never had any reason to try and use something else. Do you have something specific you need to accomplish?
I got this working for the most part in both directions. Pretty incredible stuff.
One issue I’m having, though, is that the pixelbuffer that mirrors the framebuffer seems to be flipped in both axes. I can clearly transform that after the fact, but there must be a way to make this happen more directly in the rendering pipeline.
This is ringing some opengl bell in the recesses of my brain, any advice?
Look back a number of posts to where Jaras posted some code. You might need to change either the orientation of the texture vertices or the square vertices. One thing to note is that iOS renders from bottom to top instead of top to bottom like some OpenGL implementations I've heard of, so a lot of times you get an upside-down image at the start. I rendered my image like Jaras did. Notice the texture vertices are shaped like a Z, starting at the bottom and going toward the top.
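For illustration, one possible flipped variant of the texture coordinates from the code above (just a sketch; the exact values depend on your vertex order):
// flip the t (vertical) coordinate so the image comes out right side up
const GLfloat flippedTextureVertices[] = {
0.0f, 1.0f, // bottom-left vertex samples the top-left of the image
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};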
What kind of recording FPS are you guys getting with this? I hear with glReadPixels, you get around ~10-15 FPS.
I could easily do 30 fps unless I needed to do a color conversion from the rendered BGRA texture to something else. I don’t think you can get anything out of the OpenGL texture besides the BGRA so if you can’t use BGRA and you need to convert it that will add much more time to the process than it takes to render the texture.
I got all of this working, thank you so much. The examples and discussion here were much more helpful than even the Apple sample code.
Does anyone have any information on how this works under the hood? I’m assuming there is some unique hardware being leveraged here. I haven’t been able to find any information on this at all.
Great, thanks Dennis.
Just to be clear, is it possible to use this to render OpenGL data to an empty CVPixelBuffer, and then read that CVPixelBuffer from the CPU? In other words, is it fine if the CVPixelBufferRef doesn’t actually hold any texture?
I think by “empty” you meant that it's uninitialized memory, right? You do have to create the pixel buffer, but that buffer does not have to be initially filled, and it doesn't have to be passed to you from the camera API or anything. But yes, you fill the texture in OpenGL that is backed by your pixel buffer in the texture cache, and that in turn can be read out and handled however you want.
Yeah I got it working also, pretty cool stuff. This blog post and the comments have been very helpful, thanks!
Interesting stuff. But why doesn’t it work on simulator? Any idea how to make it work on the simulator?
@Duncan – Duncan if you can mail me ChromaKey example I too will greatly appreciate it. My email is michelle@mooncatventures.com
I really need to get something like this working with ffmpeg. I think I know how to use shaders to do YUV to RGB, but to get around using sws_scale, which is very slow, I need to create a CMSampleBufferRef or at least a CVPixelBufferRef. I know how to do it by creating an AVPicture and then running that through the scaler, but that is one huge bottleneck. Everyone says this demo will help.
Is there a way to use this API to render to the screen (render buffer) instead of a texture and still have a CVPixelBuffer that contains the data?
Just found this via StackOverflow.
Is there any way to use this method to get the raw float values out of a framebuffer? (I’m not sure what the format renderTarget should be in this case.)
glReadPixels on OpenGL ES 2.0 forces a type conversion to GL_UNSIGNED_BYTE and I’m trying to figure out a way around this. My ultimate goal is to implement a generic, floating-point matrix multiply routine for arbitrary-sized matrices (e.g., much larger than 4×4) using OpenGL ES 2.0 on iOS.
Great stuff. One thing to note is that I had to do a glFinish() before reading the cache – I would imagine the reasoning is pretty obvious, but basically OpenGL may not have executed the drawing by the time you check the cache…
And to ck, could you just take the framebuffer values and divide them by 255 to get the floating point value? I don't think those floating point values are stored anywhere, let alone the framebuffer, at least not on the iPhone…
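Regarding the glFinish() note above, the idea in code is roughly this (a sketch; renderTarget is the buffer from the post):
glFinish(); // make sure the GPU has actually finished rendering
CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
// ...read the pixels as shown in the post...
CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);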
I saw you had some GitHub project for this… could you possibly share it, or point to a simple working example, please?
The only working example I can find is GPUImage, which is a great project, but not exactly the most straightforward read and utterly confounding to someone starting out with this.
I can't find the WWDC 2011 sample for the iPad green screen demo anywhere, and the documentation from Apple isn't all that great. I'm studying up on my OpenGL, but I really need a hand with this particular technique of saving things for the custom green-screen app I've made. Any help towards a simple example showing this technique would be gratefully received.
And Duncan c – if you are reading this, I’m desperate to get a copy of that app too.
Thanks.
I haven't posted this specific project to GitHub. I gleaned the important bits from some work I was doing on a full-time project, so it couldn't be shared in its entirety.
I keep meaning to post something more, but in the meantime the projects that are paying the bills are plentiful.
ok, I dig.. I’ll keep trying to figure it out.. thanks for the post though.. 🙂