Porting Krita to OpenGL 3.1/ES 2.0

Krita was the first painting application with an OpenGL-accelerated canvas; we had one before Photoshop did. That also means the code was getting quite old-fashioned. These days, life is supposed to be better, or more flexible in any case. However, even though a 2D canvas is a simple thing, once you factor in rotation, zooming, panning and so on, the potential for bugs is quite big, and we’ve been fixing bugs in the old code for ages.

So I didn’t want to throw that away; instead I wanted as clean and straightforward a port of the old code as possible to start from. The old code mostly looked like this (for painting the transparency checkers background):

KisCoordinatesConverter *converter = coordinatesConverter();

QTransform textureTransform;
QTransform modelTransform;
QRectF textureRect;
QRectF modelRect;

converter->getOpenGLCheckersInfo(&textureTransform, &modelTransform, &textureRect, &modelRect);

KisConfig cfg;
GLfloat checkSizeScale = KisOpenGLImageTextures::BACKGROUND_TEXTURE_CHECK_SIZE / static_cast<GLfloat>(cfg.checkSize());

textureTransform *= QTransform::fromScale(checkSizeScale / KisOpenGLImageTextures::BACKGROUND_TEXTURE_SIZE,
                                            checkSizeScale / KisOpenGLImageTextures::BACKGROUND_TEXTURE_SIZE);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glViewport(0, 0, width(), height());
glOrtho(0, width(), height(), 0, NEAR_VAL, FAR_VAL);

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
loadQTransform(textureTransform);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
loadQTransform(modelTransform);

glBindTexture(GL_TEXTURE_2D, m_d->openGLImageTextures->backgroundTexture());
glEnable(GL_TEXTURE_2D);

glBegin(GL_TRIANGLES);

glTexCoord2f(textureRect.left(), textureRect.bottom());
glVertex2f(modelRect.left(), modelRect.bottom());

glTexCoord2f(textureRect.left(), textureRect.top());
glVertex2f(modelRect.left(), modelRect.top());

glTexCoord2f(textureRect.right(), textureRect.bottom());
glVertex2f(modelRect.right(), modelRect.bottom());

glTexCoord2f(textureRect.left(), textureRect.top());
glVertex2f(modelRect.left(), modelRect.top());

glTexCoord2f(textureRect.right(), textureRect.top());
glVertex2f(modelRect.right(), modelRect.top());

glTexCoord2f(textureRect.right(), textureRect.bottom());
glVertex2f(modelRect.right(), modelRect.bottom());

glEnd();

glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

In other words: we set a projection, a transformation matrix for the texture and one for the model/view, and then start drawing vertices. Pretty simple. I was rather surprised that Google turned up no clear tutorial on converting code like this. I’ve read a bunch of modern OpenGL books and tutorials by now, and they pretty much all have the same order of explanation, emphasize the same things and cover the same “advanced” topics. But I couldn’t figure out how to draw my checkers or the tiles for my image. Yeah, I’m a linguist, not a mathematician, and I probably read these tutorials wrong or something.

In any case, after going through the Qt OpenGL examples, the tutorials on Wikibooks, the Red, Orange and Blue books, Matt Gattis’s notes on porting to WebGL, and more confused questions and more confusing answers on Stack Overflow than I care to count, I finally got something that works and is as straight a translation of the old code as possible.

Purists will cavil at my use of attribute arrays and glDrawArrays, but the alternative, as far as I can tell, would be either to redo all the matrix calculations and use per-tile matrices to place my tiles in the right location, or to update and upload new vertex buffer objects all the time. This works, and in the future it might even be made pretty.

So, for posterity, and because there might be others in the same spot as me (to wit, tasked with porting OpenGL 1.3 code to OpenGL ES 2.0 or OpenGL 3.1 without compatibility profile), here’s a summary of my current code.

The vertex shader:

uniform mat4 modelViewProjection;
uniform mat4 textureMatrix;

attribute highp vec4 a_vertexPosition;
attribute mediump vec4 a_textureCoordinate;

varying vec4 v_textureCoordinate;

void main()
{
    gl_Position = modelViewProjection * a_vertexPosition;
    v_textureCoordinate = textureMatrix * a_textureCoordinate;
}

The fragment shader (needs to be expanded to handle color correction):

uniform sampler2D texture0;

varying mediump vec4 v_textureCoordinate;

void main() {
    gl_FragColor = texture2D(texture0, v_textureCoordinate.st);
}
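For the record, the color-correction expansion mentioned above would most likely slot in as an extra lookup in this shader. A hypothetical sketch, loosely modeled on 3D-LUT shaders such as the ones OCIO generates (the lut3d uniform is an assumption of mine, and texture3D needs an extension on plain OpenGL ES 2.0):

```glsl
uniform sampler2D texture0;
uniform sampler3D lut3d;   // hypothetical: 3D LUT produced by the color management system

varying mediump vec4 v_textureCoordinate;

void main() {
    vec4 color = texture2D(texture0, v_textureCoordinate.st);
    // Look the color up in the LUT instead of emitting it directly.
    gl_FragColor = vec4(texture3D(lut3d, color.rgb).rgb, color.a);
}
```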

And finally the code. The shader programs are all done using Qt’s shader classes, and I don’t show that code here — it’s in the calligra git repo anyway.

KisCoordinatesConverter *converter = coordinatesConverter();

QTransform textureTransform;
QTransform modelTransform;
QRectF textureRect;
QRectF modelRect;

converter->getOpenGLCheckersInfo(&textureTransform, &modelTransform, &textureRect, &modelRect);

// XXX: getting a config object every time we draw the checkers is bad for performance!
KisConfig cfg;
GLfloat checkSizeScale = KisOpenGLImageTextures::BACKGROUND_TEXTURE_CHECK_SIZE / static_cast<GLfloat>(cfg.checkSize());

textureTransform *= QTransform::fromScale(checkSizeScale / KisOpenGLImageTextures::BACKGROUND_TEXTURE_SIZE,
                                            checkSizeScale / KisOpenGLImageTextures::BACKGROUND_TEXTURE_SIZE);

m_d->checkerShader->bind();

QMatrix4x4 projectionMatrix;
projectionMatrix.setToIdentity();
projectionMatrix.ortho(0, width(), height(), 0, NEAR_VAL, FAR_VAL);

// Set view/projection matrices
QMatrix4x4 modelMatrix(modelTransform);
modelMatrix.optimize();
modelMatrix = projectionMatrix * modelMatrix;
m_d->checkerShader->setUniformValue("modelViewProjection", modelMatrix);

QMatrix4x4 textureMatrix(textureTransform);
m_d->checkerShader->setUniformValue("textureMatrix", textureMatrix);

// Setup the geometry for rendering
QVector<QVector3D> vertices;
vertices << QVector3D(modelRect.left(),  modelRect.bottom(), 0.f)
         << QVector3D(modelRect.left(),  modelRect.top(),    0.f)
         << QVector3D(modelRect.right(), modelRect.bottom(), 0.f)
         << QVector3D(modelRect.left(),  modelRect.top(),    0.f)
         << QVector3D(modelRect.right(), modelRect.top(),    0.f)
         << QVector3D(modelRect.right(), modelRect.bottom(), 0.f);

m_d->checkerShader->enableAttributeArray(PROGRAM_VERTEX_ATTRIBUTE);
m_d->checkerShader->setAttributeArray(PROGRAM_VERTEX_ATTRIBUTE, vertices.constData());

QVector<QVector2D> texCoords;
texCoords << QVector2D(textureRect.left(),  textureRect.bottom())
          << QVector2D(textureRect.left(),  textureRect.top())
          << QVector2D(textureRect.right(), textureRect.bottom())
          << QVector2D(textureRect.left(),  textureRect.top())
          << QVector2D(textureRect.right(), textureRect.top())
          << QVector2D(textureRect.right(), textureRect.bottom());

m_d->checkerShader->enableAttributeArray(PROGRAM_TEXCOORD_ATTRIBUTE);
m_d->checkerShader->setAttributeArray(PROGRAM_TEXCOORD_ATTRIBUTE, texCoords.constData());

// render checkers
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_d->openGLImageTextures->checkerTexture());

glDrawArrays(GL_TRIANGLES, 0, 6);

glBindTexture(GL_TEXTURE_2D, 0);
m_d->checkerShader->release();

For Krita, there are quite a few TODOs left:

  • Restore the OpenGL outline cursor (for now we use the QPainter one)
  • Render onto a framebuffer object so this code can be integrated into Krita Sketch
  • Update the image texture tiles in a thread
  • Update the projection in a thread
  • Restore the color management using OCIO
  • Check whether it runs on Windows, OS X (and Android)
  • Maybe move the layer composition to OpenGL using the GPUImage shader code?

Especially the testing on Windows will be interesting, since the old OpenGL canvas never worked on Windows.