This is not shader-based full-screen anti-aliasing in the style of FXAA; for better compatibility, it is based on the fixed-pipeline multisampling approach.
First, the development environment: I am using VC2013 + GLEW 1.11.
Before controlling multisampling from the program, you need to go into the graphics driver's control panel and make sure the anti-aliasing setting is set to "application-controlled". The exact name of this setting varies between graphics vendors.
Now for the main point.
On Windows, to set up multisampling from the program you must use the wglChoosePixelFormatARB function. Most articles on the internet fetch this function's address directly, which is unnecessary trouble: GLEW already exposes this entry point, and all we have to do is use it correctly. Using it correctly means creating a temporary window. Through this window we initialize GLEW and set a basic pixel format, then use wglChoosePixelFormatARB to obtain an available multisampled pixel format, and finally set that pixel format on the real rendering window.
//--- Create a temporary window
HWND secondwnd;
WNDCLASS wc; // window class structure
wc.style = CS_HREDRAW | CS_VREDRAW; // window style
wc.lpfnWndProc = WndProc; // window procedure
wc.cbClsExtra = 0; // extra class bytes
wc.cbWndExtra = 0; // extra window bytes
wc.hInstance = NULL; // application instance handle
wc.hIcon = LoadIcon(0, IDI_APPLICATION); // window icon
wc.hCursor = LoadCursor(NULL, IDC_ARROW); // window cursor
wc.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH); // background brush
wc.lpszMenuName = NULL; // window menu
wc.lpszClassName = L"BaseWndClass"; // window class name
if (!RegisterClass(&wc)) // check whether registration succeeded
{
MessageBox(L"Window class registration failed");
return;
}
// Create the window
secondwnd = CreateWindow(L"BaseWndClass", // window class name
L"MyFrom", // window title
WS_OVERLAPPEDWINDOW, // window style
CW_USEDEFAULT,
CW_USEDEFAULT, // window position
CW_USEDEFAULT,
CW_USEDEFAULT, // window size
NULL, // parent window
NULL, // menu
NULL, // instance handle
NULL); // returns the new window handle on success, NULL on failure
if (secondwnd == 0) // check whether creation succeeded
{
MessageBox(L"Window creation failed");
return;
}
::ShowWindow(secondwnd, SW_HIDE); // keep the temporary window hidden
::UpdateWindow(secondwnd); // paint the window
PIXELFORMATDESCRIPTOR pfd = // define a basic pixel format
{
sizeof(PIXELFORMATDESCRIPTOR), // size of this pixel format descriptor
1, // version number
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA, // request an RGBA format
32, // 32-bit color
0, 0, 0, 0, 0, 0, // color bits ignored
8, // 8-bit alpha buffer
0, // shift bit ignored
0, // no accumulation buffer
0, 0, 0, 0, // accumulation bits ignored
32, // 32-bit Z-buffer (depth buffer)
8, // 8-bit stencil buffer
0, // no auxiliary buffer
PFD_MAIN_PLANE, // main drawing layer
0, // reserved
0, 0, 0 // layer masks ignored
};
//--- Use this temporary window to obtain an HGLRC
HDC secondDC = ::GetDC(secondwnd);
int nIndex = ChoosePixelFormat(secondDC, &pfd); // choose the pixel format we just defined
if (nIndex == 0)
{
return;
}
SetPixelFormat(secondDC, nIndex, &pfd); // set the pixel format
glRC = wglCreateContext(secondDC);
wglMakeCurrent(secondDC, glRC);
//--- Now initialize GLEW
GLenum glerr = glewInit();
if (glerr != GLEW_OK)
{
MessageBox(L"GLEW could not be initialized!");
}
else
{
OutputDebugString(L"GLEW loaded successfully\n");
}
UINT Multisample = 4; // 4x multisampling rate
int pixelformat;
UINT numformat;
float fAttributes[] = { 0, 0 };
int iAtributes[] =
{ WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_ACCELERATION_ARB,
WGL_FULL_ACCELERATION_ARB,
WGL_COLOR_BITS_ARB, 24,
WGL_ALPHA_BITS_ARB, 8,
WGL_DEPTH_BITS_ARB, 32,
WGL_STENCIL_BITS_ARB, 8,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
WGL_SAMPLES_ARB, Multisample,
0, 0
};
//--- Query an available multisampled pixel format
if (!wglChoosePixelFormatARB(secondDC, iAtributes,
fAttributes, 1, &pixelformat, &numformat))
{
MessageBox(L"No anti-aliased pixel format could be found");
}
//--- Delete the HGLRC obtained from the temporary window, and clean up the temporary window
wglMakeCurrent(NULL, NULL);
wglDeleteContext(glRC);
::ReleaseDC(secondwnd, secondDC);
::DestroyWindow(secondwnd);
//--- Set the multisampled pixel format we obtained on the real window
SetPixelFormat(this->GetDC()->m_hDC, pixelformat, &pfd); // set the pixel format
glRC = wglCreateContext(this->GetDC()->m_hDC);
wglMakeCurrent(this->GetDC()->m_hDC, glRC);
//----------------------
The code above initializes OpenGL in the way I described.
Once it is set up like this, you can turn multisampling on and off with glEnable/glDisable.
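For example, a minimal toggle inside the render function might look like this (m_bUseMSAA is an illustrative flag, not from the original code):
//----------------------
if (m_bUseMSAA)
glEnable(GL_MULTISAMPLE); // anti-aliased rendering from here on
else
glDisable(GL_MULTISAMPLE); // back to aliased rendering
//----------------------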
This is just a short tutorial about drawing primitives in OpenGL 3.x without using deprecated functionality. The code uses Visual Studio and a link to download a freeGLUT version is available.
Adding GLEW Support
Dealing with OpenGL 3.1 is hard enough, so I'll skip gymnastics with OpenGL extensions and use the OpenGL Extension Wrangler Library (GLEW). GLEW is a cross-platform open-source C/C++ extension loading library, and can be freely downloaded from the following site: http://glew.sourceforge.net. The following snippet of code includes support for GLEW, and should be placed somewhere in your code. If you are building a Visual Studio MFC application, which I recommend, the best place for that is somewhere at the end of the stdafx.h file. A cross-platform version of this code (which uses freeGLUT for windowing) is available on github, and freeGLUT can be downloaded from http://freeglut.sourceforge.net
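The snippet itself is typically just the following (a sketch assuming a standard GLEW installation; the library names match the default binary distribution):
//----------------------
//--- at the end of stdafx.h
#include <gl/glew.h>
#include <gl/wglew.h>
#pragma comment(lib, "glew32.lib")
#pragma comment(lib, "opengl32.lib")
//----------------------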
We will start with creation of class CGLRenderer. This class should gather together all OpenGL related code. My students will recognize the functions I insisted on during the lectures. The header file is the same as in good old OpenGL 2.1, but the implementation will be severely changed.
//----------------------------
class CGLRenderer
{
public:
CGLRenderer(void);
virtual ~CGLRenderer(void);
bool CreateGLContext(CDC* pDC); // Creates OpenGL Rendering Context
void PrepareScene(CDC* pDC); // Scene preparation stuff
void Reshape(CDC* pDC, int w, int h); // Changing viewport
void DrawScene(CDC* pDC); // Draws the scene
void DestroyScene(CDC* pDC); // Cleanup
protected:
void SetData(); // Creates VAO and VBOs and fill them with data
protected:
HGLRC m_hrc;// OpenGL Rendering Context
CGLProgram* m_pProgram; // Program
CGLShader* m_pVertSh; // Vertex shader
CGLShader* m_pFragSh; // Fragment shader
GLuint m_vaoID[2]; // two vertex array objects, one for each drawn object
GLuint m_vboID[3]; // three VBOs
};
//----------------------------
Rendering Context Creation
First we have to create an OpenGL Rendering Context. This is the task for CreateGLContext() function.
//----------------------
bool CGLRenderer::CreateGLContext(CDC* pDC)
{
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 32;
pfd.iLayerType = PFD_MAIN_PLANE;
int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
if (nPixelFormat == 0) return false;
BOOL bResult = SetPixelFormat (pDC->m_hDC, nPixelFormat, &pfd);
if (!bResult) return false;
HGLRC tempContext = wglCreateContext(pDC->m_hDC);
wglMakeCurrent(pDC->m_hDC, tempContext);
GLenum err = glewInit();
if (GLEW_OK != err)
{
AfxMessageBox(_T("GLEW is not initialized!"));
}
int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB,
3,
WGL_CONTEXT_MINOR_VERSION_ARB,
1,
WGL_CONTEXT_FLAGS_ARB, 0, 0 };
if(wglewIsSupported("WGL_ARB_create_context") == 1)
{
m_hrc = wglCreateContextAttribsARB(pDC->m_hDC,0, attribs);
wglMakeCurrent(NULL,NULL);
wglDeleteContext(tempContext);
wglMakeCurrent(pDC->m_hDC, m_hrc);
}
else
{
//It's not possible to make a GL 3.x context.
//Use the old style context (GL 2.1 and before)
m_hrc = tempContext;
}
//Checking GL version
const GLubyte *GLVersionString = glGetString(GL_VERSION);
//Or better yet, use the GL3 way to get the version number
int OpenGLVersion[2];
glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);
if (!m_hrc)
return false;
return true;
}
//---------------------
Choosing and setting pixel format are the same as in previous version of OpenGL. The new tricks that should be done are:
Create a standard OpenGL (2.1) rendering context which will be used only temporarily (tempContext), and make it current
Have you noticed something odd in this initialization? In order to create new OpenGL rendering context you have to call function wglCreateContextAttribsARB(), which is an OpenGL function and requires OpenGL to be active when it is called. How can we fulfill this when we are at the beginning of OpenGL rendering context creation? The only way is to create an old context, activate it, and while it is active create a new one. Very inconsistent, but we have to live with it!
Scene Preparation
After we have created rendering context, the next step is to prepare scene. In the function PrepareScene() we will do whatever we have to do just once, before the scene is drawn for the first time.
Vertex shader is very simple. It just sends input values to the output, and converts vec3 to vec4. Constructors are the same as in previous versions of GLSL. The main difference, in regard to GLSL 1.2, is that there is no more attribute and varying qualifiers for variables inside shaders. Attribute variables are now in(put) and varying variables are out(put) for the vertex shaders. Uniforms stay the same.
//-----------------
// Vertex Shader – file "minimal.vert"
#version 140
in vec3 in_Position;
in vec3 in_Color;
out vec3 ex_Color;
void main(void)
{
gl_Position = vec4(in_Position, 1.0);
ex_Color = in_Color;
}
//-----------------
Fragment shader is even simpler. Varying variables in fragment shaders are now declared as in variables. Take care that the name of in(put) variable in fragment shader must be the same as out(put) variable in vertex shader.
//-----------------------
// Fragment Shader – file "minimal.frag"
#version 140
precision highp float; // needed only for version 1.30
in vec3 ex_Color;
out vec4 out_Color;
void main(void)
{
out_Color = vec4(ex_Color,1.0);
}
//----------------------
If you have problems compiling the shaders' code (because OpenGL 3.1 is not supported), just change the version number: instead of 140, put 130. These shaders are so simple that the code is the same in GLSL version 1.3 and version 1.4.
Setting Data
The function SetData() creates VAOs and VBOs and fills them with data.
Vertex buffer objects (VBO) are familiar item since OpenGL version 1.5, but the vertex array objects require more explanation. Vertex array objects (VAO) encapsulate vertex array state on the client side. These objects allow applications to rapidly switch between large sets of array state.
VAO saves all states for all vertex attributes. The maximum number supported by your video card can be obtained by calling glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &MaxVertexAttribs).
A VAO stores the states of the vertex attribute arrays (if each of them is enabled, their sizes, stride, type, if they are normalized or not, if they contain unconverted integers, vertex attribute array pointers, element array buffer bindings and attribute array buffer bindings). In order to test how it works, we will create two separate (simple) objects with different VAOs.
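The body of SetData() is not listed above; a minimal sketch consistent with the two-VAO/three-VBO layout of the class (the vertex and color values are made up for illustration) could be:
//----------------------
void CGLRenderer::SetData()
{
GLfloat vert1[9] = { -0.9f, -0.5f, 0.0f, -0.1f, -0.5f, 0.0f, -0.5f, 0.5f, 0.0f }; // illustrative data
GLfloat col1[9] = { 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat vert2[9] = { 0.1f, -0.5f, 0.0f, 0.9f, -0.5f, 0.0f, 0.5f, 0.5f, 0.0f };
glGenVertexArrays(2, m_vaoID); // two VAOs, one per object
glGenBuffers(3, m_vboID); // three VBOs
glBindVertexArray(m_vaoID[0]); // first object: position + color attributes
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(GLfloat), vert1, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(GLfloat), col1, GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindVertexArray(m_vaoID[1]); // second object: position only (color set per-draw)
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[2]);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(GLfloat), vert2, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
}
//----------------------
Note how the second VAO leaves attribute 1 disabled; that is why DrawScene() below supplies a constant color with glVertexAttrib3f before drawing the second object.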
Setting Viewport
Reshape() function just sets a viewport.
void CGLRenderer::Reshape(CDC *pDC, int w, int h) { glViewport(0, 0, w, h); }
Drawing
DrawScene(), as its name implies, draws the scene.
//--------------------
void CGLRenderer::DrawScene(CDC *pDC)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(m_vaoID[0]); // select first VAO
glDrawArrays(GL_TRIANGLES, 0, 3); // draw first object
glBindVertexArray(m_vaoID[1]); // select second VAO
glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0); // set constant color attribute
glDrawArrays(GL_TRIANGLES, 0, 3); // draw second object
glBindVertexArray(0);
SwapBuffers(pDC->m_hDC);
}
//--------------------
As we can see, VAO binding changes all vertex attribute arrays settings. But be very careful! If any vertex attribute array is disabled, VAO loses its binding to corresponding VBO. In that case, we have to call again glBindBuffer() and glVertexAttribPointer() functions. The specification tells nothing about this feature, but it is what we have to do with current version of NVidia drivers.
Cleaning up
And, at the end, we have to clean up the whole mess...
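The cleanup code is not listed in this excerpt; a sketch of DestroyScene(), mirroring what was created above (the shader/program wrapper cleanup is only indicated), might be:
//----------------------
void CGLRenderer::DestroyScene(CDC *pDC)
{
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDeleteBuffers(3, m_vboID); // release the VBOs
glDeleteVertexArrays(2, m_vaoID); // release the VAOs
// detach and delete m_pVertSh, m_pFragSh and m_pProgram here
wglMakeCurrent(NULL, NULL);
if (m_hrc)
{
wglDeleteContext(m_hrc);
m_hrc = NULL;
}
}
//----------------------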
Welcome to the OpenGL 3.3+ tutorials series. In this series, you will learn how to use OpenGL the new way. This way is a little bit more difficult than the way before; now OpenGL relies on you to do many things. But don't get scared, these tutorials will explain stuff slowly, and step-by-step you will build a good basis to think the new way.
In old OpenGL (before the 2.0 version), most functionality was FIXED into OpenGL, making it easier for programmers to do simple tasks (like working with matrices, transforming vertices and so on), but it didn't offer much space to do some very specific stuff. With OpenGL 2.0 came shaders, allowing the programmer to replace some of the fixed functionality and rewrite it the way he wanted. It was a very good thing. But until OpenGL 3.0, you could still rely on fixed functionality even in shaders. For example, until GLSL 1.40 (OpenGL Shading Language, the most important thing of new OpenGL, which will be covered in these tutorials), you could use the function ftransform(), which stands for (I guess) Fixed Transformation, so you could transform vertices using OpenGL's built-in modelview and projection matrices and everything was OK. But in OpenGL 3.0, this fixed functionality was deprecated, and in OpenGL 3.2 and later removed from core functionality (so when using an OpenGL 3.2 rendering context and later, calling these functions will have no effect).
So how is it in new OpenGL? Well, now you cannot use good ol' functions like glTranslatef(), glRotatef(), glScalef(), or glMatrixMode(GL_PROJECTION), then setting the perspective with gluPerspective and similar functions. Now you have to calculate the matrices yourself, then upload them to the vertex shader, and handle vertices with them. But don't worry, there are libraries over the internet that work with matrices. We will work with one such library later. So it won't be that difficult in the end.
The next thing that has significantly changed is actual rendering of things. Now there is no glBegin() and glEnd() function. Everything is replaced using vertex buffer objects (VBOs) and vertex array objects (VAOs). While in old OpenGL, rendering a triangle was as intuitive as possible,
glBegin(GL_TRIANGLES);
glVertex2d(-5, 0); // Pass first vertex
glVertex2d( 5, 0); // Pass second vertex
glVertex2d( 0, 5); // Pass third vertex
glEnd();
the code of triangle render in OpenGL 3.3 can look like this:
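That code is not reproduced in this excerpt; a sketch of the typical shape (identifiers are illustrative, and a suitable shader program is assumed to be compiled and bound):

// one-time setup: store the triangle in GPU memory
GLuint vao, vbo;
float vertices[] = { -5.0f, 0.0f, 5.0f, 0.0f, 0.0f, 5.0f };
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
// every frame: bind and draw
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);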
As you can see it is longer, and not as intuitive. But this will bring BLAZING FAST rendering. If you know something about assembly (you don't even have to), you will notice that each call of glVertex3f has 3 floats as parameters. These floats must pass as function parameters to processor registers, before they are sent to GPU. And for one triangle, it's 3 function calls (well for one triangle it really isn't a problem, but scene with one triangle probably isn't what we want :). And for object with 10000 triangles, it's 30000 calls. This is so-called CPU bottleneck, when rendering is slowed by processor passing all this data to GPU. In new OpenGL, you can see that we first set up our objects (store their data in GPU), then we call just few functions to tell which data to use and so on, and then we call (for example) glDrawArrays to render objects. Now the CPU sends only few data to GPU, and rendering is way way way faster :) Of course, in older OpenGL versions you could use for example vertex arrays to speed-up rendering, but it's always passing data from RAM (client memory) to GPU (server memory) each and every frame, which is, no matter how we look at it, not good. But since OpenGL 1.5, you could move forward to use VBOs (store data in GPU), and it would not be that bad. OK, so let's begin.
- Setting up glew library:
The first thing we will need is glew (the OpenGL Extension Wrangler Library). You can download it from here: http://glew.sourceforge.net. After downloading and extracting, we need to be able to include it in our project. Since I'm using Visual Studio, the best option is to have glew extracted in some libraries directory, and then add include paths and library paths to Visual Studio. In Visual Studio 2008, you can do it under Tools -> Options -> Projects and Solutions -> VC++ Directories, as you can see in the following picture:
In Show directories for, you must choose Include files, and add a glew_installation_folder/include (of course, put there real path, for example C:\Libraries\glew-1.7.0\include). Then you also must add library paths, so select Library files, and add glew_installation_folder/lib there. Then we can have in our code:
#include <gl/glew.h>
and it will be OK. The worse option is to copy glew.h to your directory, so don't do it. The good thing about these include paths is that if a new version of glew (or any other library you use) comes out, you just download it, change the include path to the new version, and you have the new features and functions. This header supersedes the gl.h header on Windows, which hasn't been updated since version 1.1 (along with opengl32.lib). I know that Microsoft wants Windows developers to use DirectX, but they really could offer an alternative and add OpenGL support right into Visual Studio. But they probably never will. It's sad, but there's nothing I can do about it. But glew will do all the work for us: one single call will get function pointers to all procedures, and we can use OpenGL 3.3 features without problems.
- The OpenGL control class:
My aim is to create a class that will control OpenGL creation, releasing, and practically everything that deals with OpenGL. So let's start with the class declaration:
class COpenGLControl
{
public:
bool InitOpenGL(HINSTANCE hInstance, HWND* a_hWnd, int iMajorVersion, int iMinorVersion,
void (*a_InitScene)(LPVOID), void (*a_RenderScene)(LPVOID),
void (*a_ReleaseScene)(LPVOID), LPVOID lpParam);
void ResizeOpenGLViewportFull();
// ...the declaration continues with RenderScene, ReleaseOpenGLControl,
// RegisterSimpleOpenGLClass, UnregisterSimpleOpenGLClass, MakeCurrent,
// SwapBuffersM and InitGLEW, described below
};
Even though it may seem a little complicated at first glance, it's not that bad. Let's look at the functions:
InitOpenGL - the most important function; it creates an OpenGL rendering context within a given window. The parameters are: the application instance (if you don't know what that is, it doesn't matter, it's not that important), the major and minor version of OpenGL, and pointers to functions - an init function, a rendering function, and an optional release function. The idea is to create one instance of the COpenGLControl class somewhere, tell it which functions in your project are the init, rendering, and release functions, and then you are ready to go. A simple call of this function will give us an OpenGL context of the version we want.
ResizeOpenGLViewportFull() - sets OpenGL viewport to whole window
RenderScene() - renders the scene; the lpParam parameter is of LPVOID type - a general pointer that can point to anything you want, but basically lpParam will point to our OpenGL controller instance. The thing about function callbacks is that the code isn't that intuitive at first glance, but they are a very good tool, even if they may be hard to understand for those who haven't seen them before. You may have a look at Wikipedia about callbacks: http://en.wikipedia.org/wiki/Callback_(computer_programming)
ReleaseOpenGLControl() - cleanup function - releases scene data (if the release function callback was set) and deletes the rendering context. lpParam follows the same concept as written previously.
RegisterSimpleOpenGLClass - registers window class that supports OpenGL, this class is used in fake window (you'll see later)
UnregisterSimpleOpenGLClass - unregisters previously registered window class
MakeCurrent() - sets the current rendering context to the one we created (it calls the traditional wglMakeCurrent function)
SwapBuffersM() - swaps the front and back buffers - it simply calls the traditional SwapBuffers function. It has the extra M in its name because otherwise the compiler complains about a clash with the WinAPI SwapBuffers, even though I think it really shouldn't, since we're calling a class member function, but still (try it, I have tried)
InitGLEW - initializes GLEW library
That does it; now we should have an idea of what each function does. We will have a closer look only at the initGLEW and InitOpenGL functions. There isn't much to explain in the other functions; they are pretty straightforward.
So what are we doing here? You may have guessed from the names of the variables - we create a fake window. Then we set up the rendering context the old way - using wglCreateContext. This will give us access to OpenGL functions. And here comes the reason for all this - now we can initialize the GLEW library using glewInit. What GLEW does is get function pointers to all OpenGL functions and extensions (if they are supported by the graphics card). It calls wglGetProcAddress for every OpenGL function. But without an OpenGL context, we couldn't get OpenGL function pointers, so that's why we create a fake window, get the OpenGL function pointers, and then destroy the fake window. I know - it isn't very nice, but searching the OpenGL Wiki and some forums over the internet, I didn't find a better way of doing this on Windows.
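The function itself is not listed in this excerpt; a sketch of what initGLEW does, following the description above (the window class and title strings are illustrative):

bool COpenGLControl::initGLEW(HINSTANCE hInstance)
{
RegisterSimpleOpenGLClass(hInstance);
// a throwaway window whose only job is to host a temporary GL context
HWND hWndFake = CreateWindow(L"OPENGL", L"FAKE", WS_OVERLAPPEDWINDOW, 0, 0, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL);
HDC hDC = GetDC(hWndFake);
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
int iPixelFormat = ChoosePixelFormat(hDC, &pfd);
if (iPixelFormat == 0 || !SetPixelFormat(hDC, iPixelFormat, &pfd)) return false;
// an old-style context is enough to run glewInit()
HGLRC hRCFake = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRCFake);
bool bResult = (glewInit() == GLEW_OK);
// tear the fake window down again
wglMakeCurrent(NULL, NULL);
wglDeleteContext(hRCFake);
ReleaseDC(hWndFake, hDC);
DestroyWindow(hWndFake);
return bResult;
}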
- InitOpenGL function
bool COpenGLControl::InitOpenGL(HINSTANCE hInstance, HWND* a_hWnd, int iMajorVersion, int iMinorVersion,
void (*a_InitScene)(LPVOID), void (*a_RenderScene)(LPVOID),
void (*a_ReleaseScene)(LPVOID), LPVOID lpParam)
{
if (!initGLEW(hInstance)) return false;
At the beginning of the function, we initialize GLEW. Once we have info about the OpenGL capabilities of our graphics card, we can proceed with creating the context. If the desired context is 2.1 or lower, we just create OpenGL the old way. But for later versions of OpenGL (3.0 and beyond), we use a new set of functions - wglChoosePixelFormatARB and wglCreateContextAttribsARB. The line:
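(Reconstructed from the description that follows; the original may equally test the WGLEW_ARB_create_context and WGLEW_ARB_pixel_format booleans:)

if (wglewIsSupported("WGL_ARB_create_context") == 1 && wglewIsSupported("WGL_ARB_pixel_format") == 1)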
is used to check whether we have access to these functions (whether they are supported by our graphics card). If this check succeeds, we can use the new wgl functions - wglChoosePixelFormatARB and wglCreateContextAttribsARB. These functions allow us to specify attributes of the pixel format and the context. You just pass a pointer, in this case to an array of numbers in the format ATTRIBUTE, VALUE, ATTRIBUTE, VALUE..., with a zero at the end. You can specify as many parameters as you want, and you always end the list with zero. It is more flexible than the old PIXELFORMATDESCRIPTOR structure, which is fixed. However, if you look at the SetPixelFormat function, you can see that I pass an uninitialized PIXELFORMATDESCRIPTOR structure, even though I didn't use it for finding a suitable pixel format. That is because when calling SetPixelFormat, you must pass a PIXELFORMATDESCRIPTOR structure as the last parameter. I couldn't find anything about the right way of setting up this OpenGL 3.3 context on Windows, but since we must pass something, we pass a dummy PIXELFORMATDESCRIPTOR, and everything works :) If there is some info on the internet (on MSDN or anywhere), I will edit this article. But for now, I'm happy it works. So in conclusion - to find the right pixel format, we use wglChoosePixelFormatARB, and to set it, we call SetPixelFormat with whatever third parameter (but not NULL). At the end of the function, we just set the function pointers for the init, render, and release functions, and finally call the init function to initialize our scene. And that does it - we are done with the initialization of OpenGL.
Because this is only the first tutorial, it ends here. We won't render the first triangle now, but in the second tutorial. That's because the first tutorial would simply get too long, since rendering primitives has gotten a little bit more difficult and requires explanation of the terms and functions used. But in order to test the functionality of our OpenGL context, we will clear the background to a nice light blue :) and it will look just like that:
So that's it for today! You can have a look at the whole code, but really don't worry if you don't understand it. I'm a native Win32 platform programmer - meaning I don't use MFC or any wrapper - so I have to handle window messages and everything by myself. This approach has drawbacks of course - you must write more code (code that's already written in libraries like MFC) - but also many advantages: you have COMPLETE control over your application flow, there is no need for additional libraries, and the final EXE file isn't as big as if you had used, for example, MFC. And these tutorials want to teach OpenGL, not Win32 programming. Just know that everything is set up fine :) Thank you for reading so far! Add comments and/or questions below.
Working with Extensions and GL 3.2 Rendering Context
OpenGL 3.2 is up and running (although currently only with NV 190.56 beta drivers, but that will not stop us from trying some of the brand new functionality), and we are ready to start coding. Or... are we? As we all well know, on MS Windows we have direct access only to the GL 1.1 API. Okay, okay, we all know that (I hear your bored yawn). We will go to GLEW or GLee or a similar site and download the latest library that can handle everything we need. But if we go to those sites we will see that the libraries are out of date.
Do we have to wait until new libraries are released? Certainly not! First part of this tutorial will cover direct accessing the extensions. If you already know that, just jump to GL 3.2 Rendering Context Creation section.
Extensions
There are just 4 steps to get access to new functionality...
Step 1. Go to the site http://www.opengl.org/registry/ and download the latest header files (for Windows we just need two of them: glext.h and wglext.h). There is also a gl3.h file that could be useful, but we will skip it this time. The downloaded files should be copied into the Include/GL folder, or into the local folder of your application.
Step 2. I suggest creating two new files (for example ogl.h and ogl.cpp) where we can gather all OpenGL related .h files and pointers to functions. The ogl.h file, at the moment, should look like this:
// --- ogl.h ---
#include "gl11.h"
#include "glext.h"
#include "wglext.h"
#pragma comment(lib,"opengl32.lib")
As you can see, I've copied all the .h files into the project's folder. The file gl11.h is in fact the gl.h that shipped with previous versions of Visual Studio, but I had to rename it (because of the project name) and copy it into the project's folder (because VS2k8 does not include it). I've added 11 to indicate the OpenGL version supported by this file (ver. 1.1). The file ogl.cpp currently includes only this:
// ogl.cpp
#include "stdafx.h"
Step 3. Include ogl.h into StdAfx.h file. Take a look at MSDN to see the purpose of StdAfx.h (Precompiled Header Files).
//--- OpenGL ---
#include "ogl.h"
Step 4. Add a new function to the GLRenderer class (introduced in the previous tutorial). Let's name it InitAPI(), because its purpose will be to grab pointers to all the functions that we need. We will explain the whole process using the function glCreateProgram(). This function serves to create a program that will contain shaders.
Step 4a. Define a proper function pointer in ogl.h.
extern PFNGLCREATEPROGRAMPROC glCreateProgram;
The type of that pointer should be created this way: PFN + ToUppercase(function_name) + PROC
Steps 4a, 4b and 4c have to be repeated for all functions we want to call inside our code. Just to enable shaders and draw a triangle using a single VBO we have to add plenty of functions. After some time of writing and copy/pasting, the function InitAPI() should look like this:
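The listing is not reproduced in this excerpt; with step 4b (defining the pointer in ogl.cpp) and step 4c (grabbing its address in InitAPI()) repeated per entry point, it has this shape (the selection of functions shown is illustrative):

// --- ogl.cpp --- (step 4b: define the pointers declared extern in ogl.h)
PFNGLCREATEPROGRAMPROC glCreateProgram = NULL;
PFNGLCREATESHADERPROC glCreateShader = NULL;
PFNGLCOMPILESHADERPROC glCompileShader = NULL;
// ...one definition per function used

// step 4c: grab the addresses
void CGLRenderer::InitAPI()
{
glCreateProgram = (PFNGLCREATEPROGRAMPROC)wglGetProcAddress("glCreateProgram");
glCreateShader = (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
glCompileShader = (PFNGLCOMPILESHADERPROC)wglGetProcAddress("glCompileShader");
// ...and so on for every entry point the renderer calls
}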
After acquiring each pointer we should test whether it is NULL. If it is NULL, the function is not implemented and we shouldn't continue executing the program.
GL 3.2 Rendering Context Creation
As we have already learnt in the previous tutorial, to create a GL 3.x rendering context we have to: 1. define pixel format (as for old rendering context), 2. create an old context and activate it (make it current), 3. define attributes of the new (3.x) context, 4. create new (3.x) context using function wglCreateContextAttribsARB(), 5. deactivate and delete the old context.
Everything should be done in CreateGLContext() function.
if (!m_hrc)
{
AfxMessageBox(_T("OpenGL 3.x RC was not created!"));
return false;
}
return true;
}
Before creating the new context, it is not a bad idea to check the version of OpenGL our driver supports. For that reason, we will create a function GetGLVersion() which retrieves the major and the minor version of the supported OpenGL. If we are dealing with OpenGL 3.x, the version should be retrieved by calling glGetIntegerv() with GL_MAJOR_VERSION and GL_MINOR_VERSION.
// for GL 3.x
glGetIntegerv(GL_MAJOR_VERSION, major); // major = 3
glGetIntegerv(GL_MINOR_VERSION, minor); // minor = 2
But to be sure that we will read OpenGL versions correctly if older than 3.x is supported, we should use glGetString(GL_VERSION). The next code demonstrates mixed technique to retrieve GL version.
void CGLRenderer::GetGLVersion(int* major, int* minor)
{
// for all versions
char* ver = (char*)glGetString(GL_VERSION); // ver = "3.2.0"
*major = ver[0] - '0';
if (*major >= 3)
{
// for GL 3.x
glGetIntegerv(GL_MAJOR_VERSION, major); // major = 3
glGetIntegerv(GL_MINOR_VERSION, minor); // minor = 2
}
else
{
*minor = ver[2] - '0';
}
// GLSL
ver = (char*)glGetString(GL_SHADING_LANGUAGE_VERSION); // ver = "1.50 NVIDIA via Cg compiler"
}
The last line also shows how to read the OpenGL Shading Language (GLSL) version. The function does not return the GLSL version; the comment just shows what the returned string looks like.
Let’s go back to attributes. The attributes are defined as a list of pairs (name, value) terminated with 0. If certain attributes are not defined in the list, the default values are used.
WGL_CONTEXT_MAJOR_VERSION_ARB and WGL_CONTEXT_MINOR_VERSION_ARB define version of GL context we want to create. If not defined the default values are 1 and 0 respectively. That means "the latest version that also supports GL 1.0". In the case of GL 3.2 it is a GL 3.2 compatibility profile.
WGL_CONTEXT_PROFILE_MASK_ARB defines which “profile” the context should support. There are two profiles: core and compatibility.
If we create the core profile, only undeprecated functions can be used. The compatibility profile enables using all functionality, from GL 1.0 to GL 3.2. It is the superset of core profile.
When GL 3.0 came to us, I thought that the so-called deprecated functions would die (and I was not very happy, because some of them were very useful). But, by default, GL 3.0 was backward compatible. Then GL 3.1 came, and we learnt that if we want to use deprecated functions we have to use extensions! Wow, that was challenging, because drivers do not have to implement extensions. Everything out of the core is optional. So, we had to orient toward core functions, reimplement geometric transformations, projections, and everything else. I thought the new age of GL had begun. GL 3.2 returned things to where they stood almost a year ago. Now we have two profiles cohabitating without problems. Maybe the core profile will be better optimized, but... maybe. After reading some posts by Mark Kilgard (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=258516#Post258516 and http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=258666#Post258666) I'm sure that the compatibility mode/profile/however-it-is-called will not be removed in the foreseeable future.
For the core profile a predefined value WGL_CONTEXT_CORE_PROFILE_BIT_ARB (or hexadecimal value 0x00000001) should be used. This is the default value for WGL_CONTEXT_PROFILE_MASK_ARB attribute (GL 3.2)! To use compatibility mode set WGL_CONTEXT_PROFILE_MASK_ARB attribute to WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB (0x00000002). Previous implementations of OpenGL should ignore WGL_CONTEXT_PROFILE_MASK_ARB attribute. (I have tried WGL_CONTEXT_PROFILE_MASK_ARB on two older (3.0) versions of NV drivers and saw only trouble.)
It is very interesting that the profile attribute is implemented as a bitmask rather than an enumeration. Currently, only a single profile can be specified, but it is not unlikely that eventually there will be profiles defined that can be implemented in the same context.
The attribute WGL_CONTEXT_FLAGS_ARB specifies a set of flags affecting the rendering context.
- If the WGL_CONTEXT_DEBUG_BIT_ARB flag is set, a “debug” context should be created. But currently (after three versions of GL) it still does not have any effect.
- If WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB is set, a “forward-compatible” context will be created. The “forward-compatible” context must not support functionality marked as “deprecated” by the defined version of GL, while a non-forward-compatible context must support all functionality in that version, deprecated or not.
WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB had a predominant function in GL 3.0 and 3.1 contexts, but now when profiles are defined its function is pretty unclear.
Citation: If the forward compatibility bit is set, the returned context must not implement deprecated functionality in that profile (nothing is deprecated from the OpenGL 3.2 compatibility profile, so a forward-compatible compatibility profile has exactly the same functionality as a non-forward-compatible compatibility profile, while a forward-compatible core profile would leave out only the handful of features - wide lines and two queries - deprecated from core OpenGL 3.2).
Drawing a Triangle
Now we are ready to set up the other functions so that we can see something meaningful on the screen.
The function PrepareScene() creates the shaders, loads their code, compiles them, attaches them to a program, binds IDs to attribute locations, and links and activates the program.
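The listing is not included in this excerpt; a sketch following that exact sequence (LoadShaderCode is an assumed helper that reads a file into a string; the wrapper classes are bypassed for brevity):

void CGLRenderer::PrepareScene(CDC *pDC)
{
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// create and compile the two shaders
GLuint vsh = glCreateShader(GL_VERTEX_SHADER);
GLuint fsh = glCreateShader(GL_FRAGMENT_SHADER);
const char* vsrc = LoadShaderCode("minimal.vert"); // assumed helper
const char* fsrc = LoadShaderCode("minimal.frag");
glShaderSource(vsh, 1, &vsrc, NULL);
glShaderSource(fsh, 1, &fsrc, NULL);
glCompileShader(vsh);
glCompileShader(fsh);
// attach to a program; bind attribute IDs before linking
GLuint prog = glCreateProgram();
glAttachShader(prog, vsh);
glAttachShader(prog, fsh);
glBindAttribLocation(prog, 0, "in_Position");
glBindAttribLocation(prog, 1, "in_Color");
glLinkProgram(prog);
glUseProgram(prog); // activate the program
}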
Although OpenGL 4.0 was introduced back in March 2010, most hardware (especially mine...) is not ready to support it, so this tutorial uses OpenGL 3.2. OpenGL 3.2 was introduced on July 24, 2009 and can still be considered current. Geometry shaders became available in version 3.2.
The source code is attached, so download it and refer to it as you follow along.
Step #1 Build a window with the 'Project Wizard'
Run Visual C++ and click the File->New->Project (Ctrl+Shift+N) menu as shown in the picture below; in the 'New Project' window that appears, select Win32 project.
Press 'OK' and the Win32 Application Wizard window appears; just press 'Finish'. This gives you an immediately runnable windowed program. Press F5 to try it right away.
Step #2 Getting A Rendering Context from Microsoft Windows
This step consists of two sub-steps.
1. Getting an OLD Rendering Context
This step is a necessary stop on the way to obtaining a Rendering Context of the latest version.
First, declare a few global variables.
HDC g_hDC = NULL;
HGLRC g_hRC = NULL;
HWND g_hWnd = NULL;
Next, in InitInstance(), after the CreateWindow() call, add the following code to obtain an old-version RC (Rendering Context).
g_hWnd = hWnd; // After CreateWindow(), save window handle to global variable
g_hDC = GetDC(g_hWnd); // Get Device Context which is used to draw on the client region
PIXELFORMATDESCRIPTOR pfd = // describe the pixel format we want
{
sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
1, // Version Number
PFD_DRAW_TO_WINDOW | // Format Must Support Window
PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
PFD_DOUBLEBUFFER, // Must Support Double Buffering
PFD_TYPE_RGBA, // Request An RGBA Format
32, // 32-bit Color Depth (8, 16, 24, 32)
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
24, // 24-bit Z-Buffer (Depth Buffer)
8, // 8-bit Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Deprecated param
0, // Reserved
0, 0, 0 // Layer Masks Ignored
};
int iPixelFormat = ChoosePixelFormat(g_hDC, &pfd); // Get PixelFormat number. If pfd is not supported,
// zero is returned and you must try again with another pfd or quit your program.
SetPixelFormat(g_hDC, iPixelFormat, &pfd); // Set PixelFormat with the number you got.
// If it fails, FALSE is returned.
HGLRC tempContext;
tempContext=wglCreateContext(g_hDC); // OpenGL Rendering Context creation. As you can
// see, before you create RC, you must choose your pixel format.
wglMakeCurrent(g_hDC,tempContext); // Activate The Rendering Context
The most appropriate place for this source code is right after the CreateWindow() call. The sample source contains no error handling whatsoever; if needed, you can add error handling to the ChoosePixelFormat(), wglCreateContext(), and wglMakeCurrent() calls.
wglCreateContext() requires the opengl32.lib library, so add it as shown in the picture below.
Once the RC has been created, delete it with wglDeleteContext() at program exit, or whenever it is no longer needed.
2. Accessing the extended functions of OpenGL and getting a new RC
Now it is time to initialize GLEW. To do this, include glew.h and wglew.h in the stdafx.h file.
#include <gl/glew.h>
#include <gl/wglew.h>
Add the glew32.lib library as well, as shown in the picture below.
Go back to main.cpp and call glewInit().
GLenum err = glewInit();
if (GLEW_OK != err)
return FALSE;
To get the new-version RC we need a pixel format again, this time obtained through wglChoosePixelFormatARB(). This function is an extended version of ChoosePixelFormat(). ARB stands for the Khronos Group's
Architecture Review Board.
if (!WGLEW_ARB_pixel_format) // On some old graphics cards, ARB_pixel_format may not be supported
return FALSE;
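The calls themselves are not quoted in this excerpt; a sketch consistent with the surrounding description (the attribute values shown are typical choices, not necessarily the article's exact ones):

// pick a pixel format with the ARB function
const int iAttribs[] =
{
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
WGL_COLOR_BITS_ARB, 32,
WGL_DEPTH_BITS_ARB, 24,
WGL_STENCIL_BITS_ARB, 8,
0
};
int iPixelFormat;
UINT numFormats;
wglChoosePixelFormatARB(g_hDC, iAttribs, NULL, 1, &iPixelFormat, &numFormats);
// ask explicitly for a 3.2 context
int attribs[] =
{
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
WGL_CONTEXT_FLAGS_ARB, 0,
0
};
g_hRC = wglCreateContextAttribsARB(g_hDC, 0, attribs);
wglMakeCurrent(NULL, NULL);
wglDeleteContext(tempContext); // the old RC is no longer needed
wglMakeCurrent(g_hDC, g_hRC);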
The format of the last argument of wglCreateContextAttribsARB is the same as for the function we examined above. WGL_CONTEXT_MAJOR_VERSION_ARB means the 3 in 3.2, and MINOR_VERSION_ARB means the 2 in 3.2. We now have a 3.2 RC.
Step #3 Clean Up Rendering Context(RC) and Device Context(DC)
At program exit, the RC and DC need to be released. Create a function like the one below.
VOID CleanUp()
{
if (g_hRC)
{
wglMakeCurrent(NULL,NULL);
wglDeleteContext(g_hRC);
g_hRC = NULL;
}
if (g_hDC)
{
ReleaseDC(g_hWnd,g_hDC);
g_hDC = NULL;
}
}
Then call it at the exit point of the WinMain function.
Step #4 Prepare the scene and clear background
Now create RenderScene(), which will be called every frame.
int RenderScene(GLvoid)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity(); // Reset The Current Modelview Matrix
return TRUE;
}
Modify the message-handling part of WinMain() so that RenderScene() is called whenever no messages are pending.
// Main message loop:
BOOL bDone = FALSE;
while(!bDone)
{
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
{
if (msg.message==WM_QUIT)// Have We Received A Quit Message?
{
bDone=TRUE;// If So done=TRUE
}
else
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else
{
RenderScene();
SwapBuffers(g_hDC);
}
}
Compile and run. You should see a window with a black background; this color is drawn by OpenGL 3.2.
For example, 4X supersampling requires four times the color and depth sampling work, whereas 4X multisampling shares color values and therefore needs fewer samples. In particular, only 1X worth of texture sampling takes place, which is a big win. As a result, polygon interiors (within the same fragment) get no anti-aliasing, since all four sample positions share the same color value (a single texture sample), but the polygon edges, which matter most, are anti-aliased, so in practice this is not much of a problem.
The principle behind multisampling seems to be that only the resolution at which polygons are rasterized is increased; within the same polygon the samples share one color, and the difference is barely noticeable.
Nx SSAA (Super Sampled AA) really does render N times larger and then downscales by 1/N afterwards. Nx MSAA uses the same amount of memory as SSAA, but there is a big difference in the layout of a pixel.
Nx MSAA divides a pixel into N subpixels called fragments, computes the triangle's coverage of the pixel, and fills the fragments accordingly. For example, if a triangle covers half the pixel, only N/2 fragments get their color values updated. Because Nx MSAA and 1x MSAA (i.e., no AA) shade each pixel the same number of times, MSAA performs far better than SSAA.
MSAA quality varies considerably with how the pixel is divided into subpixels and how they are used. In the past a regular grid was used, but these days irregular patterns give better results. Googling keywords such as CSAA (Coverage Sampled AA), ATI's CFAA (Custom Filtering AA), and ADAA (Adaptive AA) should be helpful.
Multisampling is a technique for anti-aliasing OpenGL primitives such as points, lines, polygons, bitmaps, and images, using color, depth, and stencil information (samples). Each fragment carries multiple colors, depths, and sets of texture coordinates, according to the number of subpixels.
I want to do full-screen anti-aliasing in OpenGL, and don't want to use the anti-aliasing provided by OpenGL itself, since I am building a game and want good effects in it.
By "the anti-aliasing provided by GL itself", do you refer to the old poorly named line and polygon quality hints that are labelled as "antialiasing", or are you referring to using regular MSAA through GLX/WGL visuals? –
There are several alternatives to native MSAA in OpenGL. With post-processing effects, the best thing about them is that you can usually just throw the shader at the final, unprocessed image and it does the rest. Here are three methods worth a look:
Fast Approximate Anti-Aliasing (Geeks3D) - Good in most cases. Pretty easy to apply and understand. Drawback is sharp, high contrast noise in textures gets blurred a bit. Edges as subtle as 1/4 pixels steep look dead-accurate as traditional MSAA. Any less steep than that, it loses a bit of accuracy.
Normal Filtered Anti-Aliasing (GameDev) - Haven't tested this one yet accurately, but it's the easiest to understand. In best cases it resembles 16x MSAA and in worst cases it's like 2x MSAA. It generates a temporary normal map to represent edges and relative angles. You can sample the normal map either with luma difference or color difference.
Morphological Anti-Aliasing (Iryoku) - has been improved into SMAA - Subpixel Morphological AA. It's pretty complex at 4 passes, but achieves the best results I've seen. It creates gradients along edges as gradual as 1/100 to 1/200 pixels steep (!). Sampling can be luma-based, color-based or depth-based. Textures stay very crisp and clean. (The example is DX10 and HLSL based, so it would take some time to port it to GLSL accurately.)
These techniques don't super-sample or multi-sample, so lines that appear less than 1 pixel in thickness will appear with gaps and not be anti-aliased correctly. This is the downside to using a non-MSAA approach. Since you're only working with a raster image at full resolution, you can't create additional information from these empty gaps.
Take notice that all of these techniques are dependent on sampling adjacent luma (brightness) or chroma (color) values. Calculating luma and optional gamma correction requires additional instructions on the AA shader, though it's pretty straightforward. You can offload this by calculating the luma in the previous shader that provides the un-retouched image, storing the luma in the alpha channel. Then in the AA shader, you will simply sample the alpha.
There's lots of ways to do antialiasing. One is to use multisample antialiasing (MSAA), where your back buffer actually stores multiple sub-pixel samples, and when you render triangles, lines, etc. the system automatically fills in the correct set of samples in each pixel. Then at the end of rendering the image is "resolved" by averaging over all the sub-pixel samples to get one sample per pixel.
Another way is to use post-processing antialiasing, where you render the scene as normal and then do some targeted blurring on the end result to hide aliased edges. There are a variety of techniques for this, but one of the best / most popular at the moment is called FXAA (Fast approXimate Anti-Aliasing).
MSAA will generally give better-looking results than post-processing AA, but can be slower because it requires approximately double the memory bandwidth for every rendering operation. MSAA can also require more video memory than post-processing AA, depending on the details of the setup.
You can find specific info about implementing either MSAA or FXAA on the Web - just google either one of those terms.
I'm trying to enable multisampling and alpha-to-coverage for an FBO. Using the default framebuffer, all I have to do is call glEnable(GL_MULTISAMPLE) and glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE). However, I am unable to achieve the same effect using my own FBO.
My goal: Draw the scene to an FBO the same way it would be drawn to the default framebuffer with the above properties. From there I want to be able to use the image as a texture for future passes through a shader.
This works: Code for making an FBO without multisampling/alpha-to-coverage, 1 color attachment, 1 depth attachment:
// Generate the color attachment
glGenTextures(1, &defaultColorAttachment0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, defaultColorAttachment0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screenWidth, screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Bind the texture to the FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, defaultColorAttachment0, 0);
// Generate the depth attachment
glGenRenderbuffers(1, &defaultDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, defaultDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, screenWidth, screenHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, defaultDepthBuffer);
This doesn't work. Code trying to make a multisampled FBO:
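That code is not reproduced in the question as quoted here; judging from the answer below (a 4-sample color texture paired with a single-sample depth buffer), it presumably looked something like:

// multisampled color attachment...
glGenTextures(1, &defaultColorAttachment0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA, screenWidth, screenHeight, GL_FALSE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0, 0);
// ...but a single-sample depth attachment: the sample counts differ, so the FBO is incomplete
glGenRenderbuffers(1, &defaultDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, defaultDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, screenWidth, screenHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, defaultDepthBuffer);
// the fix suggested below: glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT, screenWidth, screenHeight);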
I have tried looking through the OpenGL wiki on this, although it's incomplete (various unfinished headings make it look unprofessional). glGetError never complains. I've tried messing around with this, but I either get a black screen or a screen full of garbage pixels.
Main Question: What things do I need to consider/change and where (FBO creation, textures, shaders) in order to get multisampling and alpha-to-coverage to work with an FBO?
Have you tried calling glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0, 0); instead of glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, defaultColorAttachment0, 0);? – Jherico Dec 9 '13 at 23:39
Your FBO should be failing a completeness check the way it is allocated right now. A call to glCheckFramebufferStatus (...) ought to return GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE because your depth buffer has exactly 1 sample and your color buffer attachment has 4.
Since you are also using a multisampled texture attachment in this FBO, you should be aware of differences between sampling a single-sampled texture vs. multisampled in GLSL shaders.
Multisampled textures have a special sampler uniform type (e.g. sampler2DMS) and you have to explicitly fetch each sample in the texture by its integer (non-normalized) texel coordinate and sample index using texelFetch (...). This also means that they cannot be filtered or mip-mapped.
You probably do not want a multisampled texture in this case, you probably want to use glBlitFramebuffer (...) to do the MSAA resolve into a single-sampled FBO. If you do this instead you can read the anti-aliased results in your shaders rather than having to fetch each sample and implement the anti-aliasing yourself.
YES! This is super helpful. I've got it working now! I had to make another FBO to blit the multisampled image into, but that's okay. I lined up the depth buffer with the color buffer (multisample, RGBA => 32). Thanks so much. – GraphicsMuncher Dec 11 '13 at 1:34
=======================
History
Before the GL_ARB_multisample extension, the edges of lines, polygons, and points could be selectively antialiased using glEnable(GL_LINE_SMOOTH), glEnable(GL_POLYGON_SMOOTH), and glEnable(GL_POINT_SMOOTH) respectively, combined with a blending function, such as glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Such features were typically not implemented in hardware in early consumer graphics cards of the time, and were done in software, resulting in poor performance. More expensive "workstation" graphics cards from HP, Sun, and SGI at the time did implement these features in hardware. Modern programs should not make use of these features.
Rendering with Multisampling
There are two pieces to rendering with multisampling:
Allocating a multisample render target (window or FBO), and
Enabling multisample rasterization (i.e. glEnable(GL_MULTISAMPLE)).
While the second step is standardized across all render target types and platforms, the first is only standardized for FBOs (as it is totally internal to GL). When rendering to a window, the allocation method depends on the platform-specific GL integration layer in-use (e.g. WGL, GLX, AGL, etc.). GLUT provides a wrapper around some of these so you don't have to care.
The following sections describe how to perform MSAA render target allocation (#1) for various render target types and platforms.
Allocating a Multisample Render Target
Render-to-FBO
As mentioned, allocation of multisample off-screen FBOs is platform independent. Here's an example:
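A minimal sketch (width, height and the 4x sample count are assumptions; renderbuffers are used for both attachments, and the sample counts must match):

GLuint fbo, colorRB, depthRB;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// multisampled color renderbuffer
glGenRenderbuffers(1, &colorRB);
glBindRenderbuffer(GL_RENDERBUFFER, colorRB);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRB);
// multisampled depth renderbuffer with the same sample count
glGenRenderbuffers(1, &depthRB);
glBindRenderbuffer(GL_RENDERBUFFER, depthRB);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRB);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
// handle the error
}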
Once you've rendered your scene into the multisampled FBO, you can display the anti-aliased result on the screen by blitting your multisampled FBO to the back buffer using glBlitFramebuffer(). Before doing so, you must first remember to specify the back buffer as your draw buffer, and to specify your multisampled FBO as the read frame buffer, like so:
//-------------------
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// Make sure no FBO is set as the draw framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
// Make sure your multisampled FBO is the read framebuffer
glDrawBuffer(GL_BACK);
// Set the back buffer as the draw buffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
//-------------------
Doing this blit will automatically resolve the multisampled FBO. In other words, for each texel in the multisampled FBO, the blit will blend together the texel's samples, and this blended color is what is written into the corresponding pixel on the back buffer. If you wish, you can resolve your multisampled FBO into a different off-screen FBO by setting that FBO as your draw buffer.
Render-to-Window
Microsoft Windows (using wgl)
The wgl functions required to create a multisampled OpenGL context are not available until a valid OpenGL context is made current to the thread. This leads to a temporary-context code path which can get very involved with platform-specific details. Users of libraries such as GLUT, GLEW, or GLee can significantly reduce the amount of effort required. The rest of this section assumes that a valid context is present and all wgl extension function pointers have been obtained; if not, please see how to create a temporary context.
A valid pixel format for the framebuffer is chosen using the wglChoosePixelFormatARB function with a list of attributes specifying the multisampling properties. In order to choose a framebuffer format that incorporates multisampling, you must add it to the list of attributes. For example, this list of attributes does not select a multisampled pixel format:
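Reconstructed from the multisampled list below with the two sample attributes removed, such a list looks like:

int attributes[] = {
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
WGL_COLOR_BITS_ARB, 32,
WGL_DEPTH_BITS_ARB, 24,
WGL_STENCIL_BITS_ARB, 8,
0
};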
To consider multisampled visuals, the WGL_SAMPLE_BUFFERS_ARB and WGL_SAMPLES_ARB attributes must be present. The WGL_SAMPLE_BUFFERS_ARB must be set to 1, and WGL_SAMPLES_ARB is the number of samples, e.g. for 8x multisampling, WGL_SAMPLES_ARB would be set to 8. This attribute list is the same as above, but considers 4x multisampled pixel formats too:
int attributes[] = {
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
WGL_COLOR_BITS_ARB, 32,
WGL_DEPTH_BITS_ARB, 24,
WGL_STENCIL_BITS_ARB, 8,
WGL_SAMPLE_BUFFERS_ARB, 1, // Number of buffers (must be 1 at time of writing)
WGL_SAMPLES_ARB, 4, // Number of samples
0
};
Once the correct pixel format is found, creating a context proceeds as normal.
X Windows (using GLX)
This is the method you'd use on UNIX/Linux if you want to create an X window with multisampling capability. There are a few methods for doing this, but the following locates an MSAA FBConfig, looks up its XVisual, and then creates an X window in that visual. You'll notice some similarities with the WGL method above.
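The full listing is not reproduced here; the core of it (a sketch; dpy is an open Display*, and error checking is omitted) is choosing an FBConfig with sample attributes and fetching its visual:

static const int fbAttribs[] =
{
GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
GLX_RENDER_TYPE, GLX_RGBA_BIT,
GLX_DOUBLEBUFFER, True,
GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
GLX_DEPTH_SIZE, 24,
GLX_SAMPLE_BUFFERS, 1, // request a multisample buffer
GLX_SAMPLES, 4, // with 4 samples
None
};
int numConfigs = 0;
GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &numConfigs);
XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, configs[0]);
// ...create the X window with vi->visual, then the context with glXCreateNewContext()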
GLUT
GLUT is a wrapper around the platform-specific GL integration layers (e.g. WGL, GLX, etc.) which allows you to prototype simple cross-platform GL examples quickly. In GLUT, the key to allocating an MSAA window is this:
glutInitDisplayMode( ... | GLUT_MULTISAMPLE );
which directs GLUT to allocate an MSAA-capable window. The following is a complete example showing how to render with multisampling in a simple GLUT test program:
#include <GL/gl.h>
#include <GL/glut.h>
#include <GL/glu.h>
#include <stdio.h>

float rotation_angle = 0;
int msaa = 1;

void reshape(int width, int height)
{
glViewport(0, 0, width, height);
}

void mouse(int button, int state, int x, int y)
{
if (state == GLUT_DOWN)
{
msaa = !msaa;
glutPostRedisplay();
}
}

void display()
{
int err = 0;
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1, 1, -1, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(rotation_angle, 0, 0, 1);
glColor4f(1, 0, 0, 1);
if (msaa)
{
glEnable(GL_MULTISAMPLE_ARB);
printf("msaa on\n");
}
else
{
printf("msaa off\n");
glDisable(GL_MULTISAMPLE_ARB);
}
glRectf(-.5, -.5, .5, .5);
glutSwapBuffers();
err = glGetError();
if (err) fprintf(stderr, "%s\n", gluErrorString(err));
}

int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_MULTISAMPLE);
glutCreateWindow(argv[0]);
glutDisplayFunc(display);
glutMouseFunc(mouse);
glutReshapeFunc(reshape);
glutReshapeWindow(400, 400);
printf("%s\n", glGetString(GL_RENDERER));
rotation_angle = 30;
glutMainLoop();
return 0;
}
Extension References
- http://www.opengl.org/registry/specs/ARB/multisample.txt
These might interest you:
- http://www.opengl.org/registry/specs/NV/multisample_filter_hint.txt
- http://www.opengl.org/registry/specs/NV/explicit_multisample.txt
- http://www.opengl.org/registry/specs/NV/multisample_coverage.txt
Conclusion
Setting up FSAA takes a few steps but it is worth it. Today's GPUs are very fast and users want control over the quality of a game's graphics. On Windows, the extension we are interested in is WGL_ARB_extensions_string, which defines wglGetExtensionsStringARB; but the only way to check whether it is available is to call wglGetExtensionsStringARB itself. So this is a chicken-and-egg situation. Just get the function pointer to wglGetExtensionsStringARB and use it to see if you have WGL_ARB_pixel_format and WGL_ARB_multisample.
I saw Pablo van der Meer's article detailing his CStarWarsCtrl. I thought it was very interesting but I didn't like the facts that it used -
MFC and
StretchBlt.
So I set upon the task to reimplement it using OpenGL while at the same time making it friendly for Win32.
What is it?
This article provides a Win32 OpenGL framework. It makes it easy for you by hiding most of the OpenGL initialization/shutdown code. The article shows how to use the framework to create a StarWars type effect of scrolling text.
Win32 OpenGL
The hardest part was setting up OpenGL under Win32. First you need to create a window class using the CS_OWNDC style, and query for a pixel format. Once you have those, you can create an OpenGL rendering context and attach it to the DC of the window. After a little documentation, I found out it wasn't that difficult at all. I often make Win32 controls which do a lot of work for you. So here I stuck with my plan. I made a Win32 control that does all the OpenGL work for you. All you have to give the control is a COpenGLWndController class and a requested pixel format and the control makes use of it. Let's take a look at it.
class COpenGLWndController
{
private:
// these are friends because these functions need to call SetParameters
friend static LRESULT OnOpenGLSetController(HWND hWnd, void *pController);
friend static LRESULT OnOpenGLCreate(HWND hWnd, LPCREATESTRUCT lpcs);
void SetParameters(HDC hdc, HGLRC hglrc);
virtual void vDraw() = 0; // render it now
HDC m_hdc;
HGLRC m_hglrc;
public:
void Draw();
virtual ~COpenGLWndController() {;}
virtual int ValidatePixelFormat(HDC hdc, int suggestedFormat);
virtual void WindowSized(int cx, int cy) = 0;
virtual void Init() = 0; // initialize textures
virtual void Close() = 0; // the window is closing, destroy textures/etc
};
Creation
To create an OpenGL window, use this function:
BOOL RegisterOpenGLWindow(HINSTANCE hInst);
// Remember, once created, the window will call 'delete' on the controller.
HWND CreateOpenGLWindow(HINSTANCE hInst, HWND hParent,
DWORD style, UINT id, LPRECT rt,
COpenGLWndController *pController,
LPPIXELFORMATDESCRIPTOR pfd);
One simply has to create a subclass of COpenGLWndController and implement WindowSized, vDraw, Init, and Close. WindowSized is called in response to a WM_SIZE message, and this is where you change your OpenGL viewport. vDraw is the function which renders the scene. Don't confuse this with Draw. Draw is the public function you call to repaint the window - it handles behind-the-scenes things like swapping the buffers. Draw ends up calling vDraw anyway. Init is called when the OpenGL window has created its rendering context and is ready for us. You can now load your textures or initialize OpenGL as you see fit. Close is similar; here you can delete any OpenGL textures/objects etc. ValidatePixelFormat does not need to be overridden, but it can be. You can use this function to fiddle with the pixel format, returning a new one if you want. I use it in my implementation to turn on FSAA (full screen antialiasing).
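A sketch of such an override (it assumes the wgl extension entry points are already available, e.g. via GLEW, and the 4x sample count is illustrative):

int CStarWarsController::ValidatePixelFormat(HDC hdc, int suggestedFormat)
{
// ask for a format like the suggested one, but with 4x multisampling
int iAttributes[] =
{
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_COLOR_BITS_ARB, 32,
WGL_DEPTH_BITS_ARB, 24,
WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
WGL_SAMPLES_ARB, 4,
0
};
int pixelFormat;
UINT numFormats;
if (wglChoosePixelFormatARB(hdc, iAttributes, NULL, 1, &pixelFormat, &numFormats) && numFormats > 0)
return pixelFormat; // an FSAA-capable format was found
return suggestedFormat; // fall back to the suggested format
}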
Implementation
Let's take a look at how our subclass, CStarWarsController, works. The code for WindowSized is pretty self-explanatory.
The code for Close cleans up the fonts used and deletes our CObjects:
void CStarWarsController::Close()
{
    glDeleteLists(1000, MAX_TEXT);
    // delete our objects now
    for (int i = 0; i < NUMOBJECTS; ++i)
    {
        if (pObjects[i])
        {
            delete pObjects[i];
            pObjects[i] = NULL;
        }
    }
}
I mentioned the CObject class. I use it in the controller to represent an object moving across the screen. Each line of text is treated as an object. Each object has a starting point, a vector it moves along, and a current position; thus, for any time t, I can calculate the current position from the starting point and movement vector. CObject has one overridable function, Draw(). I provide two subclasses of CObject: CTextObject and CTexturedQuad. The moving flag is a CTexturedQuad.
The time offset might need some explaining. The objects are in an array, and to look good the first object needs to be followed by the others. Every object has the same starting point, in this example (0,-4,0), but each has a time offset that says when it should appear. An object with offset 0 appears at (0,-4,0) at time 0; an object with offset 2 starts along the same path 2 seconds later, so it trails behind the earlier objects, which by then are already closer to the viewer. All objects in the array therefore have an increasing time offset; the text objects typically use a 2-second offset between them. This is how the objects are spaced apart, and you can space them as far apart as you want by changing the time offset field.
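In other words, the whole scheme boils down to one line of vector math (a sketch; the Vec3 type and its operators are assumptions, and the real code inlines this math in Idle() below):
// position of an object at global time t; a larger m_fTimeOffset simply
// means the object starts along the same path later
Vec3 PositionAt(const CObject &obj, float t)
{
    return obj.m_start + obj.m_slope * (t - obj.m_fTimeOffset);
}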
In my example, rendering happens constantly. CStarWarsController has a function called Idle which moves all the objects and stars around the screen. The code is simple vector math.
void CStarWarsController::Idle()
{
    LARGE_INTEGER now;
    // get current time
    QueryPerformanceCounter(&now);
    m_fTimeElapsed = ((float)(now.QuadPart - m_start.QuadPart) / (float)m_freq.QuadPart);
    // move the objects
    for (int i = 0; i < NUMOBJECTS; ++i)
    {
        pObjects[i]->m_curPos.x = pObjects[i]->m_start.x;
        pObjects[i]->m_curPos.y = pObjects[i]->m_start.y +
            pObjects[i]->m_slope.y * (m_fTimeElapsed - pObjects[i]->m_fTimeOffset);
        pObjects[i]->m_curPos.z = pObjects[i]->m_start.z +
            pObjects[i]->m_slope.z * (m_fTimeElapsed - pObjects[i]->m_fTimeOffset);
    }
    // move the stars, calculate new time based on star m_start time
    m_fTimeElapsed = ((float)(now.QuadPart - m_starStart.QuadPart) / (float)m_freq.QuadPart);
    for (int i = 0; i < m_iNumStars; ++i)
    {
        // update their z position
        m_pStars[i].m_curPos[2] = m_pStars[i].m_start.z +
            m_pStars[i].speed.z * (m_fTimeElapsed - m_pStars[i].timeOffset);
        // ok they're out of view, respawn a new star
        if (m_pStars[i].m_curPos[2] >= EYE_Z)
        {
            m_pStars[i].m_start.x = GetRandom(-5.0, 5.0);
            m_pStars[i].m_start.y = GetRandom(-5.0, 5.0);
            m_pStars[i].m_start.z = -10.0f;
            m_pStars[i].timeOffset = m_fTimeElapsed;
        }
        else
        {
            m_pStars[i].m_curPos[0] = m_pStars[i].m_start.x;
            m_pStars[i].m_curPos[1] = m_pStars[i].m_start.y;
        }
    }
}
Similarly, the vDraw function doesn't do much besides rendering the stars and calling CObject::Draw.
/* Method to actually draw on the control */
void CStarWarsController::vDraw()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    if (!m_bStarted)
        return;
    glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);
    glEnable(GL_MULTISAMPLE_ARB);
    glDisable(GL_BLEND);
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, EYE_Z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    // now draw stars - as points
    if (m_bPointStars)
    {
        glBegin(GL_POINTS);
        for (int i = 0; i < m_iNumStars; ++i)
        {
            glColor3fv(m_pStars[i].m_fColor);
            glVertex3fv(m_pStars[i].m_curPos);
        }
        glEnd();
    }
    else // draw stars as quads
    {
        glBegin(GL_QUADS);
        for (int i = 0; i < m_iNumStars; ++i)
        {
#define LENGTH 0.02f
            glColor3fv(m_pStars[i].m_fColor);
            glVertex3f(m_pStars[i].m_curPos[0] - LENGTH, m_pStars[i].m_curPos[1] - LENGTH, m_pStars[i].m_curPos[2]);
            glVertex3f(m_pStars[i].m_curPos[0] - LENGTH, m_pStars[i].m_curPos[1] + LENGTH, m_pStars[i].m_curPos[2]);
            glVertex3f(m_pStars[i].m_curPos[0] + LENGTH, m_pStars[i].m_curPos[1] + LENGTH, m_pStars[i].m_curPos[2]);
            glVertex3f(m_pStars[i].m_curPos[0] + LENGTH, m_pStars[i].m_curPos[1] - LENGTH, m_pStars[i].m_curPos[2]);
        }
        glEnd();
    }
    // now draw text
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    float distance, alpha;
    for (int i = 0; i < NUMOBJECTS; ++i)
    {
        if (!pObjects[i])
            continue;
        // determine distance from us
        distance = sqrtf(pObjects[i]->m_curPos.x * pObjects[i]->m_curPos.x +
                         pObjects[i]->m_curPos.y * pObjects[i]->m_curPos.y +
                         pObjects[i]->m_curPos.z * pObjects[i]->m_curPos.z);
        // approximate the alpha value based on the distance away from us
        alpha = 3.75f - sqrtf(distance);
        if (alpha > 1.0f)
            alpha = 1.0f;
        else if (alpha < 0.0f)
            alpha = 0.0f;
        glPushMatrix();
        // move everything into position
        glScalef(0.50f, 0.50f, 0.50f);
        glTranslatef(pObjects[i]->m_curPos.x, pObjects[i]->m_curPos.y, pObjects[i]->m_curPos.z);
        glRotatef(pObjects[i]->m_fAngle, 1.0, 0.0, 0.0);
        glColor4f(pObjects[i]->m_fColor[0], pObjects[i]->m_fColor[1], pObjects[i]->m_fColor[2], alpha);
        pObjects[i]->Draw();
        glPopMatrix();
    }
    // ok now we check the last alpha value, if it's <= 0.0,
    // everything has faded away, and we restart
    if (alpha <= 0.0f)
        Start();
}
The last piece of interesting code is the ValidatePixelFormat function. Due to limitations of the SetPixelFormat function (a window's pixel format can only be set once), we have to jump through some hoops to implement this. First I create a dummy window and an OpenGL context for it; with that context current, ValidatePixelFormat can use OpenGL functions to query the device's capabilities. Once the function returns, I destroy the dummy window and rendering context and create the real window and context. Painful, but it works.
Scrolling text looks badly aliased. I wanted to solve this problem, so I figured out how to turn on FSAA if a video card supports it. Here's a look at the code:
// Overridden to enable multisampling (FSAA)
int CStarWarsController::ValidatePixelFormat(HDC hdc, int suggestedFormat)
{
    HDC hDC = wglGetCurrentDC();
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    if (!wglChoosePixelFormatARB)
        return suggestedFormat;
    if (!GLExtensionExists("WGL_ARB_multisample "))
        return suggestedFormat;
    int pixelFormat;
    BOOL bStatus;
    UINT numFormats;
    float fAttributes[] = {0, 0};
    int iAttributes[] =
    {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
        WGL_COLOR_BITS_ARB, 24,
        WGL_ALPHA_BITS_ARB, 8,
        WGL_DEPTH_BITS_ARB, 16,
        WGL_STENCIL_BITS_ARB, 0,
        WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
        WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
        WGL_SAMPLES_ARB, 4,
        0, 0
    };
    bStatus = wglChoosePixelFormatARB(hDC, iAttributes, fAttributes, 1, &pixelFormat, &numFormats);
    if ((bStatus == GL_TRUE) && (numFormats == 1))
    {
        m_bMultiSample = true;
        return pixelFormat;
    }
    // ok that failed, try using 2 samples now instead of 4
    iAttributes[19] = 2;
    bStatus = wglChoosePixelFormatARB(hDC, iAttributes, fAttributes, 1, &pixelFormat, &numFormats);
    if ((bStatus == GL_TRUE) && (numFormats == 1))
    {
        m_bMultiSample = true;
        return pixelFormat;
    }
    // failed, return the suggested format and continue
    return suggestedFormat;
}
Other uses
This example shows you how to create a standalone OpenGL application, but you can just as easily use my control as a child window. I wrote a Euchre game and embedded the OpenGL control plus the StarWars controller into my About box. It makes a nice effect.
Hi, I've been having a spot of bother using WGL extensions. I'm attempting to use wglChoosePixelFormatARB in conjunction with glClampColorARB to turn off the clamping of colour values output from the fragment shader (rest assured I have a very good reason for wanting to turn off the clamping and have been informed this is the best way to do it; any other suggestions are welcome). However, the wglGetProcAddress call for wglChoosePixelFormatARB returns NULL, as does the one for glClampColorARB. Is there anything I need to make sure I'm doing before these calls, bearing in mind this was previously working with a simple ChoosePixelFormat call before I started playing around with the clamping and changed the pixel formats accordingly?
I’m running this on an NVIDIA 6800 with ForceWare version 77.72.
Thanks in advance for any help
Thanks for the help guys, I've made a bit of progress but am now stumped on something else along the same lines. I managed to get something going from a combination of things I read on NeHe, Humus, etc. However, while I now get no errors, it's not performing as expected.
What I'm actually after is an output of unclamped float values (32 bits per component if possible) from the fragment shader, and my code looks something like this at the moment:
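(The snippet itself did not survive in this copy of the thread. For reference, the usual approach looks roughly like the sketch below; this is a reconstruction using WGL_ARB_pixel_format_float and ARB_color_buffer_float, not the poster's actual code.)
int iAttributes[] =
{
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_FLOAT_ARB, // float components instead of WGL_TYPE_RGBA_ARB
    WGL_COLOR_BITS_ARB,     128,                     // 32 bits per component
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    0, 0
};
// ...pick and set the format via wglChoosePixelFormatARB as usual, then:
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE); // stop clamping fragment shader output
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);     // and read-back clamping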
The problem is the values are still clamped. I'm told this is possible in GL, so I must be missing something.
Hope these questions aren't too daft, but I'm jumping in at the deep end a bit with OpenGL, having only used DirectX previously. So there's a bit of a combination of internet trawling and guesswork in the code.
Thanks again.
=======================
Source: http://blog.naver.com/newsdu/80191220898
Yesterday I spent a while fumbling around trying to build a callback class, but when I learned that member function pointers are not always 4 bytes, I set it aside for now... heh.
So instead I'm going to extend the engine structure a bit, add an OpenGL renderer to the framework, and try drawing into a window.
Below is the class diagram (blueprint) for this version of the engine.
A lot has been added this time. The first thing that stands out is that the window class has been subclassed into a framework class. At the center sits a singleton; maybe because I'm used to them, I use singletons a lot. For those who don't know what a singleton is, briefly: it is a kind of global variable that can be explicitly set only once; after that it can only be used, never deleted. You don't have to use singletons, but because they are reachable from anywhere and can be managed through a common singleton interface, I'm fond of them. Search the internet for the details.
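For readers who want the idea in code, a minimal sketch (the System name comes from the diagram above; the implementation is just an illustration):
class System
{
public:
    static System& Get()
    {
        static System instance; // constructed once, on first use
        return instance;        // afterwards it can only be used, never deleted
    }
private:
    System() {}                        // nobody else can construct one
    System(const System&);             // no copies (declared, never defined - C++03 style)
    System& operator=(const System&);
};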
In any case, the core issue this time is rendering with OpenGL. Under the singleton there are RendererGL2D and System: the System class holds the functions at the heart of the engine, while RendererGL2D is the class that wraps the OpenGL functionality. From this post onward I'm introducing the GLEW (OpenGL Extension Wrangler) library and will focus on OpenGL's extension features. GLEW sits in a folder called external in the project; it is a library that makes OpenGL's extensions much more convenient to use. My machine supports up to OpenGL 3.3, so the code explicitly creates a 3.3 GL context. (I bought this computer in 2010, so it's not exactly current.)
The folder and library (lib) settings needed to use GLEW are similar to the previous post, so refer to that. I judged it better to drop such incidental explanations and concentrate on the essentials, which is why I'm skipping them here; if the attached project's settings still leave you lost, post a question in the comments and I'll answer. Note that GLEW is open source; a quick Google search turns up the source immediately. I downloaded the source and built it myself, as a debug static library rather than a dynamic one; search the included files for glew32sd.lib and you'll find it. Every post so far has been a debug build: for now being able to debug matters more, and performance-wise it makes little difference.
Looking at the main function alone, the code is not much different from when we created the window last time. What has changed is that the functionality is now implemented by inheriting from a class called ns_Framework. A framework is the large skeleton within which a program runs; write your game code inside this skeleton and it becomes the game. The framework splits into three parts:
1. Initialization
2. Frame
3. Release
Initialization is, as the name says, where both the initialization the engine needs and the initialization the game needs happen; ns_Framework::Init() above is the part that initializes the engine. Next comes the most important function, Frame, which is called over and over while the program runs. Strictly speaking it is only called when there are no messages pending, but during a game there are far more moments without messages than with them, so you may as well treat it as called every iteration. Release ends the game, frees the game's resources, and takes the engine down. You might think that when the application exits everything is thrown away anyway, so just breaking out of the loop would do; but if you have split work across threads or attached networking, skipping cleanup can cause real problems. Shutting down safely is cleaner all around. Not handling termination explicitly feels, how shall I put it, like leaving the bathroom after a big job without wiping...
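A minimal sketch of that three-phase skeleton (ns_Framework and the Init/Frame split come from the post; the Release and Run names and the Win32 message-pump details are assumptions for illustration):
class ns_Framework
{
public:
    virtual bool Init() = 0;    // engine + game initialization
    virtual void Frame() = 0;   // one game frame; called while the message queue is empty
    virtual void Release() = 0; // free game resources, shut the engine down

    int Run()
    {
        if (!Init())
            return -1;
        MSG msg = {0};
        for (;;)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else
            {
                Frame(); // no pending messages: run the game
            }
        }
        Release();
        return (int)msg.wParam;
    }
};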
Some time ago I started working on an OpenGL (3.3) renderer for an MFC program. Everything was fine until I decided to run some OpenGL debugging tools to see if I had silent errors. I used gDEBugger. Running its program-analyzing tool, I immediately started getting errors of the following type:

Debug String: Detected error: The debugged process asked for an extension function pointer (glGenBuffers) from one render context, but called this function pointer in another render context (context #2)

In fact, every GLEW method gets this error. I started looking for the problem in forums and on MSDN, and found some people mention that in a Windows environment some GLEW method pointers should be redefined. I also stumbled upon a tutorial which redefines pretty much every GLEW method using Windows OpenGL calls like these:

void CGLRenderer::InitAPI()
{
    // Program
    glCreateProgram = (PFNGLCREATEPROGRAMPROC)wglGetProcAddress("glCreateProgram");
    glDeleteProgram = (PFNGLDELETEPROGRAMPROC)wglGetProcAddress("glDeleteProgram");
    glUseProgram = (PFNGLUSEPROGRAMPROC)wglGetProcAddress("glUseProgram");
    ...
}

My OpenGL context set-up looks like this:

bool OpenGLMain::create30Context(HDC device_context)
{
    //this->hwnd=hwnd;
    hdc = device_context; //GetDC(hwnd);
    hdcGlobal = &hdc;
    PIXELFORMATDESCRIPTOR kPFD;
    memset(&kPFD, 0, sizeof(PIXELFORMATDESCRIPTOR));
    kPFD.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    kPFD.nVersion = 1;
    kPFD.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_GENERIC_ACCELERATED | PFD_DOUBLEBUFFER;
    kPFD.iPixelType = PFD_TYPE_RGBA;
    kPFD.cColorBits = 32;
    kPFD.cDepthBits = 32;
    kPFD.cStencilBits = 8;
    kPFD.iLayerType = PFD_MAIN_PLANE;
    int iPixelFormat = ChoosePixelFormat(hdc, &kPFD);
    if (iPixelFormat == 0)
    {
        // ReleaseDC(window, gs_hWindowDC);
        return false;
    }
    BOOL bSuccess = SetPixelFormat(hdc, iPixelFormat, &kPFD);
    if (!bSuccess)
    {
        // ReleaseDC(window, gs_hWindowDC);
        return false;
    }
    ///////// init opengl context
    HGLRC tempOpenGLContext = wglCreateContext(hdc); // OpenGL 2.1 context
    wglMakeCurrent(hdc, tempOpenGLContext);          // make the OpenGL 2.1 context current and active
    GLenum error = glewInit();
    if (error != GLEW_OK)
    {
        return false;
    }
    ///////// context setup
    int attributes[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    if (wglewIsSupported("WGL_ARB_create_context") == 1)
    {
        hrc = wglCreateContextAttribsARB(hdc, NULL, attributes); // create an OpenGL 3.x context from the supplied attributes
        wglMakeCurrent(NULL, NULL);                              // remove the temp context
        wglDeleteContext(tempOpenGLContext);
        wglMakeCurrent(hdc, hrc);
    }
    else
    {
        hrc = tempOpenGLContext; // no OpenGL 3.x support detected, fall back to 2.x
    }
    ///////// version check
    int glVersion[2] = {-1, -1};
    glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]);
    std::cout << "Using OpenGL " << glVersion[0] << "." << glVersion[1] << std::endl;
    return true;
}

Now I am really confused, because the program runs fine inside Visual Studio without redefining all these GLEW methods, but it shows an empty screen (no geometry) if I run the executable directly. Also, in all the other examples and tutorials I have ever read, it has never been mentioned that one has to reset the pointers to GLEW API methods. So my question is whether anybody can point out the right way to integrate OpenGL 3.3 with the Windows API, because it seems there are many ways of doing it.
It's a bit outdated, but here go my two cents:
I was also looking for a code snippet to create an EGL-style context on a desktop NVidia GPU. I was aware that on Windows, NVidia exposes OpenGL-ES contexts through the extensions WGL_EXT_create_context_es_profile and WGL_EXT_create_context_es2_profile, and that on Linux NVidia allows them through the GLX_EXT_create_context_es_profile and GLX_EXT_create_context_es2_profile extensions.
Of course, you also have the aforementioned OpenGL-ES drivers/emulators from PowerVR / Adreno / Mali / Angle, but I was looking for a raw implementation, i.e., my own homegrown libGLESv2.dll and libEGL.dll.
After some struggle, I've got something like the code below. The basic idea is to get the required extensions, by hand or using GLEW, create a dummy context, and then create your GL-ES context. Error handling is omitted for clarity. You also need to define and load all core and extension proc functions, and make them public for your libs.
/**
 * adapted from
 * https://www.opengl.org/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_%28C%2B%2B/Win%29
 */
bool CGLRenderer::CreateGLContext(CDC* pDC)
{
    PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, // Flags
        PFD_TYPE_RGBA,    // The kind of framebuffer. RGBA or palette.
        32,               // Colordepth of the framebuffer.
        0, 0, 0, 0, 0, 0,
        0, 0,
        0,
        0, 0, 0, 0,
        24,               // Number of bits for the depthbuffer
        8,                // Number of bits for the stencilbuffer
        0,                // Number of Aux buffers in the framebuffer.
        PFD_MAIN_PLANE,
        0,
        0, 0, 0
    };
    int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
    if (nPixelFormat == 0)
        return false;
    BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
    if (!bResult)
        return false;
    HGLRC tempContext = wglCreateContext(pDC->m_hDC);
    wglMakeCurrent(pDC->m_hDC, tempContext);
    // Using GLEW. Init it after ctx creation
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        AfxMessageBox(_T("GLEW is not initialized!"));
    }
    // create an OpenGL ES 2 profile. It may return a compatible but higher GL-ES version, such as 3.0 or 3.1
    int attribList[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 2,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_ES2_PROFILE_BIT_EXT,
        0,
    };
    if (wglewIsSupported("WGL_ARB_create_context") == 1)
    {
        m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribList);
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(tempContext);
        wglMakeCurrent(pDC->m_hDC, m_hrc);
    }
    else
    {
        // Failed to create a GL-ES context.
        m_hrc = NULL;
    }
    // Debug info - print out the GL version
    const GLubyte *glVersionString = glGetString(GL_VERSION);
    const char *vendorChar = (char*)glGetString(GL_VENDOR);
    const char *rendererChar = (char*)glGetString(GL_RENDERER);
    int glVersion[2] = {-1, -1};
    glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]);
    cout << "GL version string: " << glVersionString << endl;
    cout << "OpenGL version: " << glVersion[0] << "." << glVersion[1] << endl;
    cout << "GPU: " << vendorChar << " - " << rendererChar << endl;
    if (!m_hrc)
        return false;
    return true;
} // end of CreateGLContext
This page describes old functionality. You should use the core Framebuffer Object functionality if at all possible. RTT = render_to_texture. This page shows a few examples of how to set up RTT and how to clean up afterwards. The extension specification is at http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt. Note that this extension became core in GL 3.0, at which point the ARB version of the extension was also released: http://www.opengl.org/registry/specs/ARB/framebuffer_object.txt. GL_ARB_framebuffer_object brings together GL_EXT_framebuffer_object, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample and GL_EXT_packed_depth_stencil, which are all folded into the core of GL 3.0.
Let's assume we want to render to a texture and we also want depth testing to take place. We need to create a color texture and we need to attach it to the FBO. We need a depth buffer RenderBuffer and attach it to the FBO. Once you are done rendering to this texture, you can use it like any other texture. In this case, we don't care what happens to the depth values. If you want to access the depth (for example, from within your shader), you need to make a depth texture instead of a depth buffer RenderBuffer. Please look at the other examples. Also, keep in mind we are using the GL_RGBA8 format here which is a format supported by all GPUs.
//--------------------
//RGBA8 2D texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_2D
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClearDepth(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//**************************
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//---------------------------
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
Quick example, render_to_texture (2D), mipmaps
This example is nearly identical to the above sample code with one difference : glGenerateMipmapEXT is used to generate the mipmaps. You can use it to generate mipmaps whenever you want. Generally, you render to the texture, then unbind the FBO with glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0), then bind the texture with glBindTexture, then call glGenerateMipmapEXT. ALSO, notice that glGenerateMipmapEXT doesn't have an "s".
//RGBA8 2D texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//NULL means reserve texture memory, but texels are undefined
//**** Tell OpenGL to reserve level 0
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//You must reserve memory for the other mipmap levels as well, either by making a series of calls to
//glTexImage2D or by using glGenerateMipmapEXT(GL_TEXTURE_2D).
//Here, we'll use:
glGenerateMipmapEXT(GL_TEXTURE_2D);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_2D
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//**************************
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//----------------
//**** Now that we rendered to level 0 of the texture, we must generate the mipmaps.
//This should be quick since it is done on the GPU.
glBindTexture(GL_TEXTURE_2D, color_tex);
glGenerateMipmapEXT(GL_TEXTURE_2D);
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
Quick example, render_to_texture (Cubemap)
In case you want dynamic reflections on a shiny object, you will want to render to a cubemap.
The concept behind rendering to a cubemap is the following: bind a cubemap face, render to it, bind another face, render to it, and so on for all 6 faces. You may think that rendering your scene 6 times will drag down performance, and you are right, so don't update the cubemap often; updating every 2 frames is enough, for example. Also keep your cubemap small, for example 256x256. A per-face render loop is sketched after the setup code below.
//RGBA8 Cubemap texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_CUBE_MAP, color_tex);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+0, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+1, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+2, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+3, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+4, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+5, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach one of the faces of the Cubemap texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_CUBE_MAP_POSITIVE_X, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_CUBE_MAP_POSITIVE_X
//In order to render to the other faces, do this:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, color_tex, 0);
//... now render
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, color_tex, 0);
//... now render
//... and so on
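Putting the face switching into a loop, rendering all six faces might look like this (a sketch; RenderSceneLookingAlongFaceAxis is a hypothetical callback that points the camera down the axis of the given face):
for (int face = 0; face < 6; ++face)
{
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
        GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, color_tex, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    RenderSceneLookingAlongFaceAxis(face);
}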
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
Quick example, render_to_texture (2D Depth texture ONLY)
In this example, notice glDrawBuffer(GL_NONE) and glReadBuffer(GL_NONE). We don't need a color output so that is why we set them to GL_NONE. The important call is glDrawBuffer(GL_NONE). We do not want to render to a color buffer.
//32 bit depth texture, 256x256
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
//NULL means reserve texture memory, but texels are undefined
//You can also try GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 for the internal format.
//If GL_DEPTH24_STENCIL8_EXT is available, go ahead and use it (GL_EXT_packed_depth_stencil)
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 256, 256, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depth_tex, 0);
//-------------------------
//Does the GPU support current FBO configuration?
//Before checking the configuration, you should call these 2 according to the spec.
//At the very least, you need to call glDrawBuffer(GL_NONE)
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//-------------------------
//----and to render to it, don't forget to call
//At the very least, you need to call glDrawBuffer(GL_NONE)
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//-------------------------
//If you want to render to the back buffer again, you must bind 0 AND THEN CALL glDrawBuffer(GL_BACK),
//else GL_INVALID_OPERATION will be raised
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &depth_tex);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
Quick example, render_to_texture (2D), mipmaps, depth_stencil
If GL_EXT_packed_depth_stencil is present, use it. The combined format is also called D24S8, and all common GPUs support it.
//-----------------
//RGBA8 2D texture, D24S8 depth/stencil texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//You must reserve memory for other mipmaps
//levels as well either by making a series of calls to
//glTexImage2D or use glGenerateMipmapEXT(GL_TEXTURE_2D).
//Here, we'll use : glGenerateMipmapEXT(GL_TEXTURE_2D);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_stencil_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_stencil_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_stencil_rb);
//Also attach as a stencil
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_stencil_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_2D
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
//It's always a good idea to clear the stencil at the same time as the depth when the format is D24S8.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//**************************
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
And in the end, cleanup
//-------------------------
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_stencil_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
//----------------------
Quick example, render_to_buffer (p-buffer replacement)
Around 2000, the p-buffer extension was released, which was used for offscreen rendering. These days it is best to use GL_EXT_framebuffer_object: it is much easier to use than p-buffers and, best of all, it is cross-platform. This example creates a RenderBuffer using 2 calls to glRenderbufferStorageEXT: the first creates a color buffer and the second creates a depth buffer.
//------------------------------
//RGBA8 RenderBuffer, 24 bit depth RenderBuffer, 256x256
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Create and attach a color buffer
glGenRenderbuffersEXT(1, &color_rb);
//We must bind color_rb before we call glRenderbufferStorageEXT
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
//The storage format is RGBA8
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
//Attach color buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_RENDERBUFFER_EXT, color_rb);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to the FBO (also called RenderBuffer)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//**************************
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//------------------------------
And in the end, cleanup
//Delete resources
glDeleteRenderbuffersEXT(1, &color_rb);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
Limitations of GL_EXT_framebuffer_object
One of the limitations of GL_EXT_framebuffer_object is that when you attach a color buffer and a depth buffer, both must have the same width and height, or else the state of the FBO is considered invalid (incomplete). This means that if you have one FBO that is 64x64, another that is 512x64, and another that is 1024x1024, you have to allocate a separate depth buffer for each of them (if you need depth testing, of course), which obviously wastes memory. In GL 3.0, FBO became core and that limitation was removed: you can create one 1024x1024 depth buffer and attach it to all 3 FBOs, since it is large enough even for the smaller textures like 64x64.
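As a sketch of what that looks like with the GL 3.0 core entry points (fbo64, fbo512, fbo1024 and their color attachments are assumed to already exist):
GLuint shared_depth;
glGenRenderbuffers(1, &shared_depth);
glBindRenderbuffer(GL_RENDERBUFFER, shared_depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 1024);
GLuint fbos[3] = { fbo64, fbo512, fbo1024 };
for (int i = 0; i < 3; ++i)
{
    // one large depth buffer serves FBOs of different sizes in GL 3.0
    glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
        GL_RENDERBUFFER, shared_depth);
}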
1 FBO or more
Is it better to make one FBO and bind your texture to it each time you need to render to a texture? An FBO itself doesn't use much memory; it is a state-vector object. In terms of performance, each time you bind, the driver needs to validate the state, which costs CPU time. Logically, one FBO per render-to-texture (RTT) target would seem best. However, it has been found that you get a speed boost if your textures are the same size and you use one FBO for all of them. If you have 10 textures that are 64x64 and 10 textures that are 512x64, make 2 FBOs: one FBO for each group.
The main framebuffer
Can you bind the main framebuffer's depth buffer as a depth buffer for your FBO? No. You must create a depth texture or a depth Render Buffer.
Does GL 3.0 allow using the main depth buffer? No.
Can you do MRT (multiple render targets) and have the main color framebuffer as one of the targets? No, you can only target a texture or a Render Buffer. GL 3.0 doesn't support it either.
MSAA
Is multisample render-to-texture (RTT) supported?
Not directly. You need GL_EXT_framebuffer_multisample, and you have to copy (resolve) the contents of the AA FBO into a standard RTT.
Note that GL_EXT_framebuffer_multisample also became core in GL 3.0. See also http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
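A sketch of that copy (resolve) step with the EXT entry points; msaa_fb, ms_color_rb and resolve_fb (the FBO holding the ordinary RTT texture) are assumed to be set up already:
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, ms_color_rb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4, GL_RGBA8, 256, 256);
// ...attach ms_color_rb to msaa_fb, render the scene into msaa_fb, then resolve:
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msaa_fb);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolve_fb);
glBlitFramebufferEXT(0, 0, 256, 256, 0, 0, 256, 256, GL_COLOR_BUFFER_BIT, GL_NEAREST);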
Color texture, Depth texture
In this example, we are attaching a color texture and also a depth texture and we'll render to both of them.
//--------------------------------
//RGBA8 2D texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 256, 256, 0,
GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0/*mipmap level*/);
//-------------------------
//Attach depth texture to FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_TEXTURE_2D, depth_tex, 0/*mipmap level*/);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break;
default:
HANDLE_THE_ERROR;
}
//--------------------------------
And in the end, cleanup
//-----------------------
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteTextures(1, &depth_tex);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
//-----------------------
Depth only
This is similar to the case above (Color texture, Depth texture), except that since there is no color buffer, you call glDrawBuffer(GL_NONE) before or after calling glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb), and then render. When you are done, call glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) to render to the main framebuffer again, and only then call glDrawBuffer(GL_BACK); if you call glDrawBuffer(GL_BACK) before binding 0, a GL error will be raised.
As for your fragment shader, you should write to gl_FragColor (or whatever your color output is) as usual. Unless you write gl_FragDepth yourself, the GPU generates the depth value automatically, and the color value is simply dropped since there is no color attachment.
Color only
If you don't need depth testing at all, simply disable depth testing (glDisable(GL_DEPTH_TEST)) and set the depth mask to FALSE (glDepthMask(GL_FALSE)) before you render to your RTT.
Stencil
NEVER EVER make a stencil-only buffer: no GPU or driver supports an independent stencil buffer. If you need a stencil buffer, make a combined Depth=24/Stencil=8 buffer, also called D24S8. See the GL_EXT_packed_depth_stencil example elsewhere on this page.
MRT
Talk about MRT
MRT and cubemaps
Talk about MRT and cubemaps
glReadPixels
Yes, you can bind a FBO, render to it, and then read the result back with a call to glReadPixels. It doesn't matter whether what you attached to the FBO is a RenderBuffer or a texture; glReadPixels will read it and return the results.
For RTT (render to texture), if you want to use glGetTexImage, it is recommended that you unbind the FBO first, make the texture current with calls to glActiveTexture and glBindTexture, and then call glGetTexImage. Better still, avoid glGetTexImage and use the glReadPixels method, since certain drivers do nothing at all when you call glGetTexImage.