=======================
=======================
=======================
http://www.dhpoware.com/demos/glMultiSampleAntiAliasing.html
---------------------------------------------------------------------------------------------------------------------------------------------
Multisample anti-aliasing source:
---------------------------------------------------------------------------------------------------------------------------------------------
=======================
=======================
=======================
Source: http://www.programkr.com/blog/McjMxEDMwYT0.html
The full-screen anti-aliasing discussed here is not a shader-based technique like FXAA; it is the fixed-pipeline multisampling approach, which has better compatibility.
First, the development environment: I am using VC2013 + GLEW 1.11.
Before configuring multisampling from the program, open the graphics driver's control panel and make sure the anti-aliasing setting is "application-controlled". The exact name of this setting differs between GPU vendors.
Now for the main point.
On Windows, configuring multisampling from the program requires the function wglChoosePixelFormatARB. Most examples on the internet fetch this function's address manually, but that is unnecessary: GLEW already provides this function's interface, and all we have to do is use it correctly. Correct usage means creating a temporary window. Through that window we can initialize GLEW with a basic pixel format, then call wglChoosePixelFormatARB to obtain a multisample-capable pixel format, and finally apply that pixel format to the real rendering window.
//--- Create a temporary window
HWND secondwnd;
WNDCLASS wc;                                            // window class structure
wc.style = CS_HREDRAW | CS_VREDRAW;                     // window style
wc.lpfnWndProc = WndProc;                               // window procedure
wc.cbClsExtra = 0;                                      // extra class bytes
wc.cbWndExtra = 0;                                      // extra window bytes
wc.hInstance = NULL;                                    // instance handle
wc.hIcon = LoadIcon(0, IDI_APPLICATION);                // window icon
wc.hCursor = LoadCursor(NULL, IDC_ARROW);               // window cursor
wc.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH); // background brush
wc.lpszMenuName = NULL;                                 // window menu
wc.lpszClassName = L"BaseWndClass";                     // window class name
if (!RegisterClass(&wc)) // check that registration succeeded
{
    MessageBox(L"Window class registration failed");
    return;
}
// Create the window
secondwnd = CreateWindow(L"BaseWndClass", // window class name
    L"MyForm",                            // window title
    WS_OVERLAPPEDWINDOW,                  // window style
    CW_USEDEFAULT,
    CW_USEDEFAULT,                        // window position
    CW_USEDEFAULT,
    CW_USEDEFAULT,                        // window size
    NULL,                                 // parent window
    NULL,                                 // menu
    NULL,                                 // instance handle
    NULL);                                // creation parameter; on success the return value is the new window handle, on failure it is NULL
if (secondwnd == 0) // check that creation succeeded
{
    MessageBox(L"Window creation failed");
    return;
}
::ShowWindow(secondwnd, SW_HIDE); // keep the temporary window hidden
::UpdateWindow(secondwnd);        // paint the window
PIXELFORMATDESCRIPTOR pfd = // define the pixel format
{
    sizeof(PIXELFORMATDESCRIPTOR), // size of this format descriptor
    1,                             // version number
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,                 // request an RGBA format
    32,                            // 32-bit color
    0, 0, 0, 0, 0, 0,              // color bits ignored
    8,                             // 8-bit alpha buffer
    0,                             // shift bit ignored
    0,                             // no accumulation buffer
    0, 0, 0, 0,                    // accumulation bits ignored
    32,                            // 32-bit Z-buffer (depth buffer)
    8,                             // 8-bit stencil buffer
    0,                             // no auxiliary buffer
    PFD_MAIN_PLANE,                // main drawing layer
    0,                             // reserved
    0, 0, 0                        // layer masks ignored
};
//----- Use this temporary window to obtain an HGLRC
HDC secondDC = ::GetDC(secondwnd);
int nIndex = ChoosePixelFormat(secondDC, &pfd); // choose the pixel format just defined
if (nIndex == 0)
{
    return;
}
SetPixelFormat(secondDC, nIndex, &pfd); // set the pixel format
glRC = wglCreateContext(secondDC);
wglMakeCurrent(secondDC, glRC);
//--- Then initialize GLEW
GLenum glerr = glewInit();
if (glerr != GLEW_OK)
{
    MessageBox(L"GLEW could not be initialized!");
}
else
{
    OutputDebugString(L"GLEW loaded successfully\n");
}
UINT Multisample = 4; // 4x multisample rate
int pixelformat;
UINT numformat;
float fAttributes[] = { 0, 0 };
int iAttributes[] =
{
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
    WGL_COLOR_BITS_ARB, 24,
    WGL_ALPHA_BITS_ARB, 8,
    WGL_DEPTH_BITS_ARB, 32,
    WGL_STENCIL_BITS_ARB, 8,
    WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
    WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
    WGL_SAMPLES_ARB, Multisample,
    0, 0
};
//---- Obtain a usable multisample pixel format
if (!wglChoosePixelFormatARB(secondDC, iAttributes,
    fAttributes, 1, &pixelformat, &numformat))
{
    MessageBox(L"Could not find an anti-aliased pixel format");
}
//---- Delete the HGLRC obtained through the temporary window and clean up the temporary window
wglMakeCurrent(NULL, NULL);
wglDeleteContext(glRC);
::ReleaseDC(secondwnd, secondDC);
::DestroyWindow(secondwnd);
//----- Apply the obtained multisample pixel format to the real window
SetPixelFormat(this->GetDC()->m_hDC, pixelformat, &pfd); // set the pixel format
glRC = wglCreateContext(this->GetDC()->m_hDC);
wglMakeCurrent(this->GetDC()->m_hDC, glRC);
//----------------------
The code above initializes OpenGL in the way I described: create a temporary window, initialize GLEW, query a multisample pixel format, then apply it to the real window.
After this setup, you can turn multisampling on and off with glEnable(GL_MULTISAMPLE) / glDisable(GL_MULTISAMPLE).
I hope this writeup helps anyone who has struggled with this problem as long as I did.
=======================
=======================
=======================
Source: https://www.khronos.org/opengl/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_(C%2B%2B/Win)
Overview
This is just a short tutorial about drawing primitives in OpenGL 3.x without using deprecated functionality. The code uses Visual Studio and a link to download a freeGLUT version is available.
Adding GLEW Support
Dealing with OpenGL 3.1 is hard enough, so I'll skip gymnastics with OpenGL extensions and use the OpenGL Extension Wrangler Library (GLEW). GLEW is a cross-platform open-source C/C++ extension loading library, and can be freely downloaded from the following site: http://glew.sourceforge.net. The following snippet of code includes support for GLEW, and should be placed somewhere in your code. If you are building a Visual Studio MFC application, which I recommend, the best place for it is somewhere at the end of the stdafx.h file. A cross-platform version of this code (which uses freeGLUT for windowing) is available on GitHub, and freeGLUT can be downloaded from http://freeglut.sourceforge.net
//--- OpenGL ---
#include "glew.h"
#include "wglew.h"
#pragma comment(lib, "glew32.lib")
//--------------
GLRenderer Class
We will start with creation of class CGLRenderer. This class should gather together all OpenGL related code. My students will recognize the functions I insisted on during the lectures. The header file is the same as in good old OpenGL 2.1, but the implementation will be severely changed.
//----------------------------
class CGLRenderer
{
public:
CGLRenderer(void);
virtual ~CGLRenderer(void);
bool CreateGLContext(CDC* pDC); // Creates OpenGL Rendering Context
void PrepareScene(CDC* pDC); // Scene preparation stuff
void Reshape(CDC* pDC, int w, int h); // Changing viewport
void DrawScene(CDC* pDC); // Draws the scene
void DestroyScene(CDC* pDC); // Cleanup
protected:
void SetData(); // Creates VAO and VBOs and fill them with data
protected:
HGLRC m_hrc; // OpenGL Rendering Context
CGLProgram* m_pProgram; // Program
CGLShader* m_pVertSh; // Vertex shader
CGLShader* m_pFragSh; // Fragment shader
GLuint m_vaoID[2]; // two vertex array objects, one for each drawn object
GLuint m_vboID[3]; // three VBOs
};
//----------------------------
Rendering Context Creation
First we have to create an OpenGL Rendering Context. This is the task for CreateGLContext() function.
//----------------------
bool CGLRenderer::CreateGLContext(CDC* pDC)
{
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 32;
pfd.iLayerType = PFD_MAIN_PLANE;
int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
if (nPixelFormat == 0) return false;
BOOL bResult = SetPixelFormat (pDC->m_hDC, nPixelFormat, &pfd);
if (!bResult) return false;
HGLRC tempContext = wglCreateContext(pDC->m_hDC);
wglMakeCurrent(pDC->m_hDC, tempContext);
GLenum err = glewInit();
if (GLEW_OK != err)
{
AfxMessageBox(_T("GLEW is not initialized!"));
}
int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB,
3,
WGL_CONTEXT_MINOR_VERSION_ARB,
1,
WGL_CONTEXT_FLAGS_ARB, 0, 0 };
if(wglewIsSupported("WGL_ARB_create_context") == 1)
{
m_hrc = wglCreateContextAttribsARB(pDC->m_hDC,0, attribs);
wglMakeCurrent(NULL,NULL);
wglDeleteContext(tempContext);
wglMakeCurrent(pDC->m_hDC, m_hrc);
}
else
{
//It's not possible to make a GL 3.x context.
//Use the old style context (GL 2.1 and before)
m_hrc = tempContext;
}
//Checking GL version
const GLubyte *GLVersionString = glGetString(GL_VERSION);
//Or better yet, use the GL3 way to get the version number
int OpenGLVersion[2];
glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);
if (!m_hrc)
return false;
return true;
}
//---------------------
Choosing and setting the pixel format are the same as in previous versions of OpenGL. The new steps that should be done are:
- Create standard OpenGL (2.1) rendering context which will be used only temporarily (tempContext), and make it current
HGLRC tempContext = wglCreateContext(pDC->m_hDC); wglMakeCurrent(pDC->m_hDC,tempContext);
- Initialize GLEW
GLenum err = glewInit();
- Setup attributes for a brand new OpenGL 3.1 rendering context
//----------------------------------
int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 1,
WGL_CONTEXT_FLAGS_ARB, 0, 0 };
//----------------------------------
- Create new rendering context
m_hrc = wglCreateContextAttribsARB(pDC->m_hDC,0, attribs);
- Delete tempContext
wglMakeCurrent(NULL,NULL); wglDeleteContext(tempContext);
Have you noticed something odd in this initialization? In order to create a new OpenGL rendering context, you have to call the function wglCreateContextAttribsARB(), which is an OpenGL function and requires OpenGL to be active when it is called. How can we fulfill this when we are at the very beginning of OpenGL rendering context creation? The only way is to create an old context, activate it, and while it is active create a new one. Very inconsistent, but we have to live with it!
Scene Preparation
After we have created the rendering context, the next step is to prepare the scene. In the function PrepareScene() we do whatever we have to do just once, before the scene is drawn for the first time.
//-----------------------
void CGLRenderer::PrepareScene(CDC *pDC)
{
glClearColor (1.0, 1.0, 1.0, 0.0);
m_pProgram = new CGLProgram();
m_pVertSh = new CGLShader(GL_VERTEX_SHADER);
m_pFragSh = new CGLShader(GL_FRAGMENT_SHADER);
m_pVertSh->Load(_T("minimal.vert"));
m_pFragSh->Load(_T("minimal.frag"));
m_pVertSh->Compile();
m_pFragSh->Compile();
m_pProgram->AttachShader(m_pVertSh);
m_pProgram->AttachShader(m_pFragSh);
m_pProgram->BindAttribLocation(0, "in_Position");
m_pProgram->BindAttribLocation(1, "in_Color");
m_pProgram->Link();
m_pProgram->Use();
SetData();
}
//------------------------
Shaders
The vertex shader is very simple. It just sends input values to the output, converting vec3 to vec4. Constructors are the same as in previous versions of GLSL. The main difference with respect to GLSL 1.2 is that there are no more attribute and varying qualifiers for variables inside shaders. Attribute variables are now in(put) and varying variables are out(put) for the vertex shader. Uniforms stay the same.
//-----------------
// Vertex Shader – file "minimal.vert"
#version 140
in vec3 in_Position;
in vec3 in_Color;
out vec3 ex_Color;
void main(void)
{
gl_Position = vec4(in_Position, 1.0);
ex_Color = in_Color;
}
//-----------------
The fragment shader is even simpler. Varying variables in fragment shaders are now declared as in variables. Take care that the name of the in(put) variable in the fragment shader must be the same as the out(put) variable in the vertex shader.
//-----------------------
// Fragment Shader – file "minimal.frag"
#version 140
precision highp float; // needed only for version 1.30
in vec3 ex_Color;
out vec4 out_Color;
void main(void)
{
out_Color = vec4(ex_Color,1.0);
}
//----------------------
If you have problems compiling the shader code (because OpenGL 3.1 is not supported), just change the version number: instead of 140, put 130. These shaders are so simple that the code is the same in GLSL version 1.3 and version 1.4.
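The fallback described above can also be automated when loading shader sources. The helper below is only a sketch of that idea, not part of the tutorial's CGLShader class; the function name is made up for illustration:

```cpp
#include <string>

// Sketch: downgrade the GLSL version directive from 1.40 to 1.30 in a
// shader source string, for drivers that only expose OpenGL 3.0 / GLSL 1.30.
// This works for the tutorial's shaders because the rest of the code is
// identical in both GLSL versions.
std::string DowngradeGLSLVersion(std::string src)
{
    const std::string from = "#version 140";
    const std::string to   = "#version 130";
    const std::string::size_type pos = src.find(from);
    if (pos != std::string::npos)
        src.replace(pos, from.length(), to);
    return src;
}
```

You would apply this to the loaded source before handing it to glShaderSource(), and only when the reported GL version is below 3.1.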
Setting Data
Function SetData() creates the VAOs and VBOs and fills them with data.
//--------------------
void CGLRenderer::SetData()
{
// First simple object
float* vert = new float[9]; // vertex array
float* col = new float[9]; // color array
vert[0] =-0.3; vert[1] = 0.5; vert[2] =-1.0;
vert[3] =-0.8; vert[4] =-0.5; vert[5] =-1.0;
vert[6] = 0.2; vert[7] =-0.5; vert[8] =-1.0;
col[0] = 1.0; col[1] = 0.0; col[2] = 0.0;
col[3] = 0.0; col[4] = 1.0; col[5] = 0.0;
col[6] = 0.0; col[7] = 0.0; col[8] = 1.0;
// Second simple object
float* vert2 = new float[9]; // vertex array
vert2[0] =-0.2; vert2[1] = 0.5; vert2[2] =-1.0;
vert2[3] = 0.3; vert2[4] =-0.5; vert2[5] =-1.0;
vert2[6] = 0.8; vert2[7] = 0.5; vert2[8]= -1.0;
// Two VAOs allocation
glGenVertexArrays(2, &m_vaoID[0]);
// First VAO setup
glBindVertexArray(m_vaoID[0]);
glGenBuffers(2, m_vboID);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), col, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
// Second VAO setup
glBindVertexArray(m_vaoID[1]);
glGenBuffers(1, &m_vboID[2]);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[2]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert2, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
delete [] vert;
delete [] vert2;
delete [] col;
}
//--------------------------------
Vertex buffer objects (VBOs) have been a familiar item since OpenGL version 1.5, but vertex array objects require more explanation. Vertex array objects (VAOs) encapsulate vertex array state on the client side. These objects allow applications to rapidly switch between large sets of array state.
A VAO saves the state of all vertex attribute arrays. The maximum number of attributes supported by your video card can be obtained by calling glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &MaxVertexAttribs).
A VAO stores the state of the vertex attribute arrays: whether each of them is enabled, their sizes, stride, type, whether they are normalized, whether they contain unconverted integers, the vertex attribute array pointers, the element array buffer binding, and the attribute array buffer bindings. In order to test how this works, we will create two separate (simple) objects with different VAOs.
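As a mental model only (this struct is illustrative, not an actual OpenGL type), the state a VAO records for each attribute array roughly corresponds to the arguments of glVertexAttribPointer() plus the enable flag and the buffer binding:

```cpp
// Illustrative sketch of the per-attribute state captured by a VAO.
// 0x1406 is the value of GL_FLOAT; the defaults follow the GL spec's
// initial vertex-attribute-array state.
struct VertexAttribState {
    bool        enabled            = false;   // glEnableVertexAttribArray
    int         size               = 4;       // components per vertex (1..4)
    unsigned    type               = 0x1406;  // data type, GL_FLOAT initially
    bool        normalized         = false;   // fixed-point normalization flag
    int         stride             = 0;       // 0 = tightly packed
    const void* pointer            = nullptr; // offset into the bound VBO
    unsigned    arrayBufferBinding = 0;       // VBO bound when pointer was set
};
```

Switching VAOs with glBindVertexArray() effectively swaps in a whole array of such records at once, which is why it is faster than re-specifying every attribute by hand.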
Setting Viewport
Reshape() function just sets a viewport.
void CGLRenderer::Reshape(CDC *pDC, int w, int h) { glViewport(0, 0, w, h); }
Drawing
DrawScene(), as its name implies, draws the scene.
//--------------------
void CGLRenderer::DrawScene(CDC *pDC)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(m_vaoID[0]); // select first VAO
glDrawArrays(GL_TRIANGLES, 0, 3); // draw first object
glBindVertexArray(m_vaoID[1]); // select second VAO
glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0); // set constant color attribute
glDrawArrays(GL_TRIANGLES, 0, 3); // draw second object
glBindVertexArray(0);
SwapBuffers(pDC->m_hDC);
}
//--------------------
As we can see, VAO binding changes all vertex attribute array settings. But be very careful! If any vertex attribute array is disabled, the VAO loses its binding to the corresponding VBO. In that case, we have to call glBindBuffer() and glVertexAttribPointer() again. The specification says nothing about this behavior, but it is what we have to do with the current version of NVidia drivers.
Cleaning up
And, at the end, we have to clean up the whole mess...
//------------------------------------
void CGLRenderer::DestroyScene(CDC *pDC)
{
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDeleteBuffers(3, m_vboID);
glBindVertexArray(0);
glDeleteVertexArrays(2, m_vaoID);
m_pProgram->DetachShader(m_pVertSh);
m_pProgram->DetachShader(m_pFragSh);
delete m_pProgram;
m_pProgram = NULL;
delete m_pVertSh;
m_pVertSh = NULL;
delete m_pFragSh;
m_pFragSh = NULL;
wglMakeCurrent(NULL, NULL);
if(m_hrc)
{
wglDeleteContext(m_hrc);
m_hrc = NULL;
}
}
//------------------------------------
Final Result
=======================
=======================
=======================
Source: http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=2
Welcome to the OpenGL 3.3+ tutorial series. In this series, you will learn how to use OpenGL the new way. This way is a little more difficult than the old one: now OpenGL relies on you to do many things yourself. But don't get scared; these tutorials will explain things slowly, and step by step you will build a good basis for thinking the new way.
In old OpenGL (before version 2.0), most functionality was FIXED into OpenGL, making it easier for programmers to do simple tasks (like working with matrices, transforming vertices and so on), but it didn't offer much room for very specific work. With OpenGL 2.0 came shaders, allowing the programmer to replace some of the fixed functionality and rewrite it the way he wanted. It was a very good thing. But until OpenGL 3.0, you could still rely on fixed functionality even in shaders. For example, until GLSL 1.40 (the OpenGL Shading Language, the most important part of new OpenGL, which will be covered in these tutorials), you could use the function ftransform(), which performs the fixed-function transformation, so you could transform vertices using OpenGL's built-in modelview and projection matrices and everything was OK. But in OpenGL 3.0 this fixed functionality was deprecated, and in OpenGL 3.2 and later it was removed from core functionality (so when using an OpenGL 3.2 core rendering context or later, calling these functions will have no effect).
So how is it in new OpenGL? Well, now you cannot use good ol' functions like glTranslatef(), glRotatef(), glScalef(), or glMatrixMode(GL_PROJECTION) followed by setting the perspective with gluPerspective() and similar functions. Now you have to calculate the matrices yourself, upload them to the vertex shader, and transform vertices with them. But don't worry, there are libraries on the internet that work with matrices. We will use one of them later. So it won't be that difficult in the end.
The next thing that has significantly changed is the actual rendering of things. There are no more glBegin() and glEnd() functions. Everything is replaced by vertex buffer objects (VBOs) and vertex array objects (VAOs). While in old OpenGL rendering a triangle was as intuitive as possible,
glBegin(GL_TRIANGLES);
glVertex2d(-5, 0); // Pass first vertex
glVertex2d( 5, 0); // Pass second vertex
glVertex2d( 0, 5); // Pass third vertex
glEnd();
the code of triangle render in OpenGL 3.3 can look like this:
// Some init scene function
UINT uiVAOid, uiVBOid;
void initScene()
{
float fVert[9];
fVert[0] = -5; fVert[1] = 0; fVert[2] = 0;
fVert[3] = 5; fVert[4] = 0; fVert[5] = 0;
fVert[6] = 0; fVert[7] = 5; fVert[8] = 0;
// Generate VAO
glGenVertexArrays(1, &uiVAOid);
// Setup of VAO
glBindVertexArray(uiVAOid);
glGenBuffers(1, &uiVBOid);
glBindBuffer(GL_ARRAY_BUFFER, uiVBOid);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), fVert, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
//...
}
// Some render scene function
void RenderScene()
{
//...
glBindVertexArray(uiVAOid);
glDrawArrays(GL_TRIANGLES, 0, 3);
//...
}
As you can see, it is longer and not as intuitive. But this brings BLAZING FAST rendering. If you know something about assembly (you don't even have to), you will notice that each call of glVertex3f takes 3 floats as parameters. These floats must be passed as function parameters through processor registers before they are sent to the GPU. And for one triangle, that's 3 function calls (for a single triangle it really isn't a problem, but a scene with one triangle probably isn't what we want :) ). For an object with 10000 triangles, it's 30000 calls per frame. This is the so-called CPU bottleneck, where rendering is slowed by the processor passing all this data to the GPU. In new OpenGL, we first set up our objects (store their data on the GPU), then call just a few functions to say which data to use, and then call (for example) glDrawArrays to render the objects. Now the CPU sends only a little data to the GPU, and rendering is much, much faster :) Of course, in older OpenGL versions you could use, for example, vertex arrays to speed up rendering, but that still passes data from RAM (client memory) to the GPU (server memory) every single frame, which is, no matter how we look at it, not good. Since OpenGL 1.5, though, you could move to VBOs (storing data on the GPU), and that would not be so bad. OK, so let's begin.
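The call-count arithmetic in the paragraph above can be made explicit. These two toy functions (the names are mine, purely for illustration) compare the per-frame API call counts of the two approaches:

```cpp
// Immediate mode: one glVertex call per vertex, i.e. 3 per triangle,
// repeated every single frame.
long ImmediateModeCalls(long triangles) { return triangles * 3; }

// Retained mode (VBO/VAO): roughly two calls per object per frame
// (glBindVertexArray + glDrawArrays), independent of triangle count.
long RetainedModeCalls(long objects) { return objects * 2; }
```

For the 10000-triangle object from the text, that is 30000 immediate-mode calls versus about 2 retained-mode calls per frame.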
The first thing we will need is glew (the OpenGL Extension Wrangler Library). You can download it from here: http://glew.sourceforge.net. After downloading and extracting it, we need to be able to include it in our project. Since I'm using Visual Studio, the best option is to extract glew into some libraries directory, and then add the include and library paths to Visual Studio. In Visual Studio 2008, you can do this under Tools -> Options -> Projects and Solutions -> VC++ Directories, as you can see in the following picture:
In Show directories for, you must choose Include files and add glew_installation_folder/include (of course, put the real path there, for example C:\Libraries\glew-1.7.0\include). Then you must also add the library path, so select Library files and add glew_installation_folder/lib. Then we can have in our code:
#include <gl/glew.h>
and it will be OK. The worse option is to copy glew.h into your project directory, so don't do that. The good thing about these include paths is that when a new version of glew (or any other library you use) comes out, you just download it and change the include path to the new version, and you have the new features and functions. This header supersedes the gl.h header on Windows, which hasn't been updated since version 1.1 (along with opengl32.lib). I know that Microsoft wants Windows developers to use DirectX, but they really could offer an alternative and add OpenGL support right into Visual Studio. They probably never will. It's sad, but there's nothing I can do about it. But glew will do all the work for us: one single call gets function pointers to all procedures, and we can use OpenGL 3.3 features without problems.
My aim is to create a class that will control OpenGL creation, releasing, and practically everything that deals with OpenGL. So let's start with the class declaration:
class COpenGLControl
{
public:
bool InitOpenGL(HINSTANCE hInstance, HWND* a_hWnd, int iMajorVersion, int iMinorVersion, void (*a_InitScene)(LPVOID), void (*a_RenderScene)(LPVOID), void(*a_ReleaseScene)(LPVOID), LPVOID lpParam);
void ResizeOpenGLViewportFull();
void Render(LPVOID lpParam);
void ReleaseOpenGLControl(LPVOID lpParam);
static void RegisterSimpleOpenGLClass(HINSTANCE hInstance);
static void UnregisterSimpleOpenGLClass(HINSTANCE hInstance);
void MakeCurrent();
void SwapBuffersM();
private:
bool InitGLEW(HINSTANCE hInstance);
HDC hDC;
HWND* hWnd;
HGLRC hRC;
static bool bClassRegistered;
static bool bGlewInitialized;
int iMajorVersion, iMinorVersion;
void (*InitScene)(LPVOID lpParam), (*RenderScene)(LPVOID lpParam), (*ReleaseScene)(LPVOID lpParam);
};
Even though it may seem a little complicated at first glance, it's not that bad. Let's look at the functions:
InitOpenGL - this is the most important function; it creates an OpenGL rendering context within a given window. The parameters are the application instance (if you don't know what that is, it doesn't matter, it's not that important), the window handle, the major and minor OpenGL version, and pointers to functions - an init function, a rendering function, and an optional release function. The idea is to create one instance of the COpenGLControl class somewhere, tell it which functions in your project are the init, rendering, and release functions, and then you are ready to go. A simple call of this function gives us an OpenGL context of the version we want.
ResizeOpenGLViewportFull() - sets OpenGL viewport to whole window
Render() - renders the scene. The lpParam parameter is of LPVOID type - a general pointer that can point to anything you want. Basically, lpParam will point to our OpenGL controller instance. The thing about function callbacks is that the code isn't that intuitive at first glance, but they are a very good thing, even though they may be hard to understand for those who haven't seen them before. You may have a look at Wikipedia about callbacks: http://en.wikipedia.org/wiki/Callback_(computer_programming)
ReleaseOpenGLControl() - cleanup function - releases scene data (if a release callback was set) and deletes the rendering context. lpParam follows the same concept as written previously.
RegisterSimpleOpenGLClass - registers a window class that supports OpenGL; this class is used by the fake window (you'll see later)
UnregisterSimpleOpenGLClass - unregisters the previously registered window class
MakeCurrent() - sets the current rendering context to the one we created (it calls the traditional wglMakeCurrent function)
SwapBuffersM() - swaps the front and back buffers. It simply calls the traditional SwapBuffers function; the extra M is in the name because otherwise the compiler complains about a clash with the global SwapBuffers function, even though I think it shouldn't, since this is a class member function (try it, I have)
InitGLEW - initializes the GLEW library
That does it; now we should have an idea of what each function does. We will take a closer look only at the InitGLEW and InitOpenGL functions. There isn't much to explain in the other functions; they are pretty straightforward.
bool COpenGLControl::InitGLEW(HINSTANCE hInstance)
{
if(bGlewInitialized)return true;
RegisterSimpleOpenGLClass(hInstance);
HWND hWndFake = CreateWindow(SIMPLE_OPENGL_CLASS_NAME, "FAKE", WS_OVERLAPPEDWINDOW | WS_MAXIMIZE | WS_CLIPCHILDREN,
0, 0, CW_USEDEFAULT, CW_USEDEFAULT, NULL,
NULL, hInstance, NULL);
hDC = GetDC(hWndFake);
// First, choose false pixel format
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize= sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 32;
pfd.iLayerType = PFD_MAIN_PLANE;
int iPixelFormat = ChoosePixelFormat(hDC, &pfd);
if (iPixelFormat == 0)return false;
if(!SetPixelFormat(hDC, iPixelFormat, &pfd))return false;
// Create the false, old style context (OpenGL 2.1 and before)
HGLRC hRCFake = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRCFake);
bool bResult = true;
if(!bGlewInitialized)
{
if(glewInit() != GLEW_OK)
{
MessageBox(hWndFake, "Couldn't initialize GLEW!", "Fatal Error", MB_ICONERROR);
bResult = false;
}
bGlewInitialized = true;
}
wglMakeCurrent(NULL, NULL);
wglDeleteContext(hRCFake);
DestroyWindow(hWndFake);
return bResult;
}
So what are we doing here? You may have guessed from the names of the variables - we create a fake window. Then we set up the rendering context the old way, using wglCreateContext. This gives us access to OpenGL functions. And here comes the reason for all this - now we can initialize the GLEW library using glewInit. What GLEW does is get function pointers to all OpenGL functions and extensions (if they are supported by the graphics card); it calls wglGetProcAddress for every OpenGL function. But without an OpenGL context, we couldn't get OpenGL function pointers, so that's why we create a fake window, get the OpenGL function pointers, and then destroy the fake window. I know - it isn't very nice, but after searching the OpenGL Wiki and some forums on the internet, I didn't find a better way of doing this on Windows.
bool COpenGLControl::InitOpenGL(HINSTANCE hInstance, HWND* a_hWnd, int iMajorVersion, int iMinorVersion,
void (*a_InitScene)(LPVOID), void (*a_RenderScene)(LPVOID), void(*a_ReleaseScene)(LPVOID),
LPVOID lpParam)
{
if(!InitGLEW(hInstance))return false;
hWnd = a_hWnd;
hDC = GetDC(*hWnd);
bool bError = false;
PIXELFORMATDESCRIPTOR pfd;
if(iMajorVersion <= 2)
{
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize= sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 32;
pfd.iLayerType = PFD_MAIN_PLANE;
int iPixelFormat = ChoosePixelFormat(hDC, &pfd);
if (iPixelFormat == 0)return false;
if(!SetPixelFormat(hDC, iPixelFormat, &pfd))return false;
// Create the old style context (OpenGL 2.1 and before)
hRC = wglCreateContext(hDC);
if(hRC)wglMakeCurrent(hDC, hRC);
else bError = true;
}
else if(WGLEW_ARB_create_context && WGLEW_ARB_pixel_format)
{
const int iPixelFormatAttribList[] =
{
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
WGL_COLOR_BITS_ARB, 32,
WGL_DEPTH_BITS_ARB, 24,
WGL_STENCIL_BITS_ARB, 8,
0 // End of attributes list
};
int iContextAttribs[] =
{
WGL_CONTEXT_MAJOR_VERSION_ARB, iMajorVersion,
WGL_CONTEXT_MINOR_VERSION_ARB, iMinorVersion,
WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
0 // End of attributes list
};
int iPixelFormat, iNumFormats;
wglChoosePixelFormatARB(hDC, iPixelFormatAttribList, NULL, 1, &iPixelFormat, (UINT*)&iNumFormats);
// PFD seems to be only redundant parameter now
if(!SetPixelFormat(hDC, iPixelFormat, &pfd))return false;
hRC = wglCreateContextAttribsARB(hDC, 0, iContextAttribs);
// If everything went OK
if(hRC) wglMakeCurrent(hDC, hRC);
else bError = true;
}
else bError = true;
if(bError)
{
// Generate error messages
char sErrorMessage[255], sErrorTitle[255];
sprintf(sErrorMessage, "OpenGL %d.%d is not supported! Please download latest GPU drivers!", iMajorVersion, iMinorVersion);
sprintf(sErrorTitle, "OpenGL %d.%d Not Supported", iMajorVersion, iMinorVersion);
MessageBox(*hWnd, sErrorMessage, sErrorTitle, MB_ICONINFORMATION);
return false;
}
RenderScene = a_RenderScene;
InitScene = a_InitScene;
ReleaseScene = a_ReleaseScene;
if(InitScene != NULL)InitScene(lpParam);
return true;
}
At the beginning of the function, we initialize GLEW. Once we have information about the OpenGL capabilities of our graphics card, we can proceed with creating the context. If the desired context is 2.1 or lower, we just create OpenGL the old way. But for later versions of OpenGL (3.0 and beyond), we use a new set of functions - wglChoosePixelFormatARB and wglCreateContextAttribsARB. The line:
if(WGLEW_ARB_create_context && WGLEW_ARB_pixel_format)
is used to check whether we have access to these functions (whether they are supported by our graphics card). If this check succeeds, we can use the new wgl functions - wglChoosePixelFormatARB and wglCreateContextAttribsARB. These functions allow us to specify the attributes of the pixel format and of the context. You just pass a pointer, in this case to an array of numbers in the format ATTRIBUTE, VALUE, ATTRIBUTE, VALUE... with a zero at the end. You can specify as many parameters as you want, and you always end the list with zero. This is more flexible than the old PIXELFORMATDESCRIPTOR structure, which is fixed. However, if you look at the SetPixelFormat call, you can see that I pass an uninitialized PIXELFORMATDESCRIPTOR structure, even though I didn't use it for finding a suitable pixel format. That is because when calling SetPixelFormat, you must pass a PIXELFORMATDESCRIPTOR structure as the last parameter. I couldn't find anything about the right way of setting up this OpenGL 3.3 context on Windows, but since we must pass something, we pass a dummy PIXELFORMATDESCRIPTOR, and everything works :) If some information turns up on the internet (on MSDN or anywhere else), I will edit this article. But for now, I'm happy that it works. So, in conclusion - to find the right pixel format we use wglChoosePixelFormatARB, and to set it we call SetPixelFormat with whatever third parameter (but not NULL). At the end of the function, we just set the function pointers for the init, render, and release functions, and finally call the init function to initialize our scene. And that does it - we are done with the initialization of OpenGL.
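The ATTRIBUTE, VALUE, ..., 0 layout described above is easy to scan mechanically. This small helper (my own sketch, not part of WGL) counts the pairs in such a list, stopping at the terminating zero:

```cpp
// Count ATTRIBUTE/VALUE pairs in a zero-terminated WGL-style attribute
// list such as iPixelFormatAttribList or iContextAttribs above.
int CountAttribPairs(const int* attribs)
{
    int pairs = 0;
    while (attribs[2 * pairs] != 0)  // a zero attribute name ends the list
        ++pairs;
    return pairs;
}
```

Applied to iContextAttribs above (major version, minor version, context flags, then the terminating zero), it would return 3. Note that only the attribute-name positions are tested, so a value of 0 (like WGL_CONTEXT_FLAGS_ARB, 0) does not end the list, and a redundant trailing extra zero is harmless.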
Because this is only the first tutorial, it ends here. We won't render the first triangle now, but in the second tutorial. The first tutorial would simply get too long, since rendering primitives has become a little more difficult and requires an explanation of the terms and functions involved. But in order to test the functionality of our OpenGL context, we will clear the background to a nice light blue :) and it will look just like this:
So that's it for today! You can have a look at the whole code, but really don't worry if you don't understand it. I'm a native Win32 platform programmer - meaning I don't use MFC or any wrapper - so I have to handle window messages and everything myself. This approach has drawbacks, of course - you must write more code (code that's already written in libraries like MFC) - but also many advantages: you have COMPLETE control over your application flow, there is no need for additional libraries, and the final EXE isn't as big as if you had used, for example, MFC. And these tutorials aim to teach OpenGL, not Win32 programming. Just know that everything is set up fine :) Thank you for reading this far! Add comments and/or questions below.
=======================
=======================
=======================
출처: https://sites.google.com/site/opengltutorialsbyaks/introduction-to-opengl-3-2---tutorial-01
Introduction to OpenGL 3.2 - Tutorial 01
Working with Extensions and GL 3.2 Rendering Context
OpenGL 3.2 is up and running (currently only with the NV 190.56 beta drivers, but that will not stop us from trying some of the brand new functionality), and we are ready to start coding. Or... are we? As we all well know, on MS Windows we have direct access only to the GL 1.1 API. Okay, okay, we all know that (I hear your bored yawn). We will go to GLEW or GLee or a similar site and download the latest library that can handle everything we need. But if we go to those sites, we will see that the libraries are out of date. Do we have to wait until new libraries are released? Certainly not! The first part of this tutorial covers accessing the extensions directly. If you already know that, just jump to the GL 3.2 Rendering Context Creation section.
Extensions
There are just 4 steps to get access to new functionality...
Step 1. Go to http://www.opengl.org/registry/ and download the latest header files (for Windows we need just two of them: glext.h and wglext.h). There is also a gl3.h file that could be useful, but we will skip it this time. The downloaded files should be copied into the Include/GL folder, or into the local folder of your application.

Step 2. I suggest creating two new files (for example ogl.h and ogl.cpp) where we can include all OpenGL related .h files and pointers to functions. The ogl.h file, at the moment, should look like this:

// --- ogl.h ---
#include "gl11.h"
#include "glext.h"
#include "wglext.h"
#pragma comment(lib, "opengl32.lib")

As you can see, I've copied all .h files into the project's folder. The file gl11.h is in fact the gl.h that shipped with previous versions of Visual Studio (VS2k8 does not include it), renamed (because of the project name) and copied into the project's folder. I've added 11 to indicate the OpenGL version this file supports (ver. 1.1). The file ogl.cpp currently includes only this:

// ogl.cpp
#include "stdafx.h"

Step 3. Include ogl.h in the StdAfx.h file. Take a look at MSDN to see the purpose of StdAfx.h (Precompiled Header Files).

//--- OpenGL ---
#include "ogl.h"

Step 4. Add a new function to the GLRenderer class (introduced in the previous tutorial). Let's name it InitAPI(), because its purpose will be to grab pointers to all the functions we need. We will explain the whole process using the function glCreateProgram(), which creates a program object that will contain shaders.

Step 4a. Declare a proper function pointer in ogl.h.

extern PFNGLCREATEPROGRAMPROC glCreateProgram;

The type of that pointer is formed this way: PFN + ToUppercase(function_name) + PROC

Step 4b. Set the pointer to NULL in ogl.cpp.

PFNGLCREATEPROGRAMPROC glCreateProgram = NULL;

Step 4c. Grab the function pointer in InitAPI().
void CGLRenderer::InitAPI()
{
    glCreateProgram = (PFNGLCREATEPROGRAMPROC)wglGetProcAddress("glCreateProgram");
}

Step 4d. Call InitAPI() inside the CreateGLContext() function.

bool CGLRenderer::CreateGLContext(CDC* pDC)
{
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 32;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
    if (nPixelFormat == 0) return false;

    BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
    if (!bResult) return false;

    // --- OpenGL 3.x ---
    HGLRC tempContext = wglCreateContext(pDC->m_hDC);
    wglMakeCurrent(pDC->m_hDC, tempContext);
    InitAPI();
    //...
}

Steps 4a, 4b and 4c have to be repeated for every function we want to call in our code. Just to enable shaders and draw a triangle using a single VBO, we have to add plenty of functions.
After some time of writing and copy/pasting, InitAPI() should look like this:

void CGLRenderer::InitAPI()
{
    // Program
    glCreateProgram = (PFNGLCREATEPROGRAMPROC)wglGetProcAddress("glCreateProgram");
    glDeleteProgram = (PFNGLDELETEPROGRAMPROC)wglGetProcAddress("glDeleteProgram");
    glUseProgram = (PFNGLUSEPROGRAMPROC)wglGetProcAddress("glUseProgram");
    glAttachShader = (PFNGLATTACHSHADERPROC)wglGetProcAddress("glAttachShader");
    glDetachShader = (PFNGLDETACHSHADERPROC)wglGetProcAddress("glDetachShader");
    glLinkProgram = (PFNGLLINKPROGRAMPROC)wglGetProcAddress("glLinkProgram");
    glGetProgramiv = (PFNGLGETPROGRAMIVPROC)wglGetProcAddress("glGetProgramiv");
    glGetShaderInfoLog = (PFNGLGETSHADERINFOLOGPROC)wglGetProcAddress("glGetShaderInfoLog");
    glGetUniformLocation = (PFNGLGETUNIFORMLOCATIONPROC)wglGetProcAddress("glGetUniformLocation");
    glUniform1i = (PFNGLUNIFORM1IPROC)wglGetProcAddress("glUniform1i");
    glUniform1iv = (PFNGLUNIFORM1IVPROC)wglGetProcAddress("glUniform1iv");
    glUniform2iv = (PFNGLUNIFORM2IVPROC)wglGetProcAddress("glUniform2iv");
    glUniform3iv = (PFNGLUNIFORM3IVPROC)wglGetProcAddress("glUniform3iv");
    glUniform4iv = (PFNGLUNIFORM4IVPROC)wglGetProcAddress("glUniform4iv");
    glUniform1f = (PFNGLUNIFORM1FPROC)wglGetProcAddress("glUniform1f");
    glUniform1fv = (PFNGLUNIFORM1FVPROC)wglGetProcAddress("glUniform1fv");
    glUniform2fv = (PFNGLUNIFORM2FVPROC)wglGetProcAddress("glUniform2fv");
    glUniform3fv = (PFNGLUNIFORM3FVPROC)wglGetProcAddress("glUniform3fv");
    glUniform4fv = (PFNGLUNIFORM4FVPROC)wglGetProcAddress("glUniform4fv");
    glUniformMatrix4fv = (PFNGLUNIFORMMATRIX4FVPROC)wglGetProcAddress("glUniformMatrix4fv");
    glGetAttribLocation = (PFNGLGETATTRIBLOCATIONPROC)wglGetProcAddress("glGetAttribLocation");
    glVertexAttrib1f = (PFNGLVERTEXATTRIB1FPROC)wglGetProcAddress("glVertexAttrib1f");
    glVertexAttrib1fv = (PFNGLVERTEXATTRIB1FVPROC)wglGetProcAddress("glVertexAttrib1fv");
    glVertexAttrib2fv = (PFNGLVERTEXATTRIB2FVPROC)wglGetProcAddress("glVertexAttrib2fv");
    glVertexAttrib3fv = (PFNGLVERTEXATTRIB3FVPROC)wglGetProcAddress("glVertexAttrib3fv");
    glVertexAttrib4fv = (PFNGLVERTEXATTRIB4FVPROC)wglGetProcAddress("glVertexAttrib4fv");
    glEnableVertexAttribArray = (PFNGLENABLEVERTEXATTRIBARRAYPROC)wglGetProcAddress("glEnableVertexAttribArray");
    glBindAttribLocation = (PFNGLBINDATTRIBLOCATIONPROC)wglGetProcAddress("glBindAttribLocation");

    // Shader
    glCreateShader = (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
    glDeleteShader = (PFNGLDELETESHADERPROC)wglGetProcAddress("glDeleteShader");
    glShaderSource = (PFNGLSHADERSOURCEPROC)wglGetProcAddress("glShaderSource");
    glCompileShader = (PFNGLCOMPILESHADERPROC)wglGetProcAddress("glCompileShader");
    glGetShaderiv = (PFNGLGETSHADERIVPROC)wglGetProcAddress("glGetShaderiv");

    // VBO
    glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
    glBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData");
    glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC)wglGetProcAddress("glVertexAttribPointer");
}

After acquiring each pointer we should test whether it is NULL. If it is, the specified function is not implemented and we should not continue executing the program.
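The lookup-and-check pattern above can be sketched without Windows at all. In this minimal stand-in, the table is a stub standing in for the driver; on Windows the lookup would be wglGetProcAddress(name), and the NULL check is the part that matters:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Stand-in for the wglGetProcAddress pattern: look up a function pointer by
// name and return NULL when it does not exist. Callers must treat NULL as
// "not implemented" and stop, exactly as the tutorial recommends.
typedef void (*ProcPtr)();
static void DummyCreateProgram() {}  // placeholder implementation

struct ProcEntry { const char* name; ProcPtr fn; };

static ProcPtr GetProc(const ProcEntry* table, std::size_t n, const char* name)
{
    for (std::size_t i = 0; i < n; ++i)
        if (std::strcmp(table[i].name, name) == 0)
            return table[i].fn;
    return NULL;  // function not exported by this "driver"
}
```

The real code differs only in where the pointer comes from; the failure mode (a NULL pointer silently crashing later) is identical, which is why the check belongs right after the lookup.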
GL 3.2 Rendering Context Creation
As we learnt in the previous tutorial, to create a GL 3.x rendering context we have to: 1. define a pixel format (as for the old rendering context), 2. create an old context and activate it (make it current), 3. define the attributes of the new (3.x) context, 4. create the new (3.x) context using wglCreateContextAttribsARB(), 5. deactivate and delete the old context. Everything is done in the CreateGLContext() function.

bool CGLRenderer::CreateGLContext(CDC* pDC)
{
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 32;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
    if (nPixelFormat == 0) return false;

    BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
    if (!bResult) return false;

    // --- OpenGL 3.x ---
    HGLRC tempContext = wglCreateContext(pDC->m_hDC);
    wglMakeCurrent(pDC->m_hDC, tempContext);
    InitAPI();

    int major, minor;
    GetGLVersion(&major, &minor);
    if (major < 3 || (major == 3 && minor < 2))
        AfxMessageBox(_T("OpenGL 3.2 is not supported!"));

    int attribs[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, major,
        WGL_CONTEXT_MINOR_VERSION_ARB, minor,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };

    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL;
    wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (wglCreateContextAttribsARB != NULL)
    {
        m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
    }

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(tempContext);

    if (!m_hrc)
    {
        AfxMessageBox(_T("OpenGL 3.x RC was not created!"));
        return false;
    }
    return true;
}

Before creating the new context it is not a bad idea to check which version of OpenGL our driver supports. For that reason we create the function GetGLVersion(), which retrieves the major and minor version of the supported OpenGL. If we are dealing with OpenGL 3.x, the version can be retrieved by calling glGetIntegerv() with GL_MAJOR_VERSION and GL_MINOR_VERSION.

// for GL 3.x
glGetIntegerv(GL_MAJOR_VERSION, major); // major = 3
glGetIntegerv(GL_MINOR_VERSION, minor); // minor = 2

But to be sure that we read the version correctly even when something older than 3.x is supported, we should use glGetString(GL_VERSION). The next code demonstrates a mixed technique to retrieve the GL version.

void CGLRenderer::GetGLVersion(int* major, int* minor)
{
    // for all versions
    char* ver = (char*)glGetString(GL_VERSION); // ver = "3.2.0"
    *major = ver[0] - '0';
    if (*major >= 3)
    {
        // for GL 3.x
        glGetIntegerv(GL_MAJOR_VERSION, major); // major = 3
        glGetIntegerv(GL_MINOR_VERSION, minor); // minor = 2
    }
    else
    {
        *minor = ver[2] - '0';
    }

    // GLSL
    ver = (char*)glGetString(GL_SHADING_LANGUAGE_VERSION); // ver = "1.50 NVIDIA via Cg compiler"
}

The code also demonstrates how to read the OpenGL Shading Language (GLSL) version. The function does not return the GLSL version; the comment only shows what the returned string looks like.

Let's go back to the attributes. The attributes are defined as a list of (name, value) pairs terminated with 0. If certain attributes are not in the list, their default values are used. WGL_CONTEXT_MAJOR_VERSION_ARB and WGL_CONTEXT_MINOR_VERSION_ARB define the version of the GL context we want to create. If not defined, the defaults are 1 and 0 respectively, which means "the latest version that also supports GL 1.0". In the case of GL 3.2, that is the GL 3.2 compatibility profile. WGL_CONTEXT_PROFILE_MASK_ARB defines which "profile" the context should support. There are two profiles: core and compatibility. If we create the core profile, only non-deprecated functions can be used. The compatibility profile enables all functionality, from GL 1.0 to GL 3.2; it is a superset of the core profile. When GL 3.0 arrived, I thought the so-called deprecated functions would die (and I was not very happy, because some of them were very useful). But, by default, GL 3.0 was backward compatible. Then GL 3.1 came, and we learnt that if we want to use deprecated functions we have to use extensions! That was challenging, because drivers do not have to implement extensions - everything outside the core is optional. So we had to orient ourselves toward core functions and reimplement geometric transformations, projections, and everything else. I thought the new age of GL had begun. GL 3.2 returned things to where they stood almost a year ago. Now we have two profiles cohabiting without problems. Maybe the core profile will be better optimized, but... maybe.
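Before going further with the attributes, note that the string-parsing fallback inside GetGLVersion() can be exercised on its own, without a live GL context. This sketch assumes, as the function itself does, a single-digit "major.minor..." version string:

```cpp
#include <cassert>

// The digit-parsing fallback of GetGLVersion(), isolated for testing:
// character 0 is the major version digit, character 2 (after the dot)
// is the minor version digit.
static void ParseGLVersionString(const char* ver, int* major, int* minor)
{
    *major = ver[0] - '0';
    *minor = ver[2] - '0';
}
```

This is exactly why glGetString(GL_VERSION) is the safe path on pre-3.x drivers: GL_MAJOR_VERSION/GL_MINOR_VERSION queries simply did not exist there, while the version string always did.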
After reading some posts by Mark Kilgard (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=258516#Post258516 and http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=258666#Post258666) I'm sure that the compatibility mode/profile/however-it-is-called will not be removed in the foreseeable future.

For the core profile, the predefined value WGL_CONTEXT_CORE_PROFILE_BIT_ARB (hexadecimal 0x00000001) should be used. This is the default value of the WGL_CONTEXT_PROFILE_MASK_ARB attribute (GL 3.2)! To use compatibility mode, set WGL_CONTEXT_PROFILE_MASK_ARB to WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB (0x00000002). Earlier implementations of OpenGL should ignore the WGL_CONTEXT_PROFILE_MASK_ARB attribute. (I have tried WGL_CONTEXT_PROFILE_MASK_ARB on two older (3.0) versions of NV drivers and saw only trouble.) It is very interesting that the profile attribute is implemented as a bitmask rather than an enumeration. Currently only a single profile can be specified, but it is not unlikely that eventually there will be profiles defined that can be implemented in the same context.

The attribute WGL_CONTEXT_FLAGS_ARB specifies a set of flags affecting the rendering context:
- If the WGL_CONTEXT_DEBUG_BIT_ARB flag is set, a "debug" context should be created. But currently (after three versions of GL) it still does not have any effect.
- If WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB is set, a "forward-compatible" context is created. A "forward-compatible" context must not support functionality marked as "deprecated" by the given version of GL, while a non-forward-compatible context must support all functionality in that version, deprecated or not. WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB had a prominent role in GL 3.0 and 3.1 contexts, but now that profiles are defined its function is rather unclear.
Citation: If the forward compatibility bit is set, the returned context must not implement deprecated functionality in that profile (nothing is deprecated from the OpenGL 3.2 compatibility profile, so a forward-compatible compatibility profile has exactly the same functionality as a non-forward-compatible compatibility profile, while a forward-compatible core profile would leave out only the handful of features - wide lines and two queries - deprecated from core OpenGL 3.2).

Drawing a Triangle

Now we are ready to set up the other functions so that we can see something meaningful on the screen. The function PrepareScene() creates the shaders, loads their code, compiles them, attaches them to a program, binds IDs to attribute locations, and links and activates the program.

void CGLRenderer::PrepareScene(CDC *pDC)
{
    wglMakeCurrent(pDC->m_hDC, m_hrc);
//---------------------------------
glClearColor (1.0, 1.0, 1.0, 0.0);
m_pProgram = new CGLProgram();
m_pVertSh = new CGLShader(GL_VERTEX_SHADER);
m_pFragSh = new CGLShader(GL_FRAGMENT_SHADER);
if(!m_pVertSh->Load(_T("minimal.vert")))
AfxMessageBox(_T("Vertex shader loading error!"));
if(!m_pFragSh->Load(_T("minimal.frag")))
AfxMessageBox(_T("Fragment shader loading error!"));
if(!m_pVertSh->Compile())
AfxMessageBox(_T("Vertex shader compiling error!"));
if(!m_pFragSh->Compile())
AfxMessageBox(_T("Fragment shader compiling error!"));
m_pProgram->AttachShader(m_pVertSh);
m_pProgram->AttachShader(m_pFragSh);
m_pProgram->BindAttribLocation(0, "in_Position");
m_pProgram->BindAttribLocation(1, "in_Color");
if(!m_pProgram->Link())
AfxMessageBox(_T("Program linking error!"));
m_pProgram->Use();
SetData();
//---------------------------------
wglMakeCurrent(NULL, NULL);
}
The function SetData() creates and fills VBOs with data, and enables vertex attribute arrays.

void CGLRenderer::SetData()
{
float* vert = new float[9]; // vertex array
float* col = new float[9]; // color array
vert[0] = 0.0f; vert[1] = 0.8f; vert[2] =-1.0f;
vert[3] =-0.8f; vert[4] =-0.8f; vert[5] =-1.0f;
vert[6] = 0.8f; vert[7] =-0.8f; vert[8]= -1.0f;
col[0] = 1.0f; col[1] = 0.0f; col[2] = 0.0f;
col[3] = 0.0f; col[4] = 1.0f; col[5] = 0.0f;
col[6] = 0.0f; col[7] = 0.0f; col[8] = 1.0f;
glGenBuffers(2, &m_vboID[0]);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), col, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
delete [] vert;
delete [] col;
}
The function Reshape() sets the viewport.

void CGLRenderer::Reshape(CDC *pDC, int w, int h)
{
    wglMakeCurrent(pDC->m_hDC, m_hrc);
    //---------------------------------
    glViewport(0, 0, (GLsizei)w, (GLsizei)h);
    //---------------------------------
    wglMakeCurrent(NULL, NULL);
}

The function DrawScene() actually draws the scene.

void CGLRenderer::DrawScene(CDC *pDC)
{
    wglMakeCurrent(pDC->m_hDC, m_hrc);
    //--------------------------------
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    //--------------------------------
    glFlush();
    SwapBuffers(pDC->m_hDC);
    wglMakeCurrent(NULL, NULL);
}

The function DestroyScene() cleans everything up.

void CGLRenderer::DestroyScene(CDC *pDC)
{
    wglMakeCurrent(pDC->m_hDC, m_hrc);
    //--------------------------------
    m_pProgram->DetachShader(m_pVertSh);
    m_pProgram->DetachShader(m_pFragSh);
    delete m_pProgram; m_pProgram = NULL;
    delete m_pVertSh;  m_pVertSh = NULL;
    delete m_pFragSh;  m_pFragSh = NULL;
    wglMakeCurrent(NULL, NULL);
    //--------------------------------
    if (m_hrc)
    {
        wglDeleteContext(m_hrc);
        m_hrc = NULL;
    }
}
And the shaders are exactly the same as in the previous tutorial.

// minimal.vert
#version 150 core

in vec3 in_Position;
in vec3 in_Color;
out vec3 ex_Color;

void main(void)
{
    gl_Position = vec4(in_Position, 1.0);
    ex_Color = in_Color;
}

// minimal.frag
#version 150 core
// precision highp float;

in vec3 ex_Color;
out vec4 out_Color;

void main(void)
{
    out_Color = vec4(ex_Color, 1.0);
}

Only the version directive has changed to 150, and the core profile is selected. More about GLSL 1.5 will be covered in the next tutorial. If everything is correct, we will eventually see a triangle.
=======================
=======================
=======================
출처: https://m.blog.naver.com/PostView.nhn?blogId=jungwan82&logNo=20108257993&proxyReferer=https%3A%2F%2Fwww.google.co.kr%2F
Although OpenGL 4.0 was already introduced in March 2010, most hardware (especially mine...) is not ready to support it, so this tutorial uses OpenGL 3.2. OpenGL 3.2 was introduced on July 24, 2009, and can still be considered recent. Geometry shaders are available from version 3.2 onward.
g_hDC = GetDC(g_hWnd); // Get the device context used to draw on the client region.

// Pixel format setting
// If the format is not supported, zero is returned and you must try again
// with another pfd or quit your program.
2. Accessing the extended functions of OpenGL and getting a new RC
The second argument to wglChoosePixelFormatARB() looks complicated but is logical. It consists of key/value pairs, with a terminating 0 at the end to mark where the list stops.
Step #4 Prepare the scene and clear background
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity(); // Reset The Current Modelview Matrix
return TRUE;
}
=======================
=======================
=======================
출처: http://www.gpgstudy.com/forum/viewtopic.php?t=20587#p105817
Anti-aliasing and multisampling
------------------------------------------------------------------------------------------------------------------------------------------------------------------
Re: Anti-aliasing and multisampling
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
supersampling & multisampling
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
http://wrice.egloos.com
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
=======================
=======================
=======================
출처: http://w.nya.kr/doku.php?id=%EC%88%98%EC%97%85:opengl:%EC%95%88%ED%8B%B0%EC%95%A8%EB%A6%AC%EC%96%B4%EC%8B%B1
Contents

Overview
- When a line is drawn on screen, the jagged, stair-stepped look it takes on is called aliasing.
- Anti-aliasing: techniques for reducing aliasing.

The anti-aliasing procedure

In RGBA mode:
- OpenGL computes a coverage value for each fragment, based on the portion of the screen pixel that the primitive covers.
- In RGBA mode, the fragment's alpha value is multiplied by its coverage.
- The resulting alpha value is used to blend the fragment with the pixel already in the framebuffer.

In color-index mode:
- OpenGL computes a coverage value for each fragment, based on the portion of the screen pixel that the primitive covers.
- The low 4 bits of the color index are set from the fragment's coverage (0000 means no coverage, 1111 means full coverage).
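The RGBA-mode procedure above can be written out as a CPU-side sketch: the fragment's alpha is scaled by its coverage, then blended with the stored pixel using the usual (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) factors. This is an illustration of the math only, not how a driver implements it:

```cpp
#include <cassert>
#include <cmath>

// Blend one anti-aliased fragment into a framebuffer pixel:
// alpha is multiplied by coverage, then standard alpha blending applies.
struct RGBA { float r, g, b, a; };

static RGBA BlendCoveredFragment(RGBA src, RGBA dst, float coverage)
{
    float a = src.a * coverage;  // coverage-scaled source alpha
    RGBA out;
    out.r = src.r * a + dst.r * (1.0f - a);
    out.g = src.g * a + dst.g * (1.0f - a);
    out.b = src.b * a + dst.b * (1.0f - a);
    out.a = a + dst.a * (1.0f - a);
    return out;
}
```

A fully covered opaque fragment replaces the pixel; a half-covered one contributes half its color, which is what softens the edge.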
void glHint(GLenum target, GLenum hint)

Because the coverage calculation is complex and may vary between implementations, glHint() lets you experiment with the trade-off between speed and quality.

| Parameter | Description |
|---|---|
| target | The behavior to control |
| hint | GL_FASTEST (fastest), GL_NICEST (highest quality), GL_DONT_CARE (no preference) |

Values usable with glHint:

| Parameter | Description |
|---|---|
| GL_POINT_SMOOTH_HINT, GL_LINE_SMOOTH_HINT, GL_POLYGON_SMOOTH_HINT | Sampling quality of points, lines, and polygons during anti-aliasing. |
| GL_FOG_HINT | Whether fog is computed per pixel (GL_NICEST) or per vertex. |
| GL_PERSPECTIVE_CORRECTION_HINT | Quality of color and texture-coordinate interpolation. |
| GL_GENERATE_MIPMAP_HINT | Quality and performance of automatic mipmap level generation. |
| GL_TEXTURE_COMPRESSION_HINT | Quality and performance of texture image compression. |
Anti-aliasing points and lines

To anti-alias points or lines:
- Call glEnable() with GL_POINT_SMOOTH or GL_LINE_SMOOTH to enable anti-aliasing.
- Use glHint() to give a quality hint.
- (As covered in chapter 2, point size, line width, and line pattern can also be set, though that is not strictly an aliasing topic.)
- What comes next depends on whether you are in RGBA mode or color-index mode.

Anti-aliasing in RGBA mode
- In RGBA mode, blending must be enabled.
- The most commonly used blending factors: (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

Anti-aliasing in color-index mode
- When anti-aliasing in color-index mode, the low 4 bits of the color index represent coverage, so you must load a color ramp of 16 contiguous indices running from the background color to the object color.
Anti-aliasing geometric primitives with multisampling

- Multisampling is a technique for anti-aliasing OpenGL primitives such as points, lines, polygons, bitmaps, and images using color, depth, and stencil information (samples). Each fragment carries several sets of colors, depths, and texture coordinates, one per subpixel sample.

Alpha and multisample coverage

- Multisampling computes a fragment coverage that is independent of the alpha value.

void glSampleCoverage(GLclampf value, GLboolean invert)

Sets the parameters used to interpret alpha values while computing multisample coverage.

| Parameter | Description |
|---|---|
| value | The temporary coverage value. |
| invert | Whether the temporary coverage value is inverted before being applied to the fragment coverage. |

Related tokens (passed to glEnable()):

| Token | Description |
|---|---|
| GL_SAMPLE_ALPHA_TO_COVERAGE | Uses the fragment's alpha value to compute the final coverage. |
| GL_SAMPLE_ALPHA_TO_ONE | Sets the fragment's alpha to the maximum value, 1, and uses that value in the coverage computation. |
| GL_SAMPLE_COVERAGE | Uses the glSampleCoverage value ANDed with the computed coverage value. |
Anti-aliasing polygons

- Anti-aliasing the edges of filled polygons is similar to anti-aliasing points and lines.
- Where polygon edges overlap, the color values must be blended appropriately.
- When anti-aliasing is enabled, it also applies in glPolygonMode(GL_POINT) and glPolygonMode(GL_LINE) modes.
- glPolygonMode(GL_FILL) is the filled-polygon case.
- When anti-aliasing polygons, rendering order and blending accuracy matter:
  - When applying anti-aliasing to more than one polygon, sort the polygons from front to back.
  - Use (GL_SRC_ALPHA_SATURATE, GL_ONE) in glBlendFunc().
- For this reason, anti-aliasing polygons in color-index mode is not meaningful.

Anti-aliasing polygons in RGBA mode

- The alpha value is used to represent the coverage of polygon edges.
- Passing GL_POLYGON_SMOOTH to glEnable() turns on polygon anti-aliasing.
- With polygon anti-aliasing enabled, pixels on polygon edges are assigned partial alpha values based on their coverage, just like anti-aliased line segments.
- Overlapping edges are then blended appropriately:
  - Turn the depth buffer off to control how overlapping pixels are drawn.
  - Set the blending factors to (GL_SRC_ALPHA_SATURATE, GL_ONE).
  - With this blending function, the final color accumulates using the scale factor min(As, 1 - Ad).
  - For a pixel whose stored alpha is already large, 1 minus that alpha is close to 0, so new fragments contribute almost nothing to the final color.
- A pixel on a polygon edge is blended with the color of pixels from polygons drawn later.
- Before drawing, sort all polygons in the scene from front to back.
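The min(As, 1 - Ad) scale factor described above is small enough to write out directly; a minimal sketch of the source factor used by (GL_SRC_ALPHA_SATURATE, GL_ONE) blending:

```cpp
#include <cassert>
#include <cmath>

// GL_SRC_ALPHA_SATURATE source scale factor: f = min(As, 1 - Ad).
// Once the destination alpha Ad approaches 1 (the pixel is "full"),
// later fragments stop contributing, which is why front-to-back order matters.
static float SaturateFactor(float srcAlpha, float dstAlpha)
{
    float remaining = 1.0f - dstAlpha;  // coverage still unclaimed at this pixel
    return srcAlpha < remaining ? srcAlpha : remaining;
}
```

Drawn back to front instead, nearer polygons would find the pixel already saturated by farther ones, producing the wrong result; this is exactly the sorting requirement stated above.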
=======================
=======================
=======================
출처: https://gamedev.stackexchange.com/questions/18777/how-do-i-implement-anti-aliasing-in-opengl
I want to do full-screen anti-aliasing in OpenGL, and don't want to use the anti-aliasing provided by OpenGL itself, since I am building a game and want good effects in it.
How should I proceed?
------------------------------------------------------------------------------------------------------------------------------------------------------------
By "the anti-aliasing provided by GL itself", do you refer to the old poorly named line and polygon quality hints that are labelled as "antialiasing", or are you referring to using regular MSAA through GLX/WGL visuals? –
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
There are several alternatives to native MSAA in OpenGL. With post-processing effects, the best thing about them is that you can usually just throw in the shader to the final, unprocessed image and it does the rest. Here are three methods worth taking a look:
- Fast Approximate Anti-Aliasing (Geeks3D) - Good in most cases. Pretty easy to apply and understand. The drawback is that sharp, high-contrast noise in textures gets blurred a bit. Edges as shallow as 1/4 pixel steep look just as accurate as traditional MSAA; anything shallower than that loses a bit of accuracy.
- Normal Filtered Anti-Aliasing (GameDev) - Haven't tested this one thoroughly yet, but it's the easiest to understand. In the best cases it resembles 16x MSAA, and in the worst cases it's like 2x MSAA. It generates a temporary normal map to represent edges and relative angles. You can sample the normal map by either luma difference or color difference.
- Morphological Anti-Aliasing (Iryoku) - since improved into SMAA (Subpixel Morphological AA). It's pretty complex at 4 passes, but achieves the best results I've seen. It creates gradients along edges as gradual as 1/100 to 1/200 pixels steep (!). Sampling can be luma-based, color-based, or depth-based. Textures stay very crisp and clean. (The example is DX10 and HLSL based; it would take some time to port it to GLSL accurately.)
These techniques don't super-sample or multi-sample, so lines that appear less than 1 pixel in thickness will appear with gaps and not be anti-aliased correctly. This is the downside to using a non-MSAA approach. Since you're only working with a raster image at full resolution, you can't create additional information from these empty gaps.
Take notice that all of these techniques are dependent on sampling adjacent luma (brightness) or chroma (color) values. Calculating luma and optional gamma correction requires additional instructions on the AA shader, though it's pretty straightforward. You can offload this by calculating the luma in the previous shader that provides the un-retouched image, storing the luma in the alpha channel. Then in the AA shader, you will simply sample the alpha.
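The luma computation mentioned above is a single weighted sum. As a hedged illustration: the Rec. 601 weights below are one common choice, and FXAA implementations vary in their exact coefficients, so treat these as illustrative rather than canonical:

```cpp
#include <cassert>
#include <cmath>

// A common luma (perceived brightness) approximation from linear RGB,
// using Rec. 601 weights. AA shaders sample a value like this per pixel
// to find edges; green dominates because the eye is most sensitive to it.
static float Luma(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}
```

Precomputing this in the previous pass and storing it in the alpha channel, as the answer suggests, turns three multiplies and two adds per AA-shader tap into a single channel read.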
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
There's lots of ways to do antialiasing. One is to use multisample antialiasing (MSAA), where your back buffer actually stores multiple sub-pixel samples, and when you render triangles, lines, etc. the system automatically fills in the correct set of samples in each pixel. Then at the end of rendering the image is "resolved" by averaging over all the sub-pixel samples to get one sample per pixel.
Another way is to use post-processing antialiasing, where you render the scene as normal and then do some targeted blurring on the end result to hide aliased edges. There are a variety of techniques for this, but one of the best / most popular at the moment is called FXAA (Fast approXimate Anti-Aliasing).
MSAA will generally give better-looking results than post-processing AA, but can be slower because it requires approximately double the memory bandwidth for every rendering operation. MSAA can also require more video memory than post-processing AA, depending on the details of the setup.
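The memory cost mentioned above scales linearly with the sample count. A back-of-the-envelope sketch, assuming 4 bytes of color (RGBA8) and 4 bytes of depth/stencil per sample (real drivers add their own overhead, so this only illustrates the scaling):

```cpp
#include <cassert>
#include <cstddef>

// Rough video-memory footprint of an N-sample MSAA framebuffer:
// every pixel stores every attachment once per sample.
static std::size_t MsaaFramebufferBytes(std::size_t w, std::size_t h,
                                        std::size_t samples)
{
    const std::size_t bytesPerSample = 4 /*RGBA8*/ + 4 /*depth-stencil*/;
    return w * h * samples * bytesPerSample;
}
```

At 1920x1080, 4x MSAA under these assumptions costs four times the single-sample framebuffer, which is where both the extra memory and the extra bandwidth per rendering operation come from.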
You can find specific info about implementing either MSAA or FXAA on the Web - just google either one of those terms.
Are you willing to code your own GLSL shader? If so, check this tutorial.
=======================
=======================
=======================
출처: https://stackoverflow.com/questions/20482851/how-to-use-multisampling-with-opengl-fbos
I'm trying to enable multisampling and alpha-to-coverage for an FBO. Using the default framebuffer, all I have to do is call glEnable(GL_MULTISAMPLE) and glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE). However, I am unable to achieve the same effect using my own FBO.
My goal: Draw the scene to an FBO the same way it would be drawn to the default framebuffer with the above properties. From there I want to be able to use the image as a texture for future passes through a shader.
This works: Code for making an FBO without multisampling/alpha-to-coverage, 1 color attachment, 1 depth attachment:
// Generate the color attachment
glGenTextures(1, &defaultColorAttachment0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, defaultColorAttachment0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screenWidth, screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Bind the texture to the FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, defaultColorAttachment0, 0);

// Generate the depth attachment
glGenRenderbuffers(1, &defaultDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, defaultDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, screenWidth, screenHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, defaultDepthBuffer);
This doesn't work. Code trying to make a multisampled FBO:
// Generate the color attachment
glGenTextures(1, &defaultColorAttachment0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA, screenWidth, screenHeight, GL_FALSE);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, defaultColorAttachment0, 0);

// Generate the depth attachment
glGenRenderbuffers(1, &defaultDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, defaultDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, screenWidth, screenHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, defaultDepthBuffer);
I have tried looking through the OpenGL wiki on this, although it's incomplete (various unfinished headings make it look unprofessional). glGetError never complains. I've tried messing around with this, but I either get a black screen or a screen full of garbage pixels.
Main Question: What things do I need to consider/change and where (FBO creation, textures, shaders) in order to get multisampling and alpha-to-coverage to work with an FBO?
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Have you tried calling glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0,0); instead of glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, defaultColorAttachment0,0);? – Jherico Dec 9 '13 at 23:39
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
You need to allocate a multisampled depth buffer for this to work correctly and give it the same number of samples as your color buffer. In other words, you should be calling glRenderbufferStorageMultisample (...) instead of glRenderbufferStorage (...).
Your FBO should be failing a completeness check the way it is allocated right now. A call to glCheckFramebufferStatus (...) ought to return GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE because your depth buffer has exactly 1 sample and your color buffer attachment has 4.
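Putting the diagnosis together, a hedged sketch of what the corrected allocation could look like (assuming 4 samples, a sized internal format, and the same variable names as the question; an untested outline, not a drop-in fix):

```cpp
// Multisampled color attachment
glGenTextures(1, &defaultColorAttachment0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                        screenWidth, screenHeight, GL_FALSE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, defaultColorAttachment0, 0);

// Depth attachment with the SAME sample count as the color attachment
glGenRenderbuffers(1, &defaultDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, defaultDepthBuffer);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24,
                                 screenWidth, screenHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, defaultDepthBuffer);

// Always verify completeness after attaching everything
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the error
```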
Since you are also using a multisampled texture attachment in this FBO, you should be aware of the differences between sampling a single-sampled texture and a multisampled one in GLSL shaders. Multisampled textures have a special sampler uniform type (e.g. sampler2DMS), and you have to explicitly fetch each sample in the texture by its integer (non-normalized) texel coordinate and sample index using texelFetch (...). This also means that they cannot be filtered or mip-mapped.
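For illustration, a fragment-shader sketch of what explicit per-sample fetching looks like (uniform names are mine; this is the manual box resolve you would normally let a blit do for you):

```glsl
#version 150
uniform sampler2DMS msTex;   // the multisampled color attachment
uniform int numSamples;      // e.g. 4
out vec4 fragColor;

void main()
{
    // Integer, non-normalized texel coordinate -- no filtering available
    ivec2 texel = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < numSamples; ++i)
        sum += texelFetch(msTex, texel, i); // fetch each sample explicitly
    fragColor = sum / float(numSamples);
}
```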
You probably do not want a multisampled texture in this case; you probably want to use glBlitFramebuffer (...) to do the MSAA resolve into a single-sampled FBO. If you do this instead, you can read the anti-aliased results in your shaders rather than having to fetch each sample and implement the anti-aliasing yourself.
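Conceptually, the resolve a blit performs is just a per-pixel average of the samples. A minimal software sketch of that idea (plain C++ with a hypothetical 4-sample pixel, not GL code):

```cpp
#include <array>
#include <cstddef>

// Resolve: average the N samples of one pixel into a single value --
// conceptually what glBlitFramebuffer does for each pixel of a
// multisampled source.
template <std::size_t Samples>
float resolvePixel(const std::array<float, Samples>& samples)
{
    float sum = 0.0f;
    for (float s : samples)
        sum += s;
    return sum / static_cast<float>(Samples);
}
```

For an edge pixel where 2 of 4 samples are covered, the resolved value is halfway between foreground and background, which is exactly the smoothed edge you see on screen.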
YES! This is super helpful. I've got it working now! I had to make another FBO to blit the multisampled image into, but that's okay. I lined up the depth buffer with the color buffer (multisample, RGBA => 32). Thanks so much. – GraphicsMuncher Dec 11 '13 at 1:34
=======================
=======================
=======================
Source:
History
Before the GL_ARB_multisample extension, the edges of lines, polygons, and points could be selectively antialiased using glEnable(GL_LINE_SMOOTH), glEnable(GL_POLYGON_SMOOTH), and glEnable(GL_POINT_SMOOTH) respectively, combined with a blending function such as glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Such features were typically not implemented in hardware in early consumer graphics cards and were done in software, resulting in poor performance. More expensive "workstation" graphics cards from HP, Sun, and SGI at the time did implement these features in hardware. Modern programs should not make use of these features.
Rendering with Multisampling
There are two pieces to rendering with multisampling:
- Allocating a multisample render target (window or FBO), and
- Enabling multisample rasterization (i.e. glEnable( GL_MULTISAMPLE ))
While the second step is standardized across all render target types and platforms, the first is only standardized for FBOs (as it is totally internal to GL). When rendering to a window, the allocation method depends on the platform-specific GL integration layer in use (e.g. WGL, GLX, AGL, etc.). GLUT provides a wrapper around some of these so you don't have to care.
The following sections describe how to perform MSAA render target allocation (#1) for various render target types and platforms.
Allocating a Multisample Render Target
Render-to-FBO
As mentioned, allocation of multisample off-screen FBOs is platform independent. Here's an example:
//---------------------------------
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D_MULTISAMPLE, tex );
glTexImage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE,
num_samples, GL_RGBA8,
width, height, false );
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D_MULTISAMPLE, tex, 0 );
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER ); ...
//-----------------------------------
Once you've rendered your scene into the multisampled FBO, you can display the anti-aliased result on the screen by blitting your multisampled FBO to the back buffer using glBlitFramebuffer(). Before doing so, you must first specify the back buffer as your draw buffer, and your multisampled FBO as the read framebuffer, like so:
//-------------------
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// Make sure no FBO is set as the draw framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
// Make sure your multisampled FBO is the read framebuffer
glDrawBuffer(GL_BACK);
// Set the back buffer as the draw buffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
//-------------------
Doing this blit will automatically resolve the multisampled FBO. In other words, for each texel in the multisampled FBO, the blit will blend together the texel's samples, and this blended color is what is written into the corresponding pixel on the back buffer. If you wish, you can resolve your multisampled FBO into a different off-screen FBO by setting that FBO as your draw buffer.
Render-to-Window
Microsoft Windows (using wgl)
The wgl functions required to create a multisampled OpenGL context are not available until a valid OpenGL context is made current to the thread. This leads to a temporary-context code path which can get very involved with platform-specific details. Users of libraries such as GLUT, GLEW, or GLee can significantly reduce the amount of effort required. The rest of this section assumes that a valid context is present and all wgl extension function pointers have been obtained; if not, please see how to create a temporary context.
A valid pixel format for the framebuffer is chosen using the wglChoosePixelFormatARB function with a list of attributes to specify the multisampling properties. In order to choose a framebuffer format that incorporates multisampling, you must add it to the list of attributes. For example, this list of attributes does not select a multisampled pixel format:
int attributes[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB, 32,
    WGL_DEPTH_BITS_ARB, 24,
    WGL_STENCIL_BITS_ARB, 8,
    0
};
To consider multisampled visuals, the WGL_SAMPLE_BUFFERS_ARB and WGL_SAMPLES_ARB attributes must be present. WGL_SAMPLE_BUFFERS_ARB must be set to 1, and WGL_SAMPLES_ARB is the number of samples; e.g. for 8x multisampling, WGL_SAMPLES_ARB would be set to 8. This attribute list is the same as above, but considers 4x multisampled pixel formats too:
int attributes[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB, 32,
    WGL_DEPTH_BITS_ARB, 24,
    WGL_STENCIL_BITS_ARB, 8,
    WGL_SAMPLE_BUFFERS_ARB, 1, // Number of buffers (must be 1 at time of writing)
    WGL_SAMPLES_ARB, 4,        // Number of samples
    0
};
Once the correct pixel format is found, creating a context proceeds as normal.
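A driver may reject high sample counts, so a common pattern is to retry with progressively fewer samples (the article below on CStarWarsCtrl does exactly this with 4, then 2). A sketch of that fallback logic, with the actual wglChoosePixelFormatARB call abstracted behind a callback so only the control flow is shown (the helper name is mine, not part of any API):

```cpp
#include <functional>
#include <initializer_list>

// Returns the first sample count the driver accepts, or 0 if none do.
// 'trySamples' would wrap wglChoosePixelFormatARB with WGL_SAMPLES_ARB
// set to the given count and report whether a format was found.
int pickSampleCount(const std::function<bool(int)>& trySamples)
{
    for (int samples : {8, 4, 2})
        if (trySamples(samples))
            return samples;
    return 0; // fall back to a non-multisampled format
}
```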
X Windows (using GLX)
This is the method you'd use on UNIX/Linux if you want to create an X window with multisampling capability. There are a few methods for doing this, but the following locates an MSAA FBConfig, looks up its XVisual, and then creates an X window in that visual. You'll notice some similarities with the WGL method above.
// --- Find a MSAA FBConfig ---
static const int Visual_attribs[] = {
    GLX_X_RENDERABLE  , True,
    GLX_DRAWABLE_TYPE , GLX_WINDOW_BIT,
    GLX_RENDER_TYPE   , GLX_RGBA_BIT,
    GLX_X_VISUAL_TYPE , GLX_TRUE_COLOR,
    GLX_RED_SIZE      , 8,
    GLX_GREEN_SIZE    , 8,
    GLX_BLUE_SIZE     , 8,
    GLX_ALPHA_SIZE    , 8,
    GLX_DEPTH_SIZE    , 24,
    GLX_STENCIL_SIZE  , 8,
    GLX_DOUBLEBUFFER  , True,
    GLX_SAMPLE_BUFFERS, 1,   // <-- MSAA
    GLX_SAMPLES       , 4,   // <-- MSAA
    None
};
int attribs[100];
memcpy( attribs, Visual_attribs, sizeof( Visual_attribs ) );

GLXFBConfig fbconfig = 0;
int fbcount;
GLXFBConfig *fbc = glXChooseFBConfig( display, screen, attribs, &fbcount );
if ( fbc )
{
    if ( fbcount >= 1 )
        fbconfig = fbc[0];
    XFree( fbc );
}
if ( !fbconfig )
{
    printf( "Failed to get MSAA GLXFBConfig\n" );
    exit(1);
}

// --- Get its VisualInfo ---
XVisualInfo *visinfo = glXGetVisualFromFBConfig( display, fbconfig );
if ( !visinfo )
{
    printf( "Failed to get XVisualInfo\n" );
    exit(1);
}
printf( "X Visual ID = 0x%.2x\n", int( visinfo->visualid ) );

// --- Now just create an X window in that visual ---
XSetWindowAttributes winAttr;
winAttr.event_mask        = StructureNotifyMask | KeyPressMask;
winAttr.background_pixmap = None;
winAttr.background_pixel  = 0;
winAttr.border_pixel      = 0;
winAttr.colormap = XCreateColormap( display, root_win, visinfo->visual, AllocNone );

unsigned int mask = CWBackPixmap | CWBorderPixel | CWColormap | CWEventMask;

Window win = XCreateWindow( display, root_win,
                            WIN_XPOS, WIN_YPOS,
                            WIN_XRES, WIN_YRES, 0,
                            visinfo->depth, InputOutput,
                            visinfo->visual, mask, &winAttr );
XStoreName( display, win, "My GLX Window" );
MacOS X (using Cocoa)
TBD
Cross-platform (using GLUT)
GLUT is a wrapper around the platform-specific GL integration layers (e.g. WGL, GLX, etc.) which allows you to prototype simple cross-platform GL examples quickly. In GLUT, the key to allocating an MSAA window is this:
glutInitDisplayMode( ... | GLUT_MULTISAMPLE );
which directs GLUT to allocate an MSAA-capable window. The following is a complete example showing how to render with multisampling in a simple GLUT test program:
#include <GL/gl.h>
#include <GL/glut.h>
#include <GL/glu.h>
#include <stdio.h>

float rotation_angle = 0;
int msaa = 1;

void reshape(int width, int height)
{
    glViewport(0, 0, width, height);
}

void mouse(int button, int state, int x, int y)
{
    if (state == GLUT_DOWN)
    {
        msaa = !msaa;
        glutPostRedisplay();
    }
}

void display()
{
    int err = 0;
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1, 1, -1, 1, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(rotation_angle, 0, 0, 1);
    glColor4f(1, 0, 0, 1);

    if (msaa)
    {
        glEnable(GL_MULTISAMPLE_ARB);
        printf("msaa on\n");
    }
    else
    {
        printf("msaa off\n");
        glDisable(GL_MULTISAMPLE_ARB);
    }

    glRectf(-.5, -.5, .5, .5);
    glutSwapBuffers();

    err = glGetError();
    if (err)
        fprintf(stderr, "%s\n", gluErrorString(err));
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_MULTISAMPLE);
    glutCreateWindow(argv[0]);
    glutDisplayFunc(display);
    glutMouseFunc(mouse);
    glutReshapeFunc(reshape);
    glutReshapeWindow(400, 400);
    printf("%s\n", glGetString(GL_RENDERER));
    rotation_angle = 30;
    glutMainLoop();
    return 0;
}
Extension References
- http://www.opengl.org/registry/specs/ARB/multisample.txt

These might interest you:
- http://www.opengl.org/registry/specs/NV/multisample_filter_hint.txt
- http://www.opengl.org/registry/specs/NV/explicit_multisample.txt
- http://www.opengl.org/registry/specs/NV/multisample_coverage.txt
Conclusion
Setting up FSAA takes a few steps but it is worth it. Today's GPUs are very rapid and the user wants control over the quality of the game's graphics.
On Windows, the extension we are interested in is WGL_ARB_extensions_string, which defines wglGetExtensionsStringARB, but the only way to check whether this is available is to call wglGetExtensionsStringARB. So this is a chicken-and-egg situation. Just get the function pointer to wglGetExtensionsStringARB and use it to see if you have WGL_ARB_pixel_format and WGL_ARB_multisample.
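Checking the extension string deserves some care: a plain strstr can match a substring of a longer extension name. A small sketch of a whole-token check (the helper name is mine; it assumes the space-separated string returned by wglGetExtensionsStringARB):

```cpp
#include <cstring>

// True if 'name' appears as a whole space-delimited token in
// 'extensions' (e.g. the string from wglGetExtensionsStringARB).
bool hasExtension(const char* extensions, const char* name)
{
    const size_t len = std::strlen(name);
    for (const char* p = extensions;
         (p = std::strstr(p, name)) != nullptr;
         p += len)
    {
        // The match must start at the string start or after a space,
        // and end at a space or the end of the string.
        const bool startOk = (p == extensions) || (p[-1] == ' ');
        const bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;
    }
    return false;
}
```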
=======================
=======================
=======================
Source: http://m.cafe.daum.net/Mzen/GLM7/81?q=D_TvqjBF8y-os0&
Motivation
I saw Pablo van der Meer's article detailing his CStarWarsCtrl. I thought it was very interesting, but I didn't like the fact that it used MFC and StretchBlt.
So I set upon the task to reimplement it using OpenGL while at the same time making it friendly for Win32.
What is it?
This article provides a Win32 OpenGL framework. It makes it easy for you by hiding most of the OpenGL initialization/shutdown code. The article shows how to use the framework to create a StarWars type effect of scrolling text.
Win32 OpenGL
The hardest part was setting up OpenGL under Win32. First you need to create a window class using the CS_OWNDC style, and query for a pixel format. Once you have those, you can create an OpenGL rendering context and attach it to the DC of the window. After a little documentation, I found out it wasn't that difficult at all. I often make Win32 controls which do a lot of work for you. So here I stuck with my plan. I made a Win32 control that does all the OpenGL work for you. All you have to give the control is a COpenGLWndController class and a requested pixel format, and the control makes use of it. Let's take a look at it.
class COpenGLWndController
{
private:
    // these are friends because these functions need to call SetParameters
    friend LRESULT OnOpenGLSetController(HWND hWnd, void *pController);
    friend LRESULT OnOpenGLCreate(HWND hWnd, LPCREATESTRUCT lpcs);

    void SetParameters(HDC hdc, HGLRC hglrc);
    virtual void vDraw() = 0;   // render it now

    HDC m_hdc;
    HGLRC m_hglrc;

public:
    void Draw();
    virtual ~COpenGLWndController() {;}
    virtual int ValidatePixelFormat(HDC hdc, int suggestedFormat);
    virtual void WindowSized(int cx, int cy) = 0;
    virtual void Init() = 0;    // initialize textures
    virtual void Close() = 0;   // the window is closing, destroy textures/etc
};
Creation
To create an OpenGL window, use this function:
BOOL RegisterOpenGLWindow(HINSTANCE hInst);

// Remember, once created, the window will call 'delete' on the controller.
HWND CreateOpenGLWindow(HINSTANCE hInst, HWND hParent,
                        DWORD style, UINT id, LPRECT rt,
                        COpenGLWndController *pController,
                        LPPIXELFORMATDESCRIPTOR pfd);
One simply has to create a subclass of COpenGLWndController and implement WindowSized, vDraw, Init, and Close. WindowSized is called in response to a WM_SIZE message, and this is where you change your OpenGL viewport. vDraw is the function which renders the scene. Don't get this confused with Draw. Draw is the public function you call to repaint the window - it handles behind-the-scenes things like swapping the buffers. Draw ends up calling vDraw anyways. Init is called when the OpenGL window has created its rendering context and is now ready for use. You can now load your textures or initialize OpenGL as you see fit. Close is similar; here you can delete any OpenGL textures/objects etc. ValidatePixelFormat does not need to be overridden, but it can be. You can use this function to fiddle with the pixel format, returning a new one if you want. I use it in my implementation to turn on FSAA (full screen antialiasing).
Implementation
Let's take a look at how our subclass works - CStarWarsController. The code for WindowSized is pretty self-explanatory.
void CStarWarsController::WindowSized(int cx, int cy)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (float)cx/(float)cy, 1.0, 90.0);
    glViewport(0, 0, cx, cy);
}
The code for Init initializes the fonts used.
void CStarWarsController::Init()
{
    HFONT hOld;
    HFONT hFont = CreateFont(12, 0, 0, 0, FW_NORMAL, FALSE, FALSE, 0,
                             ANSI_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                             DEFAULT_QUALITY, DEFAULT_PITCH, _T("Arial"));
    HDC hdc = wglGetCurrentDC();
    hOld = (HFONT)SelectObject(hdc, hFont);
    wglUseFontOutlines(hdc, 0, MAX_TEXT, 1000, 0.0f, 0.1f,
                       WGL_FONT_POLYGONS, m_agmf);
    SelectObject(hdc, hOld);
    DeleteObject(hFont);
}
The code for Close cleans up the fonts used and deletes our CObjects.
void CStarWarsController::Close()
{
    glDeleteLists(1000, MAX_TEXT);

    // delete our objects now
    for (int i = 0; i < NUMOBJECTS; ++i)
    {
        if (pObjects[i])
        {
            delete pObjects[i];
            pObjects[i] = NULL;
        }
    }
}
I mentioned the CObject class. I use this class in the controller as it represents a moving object along the screen. Each line of text is treated as an object. Each object has a starting point, a vector it moves along, and a current position. Thus, for any time t, I can calculate the current position from the starting point and movement vector. CObject has one overridable function, Draw(). I provide two subclasses of CObject: CTextObject and CTexturedQuad. The moving flag is a CTexturedQuad.
The time offset might need some explaining. The objects are in an array. The first object needs to be followed by the others to look good. Each object has the same starting point. For this example, it's 0,-4,0. But each object has a time offset for when it should appear. At time 0, an object appears at 0,-4,0. With a time offset of 2, it will be closer to the viewer, because it's 2 seconds behind. Therefore all objects in the array have an increasing time offset. The text objects typically require a 2-second time offset between them. This is how the objects are spaced apart. This means you can space them apart as far as you want by changing the time offset field.
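The spacing rule described above boils down to current = start + slope * (t - timeOffset). A standalone sketch of that math (plain C++ with illustrative values; the struct is a stand-in for the article's TDPOINT/VECTOR types):

```cpp
struct Vec3 { float x, y, z; };

// Position of an object at time t, given its shared start point, its
// movement vector (units per second), and its per-object time offset.
Vec3 positionAt(Vec3 start, Vec3 slope, float t, float timeOffset)
{
    const float dt = t - timeOffset;
    return { start.x + slope.x * dt,
             start.y + slope.y * dt,
             start.z + slope.z * dt };
}
```

An object with a 2-second offset always trails the previous one by exactly 2 seconds' worth of travel, which is what keeps the lines of text evenly spaced.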
typedef struct _tagVECTOR
{
    float x;
    float y;
    float z;
} VECTOR, *LPVECTOR;

typedef struct _tagTDPOINT
{
    float x;
    float y;
    float z;
} TDPOINT, *LPTDPOINT;

class CObject
{
public:
    CObject();
    virtual ~CObject() {;}
    virtual void Draw() = 0;

    float m_fAngle;
    float m_fTimeOffset;
    float m_fColor[3];
    TDPOINT m_start;
    VECTOR m_slope;
    TDPOINT m_curPos;
};
In my example, it renders constantly. CStarWarsController has a function called Idle which moves all the objects and stars around the screen. The code is easy, simple vector math.
void CStarWarsController::Idle()
{
    LARGE_INTEGER now;

    // get current time
    QueryPerformanceCounter(&now);
    m_fTimeElapsed = ((float)(now.QuadPart - m_start.QuadPart) / (float)m_freq.QuadPart);

    // move the objects
    for (int i = 0; i < NUMOBJECTS; ++i)
    {
        pObjects[i]->m_curPos.x = pObjects[i]->m_start.x;
        pObjects[i]->m_curPos.y = pObjects[i]->m_start.y +
            pObjects[i]->m_slope.y * (m_fTimeElapsed - pObjects[i]->m_fTimeOffset);
        pObjects[i]->m_curPos.z = pObjects[i]->m_start.z +
            pObjects[i]->m_slope.z * (m_fTimeElapsed - pObjects[i]->m_fTimeOffset);
    }

    // move the stars, calculate new time based on star m_start time
    m_fTimeElapsed = ((float)(now.QuadPart - m_starStart.QuadPart) / (float)m_freq.QuadPart);
    for (int i = 0; i < m_iNumStars; ++i)
    {
        // update their z position
        m_pStars[i].m_curPos[2] = m_pStars[i].m_start.z +
            m_pStars[i].speed.z * (m_fTimeElapsed - m_pStars[i].timeOffset);

        // ok they're out of view, respawn a new star
        if (m_pStars[i].m_curPos[2] >= EYE_Z)
        {
            m_pStars[i].m_start.x = GetRandom(-5.0, 5.0);
            m_pStars[i].m_start.y = GetRandom(-5.0, 5.0);
            m_pStars[i].m_start.z = -10.0f;
            m_pStars[i].timeOffset = m_fTimeElapsed;
        }
        else
        {
            m_pStars[i].m_curPos[0] = m_pStars[i].m_start.x;
            m_pStars[i].m_curPos[1] = m_pStars[i].m_start.y;
        }
    }
}
Similarly, the vDraw function doesn't do much besides rendering the stars and calling CObject::Draw.
/* Method to actually draw on the control */
void CStarWarsController::vDraw()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    if (!m_bStarted)
        return;

    glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);
    glEnable(GL_MULTISAMPLE_ARB);
    glDisable(GL_BLEND);
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, EYE_Z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);

    // now draw stars - as points
    if (m_bPointStars)
    {
        glBegin(GL_POINTS);
        for (int i = 0; i < m_iNumStars; ++i)
        {
            glColor3fv(m_pStars[i].m_fColor);
            glVertex3fv(m_pStars[i].m_curPos);
        }
        glEnd();
    }
    else // draw stars as quads
    {
        glBegin(GL_QUADS);
        for (int i = 0; i < m_iNumStars; ++i)
        {
#define LENGTH 0.02f
            glColor3fv(m_pStars[i].m_fColor);
            glVertex3f(m_pStars[i].m_curPos[0]-LENGTH, m_pStars[i].m_curPos[1]-LENGTH, m_pStars[i].m_curPos[2]);
            glVertex3f(m_pStars[i].m_curPos[0]-LENGTH, m_pStars[i].m_curPos[1]+LENGTH, m_pStars[i].m_curPos[2]);
            glVertex3f(m_pStars[i].m_curPos[0]+LENGTH, m_pStars[i].m_curPos[1]+LENGTH, m_pStars[i].m_curPos[2]);
            glVertex3f(m_pStars[i].m_curPos[0]+LENGTH, m_pStars[i].m_curPos[1]-LENGTH, m_pStars[i].m_curPos[2]);
        }
        glEnd();
    }

    // now draw text
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    float distance, alpha;
    for (int i = 0; i < NUMOBJECTS; ++i)
    {
        if (!pObjects[i])
            continue;

        // determine distance from us
        distance = sqrtf(pObjects[i]->m_curPos.x*pObjects[i]->m_curPos.x +
                         pObjects[i]->m_curPos.y*pObjects[i]->m_curPos.y +
                         pObjects[i]->m_curPos.z*pObjects[i]->m_curPos.z);

        // approximate the alpha value based on the distance away from us
        alpha = 3.75f - sqrtf(distance);
        if (alpha > 1.0f)
            alpha = 1.0f;
        else if (alpha < 0.0)
            alpha = 0.0;

        glPushMatrix();
        // move everything into position
        glScalef(0.50f, 0.50f, 0.50f);
        glTranslatef(pObjects[i]->m_curPos.x, pObjects[i]->m_curPos.y, pObjects[i]->m_curPos.z);
        glRotatef(pObjects[i]->m_fAngle, 1.0, 0.0, 0.0);
        glColor4f(pObjects[i]->m_fColor[0], pObjects[i]->m_fColor[1],
                  pObjects[i]->m_fColor[2], alpha);
        pObjects[i]->Draw();
        glPopMatrix();
    }

    // ok now we check the last alpha value, if it's <= 0.0,
    // everything has faded away, and we restart
    if (alpha <= 0.0)
        Start();
}
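The distance-based fade in vDraw computes alpha = 3.75 - sqrt(distance) and clamps it to [0, 1]. Factored out as a tiny function (same formula, extracted here only for clarity):

```cpp
#include <algorithm>
#include <cmath>

// Distance-based fade used when drawing the text objects:
// fully opaque up close, fading to invisible as distance grows.
float fadeAlpha(float distance)
{
    const float alpha = 3.75f - std::sqrt(distance);
    return std::min(1.0f, std::max(0.0f, alpha));
}
```

Objects closer than about 7.56 units (where sqrt(distance) = 2.75) are fully opaque, and anything beyond distance ~14.06 (where sqrt(distance) = 3.75) is invisible, which is what triggers the restart check at the end of vDraw.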
The last piece of interesting code is the ValidatePixelFormat function. Due to limitations in the SetPixelFormat function, in order to implement this function we have to go through some hoops. First I create a dummy window and an OpenGL context for it. Then I can call ValidatePixelFormat. Inside this function it can use OpenGL functions to query a device's capabilities. Then once that function returns, I destroy the dummy window and rendering context and create the real window and context. Painful, but it works.
Scrolling text looks badly aliased. I wanted to solve this problem, so I figured out how to turn on FSAA if a video card supports it. Here's a look at the code:
// Overridden to enable multisampling (FSAA)
int CStarWarsController::ValidatePixelFormat(HDC hdc, int suggestedFormat)
{
    HDC hDC = wglGetCurrentDC();
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    if (!wglChoosePixelFormatARB)
        return suggestedFormat;
    if (!GLExtensionExists("WGL_ARB_multisample"))
        return suggestedFormat;

    int pixelFormat;
    BOOL bStatus;
    UINT numFormats;
    float fAttributes[] = {0, 0};
    int iAttributes[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
        WGL_COLOR_BITS_ARB, 24,
        WGL_ALPHA_BITS_ARB, 8,
        WGL_DEPTH_BITS_ARB, 16,
        WGL_STENCIL_BITS_ARB, 0,
        WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
        WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
        WGL_SAMPLES_ARB, 4,
        0, 0 };

    bStatus = wglChoosePixelFormatARB(hDC, iAttributes, fAttributes, 1,
                                      &pixelFormat, &numFormats);
    if ((bStatus == GL_TRUE) && (numFormats == 1))
    {
        m_bMultiSample = true;
        return pixelFormat;
    }

    // ok that failed, try using 2 samples now instead of 4
    iAttributes[19] = 2;
    bStatus = wglChoosePixelFormatARB(hDC, iAttributes, fAttributes, 1,
                                      &pixelFormat, &numFormats);
    if ((bStatus == GL_TRUE) && (numFormats == 1))
    {
        m_bMultiSample = true;
        return pixelFormat;
    }

    // failed, return the suggested format and continue
    return suggestedFormat;
}
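One fragile spot in the code above is iAttributes[19] = 2: the hardcoded index silently breaks if the attribute list is ever reordered. A safer sketch searches the key/value pairs instead (the helper is mine, not from the article; the constants below are placeholders, not real WGL values):

```cpp
// Patch the value that follows 'key' in a 0-terminated key/value
// attribute list of the kind passed to wglChoosePixelFormatARB.
// Returns true if the key was found and its value updated.
bool setAttribute(int* attribs, int key, int value)
{
    for (int i = 0; attribs[i] != 0; i += 2)
    {
        if (attribs[i] == key)
        {
            attribs[i + 1] = value;
            return true;
        }
    }
    return false;
}
```

With this helper, the retry becomes setAttribute(iAttributes, WGL_SAMPLES_ARB, 2) and keeps working regardless of where the pair sits in the list.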
Other uses
This example shows you how to create a standalone OpenGL application. However, you can easily use my control as a child window. I wrote a Euchre game, and I embedded the OpenGL control + StarWars controller into my About Box. It makes a nice effect.
=======================
=======================
=======================
출처: https://www.opengl.org/discussion_boards/showthread.php/155906-wglChoosePixelFormatARB
wglChoosePixelFormatARB
Thanks for the help guys, I’ve made a bit of progress but am now stumped on something else along the same lines. I managed to get something going from a combination of things I read on NeHe, Humus etc. However while I now get no errors it’s not performing as expected.
What I’m actually after is an output of unclamped float values (32-bits per component if poss) from the fragment shader and my code looks something like this at the moment:
First I set a PIXELFORMATDESCRIPTOR
static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,
    128,                        // color bits (32 per component requested)
    32, 0, 32, 0, 32, 0, 32, 0, // RGBA bits and shifts
    0, 0, 0, 0, 0,              // accumulation buffer
    16,                         // depth bits
    0, 0,
    PFD_MAIN_PLANE,
    0, 0, 0, 0
};
and set a basic pixel format with no WGL stuff (apparently you have to do this first??):
void SetupPixelFormat( HDC hDC )
{
    int nPixelFormat;

    nPixelFormat = ChoosePixelFormat( hDC, &pfd );
    SetPixelFormat( hDC, nPixelFormat, &pfd );
}
Then I sort out the context:
hRC = wglCreateContext( hDC );   // Create the rendering context
wglMakeCurrent( hDC, hRC );      // Make the rendering context current
Next I setup the WGL stuff (just before the texture / shaders setup stuff, I’ve also tried placing it afterwards):
bool SetupWGLPixelFormat( HDC hDC )
{
    int nPixelFormat;

    wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress( "wglChoosePixelFormatARB" );
    if ( !wglChoosePixelFormatARB )
    {
        MessageBox( NULL, "wglChoosePixelFormatARB not supported",
                    "Error! (SetupWGLPixelFormat)", MB_OK );
        return false;
    }

    BOOL bValidPixFormat;
    UINT nMaxFormats = 1;
    UINT nNumFormats;
    float pfAttribFList[] = { 0, 0 };
    int piAttribIList[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
        WGL_COLOR_BITS_ARB, 128,
        WGL_RED_BITS_ARB, 32,
        WGL_GREEN_BITS_ARB, 32,
        WGL_BLUE_BITS_ARB, 32,
        WGL_ALPHA_BITS_ARB, 32,
        WGL_DEPTH_BITS_ARB, 16,
        WGL_STENCIL_BITS_ARB, 0,
        WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
        WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
        0, 0 };

    bValidPixFormat = wglChoosePixelFormatARB( hDC, piAttribIList, pfAttribFList,
                                               nMaxFormats, &nPixelFormat, &nNumFormats );
    if ( !bValidPixFormat )
    {
        MessageBox( NULL, "Invalid Pixel Format",
                    "Error! (SetupWGLPixelFormat)", MB_OK );
        return false;
    }

    SetPixelFormat( hDC, nPixelFormat, &pfd );
    return true;
}
Then I attempt to turn off clamping:
glClampColorARB( GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE );
glClampColorARB( GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE );
glClampColorARB( GL_CLAMP_READ_COLOR_ARB, GL_FALSE );
Before finally dumping values out with:
glReadPixels( 0, 0, WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, fPixels );
glReadPixels( 0, 0, WINDOWWIDTH, WINDOWHEIGHT, GL_DEPTH_COMPONENT, GL_FLOAT, fDepth );
The problem is the values are still clamped… I'm told this is possible in GL, so I must be missing something.
Hope these questions aren’t too daft but I’m jumping in at the deep end a bit with OpenGL having only used DirectX previously. So there’s a bit of a combination of internet trawling and guesswork in the code.
Thanks again.
=======================
=======================
=======================
Source: http://blog.naver.com/newsdu/80191220898
Yesterday I spent some time building a callback class, but when I learned that member function pointers are not always 4 bytes, I set that aside for now.
So instead I'm going to extend the engine structure a bit, add an OpenGL renderer to the framework, and use it to draw into a window.
Below is the class diagram (blueprint) for this iteration of the engine.
A lot has been added this time. The first thing that stands out is that the window class has been subclassed into the framework class.
At the center there is a singleton; I use singletons often, perhaps because I'm used to them. For those who don't know what a singleton is: briefly, it is a kind of global variable that can be explicitly set only once; after that it can only be used, never deleted. You don't have to use singletons, but I like them because they are accessible from anywhere and can be managed through a singleton interface. For more detail, a web search will help.
In any case, the key topic this time is rendering with OpenGL. Under the singleton there are RendererGL2D and System. The System class holds the functions that form the core of the engine, and RendererGL2D is the class that wraps the OpenGL functionality. From this post on, I'll introduce the GLEW (OpenGL Extension Wrangler) library and focus on OpenGL's extension features. GLEW lives in a folder called external in the project. This library makes it easier to use OpenGL's extensions. My machine supports OpenGL up to version 3.3, so the code explicitly creates a 3.3 GL context. (I bought this computer in 2010, so it's not the latest.)
The folder and module (lib) settings needed to use the library are similar to the previous post, so refer to that. I decided it's better to skip such side explanations and focus on the essentials, which is why I'm moving on. If the attached project's settings still make no sense to you, leave a question in the comments and I'll answer. For reference, glew is open source; a quick search turns up the source right away. I downloaded the source and built it myself, as a debug static library rather than a dynamic one; search the included files for glew32sd.lib and you'll find it. All posts so far use debug builds: for now debugging matters more, and performance-wise it's not yet time to optimize. When needed, I'll start making release builds as well.
-----------------------------------------------------------------------------------------------------------------
Here is how the engine is used. Look at the following code.
#include "ns_System.h"

class Test_Engine
    : public ns_Framework
{
public:
    void Init()
    {
        CreateWin();
        ns_Framework::Init();
    }

private:
    void Frame()
    {
        g_System.Render();
    }

    void Release()
    {
        ns_Framework::Release();
    }
};

int APIENTRY WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    Test_Engine test;
    test.Init();
    test.Run();

    return 0;
}
{
    ShowWin(true);

    MSG msg = { 0 };
    while( msg.message != WM_QUIT )
    {
        if( ::PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            ::TranslateMessage( &msg );
            ::DispatchMessage( &msg );
        }
        else
        {
            Frame();
        }
    }

    Release();
    m_hWnd = nullptr;
}
{
    if(m_hWnd || m_hDC)
        return;

    m_hWnd = g_System.GetWindow()->GetHwnd();
    m_hDC = ::GetDC(m_hWnd);
}

void ns_RendererGL2D::DeleteDeviceContext()
{
    if(m_hWnd == nullptr || m_hDC == nullptr)
        return;

    ::ReleaseDC(m_hWnd, m_hDC);

    m_hWnd = nullptr;
    m_hDC = nullptr;
}
{
    if(m_hDC == nullptr)
        return;

    int pixelFormat = 0;

    // The old way:
    // describe the desired pixel format
    PIXELFORMATDESCRIPTOR pfd = { 0, };
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW |
                  PFD_SUPPORT_OPENGL |
                  PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 16;

    // Choose the pixel format closest to the one requested.
    pixelFormat = ::ChoosePixelFormat(m_hDC, &pfd);

    // Set the pixel format on the device context.
    ::SetPixelFormat(m_hDC, pixelFormat, &pfd);

    // If the extended pixel format is supported...
    if(WGLEW_ARB_pixel_format == GL_TRUE)
    {
        // The extension way
        const int pixelAttrList[] =
        {
            WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
            WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
            WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
            WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
            WGL_COLOR_BITS_ARB, 32,
            WGL_DEPTH_BITS_ARB, 16,
            0
        };

        // Choose the pixel format closest to the one requested.
        uint numFormats = 0;
        wglChoosePixelFormatARB(m_hDC, pixelAttrList, nullptr, 1, &pixelFormat, &numFormats);

        // Set the pixel format on the device context.
        ::SetPixelFormat(m_hDC, pixelFormat, nullptr);
    }
}
void ns_RendererGL2D::CreateRenderContext()
{
    if(m_hRC != nullptr)
        return;

    SetPixelFormat();

    HGLRC tmpRC = ::wglCreateContext(m_hDC);
    ::wglMakeCurrent(m_hDC, tmpRC);

    // Initialize GLEW
    GLenum err = glewInit();
    if(err != GLEW_OK)
        return;

    // If the context-creation extension is supported...
    if(WGLEW_ARB_create_context)
    {
        const int contextAttrList[] =
        {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 3,
            WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
            0
        };

        m_hRC = wglCreateContextAttribsARB(m_hDC, nullptr, contextAttrList);
    }

    if(m_hRC == nullptr)
    {
        // Fall back to the legacy context
        m_hRC = tmpRC;
    }
    else
    {
        // Delete the temporary context
        ::wglMakeCurrent(m_hDC, nullptr);
        ::wglDeleteContext(tmpRC);
        ::wglMakeCurrent(m_hDC, m_hRC);
    }
}

void ns_RendererGL2D::DeleteRenderContext()
{
    if(m_hRC == nullptr)
        return;

    ::wglMakeCurrent(m_hDC, nullptr);
    ::wglDeleteContext(m_hRC);

    m_hRC = nullptr;
}
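The snippets in this file check the context version with glGetIntegerv(GL_MAJOR_VERSION) / glGetIntegerv(GL_MINOR_VERSION), but those queries only exist on GL 3.0+; on the legacy fallback context you would parse glGetString(GL_VERSION) instead. A sketch of that parsing, under the assumption that the string starts with "major.minor" as the GL spec requires (`ParseGLVersion` is a made-up helper name):

```cpp
#include <string>
#include <utility>
#include <cstdlib>

// Parse "major.minor" from the start of a GL_VERSION string such as
// "3.3.0 NVIDIA 456.71". Returns {-1, -1} if the string doesn't start
// with a version number.
std::pair<int, int> ParseGLVersion(const std::string& s)
{
    std::size_t dot = s.find('.');
    if (dot == std::string::npos || dot == 0)
        return { -1, -1 };
    int major = std::atoi(s.substr(0, dot).c_str());     // digits before the first '.'
    int minor = std::atoi(s.substr(dot + 1).c_str());    // digits right after it
    return { major, minor };
}
```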
[Source] 07. OpenGL - Rendering with OpenGL | Author: 뎡형
=======================
=======================
=======================
출처: https://stackoverflow.com/questions/9663262/resetting-opengl-extension-pointers-in-windows-api
Some time ago I started working on an OpenGL (3.3) renderer for an MFC program. Everything was fine till I decided to run some OpenGL debugging tools to see if I have some silent errors. I used gDEBugger. Running the gDEBugger analyzing tool I immediately started getting errors of the following type:

Debug String: Detected error: The debugged process asked for an extension function pointer (glGenBuffers) from one render context, but called this function pointer in another render context (context #2)

In fact every GLEW method gets this error. I started looking for the problem in forums and also on MSDN, and I found some people mention that in a Windows environment some GLEW method pointers should be redefined. I also stumbled upon a tutorial which redefines pretty much every GLEW method using wglGetProcAddress, like this:

void CGLRenderer::InitAPI()
{
    // Program
    glCreateProgram = (PFNGLCREATEPROGRAMPROC)wglGetProcAddress("glCreateProgram");
    glDeleteProgram = (PFNGLDELETEPROGRAMPROC)wglGetProcAddress("glDeleteProgram");
    glUseProgram = (PFNGLUSEPROGRAMPROC)wglGetProcAddress("glUseProgram");
    ...
}

My OpenGL context set-up looks like this:

bool OpenGLMain::create30Context(HDC device_context){
    //this->hwnd=hwnd;
    hdc = device_context; //GetDC(hwnd);
    hdcGlobal = &hdc;
    PIXELFORMATDESCRIPTOR kPFD;
    memset(&kPFD, 0, sizeof(PIXELFORMATDESCRIPTOR));
    kPFD.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    kPFD.nVersion = 1;
    kPFD.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_GENERIC_ACCELERATED | PFD_DOUBLEBUFFER;
    kPFD.iPixelType = PFD_TYPE_RGBA;
    kPFD.cColorBits = 32;
    kPFD.cDepthBits = 32;
    kPFD.cStencilBits = 8;
    kPFD.iLayerType = PFD_MAIN_PLANE;
    int iPixelFormat = ChoosePixelFormat(hdc, &kPFD);
    if(iPixelFormat == 0){
        // ReleaseDC(window, gs_hWindowDC);
        return false;
    }
    BOOL bSuccess = SetPixelFormat(hdc, iPixelFormat, &kPFD);
    if(!bSuccess){
        // ReleaseDC(window, gs_hWindowDC);
        return false;
    }
    // Init OpenGL context: create an OpenGL 2.1 context and make it current
    HGLRC tempOpenGLContext = wglCreateContext(hdc);
    wglMakeCurrent(hdc, tempOpenGLContext);
    GLenum error = glewInit();
    if(error != GLEW_OK){
        return false;
    }
    // Context setup
    int attributes[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    if(wglewIsSupported("WGL_ARB_create_context") == 1){
        // Create an OpenGL 3.x context based on the supplied attributes
        hrc = wglCreateContextAttribsARB(hdc, NULL, attributes);
        // Remove the temp context
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(tempOpenGLContext);
        wglMakeCurrent(hdc, hrc);
    }else{
        // If no support for OpenGL 3.x is detected, roll back to 2.x
        hrc = tempOpenGLContext;
    }
    // Version check
    int glVersion[2] = {-1, -1};
    glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]);
    std::cout << "Using OpenGL " << glVersion[0] << "." << glVersion[1] << std::endl;
    OutputDebugString(L"Using OPENGL version:"+glVersion[0]);
    return true;
}

Now I am really confused at this point, because the program in fact runs fine inside Visual Studio without redefining all these GLEW methods. But it shows an empty screen (no geometry) if I run the executable directly. Also, in all the other examples and tutorials I have ever read it has never been mentioned that one has to reset pointers on GLEW API methods. So my question is whether anybody can point out the right way to integrate OpenGL 3.3 with the Windows API, because it seems that there are many ways of doing it.
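What gDEBugger is complaining about is that, per the WGL rules, extension function pointers are only guaranteed valid for contexts sharing the same pixel format; a robust fix is to keep a separate pointer table per context and re-query when the current context changes. A sketch of that caching idea with the loader and handles stubbed out (`ProcCache`, `ContextHandle`, and the loader callback are stand-ins for illustration, not real WGL API):

```cpp
#include <map>
#include <string>
#include <functional>
#include <utility>

using ContextHandle = int;          // stand-in for HGLRC
using ProcAddress   = const void*;  // stand-in for PROC

// Cache of extension function pointers, keyed per context: a pointer
// fetched in one context is never handed out for another one.
class ProcCache {
public:
    // loader is a stand-in for wglGetProcAddress, called with the
    // given context assumed current.
    explicit ProcCache(std::function<ProcAddress(ContextHandle, const std::string&)> loader)
        : load_(std::move(loader)) {}

    ProcAddress Get(ContextHandle ctx, const std::string& name) {
        auto& perCtx = cache_[ctx];            // separate table per context
        auto it = perCtx.find(name);
        if (it == perCtx.end())                // first request: query the loader
            it = perCtx.emplace(name, load_(ctx, name)).first;
        return it->second;
    }

private:
    std::function<ProcAddress(ContextHandle, const std::string&)> load_;
    std::map<ContextHandle, std::map<std::string, ProcAddress>> cache_;
};
```

In practice most drivers return identical pointers for contexts with the same pixel format, which is why the GLEW-only code "happens to work", but the caching shown here is what the spec actually licenses.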
=======================
=======================
=======================
I was also looking for a code snippet to create an EGL context on a desktop NVidia GPU.
I was aware that on Windows, NVidia exposes GL-ES through the extensions WGL_EXT_create_context_es_profile and WGL_EXT_create_context_es2_profile, and on Linux NVidia allows it through the GLX_EXT_create_context_es_profile and GLX_EXT_create_context_es2_profile extensions.
Of course, you also have the aforementioned OpenGL-ES drivers/emulators from PowerVR / Adreno / Mali / Angle, but I was looking for a raw implementation, i.e., my own homegrown libGLESv2.dll and libEGL.dll.
After some struggle, I've got something like below.
The basic thing is to get the required extensions, by hand or using GLEW, create a dummy context, and then create your GL-ES context. Error handling is omitted for clarity.
You also need to define and load all core and extension proc functions, and make them public for your libs.
/**
 * adapted from
 * https://www.opengl.org/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_%28C%2B%2B/Win%29
 */
bool CGLRenderer::CreateGLContext(CDC* pDC){
    PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, //Flags
        PFD_TYPE_RGBA, //The kind of framebuffer. RGBA or palette.
        32, //Colordepth of the framebuffer.
        0, 0, 0, 0, 0, 0,
        0,
        0,
        0,
        0, 0, 0, 0,
        24, //Number of bits for the depthbuffer
        8, //Number of bits for the stencilbuffer
        0, //Number of Aux buffers in the framebuffer.
        PFD_MAIN_PLANE,
        0,
        0, 0, 0
    };
    int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
    if (nPixelFormat == 0) return false;
    BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
    if (!bResult) return false;
    HGLRC tempContext = wglCreateContext(pDC->m_hDC);
    wglMakeCurrent(pDC->m_hDC, tempContext);
    // Using GLEW. Init it after ctx creation
    GLenum err = glewInit();
    if (GLEW_OK != err){
        AfxMessageBox(_T("GLEW is not initialized!"));
    }
    // create an OPENGL ES 2 profile. It may return a compatible but higher GL-ES, such as 3.0 or 3.1
    int attribList[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 2,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_ES2_PROFILE_BIT_EXT,
        0,
    };
    if(wglewIsSupported("WGL_ARB_create_context") == 1){
        m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribList);
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(tempContext);
        wglMakeCurrent(pDC->m_hDC, m_hrc);
    }else{ //Failed to create a GL-ES context.
        m_hrc = NULL;
    }
    //Debug info - print out GL version
    const GLubyte *glVersionString = glGetString(GL_VERSION);
    const char *vendorChar = (char*)glGetString(GL_VENDOR);
    const char *rendererChar = (char*)glGetString(GL_RENDERER);
    int glVersion[2] = {-1,-1};
    glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]);
    cout << "GL version string: " << glVersionString << endl;
    cout << "OpenGL version: " << glVersion[0] << "." << glVersion[1] << endl;
    cout << "GPU: " << vendorChar << " - " << rendererChar << endl;
    if (!m_hrc) return false;
    return true;
} // end of CreateGLContext
Source: https://stackoverflow.com/questions/31971373/how-to-create-egl-context-on-nvidia-desktop
=======================
=======================
=======================
RTT = render_to_texture. This page shows a few examples on how to set up an RTT and how to clean up.
The extension specification is at http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt
Note that this extension became core in GL 3.0, and at the same time the ARB version of the extension was released: http://www.opengl.org/registry/specs/ARB/framebuffer_object.txt
GL_ARB_framebuffer_object brings together GL_EXT_framebuffer_object, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, and GL_EXT_packed_depth_stencil, which are all folded into the core of GL 3.0.
Contents
1. Quick example, render_to_texture (2D)
2. Quick example, render_to_texture (2D), mipmaps
3. Quick example, render_to_texture (Cubemap)
4. Quick example, render_to_texture (2D Depth texture ONLY)
5. Quick example, render_to_texture (2D), mipmaps, depth_stencil
6. Quick example, render_to_buffer (p-buffer replacement)
7. Limitations of GL_EXT_framebuffer_object
8. 1 FBO or more
9. The main framebuffer
10. MSAA
11. Color texture, Depth texture
12. Depth only
13. Color only
14. Stencil
15. MRT
16. MRT and cubemaps
17. glReadPixels
18. Sampling and Rendering to the Same Texture
Quick example, render_to_texture (2D)
Let's assume we want to render to a texture and we also want depth testing to take place. We need to create a color texture and we need to attach it to the FBO. We need a depth buffer RenderBuffer and attach it to the FBO. Once you are done rendering to this texture, you can use it like any other texture. In this case, we don't care what happens to the depth values. If you want to access the depth (for example, from within your shader), you need to make a depth texture instead of a depth buffer RenderBuffer. Please look at the other examples. Also, keep in mind we are using the GL_RGBA8 format here which is a format supported by all GPUs.
//--------------------
//RGBA8 2D texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout<<"good";
    break;
default:
    HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_2D
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClearDepth(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//**************************
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//---------------------------
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer; as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
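The status switch above collapses every failure into HANDLE_THE_ERROR; when debugging FBO setup it helps to report which incompleteness was actually hit. A sketch of a status-to-string helper; the token values are hard-coded from the EXT_framebuffer_object spec so the snippet stands alone (in real code you would use the glext.h / GLEW names):

```cpp
#include <string>

// Status tokens from the EXT_framebuffer_object spec, hard-coded so the
// helper is self-contained (normally provided by glext.h / GLEW).
enum : unsigned {
    FRAMEBUFFER_COMPLETE_EXT                      = 0x8CD5,
    FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT         = 0x8CD6,
    FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT_EXT = 0x8CD7,
    FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT         = 0x8CD9,
    FRAMEBUFFER_INCOMPLETE_FORMATS_EXT            = 0x8CDA,
    FRAMEBUFFER_UNSUPPORTED_EXT                   = 0x8CDD,
};

// Translate a glCheckFramebufferStatusEXT result into readable text.
std::string FramebufferStatusString(unsigned status)
{
    switch (status) {
    case FRAMEBUFFER_COMPLETE_EXT:                      return "complete";
    case FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT:         return "incomplete attachment";
    case FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT_EXT: return "no attachments";
    case FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT:         return "attachment sizes differ";
    case FRAMEBUFFER_INCOMPLETE_FORMATS_EXT:            return "attachment formats mismatch";
    case FRAMEBUFFER_UNSUPPORTED_EXT:                   return "unsupported combination";
    default:                                            return "unknown status";
    }
}
```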
Quick example, render_to_texture (2D), mipmaps
This example is nearly identical to the above sample code with one difference : glGenerateMipmapEXT is used to generate the mipmaps. You can use it to generate mipmaps whenever you want. Generally, you render to the texture, then unbind the FBO with glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0), then bind the texture with glBindTexture, then call glGenerateMipmapEXT. ALSO, notice that glGenerateMipmapEXT doesn't have an "s".
//RGBA8 2D texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//NULL means reserve texture memory, but texels are undefined
//**** Tell OpenGL to reserve level 0
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//You must reserve memory for the other mipmap levels as well, either by making a series of calls to
//glTexImage2D or by using glGenerateMipmapEXT(GL_TEXTURE_2D).
//Here, we'll use glGenerateMipmapEXT(GL_TEXTURE_2D)
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout<<"good";
    break;
default:
    HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_2D
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//----------------
//**** Now that we rendered to level 0 of the texture, we must generate the mipmaps.
//This should be quick since it is done on the GPU.
glBindTexture(GL_TEXTURE_2D, color_tex);
glGenerateMipmapEXT(GL_TEXTURE_2D);
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer; as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
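If you reserve the mipmap chain with a series of glTexImage2D calls instead of glGenerateMipmapEXT, you need to know how many levels the chain has: floor(log2(max(w, h))) + 1. A small helper of my own for that count:

```cpp
#include <algorithm>

// Number of mipmap levels needed to go from max(w, h) down to 1x1.
int MipLevelCount(int w, int h)
{
    int levels = 1;
    int size = std::max(w, h);
    while (size > 1) {
        size /= 2;   // each successive level halves the larger dimension
        ++levels;
    }
    return levels;
}
```

For the 256x256 texture in the example this gives 9 levels (256, 128, 64, 32, 16, 8, 4, 2, 1).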
Quick example, render_to_texture (Cubemap)
In case you want to have dynamic reflections on a shiny object, you would want to render to a cubemap.
The concept behind rendering to a cubemap is the following. Bind a cubemap face, then render to it. Bind another cubemap face, then render to it. There are 6 faces in total. You may think that rendering 6 times your scene will drag down performance and you are right. Don't update the cubemap often. You can update every 2 frames. Make your cubemap small, for example 256x256.
//RGBA8 Cubemap texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_CUBE_MAP, color_tex);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+0, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+1, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+2, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+3, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+4, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+5, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach one of the faces of the Cubemap texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_CUBE_MAP_POSITIVE_X, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout<<"good";
    break;
default:
    HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_CUBE_MAP_POSITIVE_X
//In order to render to the other faces, do this:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, color_tex, 0);
//... now render
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, color_tex, 0);
//... now render
//... and so on
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer; as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
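The six face targets are consecutive enum values, so the bind-face/render sequence above is usually written as a loop over GL_TEXTURE_CUBE_MAP_POSITIVE_X + i. A sketch of that target arithmetic (the base value is copied from glext.h so the snippet stands alone):

```cpp
#include <vector>

const unsigned TEXTURE_CUBE_MAP_POSITIVE_X = 0x8515; // GL_TEXTURE_CUBE_MAP_POSITIVE_X in glext.h

// The render-to-cubemap loop attaches face i as
// GL_TEXTURE_CUBE_MAP_POSITIVE_X + i for i in [0, 6):
// +X, -X, +Y, -Y, +Z, -Z in that order.
std::vector<unsigned> CubemapFaceTargets()
{
    std::vector<unsigned> faces;
    for (unsigned i = 0; i < 6; ++i)
        faces.push_back(TEXTURE_CUBE_MAP_POSITIVE_X + i);
    return faces;
}
```

In the real loop each iteration would call glFramebufferTexture2DEXT with the face target, then render the scene with the camera oriented along that face's axis.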
Quick example, render_to_texture (2D Depth texture ONLY)
In this example, notice glDrawBuffer(GL_NONE) and glReadBuffer(GL_NONE). We don't need a color output so that is why we set them to GL_NONE. The important call is glDrawBuffer(GL_NONE). We do not want to render to a color buffer.
//32 bit depth texture, 256x256
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
//NULL means reserve texture memory, but texels are undefined
//You can also try GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 for the internal format.
//If GL_DEPTH24_STENCIL8_EXT is available, go ahead and use it (GL_EXT_packed_depth_stencil)
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 256, 256, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depth_tex, 0);
//-------------------------
//Does the GPU support current FBO configuration?
//Before checking the configuration, you should call these 2 according to the spec.
//At the very least, you need to call glDrawBuffer(GL_NONE)
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout<<"good";
    break;
default:
    HANDLE_THE_ERROR;
}
//-------------------------
//and to render to it, don't forget to call
//At the very least, you need to call glDrawBuffer(GL_NONE)
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//-------------------------
//If you want to render to the back buffer again, you must bind 0 AND THEN CALL glDrawBuffer(GL_BACK),
//else GL_INVALID_OPERATION will be raised
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);
And in the end, cleanup
//Delete resources
glDeleteTextures(1, &depth_tex);
//Bind 0, which means render to back buffer; as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
Quick example, render_to_texture (2D), mipmaps, depth_stencil
If GL_EXT_packed_depth_stencil is present, use it. Also called a D24S8 format. All common GPUs support this format.
http://www.opengl.org/registry/specs/EXT/packed_depth_stencil.txt
//-----------------
//RGBA8 2D texture, D24S8 depth/stencil texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
//You must reserve memory for other mipmaps
//levels as well either by making a series of calls to
//glTexImage2D or use glGenerateMipmapEXT(GL_TEXTURE_2D).
//Here, we'll use : glGenerateMipmapEXT(GL_TEXTURE_2D);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);
//-------------------------
glGenRenderbuffersEXT(1, &depth_stencil_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_stencil_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_stencil_rb);
//Also attach as a stencil
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_stencil_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout<<"good";
    break;
default:
    HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to GL_TEXTURE_2D
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
//It's always a good idea to clear the stencil at the same time as the depth when the format is D24S8.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
And in the end, cleanup
//-------------------------
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &depth_stencil_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
//----------------------
Quick example, render_to_buffer (p-buffer replacement)
Around 2000, the p-buffer extension was released, which was used to do offscreen rendering. These days, it is best to use GL_EXT_framebuffer_object: it is much easier to use than p-buffers and, best of all, it is cross-platform. This example creates a RenderBuffer using 2 calls to glRenderbufferStorageEXT. The first call creates a color buffer and the second creates a depth buffer.
//------------------------------
//RGBA8 RenderBuffer, 24 bit depth RenderBuffer, 256x256
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Create and attach a color buffer
glGenRenderbuffersEXT(1, &color_rb);
//We must bind color_rb before we call glRenderbufferStorageEXT
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
//The storage format is RGBA8
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
//Attach color buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_RENDERBUFFER_EXT, color_rb);
//-------------------------
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
//-------------------------
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    cout<<"good";
    break;
default:
    HANDLE_THE_ERROR;
}
//-------------------------
//and now you can render to the FBO (also called RenderBuffer)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//-------------------------
glViewport(0, 0, 256, 256);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 256.0, 0.0, 256.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//-------------------------
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
//-------------------------
//**************************
//RenderATriangle, {0.0, 0.0}, {256.0, 0.0}, {256.0, 256.0}
//Read http://www.opengl.org/wiki/VBO_-_just_examples
RenderATriangle();
//-------------------------
GLubyte pixels[4*4*4];
glReadPixels(0, 0, 4, 4, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
//pixels 0, 1, 2 should be white
//pixel 4 should be black
//----------------
//Bind 0, which means render to back buffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//------------------------------
And in the end, cleanup
//Delete resources
glDeleteRenderbuffersEXT(1, &color_rb);
glDeleteRenderbuffersEXT(1, &depth_rb);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
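The glReadPixels sanity checks in these examples ("pixels 0, 1, 2 should be white, pixel 4 should be black") index into a tightly packed 4-byte-per-pixel buffer, with row 0 being the bottom row of the readback. The offset arithmetic, as a helper of my own (assuming no row padding, which holds here since 4-byte pixels keep rows aligned):

```cpp
#include <cstddef>

// Byte offset of pixel (x, y) in a tightly packed 4-byte-per-pixel
// (e.g. GL_BGRA / GL_UNSIGNED_BYTE) readback buffer of the given width.
// glReadPixels fills rows bottom-up, so y = 0 is the lowest row.
std::size_t PixelOffsetBGRA(int x, int y, int width)
{
    return (static_cast<std::size_t>(y) * width + x) * 4;
}
```

So "pixel 4" in a 4-pixel-wide readback is the first pixel of the second row, at byte offset 16.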
Limitations of GL_EXT_framebuffer_object
One of the limitations of GL_EXT_framebuffer_object is that when you bind a color buffer and then you bind a depth buffer, both must have the same width and height or else the state of the FBO is considered invalid (incomplete). This means if you have 1 FBO that is 64x64, another which is 512x64, another that is 1024x1024, for each of those you have to allocate a separate depth buffer (if you need depth testing of course). This obviously wastes memory.
In GL 3.0, FBO became core and that limitation was removed.
You can create 1 depth buffer that is 1024x1024 and bind it to all 3 FBOs. Notice that the depth buffer is large enough for even the smaller textures like 64x64.
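With that restriction removed, a single shared depth buffer only has to cover the largest FBO it serves. A trivial sketch of my own for picking that size:

```cpp
#include <vector>
#include <algorithm>

struct Size { int w, h; };

// Smallest single depth buffer that can serve every FBO in the list:
// it must cover the largest width and the largest height used.
Size SharedDepthBufferSize(const std::vector<Size>& fbos)
{
    Size s{0, 0};
    for (const Size& f : fbos) {
        s.w = std::max(s.w, f.w);
        s.h = std::max(s.h, f.h);
    }
    return s;
}
```

For the 64x64, 512x64, and 1024x1024 FBOs mentioned above this picks 1024x1024, matching the text.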
1 FBO or more
Is it better to make 1 FBO and bind your texture to it each time you need to render to the texture?
An FBO itself doesn't use much memory. It is a state vector object. In terms of performance, each time you bind, the driver needs to validate the state which costs CPU time. Logically, it would be better to have 1 FBO per Render_To_Texture (RTT).
However, it has been found that you get a speed boost if your textures are the same size and you use 1 FBO for them.
If you have 10 textures that are 64x64 and 10 textures that are 512x64, make 2 FBOs. One FBO for each group.
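The "one FBO per group of same-sized textures" advice above can be sketched as grouping textures by size, with each distinct size getting its own FBO (this is my own illustration, not wiki code):

```cpp
#include <map>
#include <utility>
#include <vector>

// Group texture sizes so each distinct size maps to one FBO.
// Returns size -> number of textures sharing that FBO.
std::map<std::pair<int,int>, int>
CountFBOGroups(const std::vector<std::pair<int,int>>& textureSizes)
{
    std::map<std::pair<int,int>, int> groups;
    for (const auto& sz : textureSizes)
        ++groups[sz];       // same size -> same group -> same FBO
    return groups;
}
```

For the example in the text (10 textures at 64x64 and 10 at 512x64) this yields exactly 2 groups, hence 2 FBOs.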
The main framebuffer
Can you bind the main framebuffer's depth buffer as a depth buffer for your FBO? No. You must create a depth texture or a depth Render Buffer.
Does GL 3.0 allow using the main depth buffer? No.
Can you do MRT (multiple render targets) and have the main color framebuffer as one of the targets? No, you can only target a texture or a Render Buffer. GL 3.0 doesn't support it either.
MSAA
Is multisampled Render_To_Texture (RTT) supported? Not directly. You need GL_EXT_framebuffer_multisample, and you would have to copy the contents of the AA FBO to a standard RTT.
Note that GL_EXT_framebuffer_multisample also became core in GL 3.0
See also http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
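When creating the multisampled renderbuffer via GL_EXT_framebuffer_multisample, the requested sample count must not exceed the implementation limit you would query as GL_MAX_SAMPLES_EXT. A sketch of clamping the request (the helper is my own, not part of the extension):

```cpp
#include <algorithm>

// Clamp a requested MSAA sample count to what the GPU reports
// (the value queried via GL_MAX_SAMPLES_EXT). 0 or 1 means "no MSAA".
int ClampSampleCount(int requested, int maxSamples)
{
    if (requested <= 1 || maxSamples <= 1)
        return 0;                          // no multisampling
    return std::min(requested, maxSamples);
}
```

The clamped value would then be passed as the samples argument of glRenderbufferStorageMultisampleEXT before blitting the AA FBO into the plain RTT.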
Color texture, Depth texture
In this example, we are attaching a color texture and also a depth texture and we'll render to both of them.
//--------------------------------
//RGBA8 2D texture, 24 bit depth texture, 256x256
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
//NULL means reserve texture memory, but texels are undefined
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 256, 256, 0,
GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
//-------------------------
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//Attach 2D texture to this FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0/*mipmap level*/);
//-------------------------
//Attach depth texture to FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_TEXTURE_2D, depth_tex, 0/*mipmap level*/);
//-------------------------
//Does the GPU support current FBO configuration?
GLenum status;
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch(status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
cout<<"good";
break; //without this break, execution falls through to the error case
default:
HANDLE_THE_ERROR;
break;
}
//--------------------------------
And in the end, cleanup
//-----------------------
//Delete resources
glDeleteTextures(1, &color_tex);
glDeleteTextures(1, &depth_tex);
//Bind 0, which means render to back buffer, as a result, fb is unbound
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
//-----------------------
Depth only
This is similar to the case above (Color texture, Depth texture), except that since there is no color buffer, you should call glDrawBuffer(GL_NONE) before or after calling glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb), and then render. When you are done, call glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) to render to the main framebuffer again. This is important: call glDrawBuffer(GL_BACK) only after that. If you call it before glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0), a GL error will be raised.
As for your fragment shader, you can still write to gl_FragColor (or whatever your color output is). The GPU automatically generates the depth value and writes it to the depth buffer; the color value is simply discarded by the GPU since there is no color attachment. Example:
//------------------------------
//[FRAGMENT SHADER]
#version 110
void main()
{
gl_FragColor = vec4(1.0);
}
//------------------------------
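The ordering described above can be sketched as follows; `fb` is the FBO with only `depth_tex` attached (the glReadBuffer call is an extra precaution some drivers want, not something the text above requires):

```cpp
//Depth-only pass
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glDrawBuffer(GL_NONE);   //no color buffer to write to
glReadBuffer(GL_NONE);   //optional: some drivers also check the read buffer
//...render the depth pass...

//Back to the main framebuffer; restore the draw buffer AFTER unbinding
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
```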
Color only
Simply disable depth testing (glDisable(GL_DEPTH_TEST)) and set the depth mask to FALSE (glDepthMask(GL_FALSE)) before you render to your RTT.
Stencil
NEVER EVER MAKE A STENCIL buffer. No GPU or driver supports an independent stencil buffer. If you need a stencil buffer, then you need to make a Depth=24, Stencil=8 buffer, also called D24S8. Please search for the example about GL_EXT_packed_depth_stencil on this page.
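A sketch of such a combined D24S8 renderbuffer, assuming GL_EXT_packed_depth_stencil is supported; `fb` is an FBO created as in the example above:

```cpp
//Combined 24-bit depth / 8-bit stencil renderbuffer, 256x256
GLuint depth_stencil_rb;
glGenRenderbuffersEXT(1, &depth_stencil_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_stencil_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, 256, 256);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
//The SAME renderbuffer is attached to both attachment points
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_stencil_rb);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_stencil_rb);
```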
MRT
Talk about MRT
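The original page leaves this section as a stub; a minimal MRT sketch in the same style as the examples above, assuming two color textures `tex0` and `tex1` created like `color_tex`:

```cpp
//Attach two color textures to the same FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex0, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                          GL_TEXTURE_2D, tex1, 0);

//Select both attachments as draw targets
GLenum bufs[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, bufs);
//In a GLSL 110 fragment shader, write gl_FragData[0] and gl_FragData[1]
```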
MRT and cubemaps
Talk about MRT and cubemaps
glReadPixels
Yes, you can bind an FBO, render to it, and then read the result back with a call to glReadPixels. It doesn't matter whether the attachment is a RenderBuffer or a texture; glReadPixels will still read it and return the results.
For RTT (Render To Texture), if you will be using glGetTexImage, it is recommended that you unbind the FBO, make the texture current with a call to glActiveTexture and glBindTexture and use glGetTexImage. It is recommended that you avoid glGetTexImage and use the glReadPixels method since certain drivers don't do anything when you call glGetTexImage.
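The glReadPixels path can be sketched like this, using the 256x256 RGBA FBO from the example above:

```cpp
//Read back the FBO's color attachment
unsigned char pixels[256 * 256 * 4];
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);   //which attachment to read
glReadPixels(0, 0, 256, 256, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```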
=======================
=======================
=======================
Other links:
http://grmanet.sogang.ac.kr/seminar/080314_fbuffer_object.pdf
http://www.dhpoware.com/demos/glMultiSampleAntiAliasing.html