FreeImage Library Setup Notes


by AlrepondTech 2020. 9. 15. 16:29

 

 

=================================

=================================

=================================

 

 

Source: http://m.blog.naver.com/sogangori/220701976219

 

FreeImage is a C++ image-processing library that can be used in the same way on both Linux and Windows.

NVIDIA's image-processing CUDA sample projects use the FreeImage library.

Having used it for a while, I consider it a good library.

On Windows, the 32-bit and 64-bit .lib files are different, so a 64-bit project must link against the 64-bit .lib.

http://freeimage.sourceforge.net/

 

Using it on Linux

Installation steps follow. Installing this way is the most convenient, since you no longer need to carry the library files around with each project.

1) Download it - FreeImage3170.zip

2) Unzip it and move into the FreeImage folder - cd FreeImage install folder

3) Compile it - make

............

strenc.c:(.text+0x12cd): warning: the use of `tmpnam' is dangerous, better use `mkstemp'
mkdir -p Dist
cp *.a Dist/
cp *.so Dist/
cp Source/FreeImage.h Dist/
make[1]: Leaving directory `/home/way/FreeImage'

 

4) Install it - sudo make install

way@way-All-Series:~/FreeImage$ sudo make install
make -f Makefile.gnu install 
make[1]: Entering directory `/home/way/FreeImage'
install -d //usr/include //usr/lib
install -m 644 -o root -g root Source/FreeImage.h //usr/include
install -m 644 -o root -g root libfreeimage.a //usr/lib
install -m 755 -o root -g root libfreeimage-3.17.0.so //usr/lib
ln -sf libfreeimage-3.17.0.so //usr/lib/libfreeimage.so.3
ln -sf libfreeimage.so.3 //usr/lib/libfreeimage.so    
make[1]: Leaving directory `/home/way/FreeImage'

 

5) Clean up the files that are no longer needed - make clean

Installation 
------------ 
Note: You will need to have root privileges in order to install the library in the /usr/lib directory. 
The installation process is as simple as this :  
1) Enter the FreeImage directory 
2) Build the distribution :  
make 
make install 
3) Clean all files produced during the build process 
make clean

 

Installation is complete. All you need to do now is add -lfreeimage to your build command.

You should be able to link programs with the -lfreeimage option after the library is compiled and installed.  
You can also statically link with libfreeimage.a.
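For example, assuming the sample program shown further below and the install paths from the log above (note that no sudo is needed to build):

Build (dynamic): g++ -o main main.cpp -lfreeimage
Build (static):  g++ -o main main.cpp /usr/lib/libfreeimage.a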

 

The archive also contains samples for Linux, but I could not use them because they require additional libraries.

So I tested image reading using FreeImage alone.

Example 1) Loading an image file

#include <stdio.h>
#include <FreeImage.h>

int main(void) {
    printf("Hello C++\n");

    const char* imagePath = "/tmp/n2.png";
    FreeImage_Initialise(TRUE);

    FIBITMAP *dib = FreeImage_Load(FIF_PNG, imagePath, PNG_DEFAULT);
    if (dib == NULL) {
        printf("failed to load %s\n", imagePath);
    } else {
        FreeImage_Unload(dib);
    }
    FreeImage_DeInitialise();
    return 0;
}

Build: g++ -o main main.cpp -lfreeimage
 

 

Example 2) Reading an image, detecting gray/color automatically

#include <stdio.h>
#include <FreeImage.h>
#include <string.h>

int main(int argc, char** argv) {
    printf("argc = %d \n",argc);
    const char* imagePath = "/tmp/n2.png";

    if(argc>1){
        for(int i=0; i<argc; i++){
            printf("argv[%d] = %s \n",i,argv[i]);
        }
        imagePath = argv[1];
    }

    printf("imagePath = %s\n",imagePath);
    FreeImage_Initialise(TRUE);
    FREE_IMAGE_FORMAT fif = FreeImage_GetFIFFromFilename(imagePath);
    FIBITMAP *dib = FreeImage_Load(fif, imagePath, PNG_DEFAULT);
    int width = FreeImage_GetWidth(dib);
    int height = FreeImage_GetHeight(dib);
    int bpp = FreeImage_GetBPP(dib);

    printf("width,height = %d x %d , bpp = %d\n", width,height,bpp);

    BYTE *ptr = FreeImage_GetBits(dib);
    if ( ptr == NULL )
    {
        printf("pixel Null\n");
    }

    RGBQUAD color;
    BYTE gray=0;
    for (int y = 0; y < FreeImage_GetHeight(dib); y++) {
        for (int x = 0; x < FreeImage_GetWidth(dib); x++) {
            if(bpp==8){
                FreeImage_GetPixelIndex(dib, x, y, &gray);
                printf("%d",gray);
            }else{
                FreeImage_GetPixelColor(dib, x, y, &color);
                printf("%d ",color.rgbBlue);
            }
        }
        printf("\n");
    }
    FreeImage_Unload(dib);
    return 0;
}

Build: g++ -o ImageReadUsingFreeImage ImageReadUsingFreeImage.cpp -lfreeimage
Run: ./ImageReadUsingFreeImage
Run: ./ImageReadUsingFreeImage /tmp/n5.png
 

 

Example 3) Taking a -print argument and printing pixel data

#include <stdio.h>
#include <FreeImage.h>
#include <string.h>

int main(int argc, char** argv) {
    printf("argc = %d \n",argc);
    const char* imagePath = "/tmp/n2.png";
    char printCommand[] = "-print";
    bool isPrintPixel = false;

    if(argc>=2){
        for(int i=0; i<argc; i++){
            printf("argv[%d] = %s \n",i,argv[i]);
        }
        imagePath = argv[1];
        if(argc>=3 && strcmp(argv[2],printCommand)==0){
            printf("arg[2] is %s. Print Pixels \n",argv[2]);
            isPrintPixel=true;
        }
    }

    printf("\n");
    printf("imagePath = %s\n",imagePath);
    FreeImage_Initialise(TRUE);
    FREE_IMAGE_FORMAT fif = FreeImage_GetFIFFromFilename(imagePath);
    FIBITMAP *dib = FreeImage_Load(fif, imagePath, PNG_DEFAULT);
    int width = FreeImage_GetWidth(dib);
    int height = FreeImage_GetHeight(dib);
    int bpp = FreeImage_GetBPP(dib);

    printf("width,height = %d x %d , bpp = %d\n", width,height,bpp);

    BYTE *ptr = FreeImage_GetBits(dib);
    if ( ptr == NULL )
    {
        printf("pixel Null\n");
    }

    RGBQUAD color;
    BYTE gray=0;
    for (int y = 0; y < FreeImage_GetHeight(dib); y++) {
        for (int x = 0; x < FreeImage_GetWidth(dib); x++) {
            if(bpp==8){
                FreeImage_GetPixelIndex(dib, x, y, &gray);
                if(isPrintPixel)printf("%d",gray);
            }else{
                FreeImage_GetPixelColor(dib, x, y, &color);
                if(isPrintPixel)printf("%d ",color.rgbBlue);
            }
        }
        if(isPrintPixel)printf("\n");
    }
    FreeImage_Unload(dib);
    return 0;
}

Build: g++ -o ImageReadUsingFreeImage ImageReadUsingFreeImage.cpp -lfreeimage
Run: ./ImageReadUsingFreeImage
Run: ./ImageReadUsingFreeImage /tmp/n5.png
Run: ./ImageReadUsingFreeImage /tmp/n5.png -print
Run: ./ImageReadUsingFreeImage /tmp/ioi.jpg
 

Example 4) Copying pixel data into an array with memcpy

         In many cases you will be using FreeImage precisely because you need the pixel data in an array.

         BYTE is an unsigned char; FreeImage.h contains typedef unsigned char BYTE;

#include <stdio.h>
#include <FreeImage.h>
#include <string.h>
#include <malloc.h>

int main() {
    const char * imagePath = "./n9.png";
    printf("imagePath = %s\n", imagePath);
    FreeImage_Initialise(TRUE);
    FREE_IMAGE_FORMAT fif = FreeImage_GetFIFFromFilename(imagePath);
    FIBITMAP * dib = FreeImage_Load(fif, imagePath, PNG_DEFAULT);
    int width = FreeImage_GetWidth(dib);
    int height = FreeImage_GetHeight(dib);
    int bpp = FreeImage_GetBPP(dib);

    printf("width,height = %d x %d , bpp = %d\n", width, height, bpp);

    BYTE * ptr = FreeImage_GetBits(dib);
    if (ptr == NULL) {
        printf("pixel Null\n");
    }

    unsigned int dibSize = FreeImage_GetDIBSize(dib);
    printf("dibSize %d\n", dibSize);

    // Note: this flat copy assumes an 8-bpp image whose pitch equals its width.
    BYTE * src = (BYTE * ) malloc(width * height);
    memcpy(src, ptr, width * height);

    RGBQUAD color;
    BYTE gray = 0;
    // Print pixels from the copied array (8 bpp) or via FreeImage (color).
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (bpp == 8) {
                gray = src[y * width + x];
                if (gray == 0) printf(" ");
                else printf("%d", gray);
            } else {
                FreeImage_GetPixelColor(dib, x, y, & color);
                printf("%d ", color.rgbBlue);
            }
        }
        printf("\n");
    }
    printf("\n");

    // Same output again, this time read directly from the bitmap for comparison.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (bpp == 8) {
                FreeImage_GetPixelIndex(dib, x, y, & gray);
                if (gray == 0) printf(" ");
                else printf("%d", gray);
            } else {
                FreeImage_GetPixelColor(dib, x, y, & color);
                printf("%d ", color.rgbBlue);
            }
        }
        printf("\n");
    }

    free(src);
    FreeImage_Unload(dib);
    return 0;
}

 

Example 5) Saving an array as a grayscale image

void FreeImageSetup() {
    int w = 128;
    int h = 128;
    int c = 1;
    BYTE * src = new BYTE[w * h * c];
    for (int i = 0; i < w * h; i ++) {
        src[i] = i; // wraps at 256, giving a repeating gradient
    }
    // pitch = w * c bytes per row, 8 bits per pixel
    FIBITMAP * Image = FreeImage_ConvertFromRawBits(src, w, h, w * c, 8, 0, 0, 0, false);
    FreeImage_Save(FIF_PNG, Image, "src.png", 0);
    FreeImage_Unload(Image);
    delete[] src;
}

Example 6) Saving an array as an RGB image

void FreeImageSetup() {
    int w = 16;
    int h = 16;
    int c = 3;
    BYTE * src = new BYTE[w * h * c];
    for (int i = 0; i < w * h; i ++) {
        src[i * c + 0] = i;   // B
        src[i * c + 1] = i;   // G
        src[i * c + 2] = 200; // R
    }
    // pitch = w * c bytes per row, 24 bits per pixel
    FIBITMAP * Image = FreeImage_ConvertFromRawBits(src, w, h, w * c, 8 * c, 0, 0, 0, false);
    FreeImage_Save(FIF_BMP, Image, "color.bmp", 0);
    FreeImage_Unload(Image);
    delete[] src;
}

 

Example 7) Saving an array as an image

        When FreeImage reads an image file, it reads it as 4-channel RGBA.

        So the example below takes only the RGB channels, leaving out the alpha channel.

        It shows how to save an image when the array already holds RGB data.

printf("imagePath = %s\n", path);
FreeImage_Initialise(TRUE);
FREE_IMAGE_FORMAT fif = FreeImage_GetFIFFromFilename(path);
FIBITMAP * dib = FreeImage_Load(fif, path, PNG_DEFAULT);
int width = FreeImage_GetWidth(dib);
int height = FreeImage_GetHeight(dib);
int pitch = FreeImage_GetPitch(dib);
int bpp = FreeImage_GetBPP(dib);
printf("width,height = %d x %d pitch=%d, bpp = %d\n", width, height, pitch, bpp);

int channel = 3;
BYTE * ptr = FreeImage_GetBits(dib);
BYTE * src = (BYTE * ) malloc(width * height * channel);
for (int i = 0; i < width * height; i++) {
    src[i * 3 + 0] = ptr[i * 4 + 0];
    src[i * 3 + 1] = ptr[i * 4 + 1];
    src[i * 3 + 2] = ptr[i * 4 + 2];
}
FIBITMAP * Image = FreeImage_ConvertFromRawBits(src, width, height, width * channel, 8 * channel, 0, 0, 0, false);
FreeImage_Save(FIF_PNG, Image, "src.png", 0);
FreeImage_Unload(Image);
For some images the copy above does not work. In that case, copy row by row using the pitch, as below:

BYTE * src = (BYTE * ) malloc(width * height * channel);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int srcIndex = y * pitch + x * 3;
        int dstIndex = y * width * channel + x * 3;
        src[dstIndex + 0] = ptr[srcIndex + 0];
        src[dstIndex + 1] = ptr[srcIndex + 1];
        src[dstIndex + 2] = ptr[srcIndex + 2];
    }
}
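If you only need packed 24-bit data regardless of what FreeImage loaded, you can also let the library convert for you. A minimal sketch, assuming the dib/width/height variables from the example above (FreeImage_ConvertTo24Bits returns a new bitmap and leaves the original untouched):

FIBITMAP * dib24 = FreeImage_ConvertTo24Bits(dib);
int pitch24 = FreeImage_GetPitch(dib24); // rows may still be padded to 4-byte boundaries
BYTE * bits24 = FreeImage_GetBits(dib24);
BYTE * packed = (BYTE * ) malloc(width * height * 3);
for (int y = 0; y < height; y++) {
    // copy row by row to strip the padding
    memcpy(packed + y * width * 3, bits24 + y * pitch24, width * 3);
}
FreeImage_Unload(dib24);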

 

Using it on Windows

On Windows I will use Visual Studio.

1. Create a C++ project.

2. In Project Properties - Configuration Properties - VC++ Directories, add the folder that contains FreeImage.h and FreeImage.lib

to both Include Directories and Library Directories.

These two files ship together.

 

3. Copy FreeImage.dll into the project folder.

Use the same source files as above.

If compilation fails because of precompiled headers,

go to Project Properties - C/C++ - Precompiled Headers - Precompiled Header and select Not Using Precompiled Headers.

3-1. If copying FreeImage.dll into the project folder every time is a hassle, put FreeImage.dll in C:/Windows/SysWOW64.

 

=================================

=================================

=================================

 

 

Source: http://noteroom.tistory.com/entry/FreeImage-%EA%B8%B0%EB%B3%B8-%EC%82%AC%EC%9A%A9%EB%B2%95

Among the basics of using FreeImage,

I want to write up the process of reading a file and then blitting it onto a window.

 

1. First, download the FreeImage library files.

There are the sources and prebuilt win32 binaries; there is no real need to compile, so I took the binaries.

(The point is to try out FreeImage, not to compile it. I once ported libpng to WinCE, but it was wasted effort:

the port succeeded, but the project never ended up using PNG files, so it was never actually used.)

 

2. Move FreeImage.h and FreeImage.lib into a suitable library directory,

and copy FreeImage.dll into windows\system (or windows\system32) in advance.

 

3. Basic setup

Include the header (the path is based on my own directory layout) and add the library to the project settings.

#include "../LIB/FreeImage.h"

 

Since this is just for testing, create one global vector:

vector<FIBITMAP*> g_fibmp;

 

Insert the initialization/cleanup calls:

FreeImage_Initialise() / FreeImage_DeInitialise()
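A minimal sketch of where these calls usually go (the exact placement is an assumption, not something the original post prescribes):

// at program startup, before any other FreeImage call
FreeImage_Initialise();
// ... load, draw, save ...
// at program shutdown
FreeImage_DeInitialise();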

 

4. File-loading code

FIBITMAP* fibmp;

FREE_IMAGE_FORMAT fiformat = FIF_UNKNOWN;

 

// Determine the format.

fiformat = FreeImage_GetFIFFromFilename(filename);
if(FIF_UNKNOWN != fiformat) {
    // Load it.
    fibmp = FreeImage_Load(fiformat, filename);
    g_fibmp.push_back(fibmp);
}

 

It loads very simply.

 

5. Let's draw it.

HBITMAP hBmp;
FIBITMAP * fibmp;
fibmp = g_fibmp[0];
int cx, cy;
cx = FreeImage_GetWidth(fibmp);
cy = FreeImage_GetHeight(fibmp);
BYTE * pData = FreeImage_GetBits(fibmp);
hBmp = CreateCompatibleBitmap(hdc, cx, cy);
SetDIBits(hdc, hBmp, 0, cy, pData, FreeImage_GetInfo(fibmp), DIB_RGB_COLORS);
DrawBitmap(hdc, 0, 0, hBmp);
DeleteObject(hBmp);

This code creates a DI bitmap, fills it with the image data, and displays it.

The DrawBitmap function was taken from winapi.co.kr; it is no different from

the blitting code found all over the internet.
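DrawBitmap itself isn't shown in the post; a typical equivalent (a sketch, not the original winapi.co.kr code) looks like this:

void DrawBitmap(HDC hdc, int x, int y, HBITMAP hBmp)
{
    HDC memDC = CreateCompatibleDC(hdc);
    HGDIOBJ old = SelectObject(memDC, hBmp);
    BITMAP bm;
    GetObject(hBmp, sizeof(BITMAP), &bm); // query the bitmap's dimensions
    BitBlt(hdc, x, y, bm.bmWidth, bm.bmHeight, memDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);
    DeleteDC(memDC);
}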

 

6. Release everything.

char buff[MAX_PATH];
for(int i = 0 ; i < g_fibmp.size() ; ++i) {
    sprintf(buff, "c:\\TestSave_%d.bmp", i);
    FreeImage_Save(FIF_BMP, g_fibmp[i], buff);
    FreeImage_Unload(g_fibmp[i]);
}

I threw in saving as well... it works very well.



Source: http://noteroom.tistory.com/entry/FreeImage-기본-사용법 [책읽는아이 낙서장]

 

 

 

=================================

=================================

=================================

 

 

Source: http://m.blog.naver.com/cra2yboy/90121411759

 

What is FreeImage?

 

 

FreeImage is an open-source library project that supports the popular graphics formats required by today's multimedia applications - PNG, BMP, JPEG, TIFF and more. It is easy to use, fast, thread-safe, runs on all 32-bit versions of Windows, and is cross-platform (Linux, Mac OS X).

FreeImage can be used from many languages: C, C++, VB, C#, Delphi, Java, and scripting languages such as Perl, Python, PHP, TCL, and Ruby.

The library is provided in two versions: a DLL version that can be linked with any Win32 C/C++ compiler, and a source version. Workspace files are provided for VS.NET 2003, VS.NET 2005 and VS.NET 2008, and makefiles are provided for Linux, MinGW and Mac OS X.

 

Loading an image file

 

 

 

This example reads an image file and draws it on the screen.

 

#include <windows.h>
#include <string>
#include <tchar.h>

#include "FreeImage.h"
#pragma comment(lib, "FreeImage.lib")

typedef std::basic_string<TCHAR> tstring;

LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);

FIBITMAP * image;
bool DrawImage(HWND hWnd, FIBITMAP * dib);

int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) {
    UNREFERENCED_PARAMETER(hPrevInstance);
    UNREFERENCED_PARAMETER(lpCmdLine);

    tstring strWinClass(_T("FreeImage Sample"));
    tstring strWinTitle(_T("FreeImage Sample: Loading an image file"));

    WNDCLASSEX wcex = {
        sizeof(WNDCLASSEX),
        CS_HREDRAW | CS_VREDRAW,
        WndProc,
        0, 0,
        hInstance,
        LoadIcon(NULL, IDI_APPLICATION), // the original used IDC_ICON, which is not a valid icon constant
        LoadCursor(NULL, IDC_ARROW),
        (HBRUSH)(COLOR_WINDOW + 1),
        NULL,
        strWinClass.data(),
        NULL
    };

    if (!RegisterClassEx( & wcex))
        return E_FAIL;

    HWND hWnd;
    hWnd = CreateWindow(strWinClass.data(), strWinTitle.data(), WS_OVERLAPPEDWINDOW,
        CW_USEDEFAULT, CW_USEDEFAULT,
        518, 308,
        NULL, NULL, hInstance, NULL);
    if (!hWnd)
        return E_FAIL;

    ShowWindow(hWnd, nCmdShow);
    UpdateWindow(hWnd);

    image = FreeImage_Load(FIF_PNG, "FFTA2_SimoonDunes.png", PNG_DEFAULT);
    if (!image)
        return 0;

    // Draw the image
    DrawImage(hWnd, image);

    MSG msg;
    while (GetMessage( & msg, NULL, 0, 0)) {
        TranslateMessage( & msg);
        DispatchMessage( & msg);
    }

    return (int) msg.wParam;
}

LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) {
    PAINTSTRUCT ps;
    HDC hdc;

    switch (message) {
        case WM_PAINT:
        {
            hdc = BeginPaint(hWnd, & ps);
            DrawImage(hWnd, image);
            EndPaint(hWnd, & ps);
            break;
        }
        case WM_KEYUP:
        {
            switch (wParam)
            {
                case VK_ESCAPE:
                    PostQuitMessage(0);
                    break;
            }
            break;
        }
        case WM_DESTROY:
        {
            PostQuitMessage(0);
            break;
        }
        default:
            return DefWindowProc(hWnd, message, wParam, lParam);
    }
    return 0;
}

bool DrawImage(HWND hWnd, FIBITMAP * dib) {
    HDC hdc, memdc;
    HBITMAP hBmp;
    BITMAPINFO * bi;
    HGDIOBJ OldBmp;
    RECT rt;

    GetClientRect(hWnd, & rt);
    int nWidth = rt.right - rt.left;
    int nHeight = rt.bottom - rt.top;

    hdc = GetDC(hWnd);
    if (hdc == NULL) {
        return false;
    }

    memdc = CreateCompatibleDC(hdc);
    if (memdc == NULL) {
        return false;
    }

    hBmp = CreateCompatibleBitmap(hdc, nWidth, nHeight);
    if (hBmp == NULL) {
        return false;
    }

    OldBmp = SelectObject(memdc, hBmp);

    FillRect(memdc, & rt, (HBRUSH) GetStockObject(BLACK_BRUSH));

    bi = FreeImage_GetInfo(dib);

    // Start at scanline 0 (the original passed 1, which skips a line).
    if (!SetDIBits(memdc, hBmp, 0, FreeImage_GetHeight(dib),
            FreeImage_GetBits(dib), bi, DIB_RGB_COLORS)) {
        return false;
    }

    BitBlt(hdc, 0, 0, nWidth, nHeight, memdc, 0, 0, SRCCOPY);
    SelectObject(memdc, OldBmp);
    DeleteObject(hBmp);
    DeleteDC(memdc); // CreateCompatibleDC pairs with DeleteDC, not ReleaseDC
    ReleaseDC(hWnd, hdc);

    return true;
}

 

References

 

FreeImage 공식 홈페이지 : http://freeimage.sourceforge.net

FreeImage Download Page : http://freeimage.sourceforge.net/download.html

 

 

 

 


 

 

 

=================================

=================================

=================================

 

 

 

Source: http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=9

 

What is texturing (for total newbies)

Texturing is a method for adding detail to our scene by mapping texture images onto our polygons. When we have a 3D model and we want to render it with an image mapped onto it, we feed OpenGL the desired image (texture) and texture coordinates (we're working with 2D textures now, so we will feed OpenGL 2D texture coordinates), then do some bureaucracy, like enabling texturing, and we are ready to go.

Texture mapping - how to do it

OK, the first thing we need to do is to be able to load pictures from disk and put them in some easy-to-use format, like RGB pixel by pixel. OpenGL doesn't deal with image loading; it just wants us to provide the data in one such format so that it can create a texture from it. For the purpose of loading images, I decided to go with the FreeImage library, which is, as the name suggests, free, so no one will chase you for using it in your product. So go to:
http://freeimage.sourceforge.net/
and download it. After unpacking it somewhere in your libraries directory, add a new entry to Include Directories and Library Directories in Visual Studio (it's explained in the first tutorial, in case you don't know where it is):

 

Now that we are able to load images, we can start working with textures. Textures in OpenGL are used like other OpenGL objects - first we tell OpenGL to generate textures, and it provides us a texture name (ID) with which we can address the texture. To make things easy, we will create a wrapper C++ class that encapsulates creation, deletion, and everything important related to texturing. Here is what the class looks like:

class CTexture {
public:
    bool loadTexture2D(string a_sPath, bool bGenerateMipMaps = false);
    void bindTexture(int iTextureUnit = 0);

    void setFiltering(int a_tfMagnification, int a_tfMinification);

    int getMinificationFilter();
    int getMagnificationFilter();

    void releaseTexture();

    CTexture();
private:
    int iWidth, iHeight, iBPP; // Texture width, height, and bytes per pixel
    UINT uiTexture; // Texture name
    UINT uiSampler; // Sampler name
    bool bMipMapsGenerated;

    int tfMinification, tfMagnification;

    string sPath;
};
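A sketch of how this class is meant to be used (the filter constants come from the enumerator in texture.h mentioned later in the article; the file name is hypothetical):

CTexture grassTexture;
grassTexture.loadTexture2D("grass.jpg", true); // load the image and generate mipmaps
grassTexture.setFiltering(TEXTURE_FILTER_MAG_BILINEAR, TEXTURE_FILTER_MIN_TRILINEAR);
grassTexture.bindTexture(0); // bind to texture unit 0 before rendering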

We will go directly into the loadTexture2D function, which is maybe the most important function in this tutorial:

bool CTexture::loadTexture2D(string a_sPath, bool bGenerateMipMaps) {
    FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
    FIBITMAP * dib(0);

    fif = FreeImage_GetFileType(a_sPath.c_str(), 0); // Check the file signature and deduce its format

    if (fif == FIF_UNKNOWN) // If still unknown, try to guess the file format from the file extension
        fif = FreeImage_GetFIFFromFilename(a_sPath.c_str());

    if (fif == FIF_UNKNOWN) // If still unknown, return failure
        return false;

    if (FreeImage_FIFSupportsReading(fif)) // Check if the plugin has reading capabilities and load the file
        dib = FreeImage_Load(fif, a_sPath.c_str());
    if (!dib)
        return false;

    BYTE * bDataPointer = FreeImage_GetBits(dib); // Retrieve the image data

    iWidth = FreeImage_GetWidth(dib); // Get the image width and height
    iHeight = FreeImage_GetHeight(dib);
    iBPP = FreeImage_GetBPP(dib);

    // If somehow one of these failed (they shouldn't), return failure
    if (bDataPointer == NULL || iWidth == 0 || iHeight == 0)
        return false;

    // Generate an OpenGL texture ID for this texture
    glGenTextures(1, & uiTexture);
    glBindTexture(GL_TEXTURE_2D, uiTexture);

    int iFormat = iBPP == 24 ? GL_BGR : iBPP == 8 ? GL_LUMINANCE : 0;
    int iInternalFormat = iBPP == 24 ? GL_RGB : GL_DEPTH_COMPONENT;

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, iWidth, iHeight, 0, iFormat, GL_UNSIGNED_BYTE, bDataPointer);

    if (bGenerateMipMaps) glGenerateMipmap(GL_TEXTURE_2D);

    FreeImage_Unload(dib);

    glGenSamplers(1, & uiSampler);

    sPath = a_sPath;
    bMipMapsGenerated = bGenerateMipMaps;

    return true; // Success
}

First, when we provide an image file path to the function, FreeImage will try to guess which file type it is (probably by extension, then maybe by examining file headers). We do this with the functions FreeImage_GetFileType, FreeImage_GetFIFFromFilename, and FreeImage_FIFSupportsReading. Together they determine whether the given file is an image and whether FreeImage is capable of reading it. Don't worry, it supports all major graphics formats, so it really shouldn't be a problem. If everything is good, we call FreeImage_Load to finally load the image into memory.

A very important thing about textures is that their dimensions traditionally MUST be powers of 2. Well, to be honest, I don't know exactly why, but I can think of likely reasons - creating mipmaps (more on that later) may be problematic otherwise, or memory alignment. If you know more about this, write it in the comments and I will edit the article. There are, however, extensions that allow arbitrary rectangular textures to be loaded, but in this tutorial we will use a 256x256 texture.
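As an aside, whether a dimension is a power of two can be checked with a simple bit trick:

// true for 1, 2, 4, 8, ...: a power of two has exactly one bit set
bool isPowerOfTwo = (n > 0) && ((n & (n - 1)) == 0);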

Now we are ready to create an OpenGL texture from the loaded data. First we must retrieve the image properties for later use in OpenGL. We store them in the iWidth, iHeight, and iBPP member variables. We also retrieve the data pointer with the FreeImage_GetBits function (the name may be a little misleading). Then we finally generate a texture by calling glGenTextures. It takes two parameters - how many textures we want, and where to store their names (classic convention). After creating the texture object, we must bind it by calling glBindTexture, to tell OpenGL we are going to work with this one. Its first parameter is the target, which can be GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, or some other target like GL_TEXTURE_CUBE_MAP (we'll get to those in later tutorials). Refer to the manual pages for the full list. In this tutorial we stick to 2D textures, so the target is GL_TEXTURE_2D. The second parameter is the texture ID generated previously.

Now it seems we can finally upload the texture data to the GPU, but there is still one thing to solve. FreeImage doesn't store our images in RGB format; on Windows it's actually BGR, and this should be platform-dependent as far as I know. But this is no problem - when sending data to the GPU, we just tell it the data is in BGR format. And now we really are ready to upload the data... or are we? Yes, but a little word about texture filters should be said first.

Texture filtering

When giving OpenGL texture data, we must also tell it how to FILTER the texture. What does this mean? It's the way OpenGL takes colors from the image and draws them onto a polygon. Since we will practically never map a texture pixel-perfect (the polygon's on-screen pixel size exactly matching the texture size), we need to tell OpenGL which texels (single pixels, or colors, from the texture) to take. Several texture filters are available, and they are defined for both minification and magnification. What does that mean? First imagine a wall that we are looking straight at, whose on-screen pixel size is the same as our texture size (256x256), so that each pixel has a corresponding texel:

 

 

 

In this case everything is OK; there is no problem. But if we move closer to the wall, the texture needs to be MAGNIFIED - there are now more pixels on screen than texels in the texture, and we must tell OpenGL how to fetch values from it. For this case there are two filters:

NEAREST FILTERING: The GPU simply takes the texel nearest to the exactly calculated point. This is very fast, as no additional calculations are performed, but its quality is also very low: multiple pixels share the same texel and the visual artifacts are very bold. The closer to the wall you are, the more "squary" it looks (many squares with different colors, each square representing one texel).

BILINEAR FILTERING: This one doesn't just take the closest texel; it calculates the distances to all 4 adjacent texels and takes their weighted average depending on the distance. This results in much better quality than nearest filtering, but requires a little more computation (on modern hardware this time is negligible). Have a look at the pictures: as you can see, bilinear filtering gives us smoother results. You may wonder about trilinear filtering, which you may have heard of as well - soon we'll get into that too.

The second case is if we move further from the wall. Now the texture is bigger than the on-screen render of our simple wall, so it must be MINIFIED. The problem is that multiple texels may now correspond to a single fragment. What shall we do? One solution would be to average all the corresponding texels, but this can be really slow, as the whole texture might potentially fall into a single pixel. The nicer solution to this problem is called MIPMAPPING. The original texture is stored not only at its original size, but also downsampled to all smaller resolutions, with each dimension divided by 2, creating a "pyramid" of textures (this image is from Wikipedia):

 

The individual downsampled images are called mipmaps. With mipmapping enabled, the GPU selects a mipmap of appropriate size according to the distance we see the object from, and then performs filtering on it. This results in higher memory consumption (exactly 33% more, as the sum 1/4 + 1/16 + 1/64 + ... converges to 1/3), but gives nice visual results at very nice speed. And here is another filtering term - TRILINEAR filtering. What's that? It's almost the same as bilinear filtering, with the addition that we take the two nearest mipmaps, do bilinear filtering on each of them, and then average the results. The name TRIlinear comes from the third dimension entering the process - with bilinear filtering we interpolate in two dimensions, and trilinear filtering extends this to three. Another filter, the most computationally expensive but with the best results, is ANISOTROPIC filtering - but that will be covered in some later tutorial, not this one, which should serve as an introduction to texturing.
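The 33% figure follows from the geometric series of mipmap sizes (each level holds a quarter as many texels as the one before):

\sum_{k=1}^{\infty} \left(\frac{1}{4}\right)^{k} = \frac{1/4}{1 - 1/4} = \frac{1}{3}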

Finalizing our texture

After this brief explanation of texture filters, we can proceed with creating our texture. All we need to do is send the texture data to the GPU and tell OpenGL in which format we stored it. The function for sending data to the GPU is glTexImage2D. Its parameters (in order) are:

  1. target - in our case it is GL_TEXTURE_2D
  2. texture LOD - Level Of Detail - we set this to zero. This parameter is used for defining mipmaps: the base level (full resolution) is 0, and all subsequent levels (1/4 of the texture size, 1/16 of the texture size...) are higher, i.e. 1, 2 and so on. We don't have to define them manually (even though we can, and we don't even have to define ALL mipmap levels if we don't want to - OpenGL doesn't require that); luckily there is a function for mipmap generation (we'll get to it soon).
  3. internal format - the specification says it's the number of components per pixel, but it doesn't accept numbers; it accepts constants like GL_RGB and so on (see the spec). Even though our data is in BGR format, we put GL_RGB here anyway, because this parameter doesn't accept GL_BGR - it really only states the number of components per texel. I don't find this very intuitive, but it's probably there for backwards compatibility.
  4. width - texture width
  5. height - texture height
  6. border - width of the border. In older OpenGL specifications you could create a border around the texture (it's really useless); in the 3.3 specification (and also in later specifications, like 4.2 at the time of writing this tutorial), this parameter MUST be zero.
  7. format - the format in which we supply the data, GL_BGR in this case
  8. type - the data type of a single value; we use unsigned bytes, thus GL_UNSIGNED_BYTE
  9. data - finally, a pointer to the data

Phew, so many parameters. There's no need to remember them in order; if you need the function, consult the specification. The important thing is to understand what it does. Now, the last thing that hasn't been covered is the creation of mipmaps. There are two ways - either we resize the images ourselves and call glTexImage2D with different LODs, or we simply call the function OpenGL provides right after uploading the data - glGenerateMipmap. Its only parameter is the target, which is GL_TEXTURE_2D in our case.

Now that the data is on the GPU, we need to tell OpenGL how to filter the texture. For those who remember OpenGL in the older days (2.1 and below), we would set filtering like this:

 

// Set magnification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); 
// Set minification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

 

But not anymore. The problem with the above is that if we wanted to use the same texture with different filters, we would have to keep changing its parameters. It could be done that way, but isn't there a nicer, more elegant way? Yes there is - samplers.

Samplers

I couldn't find a definition of a sampler on the internet, but I will try to explain it as simply as possible. Sampling is the process of fetching a value from a texture at a given position, so a sampler is an object that stores the information about how to do it - all the filtering parameters. If we want to change the filtering, we just bind a different sampler with different properties and we're done. This line is copied from the spec:

"If a sampler object is bound to a texture unit and that unit is used to sample from a texture, the parameters in the sampler are used to sample from the texture, rather than the equivalent parameters in the texture object bound to that unit."

One part of it basically says that if a sampler is bound to a texture unit, its parameters supersede the parameters of the texture object bound to that unit. So instead of setting texture parameters, we will create a sampler that does exactly this. Even though in this tutorial we create one sampler per texture (so it behaves as if there were no samplers), it's a more general solution and thus better. Like all OpenGL objects, samplers are generated (we get their names) and then accessed by that name. So when loading a texture, we just call glGenSamplers, and then we set the sampler's parameters with our member function:

 

void CTexture::setFiltering(int a_tfMagnification, int a_tfMinification) 
{
   // Set magnification filter
   if(a_tfMagnification == TEXTURE_FILTER_MAG_NEAREST) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST); 
   else if(a_tfMagnification == TEXTURE_FILTER_MAG_BILINEAR) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); 

   // Set minification filter
   if(a_tfMinification == TEXTURE_FILTER_MIN_NEAREST) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST); 
   else if(a_tfMinification == TEXTURE_FILTER_MIN_BILINEAR) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR); 
   else if(a_tfMinification == TEXTURE_FILTER_MIN_NEAREST_MIPMAP) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST); 
   else if(a_tfMinification == TEXTURE_FILTER_MIN_BILINEAR_MIPMAP) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); 
   else if(a_tfMinification == TEXTURE_FILTER_MIN_TRILINEAR) 
      glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); 

   tfMinification = a_tfMinification; 
   tfMagnification = a_tfMagnification; 
}

 

We just pass in values from the enumerator structure defined in texture.h, and the filtering parameters change. In the application, you can press the F1 and F2 keys to switch between minification and magnification filters on the ice texture (run the application in windowed mode, because the window title bar shows the actual texture filters). You may notice that the enumerator structure has only 5 minification filters, while googling turns up 6. I just didn't include the filter that takes the two closest mipmaps, applies the nearest criterion to each, and then averages the results - it simply doesn't make much sense (even though OpenGL allows it). But if you really want to, you can try it (set the minification filter to GL_NEAREST_MIPMAP_LINEAR).
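The bindTexture member isn't listed in the article; a minimal sketch of what it presumably does, given the uiTexture and uiSampler members (GL 3.3):

void CTexture::bindTexture(int iTextureUnit)
{
    glActiveTexture(GL_TEXTURE0 + iTextureUnit); // select the texture unit
    glBindTexture(GL_TEXTURE_2D, uiTexture);     // bind our texture to it
    glBindSampler(iTextureUnit, uiSampler);      // apply this texture's sampler to that unit
}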

I hope I have demystified texture filtering for you. Now we are ready to see how texture mapping is done.

Texture Coordinates

Yeah, that's it - we finally got to it. Texture coordinates (also called UV coordinates) are how a texture is mapped along a polygon. We just provide an appropriate texture coordinate with every vertex and we're done. In our 2D texture case, a texture coordinate is represented by two numbers, one along the X axis (the U coordinate) and one along the Y axis (the V coordinate):

 

 

 

We simply need to mirror the shape of our polygon in texture coordinates in order to map the texture properly. So if we want to map our texture onto a quad, we provide the coordinates (0.0, 1.0) to the upper-left vertex, (1.0, 1.0) to the upper-right vertex, (1.0, 0.0) to the bottom-right vertex, and (0.0, 0.0) to the bottom-left vertex. If we exceed the <0..1> range, the texture gets mapped multiple times: if we mapped the coordinates (0.0, 10.0), (10.0, 10.0), (10.0, 0.0) and (0.0, 0.0) onto the quad, the texture would be repeated 10 times along the X axis and 10 times along the Y axis. This texture repeating is the default behavior; it can be turned off so that the coordinates cannot exceed these values, or, if they do, only the border values are taken (this is used when creating skyboxes, for example). But what if we wanted to map a texture onto, say, a triangle? You can probably guess by now, and the picture demonstrates it:

Now that we know which texture coordinate values are right, we must learn how to provide them. A texture coordinate is just another vertex attribute. So when creating data for rendering in a VBO, we add two additional floats per vertex for the texture coordinate. Nothing else. We'll also need to add a few lines to the shaders. Starting from this tutorial, I will use my CVertexBuffer class, which wraps a VBO and allows dynamic addition of data (so I don't have to count the number of polygons and the size of the VBO before rendering; I just add as much as I want and then upload the data to the GPU). It uses std::vector internally, and you can have a look at its code if you're interested. We'll use one such buffer for the cube, the pyramid, and the ground (which is only one quad, made of 2 triangles, textured with a grass texture). Then we call glDrawArrays with different offsets and different textures bound.

One important thing I changed in this tutorial is the format of the data. We no longer have one VBO for vertices and one for texture coordinates; instead, each vertex consists of three floats for the position followed by two floats for the texture coordinate. After that, we just need to tell OpenGL, when calling glVertexAttribPointer, the distance between two consecutive attributes (the STRIDE parameter). In this case the distance between two consecutive vertex attributes is the size of the whole per-vertex data, i.e. sizeof(vec3) + sizeof(vec2) (5 floats). You can find it in the initScene function, as shown in the sketch below. Don't forget to enable texturing by calling glEnable(GL_TEXTURE_2D) at the end of initScene.
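A sketch of the interleaved attribute setup just described (attribute indices 0 and 1 are assumptions matching typical shader layouts, not taken from the article's code):

GLsizei stride = 5 * sizeof(float); // sizeof(vec3) + sizeof(vec2)
// attribute 0: position - the first 3 floats of each vertex
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
// attribute 1: texture coordinate - the 2 floats after the position
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));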

Accessing texture in fragment shader

The last thing covered in this extremely long tutorial is how to access the texture data in the fragment shader. The first thing we must do is pass the texture coordinate, which is an input variable of the vertex shader, on to the fragment shader. The second important thing is to create a uniform sampler2D variable in the fragment shader. Here is what the fragment shader looks like (the vertex shader is almost the same as in the previous tutorial; I recommend having a look at it as well):

 

#version 330 

in vec2 texCoord; 
out vec4 outputColor; 

uniform sampler2D gSampler; 

void main() 
{ 
   outputColor = texture2D(gSampler, texCoord); 
}

 

With this variable we will fetch texture data based on the texture coordinates. From the program, we just need to set the sampler to one integer. What does this integer mean? It's the TEXTURE UNIT number. Texture unit is another important term. You may have heard of multitexturing - mapping multiple textures at once. Well, we can have multiple texture units, each with a different texture bound, and then differentiate between them by their numbers. To specify which texture unit we use, we call the function glActiveTexture. The number of texture units supported is graphics-card dependent, but it should be sufficient for most uses (I'm too lazy to find out how many my GTX 260 has, but I'd guess 32 or 64). Since we never need more than one texture at once here (we only need data from one texture in the fragment shader), we will only use texture unit 0. In our rendering code we must first bind our texture to texture unit 0, and then set the sampler uniform variable to 0 as well, to tell OpenGL that through that uniform variable we want the texture bound to texture unit 0. Then in the fragment shader we just call the function texture2D, which takes the sampler variable as its first parameter and the texture coordinates as its second.
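Putting that together at render time, a short sketch (iSamplerLoc is assumed to be the location of the gSampler uniform, obtained via glGetUniformLocation):

grassTexture.bindTexture(0); // texture + sampler onto texture unit 0
glUniform1i(iSamplerLoc, 0); // tell gSampler to sample from unit 0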

A short word at the end...

This is what has been done today (you can play around by rotating the objects with the arrow keys):

I hope you don't have a headache after reading this tutorial. It may take some time for all of these things to settle in your head, but once they do, you will realize that it isn't that difficult at all. I would say the people at AMD and nVidia have it difficult - they actually have to implement the OpenGL specification. But that's not something we need to worry about. They are (probably) happy to do it, and we are users who are happy to use it.

If you have any questions, write them in the comments or send me an e-mail. The next tutorial is going to be about blending - we'll make transparent objects, so stay tuned!

 

=================================

=================================

=================================

 

 


