CUDA (Compute Unified Device Architecture)

Characteristics of the GPU Architecture

  • From the perspective of cores: a CPU consists of a few cores optimized for sequential, serial processing, whereas a GPU consists of thousands of smaller, more efficient cores designed to handle many tasks at once, making it well suited to parallel work. In other words, each CPU core is individually very powerful, but there are few of them, so the CPU fares poorly at massively parallel computation; each GPU core is comparatively weak, but there are so many of them that the GPU can run a great number of computations simultaneously and supports parallelism very well. These hardware differences determine the application domains: the CPU is the computer's center of computation and control, while the GPU is used mainly for graphics and image processing. An image is represented in the computer as a matrix, so processing an image is essentially performing matrix computations, and many matrix operations parallelize well. This is why image processing can be made very fast, and why GPUs found such fertile ground in the graphics domain.

  • From the perspective of data processing: the CPU needs strong generality to handle many different data types (integers, floating point, and so on), and it must also excel at the heavy branching and interrupt handling that logic-laden code produces. The CPU is, in effect, a highly capable generalist that gets everything done properly, but it needs a great deal of hardware resources to do so, which is why a CPU cannot have very many cores (mainstream CPUs of this era topped out at around 16). The GPU, by contrast, faces large volumes of highly uniform, mutually independent data in a clean computing environment that never needs to be interrupted. A GPU has a very large number of cores (the Fermi architecture already had up to 512), and although each core is far weaker than a CPU core, their sheer number yields a "strength in numbers" advantage on simple computational tasks. That is the appeal of parallel computing.

Heterogeneous computing: processors with different architectures cooperate to complete a computing task; the parallel parts run on the GPU and the serial parts run on the CPU. The CPU drives the overall program flow while the GPU performs the concrete computation; once the GPU threads finish their work, the results computed on the GPU are copied back to the CPU side, completing one round of computation.

The CUDA Thread Model

Thread Model (Software Perspective)

When a kernel executes on the device, it actually launches a large number of threads. All of the threads launched by one kernel are collectively called a grid, and the threads of a grid share the same global memory space. The grid is the first level of the thread hierarchy; a grid is in turn divided into many thread blocks, and each block contains many threads, which forms the second level. The terms are summarized in the list below, followed by a minimal code sketch.

[Figure 1: the grid/block/thread hierarchy]

    1. Thread: the basic unit of parallel execution;
    2. Thread Block: a group of threads that cooperate with one another. Blocks have the following properties:
      1. threads in the same block can synchronize with each other;
      2. they can exchange data quickly through shared memory;
      3. a block can be organized in 1, 2, or 3 dimensions (generally 3);
      4. every block and every thread has its own ID (blockIdx and threadIdx), through which we locate a particular block or thread.
    3. Grid: a group of thread blocks.
      1. a grid can be organized in 1, 2, or 3 dimensions (generally 2);
      2. all blocks in a grid share global memory;
      3. grid and block dimensions are both declared as variables of type dim3; dim3 can be viewed as a struct with three unsigned-integer members (x, y, z), and any member left unspecified at definition time defaults to 1;
      4. dim3 is visible only on the host side; its device-side counterpart is uint3.
    4. Warp: the unit in which the GPU schedules and executes threads (the basic execution unit of an SM); the warp size in current CUDA hardware is 32. The threads of one warp execute the same instruction on different data, which is what SIMT (Single Instruction, Multiple Threads) means.
    5. Kernel: the core function that executes on the GPU; a kernel runs on one particular grid.
      1. One kernel <==> One Grid
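
A minimal sketch of how these pieces fit together; the kernel name vecAdd, the block size 256, and the device pointers are illustrative, not from the text:

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // global index = block offset + thread offset within the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
    // consecutive groups of 32 threads form warps and execute this in lockstep (SIMT)
}

// host side: one launch of vecAdd creates one grid
void launchVecAdd(const float *d_a, const float *d_b, float *d_c, int n)
{
    dim3 block(256);                          // 256 threads per block; y and z default to 1
    dim3 grid((n + block.x - 1) / block.x);   // enough blocks to cover all n elements
    vecAdd<<<grid, block>>>(d_a, d_b, d_c, n);
}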

Streaming Processors (Hardware Perspective)

    1. SP (streaming processor): also called a CUDA core. The most basic processing unit; concrete instructions and tasks are executed on the SPs. One SP executes one thread;
    2. SM (streaming multiprocessor): also called a GPU "big core", made up of multiple SPs plus other resources such as the warp scheduler, registers, and shared memory. Blocks execute on SMs.
      1. The SM can be regarded as the heart of the GPU (compare a CPU core); registers and shared memory are the SM's scarce resources, and CUDA divides them among all threads resident on the SM. These limited resources therefore put a strict cap on the number of active warps per SM, which in turn caps the parallelism available;
      2. The number of SPs per SM depends on the GPU architecture: 32 on Fermi GF100, 48 on GF10x, 192 on Kepler, 128 on Maxwell;
      3. Not all threads on an SM execute at the same moment. NVIDIA groups 32 threads into a warp, and the warp is the basic unit of scheduling and execution: all threads of a warp execute the same instruction in parallel. A warp occupies the SM while it runs, and multiple warps take turns on the SM under the control of the SM's hardware warp scheduler. Each warp currently holds 32 threads (NVIDIA reserves the right to change this). Consequently, the number of resident threads on a GPU is at most (number of SMs) x (max warps per SM) x 32; the sketch after this list queries these limits at runtime;
      4. Once a block has been assigned to an SM, it stays resident on that SM until it finishes. An SM can hold several blocks at the same time, but their warps share the SM's execution units and run by turns.
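
These per-SM and per-device limits can be read at runtime with cudaGetDeviceProperties(); a minimal sketch for device 0 (the fields printed are a selection, not an exhaustive list):

#include <stdio.h>
#include "cuda_runtime.h"

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0

    printf("SMs:                     %d\n", prop.multiProcessorCount);
    printf("warp size:               %d\n", prop.warpSize);
    printf("max threads per SM:      %d\n", prop.maxThreadsPerMultiProcessor);
    printf("max threads per block:   %d\n", prop.maxThreadsPerBlock);
    printf("registers per block:     %d\n", prop.regsPerBlock);
    printf("shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);

    // upper bound on resident threads for the whole device
    printf("max resident threads:    %d\n",
           prop.multiProcessorCount * prop.maxThreadsPerMultiProcessor);
    return 0;
}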

Software/Hardware Correspondence & Hardware Structure

In short: a thread runs on an SP (CUDA core), a block runs on an SM, and a grid occupies the whole device.

[Figure 2: mapping from the software thread hierarchy to the hardware]

[Figure 3: GPU hardware structure]

Memory Model

The CUDA memory model is organized into the following levels:

    1. every thread has its own registers;
    2. every thread has its own local memory;
    3. every thread block has its own shared memory, which all threads of the block share;
    4. every grid has its own global memory, usable by threads from any block;
    5. every grid also has its own constant memory and texture memory, likewise usable by threads from any block.

In terms of access speed, registers > shared memory > global memory; note that despite its name, local memory physically resides in off-chip device memory, so accessing it is as slow as accessing global memory. A shared-memory sketch follows the figure.

[Figure 4: the CUDA memory model]
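
A minimal sketch that touches every level of this hierarchy: a block-wide sum where tid and val live in registers, cache lives in the block's shared memory, and in/out live in global memory. The kernel name, BLOCK, and the assumption that blockDim.x equals BLOCK and is a power of two are mine, not from the text:

#define BLOCK 256   // assumed block size; must match the launch configuration

__global__ void blockSum(const int *in, int *out)
{
    __shared__ int cache[BLOCK];                  // shared memory: one copy per block

    int tid = threadIdx.x;                        // registers: private to each thread
    int val = in[blockIdx.x * blockDim.x + tid];  // read from global memory
    cache[tid] = val;
    __syncthreads();                              // the whole block synchronizes here

    // tree reduction inside the block, entirely in shared memory
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2)
    {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = cache[0];               // block result goes back to global memory
}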

Programming Model

How do we write a program or function that runs on the GPU?

Qualifier and example              Executed on the:   Only callable from the:
__device__ float DeviceFunc()      device             device
__global__ void  KernelFunc()      device             host
__host__   float HostFunc()        host               host

As the table above shows, a kernel function is defined with __global__: it is called from the CPU and executed on the GPU. Note that a __global__ function must return void. A short sketch with all three qualifiers follows.
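
(The function names below are illustrative, not from the text.)

__device__ float square(float x)      // runs on the device, callable only from device code
{
    return x * x;
}

__global__ void squareAll(float *data, int n)   // runs on the device, launched from the host
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = square(data[i]);    // a __global__ function may call a __device__ one
}

__host__ float squareOnCpu(float x)   // an ordinary CPU function; __host__ is the default
{
    return x * x;
}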

How do we write the data transfers between CPU and GPU?

    1. cudaMalloc(): allocates global memory on the device;
    2. cudaFree(): frees device memory;
    3. cudaMemcpy(void *dst, const void *src, size_t nbytes, enum cudaMemcpyKind direction): copies data, where enum cudaMemcpyKind specifies the direction of the transfer (a usage sketch follows this list):

      1. cudaMemcpyHostToDevice (CPU to GPU)
      2. cudaMemcpyDeviceToHost (GPU to CPU)
      3. cudaMemcpyDeviceToDevice (GPU to GPU)
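
A minimal round trip through these three calls, with the error checking that the full example further below omits for brevity (the names h_data/d_data and the size are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include "cuda_runtime.h"

int main()
{
    const int n = 1024;                              // illustrative element count
    int *h_data = (int*)malloc(n * sizeof(int));     // host buffer (left uninitialized here)
    int *d_data = NULL;

    cudaError_t err = cudaMalloc((void**)&d_data, n * sizeof(int)); // device global memory
    if (err != cudaSuccess)
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));

    cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice); // CPU -> GPU
    // ... launch kernels that operate on d_data ...
    cudaMemcpy(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost); // GPU -> CPU

    cudaFree(d_data);
    free(h_data);
    return 0;
}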

How do we express the thread-organization model in code?

The dim3 type describes how the grid and the thread blocks are organized: a grid is typically laid out in one or two dimensions (three-dimensional grids are also possible, and the sample code below uses them), and a block in one, two, or three dimensions. For example:

dim3 DimGrid(100, 50);  // 5000 thread blocks, organized as 100 x 50
dim3 DimBlock(4, 8, 8); // 256 threads per block, organized as 4 x 8 x 8
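
The matching kernel launch then passes both variables between the triple angle brackets (KernelFunc is a placeholder name from the table above):

KernelFunc<<<DimGrid, DimBlock>>>(/* arguments */);  // 5000 blocks x 256 threads = 1,280,000 threads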

Quick Reference: Thread Indexing

The complete program below exercises the common combinations of grid and block dimensionality. Every kernel computes the same linear global index i = blockId x threadsPerBlock + threadIdInBlock, each specialized to its own grid/block shape.

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>
#include <stdlib.h>
#include <iostream>

using namespace std;

// thread 1D
__global__ void testThread1(int *c, const int *a, const int *b)
{
    int i = threadIdx.x;
    c[i] = b[i] - a[i];
}

// thread 2D
__global__ void testThread2(int *c, const int *a, const int *b)
{
    int i = threadIdx.x + threadIdx.y*blockDim.x;
    c[i] = b[i] - a[i];
}

// thread 3D
__global__ void testThread3(int *c, const int *a, const int *b)
{
    int i = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    c[i] = b[i] - a[i];
}

// block 1D
__global__ void testBlock1(int *c, const int *a, const int *b)
{
    int i = blockIdx.x;
    c[i] = b[i] - a[i];
}

// block 2D
__global__ void testBlock2(int *c, const int *a, const int *b)
{
    int i = blockIdx.x + blockIdx.y*gridDim.x;
    c[i] = b[i] - a[i];
}

// block 3D
__global__ void testBlock3(int *c, const int *a, const int *b)
{
    int i = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    c[i] = b[i] - a[i];
}

// block-thread 1D-1D
__global__ void testBlockThread1(int *c, const int *a, const int *b)
{
    int i = threadIdx.x + blockDim.x*blockIdx.x;
    c[i] = b[i] - a[i];
}

// block-thread 1D-2D
__global__ void testBlockThread2(int *c, const int *a, const int *b)
{
    int threadId_2D = threadIdx.x + threadIdx.y*blockDim.x;
    int i = threadId_2D + (blockDim.x*blockDim.y)*blockIdx.x;
    c[i] = b[i] - a[i];
}

// block-thread 1D-3D
__global__ void testBlockThread3(int *c, const int *a, const int *b)
{
    int threadId_3D = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    int i = threadId_3D + (blockDim.x*blockDim.y*blockDim.z)*blockIdx.x;
    c[i] = b[i] - a[i];
}

// block-thread 2D-1D
__global__ void testBlockThread4(int *c, const int *a, const int *b)
{
    int blockId_2D = blockIdx.x + blockIdx.y*gridDim.x;
    int i = threadIdx.x + blockDim.x*blockId_2D;
    c[i] = b[i] - a[i];
}

// block-thread 3D-1D
__global__ void testBlockThread5(int *c, const int *a, const int *b)
{
    int blockId_3D = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    int i = threadIdx.x + blockDim.x*blockId_3D;
    c[i] = b[i] - a[i];
}

// block-thread 2D-2D
__global__ void testBlockThread6(int *c, const int *a, const int *b)
{
    int threadId_2D = threadIdx.x + threadIdx.y*blockDim.x;
    int blockId_2D = blockIdx.x + blockIdx.y*gridDim.x;
    int i = threadId_2D + (blockDim.x*blockDim.y)*blockId_2D;
    c[i] = b[i] - a[i];
}

// block-thread 2D-3D
__global__ void testBlockThread7(int *c, const int *a, const int *b)
{
    int threadId_3D = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    int blockId_2D = blockIdx.x + blockIdx.y*gridDim.x;
    int i = threadId_3D + (blockDim.x*blockDim.y*blockDim.z)*blockId_2D;
    c[i] = b[i] - a[i];
}

// block-thread 3D-2D
__global__ void testBlockThread8(int *c, const int *a, const int *b)
{
    int threadId_2D = threadIdx.x + threadIdx.y*blockDim.x;
    int blockId_3D = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    int i = threadId_2D + (blockDim.x*blockDim.y)*blockId_3D;
    c[i] = b[i] - a[i];
}

// block-thread 3D-3D
__global__ void testBlockThread9(int *c, const int *a, const int *b)
{
    int threadId_3D = threadIdx.x + threadIdx.y*blockDim.x + threadIdx.z*blockDim.x*blockDim.y;
    int blockId_3D = blockIdx.x + blockIdx.y*gridDim.x + blockIdx.z*gridDim.x*gridDim.y;
    int i = threadId_3D + (blockDim.x*blockDim.y*blockDim.z)*blockId_3D;
    c[i] = b[i] - a[i];
}


void addWithCuda(int *c, const int *a, const int *b, unsigned int size)
{
    int *dev_a = 0;
    int *dev_b = 0;
    int *dev_c = 0;

    cudaSetDevice(0);                                  // run on GPU 0

    // allocate device (global) memory
    cudaMalloc((void**)&dev_c, size * sizeof(int));
    cudaMalloc((void**)&dev_a, size * sizeof(int));
    cudaMalloc((void**)&dev_b, size * sizeof(int));

    // copy inputs host -> device
    cudaMemcpy(dev_a, a, size * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, size * sizeof(int), cudaMemcpyHostToDevice);

    // Uncomment exactly one launch below; each covers all `size` elements
    // with a different grid/block shape (size is assumed to be 1000).

    //testThread1<<<1, size>>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 5; s.y = 5; s.z = 1;
    //testThread2<<<1, s>>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 10; s.y = 5; s.z = 2;
    //testThread3<<<1, s>>>(dev_c, dev_a, dev_b);

    //testBlock1<<<size, 1>>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 5; s.y = 5; s.z = 1;
    //testBlock2<<<s, 1>>>(dev_c, dev_a, dev_b);

    //uint3 s; s.x = size / 10; s.y = 5; s.z = 2;
    //testBlock3<<<s, 1>>>(dev_c, dev_a, dev_b);

    //testBlockThread1<<<size / 10, 10>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 1; s1.z = 1;
    //uint3 s2; s2.x = 10; s2.y = 10; s2.z = 1;
    //testBlockThread2<<<s1, s2>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 1; s1.z = 1;
    //uint3 s2; s2.x = 10; s2.y = 5; s2.z = 2;
    //testBlockThread3<<<s1, s2>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = 10; s1.y = 10; s1.z = 1;
    //uint3 s2; s2.x = size / 100; s2.y = 1; s2.z = 1;
    //testBlockThread4<<<s1, s2>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = 10; s1.y = 5; s1.z = 2;
    //uint3 s2; s2.x = size / 100; s2.y = 1; s2.z = 1;
    //testBlockThread5<<<s1, s2>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 10; s1.z = 1;
    //uint3 s2; s2.x = 5; s2.y = 2; s2.z = 1;
    //testBlockThread6<<<s1, s2>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = size / 100; s1.y = 5; s1.z = 1;
    //uint3 s2; s2.x = 5; s2.y = 2; s2.z = 2;
    //testBlockThread7<<<s1, s2>>>(dev_c, dev_a, dev_b);

    //uint3 s1; s1.x = 5; s1.y = 2; s1.z = 2;
    //uint3 s2; s2.x = size / 100; s2.y = 5; s2.z = 1;
    //testBlockThread8<<<s1, s2>>>(dev_c, dev_a, dev_b);

    // 3D grid of 5*2*2 = 20 blocks, 3D blocks of (size/200)*5*2 = 50 threads:
    // 20 * 50 = 1000 threads in total, one per element
    uint3 s1; s1.x = 5; s1.y = 2; s1.z = 2;
    uint3 s2; s2.x = size / 200; s2.y = 5; s2.z = 2;
    testBlockThread9<<<s1, s2>>>(dev_c, dev_a, dev_b);

    // copy the result device -> host
    cudaMemcpy(c, dev_c, size * sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);

    // report any launch error instead of silently discarding it
    cudaError_t status = cudaGetLastError();
    if (status != cudaSuccess)
        printf("CUDA error: %s\n", cudaGetErrorString(status));
}


int main()
{
    const int n = 1000;

    int *a = new int[n];
    int *b = new int[n];
    int *c = new int[n];    // CPU reference result
    int *cc = new int[n];   // GPU result

    for (int i = 0; i < n; i++)
    {
        a[i] = rand() % 100;
        b[i] = rand() % 100;
        c[i] = b[i] - a[i]; // compute the reference on the CPU
    }

    addWithCuda(cc, a, b, n);

    // dump both results side by side for inspection
    FILE *fp = fopen("out.txt", "w");
    for (int i = 0; i < n; i++)
        fprintf(fp, "%d %d\n", c[i], cc[i]);
    fclose(fp);

    // verify that the GPU result matches the CPU reference
    bool flag = true;
    for (int i = 0; i < n; i++)
    {
        if (c[i] != cc[i])
        {
            flag = false;
            break;
        }
    }

    if (flag == false)
        printf("no pass\n");
    else
        printf("pass\n");

    cudaDeviceReset();

    delete[] a;
    delete[] b;
    delete[] c;
    delete[] cc;

    getchar();              // keep the console window open
    return 0;
}
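
To build and run the program above with NVIDIA's toolchain, a typical invocation (assuming the source file is saved as test.cu) is: nvcc test.cu -o test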
