CUDA loop in MATLAB

I have been playing around with parallelization, using both OpenACC and OpenMP in Fortran. I am now trying to do the same in MATLAB. I find it very interesting that it seems to be very hard to parallelize a loop on the GPU in MATLAB. Apparently the only way to do it is by using the arrayfun function. But I might be wrong.

At a conceptual level, I am wondering why GPU usage in MATLAB is not as straightforward as in Fortran. At a more practical level, I am wondering how to use the GPU on the simple code below.

Below, I share three codes and their benchmarks, plus a fourth variant that I am asking about:

  1. Fortran OpenMP code
  2. Fortran ACC code
  3. Matlab parfor code
  4. Matlab CUDA (?): this is the one I don't know how to write.

Fortran OpenMP:

program rbc

 use omp_lib     ! For timing
 use tools
 implicit none

 real, parameter :: beta = 0.984, eta = 2, alpha = 0.35, delta = 0.01, &
                     rho = 0.95, sigma = 0.005, zmin=-0.0480384, zmax=0.0480384;
integer, parameter :: nz = 4, nk=4800;
real :: zgrid(nz), kgrid(nk), t_tran_z(nz,nz), tran_z(nz,nz);
real :: kmax, kmin, tol, dif, c(nk), r(nk), w(nk);
real, dimension(nk,nz) :: v=0., v0=0., ev=0., c0=0.;
integer :: i, iz, ik, cnt;
logical :: ind(nk);
real(kind=8) :: start, finish   ! For timing
real :: tmpmax, c1  


call omp_set_num_threads(12)


!Grid for productivity z

! [1 x 4] grid of values for z
call linspace(zmin,zmax,nz,zgrid)
zgrid = exp(zgrid)
! [4 x 4] Markov transition matrix of z
tran_z(1,1) = 0.996757
tran_z(1,2) = 0.00324265
tran_z(1,3) = 0
tran_z(1,4) = 0
tran_z(2,1) = 0.000385933
tran_z(2,2) = 0.998441
tran_z(2,3) = 0.00117336
tran_z(2,4) = 0
tran_z(3,1) = 0
tran_z(3,2) = 0.00117336
tran_z(3,3) = 0.998441
tran_z(3,4) = 0.000385933
tran_z(4,1) = 0
tran_z(4,2) = 0
tran_z(4,3) = 0.00324265
tran_z(4,4) = 0.996757

! Grid for capital k

kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)**(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)**(1/(alpha-1));

! [1 x 4800] grid of possible values of k
call linspace(kmin, kmax, nk, kgrid)


! Compute initial wealth c0(k,z)
do iz=1,nz
  c0(:,iz) = zgrid(iz)*kgrid**alpha + (1-delta)*kgrid;
end do

dif = 10000
tol = 1e-8
cnt = 1

do while(dif>tol)
    !$omp parallel do default(shared) private(ik,iz,i,tmpmax,c1)    
    do ik=1,nk;        
          do iz = 1,nz;
          tmpmax = -huge(0.)

          do i = 1,nk
             c1 = c0(ik,iz) - kgrid(i)
             if(c1<0) exit
             c1 = c1**(1-eta)/(1-eta)+ev(i,iz)
             if(tmpmax<c1) tmpmax = c1
          end do
          v(ik,iz) = tmpmax
       end do

    end do
    !$omp end parallel do
    ev = beta*matmul(v,tran_z)
    dif = maxval(abs(v-v0))
    v0 = v
    if(mod(cnt,1)==0) write(*,*) cnt, ':', dif
        cnt = cnt+1
end do


end program

Fortran ACC:

Just replace the mainloop syntax on the above code with:

do while(dif>tol)
    !$acc kernels
    !$acc loop gang
        do ik=1,nk;        
         !$acc loop gang
          do iz = 1,nz;
          tmpmax = -huge(0.)

          do i = 1,nk
             c1 = c0(ik,iz) - kgrid(i)
             if(c1<0) exit
             c1 = c1**(1-eta)/(1-eta)+ev(i,iz)
             if(tmpmax<c1) tmpmax = c1
          end do
          v(ik,iz) = tmpmax
       end do

    end do

    !$acc end kernels
    ev = beta*matmul(v,tran_z)
    dif = maxval(abs(v-v0))
    v0 = v
    if(mod(cnt,1)==0) write(*,*) cnt, ':', dif
        cnt = cnt+1
end do

Matlab parfor: (I know the code below could be made faster by using vectorized syntax, but the whole point of the exercise is to compare loop speeds).

tic;
beta = 0.984; 
eta = 2; 
alpha = 0.35; 
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;

v=zeros(nk,nz); 
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);

%Grid for productivity z

%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;

% Grid for capital k

kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));

% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);

% Compute initial wealth c0(k,z)
for iz=1:nz
  c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end 

dif = 10000;
tol = 1e-8;
cnt = 1;

while dif>tol

    parfor ik=1:nk
          for iz = 1:nz
          tmpmax = -intmax;

          for i = 1:nk
             c1 = c0(ik,iz) - kgrid(i);
             if (c1<0) 
                 continue
             end 
             c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
             if tmpmax<c1 
                 tmpmax = c1;
             end
          end 
          v(ik,iz) = tmpmax;
          end 

    end 
    ev = beta*v*tran_z;
    dif = max(max(abs(v-v0)));
    v0 = v;
    if mod(cnt,1)==0 
        fprintf('%1.5f :  %1.5f \n', [cnt dif])
    end
        cnt = cnt+1;
end 


toc

Matlab CUDA:

This is the part I have no clue how to code. Is using arrayfun the only way of doing this? In Fortran it is so simple to move from OpenMP to OpenACC. Isn't there an easy way in MATLAB to go from parfor to GPU loops?

The time comparison between codes:

Fortran OpenMP: 83.1 seconds 
Fortran ACC:    2.4 seconds
Matlab parfor:  1182 seconds

As a final remark, I should say that the codes above solve a simple Real Business Cycle model and were written based on this.
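For reference, the recursion that all three loop versions above iterate is, in the notation of the code:

v(k,z) = max over k' in kgrid with c >= 0 of   c^(1-eta)/(1-eta) + ev(k',z)
ev     = beta*v*tran_z                         (the discounted continuation value)
c      = z*k^alpha + (1-delta)*k - k'          (z*k^alpha + (1-delta)*k is precomputed as c0)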

Precautionary answered 19/11, 2018 at 11:50 Comment(7)
As for "easy" ways, there's the GPU coder, but it requires a toolbox. Other than that, there's an example in the MATLAB documentation that compares these things.Secondhand
You are thinking about it wrongly. Vectorized syntax is MATLAB's way of optimizing a loop - whether it is on the CPU or the GPU. So the easiest way to use the GPU is to use gpuArray() to put everything on the GPU and then use the classical vectorized syntax. arrayfun is the more tedious alternative if you cannot write it in a vectorized manner.Fraenum
You could not mix MATLAB and CUDA until R2018b. The newer, very specialized toolbox allows writing kernels in MATLAB, but prior releases only allowed very specific functions to be run using CUDA. I personally write the code in CUDA and mex it.Gilbert
I am using 2018b. Any reference that I can check?Precautionary
There is no point doing performance checks with Matlab while refusing to use the most basic optimization which is vectorization. There is a big overhead with each operation, incurred at each iteration with a for loop but only once with a vectorized op (which hides a loop anyway).Sheldonshelduck
@Precautionary yes, the official documentation: uk.mathworks.com/campaigns/offers/… . In my opinion, unless you need to play with the way memory is allocated on the host (see pinned memory) the best way is writing CUDA code and mex-ing it.Gilbert
Perfect. You replied on the wrong topic but that solved it :)Precautionary

Matlab Coder

First, as Dev-iL already mentioned, you can use GPU Coder.

It would only require minor changes to your code (I use R2019a):

function cdapted()
beta = 0.984; 
eta = 2; 
alpha = 0.35; 
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;

v=zeros(nk,nz); 
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);

%Grid for productivity z

%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;

% Grid for capital k

kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));

% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);

% Compute initial wealth c0(k,z)
for iz=1:nz
  c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end 

dif = 10000;
tol = 1e-8;
cnt = 1;

while dif>tol

    for ik=1:nk
        for iz = 1:nz
            tmpmax = double(intmin);

            for i = 1:nk
                c1 = c0(ik,iz) - kgrid(i);
                if (c1<0)
                    continue
                end
                c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
                if tmpmax<c1
                    tmpmax = c1;
                end
            end
            v(ik,iz) = tmpmax;
        end

    end
    ev = beta*v*tran_z;
    dif = max(max(abs(v-v0)));
    v0 = v;
    % I've commented out fprintf because double2single cannot handle it 
    % (could be manually uncommented in the converted version if needed)
    % ------------
    % if mod(cnt,1)==0       
    %     fprintf('%1.5f :  %1.5f \n', cnt, dif);
    % end
    cnt = cnt+1;
end
end

The script to build this is:

% unload mex files
clear mex


%% Build for gpu, float64

% Produces ".\codegen\mex\cdapted" folder and "cdapted_mex.mexw64"
cfg = coder.gpuConfig('mex');
codegen -config cfg cdapted

% benchmark it (~7.14s on my GTX1080Ti)
timeit(@() cdapted_mex,0)


%% Build for gpu, float32:

% Produces ".\codegen\cdapted\single" folder
scfg = coder.config('single');
codegen -double2single scfg cdapted

% Produces ".\codegen\mex\cdapted_single" folder and "cdapted_single_mex.mexw64"
cfg = coder.gpuConfig('mex');
codegen -config cfg .\codegen\cdapted\single\cdapted_single.m

% benchmark it (~2.09s on my GTX1080Ti)
timeit(@() cdapted_single_mex,0)

So, if your Fortran binary is using float32 precision (I suspect so), this Matlab Coder result is on par with it. That does not mean that either is highly efficient, though. The code generated by Matlab Coder is still far from efficient, and it does not fully utilize the GPU (even the TDP is at ~50%).


Vectorization and gpuArray

Next, I agree with user10597469 and Nicky Mattsson that your Matlab code does not look like normal "native" vectorized Matlab code.

There are many things to adjust (and arrayfun is hardly better than for). First, let's remove the for loops:

function vectorized1()
t_tot = tic();
beta = 0.984; 
eta = 2; 
alpha = 0.35; 
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;

v=zeros(nk,nz); 
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);

%Grid for productivity z

%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;

% Grid for capital k

kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));

% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);

% Compute initial wealth c0(k,z)
for iz=1:nz
  c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end 

dif = 10000;
tol = 0.4; 
tol = 1e-8;
cnt = 1;

t_acc=zeros([1,2]);

while dif>tol

    %% orig-noparfor:
    t=tic();
    for ik=1:nk
          for iz = 1:nz
          tmpmax = -intmax;

          for i = 1:nk
             c1 = c0(ik,iz) - kgrid(i);
             if (c1<0) 
                 continue
             end 
             c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
             if tmpmax<c1 
                 tmpmax = c1;
             end
          end 
          v(ik,iz) = tmpmax;
          end 

    end 
    t_acc(1) = t_acc(1) + toc(t);    

    %% better:
    t=tic();          

    kgrid_ = reshape(kgrid,[1 1 numel(kgrid)]);
    c1_ = c0 - kgrid_;    
    c1_x = c1_.^(1-eta)/(1-eta);

    c2 = c1_x + reshape(ev', [1 nz nk]);
    c2(c1_<0) = -Inf;
    v_ = max(c2,[],3);        
    t_acc(2) = t_acc(2) + toc(t);    

    %% compare
    assert(isequal(v_,v));   
    v=v_;

    %% other
    ev = beta*v*tran_z;
    dif = max(max(abs(v-v0)));
    v0 = v;
    if mod(cnt,1)==0
        fprintf('%1.5f :  %1.5f \n', cnt, dif);
    end
    cnt = cnt+1;
end
disp(t_acc);
disp(toc(t_tot));
end

% toc result:
%   tol = 0.4  ->   12 iterations :: t_acc = [  17.7       9.8]
%   tol = 1e-8 -> 1124 iterations :: t_acc = [1758.6     972.0]
% 
% (all 1124 iterations) with commented-out orig :: t_tot = 931.7443

Now it is strikingly evident that the most computationally intensive calculations inside the while loop (e.g. ^(1-eta)/(1-eta)) actually produce constants that could be pre-calculated. Once we fix that, the result is already a bit faster than the original parfor-based version (on my 2x E5-2630v3):

function vectorized2()
t_tot = tic();
beta = 0.984; 
eta = 2; 
alpha = 0.35; 
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;

v=zeros(nk,nz); 
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);

%Grid for productivity z

%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;

% Grid for capital k

kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));

% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);

% Compute initial wealth c0(k,z)
for iz=1:nz
  c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end 

dif = 10000;
tol = 0.4; 
tol = 1e-8;
cnt = 1;

t_acc=zeros([1,2]);

%% constants:
kgrid_ = reshape(kgrid,[1 1 numel(kgrid)]);
c1_ = c0 - kgrid_;
mask=zeros(size(c1_));
mask(c1_<0)=-Inf;
c1_x = c1_.^(1-eta)/(1-eta);

while dif>tol

    %% orig:
    t=tic();
    parfor ik=1:nk
          for iz = 1:nz
          tmpmax = -intmax;

          for i = 1:nk
             c1 = c0(ik,iz) - kgrid(i);
             if (c1<0) 
                 continue
             end 
             c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
             if tmpmax<c1 
                 tmpmax = c1;
             end
          end 
          v(ik,iz) = tmpmax;
          end 

    end 
    t_acc(1) = t_acc(1) + toc(t);

    %% better:
    t=tic();       
    c2 = c1_x + reshape(ev', [1 nz nk]);
    c2 = c2 + mask;
    v_ = max(c2,[],3);        
    t_acc(2) = t_acc(2) + toc(t);    

    %% compare
    assert(isequal(v_,v));
    v=v_;

    %% other
    ev = beta*v*tran_z;
    dif = max(max(abs(v-v0)));
    v0 = v;
    if mod(cnt,1)==0
        fprintf('%1.5f :  %1.5f \n', cnt, dif);
    end
    cnt = cnt+1;
end
disp(t_acc);
disp(toc(t_tot));
end

% toc result:
%   tol = 0.4  ->   12 iterations :: t_acc = [  2.4    1.7] 
%   tol = 1e-8 -> 1124 iterations :: t_acc = [188.3  115.9]
% 
% (all 1124 iterations) with commented-out orig :: t_tot = 117.6217

This vectorized code is still inefficient (e.g. reshape(ev',...), which eats ~60% of the time, could easily be avoided by re-ordering dimensions; a sketch of that re-ordering appears after the gpuArray results below), but it is somewhat suitable for gpuArray():

function vectorized3g()
t0 = tic();
beta = 0.984; 
eta = 2; 
alpha = 0.35; 
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;

v=zeros(nk,nz); 
v0=zeros(nk,nz);
ev=gpuArray(zeros(nk,nz,'single'));
c0=zeros(nk,nz);

%Grid for productivity z

%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;

% Grid for capital k

kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));

% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);

% Compute initial wealth c0(k,z)
for iz=1:nz
  c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end 

dif = 10000;
tol = 1e-8;
cnt = 1;

t_acc=zeros([1,2]);

%% constants:
kgrid_ = reshape(kgrid,[1 1 numel(kgrid)]);
c1_ = c0 - kgrid_;
mask=gpuArray(zeros(size(c1_),'single'));
mask(c1_<0)=-Inf;
c1_x = c1_.^(1-eta)/(1-eta);

c1_x = gpuArray(single(c1_x));


while dif>tol

    %% orig:
%     t=tic();
%     parfor ik=1:nk
%           for iz = 1:nz
%           tmpmax = -intmax;
% 
%           for i = 1:nk
%              c1 = c0(ik,iz) - kgrid(i);
%              if (c1<0) 
%                  continue
%              end 
%              c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
%              if tmpmax<c1 
%                  tmpmax = c1;
%              end
%           end 
%           v(ik,iz) = tmpmax;
%           end 
% 
%     end 
%     t_acc(1) = t_acc(1) + toc(t);

    %% better:
    t=tic();       
    c2 = c1_x + reshape(ev', [1 nz nk]);
    c2 = c2 + mask;
    v_ = max(c2,[],3);        
    t_acc(2) = t_acc(2) + toc(t);    

    %% compare
    %  assert(isequal(v_,v));        
    v = v_;

    %% other
    ev = beta*v*tran_z;
    dif = max(max(abs(v-v0)));
    v0 = v;
    if mod(cnt,1)==0
        fprintf('%1.5f :  %1.5f \n', cnt, dif);
    end
    cnt = cnt+1;
end
disp(t_acc);
disp(toc(t0));
end

% (all 849 iterations) with commented-out orig :: t_tot = 14.9040

This ~15 sec result is ~7x worse than the ~2 sec we get from the float32 Matlab Coder build. But this option requires fewer toolboxes. In practice, gpuArray is most handy when you start from writing "native Matlab code", including interactive use.
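For completeness, here is one way the re-ordering mentioned above could look. This is a sketch only, written against the CPU/double variables of vectorized2 and not benchmarked here; for the gpuArray variant the same gpuArray(single(...)) casts as above would apply.

% Sketch: same computation with dimensions ordered [i, ik, iz], so that ev
% only needs a cheap metadata reshape (no transpose) inside the loop.

% constants, outside the while loop:
c1a   = reshape(c0, [1 nk nz]) - kgrid(:);    % c1a(i,ik,iz) = c0(ik,iz) - kgrid(i)
maska = zeros(size(c1a));
maska(c1a < 0) = -Inf;
c1a_x = c1a.^(1-eta)/(1-eta) + maska;

% inside the while loop, replacing the reshape(ev',...) lines:
c2a = c1a_x + reshape(ev, [nk 1 nz]);         % adds ev(i,iz), broadcast over the ik dimension
v_  = reshape(max(c2a, [], 1), [nk nz]);      % v_(ik,iz) = max over i

Also, when timing the gpuArray versions, gputimeit (or wait(gpuDevice) before toc) gives more reliable numbers than plain tic/toc, because gpuArray operations execute asynchronously.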

Finally, if you build this final vectorized version with Matlab Coder (you would have to do some trivial adjustments), it won't be faster than the first one. It would be 2x-3x slower.

Marley answered 9/7, 2019 at 19:51 Comment(2)
I am accepting this as it seems to be a fantastic answer. I will test it later tomorrow. Thank you. Eyeballing the code, I do not understand how the code identifies which loop can be parallelized. Or is none of them, and the gains come from the vectorization? Could you post the exact timings that each code takes to run?Precautionary
@phdstudent, OK, I've just added more timings. Vectorized code contains no loops (except for while), but the underlying matrix-manipulation libraries use both SIMD and multi-threading.Marley

So, this bit is what is going to mess you up on this project. MATLAB stands for Matrix Laboratory. Vectors and matrices are kind of its thing. The number 1 way to optimize anything in MATLAB is to vectorize it. For this reason, when using performance-enhancing tools like CUDA, MATLAB assumes that you are going to vectorize your inputs if possible. Given the primacy of vectorizing inputs in the MATLAB coding style, it is not a fair comparison to assess its performance using only loops. It would be like assessing the performance of C++ while refusing to use pointers. If you want to use CUDA with MATLAB, the main way to go about it is to vectorize your inputs and use gpuArray. Honestly, I haven't looked too hard at your code, but it kind of looks like your inputs are already mostly vectorized. You may be able to get away with something as simple as gpuArray(1:nk) or kgrid = gpuArray(linspace(...)).
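To make that concrete, here is a minimal sketch of the two usual routes (element-wise work only, not your full model; the random single-precision data is just a stand-in for illustration):

nk = 4800; nz = 4; eta = 2;
c0g = rand(nk, nz, 'single', 'gpuArray');   % stand-in data, already on the GPU

% Route 1: ordinary vectorized syntax on gpuArray data runs on the GPU
u1 = c0g.^(1-eta) / (1-eta);

% Route 2: arrayfun compiles the element-wise function into a single GPU kernel
u2 = arrayfun(@(c) c^(1-eta) / (1-eta), c0g);

max(abs(u1(:) - u2(:)))                     % both routes agree
u  = gather(u1);                            % copy the result back to host memory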

Loutitia answered 3/12, 2018 at 22:42 Comment(0)