An index-free way to take the gradient of a neural network | regularize

$$ \Vert x \Vert_{0,1} = \sum_{i\in I} 1 + \sum_{i \in I^{c}} \vert x_{i} \vert $$

where $I$ is an index set; for example, $I = \left\lbrace i : \vert x_{i} \vert \leq 1 \right\rbrace$.
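As a sanity check, the definition above can be transcribed directly. Below is a minimal numpy sketch (the function name `norm_01` is my own; the set $I$ and the threshold 1 are taken verbatim from the formula as written):

```python
import numpy as np

def norm_01(x):
    """Mixed 0-1 norm as defined above: indices in I (here |x_i| <= 1,
    as written) each contribute 1; the remaining indices contribute |x_i|."""
    x = np.abs(np.asarray(x, dtype=float))
    in_I = x <= 1.0          # the index set I from the formula
    return in_I.sum() + x[~in_I].sum()

print(norm_01([0.5, 2.0, -3.0]))  # 1 (for 0.5) + 2 + 3 = 6.0
```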

It's been a long time since your post; I hope you'll read this.

Can you explain this de Villalobos rule, or give a link to where it can be found? The Wiki link is empty as far as I can tell.

Thank you.

Cheers, Tobias

PS: Or anyone else? Thanks.

After a bit more searching, I found an implementation which should be correct.

function fx = dxm(f)
fx = zeros(size(f));
fx(:, 2:end-1) = f(:, 2:end-1) - f(:, 1:end-2);
fx(:, 1) = f(:, 1);
fx(:, end) = -f(:, end-1);
end

function fy = dym(f)
fy = zeros(size(f));
fy(2:end-1, :) = f(2:end-1, :) - f(1:end-2, :);
fy(1, :) = f(1, :);
fy(end, :) = -f(end-1, :);
end
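These backward differences can be sanity-checked numerically: `dxm`/`dym` should be the exact negative adjoints of the forward differences `dxp`/`dyp`, i.e. ⟨grad u, v⟩ = −⟨u, div v⟩ for arbitrary u and v. Below is a numpy translation (my own sketch; the forward differences mirror the `dxp`/`dyp` from the code further down the thread):

```python
import numpy as np

def dxp(f):  # forward difference in x, last column replicated (so 0 there)
    return np.hstack([f[:, 1:], f[:, -1:]]) - f

def dyp(f):  # forward difference in y, last row replicated
    return np.vstack([f[1:, :], f[-1:, :]]) - f

def dxm(f):  # backward difference in x with the corrected boundary columns
    fx = np.zeros_like(f)
    fx[:, 1:-1] = f[:, 1:-1] - f[:, :-2]
    fx[:, 0] = f[:, 0]
    fx[:, -1] = -f[:, -2]
    return fx

def dym(f):  # backward difference in y with the corrected boundary rows
    fy = np.zeros_like(f)
    fy[1:-1, :] = f[1:-1, :] - f[:-2, :]
    fy[0, :] = f[0, :]
    fy[-1, :] = -f[-2, :]
    return fy

rng = np.random.default_rng(0)
u = rng.standard_normal((16, 16))
v1, v2 = rng.standard_normal((2, 16, 16))

# <grad u, v> should equal -<u, div v> up to floating-point rounding
lhs = np.sum(dxp(u) * v1 + dyp(u) * v2)
rhs = -np.sum(u * (dxm(v1) + dym(v2)))
print(abs(lhs - rhs))  # essentially zero
```

With these boundary rows the identity holds to machine precision for any inputs, which is exactly the property the test code below is probing.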

function fy = dym(f)

fy = zeros(size(f));

fy(2:end-1,:) = f(2:end-1,:) – f(1:end-2,:);

fy(1,:) = f(1,:);

fy(end,:) = -f(end-1,:);

end

Thanks again for your effort.

Following is my code:

function test
u = im2double(imread('imgs/Bishapur.jpg'));
if size(u, 3) == 3
    u = rgb2gray(u);
end
grad = @(u) cat(3, dxp(u), dyp(u));
div = @(V) dxm(V(:,:,1)) + dym(V(:,:,2));
gradu = grad(u);
divv = div(gradu);
diff = sum(gradu(:).*gradu(:)) + sum(u(:).*divv(:));
msg = sprintf('diff: %f', diff);
disp(msg);
end

function fx = dxp(f)
fx = [f(:,2:end) f(:,end)] - f;
end

function fy = dyp(f)
fy = [f(2:end,:); f(end,:)] - f;
end

function fx = dxm(f)
fx = f - [f(:,1) f(:,1:end-1)];
end

function fy = dym(f)
fy = f - [f(1,:); f(1:end-1,:)];
end
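For what it's worth, the nonzero `diff` comes from the boundary handling: with `dxm(f) = f - [f(:,1) f(:,1:end-1)]`, the backward difference replicates the first column and so is not the exact negative adjoint of `dxp` at the first and last columns (and likewise for `dym` in y). A numpy sketch of the same test (my own translation, with a random array standing in for the grayscale image) makes the residual visible:

```python
import numpy as np

def dxp(f):  # forward difference in x, last column replicated
    return np.hstack([f[:, 1:], f[:, -1:]]) - f

def dyp(f):  # forward difference in y, last row replicated
    return np.vstack([f[1:, :], f[-1:, :]]) - f

# backward differences as in the code above: first column/row replicated
def dxm(f):
    return f - np.hstack([f[:, :1], f[:, :-1]])

def dym(f):
    return f - np.vstack([f[:1, :], f[:-1, :]])

rng = np.random.default_rng(1)
u = rng.standard_normal((32, 32))  # stand-in for the grayscale image

gx, gy = dxp(u), dyp(u)
divv = dxm(gx) + dym(gy)
# <grad u, grad u> + <u, div grad u>: zero only if div = -grad^T
diff = np.sum(gx * gx + gy * gy) + np.sum(u * divv)
print(diff)  # nonzero: these dxm/dym are not the exact negative adjoints
```

Swapping in the boundary-corrected `dxm`/`dym` from the other comment (interior difference with `fx(:,1) = f(:,1)` and `fx(:,end) = -f(:,end-1)`) drives `diff` to machine precision.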