Constructing fused lasso penalty with cvxpy package in python

The fused lasso (Tibshirani et al., 2005) encourages sparsity of the coefficients and also sparsity of their successive differences.

This is the formula for the loss function and regularization ([screenshot of the formula](https://static.mcmap.net/file/mcmap/ZG-AbGLDKwfjXFPYKmltXRctZS2jaRA/uploads/2019/11/17/967043f94a1382c5e14fdd1eb25bad9e-full.png)):

$$\hat{\beta} = \arg\min_{\beta}\ \sum_{i=1}^{n} \Big(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 + \lambda_1 \sum_{j=1}^{p} |\beta_j| + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|$$
The first term is the squared-error (L2/MSE) loss, the second is an L1 penalty on the coefficients (lasso regularization), and the last term, an L1 penalty on the differences of consecutive coefficients, is the new term introduced in the linked article.

How can I implement this with the cvxpy package? Specifically, how can I express the last term?

There is example code for the Lasso and Ridge penalties separately. I get a general idea of how the code works, but for some functions I don't understand how I'm supposed to decide which one to use. For example, let's compare the Lasso and Ridge penalty code.

# Lasso

import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt

def loss_fn(X, Y, beta):
    # squared L2 norm of the residuals, written with the norm2 atom
    return cp.norm2(cp.matmul(X, beta) - Y)**2

def regularizer(beta):
    # L1 penalty on the coefficients (lasso)
    return cp.norm1(beta)

def objective_fn(X, Y, beta, lambd):
    return loss_fn(X, Y, beta) + lambd * regularizer(beta)

def mse(X, Y, beta):
    return (1.0 / X.shape[0]) * loss_fn(X, Y, beta).value

# Ridge

import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt

def loss_fn(X, Y, beta):
    # the same squared L2 loss, written with the pnorm atom and p=2
    return cp.pnorm(cp.matmul(X, beta) - Y, p=2)**2

def regularizer(beta):
    # squared L2 penalty on the coefficients (ridge)
    return cp.pnorm(beta, p=2)**2

def objective_fn(X, Y, beta, lambd):
    return loss_fn(X, Y, beta) + lambd * regularizer(beta)

def mse(X, Y, beta):
    return (1.0 / X.shape[0]) * loss_fn(X, Y, beta).value

For both of them we create a loss function, but they are created with different commands: cp.norm2(...)**2 in the Lasso example versus cp.pnorm(..., p=2)**2 in the Ridge example. How do I decide which one to use?

Kancler answered 17/11, 2019 at 19:39 Comment(1)
interesting question! - Daniel

From what I understand, you're having trouble expressing the penalty on the differences of consecutive coefficients, $\sum_k |\beta_k - \beta_{k-1}|$.

To do this, you just need to define the first-difference matrix. Then you can apply it to beta, which gives you the vector of consecutive differences. From there, you can use the standard L1 norm atom cvx.norm1.

import numpy as np
import cvxpy as cvx

def diff_op(n):
    # (n - 1) x n first-difference matrix: (mat @ beta)[k] = beta[k + 1] - beta[k]
    mat = np.zeros((n - 1, n))
    mat[range(n - 1), range(1, n)] = 1  # +1 just above the diagonal
    mat -= np.eye(n - 1, n)             # -1 on the diagonal
    return mat

def difference_pen(beta):
    return cvx.norm1(diff_op(beta.shape[0]) @ beta)
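Putting it together, here is a minimal sketch of the full fused-lasso problem built from this penalty. The toy data and the weights lambd1 and lambd2 are placeholders of mine, not from the original post:

np.random.seed(0)
n, p = 100, 20
X = np.random.randn(n, p)  # toy design matrix
Y = np.random.randn(n)     # toy response

beta = cvx.Variable(p)
lambd1, lambd2 = 0.1, 0.1  # assumed penalty weights

# squared L2 loss + lasso penalty + fused (difference) penalty
problem = cvx.Problem(cvx.Minimize(
    cvx.sum_squares(X @ beta - Y)
    + lambd1 * cvx.norm1(beta)
    + lambd2 * difference_pen(beta)
))
problem.solve()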
Northerly answered 1/12, 2019 at 9:22 Comment(0)

The difference matrix is not necessary. You can use

cp.norm1(beta[1:] - beta[:-1])

for the last term.
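For completeness, the whole objective in this slicing style might look like the following sketch, reusing the X, Y, and beta names from the question's snippets; lambd1 and lambd2 are assumed penalty weights, not from the original posts:

objective = cp.Minimize(
    cp.sum_squares(X @ beta - Y)               # squared L2 loss
    + lambd1 * cp.norm1(beta)                  # lasso penalty
    + lambd2 * cp.norm1(beta[1:] - beta[:-1])  # fused (difference) penalty
)
problem = cp.Problem(objective)
problem.solve()

cvxpy also provides a total-variation atom, cp.tv, which computes the same sum of absolute consecutive differences for a one-dimensional expression.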

Athwartships answered 21/1, 2023 at 2:32 Comment(0)
