Activation functions

statinf.ml.activations.logit(x, weights, bias=0, tensor=False)

Logistic function

Parameters
  • x (numpy.array) – Input value

  • weights (numpy.array) – Vector of weights \(\beta\)

  • bias (numpy.array) – Bias vector \(\epsilon\), defaults to 0.

  • tensor (bool, optional) – Perform the computation as a Theano tensor, defaults to False

Returns

Logistic transformation: \(logit(x, \beta) = \dfrac{1}{1 + e^{-(x \beta + \epsilon)}}\), which reduces to \(\dfrac{1}{1 + e^{-x \beta}}\) with the default bias of 0

Return type

float (or a theano tensor when tensor=True)
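
Example (a minimal sketch; the sample input and weights are illustrative, and reading \(x \beta\) as the dot product x @ beta is an assumption based on the formula above):

    import numpy as np
    from statinf.ml.activations import logit

    x = np.array([1., 2.])
    beta = np.array([0.5, -0.25])  # illustrative weights, not from the source
    # Assuming the linear index is x @ beta: here x @ beta = 0,
    # so the logistic transform should return 1 / (1 + e^0) = 0.5
    print(logit(x, weights=beta))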

statinf.ml.activations.relu(x)

Rectified Linear Unit (ReLU) activation function

Parameters

x (float or numpy.array) – Input value

Returns

Activated value: \(relu(x) = \max(0, x)\)

Return type

float or numpy.array (matches the type of x)
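
Example (a minimal sketch; the sample values are illustrative):

    import numpy as np
    from statinf.ml.activations import relu

    # Elementwise max(0, x): negative inputs clamp to 0, positives pass through
    print(relu(np.array([-2., 0., 3.])))  # expected: [0., 0., 3.]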

statinf.ml.activations.sigmoid(x)

Sigmoid activation function

Parameters

x (float or numpy.array) – Input value

Returns

Sigmoid activated value: \(sigmoid(x) = \dfrac{1}{1 + e^{-x}}\)

Return type

float or numpy.array (matches the type of x)
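
Example (a minimal sketch; the sample values are illustrative):

    import numpy as np
    from statinf.ml.activations import sigmoid

    # sigmoid(0) = 0.5 exactly; large-magnitude inputs saturate toward 0 or 1
    print(sigmoid(0.))                       # expected: 0.5
    print(sigmoid(np.array([-5., 0., 5.])))  # all values lie in (0, 1)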

statinf.ml.activations.softplus(x)

Softplus activation function

Parameters

x (float or numpy.array) – Input value

Returns

Activated value: \(softplus(x) = \log(1 + e^{x})\)

Return type

float or numpy.array (matches the type of x)
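
Example (a minimal sketch; the sample value is illustrative). Softplus is a smooth approximation of ReLU, and at 0 it equals \(\log(2) \approx 0.6931\):

    import numpy as np
    from statinf.ml.activations import softplus

    print(softplus(0.))            # expected: log(2) ≈ 0.6931
    print(np.log(1 + np.exp(0.)))  # the same value computed directly with numpy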

statinf.ml.activations.tanh(x)

Hyperbolic tangent activation function

Parameters

x (float or numpy.array) – Input value

Returns

Activated value: \(tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}\)

Return type

float or numpy.array (matches the type of x)
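
Example (a minimal sketch; the sample values are illustrative):

    import numpy as np
    from statinf.ml.activations import tanh

    # tanh is odd and bounded in (-1, 1); tanh(0) = 0
    print(tanh(np.array([-2., 0., 2.])))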