Introduction to TensorFlow >>> Activation Functions

Question 1

True or False: Non-linearity helps train your model faster and with more accuracy, without losing important information.

1 / 1 point

False
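As background for this question, here is a minimal sketch (my own illustration, not part of the quiz) of why non-linearity matters at all: stacking purely linear layers collapses into a single linear layer, so depth adds nothing until a non-linear activation is inserted. The weight values are arbitrary.

```python
import numpy as np

# Two arbitrary "layer" weight matrices and an input (illustrative values).
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
W2 = np.array([[2.0, 0.0], [-1.0, 1.0]])
x = np.array([3.0, 2.0])

# Composing two linear layers ...
two_layer = W2 @ (W1 @ x)
# ... is exactly one linear layer with weight W2 @ W1.
one_layer = (W2 @ W1) @ x
assert np.allclose(two_layer, one_layer)

# Inserting a ReLU between the layers breaks the collapse:
relu = lambda z: np.maximum(z, 0.0)
nonlinear = W2 @ relu(W1 @ x)
print(nonlinear)  # differs from the purely linear result
```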

=========================================

Question 3

During the training process, each additional layer in your network can successively reduce signal vs. noise. How can we fix this?

1 / 1 point

Use non-saturating, linear activation functions.

Use non-saturating, nonlinear activation functions such as ReLUs.

Sigmoid or tanh activation functions.

None of the above
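To illustrate the "signal vs. noise" point above (my own sketch, not from the quiz): saturating activations such as sigmoid have near-zero gradients for large inputs, which shrinks the training signal layer after layer, while ReLU keeps a gradient of 1 for any positive input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of sigmoid: s(x) * (1 - s(x)); tiny when |x| is large.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return np.where(np.asarray(x) > 0, 1.0, 0.0)

x = 10.0
print(sigmoid_grad(x))  # ~4.5e-5: the sigmoid has saturated
print(relu_grad(x))     # 1.0: ReLU does not saturate for positive x
```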

=========================================

Question 5

How can we stop ReLU layers from dying?

1 / 1 point

Smaller batch sizes

Batch normalization

Weight regularization

Lower your learning rates
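As a sketch of what "dying" means here (my own illustration, assuming a single ReLU unit with a bias): one overly large update can push the pre-activation negative for every input, after which the unit outputs 0, its gradient is 0, and it never recovers. A smaller learning rate keeps updates small enough to avoid this.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)   # standard-normal inputs (illustrative)
w, b = 1.0, 0.0

def relu_outputs(w, b):
    return np.maximum(w * x + b, 0.0)

# One huge (bad) bias update, e.g. from a too-high learning rate:
dead_outputs = relu_outputs(w, b - 50.0)
print(np.count_nonzero(dead_outputs))   # 0: the unit is dead

# A small update leaves the unit alive:
alive_outputs = relu_outputs(w, b - 0.1)
print(np.count_nonzero(alive_outputs))  # > 0: still firing for some inputs
```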

=========================================

Question 2

Which activation function is linear in the positive domain and 0 in the negative domain?

1 / 1 point

Sigmoid

Tan-h

ReLU

None of the above.
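The function described in the question can be written in one line; a minimal pure-Python sketch:

```python
# ReLU: identity (linear) in the positive domain, 0 in the negative domain.
def relu(x):
    return x if x > 0 else 0.0

print(relu(3.5))   # 3.5: linear for positive inputs
print(relu(-2.0))  # 0.0: zero for negative inputs
```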

=========================================

Question 4

How can we solve the problem called internal covariate shift?
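Batch normalization is the technique usually cited as the fix for internal covariate shift: it normalizes each feature over the mini-batch, then applies a learned scale (gamma) and shift (beta). A minimal numpy sketch (my own illustration; the batch values are arbitrary):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature (column) over the mini-batch ...
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    # ... then apply the learned scale and shift.
    return gamma * x_hat + beta

batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
out = batch_norm(batch)
print(out.mean(axis=0))  # ~[0, 0]: each feature re-centered
print(out.std(axis=0))   # ~[1, 1]: each feature re-scaled
```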