Loss and accuracy are two different metrics for evaluating your model's performance, and they are usually used in different phases.
Loss is often used in the training process to find the "best" parameter values for your model (e.g. the weights in a neural network). It is the quantity you try to minimize during training by updating the weights.
Accuracy is more from an applied perspective. Once you have found the optimized parameters above, you use this metric to evaluate how accurate your model's predictions are compared to the true data.
Let us use a toy classification example. You want to predict gender from a person's weight and height. You have 3 data points, as follows (0 stands for male, 1 stands for female):
y1 = 0, x1_w = 50kg, x1_h = 160cm;
y2 = 0, x2_w = 60kg, x2_h = 170cm;
y3 = 1, x3_w = 55kg, x3_h = 175cm;
You use a simple logistic regression model, that is

y_hat = 1 / (1 + exp(-(b1 * x_w + b2 * x_h)))
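If it helps, here is a minimal Python sketch of that model (the function name `predict` is just illustrative, not from any particular library):

```python
import math

def predict(b1, b2, x_w, x_h):
    """Logistic regression: probability that the label is 1 (female)."""
    z = b1 * x_w + b2 * x_h            # linear combination of weight and height
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps z into (0, 1)
```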
How do you find b1 and b2? You first define a loss, then use an optimization method to minimize the loss iteratively by updating b1 and b2.
In our example, a typical loss for this binary classification problem is the cross-entropy (log) loss:

loss = -sum_i [ y_i * log(y_hat_i) + (1 - y_i) * log(1 - y_hat_i) ]
We don't know what b1 and b2 should be. Let us make a random guess, say b1 = 0.1 and b2 = -0.03. Then what is our loss now?

y1_hat = 1 / (1 + exp(-(0.1*50 - 0.03*160))) = 1 / (1 + exp(-0.2)) ≈ 0.55
y2_hat = 1 / (1 + exp(-(0.1*60 - 0.03*170))) = 1 / (1 + exp(-0.9)) ≈ 0.71
y3_hat = 1 / (1 + exp(-(0.1*55 - 0.03*175))) = 1 / (1 + exp(-0.25)) ≈ 0.56

so the loss is

loss = -[ log(1 - 0.55) + log(1 - 0.71) + log(0.56) ] ≈ 2.6
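You can check these numbers yourself with a few lines of plain Python (the variable names are just illustrative):

```python
import math

data = [(0, 50, 160), (0, 60, 170), (1, 55, 175)]  # (y, weight, height)
b1, b2 = 0.1, -0.03                                # our random guess

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

loss = 0.0
for y, x_w, x_h in data:
    y_hat = sigmoid(b1 * x_w + b2 * x_h)
    # cross-entropy contribution of this data point
    loss -= y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat)

print(loss)  # about 2.6
```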
Then your learning algorithm (e.g. gradient descent) will find a way to update b1 and b2 to decrease the loss.
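For concreteness, gradient descent on this loss repeatedly moves b1 and b2 against the gradient of the cross-entropy. A rough sketch is below; the learning rate and number of steps are arbitrary choices made for illustration, not values from the example:

```python
import math

data = [(0, 50, 160), (0, 60, 170), (1, 55, 175)]  # (y, weight, height)
b1, b2 = 0.1, -0.03   # starting guess
lr = 1e-5             # learning rate (arbitrary small value)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for step in range(1000):
    g1 = g2 = 0.0
    for y, x_w, x_h in data:
        y_hat = sigmoid(b1 * x_w + b2 * x_h)
        # gradient of the cross-entropy loss w.r.t. b1 and b2
        g1 += (y_hat - y) * x_w
        g2 += (y_hat - y) * x_h
    b1 -= lr * g1   # move the parameters against the gradient
    b2 -= lr * g2

print(b1, b2)
```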
What if b1 = 0.1 and b2 = -0.03 happen to be the final b1 and b2 (the output of gradient descent)? What is the accuracy then?
Let's say that if y_hat >= 0.5 we predict female (1), otherwise male (0). Since y1_hat, y2_hat and y3_hat are all above 0.5, our algorithm predicts y1 = 1, y2 = 1 and y3 = 1. What is our accuracy? We make wrong predictions on y1 and y2 and a correct one on y3, so our accuracy is 1/3 ≈ 33.33%.
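The same accuracy check in code (again just a sketch with illustrative names):

```python
import math

data = [(0, 50, 160), (0, 60, 170), (1, 55, 175)]  # (y, weight, height)
b1, b2 = 0.1, -0.03   # the "final" parameters from above

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

correct = 0
for y, x_w, x_h in data:
    y_hat = sigmoid(b1 * x_w + b2 * x_h)
    pred = 1 if y_hat >= 0.5 else 0   # threshold at 0.5
    correct += (pred == y)

print(correct / len(data))  # 1/3, i.e. about 33.33%
```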
PS: In Amir's answer, back-propagation is described as an optimization method for neural networks. I would rather treat it as a way to compute the gradients of the loss with respect to the weights in a neural network; common optimization methods for neural networks are gradient descent and Adam.