I suppose there's no harm in combining the two losses, as they are quite "orthogonal" to each other: while cross-entropy treats every pixel as an independent prediction, the Dice score looks at the resulting mask in a more "holistic" way.
Moreover, considering that these two losses yield significantly different masks, each with its own merits and errors, I suppose combining their complementary information should be beneficial.
Make sure you weight the losses such that the gradients from the two terms are on roughly the same scale, so you benefit equally from both.
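To make the idea concrete, here's a minimal framework-agnostic sketch in NumPy for the binary case (the function names, the `w_ce`/`w_dice` weights, and the smoothing `eps` are my own choices, not from any particular library):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(logits, targets, eps=1e-7):
    # Per-pixel binary cross-entropy: treats every pixel independently.
    p = np.clip(sigmoid(logits), eps, 1.0 - eps)
    return -np.mean(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))

def soft_dice_loss(logits, targets, eps=1e-6):
    # Soft Dice loss: scores the predicted mask as a whole.
    p = sigmoid(logits)
    intersection = (p * targets).sum()
    return 1.0 - (2.0 * intersection + eps) / (p.sum() + targets.sum() + eps)

def combined_loss(logits, targets, w_ce=1.0, w_dice=1.0):
    # Weighted sum; tune w_ce / w_dice so the two gradient
    # contributions end up on a similar scale.
    return w_ce * bce_loss(logits, targets) + w_dice * soft_dice_loss(logits, targets)
```

In a real training loop you'd implement the same thing with your framework's differentiable ops (e.g. `binary_cross_entropy_with_logits` plus a soft-Dice term in PyTorch), but the weighting logic is identical.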
If you make it work, I'd be interested to hear about your experiments and conclusions ;)