The easiest way to think of a Data Lake is as one large container, like a real lake with rivers flowing into it: you never know where the rivers are coming from (or what "type" of water they carry).
A Data Lake can store many different types of data (structured data, unstructured data, log files, real-time streams, images, etc.) and blend them together, correlating many different data types. The key point is that we are moving from traditional approaches to modern tools (like Hadoop, Cassandra, NoSQL databases, etc.).
A whole bunch of data is being created that we might get some value out of if we could analyze it. We can use the cloud to collect that data into one store and analyze it there. In Azure, that store is Azure Data Lake Store: we can take all of that data and store it in Azure Data Lake Store, which is like a cloud-based file service or file system that is practically unlimited in size.
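Since the store behaves like a hierarchical file system, the mental model is ordinary folders and files holding heterogeneous raw data side by side. Here is a minimal local sketch of that idea, using a temporary directory to stand in for the store; the folder layout and file names are made-up examples, not Azure APIs.

```python
# Local stand-in for a Data Lake Store: a plain directory tree.
# The paths and file contents below are hypothetical examples.
import tempfile
from pathlib import Path

lake = Path(tempfile.mkdtemp())

# Land heterogeneous raw data side by side, organized by source and date.
(lake / "logs/2024-01-01").mkdir(parents=True)
(lake / "logs/2024-01-01/web.log").write_text("GET /index 200\n")
(lake / "sensors/2024-01-01").mkdir(parents=True)
(lake / "sensors/2024-01-01/readings.csv").write_text("ts,temp\n0,21.5\n")

# Analysis jobs later enumerate whatever landed, regardless of format.
files = sorted(p.relative_to(lake).as_posix()
               for p in lake.rglob("*") if p.is_file())
print(files)
```

The point of the sketch is that nothing enforces a schema at write time; structure is applied later, when the data is read.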
We can run services on top of the data in that store. You could use Hadoop or Spark in an HDInsight cluster, or you could use the Azure Data Lake Analytics service, which is a complement to Azure Data Lake Store. That service lets you run jobs that effectively query the data stored in Azure Data Lake Store and generate output results.
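Conceptually, such a job reads raw records from the store, applies some structure, and aggregates them into an output. This is a minimal in-memory sketch of that pattern in plain Python; the log format and field names are invented for illustration, and this is not the actual Data Lake Analytics job language.

```python
# Hypothetical raw log lines as they might sit in the lake: date,event,user
raw_logs = [
    "2024-01-01,search,alice",
    "2024-01-01,click,alice",
    "2024-01-02,search,bob",
]

def run_job(lines):
    """Count events per event type, like a simple aggregation query."""
    counts = {}
    for line in lines:
        _, event, _ = line.split(",")
        counts[event] = counts.get(event, 0) + 1
    return counts

print(run_job(raw_logs))
```

A real job would read its input from files in the store and write its results back to the store, but the shape of the work (parse, then aggregate) is the same.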
Azure Data Lake Store is where we store all the data we want to analyze. Azure Data Lake Analytics is a service where we can run jobs that query that data to generate some sort of output for analysis. Hadoop is a specific technology (an open-source distributed data processing cluster technology). You can implement a data lake using Hadoop or using a different tool.
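The processing style Hadoop popularized is MapReduce: a map phase that emits key/value pairs from raw records, and a reduce phase that groups and combines them. Here is an in-memory sketch of that pattern in plain Python, not a real distributed cluster; the word-count example is the classic illustration, not anything specific to Azure.

```python
# MapReduce pattern, simulated in a single process.
from collections import defaultdict

def map_phase(records):
    # Emit a (word, 1) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Group pairs by key and sum the values.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

docs = ["data lake", "lake of data", "data"]
print(reduce_phase(map_phase(docs)))
```

On a real cluster the map and reduce phases run in parallel across many machines, with the framework shuffling pairs by key between them; the logic per phase stays this simple.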