I've been looking at MapReduce for a while, and it seems to be a very good way to implement fault-tolerant distributed computing. I've read a lot of papers and articles on the topic, installed Hadoop on a cluster of virtual machines, and run some very interesting tests. I'm fairly confident I understand the Map and Reduce steps.
But here is my problem: I can't figure out how it helps with HTTP server log analysis.
My understanding is that big companies (Facebook, for instance) run MapReduce over their HTTP logs to speed up the extraction of audience statistics. The company I work for, while smaller than Facebook, has a large volume of web logs to process every day (100 GB, growing between 5 and 10 percent every month). Right now we process these logs on a single server, and it works just fine. But distributing the processing immediately comes to mind as an optimization that will soon become useful.
Here are the questions I can't answer right now; any help would be greatly appreciated:
- Can the MapReduce concept really be applied to web log analysis? (I've put a rough sketch of what I have in mind after the list.)
- Is MapReduce the smartest way of doing it?
- How would you split the web log files between the various computing instances?
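To make the first question more concrete, here is a rough sketch of what I imagine such a job could look like: a mapper that emits (URL, 1) for every line of an Apache-style access log, and a reducer that sums the counts per URL. The class names are made up, and I'm assuming the requested path is the 7th whitespace-separated field of each log line. I haven't actually run this, so it may well be naive or plain wrong:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class UrlHitCount {

    // Map step: each input record is one log line; emit (requested URL, 1).
    public static class HitMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text url = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Assumes Apache common/combined log format, where the request
            // path is the 7th whitespace-separated field ("GET /page HTTP/1.1").
            String[] fields = line.toString().split(" ");
            if (fields.length > 6) {
                url.set(fields[6]);
                context.write(url, ONE);
            }
        }
    }

    // Reduce step: all counts for a given URL arrive together; sum them.
    public static class HitReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text url, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable c : counts) {
                total += c.get();
            }
            context.write(url, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "url hit count");
        job.setJarByClass(UrlHitCount.class);
        job.setMapperClass(HitMapper.class);
        job.setCombinerClass(HitReducer.class); // sums are safe to pre-aggregate
        job.setReducerClass(HitReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If that is roughly the right shape, my guess for the third question is that HDFS already splits the input files into blocks and hands one split to each map task, so I wouldn't have to split the logs myself, but I'd appreciate confirmation.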
Thank you.
Nicolas