Is it possible to develop a powerful web search engine using Erlang, Mnesia & Yaws?
I am thinking of developing a web search engine using Erlang, Mnesia & Yaws. Is it possible to build a powerful and very fast web search engine with this software? What would it take to accomplish this, and where do I start?

Crib asked 12/10, 2008 at 18:17 Comment(2)
Are you asking this question, thinking someone has already done it and has the experience to tell you? :)Suicide
Robert P! It's nothing like that ;). Actually, I am using these languages for the development of a product in my company. Basically, this framework (Erlang, Mnesia & Yaws) is not very well known here in India. But I was thinking of developing the above search engine using these languages and don't know how it would perform.Crib
Erlang can make the most powerful web crawler today. Let me take you through my simple crawler.

Step 1. I create a simple parallelism module, which I call mapreduce

-module(mapreduce).
-export([compute/2]).
%%=====================================================================
%% usage example
%% Module = string
%% Function = tokens
%% List_of_arg_lists = [["file\r\nfile","\r\n"],["muzaaya_joshua","_"]]
%% Ans = [["file","file"],["muzaaya","joshua"]]
%% Job being done by two processes
%% i.e no. of processes spawned = length(List_of_arg_lists)

compute({Module,Function},List_of_arg_lists)->
    S = self(),
    Ref = erlang:make_ref(),
    PJob = fun(Arg_list) -> erlang:apply(Module,Function,Arg_list) end,
    Spawn_job = fun(Arg_list) -> 
                    spawn(fun() -> execute(S,Ref,PJob,Arg_list) end)
                end,
    lists:foreach(Spawn_job,List_of_arg_lists),
    gather(length(List_of_arg_lists),Ref,[]).
gather(0, _, L) -> L;
gather(N, Ref, L) ->
    receive
        {Ref, {'EXIT', _}} -> gather(N-1, Ref, L);
        {Ref, Result} -> gather(N-1, Ref, [Result|L])
    end.

execute(Parent, Ref, Fun, Arg) ->
    Parent ! {Ref, (catch Fun(Arg))}.
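
A quick check in the shell, matching the usage example in the comments above (since results are gathered in arrival order, the ordering of the answer list is not guaranteed):

1> mapreduce:compute({string,tokens},
                     [["file\r\nfile","\r\n"],["muzaaya_joshua","_"]]).
[["muzaaya","joshua"],["file","file"]]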

Step 2. The HTTP Client

One would normally use either Erlang's built-in inets httpc module or ibrowse. However, for memory management and speed (keeping the memory footprint as low as possible), a good Erlang programmer may choose to use curl. Applying os:cmd/1 to the curl command line returns the output directly to the calling Erlang function. Better still, make curl write its output to files, and have another process in our application read and parse those files:

Command: curl "http://www.erlang.org" -o "/downloaded_sites/erlang/file1.html"
In Erlang:
os:cmd("curl \"http://www.erlang.org\" -o \"/downloaded_sites/erlang/file1.html\"").
As shown above, you can spawn many such processes; just remember to escape the URL as well as the output file path when you execute the command. Meanwhile, a separate process watches the directory of downloaded pages: it reads and parses each file, and may then delete it after parsing, save it in a different location, or, even better, archive it using the zip module:
folder_check()->
    spawn(fun() -> check_and_report() end),
    ok.

-define(CHECK_INTERVAL,5).

check_and_report()->
    %% avoid using filelib:list_dir/1 here;
    %% if files are many, it builds the whole
    %% listing in memory !!!
    %% (the shell pipeline assumes the cwd is
    %% the download directory)
    case os:cmd("ls | wc -l") of
        "0\n" -> ok;
        "0" -> ok;
        _ -> ?MODULE:new_files_found()
    end,
    timer:sleep(timer:seconds(?CHECK_INTERVAL)),
    %% keep checking
    check_and_report().

new_files_found()->
    %% inform our parser to pick files;
    %% once it parses a file, it has to
    %% delete it or save it somewhere else
    %% (?MODULE is assumed to be a registered gen_server)
    gen_server:cast(?MODULE,files_detected).
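
On the receiving side, a minimal sketch of the parser's gen_server callback, assuming the server is registered under the module name (?DOWNLOAD_DIR and parse_file/1 are hypothetical names; file:list_dir/1 is fine for small batches, but a huge directory would need the streaming approach hinted at in the comments above):

handle_cast(files_detected, State) ->
    {ok, Files} = file:list_dir(?DOWNLOAD_DIR),
    lists:foreach(
        fun(F) ->
            Path = filename:join(?DOWNLOAD_DIR, F),
            parse_file(Path),   %% hypothetical: parse and index the page
            file:delete(Path)   %% or move/zip it instead
        end, Files),
    {noreply, State}.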

Step 3. The HTML parser.
Better to use mochiweb's HTML parser together with XPath. This will help you parse out all your favourite HTML tags and extract the contents, and then you are good to go. In the examples below, I focused only on the keywords, description and title in the markup.
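
For illustration, a minimal sketch of such an extraction, assuming mochiweb is on the code path (this is a plain walk over the tree returned by mochiweb_html:parse/1, without the XPath layer; the spider_bot calls shown below are the answerer's own module):

parse_page(Html) ->
    Tree = mochiweb_html:parse(Html),
    walk(Tree, []).

%% collect the <title> text and all <meta name=... content=...> pairs
walk({<<"title">>, _Attrs, [Text|_]}, Acc) when is_binary(Text) ->
    [{title, Text} | Acc];
walk({<<"meta">>, Attrs, _Children}, Acc) ->
    case {proplists:get_value(<<"name">>, Attrs),
          proplists:get_value(<<"content">>, Attrs)} of
        {Name, Content} when Name =/= undefined, Content =/= undefined ->
            [{Name, Content} | Acc];
        _ -> Acc
    end;
walk({_Tag, _Attrs, Children}, Acc) ->
    lists:foldl(fun walk/2, Acc, Children);
walk(_TextOrComment, Acc) ->
    Acc.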


Module Testing in shell...awesome results!!!

2> spider_bot:parse_url("http://erlang.org").
[[[],[],
  {"keywords",
   "erlang, functional, programming, fault-tolerant, distributed, multi-platform, portable, software, multi-core, smp, concurrency "},
  {"description","open-source erlang official website"}],
 {title,"erlang programming language, official website"}]

3> spider_bot:parse_url("http://facebook.com").
[[{"description",
   " facebook is a social utility that connects people with friends and others who work, study and live around them. people use facebook to keep up with friends, upload an unlimited number of photos, post links
 and videos, and learn more about the people they meet."},
  {"robots","noodp,noydir"},
    [],[],[],[]],
 {title,"incompatible browser | facebook"}]

4> spider_bot:parse_url("http://python.org").
[[{"description",
   "      home page for python, an interpreted, interactive, object-oriented, extensible\n      programming language. it provides an extraordinary combination of clarity and\n      versatility, and is free and
comprehensively ported."},
  {"keywords",
   "python programming language object oriented web free source"},
  []],
 {title,"python programming language – official website"}]

5> spider_bot:parse_url("http://www.house.gov/").
[[[],[],[],
  {"description",
   "home page of the united states house of representatives"},
  {"description",
   "home page of the united states house of representatives"},
  [],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
  [],[],[]|...],
 {title,"united states house of representatives, 111th congress, 2nd session"}]


You can now see that we can index the pages against their keywords, together with a good schedule of page revisits. Another challenge was how to make the crawler move around the entire web, from domain to domain, but that one is easy: parse each HTML file for its href tags, make the HTML parser extract all of them, and then use a few regular expressions here and there to pick out just the links under a given domain.
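The link-extraction step can reuse the same kind of tree walk; a minimal sketch (extract_links/1 and hrefs/2 are illustrative names, and the final per-domain filtering with re:run/3 is left out):

extract_links(Html) ->
    Tree = mochiweb_html:parse(Html),
    lists:reverse(hrefs(Tree, [])).

hrefs({<<"a">>, Attrs, Children}, Acc0) ->
    Acc = case proplists:get_value(<<"href">>, Attrs) of
              undefined -> Acc0;
              Href -> [binary_to_list(Href) | Acc0]
          end,
    lists:foldl(fun hrefs/2, Acc, Children);
hrefs({_Tag, _Attrs, Children}, Acc) ->
    lists:foldl(fun hrefs/2, Acc, Children);
hrefs(_Other, Acc) ->
    Acc.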

Running the crawler

7> spider_connect:conn2("http://erlang.org").        

        Links: ["http://www.erlang.org/index.html",
                "http://www.erlang.org/rss.xml",
                "http://erlang.org/index.html","http://erlang.org/about.html",
                "http://erlang.org/download.html",
                "http://erlang.org/links.html","http://erlang.org/faq.html",
                "http://erlang.org/eep.html",
                "http://erlang.org/starting.html",
                "http://erlang.org/doc.html",
                "http://erlang.org/examples.html",
                "http://erlang.org/user.html",
                "http://erlang.org/mirrors.html",
                "http://www.pragprog.com/titles/jaerlang/programming-erlang",
                "http://oreilly.com/catalog/9780596518189",
                "http://erlang.org/download.html",
                "http://www.erlang-factory.com/conference/ErlangUserConference2010/speakers",
                "http://erlang.org/download/otp_src_R14B.readme",
                "http://erlang.org/download.html",
                "https://www.erlang-factory.com/conference/ErlangUserConference2010/register",
                "http://www.erlang-factory.com/conference/ErlangUserConference2010/submit_talk",
                "http://www.erlang.org/workshop/2010/",
                "http://erlangcamp.com","http://manning.com/logan",
                "http://erlangcamp.com","http://twitter.com/erlangcamp",
                "http://www.erlang-factory.com/conference/London2010/speakers/joearmstrong/",
                "http://www.erlang-factory.com/conference/London2010/speakers/RobertVirding/",
                "http://www.erlang-factory.com/conference/London2010/speakers/MartinOdersky/",
                "http://www.erlang-factory.com/",
                "http://erlang.org/download/otp_src_R14A.readme",
                "http://erlang.org/download.html",
                "http://www.erlang-factory.com/conference/London2010",
                "http://github.com/erlang/otp",
                "http://erlang.org/download.html",
                "http://erlang.org/doc/man/erl_nif.html",
                "http://github.com/erlang/otp",
                "http://erlang.org/download.html",
                "http://www.erlang-factory.com/conference/ErlangUserConference2009",
                "http://erlang.org/doc/efficiency_guide/drivers.html",
                "http://erlang.org/download.html",
                "http://erlang.org/workshop/2009/index.html",
                "http://groups.google.com/group/erlang-programming",
                "http://www.erlang.org/eeps/eep-0010.html",
                "http://erlang.org/download/otp_src_R13B.readme",
                "http://erlang.org/download.html",
                "http://oreilly.com/catalog/9780596518189",
                "http://www.erlang-factory.com",
                "http://www.manning.com/logan",
                "http://www.erlang.se/euc/08/index.html",
                "http://erlang.org/download/otp_src_R12B-5.readme",
                "http://erlang.org/download.html",
                "http://erlang.org/workshop/2008/index.html",
                "http://www.erlang-exchange.com",
                "http://erlang.org/doc/highlights.html",
                "http://www.erlang.se/euc/07/",
                "http://www.erlang.se/workshop/2007/",
                "http://erlang.org/eep.html",
                "http://erlang.org/download/otp_src_R11B-5.readme",
                "http://pragmaticprogrammer.com/titles/jaerlang/index.html",
                "http://erlang.org/project/test_server",
                "http://erlang.org/download-stats/",
                "http://erlang.org/user.html#smtp_client-1.0",
                "http://erlang.org/user.html#xmlrpc-1.13",
                "http://erlang.org/EPLICENSE",
                "http://erlang.org/project/megaco/",
                "http://www.erlang-consulting.com/training_fs.html",
                "http://erlang.org/old_news.html"]
ok
Storage: this is one of the most important concepts for a search engine. It is a big mistake to store search-engine data in an RDBMS like MySQL, Oracle, MS SQL, etc.: such systems are thoroughly complex, and the applications that interface with them employ heuristic algorithms. This brings us to key-value stores, of which my two favourites are Couchbase Server and Riak; both are great distributed stores. Another important parameter is caching. Caching is attained using, say, Memcached, which both of the storage systems mentioned above support. Storage systems for search engines ought to be schemaless DBMSs that focus on availability rather than consistency. Read more on search engines here: http://en.wikipedia.org/wiki/Web_search_engine
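
With Riak, for example, storing a parsed page under its URL could look roughly like this; a sketch using the riakc Erlang client, assuming a local node on the default protocol-buffers port (the bucket name and stored metadata are illustrative):

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% key the page by its URL; store the parsed metadata as the value
Meta = [{title, "erlang programming language, official website"},
        {description, "open-source erlang official website"}],
Obj = riakc_obj:new(<<"pages">>, <<"http://erlang.org">>, term_to_binary(Meta)),
ok = riakc_pb_socket:put(Pid, Obj).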
Selfabasement answered 23/10, 2010 at 14:17 Comment(5)
In the code above, I used the re module (regular expressions), but there is now a better way of parsing HTML, using the XPath implementation for Erlang built on Mochiweb. Follow this: ppolv.wordpress.com/2008/05/09/…Selfabasement
@MuzaayaJoshua Thank you so much for this answer! I wrote a web spider in PHP a couple of years ago. However, it has grown far beyond what it was originally intended to be, and the server resources it now needs to run correctly are simply unrealistic. In house we are considering reengineering it in Erlang or C, but we are very interested in Erlang because of its performance and scalability benefits. Do you have any comparisons of how well this performs against the same spider in other languages? Did you try it in other languages? If so, how did it compare?Tush
Also, in your above code you have a note about %% if files are many, memory!!! What kind of memory usage are you experiencing? My spider finds and processes about 350,000 links a day. Would that be considered 'many', or are you spidering a larger number? A smaller one?Tush
Wow! Yours is actually much better. Are you using Mochiweb XPath to parse the files?Selfabasement
I guess it also depends on the bandwidth available to the HTTP client. Anyway, let's continue to refine it. Some day, someone will build a smart search engine with Erlang and probably Couchbase.Selfabasement
As far as I know, Powerset's natural language processing search engine was developed using Erlang.

Did you look at CouchDB (which is written in Erlang as well) as a possible tool to help you solve a few problems along the way?

Raina answered 12/10, 2008 at 18:38 Comment(0)
I would recommend CouchDB instead of Mnesia.

YAWS is pretty good. You should also consider MochiWeb.

You won't go wrong with Erlang.

Gouveia answered 12/11, 2008 at 11:34 Comment(3)
By the way, Bjnortier! Mnesia has map-reduce: the functions lists:map/2 and lists:foldl/3 do exactly what map and reduce do, respectively. For more information, you can refer to gotapi.com/erlang.Crib
Granted. But CouchDB has incremental map-reduce, i.e. it only recalculates the map and reduce outputs when the documents have actually changed.Gouveia
The statement that Mnesia is intended to be a memory-resident DB means that Mnesia performs best when in RAM. However, we have used Mnesia as a disc-only database and our applications run just fine. Have you heard of something called fragmentation? Well, Mnesia is excellent at it: when tables are fragmented, those limitations are gone. Try it yourself: a Mnesia DB with several tables can be populated with terabytes of data when the tables are fragmented, yet with negligible loss of lookup speed. Concurrency is everything, and Mnesia is divine at concurrency.Selfabasement
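For reference, fragmentation is enabled at table-creation time via frag_properties, and fragmented tables are then accessed through the mnesia_frag activity module. A minimal sketch (the page table and its attributes are made up):

mnesia:create_table(page,
    [{attributes, [url, keywords]},
     {frag_properties, [{n_fragments, 32},
                        {n_disc_only_copies, 1},
                        {node_pool, [node()]}]}]),
%% all reads and writes must go through mnesia_frag
mnesia:activity(transaction,
                fun() ->
                    mnesia:write({page, "http://erlang.org", "erlang, functional"})
                end,
                [], mnesia_frag).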
In the 'rdbms' contrib, there is an implementation of the Porter Stemming Algorithm. It was never integrated into 'rdbms', so it's basically just sitting out there. We have used it internally, and it worked quite well, at least for datasets that weren't huge (I haven't tested it on huge data volumes).

The relevant modules are:

rdbms_wsearch.erl
rdbms_wsearch_idx.erl
rdbms_wsearch_porter.erl

Then there is, of course, the Disco Map-Reduce framework.

Whether or not you can make the fastest engine out there, I couldn't say. Is there a market for a faster search engine? I've never had problems with the speed of e.g. Google. But a search facility that increased my chances of finding good answers to my questions would interest me.

Sulphurize answered 17/10, 2008 at 19:34 Comment(1)
well "speed" with search engines doesnt really mean the same thing as everything else, since when you query you're reading off of a index and all the building happens periodically... but you know this already Ulf ;)Stacte
