Monitoring URLs with Nagios

I'm trying to monitor actual URLs with Nagios, not just hosts: I operate a shared server with several websites, and I don't think it's enough to monitor only the basic HTTP service. (At the very bottom of this question I include a small explanation of what I'm envisioning.)

(Side note: Nagios is installed and running inside a chroot on a CentOS system. I built Nagios from source and used yum to install all needed dependencies into that chroot.)


I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)

After reviewing this question, which describes practically the same problem I'm having with check_url, I decided to open a new question on the subject because a) I'm not using NRPE with this check, and b) none of the suggestions from that earlier question worked for me. For example...

./check_url some-domain.com; echo $?

returns "0" (which indicates the check was successful)

I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):

#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
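
After making the wrapper executable and pointing the command definition at it, a quick manual run confirms that it actually logs (the wrapper's location here is an assumption; adjust to wherever you saved it):

chmod +x /usr/local/nagios/libexec/debug_check_url
/usr/local/nagios/libexec/debug_check_url some-domain.com
cat /tmp/debug_check_url_plugin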

Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):

# 'check_url' command definition
define command{
       command_name    check_url
       command_line    $USER1$/check_url $url$
}
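
For what it's worth, $url$ is not one of Nagios' standard macros. The usual pattern is to pass the URL after the "!" in the service's check_command and pick it up as $ARG1$, something like this (untested on my setup):

define command{
       command_name    check_url
       command_line    $USER1$/check_url $ARG1$
}

The matching service then uses check_command check_url!some-domain.com, as in the definition at the bottom of this question.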

(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)


Before publishing this question, however, I decided to take one more shot at finding a solution. I found the check_url_status plugin and decided to try it. To do that, here's what I did:

  1. mkdir /usr/lib/nagios/libexec/check_url_status/
  2. downloaded both check_url_status and utils.pm
  3. Per the user comment/review on the check_url_status plugin page, I changed the lib path to the proper directory, /usr/lib/nagios/libexec/.
  4. Ran the following:

    ./check_url_status -U some-domain.com

When I ran it, I kept getting the following error:

bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.


So at this point, I give up, and have a couple of questions:

  1. Which of these two plugins would you recommend? check_url or check_url_status? (After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
  2. Now, how would I fix my problem with whichever plugin you recommended?

At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg, which is where all of my service definitions live (imagine that!).

The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:

###
# Monitoring Individual URLs...
#
###
define service{
        host_name                       {my-shared-web-server}
        service_description             URL: somedomain.com
        check_command                   check_url!somedomain.com
        max_check_attempts              5
        check_interval                  3
        retry_interval                  1
        check_period                    24x7
        notification_interval           30
        notification_period             workhours
}
Congregation answered 12/2, 2012 at 5:00

I was making things WAY too complicated.

The check_http plugin, which is installed by default with the standard Nagios plugins, can accomplish what I wanted and more. Here's how I did it:

My Service Definition:

define service{
        host_name                       myers
        service_description             URL: my-url.com
        check_command                   check_http_url!http://my-url.com
        max_check_attempts              5
        check_interval                  3
        retry_interval                  1
        check_period                    24x7
        notification_interval           30
        notification_period             workhours
}

My Command Definition:

define command{
        command_name    check_http_url
        command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
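
To sanity-check the command before letting Nagios run it, the plugin can also be run by hand. And because this is a shared server hosting several sites, -H (which sets the Host: header for name-based virtual hosts) can be more appropriate than -I; for example (paths and hostnames assumed):

# The same check Nagios runs, executed by hand:
/usr/local/nagios/libexec/check_http -I myers -u http://my-url.com

# For name-based virtual hosts, have check_http send the right Host: header:
/usr/local/nagios/libexec/check_http -H my-url.com -u /
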
Congregation answered 21/2, 2012 at 3:10

A better way to monitor URLs is WebInject, which can be used together with Nagios.

The problem below is caused by the Perl module utils.pm not being installed where the plugin can find it; try installing it.

bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains:
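
utils.pm ships with the standard Nagios plugins, so one fix is simply to put a copy in one of the directories listed in that @INC error. A rough sketch (the source path below is a placeholder; use wherever find says your copy lives):

# Locate whatever copy of utils.pm is already on the system:
find / -name utils.pm 2>/dev/null

# Copy it into a directory that is in the plugin's @INC:
cp /path/to/utils.pm /usr/lib/nagios/libexec/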

Outpouring answered 25/7, 2012 at 9:49

You can write your own script plugin. It's easy; you only have to check the URL with something like:

`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`

$URL is what you pass to the script as a parameter.

Then check the result: if the code is greater than 399, you have a problem; otherwise everything is OK. Then exit with the right exit code and message for Nagios.
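
A minimal sketch of such a plugin (the script name, argument handling and output messages are my own choices; exit codes follow the Nagios convention of 0 = OK, 2 = CRITICAL):

#!/bin/sh
# check_url_code - rough sketch of a curl-based Nagios check; URL is the first argument
URL=$1

# -I: HEAD request only, -s: silent, -k: ignore certificate errors.
# The second field of the status line is the HTTP response code; tr strips the trailing CR.
CODE=$(curl -Is -k "$URL" | grep HTTP | head -n 1 | cut -d ' ' -f2 | tr -d '\r')

if [ -z "$CODE" ]; then
        echo "CRITICAL: no response from $URL"
        exit 2
elif [ "$CODE" -gt 399 ]; then
        echo "CRITICAL: $URL returned HTTP $CODE"
        exit 2
else
        echo "OK: $URL returned HTTP $CODE"
        exit 0
fi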

Diskson answered 21/1, 2016 at 10:44
