Get the subdomain from a URL

18

114

Getting the subdomain from a URL sounds easy at first.

http://www.domain.example

Scan for the first period then return whatever came after the "http://" ...

Then you remember

http://super.duper.domain.example

Oh. So then you think, okay, find the last period, go back a word and get everything before!

Then you remember

http://super.duper.domain.co.uk

And you're back to square one. Anyone have any great ideas besides storing a list of all TLDs?

Weigela answered 13/11, 2008 at 23:54 Comment(7)
This question has already been asked here: Getting Parts of a URL Edit: A similar question has been asked here : )Weigela
Can you clarify what you want? It seems that you're after the "official" domain part of the URL (i.e. domain.co.uk), regardless of how many DNS labels appear before it?Hindemith
I don't think it's the same question - this seems to be more about the administrative cuts in the domain name which can't be worked out just by looking at the stringHindemith
I agree. Expand more on what your end goal is.Pahlavi
See this answer : https://mcmap.net/q/18750/-get-domain-name-not-subdomain-in-phpDosia
You can use this api geekystats.com/api/v1/urlDetails?url=google.co.uk to get the detailsForelimb
In perl, at least, there is a pretty good module Net::PublicSuffixList that does the work for you.Expel
77

Anyone have any great ideas besides storing a list of all TLDs?

No, because each TLD differs on what counts as a subdomain, second level domain, etc.

Keep in mind that there are top level domains, second level domains, and subdomains. Technically speaking, everything except the TLD is a subdomain.

In the domain.co.uk example, "domain" is a subdomain, "co" is a second level domain, and "uk" is the TLD.

So the question is more complex than it appears at first blush, and the answer depends on how each TLD is managed. You'll need a database of all the TLDs that records how each one is partitioned, and what counts as a second level domain and a subdomain. There aren't too many TLDs, though, so the list is reasonably manageable, but collecting all that information isn't trivial. There may already be such a list available.

Looks like http://publicsuffix.org/ is one such list—all the common suffixes (.com, .co.uk, etc) in a list suitable for searching. It still won't be easy to parse it, but at least you don't have to maintain the list.

A "public suffix" is one under which Internet users can directly register names. Some examples of public suffixes are ".com", ".co.uk" and "pvt.k12.wy.us". The Public Suffix List is a list of all known public suffixes.

The Public Suffix List is an initiative of the Mozilla Foundation. It is available for use in any software, but was originally created to meet the needs of browser manufacturers. It allows browsers to, for example:

  • Avoid privacy-damaging "supercookies" being set for high-level domain name suffixes
  • Highlight the most important part of a domain name in the user interface
  • Accurately sort history entries by site

Looking through the list, you can see it's not a trivial problem. I think a list is the only correct way to accomplish this...
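
For illustration, here is a minimal Python sketch using the third-party tldextract package (my suggestion, not something mentioned in this answer), which bundles a snapshot of the Public Suffix List so you don't have to fetch or parse it yourself:

# pip install tldextract  -- third-party package built on the Public Suffix List
import tldextract

ext = tldextract.extract("http://super.duper.domain.co.uk")
print(ext.subdomain)          # "super.duper"
print(ext.domain)             # "domain"
print(ext.suffix)             # "co.uk"
print(ext.registered_domain)  # "domain.co.uk"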

Nunley answered 14/11, 2008 at 0:21 Comment(6)
Mozilla has code that uses this service. The project was spun off because the original cookie spec had linked TLDs to trust in cookies, but it never worked. The "Cookie Monster" bug was the first problem, and the architecture was never fixed or replaced.Dielectric
The preferred language to solve this in isn't listed, but there is an opensource project that uses this list in C# code here: code.google.com/p/domainname-parserNam
Whether a domain is a "public suffix" or not should really be made available via the DNS protocol itself, perhaps via an EDNS flag. In that case the owner can set it, and there is no need to maintain a separate list.Luau
@PieterEnnes EDNS is for "transport related" flags, and can't be used for content-related metadata. I do agree that this information would be best placed in the DNS itself. ISTR there's plans for a "BoF session" at the upcoming IETF in Vancouver to discuss this.Hindemith
Thanks (+1) for link to http://publicsuffix.org, I've posted some shell and bash function based on your answer: https://mcmap.net/q/18714/-get-the-subdomain-from-a-urlEddy
There is also a Perl module, Net::PublicSuffixList, that manages queries against this list.Expel
25

As Adam says, it's not easy, and currently the only practical way is to use a list.

Even then there are exceptions - for example in .uk there are a handful of domains that are valid immediately at that level that aren't in .co.uk, so those have to be added as exceptions.

This is currently how mainstream browsers do this - it's necessary to ensure that example.co.uk can't set a Cookie for .co.uk which would then be sent to any other website under .co.uk.

The good news is that there's already a list available at http://publicsuffix.org/.

There's also some work in the IETF to create some sort of standard to allow TLDs to declare what their domain structure looks like. This is slightly complicated though by the likes of .uk.com, which is operated as if it were a public suffix, but isn't sold by the .com registry.
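
To illustrate the browser-style check described above, here is a rough Python sketch (my own, not part of this answer) that loads the list and refuses cookie domains that are themselves public suffixes; wildcard and exception rules are ignored for brevity:

from urllib.request import urlopen

PSL_URL = "https://publicsuffix.org/list/public_suffix_list.dat"

def load_public_suffixes():
    """Return the set of public suffixes, skipping comments and blank lines."""
    text = urlopen(PSL_URL).read().decode("utf-8")
    return {
        line.strip()
        for line in text.splitlines()
        if line.strip() and not line.startswith("//")
    }

def cookie_domain_allowed(cookie_domain, suffixes):
    """Reject cookie domains that are exactly a public suffix (e.g. 'co.uk')."""
    return cookie_domain.lstrip(".").lower() not in suffixes

suffixes = load_public_suffixes()
print(cookie_domain_allowed(".example.co.uk", suffixes))  # True  - allowed
print(cookie_domain_allowed(".co.uk", suffixes))          # False - would be a "supercookie"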

Hindemith answered 14/11, 2008 at 0:32 Comment(3)
Eugh, the IETF should know better than to let their URLs die. The draft (last updated in Sept 2012) can now be reached here: tools.ietf.org/html/draft-pettersen-subtld-structureChantilly
The IETF working group on the subject (DBOUND) has been closed.Danieledaniell
Note that since I wrote this the .uk domain registry now permits registrations directly at the second level. This is reflected accordingly in the PSL.Hindemith
23

Publicsuffix.org seems the way to go. There are plenty of implementations out there that parse the contents of the publicsuffix data file easily:

Thermography answered 6/6, 2009 at 23:24 Comment(4)
But remember it is not just a matter of parsing! This list at Publicsuffix.org is an unofficial project, which is incomplete (eu.org is missing, for instance), does NOT automatically reflect the policies of TLDs, and may become unmaintained at any time.Ticonderoga
Also, Ruby: github.com/weppos/public_suffix_serviceBathesda
The list at publicsuffix.org is not "unofficial" any more than anything else Mozilla does. Given that Mozilla, Opera and Chrome use it, it is unlikely to become unmaintained. As for being incomplete, any operator of a domain like eu.org can apply for inclusion if they want to, and they understand the consequences of doing so. If you want a domain added, get the owner to apply. Yes, it does not automatically reflect TLD policy, but then nothing does - there is no programmatic source of that information.Gatian
dagger/android: okhttp will give you topPrivateDomainAstronaut
9

As already said by Adam and John publicsuffix.org is the correct way to go. But, if for any reason you cannot use this approach, here's a heuristic based on an assumption that works for 99% of all domains:

There is one property that distinguishes (not all, but nearly all) "real" domains from subdomains and TLDs and that's the DNS's MX record. You could create an algorithm that searches for this: Remove the parts of the hostname one by one and query the DNS until you find an MX record. Example:

super.duper.domain.co.uk => no MX record, proceed
duper.domain.co.uk       => no MX record, proceed
domain.co.uk             => MX record found! assume that's the domain

Here is an example in PHP:

function getDomainWithMX($url) {
    //parse hostname from URL 
    //http://www.example.co.uk/index.php => www.example.co.uk
    $urlParts = parse_url($url);
    if ($urlParts === false || empty($urlParts["host"])) 
        throw new InvalidArgumentException("Malformed URL");

    //find first partial name with MX record
    $hostnameParts = explode(".", $urlParts["host"]);
    do {
        $hostname = implode(".", $hostnameParts);
        if (checkdnsrr($hostname, "MX")) return $hostname;
    } while (array_shift($hostnameParts) !== null);

    throw new DomainException("No MX record found");
}
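
Here is the same heuristic sketched in Python with the third-party dnspython package (my port, not part of the original answer), with the same caveats the comments below raise about wildcard records, mail-less domains, and TLDs that themselves have MX records:

# pip install dnspython
from urllib.parse import urlsplit
import dns.resolver

def domain_with_mx(url):
    """Strip labels from the left until a name with an MX record is found."""
    host = urlsplit(url).hostname
    if not host:
        raise ValueError("Malformed URL")
    labels = host.split(".")
    while labels:
        candidate = ".".join(labels)
        try:
            dns.resolver.resolve(candidate, "MX")
            return candidate  # MX record found: assume this is the domain
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            labels.pop(0)     # no MX record: drop the leftmost label and retry
    raise LookupError("No MX record found")

print(domain_with_mx("http://www.example.co.uk/index.php"))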
Mariandi answered 4/2, 2013 at 14:33 Comment(5)
Is that what IETF is also suggesting here?Zoonosis
Even publicsuffix.org says (see sixth paragraph) that the proper way to do this is through the DNS, just like you said in your answer!Zoonosis
Except that you can perfectly well have a domain without an MX record. And the algorithm will be fooled by wildcard records. And on the opposite side, you have TLDs that have MX records (like .ai or .ax, to name just a few).Danieledaniell
@patrick: I totally agree; like I said in the introduction this algorithm is not bullet-proof, it's just a heuristic that works surprisingly well.Mariandi
The algorithm should return the shortest hostname having an MX record. There are domains that accept mail at subdomains. Typically mailing lists ([email protected]), but some large organizations also used to have separate servers for some departments.Stratify
2

For a C library (with data table generation in Python), I wrote http://code.google.com/p/domain-registry-provider/ which is both fast and space efficient.

The library uses ~30kB for the data tables and ~10kB for the C code. There is no startup overhead since the tables are constructed at compile time. See http://code.google.com/p/domain-registry-provider/wiki/DesignDoc for more details.

To better understand the table generation code (Python), start here: http://code.google.com/p/domain-registry-provider/source/browse/trunk/src/registry_tables_generator/registry_tables_generator.py

To better understand the C API, see: http://code.google.com/p/domain-registry-provider/source/browse/trunk/src/domain_registry/domain_registry.h

Crabber answered 6/12, 2012 at 18:43 Comment(2)
I also have a C/C++ library that has its own list although it is checked against the publicsuffix.org list as well. It's called the libtld and works under Unix and MS-Windows snapwebsites.org/project/libtldLatecomer
There is an archived copy of DesignDoc. A simplified implementation following the same design (but not requiring Python) is here (it is in the form of a single test.c file)Stratify
2

As already said, the Public Suffix List is the only way to parse a domain correctly. For PHP you can try TLDExtract. Here is sample code:

$extract = new LayerShifter\TLDExtract\Extract();

$result = $extract->parse('super.duper.domain.co.uk');
$result->getSubdomain(); // will return (string) 'super.duper'
$result->getSubdomains(); // will return (array) ['super', 'duper']
$result->getHostname(); // will return (string) 'domain'
$result->getSuffix(); // will return (string) 'co.uk'
Prize answered 23/6, 2016 at 14:53 Comment(0)
1

Just wrote a program for this in Clojure based on the info from publicsuffix.org:

https://github.com/isaksky/url_dom

For example:

(parse "sub1.sub2.domain.co.uk") 
;=> {:public-suffix "co.uk", :domain "domain.co.uk", :rule-used "*.uk"}
Motivation answered 20/3, 2012 at 15:54 Comment(0)
1

Shell and bash versions

In addition to Adam Davis's correct answer, I would like to post my own solution for this operation.

As the list is quite big, here are three of the many solutions I tested...

First prepare your TLD List in that way:

wget -O - https://publicsuffix.org/list/public_suffix_list.dat |
    grep '^[^/]' |
    tac > tld-list.txt

Note: tac reverses the list, to ensure .co.uk is tested before .uk.

shell version

splitDom() {
    local tld
    while read tld;do
        [ -z "${1##*.$tld}" ] &&
            printf "%s : %s\n" $tld ${1%.$tld} && return
    done <tld-list.txt
}

Tests:

splitDom super.duper.domain.co.uk
co.uk : super.duper.domain

splitDom super.duper.domain.com
com : super.duper.domain

bash version

To reduce forks (avoiding the myvar=$(function..) syntax), I prefer to have bash functions set variables instead of dumping output to stdout:

tlds=($(<tld-list.txt))
splitDom() {
    local tld
    local -n result=${2:-domsplit}
    for tld in ${tlds[@]};do
        [ -z "${1##*.$tld}" ] &&
            result=($tld ${1%.$tld}) && return
    done
}

Then:

splitDom super.duper.domain.co.uk myvar
declare -p myvar
declare -a myvar=([0]="co.uk" [1]="super.duper.domain")

splitDom super.duper.domain.com
declare -p domsplit
declare -a domsplit=([0]="com" [1]="super.duper.domain")

Quicker version:

With the same preparation:

declare -A TLDS='()'
while read tld ;do
    if [ "${tld##*.}" = "$tld" ];then
        TLDS[${tld##*.}]+="$tld"
      else
        TLDS[${tld##*.}]+="$tld|"
    fi
done <tld-list.txt

This preparation step is significantly slower, but the splitDom function becomes a lot quicker:

shopt -s extglob 
splitDom() {
    local domsub=${1%%.*(${TLDS[${1##*.}]%\|})}
    local -n result=${2:-domsplit}
    result=(${1#$domsub.} $domsub)
}

Tests on my raspberry-pi:

Both bash scripts were tested with:

for dom in dom.sub.example.{,{co,adm,com}.}{com,ac,de,uk};do
    splitDom $dom myvar
    printf "%-40s %-12s %s\n" $dom ${myvar[@]}
done

The POSIX version was tested with a detailed for loop, but all test scripts produce the same output:

dom.sub.example.com                      com          dom.sub.example
dom.sub.example.ac                       ac           dom.sub.example
dom.sub.example.de                       de           dom.sub.example
dom.sub.example.uk                       uk           dom.sub.example
dom.sub.example.co.com                   co.com       dom.sub.example
dom.sub.example.co.ac                    ac           dom.sub.example.co
dom.sub.example.co.de                    de           dom.sub.example.co
dom.sub.example.co.uk                    co.uk        dom.sub.example
dom.sub.example.adm.com                  com          dom.sub.example.adm
dom.sub.example.adm.ac                   ac           dom.sub.example.adm
dom.sub.example.adm.de                   de           dom.sub.example.adm
dom.sub.example.adm.uk                   uk           dom.sub.example.adm
dom.sub.example.com.com                  com          dom.sub.example.com
dom.sub.example.com.ac                   com.ac       dom.sub.example
dom.sub.example.com.de                   com.de       dom.sub.example
dom.sub.example.com.uk                   uk           dom.sub.example.com

The full script, containing the file read and the splitDom loop, takes ~2m with the POSIX version, ~1m29s with the first bash script based on the $tlds array, but ~22s with the last bash script based on the $TLDS associative array.

                POSIX version     $tlds (array)      $TLDS (associative array)
File read   :       0.04164          0.55507           18.65262
Split loop  :     114.34360         88.33438            3.38366
Total       :     114.34360         88.88945           22.03628

So although populating the associative array takes more work up front, the splitDom function becomes a lot quicker!

Eddy answered 6/9, 2020 at 7:24 Comment(0)
0

It's not working it out exactly, but you could maybe get a useful answer by trying to fetch the domain piece by piece and checking the response, i.e., fetch 'http://uk', then 'http://co.uk', then 'http://domain.co.uk'. When you get a non-error response, you've got the domain and the rest is the subdomain.

Sometimes you just gotta try it :)

Edit:

Tom Leys points out in the comments, that some domains are set up only on the www subdomain, which would give us an incorrect answer in the above test. Good point! Maybe the best approach would be to check each part with 'http://www' as well as 'http://', and count a hit to either as a hit for that section of the domain name? We'd still be missing some 'alternative' arrangements such as 'web.domain.com', but I haven't run into one of those for a while :)
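
A quick Python sketch of that probing idea (my own illustration using the requests package), including the www fallback from the edit; as the comments below note, it fails for domains with no web server and for TLDs such as .dk that respond on their own:

# pip install requests
import requests

def guess_domain(hostname):
    """Probe suffixes right-to-left and return the first name that answers HTTP."""
    labels = hostname.split(".")
    for i in range(len(labels) - 1, -1, -1):
        candidate = ".".join(labels[i:])
        for prefix in ("http://", "http://www."):
            try:
                requests.get(prefix + candidate, timeout=5)
                return candidate  # any HTTP response counts as a hit
            except requests.RequestException:
                pass              # connection failed: try the next variant
    return None

print(guess_domain("super.duper.domain.co.uk"))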

Architectural answered 14/11, 2008 at 0:54 Comment(4)
There is no guarantee that x.com points to a webserver at port 80 even if www.x.com does. www is a valid subdomain in this case. Perhaps an automated whois would help here.Certitude
Good point! A whois would clear it up, although maintaining a list of which whois servers to use for which tld/2nd level would mean solving the same problem for edge cases.Architectural
you are assuming that every domain runs an HTTP serverMariandi
Will not work for .DK and some others, as http://dk/ works as is. This kind of heuristic is not the way to go...Danieledaniell
0

Use the URIBuilder, then get the URIBuilder.host attribute and split it into an array on ".". You now have an array with the domain split out.
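
The Python standard-library analogue of that idea (my sketch, not the answer's Java); as noted elsewhere in this thread, a plain split knows nothing about multi-label suffixes like co.uk:

from urllib.parse import urlsplit

# Split the host into its dot-separated labels; deciding which ones form
# the "subdomain" is still up to you.
host = urlsplit("http://super.duper.domain.co.uk/index.html").hostname
print(host.split("."))  # ['super', 'duper', 'domain', 'co', 'uk']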

Tina answered 2/2, 2011 at 0:35 Comment(0)
0
echo tld('http://www.example.co.uk/test?123'); // co.uk

/**
 * http://publicsuffix.org/
 * http://www.alandix.com/blog/code/public-suffix/
 * http://tobyinkster.co.uk/blog/2007/07/19/php-domain-class/
 */
function tld($url_or_domain = null)
{
    $domain = $url_or_domain ?: $_SERVER['HTTP_HOST'];
    preg_match('/^[a-z]+:\/\//i', $domain) and 
        $domain = parse_url($domain, PHP_URL_HOST);
    $domain = mb_strtolower($domain, 'UTF-8');
    if (strpos($domain, '.') === false) return null;

    $url = 'http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1';

    if (($rules = file($url)) !== false)
    {
        $rules = array_filter(array_map('trim', $rules));
        array_walk($rules, function($v, $k) use(&$rules) { 
            if (strpos($v, '//') !== false) unset($rules[$k]);
        });

        $segments = '';
        foreach (array_reverse(explode('.', $domain)) as $s)
        {
            $wildcard = rtrim('*.'.$segments, '.');
            $segments = rtrim($s.'.'.$segments, '.');

            if (in_array('!'.$segments, $rules))
            {
                $tld = substr($wildcard, 2);
                break;
            }
            elseif (in_array($wildcard, $rules) or 
                    in_array($segments, $rules))
            {
                $tld = $segments;
            }
        }

        if (isset($tld)) return $tld;
    }

    return false;
}
Regarding answered 29/3, 2012 at 1:9 Comment(0)
0

You can use the lib tld.js: a JavaScript API for working with complex domain names, subdomains and URIs.

tldjs.getDomain('mail.google.co.uk');
// -> 'google.co.uk'

If you need the root domain in the browser, you can use the lib AngusFu/browser-root-domain.

var KEY = '__rT_dM__' + (+new Date());
var R = new RegExp('(^|;)\\s*' + KEY + '=1');
var Y1970 = (new Date(0)).toUTCString();

module.exports = function getRootDomain() {
  var domain = document.domain || location.hostname;
  var list = domain.split('.');
  var len = list.length;
  var temp = '';
  var temp2 = '';

  while (len--) {
    temp = list.slice(len).join('.');
    temp2 = KEY + '=1;domain=.' + temp;

    // try to set cookie
    document.cookie = temp2;

    if (R.test(document.cookie)) {
      // clear
      document.cookie = temp2 + ';expires=' + Y1970;
      return temp;
    }
  }
};

Using cookies this way is tricky.

California answered 9/5, 2017 at 7:54 Comment(0)
0

If you're looking to extract subdomains and/or domains from an arbitrary list of URLs, this python script may be helpful. Be careful though, it's not perfect. This is a tricky problem to solve in general and it's very helpful if you have a whitelist of domains you're expecting.

  1. Get top level domains from publicsuffix.org
import requests

url = 'https://publicsuffix.org/list/public_suffix_list.dat'
page = requests.get(url)

domains = []
for line in page.text.splitlines():
    if line.startswith('//'):
        continue
    else:
        domain = line.strip()
        if domain:
            domains.append(domain)

domains = [d[2:] if d.startswith('*.') else d for d in domains]
print('found {} domains'.format(len(domains)))
  2. Build regex
import re

_regex = ''
for domain in domains:
    _regex += r'{}|'.format(domain.replace('.', '\.'))

subdomain_regex = r'/([^/]*)\.[^/.]+\.({})/.*$'.format(_regex)
domain_regex = r'([^/.]+\.({}))/.*$'.format(_regex)
  3. Use regex on list of URLs
FILE_NAME = ''   # put CSV file name here
URL_COLNAME = '' # put URL column name here

import pandas as pd

df = pd.read_csv(FILE_NAME)
urls = df[URL_COLNAME].astype(str) + '/' # note: adding / as a hack to help regex

df['sub_domain_extracted'] = urls.str.extract(pat=subdomain_regex, expand=True)[0]
df['domain_extracted'] = urls.str.extract(pat=domain_regex, expand=True)[0]

df.to_csv('extracted_domains.csv', index=False)
Betwixt answered 28/11, 2018 at 22:29 Comment(0)
0

To accomplish this, I wrote a bash function which depends on publicsuffix.org data and a simple regex.

Install publicsuffix.org client on Ubuntu 18:

sudo apt install psl

Get the domain suffix (longest suffix):

domain=example.com.tr
output=$(psl --print-unreg-domain $domain)

output is:

example.com.tr: com.tr

The rest is simple bash. Extract the suffix (com.tr) from the domain and test whether what remains still has more than one dot.

# split output by colon
arr=(${output//:/ })
# remove the suffix from the domain
name=${domain/${arr[1]}/}
# test
if [[ $name =~ \..*\. ]]; then
  echo "Yes, it is subdomain."
fi

Everything together in a bash function:

is_subdomain() {
  local output=$(psl --print-unreg-domain $1)
  local arr=(${output//:/ })
  local name=${1/${arr[1]}/}
  [[ $name =~ \..*\. ]]
}

Usage:

d=example.com.tr
if is_subdomain $d; then
  echo "Yes, it is."
fi
Flophouse answered 13/9, 2020 at 8:3 Comment(0)
0
private String getSubDomain(Uri url) throws Exception {
    String host = url.getHost();          // e.g. "super.duper.domain.co.uk"
    String[] labels = host.split("\\.");  // split the host into its dot-separated labels
    return labels[0];                     // naive: assume the first label is the subdomain
}

The first element will always be the subdomain.

Garcon answered 15/12, 2020 at 13:37 Comment(0)
0

This snippet (using Guava's InternetDomainName) returns the correct registrable domain name.

InternetDomainName domain = InternetDomainName.from("foo.item.shopatdoor.co.uk").topPrivateDomain();
System.out.println(domain); // shopatdoor.co.uk
Subsequent answered 16/9, 2021 at 8:21 Comment(0)
-1

Keep a list of common suffixes (.co.uk, .com, et cetera), strip them out along with the http://, and then you'll only have "sub.domain" to work with instead of "http://sub.domain.suffix". Or at least that's what I'd probably do.

The biggest problem is the list of possible suffixes. There's a lot, after all.
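
A minimal Python sketch of that approach (mine, with a deliberately tiny and incomplete suffix list, since completing the list is exactly the hard part):

from urllib.parse import urlsplit

# Deliberately tiny, incomplete suffix list -- maintaining it is the real problem.
SUFFIXES = [".co.uk", ".org.uk", ".com", ".org", ".net", ".uk"]

def strip_suffix(url):
    """Strip the scheme, then the longest matching suffix; the rest is 'sub.domain'."""
    host = urlsplit(url).hostname
    for suffix in sorted(SUFFIXES, key=len, reverse=True):  # longest match first
        if host.endswith(suffix):
            return host[: -len(suffix)]
    return host

print(strip_suffix("http://super.duper.domain.co.uk"))  # "super.duper.domain"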

Ramses answered 14/11, 2008 at 1:0 Comment(0)
-3

Having taken a quick look at the publicsuffix.org list, it appears that you could make a reasonable approximation by removing the final three segments ("segment" here meaning a section between two dots) from domains where the final segment is two characters long, on the assumption that it's a country code and will be further subdivided. If the final segment is "us" and the second-to-last segment is also two characters, remove the last four segments. In all other cases, remove the final two segments. e.g.:

"example" is not two characters, so remove "domain.example", leaving "www"

"example" is not two characters, so remove "domain.example", leaving "super.duper"

"uk" is two characters (but not "us"), so remove "domain.co.uk", leaving "super.duper"

"us" is two characters and is "us", plus "wy" is also two characters, so remove "pvt.k12.wy.us", leaving "foo".

Note that, although this works for all examples that I've seen in the responses so far, it remains only a reasonable approximation. It is not completely correct, although I suspect it's about as close as you're likely to get without making/obtaining an actual list to use for reference.
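
Here is a small Python sketch of that approximation (my own rendering of the rule as stated, with all of the caveats above):

def split_heuristic(hostname):
    """Return (subdomain, guessed registrable part) per the two-letter-TLD rule."""
    labels = hostname.split(".")
    last = labels[-1]
    if last == "us" and len(labels) >= 2 and len(labels[-2]) == 2:
        cut = 4        # e.g. pvt.k12.wy.us
    elif len(last) == 2:
        cut = 3        # e.g. domain.co.uk
    else:
        cut = 2        # e.g. domain.example, domain.com
    return ".".join(labels[:-cut]), ".".join(labels[-cut:])

for host in ("www.domain.example", "super.duper.domain.co.uk", "foo.pvt.k12.wy.us"):
    print(host, "->", split_heuristic(host))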

Heterolysis answered 14/11, 2008 at 18:25 Comment(3)
There are lots of fail cases. This is the sort of algorithm browsers used to try and use. Don't do that, use the PSL - it works, and there are libraries to help you.Gatian
Nothing prohibits gTLDs from being "segmented" too; this was the case at the beginning of .NAME, for example, when you could buy only firstname.lastname.name domain names. And in the opposite direction, .US is now also flat, so you can have x.y.z.whatever.us by just purchasing whatever.us at the registry, and then your algorithm will fail on it.Danieledaniell
Also, about ("segment" here meaning a section between two dots): this is called a label in the DNS world, no need to invent a new name.Danieledaniell
