Which characters make a URL invalid?

13

639


Are these valid URLs?

  • example.com/file[/].html
  • http://example.com/file[/].html
Vevine asked 10/10, 2009 at 13:10 Comment(2)
When validating, you should always "think positive": ask for "what is valid", everything else is invalid. Testing against the (few) valid characters is much safer (and easier!) than all possible invalid ones.Savaii
Related: What is the best regular expression to check if a string is a valid URL?Skye
705

In general, URIs as defined by RFC 3986 (see Section 2: Characters) may contain any of the following 84 characters:

ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=

Note that this list doesn't state where in the URI these characters may occur.

Any other character needs to be percent-encoded (%hh). Each part of the URI has further restrictions on which characters need to be represented by a percent-encoded word.
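
To illustrate (a minimal Python sketch of my own; the function name is made up), a membership test for this character set plus well-formed percent-escapes could look like this. Note that it checks characters only, not URI structure:

    import re

    # The 84 RFC 3986 characters listed above, or a '%' that starts a
    # well-formed percent-escape (%hh). URI structure is NOT checked.
    RFC3986_CHARS = re.compile(
        r"^(?:[A-Za-z0-9\-._~:/?#\[\]@!$&'()*+,;=]|%[0-9A-Fa-f]{2})*$"
    )

    def uses_only_uri_chars(s: str) -> bool:
        return bool(RFC3986_CHARS.match(s))

    print(uses_only_uri_chars("http://example.com/file.html"))  # True
    print(uses_only_uri_chars("http://example.com/a b.html"))   # False (space)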

Vikki answered 10/10, 2009 at 13:26 Comment(24)
(of course, the list of characters doesn't state where in the uri they may occur)Sexagesimal
@Eamon Nerbonne: Yes, this is only the union of the sets of valid characters of all components.Vikki
Here's a regex that will determine if the entire string contains only the characters above: /^[!#$&-;=?-[]_a-z~]+$/Raby
@Leif that regex is missing some characters and doesn't properly escape others. This regex should work better: /(http[A-Za-z0-9\-\._~:\/\?#[]@!\$&'()*\+,;=%]+)/Rile
@techiferous, Yeah, I forgot to allow "%" escaped characters. It should've looked more like: /^([!#$&-;=?-[]_a-z~]|%[0-9a-fA-F]{2})+$/ Was there anything else that you found it should've been accepting? (Just to be clear, that regex only checks if the string contains valid URL characters, not if the string contains a well formed URL.)Raby
@LeifWickland That’s much better. But note that an empty string is also a valid URL.Vikki
I don’t think there is any requirement in URLs for % to be followed strictly by two hex digits. That is merely a convention that is used on top of URLs.Deepdyed
@Deepdyed RFC 3986 says, "A percent-encoded octet is encoded as a character triplet, consisting of the percent character "%" followed by the two hexadecimal digits representing that octet's numeric value." It also says, "Because the percent ("%") character serves as the indicator for percent-encoded octets, it must be percent-encoded as "%25" for that octet to be used as data within a URI." I read that as saying that a "%" may only appear if it is followed by two hex digits. How do you read it?Raby
# marks the start of the fragment, it wouldn't be wise to allow it via a regexPhotodynamics
@LeifWickland It looks like both your regexes omit period ".", solidus "/", at-sign "@", apostrophe "'", parentheses "(" and ")", asterisk "*", plus "+" and comma ",". I think techiferous's regex includes all of them.Methoxychlor
@Rile Your regex doesn't escape the closing square bracket "]" in the character class. I think you meant /(http[A-Za-z0-9\-\._~:\/\?#[\]@!\$&'()*\+,;=%]+)/ EDIT - (Aha, StackOverflow deleted it. You need to include two for it to appear. I'm not sure whether it might have eaten other characters too...)Methoxychlor
@Methoxychlor My regex included those characters by using ranges. Between '&' and ';' and between '?' and '[' you'll find all those characters you didn't see.Raby
http://budyń.pl is an example of an address with a character outside the given range of valid characters. And the address works. Funny thing is, it isn't parsed correctly on SO: budyń.pl I think you should be liberal when parsing URLs (an http:// prefixed string is an obvious link), while being very strict in page naming, URL rewriting etc.Inefficacious
@MarkusvonBroady budyń.pl is a so called International Domain Name (IDN) and is actually translated to the punycode xn--budy-e2a.pl. When you enter http://budyń.pl in your browser, it will actually request http://xn--budy-e2a.pl instead.Vikki
@Vikki I am aware of it, but the translation is done behind the scenes; when creating a forum script, as in the situation described by the question's author, you want to consider such unusual cases. In fact, I don't understand why a script should check e-mail/URL validity - most regexps I saw were much more limiting than the RFC and therefore caused problems with exotic addresses. I would only check that the scheme is in a whitelist of http(s), ftp, magnet etc., and put the tag in <> not [], as the latter can be inside a link as well.Inefficacious
This w3c spec from 1994 seems to address the valid characters in more detail: w3.org/Addressing/URL/url-spec.txt See the section titled "BNF for specific URL schemes".Humeral
interestingly, this encoding doesn't seem to be mandated for fields like RefererVillarreal
@MarkusvonBroady, To prevent problems due to users' browsers. If you limit it to the "normal limit", there won't be problems.Pule
@Pule The best way to prevent problems is to disable links at all. If person A wants to share http://budyń.pl, and person B's browser can't handle it, you won't help person B by blocking the address - because in both cases person B will not access the Budyń.pl site.Inefficacious
@MarkusvonBroady, No no, you should take the input http://budyń.pl and link it to the translated version http://xn--budy-e2a.pl/. That way problem solved.Pule
@Pule My point is, you either: 1. Use regexp to find the link. That way, using the above version, you will never find the http://budyń.pl link and so you will never translate it and it will be parsed as plain text (at least the part after the character not inside regexp formula). --- 2. Use regexp to validate the link. That way, you will not allow a user to type http://budyń.pl, even though it is a valid link that will open in a browser. This is a reason why I tested it here, on SO, to support my point with a real life example:: budyń.plInefficacious
@MarkusvonBroady, 3. Don't use Regex. Imagine the world doesn't have such a wild creature.Pule
missing % charUnmoor
Could someone unmark this as the answer, please? there are too many problems with the various characters (#?&) that have special meanings in urlsMostly
266

The '[' and ']' in this example are "unwise" characters but still legal. If the '/' inside the brackets is meant to be part of the file name, then it is invalid, since '/' is reserved and should be properly encoded:

http://example.com/file[/].html

To add some clarification and directly address the question above, there are several classes of characters that cause problems for URLs and URIs.

There are some characters that are disallowed and should never appear in a URL/URI, reserved characters (described below), and other characters that may cause problems in some cases but are marked as "unwise" or "unsafe". The reasons why these characters are restricted are clearly spelled out in RFC-1738 (URLs) and RFC-2396 (URIs). Note that the newer RFC-3986 (which obsoletes RFC-2396 and updates RFC-1738) defines which characters are allowed in a given context, but the older spec offers a simpler and more general description of which characters are not allowed, with the following rules.

Excluded US-ASCII Characters disallowed within the URI syntax:

   control     = <US-ASCII coded characters 00-1F and 7F hexadecimal>
   space       = <US-ASCII coded character 20 hexadecimal>
   delims      = "<" | ">" | "#" | "%" | <">

The character "#" is excluded because it is used to delimit a URI from a fragment identifier. The percent character "%" is excluded because it is used for the encoding of escaped characters. In other words, the "#" and "%" are reserved characters that must be used in a specific context.

The following "unwise" characters are allowed but may cause problems:

   unwise      = "{" | "}" | "|" | "\" | "^" | "[" | "]" | "`"

Characters that are reserved within a query component and/or have special meaning within a URI/URL:

  reserved    = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" | "$" | ","

The "reserved" syntax class above refers to those characters that are allowed within a URI, but which may not be allowed within a particular component of the generic URI syntax. Characters in the "reserved" set are not reserved in all contexts. The hostname, for example, can contain an optional username so it could be something like ftp://user@hostname/ where the '@' character has special meaning.

Here is an example of a URL that has invalid and unwise characters (e.g. '$', '[', ']') and should be properly encoded:

http://mw1.google.com/mw-earth-vectordb/kml-samples/gp/seattle/gigapxl/$[level]/r$[y]_c$[x].jpg

Some of the character restrictions for URIs and URLs are programming-language-dependent. For example, although the '|' (0x7C) character is only marked as "unwise" in the URI spec, the Java java.net.URI constructor throws a URISyntaxException on it, so a URL like http://api.google.com/q?exp=a|b is not allowed and must instead be encoded as http://api.google.com/q?exp=a%7Cb when using Java with a URI object instance.
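
If you need to produce the encoded form, most languages provide a helper. For example (my illustration, not part of the original answer), Python's urllib:

    from urllib.parse import quote

    # quote() percent-encodes everything outside its "safe" set;
    # '|' (0x7C) becomes '%7C', the form Java's URI constructor accepts.
    exp = quote("a|b", safe="")
    print("http://api.google.com/q?exp=" + exp)
    # http://api.google.com/q?exp=a%7Cb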

Chuckle answered 21/11, 2012 at 18:50 Comment(9)
Excellent, thorough answer, the only one to directly answer the actual question. Reserved section may need work, e.g. literal ? is just fine in the query section, but impossible before it, and I don't think @ belongs in any of these lists. Oh, and instead of %25 in the last string, don't you mean %7C?Cormophyte
Thanks. Good catch: the %25 was a typo in the example. Added footnote to the "reserved" syntax description directly from RFC-2396.Chuckle
This answer isn't bad, but there are some confusions and errors. You initially conflate disallowed and reserved characters (very different things), you make too much of the distinction between "unwise" characters and other disallowed characters (dropped in RFC 3986 and syntactically irrelevant even in RFC 2396), and you confusingly present a list of all reserved characters as the list reserved "within a query component".Saar
Thanks, didn't mean to group the disallowed and reserved as the same. Updated the answer. IMHO rules in RFC-2396 though older are simpler to understand than the updated rules in 3986. Answer reflects more on which characters might be troublesome in general rather than exactly which context it is allowed or not allowed.Chuckle
Hmm. This is still a little misleading since the list of characters reserved in the query component is taken from RFC 2396 (published in 1998) and doesn't match the newer RFC 3986 published in 2005. Confusingly, the terminology has changed a little (RFC 2396 says reserved characters are not reserved in some contexts and so don't need escaping; RFC 3986 says reserved characters do not act as delimiters in some contexts and so don't need escaping), but there's also a real change in meaning: ? and /, for instance, can be used unencoded in an RFC 3986 query string but not an RFC-2396 one.Saar
It's notable that Tomcat in recent releases (7.0.73+, 8.0.39+, 8.5.7+) has started rejecting requests with characters from the "unwise" category with HTTP 400 errors: "Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986"Stamen
% is a special character in URLs for hex encoding, as in ftp://myname@host.dom/%2Fetc/motd where %2F is the hexadecimal notation of '/'.Chuckle
There are some misleading in this answer. RFC 3986 is a replacement for 2396, which also updates 1738. Also, there are some difference between 3986 and 2396 regarding characters (this answer implies they are the same).Rambouillet
which makes these safe? abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-_.!~*'() love to add a copy/paste string.Mostly
174

Most of the existing answers here are impractical because they totally ignore the real-world usage of addresses that contain non-ASCII characters, like http://budyń.pl (mentioned in the comments above).

First, a digression into terminology. What are these addresses? Are they valid URLs?

Historically, the answer was "no". According to RFC 3986, from 2005, such addresses are not URIs (and therefore not URLs, since URLs are a type of URI). Per the terminology of 2005 IETF standards, we should properly call them IRIs (Internationalized Resource Identifiers), as defined in RFC 3987; IRIs are technically not URIs but can be converted to URIs simply by percent-encoding all non-ASCII characters in the IRI.

Per modern spec, the answer is "yes". The WHATWG Living Standard simply classifies everything that would previously be called "URIs" or "IRIs" as "URLs". This aligns the specced terminology with how normal people who haven't read the spec use the word "URL", which was one of the spec's goals.

What characters are allowed under the WHATWG Living Standard?

Per this newer meaning of "URL", what characters are allowed? In many parts of the URL, such as the query string and path, we're allowed to use arbitrary "URL units", which are

URL code points and percent-encoded bytes.

What are "URL code points"?

The URL code points are ASCII alphanumeric, U+0021 (!), U+0024 ($), U+0026 (&), U+0027 ('), U+0028 LEFT PARENTHESIS, U+0029 RIGHT PARENTHESIS, U+002A (*), U+002B (+), U+002C (,), U+002D (-), U+002E (.), U+002F (/), U+003A (:), U+003B (;), U+003D (=), U+003F (?), U+0040 (@), U+005F (_), U+007E (~), and code points in the range U+00A0 to U+10FFFD, inclusive, excluding surrogates and noncharacters.

(Note that the list of "URL code points" doesn't include %, but that %s are allowed in "URL units" if they're part of a percent-encoding sequence.)

The only place I can spot where the spec permits the use of any character that's not in this set is in the host, where IPv6 addresses are enclosed in [ and ] characters. Everywhere else in the URL, either URL units are allowed or some even more restrictive set of characters.
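
As a sketch (mine, not from the spec), the quoted definition translates almost mechanically into code; the surrogate and noncharacter exclusions are the only fiddly part:

    # ASCII characters the WHATWG spec names individually, plus alphanumerics.
    ASCII_URL_CHARS = set(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
        "!$&'()*+,-./:;=?@_~"
    )

    def is_url_code_point(ch: str) -> bool:
        cp = ord(ch)
        if ch in ASCII_URL_CHARS:
            return True
        if not 0x00A0 <= cp <= 0x10FFFD:
            return False
        if 0xD800 <= cp <= 0xDFFF:  # surrogates
            return False
        # Noncharacters: U+FDD0..U+FDEF and the last two code points of each plane.
        if 0xFDD0 <= cp <= 0xFDEF or (cp & 0xFFFE) == 0xFFFE:
            return False
        return True

    print(is_url_code_point("/"))  # True
    print(is_url_code_point("ń"))  # True
    print(is_url_code_point("<"))  # False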

What characters were allowed under the old RFCs?

For the sake of history, and since it's not explored fully elsewhere in the answers here, let's examine what was allowed under the older pair of specs.

First of all, we have two types of RFC 3986 reserved characters:

  • :/?#[]@, which are part of the generic syntax for a URI defined in RFC 3986
  • !$&'()*+,;=, which aren't part of the RFC's generic syntax, but are reserved for use as syntactic components of particular URI schemes. For instance, semicolons and commas are used as part of the syntax of data URIs, and & and = are used as part of the ubiquitous ?foo=bar&qux=baz format in query strings (which isn't specified by RFC 3986).

Any of the reserved characters above can be legally used in a URI without encoding, either to serve their syntactic purpose or just as literal characters in data in some places where such use could not be misinterpreted as the character serving its syntactic purpose. (For example, although / has syntactic meaning in a URL, you can use it unencoded in a query string, because it doesn't have meaning in a query string.)

RFC 3986 also specifies some unreserved characters, which can always be used simply to represent data without any encoding:

  • abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-._~

Finally, the % character itself is allowed for percent-encodings.

That leaves only the following ASCII characters that are forbidden from appearing in a URL:

  • The control characters (chars 0-1F and 7F), including newline, tab, and carriage return.
  • The space character.
  • "<>\^`{|}

Every other character from ASCII can legally feature in a URL.

Then RFC 3987 extends that set of unreserved characters with the following Unicode character ranges:

  %xA0-D7FF / %xF900-FDCF / %xFDF0-FFEF
/ %x10000-1FFFD / %x20000-2FFFD / %x30000-3FFFD
/ %x40000-4FFFD / %x50000-5FFFD / %x60000-6FFFD
/ %x70000-7FFFD / %x80000-8FFFD / %x90000-9FFFD
/ %xA0000-AFFFD / %xB0000-BFFFD / %xC0000-CFFFD
/ %xD0000-DFFFD / %xE1000-EFFFD

These block choices from the old spec seem bizarre and arbitrary given the latest Unicode block definitions; this is probably because the blocks have been added to in the decade since RFC 3987 was written.


Finally, it's perhaps worth noting that simply knowing which characters can legally appear in a URL isn't sufficient to recognise whether some given string is a legal URL or not, since some characters are only legal in particular parts of the URL. For example, the reserved characters [ and ] are legal as part of an IPv6 literal host in a URL like http://[1080::8:800:200C:417A]/foo but aren't legal in any other context, so the OP's example of http://example.com/file[/].html is illegal.

Saar answered 16/4, 2016 at 17:17 Comment(9)
"<>\^`{|} are not forbidden, they are marked as unsafe. But these characters are often used in real world.Algeciras
@Algeciras no, they're forbidden. The "unsafe" designation hasn't been used for those characters since RFC 1738 (published in 1994 and already obsoleted by RFC 2368 over two decades ago), and even there, it was just a quirky synonym for "forbidden"; RFC 1738 said that "All unsafe characters must always be encoded within a URL." (emphasis mine).Saar
Maybe these symbols are forbidden from some RFC perspective, but they are not encoded in the real world and are often used as-is by old clients and servers.Algeciras
I've found several webservers using the U+007F (delete) character in URLs. But I am not 100% sure that they are not something like broken honeypots.Algeciras
+1 for actually answering the question rather than interpreting it and answering another one. Coming from Google, I was looking for some invalid characters I can use to test an URL validation method. Others are answering how to write a URL validation method instead...Border
(1) "query strings (which isn't specified by RFC 3986)" - what do you mean by that? query string is specified under section 3.4. (2) "Any of the reserved characters above can be legally used in a URI without encoding [...] where such use could not be misinterpreted as the character serving its syntactic purpose" - is that correct? = can be misinterpreted when used like ?foo=bar=baz, yet = is allowed unencoded in a query string.Wilks
@Wilks (1) I wouldn't characterise section 3.4 as speccing the key=value format. About that format, it says only "query components are often used to carry identifying information in the form of "key=value" pairs". I'd say that acknowledges the existence of the format, but refrains from actually speccing it. It doesn't address, for instance, whether keys must be unique, what characters are allowed in keys, how to escape = and & characters in values, or what the syntax is for arrays. And the concepts of query keys and values are not present in the ABNF in Appendix A.Saar
@Wilks As for (2), I think PJ is interpreting the RFC the same way as me in the answer you link to, too. Yes, RFC 3986 doesn't mind if you use = anywhere in a query string, but that's because the key=value format isn't part of its spec and so it doesn't treat = and & as having any special syntactic purpose in query strings. The strings foo=bar, ====foo====bar==== and !?@!?@ are equally legal query strings under RFC 3986, but the second two are not well-formed examples of the common key=value format and will confuse any query string parser that is expecting key=value pairs.Saar
@Border so to augment OP question: which characters are definitively invalid?Mostly
19

In your supplementary question you asked if www.example.com/file[/].html is a valid URL.

That URL isn't valid because a URL is a type of URI and a valid URI must have a scheme like http: (see RFC 3986).

If you meant to ask if http://www.example.com/file[/].html is a valid URL then the answer is still no because the square bracket characters aren't valid there.

The square bracket characters are reserved for URLs in this format: http://[2001:db8:85a3::8a2e:370:7334]/foo/bar (i.e. an IPv6 literal instead of a host name)
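
For example (a sketch using Python's standard library, not part of the original answer), parsers strip the brackets when extracting the host:

    from urllib.parse import urlsplit

    # The brackets delimit the IPv6 literal; they are not part of the host.
    parts = urlsplit("http://[2001:db8:85a3::8a2e:370:7334]/foo/bar")
    print(parts.hostname)  # 2001:db8:85a3::8a2e:370:7334
    print(parts.path)      # /foo/bar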

It's worth reading RFC 3986 carefully if you want to understand the issue fully.

Arapaho answered 3/12, 2009 at 15:46 Comment(6)
After reading the RFC, I'm more inclined to agree with @Stephen C more detailed explanation.Secularity
URLs are not a subset of URIs. The [ and ] are not valid URI characters for almost all parsers I have seen. This has actually screwed me in the real world: stackoverflow.com/questions/11038967/…Salish
@AdamGent URLs very much are a subset of URIs. The only difference between them is whether they describe the location of the resource - which is a semantic distinction, not a syntactic one. If the parsers you've seen that labelled themselves as "URI" parsers treated square brackets differently to those that labelled themselves as "URL" parsers, then that's pure coincidence, not caused by any difference between URLs and URIs.Saar
@Mark Amery it's analogous to saying C++ is a superset of C. That is true for the most part, but not entirely, because URLs (like C) are much older, so implementations have to include behavior that is less strict. The problem is URL parsers will parse things that are not valid URIs... and I mean most of them (frankly, I'm so tired of pointing this out across so many languages). It is not coincidence; it's backwards compatibility. Can we agree that the URL spec is older, at least?Salish
@MarkAmery In Python, C#, Java and some C libraries, the parsers take "unwise" very seriously for URIs and yet are fine with those characters in URL libraries; that is, there is no flag to ignore "unwise". I'll have to check out what Rust does for URLs (since it is being built for a browser, I'm curious what it does). Most browsers, though, will happily pass "[" and "]". So in theory, just like I said with C/C++, they are sub/superset, but in reality it's not so true. It is highly dependent on interpretation of the spec and the semantics of super/subset.Salish
RFC3986 is crystal clear about this: " A host identified by an Internet Protocol literal address, version 6 [RFC3513] or later, is distinguished by enclosing the IP literal within square brackets ("[" and "]"). This is the only place where square bracket characters are allowed in the URI syntax. ". No other consideration is necessary; http://example.com/file[/].html is invalid as a URL.Nisan
12

All valid characters that can be used in a URI (a URL is a type of URI) are defined in RFC 3986.

All other characters can be used in a URL provided that they are "URL encoded" first. This involves replacing the invalid character with a specific "code" (usually the percent symbol (%) followed by a hexadecimal number).

This link, HTML URL Encoding Reference, contains a list of the encodings for invalid characters.
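
In practice you rarely build these encodings by hand. For instance (my sketch, not part of the original answer), Python's urllib handles the UTF-8 percent-encoding round trip:

    from urllib.parse import quote, unquote

    # Non-ASCII characters are encoded as UTF-8 bytes, each written as %HH.
    print(quote("budyń"))         # budy%C5%84
    print(unquote("budy%C5%84"))  # budyń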

Herrin answered 10/10, 2009 at 13:22 Comment(1)
And for Unicode characters, the Wikipedia article Percent-encoding says the following: "The generic URI syntax mandates that new URI schemes that provide for the representation of character data in a URI must, in effect, represent characters from the unreserved set without translation, and should convert all other characters to bytes according to UTF-8, and then percent-encode those values."Skye
9

Several Unicode character ranges are valid in HTML5, although it might still not be a good idea to use them.

E.g., the href docs (http://www.w3.org/TR/html5/links.html#attr-hyperlink-href) say:

The href attribute on a and area elements must have a value that is a valid URL potentially surrounded by spaces.

Then the definition of "valid URL" points to http://url.spec.whatwg.org/, which says it aims to:

Align RFC 3986 and RFC 3987 with contemporary implementations and obsolete them in the process.

That document defines URL code points as:

ASCII alphanumeric, "!", "$", "&", "'", "(", ")", "*", "+", ",", "-", ".", "/", ":", ";", "=", "?", "@", "_", "~", and code points in the ranges U+00A0 to U+D7FF, U+E000 to U+FDCF, U+FDF0 to U+FFFD, U+10000 to U+1FFFD, U+20000 to U+2FFFD, U+30000 to U+3FFFD, U+40000 to U+4FFFD, U+50000 to U+5FFFD, U+60000 to U+6FFFD, U+70000 to U+7FFFD, U+80000 to U+8FFFD, U+90000 to U+9FFFD, U+A0000 to U+AFFFD, U+B0000 to U+BFFFD, U+C0000 to U+CFFFD, U+D0000 to U+DFFFD, U+E1000 to U+EFFFD, U+F0000 to U+FFFFD, U+100000 to U+10FFFD.

The term "URL code points" is then used in the statement:

If c is not a URL code point and not "%", parse error.

in several parts of the parsing algorithm, including the scheme, authority, relative path, query and fragment states: so basically the entire URL.

Also, the validator http://validator.w3.org/ passes for URLs like "你好", and does not pass for URLs with characters like spaces, e.g. "a b".

Of course, as mentioned by Stephen C, it is not just about characters but also about context: you have to understand the entire algorithm. But since the "URL code points" class is used at key points of the algorithm, it gives a good idea of what you can use or not.

See also: Unicode characters in URLs

Bulrush answered 29/8, 2014 at 14:19 Comment(0)
8

I needed to select characters to split URLs in a string, so I decided to create my own list of characters that cannot be found in a URL:

>>> allowed = "-_.~!*'();:@&=+$,/?%#[]?@ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
>>> from string import printable
>>> ''.join(set(printable).difference(set(allowed)))
'`" <\x0b\n\r\x0c\\\t{^}|>'

So, the possible choices are the newline, tab, space, backslash and the characters `"<>{}^|. I guess I'll go with the space or newline. :)
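
Put to use (a sketch under the same assumption that these characters never occur inside a URL), the derived set works directly as a split pattern:

    import re

    # Split free text on characters that cannot appear in a URL.
    text = 'visit http://example.com/a or <http://example.org/b>'
    tokens = re.split(r'[`"\s<>{}^|\\]+', text)
    print([t for t in tokens if t.startswith("http")])
    # ['http://example.com/a', 'http://example.org/b']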

Hamburger answered 11/2, 2014 at 17:57 Comment(0)
3

I am implementing an old HTTP (0.9, 1.0, 1.1) request and response reader/writer. The request URI is the most problematic place.

You can't just use RFC 1738, 2396 or 3986 as-is. There are many old HTTP clients and servers that allow more characters, so I did some research based on accidentally published web server access logs: "GET URI HTTP/1.0" 200.

I've found that the following non-standard characters are often used in URIs:

\ { } < > | ` ^ "

These characters were described in RFC 1738 as unsafe.

If you want to be compatible with all old HTTP clients and servers, you have to allow these characters in the request URI.
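
A sketch of what that lenient acceptance might look like (my own illustration; the set is the RFC 3986 characters plus the legacy characters above):

    import re

    # RFC 3986 characters plus the legacy "unsafe" set seen in old clients:
    # backslash, braces, angle brackets, pipe, backtick, caret, double quote.
    LENIENT_URI = re.compile(
        r'^[A-Za-z0-9\-._~:/?#\[\]@!$&\'()*+,;=%\\{}<>|`^"]+$'
    )

    print(bool(LENIENT_URI.match("http://example.com/legacy{id}|raw")))  # True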

Please read more information about this research in oghttp-request-collector.

Algeciras answered 12/4, 2020 at 16:37 Comment(1)
Is there any API to remove these characters from a string?Adventure
2

This is not really an answer to your question, but validating URLs is really a serious p.i.t.a. You're probably better off just validating the domain name and leaving the query part of the URL alone. That is my experience.

You could also resort to pinging the URL and seeing if it results in a valid response, but that might be too much for such a simple task.
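
If you do go the "ping it" route, a minimal sketch (assuming an HTTP(S) URL and counting any status below 400 as valid) could look like:

    from urllib.request import urlopen
    from urllib.error import URLError

    def responds(url: str, timeout: float = 5.0) -> bool:
        """Crude liveness probe; network errors and malformed URLs count as invalid."""
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except (URLError, ValueError):
            return False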

Regular expressions to detect URLs are abundant, google it :)

Undershot answered 10/10, 2009 at 13:19 Comment(2)
Related: What is the best regular expression to check if a string is a valid URL?Skye
This answer advises that URL validation is a job not for a regex, but for a language/platform-specific library.Skye
1

From the source (emphasis added when needed):

Unsafe:

Characters can be unsafe for a number of reasons. The space character is unsafe because significant spaces may disappear and insignificant spaces may be introduced when URLs are transcribed or typeset or subjected to the treatment of word-processing programs.

The characters "<" and ">" are unsafe because they are used as the delimiters around URLs in free text; the quote mark (""") is used to delimit URLs in some systems. The character "#" is unsafe and should always be encoded because it is used in World Wide Web and in other systems to delimit a URL from a fragment/anchor identifier that might follow it. The character "%" is unsafe because it is used for encodings of other characters. Other characters are unsafe because gateways and other transport agents are known to sometimes modify such characters. These characters are "{", "}", "|", "", "^", "~", "[", "]", and "`".

All unsafe characters must always be encoded within a URL. For example, the character "#" must be encoded within URLs even in systems that do not normally deal with fragment or anchor identifiers, so that if the URL is copied into another system that does use them, it will not be necessary to change the URL encoding. (Source: RFC 1738)

Leavitt answered 19/10, 2022 at 2:31 Comment(0)
0

I can't comment on the above answers, but wanted to emphasize the point (in another answer) that allowed characters aren't allowed everywhere. For example, domain names can't have underscores, so http://test_url.com is invalid.
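
A sketch of that label rule (mine; per RFC 1123, labels may contain only letters, digits, and interior hyphens):

    import re

    # One hostname label: alphanumeric, hyphens allowed only in the middle.
    LABEL = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")

    def valid_hostname(host: str) -> bool:
        return all(LABEL.match(label) for label in host.split("."))

    print(valid_hostname("test_url.com"))  # False (underscore)
    print(valid_hostname("example.com"))   # True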

Georgiegeorgina answered 12/7, 2022 at 22:23 Comment(0)
-1

If you need broader validation that includes emojis (which are nowadays used sporadically in URLs), for example:

http://factmyth.com/factoids/you-👏-can-👏-put-👏-emojis-👏-in-👏-urls-👏/

And even in domain names, like 😉.tld

Then this is a useful regex:

[-a-zA-Z0-9\u1F60-\uFFFF@:%_\+.~#?&//=!'(),;*\$\[\]]*

PS: It is not valid for all regex "flavors" used in programming languages. It will be valid for Python, Rust, Golang and modern JavaScript, but not for PHP, for example. Check here by selecting "flavors" on the left and looking for error messages: https://regex101.com/

Irmgard answered 8/4, 2023 at 11:7 Comment(1)
Your regex may be right for all I know, but without seeing a summary of what characters it allows and disallows and some explanation of how you constructed those sets of characters, I would never trust it enough to use it!Saar
-7

I came up with a couple of regular expressions for PHP that will convert URLs in text to anchor tags. (First it converts all www. URLs to http://, and then it converts all https?:// URLs to <a href="..."> HTML links.)

// First pass: prefix bare www. URLs with http://
$string = preg_replace('/(\s)((www\.)([!#$&-;=?\-\[\]_a-z~%]+))/sim', '$1http://$2', $string);
// Second pass: wrap http(s):// URLs in anchor tags
$string = preg_replace('/(https?:\/\/)([!#$&-;=?\-\[\]_a-z~%]+)/sim', '<a href="$1$2">$2</a>', $string);

Quickstep answered 26/12, 2016 at 18:36 Comment(1)
-1; beyond the fact that they both involve URLs in some capacity, this has nothing to do with the question that was asked.Saar
