I have to spell-check a large number of big HTML and XML documents (more than 30,000). I also need a custom dictionary and fairly sophisticated checking. My plan is to use Bash plus the usual Linux utilities (sed, grep, ...) together with hunspell. Hunspell has the -H option, which forces it to treat the input as HTML (the option is also suitable for XML). But there is one problem: it reports character offsets instead of line numbers, and I can't simply check the files line by line, because then hunspell ends up looking inside tags (it can't find the closing tag).
So what is the right way to do this task?
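For reference, here is a minimal sketch of the batch setup described above. The docs/ tree and custom.dic file are placeholders; -H selects HTML input, -p sets a personal (custom) dictionary, -l only lists misspelled words, and grep -n is used afterwards to recover line numbers:
# Sketch only: check every HTML/XML file and map misspellings back to line numbers
find docs/ -type f \( -name '*.html' -o -name '*.xml' \) | while read -r f; do
    # -H: HTML/SGML input, -p: custom dictionary, -l: just list the misspelled words
    hunspell -H -p custom.dic -l "$f" | sort -u | while read -r word; do
        # Look each word up again to get line numbers (this may also match text inside tags)
        grep -Hnw -F -- "$word" "$f"
    done
done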
How to do spell check on HTML and XML?
I just had a similar problem. You should be able to get good output by using the undocumented switches, e.g. -u or -U. But be careful: these features seem to be experimental right now, and I only found out about their existence by looking at the hunspell sources.
So essentially:
hunspell -H -u my-file.html
should do it.
Alternatively, there are also the switches -u1, -u2 and -u3 that you can play around with.
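If that works for you, running it over the whole document set could look roughly like the sketch below. Since the -u output format is undocumented, each report is just saved raw for later processing; docs/ and reports/ are placeholders:
mkdir -p reports
find docs/ -type f \( -name '*.html' -o -name '*.xml' \) | while read -r f; do
    # Save the raw -u report per file (note: files with the same basename will overwrite each other)
    hunspell -H -u "$f" > "reports/$(basename "$f").report" 2>&1
done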
Have you tried using tidy?
I have not used it on that many files, but it worked fine for finding issues in 100+ HTML pages. You can also use it on XML files, and it accepts a configuration file with many options that I have not yet explored.
I can't find an option for specifying a custom dictionary. Is that possible? And how fast and reliable is it for spell checking? – Tactile
If it can't be set in the configuration file, I'm not sure it can be done in tidy at all. A single HTML file is parsed instantly, but I'm not sure how long thousands will take. You'll also need a script or something to parse the results, because they can be quite verbose. – Springs
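For what it's worth, a quick way to see how noisy tidy gets on a batch of files is the sketch below. Only -q (quiet) and -e (errors and warnings only) are used; docs/ is a placeholder, and a -config file could be added if needed:
# Sketch: run tidy in "errors only" mode over the HTML files and collect its diagnostics
find docs/ -type f -name '*.html' | while read -r f; do
    echo "== $f"
    tidy -q -e "$f" 2>&1
done > tidy-report.txt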
aspell? – Mousseline
aspell outputs line numbers rather than the strange and not very useful offsets you get from hunspell. – Tactile
There is also the -X option for XML. – Korey
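In case it helps, a minimal aspell sketch with a personal word list (the ./custom.pws path and page.html are placeholders; --mode=html selects the HTML filter, and the list command prints the misspelled words found on stdin):
# Sketch: list misspelled words in one HTML file using a personal word list
aspell --mode=html --personal=./custom.pws list < page.html | sort -u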