Golang truncate strings with special characters without corrupting data

I am trying to write a function to truncate strings that contain special characters in Go. One example is below:

"H㐀〾▓朗퐭텟şüöžåйкл¤"

However, I am truncating based on the number of characters allowed, and the cut can land in the middle of a multi-byte character. This corrupts the data.

The result comes out like:

H㐀〾▓朗퐭텟şüöžå�...

The � should not be there. How do we detect these special characters and truncate the string based on the length of these characters?

package main

import (
    "fmt"
    "regexp"
)

var reNameBlacklist = regexp.MustCompile(`(&|>|<|\/|:|\n|\r)*`)
var maxFileNameLength = 30

// SanitizeName sanitizes user names in an email
func SanitizeName(name string, limit int) string {

    result := name
    reNameBlacklist.ReplaceAllString(result, "")
    if len(result) > limit {
        result = result[:limit] + "..."
    }
    return result
}

func main() {
    str := "H㐀〾▓朗퐭텟şüöžåйкл¤"
    fmt.Println(str)

    strsan := SanitizeName(str, maxFileNameLength)
    fmt.Println(strsan)

}
Aubert answered 26/9, 2017 at 0:4 Comment(0)

Slicing a string treats it as its underlying byte array: the slice operator works on byte indexes, not rune indexes (a rune can be multiple bytes). However, range over a string iterates over runes, and the index it returns is the byte offset at which each rune starts. This makes it fairly straightforward to do what you're looking for (full playground example here):

func SanitizeName(name string, limit int) string {
    name = reNameBlacklist.ReplaceAllString(name, "")
    result := name
    chars := 0
    for i := range name {
        if chars >= limit {
            result = name[:i]
            break
        }
        chars++
    }
    return result
}

This is explained in further detail on the Go blog ("Strings, bytes, runes and characters in Go").
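For illustration, here is a minimal sketch of my own (not part of the code above), using the question's sample string: len counts bytes, while range yields the byte offset at which each rune starts.

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    s := "H㐀〾▓朗퐭텟şüöžåйкл¤"

    // len counts bytes; RuneCountInString counts runes (code points).
    fmt.Println(len(s), utf8.RuneCountInString(s))

    // range walks the string rune by rune; i is the byte offset where each rune starts.
    for i, r := range s {
        fmt.Printf("byte offset %d: %q\n", i, r)
    }
}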


Update:

As commenters below suggest, you can normalize arbitrary UTF8 to NFC (Normalization Form Canonical Composition), which combines some multi-rune forms like diacritics into single-rune forms where possible. This adds a single step using golang.org/x/text/unicode/norm. Playground example of this here: https://play.golang.org/p/93qxI11km2f

func SanitizeName(name string, limit int) string {
    name = norm.NFC.String(name)
    name = reNameBlacklist.ReplaceAllString(name, "")
    result := name
    chars := 0
    for i := range name {
        if chars >= limit {
            result = name[:i]
            break
        }
        chars++
    }
    return result
}
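To see what normalization buys you, here is a small self-contained sketch (the decomposed sample string is my own illustration, not from the question): an "e" followed by the combining acute accent U+0301 is two runes before normalization and a single rune after NFC.

package main

import (
    "fmt"
    "unicode/utf8"

    "golang.org/x/text/unicode/norm"
)

func main() {
    // "café" with the é written as 'e' + U+0301 (combining acute accent).
    decomposed := "cafe\u0301"

    fmt.Println(utf8.RuneCountInString(decomposed))                  // 5 runes
    fmt.Println(utf8.RuneCountInString(norm.NFC.String(decomposed))) // 4 runes: 'e' + U+0301 becomes U+00E9
}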
Randle answered 26/9, 2017 at 0:18 Comment(6)
The one difference from the question's code is the "..." when the limit kicks in. I was tempted to strip blacklisted chars from the shortened string, but then you either change the meaning (sanitize(">>>abc", 3) becomes "..." instead of "abc...") or have to complicate the code. – Misbegotten
Our current logic strips the string first, which is why I kept the truncation afterwards. – Aubert
This answer is great. The only problem is that it doesn't take Unicode grapheme clusters into account (a grapheme cluster is the combination of a base rune like A and any extra runes that supply things like diacritics on top of it). It might be useful to normalize the string to NFC before truncating it, so that as many characters as possible are converted to their single-rune variants. – Westmoreland
The code above also uses ReplaceAllString but does not use the returned string, and the name variable is never updated, so none of the characters in the regex are replaced. The same issue is present in the answer by @Topo. – Kassab
Good catch, my bad for copy & pasting from the question. Fixed. – Randle
Normalization of the string to NFC as recommended by @Westmoreland can be done with norm.NFC.String(string) from the golang.org/x/text/unicode/norm package. – Inheritable

The reason your data is getting corrupted is that some characters use more than one byte and you are splitting them in the middle. To handle this, Go has the rune type, which represents a single Unicode code point. You can convert the string to a []rune like this:

func SanitizeName(name string, limit int) string {
    // Keep the result of ReplaceAllString; it returns a new string and
    // does not modify its argument.
    name = reNameBlacklist.ReplaceAllString(name, "")
    result := []rune(name)
    if len(result) <= limit {
        return name
    }
    return string(result[:limit])
}

This leaves only the first limit runes (Unicode code points).
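As a quick self-contained check of the []rune technique (the limit of 5 is arbitrary and the blacklist step is omitted here):

package main

import "fmt"

func main() {
    str := "H㐀〾▓朗퐭텟şüöžåйкл¤"

    // Converting to []rune and slicing cuts on rune boundaries, not byte boundaries.
    runes := []rune(str)
    if len(runes) > 5 {
        fmt.Println(string(runes[:5])) // H㐀〾▓朗
    }
}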

Meninges answered 26/9, 2017 at 0:24 Comment(4)
Adrian's approach avoids allocating four bytes per Unicode code point and does less work when the input string is long, so I'd go with that. – Misbegotten
This is by far the simplest way, but it does have some drawbacks. For short strings, however, the drawbacks are a minor issue at worst. – Payload
This implementation breaks the string on code point boundaries (which is still better than breaking on byte boundaries), but this is not enough to break on character boundaries, since a character may be represented by several code points. – Bandanna
One optimization: the cost of conversion to runes can be avoided for short strings whose number of bytes (len) is lower than the limit. – Bandanna

Another option is the utf8string package:

package main

import (
    "fmt"

    "golang.org/x/exp/utf8string"
)

func main() {
    s := utf8string.NewString("🧡💛💚💙💜")
    t := s.Slice(0, 2)
    fmt.Println(t == "🧡💛")
}

https://pkg.go.dev/golang.org/x/exp/utf8string
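A sketch of how this could be applied to the question's truncation problem; truncateRunes and the RuneCount guard are my additions, so that inputs shorter than the limit are returned unchanged (Slice indexes by rune position and expects in-range indexes).

package main

import (
    "fmt"

    "golang.org/x/exp/utf8string"
)

// truncateRunes keeps at most limit runes of s.
func truncateRunes(s string, limit int) string {
    u := utf8string.NewString(s)
    if u.RuneCount() <= limit {
        return s
    }
    return u.Slice(0, limit)
}

func main() {
    fmt.Println(truncateRunes("H㐀〾▓朗퐭텟şüöžåйкл¤", 5)) // H㐀〾▓朗
}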

Iveson answered 30/4, 2021 at 16:18 Comment(0)

Below is a function that truncates a UTF-8 encoded string to a maximum number of bytes, without corrupting the last rune.

// TruncateUTF8String truncates s to at most n bytes. If len(s) is more than n,
// it cuts before the start of the first rune that doesn't fit. s should
// consist of valid UTF-8. (Requires the standard library "unicode/utf8" package.)
func TruncateUTF8String(s string, n int) string {
    if len(s) <= n {
        return s
    }
    for n > 0 && !utf8.RuneStart(s[n]) {
        n--
    }
    return s[:n]
}

I assume the goal is to limit either the number of bytes to store or the number of characters to display. This function limits the number of bytes before storing. It is faster than counting runes or characters, since it only inspects the last few bytes.

Note: Multi-rune characters are not considered, so it might cut a character that is a combination of multiple runes. Example: café (with a decomposed é) can become cafe. I don't know if this problem can be completely avoided, but it can be reduced by performing Unicode normalization before truncating.
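A brief usage sketch (the byte limit of 20 is arbitrary, and the function body is repeated here so the example is self-contained): the cut point moves back past any partial rune, so the output is always valid UTF-8.

package main

import (
    "fmt"
    "unicode/utf8"
)

func TruncateUTF8String(s string, n int) string {
    if len(s) <= n {
        return s
    }
    for n > 0 && !utf8.RuneStart(s[n]) {
        n--
    }
    return s[:n]
}

func main() {
    s := "H㐀〾▓朗퐭텟şüöžåйкл¤"

    // Byte 20 falls inside a multi-byte rune (ş), so the cut moves back to the
    // rune boundary at byte 19 and the result stays valid UTF-8.
    t := TruncateUTF8String(s, 20)
    fmt.Println(t, len(t), utf8.ValidString(t))
}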

Trihedral answered 18/6, 2023 at 20:22 Comment(0)
