This sounds a lot like when I discovered I was writing the same code multiple times for arrays of different types in other programming languages like C, FPC, or Delphi. Using preprocessor tricks, I rigged up a form of parametric polymorphism for a language that will probably never have it implemented natively, and called it "include file parametric polymorphism": a proof of concept that you can in fact add parametric polymorphism to a procedural language without needing OOP or a complex generics system. Using the preprocessor that way is a form of abuse, though; it was just to prove the concept with FPC.
Since Go doesn't have a preprocessor, you have to fall back on interfaces or pointers and pass the type information in some other way. Even with pointers you still end up writing a lot of code to cast values back and forth and make it all work. Interfaces are the better of the two, because raw pointers sacrifice type safety. A minimal sketch of the interface route is below.
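To make that concrete, here is a rough sketch (my own illustration, assuming pre-generics Go and the reflect package) of the interface route: one implementation covers any slice type, but every call site still has to cast the result back.

package main

import (
	"fmt"
	"reflect"
)

// last returns the final element of any slice, using reflection to work
// around the lack of type parameters. It returns nil for an empty slice
// or a non-slice argument.
func last(slice interface{}) interface{} {
	v := reflect.ValueOf(slice)
	if v.Kind() != reflect.Slice || v.Len() == 0 {
		return nil
	}
	return v.Index(v.Len() - 1).Interface()
}

func main() {
	fmt.Println(last([]int{1, 2, 3}).(int))             // 3
	fmt.Println(last([]string{"a", "b", "c"}).(string)) // c
}

Those type assertions at the call sites are exactly the casting boilerplate I mean.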
Solutions like this:
last := a[len(a)-1]
are prone to bugs because someone may forget the minus 1. Some languages have something slightly better:
// return last element, the "high" of a
last := a[high(a)]
// return first element, the "low" of a
first := a[low(a)]
The above code doesn't work in Go AFAIK (I haven't researched whether Go has something similar); it's just what some other languages (FPC) have, and something Go might consider.
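For illustration, here is a rough sketch of how you could approximate the idea in Go today with plain helper functions; the names low and high are borrowed from FPC and are not part of Go or its standard library.

package main

import "fmt"

// low and high centralize the index arithmetic so call sites never write
// the "minus one" themselves. Without generics they only work for one
// element type, which is the same duplication problem all over again.
func low(a []int) int  { return 0 }
func high(a []int) int { return len(a) - 1 }

func main() {
	a := []int{10, 20, 30}
	first := a[low(a)]
	last := a[high(a)]
	fmt.Println(first, last) // 10 30
}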
This low and high way of doing things guarantees the first and last elements are chosen, whereas "minus one" is prone to basic math errors: someone may forget the minus one because they got confused between 1-based and 0-based arrays. Even if the language has no such thing as a 1-based array, a human can still make the error, because we sometimes think in 1-based terms (our fingers start at 1, not 0). Some clever programmers would argue that, no, our fingers start at zero: your thumb is zero. Okay, fine... but for most of the world ;-) we end up switching our brains between 1-based and 0-based all day long, real world versus computer world, and that causes numerous bugs in software.
But some would argue that low and high are just syntactic sugar that isn't necessary in a minimal language. It has to be decided whether the extra safety is worthwhile, which in many cases it is. How much complexity low() and high() would add to a compiler, and how they would affect performance, I'm not sure; I think the compiler could be smart about optimizing them, but I'm not certain.