I frequently need to calculate the mean and standard deviation of numeric arrays, so I've written a small protocol and extensions for the numeric types, and it seems to work. I'd just like feedback on whether there is anything wrong with how I've done this. Specifically, I'm wondering if there is a better way to check whether a type can be converted to a Double, to avoid the need for the asDouble property and the init(_: Double) initializer.
I know there are issues with protocols that allow for arithmetic, but this seems to work OK and saves me from putting the standard deviation function into every class that needs it.
import Foundation     // pow, sqrt
import CoreGraphics   // CGFloat

protocol Numeric {
    var asDouble: Double { get }
    init(_: Double)
}

extension Int: Numeric { var asDouble: Double { return Double(self) } }
extension Float: Numeric { var asDouble: Double { return Double(self) } }
extension Double: Numeric { var asDouble: Double { return self } }
extension CGFloat: Numeric { var asDouble: Double { return Double(self) } }
extension Array where Element: Numeric {
    // Caution: on an empty array the division yields NaN, and Element(NaN)
    // traps at runtime for integer element types.
    var mean: Element {
        return Element(self.reduce(0.0, combine: { $0 + $1.asDouble }) / Double(self.count))
    }
    var sd: Element {
        let mu = self.reduce(0.0, combine: { $0 + $1.asDouble }) / Double(self.count)
        let variances = self.map { pow($0.asDouble - mu, 2) }
        return Element(sqrt(variances.mean))
    }
}
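For reference, here is how it reads at the call site (the values are just illustrative):

let values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(values.mean)   // 5.0
print(values.sd)     // 2.0 (population standard deviation)

let counts = [1, 2, 3, 4]
print(counts.mean)   // 2, since Int(2.5) truncates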
edit: I know it's kind of pointless to get [Int].mean and sd, but I might use Numeric elsewhere, so it's for consistency.
edit: as @Severin Pappadeux pointed out, the variance can be expressed in a way that avoids the triple pass over the array (mean, then map, then mean). Here is the final standard deviation extension:
extension Array where Element: Numeric {
    var sd: Element {
        // Single pass: accumulate (sum, sum of squares) in one reduce.
        let sss = self.reduce((0.0, 0.0)) { ($0.0 + $1.asDouble, $0.1 + $1.asDouble * $1.asDouble) }
        let n = Double(self.count)
        return Element(sqrt(sss.1 / n - (sss.0 / n) * (sss.0 / n)))
    }
}
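For reference, the single-pass form relies on the usual variance identity, with sss.0 accumulating the sum and sss.1 the sum of squares:

\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)^2}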
Int is generally the same size as Int64 on newer devices (iPhone 5s and later, which introduced the 64-bit processor), so unless you're working with really large numbers this shouldn't be an issue: but just know that init(_: Double) can lead to an integer overflow (a runtime exception) in cases where the Element = Int type cannot store the integer representation of a given (huge) Double value. Possibly not an issue if you just use your Swift apps yourself, but in case you ship to customers, this might be good to bear in mind. – Wrongdoing
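A minimal sketch of the failure mode described above, with a conservative range check of my own (not from the comment) before converting back:

let huge = Double(Int.max) * 2.0
// let crash = Int(huge)   // traps: the value is too large to fit in an Int

// Strict < on the upper bound because Double(Int.max) rounds up to 2^63,
// which itself does not fit in an Int.
if huge >= Double(Int.min) && huge < Double(Int.max) {
    print(Int(huge))
} else {
    print("out of range for Int")
}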
You could include min and max properties in Numeric and check the double representation of these properties (under the assumption that all numeric values can be seen as "kind of" a subset of the range of valid Double values; i.e., always convertible to Double without any risk of overflow on that part, although I guess in the worst case we get Double.infinity) against the Double-valued sum from the reduce operation above. E.g. something along these lines. – Wrongdoing
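The linked gist isn't shown here, so this is only a guess at the shape of that suggestion; the property names and the failable helper are hypothetical, not from the comment:

protocol Numeric {
    var asDouble: Double { get }
    init(_: Double)
    // Representable range of the conforming type, expressed as Doubles.
    static var minAsDouble: Double { get }
    static var maxAsDouble: Double { get }
}

extension Int: Numeric {
    var asDouble: Double { return Double(self) }
    static var minAsDouble: Double { return Double(Int.min) }
    static var maxAsDouble: Double { return Double(Int.max) }
}

// Hypothetical failable conversion: nil instead of a runtime trap.
// (Caveat: Double(Int.max) rounds up to 2^63; see the overflow note above.)
func checked<T: Numeric>(value: Double) -> T? {
    guard value >= T.minAsDouble && value <= T.maxAsDouble else { return nil }
    return T(value)
}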
[…] Array extension in the gist somewhat with a helper function.) Yeah, I agree; my first thought was going straight for the &+ operators, but I guess we don't (yet) have anything similar for type conversions (failable initializers for such type coercion could be a nice addition). – Wrongdoing
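For reference, the &+ mentioned there is Swift's overflow-addition operator, which wraps instead of trapping:

let wrapped = Int.max &+ 1
print(wrapped == Int.min)   // true: the addition wraps around rather than crashing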