F# performance in scientific computing
I am curious as to how F# performance compares to C++ performance? I asked a similar question with regard to Java, and the impression I got was that Java is not suitable for heavy number crunching.

I have read that F# is supposed to be more scalable and more performant, but how does this real-world performance compare to C++? Specific questions about the current implementation are:

  • How well does it do floating-point?
  • Does it allow vector instructions?
  • How friendly is it towards optimizing compilers?
  • How big a memory footprint does it have? Does it allow fine-grained control over memory locality?
  • Does it have capacity for distributed memory processors, for example Cray?
  • What features does it have that may be of interest to computational science where heavy number processing is involved?
  • Are there actual scientific computing implementations that use it?

Thanks

Composed answered 2/5, 2010 at 2:8 Comment(1)
I removed C++ from the title to make it non-confrontational. However, I would like to know performance relative to C++ (so I can relate)Composed
65

I am curious as to how F# performance compares to C++ performance?

Varies wildly depending upon the application. If you are making extensive use of sophisticated data structures in a multi-threaded program then F# is likely to be a big win. If most of your time is spent in tight numerical loops mutating arrays then C++ might be 2-3× faster.

Case study: Ray tracer My benchmark here uses a tree for hierarchical culling and numerical ray-sphere intersection code to generate an output image. This benchmark is several years old and the C++ code has been improved upon dozens of times over the years and read by hundreds of thousands of people. Don Syme at Microsoft managed to write an F# implementation that is slightly faster than the fastest C++ code when compiled with MSVC and parallelized using OpenMP.

I have read that F# is supposed to be more scalable and more performant, but how does this real-world performance compare to C++?

Developing code is much easier and faster with F# than C++, and this applies to optimization as well as maintenance. Consequently, when you start optimizing a program the same amount of effort will yield much larger performance gains if you use F# instead of C++. However, F# is a higher-level language and, consequently, places a lower ceiling on performance. So if you have infinite time to spend optimizing you should, in theory, always be able to produce faster code in C++.

This is exactly the same benefit that C++ had over Fortran and Fortran had over hand-written assembler, of course.

Case study: QR decomposition This is a basic numerical method from linear algebra provided by libraries like LAPACK. The reference LAPACK implementation is 2,077 lines of Fortran. I wrote an F# implementation in under 80 lines of code that achieves the same level of performance. But the reference implementation is not fast: vendor-tuned implementations like Intel's Math Kernel Library (MKL) are often 10× faster. Remarkably, I managed to optimize my F# code well beyond the performance of Intel's implementation running on Intel hardware whilst keeping it under 150 lines of code and fully generic (it can handle single and double precision, and complex and even symbolic matrices!): for tall thin matrices my F# code is up to 3× faster than the Intel MKL.

Note that the moral of this case study is not that you should expect your F# to be faster than vendor-tuned libraries but, rather, that even experts like Intel's will miss productive high-level optimizations if they use only lower-level languages. I suspect Intel's numerical optimization experts failed to exploit parallelism fully because their tools make it extremely cumbersome whereas F# makes it effortless.
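
As a minimal sketch of the kind of data parallelism F# makes effortless (normalize here is a hypothetical stand-in for independent per-column work, such as the steps of a factorization), Array.Parallel.map spreads the work across cores via the .NET thread pool:

    // Normalize one column: independent, side-effect free work.
    let normalize (col: float[]) =
        let norm = sqrt (Array.sumBy (fun x -> x * x) col)
        Array.map (fun x -> x / norm) col

    // Process all columns in parallel; no locking is needed because
    // each column is handled independently.
    let parallelNormalize (cols: float[][]) =
        Array.Parallel.map normalize cols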

How well does it do floating-point?

Performance is similar to ANSI C but some functionality (e.g. rounding modes) is not available from .NET.

Does it allow vector instructions?

No.

How friendly is it towards optimizing compilers?

This question does not make sense: F# is a proprietary .NET language from Microsoft with a single compiler.

How big a memory footprint does it have?

An empty application uses 1.3 MB here.

Does it allow fine-grained control over memory locality?

Better than most memory-safe languages but not as good as C. For example, you can unbox arbitrary data structures in F# by representing them as "structs".
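
A minimal sketch of what that looks like: a [<Struct>] type is stored unboxed, so an array of them is one contiguous block of memory rather than an array of pointers to separately heap-allocated objects.

    [<Struct>]
    type Vec3 =
        val X : float
        val Y : float
        val Z : float
        new (x, y, z) = { X = x; Y = y; Z = z }

    // One contiguous allocation (24 bytes per element) with good locality.
    let points = Array.init 1000000 (fun i -> Vec3(float i, 0.0, 0.0))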

Does it have capacity for distributed memory processors, for example Cray?

Depends what you mean by "capacity for". If you can run .NET on that Cray then you could use message passing in F# (just like the next language) but F# is intended primarily for desktop multicore x86 machines.

What features does it have that may be of interest to computational science where heavy number processing is involved?

Memory safety means you do not get segmentation faults and access violations. The support for parallelism in .NET 4 is good. The ability to execute code on-the-fly via the F# interactive session in Visual Studio 2010 is extremely useful for interactive technical computing.
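
For instance, a sketch of an F# Interactive session (results elided); each ;;-terminated phrase is compiled and executed immediately against the live session:

    > let xs = Array.init 1000000 (fun i -> sin (float i));;
    > Array.average xs;;   // the result is printed back as 'val it : float = ...'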

Are there actual scientific computing implementations that use it?

Our commercial products for scientific computing in F# already have hundreds of users.

However, your line of questioning indicates that you think of scientific computing as high-performance computing (e.g. Cray) and not interactive technical computing (e.g. MATLAB, Mathematica). F# is intended for the latter.

Egregious answered 10/5, 2010 at 0:45 Comment(17)
In my earlier comments I'm thinking about what you're calling high-performance computing, not interactive.Discoverer
You haven't exactly posted that F# implementation that allegedly outperformed MATLAB :-)Nereidanereids
@Jon Harrop 'memory locality? Better than most memory-safe languages but not as good as C' Which options for such locality-control exist for C, which are not available in F#? And is this a language or a platform restriction? ThanksHippel
@user492238: In C, you can do things like smuggling bits in pointers and obtain interior pointers that point into the middle of a heap-allocated block of memory. Garbage collected languages will almost always prohibit this. So there are some sacrifices but they are relatively tiny.Egregious
@Jon Harrop. Right. But does this really differ from indexing into an array? Furthermore, as you know, C# does allow extensive pointer manipulation. But right, this question is about F#.Hippel
@user492238: Indexing into an array requires you to multiply the index by the size of an element and add that to the pointer to the start of the array before you can dereference an element. That is more expensive than just dereferencing a pointer but the difference is usually masked by bigger cache effects. Writing pointers is much more expensive in managed languages than C or C++ due to the write barrier (and that will also impact locality). C unions are another example.Egregious
@JonHarrop Thanks. I agree on the first part (even if it's not true for C# and its pointers ;). Which write barrier are you referring to? It sounds like there would be some mechanism preventing a computational loop in IL from performing as fast as the same loop in C++? Not that I know of...?Hippel
@user492238: "not true for C#". In what sense is that not true in C#? The write barrier I was referring to is a piece of code injected by the VM whenever user code writes a pointer into the heap. It is used to keep the GC apprised of changes to the heap topology: memorymanagement.org/glossary/w.html#write.barrierEgregious
@JonHarrop Ah. The remembered list of younger objects. Right. "In what sense.." C# allows this dereferencing as well: fixed (double* a = A) { double* p = a + 2000; p[0] = 4.0; p[1] = 5.0; } where p[1] = 5.0 is equivalent to A[2001] = 5.0. Once optimized and JIT'ed, there is no overhead at execution.Hippel
@user492238: "there is no overhead for execution". Interior pointers must be saved and reloaded across potential GC safe points because .NET has a moving collector that might move the heap block that the interior pointer is pointing into. There must also be some way to find the parent heap block given an interior pointer in order to prevent the GC from reclaiming it. This is most likely accomplished by carrying the parent pointer around "inside" the interior pointer. So I doubt there is no overhead in the general case.Egregious
@JonHarrop Watching the generated machine instructions makes it clear that there is really no overhead. I don't exactly know how this relates to your concerns about pointers and movement by the GC. Possibly because such pointer operations are only allowed inside a fixed context, so the target heap area is pinned and not getting moved.Hippel
@Hippel "only allowed inside a fixed context". Do you mean you cannot take an interior pointer and store it in the heap in C#? You can in C, of course.Egregious
So are you telling us, that your parallel QR decomposition is faster than a single-threaded one? Because it reads like that.Olsson
@ziggystar: Intel's MKL is parallelised, of course.Egregious
This post is full of unsubstantiated assertions. The idea that F# easily lets you create more performant code than C++ is especially questionable. I've been pretty deeply involved in F#, including many PRs to speed up the higher order Array functions and I can assure you this is not generally the case. That the creator of F# can create a faster thing in F# than you can in C++ may speak more to your relative talents in each language than any innate property of them.Abramabramo
@jackmott: I opened with "If you are making extensive use of sophisticated data structures in a multi-threaded program then F# is likely to be a big win. If most of your time is spent in tight numerical loops mutating arrays then C++ might be 2-3× faster" so I think it is pretty clear I'm not talking about the functions of the Array module here.Egregious
Just to add a comment on it years later. F# has access to native untyped pointer arithmetic through nativeint and nativeptr, and the ability to allocate both stack and heap memory, through NativeInterop and System.Runtime.InteropServices.Marshal.Berne
44

In addition to what others said, there is one important point about F#, and that's parallelism. The performance of ordinary F# code is determined by the CLR, although you may be able to use LAPACK from F# or make native calls using C++/CLI as part of your project.

However, well-designed functional programs tend to be much easier to parallelize, which means that you can easily gain performance by using multi-core CPUs, which are definitely available to you if you're doing scientific computing.

Regarding distributed computing, you can use any distributed computing framework that's available for the .NET platform. There is an MPI.NET project, which works well with F#, but you may also be able to use DryadLINQ, an MSR project.
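
To illustrate how cheap this parallelism is in F#, here is a minimal sketch (runSimulation is a hypothetical stand-in for any independent unit of work):

    // One independent simulation, expressed as an async computation.
    let runSimulation (seed: int) = async {
        let rng = System.Random(seed)
        return Seq.init 100000 (fun _ -> rng.NextDouble()) |> Seq.average
    }

    // Run eight simulations in parallel and collect all the results.
    let results =
        [ 1 .. 8 ]
        |> List.map runSimulation
        |> Async.Parallel
        |> Async.RunSynchronously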

Cement answered 2/5, 2010 at 9:48 Comment(0)
40
  • F# does floating point computation as fast as the .NET CLR will allow. There is not much difference from C# or other .NET languages.
  • F# does not allow vector instructions by itself, but if your CLR has an API for these, F# should not have problems using it. See for instance Mono.
  • As far as I know, there is only one F# compiler for the moment, so maybe the question should be "how good is the F# compiler when it comes to optimisation?". The answer is in any case "potentially as good as the C# compiler, probably a little bit worse at the moment". Note that F# differs from e.g. C# in its support for inlining at compile time, which potentially allows for more efficient code that relies on generics (see the sketch after this list).
  • The memory footprint of F# programs is similar to that of other .NET languages. The amount of control you have over allocation and garbage collection is the same as in other .NET languages.
  • I don't know about the support for distributed memory.
  • F# has very nice primitives for dealing with flat data structures, e.g. arrays and lists. Look for instance at the content of the Array module: map, map2, mapi, iter, fold, zip... Arrays are popular in scientific computing, I guess due to their inherently good memory locality properties.
  • For scientific computation packages using F#, you may want to look at what Jon Harrop is doing.
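
To illustrate the inlining point above, a minimal sketch: marking a function inline causes its body to be expanded and type-specialized at each call site, so the generic arithmetic below compiles down to primitive operations with no boxing or virtual dispatch.

    // Generic fold over an array; 'inline' makes the compiler emit a
    // specialized version for each element type used at a call site.
    let inline sumBy f (xs: _[]) =
        let mutable acc = LanguagePrimitives.GenericZero
        for x in xs do
            acc <- acc + f x
        acc

    let floatTotal = sumBy (fun x -> x * x) [| 1.0; 2.0; 3.0 |]  // specialized to float
    let intTotal   = sumBy id [| 1; 2; 3 |]                      // specialized to int
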
Cagey answered 5/5, 2010 at 15:29 Comment(1)
I would just like to point out that the question was F# vs C++ and this answer is F# vs C# and that C++ and C# are different languages.Rata
16

As with all language/performance comparisons, your mileage depends greatly on how well you can code.

F# is a derivative of OCaml. I was surprised to find out that OCaml is used a lot in the financial world, where number crunching performance is very important. I was further surprised to find out that OCaml is one of the faster languages, with performance on par with the fastest C and C++ compilers.

F# is built on the CLR. In the CLR, code is expressed in a form of bytecode called the Common Intermediate Language. As such, it benefits from the optimizing capabilities of the JIT, and has performance comparable to C# (but not necessarily C++), if the code is written well.

CIL code can be compiled to native code in a separate step prior to runtime by using the Native Image Generator (NGEN). This speeds up all later runs of the software as the CIL-to-native compilation is no longer necessary.

One thing to consider is that functional languages like F# benefit from a more declarative style of programming. In a sense, you are over-specifying the solution in imperative languages such as C++, and this limits the compiler's ability to optimize. A more declarative programming style can theoretically give the compiler additional opportunities for algorithmic optimization.
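
For example, compare a declarative pipeline with its imperative equivalent; the first states what to compute and leaves the iteration strategy to the library, while the second pins down one specific evaluation order:

    // Declarative: what to compute.
    let sumOfSquaresDeclarative (xs: float[]) =
        xs |> Array.filter (fun x -> x > 0.0) |> Array.sumBy (fun x -> x * x)

    // Imperative: how to compute it, step by step.
    let sumOfSquaresImperative (xs: float[]) =
        let mutable acc = 0.0
        for i in 0 .. xs.Length - 1 do
            if xs.[i] > 0.0 then acc <- acc + xs.[i] * xs.[i]
        acc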

Tapley answered 2/5, 2010 at 2:17 Comment(10)
Interesting. My world is limited somewhat to Fortran and C++, but I am trying to expand my horizons. I have not really seen OCaml applications in my fieldComposed
@Robert Harvey--I've heard that about OCaml as well. Blazing fast performance and small code as well.Sturgeon
F# is implemented in .NET, however, and that means it inherits some of its problems with regard to overspecification. F# functions are .NET methods behind the scenes, and these are guaranteed to execute in a particular order since they might have side effects - even if 99% of the time F# won't have these or you don't care about their order (e.g. debugging/logging statements). So, I'd caution against expecting too much performance from F# - it's nice; it can be reasonably fast - but it mostly gains brevity from its functional nature, not optimizability.Re
@Eamon: Actually most F# functions are not compiled into methods, execution order is not guaranteed in that sense (the compiler is free to reorder expressions just like the compilers for any high-performance language) and its functional nature does expose optimization opportunities (e.g. inlined higher-order functions with minimal overhead compared to hand-rolled loops).Egregious
Right, so if you use inlined functions and only use side-effect free operations (i.e. no .NET interop) then it can reorder. Unfortunately, as can be verified with reflector, plain F# functions are compiled into .NET methods. MS itself, on the MSDN page about inline functions, says "you should avoid using inline functions for optimization unless you have tried all other optimization techniques". But even if you do, what optimizations will F# make that similar code in C++ (static inline) couldn't make? With manual help, I'm sure F# is a step in the right direction - but it's no Haskell.Re
What I'm trying to say is not that it's impossible for F# to have specific advantages in particular situations, but that people shouldn't be led to believe those advantages are in any way automatic or even always achievable. Semantically, the language is just not that different from C# - even if it encourages you to use structures that are side-effect free on a local scope and even if the current compiler uses that information better than C#'s current compiler does. I really don't see how F#'s semantics enable more new compiler optimizations over, say, C++. No magic bullet, this...Re
@Eamon: As Haskell has shown, the optimizations that purity facilitates rarely lead to better performance in practice and usually result in uselessly unpredictable performance. Thanks to languages like Haskell, we now know that predictable performance characteristics are more valuable in practice than the mythical sufficiently smart compiler. Fortunately, F# did not copy Haskell's mistake.Egregious
@Jon: you're talking about enforced, absolute purity. I'm talking about the fact that F# can't take advantage of purity where practical at all (well, except if you use inline) - and that hurts. An automatically inferred (or occasionally manually placed) purity annotation could dramatically improve performance in many cases and leave semantics otherwise completely unaltered - compare with non-aliased parameters in some languages, or rvalue references in C++, or inlined functions in many languages. It doesn't require a super-smart compiler; detecting unmodified expressions is feasible today.Re
Another example is .NET's ngen-across-image boundaries attribute (I forget what it's called)Re
Can you give concrete examples of practical cases where you believe purity can dramatically improve performance?Egregious
9

It depends on what kind of scientific computing you are doing.

If you are doing traditional heavy computing, e.g. linear algebra or various optimizations, then you should not put your code in the .NET framework; F# in particular is not suitable. This is at the algorithm level, where most algorithms must be coded in an imperative language to get good performance in running time and memory usage. Others mentioned parallelism; I must say it is probably useless when you are doing low-level work like parallelizing an SVD implementation, because once you know how to parallelize an SVD, you simply won't use a high-level language. Fortran, C, or modified C (e.g. Cilk) are your friends.

However, a lot of scientific computing today is not of this kind; it consists of higher-level applications, e.g. statistical computing and data mining. In these tasks, aside from some linear algebra or optimization, there are also a lot of data flows, IO, preprocessing, graphics, etc. For these tasks F# is really powerful, thanks to its succinctness, functional style, safety, ease of parallelization, etc.

As others have mentioned, .NET supports Platform Invoke well; quite a few projects inside Microsoft actually use .NET and P/Invoke together to improve performance at the bottleneck.
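
A minimal sketch of what that looks like from F# ("libblas" is a hypothetical library name; cblas_ddot is the standard CBLAS dot-product entry point):

    open System.Runtime.InteropServices

    // Declare the native entry point; the runtime marshals the arrays
    // as raw pointers.
    [<DllImport("libblas")>]
    extern double cblas_ddot(int n, double[] x, int incx, double[] y, int incy)

    // Wrap it in an ordinary F# function for the hot inner product.
    let dot (x: float[]) (y: float[]) =
        cblas_ddot(x.Length, x, 1, y, 1)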

Buckden answered 3/5, 2010 at 1:50 Comment(3)
"at the algorithm level, most of the algorithms must be coded in an imperative languages to have good performance in running time and memory usage" [citation needed]Toritorie
The running time of these algorithms is measured in FLOPs, and that is hard to measure in high-level languages. Memory usage is also hard to predict, whereas in C and Fortran you can count precisely how many bytes you will be using.Buckden
"it's easier to figure out the performance by inspection in an imperative language" is VERY different from "only imperative languages give good performance". And also wrong. Second-order effects such as cache coherency are so important on modern processors, that measuring algorithms in FLOPs is worthless. Between a FLOP-optimized algorithm and a locality-optimized algorithm that required 10x the FLOPs, the locality-optimized algorithm will win. Repeat after me: the FPU is no longer the bottleneck.Catchascatchcan
7

I don't think that you'll find a lot of reliable information, unfortunately. F# is still a very new language, so even if it were ideally suited for performance heavy workloads there still wouldn't be that many people with significant experience to report on. Furthermore, performance is very hard to accurately gauge and microbenchmarks are hard to generalize. Even within C++, you can see dramatic differences between compilers - are you wondering whether F# is competitive with any C++ compiler, or with the hypothetical "best possible" C++ executable?

As to specific benchmarks against C++, here are some possibly relevant links: O'Caml vs. F#: QR decomposition; F# vs Unmanaged C++ for parallel numerics. Note that as an author of F#-related material and as the vendor of F# tools, the writer has a vested interest in F#'s success, so take these claims with a grain of salt.

I think it's safe to say that there will be some applications where F# is competitive on execution time and likely some others where it isn't. F# will probably require more memory in most cases. Of course the ultimate performance will also be highly dependent on the skill of the programmer - I think F# will almost certainly be a more productive language to program in for a moderately competent programmer. Furthermore, I think that at the moment, the CLR on Windows performs better than Mono on most OSes for most tasks, which may also affect your decisions. Of course, since F# is probably easier to parallelize than C++, it will also depend on the type of hardware you're planning to run on.

Ultimately, I think that the only way to really answer this question is to write F# and C++ code representative of the type of calculations that you want to perform and compare them.

Pearlypearman answered 2/5, 2010 at 17:43 Comment(11)
The F# compiler might be new (and the performance of the code generated by the F# compiler therefore unknown) but the functionally oriented part of F# is far from new. It can with no changes (this is only true for F# written in a specific way) be compiled as OCaml, which has been around for ages. OCaml is provably a very optimizer-friendly language (due to immutability, for one); if the optimizer in F# is on par with the OCaml optimizer, then heavy number crunching is very much suited for F#Tollbooth
@RuneFS - Achieving good performance in O'Caml often comes at the price of not using its higher-level constructs (see section 3.3 of janestreetcapital.com/minsky_weeks-jfp_18.pdf, for example). When talking about F# performance in the real world, the fact that the only current F# implementation runs on .NET (CLR or Mono) also means that certain optimizations may not be available. I am a huge F# fan, and in the future further optimizations may provide more speed, but at the moment I suspect that there are many applications where "optimal" C++ code would outperform "optimal" F# code.Pearlypearman
@Pearlypearman I pretty much agree with you. I was just trying to point out that the language is not new but that the compiler is, and your arguments would hold true under that statement (rephrased to make it clear what I meant: "The F# compiler is still a very new compiler")Tollbooth
F# runs fast enough. I don't expect its compiler to be able to drastically improve; the language is still at its core a side-effect-permitting language which guarantees a particular order of execution, greatly constraining optimization. e.g. let f x y = (expensive x |> g) y is fundamentally different from let f x = expensive x |> g in F#, even though they are semantically equivalent in a functional world.Re
@Eamon - There are certainly challenges. However, I think that your position is overly bleak. Because F# runs on the CLR, improvements to either the F# compiler itself or the CLR JIT will affect performance. There are probably plenty of places where the .NET JIT compiler can be dramatically improved (e.g. skipping a wider variety of provably unnecessary array bounds checks, inlining heuristic improvements, etc.). Given that this is the first production release of a language created by a small team, I also wouldn't be surprised if further effort could improve the F# compiler's output.Pearlypearman
Also, there's always the possibility that something like purity annotations might become available in a future version of the language, which would provide much more latitude for aggressive optimization.Pearlypearman
Purity annotations might be a big win for performance. And I'm not trying to belittle F# - it's just that I see its benefits more on the code brevity and readability side, rather than expecting many performance benefits. I'd rather people choose F# for those reasons that because they think perf is better - and then discard it when they discover it rarely is. As to new-and-improved CLR optimizations: the CLR is 10 years old. While it's certainly not perfect, I wouldn't count on radical performance enhancements anymore; the obvious improvements will have already been made.Re
Why would you expect purity annotations to be a big win for performance? How is that not just a step towards Haskell and its dire performance?Egregious
@Jon - Is it your contention that there are no situations where an optimization (e.g. stream fusion, perhaps) could only be safely applied if a function is known to be pure? How would being able to add purity annotations to an impure language put it on a path towards Haskell's "dire" performance?Pearlypearman
@kvb: There are obviously optimizations that depend upon knowledge of purity but Haskell already showed that such optimizations are virtually useless in practice: they unpredictably provide insignificant performance improvements in a few practically-irrelevant corner cases so developers end up enforcing the same optimizations by hand anyway to ensure that they get done when it is important. Where is this "big win" in performance of adding purity annotations supposed to come from?Egregious
@kvb: Results like these: texasmulticoretechnologies.com/technology/exampleEgregious
4

Here are two examples I can share:

  1. Matrix multiplication: I have a blog post comparing different matrix multiplication implementations (a naive triple-loop baseline is sketched after this list).

  2. LBFGS

I have a large-scale logistic regression solver using LBFGS optimization, which is coded in C++. The implementation is well tuned. I modified some of the code to C++/CLI, i.e. I compiled the code into .NET. The .NET version is 3 to 5 times slower than the natively compiled one on different datasets. If you code LBFGS in F#, the performance cannot be better than C++/CLI or C# (but it would be very close).
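
For reference, here is the naive triple loop that such matrix multiplication comparisons typically start from; tuned versions reorder the loops for cache locality, block the matrices, and parallelize:

    // Naive O(n^3) matrix multiplication over 2D arrays.
    let matmul (a: float[,]) (b: float[,]) =
        let n = Array2D.length1 a
        let m = Array2D.length2 b
        let p = Array2D.length2 a
        let c = Array2D.zeroCreate n m
        for i in 0 .. n - 1 do
            for j in 0 .. m - 1 do
                let mutable s = 0.0
                for k in 0 .. p - 1 do
                    s <- s + a.[i, k] * b.[k, j]
                c.[i, j] <- s
        c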

I have another post on Why F# is the language for data mining; although not quite related to the performance issue you are concerned with here, it is quite related to scientific computing in F#.

Buckden answered 5/5, 2010 at 1:29 Comment(6)
-1: This is not true: "If you code LBFGS in F#, the performance can not be better than C++/CLI or C#, (but would be very close).". This is exactly the kind of application where F# can be a lot faster than C#.Egregious
@Jon Why? Do you mean 'parallel'?Buckden
Optimization algorithms are higher-order functions that accept the function to be minimized as an argument. This design pattern is better represented in F# where both functions can be marked inline in order to remove the performance overhead of the abstraction. C# cannot express this so you must pay a hefty performance penalty for the functional abstraction in C# but not in F#.Egregious
@Jon. I have coded LBFGS, so I know the tricks to improve performance and memory usage that must be coded in an imperative style. FP seems to have good design patterns here, but the performance has less to do with style, especially for highly optimized numerical code. In most problems that use LBFGS, the time cost is mainly in the function value and gradient calculations; very little is spent in LBFGS itself. Making it inline does boost the performance if there are far more LBFGS or line search iterations than computation in the function value and gradient. However, this is generally not true.Buckden
Second, I don't see the performance issue in directly passing a vector (an array pointer) to a function, running it, and having it return another pointer to the gradient array. Inline helps if this function costs only a little time, so that the overhead of the interaction matters. Because the gradient array is often of a big size (this is why we need Limited-memory BFGS), we must make sure the gradient array is pre-allocated and reused in future iterations. There is just a lot of imperative thinking in the implementation of this kind of stuff.Buckden
No, the main benefit of inline in F# is not that it removes the overhead of function calls but, rather, that it causes the CLR to type-specialize your code. If your LBFGS is only handling float array or vector inputs and outputs then you have type specialized it by hand for one particular case and that has made it much less useful. A general-purpose BFGS implementation should read its input and write its output directly in the user's data structures using functions that the user supplies. F# has a huge performance advantage over C# here.Egregious
3

If I say "ask again in 2-3 years" I think that will answer your question completely :-)

First, don't expect F# to be any different than C# perf-wise, unless you are doing some convoluted recursions on purpose and I'd guess you are not since you asked about numerics.

Floating-point wise it is bound to be better than Java, since the CLR doesn't aim at cross-platform uniformity, meaning that the JIT will go to 80 bits whenever it can. On the other side, you have no control over that beyond watching the number of variables to make sure there are enough FP registers.

Vector-wise, if you scream loud enough maybe something will happen in 2-3 years, since Direct3D is entering .NET as a general API anyway, and C# code done in XNA runs on the Xbox, which is as close to the bare metal as you can get with the CLR. That still means that you'd need to do some intermediary code on your own.

So don't expect CUDA or even ability to just link NVIDIA libs and get going. You'd have much more luck trying that approach with Haskell if for some reason you really, really need a "functional" language since Haskell was designed to be linking-friendly out of pure necessity.

Mono.Simd has been mentioned already and while it should be back-portable to CLR it might be quite some work to actually do it.
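
If you want to experiment, here is a hedged sketch against the Mono.Simd API as it existed around that time (the type and operator names should be double-checked against your Mono version); Vector4f operations JIT-compile to SSE instructions on a SIMD-enabled Mono runtime and fall back to plain scalar code elsewhere:

    open Mono.Simd

    // Add two arrays of 4-wide float vectors; each '+' can map to a
    // single SSE addition when the runtime supports it.
    let addAll (a: Vector4f[]) (b: Vector4f[]) =
        Array.init a.Length (fun i -> a.[i] + b.[i])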

There's quite some code in a social.msdn posting on using SSE3 in .NET, with C++/CLI and C#, some array blitting, injecting SSE3 code for perf, etc.

There was some talk about running CECIL on compiled C# to extract parts into HLSL, compile them into shaders and link glue code to schedule it (CUDA is doing the equivalent anyway), but I don't think that anything runnable came out of that.

A thing that might be worth more to you if you want to try something soon is PhysX.Net on CodePlex. Don't expect it to just unpack and do the magic. However, it currently has an active author, and the code is both normal C++ and C++/CLI, so you can probably get some help from the author if you want to go into details and maybe use a similar approach for CUDA. For full-speed CUDA you'll still need to compile your own kernels and then just interface to .NET, so the easier that part goes, the happier you are going to be.

There is a CUDA.NET lib which is supposed to be free, but the page gives just an e-mail address, so expect some strings attached; and while the author writes a blog, he's not particularly talkative about what's inside the lib.

Oh, and if you have the budget you might give Psi Lambda a look (KappaCUDAnet is the .NET part). Apparently they are going to jack up the prices in November (if it's not a sales trick :-)

Nereidanereids answered 13/10, 2010 at 12:30 Comment(1)
The optimization of pattern matches is one area where F# has the potential to do a lot but C# does nothing. This is relevant to symbolic computations in scientific computing. Not coincidentally, some of the world's largest symbolic computations were written in F#'s predecessor, OCaml.Egregious
2

Firstly, C is significantly faster than C++, so if you need that much speed you should make the lib etc. in C.

With regard to F#, most benchmarks use Mono, which is up to 2× slower than the MS CLR, due partially to its use of the Boehm GC (they have a new GC and LLVM, but these are still immature, don't support generics, etc.).

.NET languages themselves are compiled to an IR (the CIL), which compiles to native code as efficiently as C++. There is one problem set that most GC'd languages suffer in, and that is large amounts of mutable writes (this includes C++/CLI, as mentioned above), and there is a certain scientific problem set that requires this; when needed, these should probably use a native library or use the Flyweight pattern to reuse objects from a pool (which reduces writes). The reason is that there is a write barrier in the .NET CLR: when updating a reference field (including a box), it sets a bit in a table saying that this region is modified. If your code consists of lots of such writes it will suffer.
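
A sketch of the difference (with hypothetical types for illustration): storing a reference into the heap goes through the write barrier, while an in-place struct write is a plain memory store.

    // Heap-allocated record: each array slot holds a reference.
    type BoxedPoint = { mutable X: float; mutable Y: float }

    // Unboxed struct: the array holds the data itself.
    [<Struct>]
    type FlatPoint =
        val mutable X : float
        val mutable Y : float

    let boxed = Array.init 1000 (fun _ -> { X = 0.0; Y = 0.0 })
    let flat  = Array.zeroCreate<FlatPoint> 1000

    // Reference write: a new object's reference is stored into the
    // array, so the CLR write barrier runs.
    boxed.[0] <- { X = 1.0; Y = 2.0 }

    // Struct write: mutates the element in place, no barrier involved.
    flat.[0].X <- 1.0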

That said, a .NET app in a language like C# using lots of static code, structs, and ref/out on the structs can produce C-like performance, but it is very difficult to code like this or maintain the code (as with C).

Where F# shines, however, is parallelism over immutable data, which goes hand in hand with more read-based problems. It's worth noting that most benchmarks are much heavier in mutable writes than real-life applications.

With regard to floating point, you should use an alternative lib (i.e. the .NET one) to the OCaml ones, due to the latter being slow. C/C++ allows faster operation at lower precision, which OCaml doesn't by default.

Lastly, I would argue that a high-level language like C# or F# plus proper profiling will give you better performance than C and C++ for the same developer time. If you change a bottleneck to a C lib P/Invoke call, you will also end up with C-like performance for critical areas. That said, if you have an unlimited budget and care more about speed than maintenance, then C is the way to go (not C++).

Rawalpindi answered 18/6, 2010 at 0:59 Comment(0)
1

Last I knew, most scientific computing was still done in FORTRAN. It's still faster than anything else for linear algebra problems - not Java, not C, not C++, not C#, not F#. LINPACK is nicely optimized.

But the remark about "your mileage may vary" is true of all benchmarks. Blanket statements (except mine) are rarely true.

Discoverer answered 2/5, 2010 at 2:21 Comment(7)
Sorry, I don't understand this comment at all.Discoverer
Most of them are still Fortran because of inertia (I do not think Fortran has much advantage today). The same goes for LINPACK (which is superseded by LAPACK). Some recent BLAS implementations, such as ATLAS and GotoBLAS, are actually C and platform intrinsics rather than Fortran.Composed
My data is dated, I'll admit. But I'd be interested in seeing a benchmark comparing Fortran and C today for linear algebra. The big key question: What language are vendors of modern, commercial packages using?Discoverer
That I do not know. I looked at the binary strings of MKL, and it appears to be a mixture of C and Fortran, more Fortran. However, I would have thought that there would be some large hand-tuned assembly for kernels. Would be interesting to know indeed.Composed
Our modern commercial packages for numerical computing are written in F# and it beats Fortran quite happily. FFTW provides the FFT routines in MATLAB and is written in OCaml and beats everything else quite happily.Egregious
@JonHarrop It's very hard to write Fortran code that is always faster; however, it is possible, and should he be able to write it, duffymo's peers certainly won't be able to read it.Eo
I see no reason to single me out on a comment that's five years old. You have no idea what I can read and understand.Discoverer
