C++/CLI performance compared to Native C++?


Good morning,

I am writing a spell checker which, in this case, is performance-critical. Since I plan to connect to a DB and build the GUI in C#, I wrote an edit-distance calculation routine in C, compiled it to a DLL, and call it from C# via DllImport. The problem is that I suspect (though I may well be wrong) that marshalling words one by one from String to char * is causing a lot of overhead. So I am considering C++/CLI, which would let me work with the .NET String type directly... My question, then, is how C++/CLI performance compares to native C code for heavy mathematical calculations and array access.
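
A declaration of the kind I mean looks roughly like this (a sketch; the DLL and entry-point names are made up):

    using System.Runtime.InteropServices;

    // Illustrative P/Invoke declaration (DLL and function names are
    // placeholders). With CharSet.Ansi, every call copies each .NET
    // String into a temporary char* buffer before the native code runs.
    [DllImport("editdist.dll", CharSet = CharSet.Ansi)]
    static extern int edit_distance(string word1, string word2);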

Thank you very much.

Harping answered 6/12, 2010 at 10:33 Comment(2)
I think CIL does the same, just implicitly.Spoofery
Why do you pass the words one by one? Pass the whole text.Coachandfour
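
A minimal sketch of that batching idea (all names hypothetical): marshal the whole word list once per call and let the C side split it, writing one result per word into a caller-supplied array.

    using System.Runtime.InteropServices;

    // Hypothetical batched import: one marshalling operation for the whole
    // list. The C side splits the buffer on '\n' and fills distances[i]
    // with the edit distance of the i-th word from `target`.
    [DllImport("editdist.dll", CharSet = CharSet.Ansi)]
    static extern void edit_distances(string newlineSeparatedWords,
                                      string target,
                                      int[] distances, int wordCount);

On the C# side, string.Join("\n", words) builds the buffer once, so the marshalling cost is paid per batch rather than per word.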

C++/CLI will have to do some kind of marshaling too.

Like all performance-related problems, you should measure and optimize. Are you sure C# is not going to be fast enough for your purposes? Don't underestimate the optimizations the JIT compiler is going to do, and don't speculate about the overhead of a language implementation, solely because it is managed, without trying it. If that's not enough, have you considered unsafe C# code (with pointers) before trying unmanaged code?
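
As an illustration of the unsafe option, here is a minimal sketch of a Levenshtein edit distance with pointers and stack-allocated rows (my own sketch, not the asker's routine; compile with /unsafe, and note that stackalloc assumes short, word-sized inputs):

    // Two-row Levenshtein distance. `fixed` pins the strings so they can
    // be read through raw char pointers without bounds checks.
    static unsafe int EditDistance(string a, string b)
    {
        int n = a.Length, m = b.Length;
        int* prev = stackalloc int[m + 1];
        int* curr = stackalloc int[m + 1];
        for (int j = 0; j <= m; j++) prev[j] = j;
        fixed (char* pa = a, pb = b)
        {
            for (int i = 1; i <= n; i++)
            {
                curr[0] = i;
                for (int j = 1; j <= m; j++)
                {
                    int cost = (pa[i - 1] == pb[j - 1]) ? 0 : 1;
                    int del = prev[j] + 1;
                    int ins = curr[j - 1] + 1;
                    int sub = prev[j - 1] + cost;
                    int best = del < ins ? del : ins;
                    curr[j] = best < sub ? best : sub;
                }
                int* tmp = prev; prev = curr; curr = tmp;  // swap rows
            }
        }
        return prev[m];
    }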

Regarding the performance profile of C++/CLI, it really depends on how it's used. If you compile to managed code (CIL) with /clr:pure, it's not going to be very different from C#. Native C++ functions in C++/CLI will have performance characteristics similar to plain C++. Passing objects between the native C++ and CLI environments will have some overhead.

Udele answered 6/12, 2010 at 10:50 Comment(1)
Unsafe C# code is about twice as slow as the C function I am importing with DllImport.Harping

I would not expect the bottleneck to be DllImport. I have written programs which call DllImport functions several hundred times per second, and they work just fine. You will pay a small performance penalty, but it is small.
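
If in doubt, the per-call overhead is cheap to measure (a sketch, reusing the hypothetical edit_distance import shown in the question):

    using System;
    using System.Diagnostics;

    // Time a large number of P/Invoke calls and report the average cost,
    // which includes the string marshalling on every call.
    const int N = 100000;
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < N; i++)
        edit_distance("kitten", "sitting");
    sw.Stop();
    Console.WriteLine("{0:F3} us per call",
                      sw.Elapsed.TotalMilliseconds * 1000.0 / N);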

Plossl answered 6/12, 2010 at 11:4 Comment(0)

Don't assume you know what needs to be optimized. Let sampling tell you.

I've written a couple of spelling correctors, and the way I did it (outlined here) was to organize the dictionary as a trie in memory and search over it. If the number of words is large, the size of the trie can be much reduced by sharing common suffixes.
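
A minimal trie sketch in C# for concreteness (illustrative; not the answerer's actual code):

    using System.Collections.Generic;

    // One node per character; a word is marked at its final node.
    class TrieNode
    {
        public readonly Dictionary<char, TrieNode> Children =
            new Dictionary<char, TrieNode>();
        public bool IsWord;
    }

    class Trie
    {
        public readonly TrieNode Root = new TrieNode();

        public void Insert(string word)
        {
            TrieNode node = Root;
            foreach (char c in word)
            {
                TrieNode child;
                if (!node.Children.TryGetValue(c, out child))
                {
                    child = new TrieNode();
                    node.Children[c] = child;
                }
                node = child;
            }
            node.IsWord = true;
        }
    }

Sharing common suffixes, as suggested above, amounts to merging identical subtrees, which turns the trie into a DAWG (directed acyclic word graph).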

Autoclave answered 6/12, 2010 at 13:20 Comment(2)
That's not the case... I am in fact using a BK-tree, so my approach is significantly different from the one you described.Harping
@Miguel: OK, corrected. In any case, what I did was a branch-and-bound search in the trie, which worked pretty well. An alternative is mixed depth-first / breadth-first, but I thought branch-and-bound was about the same performance and much more flexible, in terms of the kinds of misspellings it could handle.Autoclave
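
One plausible reading of that branch-and-bound search, building on the Trie sketch above (again illustrative, not the answerer's code): carry one row of the edit-distance matrix down each trie branch, and prune a branch as soon as the row minimum exceeds the allowed distance.

    using System;
    using System.Collections.Generic;

    static class FuzzySearch
    {
        // Return all dictionary words within maxDist edits of `word`.
        public static List<string> Lookup(Trie trie, string word, int maxDist)
        {
            var results = new List<string>();
            // Distance from the empty prefix to each prefix of `word`.
            int[] firstRow = new int[word.Length + 1];
            for (int j = 0; j <= word.Length; j++) firstRow[j] = j;
            foreach (var kv in trie.Root.Children)
                Walk(kv.Value, kv.Key, word, firstRow, kv.Key.ToString(),
                     maxDist, results);
            return results;
        }

        static void Walk(TrieNode node, char c, string word, int[] prevRow,
                         string prefix, int maxDist, List<string> results)
        {
            int m = word.Length;
            int[] row = new int[m + 1];
            row[0] = prevRow[0] + 1;
            for (int j = 1; j <= m; j++)
            {
                int cost = (word[j - 1] == c) ? 0 : 1;
                row[j] = Math.Min(Math.Min(row[j - 1] + 1, prevRow[j] + 1),
                                  prevRow[j - 1] + cost);
            }
            if (node.IsWord && row[m] <= maxDist)
                results.Add(prefix);
            // Bound: row values never drop below the current row minimum
            // further down the branch, so prune when it exceeds maxDist.
            int min = row[0];
            for (int j = 1; j <= m; j++) if (row[j] < min) min = row[j];
            if (min <= maxDist)
                foreach (var kv in node.Children)
                    Walk(kv.Value, kv.Key, word, row, prefix + kv.Key,
                         maxDist, results);
        }
    }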
