Task.Yield - real usages?
I've been reading about Task.Yield, and as a JavaScript developer I can tell that its job is exactly the same as setTimeout(function (){...},0); in terms of letting the single main thread deal with other stuff, aka:

"don't take all the power; release from time to time - so others would have some too..."

In JS this is used particularly in long loops (so the browser doesn't freeze...).

But I saw this example here:

public static async Task<int> FindSeriesSum(int i1)
{
    int sum = 0;
    for (int i = 0; i < i1; i++)
    {
        sum += i;
        if (i % 1000 == 0) // after a bulk, release power to the main thread
            await Task.Yield();
    }

    return sum;
}

As a JS programmer I can understand what they did here.

BUT as a C# programmer I ask myself: why not open a task for it?

public static async Task<int> FindSeriesSum(int i1)
{
    //do something....
    return await MyLongCalculationTask();
    //do something
}

Question

With JS I can't open a Task (yes, I know I actually can with web workers), but with C# I can.

If so - why even bother releasing from time to time when I can release it entirely by running the whole thing as a task?

Edit

Adding references:

From here: [book excerpt image]

From here (another ebook): [book excerpt image]

Mindexpanding answered 2/5, 2014 at 15:22 Comment(3)
That's a poor example, because Task.Yield won't keep the UI responsive. Also, async and await do not free up the UI thread; if you have a long calculation, you'll need to use Task.Run.Intersperse
@StephenCleary Considering a multi threaded app (ASP.NET or similar) where a task is already being executed on the thread pool and the thread pool is a valuable resource: If a long running CPU intensive task would call await Task.Yield() every second then other short running tasks should get a chance without pending in the queue for a long time. Wouldn't this be fair behavior?Cytosine
@springy76: The preemptive thread switching built into every modern OS combined with the self-adjusting thread pool in .NET will work far better than a manual await Task.Yield().Intersperse

When you see:

await Task.Yield();

you can think about it this way:

await Task.Factory.StartNew(
    () => {},
    CancellationToken.None,
    TaskCreationOptions.None,
    SynchronizationContext.Current != null ?
        TaskScheduler.FromCurrentSynchronizationContext() :
        TaskScheduler.Current);

All this does is make sure the continuation will happen asynchronously in the future. By asynchronously I mean that the execution control will return to the caller of the async method, and the continuation callback will not happen on the same stack frame.

When exactly and on what thread it will happen completely depends on the caller thread's synchronization context.

For a UI thread, the continuation will happen upon some future iteration of the message loop, run by Application.Run (WinForms) or Dispatcher.Run (WPF). Internally, it comes down to the Win32 PostMessage API, which posts a custom message to the UI thread's message queue. The await continuation callback will be called when this message gets pumped and processed. You have no control over when exactly this is going to happen.

Besides, Windows has its own priorities for pumping messages: INFO: Window Message Priorities. The most relevant part:

Under this scheme, prioritization can be considered tri-level. All posted messages are higher priority than user input messages because they reside in different queues. And all user input messages are higher priority than WM_PAINT and WM_TIMER messages.

So, if you use await Task.Yield() to yield to the message loop in an attempt to keep the UI responsive, you are actually at risk of obstructing the UI thread's message loop. Some pending user input messages, as well as WM_PAINT and WM_TIMER, have a lower priority than the posted continuation message. Thus, if you do await Task.Yield() in a tight loop, you may still block the UI.

This is how it differs from the JavaScript setTimeout analogy you mentioned in the question. A setTimeout callback will be called after all user input messages have been processed by the browser's message pump.

So, await Task.Yield() is not good for doing background work on the UI thread. In fact, you very rarely need to run a background process on the UI thread, but sometimes you do, e.g. editor syntax highlighting, spell checking etc. In this case, use the framework's idle infrastructure.

E.g., with WPF you could do await Dispatcher.Yield(DispatcherPriority.ApplicationIdle):

async Task DoUIThreadWorkAsync(CancellationToken token)
{
    var i = 0;

    while (true)
    {
        token.ThrowIfCancellationRequested();

        await Dispatcher.Yield(DispatcherPriority.ApplicationIdle);

        // do the UI-related work item
        this.TextBlock.Text = "iteration " + i++;
    }
}

For WinForms, you could use Application.Idle event:

// await IdleYield();

public static Task IdleYield()
{
    var idleTcs = new TaskCompletionSource<bool>();
    // subscribe to Application.Idle
    EventHandler handler = null;
    handler = (s, e) =>
    {
        Application.Idle -= handler;
        idleTcs.SetResult(true);
    };
    Application.Idle += handler;
    return idleTcs.Task;
}

It is recommended that you do not exceed 50ms for each iteration of such background operation running on the UI thread.

For a non-UI thread with no synchronization context, await Task.Yield() just switches the continuation to a random pool thread. There is no guarantee it is going to be a different thread from the current one; it's only guaranteed to be an asynchronous continuation. If the thread pool is starved, it may schedule the continuation onto the same thread.

In ASP.NET, doing await Task.Yield() doesn't make sense at all, except for the workaround mentioned in @StephenCleary's answer. Otherwise, it will only hurt the web app performance with a redundant thread switch.

So, is await Task.Yield() useful? IMO, not much. It can be used as a shortcut to run the continuation via SynchronizationContext.Post or ThreadPool.QueueUserWorkItem, if you really need to impose asynchrony upon a part of your method.

Regarding the books you quoted, in my opinion those approaches to using Task.Yield are wrong. I explained why they're wrong for a UI thread, above. For a non-UI pool thread, there are simply no "other tasks in the thread to execute", unless you're running a custom task pump like Stephen Toub's AsyncPump.

Updated to answer the comment:

... how can it be an asynchronous operation and stay on the same thread? ...

As a simple example: WinForms app:

async void Form_Load(object sender, EventArgs e)
{ 
    await Task.Yield(); 
    MessageBox.Show("Async message!");
}

Form_Load will return to the caller (the WinForms framework code which fired the Load event), and then the message box will be shown asynchronously, upon some future iteration of the message loop run by Application.Run(). The continuation callback is queued with WinFormsSynchronizationContext.Post, which internally posts a private Windows message to the UI thread's message loop. The callback will be executed when this message gets pumped, still on the same thread.

In a console app, you can run a similar serializing loop with AsyncPump mentioned above.

Diversity answered 3/5, 2014 at 8:6 Comment(6)
Noseration, hi :-). You said: "There is no guarantee it is going to be a different thread from the current thread, it's only guaranteed to be an asynchronous continuation". — But if it stays on the same thread, doesn't the operation become a synchronous operation? In other words, how can it be an asynchronous operation and stay on the same thread? Asynchrony is about continuations - meaning "I will come back and continue where it stopped" - but if it is on the same thread, it most likely never left the operation..... no?Mindexpanding
Hey @RoyiNamir, it's quite possible to continue asynchronously on the same thread, check my update.Diversity
Yes :), but you wrote that comment in the non-UI thread section, which confuses me.....Mindexpanding
@RoyiNamir, you can do that on a non-UI thread too, with the magic of SynchronizationContext. Have you looked at AsyncPump? After all, asynchrony doesn't assume multi-threading. It's just something that will deliver a result in the future. Note, even when you deal with ThreadPool without any s.context, there's a chance that the continuation will happen on the same thread (e.g., the same pool thread may start and later serve the completion of an async I/O operation, by coincidence).Diversity
I personally use Task.Yield in only one scenario: protected async virtual Task OverridableMethod() { await Task.Yield(); } - here I want a default async implementation that just does nothing, but I can't leave it empty because the method needs to return a Task. Maybe there is a better way to achieve this?Exonerate
@bN_, there is: protected virtual Task OverridableMethod() { return Task.CompletedTask; }. Note async is not a part of the virtual method signature, so you can still use it when overriding this method in a derived class.Diversity

I've only found Task.Yield useful in two scenarios:

  1. Unit tests, to ensure the code under test works appropriately in the presence of asynchrony (a minimal sketch follows below).
  2. To work around an obscure ASP.NET issue where identity code cannot complete synchronously.
Intersperse answered 2/5, 2014 at 20:53 Comment(0)

No, it's not exactly like using setTimeout to return control to the UI. In JavaScript that would always let the UI update, as setTimeout always has a minimum pause of a few milliseconds and pending UI work has priority over timers, but await Task.Yield(); doesn't do that.

There is no guarantee that the yield will let any work be done on the main thread; on the contrary, the code that called the yield will often be prioritised over UI work.

"The synchronization context that is present on a UI thread in most UI environments will often prioritize work posted to the context higher than input and rendering work. For this reason, do not rely on await Task.Yield(); to keep a UI responsive."

Ref: MSDN: Task.Yield Method

Overhear answered 2/5, 2014 at 15:41 Comment(4)
If we are not to rely on it - then what is it there for? Can you supply a REAL example?Mindexpanding
@RoyiNamir: It's there for handling multithreading, but not in a way that is reliable for keeping the UI responsive. It makes the method asynchronous, but it doesn't keep it from competing with the main thread for CPU.Overhear
If it may win priority over the main thread, then why would I want to use it? What you are saying is that the code may run sync and not asyncMindexpanding
@RoyiNamir: The code runs asynchronously, but you may get the impression that it runs synchronously because of the way that the work is prioritised. Async tasks are more suited for work that is waiting for something to happen; for CPU-intensive work you would rather look into the AsParallel method.Overhear

First of all, let me clarify: Yield is not exactly the same as setTimeout(function (){...},0);. JS is executed in a single-threaded environment, so that is the only way to let other activities happen - a kind of cooperative multitasking. .NET runs in a preemptive multitasking environment with explicit multithreading.

Now back to Task.Yield. As I said, .NET lives in a preemptive world, but it is a little more complicated than that. C# async/await creates an interesting mixture of those multitasking modes, ruled by state machines. If you omit Yield from your code, it will just block the calling thread and that's it. If you make it a regular task and just call Start (or run it on a thread), it will do its stuff in parallel and later block the calling thread when task.Result is called. What happens when you do await Task.Yield(); is more complicated. Logically it unblocks the calling code (similar to JS) and execution goes on. What it actually does is pick another thread and continue execution on it, running preemptively alongside the calling thread. So the method runs on the calling thread until the first Task.Yield, and then it is on its own. Subsequent calls to Task.Yield apparently don't do anything.

Simple demonstration:

class MainClass
{
    //Just to reduce amount of log items
    static HashSet<Tuple<string, int>> cache = new HashSet<Tuple<string, int>>();

    public static void LogThread(string msg, bool clear=false) {
        if (clear)
            cache.Clear ();
        var val = Tuple.Create(msg, Thread.CurrentThread.ManagedThreadId);
        if (cache.Add (val))
            Console.WriteLine ("{0}\t:{1}", val.Item1, val.Item2);
    }

    public static async Task<int> FindSeriesSum(int i1)
    {
        LogThread ("Task enter");
        int sum = 0;
        for (int i = 0; i < i1; i++)
        {
            sum += i;
            if (i % 1000 == 0) {
                LogThread ("Before yield");
                await Task.Yield ();
                LogThread ("After yield");
            }
        }
        LogThread ("Task done");
        return sum;
    }

    public static void Main (string[] args)
    {
        LogThread ("Before task");
        var task = FindSeriesSum(1000000);
        LogThread ("While task", true);
        Console.WriteLine ("Sum = {0}", task.Result);
        LogThread ("After task");
    }
}

Here are results:

Before task     :1
Task enter      :1
Before yield    :1
After yield     :5
Before yield    :5
While task      :1
Before yield    :5
After yield     :5
Task done       :5
Sum = 1783293664
After task      :1
  • Output produced on mono 4.5 on Mac OS X, results may vary on other setups

If you move Task.Yield to the top of the method, it will be async from the beginning and will not block the calling thread.

Conclusion: Task.Yield makes it possible to mix sync and async code. A more or less realistic scenario: you have some heavy computational operation, a local cache, and a task CalcThing. In this method you check whether the item is in the cache; if yes, return the item; if not, Yield and proceed to calculate it. Actually, the sample from your book is rather meaningless because nothing useful is achieved there. Their remark regarding GUI interactivity is just bad and incorrect (the UI thread will be locked until the first call to Yield - you should never do that). MSDN is clear (and correct) on that: "do not rely on await Task.Yield(); to keep a UI responsive".

Vespiary answered 2/5, 2014 at 16:41 Comment(2)
The analogy to JS was in the context where GUI apps in C# have a single main thread which runs the UI -- same as the JS single loop (and in JS the solution is setTimeout(...,0)).Mindexpanding
love the demo :)Leidaleiden

I think that nobody provided the real answer as to when to use Task.Yield. It is mostly needed if a task runs a never-ending loop (or a lengthy synchronous job) and can potentially hold a thread pool thread exclusively (not allowing other tasks to use this thread). This can happen if the code inside the loop runs synchronously. Task.Yield reschedules the task to the thread pool queue, and the other tasks which were waiting for the thread can be executed.

The example:

    CancellationTokenSource cts;

    void Start()
    {
        cts = new CancellationTokenSource();

        // run async operation
        var task = Task.Run(() => SomeWork(cts.Token), cts.Token);
        // wait for completion
        // after the completion handle the result/ cancellation/ errors
    }

    async Task<int> SomeWork(CancellationToken cancellationToken)
    {
        int result = 0;

        bool loopAgain = true;
        while (loopAgain)
        {
            // do something ... means a substantial work or a micro batch here - not processing a single byte

            loopAgain = /* check for loop end && */ !cancellationToken.IsCancellationRequested;
            if (loopAgain) {
                // reschedule the task to the threadpool and free this thread for other waiting tasks
                await Task.Yield();
            }
        }
        cancellationToken.ThrowIfCancellationRequested();
        return result;
    }

    void Cancel()
    {
        // request cancelation
        cts.Cancel();
    }
Cottier answered 9/11, 2018 at 8:11 Comment(11)
From the second thought, you could probably use it like that, but it doesn't feel like a good design: queuing a large pile of CPU-bounds tasks to the thread pool (more than it can handle), so they have to yield with Task.Yield() to give their peers a chance to execute. There are better options for concurrent CPU-bound scenarios like this, e.g. Parallel or TPL Dataflow.Diversity
@Diversity A possible usage of this pattern is if the code in the loop executes a user-defined handler. The code does not know how that handler behaves - whether it is async or synchronous. So as a precaution, at the end of the loop it is better to have a Task.Yield. This is common in message handling buses, where the bus instantiates handlers and processes messages. If it has a lot of workers which run in parallel, then it can exhaust the thread pool threads. I used it implementing a worker coordinator in github.com/BBGONE/REBUS-TaskCoordinatorCottier
And one more thing - this can be tested. I have a TaskCoordinator test lab github.com/BBGONE/TaskCoordinator which uses this pattern. Without using Task.Yield, if you configure it to run a lot of tasks in parallel to process messages, then you won't see status messages appear in the console - because all pool threads are busy and the thread pool does not add more because of the high CPU usage. But when using Task.Yield, the status messages appear in the console. It processes around 200,000 messages per second, and when there was less overhead (no serialization) - around 500,000 per second.Cottier
I don't think using Task.Yield to overcome ThreadPool starvation while implementing producer/consumer pattern is a good idea. I suggest you ask a separate question if you want to go into details as to why.Diversity
@Diversity sounds like you're up :) #53263758Chaunceychaunt
Not sure why it got down-voted though, it's still an interesting exercise, particularly if you use a custom task scheduler to factor out ThreadPool.Diversity
@Diversity I know why it is downvoted. Because people don't test it - how it works in the real world. They are more theoretical than practical. In the real world there is usually a larger number of tasks processing than the number of processors on the computer. You have 2 options - use ThreadPool.QueueUserWorkItem directly, or use a loop with async await. Async await does the same, it queues items to the pool. If the job done on each iteration is more or less substantial, then an additional async - await does not harm. But it increases the responsiveness of the system.Cottier
@MaximT, see my comment there. It'd be great if you can support your vision with a benchmark test like that, and then we could all learn something new. If you take on that, feel free to answer your own question there. Keep in mind though, even if the default ThreadPool wins, in real life you can't simply monopolize all of its threads for microtasks like those, because it's a shared resource used by the runtime elsewhere.Diversity
Task.Yield does not necessarily cause the continuation to run on the thread pool (TaskScheduler.Default). The continuation runs on the ambient SynchronizationContext or TaskScheduler.Current.Abib
But most often the TaskScheduler.Current is TaskScheduler.Default and SynchronizationContext is not setCottier
Glad to see someone else finally agrees with me on this. I had a rather robust discussion with two folks yesterday on an essentially duplicate question I answered, that was not well received. Kudos in particular to pointing out the real world implications.Phenformin

This is a supplement to Maxim T's answer from 2018 (which I agree with). I decided to test the theory that Task.Yield can improve performance when the CPU is under heavy stress from many parallel CPU-intensive tasks/requests (e.g., extremely busy servers). The answer - which came as a surprise - is that Task.Yield indeed gives much better overall results than Task.Run alone or relying solely on multithreading. Now, before anyone heads straight for that down arrow, I hope you'll read on.

The Test

Each test initiates 512 parallel loops as quickly as possible, which all perform an infinite summation of +1, -1, +1, etc. Each loop further records in a static array (one for each loop) the number of iterations completed, and checks a CancellationToken to know when to terminate.

Here's how the loops are initiated (time keeping and logging code omitted):

internal static class EatMyThreads
{
    const int c_MaxTasks = 512;
    static long[] _iterCounter = new long[c_MaxTasks];
    static CancellationTokenSource _cts = new CancellationTokenSource();

    public delegate Task InfiniteSeriesDelegate(int i, CancellationToken token);

    public static void Run(InfiniteSeriesDelegate series, string methodName, bool upMinThreads)
    {
        if (upMinThreads)
        {
            ThreadPool.GetMinThreads(out _, out var minPortThreads);
            System.Threading.ThreadPool.SetMinThreads(c_MaxTasks * 2, minPortThreads);
        }                       

        for (int i = 0; i < c_MaxTasks; i++)
        {
            _ = series(i, _cts.Token);
        }

        Thread.Sleep(5000);
        _cts.Cancel();
        Thread.Sleep(5000); // Give some time for cancellation

        // log results
    }

    // individual loop methods
}

I next define 3 methods conforming to InfiniteSeriesDelegate, each with a different approach to parallelism.

Method 1 - One Thread per Loop

This executes each loop within a single Task.Run without regard to blocking the thread, i.e., each loop monopolizes the entire thread.

    public static async Task DoInfiniteSeries_Method1(int i, CancellationToken c)
    {            
        await Task.Run(() =>
        {                
            double n = 0;                
            while (true)
            {
                if (c.IsCancellationRequested)
                    break;
                n += 1;
                n -= 1;
                _iterCounter[i]++;
            }
        });
    }

Method 2 - Chunks Wrapped in Task.Run

Here, chunks of 1000 loop iterations are each queued to the threadpool via an independent Task.Run call as soon as the last one completes. In theory this should be equivalent to Method 3 below.

    public static async Task DoInfiniteSeries_Method2(int i, CancellationToken c)
    {                        
        double n = 0;
        while (true)
        {
            if (c.IsCancellationRequested)
                break;
            await Task.Run(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    n += 1;
                    n -= 1;
                    _iterCounter[i]++;
                }
            });                
        }            
    }

Method 3 - Chunks Punctuated with Task.Yield()

This final method does the same thing as Method 2 except with Task.Yield instead of Task.Run. Again, I was expecting this method to perform nearly identically to Method 2.

    public static async Task DoInfiniteSeries_Method3(int i, CancellationToken c)
    {
        await Task.Run(async () =>
        {
            double n = 0;
            while (true)
            {
                if (c.IsCancellationRequested)
                    break;
                for (int j = 0; j < 1000; j++)
                {
                    n += 1;
                    n -= 1;
                    _iterCounter[i]++;
                }
                await Task.Yield();
            }
        });
    }

For each method, I record how many iterations each of the 512 parallel loops completes in the aggregate within the allowed time (i.e. how much raw work is able to be done), and also how uniformly they do so (i.e. is each loop able to complete roughly the same number of iterations, or do some loops never even get off the ground?). I also test each method with and without increasing the minimum number of threads beforehand using SetMinThreads, to see if this will force the runtime to give us the extra needed threads quickly enough to prevent starvation.

Other Details

Tests were run in a .NET8 console app in release mode without a debugger attached, on a 3.6GHz Core i9 with 20 logical cores. Compiler optimizations were disabled to eliminate the possibility of an unintended optimization impacting one or more of the methods differently. Each test was also performed with a fresh start of the application to eliminate the possibility of one test affecting the thread pool in a way that influenced the other tests.

Results

[Metrics table image]

[Charts image]

Method 3 - employing Task.Yield on every 1,000th iteration - simply blew away the other methods, both as a function of raw iterations completed and the uniformity of progress between the loops. Interestingly, performance was not dramatically affected by SetMinThreads. In other words, using Task.Yield to punctuate the loop allowed us to get a lot more work done, and with fewer threads, than the other methods did.

Method 2 (using Task.Run to schedule chunks of work) performed less than half as well as Method 3 in terms of raw power, and abysmally in terms of parallelism, with only a fraction of the loops able to get more than a single chunk queued up and executed. Like Method 3, Method 2 benefits only modestly from increasing the thread count at the start.

Method 1 - allowing thread blocking and relying entirely on the thread pool to grow as needed - is the undisputed loser. Unsurprisingly, Method 1 does benefit from increasing the minimum thread count, but what is surprising is that calling SetMinThreads was still not enough to allow more than a few dozen loops to get underway within the first 5 seconds.

Conclusion

Task.Yield seems to offer a clear benefit over Task.Run as a means to promote parallelism and maximize raw performance in otherwise synchronous CPU-intensive code, while minimizing the risk of thread pool starvation. Potentially long-running in-memory operations such as complex regexes, hashes of large data sets, image processing without a GPU, etc., may benefit from being rewritten as async methods that periodically Yield to other work. Simply trusting that cranking up the thread pool size will solve all problems, and otherwise treating in-memory operations as cost-free, would appear to be a fallacy.

I hope this exercise encourages folks to take a new look at this outlier of a .NET method and reconsider its utility. The entire code for these tests is available here, so I also would encourage anyone to see if they can reproduce these results or improve upon them. Of course if anyone sees a flaw in the methodology or has an explanation for the observed performance difference between Task.Run and Task.Yield I hope they'll weigh in.

Phenformin answered 6/4 at 16:20 Comment(6)
I like your scientific approach, and the effort that you obviously put on this answer. Nevertheless I have to ask what is the real-life scenario that your test attempts to simulate, and if this scenario can be better served by a different approach than the three that you tested. Because if a fourth approach exists that outperforms both the Task.Run and Task.Yield, the result of the comparison between them is kind of moot. Have you tested what happens if, instead of relying on the ThreadPool, you start 512 dedicated threads? The OS should do a good job at sharing the CPU power between them.Callida
@TheodorZoulias greatly appreciate that feedback! I haven't tested creating dedicated threads but will do so and follow up.. Yet another dimension would be whether the OS threadpool or managed thread pool are used. All these tests used managed but an AOT compile and use of the OS thread pool might be totally different.Phenformin
The real-life motive for me was a server app that uses the DiffMatch algorithm to detect changes in draft vs. final versions of long transcripts (kinda esoteric :)). The algorithm is totally synchronous and thread-blocking, and with 100s of pages of text can take many seconds, so I wanted to modify it to be async and non-blocking. That was the motive for coming up with these tests, i.e., to see if Task.Yield or Task.Run or just relying on the threadpool alone did the best job of keeping the thread pool stress-free and letting the jobs run peacefully in parallel.Phenformin
Ah, it's an esoteric scenario. Because in the common scenario of a web server overflowed by requests, you don't really want to share evenly the server resources to all requests. Doing so would just share the misery between the users, with all requests timing out after none of them completing in a timely manner. In that scenario you want to serve the requests in a FIFO base, and deny early the excessive requests when the queue reaches a reasonable maximum number. This way at least a few users will be served. The rest won't be served, but at least they'll know right away that the server is busy.Callida
@TheodorZoulias ok yeah starting individual threads was a good idea to try, thank you. Sure enough it blows away any other method provided each thread gets a brief nap before starting the loop, otherwise they choke up after a couple dozen just like the other methods. So it seems this all comes down to how quickly the threads can be created. When all threads finally get going they beat every other scenario, but Yield does do consistently well regardless of the thread count. I'll update this tomorrow with the new data and analysis.Phenformin
You could use the Barrier class, so that the threads start running after all are fully initialized. Barrier barrier = new(512);, and then add barrier.SignalAndWait(); as the first line inside the start delegate.Callida

You're assuming the long-running function is one that can run on a background thread. If it isn't, for example because it has UI interaction, then there is no way to prevent blocking the UI while it runs, so the times it runs should be kept short enough not to cause problems for the users.

Another possibility is that you have more long-running functions than you have background threads. In that scenario, it may be better (or it may not matter, it depends) to prevent a few of those functions from taking up all of your threads.

Procurer answered 2/5, 2014 at 15:30 Comment(16)
Your first case is potentially valid, but the second really isn't. If you have more work, or even workers, than you have threads, then they'll simply need to share that time. Giving some of that work to the UI thread isn't helpful there. If you didn't, it would simply mean the UI thread would be idling, thus yielding that time to worker threads.Seay
@hvd By saying UI interaction - you are talking about a code which can appear (Before || After) the await. right ?Mindexpanding
As a (possibly) related thought: it is possible to Join threads in the server code ... such actions within a Task means that anyone joining this thread doesn't (unknowingly) get blocked for the whole of the rest of this Thread?Paterfamilias
@Seay If you have more long-running functions than you have threads, and all threads are busy, then even short functions don't get a chance to run, and that may give a bad user experience for example if those short-running functions would give the user a status update.Procurer
@hvd You need to have many more operations than you have threads for that to happen. It is called thread starvation. In all but the most extreme cases, the thread scheduler is capable of solving the problem by switching execution between threads, allowing each one short slices of time to run in, rather than running each operation to completion. On top of that, the problem can be solved much more effectively through the use of a thread pool, which can have exactly as many workers as is optimal, and queue up the actual work to be done in such a way as to optimize throughput.Seay
@RoyiNamir I was thinking of methods that use controls (update a label's text), which is something that must happen on the UI thread. That part is not something that's a problem with the method in your question, but it is something that applies to the more general case.Procurer
@Seay You merely need as many running operations as you have background threads, where none of the running tasks gives back control in any reasonable time, do you not?Procurer
@hvd No, that is not something that will cause a problem, because the thread's schedulers are not that poor, and the amount of time that the UI thread would need to be starved out for a human to perceive the lag is quite high. Other threads don't have the option of holding onto the CPU time forever; in fact, they have almost no control over the matter at all. It is not a cooperative system, unlike management of the UI thread from within the UI thread in which someone can intentionally hog the UI thread for as long as they want, starving everyone else out.Seay
@Seay That's not at all what I meant. If all background threads are busy running long-running asks, and anything then attempts to start a new task, that new task won't get a chance to run until all of those long-running tasks have finished.Procurer
@hvd That's false. The thread's CPU scheduler will schedule it for some work before all of those other threads have finished, it just means that every single thread needs to wait a little bit longer for its next time slice for each additional thread the system creates. You'd have to create hundreds, if not thousands (although this is dependant on the system's hardware), of threads in order to force the UI's time slices to be so infrequent as to make the UI feel unresponsive to a human.Seay
@Seay Again, that's not at all what I meant. I'm not talking about the UI thread becoming slow. I'm talking about the UI thread starting a new task, which would run on a background thread, do some short work, and once it's finished update the UI. That short work would take a very long time.Procurer
@hvd It will probably take longer, sure, as it needs to share its CPU time with other threads (unless it manages to finish execution in its first time slice). It won't wait to be scheduled until all other tasks finish though. It will get some time slices early on. Because so many CPU operations are short lived, new operations are generally given priority to try to finish quickly.Seay
@Seay Here's what I meant. One button starts a long-running task (waits 10 minutes, pretend it's doing a lot of useful work), one other button starts a task that should finish quickly (waits one second, pretend it's still doing some useful work). Start enough long-running tasks, then start a short-running task, and you'll still be left waiting a very long time even for the short-running task to finish. The UI won't be sluggish, but if the short-running task is supposed to do some work before the UI gets updated, that's still a bad user experience.Procurer
@hvd In that case you're using the thread pool, rather than creating new threads. There are a number of possible resolutions here. You shouldn't be performing really long running operations in the thread pool (it is designed for lots of reasonably small units of work). Stick the "long running" task creation option hint on there and it won't use the thread pool, and thus won't cause this problem. The other option is to break up the one big task into many smaller tasks. Some operations are well suited for that, some aren't.Seay
@Seay Third option: let the long-running tasks yield from time to time. Useful if you can't anticipate in advance whether it should be treated as long-running. Hence my answer.Procurer
@hvd Technically that's just one way of accomplishing the second option that I mentioned, it's just a fairly poorly designed way of accomplishing it. If a task is well suited to being broken up, then break it up. If it's not, just stick LongRunning on it rather than yield all over the place.Seay
