C++11 <thread> multithreaded rendering with OpenGL prevents the main thread from reading stdin

It seems to be platform-related: the code works with Ubuntu 12.04 on my laptop, but not with another Ubuntu 12.04 installation on my workstation.

Here is a code sample showing what I am doing with the two threads.

#include <iostream>
#include <thread>
#include <chrono>
#include <atomic>
#include <GL/glfw.h>

using namespace std;

int main() {
  atomic_bool g_run(true);
  string s;
  thread t([&]() {
    cout << "init" << endl;

    if (!glfwInit()) {
      cerr << "Failed to initialize GLFW." << endl;
      abort();
    }

    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 2);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);

    if(!glfwOpenWindow(640, 480, 8, 8, 8, 0, 24, 0, GLFW_WINDOW)) {
      glfwTerminate();
      cerr << "Cannot open OpenGL 2.1 render context." << endl;
      abort();
    }

    cout << "inited" << endl;

    while (g_run) {
      // rendering something
      cout << "render" << endl;
      this_thread::sleep_for(chrono::seconds(1));
    }
    // unload glfw
    glfwTerminate();
    cout << "quit" << endl;
  });
  __sync_synchronize(); // a barrier added as ildjarn suggested.
  while (g_run) {
    cin >> s;
    cout << "user input: " << s << endl;
    if (s == "q") {
      g_run = false;
      cout << "user interrupt" << endl;
      cout.flush();
    }
  }
  __sync_synchronize(); // another barrier
  t.join();
}

Here is my compile command:

g++ -std=c++0x -o main main.cc -lpthread -lglfw
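
(Side note: with GCC, -pthread is generally preferred over a bare -lpthread, since it sets the required compiler flags as well as the linker flag. An equivalent command would be:)

g++ -std=c++0x -pthread -o main main.cc -lglfw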

My laptop runs this program like this:

init
inited
render
render
q
user input: q
user interrupt
quit

And the workstation just outputs:

init
inited
render
render
q
render
q
render
q
render
^C

It simply ignored my inputs (another program following the same procedure with GLEW and GLFW just jumps out of the while loop in the main thread without reading my inputs). BUT the program works normally under gdb!

Any idea what's going on?

Update

After more tests on other machines, it turns out NVIDIA's driver causes this. The same thing happens on other machines with NVIDIA graphics cards.
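
For reference, here is a stripped-down sketch of the same program without any GLFW calls; as noted in the comments, this version reads stdin fine on both machines. The atomic flag alone is enough, since std::atomic defaults to sequentially consistent ordering:

#include <iostream>
#include <thread>
#include <chrono>
#include <atomic>

using namespace std;

int main() {
  atomic_bool g_run(true); // seq_cst by default; no explicit barriers needed
  thread t([&]() {
    while (g_run) {
      cout << "render" << endl;
      this_thread::sleep_for(chrono::seconds(1));
    }
  });
  string s;
  while (g_run) {
    cin >> s;
    cout << "user input: " << s << endl;
    if (s == "q")
      g_run = false; // the render thread sees this on its next loop check
  }
  t.join();
}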

Oxidate answered 22/5, 2012 at 13:14 Comment(20)
Try making g_run a std::atomic<bool> rather than a plain bool.Cosecant
Tried it; it doesn't work. There is no race condition in this case, as only one thread is writing to it.Oxidate
One thread is writing while another is reading. You need a memory barrier.Cosecant
@Cosecant Writes and reads don't conflict at the same time if the field is of a primitive type, since it can't be partially updated. I have updated my source already.Oxidate
Maybe you should read about memory barriers, as it's quite clear you're missing a core concept here... (The problem here is cache coherency -- this has nothing to do with atomicity.)Cosecant
@Cosecant Thanks, you are right about the barrier. But since I didn't enable compiler optimizations, there should not be any compiler reordering. And this code is unlikely to be affected by out-of-order execution, as most of the generated code is jump and call instructions rather than moves. And atomic_bool didn't solve the case.Oxidate
With the atomic bool there I don't see any data races, and since by default you're getting sequential consistency semantics it seems like this should work. Do the gl functions affect the behavior?Descant
(and I don't see a need for __sync_synchronize)Descant
@Descant __sync_synchronize is there to rule out possibilities. It should have something to do with the gl* stuff, as the version without glfw works, but I don't know what exactly causes this, or why running under gdb is OK.Oxidate
I don't see user interrupt in your second output. Can you add more debug logs after cin >> s;? It's not clear whether your main thread isn't getting CPU time or if (s == "q") doesn't work for some reason.Shu
@Shu No, there wasn't any output. Let me add a new output before the if.Oxidate
@Oxidate Silly question: did you use the -pthread option during linking?Shu
@Shu Yes, otherwise it receives a permission error and terminates immediately.Oxidate
@Oxidate No clue. Looks like a bug in glfw library on your workstation.Shu
@Shu Thanks for your time. Do you know what the difference is between starting a program in gdb and in a terminal? Why is running under gdb fine, while spawning it directly isn't?Oxidate
@ildjarn: doesn't std::atomic_bool impose a memory barrier? My understanding is that it should have memory_order::memory_order_seq_cst characteristics, which implies a full memory barrier on accesses.Kalong
@MichaelBurr : Yes, it has the characteristics you describe, which is why I recommended it over raw bool (which is what the OP had originally; it's since been edited).Cosecant
@ildjarn: I see - I missed that.Kalong
It's not the cause of your problem, but a lambda longer than the rest of your program put together is disgusting and should be punishable by seven years writing PHP.Shlomo
Yep, it's for sample purposes, just to make it easier for viewers to read instead of making them chase references by hand.Oxidate

After more tests on other machines, it turns out NVIDIA's driver causes this. The same thing happens on other machines with NVIDIA graphics cards.

To fix this problem, the initialization order matters. On NVIDIA machines, GLFW has to be initialized before anything else (e.g. before creating any thread, even though you are not using GLFW's threading routines). The initialization also has to be complete, that is, the output window must be created right after glfwInit(); otherwise the problem persists.

Here is the fixed code.

#include <iostream>
#include <thread>
#include <chrono>
#include <atomic>
#include <GL/glfw.h>

using namespace std;

int main() {
  atomic_bool g_run(true);
  string s;

  cout << "init" << endl;

  if (!glfwInit()) {
    cerr << "Failed to initialize GLFW." << endl;
    abort();
  }

  glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 2);
  glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);

  if(!glfwOpenWindow(640, 480, 8, 8, 8, 0, 24, 0, GLFW_WINDOW)) {
    glfwTerminate();
    cerr << "Cannot open OpenGL 2.1 render context." << endl;
    abort();
  }

  cout << "inited" << endl;

  thread t([&]() {
    while (g_run) {
      cin >> s;
      cout << "user input: " << s << endl;
      if (s == "q") {
        g_run = false;
        cout << "user interrupt" << endl;
        cout.flush();
      }
    }
  });

  while (g_run) {
    // rendering something
    cout << "render" << endl;
    this_thread::sleep_for(chrono::seconds(1));
  }

  t.join();

  // unload glfw
  glfwTerminate();
  cout << "quit" << endl;
}

Thanks for all your help.

Oxidate answered 20/6, 2012 at 4:6 Comment(0)

I used this code to close my program and catch the q key while it's running:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <termios.h>

static struct termios old, _new;
static void *breakonret(void *instance);

/* Initialize _new terminal i/o settings */
void initTermios(int echo)
{
  tcgetattr(0, &old);                  /* grab old terminal i/o settings */
  _new = old;                          /* make _new settings same as old settings */
  _new.c_lflag &= ~ICANON;             /* disable buffered i/o */
  _new.c_lflag &= echo ? ECHO : ~ECHO; /* set echo mode */
  tcsetattr(0, TCSANOW, &_new);        /* use these _new terminal i/o settings now */
}

/* Read 1 character with echo */
char getche(void)
{
  char ch;
  initTermios(1);
  ch = getchar();
  tcsetattr(0, TCSANOW, &old); /* restore the old terminal settings */
  return ch;
}

int main(void)
{
  pthread_t mthread;
  pthread_create(&mthread, NULL, breakonret, NULL); /* start the key-listening thread */
  while (1) {
    printf("Data on screen\n");
    sleep(1);
  }
  pthread_join(mthread, NULL); /* never reached: breakonret calls exit() on 'q' */
}

/* Press q to close the program; no return key is needed since ICANON is disabled */
static void *breakonret(void *instance)
{
  char c = getche();
  printf("\nyou pressed %c \n", c);
  fflush(stdout);
  if (c == 'q') exit(0);
  return NULL; /* non-'q' path: let the thread end normally */
}

With this, you have a thread reading the data from your keyboard.
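
A compile-and-run sketch (the file name breakonret.c is just a placeholder):

gcc -o breakonret breakonret.c -pthread
./breakonret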

Ruthful answered 14/6, 2012 at 18:46 Comment(1)
Thanks for your reply; it indeed helps with another question of mine, but it seems a little off-topic for this one.Oxidate