c++ - should I always lock global data in multi-threaded programming, and why or why not?


I'm new to multi-threaded programming (actually, I'm not a complete beginner in multi-threading; I already use global data shared between a reading and a writing thread, but I think it makes the code ugly and slow, and I'm eager to improve my skills).

I'm developing a forwarder server in C++. To simplify the question, suppose there are 2 threads, a receiving-thread and a sending-thread, and, stupid design as usual, a global std::list holding the data :(

The receiving-thread reads raw data from the server and writes it to the global std::list.

The sending-thread reads from the global std::list and sends the data to several clients.

I use pthread_mutex_lock to synchronize access to the global std::list.

The problem is that the performance of the forwarder server is poor: the global list is locked while the receiving-thread is writing, and if the sending-thread wants to read at that moment, it has to wait. I think that waiting is wasted time.
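To make this concrete, here is a simplified sketch of roughly what my current code looks like (not the real code; receive_from_server() and send_to_clients() are placeholders for the real socket I/O):

    #include <pthread.h>
    #include <list>
    #include <string>

    std::list<std::string> g_list;                        // global buffer between the two threads
    pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;  // protects g_list

    std::string receive_from_server() { return "raw data"; }   // placeholder for the real read
    void send_to_clients(const std::string&) { /* ... */ }     // placeholder for the real send

    void* receiving_thread(void*)
    {
        for (;;)
        {
            std::string raw = receive_from_server();
            pthread_mutex_lock(&g_mutex);
            g_list.push_back(raw);                        // list is locked while writing
            pthread_mutex_unlock(&g_mutex);
        }
        return 0;
    }

    void* sending_thread(void*)
    {
        for (;;)
        {
            pthread_mutex_lock(&g_mutex);                 // sender has to wait here during writes
            if (!g_list.empty())
            {
                std::string data = g_list.front();
                g_list.pop_front();
                pthread_mutex_unlock(&g_mutex);
                send_to_clients(data);
            }
            else
            {
                pthread_mutex_unlock(&g_mutex);
            }
        }
        return 0;
    }
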

What should I do? I know globals are bad, but without a global, how can I synchronize these 2 threads?

I'll keep searching and googling in the meantime.

Any suggestions, guides, technologies or books are appreciated. Thanks!

edit

  1. For any suggestion, I want to know why or why not; please give me the reason. Thanks a lot.

notes:

  1. Please provide more complete examples: http://sscce.org/

answers:

  1. Yes, you should synchronize access to shared data.

    • Note: this makes assumptions about the std::list implementation, which may or may not apply to your case; but since these assumptions are valid for some implementation, you cannot assume your implementation is thread safe without an explicit guarantee.
    • Consider this snippet:

      std::list<data> g_list;

      void thread1()
      {
          while( /* input ok */ )
          {
              /* read input */
              g_list.push_back( /* something */ );
          }
      }

      void thread2()
      {
          while( /* something */ )
          {
              /* pop the list */
              data x = g_list.front();
              g_list.pop_front();
          }
      }
    • Say, for example, the list has 1 element in it.
    • std::list::push_back() must:
      • allocate space for the new node (many CPU instructions)
      • copy the data into the new space (many CPU instructions)
      • update the previous element (if it exists) to point to the new element
      • update the list's internal size (std::list::_size)
    • std::list::pop_front() must:
      • free the space of the removed node
      • update the next element so it no longer has a previous element
      • update the list's internal size (std::list::_size)
    • Now suppose thread 1 calls push_back(): after checking that there is already an element (a check on the size), it goes on to update that element, but right before it gets the chance, thread 2 runs pop_front() and is busy freeing the memory of that first element. The result can be thread 1 causing a segmentation fault, or memory corruption. The size updates can race as well, so push_back()'s update wins over pop_front()'s and the list reports a size of 2 when it holds only 1 element. A locked version of the snippet is sketched below.
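    • A minimal fix is to guard every access to the list with one mutex (a sketch, assuming C++11's <mutex>; the data type and the loop conditions are still placeholders):

      #include <list>
      #include <mutex>

      struct data { /* ... */ };

      std::list<data> g_list;
      std::mutex      g_list_mutex;   // protects every access to g_list

      void thread1()
      {
          while( /* input ok */ true )
          {
              data d;                 /* read input into d */
              std::lock_guard<std::mutex> lock(g_list_mutex);   // locked only for push_back
              g_list.push_back(d);
          }
      }

      void thread2()
      {
          while( /* something */ true )
          {
              std::lock_guard<std::mutex> lock(g_list_mutex);   // correct, but still contends; see points 3 and 4 below
              if( !g_list.empty() )
              {
                  data x = g_list.front();
                  g_list.pop_front();
                  /* use x ... */
              }
          }
      }
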
  2. Do not use pthread_* directly in C++ unless you know what you are doing. Use std::thread (C++11) or boost::thread, or wrap the pthread_* calls in a class, because if you don't take exceptions into account you will end up with deadlocks. A short RAII sketch follows below.
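    For example, an exception thrown between a manual lock and unlock leaves the mutex locked forever, while an RAII guard releases it automatically (a sketch; do_work() is a stand-in for whatever the thread actually does):

      #include <mutex>
      #include <stdexcept>

      std::mutex m;

      void do_work() { throw std::runtime_error("boom"); }   // stand-in: real work that may throw

      void risky_manual()
      {
          m.lock();
          do_work();      // throws: m.unlock() below is never reached, the mutex stays locked
          m.unlock();
      }

      void safe_raii()
      {
          std::lock_guard<std::mutex> lock(m);   // unlocked automatically, even when do_work() throws
          do_work();
      }
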

  3. You cannot get past some form of synchronization in your specific example, but you can optimize the synchronization:

    1. Don't copy the data into and out of the std::list; copy a pointer to the data into and out of the list.
    2. Only lock while actually accessing the std::list; don't make this mistake:

      {
          // lock
          size_t size = g_list.size();
          // unlock
          if( size )
          {
              // lock
              // work with g_list ...
              // unlock
          }
      }
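    Put together, the two points above might look roughly like this (a sketch, assuming C++11; Data is a placeholder type):

      #include <list>
      #include <memory>
      #include <mutex>
      #include <utility>

      struct Data { /* ... */ };

      std::list<std::unique_ptr<Data>> g_list;   // the list holds pointers, not copies of the payload
      std::mutex                       g_list_mutex;

      void consume_one()
      {
          std::unique_ptr<Data> item;
          {
              std::lock_guard<std::mutex> lock(g_list_mutex);  // lock only around the list access
              if( g_list.empty() )
                  return;
              item = std::move(g_list.front());
              g_list.pop_front();
          }                                                    // unlocked here
          // the expensive part (sending to clients) happens outside the lock
          /* send(*item); */
      }
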
  4. A more appropriate pattern here is a message queue. You can implement one with a single mutex, a list and a condition variable; there are existing implementations you can look at, and a minimal sketch follows below.
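    A minimal sketch of such a queue (assuming C++11; the class and member names here are made up, not taken from any particular library):

      #include <condition_variable>
      #include <list>
      #include <mutex>
      #include <utility>

      template <typename T>
      class MessageQueue
      {
      public:
          void push(T value)
          {
              {
                  std::lock_guard<std::mutex> lock(m_mutex);
                  m_items.push_back(std::move(value));
              }                                  // release the lock before waking the consumer
              m_cond.notify_one();
          }

          T pop()                                // blocks until an item is available
          {
              std::unique_lock<std::mutex> lock(m_mutex);
              m_cond.wait(lock, [this] { return !m_items.empty(); });
              T value = std::move(m_items.front());
              m_items.pop_front();
              return value;
          }

      private:
          std::list<T>            m_items;
          std::mutex              m_mutex;
          std::condition_variable m_cond;
      };

    With this, the receiving-thread just calls push() and the sending-thread blocks in pop(), so the sender never spins on an empty list and the mutex is held only for the brief list operations.
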

  5. There is also the option of atomic (lock-free) containers; a sketch using Boost.Lockfree follows below.
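    For example, Boost ships a lock-free queue (a sketch, assuming Boost.Lockfree is available; it requires a trivially copyable element type, so the sketch stores pointers to the payload):

      #include <boost/lockfree/queue.hpp>

      struct Data { /* ... */ };

      // Fixed-capacity lock-free queue; elements must be trivially copyable,
      // so store pointers to the payload rather than the payload itself.
      boost::lockfree::queue<Data*> g_queue(1024);

      void producer(Data* d)
      {
          while( !g_queue.push(d) )     // push() returns false while the queue is full
          {
              /* spin, yield or back off */
          }
      }

      Data* consumer()
      {
          Data* d = 0;
          while( !g_queue.pop(d) )      // pop() returns false while the queue is empty
          {
              /* spin, yield or back off */
          }
          return d;
      }
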

  6. You could also go with an asynchronous approach using boost::asio, though your case should be quite fast if done right.

