Tuesday, December 24, 2013

ThreadRAII + Thread Suspension = Trouble?

Topic 1: An RAII class for std::thread objects

In my GoingNative 2013 talk, I explained how destruction of joinable std::thread objects leads to program termination, and I introduced an RAII class, ThreadRAII, to make sure that joinable std::threads aren't destroyed. The details of ThreadRAII aren't important for this post, so here's a stripped-down version that always does a join before permitting a joinable std::thread to be destroyed:
class ThreadRAII {
public:
  ThreadRAII(std::thread&& thread): t(std::move(thread)) {}
  ~ThreadRAII() { if (t.joinable()) t.join(); }

private:
  std::thread t;
};
Using an RAII class to make sure that a std::thread object is brought into an unjoinable state on every path out of a block seems like an obviously-reasonable thing to do.

Topic 2: Using void promises/futures to "start threads suspended"

This Q&A at StackOverflow explains why it might be useful to start threads in a suspended state. There's no direct support for that in the C++11 threading API, but one way to implement it is to create a std::thread running a lambda that waits on a std::future<void> before starting its real work. For example, if the "real work" is funcToRun, we could do this:
std::promise<void> p;

std::thread t([&p]{ p.get_future().wait();    // start t and  suspend it
                    funcToRun(); }            // (conceptually)
              );

...                      // t is "suspended" waiting  for p to be set

p.set_value();           // t may now continue
This isn't the only way to suspend thread execution between creation of a thread and execution of the work it's supposed to do, but it seems reasonable. Unlike having the lambda spin on an atomic bool until a flag is set, for example, this approach requires no polling by the lambda.

Putting the two together

The use of a std::thread object in that last code example opens the possibility that some flow of control could destroy the object while it's still joinable, and that would lead to program termination. So it seems natural to use ThreadRAII:
std::promise<void> p;                         // as before

std::thread t([&p]{ p.get_future().wait();    // as before
                    funcToRun(); }
             );             

ThreadRAII tr(std::move(t));                  // USE RAII CLASS HERE

...                                           // as before

p.set_value();                                // as before
The problem is that if an exception is thrown in the "...", we'll never set the value of the std::promise, and that means that the destructor of the RAII object (tr) will block forever. That's no better than program termination, and it might be worse.

My question

I'm trying to figure out what the fundamental problem is here. Is there something wrong with using an RAII class to keep joinable threads from being destroyed (hence causing program termination)? Is there something wrong with using void promises/futures to emulate starting thread execution in a suspended state?

If the two techniques are independently valid, am I combining them incorrectly, or are they somehow fundamentally incompatible?

I'd like to write about this in Effective C++11/14, but I want to offer my readers good advice. What should that advice be?

Thanks,

Scott


Thursday, September 12, 2013

My 1999 CD on Modern Browsers

In 1999, Addison-Wesley and I collaborated on a CD version of two of my books, Effective C++, Second Edition and More Effective C++. The CD was designed for contemporary browsers--contemporary to 1999. That was a time of rapid browser evolution, and it wasn't too long before browsers "progressed" to the point where the CD content didn't display correctly.

In 2003, Ian Roberts described how the content could be modified to work with IE6, and I published a program to apply his fixes to the files on the CD.

A few days ago, I got a message from Yong-guang Teng telling me that he'd come up with a shell script that permits the CD content to display better on current browsers. He's posted the script as a comment at the Amazon product page for the CD. Be sure to check the comment on his comment, because it tries to compensate for some text-mangling that apparently took place during Amazon's processing of his script text.

I haven't tested his script, so I don't know how well it works. And I don't advocate you buy a copy of the CD, because even if the content can be made to display perfectly with modern browsers, the content itself is still nearly 15 years old, and the second edition of Effective C++ has been superseded by the third edition. Still, Yong-guang Teng found it useful to breathe new life into the CD he bought many years ago, and perhaps you will, too.

Scott

Monday, September 9, 2013

"An Effective C++11/14 Sampler" Now Online

My talk from last week's Going Native 2013 conference, "An Effective C++11/14 Sampler" is now online. It covers these three guidelines:
  • Understand std::move and std::forward.
  • Declare functions noexcept whenever possible.
  • Make std::threads unjoinable on all paths.
Watch it here and let me know what you think.

Scott

Wednesday, August 21, 2013

Sale on AW C++ Books


The pepped-up marketing folks at Addison-Wesley recently sent me this, which I am dutifully passing on:
Please feel free to point folks to informit.com/cplusplus which features a Buy 1 Save 30% | Buy 2 or More Save 40% discount code for all C++ titles.  Note, while only the most recent C++ titles are featured on the landing page, the discount code applies to ALL titles (there is a giant "Shop all C++ Titles" button at the bottom of the page and folks can search for your products by your last name).
According to the banner at the top of the page, the deal also includes free ground shipping within the USA. To get the discounts, you have to enter the magic promotion code CPLUSPLUS at checkout.

Happy shopping!

Scott

Friday, August 2, 2013

Two videos coming, many videos past

On September 4-6, the 2013 edition of GoingNative will take place, and I'll be giving the following presentation:

An Effective C++11/14 Sampler

After years of intensive study (first of C++0x, then of C++11, and most recently of C++14), Scott thinks he finally has a clue. About the effective use of C++11, that is (including C++14 revisions). At last year’s GoingNative, Herb Sutter predicted that Scott would produce a new version of Effective C++ in the 2013-14 time frame, and Scott’s working on proving him almost right. Rather than revise Effective C++, Scott decided to write a new book that focuses exclusively on C++11/14: on the things the experts almost always do (or almost always avoid doing) to produce clear, efficient, effective code. In this presentation, Scott will present a taste of the Items he expects to include in Effective C++11/14. If all goes as planned, he’ll also solicit your help in choosing a cover for the book.

Like all the talks at GoingNative, mine will be live-streamed as well as recorded for later viewing. But it occurred to me a while ago that although my web site has a list of my past publications and my past presentations, it doesn't really have a list of videos of presentations I've given. Well, it didn't. It does now: check out my brand spanking new online videos page. If the thought of such a page moves you to nominate me for the Personal Vanity Hall of Shame, I understand, but my actual motivation was considerably more pedestrian. It's not uncommon for me to be asked whether my presentations are available online, and now there's an easy way for people to answer that question themselves.

Which reminds me. I presented my seminar, Better Software—No Matter What, at the Norwegian Developers Conference in June, and most of that talk has now been made available online. The original plan was for the entire thing to be recorded, but there was a technical glitch that prevented the first of six parts from being preserved. Parts 2-5 are now live, and when the NDC tells me where part 6 is, I'll add a link to that, too.

Scott

PS - Speaking of Effective C++11/14, just today I finished my first full draft of the chapter on rvalue references, move semantics, and perfect forwarding. It consists of eight Items and, if Microsoft Word is to be believed, 20,359 words. Assuming 90K words for the full book (in line with my past efforts, if FrameMaker is to be believed), that means I'm a bit over 20% of the way towards a full draft.

Wednesday, July 24, 2013

Video for "The Universal Reference/Overloading Collision Conundrum"

Last Wednesday evening, I gave a talk at the Northwest C++ Users' Group entitled "The Universal Reference/Overloading Collision Conundrum." The purpose of the talk was to try out the information behind a guideline from Effective C++11/14 (the book I'm currently working on).  That guideline is "Avoid overloading on universal references." The video for that talk is now available.

From my perspective, the talk was a success, but that may not be apparent from the video. Things went fine for the first 12 minutes, and then...things went less fine. Bugs in the slides. Questions I didn't answer. Material I didn't have time to cover. All of which may--should--make you wonder how I define "success."

As a general rule, I like to test material in front of live audiences before I put it in my books. Presenting technical material live is perhaps the best way to get feedback on it. Not only does it offer attendees an opportunity to ask questions and make comments (direct feedback), it gives me a chance to see people's reactions (indirect feedback). Even if an audience asks no questions and makes no comments, looking into their faces tells me if they're engaged or bored and if they're following what I'm saying or are confused. Plus, the simple act of explaining something gives me a chance to see how well it flows in practice. It's quite common for me to think to myself "this just isn't working the way I'd hoped..." while I'm speaking, and places where I think that identify parts of the presentation where I need to go back and make revisions.

From the presentation at the NWC++UG (including some conversations I had with attendees afterwards), I took away two primary lessons. First, the guideline I was presenting ("Avoid overloading on universal references") is both valid and useful. That was reassuring. Second, the technical justification I give for this guideline needs a fair amount of work. In particular, I need to avoid getting side-tracked too much by the issues surrounding overloading on universal references and its interaction with compiler-generated special functions. Both lessons will help me produce a better book, and that's why I consider the talk a success.

At the same time, I was disappointed that there were bugs in my slides. I have pretty much a zero-tolerance mindset for errors in presentation materials (as well as books and articles and other forms of publication), because authors (including me) have essentially unlimited amounts of time to prepare the materials prior to making them public. (If there's insufficient time to prepare the materials properly, my feeling is that you shouldn't agree to present or publish them.) To be honest, I was also surprised that my materials had the errors that they did, because I hadn't skimped on prep time or QA work. I really thought they were ready to go. I was mistaken. In the future, I'll clearly have to find ways to do a better job.

Since giving the talk, I've corrected and revised the materials, and the corrected slide set is available here.

I hope you enjoy the talk, rocky parts notwithstanding.

Scott



Wednesday, July 10, 2013

C++11 Training Materials Updated--Now With C++14 Info!

For the seventh time since originally releasing them over three years ago, I've updated my annotated training materials for "The New C++". Until this update, "the new C++" referred to C++11, but with this revision, I'm including treatment of several features from draft C++14 that I believe will make it into the new new standard. As far as I know, this makes my training materials the first "book-like" publication that covers features in C++14.

In accord with my "free updates for life" policy, people who've purchased these materials are entitled to (and should recently have received notification about) the updated version.

The latest revision of the materials contains the usual mish-mash of bug-fixes and presentation improvements (the changelog, which is delivered along with the latest materials, has details), and those alone had me planning a release this summer. But when the C++14 CD was adopted in April, I knew I had to find a way to shoehorn some C++14 material into the course. The result is 20 new pages of information, including overviews of the following C++14 features:
  • Polymorphic lambdas (i.e., auto parameters).
  • Generalized lambda captures (makes "move capture" possible).
  • Variadic and perfect-forwarding lambdas.
  • Generalized function return type deduction (i.e., auto return types).
  • Reader/writer locks (i.e., std::shared_mutex and std::shared_lock).
There's also a page summarizing other C++14 features, along with a slew of references for people who want to read more about the new goodies in C++14.

If you haven't done so already, I hope you'll consider purchasing a copy of these materials. As always, a free sample PDF of the first ~40 pages is available here. Don't expect too much C++14 information in that sample, because the first serious treatment of C++14 features begins on slide 90. That's not me being coy. It's just how things worked out, given the flow of topics in the course.

Scott

Sunday, July 7, 2013

When decltype meets auto

C++11 has three sets of type deduction rules:
  • Those used in template type deduction.
  • Those used in auto type deduction.
  • Those used by decltype.
The rules for auto type deduction are the same as the rules for template type deduction, except that given a braced initializer such as { 1, 2, 3, 4 }, auto will deduce a std::initializer_list type (in the case of { 1, 2, 3, 4 }, it will be std::initializer_list<int>), while template type deduction will fail. (I have no idea why type deduction for auto and for templates is not identical. If you know, please tell me!) The rules for decltype are more complicated, because they don't just distinguish between lvalues and rvalues, they also distinguish between id-expressions (i.e., expressions consisting only of identifiers, e.g., variable or parameter names) and non-id-expressions. For details on all these rules, consult this article by Thomas Becker, this article by me, or this article by Herb Sutter (for auto) and this one by Andrew Koenig (for decltype).

But that's for C++11, which, among the C++-obsessed, is rapidly approaching yawnworthiness. Fortunately, C++14 is on the horizon, and one of the new features sure to stifle even the strongest of yawns is the ability to declare types using decltype(auto). This feature leads to two questions, only the first of which is rhetorical:
  1. You can declare what?
  2. During type deduction for decltype(auto), which type deduction rules are to be followed: those for auto or those for decltype?  Or does decltype(auto) have its own set of type deduction rules?
The answer is that decltype(auto) uses the decltype type deduction rules. The reason is that the type deduced by auto for an initializing expression strips the ref-qualifiers (i.e., lvalue references and rvalue references) and top-level cv-qualifiers (i.e., consts and volatiles) from the expression, but decltype does not. As a result, if you want the ref- and cv-qualifier stripping behavior, you can just write auto. If you don't, C++14 gives you the option of writing decltype(auto).

For variable declarations, this saves you the trouble of typing the initializing expression twice,
decltype(longAndComplexInitializingExpression) var =
  longAndComplexInitializingExpression;                     // C++11

decltype(auto) var = longAndComplexInitializingExpression;  // C++14
For auto function return types (another new C++14 feature), it's even more convenient. Consider a function template, grab, that authenticates a user and, assuming authentication doesn't throw, returns the result of indexing into some container-like object. Bear in mind that some standard containers return lvalue references from their operator[] operations (e.g., std::vector, std::deque), while others return proxy objects (e.g., std::vector<bool>), and I believe it would be valid to define a container such that invoking operator[] on an rvalue would yield an rvalue reference. Given all that, the proper "generic" way to declare this function in C++11 would be (I think):
template<typename ContainerType, typename IndexType>                // C++11
auto grab(ContainerType&& container, IndexType&& index) ->
  decltype(std::forward<ContainerType>(container)[std::forward<IndexType>(index)])
{
  authenticateUser();
  return std::forward<ContainerType>(container)[std::forward<IndexType>(index)];
}

In C++14, I believe this can be simplified to the following, thanks to function return type deduction and decltype(auto):
template<typename ContainerType, typename IndexType>                // C++14
decltype(auto) grab(ContainerType&& container, IndexType&& index)
{
  authenticateUser();
  return std::forward<ContainerType>(container)[std::forward<IndexType>(index)];
}

Scott

Tuesday, June 25, 2013

Presentation at NW C++ Users' Group on July 17

On Wednesday, July 17, I'll be giving a talk in Redmond, Washington, for the Northwest C++ Users' Group. Admission is free, and pizza will be provided. Here's the talk summary:

The Universal Reference/Overloading Collision Conundrum

To help address the confusion that arises when rvalue references become lvalue references through reference collapsing, Scott Meyers introduced the notion of “universal references.” In this presentation, he builds on this foundation by explaining that overloading functions on rvalue references is sensible and useful, while seemingly similar overloading on universal references yields confusing, unhelpful behavior. But what do you do when you want to write a perfect forwarding function (which requires universal references), yet you want to customize its behavior for certain types? If overloading is off the table, what’s on? In this talk, Scott surveys a variety of options.
Though Scott will give a one-slide overview of the idea behind universal references at the beginning of the presentation, attendees are encouraged to familiarize themselves with the notion in more detail prior to the talk. Links to written and video introductions to universal references are available here.
For time, location, and other details, consult the talk announcement.

I hope to see you there!

Scott

Tuesday, June 4, 2013

Presentation at Oslo C++ Users Group on Friday, 14 June

On Friday, 14 June (a week from this coming Friday), I'll be giving a talk in Oslo for the Oslo C++ Users Group. Admission is free. The topic I'll be addressing is:

Lambdas vs. std::bind in C++11 and C++14

C++ developers have long had a need to bind functions and arguments together for a later call. This is what makes it possible to invoke member functions on objects inside STL algorithms. The same technology can be used to create custom callback functions and to adapt function interfaces to different calling contexts.

In C++98, such binding was accomplished via std::bind1st and std::bind2nd. TR1 added std::tr1::bind, which was promoted to std::bind in C++11. But C++11 also introduced lambda expressions, and they’re slated to become even more powerful in C++14. That means that there are now two mechanisms in C++ for binding functions to arguments for later calls: std::bind and lambda expressions. In this talk, Scott examines the pros and cons of each approach, comparing them in terms of expressiveness, clarity, and efficiency, and he comes to the conclusion that one should almost always be used instead of the other. But which one?

This presentation assumes a basic familiarity with std::bind and C++11 lambda expressions.
For time and location, consult the talk announcement.

I hope to see you there!

Scott

Sunday, June 2, 2013

New ESDS Book: Effective Objective-C 2.0

I'm pleased to report that a new member of my Effective Software Development Series, Matt Galloway's Effective Objective-C 2.0, has just been published.

The first thing I noticed when I opened my copy was that the code is beautiful. In the pre-publication manuscripts I read, everything was black and white and plain, but in the published version (both print and electronic), code examples are syntax-colored using both multiple colors and a mixture of "normal" and bold font faces. If this makes it sound garish, I'm not describing it properly, because the result is wonderful.  Here, look:


If you're an Objective-C developer, I encourage you to check out Effective Objective-C 2.0. According to Matt's blog post announcing the birth of his book, there's a discount code worth 35% off if you buy the book via InformIT.

Scott

C&B Early Bird Rates Expire in a Week!

The special "Early Bird" registration rate for this year's C++ and Beyond (to be held December 9-12 near Seattle, Washington, USA) expires on June 9--a week from today. Attendance is strictly limited to 64 participants, and well over half those spots have already been taken. If you'd like to be part of C&B  2013, be sure to register soon. If you'd like to save $300, be sure that "soon" is no later than June 9.

In recent weeks, session topics for this year's C&B have begun to be posted, so the form of the program is starting to develop. In view of the fact that the first full draft of C++14 appeared in April and that final adoption is expected next year, it shouldn't be surprising that C++14 is emerging as an important theme. Though I haven't officially announced it yet, I plan to offer at least one session derived from material in the book I'm working on now. Until recently, I expected that book to be called Effective C++11, but my working title has now become Effective C++11/14.

C&B sessions will consider more than just language features. The one talk I have officially announced is Concurrent Data Structures and Standard C++, which focuses on an important threading-related topic that isn't addressed by C++11 or C++14. My guess is that there will be at least one session focusing on pure performance, too, though it's too early to say for sure.  (Herb and Andrei and I develop our session topics independently, typically motivated by whatever issues we're most passionate about at the time. The result is engaging sessions with extremely up-to-date content, but predicting the topics months in advance is difficult.)

To keep abreast of session topics as they are announced, subscribe to the C&B blog or mailing list, or follow us on Twitter or Facebook. You'll find links to all these things at the C&B home page. And don't forget that early bird registration expires on June 9!

Scott

Friday, May 31, 2013

"Effective C++11/14 Programming" in Oslo and London

In February, I announced that I'd be offering a new training seminar in Oslo, London, and Stuttgart. The seminar was Effective C++11 Programming. Because I was still working on the material for the seminar, I indicated that the course description I posted was preliminary.

A lot has happened since then. First, I finished my materials for the training course. Second, a draft version of C++14 was published. Third, I revised my materials to incorporate parts of C++14 that are particularly relevant and that seem likely to remain stable as C++14 is finalized. And fourth, I changed the name of the seminar to Effective C++11/14 Programming.  The course descriptions for the seminars I'll hold in Oslo and London have now been updated to reflect the change in course title and the no-longer-tentative list of topics. The links below will take you to these updated pages:
The information for the presentation in Stuttgart has not yet been updated. That should happen soon, but there may be further refinements to that description later this summer, because the version of the course I'll present there will benefit from experience I get delivering it in Oslo (world debut!) and London.

I hope to see you in Oslo, London, or Stuttgart to talk about how to make effective use of C++11 and C++14.

Scott

Wednesday, May 22, 2013

Lambdas vs. Closures

In recent days, I've twice found myself explaining the difference between lambdas and closures in C++11, so I figured it was time to write it up.

The term "lambda" is short for lambda expression, and a lambda is just that: an expression. As such, it exists only in a program's source code. A lambda does not exist at runtime.

The runtime effect of a lambda expression is the generation of an object. Such objects are known as closures.

Given

  auto f = [&](int x, int y) { return fudgeFactor * (x + y); };

the expression to the right of the "=" is the lambda expression (i.e., "the lambda"), and the runtime object created by that expression is the closure.

You could be forgiven for thinking that, in this example, f was the closure, but it's not. f is a copy of the closure. The process of copying the closure into f may be optimized into a move (whether it is depends on the types captured by the lambda), but that doesn't change the fact that f itself is not the closure. The actual closure object is a temporary that's typically destroyed at the end of the statement.

The distinction between a lambda and the corresponding closure is precisely equivalent to the distinction between a class and an instance of the class. A class exists only in source code; it doesn't exist at runtime. What exists at runtime are objects of the class type.  Closures are to lambdas as objects are to classes. This should not be a surprise, because each lambda expression causes a unique class to be generated (during compilation) and also causes an object of that class type--a closure--to be created (at runtime).

Scott

PS - I noted above that a closure is typically destroyed at the end of the statement in which it is created.  The exception to this rule is when you bind the closure to a reference. The simplest way to do that is to employ a universal reference,

  auto&& rrefToClosure = [&](int x, int y) { return fudgeFactor * (x + y); };

but binding it to an lvalue-reference-to-const will also work:

  const auto& lrefToConstToClosure = [&](int x, int y) { return fudgeFactor * (x + y); };

Monday, May 6, 2013

C++14 Lambdas and Perfect Forwarding

So the joke's on me, I guess.

In my discussion of std::move vs. std::forward, I explained that when you call std::forward, the expectation is that you'll pass a type consistent with the rules for template type deduction, meaning (1) an lvalue reference type for lvalues and (2) a non-reference type for rvalues.  I added,
If you decide to be a smart aleck and write [code passing an rvalue reference type], the reference-collapsing rules will see that you get the same behavior as [you would passing a non-reference type], but with any luck, your team lead will shift you to development in straight C, where you'll have to content yourself with writing bizarre macros.
Well.  As I said, the joke seems to be on me, because the standardization committee apparently consists largely of smart alecks.

Let me explain.

The recently-adopted C++14 CD includes beefy additions to lambda capabilities, including the support for polymorphic lambdas that Herb Sutter can't help but mention I've been whining about for years. This means that in C++14, we now have the expressive power that the Boost Lambda library has been offering since 2002. Ahem. But C++14 goes further, supporting also variadic lambdas, generalized captures (including capture-by-move), and, of particular relevance to this post, support for perfect forwarding.

Suppose we want to write a C++14 lambda that takes a parameter and perfect-forwards it to some function f:
auto forwardingLambda = [](auto&& param) { /* perfect-forward param to f */ };
Writing the perfect-forwarding call is easy, but it's probably not obvious how.  The normal way to perfect-forward something is to use std::forward, so we'd expect to write essentially this:
auto forwardingLambda = [](auto&& param) { f(std::forward<T>(param)); };
But, uh oh, there's no T to pass to std::forward.  (In the class generated from the lambda expression, there is, but inside the lambda itself, there's no type for param.)  So what do we pass to std::forward? We can hardly pass auto. (Consider what would happen if we had a lambda taking multiple parameters, each of type auto and each of which we wanted to forward. In that case, each std::forward<auto> would be ambiguous: which auto should std::forward use?)

The solution takes advantage of two observations. First, the type-deduction rules for auto in lambdas are the same as for templates. This means that if an lvalue argument is passed to the lambda, param's type will be an lvalue reference--exactly what we need for std::forward. If an rvalue argument is passed, its type will be an rvalue reference. For such parameters, we can recover the type to pass to std::forward by stripping it of its reference-ness. We could thus write forwardingLambda like this:
auto forwardingLambda = [](auto&& param) {
  f(std::forward<typename std::conditional<
        std::is_rvalue_reference<decltype(param)>::value,
        typename std::remove_reference<decltype(param)>::type,
        decltype(param)
      >::type>(param));
};

At least I think we could. I don't have a C++14 compiler to try it with, and, anyway, it's too gross to waste time on. It would be sad, indeed, if this is what the standardization committee expected us to do to effect perfect forwarding inside its spiffy new C++14 lambdas. Fortunately, it doesn't.

Which brings us to observation number two. As I noted near the beginning of this post,
If you decide to be a smart aleck and write [code passing an rvalue reference type to std::forward], the reference-collapsing rules will see that you get the same behavior as [you would passing a non-reference type].
That means that if param's type is an rvalue reference, there is no need to strip off its reference-ocity. Instead, you can smart aleck your way to success by simply passing that type directly to std::forward.  Like so:
auto forwardingLambda = [](auto&& param) { f(std::forward<decltype(param)>(param)); };
Frankly, this is more verbose than I'd prefer. One could imagine a world where you could say something like this:
auto forwardingLambda =
  [](<T1>&& param1, <T2>&& param2) { f(std::forward<T1>(param1), std::forward<T2>(param2)); };
But that's not the world we live in, and given that C++14 gives us polymorphic lambdas, variadic lambdas, and move-enabled lambdas, I'm not going to complain about the world of C++14 lambdas.  Except possibly to Herb :-)

Scott

Shared State from std::async remains special

In an earlier post, I pointed out that, contrary to the way things are generally described, it's not the futures returned from std::async that are special, it's the shared state they refer to that is. In the comments that followed that post, it was pointed out that this could change in C++14, but the proposal to that effect was rejected at the standardization committee meeting last month. As Anthony Williams put it in his blog post,
Herb Sutter's late paper on the behaviour of the destructor of std::future (N3630) was up next. This is a highly controversial topic, and yielded much discussion. The crux of the matter is that as currently specified the destructor of std::future blocks if it came from an invocation of std::async, the asynchronous function was run on a separate thread (with the std::launch::async policy), and that thread has not yet finished.
[...]
 Much of the discussion focused on the potential for breaking existing code, and ways of preventing this. The proposal eventually morphed into a new paper (N3637) which created 2 new types of future: waiting_future and shared_waiting_future. std::async would then be changed to return a waiting_future instead of a future. Existing code that compiled unchanged would then keep the existing behaviour; code that changed behaviour would fail to compile. Though the change required to get the desired behaviour would not be extensive, the feeling in the full committee was that this breakage would be too extensive, and the paper was also voted down in full committee.
C++14 now has CD ("committee draft") status, but that doesn't mean things can't change. A member of the committee emailed me as follows:
[The] paper on changing [the behavior of futures referring to shared state from std::async] was rejected, after a LOT of discussion. The discussion has continued on the reflector, and we may get a NB comment on the C++14 draft about it, but for now there is no change.
My impression is that many committee-watchers had considered a change in the specification for std::async to be a sure thing, but, as I wrote in yet another blog post, the committee tends to be quite conservative about the possibility of breaking existing code. At this point, that looks to be the line they're going to follow as regards the behavior of (the shared state corresponding to) futures produced by std::async.

Scott

Friday, April 5, 2013

Draft TOC for EC++11 Concurrency Chapter

A couple of months ago, I posted a draft Table of Contents (TOC) for Effective C++11. At that point, the entries for the concurrency chapter were so rough, they weren't even in the form of guidelines. Now they are, and I'm pleased to unveil my first draft TOC for the chapter on concurrency support:
  • Create tasks, not threads.
  • Pass std::launch::async if asynchronicity is essential.
  • Make std::threads unjoinable on all paths.
  • Be aware of varying thread handle destructor behavior.
  • Consider void futures for one-shot event communication.
  • Pass parameterless functions to std::thread, std::async, and std::call_once.
  • Use std::lock to acquire multiple locks.
  • Prefer non-recursive mutexes to recursive ones.
  • Declare future and std::thread members last.
  • Code for spurious failures in try_lock, condvar wait, and weak CAS operations.
  • Distinguish steady from unsteady clocks.
  • Use native handles to transcend the C++11 API.
  • Employ sequential consistency if at all possible.
  • Distinguish volatile from std::atomic.
This is a draft TOC. There's nothing final about the presence, order, or wording of these Items. Furthermore, unless either your mind-reading skills are better than I expect or my mind is easier to read than I fear, it will be tough for you to anticipate what I plan to say in these Items based only on the Item titles. Still, if you see advice above that you think is either especially good or especially bad, don't be shy about letting me know about it.

I'm especially pleased with the first Item on the list ("Create tasks, not threads"), because when I came up with that wording, a number of up-to-that-point disparate thoughts fell into place with a very satisfying thud.

When I began that Item, the only thing I knew I wanted to talk about was that thread construction can throw.  In Effective C++, Second Edition, my advice about dealing with the fact that operator new can throw is "Be prepared for out-of-memory conditions," so I started thinking about guidance such as "Be prepared for std::thread exhaustion." But what does it mean to be prepared to run out of threads? With operator new, there's a new handler you can configure. There's nothing like that for thread creation. And if you request n bytes from operator new and you can't get it, you may be able to scale down your request to, say, n/2 bytes, then try again. But if you request a new thread and that fails, what are you supposed to do, request half a thread?

I didn't like where that was going.  So I decided to think about avoiding the problem of running out of threads by not requesting them directly.  The prospective guideline "Prefer std::async to std::thread" had been an elephant in the room from the beginning, so I started playing with that idea.  But one of the other guidelines I was considering was "Pass std::launch::async if asynchronicity is essential" (it's on the draft TOC above), and the spec for std::async says that it throws the same exception as the std::thread constructor if you pass std::launch::async as the launch policy and std::async can't create a new thread. So advising people to use std::async was not sufficient, because using std::async with std::launch::async is no better than using std::thread for purposes of avoiding out-of-thread exceptions.

Though my primary focus had been on figuring out how to avoid exceptions due to too many threads, another issue I wanted to address was how to deal with oversubscription: creating more threads than can efficiently run on the machine. The way to avoid that problem is to use std::async with the default launch policy, and that got me to thinking about what to call a function (or function object--henceforth simply a "function") that could be run either synchronously or asynchronously.  A raw function doesn't qualify, because if you run a raw function asynchronously on a std::thread, there is no way to get the result of the function.  (And if the function throws an exception, std::terminate gets called.) Fortunately, C++11 offers a way to prepare a function for possible asynchronous execution: wrap it in std::packaged_task. How fortuitous! I had been looking for an excuse to discuss std::packaged_task, and its existence allowed me to assign a C++11 meaning to the otherwise squishy notion of a "task".

Thus the (still tentative) Item title was born.

What I really like about it is that it's both design advice and coding advice.  At a design level, creating tasks means developing independent pieces of functionality that may be run either synchronously or asynchronously, depending on the computational resources dynamically available on the machine.  At a coding level, it means taking functions and making them suitable for asynchronous execution, either by wrapping them with std::packaged_task or, preferably, by submitting them to std::async (which does the wrapping for you).

"Create tasks, not threads" thus gives me a context in which to discuss exceptions thrown by thread creation requests, the problem of oversubscription, std::thread, std::async, std::packaged_task, and tasks versus threads. Along the way I also get to discuss thread pools and the conditions under which it can make sense to bypass tasks and go straight to std::threads. (Can you see a cross-reference to "Use native handles to transcend the C++11 API"?  I can.)

Scott

Monday, March 25, 2013

Thread Handle Destruction and Behavioral Consistency

Suppose you fire up a thread in a function, then return from the function without joining or detaching the thread:
void doSomeWork();

void f1()
{
  std::thread t(doSomeWork);
  ...                          // no join, no detach
}
What happens?

Your program is terminated. The destructor of a std::thread object that refers to a "joinable" thread calls std::terminate.

Now suppose you do the same thing, except instead of firing up the thread directly, you do it via std::async:
void f2()
{
  auto fut = std::async(std::launch::async, doSomeWork);
  ...                          // no get, no wait
}
Now what happens?

Your function blocks until the asynchronously running thread completes. This is because the shared state for a std::async call causes the last future referring to that shared state to block in its destructor. Practically speaking, the destructor for the final future referring to a std::async shared state does an implicit join on the asynchronously running thread.

(The behavior I'm describing is mandated by the standard. Some implementations, notably Microsoft's, don't behave this way, because the standardization committee is considering changing this aspect of the standard, and Microsoft has implemented the revised behavior they believe will ultimately be adopted.)

Finally, suppose you create a packaged_task for the function to be run asynchronously, then you detach from the thread running the packaged_task, while retaining the future for the packaged_task:
void f3()
{
  std::packaged_task<void()> pt(doSomeWork);
  auto fut = pt.get_future();
  std::thread(std::move(pt)).detach();
  ...                          // no get, no wait
}
Now what happens?

Your function returns, even if the function to be run asynchronously is still running. In essence, the thread is detached. The std::thread object no longer refers to a joinable thread (thanks to the call to detach), so its destructor doesn't call std::terminate, and the std::future doesn't refer to shared state from a call to std::async, so its destructor doesn't perform an implicit join.

"So what's your point?," you may be wondering. Well, we can think of both std::thread objects and futures as handles for asynchronously running threads, and it's interesting to note that when such handles are destroyed, in some cases we terminate, in others we do an implicit join, and in others we do an implicit detach. As I've been known to put it, the standardization committee, when faced with a choice of three possible behaviors, chose all three.

In fact, I'm making this post at the request of a member of the standardization committee who thought it would be worthwhile to point out this inconsistency in the standard's treatment of thread handles. Whether anything will be done about it remains to be seen. If the specification for std::async is modified such that its shared state no longer causes the blocking behavior I described in my last post, that would eliminate the implicit join behavior, but I'm not convinced that such a change is a shoo-in for adoption. The problem is that such a change to the standard could silently break the behavior of existing programs (i.e., code that depends on the implicit join in future destructors that are the final reference to a shared state coming from std::async), and the standardization committee is generally very reluctant to adopt changes that can silently change the behavior of conforming programs.

Scott




Wednesday, March 20, 2013

std::futures from std::async aren't special!

This is a slightly-revised version of my original post. It reflects information I've since received that confirms some of the suppositions I'd been making, and it rewords some things to clarify them.


It's comparatively well known that the std::future returned from std::async will block in its destructor until the asynchronously running thread has completed:
void f()
{
  std::future<void> fut = std::async(std::launch::async, 
                                     [] { /* compute, compute, compute */ });

}                                    // block here until thread spawned by
                                     // std::async completes
Only std::futures returned from std::async behave this way, so I had been under the impression that they were special. But now I believe otherwise. I now believe that all futures must behave the same way, regardless of whether they originated in std::async. This does not mean that all futures must block in their destructors. The story is more nuanced than that.

There's definitely something special about std::async, because futures you get from other sources (e.g., from a std::promise or a std::packaged_task) don't block in their destructors.  But how does the specialness of std::async affect the behavior of futures?

C++11 futures are the caller's end of a communications channel that begins with a callee that's (typically) called asynchronously. When the called function has a result to communicate to its caller, it performs a set operation on the std::promise corresponding to the future.  That is, an asynchronous callee sets a promise (i.e., writes a result to the communication channel between it and its caller), and its caller gets the future (i.e., reads the result from the communications channel).

(As usual, I'm ignoring a host of details that don't affect the basic story I'm telling.  Such details include return values versus exceptions, waiting versus getting, unshared versus shared futures, etc.)

Between the time a callee sets its promise and its caller does a corresponding get, an arbitrarily long time may elapse. (In fact, the get may never take place, but that's a detail I'm ignoring.) As a result, the std::promise object that was set may be destroyed before a get takes place.  This means that the value with which the callee sets the promise can't be stored in the promise--the promise may not have a long enough lifetime.  The value also can't be stored in the future corresponding to the promise, because the std::future returned from std::async could be moved into a std::shared_future before being destroyed, and the std::shared_future could then be copied many times to new objects, some of which would subsequently be destroyed. In that case, which future would hold the value returned by the callee?

Because neither the promise nor the future ends of the communications channel between caller and callee are suitable for storing the result of an asynchronously invoked function, it's stored in a neutral location. This location is known as the shared state.  There's nothing in the C++ standard library corresponding to the shared state.  No class, no type, no function. In practice, I'm guessing it's implemented as a class that's templatized on at least the type of the result to be communicated between callee and caller.

The special behavior commonly attributed to futures returned by std::async is actually determined by the shared state. Once you know what to look for, this is indicated in only moderately opaque prose (for the standard) in 30.6.8/3, where we learn that
The thread object [for the function to be run asynchronously] is stored in the shared state and affects the behavior of any asynchronous return objects [e.g., futures] that reference that state.
and in 30.6.8/5, where we read:
the thread completion [for the function run asynchronously] synchronizes with [i.e., occurs before] [1] the return from the first function that successfully detects the ready status of the shared state or [2] with the return from the last function that releases the shared state, whichever happens first.
It's provision [2] that's relevant to us here. It tells us that if a future holds the last reference to the shared state corresponding to a call to std::async, that future's destructor must block until the thread for the asynchronously running function finishes. This is a requirement for any future object. There is nothing special about std::futures returned from std::async. Rather, the specialness of std::async is manifested in its shared state.

By the way, when I write that the "future's destructor must block," I don't mean it literally. The standard just says that the function releasing the last reference to a shared state corresponding to a std::async call can't return as long as the thread for the asynchronously running function is still executing. That behavior doesn't have to be implemented by having a future's destructor directly block. The future destructor might simply call a member function to decrement the reference count on the shared state. Inside that call, if the result of the decrement was zero and the shared state corresponded to a std::async call, the member function would simply wait until the asynchronously running thread completed before it returned to the future destructor.  From the future's point of view, it merely made a synchronous call to a function to decrement the reference count on the shared state.  The runtime behavior, however, would be that it could block until the asynchronously running thread completed.

The provision stating that, essentially, the last future referring to the shared state corresponding to a call to std::async must block until the associated thread has finished running is not popular. It's been proposed to be changed, and some standard library implementations (e.g., Microsoft's) have already revised their implementations to eliminate the "futures from std::async block in their destructors" behavior. That makes it trickier for you to test the behavior of this part of the standard, because the library you use may be deliberately nonconformant in this area.

Scott

PS - The reason I got caught up in this matter was that I was trying to find a way to perform the moral equivalent of a detach on a thread spawned via std::async.  Because I believed it was the std::future returned from std::async that was special, I started experimenting with things like moving that std::future into a std::shared_future in an attempt to return from the function calling std::async before the asynchronously running function had finished. But since it's the shared state that's special, not the std::future, this approach seems doomed. If you know how to get detach-like behavior when using std::async (without the cooperation of the function being run asynchronously), please let me know!

Wednesday, March 13, 2013

The Line-Length Problem

The bane of publishing code for consumption on a variety of platforms is that the available horizontal space varies.  I've blogged elsewhere that I want to avoid horizontal scrolling or bad line breaks in code, and I'm working with my publisher on how to do that. I'd like your help, too.

My understanding is that on Kindle and iPad (the platforms for which I currently have some data), the size of the text you see depends on both the font size specified in the document's CSS (which you, as a reader, typically can't control) as well as on the font size specified for the device (which you, as a reader, typically can).  The response to my earlier post about font choices showed a marked preference for code in a fixed-pitch font, so that's what I plan to use in Effective C++11. I've received the following information regarding how many characters fit on a line in Kindle and iPad in various combinations of device and CSS font sizes and device orientations:
It's interesting that on iPad, using the device in landscape mode shows two columns instead of one, thus providing less horizontal space per line. As an author, this means I actually have more room to work with when the device is used in portrait mode.

As you can see, if I limit my code displays to 45 characters per line, that should display without problems under all but two combinations of settings above.  I think that 45 characters per line would look strange on devices with more horizontal room, however, and the data also show that for many combinations of settings, I could use up to 60 characters per line (which is about what I'd have in a printed book).  Not being a fan of lowest-common-denominator constraint satisfaction (i.e., not penalizing people with devices and settings for wider lines for the benefit of people with devices and settings for narrower lines) my thought is that I'll format my code displays twice, once with no more than 45 characters/line and once with up to 60. As an example of what that could mean in real life, here's some sample code from Item 3 of the current (third) edition of Effective C++. As is all code in that book, it's in a proportional font:

Here it is formatted in a fixed-pitch font with no more than 60 characters/line:
class TextBlock {  
public:
  ...  

  const char&
  operator[](std::size_t position) const   // operator[] for
  { return text[position]; }               // const objects

  char&
  operator[](std::size_t position)       // operator[] for
  { return text[position]; }             // non-const objects

private:
  std::string text;  
};
And here it is again with no more than 45 characters/line:
class TextBlock {  
public:
  ...  

  // operator[] for const objects
  const char&
  operator[](std::size_t position) const
  { return text[position]; }

  // operator[] for non-const objects
  char& operator[](std::size_t position)
  { return text[position]; }

private:
  std::string text;  
};
Do you think it's worth my formatting code displays twice, once for wide lines and once for narrow ones, or do you think that using narrow formatting everywhere would suffice? Don't worry about how much work it is for me. That's my problem. Focus on what would work better for you.

Assuming for the moment that formatting the code twice is preferable, there's a logistical issue that has to be addressed, namely, how to write a single manuscript that can generate documents with one of two sets of code displays. My plan had been to use Microsoft Word and to use conditional text to switch between code displays, i.e., to set up "wide" and "narrow" configurations and hide the code displays that did not correspond to the current configuration. Alas, Microsoft Word 2010 (the version I'm using) lacks support for conditional text, something that quite surprised me, because both FrameMaker and OpenOffice/LibreOffice have had it for years.  Switching to a different document authoring system leads to new problems, because the publication process for my book is likely to involve Microsoft Word as the point of entry, meaning that even if I produce my manuscript using, say, OpenOffice, it's likely to be converted into Word as step 0, so anything Word can't represent is likely to be troublesome. (Before you bombard me with suggestions to use LaTeX or some other markup language, I'm on record as viewing those as inferior to WYSIWYG systems, as I detail here.)

Do you have any ideas about how I should approach the production of code displays that look good on all "reasonable" publication platforms and that can reasonably be produced and maintained by my authoring tool, which is highly likely to be Word 2010?

Thanks,

Scott


Friday, March 1, 2013

C++ and Beyond 2013 Registration has Begun!

Registration for this year's C++ and Beyond with me, Herb Sutter, and Andrei Alexandrescu is now open! Participation is limited to 64 developers. That's about two-thirds the demand of prior years, which means that not only will C&B 2013 sell out, it's likely to sell out quickly.

For details on this year's C++ and Beyond, consult its web site. Bottom line? If you're interested in joining a small group of developers as well as me, Herb, and Andrei for three intense days of C++ and C++-related topics December 9-12  at the Salish Lodge and Spa near Seattle, you'll want to register soon.

I look forward to seeing you there!

Scott

Saturday, February 23, 2013

Training Materials on Using C++ in Embedded Systems Now Available

In 2011, I presented my seminar on the effective use of C++ in embedded systems in German for the first time. The seminar went fairly well, I think (nobody died), but afterward it was clear to me that the German version of the seminar offers no advantages over the English version, even when the participants come from German-speaking countries. It seems that the developers interested in this topic either have no problem with English or actually prefer technical seminars in English. The first presentation of this seminar was therefore also the last.

I now have materials for a seminar I don't plan to ever offer again. I could leave them sitting on my hard drive, but that serves no one. Although they're no longer quite as current as the corresponding English materials, and although the language in them is far from perfect (I had help with the translation, and in places it's clear that more help would have been better), I think the slides still contain useful information. I've therefore decided to publish the materials on the Internet. They can be found here.

If you're interested in the use of C++ (primarily C++98) in embedded systems, I suggest you give my materials a try. If you find them helpful, I'm glad. If you find them unhelpful, I'm sorry, but you can console yourself with the fact that they're free :-)

Scott

Wednesday, February 13, 2013

Draft EC++11 Item

One of the most important reality checks I use to evaluate material I'm thinking about publishing is to use it in a training setting. Present a prospective guideline to a gaggle of professional C++ software developers, and you find out pretty quickly whether it comprises useful and practical advice. A prospective guideline I have for Effective C++11 is
Declare overriding functions override
I've drafted training slides for this guideline, and I'd like you to take a look and let me know what you think. (Links are at the end of this post.)

I don't normally ask for public feedback on material in the form of training slides, but in this case, I'd like to know what you think about some formatting decisions I'm in the process of making. I don't want to put a lot of effort into a manuscript only to find out later that I botched my choice of formatting options.

For over a decade, I've used a proportional font for my code examples.  Such a font uses differing widths for different characters.  An "m" is much wider than an "i", for example.  This has the advantage that I can get a lot more characters on a line, which is important when I'm trying to shoehorn commented code into pages or columns of relatively narrow width. It has the disadvantage that most programmers use a fixed-pitch font (one where all the characters are the same width), so the code I publish doesn't look like what they see in their daily work. In the example I'm making available, I'm using a fixed-pitch font, e.g.:
In a proportional font, it would look like this:
For a more extensive example of code in a proportional font, take a look at my C++11 training materials sample.

Question #1: Do you have a preference for which is used in the technical material you read?


Whenever I've had multiple colors for code at my disposal, I've used blue for "normal" code and red as a highlight color (see "const" in the code examples above). Setting aside the specific color choices (which have drawbacks, both for color-blind readers and when printed on monochrome printers), the key point is that I've used two colors for code. An obvious alternative is to use multiple colors to syntax-highlight the code, then find another means to highlight important sections.  One approach is to mimic highlighting pens by using yellow as a background color.  This is what was done with my Universal References article at isocpp.org:


Another approach is to use bold face to indicate highlighted code sections.  Here's that approach applied to the first code fragment I showed above:
Question #2: What approach to code coloring do you prefer?
  • One color for "normal" code, a second color for highlighted code.
  • Syntax-colored code with yellow highlighting.
  • Syntax-colored code with bold highlighting.
It's hard to form an opinion without more than the tiny code fragments I've used in this blog post, of course, so please take a look at my draft Item for "Declare overriding functions override." It's available in two versions:
I realize that's not all possible combinations of choices, but putting together the various combinations is more work than you might imagine.  That's why I've provided links to other examples where I've used different combinations of choices.

Please let me know what you think about the formatting choices I've described.  Of course, I welcome comments on the technical content, too :-)

Thanks for your help with this.

Scott

Monday, February 11, 2013

Public Presentations in 2013

I've just updated my Upcoming Talks page with the public presentations that are currently scheduled for 2013.  Most of them will take place in Europe (Oslo and London in June, and Stuttgart in November), but there are additional U.S. events in the works, so my talks at C++ and Beyond in December are unlikely to remain my only public presentations in the USA.

Of particular note is that I'll be giving presentations of my all-new-and-still-under-development seminar, Effective C++11 Programming, in Oslo, London, and Stuttgart, and there's a good chance that at least some of my talks at C++ and Beyond will focus on the effective use of features found only in C++11.  In view of the fact that my big project for this year is writing Effective C++11 (see this post and this one for details), it should come as no surprise that that topic will be a leitmotif for 2013.

As always, details of my upcoming public presentations are to be found at my Upcoming Talks page.

I hope to see you at at least one of my presentations this year.

Scott

C++ & Beyond 2013 Dates Announced: December 9-12

The official announcement about dates and location for C++ and Beyond 2013 just went out on the C&B Blog.  They're December 9-12 at the Salish Lodge in Snoqualmie, Washington, USA (not far from Seattle).

This fourth incarnation of C&B will harken back to its roots, in the sense that we're returning to our original venue, and we're also reinstituting some of the features of the initial C&B that had changed in the past couple of years.

For more details, please consult the official announcement on the C&B blog.

Scott