27.02.17

A pitfall in C++ low-level object creation and storage, and how to avoid it

While doing a couple of recent reviews, I’ve read lots of code trying to solve the same general problem. Some code wants to store an object of some type (more rarely, an object of one of several types), but it can’t do so yet. Some series of operations must occur first: a data structure must be put into the right state, or a state machine must transition just so. Think of this, perhaps, as the problem of having a conditionally initialized object.

Possibly-unsatisfactory approaches

One solution is to new the object at the proper time, storing nullptr prior to that instant. But this imposes the complexity of a heap allocation and handling its potential failure.
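A minimal sketch of that approach might look like the following (LazilyInitialized, init, and get are illustrative names, not code from any actual patch):

#include <new> // for std::nothrow

struct LazilyInitialized
{
    double* value = nullptr; // no double exists yet

    bool init(double d) {
      // Heap-allocate the object at the proper time; the allocation can
      // fail, and that failure has to be handled somehow.
      value = new (std::nothrow) double(d);
      return value != nullptr;
    }

    ~LazilyInitialized() { delete value; }

    bool hasValue() const { return value != nullptr; }
    double get() const { return *value; }
};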

Another solution is to use mozilla::Maybe<T>. But Maybe must track whether the T object has been constructed, even if that status is already tracked elsewhere. And Maybe is potentially twice T’s size.
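A quick sketch of that usage (assuming Maybe’s usual emplace/isSome interface):

#include "mozilla/Maybe.h"

double example()
{
    mozilla::Maybe<double> lazy;  // empty: no double constructed yet, and
                                  // Maybe separately tracks that fact
    // ...later, once initialization is finally possible...
    lazy.emplace(3.141592654);    // constructs the double in place

    return lazy.isSome() ? *lazy : 0.0; // access the stored value
}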

The power approach

The most common solution is to placement-new the object into raw storage. (This solution is particularly common in code written by especially knowledgeable developers.) mfbt has historically provided mozilla::AlignedStorage and mozilla::AlignedStorage2 to implement such raw storage. Each class has a Type member typedef implementing suitably aligned, adequately-sized storage. Bog-standard C++11 offers similar functionality in std::aligned_storage.
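In its std::aligned_storage form, the pattern looks roughly like this sketch:

#include <new>          // for placement new
#include <type_traits>  // for std::aligned_storage

// Raw, suitably sized and aligned storage: no double lives here yet.
std::aligned_storage<sizeof(double), alignof(double)>::type storage;

// ...once the prerequisite operations have completed...
double* d = new (&storage) double(3.141592654); // now a double lives there

// (double is trivially destructible, so no explicit d->~double() is needed
// before the storage is reused or goes away.)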

Unfortunately, this approach is extremely easy to subtly misuse. Misuse occurs regularly in Mozilla code, and it has triggered crashes.

A detour into the C++ object model

C++ offers very fine-grained, low-level control over memory. It’s absurdly easy to access memory representations with the right casts.

But just because it’s easy to access memory representations, doesn’t mean it’s easy to do it safely. The C++ object model generally restricts you to accessing memory according to the actual type stored there at that instant. You can cast a true float* to uint32_t*, but C++ says undefined behavior — literally any behavior at all — may occur if you read from the uint32_t*. These so-called strict aliasing violations are pernicious not just because anything could happen, but because often exactly what you wanted to happen, happens. Broken code often still “works” — until it unpredictably misbehaves, in C++’s view with absolute justification.
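Concretely, this is the sort of code the rule forbids (bitsOfFloat is a hypothetical helper, not something from real code):

#include <stdint.h>

uint32_t bitsOfFloat(float f)
{
    // Compiles fine, but the memory actually contains a float, not a
    // uint32_t, so reading through u is a strict aliasing violation and
    // undefined behavior, even when it appears to work.
    uint32_t* u = reinterpret_cast<uint32_t*>(&f);
    return *u;
}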

Here be dragons (CC-BY-SA, by bagogames)

There’s a big exception to the general by-actual-type rule: the memcpy exception. (Technically it’s a handful of rules that, in concert, permit memcpy. And there are other, non-memcpy-related exceptions.) memcpy(char*, const char*, size_t) has always worked to copy C-compatible objects around without regard to types, and C++’s object model permits this by letting you safely interpret the memory of a T as chars or unsigned chars. If T is trivially copyable, then:

T t1; 
char buf[sizeof(T)];
memcpy(buf, &t1, sizeof(T)); // stash bytes away
// ...time elapses during execution...
memcpy(&t1, buf, sizeof(T)); // restore them
// t1 safely has its original value

That’s the first guarantee: you can safely copy a T’s bytes elsewhere via the aforementioned character types, then back into that T, and things will work. And second:

T t1, t2;
memcpy(&t1, &t2, sizeof(T)); // t1 safely has t2's value

You can safely copy a T into another T, and the second T will have the first T’s value.

A C++-compatible placement-new approach

The placement-new-into-storage approach looks like this. (Real code would almost always use something more interesting than double, but this is the gist of it.)

#include <new> // for placement new

struct ContainsLazyDouble
{
  private:
    // Careful: align the storage consistent with the type it'll store.
    alignas(double) char lazyData[sizeof(double)];
    bool hasDouble_;

    // Indirection through these functions, rather than directly casting
    // lazyData to double*, evades a buggy GCC -Wstrict-aliasing warning.
    void* data() { return lazyData; }
    const void* data() const { return lazyData; }

  public:
    ContainsLazyDouble() : hasDouble_(false) {}

    void init(double d) {
      new (data()) double(d);
      hasDouble_ = true;
    }

    bool hasDouble() const { return hasDouble_; }
    double d() const {
      return *reinterpret_cast<const double*>(data());
    }
};

ContainsLazyDouble c;
// c.d(); // BAD, not initialized as double
c.init(3.141592654);
c.d(); // OK

This is safe. c.lazyData was originally char data, but we constructed a new double there, so henceforth that memory contains a double (even though it wasn’t declared as double), not chars (even though it was declared that way). The actual type stored in lazyData at that instant is properly respected.

A C++-incompatible extension of the placement-new approach

It’s safe to copy a T by the aforementioned character types. But it’s only safe to do so if 1) T is trivially copyable, and 2) the copied bytes are interpreted as T only within an actual T object. Not into a location to be (re)interpreted as T, but into a T. It’s unsafe to copy a T into a location that doesn’t at that instant contain a T, then reinterpret it as T.
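Stated generically, the forbidden pattern looks like this sketch (unsafeRoundTrip is a hypothetical name):

#include <string.h> // for memcpy

double unsafeRoundTrip(double original)
{
    alignas(double) char buf[sizeof(double)];
    memcpy(buf, &original, sizeof(double)); // fine: stashing the bytes away

    // NOT fine: no double object was ever created in buf, so reading one
    // out of it violates the actual-type rule even though the bytes match.
    return *reinterpret_cast<double*>(buf);
}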

So what happens if we use ContainsLazyDouble’s implicitly-defined default copy constructor?

ContainsLazyDouble c2 = c;

This default copy constructor copies ContainsLazyDouble member by member, according to their declared types. So c.lazyData is copied as a char array that contains the object representation of c.d(). c2.lazyData therefore contains the same char array. But it doesn’t contain an actual double. It doesn’t matter that those chars encode a double: according to C++, that location does not contain a double.

Dereferencing reinterpret_cast<const double*>(data()) therefore mis-accesses an array of chars by the wrong type, triggering undefined behavior. c2.d() might seem to work if you’re lucky, but C++ doesn’t say it must work.

This is extraordinarily subtle. SpiderMonkey hackers missed this issue in their code until bug 1269319 was debugged and a (partly invalid, on other grounds) GCC compiler bug was filed. Even (more or less) understanding the spec intricacy, I missed this issue in some of the patchwork purporting to fix that bug. (Bug 1341951 provides an actual fix for one of these remaining issues.) Another SpiderMonkey hacker almost introduced another instance of this bug; fortunately I was reviewing the patch and flagged this issue.

Using AlignedStorage<N>::Type, AlignedStorage2<T>::Type, or std::aligned_storage<N>::type doesn’t avoid this problem. We mitigated the problem by deleting AlignedStorage{,2}::Type’s copy constructors and assignment operators, which would always have performed actual-type-unaware copying. (Of course we can’t modify std::aligned_storage.) But only careful scrutiny prevents other code from memcpying those types, and memcpy, too, copies without respecting the actual type stored there at that instant. In practice, developers do try to use memcpy for this when copy construction and assignment are forbidden, and reviewers can miss it.
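The shape of that mitigation, roughly (a sketch, not the actual mfbt code):

// Raw storage whose member-by-member copy could never respect the actual
// type stored in it, so copying and assignment are simply forbidden.
struct RawDoubleStorage
{
    alignas(double) char data[sizeof(double)];

    RawDoubleStorage() = default;
    RawDoubleStorage(const RawDoubleStorage&) = delete;
    RawDoubleStorage& operator=(const RawDoubleStorage&) = delete;
};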

What’s the solution to this problem?

As long as memcpy and memmove exist, this very subtle issue can’t be eradicated. There is no silver bullet.

The best solution is simple: don’t hand-roll raw storage. This problem doesn’t exist in Maybe, mozilla::Variant, mozilla::MaybeOneOf, mozilla::Vector, and other utility classes designed to possibly hold a value. (Sometimes because we just fixed them.)
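For example, ContainsLazyDouble collapses to something like this sketch atop mozilla::Maybe, and the compiler-generated copy operations then copy an actual double (or nothing) rather than raw bytes:

#include "mozilla/Maybe.h"

struct ContainsLazyDouble
{
    mozilla::Maybe<double> d_;

    void init(double d) { d_.emplace(d); }
    bool hasDouble() const { return d_.isSome(); }
    double d() const { return *d_; }
};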

But if you must hand-roll a solution, construct an object of the actual type into your raw storage. It isn’t enough to copy the bytes of an object of the actual type into raw storage, then treat the storage as that actual type. For example, in ContainsLazyDouble, a correct copy constructor that respects C++ strict aliasing rules would be:

#include <string.h> // for memcpy

// Add this to ContainsLazyDouble:
ContainsLazyDouble(const ContainsLazyDouble& other)
  : hasDouble_(other.hasDouble_)
{
  if (hasDouble_)
  {
    // The only way to create a free-floating T in raw storage is to
    // placement-new it, usually also invoking T's copy constructor.
    new (data()) double(other.d());

    // This alternative would also be valid, if almost pointlessly bizarre,
    // but only because double is trivially copyable.  (It wouldn't be safe
    // with a type that has a user-defined copy constructor, or virtual
    // functions, or that must do anything at all to initialize the new
    // object.)
    //
    //   new (data()) double; // creates an uninitialized double
    //   memcpy(lazyData, other.lazyData, sizeof(lazyData)); // sets it to other.d()
  }
}

// ...and this to the using code:
ContainsLazyDouble c2 = c; // invokes the now-safe copy constructor

The implicitly-generated copy assignment operator will usually require similar changes — or it can be = deleted.
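A matching copy assignment operator might look like the following sketch (it relies on double being trivially destructible; a type with a real destructor would have to be destroyed before its storage is reused):

// Add this to ContainsLazyDouble as well:
ContainsLazyDouble& operator=(const ContainsLazyDouble& other)
{
  if (this != &other) {
    if (other.hasDouble_) {
      // Construct an actual double in our storage, overwriting any double
      // already there (fine only because double is trivially destructible).
      new (data()) double(other.d());
    }
    hasDouble_ = other.hasDouble_;
  }
  return *this;
}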

Final considerations

AlignedStorage seems like a good idea. But it’s extremely easy to run afoul of a copy operation that doesn’t preserve the actual object type, whether via the default copy constructor or assignment operator or via memcpy in entirely separate code. We’re removing AlignedStorage{,2} so these classes can’t be misused this way. (The former has just been removed from the tree; the latter has many more users and will be harder to kill.) It’s possible to use them correctly, but misuse is too easy for us to leave these loaded guns lying around in the tree.

If it’s truly necessary to hand-roll a solution, you should hand-roll all of it, all the way down to the buffer of unsigned char with an alignas() attribute. Writing this correctly is expert-level C++. But it was expert-level C++ even with the aligned-storage helper. You should have to know what you’re doing, and you shouldn’t need the std::aligned_storage crutch to do it.

Moreover, I hope that the extra complexity of hand-rolling discourages non-expert reviewers from reviewing such code. I’d feel significantly more confident Mozilla won’t repeat this mistake if I knew that every use of (for example) alignas were reviewed by (say) froydnj or me. Perhaps we can get some Mercurial tooling in place to enforce a review requirement along those lines.

In the meantime, I hope I’ve made the C++ developers out there who read this at least somewhat aware of this pitfall, and at best competent to avoid it.

26.09.14

Minor changes are coming to typed arrays in Firefox and ES6

JavaScript has long included typed arrays to efficiently store numeric arrays. Each kind of typed array had its own constructor. Typed arrays inherited from element-type-specific prototypes: Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, and so on. Each of these prototypes contained useful methods (set, subarray) and properties (buffer, byteOffset, length, byteLength) and inherited from Object.prototype.

This system is a reasonable way to expose typed arrays. Yet as typed arrays have grown, it’s grown unwieldy. When a new typed array method or property is added, distinct copies must be added to Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, &c. Likewise for “static” functions like Int8Array.from and Float64Array.from. These distinct copies cost memory: a small amount, but across many tabs, windows, and frames it can add up.

A better system

ES6 changes typed arrays to fix these issues. The typed array functions and properties now work on any typed array.

var f32 = new Float32Array(8); // all zeroes
var u8 = new Uint8Array([0, 1, 2, 3, 4, 5, 6, 7]);
Uint8Array.prototype.set.call(f32, u8); // f32 contains u8's values

ES6 thus only needs one centrally-stored copy of each function. All functions move to a single object, denoted %TypedArray%.prototype. The typed array prototypes then inherit from %TypedArray%.prototype to expose them.

assertEq(Object.getPrototypeOf(Uint8Array.prototype),
         Object.getPrototypeOf(Float64Array.prototype));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array.prototype)),
         Object.prototype);
assertEq(Int16Array.prototype.subarray,
         Float32Array.prototype.subarray);

ES6 also changes the typed array constructors to inherit from the %TypedArray% constructor, on which functions like Float64Array.from and Int32Array.of live. (Neither function is in Firefox yet, but they’re coming soon!)

assertEq(Object.getPrototypeOf(Uint8Array),
         Object.getPrototypeOf(Float64Array));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array)),
         Function.prototype);

I implemented these changes a few days ago in Firefox. Grab a nightly build and test things out with a new profile.

Conclusion

In practice this won’t affect most typed array code. Unless you depend on the exact [[Prototype]] sequence or expect typed array methods to only work on corresponding typed arrays (and thus you’re deliberately extracting them to call in isolation), you probably won’t notice a thing. But it’s always good to know about language changes. And if you choose to polyfill an ES6 typed array function, you’ll need to understand %TypedArray% to do it correctly.

08.09.10

New ES5 strict mode support: now with poison pills!

tl;dr

Don’t try to use the arguments or caller properties of functions created in strict mode code. Don’t try to use the callee or caller properties of arguments objects corresponding to invocations of functions in strict mode code. Don’t try to use the caller property of a function if it might be called from strict mode code. You are in for an unpleasant surprise (a thrown TypeError) if you do.

function strict() { "use strict"; return arguments; }
function outer() { "use strict"; return inner(); }
function inner() { return inner.caller; }

strict.caller;    // !!! BAD IDEA
strict.arguments; // !!! BAD IDEA
strict().caller;  // !!! BAD IDEA
strict().callee;  // !!! BAD IDEA
outer();          // !!! BAD IDEA

Really, it’s best not to access the caller of a function, the current function (except by naming it), or the arguments for a given function (except via arguments or by use of the named parameter) at all.

ES5 strict mode: self-limitation, not wish fulfillment

ES5 introduces the curious concept of strict mode. Strict mode, whose name and concept derive from the similar feature in Perl, deliberately reduces the things you can do in JavaScript. Instead of a feature, it’s really more the absence of several features, within the scope of the strict-annotated code: with, particularly intrusive forms of eval, silent failure of writes to non-writable properties, silent failure of deletions of non-configurable properties, implicit creation of global-object properties, and so on. The goal of these removals is to simplify both the reasoning necessary to understand such code and its implementation in JavaScript engines: to sand away some rough edges in the language.

Magical stack-introspective properties of functions and arguments

Consider this code, and note the expected behavior (expected per the web, but not as part of ES3):

function getSelf() { return arguments.callee; }
assertEq(getSelf(), getSelf); // arguments.callee is the enclosing function

function outer()
{
  function inner() { return arguments.callee.caller; } // inner.caller === outer
  return inner();
}
assertEq(outer(), outer); // fun.caller === nearest function in stack that called fun, or null

function args2()
{
  return args2.arguments;
}
assertEq(args2(17)[0], 17); // fun.arguments === arguments object for nearest call to fun in stack, or null

Real-world JavaScript implementations take many shortcuts for top performance. These shortcuts are not (supposed to be) observable, except by timing the relevant functionality. Two common optimizations are function inlining and avoiding creating an arguments object. The above “features” play havoc with both of these optimizations (as well as others, one of which will be the subject of a forthcoming post).

Inlining a function should conceptually be equivalent to splatting the function’s contents in that location in the calling function and doing some α-renaming to ensure no names in the splatted body conflict with the surrounding code. The ability to access the calling function defeats this: there’s no function being invoked any more, so what does it even mean to ask for the function’s caller? (Don’t simply say you’d hard-code the surrounding function: how do you know which property lookups in the inlined code will occur upon precisely the function being called, looking for precisely the caller property?) It is also possible to access a function’s arguments through fun.arguments. While the “proper” behavior here is more obvious, implementing it would be a large hassle: either the arguments would have to be created when the function was inlined (in the general case where you can’t be sure the function will never be used this way), or you’d have to inline code in such a way as to be able to “work backward” to the argument values.

Speaking of arguments: in offering access to the corresponding function via arguments.callee, the arguments object has the same problems as fun.caller. It also presents one further problem: in (some, mostly old) engines, arguments.caller provides access to the variables declared within that function when it was most recently called. (If you’re thinking security/integrity/optimization hazard, well, you now know why engines no longer support it.)

In sum these features are bad for optimization. Further, since they’re a form of dynamic scoping, they’re basically bad style in many other languages already.

Per ES5, SpiderMonkey no longer supports this stack-inspecting magic when it interacts with strict mode

As of the most recent Firefox nightly, SpiderMonkey now rejects code like that given above when it occurs in strict mode (more or less). (The properties in question are now generally implemented through a so-called “poison pill” mechanism, an immutable accessor property which throws a TypeError when retrieved or set.) The specific scenarios which we reject are as follows.

First, attempts to access the caller or arguments (except by directly naming the object) of a strict mode function throw a TypeError, because these properties are poison pills:

function strict()
{
  "use strict";
  strict.caller;    // !!! TypeError
  strict.arguments; // !!! TypeError
  return arguments; // direct name: perfectly cromulent
}
strict();

Second, attempts to access the enclosing function or caller variables via the arguments of a strict mode function throw a TypeError. These properties too are poison pills:

function strict()
{
  "use strict";
  arguments.callee; // !!! TypeError
  arguments.caller; // !!! TypeError
}
strict();

Third (and most trickily, because non-strict code is affected), attempts to access a function’s caller when that caller is in strict mode will throw a TypeError. This isn’t a poison pill, because if the "use strict" directive weren’t there it would still “work”:

function outer()
{
  "use strict";
  return inner();
}
function inner()
{
  return inner.caller; // !!! TypeError
}
outer();

But if there’s no strict mode in sight, nothing will throw exceptions, and what worked before will still work:

function fun()
{
  assertEq(fun.caller, null); // global scope
  assertEq(fun.arguments, arguments);
  assertEq(arguments.callee, fun);
  arguments.caller; // won't throw, won't do anything special
  return arguments;
}
fun();

Conclusion

With these changes, which are required by ES5, stack inspection is slowly going the way of the dodo. Don’t use it or rely on it! Even if you never use strict mode, beware the third change, for it still might affect you if you provide methods for other code to call. (But don’t expect to be able to avoid strict mode forever: I expect all JavaScript libraries will adopt strict mode in short order, given its benefits.)

(For those curious about new Error().stack, we’re still considering what to do about it. Regrettably, we may need to kill it for information-privacy reasons too, or at least censor it to something less useful. Nothing’s certain yet; stay tuned for further announcements should we make changes.)

You can experiment with a version of Firefox with these changes by downloading a TraceMonkey branch nightly; these changes should also make their way into mozilla-central nightlies shortly, if you’d rather stick to trunk builds. (Don’t forget to use the profile manager if you want to keep the settings you use with your primary Firefox installation pristine.)

22.08.10

Incompatible ES5 change: literal getter and setter functions must now have exactly zero or one arguments

ECMAScript accessor syntax in SpiderMonkey

For quite some time SpiderMonkey and Mozilla-based browsers have supported user-defined getter and setter functions (collectively, accessors), both programmatically and syntactically. The syntaxes for accessors were once legion, but SpiderMonkey has pared them back almost to the syntax recently codified in ES5 (and added new syntax where required by ES5).

// All valid in ES5
var a = { get x() { } };
var b = { get "y"() { } };
var c = { get 2() { } };

var e = { set x(v) { } };
var f = { set "y"(v) { } };
var g = { set 2(v) { } };

SpiderMonkey has historically parsed literal accessors using a slightly-tweaked version of its function parsing code. Therefore, as previously explained, SpiderMonkey would accept essentially anything which could follow function in a function expression as valid accessor syntax in object literals.

ES5 requires accessors have exact numbers of arguments

A consequence of parsing accessors using generalized function parsing is that SpiderMonkey accepted some nonsensicalities, such as no-argument setters or multiple-argument getters or setters:

var o1 = { get p(a, b, c, d, e, f, g) { /* why have any arguments? */ } };
var o2 = { set p() { /* to what value? */ } };
var o3 = { set p(a, b, c) { /* why more than one? */ } };

ES5 accessor syntax sensibly deems such constructs errors: a conforming ES5 implementation would reject all of the above statements.

SpiderMonkey is changing to follow ES5: getters require no arguments, setters require one argument

SpiderMonkey has now been changed to follow ES5. There seemed little to no gain in continuing to support bizarre numbers of arguments when the spec counseled otherwise, and any code which does end up broken is easily fixed.

As always, you can experiment with a version of Firefox with these changes to accessor syntax by downloading a nightly from nightly.mozilla.org. (Don’t forget to use the profile manager if you want to keep the settings you use with your primary Firefox installation pristine.)