20.10.11

Implementing mozilla::ArrayLength and mozilla::ArrayEnd, and some followup work

Jeff @ 16:03

In my last post I announced the addition of mozilla::ArrayLength and mozilla::ArrayEnd to the Mozilla Framework Based on Templates, and I noted I was leaving a description of how these methods were implemented to a followup post. This is that post.

The C++ template trick used to implement mozilla::ArrayLength

The implementations of these methods are surprisingly simple:

template<typename T, size_t N>
size_t
ArrayLength(T (&arr)[N])
{
  return N;
}

template<typename T, size_t N>
T*
ArrayEnd(T (&arr)[N])
{
  return arr + ArrayLength(arr);
}

The trick is this: you can templatize an array based on its compile-time length. Here we templatize both methods on: the type of the elements of the array, so that each is polymorphic; and the number of elements in the array. Then inside the method we can refer to that length, a constant known at compile time, and simply return it to implement the desired semantics.

Templatizing on the length of an array may not seem too unusual. The part that may be a little unfamiliar is how the array is described as a parameter of the template method: T (&arr)[N]. This declares the argument to be a reference to an array of N elements of type T. Its being a reference is important: we don’t actually care about the array contents at all, and we don’t want to copy them to call the method. All we care about is its type, which we can capture without further cost using a reference.
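Because the parameter type matches only genuine arrays, passing anything else fails to compile. A quick sketch (the variables here are hypothetical, purely for illustration):

int nums[] = { 1, 2, 3 };
int* ptr = nums;

size_t a = ArrayLength(nums); // compiles: T = int, N = 3
size_t b = ArrayLength(ptr);  // error: no matching function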

This technique is uncommon, but it’s not new to Mozilla. Perhaps you’ve wondered at some point why Mozilla’s string classes have both EqualsLiteral and EqualsASCII: the former for use only when comparing to “an actual literal string”, the latter for use when comparing to any const char*. You can probably guess why this interface occurs now: EqualsLiteral is a method templatized on the length of the actual literal string passed to it. It can be more efficient than EqualsASCII because it knows the length of the compared string.
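Mozilla’s actual declarations differ, but the shape of the trick might look roughly like this (a hypothetical sketch, not the real string class):

#include <stddef.h>
#include <string.h>

// Hypothetical sketch of a string class using the trick.
class ExampleString
{
  public:
    ExampleString(const char* aData, size_t aLength)
      : mData(aData), mLength(aLength) { }

    // Accepts any const char*; the length must be computed at runtime.
    bool EqualsASCII(const char* aStr) const
    {
      return mLength == strlen(aStr) &&
             memcmp(mData, aStr, mLength) == 0;
    }

    // Accepts only an actual character array (a string literal, say);
    // N includes the terminating '\0', so the string's length is N - 1,
    // known at compile time -- no strlen() call needed.
    template<size_t N>
    bool EqualsLiteral(const char (&aStr)[N]) const
    {
      return mLength == N - 1 &&
             memcmp(mData, aStr, N - 1) == 0;
    }

  private:
    const char* mData;
    size_t mLength;
};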

Using this trick in other code

I did most of the work to convert NS_ARRAY_LENGTH to ArrayLength with a script. But I still had to look over the results of the script to make sure things were sane before proceeding with it. In doing so, I noticed a decent number of places where an array was created, then it and its length were being passed as arguments to another method. For example:

void nsHtml5Atoms::AddRefAtoms()
{
  NS_RegisterStaticAtoms(Html5Atoms_info, ArrayLength(Html5Atoms_info));
}

Using ArrayLength here is safer than hard-coding a length. But safer still would be to not require callers to pass an array and a length separately — rather to pass them together. We can do this by pushing the template trick down another level, into NS_RegisterStaticAtoms (or at least into an internal method used everywhere, if the external API must be preserved for some reason):

static nsresult
RegisterStaticAtoms(const nsStaticAtom* aAtoms, PRUint32 aAtomCount)
{
  // ...old implementation...
}

template<size_t N>
nsresult
NS_RegisterStaticAtoms(const nsStaticAtom (&aAtoms)[N])
{
  return RegisterStaticAtoms(aAtoms, N);
}

The pointer-and-length method generally still needs to stick around somewhere, and it does in this rewrite here. It’s just that it wouldn’t be the interface user code would see, or it wouldn’t be the primary interface (for example, it might be protected or similar).
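With the templatized overload in place, the call site above shrinks to something like this sketch:

void nsHtml5Atoms::AddRefAtoms()
{
  // The length is deduced from the array's type: there's no separate
  // count argument to get wrong.
  NS_RegisterStaticAtoms(Html5Atoms_info);
}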

NS_RegisterStaticAtoms was just one such method which could use improvement. In a quick skim I also see:

…as potential spots that could be improved — at least for some callers — with some additional templatization on array length.

I didn’t look super-closely at these, so I might have missed some. Or I might have been over-generous in what could be rewritten to templatize on length, seeing only the one or two places that pass fixed lengths and missing the majority of cases that don’t. But there’s definitely a lot of cleaning that could be done here.

A call for help

Passing arrays and lengths separately is dangerous if you don’t know what you’re doing. The trick used here eliminates that separation in certain cases. The more we can use this pattern, the more we can fundamentally reduce the danger of separating data from its length.

I don’t have time to do the above work myself. (I barely had time to do the ArrayLength work, really. I only did it because I’d sort of started the ball rolling in a separate bug, so I felt some obligation to make sure it got done.) And it’s not particularly hard work, nor does it require especial knowledge of the relevant code. It’s a better use of my time for me to work on JavaScript engine code, or on other code I know particularly well, than to do this work. But for someone interested in getting started working on Gecko C++ code, it would be a good first project. I’ve filed bug 696242 for this task; if anyone’s interested in a good first bug for getting into Mozilla C++ coding, feel free to start with that one. If you have questions about what to do, in any way, feel free to ask them there.

On the other hand, if you have any questions about the technique in question, or the way it’s used in Mozilla, feel free to ask them here. But if you want to contribute to fixing the issues I’ve noted, let’s keep them to the bug, if possible.

Computing the length or end of a compile-time constant-length array: {JS,NS}_ARRAY_{END,LENGTH} are out, mozilla::Array{Length,End} are in

Jeff @ 16:03

Determining the length of a fixed-length array

Suppose in C++ you want to perform variations of some simple task several times. One way to do this is to loop over the variations in an array to perform each task:

#include <stdlib.h>

/* Defines the necessary envvars to 1. */
int setVariables()
{
  static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" };
  for (int i = 0; i < 6; i++)
    if (0 > setenv(names[i], "1", 1))
      return -1;
  return 0;
}

Manually looping by index is prone to error. One particular issue with loop-by-index is that you must correctly compute the extent of iteration. Hard-coding a constant works, but what if the array changes? The constant must also change, which isn’t obvious to someone not looking carefully at all uses of the array.

The traditional way to get an array’s length is with a macro using sizeof:

#define NS_ARRAY_LENGTH(arr)  (sizeof(arr) / sizeof((arr)[0]))

This works but has problems. First, it’s a macro, which means it has the usual macro issues. For example, macros are untyped, so you can pass in “wrong” arguments to them and may not get type errors. That leads to the second problem: NS_ARRAY_LENGTH cheerfully accepts non-array pointers and returns a completely bogus length.

  const char* str = "long enough string";
  char* copy = (char*) malloc(NS_ARRAY_LENGTH(str)); // usually 4 or 8
  strcpy(copy, str); // buffer overflow!

Introducing mozilla::ArrayLength and mozilla::ArrayEnd

Seeing an opportunity to both kill a moderately obscure macro and to further improve the capabilities of the Mozilla Framework Based on Templates, I took it. Now, rather than use these macros, you can #include "mozilla/Util.h" and use mozilla::ArrayLength to compute the length of a compile-time constant-length array. You can also use mozilla::ArrayEnd to compute arr + ArrayLength(arr). Both these methods (not macros!) use C++ template magic to only accept arrays with compile-time-fixed length, failing to compile if anything else is provided.
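For example, the loop from earlier can compute its bound from the array itself. A sketch:

#include <stdlib.h>

#include "mozilla/Util.h"

/* Defines the necessary envvars to 1; the iteration bound now tracks
   the array automatically instead of being hard-coded. */
int setVariables()
{
  static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" };
  for (size_t i = 0; i < mozilla::ArrayLength(names); i++)
    if (0 > setenv(names[i], "1", 1))
      return -1;
  return 0;
}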

Limitations

Unfortunately, ISO C++ limitations make it impossible to write a method completely replacing the macro. So the macros still exist, and in rare cases they remain the correct answer.

The array can’t depend on an unnamed type (class, struct, union) or a local type

According to C++ §14.3.1 paragraph 2, “A local type [or an] unnamed type…shall not be used as a template-argument for a template type-parameter.” C++ makes this a compile error:

size_t numsLength()
{
  // unnamed struct, also locally defined
  static const struct { int i; } nums[] = { { 1 }, { 2 }, { 3 } };

  return mozilla::ArrayLength(nums);
}

It’s easy to avoid both limitations: move local types to global code, and name them.

// now defined globally, and with a name
struct Number { int i; };
size_t numsLength()
{
  static const Number nums[] = { { 1 }, { 2 }, { 3 } };
  return mozilla::ArrayLength(nums);
}

mozilla::ArrayLength(arr) isn’t a constant, NS_ARRAY_LENGTH(arr) is

Some contexts in C++ require a compile-time constant expression: template parameters; local array lengths (in C++, though not in C99); array lengths in typedefs; the values of enum initializers; static/compile-time assertions (which are usually bootstrapped off these other locations); and perhaps others. A function call, even one evaluating to a compile-time constant, is not a constant expression.

One other context doesn’t require a constant but strongly wants one: the values of static variables, inside classes and methods and out. If the value is a function call, even one that computes a constant, the compiler might emit a runtime static initializer, delaying startup.

The long and short of it is that everything in the code below is a bad idea:

int arr[] = { 1, 2, 3, 5 };
static size_t len = ArrayLength(arr); // not an error, but don’t do it
void t(JSContext* cx)
{
  js::Vector<int, ArrayLength(arr)> v(cx); // non-constant template parameter
  int local[ArrayLength(arr)]; // variable-length arrays aren’t valid C++
  typedef int Mirror[ArrayLength(arr)]; // non-constant array length
  enum { L = ArrayLength(arr) }; // non-constant initializer
  PR_STATIC_ASSERT(4 == ArrayLength(arr)); // special case of one of the others
}

In these situations you should continue to use NS_ARRAY_LENGTH (or in SpiderMonkey, JS_ARRAY_LENGTH).

mozilla/Util.h is fragile with respect to IPDL headers, for include order

mozilla/Util.h includes mozilla/Types.h, which includes jstypes.h, which includes jsotypes.h, which defines certain fixed-width integer types: int8, int16, uint8, uint16, and so on. It happens that ipc/chromium/src/base/basictypes.h also defines these integer types — but incompatibly on Windows only. This header is, alas, included through every IPDL-generated header. In order to safely include any mfbt header in a file which also includes an IPDL-generated header, you must include the IPDL-generated header first. So when landing patches using mozilla/Util.h, watch out for Windows-specific bustage.
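Concretely, the safe ordering looks like this (the IPDL-generated header name below is purely illustrative):

// Include the IPDL-generated header (which pulls in basictypes.h) first...
#include "mozilla/dom/PExample.h"

// ...and only afterward any mfbt header.
#include "mozilla/Util.h"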

Removing the limitations

The limitations on the type of elements in arrays passed to ArrayLength are limitations of C++ as it currently stands. C++11 removes them, and compilers will likely implement support fairly quickly. When that happens we’ll be able to stop caring about the local-or-unnamed problem, not even needing to work around it.

The compile-time-constant limitation is likewise a limitation of C++. It too will go away in C++11 with the constexpr keyword. This modifier specifies that a function, provided constant arguments, computes a constant. The compiler must allow calls to the function that have constant arguments to be used as compile-time constants. Thus when compilers support constexpr, we can add it to the declaration of ArrayLength and begin using ArrayLength in compile-time-constant contexts. This is more low-hanging C++11 fruit that compilers will pick up soon. (Indeed, GCC 4.6 already implements it.)
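For illustration, the constexpr version might look roughly like this (a sketch assuming a C++11 compiler, not the current mfbt declaration):

template<typename T, size_t N>
constexpr size_t
ArrayLength(T (&arr)[N])
{
  return N;
}

int arr[] = { 1, 2, 3, 5 };

// Now usable where a constant expression is required:
typedef int Mirror[ArrayLength(arr)];
static_assert(ArrayLength(arr) == 4, "arr should have four elements");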

Last, we have the Windows-specific #include ordering requirement. We have some ideas for getting around this problem, and we hope to have a solution soon.

A gotcha

Both these methods have a small gotcha: their behavior may not be intuitive when applied to C strings. What does sizeof("foo") evaluate to? If you think of "foo" as a string, you might say 3. But in reality "foo" is much better thought of as an array — and strings are '\0'-terminated. So actually, sizeof("foo") == 4. This was the case with NS_ARRAY_LENGTH, too, so it’s not new behavior. But if you use these methods without considering this, you might end up misusing them.
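A short sketch of the pitfall:

const char str[] = "foo";

// Both count the terminating '\0', so both are 4, not 3:
size_t macroLength = NS_ARRAY_LENGTH(str);
size_t methodLength = mozilla::ArrayLength(str);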

Conclusion

Avoid NS_ARRAY_LENGTH when possible, and use mozilla::ArrayLength or mozilla::ArrayEnd instead. And watch out when using them on strings, because their behavior might not be what you wanted.

(Curious how these methods are defined, and what C++ magic is used? See my next post.)

03.10.11

Washington, D.C., ex post: The decisions in Tapia and Microsoft

(Just started reading? See part 1, part 2, part 3, part 4, part 5, and part 6.)

Back in April I visited Washington, D.C. I visited partly to pick up some bobbleheads at an opportune time (just before Easter, and just before visiting family nearly as far eastward from California) and partly to attend Supreme Court oral arguments while I had the chance. The two cases I saw argued were Tapia v. United States and Microsoft v. i4i Limited Partnership. Shortly after I made some minor predictions for the cases, following up on an introduction of the cases and thoughts from oral argument. Let’s take a look at how the cases turned out, before the October 2011 term arguments start. (At this point on Monday, October 3, there’s probably already a line outside the Supreme Court building for the first arguments of the term.) If you need a refresher on the cases themselves, read my introductions noted above: for space reasons I won’t review much here.

Tapia v. United States

The Court unanimously ruled for Tapia, deciding that a judge may not consider the availability of rehabilitation programs when imposing a sentence of imprisonment or when choosing to lengthen one.

The opinion

Justice Kagan wrote the opinion for a unanimous Court. Tapia had been sentenced to 51 months in prison, seemingly because the sentencing judge thought she should take part in a particular drug treatment program: a program she’d only be eligible for if she were in prison for a longer sentence. Justice Kagan concluded that a sentencing court can’t impose a prison term, and it can’t extend a prison term when it has decided to impose one, to foster a defendant’s rehabilitation.

Justice Kagan first briefly reviewed the history of the Sentencing Reform Act which enacted the relevant statutes (displaying almost professorial affection in noting that, “Aficionados of our sentencing decisions will recognize much of the story line.”). She concluded that the Act was intended to make sentencing more deterministic and consistent by eliminating much discretionary authority during sentencing and prior to release.

Justice Kagan next turned to the text of the relevant laws. She examined the text of 18 U.S.C. §3582(a), which reads:

The court, in determining whether to impose a term of imprisonment, and, if a term of imprisonment is to be imposed, in determining the length of the term…shall consider the factors set forth in section 3553(a) to the extent that they are applicable, recognizing that imprisonment is not an appropriate means of promoting correction and rehabilitation.

Justice Kagan concluded that, “§3582(a) tells courts that they should acknowledge that imprisonment is not suitable for the purpose of promoting rehabilitation.” While Justice Kagan noted that the text could have been more commanding — “thou shalt not”, say — she thought that Congress had nonetheless made itself clear. Justice Kagan also considered the argument that the “recognizing” clause applied only when determining a sentence, not when possibly lengthening it. She rejected this argument, noting that standard rules of grammar dictate that a court considers the relevant factors both when deciding to imprison and when determining the length of imprisonment, and from this concluding that a court must “recognize” the inappropriateness of imprisonment for rehabilitation both when sentencing and when choosing a duration of imprisonment.

Justice Kagan also noted context supporting her interpretation. She led with 28 U.S.C. § 994(k), which I previously noted could shed light on the proper interpretation. She also noted the pointed absence of statutory authority for courts to ensure offenders participated in rehabilitation programs. (Tapia didn’t participate in the relevant rehab program because she wasn’t sent to the prison the judge recommended and because she wasn’t interested in the program.) Finally, she noted that those willing to consider legislative history would find support for her interpretation in the relevant Senate Report.

Justice Kagan next rejected arguments that the “rehabilitation model” which the SRA supplanted referred only to undue belief in “isolation and prison routine” causing the prisoner to reform. She called this reading “too narrow”, citing an essay which characterized the rehabilitation model more broadly. This was Part III, section B, if you’re interested in more detail — I’m not going to attempt to summarize any further than that.

Justice Kagan last noted that the sentencing judge may have improperly considered rehabilitation in determining the length of Tapia’s sentence. Thus the Court left open the possibility that the sentencing judge might not have done so. Finally, the Court sent the case back to the Ninth Circuit for further action.

The concurrence

Neither Justice Sotomayor nor Justice Alito was convinced that the sentencing judge actually did improperly sentence Tapia. Evidently unsatisfied by Justice Kagan’s noting that the sentencing judge only might have acted improperly, Justice Sotomayor wrote a concurrence, joined by Justice Alito, explaining why she thought the sentencing judge had not acted improperly. At the same time, she noted that the sentencing judge’s rationale was less than clear, and that she wasn’t completely certain that he hadn’t acted improperly. Thus both justices nonetheless joined Kagan’s opinion in full.

The outcome

None of this means that Tapia will necessarily get what she presumably wants: a shortened prison sentence. The Court reversed the judgment of the circuit court that upheld her sentence, and it remanded so that court would take a second look, but it didn’t specify the actual outcome. Justice Kagan’s opinion doesn’t conclude that the sentencing judge improperly lengthened Tapia’s sentence for the purpose of rehabilitation: it merely says that the judge may have done so. Justice Sotomayor’s concurrence, joined by Justice Alito, only further emphasizes this point. So on remand, the lower court might conclude that the sentencing judge didn’t improperly lengthen Tapia’s sentence to 51 months. Or it might not. Either way, Tapia’s done well so far: getting the Supreme Court to hear your case, and to rule in your favor, is no small feat.

Even if Tapia convinces the Ninth Circuit that the sentencing judge improperly lengthened her sentence, Tapia might be unsuccessful. Justice Kagan’s opinion concludes with, “[w]e leave it to the Court of Appeals to consider the effect of Tapia’s failure to object to the sentence when imposed.” So Tapia might have missed her chance to win that argument.

Thoughts

I’d gone into this case understanding it to be a nice concise demonstration of statutory interpretation, and I wasn’t mistaken. I wasn’t certain of the correct outcome on first reading the briefs, but §994(k) sealed it for me. It was nice to be vindicated in my thoughts on the case.

It’s easy to overread a case, picking out extremely nitpicky details and magnifying their importance. At the same time, a few details in Kagan’s opinion stuck out at me. First, in analyzing the statutory text, Kagan turned to the 1987 Random House dictionary for definitions. The Sentencing Reform Act was enacted in 1984, so the 1987 dictionary is contemporaneous. Second, Kagan prefaces the paragraph dealing with legislative history, “for those who consider legislative history useful”. The textualists on the bench insist that the proper dictionary to interpret language is one contemporary with its writing, as a 1987 dictionary would usually be for a 1984 law. And Justice Scalia in particular rejects any reference to legislative history: he believes the law is what was passed, not material that never was, such as the aforementioned Senate Report. I think Kagan probably wrote as she did as a gesture of comity to her fellow justices, such that everyone would be happy with the resulting opinion. Maybe that’s an overread, but I would guess it isn’t.

It’s also worth noting that this case was unanimous. Remember, a plurality, and more often a majority, of all Supreme Court decisions are unanimous. The Justices are not as fractious a bunch as you would believe from the cases and decisions that receive significant airplay.

Microsoft v. i4i

The Court unanimously (minus Chief Justice Roberts, who had recused himself apparently because his family owned Microsoft stock) ruled that the standard of proof for patent invalidity was clear and convincing evidence, not the lesser burden of merely a preponderance of the evidence. Further, it concluded that this standard was consistent both for evidence which the Patent and Trademark Office had reviewed, and for evidence which it had not reviewed.

Justice Sotomayor’s opinion for the Court

Justice Sotomayor wrote the opinion for all but Justice Thomas (more on him later). Her opinion relied on Justice Cardozo’s opinion in RCA v. Radio Engineering Laboratories, Inc. Justice Cardozo in 1934 had described the standard of proof for finding invalidity as “clear and cogent evidence”. By the time the language at issue in Microsoft was added, Justice Sotomayor deemed this language to have become part of the common law (roughly: judge-made law, when some dividing line or another must be set for consistency but no laws have specified one). Moreover, she deemed Congress’s language to have used terms of art with well-known meanings to judges, which codified the “clear and convincing” standard. Thus until Congress says otherwise, “clear and convincing evidence” is the standard of proof for declaring a patent invalid.

Justice Sotomayor disagreed with the various narrow views Microsoft took of prior patent decisions, both at the Supreme Court and in lower courts, which would have set different standards of proof for certain forms of evidence. (Curiously, those forms happened to be the ones Microsoft was trying to use.) She said that even “squint[ing]” the Court couldn’t see qualifications of when clear and convincing would apply as the standard.

Justice Sotomayor also disagreed with Microsoft’s alternative argument that a reduced standard of proof applies to evidence not reviewed by the PTO. She thought that prior cases at the Court and elsewhere had consistently at most concluded that evidence reviewed by the PTO could be deemed to have “more weight” than evidence not seen by it.

Finally, Justice Sotomayor addressed the competing policy arguments of both parties: “We find ourselves in no position to judge the comparative force of these policy arguments.” Instead she said the ball was in Congress’s court: if a different standard of proof was to apply, it was up to Congress to enact it.

Justice Breyer’s concurrence

Justice Breyer, joined by Justices Scalia and Alito, wrote separately to emphasize that the clear and convincing standard of proof applied only to questions of fact, not to questions of law. What’s the difference? A jury will decide the facts of a case, but it won’t decide what the nature of the legal issues are in it, or how those issues map onto the facts. Those legal issues are determined by judges, consistent with statutory and common law, at least partly to ensure consistency in application. Quoting from Breyer’s concurrence (citations omitted) will probably illuminate the difference better than I can summarize it (or at least illuminate no worse):

Many claims of invalidity rest, however, not upon factual disputes, but upon how the law applies to facts as given. Do the given facts show that the product was previously “in public use”? Do they show that the invention was “nove[l]” and that it was “non-obvious”? Do they show that the patent applicant described his claims properly? Where the ultimate question of patent validity turns on the correct answer to legal questions—what these subsidiary legal standards mean or how they apply to the facts as given—today’s strict standard of proof has no application.

Justice Thomas’s concurrence in the judgment

Justice Thomas in his opinion agreed with the result, but he didn’t agree with the reasoning used to reach it. Unlike the other justices, he thought that when Congress said a patent should be “presumed valid”, that did not clearly indicate to judges that Congress intended to codify the clear and convincing standard. But since Congress had not specified a standard of proof, Justice Thomas concluded that the common law rule from Justice Cardozo in RCA applied. So in the end Justice Thomas held that the standard of proof of invalidity was clear and convincing evidence, but he reached it in a different manner.

The outcome

On the face of it, Microsoft losing here means that if they want to avoid a $300 million judgment, they’re going to need to try another argument in the lower courts. But since they’ve already gone through once, they’re mostly limited to whatever arguments they’ve already made, and preserved to be argued further. I don’t know how many that is, but at this point I’m guessing it’s pretty small. So Microsoft is likely out $300 million at this point, plus a bunch more for the legal costs of litigating this matter for as long, and as far, as they did.

Thoughts

This was another fun case, although unlike Tapia it was much harder to follow, and it required more knowledge of the surrounding law to really understand it. Policy-wise, I tend to think it might be better if patents were easier to overturn. Thus for that reason I think a lower standard of proof might be a better thing, although it’s hard to be sure such a change wouldn’t have other adverse effects negating that benefit. But as far as the actual law goes, and not what I wish (however uncertainly) might be the case, Microsoft seemed maybe to be stretching a little. (Maybe. It was hard to be sure given the extent of my experience with any of the relevant laws, cases, &c.) Looking at the opinions in retrospect, that intuition seems to have been right.

As far as the opinions go, I find something to like in all of them, to some degree or another. The “clear and cogent” language in the Cardozo opinion did seem fairly clear in establishing a standard of proof, provided one treated Microsoft’s narrow reading of the conditions in which it applied as a stretch. All the justices agreed on that. Breyer’s opinion distinguishing questions of fact and law seemed pretty smart, too: given how complex this area of law seemed just trying to read up for one case, probably nobody would be very happy if questions of law got lumped in with questions of fact for juries. And I liked the way Justice Sotomayor brushed off all of the policy arguments both sides made (arguments so lopsidedly unbalanced and cherry-picked that relying completely on either would be destructive to the ends of the patent system). Ideally courts should merely interpret the law, not make policy or choose amongst policies; the legislative and executive branches should decide policy.

But Justice Thomas’s opinion, lumped in with the parts of Justice Sotomayor’s opinion with which he agreed, seems like the best reading to me, at least based on what I (think I) know. I didn’t really think the words “shall be presumed valid” clearly referred to a particular standard of proof such that they could be a term of art, as all the justices but Thomas would have. At this point, assuming I understand how the law works correctly in the absence of legislative action, reverting to the state of the matter as it was before — that is, Justice Cardozo’s position — seems the right move to me.

Again, that’s just how I’m reading the law. It’s not really what I want in the patent system, which I think could use a good number of changes to adapt to the modern world.

It’s also worth noting — again — that this case, too, was unanimous. I was a little surprised that both cases turned out that way, as my half-informed readings had made me think neither case was quite that straightforward. Then again, the Supreme Court never really gets easy cases, yet even still they’re frequently unanimous. So I shouldn’t be too surprised even in these particular cases.

Conclusion

If you haven’t done it already, I’d recommend taking a look at the actual opinions in these cases. Law has this stigma of being inscrutable. In various areas of law, it doubtless is just that. But in areas not densely technical, legal opinions (particularly higher-court opinions) can be surprisingly readable (once you condition yourself to skip over all the inline citations). Neither of these cases was so dense that an intelligent reader couldn’t follow it; indeed, I’d say they were generally fairly readable. Give it a shot: you might be surprised what you can learn reading the occasional legal opinion. And when a news story breaks, you’ll get a much less colored view of it if you read it from the source, rather than merely read coverage of it.

27.09.11

ಠ_ಠ

Jeff @ 10:50

This is an utterly content-free rant in which I express my anger at recent Internet Explorer preview releases requiring installation of what is effectively an entirely new operating system. I would like to know how new IE behaves on various testcases. But I don’t want to potentially hose my primary functioning Windows system to do it, especially if I then lose access to a working IE9 installation. And I am really not interested in wasting a bunch of time to spin up a virtual machine just so I can waste a bunch more time to “upgrade” it to test a new version of IE.

Here’s a novel concept: what about shipping a browser that doesn’t have to insert itself deep into operating system guts? Maybe you could even install and uninstall it distinct from the OS. But that’s crazytalk, nobody would ever do that, right?

So, yeah, whatever the latest IE10 does, meh. Someone who cares can be the sacrificial lamb and find that out, if it actually matters.

(“Content-free rant”, indeed. Future posts will return to substantive form.)

07.09.11

Followup to recent .mozconfig detection changes: $topsrcdir/mozconfig and $topsrcdir/.mozconfig now both work

Two weeks ago changes landed in Mozilla to reduce the locations searched for a mozconfig to just $MOZCONFIG and $topsrcdir/.mozconfig. Previously a bunch of other weird places were searched, like $topsrcdir/mozconfig.sh and $topsrcdir/myconfig.sh and even some files in $HOME (!). This change made specifying build options more explicit, in line with build system policy to be “as explicit as possible”. Reducing complexity by killing off a bunch of truly odd configuration option locations was good. But I thought it went too far.

The changes also removed $topsrcdir/mozconfig. This location wasn’t nearly as bizarre as the others, and it was more explicit than $topsrcdir/.mozconfig: it appeared in directory listings and folder views. I wasn’t the only person who thought $topsrcdir/mozconfig should stay: the bug which reduced the mozconfig guesswork included rumblings from others wanting to keep support for $topsrcdir/mozconfig, and the blog post announcing the change included yet more.

I filed a bug to re-support $topsrcdir/mozconfig, and the patch has landed. $topsrcdir/.mozconfig and $topsrcdir/mozconfig (either but not both) now work again: use whichever name you like.
