03.11.11

How I organize my Mozilla trees

Jeff @ 10:17

Using Mozilla trees more smartly

A month ago I got a new laptop, requiring me to migrate my Mozilla trees, patches, and related work from old laptop to new. My previous setup was the simplest, stupidest thing that could work: individual clones of different trees, no sharing among those trees, sometimes multiple clones of the same tree for substantial, independent patchwork I didn’t want to explicitly order. Others have tried smarter tricks in the past, and I decided to upgrade my setup.

A new setup

The new setup is essentially this:

  • I have one local clone of mozilla-inbound in ~/moz/.clean-base which I never develop against or build against, and never modify except by updating it.
  • Whenever I want a mozilla-inbound tree, I clone ~/moz/.clean-base. I change the default-push entry in the new clone to point to the original mozilla-inbound, as sketched just after this list. (I don’t change the default entry; pulling is entirely local.)
  • If I want to push a patch, I pull and update ~/moz/.clean-base. Then I pull and update the local clone that has the patch I want to push. Then I finish my patch and push it. Because default-push points to the remote mozilla-inbound, hg push as usual does exactly what I want.
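For concreteness, here’s a sketch of creating a new working tree. The tree name and paths are illustrative, and the push URL is my recollection of where mozilla-inbound lived; the important part is the default-push edit.

hg clone ~/moz/.clean-base ~/moz/new-tree

# In ~/moz/new-tree/.hg/hgrc, leave default alone (it points at the local
# clean tree) and add a default-push naming the real repository:
[paths]
default = /home/jwalden/moz/.clean-base
default-push = ssh://hg.mozilla.org/integration/mozilla-inbound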

Advantages

This setup has many advantages:

  • Getting a new mozilla-inbound tree is fast. I never clone the remote mozilla-inbound tree, because I have it locally. And the local clone isn’t modified by a patch queue, so I never have to checkpoint work in progress, pop patches before cloning, then reapply them afterward.
  • Updating a working mozilla-inbound tree is fast. Pulling and updating are completely local with no network delay.
  • I only need to update from the remote mozilla-inbound once for new changes to be available for all local trees. Instead of separately updating my SpiderMonkey shell tree, updating my browser tree, and updating any other trees I’m using, at substantial cost in time, one pull in ~/moz/.clean-base benefits all trees.
  • My working trees substantially share storage with ~/moz/.clean-base.

Pitfalls, and workarounds

Of course any setup has downsides. I’ve noticed these so far:

  • Updating a working tree is a two-step process: first updating ~/moz/.clean-base, then updating the actual tree.
  • I’ll almost always lose a push race to mozilla-inbound. Even if my local working tree is perfectly up-to-date with ~/moz/.clean-base, the clean tree is generally not up-to-date with the remote tree, particularly as rebasing my patches is now a two-step process. That produces a larger window of time for others to push things after I’ve updated my clean tree but before I’ve rebased my working tree.
  • I have to remember to edit the default-push in new trees, lest I accidentally mutate ~/moz/.clean-base.

Some of these problems are indeed annoying, but I’ve found substantial workarounds for them such that I no longer consider them limitations.

Automate updating ~/moz/.clean-base

Updating is only a two-step process if I update ~/moz/.clean-base manually, but it’s easy to automate this with a cronjob. With frequent updates ~/moz/.clean-base is all but identical to the canonical mozilla-inbound. And by making updates automatic, I also lose push races much less frequently (particularly if I rebase and push right after a regular update).

I’ve added this line to my crontab using crontab -e to update ~/moz/.clean-base every twenty minutes from 07:00-01:00 every day but Sunday (this being when I might want an up-to-date tree):

*/20 00-01,07-23 * * 1-6 /home/jwalden/moz/inflight/pull-updated-inbound >/dev/null 2>&1

I perform the update in a script, redirecting all output to /dev/null so that cron won’t mail me the output after every update. It seems better to have a simpler crontab entry, so I put the actual commands in /home/jwalden/moz/inflight/pull-updated-inbound:

#!/bin/bash

# Pull from the default path (the remote mozilla-inbound) and update
# the working directory to the new tip (-u).
cd ~/moz/.clean-base/
hg pull -u

With these changes in place, updating a working tree costs only the time required to rebase it: network delay doesn’t exist. And the intermediate tree doesn’t intrude on my normal workflow.

Add a hook to ~/moz/.clean-base to prevent inadvertent pushes

My setup depends on ~/moz/.clean-base being clean. Local changes or commits will break automatic updates and might corrupt my working trees. I want ~/moz/.clean-base to only change through pulls.

I can enforce this using a Mercurial prechangegroup hook. This hook, run when a repository is about to accept a group of changes, can gate changes before they’re added to a tree. I use such a hook to reject any changes that don’t arrive via a pull, by adding these lines to ~/moz/.clean-base/.hg/hgrc:

# Prevent pushing into local mozilla-inbound clone: only push after changing a clone's default-push.
[hooks]
prechangegroup.prevent_pushes = python:prevent_pushes.prevent_pushes.hook

This invokes the hook function in prevent_pushes.py:

#!/usr/bin/python

def hook(ui, repo, **kwargs):
  source = kwargs['source']
  if source != 'pull':
    print "Changes pushed into non-writable repository!  Only pulls permitted."
    return 1
  print "Updating pristine mozilla-inbound copy..."
  return 0

On my Fedora-based system, I place this file in /usr/lib/python2.7/site-packages/prevent_pushes/ beside an empty __init__.py. Mercurial will find it and invoke the hook whenever ~/moz/.clean-base receives changesets.
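(If installing a module into site-packages is inconvenient, Mercurial can also load a hook function directly from a file path, so a line like the following in ~/moz/.clean-base/.hg/hgrc should work as well; the path here is illustrative.)

[hooks]
prechangegroup.prevent_pushes = python:/home/jwalden/moz/prevent_pushes.py:hook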

The only way ~/moz/.clean-base can be modified accidentally is by pushing from a new clone whose default-push wasn’t changed, so the need to prevent changes to it might seem small. Yet so far this hook has caught exactly that mistake more than once when I’ve forgotten to set a default-push, and I expect it will again.

Conclusion

There are doubtless many good ways to organize Mozilla work. I find this system works well for me, and I hope this description of it provides ideas for others to incorporate into their own setups.

21.10.11

Properly resizing vector image backgrounds

Jeff @ 20:43

Resizing backgrounds in CSS

The CSS background-image property allows web developers to add backgrounds to parts of pages, but only at their original sizes. CSS 3 added background-size to resize background images, and I implemented it in Gecko. But I couldn’t implement it for SVG backgrounds, because Gecko didn’t support them. When support arrived, nobody updated the background sizing algorithm for vector images’ increased flexibility. I’d hoped to prevent this omission by adding “canary” tests for background-size-modified SVG backgrounds. But I miswrote the tests, so background-size wasn’t updated for SVG-in-CSS.

Since I’d inadvertently allowed this mess to happen, I felt somewhat obligated to fix it. Starting with Firefox 8, Firefox will properly render backgrounds which are vector images, at the appropriate size required by the corresponding background-size. To the best of my knowledge Gecko is the first browser engine to properly render vector images in CSS backgrounds.

How do images scale in backgrounds now?

It’s complicated: so complicated that to have any real confidence in complete correctness of a fix, I generated tests for the Cartesian product of several different variables, then manually assigned an expected rendering to those 200-odd tests. It was the only way to be sure I’d implemented every last corner correctly. (In case you’re wondering, these tests have been submitted to the CSS test suite where they await review. That should be lots of fun, I’m sure!)

Still, the algorithm can mostly (but not entirely!) be summarized by following a short list of rules:

  1. If background-size specifies a fixed dimension (percentages and relative units are fixed by context), that wins.
  2. If the image has an intrinsic ratio (its width-height ratio is constant — 16:9, 4:3, 2.39:1, 1:1, &c.), the rendered size preserves that ratio.
  3. If the image specifies a size and isn’t modified by contain or cover, that size wins.
  4. With no other information, the rendered size is the corresponding size of the background area.

Note that sizing only cares about the image’s dimensions and proportions, or lack thereof. A vector-based image with fixed dimensions will be treated identically to a pixel-based image of the same size.

Subsequent examples use the following highly-artistic images:

  • no-dimensions-or-ratio.svg (a corner-to-corner gradient; no width, height, or intrinsic ratio). This image is dimensionless and proportionless: think of it like a gradient desktop background that you could use on a 1024×768 screen as readily as on a 1920×1080 screen.
  • 100px-wide-no-height-or-ratio.svg (a vertical gradient; 100 pixels wide, indeterminate height, no intrinsic ratio). This image is 100 pixels wide but has no height or intrinsic ratio. Imagine it as a thin strip of wallpaper that could be stretched the entire height of a webpage.
  • 100px-height-3x4-ratio.svg (a vertical gradient; 100 pixels high, indeterminate width, 3:4 intrinsic ratio). This image is 100 pixels high but lacks a width, and it has an intrinsic width:height ratio of 3:4. The ratio ensures that its width:height ratio is always 3:4, unless it’s deliberately scaled to a disproportionate size. (One dimension and an intrinsic ratio is really no different from two dimensions, but it’s still useful as an example.)
  • no-dimensions-1x1-ratio.svg (a horizontal gradient; no width or height, 1:1 intrinsic ratio). This image has no width or height, but it has an intrinsic ratio of 1:1. Think of it as a program icon: always square, just as usable at 32×32 or 128×128 or 512×512.

In the examples below, all enclosing rectangles are 300 pixels wide and 200 pixels tall. Also, all backgrounds have background-repeat: no-repeat for easier understanding. Note that the demos below are all the expected rendering, not the actual rendering in your browser. See how your browser actually does on this demo page, and download an Aurora or Beta build (or even a nightly) to see the demo rendered correctly.

Now consider the rules while moving through these questions to address all possible background-size values.

Does the background-size specify fixed lengths for both dimensions?

Per rule 1, fixed lengths always win, so we always use them.

background: url(no-dimensions-or-ratio.svg);
background-size: 125px 175px;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: 250px 150px;

background: url(100px-height-3x4-ratio.svg);
background-size: 275px 125px;

background: url(no-dimensions-1x1-ratio.svg);
background-size: 250px 100px;

Is the background-size contain or cover?

cover makes the picture as small as possible while still covering the background area. contain makes the picture as large as possible while still fitting in the background area. For an image with an intrinsic ratio, exactly one size satisfies the cover or contain criterion alone. But for a vector image lacking an intrinsic ratio, cover or contain alone is insufficient, so the smallest-or-largest constraint chooses the resulting rendered size.

Rule 1 is irrelevant, so try rule 2: preserve any intrinsic ratio (while respecting contain/cover). Preserving a 3:4 intrinsic ratio for a 300×200 box with contain, for example, means drawing a 150×200 background.

background: url(100px-height-3x4-ratio.svg);
background-size: contain;

background: url(100px-height-3x4-ratio.svg);
background-size: cover;

background: url(no-dimensions-1x1-ratio.svg);
background-size: contain;

background: url(no-dimensions-1x1-ratio.svg);
background-size: cover;

Rule 3 is irrelevant, so if there’s no intrinsic ratio, then per rule 4, the background image covers the entire background area, satisfying the largest-or-smallest constraint.

background: url(no-dimensions-or-ratio.svg);
background-size: contain;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: contain;

Is the background-size auto or auto auto?

Per rule 2, rendering must preserve any intrinsic ratio.

If we have an intrinsic ratio and one dimension (or both), the ratio and dimension determine the rendered size, and we’re done. If we have an intrinsic ratio but no dimensions, then per rule 4 we use the background area, but see rule 2! To preserve the intrinsic ratio, the image is rendered as if for contain.

background: url(100px-height-3x4-ratio.svg);
background-size: auto auto;

background: url(no-dimensions-1x1-ratio.svg);
background-size: auto auto;

If we have no intrinsic ratio, then per rule 3, we use the image’s dimension if available, and per rule 4, the corresponding background area dimension if not.

background: url(no-dimensions-or-ratio.svg);
background-size: auto auto;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: auto auto;

The background-size is one auto and one length.

Per rule 1 we use the specified dimension, so we have one dimension to determine.

If we have an intrinsic ratio, rule 2 plus the specified dimension determines rendering size.

background: url(100px-height-3x4-ratio.svg);
background-size: 150px auto;

background: url(no-dimensions-1x1-ratio.svg);
background-size: 150px auto;

Otherwise, per rule 3 we consult the image, using the image’s dimension if it has it. If it doesn’t, per rule 4, we use the background area’s dimension. Either way, we have our rendered size.

background: url(no-dimensions-or-ratio.svg);
background-size: auto 150px;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: 200px auto;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: auto 125px;

Whee, that’s a mouthful!

Yes. Yes it is. (Two hundred tests, remember?) But it’s shiny!

Anything else to know?

In rewriting the sizing algorithm, I was confronted with the problem of how to resize CSS gradients (distinct from gradients embedded in SVG images), which CSS treats as an image subtype.

Our previous sizing algorithm happened to treat gradients as if they were a special image type which magically inherited the intrinsic ratio of their context. Thus if the background were resized with a single length, the gradient would paint over a proportional part of the background area. Other resizing would simply expand them to cover the background area.

CSS 3 Image Values specifies the nature of the images represented by gradients: they have no dimensions and no intrinsic ratio. Firefox 8 implements these semantics, which are a change from gradient rendering semantics in previous releases. This will affect rendering only in the case where background-size is auto <length> or <length> auto (and equivalently, simply <length>). Thus if you wish to resize a gradient background, you should not use a length in concert with auto to do so, because in that case rendering will vary across browsers.

Conclusion

SVG background images have now become more powerful than you can possibly imagine. If the SVG has fixed dimensions, it’ll work like any raster image. But SVG goes beyond this: if an SVG image has only partial fixed dimensions, the final rendering will respect that partial dimension information. Proportioned images will remain proportioned (unless you deliberately force them out of proportion); they won’t be disproportionately stretched just because the background area has particular dimensions. Images with a dimension but no intrinsic ratio will have that dimension used when the background is auto-sized, rather than simply ignored. These aren’t just bugfixes: they’re new functionality for the web developer’s toolbox.

Now go out and make better shiny with this than I have. 🙂

20.10.11

Implementing mozilla::ArrayLength and mozilla::ArrayEnd, and some followup work

Jeff @ 16:03

In my last post I announced the addition of mozilla::ArrayLength and mozilla::ArrayEnd to the Mozilla Framework Based on Templates, and I noted I was leaving a description of how these methods were implemented to a followup post. This is that post.

The C++ template trick used to implement mozilla::ArrayLength

The implementations of these methods are surprisingly simple:

template<typename T, size_t N>
size_t
ArrayLength(T (&arr)[N])
{
  return N;
}

template<typename T, size_t N>
T*
ArrayEnd(T (&arr)[N])
{
  return arr + ArrayLength(arr);
}

The trick is this: a template method can be parameterized on an array’s compile-time length. Here we templatize both methods on two things: the type of the array’s elements, so that each method is polymorphic, and the number of elements in the array. Inside the method we can then refer to that length, a constant known at compile time, and simply return it to implement the desired semantics.

Templatizing on the length of an array may not seem too unusual. The part that may be a little unfamiliar is how the array is described as a parameter of the template method: T (&arr)[N]. This declares the argument to be a reference to an array of N elements of type T. Its being a reference is important: we don’t actually care about the array contents at all, and we don’t want to copy them to call the method. All we care about is its type, which we can capture without further cost using a reference.
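A quick example of both methods in action (a sketch, not code from the tree):

#include <stddef.h>

#include "mozilla/Util.h"

int nums[] = { 1, 2, 3, 5 };

size_t
sumAll()
{
  size_t sum = 0;
  // ArrayLength(nums) is 4, deduced from the array's type, so
  // ArrayEnd(nums) is nums + 4.
  for (int* p = nums; p < mozilla::ArrayEnd(nums); p++)
    sum += *p;
  return sum;
}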

This technique is uncommon, but it’s not new to Mozilla. Perhaps you’ve wondered at some point why Mozilla’s string classes have both EqualsLiteral and EqualsASCII: the former for use only when comparing to “an actual literal string”, the latter for use when comparing to any const char*. You can probably guess why this interface occurs now: EqualsLiteral is a method templatized on the length of the actual literal string passed to it. It can be more efficient than EqualsASCII because it knows the length of the compared string.
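Mozilla’s actual string code isn’t reproduced here, but a sketch of the idea behind EqualsLiteral might look something like this (the class and its members are stand-ins, not the real interface):

#include <stddef.h>
#include <string.h>

class SketchString
{
  const char* mData;
  size_t mLength;

public:
  SketchString(const char* data, size_t length) : mData(data), mLength(length) {}

  // Templatizing on the literal's length lets the compiler supply it.
  // N counts the terminating '\0', so the literal holds N - 1 characters.
  template<size_t N>
  bool EqualsLiteral(const char (&lit)[N]) const
  {
    return mLength == N - 1 && 0 == memcmp(mData, lit, N - 1);
  }
};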

Using this trick in other code

I did most of the work to convert NS_ARRAY_LENGTH to ArrayLength with a script. But I still had to look over the results of the script to make sure things were sane before proceeding with it. In doing so, I noticed a decent number of places where an array was created, then it and its length were being passed as arguments to another method. For example:

void nsHtml5Atoms::AddRefAtoms()
{
  NS_RegisterStaticAtoms(Html5Atoms_info, ArrayLength(Html5Atoms_info));
}

Using ArrayLength here is safer than hard-coding a length. But safer still would be to not require callers to pass an array and a length separately — rather to pass them together. We can do this by pushing the template trick down another level, into NS_RegisterStaticAtoms (or at least into an internal method used everywhere, if the external API must be preserved for some reason):

static nsresult
RegisterStaticAtoms(const nsStaticAtom* aAtoms, PRUint32 aAtomCount)
{
  // ...old implementation...
}

template<size_t N>
nsresult
NS_RegisterStaticAtoms(const nsStaticAtom (&aAtoms)[N])
{
  return RegisterStaticAtoms(aAtoms, N);
}

The pointer-and-length method generally still needs to stick around somewhere, as it does in this rewrite. It just wouldn’t be the interface user code sees, or at least it wouldn’t be the primary interface (for example, it might be protected or similar).

NS_RegisterStaticAtoms was just one such method which could use improvement. In a quick skim I also see:

…as potential spots that could be improved — at least for some callers — with some additional templatization on array length.

I didn’t look super-closely at these, so I might have missed some. Or I might have been over-generous in what could be rewritten to templatize on length, seeing only the one or two places that pass fixed lengths and missing the majority of cases that don’t. But there’s definitely a lot of cleaning that could be done here.

A call for help

Passing arrays and lengths separately is dangerous if you don’t know what you’re doing. The trick used here eliminates that separation in certain cases. The more we use this pattern, the more we fundamentally reduce the danger of separating data from its length.

I don’t have time to do the above work myself. (I barely had time to do the ArrayLength work, really. I only did it because I’d sort of started the ball rolling in a separate bug, so I felt some obligation to make sure it got done.) And it’s not particularly hard work, nor does it require especial knowledge of the relevant code. It’s a better use of my time to work on JavaScript engine code, or on other code I know particularly well, than to do this work. But for someone interested in getting started working on Gecko C++ code, it would be a good first project. I’ve filed bug 696242 for this task; if anyone’s interested in a good first bug for getting into Mozilla C++ coding, feel free to start with that one. If you have questions about what to do, in any way, feel free to ask them there.

On the other hand, if you have any questions about the technique in question, or the way it’s used in Mozilla, feel free to ask them here. But if you want to contribute to fixing the issues I’ve noted, let’s keep them to the bug, if possible.

Computing the length or end of a compile-time constant-length array: {JS,NS}_ARRAY_{END,LENGTH} are out, mozilla::Array{Length,End} are in

Jeff @ 16:03

Determining the length of a fixed-length array

Suppose in C++ you want to perform variations of some simple task several times. One way to do this is to loop over the variations in an array to perform each task:

#include <stdlib.h>

/* Defines the necessary envvars to 1. */
int setVariables()
{
  static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" };
  for (int i = 0; i < 6; i++)
    if (0 > setenv(names[i], "1", 1)) return -1;
  return 0;
}

Manually looping by index is prone to error. One particular issue with loop-by-index is that you must correctly compute the extent of iteration. Hard-coding a constant works, but what if the array changes? The constant must also change, which isn’t obvious to someone not looking carefully at all uses of the array.

The traditional way to get an array’s length is with a macro using sizeof:

#define NS_ARRAY_LENGTH(arr)  (sizeof(arr) / sizeof((arr)[0]))

This works but has problems. First, it’s a macro, which means it has the usual macro issues. For example, macros are untyped, so you can pass in “wrong” arguments to them and may not get type errors. That leads to the second problem: NS_ARRAY_LENGTH cheerfully accepts non-array pointers and returns a completely bogus length.

  const char* str = "long enough string";
  char* copy = (char*) malloc(NS_ARRAY_LENGTH(str)); // usually 4 or 8
  strcpy(copy, str); // buffer overflow!

Introducing mozilla::ArrayLength and mozilla::ArrayEnd

Seeing an opportunity to both kill a moderately obscure macro and to further improve the capabilities of the Mozilla Framework Based on Templates, I took it. Now, rather than use these macros, you can #include "mozilla/Util.h" and use mozilla::ArrayLength to compute the length of a compile-time array. You can also use mozilla::ArrayEnd to compute arr + ArrayLength(arr). Both these methods (not macros!) use C++ template magic to only accept arrays with compile-time-fixed length, failing to compile if something else is provided.
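To illustrate, the environment-variable example from the start of this post could now be written like so (a sketch):

#include <stdlib.h>

#include "mozilla/Util.h"

/* Defines the necessary envvars to 1. */
int setVariables()
{
  static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" };
  // The iteration extent now tracks the array automatically.
  for (size_t i = 0; i < mozilla::ArrayLength(names); i++)
    if (0 > setenv(names[i], "1", 1)) return -1;
  return 0;
}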

Limitations

Unfortunately, ISO C++ limitations make it impossible to write a method completely replacing the macro. So the macros still exist, and in rare cases they remain the correct answer.

The array can’t depend on an unnamed type (class, struct, union) or a local type

According to C++ §14.3.1 paragraph 2, “A local type [or an] unnamed type…shall not be used as a template-argument for a template type-parameter.” C++ makes this a compile error:

size_t numsLength()
{
  // unnamed struct, also locally defined
  static const struct { int i; } nums[] = { { 1 }, { 2 }, { 3 } };

  return mozilla::ArrayLength(nums);
}

It’s easy to avoid both limitations: move local types to global code, and name them.

// now defined globally, and with a name
struct Number { int i; };
size_t numsLength()
{
  static const Number nums[] = { 1, 2, 3 };
  return mozilla::ArrayLength(nums);
}

mozilla::ArrayLength(arr) isn’t a constant, NS_ARRAY_LENGTH(arr) is

Some contexts in C++ require a compile-time constant expression: template parameters, (in C++ but not C99) for local array lengths, for array lengths in typedefs, for the value of enum initializers, for static/compile-time assertions (which are usually bootstrapped off these other locations), and perhaps others. A function call, even one evaluating to a compile-time constant, is not a constant expression.

One other context doesn’t require a constant but strongly wants one: the values of static variables, inside classes and methods and out. If the value is a function call, even one that computes a constant, the compiler might emit a static initializer to compute it at startup, slowing startup.

The long and short of it is that everything in the code below is a bad idea:

int arr[] = { 1, 2, 3, 5 };
static size_t len = ArrayLength(arr); // not an error, but don't do it
void t(JSContext* cx)
{
  js::Vector<int, ArrayLength(arr)> v(cx); // non-constant template parameter
  int local[ArrayLength(arr)]; // variable-length arrays not okay in C++
  typedef int Mirror[ArrayLength(arr)]; // non-constant array length
  enum { L = ArrayLength(arr) }; // non-constant initializer
  PR_STATIC_ASSERT(4 == ArrayLength(arr)); // special case of one of the others
}

In these situations you should continue to use NS_ARRAY_LENGTH (or in SpiderMonkey, JS_ARRAY_LENGTH).

mozilla/Util.h is fragile with respect to IPDL headers, for include order

mozilla/Util.h includes mozilla/Types.h, which includes jstypes.h, which includes jsotypes.h, which defines certain fixed-width integer types: int8, int16, uint8, uint16, and so on. It happens that ipc/chromium/src/base/basictypes.h also defines these integer types — but incompatibly on Windows only. This header is, alas, included through every IPDL-generated header. In order to safely include any mfbt header in a file which also includes an IPDL-generated header, you must include the IPDL-generated header first. So when landing patches using mozilla/Util.h, watch out for Windows-specific bustage.
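In other words, a file needing both kinds of header should order its includes like so (the IPDL-generated header named here is only illustrative):

// The IPDL-generated header must come first...
#include "mozilla/dom/PContentParent.h"
// ...and only then any mfbt header.
#include "mozilla/Util.h"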

Removing the limitations

The limitations on the type of elements in arrays passed to ArrayLength are limitations of C++ as it currently stands. C++11 removes them, and compilers will likely implement support fairly quickly. When that happens we’ll be able to stop caring about the local-or-unnamed problem, not even needing to work around it.

The compile-time-constant limitation is likewise a limitation of C++. It too will go away in C++11 with the constexpr keyword. This modifier specifies that a function provided constant arguments computes a constant. The compiler must allow calls to the function that have constant arguments to be used as compile-time constants. Thus when compilers support constexpr, we can add it to the declaration of ArrayLength and begin using ArrayLength in compile-time-constant contexts. This is more low-hanging C++11 fruit that compilers will pick up soon. (Indeed, GCC 4.6 already implements it.)
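For instance, the eventual C++11 definition might differ only by the keyword, after which compile-time-constant uses would compile (a sketch, not the current mfbt code):

template<typename T, size_t N>
constexpr size_t
ArrayLength(T (&arr)[N])
{
  return N;
}

int arr[] = { 1, 2, 3, 5 };
PR_STATIC_ASSERT(4 == ArrayLength(arr)); // would then be permissible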

Last, we have the Windows-specific #include ordering requirement. We have some ideas for getting around this problem, and we hope to have a solution soon.

A gotcha

Both these methods have a small gotcha: their behavior may not be intuitive when applied to C strings. What does sizeof("foo") evaluate to? If you think of "foo" as a string, you might say 3. But in reality "foo" is much better thought of as an array — and strings are '\0'-terminated. So actually, sizeof("foo") == 4. This was the case with NS_ARRAY_LENGTH, too, so it’s not new behavior. But if you use these methods without considering this, you might end up misusing them.
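A short demonstration of the gotcha (a sketch):

// "foo" is really a const char[4]: the terminating '\0' counts.
size_t n = mozilla::ArrayLength("foo"); // 4, not 3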

Conclusion

Avoid NS_ARRAY_LENGTH when possible, and use mozilla::ArrayLength or mozilla::ArrayEnd instead. And watch out when using them on strings, because their behavior might not be what you wanted.

(Curious how these methods are defined, and what C++ magic is used? See my next post.)

07.09.11

Followup to recent .mozconfig detection changes: $topsrcdir/mozconfig and $topsrcdir/.mozconfig now both work

Two weeks ago changes landed in Mozilla to reduce the locations searched for a mozconfig to just $MOZCONFIG and $topsrcdir/.mozconfig. Previously a bunch of other weird places were searched, like $topsrcdir/mozconfig.sh and $topsrcdir/myconfig.sh and even some files in $HOME (!). This change made specifying build options more explicit, in line with build system policy to be “as explicit as possible”. Reducing complexity by killing off a bunch of truly odd configuration option locations was good. But I thought it went too far.

The changes also removed $topsrcdir/mozconfig. This location wasn’t nearly as bizarre as the others, and it was more explicit than $topsrcdir/.mozconfig: it appeared in directory listings and folder views. I wasn’t the only person who thought $topsrcdir/mozconfig should stay: the bug which reduced the mozconfig guesswork included rumblings from others wanting to keep support for $topsrcdir/mozconfig, and the blog post announcing the change included yet more.

I filed a bug to re-support $topsrcdir/mozconfig, and the patch has landed. $topsrcdir/.mozconfig and $topsrcdir/mozconfig (either but not both) now work again: use whichever name you like.
