21.10.11

Properly resizing vector image backgrounds

Jeff @ 20:43

Resizing backgrounds in CSS

The CSS background-image property allows web developers to add backgrounds to parts of pages, but only at their original sizes. CSS 3 added background-size to resize background images, and I implemented it in Gecko. But I couldn’t implement it for SVG backgrounds, because Gecko didn’t support them. When support arrived, nobody updated the background sizing algorithm for vector images’ increased flexibility. I’d hoped to prevent this omission by adding “canary” tests for background-size-modified SVG backgrounds. But I miswrote the tests, so background-size wasn’t updated for SVG-in-CSS.

Since I’d inadvertently allowed this mess to happen, I felt somewhat obligated to fix it. Starting with Firefox 8, Firefox will properly render backgrounds which are vector images, at the appropriate size required by the corresponding background-size. To the best of my knowledge Gecko is the first browser engine to properly render vector images in CSS backgrounds.

How do images scale in backgrounds now?

It’s complicated: so complicated that to have any real confidence in complete correctness of a fix, I generated tests for the Cartesian product of several different variables, then manually assigned an expected rendering to those 200-odd tests. It was the only way to be sure I’d implemented every last corner correctly. (In case you’re wondering, these tests have been submitted to the CSS test suite where they await review. That should be lots of fun, I’m sure!)

Still, the algorithm can mostly (but not entirely!) be summarized by following a short list of rules:

  1. If background-size specifies a fixed dimension (percentages and relative units are fixed by context), that wins.
  2. If the image has an intrinsic ratio (its width-height ratio is constant — 16:9, 4:3, 2.39:1, 1:1, &c.), the rendered size preserves that ratio.
  3. If the image specifies a size and isn’t modified by contain or cover, that size wins.
  4. With no other information, the rendered size is the corresponding size of the background area.

Note that sizing only cares about the image’s dimensions and proportions, or lack thereof. A vector-based image with fixed dimensions will be treated identically to a pixel-based image of the same size.

Subsequent examples use the following highly-artistic images:

no-dimensions-or-ratio.svg: a corner-to-corner gradient with no width, height, or intrinsic ratio. This image is dimensionless and proportionless; think of it like a gradient desktop background that you could use on a 1024×768 screen as readily as on a 1920×1080 screen.

100px-wide-no-height-or-ratio.svg: a vertical gradient, 100 pixels wide, with no height or intrinsic ratio. Imagine it as a thin strip of wallpaper that could be stretched the entire height of a webpage.

100px-height-3x4-ratio.svg: a vertical gradient, 100 pixels high, with no specified width and a width:height intrinsic ratio of 3:4. The ratio ensures that its proportions stay 3:4 unless it’s deliberately scaled to a disproportionate size. (One dimension and an intrinsic ratio is really no different from two dimensions, but it’s still useful as an example.)

no-dimensions-1x1-ratio.svg: a horizontal gradient with no width or height but an intrinsic ratio of 1:1. Think of it as a program icon: always square, just as usable at 32×32 or 128×128 or 512×512.

In the examples below, all enclosing rectangles are 300 pixels wide and 200 pixels tall. Also, all backgrounds have background-repeat: no-repeat for easier understanding. Note that the demos below are all the expected rendering, not the actual rendering in your browser. See how your browser actually does on this demo page, and download an Aurora or Beta build (or even a nightly) to see the demo rendered correctly.

Now consider the rules while moving through these questions to address all possible background-size values.

Does the background-size specify fixed lengths for both dimensions?

Per rule 1, fixed lengths always win, so we always use them.

background: url(no-dimensions-or-ratio.svg);
background-size: 125px 175px;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: 250px 150px;

background: url(100px-height-3x4-ratio.svg);
background-size: 275px 125px;

background: url(no-dimensions-1x1-ratio.svg);
background-size: 250px 100px;

Is the background-size contain or cover?

cover makes the picture as small as possible while still covering the entire background area. contain makes the picture as large as possible while still fitting within the background area. For an image with an intrinsic ratio, exactly one size satisfies the cover (or contain) criterion by itself. But for a vector image lacking an intrinsic ratio, cover or contain alone doesn’t determine a size, so the as-large-as-possible or as-small-as-possible constraint chooses the resulting rendered size.

Rule 1 is irrelevant, so try rule 2: preserve any intrinsic ratio (while respecting contain/cover). Preserving a 3:4 intrinsic ratio for a 300×200 box with contain, for example, means drawing a 150×200 background.

background: url(100px-height-3x4-ratio.svg);
background-size: contain;

background: url(100px-height-3x4-ratio.svg);
background-size: cover;

background: url(no-dimensions-1x1-ratio.svg);
background-size: contain;

background: url(no-dimensions-1x1-ratio.svg);
background-size: cover;

Rule 3 is irrelevant here (contain and cover modify any size the image specifies), so if the image also has no intrinsic ratio, then per rule 4 the background image covers the entire background area, which satisfies the largest-or-smallest constraint either way.

background: url(no-dimensions-or-ratio.svg);
background-size: contain;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: contain;

Is the background-size auto or auto auto?

Per rule 2, rendering must preserve any intrinsic ratio.

If the image has an intrinsic ratio and specifies a dimension (or both), that information fixes the rendered size, and we’re done. If it has an intrinsic ratio but no dimensions, then per rule 4 we use the background area — but see rule 2! To preserve the intrinsic ratio, the image is rendered as if for contain.

background: url(100px-height-3x4-ratio.svg);
background-size: auto auto;

background: url(no-dimensions-1x1-ratio.svg);
background-size: auto auto;

If we have no intrinsic ratio, then per rule 3, we use the image’s dimension if available, and per rule 4, the corresponding background area dimension if not.

background: url(no-dimensions-or-ratio.svg);
background-size: auto auto;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: auto auto;

The background-size is one auto and one length.

Per rule 1 we use the specified dimension, so we have one dimension to determine.

If we have an intrinsic ratio, rule 2 plus the specified dimension determines rendering size.

background: url(100px-height-3x4-ratio.svg);
background-size: 150px auto;

background: url(no-dimensions-1x1-ratio.svg);
background-size: 150px auto;

Otherwise, per rule 3 we consult the image, using the image’s dimension if it has it. If it doesn’t, per rule 4, we use the background area’s dimension. Either way, we have our rendered size.

background: url(no-dimensions-or-ratio.svg);
background-size: auto 150px;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: 200px auto;

background: url(100px-wide-no-height-or-ratio.svg);
background-size: auto 125px;

Whee, that’s a mouthful!

Yes. Yes it is. (Two hundred tests, remember?) But it’s shiny!

Anything else to know?

In rewriting the sizing algorithm, I was confronted with the problem of how to resize CSS gradients (distinct from gradients embedded in SVG images), which CSS treats as an image subtype.

Our previous sizing algorithm happened to treat gradients as if they were a special image type which magically inherited the intrinsic ratio of their context. Thus if the background were resized with a single length, the gradient would paint over a proportional part of the background area. Other resizing would simply expand them to cover the background area.

CSS 3 Image Values specifies the nature of the images represented by gradients: they have no dimensions and no intrinsic ratio. Firefox 8 implements these semantics, which are a change from gradient rendering semantics in previous releases. This will affect rendering only in the case where background-size is auto <length> or <length> auto (and equivalently, simply <length>). Thus if you wish to resize a gradient background, you should not use a length in concert with auto to do so, because in that case rendering will vary across browsers.
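For instance, a sketch of that advice (the unprefixed gradient syntax is used here purely for brevity; a vendor-prefixed form may be needed depending on the browser):

background: linear-gradient(red, blue);
background-size: 100px auto; /* avoid: rendering varies across browsers */

background: linear-gradient(red, blue);
background-size: 100px 150px; /* safe: both dimensions are fixed lengths */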

Conclusion

SVG background images have now become more powerful than you can possibly imagine. If the SVG has fixed dimensions, it’ll work like any raster image. But SVG goes beyond this: if an SVG image only has partial fixed dimensions, the final rendering will respect that partial dimension information. Proportioned images will remain proportioned (unless you force them out of proportion); they won’t be disproportionally stretched just because the background area has particular dimensions. Images with a dimension but no intrinsic ratio will have the dimension used when the background is auto-sized, rather than simply have it be ignored. These aren’t just bugfixes: they’re new functionality for the web developer’s toolbox.

Now go out and make better shiny with this than I have. 🙂

20.10.11

Implementing mozilla::ArrayLength and mozilla::ArrayEnd, and some followup work

Jeff @ 16:03

In my last post I announced the addition of mozilla::ArrayLength and mozilla::ArrayEnd to the Mozilla Framework Based on Templates, and I noted I was leaving a description of how these methods were implemented to a followup post. This is that post.

The C++ template trick used to implement mozilla::ArrayLength

The implementations of these methods are surprisingly simple:

template<typename T, size_t N>
size_t
ArrayLength(T (&arr)[N])
{
  return N;
}

template<typename T, size_t N>
T*
ArrayEnd(T (&arr)[N])
{
  return arr + ArrayLength(arr);
}

The trick is this: you can templatize an array based on its compile-time length. Here we templatize both methods on: the type of the elements of the array, so that each is polymorphic; and the number of elements in the array. Then inside the method we can refer to that length, a constant known at compile time, and simply return it to implement the desired semantics.

Templatizing on the length of an array may not seem too unusual. The part that may be a little unfamiliar is how the array is described as a parameter of the template method: T (&arr)[N]. This declares the argument to be a reference to an array of N elements of type T. Its being a reference is important: we don’t actually care about the array contents at all, and we don’t want to copy them to call the method. All we care about is its type, which we can capture without further cost using a reference.

This technique is uncommon, but it’s not new to Mozilla. Perhaps you’ve wondered at some point why Mozilla’s string classes have both EqualsLiteral and EqualsASCII: the former for use only when comparing to “an actual literal string”, the latter for use when comparing to any const char*. You can probably guess why this interface occurs now: EqualsLiteral is a method templatized on the length of the actual literal string passed to it. It can be more efficient than EqualsASCII because it knows the length of the compared string.
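To make the shape of that trick concrete, here’s a minimal sketch of the idea (this is not Mozilla’s actual string code; the class and exact signatures here are invented for illustration):

#include <stddef.h>
#include <string.h>

// Hypothetical, stripped-down string class, purely to illustrate the trick.
class ExampleString
{
    const char* mData;
    size_t mLength;

  public:
    ExampleString(const char* data, size_t length) : mData(data), mLength(length) { }

    // Accepts only actual char arrays (such as literals): N, which includes
    // the terminating '\0', is known at compile time, so no strlen is needed.
    template<size_t N>
    bool EqualsLiteral(const char (&lit)[N]) const
    {
      return mLength == N - 1 && 0 == memcmp(mData, lit, N - 1);
    }

    // Accepts any const char*, so the length must be computed at runtime.
    bool EqualsASCII(const char* str) const
    {
      return mLength == strlen(str) && 0 == memcmp(mData, str, mLength);
    }
};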

Using this trick in other code

I did most of the work to convert NS_ARRAY_LENGTH to ArrayLength with a script. But I still had to look over the results of the script to make sure things were sane before proceeding with it. In doing so, I noticed a decent number of places where an array was created, then it and its length were being passed as arguments to another method. For example:

void nsHtml5Atoms::AddRefAtoms()
{
  NS_RegisterStaticAtoms(Html5Atoms_info, ArrayLength(Html5Atoms_info));
}

Using ArrayLength here is safer than hard-coding a length. But safer still would be to not require callers to pass an array and a length separately — rather to pass them together. We can do this by pushing the template trick down another level, into NS_RegisterStaticAtoms (or at least into an internal method used everywhere, if the external API must be preserved for some reason):

static nsresult
RegisterStaticAtoms(const nsStaticAtom* aAtoms, PRUint32 aAtomCount)
{
  // ...old implementation...
}

template<size_t N>
nsresult
NS_RegisterStaticAtoms(const nsStaticAtom (&aAtoms)[N])
{
  return RegisterStaticAtoms(aAtoms, N);
}

The pointer-and-length method generally still needs to stick around somewhere, and it does in this rewrite here. It’s just that it wouldn’t be the interface user code would see, or it wouldn’t be the primary interface (for example, it might be protected or similar).
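With a templatized entry point like the sketch above, the AddRefAtoms caller shown earlier wouldn’t mention a length at all:

void nsHtml5Atoms::AddRefAtoms()
{
  // The template deduces the array's length, so it can never get out of
  // sync with the array itself.
  NS_RegisterStaticAtoms(Html5Atoms_info);
}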

NS_RegisterStaticAtoms was just one such method which could use improvement. In a quick skim I also see:

…as potential spots that could be improved — at least for some callers — with some additional templatization on array length.

I didn’t look super-closely at these, so I might have missed some. Or I might have been over-generous in what could be rewritten to templatize on length, seeing only the one or two places that pass fixed lengths and missing the majority of cases that don’t. But there’s definitely a lot of cleaning that could be done here.

A call for help

Passing arrays and lengths separately is dangerous if you don’t know what you’re doing. The trick used here eliminates it, in certain cases. The more we can use this pattern, the more we can fundamentally reduce the danger of separating data from its length.

I don’t have time to do the above work myself. (I barely had time to do the ArrayLength work, really. I only did it because I’d sort of started the ball rolling in a separate bug, so I felt some obligation to make sure it got done.) And it’s not particularly hard work, nor does it require especial knowledge of the relevant code. It’s a better use of my time for me to work on JavaScript engine code, or on other code I know particularly well, than to do this work. But for someone interested in getting started working on Gecko C++ code, it would be a good first project. I’ve filed bug 696242 for this task; if anyone’s interested in a good first bug for getting into Mozilla C++ coding, feel free to start with that one. If you have questions about what to do, in any way, feel free to ask them there.

On the other hand, if you have any questions about the technique in question, or the way it’s used in Mozilla, feel free to ask them here. But if you want to contribute to fixing the issues I’ve noted, let’s keep them to the bug, if possible.

Computing the length or end of a compile-time constant-length array: {JS,NS}_ARRAY_{END,LENGTH} are out, mozilla::Array{Length,End} are in

Jeff @ 16:03

Determining the length of a fixed-length array

Suppose in C++ you want to perform variations of some simple task several times. One way to do this is to loop over the variations in an array to perform each task:

#include <stdlib.h>

/* Defines the necessary envvars to 1. */
int setVariables()
{
  static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" };
  for (int i = 0; i < 6; i++)
    if (0 > setenv(names[i], "1", 1)) return -1;
  return 0;
}

Manually looping by index is prone to error. One particular issue with loop-by-index is that you must correctly compute the extent of iteration. Hard-coding a constant works, but what if the array changes? The constant must also change, which isn’t obvious to someone not looking carefully at all uses of the array.

The traditional way to get an array’s length is with a macro using sizeof:

#define NS_ARRAY_LENGTH(arr)  (sizeof(arr) / sizeof((arr)[0]))

This works but has problems. First, it’s a macro, which means it has the usual macro issues. For example, macros are untyped, so you can pass in “wrong” arguments to them and may not get type errors. This is the second problem: NS_ARRAY_LENGTH cheerfully accepts non-array pointers and returns a completely bogus length.

  const char* str = "long enough string";
  char* copy = (char*) malloc(NS_ARRAY_LENGTH(str)); // usually 4 or 8
  strcpy(copy, str); // buffer overflow!

Introducing mozilla::ArrayLength and mozilla::ArrayEnd

Seeing an opportunity to both kill a moderately obscure macro and to further improve the capabilities of the Mozilla Framework Based on Templates, I took it. Now, rather than use these macros, you can #include "mozilla/Util.h" and use mozilla::ArrayLength to compute the length of a compile-time array. You can also use mozilla::ArrayEnd to compute arr + ArrayLength(arr). Both these methods (not macros!) use C++ template magic to only accept arrays with compile-time-fixed length, failing to compile if something else is provided.
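As a quick illustration (a sketch reusing the setVariables example from above), the hard-coded loop bound disappears:

#include <stdlib.h>

#include "mozilla/Util.h"

/* Defines the necessary envvars to 1, without hard-coding the array length. */
int setVariables()
{
  static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" };
  for (size_t i = 0; i < mozilla::ArrayLength(names); i++)
    if (0 > setenv(names[i], "1", 1)) return -1;
  return 0;
}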

Limitations

Unfortunately, ISO C++ limitations make it impossible to write a method completely replacing the macro. So the macros still exist, and in rare cases they remain the correct answer.

The array can’t depend on an unnamed type (class, struct, union) or a local type

According to C++ §14.3.1 paragraph 2, “A local type [or an] unnamed type…shall not be used as a template-argument for a template type-parameter.” C++ makes this a compile error:

size_t numsLength()
{
  // unnamed struct, also locally defined
  static const struct { int i; } nums[] = { { 1 }, { 2 }, { 3 } };

  return mozilla::ArrayLength(nums);
}

It’s easy to avoid both limitations: move local types to global code, and name them.

// now defined globally, and with a name
struct Number { int i; };
size_t numsLength()
{
  static const Number nums[] = { 1, 2, 3 };
  return mozilla::ArrayLength(nums);
}

mozilla::ArrayLength(arr) isn’t a constant, NS_ARRAY_LENGTH(arr) is

Some contexts in C++ require a compile-time constant expression: template parameters, local array lengths (in C++, though not in C99), array lengths in typedefs, the values of enum initializers, static/compile-time assertions (which are usually bootstrapped off these other locations), and perhaps others. A function call, even one evaluating to a compile-time constant, is not a constant expression.

One other context doesn’t require a constant but strongly wants one: the values of static variables, inside classes and methods and out. If the value is a function call, even one that computes a constant, the compiler might have to initialize the variable at runtime rather than at compile time, delaying startup.

The long and short of it is that everything in the code below is a bad idea:

int arr[] = { 1, 2, 3, 5 };
static size_t len = ArrayLength(arr); // not an error, but don't do it
void t(JSContext* cx)
{
  js::Vector<int, ArrayLength(arr)> v(cx); // non-constant template parameter
  int local[ArrayLength(arr)]; // variable-length arrays not okay in C++
  typedef int Mirror[ArrayLength(arr)]; // non-constant array length
  enum { L = ArrayLength(arr) }; // non-constant initializer
  PR_STATIC_ASSERT(4 == ArrayLength(arr)); // special case of one of the others
}

In these situations you should continue to use NS_ARRAY_LENGTH (or in SpiderMonkey, JS_ARRAY_LENGTH).

mozilla/Util.h is fragile with respect to IPDL headers, for include order

mozilla/Util.h includes mozilla/Types.h, which includes jstypes.h, which includes jsotypes.h, which defines certain fixed-width integer types: int8, int16, uint8, uint16, and so on. It happens that ipc/chromium/src/base/basictypes.h also defines these integer types — but incompatibly on Windows only. This header is, alas, included through every IPDL-generated header. In order to safely include any mfbt header in a file which also includes an IPDL-generated header, you must include the IPDL-generated header first. So when landing patches using mozilla/Util.h, watch out for Windows-specific bustage.
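Concretely, the safe ordering looks something like this (the IPDL-generated header name below is just an illustrative placeholder):

// The IPDL-generated header (and thus basictypes.h) must come first...
#include "mozilla/dom/PExampleParent.h"  // hypothetical IPDL-generated header

// ...and only afterward is it safe to pull in mfbt headers.
#include "mozilla/Util.h"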

Removing the limitations

The limitations on the type of elements in arrays passed to ArrayLength are unavoidable limitations of C++. But C++11 removes these limitations, and compilers will likely implement support fairly quickly. When that happens we’ll be able to stop caring about the local-or-unnamed problem, not even needing to work around it.

The compile-time-constant limitation is likewise a limitation of C++. It too will go away in C++11 with the constexpr keyword. This modifier specifies that a function, given constant arguments, computes a constant. The compiler must allow calls to the function that have constant arguments to be used as compile-time constants. Thus when compilers support constexpr, we can add it to the declaration of ArrayLength and begin using ArrayLength in compile-time-constant contexts. This is more low-hanging C++11 fruit that compilers will pick up soon. (Indeed, GCC 4.6 already implements it.)
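A sketch of what that will look like, assuming a compiler that supports constexpr:

#include <stddef.h>

// The same template as before, now usable where a constant expression is required.
template<typename T, size_t N>
constexpr size_t
ArrayLength(T (&arr)[N])
{
  return N;
}

int arr[] = { 1, 2, 3, 5 };

// Both of these were errors without constexpr; with it, they compile.
static_assert(ArrayLength(arr) == 4, "arr should contain four elements");
typedef int Mirror[ArrayLength(arr)];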

Last, we have the Windows-specific #include ordering requirement. We have some ideas for getting around this problem, and we hope to have a solution soon.

A gotcha

Both these methods have a small gotcha: their behavior may not be intuitive when applied to C strings. What does sizeof("foo") evaluate to? If you think of "foo" as a string, you might say 3. But in reality "foo" is much better thought of as an array — and strings are '\0'-terminated. So actually, sizeof("foo") == 4. This was the case with NS_ARRAY_LENGTH, too, so it’s not new behavior. But if you use these methods without considering this, you might end up misusing them.
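A tiny example of the gotcha in action (a sketch; mozilla/Util.h is the header introduced above):

#include <stdio.h>

#include "mozilla/Util.h"

int main()
{
  const char word[] = "foo";
  // Prints 4, not 3: 'f', 'o', 'o', plus the terminating '\0'.
  printf("%zu\n", mozilla::ArrayLength(word));
  return 0;
}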

Conclusion

Avoid NS_ARRAY_LENGTH when possible, and use mozilla::ArrayLength or mozilla::ArrayEnd instead. And watch out when using them on strings, because their behavior might not be what you wanted.

(Curious how these methods are defined, and what C++ magic is used? See my next post.)

03.10.11

Washington, D.C., ex post: The decisions in Tapia and Microsoft

(Just started reading? See part 1, part 2, part 3, part 4, part 5, and part 6.)

Back in April I visited Washington, D.C. I visited partly to pick up some bobbleheads at an opportune time (just before Easter, and just before visiting family nearly as far eastward from California) and partly to attend Supreme Court oral arguments while I had the chance. The two cases I saw argued were Tapia v. United States and Microsoft v. i4i Limited Partnership. Shortly after I made some minor predictions for the cases, following up on an introduction of the cases and thoughts from oral argument. Let’s take a look at how the cases turned out, before the October 2011 term arguments start. (At this point on Monday, October 3, there’s probably already a line outside the Supreme Court building for the first arguments of the term.) If you need a refresher on the cases themselves, read my introductions noted above: for space reasons I won’t review much here.

Tapia v. United States

The Court unanimously ruled for Tapia, deciding that a judge may not consider the availability of rehabilitation programs when imposing a sentence of imprisonment or in choosing to lengthen a sentence.

The opinion

Justice Kagan wrote the opinion for a unanimous Court. Tapia had been sentenced to 51 months in prison, seemingly because the sentencing judge thought she should take part in a particular drug treatment program: a program she’d only be eligible for if she were in prison for a longer sentence. Justice Kagan concluded that a sentencing court can’t impose a prison term, and it can’t extend a prison term when it has decided to impose one, to foster a defendant’s rehabilitation.

Justice Kagan first briefly reviewed the history of the Sentencing Reform Act which enacted the relevant statutes (displaying almost professorial affection in noting that, “Aficionados of our sentencing decisions will recognize much of the story line.”). She concluded that the Act was intended to make sentencing more deterministic and consistent by eliminating much discretionary authority during sentencing and prior to release.

Justice Kagan next turned to the text of the relevant laws. She examined the text of 18 U.S.C. §3582(a), which reads:

The court, in determining whether to impose a term of imprisonment, and, if a term of imprisonment is to be imposed, in determining the length of the term…shall consider the factors set forth in section 3553(a) to the extent that they are applicable, recognizing that imprisonment is not an appropriate means of promoting correction and rehabilitation.

Justice Kagan concluded that, “§3582(a) tells courts that they should acknowledge that imprisonment is not suitable for the purpose of promoting rehabilitation.” While Justice Kagan noted that the text could have been more commanding — “thou shalt not”, say — she thought that Congress had nonetheless made itself clear. Justice Kagan also considered the argument that the “recognizing” clause applied only when determining a sentence, not when possibly lengthening it. She rejected this argument, noting that standard rules of grammar argued that a court considers the relevant factors both when deciding to imprison and when determining the length of imprisonment, and from this concluding that a court must “recognize” the inappropriateness of imprisonment for rehabilitation both when sentencing and when choosing a duration of imprisonment.

Justice Kagan also noted context supporting her interpretation. She led with 28 U.S.C. § 994(k), which I previously noted could shed light on the proper interpretation. She also noted the pointed absence of statutory authority for courts to ensure offenders participated in rehabilitation programs. (Tapia didn’t participate in the relevant rehab program because she wasn’t sent to the prison the judge recommended and because she wasn’t interested in the program.) Finally, she noted that those willing to consider legislative history would find support for her interpretation in the relevant Senate Report.

Justice Kagan next rejected arguments that the “rehabilitation model” which the SRA supplanted referred only to undue belief in “isolation and prison routine” causing the prisoner to reform. She called this reading “too narrow”, citing an essay which characterized the rehabilitation model more broadly. This was Part III, section B, if you’re interested in more detail — I’m not going to attempt to summarize any further than that.

Justice Kagan last noted that the sentencing judge may have improperly considered rehabilitation in determining the length of Tapia’s sentence. Thus the Court left open the possibility that the sentencing judge might not have done so. Finally, the Court sent the case back to the Ninth Circuit for further action.

The concurrence

Neither Justice Sotomayor nor Justice Alito was convinced that the sentencing judge actually did improperly sentence Tapia. Evidently unsatisfied by Justice Kagan’s noting that the sentencing judge only might have acted improperly, Justice Sotomayor wrote a concurrence, joined by Justice Alito, explaining why she thought the sentencing judge had not acted improperly. At the same time, she noted that the sentencing judge’s rationale was less than clear, and that she wasn’t completely certain that he hadn’t acted improperly. Thus both justices nonetheless joined Kagan’s opinion in full.

The outcome

None of this means that Tapia will necessarily get what she presumably wants: a shortened prison sentence. The Court reversed the judgment of the circuit court that upheld her sentence, and it remanded so that court would take a second look, but it didn’t specify the actual outcome. Justice Kagan’s opinion doesn’t conclude that the sentencing judge improperly lengthened Tapia’s sentence for the purpose of rehabilitation: it merely says that the judge may have done so. Justice Sotomayor’s concurrence, joined by Justice Alito, only further emphasizes this point. So on remand, the lower court might conclude that the sentencing judge didn’t improperly lengthen Tapia’s sentence to 51 months. Or it might not. Either way, Tapia’s done well so far: getting the Supreme Court to hear your case, and to rule in your favor, is no small feat.

Even if Tapia convinces the Ninth Circuit that the sentencing judge improperly lengthened her sentence, Tapia might be unsuccessful. Justice Kagan’s opinion concludes with, “[w]e leave it to the Court of Appeals to consider the effect of Tapia’s failure to object to the sentence when imposed.” So Tapia might have missed her chance to win that argument.

Thoughts

I’d gone into this case understanding it to be a nice concise demonstration of statutory interpretation, and I wasn’t mistaken. I wasn’t certain of the correct outcome on first reading the briefs, but §994(k) sealed it for me. It was nice to be vindicated in my thoughts on the case.

It’s easy to overread a case, picking out extremely nitpicky details and magnifying their importance. At the same time, a few details in Kagan’s opinion stuck out at me. First, in analyzing the statutory text, Kagan turned to the 1987 Random House dictionary for definitions. The Sentencing Reform Act was enacted in 1984, so the 1987 dictionary is contemporaneous. Second, Kagan prefaces the paragraph dealing with legislative history, “for those who consider legislative history useful”. The textualists on the bench will insist that the proper dictionary to interpret language is one contemporary with its writing, as a 1987 dictionary would usually be for a 1984 law. And Justice Scalia in particular rejects any reference to legislative history: he believes the law is what was passed, not what was not passed, as the aforementioned Senate Report was not. I think Kagan probably wrote as she did as gestures of comity to her fellow justices, such that everyone would be happy with the resulting opinion. Maybe that’s an overread, but I would guess it isn’t.

It’s also worth noting that this case was unanimous. Remember, anywhere from a plurality to (more often) a majority of all Supreme Court decisions are unanimous. The Justices are not as fractious a bunch as you would believe from the cases and decisions that receive significant airplay.

Microsoft v. i4i

The Court unanimously (minus Chief Justice Roberts, who had recused himself apparently because his family owned Microsoft stock) ruled that the standard of proof for patent invalidity was clear and convincing evidence, not the lesser burden of merely a preponderance of the evidence. Further, it concluded that this standard was consistent both for evidence which the Patent and Trademark Office had reviewed, and for evidence which it had not reviewed.

Justice Sotomayor’s opinion for the Court

Justice Sotomayor wrote the opinion for all but Justice Thomas (more on him later). Her opinion relied on Justice Cardozo’s opinion in RCA v. Radio Engineering Laboratories, Inc.. Justice Cardozo in 1934 had described the standard of proof for finding invalidity as “clear and cogent evidence”. By the time the language at issue in Microsoft was added, Justice Sotomayor deemed this language to have become part of the common law (roughly: judge-made law, when some dividing line or another must be set for consistency but no laws have specified one). Moreover, she deemed Congress’s language to have used terms of art with well-known meanings to judges, which codified the “clear and convincing” standard. Thus until Congress says otherwise, “clear and convincing evidence” is the standard of proof for declaring a patent invalid.

Justice Sotomayor disagreed with the various narrow views Microsoft took of prior patent decisions, both at the Supreme Court and in lower courts, which would have set different standards of proof for certain forms of evidence. (Curiously, those forms happened to be the ones Microsoft was trying to use.) She said that even “squint[ing]” the Court couldn’t see qualifications of when clear and convincing would apply as the standard.

Justice Sotomayor also disagreed with Microsoft’s alternative argument that a reduced standard of proof applies to evidence not reviewed by the PTO. She thought that prior cases at the Court and elsewhere had consistently at most concluded that evidence reviewed by the PTO could be deemed to have “more weight” than evidence not seen by it.

Finally, Justice Sotomayor addressed the competing policy arguments of both parties: “We find ourselves in no position to judge the comparative force of these policy arguments.” Instead she said the ball was in Congress’s court: if a different standard of proof was to apply, it was up to Congress to enact it.

Justice Breyer’s concurrence

Justice Breyer, joined by Justices Scalia and Alito, wrote separately to emphasize that the clear and convincing standard of proof applied only to questions of fact, not to questions of law. What’s the difference? A jury will decide the facts of a case, but it won’t decide the nature of the legal issues in it, or how those issues map onto the facts. Those legal issues are determined by judges, consistent with statutory and common law, at least partly to ensure consistency in application. Quoting from Breyer’s concurrence (citations omitted) will probably illuminate the difference better than I can summarize it (or at least illuminate no worse):

Many claims of invalidity rest, however, not upon factual disputes, but upon how the law applies to facts as given. Do the given facts show that the product was previously “in public use”? Do they show that the invention was “nove[l]” and that it was “non-obvious”? Do they show that the patent applicant described his claims properly? Where the ultimate question of patent validity turns on the correct answer to legal questions—what these subsidiary legal standards mean or how they apply to the facts as given—today’s strict standard of proof has no application.

Justice Thomas’s concurrence in the judgment

Justice Thomas in his opinion agreed with the result, but he didn’t agree with the reasoning used to reach it. Unlike the other justices, he thought that when Congress said a patent should be “presumed valid”, that did not clearly indicate to judges that Congress intended to codify the clear and convincing standard. But since Congress had not specified a standard of proof, Justice Thomas concluded that the common law rule from Justice Cardozo in RCA applied. So in the end Justice Thomas held that the standard of proof of invalidity was clear and convincing evidence, but he reached it in a different manner.

The outcome

On the face of it, Microsoft losing here means that if they want to avoid a $300 million judgment, they’re going to need to try another argument in the lower courts. But since they’ve already gone through once, they’re mostly limited to whatever arguments they’ve already made, and preserved to be argued further. I don’t know how many arguments that leaves them, but at this point I’m guessing it’s a pretty small number. So Microsoft is likely out $300 million at this point, plus a bunch more for the legal costs of litigating this matter for as long, and as far, as they did.

Thoughts

This was another fun case to follow, although unlike Tapia it was much harder to follow, and it required more knowledge of the surrounding law to really understand it. Policy-wise, I tend to think it might be better if patents were easier to overturn. Thus for that reason I think a lower standard of proof might be a better thing, although it’s hard to be sure if such a change wouldn’t have other adverse effects negating that benefit. But as far as the actual law goes, and not what I wish (however uncertainly) might be the case, Microsoft seemed maybe to be stretching a little. (Maybe. It was hard to be sure given the extent of my experience with any of the relevant laws, cases, &c.) Looking at the opinions in retrospect, that intuition seems to have been right.

As far as the opinions go, I find something to like in all of them, to some degree or another. The “clear and cogent” language in the Cardozo opinion did seem fairly clear in explaining a standard of proof, if one assumed Microsoft’s narrow read of the conditions when it applied to be a stretch. All the justices agreed on that. Breyer’s opinion distinguishing questions of fact and law seemed pretty smart, too: given how complex this area of law seemed just trying to read up for one case, probably nobody would be very happy if questions of law got lumped in with questions of fact for juries. And I liked the way Justice Sotomayor brushed off all of the policy arguments both sides made (arguments so lopsidedly unbalanced and cherry-picked that relying on either completely would be destructive to the ends of the patent system). Ideally courts should merely interpret the law, not make policy or choose amongst policies, and the legislative and executive branches should decide policy.

But Justice Thomas’s opinion, lumped in with the parts of Justice Sotomayor’s opinion with which he agreed, seems like the best reading to me, at least based on what I (think I) know. I didn’t really think the words “shall be presumed valid” clearly referred to a particular standard of proof such that they could be a term of art, as all the justices but Thomas would have. At this point, assuming I understand how the law works correctly in the absence of legislative action, reverting to the state of the matter as it was before — that is, Justice Cardozo’s position — seems the right move to me.

Again, that’s just how I’m reading the law. It’s not really what I want in the patent system, which I think could use a good number of changes to adapt to the modern world.

It’s also worth noting — again — that this case, too, was unanimous. I was a little surprised that both cases turned out that way, as my half-informed readings had made me think neither case was quite that straightforward. Then again, the Supreme Court never really gets easy cases, yet even still they’re frequently unanimous. So I shouldn’t be too surprised even in these particular cases.

Conclusion

If you haven’t done it already, I’d recommend taking a look at the actual opinions in these cases. Law has this stigma of being inscrutable. In various areas of law, it doubtless is just that. But in areas not densely technical, legal opinions (particularly higher-court opinions) can be surprisingly readable (once you condition yourself to skip over all the inline citations). Neither case was so densely technical that an intelligent reader couldn’t follow it. Indeed, I’d say they were generally fairly readable. Give it a shot: you might be surprised what you can learn reading the occasional legal opinion. And when a news story breaks, you’ll get a much less colored view of it if you read it from the source, rather than merely read coverage of it.