03.10.11

Washington, D.C., ex post: The decisions in Tapia and Microsoft

(Just started reading? See part 1, part 2, part 3, part 4, part 5, and part 6.)

Back in April I visited Washington, D.C. I went partly to pick up some bobbleheads at an opportune time (just before Easter, and just before visiting family nearly as far eastward from California) and partly to attend Supreme Court oral arguments while I had the chance. The two cases I saw argued were Tapia v. United States and Microsoft v. i4i Limited Partnership. Shortly after, I made some minor predictions for the cases, following up on an introduction of the cases and thoughts from oral argument. Let’s take a look at how the cases turned out, before the October 2011 term arguments start. (At this point on Monday, October 3, there’s probably already a line outside the Supreme Court building for the first arguments of the term.) If you need a refresher on the cases themselves, read my introductions noted above: for space reasons I won’t review much here.

Tapia v. United States

The Court unanimously ruled for Tapia, deciding that a judge may not consider the availability of rehabilitation programs when imposing a sentence of imprisonment or when choosing to lengthen one.

The opinion

Justice Kagan wrote the opinion for a unanimous Court. Tapia had been sentenced to 51 months in prison, seemingly because the sentencing judge thought she should take part in a particular drug treatment program: a program she’d only be eligible for if she were in prison for a longer sentence. Justice Kagan concluded that a sentencing court can’t impose a prison term, and it can’t extend a prison term when it has decided to impose one, to foster a defendant’s rehabilitation.

Justice Kagan first briefly reviewed the history of the Sentencing Reform Act, which enacted the relevant statutes (displaying almost professorial affection in noting that “Aficionados of our sentencing decisions will recognize much of the story line.”). She concluded that the Act was intended to make sentencing more deterministic and consistent by eliminating much discretionary authority during sentencing and prior to release.

Justice Kagan next turned to the text of the relevant laws. She examined the text of 18 U.S.C. §3582(a), which reads:

The court, in determining whether to impose a term of imprisonment, and, if a term of imprisonment is to be imposed, in determining the length of the term…shall consider the factors set forth in section 3553(a) to the extent that they are applicable, recognizing that imprisonment is not an appropriate means of promoting correction and rehabilitation.

Justice Kagan concluded that “§3582(a) tells courts that they should acknowledge that imprisonment is not suitable for the purpose of promoting rehabilitation.” While she noted that the text could have been more commanding — “thou shalt not”, say — she thought that Congress had nonetheless made itself clear. Justice Kagan also considered the argument that the “recognizing” clause applied only when deciding whether to impose a sentence, not when deciding how long to make it. She rejected this argument, noting that standard rules of grammar indicate that a court considers the relevant factors both when deciding to imprison and when determining the length of imprisonment, and concluding from this that a court must “recognize” the inappropriateness of imprisonment for rehabilitation at both steps.

Justice Kagan also noted context supporting her interpretation. She led with 28 U.S.C. § 994(k), which I previously noted could shed light on the proper interpretation. She also noted the pointed absence of statutory authority for courts to ensure offenders participated in rehabilitation programs. (Tapia didn’t participate in the relevant rehab program because she wasn’t sent to the prison the judge recommended and because she wasn’t interested in the program.) Finally, she noted that those willing to consider legislative history would find support for her interpretation in the relevant Senate Report.

Justice Kagan next rejected arguments that the “rehabilitation model” which the SRA supplanted referred only to undue belief in “isolation and prison routine” causing the prisoner to reform. She called this reading “too narrow”, citing an essay which characterized the rehabilitation model more broadly. This was Part III, section B, if you’re interested in more detail — I’m not going to attempt to summarize any further than that.

Justice Kagan lastly noted that the sentencing judge may have improperly considered rehabilitation in determining the length of Tapia’s sentence, thereby leaving open the possibility that he had not done so. Finally, the Court sent the case back to the Ninth Circuit for further action.

The concurrence

Neither Justice Sotomayor nor Justice Alito was convinced that the sentencing judge actually did improperly sentence Tapia. Evidently unsatisfied with Justice Kagan’s observation that the sentencing judge only might have acted improperly, Justice Sotomayor wrote a concurrence, joined by Justice Alito, explaining why she thought the sentencing judge had not acted improperly. At the same time, she noted that the sentencing judge’s rationale was less than clear, and that she wasn’t completely certain he hadn’t acted improperly. Both justices nonetheless joined Justice Kagan’s opinion in full.

The outcome

None of this means that Tapia will necessarily get what she presumably wants: a shortened prison sentence. The Court reversed the judgment of the circuit court that upheld her sentence, and it remanded for that court to take a second look, but it didn’t specify the actual outcome. Justice Kagan’s opinion doesn’t conclude that the sentencing judge improperly lengthened Tapia’s sentence for the purpose of rehabilitation: it merely says that the judge may have done so. Justice Sotomayor’s concurrence, joined by Justice Alito, only further emphasizes this point. So on remand, the lower court might conclude that the sentencing judge didn’t improperly lengthen Tapia’s sentence to 51 months. Or it might not. Either way, Tapia’s done well so far: getting the Supreme Court to hear your case, and to rule in your favor, is no small feat.

Even if Tapia convinces the Ninth Circuit that the sentencing judge improperly lengthened her sentence, she might still be unsuccessful. Justice Kagan’s opinion concludes with “[w]e leave it to the Court of Appeals to consider the effect of Tapia’s failure to object to the sentence when imposed.” So Tapia might have missed her chance to win that argument.

Thoughts

I’d gone into this case understanding it to be a nice concise demonstration of statutory interpretation, and I wasn’t mistaken. I wasn’t certain of the correct outcome on first reading the briefs, but §994(k) sealed it for me. It was nice to be vindicated in my thoughts on the case.

It’s easy to overread a case, picking out extremely nitpicky details and magnifying their importance. At the same time, a few details in Kagan’s opinion stood out to me. First, in analyzing the statutory text, Kagan turned to the 1987 Random House dictionary for definitions. The textualists on the bench insist that the proper dictionary for interpreting language is one contemporary with its writing, and a 1987 dictionary is roughly contemporaneous with the Sentencing Reform Act, enacted in 1984. Second, Kagan prefaces the paragraph dealing with legislative history with “for those who consider legislative history useful”. Justice Scalia in particular rejects any reference to legislative history: he believes the law is what was passed, not what was not passed, and the aforementioned Senate Report was never passed. I think Kagan probably wrote as she did as gestures of comity to her fellow justices, so that everyone would be happy with the resulting opinion. Maybe that’s an overread, but I would guess it isn’t.

It’s also worth noting that this case was unanimous. Remember, somewhere between a plurality and (more often) a majority of all Supreme Court decisions each term are unanimous. The Justices are not as fractious a bunch as you would believe from the cases and decisions that receive significant airplay.

Microsoft v. i4i

The Court unanimously (minus Chief Justice Roberts, who had recused himself, apparently because his family owned Microsoft stock) ruled that the standard of proof for patent invalidity is clear and convincing evidence, not the lesser burden of a mere preponderance of the evidence. Further, it concluded that this standard is the same both for evidence the Patent and Trademark Office had reviewed and for evidence it had not.

Justice Sotomayor’s opinion for the Court

Justice Sotomayor wrote the opinion for all but Justice Thomas (more on him later). Her opinion relied on Justice Cardozo’s opinion in RCA v. Radio Engineering Laboratories, Inc. Justice Cardozo in 1934 had described the standard of proof for finding invalidity as “clear and cogent evidence”. By the time the language at issue in Microsoft was added, Justice Sotomayor deemed that standard to have become part of the common law (roughly: judge-made law, when some dividing line or another must be set for consistency but no statute has specified one). Moreover, she deemed Congress’s language to use terms of art with meanings well known to judges, thereby codifying the “clear and convincing” standard. Thus until Congress says otherwise, “clear and convincing evidence” is the standard of proof for declaring a patent invalid.

Justice Sotomayor disagreed with the various narrow views Microsoft took of prior patent decisions, both at the Supreme Court and in lower courts, which would have set different standards of proof for certain forms of evidence. (Curiously, those forms happened to be the ones Microsoft was trying to use.) She said that even “squint[ing]”, the Court couldn’t see any qualification of when clear and convincing evidence would apply as the standard.

Justice Sotomayor also disagreed with Microsoft’s alternative argument that a reduced standard of proof applies to evidence not reviewed by the PTO. She thought that prior cases at the Court and elsewhere had at most concluded that evidence the PTO had never seen could be deemed to carry “more weight” than evidence it had reviewed, not that a lower standard of proof applied to it.

Finally, Justice Sotomayor addressed the competing policy arguments of both parties: “We find ourselves in no position to judge the comparative force of these policy arguments.” Instead she said the ball was in Congress’s court: if a different standard of proof was to apply, it was up to Congress to enact it.

Justice Breyer’s concurrence

Justice Breyer, joined by Justices Scalia and Alito, wrote separately to emphasize that the clear and convincing standard of proof applies only to questions of fact, not to questions of law. What’s the difference? A jury will decide the facts of a case, but it won’t decide what the legal issues in the case are, or how those issues map onto the facts. Those legal issues are determined by judges, consistent with statutory and common law, at least partly to ensure consistency in application. Quoting from Breyer’s concurrence (citations omitted) will probably illuminate the difference better than I can summarize it (or at least illuminate it no worse):

Many claims of invalidity rest, however, not upon factual disputes, but upon how the law applies to facts as given. Do the given facts show that the product was previously “in public use”? Do they show that the invention was “nove[l]” and that it was “non-obvious”? Do they show that the patent applicant described his claims properly? Where the ultimate question of patent validity turns on the correct answer to legal questions—what these subsidiary legal standards mean or how they apply to the facts as given—today’s strict standard of proof has no application.

Justice Thomas’s concurrence in the judgment

Justice Thomas agreed with the result, but he didn’t agree with the reasoning used to reach it. Unlike the other justices, he thought that when Congress said a patent should be “presumed valid”, that did not clearly indicate to judges that Congress intended to codify the clear and convincing standard. But since Congress had not specified a standard of proof, Justice Thomas concluded that the common law rule from Justice Cardozo in RCA applied. So in the end Justice Thomas too concluded that the standard of proof of invalidity is clear and convincing evidence, but he reached that conclusion in a different manner.

The outcome

On the face of it, Microsoft losing here means that if they want to avoid a $300 million judgment, they’re going to need to try another argument in the lower courts. But since they’ve already been through those courts once, they’re mostly limited to whatever arguments they’ve already made and preserved to be argued further. I don’t know how many arguments that leaves, but at this point I’m guessing it’s not many. So Microsoft is likely out $300 million at this point, plus a bunch more for the legal costs of litigating this matter for as long, and as far, as they did.

Thoughts

This was another fun case to follow, although it was much harder going than Tapia and required more knowledge of the surrounding law to really understand. Policy-wise, I tend to think it might be better if patents were easier to overturn, and for that reason a lower standard of proof might be a good thing, although it’s hard to be sure such a change wouldn’t have other adverse effects negating that benefit. But as far as the actual law goes, and not what I wish (however uncertainly) might be the case, Microsoft seemed maybe to be stretching a little. (Maybe. It was hard to be sure given the extent of my experience with any of the relevant laws, cases, &c.) Looking at the opinions in retrospect, that intuition seems to have been right.

As far as the opinions go, I find something to like in all of them, to some degree or another. The “clear and cogent” language in the Cardozo opinion did seem fairly clear in stating a standard of proof, once one treated Microsoft’s narrow reading of the conditions in which it applied as a stretch. All the justices agreed on that. Breyer’s opinion distinguishing questions of fact from questions of law seemed pretty smart, too: given how complex this area of law seemed just from reading up for one case, probably nobody would be very happy if questions of law got lumped in with questions of fact for juries. And I liked the way Justice Sotomayor brushed off all of the policy arguments both sides made (arguments so lopsided and cherry-picked that relying entirely on either would be destructive to the ends of the patent system). Ideally courts should merely interpret the law, not make policy or choose amongst policies; deciding policy should be left to the legislative and executive branches.

But Justice Thomas’s opinion, taken together with the parts of Justice Sotomayor’s opinion with which he agreed, seems like the best reading to me, at least based on what I (think I) know. I didn’t really think the words “shall be presumed valid” so clearly referred to a particular standard of proof that they could be a term of art, as all the justices but Thomas would have it. At this point, assuming I correctly understand how the law works in the absence of legislative action, reverting to the state of the matter as it was before — that is, Justice Cardozo’s position — seems the right move to me.

Again, that’s just how I’m reading the law. It’s not really what I want in the patent system, which I think could use a good number of changes to adapt to the modern world.

It’s also worth noting — again — that this case, too, was unanimous. I was a little surprised that both cases turned out that way, as my half-informed readings had made me think neither case was quite that straightforward. Then again, the Supreme Court never really gets easy cases, yet even so its decisions are frequently unanimous. So I shouldn’t be too surprised even in these particular cases.

Conclusion

If you haven’t done it already, I’d recommend taking a look at the actual opinions in these cases. Law has a stigma of being inscrutable, and in various areas of law it doubtless is just that. But in areas that aren’t densely technical, legal opinions (particularly higher-court opinions) can be surprisingly readable (once you condition yourself to skip over all the inline citations). Neither of these cases was so densely technical that an intelligent reader couldn’t follow it; indeed, I’d say both were generally fairly readable. Give it a shot: you might be surprised what you can learn reading the occasional legal opinion. And when a news story breaks, you’ll get a much less colored view of it if you read the source, rather than merely coverage of it.

27.09.11

ಠ_ಠ

This is an utterly content-free rant in which I express my anger at recent Internet Explorer preview releases requiring installation of what is effectively an entirely new operating system. I would like to know how the new IE behaves on various testcases. But I don’t want to risk hosing my primary functioning Windows system to do it, especially if I then lose access to a working IE9 installation. And I am really not interested in wasting a bunch of time spinning up a virtual machine just so I can waste a bunch more time “upgrading” it to test a new version of IE.

Here’s a novel concept: what about shipping a browser that doesn’t have to insert itself deep into the operating system’s guts? Maybe you could even install and uninstall it separately from the OS. But that’s crazy talk; nobody would ever do that, right?

So, yeah, whatever the latest IE10 does, meh. Someone who cares can be the sacrificial lamb and find that out, if it actually matters.

(“Content-free rant”, indeed. Future posts will return to substantive form.)

07.09.11

Followup to recent .mozconfig detection changes: $topsrcdir/mozconfig and $topsrcdir/.mozconfig now both work

Two weeks ago changes landed in Mozilla to reduce the locations searched for a mozconfig to just $MOZCONFIG and $topsrcdir/.mozconfig. Previously a bunch of other weird places were searched, like $topsrcdir/mozconfig.sh and $topsrcdir/myconfig.sh and even some files in $HOME (!). This change made specifying build options more explicit, in line with build system policy to be “as explicit as possible”. Reducing complexity by killing off a bunch of truly odd configuration option locations was good. But I thought it went too far.

The changes also removed $topsrcdir/mozconfig. This location wasn’t nearly as bizarre as the others, and it was more explicit than $topsrcdir/.mozconfig: it appeared in directory listings and folder views. I wasn’t the only person who thought $topsrcdir/mozconfig should stay: the bug which reduced the mozconfig guesswork included rumblings from others wanting to keep support for $topsrcdir/mozconfig, and the blog post announcing the change included yet more.

I filed a bug to re-support $topsrcdir/mozconfig, and the patch has landed. $topsrcdir/.mozconfig and $topsrcdir/mozconfig (either but not both) now work again: use whichever name you like.
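
For reference, here’s a hedged sketch of what this looks like in practice (the particular options below are just common examples, not a recommendation): put a file named mozconfig or .mozconfig at the top of the source tree, or point the MOZCONFIG environment variable at an explicit path.

# Either $topsrcdir/mozconfig or $topsrcdir/.mozconfig (one or the other, not both):
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-debug
ac_add_options --enable-debug
ac_add_options --disable-optimize

# Or keep the file elsewhere and name it explicitly:
#   export MOZCONFIG=/path/to/my-mozconfig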

13.07.11

“I am sorry, very sorry, but a bicycle that has suffered this degree of damage cannot be repaired by any means that I know of.” *

(Subtitle: “Banach-Tarski! Banach-Tarski! Why isn’t this thing working?”)

[Photo: my year-old road bike, its frame broken where it joins the front stem, the front wheel twisted up above its normal location for dramatic effect]
Yes, I'm fine enough now — wasn't so great at the time, but it could have been much worse.

You might (and I think should) have a right to be stupid. That doesn’t mean you should use it. I repeat myself: wear a helmet. Don’t be an idiot.

Also amusing this long after the fact: this is what the police report describes as “moderate damage to the frame”.

Also relevant: this, although the mangled sound track makes me want to do violence to the current state of copyright law that doubtless makes it hard to find an unaltered copy.

* I don’t really know that it can’t be repaired. Although when I was walking back from retrieving the bike from the Palo Alto police, I stopped by Palo Alto Bicycles just for giggles to ask if it would need a new frame. I’ll give you one guess at the answer. I’m unsure exactly what I’m going to do with the bike, or to replace it, just yet.

07.06.11

Introducing mozilla/RangedPtr.h: a smart pointer for members of buffers

Introduction

Suppose you’re implementing a method which searches for “--”, as part of an HTML parser:

bool containsDashDash(const char* chars, size_t length)
{
  for (size_t i = 0; i < length; i++)
  {
    if (chars[i] != '-')
      continue;
    if (chars[i + 1] == '-')
      return true;
  }
  return false;
}

But your method contains a bug! Can you spot it?

The buffer bounds-checking problem

The problem with the above implementation is that the bounds-checking is off. If chars doesn’t contain “--” but ends with a “-”, then chars[i + 1] will read past the end of chars. Sometimes that might be harmless; other times it might be an exploitable security vulnerability.

The most obvious way to fix this in C++ is to pass an object which encapsulates both characters and length together, so you can’t use the characters except in accordance with the length. If you were using std::string, for example, accessing characters by [] or at() would generally assert or throw an exception for out-of-bounds access.
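
For concreteness, here’s a minimal sketch of my own (the sample string and message are purely illustrative) of the safety net an encapsulated type gives you: the same kind of off-by-one access that silently reads past a raw buffer is caught by std::string’s at(), which throws instead.

#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
  // A string that ends with '-' but contains no "--", mirroring the buggy case above.
  std::string chars("ends-with-a-dash-");

  try
  {
    // The equivalent of the buggy chars[i + 1] access when i == length - 1:
    char past = chars.at(chars.length());
    (void) past;
  }
  catch (const std::out_of_range&)
  {
    std::cout << "at() caught the out-of-bounds access" << std::endl;
  }
  return 0;
}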

For one reason or another, however, you might not want to use an encapsulated type. In a parser, for example, you probably would want to use a pointer to process the input string, because the compiler might not be able to optimize an index into the equivalent pointer.

Is there a way to get “safety” via debug assertions or similar without giving up a pointer interface?

Introducing RangedPtr

We’re talking C++, so of course the answer is yes, and of course the answer is a smart pointer class.

The Mozilla Framework Based on Templates in mozilla-central now includes a RangedPtr<T> class. It’s defined in mfbt/RangedPtr.h and can be #included from mozilla/RangedPtr.h. RangedPtr stores a pointer, and in debug builds it stores start and end pointers fixed at construction time. Operations on the smart pointer — indexing, deriving new pointers through addition or subtraction, dereferencing, &c. — assert in debug builds that they don’t exceed the range specified by the start and end pointers. Indexing and dereferencing are restricted to the half-open range [start, end); new-pointer derivation is restricted to the range [start, end] to permit sentinel pointers. (It’s possible for start == end, although you can’t really do anything with such a pointer.)

The RangedPtr interface is pretty straightforward, supporting these constructors:

#include "mozilla/RangedPtr.h"

int nums[] = { 1, 2, 5, 3 };
RangedPtr<int> p1(nums, nums, nums + 4);
RangedPtr<int> p2(nums, nums, 4); // short for (nums, nums, nums + 4)
RangedPtr<int> p3(nums, 4); // short for (nums, nums, nums + 4)
RangedPtr<int> p4(nums); // short for (nums, length(nums))
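
To make the range rules above concrete, here’s a small sketch of my own (the function name is hypothetical): indexing and dereferencing must stay within [start, end), while derived pointers may go as far as the sentinel value end, exactly as described above.

#include "mozilla/RangedPtr.h"

void rangeRulesSketch()
{
  int nums[] = { 1, 2, 5, 3 };
  RangedPtr<int> p(nums, 4);

  int third = p[2];            // fine: index within [start, end)
  RangedPtr<int> end = p + 4;  // fine: a sentinel pointer equal to end is permitted
  // *end;                     // would assert in a debug build: dereference outside [start, end)
  // RangedPtr<int> q = p + 5; // would assert in a debug build: derivation outside [start, end]
  (void) third;
  (void) end;
}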

RangedPtr<T> supports all the usual actions you’d expect from a pointer — indexing, dereferencing, addition, subtraction, assignment, equality and comparisons, and so on. All methods assert self-correctness as far as is possible in debug builds. RangedPtr<T> differs from T* only in that it doesn’t implicitly convert to T*: use the get() method to get the corresponding T*. In addition to being explicit and consistent with nsCOMPtr and nsRefPtr, this will serve as a nudge to consider changing the relevant code to use RangedPtr instead of a raw pointer. But in essence RangedPtr is a pretty easy drop-in replacement for raw pointers into buffers. For example, adjusting containsDashDash to use it to assert in-rangeness is basically a single-line change:

#include "mozilla/RangedPtr.h"

bool containsDashDash(const char* charsRaw, size_t length)
{
  RangedPtr<const char> chars(charsRaw, length);
  for (size_t i = 0; i < length; i++)
  {
    if (chars[i] != '-')
      continue;
    if (chars[i + 1] == '-')
      return true;
  }
  return false;
}

(And to resolve all loose ends, if you wanted containsDashDash to be correct, you’d change the loop to go from 1 rather than 0 and would check chars[i - 1] and chars[i]. Thanks go to Neil in comments for noting this.)
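
For completeness, here’s my own sketch of what that fully fixed version might look like (not code from any actual patch): the loop starts at 1 and looks backwards, so no access can ever go past the end.

#include "mozilla/RangedPtr.h"

bool containsDashDash(const char* charsRaw, size_t length)
{
  RangedPtr<const char> chars(charsRaw, length);
  for (size_t i = 1; i < length; i++)
  {
    // Check the previous character and the current one, never i + 1.
    if (chars[i - 1] == '-' && chars[i] == '-')
      return true;
  }
  return false;
}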

A minor demerit of RangedPtr

RangedPtr is extremely lightweight and should almost always be as efficient as a raw pointer, even as it provides debug-build correctness checking. The sole exception is that, for sadmaking ABI reasons, using RangedPtr<T> as an argument to a method may be slightly less efficient than using a T* (passed-on-the-stack versus passed-in-a-register, to be precise). Most of the time the cost will be negligible, and if the method is inlined there probably won’t be any cost at all, but it’s worth pointing out as a potential concern if performance is super-critical.

Bottom line

Raw pointers into buffers bad, smart RangedPtrs into buffers good. Go forth and use RangedPtr throughout Mozilla code!
