Software Patents: Separating Rhetoric from Facts
In a digitally enabled economy, software is of great and growing importance. Getting the right legal, regulatory, and trade framework in place is, or should be, a priority of the highest order.
However, questions about whether software should be patentable were raised early on (e.g. the 1966 Report of the President’s Commission on the Patent System) and have never gone away. The debate has intensified with the emergence of patent aggregators and trolls as a growing force in the market, along with high-profile global-scale litigation between major technology companies as seen in the “smartphone wars.”
Paradoxically, software patents are both increasingly entrenched and increasingly controversial. The arguments over software patents range from precedent-based legal reasoning to the heterogeneous nature of the technology, the evolution of market products and services, and the practical considerations of navigating and managing the patent system.
Whatever one’s views of the basic arguments on patentability, software is bringing out some troublesome limitations of the patent system. Can the system be fixed to better accommodate software? Many in the patent world claim the answer is improving patent quality, an unobjectionable goal, except that patent quality is hard to define and measure in a meaningful way.
In a speech at the Center for American Progress in November (an event covered by Science Progress), then Under Secretary of Commerce for Intellectual Property and Director of the US Patent and Trademark Office David Kappos offered a spirited defense of software patents and their quality.
[To] those reporting and commenting on the smartphone patent wars as if to suggest that the system is broken: let’s move beyond flippant rhetoric and instead engage in thoughtful discussion.
The week following the speech, Mr. Kappos announced that he would step down. The speech thus proved to be a swan song of sorts: his last public address as Director.
As he has done before, Kappos belittled the smartphone wars and, remarkably, did not even mention the issue of patent trolls – more formally known as patent assertion entities. Yet 62 percent of troll litigation involves software patents, and trolls have accelerated their attacks, not only on producing companies and retailers but on mere users of technology, a phenomenon unique to the United States. While the major players have billions of dollars in cash reserves, more than enough to do battle over patent quality in court, app developers typically lack the resources to defend against trolls.
Mr. Kappos also chose to defend patent quality by citing the recent high-profile smartphone lawsuits among Apple, Microsoft, Samsung, and Motorola:
[T]he various dire reports and commentary have omitted a critical component—the facts. So we decided to get the facts, undertaking our own study to look at the U.S. patents involved in some of the highest profile litigation among major firms in the smartphone industry. We found that in the vast majority of these cases, over 80 percent, the courts have construed the software patents at issue as valid.
However, a closer look at the study he cites shows that this is the “vast majority” of a very small number. The study mentioned was reported in an article by USPTO’s chief economist in the January 2013 issue of the Journal of Economic Perspectives, as one of four freely available papers in a patent symposium:
While 133 patents were initially asserted across 13 lawsuits, a substantial share was dismissed from the cases and, as of November 2012, only 73 patents remained in controversy…. We found that 65 could be fairly characterized as involving “software” inventions… Of the 65 software patents still involved in this litigation, thus far only 21 of them—less than one-third—have received court decisions of the type that provide some indication of their validity or likely validity. Of those, only four patents have had decisions indicating they are invalid or likely invalid. The remaining 17 software patents evaluated so far in these cases have been declared by a court to be valid or likely valid. This 80 percent favorability ratio is not consistent with the pronouncements that the smart phone wars are being driven by low-quality software patents.
So the “vast majority” is actually 17 out of 21—a very small sample. Furthermore, contrary to both Kappos and the journal article, the dataset behind the study reveals that only 5 of the 21 involve federal district court holdings. The other 16 in the sample are findings of the International Trade Commission, or ITC, an independent agency in Washington that examines imports for patent infringement. An ITC proceeding is an expedited administrative process with limited discovery, and its determinations on validity are not binding elsewhere.
The remaining five are indeed federal district court determinations, but all were made by a single judge in a single district court in litigation between Apple and Samsung, and all turned on the “likelihood” of validity for purposes of preliminary injunctions sought by Apple. Of the five patents deemed likely valid by the judge, one has since been invalidated by the USPTO itself. The preliminary injunction awarded on the other four was reversed by the Court of Appeals for the Federal Circuit (often called simply the Federal Circuit) because of the relatively minor role the patents played in the devices.
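The arithmetic behind the “over 80 percent” claim, and how small the underlying sample really is, can be laid out directly from the study’s own numbers. The short Python sketch below is purely illustrative; every figure in it is taken from the passages quoted above:

```python
# Sample narrowing in the USPTO study, per the figures quoted above.
asserted = 133          # patents initially asserted across 13 lawsuits
in_controversy = 73     # still in controversy as of November 2012
software = 65           # fairly characterized as "software" inventions
decided = 21            # received validity-indicating court decisions
held_valid = 17         # declared valid or likely valid
held_invalid = 4        # indicated invalid or likely invalid

favorability = held_valid / decided
print(f"favorability: {favorability:.0%}")   # 17/21 -> 81%, the "over 80 percent"

# The decided sample is also skewed toward non-binding determinations:
itc_findings = 16       # ITC findings, not binding on other tribunals
district_court = 5      # one judge, preliminary-injunction "likelihood" rulings

print(f"decided share of all patents asserted: {decided / asserted:.0%}")  # ~16%
```

In other words, the headline ratio rests on 21 decisions, only five of which came from a federal district court at all.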
Given the recent onset of the smartphone patent wars, it makes more sense to look at outcomes on software patent validity in general, rather than focus on just a few high-profile cases. A recent study by Shawn Miller using a substantial dataset of 980 conclusive district court decisions on patent validity (based on novelty or obviousness) found that for non-software, 33 percent of the patents were held wholly or partially invalid. But, for the 270 software patents in the sample, this rises to 49 percent, a substantial difference. Following the USPTO’s own logic, the agency gets software right only half the time.
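The gap the Miller study reports can be restated the same way. The counts below are approximate back-calculations from the percentages quoted above (the study reports rates, not the exact per-category tallies), so treat them as illustrative:

```python
total_decisions = 980                # conclusive district court validity decisions
software_patents = 270               # software patents in the sample
non_software = total_decisions - software_patents   # 710

software_invalid_rate = 0.49         # held wholly or partially invalid
non_software_invalid_rate = 0.33

# Approximate counts implied by the reported rates.
software_invalid = round(software_patents * software_invalid_rate)      # ~132
non_software_invalid = round(non_software * non_software_invalid_rate)  # ~234

print(f"software: ~{software_invalid} of {software_patents} held invalid")
print(f"non-software: ~{non_software_invalid} of {non_software} held invalid")
```

On these numbers, a litigated software patent is roughly half again as likely as a non-software patent to be struck down.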
However, this methodology should be regarded with caution since most patent disputes never reach a judicial determination; in fact, only around 5 percent of patent cases filed are litigated to a full trial. If a defendant comes up with patent-defeating prior art, the patent holder may dismiss the claim or settle (on undisclosed terms, of course) in order to preserve the patent for assertions against others. If prior art is discovered early enough, it may discourage the patentee from filing an infringement action, so there will be no public record of the assertion. The former chief patent counsel for Apple testified before Congress that he received 25 letters claiming infringement for every patent suit actually filed against Apple.
Established, well-resourced defendants not only have a better sense of the prior art; they can afford to dig long and hard for patent-invalidating prior art that the examiner missed – and can hire costly but convincing lawyers to argue against the patent. Questionable patents in large portfolios are especially unlikely to get tested in court. In Gary Reback’s classic account of IBM’s negotiating tactics with Sun Microsystems, six of the seven asserted patents appeared likely invalid (the seventh was not infringed), but when IBM threatened to produce more patents from its vast arsenal, Sun agreed to pay, and the remaining legions of IBM patents went unevaluated.
In a second argument, Mr. Kappos noted that rejections of software patent applications are upheld in both internal and external appeals. This, he argued, suggests patent quality is effectively evaluated:
[R]ejections in software patent applications taken to our appeals board are upheld at a slightly higher rate than for the office as a whole, and those few decisions appealed to the Federal Circuit are affirmed 95 percent of the time. So to those commenting on the smartphone patent wars with categorical statements that blame the “broken” system on bad software patents, I say—get the facts—they don’t support your position.
But this argument is beside the point. These appeals involved denied applications – which have nothing to do with industry concerns about the allowance of too many low-quality patents.
Examination is a one-sided (ex parte) process between the examiner and the applicant. If the examiner denies the patent, the applicant can either appeal the denial or re-file. In fact, the applicant can re-file endlessly, time after time – a unique feature of the U.S. patent system. But if the examiner wrongly allows an application, there is no opportunity for anyone to formally contest the grant.
Mr. Kappos offered a third and final data point:
Patent quality isn’t broken at all. In fact, our decisions on both allowances and rejections correctly comply with all laws and regulations over 96 percent of the time.
But the 96 percent figure refers to the USPTO’s internal procedures, not the quality of the patents themselves. This is explained in the USPTO’s work on quality metrics (see the overview chart on pages 3-4).
The fact that examiners went “by the book” 96 percent of the time does not mean that their prior-art searches were 96 percent accurate. After all, examiners have only 18 hours on average to examine each patent. The single “external” metric that the USPTO uses is a survey of patent applicants, who seem very pleased with the examination process. This methodology evokes the old days of 1996–2002, when the avowed mission of the patents operation was “to help customers get patents” and applicants were surveyed on how helpful the examiners were. Such a culture is clearly at odds with the objective of ensuring high patent quality through careful scrutiny and the rejection of questionable applications.
Naturally, applicants and their attorneys have been grateful for the increased allowances, which contrast dramatically with the tightening of standards under Undersecretary Kappos’s predecessor. This is documented in a new paper by Cotropia, Quillen, and Webster that draws on PTO data. Their figures show an abrupt shift in the allowance rate beginning in 2009, when Kappos took office:
The decreasing allowance rate preceding 2009 suggests that the USPTO was working to raise the bar before quality was redefined in terms of internal processes (“quality does not equal rejection”) rather than issued patents. Yet, again, it is the quality of issued patents that innovators in industry are concerned about. The redefinition of quality as internal procedure made patent applicants happy and raised morale among examiners, who could now ease up on applicants. The lower patent quality that resulted will not be felt for years, given the time lag between examination and litigation in high tech: on average, trolls do not file suit until 8.3 years after a patent is granted. More fundamentally, the pain that low-quality patents eventually bring will be felt only by yet-undetermined victims in the private sector, not by the USPTO.
The article in the Journal of Economic Perspectives concludes:
In summary, the US federal district courts, which are the principal reviewers of Patent Office decision-making, are finding in a large share of these cases that prior Patent Office examinations of the software patents involved in the smart phone litigation have been completed properly.
But district courts do not decide cases based on how the examiner handled the application; they look at the validity of the patent itself. Indeed, they are required by judicial precedent to hold the patent valid unless there is “clear and convincing evidence” to the contrary – a heightened presumption that is hard to justify given the very limited time the examiner can devote to each application.
Some facts are clear. The total of issued patents has skyrocketed in the last few years:
The 2012 numbers for utility patents represent a surge of 51 percent in the past three years. However, the growth in software patents was substantially greater:
Following the definition used by James Bessen in A Generation of Software Patents, the USPTO granted 75 percent more software patents in 2012 than it did in 2009!
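The two growth figures can be put side by side. In the sketch below, the 2009 software-patent count is back-derived from the roughly 68,000 grants per year cited later in this piece (figure 3), so it is an approximation, not a reported number:

```python
software_2012 = 68_000      # approximate software grants per year (figure 3)
software_growth = 0.75      # 75 percent more in 2012 than in 2009 (Bessen definition)

# Implied 2009 baseline, back-derived from the 2012 figure.
software_2009 = software_2012 / (1 + software_growth)
print(f"implied 2009 software grants: ~{software_2009:,.0f}")   # ~38,857

utility_growth = 0.51       # 51 percent surge in utility patents over the same period

# Software grants grew roughly 1.5 times as fast as utility grants overall.
print(f"growth ratio, software vs. all utility: {software_growth / utility_growth:.2f}")
```

Whatever the exact baseline, software patents were outpacing the already steep growth of the system as a whole.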
Remarkably, this surge comes after the Supreme Court’s 2007 decision in KSR v. Teleflex, which made it easier for examiners to show obviousness and so reject marginal patent applications. Research shows that the KSR ruling had a significant impact on the decisions of district courts and the Court of Appeals for the Federal Circuit, resulting in significantly more invalidations based on obviousness. While it is difficult to determine why patent applications are abandoned (and some are never made public), the KSR ruling appears to have had no impact on the allowance rate. As figure 1 shows, the number of disallowed or abandoned applications dropped by almost two-thirds from 2009 to 2012.
Unfortunately, the data suggests that in practice the line is all too amenable to the peculiar culture and politics of the patent system. In his speech, Mr. Kappos claimed:
By getting [quality] right, we grant patents only for great algorithmic ideas.
From a bureaucratic perspective, granting 68,000 software patents per year (figure 3) may look like extraordinary productivity. From a software developer’s perspective, this is the equivalent of facing 68,000 new laws each year that she must obey. Just as ignorance is no excuse for violating a law, independent invention is no defense to patent infringement.
Moreover, “great” is not in the Patent Act. The way the statute is framed, applicants are entitled to patents, unless the examiner can show otherwise. As Judge Giles Rich, the dean of Federal Circuit jurisprudence (who also drafted the statute) put it, “[patents] are not for exceptional inventors but for average inventors and should not be made hard to get.”
Perhaps Judge Rich was wrong and David Kappos was right: patents should be limited to great ideas. The statutory standard – that the invention must not be obvious to a “person having ordinary skill in the art” – is low enough to provide a lot of work for attorneys and a lot of opportunities for inadvertent infringement by real innovators. Perhaps it is low enough to make quality control impractical and to undermine respect for the system among those it is supposed to benefit.
It is time, in Undersecretary Kappos’s words, for a “thoughtful discussion.” What would that discussion look like? A few big issues stand out:
Quality: Is ordinary skill the right benchmark? What should the threshold of inventiveness be to minimize conflicts resulting from independent invention and to ensure that patents are a source of useful information? How can patent quality be measured, monitored, and ensured in terms that make sense to the intended beneficiaries: software innovators?
Abstraction: In principle, the patent system protects ideas but not abstract ideas. What are the technical, economic, and legal problems associated with different levels of abstraction?
Costs: The costs of investigating and litigating patents are notoriously high – and are multiplied by large numbers of patents, uncertain scope and validity, dispersed innovation and ownership, and product complexity. How can costs be reduced, especially given the rise of patent assertion entities and nuisance lawsuits?
Uncertainty: How can patent-related uncertainty and risk be measured and managed, especially for small entities? How can the patent system be made more transparent, predictable, and accountable?
These problems are not unique to software, but software and the digital economy have brought some of the most troublesome aspects of patents into focus. If we care about the future of innovation in the U.S. and around the world, the problems must be addressed openly and thoughtfully – in terms of real costs and real benefits, not rhetoric and ideology.
Brian Kahin is Senior Fellow at the Computer & Communications Industry Association and Fellow at the MIT Sloan School’s Center for Digital Business.