Your Genes Not for Sale
By Jason Karlawish | June 24, 2013 | https://scienceprogress.org/2013/06/your-genes-not-for-sale/

In a unanimous ruling last week, the U.S. Supreme Court rejected the claim that a patent can be granted on a gene sequence that describes the risk of a disease. Simply stated, you cannot own a piece of the human genome. Praise to the scientists whose hard work discovered the gene, but their discovery is like the periodic table of the elements—not an invention of human creation, but a work of nature. With this decision, the Court has said that information about how genes describe risks of diseases such as breast and ovarian cancer is free for all to use and for no one to own.

That it took the highest court of the United States to step into the world of desktop medicine and proclaim “stop” to the armies of entrepreneurs racing to stake a claim on the human genome illustrates how the zeal to fulfill ambition and amass wealth has seized the biomedical scientific imagination and enterprise.

For nearly two decades, biomedical science and ethics have debated whether ownership of gene sequences is permissible. Much of that debate was steamrolled by the biotech industry and academic investigators, who collectively insisted that only with patent protection could they invest the money, time, and hard work needed to discover genes that explain who is more or less likely to develop diseases such as breast cancer or Alzheimer’s disease.

The promise of a period of exclusive ownership was their incentive to risk time and money. Yet bioethicists, and some investigators as well, argued that owning a gene that explains why Angelina Jolie is at such a high risk of breast cancer that she should have a double mastectomy seems like owning how to read a radar map to deduce that a rainstorm is coming. You can invent radar and patent it, but surely you can’t patent how to interpret it.

With this ruling and its similarly unanimous 2012 Mayo v. Prometheus Laboratories ruling, the Supreme Court has decided that the medical mind cannot be owned. The ruling is a call for a reboot, halting the encroachment of the business model on medicine and medical science.

In Mayo v. Prometheus, the Court ruled that you cannot own a biomarker test that involves measuring a metabolite of a drug to then infer how to dose that drug to treat patients with inflammatory bowel disease. During that case’s oral arguments, Justice Stephen Breyer suggested that such a patent would be like owning how to fertilize a garden. When the lawyer for Prometheus—a biotech company named after the titan who stole fire from the gods and gave it to humans so they could live better lives—replied that how to think about fertilizing a garden was patentable, that was perhaps the moment when the justices collectively decided “enough!” With their Myriad ruling, the Court now places bookends on an issue that will define 21st century biomedical-industrial science. Observing and thinking about the risks of the world of health and disease are not inventions. Euclid can’t own his geometry.

What are the lessons of this case that mark the end of a Wild West free-for-all era of owning nature?

First, biomedical science—the scientists—showed that as a profession, they couldn’t agree on a professional ethic that said such ownership is simply not part of being a scientist. Once upon a time, they believed that. In the late 1970s, Eugene Goldwasser, the discoverer of the hormone erythropoietin that drives red blood cell production, didn’t bother to patent his discovery. But by the late 20th century, many researchers accepted and even embraced as part of being a scientist that they should own their discoveries. And no doubt this change in their ethic was at least in part because they, as well as their academic institutions, could profit mightily from a claim as evident as describing a correlation between a gene and a risk of cancer. To them, this was an “invention” akin to a motor part or chemical assay. As is too often the case in research ethics, society—this time the Supreme Court—had to step in and say to the scientists, unequivocally, “enough!”

Second, we see the power of patient disease advocacy groups. At the heart of the Myriad case were informed, motivated, and fired-up women who argued that no one could own their ability to learn their genetic risk for breast and ovarian cancers. Yet again, we see the power of the patient.

Like the HIV community that in the 1980s transformed FDA drug testing and approval regulations, the breast cancer community is a model for disease advocacy. Among the lessons: Stick to your guns and don’t simply become a patient-based extension of the private sector. Let the PR firms cheerlead the scientists and corporations. Your role is to responsibly collaborate and advocate with them, but also to be prepared to act up and fight back. This is a powerful lesson for the groups who represent the patients who suffer from the killers of the 21st century, especially Alzheimer’s disease and frailty.

Third, the case shows how, despite the relentless march of science that all but banished nature from its textbooks, laboratories, and clinics, the appeal to nature and the natural still evokes deep convictions among nonscientists. Few scientists would dare to describe themselves using once-common 18th century monikers such as “steward of God’s Creation,” and no one dies of “natural causes” anymore. And yet Justice Clarence Thomas’s argument and the Federal Circuit ruling that preceded it read like a strange synthesis of Biology 101 and Aquinas 101.

Justice Thomas writes that “A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated, but cDNA is patent eligible because it is not naturally occurring,” and “The nucleotides on the DNA strand pair naturally with their counterparts, with the exception that RNA uses the nucleotide base uracil instead of thymine.” Nature even engages in creation, as when Thomas writes: “The pre-RNA is then naturally ‘spliced’ by the physical removal of the introns. … The exons-only strand is known as messenger RNA (mRNA), which creates amino acids through translation.” All of these events “occur naturally within cells.” Why? Because “The location and order of the nucleotides existed in nature before Myriad found them.” Amen.

In contrast, cDNA, which the justices ruled a patentable invention, is not of nature but of man. “Because the natural creation of mRNA involves splicing that removes introns, the synthetic DNA created from mRNA also contains only the exon sequences. This synthetic DNA created in the laboratory from mRNA is known as complementary DNA (cDNA).”
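To make the distinction the Court drew more concrete, the short Python sketch below works through a toy version of the biology described above: a genomic sequence that includes introns, the exon-only (mRNA-like) sequence produced by splicing, and the complementary DNA (cDNA) strand synthesized from it in the laboratory. This is a minimal illustration only; the sequences and function names are invented, not drawn from the ruling or from Myriad’s BRCA patents, and it glosses over the RNA step (uracil in place of thymine) that Justice Thomas describes.

    # Toy illustration with hypothetical sequences: why cDNA differs from genomic DNA.
    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def splice(exons):
        # Natural splicing removes the introns, leaving only the exon sequence.
        return "".join(exons)

    def make_cdna(exon_only):
        # In the lab, cDNA is built as the reverse complement of the exon-only
        # (mRNA-like) sequence, so it never contains the introns.
        return "".join(COMPLEMENT[base] for base in reversed(exon_only))

    exons = ["ATGGCC", "TTGAAA"]            # invented exon segments
    intron = "GTAAGT"                       # invented intron

    genomic = exons[0] + intron + exons[1]  # what occurs in nature (held not patent eligible)
    exon_only = splice(exons)               # exon-only sequence after natural splicing
    cdna = make_cdna(exon_only)             # lab-made molecule the Court held patent eligible

    print(genomic)    # ATGGCCGTAAGTTTGAAA
    print(exon_only)  # ATGGCCTTGAAA
    print(cdna)       # TTTCAAGGCCAT

The point of the toy example is only that the exon-only sequence, and hence the cDNA built from it, does not appear as a contiguous stretch of the genomic sequence, which is the rough sense in which the opinion treats cDNA as “not naturally occurring.”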

Association for Molecular Pathology v. Myriad Genetics, Inc. illustrates that although by the late 19th century science had dismissed nature as a concept worthy of even a mention, society still cares deeply about nature.

Finally, the case is a call to order. Biotech and its compatriots argued in Myriad, as well as in last year’s Mayo v. Prometheus, that patents on biomarkers and genes are essential to support their business model. And yet, in the aftermath of the Myriad ruling, the free market gave Myriad the corporation a boost: its stock went up. If we believe that free markets are nature’s invisible hand that aggregates information, then we can conclude that the Supreme Court has managed to balance competing commitments. Some discoveries are inventions worthy of private ownership, and others are simply part of the laws of nature.

Turning the Page
By Matthew Stepp | June 19, 2013 | https://scienceprogress.org/2013/06/turning-the-page/

The Center for American Progress, The Heritage Foundation, and the Information Technology and Innovation Foundation teamed up to release a joint report assessing options to reform the national lab system to better position it to meet 21st century challenges. Download the full report or the executive summary in PDF format.

Executive Summary

Since their creation in the 1940s, the Department of Energy’s, or DOE’s, National Labs have been a cornerstone of high-impact, federally funded research and development. The labs have helped seed society with new ideas and technologies in leading disciplines such as energy, biotechnology, nuclear physics, and materials science. While the labs’ primary mission must continue to focus on supporting the nation’s research needs not met by the private sector, the time has come to move the DOE labs past their Cold War roots and into the 21st century.

As the United States moves deeper into the 21st century, advancing innovation becomes ever more important if our nation is to thrive. Creating wealth depends on the use of traditional inputs such as natural resources, land, and labor, but most importantly, it requires the discovery and development of new ideas and technology. Today’s science and technological challenges are increasingly complex and require multidisciplinary and often unique solutions that the labs can help provide.

While the pace of innovation and the complexity of national challenges have accelerated, the labs have not kept stride. Although private-sector innovation will remain the cornerstone of economic growth, lab scientists and engineers do important work that can be of significant future use to private enterprise. Examples include commercial global positioning system, or GPS, applications and genetics analysis. The problem is that the labs’ tether to the market is weak, often by design. Though the mission of the labs must not be to subsidize private-sector research, efficient means for transferring scientific discovery into the market should exist. But the labs’ bureaucracy remains largely unchanged and does not reflect the nimble characteristics of today’s innovation-driven economy. Inefficiencies, duplicative regulations, and top-down research micromanagement are having a stifling effect on innovation. Furthermore, institutional biases against transferring market-relevant technology out of the labs and into the private sector reduce incentives for technology transfer.

The federal government must reform the labs from their 20th century atomic-energy roots to create 21st century engines of innovation. This report aims to lay the groundwork for reform by proposing a more flexible lab-management model that strengthens the labs’ ability to address national needs and produce a consistent flow of innovative ideas and technologies. The underlying philosophy of this report is not just to tinker around the edges but to build policy reforms that re-envision the lab system.

The analysis presented by this working group represents a consensus between members of three organizations with diverse ideological perspectives. We may not agree on funding levels, funding priorities, or the specific role of government in technological innovation, and nothing in this report should be construed as support for or opposition to those things. Instead, the purpose of this report is to put forth a set of recommendations that will bring greater efficiency and effectiveness to the DOE lab system, produce more relevant research, and increasingly allow that research to be pulled into the private sector. These recommendations are as relevant to a large, highly funded research agenda as they are to a much more limited one.

Our analysis and policy recommendations fall into three major categories, which are summarized below.

Transforming Lab Management from DOE Micromanagement to Contractor Accountability

Creation of a high-level task force to develop DOE-actionable reforms on lab effectiveness and accountability. The Department of Energy, together with the Office of Science and Technology Policy, should lead a top-to-bottom review of the lab-stewardship system with the goal of identifying and reducing redundant bureaucratic processes, reforming the relationships between the labs and the contractors who manage them, and developing better technology-transfer metrics. This report should be submitted to Congress within one year.

Transition to a performance-based contractor-accountability model. DOE should cede decision-making responsibility to lab managers instead of micromanaging the labs from Washington. This builds upon the existing contractor-assurance system, or CAS, and would free lab managers to operate more nimbly with regard to infrastructure spending, operations, human-capital management, and external partnerships. The labs should report to Congress annually during the transition period to the new accountability model to ensure critical congressional oversight of taxpayer resources.

Expand the Performance Evaluation Management Plan process to include a new accountability model. As an alternative to direct transactional oversight for all decisions, Management and Operation, or M&O, contractor performance should be evaluated annually via an expanded and unified review process for all the labs based on the DOE Office of Science’s Performance Evaluation Management Plan, or PEMP, process.

Unifying Lab Stewardship, Funding, and Management Stovepipes with Innovation Goals

Merge the existing under secretaries of science and energy into a new Office of Science and Technology. The new, single under secretary would have both budgeting and stewardship authority for all of the labs except for those currently managed by the National Nuclear Security Administration, or NNSA.

Combine the research functions of the Office of Science and those of the under secretary for energy under the new Office of Science and Technology. Congress should create new, broader program offices under the Office of Science and Technology to better coordinate activities throughout the entire research spectrum.

Remove top-down overhead accounting rules. Congress should remove prescriptive overhead accounting rules and allow labs greater latitude to use overhead funds to support project and mission success. This would include removing the cap on laboratory-directed research and development funds, also known as LDRD, and providing a more inclusive description of technology transfer.

Moving Technology to Market With Better Incentives and More Flexibility

Expand ACT agreements. The Department of Energy should expand the Agreements for Commercializing Technology, or ACT, template to allow for use with any kind of partner, regardless of whether the partnering entity has received other federal funding.

Allow labs to use flexible pricing for user facilities and special capabilities. Congress should remove legal barriers to allow the labs to charge a market rate for proprietary research and to operate technical facilities and capabilities at a level informed by market demand.

Allow labs autonomy in non-federal funding-partnership agreements. The secretary of energy should grant the labs the authority to implement a pilot program that allows lab managers to agree to collaborations with third parties for research within the United States—through collaborative research and development agreements, Work for Others agreements, or other partnerships—absent DOE preapproval.

Add weight to technology transfer in the expanded PEMP process. DOE should create a new top-level category for the expanded PEMP process called “Technology Impact,” which would evaluate labs on the transfer of technology into the U.S. private sector. The exact weight of this category would be negotiated in the M&O contract, based on the unique programs, capabilities, and strategic vision for each lab and DOE administration.

Execute consistent guidelines on conflicts of interest. The secretary of energy should issue new, consistent guidance to the labs encouraging research and management teams to partner with companies and entrepreneurs in the United States to avoid differing interpretations of laws and policies, including guidance on implementing consistent entrepreneurial leave and exchange programs.

Read the full report or the executive summary (pdf).

Fulfilling the Promise of Concentrating Solar Power
By Science Progress | June 10, 2013 | https://scienceprogress.org/2013/06/fulfilling-the-promise-of-concentrating-solar-power/

This is the executive summary of a report, available in full here. Endnotes and citations are available in the PDF version of this report.

Concentrating solar power—also known as concentrated solar power, concentrated solar thermal, and CSP—is a cost-effective way to produce electricity while reducing our dependence on foreign oil, improving domestic energy-price stability, reducing carbon emissions, cleaning our air, promoting economic growth, and creating jobs. One physicist has even touted it as the “technology that will save humanity.”
Grandiose claims aside, concentrating solar power has recently garnered the attention of the U.S. Department of Energy. The agency has created the SunShot Initiative to lead research into the technology—work that aims to increase efficiency, lower costs, and deliver more reliable performance from concentrating solar power. Additionally, high-profile U.S.-based companies such as IBM have invested in CSP research. Increasingly, private and public stakeholders believe that the technology holds the greatest potential to harness the power of the sun to meet national sustainability goals.

As the White House prepares a climate-change-reform agenda that embodies the bold spirit of this year’s State of the Union address, in which President Barack Obama emphasized executive authority to regulate greenhouse gases, Congress has begun debating the nation’s new energy future. Concentrating solar power should be a key component of this dialogue.

Some are concerned that clean technologies are too immature and unreliable to produce the vast stores of affordable baseload energy needed to power the 21st century American economy. Others are worried that the nation cannot switch to carbon-free electricity without ruining the economy. CSP technology, however, presents a compelling response to each of these concerns.

In this report we detail why the United States should invest in concentrating solar power and delineate the market and regulatory challenges to the innovation and deployment of CSP technology. We also offer the following low-cost policy solutions that can reduce risk, promote investment, and drive innovation in the CSP industry:

Reducing risk and cost of capital for clean solar energy:

  • Establish an independent clean energy deployment bank.
  • Implement CLEAN contracts for concentrating solar power.
  • Reinstate the Department of Energy Loan Guarantee Program.
  • Put a national price on carbon.

Streamlining regulation and tax treatment of CSP:

  • Reform the tax code to put capital-intensive clean technologies on equal footing with fossil fuels.
  • Guarantee transmission grid connection for concentrating solar power and other solar projects.
  • Stabilize and monetize existing tax incentives.
  • Streamline the regulatory approval process by creating an interagency “one-stop shop” for concentrating solar power and other clean energy power-generation facilities.
  • Ensure long-term regulatory transparency.

Sean Pool is a Policy Analyst and Managing Editor of Science Progress. John Dos Passos Coggin is a freelance writer and environmental analyst.

Fetal Anomalies, Undue Burdens, and 20-week Abortion Bans
By Lisa M. Corrigan | May 23, 2013 | https://scienceprogress.org/2013/05/fetal-anomalies-undue-burdens-and-20-week-abortion-bans/

With the introduction of a 20-week abortion ban in the District of Columbia in April by Rep. Trent Franks (R-AZ), anti-choice activists are once again looking to restrict abortion access in a city where blowback from the residents won’t have much political fallout for members of Congress.

The Pain-Capable Unborn Child Protection Act follows the 20-week abortion ban template provided to legislators across the country by the National Right to Life Committee, and Franks hopes to use it in D.C. to capitalize on the success of 20-week bans nationwide. Alabama, Alaska, Arizona, Arkansas, Georgia, Idaho, Indiana, Kansas, Louisiana, North Dakota, and Oklahoma have passed similar 20-week abortion bans since 2011. Many of the 20-week bans have not been tested through court challenge, with the exceptions of Arizona, where the Ninth Circuit Court of Appeals recently tossed out the ban on constitutional grounds, and Georgia, where the American Civil Liberties Union won a temporary injunction against implementation of the ban after a state judge upheld the law by citing the presence of “fetal pain” at 20 weeks.

Additionally, the Idaho ban was recently struck down by Chief District Judge Winmill, who wrote:

The State’s clear disregard of this controlling Supreme Court precedent and its apparent determination to define viability in a manner specifically and repeatedly condemned by the Supreme Court evinces an intent to place an insurmountable obstacle in the path of women seeking non-therapeutic abortions of a nonviable fetus at and after twenty weeks’ gestation.

Judge Winmill passionately argued that the purpose of the 20-week “categorical ban is to protect the fetus — not the mother,” and that it thereby undermines Supreme Court rulings on viability since Roe, particularly Colautti v. Franklin, which held:

Because this point [of viability] may differ with each pregnancy, neither the legislature nor the courts may proclaim one of the elements entering into the ascertainment of viability — be it weeks of gestation or fetal weight or any other single factor — as the determinant of when the State has a compelling interest in the life or health of the fetus.

Winmill’s decision emphasized the illegality of categorical bans legislating the point of viability and lambasted the legislature for gross overreach and for attempting to impose an “undue burden” on women seeking legal medical care.

Mapping the changes in state abortion laws demonstrates how the proliferation of 20-week bans (and those banning abortion even earlier) is changing the nature of legal abortion access across the nation. This is despite the fact that abortions at or after 20 weeks are a tiny fraction of the abortions performed in the United States. The Centers for Disease Control and Prevention document that 98.7 percent of abortions happen before 21 weeks.

Nonetheless, the 20-week bans were the top legislative priority of the National Right to Life Committee for 2012, which saw them as measures designed to slowly roll back the viability standard introduced in the Supreme Court’s 1973 decision in Roe v. Wade. In that decision, the Court opined that abortion was a right up until natural or assisted viability, between 24 and 28 weeks.

Advocates of 20-week abortion bans generally rely on the pseudoscience of fetal pain to justify state laws prohibiting abortion at and after 20 weeks. Their claims stem from erroneous assertions that the fetus feels pain at 20 weeks, despite several comprehensive literature reviews demonstrating no credible evidence of fetal pain until the third trimester. Likewise, the case for “fetal pain” rests on the argument that the rights of the fetus should take precedence over the civil rights of the mother.

Missing in the larger public conversation about 20-week abortion bans is the fact that contemporary medical textbooks show that a large number of fetal anomalies are detected via ultrasound only at 18 to 24 weeks or later, because gynecological norms suggest that “the ideal time to perform the second trimester ultrasound is between 20-22 weeks.” While ultrasounds administered prior to 20 weeks are generally adequate to assess major organ systems, they fail to detect major cardiac, skeletal, and craniofacial anomalies, particularly those that are lethal to the fetus.

Of particular concern are two classes of fetal anomalies that cannot be detected early in a pregnancy. First are the variable-onset fetal anomalies. These anomalies begin at variable gestational ages but are often detected beyond 20 weeks. Second are the late-onset anomalies that develop late in the gestational age of the fetus, typically in the second or third trimester, or that remain undetectable until near the end of a pregnancy. Importantly, the 20-week bans being passed across the states generally do not include exceptions for lethal fetal anomalies, meaning women are forced to carry fetuses with anomalies to term, regardless of viability.

The American College of Obstetricians and Gynecologists, or ACOG, writes that variable-onset and late-onset anomalies are difficult to diagnose before 20 weeks. In a brief supporting the doctors who have challenged Arizona’s law, ACOG notes, “by the time a diagnosis is confirmed by a specialist capable of diagnosing these anomalies, the pregnancy has often progressed beyond 20 weeks.” This is usually due to the length of time it takes to schedule additional tests and to receive results. ACOG adds that the obesity epidemic in states like Arizona further limits the ability of earlier ultrasounds to detect major anomalies, including those that are lethal.

Numerous examples of lethal fetal anomalies detected after 20 weeks include, but are not limited to:

  • anencephaly, which is a lethal fetal anomaly characterized by the absence of the brain and cranium above the base of the skull, leading to death before or shortly after birth
  • renal agenesis, where the kidneys fail to materialize, leading to death before or shortly after birth
  • limb-body wall complex, where the organs develop outside of the body cavity
  • neural tube defects such as encephalocele (the protrusion of brain tissue through an opening in the skull), and severe hydrocephaly (severe accumulation of excessive fluid within the brain)
  • meningomyelocele, which is an opening in the vertebrae through which the meningeal sac may protrude
  • caudal regression syndrome, a structural defect of the lower spine leading to neurological impairment and incontinence
  • lethal skeletal dysplasias, where spinal and limb growth are grossly impaired leading to stillbirths, premature birth, and often death shortly after birth, often from respiratory failure

For many families who have never dealt with the trauma of fetal anomalies, it may seem difficult to understand why abortions later in pregnancy are necessary. But when abortion care is restricted at 20 weeks, women can be forced to carry nonviable fetuses, often to term. In the case of lethal fetal anomalies, this requirement means countless appointments, treatments, tests, and conversations about the imminent death of their fetus, inflicting preventable trauma on families who want to carry a healthy fetus to term.

Above and beyond the fact that major fetal anomalies often go undetected before 20 weeks and that these anomalies compromise the viability of the fetus is the fact that many of these 20-week bans provide no exceptions for the life of the mother. And even with a life-of-the-mother exception, women often face numerous serious health complications during their pregnancies. Cancer, diabetes, lupus, and major heart conditions can all arise during pregnancy, making adequate treatment impossible if such treatment compromises the fetus. In its brief supporting the challenge to Arizona’s 20-week ban, ACOG explains that allowing abortions only in the case of the life of the mother

“will jeopardize women’s health by severely curtailing physicians’ ability to treat patients who face serious health conditions later in pregnancy and will force women to carry pregnancies to term when their fetuses suffer from serious impairments, including those that are incompatible with life. And notwithstanding the legislature’s and Defendants’ claim that the Act is intended to protect women from the alleged health risks posed by abortion, clear medical evidence shows that abortion is many times safer for a woman than carrying a pregnancy to term and giving birth, that abortion past 20 weeks is not more dangerous than carrying to term and giving birth, and that abortion does not harm the psychological well-being of pregnant women.” [Emphasis added]

Even in the case of an exception for the life of the mother, infections and complications that can seriously harm the life or fertility of the mother are insufficient to meet the criteria under such narrow exceptions.

The devastating consequences of these laws forcing the birth of nonviable fetuses are wide-ranging. First, beyond the physical stress of carrying a nonviable fetus is the psychological stress on families dealing with fetal anomalies, particularly if they are fatal. In restricting abortion access, these callous and patently unconstitutional laws impose an undue psychological burden on families whose ability to plan their families is circumscribed by laws based on dubious science and erroneous logic.

Second, recent studies demonstrate that certain populations of low-income women and women of color are likely to face enormous financial and physical barriers to reproductive health care, including gynecological care, meaning that pregnancy detection is delayed. A 2009 study published in the American Journal of Public Health concludes, “The cost of abortion is an important factor in access to care because abortions increase in price with weeks of pregnancy and are therefore more expensive later in the second trimester. When associated expenses, such as transportation, overnight lodging (because later second-trimester abortions require more than one day to perform), and child care are added, the price of abortion in the later second trimester rises dramatically.” Likewise, given the declining availability of abortion access in the United States due to unreasonable legislation, at-risk women are unlikely to have received abortion counseling until later in their pregnancies because the logistics and cost delay examination and treatment. These two factors suggest that 20-week abortion bans have a disproportionately negative effect on poor women and women of color.

Third, these bans contribute to an erosion of scientifically driven public policy on issues of reproductive access. The reliance on junk science instead of data on fetal anomalies leads to laws that ignore double-blind, peer-reviewed science in favor of laws that punish women and doctors unnecessarily. These laws complicate the ability of doctors to provide timely and complete prenatal care for women and they elevate the fetus, regardless of viability, over the rights of women and their families.

Finally, most of the 20-week bans also do not even contain exceptions for rape or incest. And, many of them also impose criminal penalties for mothers and doctors who pursue abortions after 20 weeks. Taken together, these two aspects of 20-week bans highlight how punitive the legislation is for women and their families and underscore how 20-week bans actually undermine competent medical care for pregnant women while simultaneously categorizing women as criminals even as they pursue constitutionally protected medical care.

Sadly, the proliferation of 20-week bans suggests that efforts to roll back abortion access impose substantial physical, psychological, and political consequences on women and their families. Legislation prohibiting access to reproductive health care at any arbitrary point alienates women from the policymaking process by objectifying them and attempting to erode their right to physical autonomy by privileging the fetus over the needs of the mother.

Lisa M. Corrigan, Ph.D. is an Assistant Professor of Communication and Co-Chair of the Gender Studies Program in the Fulbright College of Arts & Sciences at the University of Arkansas.

Software Patents: Separating Rhetoric from Facts
By Brian Kahin | May 15, 2013 | https://scienceprogress.org/2013/05/software-patents-separating-rhetoric-from-facts/

In a digitally enabled economy, software is of great and growing importance. Getting the right legal, regulatory, and trade framework in place is, or should be, a priority of the highest order.

However, questions about whether software should be patentable were raised early on (e.g. the 1966 Report of the President’s Commission on the Patent System) and have never gone away. The debate has intensified with the emergence of patent aggregators and trolls as a growing force in the market, along with high-profile global-scale litigation between major technology companies as seen in the “smartphone wars.”

Paradoxically, software patents are both increasingly entrenched and increasingly controversial. The arguments on software patents range from precedent-based legal reasoning to the heterogeneous nature of the technology, the evolution of market products and services, and the practical considerations of navigating and managing the patent system.

Whatever one’s views of the basic arguments on patentability, software is bringing out some troublesome limitations of the patent system. Can the system be fixed to better accommodate software? Many in the patent world claim the answer is improving patent quality, an unobjectionable goal, except that patent quality is hard to define and measure in a meaningful way.

In a speech at the Center for American Progress in November (Science Progress covered the event here), then Under Secretary of Commerce for Intellectual Property and Director of the US Patent and Trademark Office David Kappos offered a spirited defense of software patents and their quality.

[To] those reporting and commenting on the smartphone patent wars as if to suggest that the system is broken: let’s move beyond flippant rhetoric and instead engage in thoughtful discussion.

The week following the speech, Mr. Kappos announced that he would step down. Indeed, the speech proved to be a swan song of sorts; it was his last public speech.

As he has done before, Kappos belittled the smartphone wars, and remarkably, did not even mention the issue of patent trolls – more formally known as patent assertion entities. Yet 62 percent of troll litigation involves software patents, and trolls have accelerated their attacks, not only on producing companies and retailers, but on mere users of technology, a phenomenon unique to the United States. While the major players have billions of dollars in cash reserves, more than enough to do battle about patent quality in court, app developers typically lack the resources to defend against trolls.

In addition, Mr. Kappos chose to defend patent quality by citing the recent high-profile smartphone lawsuits among Apple, Microsoft, Samsung, and Motorola:

[T]he various dire reports and commentary have omitted a critical component—the facts. So we decided to get the facts, undertaking our own study to look at the U.S. patents involved in some of the highest profile litigation among major firms in the smartphone industry. We found that in the vast majority of these cases, over 80 percent, the courts have construed the software patents at issue as valid.

However, a closer look at the study he cites shows that this is the “vast majority” of a very small number. The study mentioned was reported in an article by USPTO’s chief economist in the January 2013 issue of the Journal of Economic Perspectives, as one of four freely available papers in a patent symposium:

While 133 patents were initially asserted across 13 lawsuits, a substantial share was dismissed from the cases and, as of November 2012, only 73 patents remained in controversy…. We found that 65 could be fairly characterized as involving “software” inventions… Of the 65 software patents still involved in this litigation, thus far only 21 of them—less than one-third—have received court decisions of the type that provide some indication of their validity or likely validity. Of those, only four patents have had decisions indicating they are invalid or likely invalid. The remaining 17 software patents evaluated so far in these cases have been declared by a court to be valid or likely valid. This 80 percent favorability ratio is not consistent with the pronouncements that the smart phone wars are being driven by low-quality software patents.

So the “vast majority” is actually 17 out of 21—a very small sample. Furthermore, contrary to both Kappos and the journal article, the dataset behind the study reveals that only 5 of the 21 involve federal district court holdings. The other 16 in the sample are findings of the International Trade Commission, or ITC, an independent agency in Washington that examines imports for patent infringement. An ITC proceeding is an expedited administrative process with limited discovery, and its determinations on validity are not binding elsewhere.

The remaining five are federal district court determinations, but all were made by a single judge in a single district court in litigation between Apple and Samsung, and all were based on “likelihood” of validity for the purpose of preliminary injunctions sought by Apple. Of the five patents deemed likely valid by the judge, one has been invalidated by the USPTO itself. The preliminary injunction awarded for the other four was reversed by the Court of Appeals for the Federal Circuit, also known simply as the Federal Circuit, because of the relatively minor role the patents played in the devices.

Given the recent onset of the smartphone patent wars, it makes more sense to look at outcomes on software patent validity in general, rather than focus on just a few high-profile cases.  A recent study by Shawn Miller using a substantial dataset of 980 conclusive district court decisions on patent validity (based on novelty or obviousness) found that for non-software, 33 percent of the patents were held wholly or partially invalid. But, for the 270 software patents in the sample, this rises to 49 percent, a substantial difference. Following the USPTO’s own logic, the agency gets software right only half the time.

However, this methodology should be regarded with caution since most patent disputes never reach a judicial determination; in fact, only around 5 percent of patent cases filed are litigated to a full trial. If a defendant comes up with patent-defeating prior art, the patent holder may dismiss the claim or settle (on undisclosed terms, of course) in order to preserve the patent for assertions against others. If prior art is discovered early enough, it may discourage the patentee from filing an infringement action, so there will be no public record of the assertion. The former chief patent counsel for Apple testified before Congress that he received 25 letters claiming infringement for every patent suit actually filed against Apple.

Established well-resourced defendants not only have a better sense of the prior art, but they can afford to dig long and hard for patent-invalidating prior art that the examiner missed – and can hire costly but convincing lawyers to argue against the patent. Questionable patents in large portfolios are especially unlikely to get tested in court. In Gary Reback’s classic account of IBM’s negotiating tactics with Sun Microsystems, it appeared that six out of seven asserted patents were likely invalid (the seventh was not infringed), but when IBM threatened to haul up more patents from its vast arsenal, Sun agreed to pay and the remaining legions of IBM patents went unevaluated.

In a second argument, Mr. Kappos noted that rejections of software patent applications are upheld in both internal and external appeals. This, he argued, suggests patent quality is effectively evaluated:

[R]ejections in software patent applications taken to our appeals board are upheld at a slightly higher rate than for the office as a whole, and those few decisions appealed to the Federal Circuit are affirmed 95 percent of the time. So to those commenting on the smartphone patent wars with categorical statements that blame the “broken” system on bad software patents, I say—get the facts—they don’t support your position.

But this argument is beside the point. These appeals involved denied applications, which have nothing to do with industry concerns about the allowance of too many low-quality patents.

Examination is a one-sided (ex parte) process between the examiner and the applicant. If the examiner denies the patent, the applicant can either appeal the denial or re-file. In fact, the applicant can re-file endlessly, time after time – a unique feature of the U.S. patent system. But if the examiner wrongly allows an application, there is no opportunity for anyone to formally contest it.

Mr. Kappos offered a third and final data point:

Patent quality isn’t broken at all. In fact, our decisions on both allowances and rejections correctly comply with all laws and regulations over 96 percent of the time.

But the 96 percent figure refers to the USPTO’s internal procedures, not the quality of the patents themselves. This is explained in the USPTO’s work on quality metrics (see the overview chart on pages 3-4).

The fact that they did it “by the book” 96 percent of the time does not mean that searches were 96 percent accurate. After all, examiners have only 18 hours on average to examine each patent. The single “external” metric that the USPTO uses is a survey of patent applicants, who seem very pleased with the examination process. This methodology evokes the old days of 1996–2002, when the avowed mission of the patents operation was “to help customers get patents” and applicants were surveyed on how helpful the examiners were. Clearly, such a culture is at odds with the objective of ensuring careful scrutiny and high patent quality by rejecting questionable applications.

Naturally applicants and their attorneys have been grateful for the increased allowances, which contrasted dramatically with the tightening of standards under Undersecretary Kappos’s predecessor. This is documented in a new paper by Cotropia, Quillen, and Webster that draws on PTO data. Their figures show an abrupt shift in the allowance rate beginning in 2009 when Kappos took office:

Figure 1: USPTO patent allowance rate by year, showing the shift beginning in 2009 (data from Cotropia, Quillen, and Webster)

The decreasing allowance rate preceding 2009 suggests that the USPTO was working to raise the bar before quality was redefined in terms of internal processes (“quality does not equal rejection”) rather than issued patents. Yet, again, it is the quality of issued patents that innovators in industry are concerned about. The redefinition of quality as internal procedure made patent applicants happy and increased morale among examiners, since they could ease up on applicants. The lower patent quality that resulted will not be felt for years, given the time lag between examination and litigation in high tech. Trolls wait an average of 8.3 years after a patent is granted before filing suit. More fundamentally, the pain that low-quality patents eventually bring will be felt only by yet-undetermined victims in the private sector, not by the USPTO.

The article in the Journal of Economic Perspectives concludes:

In summary, the US federal district courts, which are the principal reviewers of Patent Office decision-making, are finding in a large share of these cases that prior Patent Office examinations of the software patents involved in the smart phone litigation have been completed properly.

But district courts do not decide based on how the examiner handled the application. They look at the validity of the patent itself. Indeed, they are required by judicial precedent to hold the patent valid unless there is “clear and convincing evidence” to the contrary – a heightened presumption that cannot be justified given the very limited time the examiner can devote to the application.

Some facts are clear. The total number of issued patents has skyrocketed in the last few years:

Figure 2: Total patents issued per year

The 2012 numbers for utility patents represent a surge of 51 percent in the past three years. However, the growth in software patents was substantially greater:

Figure 3: Software patents granted per year

Following the definition used by James Bessen in A Generation of Software Patents, the USPTO granted 75 percent more software patents in 2012 than it did in 2009!

Remarkably, this surge comes after the Supreme Court’s 2007 decision in KSR v. Teleflex, which made it easier for the examiner to show obviousness and so reject marginal patent applications. Research shows that the KSR ruling had a significant impact on the decisions of district courts and the Court of Appeals for the Federal Circuit, resulting in significantly more invalidations based on obviousness. While it is difficult to determine why patent applications are abandoned (and some are never made public), it appears that the KSR ruling had no impact on the allowance rate. As figure 1 shows, the number of disallowed or abandoned applications dropped by almost two-thirds from 2009 to 2012.

Unfortunately, the data suggest that in practice the line between what is and is not patentable is all too amenable to the peculiar culture and politics of the patent system. In his speech, Mr. Kappos claimed:

By getting [quality] right, we grant patents only for great algorithmic ideas.

From a bureaucratic perspective, granting 68,000 software patents per year (figure 3) may look like extraordinary productivity. From a software developer’s perspective, this is the equivalent of facing 68,000 new laws each year that she must obey. Just as ignorance is no excuse for violating a law, independent invention is no defense to patent infringement.

Moreover, “great” is not in the Patent Act. The way the statute is framed, applicants are entitled to patents, unless the examiner can show otherwise. As Judge Giles Rich, the dean of Federal Circuit jurisprudence (who also drafted the statute) put it, “[patents] are not for exceptional inventors but for average inventors and should not be made hard to get.”

Perhaps Judge Rich was wrong and David Kappos was right: Patents should be limited to great ideas. The statutory standard – that the invention cannot be obvious to a “person having ordinary skill in the art” – is low enough to provide a lot of work for attorneys and a lot of opportunities for inadvertent infringement by real innovators. Perhaps the standard is low enough to make quality control impractical and to undermine respect for those that it is supposed to benefit.

It is time, in Undersecretary Kappos’s words, for a “thoughtful discussion.” What would that discussion look like?  A few big issues stand out:

Quality: Is ordinary skill the right benchmark? What should the threshold of inventiveness be to minimize conflicts resulting from independent invention and to ensure that patents are a source of useful information? How can patent quality be measured, monitored, and ensured in terms that make sense to the intended beneficiaries: software innovators?

Abstraction: In principle, the patent system protects ideas but not abstract ideas. What are technical, economic, and legal problems associated with different levels of abstraction?

Costs: The costs of investigating and litigating patents are notoriously high – and are multiplied by large numbers of patents, uncertain scope and validity, dispersed innovation and ownership, and product complexity. How can costs be reduced, especially given the rise of patent assertion entities and nuisance lawsuits?

Uncertainty: How can patent-related uncertainty and risk be measured and managed, especially for small entities? How can the patent system be made more transparent, predictable, and accountable?

These problems are not unique to software, but software and the digital economy have brought some of the most troublesome aspects of patents into focus. If we care about the future of innovation in the U.S. and around the world, the problems must be addressed openly and thoughtfully – in terms of real costs and real benefits, not rhetoric and ideology.

Brian Kahin is Senior Fellow at the Computer & Communications Industry Association and Fellow at the MIT Sloan School’s Center for Digital Business.

Getting Innovative with Regional Innovation Funding
By Sean Pool | May 10, 2013 | https://scienceprogress.org/2013/05/getting-innovative-with-regional-innovation-funding/

The Obama administration announced yesterday the latest chapter in its unfolding story of regional innovation partnership programs. More than any past administration, the Obama White House has been aggressive in aligning existing federal program resources to empower communities to enact bold, creative, and multi-faceted regional innovation strategies. Yesterday’s announcement of the first funding opportunity for the “Investing in Manufacturing Communities” Partnership, or IMCP, shows that the administration’s commitment to America’s manufacturing communities remains strong.

Manufacturing communities across the country are still reeling from the recession. Fifty thousand manufacturers have closed shop since the recession began, after a decade of declining manufacturing employment that cost the economy 6 million jobs in total since 1998. By many accounts manufacturing jobs are beginning to rebound, with more than half a million added since 2010. But the future of American manufacturing is by no means assured, and much must be done to ensure the many interconnected regional economies that drive United States growth remain the most competitive places in the world for manufacturing companies to invent, invest, and do business.

In this time of change, communities are increasingly forced to look beyond traditional “smokestack-chasing” strategies to attract manufacturing investment. Smokestack chasing is the practice of offering special tax breaks to specific companies to lure them to invest in operations within a community. The evidence shows, however, that this approach results in a low return on investment for the taxpayer and can quickly result in a “race to the bottom,” with many communities competing to give the most generous tax break to lure companies to within their borders.

Bottom-up innovation strategies such as the IMCP provide a compelling alternative to this antiquated and increasingly obsolete approach. Rather than spending local resources to attract specific companies, the IMCP and the programs that preceded it share a holistic focus on investing in all of the building blocks needed to support manufacturing innovation communities. Instead of spending money on tax breaks, the IMCP is, according to Deputy Secretary of Commerce Rebecca Blank, “designed to improve the way we use federal resources for economic development initiatives, by challenging communities to coordinate their development efforts around workforce training, infrastructure, and innovation to create the best possible environment for investment.”

Encouraging communities to get creative and leverage their strengths

The IMCP is divided into two phases: planning grants and challenge grants. In the first phase, which is open for business as of yesterday, regional stakeholders from a particular community are invited to submit proposals to develop “implementation strategies” that leverage their regional assets toward an innovation-related goal. According to last week’s release, approximately 25 planning grants of up to $200,000 will be awarded to the best proposals from across the country.

In the second phase, five or six $25 million challenge grants will be awarded to the communities with the best, most achievable and high value-added plans to invest in the building blocks of innovation locally. One of the most significant aspects of this program is that it “encourages communities to think critically about their strengths—such as research, workforce, and industrial assets.” That’s according to Neal Orringer, Chief Strategist for Manufacturing Policy at the Department of Commerce and one of the architects of the program.

In years and decades past, economic development funding was delivered based on the degree of economic “need” or “distress” in a community. Funding award criteria focused on emphasizing how bad the situation was and how badly help was needed. But with the IMCP, that philosophy is being turned on its head. Winners will be chosen based on how clearly they articulate how their projects will leverage local strengths and develop workable strategies to deal with economic distress. In particular, the Department of Commerce fact sheet says that the IMCP will support and reward communities that:

  • Recognize their comparative advantages and develop implementation-ready plans
  • Invest in public goods and institutions through public and private funding
  • Encourage community links that reinforce and expand their commercial appeal to investors

The department also lists a few examples of the kinds of second-phase projects we can expect winning applicants to undertake:

  • Specialized research centers at local universities relevant to target regional industrial sectors
  • Business incubators focused on targeted technology sectors
  • Community college programs to train workers for specialized skillsets needed in targeted regional industrial sectors
  • Infrastructure and public works to ensure regions have key 21st-century infrastructure assets and accessible transport networks
  • Viable export-promotion plans for regional innovation companies
  • Regional coordination platforms to support well-integrated supply chains
  • Platforms for cross-pollination between local government, education, workforce, research, and business leaders

Plugging gaps to support more robust regional economic strategies

Besides a shift from rewarding distress to rewarding creative strategies to deal with distress, another significant aspect of the program is the level of coordination with other agencies. The Department of Commerce seems to be using the IMCP process as a platform to re-evaluate the level of coordination across economic development programs present in many agencies across the whole federal government.

Today, dozens of agencies are dotted with literally hundreds of separate programs tasked with doling out grants, contracts, and technical assistance to a constellation of “separate” activities among innovation stakeholders in regions. For example, community college workforce development grants, rural broadband projects, university-industry research park feasibility studies, export promotion for high-tech goods, and small-business assistance services are each managed by separate departments or agencies, with little coordination. A White House infographic about this issue shows the multitude of programs and departments with overlapping mandates.

In the program fact sheet, the department announced it would “lead an interagency effort that aligns economic development programs across the government. This effort will bring together federal programs covering workforce training, technical assistance, specialized research and commercialization centers, infrastructure, and energy efficiency, among others.”

Although this interagency process is only mentioned in passing, it could well be the most significant part of yesterday’s program announcement. While the millions of dollars of EDA funding for bottom-up regional innovation strategies are not insignificant, realigning the billions of dollars of funding flowing into communities from across the federal government to be more strategic around innovation would make a much larger impact.

A detailed case study recently found that the lack of coordination among these many separately “siloed” federal funding mechanisms for regional economic competitiveness limits the impact the programs have. But despite the differing systems within which these various trade, technology, training, and economic development programs operate, they are all connected by the outcome they support: bottom-up regional innovation and economic development strategies. And past research has shown that better coordination of these funds, even without increasing the actual spending on the programs, can multiply the impact.

With $113 million requested in the president’s fiscal year 2014 budget, the IMCP will serve as the “glue” connecting a package of mutually reinforcing grants, technical assistance, and other services from several agencies. Together, these coordinated packages of funding empower diverse regional stakeholders to come together to build and implement more comprehensive and strategic regional innovation plans. In so doing, the program builds upon the successes of past efforts championed by the Economic Development Administration, including the i6 grants program (announced in 2010), the Energy Regional Innovation Clusters program (2010), the Jobs and Innovation Accelerator (2011), and the still nascent National Network for Manufacturing Innovation (2012).

In each case the Obama administration has used the uniquely flexible authority given to the EDA by Congress to help “plug gaps” between existing innovation programs. Instead of applying to handfuls or dozens of separate programs with different requirements in different agencies, local, civic, and regional leaders can submit to the IMCP a single plan detailing their assets and needs. The EDA can then work with the dozens of other federal funding sources to create a package of assistance from existing federal programs to meet those needs.

But the IMCP interagency effort may take this philosophy further than past programs did. Leveraging the EDA’s authority to break down funding silos and work proactively with other agencies will represent a big step toward the goal of a broader “common application” approach to federal funding programs. Aligning federal trade, technology, training, and economic development funding programs was first proposed by the Center for American Progress in 2012 in a report titled “Rewiring the Federal Government for Competitiveness.” In the report we put forward a proposal to unify management of the more than 300 programs managed by more than a dozen agencies that support bottom-up, innovation-based economic development.

Figure 1 from the paper demonstrates the chaos of the current system of delivering funding through dozens or hundreds of different silos and stovepipes to invest in bottom-up innovation strategies. Figure 2 summarizes how the proposed “common application approach” would better align these related streams of funding into coordinated packages that promote the formation of growth-enhancing innovation networks, rather than a cluster of concurrent but separate activities.

A “common application” approach has two major benefits. First, it makes it easier for regional innovation stakeholders—including university and community college leaders, regional business interests, labor and workforce-training organizations, nonprofit economic development boards, small businesses, and others—to find and access existing support services. Coordinating multiple related programs through a single point of entry makes them easier to find and easier to apply to.

Second, and more importantly, this approach makes not only the application process but also the programs themselves stronger by allowing community leaders to submit multifaceted plans for workforce, trade, training, and infrastructure. Under the status quo such support would have to be accessed separately through separate application processes with separate timelines. This limits the level of sophistication that bottom-up innovation plans can have, since the failure or delay of any one of many separately moving pieces could sink the whole strategy.

For example, take a hypothetical community working to build a public-private partnership that brings together a regional research university with local technology companies and the local community college to form a research park, similar to the one examined in Science Progress’s in-depth case study. Securing funding to enhance the community college curriculum to better prepare the local workforce for the needs of the research consortium won’t be useful if the grant to build the research park itself is not secured. Under the common application approach, and presumably under the IMCP, the EDA works to mitigate such risks by looking at whole innovation plans holistically, rather than piece by piece.

In 2012, Senator Kay Hagan (D-NC) introduced a bill to create an interagency task force to move toward this approach. The Small Business Common Application Act of 2012 would have built on the Obama administration’s efforts in this area to create a single entry platform for making competitive grant programs more accessible and coordinated, but it died in committee. The IMCP’s ongoing interagency review process has the potential to lead to a similar outcome using executive authority alone.

Conclusion

We know that innovation is one of the primary drivers of wealth creation, job creation, and economic growth. Rather than bribing companies to locate nearby, communities are increasingly realizing that smart investments in the building blocks of innovation—human, physical, and institutional capital—are a smarter bet to promote long-term endogenous growth. By working to connect other existing federal grant programs through a single point of entry, the IMCP is encouraging communities to get creative in developing strategies that leverage their strengths and compensate for their weaknesses.

The launch of this program has shown once again that the Department of Commerce’s Economic Development Administration continues to be a potent yet underrated tool for cultivating creative bottom-up growth across the country. As the story of innovative competitive grant programs like the IMCP continues to unfold, it will be important to follow the progress of the communities and regional economies benefitting from them. With continued monitoring and strong data collection, new lessons will doubtlessly emerge to inform the future direction of the management of the hundreds of programs supporting technology, trade, training, and economic growth across America’s communities.

Sean Pool is a Policy Analyst and Managing Editor of Science Progress. This post was updated on May 11, 2013 to better describe how IMCP focuses on economic distress.

The U.S. Outsources Cybersecurity & Defense To Contractors That Keep Getting Hacked https://scienceprogress.org/2013/05/the-u-s-outsources-cybersecurity-defense-to-contractors-that-keep-getting-hacked/ https://scienceprogress.org/2013/05/the-u-s-outsources-cybersecurity-defense-to-contractors-that-keep-getting-hacked/#comments Thu, 09 May 2013 13:31:02 +0000 Sean Pool http://scienceprogress.org/?p=28308 Last week, Bloomberg reported that QinetiQ, a high-tech defense contractor specializing in secret satellites, drones, and software used by U.S. special forces, was the victim of a sustained cybersecurity breach for several years starting in 2007.

According to Bloomberg, documents released in the Anonymous Stratfor hack reveal that QinetiQ was compromised as part of a cyber-espionage attack originating in China — and the report notes that the breach was part of a much broader campaign targeting U.S. contractors:

“QinetiQ’s espionage expertise didn’t keep Chinese cyber-spies from outwitting the company. In a three-year operation, hackers linked to China’s military infiltrated QinetiQ’s computers and compromised most if not all of the company’s research. At one point, they logged into the company’s network by taking advantage of a security flaw identified months earlier and never fixed [...]

QinetiQ was only one target in a broader cyberpillage. Beginning at least as early as 2007, Chinese computer spies raided the databanks of almost every major U.S. defense contractor and made off with some of the country’s most closely guarded technological secrets, according to two former Pentagon officials who asked not to be named because damage assessments of the incidents remain classified.”

U.S. intelligence reports ranked cyber threats as the top danger facing the country for the first time in April, but tensions have been running high about the government’s ability to protect digital assets and intelligence for years. A 2011 Department of Justice report noted that only 64 percent of FBI agents assigned to national security-related cyber investigations had the appropriate skills and expertise to handle those types of cases.

Government cybersecurity contracting exploded during the Bush Administration, with many roles traditionally filled by government employees or resources outsourced to external companies over which the government has less oversight. The Obama Administration has made efforts to curb that trend, but that expansion, combined with a lack of cybersecurity expertise in the military and federal agencies, resulted in many cybersecurity defense operations being outsourced or completed under the heavy supervision of outside contractors. This has sometimes led to less-than-ideal outcomes, despite a 2011 General Services Administration, or GSA, rule requiring all contractors and subcontractors that provide federal agencies with IT services, systems, or supplies to submit a cybersecurity plan that matches government regulations.

The history of breaches at defense-related contractors is particularly concerning: In 2011, RSA, a cybersecurity company with contracts with Lockheed Martin and the Department of Defense, was breached — possibly contributing to a later cyberattack on Lockheed Martin. That same year, FBI cybersecurity contractor ManTech was hacked by Anonymous. Earlier this year, computers at Bit9, a contractor that provides network security services to the U.S. government and many Fortune 100 firms, were actually used to spread malware.

In 2012, presumably in response to evidence of breaches, the Pentagon expanded and made permanent a trial program that teamed the government with internet service providers to scan network traffic to and from defense contractors for data theft from adversaries, somewhat similar to the cybersecurity executive order President Obama signed earlier this year encouraging voluntary threat intelligence sharing for critical infrastructure.

This January the Pentagon announced it would increase its ability to conduct defensive and offensive cyber operations five-fold, several months after the President signed a secret directive reclassifying some cybersecurity actions previously classified as offensive as defensive.

How the Political Crusade Against Fisker Automotive Stifles Innovation https://scienceprogress.org/2013/05/how-the-political-crusade-against-fisker-automotive-stifles-innovation/ https://scienceprogress.org/2013/05/how-the-political-crusade-against-fisker-automotive-stifles-innovation/#comments Fri, 03 May 2013 16:16:27 +0000 Sean Pool http://scienceprogress.org/?p=28291 Martin LaMonica via our partners at OnEarth.

Before raising more than $1 billion in private capital to start his own “green car” company, Henrik Fisker was a designer for the likes of Aston Martin and BMW, and it shows in the cutting-edge, plug-in electric vehicle he released two years ago. This is the kind of car that will make you turn around and notice — slung low to the ground, boasting bold curves and an overall sleek look.

Based in Anaheim, California, Fisker Automotive has sold about 2,000 of the innovative designer’s new Karmas, a hybrid luxury sedan that costs more than $100,000. But the company is now struggling and has become a target for politicians seeking to challenge the government’s role in encouraging innovation and energy efficiency.

Last week, Fisker and a Department of Energy official were hauled in front of a congressional panel and grilled on why the electric carmaker is in jeopardy of losing almost $200 million in federal loans. The fact is, you need a lot more than cool-looking cars to be a successful automotive startup. And Fisker has not delivered when it comes to business execution. The Karma came out later than originally planned and then got mixed reviews. Consumer Reports complained that the car was cramped inside, felt bulky when driving, and had a small back seat and trunk. The car suffered numerous technical glitches, as well. In an infamous incident, it had to be towed away during Consumer Reports’ testing because the batteries failed.

The car isn’t exactly a home run when it comes to its green claims, either. It runs on batteries for short trips and then uses a gasoline engine to maintain charge when the battery runs low — the same type of powertrain as the Chevy Volt. The EPA rated the Fisker Karma at 55 mpg equivalent on all-electric mode, but only 21 mpg in charge-sustaining mode for highway driving, which is well short of other plug-ins. (The plug-in Prius hybrid, by comparison, is rated at over 100 mpg for all-electric driving and 49 mpg in hybrid mode.)

But it’s not Fisker’s business or engineering failures that have gotten critics in Congress and conservative circles all revved up. They’re using the company’s problems as an opportunity to question the government’s role in commercializing clean tech innovation. By definition, the Department of Energy’s loan guarantee program, which provided money to Fisker and failed solar company Solyndra, carries risks — after all, some new technology companies are destined to fail, just like any companies in an emerging market would.

No doubt the loan program could benefit from a review to make it more effective, but seizing on company failures to score political points is just a distraction from a more substantive discussion. And the negative publicity fanned by political hearings has already made the Energy Department more risk-averse and clean-tech companies “wary of being associated with government support,” according to a recent Government Accountability Office report.

Though Fisker started with private backing, when it came time to start producing the Karma and a lower-priced plug-in sedan called the Atlantic (which is still in the planning stages), the company sought and was approved for a $529 million federal loan guarantee meant for automakers investing in new technology and advanced manufacturing. But in 2011, when Fisker didn’t meet certain milestones to continue receiving money, the DOE cut it off. Fisker failed to make its most recent repayment and is considering going into bankruptcy.

Still, even if the company does go belly-up and never makes the Atlantic, it’s hard to see the rest of the auto industry missing much of a beat. Automakers are well aware of the challenges regarding the public’s adoption of electric vehicles, namely high battery costs and the limited range of all-electrics. Fisker’s demise can be blamed on plenty of factors specific to its situation. Without the government loan, it would be just another plug-in auto startup that hit a wall trying to get market traction. At least five plug-in carmakers have failed to get beyond early prototypes in the past three years, a sign of how difficult it is to crack into the auto market. You can’t say that Fisker’s core powertrain technology was fundamentally flawed, because other automakers already use or plan to use the same approach.

In terms of policy, it’s become a conservative talking point that the government should stick to funding basic research and not provide loans to help grow young companies. But there’s still a case to be made for some role of government in advancing technologies with economic and societal benefits. Banks are wary of loaning money for first-of-a-kind projects to young companies because there’s substantial risk. Without some policy mechanism to scale up alternative energy and efficiency technologies, many business ideas that would help make our energy system cleaner will likely never make it beyond the labs or drawing boards. It’s worth noting that the DOE loan guarantee programs were crafted and authorized under President George W. Bush.

Ford, Nissan, and Tesla Motors also received loans from this program and are on track to pay them back. But it now looks like Fisker will not. What’s an acceptable rate of failure? Is there a way to structure the program to guard against politicians’ desires to favor pet projects? Can the program be reformed to minimize taxpayer risk? These are valid questions Congress should be discussing. In fact, last week Republican senator Lisa Murkowski voiced optimism that the loan guarantee program could be preserved with some simple changes. But in today’s politicized environment around energy, it seems constructive conversations of this sort are rare.

Financially, the people with the most money to lose in the Fisker saga are private investors, not public taxpayers, whatever the partisan rhetoric. And if there’s a broader chilling effect from Fisker’s political woes, it may well be on innovators and investors. In the past two years, the Department of Energy hasn’t provided any more auto or renewable energy-related loans despite having billions of dollars available. No doubt, that’s partly because some loan applications were deemed too risky. But the withering attacks from politicos in the wake of failed companies could also help explain why loans have effectively stopped. With the loan guarantee program at a standstill, there’s one less source of capital to help bring new energy technologies to market. That’s a loss for everyone.

Martin LaMonica is an independent technology and science journalist. An unabashed energy geek and long-time tech industry reporter, he writes for MIT Technology Review and other publications.

Navigating the Junk Science of Fetal Pain https://scienceprogress.org/2013/04/navigating-the-junk-science-of-fetal-pain/ https://scienceprogress.org/2013/04/navigating-the-junk-science-of-fetal-pain/#comments Mon, 29 Apr 2013 13:00:04 +0000 Sean Pool http://scienceprogress.org/?p=28248

March 13, 2013, was a significant day for abortion activists as a federal judge struck down an Idaho law banning abortions past 20 weeks. The ban had been put in place based on unfounded assertions that the fetus could feel pain beyond this point in a pregnancy.

In his 42-page decision, U.S. District Judge B. Lynn Winmill sided with Jennie L. McCormack and her attorney, Richard Hearn, declaring that the Idaho fetal pain law placed an undue burden on a woman’s right to access an abortion. The judge also chastised the Republican-dominated legislature, arguing that fetal protection efforts do not outweigh a woman’s right to choose abortion.

Winmill’s decision in the case was significant because Idaho had joined Nebraska, which approved a similar measure in 2010, as well as seven other states that adopted fetal pain laws in 2011. That legislative session saw a proliferation of antiabortion bills in state legislatures across the country in an attempt to undermine the protections for abortion prior to fetal viability (usually between 24 and 26 weeks of pregnancy) in Roe v. Wade.

Hearn has said he is willing to take the case against fetal pain restrictions all the way to the Supreme Court. Such a trajectory for this challenge to fetal pain laws has the potential to remove these relatively new barriers to abortion access across the country: should the notion of “fetal pain” ultimately be debunked and declared a roadblock to abortion access by the Supreme Court, it would invalidate other similar laws in states around the country.

Many of these fetal pain bills are based on the “Pain-Capable Unborn Child Protection Act” template produced by the National Right to Life Committee. This template asserts that fetuses feel pain as early as 13 weeks and suggests that the possible presence of fetal pain should abrogate a woman’s constitutional right to abortion.

Despite the passage of several “pain capable” bills, the science behind fetal pain remains a footnote in the abortion debates. This is because antichoice lawmakers marshal so-called junk science to support claims of fetal pain to justify their bills that restrict access to abortion. The term “junk science” refers to spurious pseudoscience funded or written by special-interest lobbies who intend to distort public perceptions, particularly those involving public health risks.

Probably the most prominent example of junk science involves what Naomi Oreskes and Erik M. Conway call the “Tobacco Strategy,” which refers to the way that tobacco companies marshaled their own research “experts” (that were both bullied and bought) for judicial trials in the 1970s and 1980s that involved plaintiffs alleging serious health complications as a result of smoking cigarettes. The tobacco companies found that as long as they could present reasonably credible scientists to testify in suits alleging long-term physical harm from cigarettes, they would win lawsuits and avoid paying damages. Tobacco experts routinely testified that cancers, emphysema, heart attacks, and strokes were not prompted by cigarette smoking based on their own studies or cherry-picked data that argued against a causal link between smoking and negative health effects.

In both the case of cigarette smoking and fetal pain, the use of junk science demonstrates how credible, peer-reviewed scholarship is too often disregarded for pseudoscience that touts conservative values at the expense of empirical data. In these examples and in many more, junk science serves to manipulate public perceptions of the scientific process.

The junk science of fetal pain, for example, hinges almost exclusively on factsheets and testimony that cherry-pick quotations about the development of neural pain receptors in the fetus, rather than on comprehensive scientific literature. The junk science used to support the case for fetal pain relies on tying together assertions about how the fetus has reflexive responses to noxious stimuli as it develops, though clearly reflexive responses aren’t synonymous with the experience of pain.

In response to scientific discourse pertaining to the utility of fetal anesthesia during abortion procedures, many double-blind peer-reviewed studies have tackled both the assertion of fetal pain and the suggestion of anesthetics in neutralizing fetal pain. Consequently, comprehensive studies of fetal pain have concluded that fetuses do not feel pain at the 20-week mark. And the most exhaustive review of studies finds that claims of fetal pain are unsupported by peer-reviewed science. These studies suggest that while the neural pathways to experience pain begin forming around 23 weeks gestation, the pathways are not functional and cannot transmit the noxious stimuli to the brain before 29 or 30 weeks.

These studies also impugn other assertions by the junk science pushed by “fetal pain” lobbyists. In a review published in the Journal of the American Medical Association, or JAMA, for example, the authors suggest:

Pain perception requires conscious recognition or awareness of a noxious stimulus. Neither withdrawal reflexes nor hormonal stress responses to invasive procedures prove the existence of fetal pain, because they can be elicited by nonpainful stimuli and occur without conscious cortical processing. Fetal awareness of noxious stimuli requires functional thalamocortical connections. Thalamocortical fibers begin appearing between 23 to 30 weeks’ gestational age, while electroencephalography suggests the capacity for functional pain perception in preterm neonates probably does not exist before 29 or 30 weeks.

The JAMA authors conclude:

Because pain perception probably does not function before the third trimester, discussions of fetal pain for abortions performed before the end of the second trimester should be noncompulsory. Fetal anesthesia or analgesia should not be recommended or routinely offered for abortion because current experimental techniques provide unknown fetal benefit and may increase risks for the woman.

In this, the most extensive scientific literature review on fetal pain, JAMA concludes that fetal pain is not present until the third trimester. Scientists concur that the fetus is suspended in a sleep-like coma until the third trimester. In the most well-regarded, peer-reviewed, double-blind periodicals in the United States and in the United Kingdom, the consensus is that fetal pain is a political construction rather than a scientific fact.

Given these conclusions in the review of fetal pain literature, the use of junk science in political debates about abortion access has three major implications.

First, this strategy undermines scientific integrity by hyping partial scientific information and drawing specious conclusions from partially documented data.

Second, it makes civil discourse impossible, since legislators using junk science reject the established criteria of scholarly peer review as a test of evidence. The scientific method establishes that hypotheses should be tested empirically and then replicated for a conclusion to be accurate. The idea of fetal pain, however, emerged politically to roll back abortion rights. Junk science has been used to justify the policy rather than allowing the scientific data on fetal development to drive legislation. Fetal pain advocates have put the cart before the horse, so to speak.

Finally, this junk science provides a means of cutting women out of the debates over abortion entirely by inventing new criteria for data under which to evaluate the “rights” of the fetus. By highlighting the potential of physical pain for the fetus, the term “fetal pain,” as a political device, serves to elevate concerns about the fetus over the civil rights of women. This move further alienates women from the political process by which their abortion rights are circumscribed.

With the Idaho decision, a spotlight is shining on the language and data used to frame the debate over abortion rights. A clearer understanding of the political motives in the creation of the term “fetal pain” helps us understand how junk science is used to advance nonscientific political outcomes in public policy.

Lisa M. Corrigan, Ph.D. is an assistant professor of communication and co-chair of the Gender Studies Program in the Fulbright College of Arts & Sciences at the University of Arkansas.

On the Ethics of Publishing Genomes https://scienceprogress.org/2013/04/the-ethics-of-publishing-genomes/ https://scienceprogress.org/2013/04/the-ethics-of-publishing-genomes/#comments Fri, 26 Apr 2013 12:22:33 +0000 Sean Pool http://scienceprogress.org/?p=28155 Andrea Peterson via Think Progress.

In 1951, a black tobacco farmer and mother of five named Henrietta Lacks died of cervical cancer at the age of 31. But before she died, the doctors treating her at Johns Hopkins took samples of her cervical tumor — and without her knowledge or consent those samples went on to become the most prolific human cells in medical research.

The cells, commonly referred to as HeLa cells, have helped research treatments for everything from HIV to cancer. But they have also become a cautionary tale about the importance of ethical standards in research: Lacks’ own family didn’t even know about her research legacy for over 20 years. When European researchers published the full genome transcript of HeLa cells without the knowledge or consent of her family earlier this month, they started a new chapter in that tale about the complex relationship between researchers and the privacy of genetic information.

It’s a complicated chapter, as Dr. William Pewen, Assistant Professor of Public Health and Family Medicine at Marshall University, and a former top health care adviser to the now retired Sen. Olympia Snowe (R-ME), noted to ThinkProgress:

“The release of Henrietta Lacks’ genome illustrates the fact that genetic information isn’t an individual matter — it impacts family members as well. This underscores the need to ensure the rights of individuals and preserve the confidentiality of research data. Once patient privacy is lost, problems are simply compounded. Just how can today’s family members give consent for the next generation?”

It’s easy to argue that HeLa cells have saved lives, that their net result was good — but to say so without acknowledging that they are also the result of a highly suspect act that demonstrated disregard for the privacy and consent of a patient ignores the ethical standards that should define scientific research. Similarly, with genetic information, many concerns are raised: It’s very difficult, if not impossible, to guarantee the long-term confidentiality of genetic information and it can reveal much more about an individual and their relatives than basic medical records.

And while the public may generally agree that there are great benefits to be gained through genetic research, it’s important that the concept of medical progress is not put ahead of the informed consent of patients — especially, as Pewen also notes because “[i]n an age of technology advances and ‘Big Data’ analytics, it’s clear that medical data can be used in countless detrimental ways. That will simply be fostered if we allow ethics and human rights to be undermined by expediency.”

The researchers who posted the HeLa genome sequence have already retracted their paper due to complaints from the Lacks family, but as others have noted, Lacks’ genome sequence can still be gleaned from other databases.

Genetic information is health information protected by Health Insurance Portability and Accountability Act (HIPAA) disclosure rules, and there have been legislative attempts to resolve some of the issues raised by advances in genetic testing, such as the Genetic Information Nondiscrimination Act (GINA) of 2008, which also prohibited employers and health insurance companies from discriminating based on genes — but those regulations won’t help Henrietta Lacks’ descendants: The final rule modifying HIPAA to implement GINA and Health Information Technology for Economic and Clinical Health Act (HITECH) protections, which took effect yesterday, does “not protect the individually identifiable health information of persons who have been deceased for more than 50 years,” conveniently excluding HeLa cell information.

How Smartphones Are Revolutionizing Home Care For Alzheimer’s And Autism Patients https://scienceprogress.org/2013/04/how-smartphones-are-revolutionizing-home-care-for-alzheimer%e2%80%99s-and-autism-patients/ https://scienceprogress.org/2013/04/how-smartphones-are-revolutionizing-home-care-for-alzheimer%e2%80%99s-and-autism-patients/#comments Wed, 24 Apr 2013 15:53:04 +0000 Sy Mukherjee http://scienceprogress.org/?p=28221 Sy Mukherjee via ThinkProgress.

As technological innovation empowers consumers to take greater control over their lives, the health industry has taken particular advantage of emerging internet and mobile devices. The burgeoning mHealth industry — which involves using mobile devices to improve health care delivery and outcomes — has exploded in the last five years, allowing everyday Americans to access better information about medical conditions and provide better ongoing care to themselves and their families. Now, creative new apps are helping home care workers better assist Americans with Alzheimer’s and autism.

mHealth apps are particularly useful for monitoring patients with ongoing and chronic medical needs, since such programs provide a multitude of services to keep track of medication schedules, exchange notes with doctors and professional home care workers, and even track the patients themselves. That comes in handy for caretakers such as Laura Jones, who had to keep working full time to provide her 50-year-old Alzheimer’s-afflicted husband with health insurance:

Using Comfort Zone, which is offered by the Alzheimer’s Association starting at $43 a month, [Jones] was able to go online and track exactly where [her husband] was and where he had been.

Her husband carried a GPS device, which sent a signal every five minutes. If Jones checked online every hour, she would see 12 points on a map revealing her husband’s travels. She would also get an alert if he left a designated area.

Eventually, the tracking revealed that Jones’ husband was getting lost.

“He would make a big funny loop off the usual route and we knew it was time to start locking down on him,” she said.

Conveniences like that may be difficult to pin a numerical value on — but they make an enormous pragmatic difference in the lives of real Americans. By being able to track her husband, Jones doesn’t have to entrust such care to a salaried full-time worker, and has the freedom to be more intimately involved in her husband’s care.
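
To make the geofencing concrete, here is a minimal sketch of the kind of check a tracking service might run each time a GPS ping arrives. The coordinates, radius, and function names below are hypothetical illustrations for this article, not details of the Comfort Zone product.

```python
# Illustrative only: a minimal geofence check of the kind a tracking service
# might run on each GPS ping. Coordinates, radius, and names are hypothetical.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def check_ping(ping, home, radius_km):
    """Return an alert string if a GPS ping falls outside the designated area."""
    d = distance_km(ping["lat"], ping["lon"], home["lat"], home["lon"])
    if d > radius_km:
        return f"ALERT: {d:.1f} km from home at {ping['time']}"
    return None

# A ping every five minutes yields 12 points per hour, as described above.
home = {"lat": 33.835, "lon": -117.915}                      # hypothetical home location
ping = {"lat": 33.900, "lon": -117.850, "time": "14:05"}     # hypothetical ping
print(check_ping(ping, home, radius_km=3.0))
```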

When it comes to conditions that tend to onset earlier in life, such as autism, mHealth apps can offer an interactive medium that makes it easier to engage with autistic children:

Lisa Goring, vice president of Autism Speaks, said tablets have been a boon to families with autistic children. The organization has given iPads to 850 low-income families. And the Autism Speaks website lists hundreds of programs — from Angry Birds to Autism Language Learning — that families have found useful.

Samantha Boyd of McConnellstown, Pa., said her 8-year-old autistic son gets very excited when the iPad is brought out.

“There’s no way he’d be able to use a keyboard and mouse,” she said. “But with the iPad, we use the read-aloud books, the songs, the flash card apps.”

Other popular applications include the inexpensive pillbox app “Balance,” which lets users schedule alerts for their complex treatment regimens, and CareGiver apps that let families find and monitor professional caregivers who serve their loved ones. Not only does this kind of technology empower consumers — it also cuts down on health care costs. Pillbox apps are particularly promising on this front, since noncompliance with treatment regimens is a major contributor to bloated U.S. health care spending. And overall, organizations like Allie Health World estimate that the use of mHealth could double access to health care services while lowering administrative costs through better data collection — even potentially reducing seniors’ health care costs by 25 percent.

The ‘Broader Impacts’ of Sequestration on Science https://scienceprogress.org/2013/04/the-%e2%80%98broader-impacts%e2%80%99-of-sequestration-on-science/ https://scienceprogress.org/2013/04/the-%e2%80%98broader-impacts%e2%80%99-of-sequestration-on-science/#comments Wed, 24 Apr 2013 15:52:29 +0000 Sean Pool http://scienceprogress.org/?p=28233 Now that we’ve been driven off the “fiscal cliff,” perhaps we should look around and assess the results. It turns out that sequestration is raising interesting questions about the relation between science and ethics—in particular, on whether the pursuit of scientific knowledge can ever be usefully separated from the question of larger societal concerns.
Take one impact of sequestration, for example: the suspension of all education and public outreach, or EPO, activities conducted by the National Aeronautics and Space Administration:

Effective immediately, all education and public outreach activities should be suspended, pending further review. In terms of scope, this includes all public engagement and outreach events, programs, activities, and products developed and implemented by Headquarters, Mission Directorates, and Centers across the Agency, including all education and public outreach efforts conducted by programs and projects.

The scope comprises activities intended to communicate, connect with, and engage a wide and diverse set of audiences to raise awareness and involvement in NASA, its goals, missions and programs, and to develop an appreciation for, exposure to, and involvement in STEM [science, technology, engineering, and math]. [The full notice can be read here.]

The idea seems to be: Protect the real work of NASA by cutting the useful, albeit peripheral tasks such as “permanent and traveling exhibits … speeches, presentations, and appearances, with the exception of technical presentations by researchers at scientific and technical symposia.” EPO activities can be reestablished after funding is restored.

But how well does this distinction between science and outreach hold up? Consider two of NASA’s most high-profile projects. The Mars rover “Curiosity” has been exploring the red planet since last August. NASA’s page on Curiosity lists its top five scientific and engineering achievements to date. In particular, two of them are finding an ancient streambed and drilling into a Martian rock.

What are these scientific and engineering achievements? According to NASA, the ancient streambed “suggests that at least some parts of Mars may have been habitable billions of years ago.” As for the drilling, “the gray powder it drilled out” suggests that “the rover’s landing site could have supported microbial life billions of years ago.”

Neat news! But is the discovery of habitable conditions—or the existence of life itself—solely a scientific fact? Doesn’t it simultaneously count as a social, political, cultural, and religious fact? Is it really possible to place such discoveries into only the “scientific and technical” category?

Again, consider the case of what is perhaps the most prominent NASA project of the past 20 years: the Hubble Space Telescope. Hubble has offered an endless stream of pictures and data since its delivery into orbit by the space shuttle in 1990. NASA argues that in the early stages of the Hubble, one of its key scientific objectives was identified as “a study of the nearby intergalactic medium using quasar absorption lines to determine the properties of the intergalactic medium and the gaseous content of galaxies and groups of galaxies.”

Pretty impressive—and also pretty impressively technical. Pursuing this question, results from the Hubble Space Telescope have taught us that we do not live in an “island universe,” but rather galaxies are tied together in a grand interstellar web, with streams of gases crossing the intergalactic medium. But again, are these solely scientific facts? Or is the science inextricably bound with philosophical and religious motives tied to our desire to understand where we have come from and what our destiny is?

These issues point toward the difficulty of separating the intellectual merit from the broader impacts of science. The underlying motives behind NASA are political—such as national pride—as well as cultural and philosophical in nature. In terms of the rover Curiosity, think of the response if life was discovered on Mars.

The science and engineering involved are the means to those ends: Without the larger vision, there would be no need for NASA science and engineering. The results are interdisciplinary as well; after all, scientists seek significant findings—ones that make a difference to people. So why would NASA cut off the EPO activities when those were precisely the point of the missions and also of NASA itself?

Public science is built upon the claim that its work is separate from politics. And that’s true, if we mean that science must be based on rational beliefs grounded in the best available evidence.

But in another sense, science is political: It is always motivated by the needs and values of the community. Even the vaunted motivation of “curiosity-driven” research is really just another way of expressing what Aristotle called the pure desire to know.

Now, NASA could reply that if there are going to be future EPO activities, then there will need to be scientific findings to report, and that this justifies prioritizing the scientific research. But it is also true that without public support, NASA does not get to conduct any science at all. And EPO activities are the means for bringing these discoveries to the public’s attention.

This same conceptual conundrum—whether politics is writ large inside or outside of science—has vexed the National Science Foundation, or NSF, for the past 15 years. In 1997 NSF changed the criteria for reviewing proposals to include “broader impacts” as well as “intellectual merit.” Since then it has struggled to find a proper balance between the two concepts. But as my colleague Britt Holbrook and I argued some years ago, the very distinction between intellectual merit and broader impacts seems questionable. Rather than seeing these as different types of interest, it is better to see them as one interest that differs with different types of audience. Disciplinary—such as scientists—and nondisciplinary—such as the public—audiences have different interests, but in both cases their interests mix intellectual merit and broader impact.

NSF continues to refine its thinking on these matters: It issued new requirements for grant proposals just this past January. Evaluation of broader-impacts activities, using especially well-established means of evaluation, will now be even more integrated into the grant-proposal-review process. Most of those well-established means of evaluation focus on EPO activities. But NASA’s conundrum raises the question of whether such disciplinary approaches to evaluation can work anymore at all, or whether we need to move toward explicit interdisciplinary criteria and review panelists.

Consider the following scenario: the NSF—or NASA—convenes its next review panel. Instead of a peer-review board consisting entirely of scientists, the panel consists of six members: three scientists, one philosopher, one education expert, and one citizen. The panel must come to an agreement about what to fund; that is, how to spend the public’s money. The scientists are free to make any type of argument they wish to convince the nonscientists, for example, that the research would answer critical and crosscutting questions. The only requirement is that there must be a clear consensus—not necessarily unanimity—on the project.

I suspect that such a suggestion will be greeted with horror in some quarters. But this is where the conversation about broader impacts is heading—as I believe it should. Today Congress and society demand greater accountability for public expenditures. I see this as a way to acknowledge this impulse, while keeping the drive for accountability from turning into a straightjacket that limits the reach of science.

Robert Frodeman is professor of philosophy and director of the Center for the Study of Interdisciplinarity at the University of North Texas.

Death Spiral Video: Arctic Sea Ice Minimum Volume 1979-2012 https://scienceprogress.org/2013/04/death-spiral-video-arctic-sea-ice-minimum-volume-1979-2012/ https://scienceprogress.org/2013/04/death-spiral-video-arctic-sea-ice-minimum-volume-1979-2012/#comments Mon, 22 Apr 2013 18:59:04 +0000 Sean Pool http://scienceprogress.org/?p=28212 Many experts now say that if recent volume trends continue we will see a “near ice-free Arctic in summer” within a decade.

Creative tech guru Andy Lee Robinson shows why in a wondrous new video — set to music he wrote and played:

Let’s help this video go viral!

Related Posts:

Cross-posted at Climate Progress.
Equity Crowdfunding: Boost for Innovation or Haven for Scams? https://scienceprogress.org/2013/04/equity-crowdfunding-the-next-big-boost-for-innovation-or-haven-for-scams/ https://scienceprogress.org/2013/04/equity-crowdfunding-the-next-big-boost-for-innovation-or-haven-for-scams/#comments Fri, 19 Apr 2013 13:06:51 +0000 Sean Pool http://scienceprogress.org/?p=28193 Shortly after the passage of the Jumpstart Our Business Startups, or JOBS, Act just over a year ago, an article in the New York Times Dealbook section observed that “it is unclear how regulators will respond to the JOBS Act now that it is law.”

With respect to the equity crowdfunding portions of the Act, those words could have been written today. The Securities and Exchange Commission is now three months past its self-imposed deadline to develop the regulatory framework that would allow equity crowdfunding – the online process by which a company sells a stake in itself to a group of microinvestors – to commence in the United States, with a decision expected sometime before the end of the summer.

The delay has given both sides of the equity crowdfunding debate a chance to prognosticate – crowdfunding will replace angel investment, crowdfunding will only work for local, community-based businesses, crowdfunding will fall flat on its face, or crowdfunding is too vulnerable to wide-scale fraud. What the debate has tended to overlook is the fact that equity crowdfunding, though a young phenomenon, is not brand new. There are examples of functional equity crowdfunding across Europe, and they provide a guide for what equity crowdfunding in the United States may look like.

Crowdfunding is the process by which an organization or individual raises funds via many small contributions, usually online. Kickstarter is probably the most visible incarnation of this practice, but many similar platforms exist. For the most part, crowdfunding in the United States has been centered on charities or on lending, and it is not yet legal for a small enterprise to publicly offer equity via online crowdfunding. The JOBS Act, however, will change that once the SEC puts new rules in place.

At its heart, equity crowdfunding is an attempt to tackle a familiar problem. Most small enterprises begin to raise capital via the “friends and family” model, wherein the founders and their closest personal network contribute. While there is obviously some variance, this well tends to run dry around the $50,000 to $200,000 mark. When a startup’s plans have expanded to require a capital investment of $1-2 million or more (again, speaking generally), it may attract the attention of a venture capital firm or angel investor.

The problem, of course, is what happens between $200,000 and $2 million. This stage of new venture development requires more money than friends and family can generate, but the amounts involved are typically too small to interest venture capitalists and carry more risk than many banks will tolerate. As a result, entrepreneurs and policymakers often refer to this mid-six- to low-seven-figure range as the “Valley of Death” for startups. This challenge has proven bigger than any single solution, and the equity crowdfunding provisions of the JOBS Act present a promising new approach.
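
The gap can be stated as a simple set of thresholds. The sketch below uses the rough figures cited above (a friends-and-family ceiling of about $200,000 and a venture capital floor of roughly $2 million); the cutoffs and function name are illustrative assumptions, not a formal model of startup finance.

```python
# A minimal sketch of the funding ranges described above. The cutoffs mirror
# the rough figures in the text; the function name is hypothetical.
def funding_stage(amount_needed: float) -> str:
    if amount_needed <= 200_000:
        return "friends and family"
    elif amount_needed < 2_000_000:
        return "valley of death"   # too big for friends and family, too small for VCs
    else:
        return "venture capital / angel"

for amount in (75_000, 500_000, 2_500_000):
    print(f"${amount:,}: {funding_stage(amount)}")
```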

But equity crowdfunding has its detractors. The primary objection to equity crowdfunding in the run-up to the JOBS Act was that it invites fraud. Providing the regulatory infrastructure to supervise thousands of startups connecting via the web with thousands of potential investors is the responsibility of the SEC. Former Chairwoman Mary Schapiro strongly opposed equity crowdfunding when it was up for a vote last spring, and the SEC today is so delinquent on developing a working framework for the practice that a recent article in Businessweek raised the possibility of sabotage by inaction.

Leadership turnover seems a more likely explanation for the slow process (although, no doubt, an institutional aversion to equity crowdfunding doesn’t help). In either case, some of the fretting over fraud is much ado about nothing – the very model of equity crowdfunding contains within it some inherent safeguards against fraud, mostly to do with the fact that it involves an unusually large body of interested parties analyzing the potential value of an investment and interacting with the startup, making it an unattractive field for potential fraudsters.

For the most part, equity crowdfunding in other countries has been run through a small number of private platforms – CrowdCube and Seedrs in the UK, Symbid in the Netherlands, Innovestment in Germany, and others– which are on the sharp end of questions of fraud and due diligence. Each platform has its own approach to due diligence; some firms wait until the total capital target has been raised and is held in escrow, and then investigate the company, while others vet entrepreneurs, sometimes through several stages of examination, before permitting their startups to appear on the platform.

Consistent within the model are a few elements that would deter fraud. The first is that any funds raised are withheld from the startup until the fundraising target is met; if it is not, the funds are returned to the investors. This makes it difficult to use equity crowdfunding to perpetrate a Ponzi scheme, and it raises the question of what kind of confidence man would decide to run a scam in which he would have to develop, promote, and defend a business plan that could pass the scrutiny of both the platform and potential investors, with the very real possibility that he would get not a penny out of it anyway.
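
The all-or-nothing escrow rule described above can be sketched in a few lines. The class and method names below are hypothetical and are not drawn from any specific platform; the point is simply that pledges are held until the target is met and refunded otherwise.

```python
# A minimal sketch of the all-or-nothing escrow rule described above: pledges
# are held until the campaign's target is met, and refunded otherwise.
class EscrowCampaign:
    def __init__(self, target: float):
        self.target = target
        self.pledges = {}  # investor -> amount held in escrow

    def pledge(self, investor: str, amount: float) -> None:
        self.pledges[investor] = self.pledges.get(investor, 0) + amount

    def close(self) -> dict:
        """Release funds to the startup only if the target was reached."""
        total = sum(self.pledges.values())
        if total >= self.target:
            return {"released_to_startup": total, "refunds": {}}
        return {"released_to_startup": 0.0, "refunds": dict(self.pledges)}

campaign = EscrowCampaign(target=250_000)
campaign.pledge("investor_a", 100_000)
campaign.pledge("investor_b", 75_000)
print(campaign.close())  # target missed, so all pledges are refunded
```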

Many crowdfunding platforms encourage communication between investors and entrepreneurs during the funding window, which can last from 30 days to one year, depending on the platform. A startup might deal with hundreds of interactions from potential investors, each one an opportunity for an investor to catch a whiff of fraud and, taken collectively, a pattern that the platforms can use as an early-warning system. If an entrepreneur is evasive or refuses to speak to investors, this sets off warning bells. Not only does the entrepreneur have every incentive to connect with an investor, but it would be atypical to the point of raising red flags if an entrepreneur were unwilling to talk about his or her big idea.

Of course, much of this is hypothetical. A 2012 report from Nesta, the UK’s innovation foundation, noted that Crowdcube, the largest of the UK’s crowdfunding platforms, has yet to face a single case of fraud.

Concerns about crowdfunding may be overblown, but they are not invented, and any policy change should be approached with scrutiny. Nonetheless, equity crowdfunding presents vast potential for some entrepreneurs to more smoothly navigate the valley of death and increase innovation, and examples from other countries should give the SEC confidence that these rules can be implemented in a way that both protects investors and, to use the language of the act itself, jumpstarts American startups.

Frank Spring is an innovation policy expert and director of a progressive political consulting firm that “helps people who do good do it better.”

The ‘Scariest Search Engine On The Internet’ Has Been Around For 3 Years And Is Used For Good https://scienceprogress.org/2013/04/the-%e2%80%98scariest-search-engine-on-the-internet%e2%80%99-has-been-around-for-3-years-and-is-used-for-good/ https://scienceprogress.org/2013/04/the-%e2%80%98scariest-search-engine-on-the-internet%e2%80%99-has-been-around-for-3-years-and-is-used-for-good/#comments Wed, 17 Apr 2013 16:37:21 +0000 Andrea Peterson http://scienceprogress.org/?p=28185

Andrea Peterson via ThinkProgress.

CNNMoney posted an ominously titled column “Shodan: The scariest search engine on the Internet” last week about a search application that discovers unprotected technology connected to the internet. The column was promptly aggregated by other outlets like FastCompany – but not until the last third of the article did the author mention two key facts: Shodan has existed for three years and is “almost exclusively used for good.”

Make no mistake, the things Shodan can uncover are scary: It’s essentially a way to find technology currently online that was never intended to be networked in the first place, or that is networked with security protocols so laughably thin (think default admin logins and passwords) that it’s child’s play to compromise. The vulnerable tech ranges from the seemingly mundane, like home printers and garage doors, to the sort of things you really don’t want connected to the outside world, such as citywide traffic systems and nuclear command and control centers.

And as we move closer to a world where everything from our refrigerators to our pacemakers are connected to the Internet in one way or another, these problems will only multiply: An “Internet of things” that lacks security built into the devices that join together to create that network could potentially put everyone at risk. The issue is that these vulnerabilities exist in the first place, not that Shodan can uncover them — as previous coverage of Shodan by David Maas in San Francisco City Beat notes:

“The fact that somebody is basically shining a flashlight into a dark room shouldn’t be the part people are afraid of,” says Dan Tentler, a San Diego-based information-security consultant. “The part people should be afraid of is the fact that some genius decided to take, for example, a five-megawatt hydroelectric plant in France, put its control computer on the Internet and allowed everybody that knew about the IP address to connect to it and make changes to this dam, with no encryption or authentication to speak of.”

As with almost all technological developments, Shodan is neutral. In fact, the bad guys have a vested interest in keeping these types of vulnerabilities quiet so their exploitation will go unnoticed. With Shodan, security experts have a simpler way of identifying which networks are at risk and potentially taking them offline or improving their security, thus bettering the entire system. And “security experts” does mean hackers: While the word has taken on a lot of negative connotations in the media, hacking is a neutral process of discovering vulnerabilities. Just as it’s questionable to call Shodan scary because the things it uncovers are unsettling, decrying the process of hacking and all the people who do it because they reveal problems with systems is equally objectionable.

There are certainly bad hackers, but there are also good hackers: Just ask Peiter Zatko (better known as Mudge) who spent the last few years as a program manager at the Defense Advanced Research Projects Agency, or DARPA, focusing on cybersecurity projects. When he left last week he tweeted that he didn’t know which was neater: “getting Office of SecDef highest award, OR the positive use of ‘hackers’ in the letter!”

President Obama Launches $100 Million Initiative To Map The Human Brain https://scienceprogress.org/2013/04/president-obama-launches-100-million-initiative-to-map-the-human-brain/ https://scienceprogress.org/2013/04/president-obama-launches-100-million-initiative-to-map-the-human-brain/#comments Wed, 03 Apr 2013 14:34:21 +0000 Sy Mukherjee http://scienceprogress.org/?p=28157 Editor’s Note: Science Progress applauds the President’s latest initiative to invest in the next generation of neuroscience research. Whether this is the beginning of the next human genome project ready to open the doors to vast new treatments, markets, and domestic companies, or whether it is much ado about nothing, only time will tell.

We know that when we set ambitious science and technology goals, positive benefits inevitably result. The Human Genome Project brought us the modern biotechnology industry, now a nearly $1 trillion domestic industry that supports hundreds of thousands of jobs. The Apollo program brought us countless innovations, from better solar panels to GPS and advanced defense technologies. The NIH creates about twice as much economic benefit each year as it costs taxpayers to run.

With the cost of sequencing genes on a steep decline, the new frontier in understanding our own humanity is understanding cognition: the brain. We applaud the administration’s ambitious 15-year plan, created in large part by researchers and funded by both government agencies and the private sector. Whether this is the beginning of the next human genome project, ready to open the doors to vast new treatments, markets, and domestic companies, only time will tell. But whatever the results of the research, simply asking hard questions has always yielded its own rewards. The remainder of this post is a reprint from Think Progress:

According to a White House press release, President Obama will follow through on his State of the Union call for a comprehensive map of the human brain by announcing $100 million in federal investments for the project on Tuesday morning. Funds for the project — dubbed the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative — will be appropriated through the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation (NSF), and will be included in the FY 2014 budget that the president is set to release next week.

The project’s central component will be the Brain Activity Map, which seeks to “accelerate the development and application of new technologies that will enable researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact at the speed of thought” in an effort to “explore how the brain records, processes, uses, stores, and retrieves vast quantities of information, and shed light on the complex links between brain function and behavior.” As President Obama explained during the State of the Union, such advancements could herald the key to unlocking pressing public health mysteries, including effective methods of curing brain injuries and degenerative diseases like Alzheimer’s.

That’s particularly significant in a time of rising dementia rates among Americans. A recent pair of studies released by the Centers for Disease Control (CDC) concluded that a combination of factors — including an aging population, more targeted early diagnosis efforts, and the failure to discover a viable cure — led to a staggering 68 percent increase in Alzheimer’s mortality rates between 2000 and 2010. The associated health care costs of that rise in the disease were $200 billion in 2012 alone, including $140 billion to government insurance programs such as Medicare and Medicaid. If the current trend holds, those costs could balloon to over $1 trillion by the year 2050.

However, whether or not the announced funding is sufficient for a project of this size and scope is an open question. The BRAIN Initiative is most often compared — including by the president himself — to the watershed Human Genome Project. But the initiative to map the genome was funded by the Department of Energy and NIH to the tune of $3.8 billion over the course of 13 years, leading the project to be completed two full years ahead of schedule and giving the federal government an $800 billion return on its investment. The planned $100 million in funding for the BRAIN initiative constitutes 1/38th that price tag, and comes during a time when congressional cuts to the federal budget have slashed research and development funding, which was already half of its 1962 level before the sequester went into effect. The initiative will also rely heavily on public-private partnerships through research universities and organizations such as the Allen Institute for Brain Science.
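
To put those figures side by side, a quick back-of-the-envelope comparison helps; the numbers below are simply the ones cited above, and the per-year framing is one rough way to normalize a single year of BRAIN funding against a 13-year project.

# Funding figures as cited above (nominal dollars, not inflation-adjusted)
hgp_total = 3.8e9        # Human Genome Project, over 13 years
brain_year_one = 100e6   # BRAIN Initiative, proposed for FY 2014

print(hgp_total / brain_year_one)         # 38.0 -> one year of BRAIN is 1/38th of HGP's total
print((hgp_total / 13) / brain_year_one)  # ~2.9 -> HGP's average annual spend vs. BRAIN's first year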

While there are several related projects already in the works — such as the Human Connectome Project — they are unlikely to overlap much with the BRAIN Initiative, which is appreciably more ambitious and detailed in scope. But that doesn’t mean researchers can’t take away lessons from the ongoing efforts, which are still in preliminary stages. Although progress in the Human Connectome Project has led to groundbreaking methods of data collection associated with the human brain, as the New Yorker’s Gary Marcus pointed out earlier this month, “it is easier to collect massive amounts of data than to understand them.”

]]>
https://scienceprogress.org/2013/04/president-obama-launches-100-million-initiative-to-map-the-human-brain/feed/ 0
The Dollars And Science of Fishery Management https://scienceprogress.org/2013/04/the-dollars-and-science-of-fishery-management/ https://scienceprogress.org/2013/04/the-dollars-and-science-of-fishery-management/#comments Tue, 02 Apr 2013 20:34:38 +0000 Sean Pool http://scienceprogress.org/?p=28162 By Michael Conathon, via The Center for American Progress

In September 2013 the Magnuson-Stevens Fishery Conservation and Management Act, which regulates America’s fisheries, will be up for reauthorization for the first time since it was last amended in 2006. The process that led to the most recent reauthorization took a contentious and laborious seven years of debate in Congress and led to dramatic changes in the law. Perhaps the most ambitious amendment was the addition of a requirement for the National Oceanic and Atmospheric Administration, or NOAA, to set annual catch limits — caps on how much fishermen can catch of each of the fish stocks that NOAA manages.

Earlier this month the House Committee on Natural Resources formally kicked off the Magnuson reauthorization festivities with a hearing that, according to Committee Chairman Doc Hastings (R-WA), was “intended to highlight issues that could provide the basis for future hearings.” A hearing about future hearings: government efficiency at its finest.

Most remarkable about this particular bit of political theater was that peeling back the veneer of partisanship that rules House proceedings these days revealed an almost unanimous agreement that, again in the words of Rep. Hastings, “many of the current challenges may not be due to the Act itself, but rather with its implementation.”

This realization tracks well with a Natural Resources Defense Council, or NRDC, report issued earlier this month. The NRDC found that since the Magnuson-Stevens Act was amended in 1996 to require overfished stocks to be rebuilt within 10 years, nearly two-thirds of all once-overfished stocks have met their target numbers. As a result, revenues from U.S. commercial fishing have increased by 54 percent since 1996 when adjusted for inflation, with fishermen receiving more than half a billion dollars in additional revenue annually.

While this accomplishment is remarkable, it doesn’t extend to all fisheries or all regions of the country. The rising tide of improved fishery management in the United States has not lifted all boats. New England, for example, is still home to 11 overfished stocks — more than twice as many as any other region, according to NOAA’s 2011 “Status of Stocks” report to Congress. And each region of the country has at least one stock of fish on the overfished list.

So if you’re a charter-boat captain trying to get red snapper for your customers in the Gulf of Mexico—where the recreational season lasted just 40 days in 2012 — or if you’re a groundfisherman in New England facing 77 percent cuts to your quota of Gulf of Maine cod, NRDC’s big-picture statistics don’t do a whole lot for your bottom line.

While virtually everyone who spoke at this month’s House hearing agreed that the Magnuson-Stevens Act should remain in place and that the lingering difficulties were due to implementation and not the legislation itself, the question of how to resolve these problems raised a far more contentious debate.

There was general consensus that NOAA simply hasn’t been spending enough money on stock assessments to do the job that is required. In normal times, such an obvious point of agreement would lead to a clear solution. But in today’s poisonous partisan atmosphere, the debate over this point became predictably divisive. Most Republicans trumpeted government inefficiency, suggesting that NOAA is misusing its funds, while most Democrats felt that more money is necessary to get the job done right.

Rep. John Fleming (R-LA) was particularly vociferous in taking the former side. He charged NOAA with using its funding “inefficiently” to research climate change and at one point pontificated that the cost of one “climate satellite” was $750 million. Presumably, he was referring to NOAA’s environmental-monitoring satellites that—in addition to collecting climate data — also happen to be fundamental weather-forecasting tools. Without them, hurricane tracking, for example, could lose as much as 50 percent of its accuracy — a degradation with which Rep. Fleming’s downstate neighbors might have more than a passing concern.

Perhaps the broader point that Rep. Fleming missed in pointing his finger at the agency is that Congress prohibits NOAA from dipping into its satellite funding to cover the cost of fisheries research. That’s called “misuse of funds,” and Congress gets very angry when agencies tamper with its carefully crafted spending plans. In fact, NOAA found itself in hot water last year when Congress found that the agency “reprogrammed” $36 million from one National Weather Service account to pay employees in another part of the Weather Service. So shifting funding from the satellite program to the fisheries account seems as if it would be a shady solution at best.

Rep. Ed Markey (D-MA) nailed the problem during the hearing. “Inadequate funding for science,” he opined, “makes poor management a self-fulfilling prophecy.” His point is that ill-funded science produces uncertain data. The less certain the data, the higher the buffer needs to be between what scientists think the right number of fish to catch should be and how much regulators can allow fishermen to catch. Simply put, more certainty means a better chance at more fish.
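
The logic of that buffer can be sketched in a few lines. This is purely illustrative and is not NOAA's actual assessment methodology; the stock estimate, the uncertainty measure, and the buffer rule are all invented for the example.

def allowable_catch(estimated_sustainable_catch, relative_uncertainty):
    """Toy model: the shakier the stock assessment, the bigger the
    precautionary buffer between the scientific estimate and the
    quota regulators can actually hand to fishermen."""
    buffer = min(relative_uncertainty, 0.9)  # cap the buffer so the quota never goes negative
    return estimated_sustainable_catch * (1 - buffer)

# A well-funded, low-uncertainty assessment leaves most of the estimate fishable...
print(allowable_catch(10_000, 0.10))  # 9000.0 tons
# ...while a poorly funded, high-uncertainty assessment forces a much smaller quota.
print(allowable_catch(10_000, 0.40))  # 6000.0 tons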

The trouble is, stock assessments are a complicated business, and the most complex assessments are expensive. Each year the Magnuson-Stevens Act requires NOAA to set annual catch limits on 203 fish stocks. To set an accurate catch limit, the agency relies on its most recent stock assessment, but the agency’s FY 2012 budget for fishery-stock assessments was $64 million—far short of what would be required to carry out new assessments on every stock or even the most commercially important stocks. Until Congress allocates additional funding for stock assessments, fishermen will suffer.

Sound management requires sound science, and sound science costs money. But as the NRDC report found, sound management based on sound science also makes money — a lot of it, too, and it ends up in the hands of fishermen.

So when it comes to fisheries management, House Republicans have a choice. They can recognize the reality that federal dollars spent for the good of hard-working Americans are a real investment in our society, or they can stop insisting that their fishermen are being harmed by inadequate science. When you’re the guys controlling the bag of money, you don’t get to complain that there isn’t enough being spent.

Michael Conathan is the Director of Ocean Policy at the Center for American Progress. This piece is reprinted with permission.

]]>
https://scienceprogress.org/2013/04/the-dollars-and-science-of-fishery-management/feed/ 0
Science and the Public Square https://scienceprogress.org/2013/03/science-and-the-public-square/ https://scienceprogress.org/2013/03/science-and-the-public-square/#comments Fri, 29 Mar 2013 13:52:52 +0000 Heather Douglas http://scienceprogress.org/?p=28133 In the past decade, we have heard a lot about how Conservatives don’t like science. In their 2011 book Merchants of Doubt, Naomi Oreskes and Erik Conway document the conservative lean of the champions of many science-obscuring campaigns over the past half century—for example about the link between tobacco and health, on environmental concerns such as acid rain, ozone depletion, and climate change, and on national security items like the Strategic Defense Initiative.

Trends in polling make clear that scientists have increasingly leaned toward more liberal politics. Slates of Nobel Prize-winning scientists have signed letters of support for Barack Obama, for example.

Recently, Dan Sarewitz, Director of the Arizona State University’s Center for Science, Policy, and Outcomes, has argued that this is bad for science.

Sarewitz has suggested that the liberal bent of science unnecessarily politicizes science and that scientists should make more of an effort to be bipartisan. But Sarewitz does not see that a deep structural issue makes science more of a thorn in the side of Conservatives than of Liberals, and not just for social science. The problem arises at the boundary between what is considered private behavior (and thus none of the government’s business) and public behavior (what could reasonably come under the purview of public policy).

John Dewey, in his 1927 book The Public and Its Problems, noted that the boundary between the public and the private shifts over time. Behavior that was once thought to be public becomes a private matter. For example, matters of personal faith were once public issues, because flocks were damned or saved together (or at least the faith of one’s neighbor was thought to matter materially on one’s own salvation prospects). But this belief shifted, beginning with the Reformation, to the idea that all that really mattered was the faith of the individual. As the shift occurred, an individual’s faith became a private matter. Similar shifts from public to private have occurred over issues like sodomy and interracial marriage.

Change in the opposite direction also occurs. Behavior that was once considered private can become a public concern. Wife beating was once largely a private matter, but as the women’s movement gained steam and sociologists studying spousal abuse began to track the enormous toll such behavior exacts, spousal abuse became very much a public matter, and laws and policies were enacted to curtail it and protect abused spouses.

But it is not just areas of social science that have this impact. Pollution was once thought of as just the private disposal of one’s waste. But as the physical effects of pollution were tracked and evidence of harm, both human and ecological, was studied, environmental waste disposal, whether through the air, water, or land, became a public matter.

And this is where the asymmetrical relationship between science and the opposite ends of the political spectrum snaps into focus. One of the most important reasons for moving the boundary between public and private is the discovery of significant effects of private action on people who are not part of the private action—and such discoveries often come through science.

Dewey argued that, in general, an action that only has effects on those who engage in the action—even a group of people—remains a private concern. But when we discover that an action affects those beyond the group directly involved—particularly when a large group is affected dramatically—a public policy issue arises. In other words, what was once a private matter becomes a public matter.

How do we discover that someone’s ostensibly private action is affecting people not involved in that action? We study it, and discover causal relationships not seen before. This instigates a change in our social understanding, and something that was once private becomes a public issue. Or, conversely, we might study some set of behaviors thought to have public import and find no public harm or impact from them, and they recede into the realm of the private.

These days, it is science that usually does such discovering. Thus, while custom often sets where the boundary is, it is science that often drives whether the boundary should be moved.

And moving the boundary, one way or the other, is precisely what Conservatives, especially social conservatives, are going to hate. The very idea of social conservatism, that social change is inherently difficult and tradition should be valued, makes social conservatives more unwilling to countenance arguments that suggest we should accept the fundamental social changes involved with shifting the public/private boundary. Liberals, by the very nature of their political perspective, are much more willing to consider calls for such social change.

That many social conservatives are also small government conservatives exacerbates the dislike of science, because moving something from private to public concern expands the realm of the government. The growth of government into the realm of food and medical oversight (the Food and Drug Administration, and the United States Department of Agriculture), environmental oversight (Environmental Protection Agency), and consumer product oversight (Consumer Product Safety Commission) exemplify the discovery of the widespread public effects of private behavior, change in the public/private boundary, and growth of government.

Not all issues will fall out along these lines. For example, social conservatives are desperate to generate evidence that abortion has harmful side effects, either on the women who have them or on the broader society, so that it can become a more regulated and restricted practice. In cases like this, social conservatives often seek out evidence to show that something private should become public. Often they do so in an attempt to return society to an earlier public/private boundary.

But many cases clearly exemplify a resistance to social change, for example:

1) anthropogenic climate change, which makes burning fossil fuels, cutting down trees, and any other action that increases greenhouse gases publicly relevant,

2) gun violence and gun regulation, which turns a private ownership issue into a public policy issue,

3) restrictions on the use of chemicals in the environment due to toxicity concerns,

4) increased oversight of consumer products, and so on.

In these cases, evidence suggests that private choices have significant impacts beyond the choosers, and that a public policy issue is present. The expansion of government subsequently looms.

This does not mean that Conservatives hate evidence. It does not mean they are so wedded to their ideologies that they cannot see the evidence (at least not always). It does not mean that there are just some “ideological concerns” that happen to conflict with science, as Amanda Marcotte, a provocative political blogger, has suggested.

Conservatives are generally going to demand a higher burden of proof for science that discovers significant unintended consequences of private actions (like climate change from greenhouse gas emissions) or discovers an absence of harm from previously public actions (like the lack of ill-effects on children raised by gay parents). Because Conservatives are so much more cautious about social change, they are always going to be hostile to science that instigates such social change—doubly so for discoveries that expand the purview of government.

What should we do about this? One might note that lots of science does not produce evidence which bears on the shifting public/private boundary. For example, having good weather data and good predictions on what the weather will be is simply useful for everyone’s planning, for both public and private decisions. Or knowing which flu strains will be most useful for the next year’s vaccine has little bearing on changing social mores—although calls for universal flu vaccination policies do press on the public/private boundary. Scientists could emphasize their work that does not challenge the boundary when attempting to reach across the ideological divides.

But this is little comfort for the current political imbalances and impasses created at the science policy interface. Perhaps knowing why Conservatives are likely to be hostile to science will help both scientists and politicians deal better with science. Perhaps Conservatives can be more upfront with their concerns, demanding higher (and clearer) standards of proof for claims they find worrisome, standards at which scientists can aim. Instead of ignoring evidence, Conservative politicians could demand a higher level of evidential surety before acting, and encourage (rather than discourage) further scientific exploration of key issues.

Perhaps scientists can explore ways of addressing questions and consequences that don’t alter the public/private boundary too much or too quickly, or address such issues explicitly in the structure of their work. For example, policies that depend on better-informed private action, achieved through public education, rather than stronger government policies could be used to test possible policy shifts. Or explicit test policies, implemented in one location to track the effects, both intended and not, could assuage Conservatives’ concerns about rapid social change.

Perhaps it will help simply to understand that the asymmetry between Conservatives and Liberals is real, but that in a democracy, doing the research that discovers impacts (or the lack of them) is crucial for our public discourse. It is not a temporary cultural shift, nor irrationality, nor a current ideology that is driving the distaste for science among Republicans. It is their core conservatism that is at issue.

Heather Douglas is the Waterloo Chair of Science and Society at the University of Waterloo in Ontario, Canada. Image via BigStockPhoto.

]]>
https://scienceprogress.org/2013/03/science-and-the-public-square/feed/ 0
World’s Most Powerful Private Supercomputer Won’t Cure Cancer, But Will Find Oil Super Fast https://scienceprogress.org/2013/03/world%e2%80%99s-most-powerful-private-supercomputer-won%e2%80%99t-cure-cancer-but-will-find-oil-super-fast/ https://scienceprogress.org/2013/03/world%e2%80%99s-most-powerful-private-supercomputer-won%e2%80%99t-cure-cancer-but-will-find-oil-super-fast/#comments Tue, 26 Mar 2013 16:46:07 +0000 Sean Pool http://scienceprogress.org/?p=28121 Ryan Koronowski via Climate Progress.

Twice a year, a group of experts releases a ranked list of the world’s most powerful computers called the TOP500. It is likely that the new list in June will have a new member of the top 10 of the TOP500: a computer dubbed Pangea. Its output is 2.3 petaflops. A petaflop is a quadrillion “floating-point operations per second.” Today’s desktop computers deal in gigaflops, or billions.
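
To make the scale concrete, the comparison works out roughly as follows; the 100-gigaflop figure for a fast desktop is an assumed ballpark used only for the ratio.

pangea = 2.3e15   # 2.3 petaflops, as reported
desktop = 100e9   # assumed ~100 gigaflops for a fast desktop

print(pangea / desktop)  # ~23,000: Pangea does in one second what the desktop needs roughly six and a half hours to do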

The system is the fastest commercially owned computer in the world. The only faster computers on the TOP500 list are owned by governments or academic institutions and are therefore used for research.

Pangea is owned by Total SA, the fifth-largest oil and gas company in the world. So the supercomputer will not be changing the future of health care IT like former Jeopardy champion Watson or revolutionizing climate projections and weather research like supercomputers at NCAR and Oak Ridge National Laboratory. It will be searching for oil and gas, according to Reuters.

Pangea helped analyze seismic data from Total’s Kaombo project in Angola in just nine days, instead of the four and a half months it would have taken with its previous computer, Philippe Malzac, IT director at Total’s Exploration division, told Reuters:

Total trumps British rival BP with the 2.3-petaflop supercomputer. BP said last December it was building a 2 petaflop supercomputing facility in Houston, Texas.

“Our competitors are also working on these kind of algorithms, but we think this is giving us a head start,” Malzac said.

The price of the system is undisclosed, but it will cost nearly $20 million per year just to run Pangea. The technological achievement may be impressive, but the reality is that oil and gas reserves are finite and getting more expensive to extract, while renewable energy sources like wind and solar are getting cheaper to use.

Raymond T. Pierrehumbert, a lead author on the third IPCC Assessment Report, explained last month in Slate that it is getting harder and more expensive to squeeze oil out of the ground.

Oil production technology is giving us ever more expensive oil with ever diminishing returns for the ever increasing effort that needs to be invested. According to the statistics presented by J. David Hughes at the AGU session, we are now drilling 25,000 wells per year just to bring production back to the levels of the year 2000, when we were drilling only 5,000 wells per year. Worse, the days are long gone when you could stick a pitchfork in the ground and get a gusher that would produce for years.

And that is when an oil company already knows where to drill, without the help of a historically fast supercomputer. Global oil and gas exploration and production costs are expected to rise again to $644 billion in 2013, according to an annual survey by Barclays. These fuels are getting increasingly expensive and difficult to produce, requiring massive computational power to find ways to squeeze more dinosaur juice out of the Earth’s crust. The climate clock is ticking, and it is fair to ask whether such investments in oil and gas extraction are worth it. Pierrehumbert again puts the scenario plainly:

Whales were driven to the brink of extinction before petroleum replaced whale oil, and we may well fry our planet—and bankrupt ourselves while doing so — before we’re finally forced to kick the fossil fuel habit. It will be hard to muster the resources to develop replacements for fossil fuel energy if we wait until both the economy and climate are in ruins. We are in for a hard landing if we don’t use our current prosperity to pave the way for a secure energy and climate future.

That includes using recent powerful technological advances to get ourselves off fossil fuels and onto renewables.

Ryan Koronowski is Deputy Editor of Climate Progress.

]]>
https://scienceprogress.org/2013/03/world%e2%80%99s-most-powerful-private-supercomputer-won%e2%80%99t-cure-cancer-but-will-find-oil-super-fast/feed/ 0
Future Choices II https://scienceprogress.org/2013/03/future-choices-ii/ https://scienceprogress.org/2013/03/future-choices-ii/#comments Mon, 25 Mar 2013 16:42:00 +0000 Sean Pool http://scienceprogress.org/?p=28093 In 2007 the Center for American Progress released its report “Future Choices: Assisted Reproductive Technologies and the Law,” which described a range of assisted reproductive technologies and their legal and regulatory background. The report also examined the policy implications of the largely unregulated field of reproductive technology, especially in the context of traditional feminist positions on reproductive rights. If a woman has the ultimate right to decide whether or not to bear a child when she is pregnant, for instance, does that principle hold true when she would like to become pregnant with the use of specific embryos? Is surrogacy a noble pursuit undertaken by autonomous, well-informed, and altruistic women, or is it a practice that exploits the low-income and vulnerable?

These questions have not gotten any easier to answer in the intervening years. Indeed, advances in reproductive technologies have continued to outpace the development of the laws that might govern them. At the same time, more and more people who would have been unable to procreate or become parents in past generations have been able to bring a child into their home or build a family of their choosing, including those who have historically been deemed “infertile” for social reasons such as their sexual orientation, gender identity, or unmarried status. When things do not go as planned, however, the law’s failure to prescribe clear guidelines for resolving the disputes that inevitably arise can lead to real confusion and hardship for families. And all the while, the questions keep coming.

The landscape of assisted reproductive technologies has continued to evolve since our 2007 report, and new questions have arisen as a result. Case in point: In 2010 President Barack Obama signed the Patient Protection and Affordable Care Act. Should fertility treatments be considered essential health benefits that must be required in every health plan, and what are the implications of including or excluding these services?

As assisted reproductive technologies become increasingly common and accessible, other questions demand answers: How should states define family relationships? Should the government support children created after a parent’s death in the same way it supports children created while that parent was alive? Should religiously affiliated employers be allowed to discriminate against employees who use assisted reproductive technologies with which the employers disagree? How do we address the rise in international surrogacy and other forms of reproductive tourism as world economies become increasingly globalized? What are the parameters for establishing citizenship for such children born abroad?

While some court opinions offer new clarity to a handful of unresolved issues, many court decisions only further muddle the landscape. We find that despite the increasing popularity of assisted reproductive technologies, the rights and responsibilities surrounding those who take part in these processes are still largely undefined.

As with the first “Future Choices,” this report examines the three primary areas in which legislatures and courts have spoken — health insurance coverage, embryo disposition, and parentage determinations — as well as additional areas where significant developments in the laws governing assisted reproductive technologies have occurred.

Jessica Arons is the Director of the Women’s Health & Rights Program, and Elizabeth Chen is a Policy Analyst at the Center for American Progress. This article can be read at the Center for American Progress website, or download the PDF.

]]>
https://scienceprogress.org/2013/03/future-choices-ii/feed/ 0
The Government Can (Still) Read Most Of Your Emails Without A Warrant https://scienceprogress.org/2013/03/the-government-can-still-read-most-of-your-emails-without-a-warrant/ https://scienceprogress.org/2013/03/the-government-can-still-read-most-of-your-emails-without-a-warrant/#comments Sat, 23 Mar 2013 16:40:16 +0000 Andrea Peterson http://scienceprogress.org/?p=28086 Andrea Peterson, via ThinkProgress.

Senator Patrick Leahy (D-VT) and Senator Mike Lee (R-UT) introduced a bipartisan bill Tuesday to reform the Electronic Communications Privacy Act (ECPA) that would grant new privacy protections for email and other cloud stored data. Under current ECPA standards the government doesn’t need a warrant to access the content of emails that are more than 180 days old — instead all it requires is an administrative subpoena — although some companies including Google, Microsoft, Yahoo and Facebook have challenged that assertion on Fourth Amendment grounds.

Sen. Leahy, the author of the original 1986 law, commented on how much times have changed since then:

“No one could have imagined just how the Internet and mobile technologies would transform how we communicate and exchange information today[...] Privacy laws written in an analog era are no longer suited for privacy threats we face in a digital world. Three decades later, we must update this law to reflect new privacy concerns and new technological realities, so that our Federal privacy laws keep pace with American innovation and the changing mission of our law enforcement agencies.”

A similar proposal was introduced by Reps. Zoe Lofgren (D-CA), Ted Poe (R-TX), and Suzan DelBene (D-WA) earlier this month, and the House Judiciary Committee heard testimony on reforming ECPA this morning. In written testimony submitted before that hearing by acting assistant attorney general and former White House lawyer Elana Tyrangiel, the Obama administration for the first time dropped its claim that police should be able to look at Americans’ email content without warrants, but it promoted a number of other expanded government surveillance powers.

These expansions include giving federal agencies’ civil attorneys warrantless access to Americans’ electronic communications and eliminating some of the privacy protections currently applied to company records in order to reveal who is sending or receiving email, Facebook, Twitter, and other similar types of messages.

When the ECPA legislation was first passed in 1986, most people couldn’t imagine that online data storage would become so inexpensive that people would simply leave their data online, so it was assumed that email left in networked storage for more than 180 days could be considered abandoned — like garbage on the curb. But with the rise of cheap, or in many cases free, storage in the cloud, the 180-day rule has essentially become a way for law enforcement to access most archived email without the level of due process expected for personal communications. Civil liberties advocates have pushed for ECPA reform for years because of these technological and social changes, and they came very close to succeeding last year when reform was almost passed as an amendment to a video-sharing bill backed by Netflix, but the amendment was inexplicably dropped over the Christmas break.
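
The practical effect of the 180-day rule reduces to a simple decision rule. The sketch below is a simplified illustration of the pre-reform standard as described above, not legal advice, and it ignores the exceptions that courts and providers have carved out.

def process_required(email_age_days):
    """Toy summary of the pre-reform ECPA standard described above:
    stored email older than 180 days could be reached with an
    administrative subpoena rather than a probable-cause warrant."""
    return "search warrant" if email_age_days <= 180 else "administrative subpoena"

print(process_required(30))   # search warrant
print(process_required(200))  # administrative subpoena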

Just earlier this week, the American Civil Liberties Union, Americans for Tax Reform, and the Center for Democracy & Technology announced a new coalition called the Digital 4th that, along with other broader groups including the Digital Due Process coalition, will advocate for privacy-driven ECPA reform, among other Fourth Amendment-based privacy protections for current-generation tech.

]]>
https://scienceprogress.org/2013/03/the-government-can-still-read-most-of-your-emails-without-a-warrant/feed/ 0
Bloomberg’s Supersize Soda Ban Rejected By Judge, But Backed By Science https://scienceprogress.org/2013/03/bloomberg%e2%80%99s-supersize-soda-ban-rejected-by-judge-but-backed-by-science/ https://scienceprogress.org/2013/03/bloomberg%e2%80%99s-supersize-soda-ban-rejected-by-judge-but-backed-by-science/#comments Tue, 19 Mar 2013 13:15:04 +0000 Tara Culp-Ressler http://scienceprogress.org/?p=28072 Mayor Michael Bloomberg’s (I) public health initiative to ban the sale of sugary drinks larger than 16 ounces was set to begin on Tuesday — but after a state judge struck down the initiative on Monday, New Yorkers won’t have to relinquish their supersize sodas anytime soon.

The news will likely come as a relief to the New Yorkers who were already preparing to circumvent the city’s ban. Even if the new regulation had gone into effect, there would still have been several ways for soda lovers to get their supersize fix — by going to any local convenience store (which wouldn’t have been subject to the rule because convenience stores are regulated by the state), by crossing state lines into New Jersey, or simply by buying several smaller sodas at once.

The judge’s opinion cites those loopholes as one of his primary reasons for striking down the law, since he believed the “uneven enforcement” throughout the city rendered the regulation ineffective. But even though Bloomberg’s proposal wasn’t perfect, it was on the right track.

As an increasing body of research has tied the consumption of sugary drinks to obesity, public efforts like Bloomberg’s represent one small step toward reorienting a culture where portion sizes have continued to spiral out of control. Restaurants’ portion sizes are more than four times larger now than they were in the 1950s — and that culture of excess is making its way into Americans’ homes, too, where meals are also getting bigger. Soft drink sizes specifically have seen one of the largest increases, ballooning by over 50 percent since the mid-1970s. And research suggests that larger portion sizes do lead people to consume more than they would have otherwise, since we tend to estimate calories with our eyes rather than our stomachs.

The average American child consumes about 270 calories from soft drinks each day, which adds up to U.S. children drinking about 7 trillion calories from soda each year. That’s a huge problem in the larger context of childhood obesity rates, which have tripled since 1980. But there’s evidence that innovative public health measures can pay off. After all, states with aggressive nutrition policies, which include limits on sugary drinks and fried foods in public school cafeterias, have experienced decreases in their childhood obesity rates.
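
Those two figures hang together arithmetically; the child-population number below is an assumed ballpark of roughly 70 million U.S. children, used only to check the order of magnitude.

calories_per_day = 270
us_children = 70e6  # assumed ballpark, not a figure from the article

annual_total = calories_per_day * 365 * us_children
print(annual_total)  # ~6.9e12, i.e. roughly 7 trillion calories a year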

The question of how sugary drinks feed the ongoing obesity epidemic, and how best to encourage Americans to make healthier choices, is one that health advocates continue to grapple with, and there’s general consensus that proposals like Bloomberg’s are worth a shot. Beverage manufacturers, on the other hand, remain largely resistant to addressing their role in the public health crisis. The New York mayor’s soda ban was unanimously approved by the city’s health board in September — while the soda industry, which claimed the loophole-ridden policy was an affront to consumer freedom, became its loudest critic.

Tara Culp-Ressler is a Reporter/Blogger at ThinkProgress. This article was cross-posted at Think Progress.

]]>
https://scienceprogress.org/2013/03/bloomberg%e2%80%99s-supersize-soda-ban-rejected-by-judge-but-backed-by-science/feed/ 0
Why You Should Care About The Increasing Amount Of Fraud In Scientific Research https://scienceprogress.org/2013/03/why-you-should-care-about-the-increasing-amount-of-fraud-in-scientific-research/ https://scienceprogress.org/2013/03/why-you-should-care-about-the-increasing-amount-of-fraud-in-scientific-research/#comments Fri, 15 Mar 2013 14:10:05 +0000 Andrea Peterson http://scienceprogress.org/?p=28069 The Washington Post reported on the equivalent of an ongoing academic thriller unfurling at Johns Hopkins earlier this week, involving a researcher who alleges he was fired in retaliation for his criticism of flawed methodology — later used in an article published in Nature, one of most prestigious research journals — and the suicide of the primary author of the research while drafting a response to that criticism.

But while the full story has yet to play out — Johns Hopkins refuses to comment, and Nature has said only that it expects to release a response in the future — this seedy tale can help bring one dark underbelly of the modern research world to light: how the academic politics of retraction and the pressure to publish may have an adverse effect on the quality of modern research.

A study published last year in the Proceedings of the National Academy of Sciences noted that there has been a tenfold increase in scientific articles retracted due to fraud since 1975. Of the more than 2,000 retracted biomedical and life-science research articles studied, 21.3 percent were attributed to error while 67.4 percent were due to researcher misconduct. The Washington Post discussed the issue with one of the study’s authors, Ferric C. Fang:

“Fang said retractions may be rising because it is simply easier to cheat in an era of digital images, which can be easily manipulated. But he said the increase is caused at least in part by the growing competition for publication and for NIH grant money.

He noted that in the 1960s, about two out of three NIH grant requests were funded; today, the success rate for applicants for research funding is about one in five. At the same time, getting work published in the most esteemed journals, such as Nature, has become a “fetish” for some scientists, Fang said.”

While public funds support a majority of basic research in the U.S., those resources have been dwindling for years and took a significant hit in the sequester. That increase in competitiveness pressures researchers to present results, undoubtedly leading to some researchers falsifying their data in order to preserve their slice of the dwindling public research pie — also known as fraud. And when fraudulent research makes it through the publication process, it becomes part of the knowledge base built upon by other researchers around the world. For every fraudulent piece of research published, many more may rely on faulty grounding for future research projects, thus intellectually contaminating research areas with incorrectly drawn conclusions and impeding future advances.

The pitfalls of fraudulent research aren’t just theoretical: In the late 1990s, a medical researcher “misrepresented or altered the medical histories of all 12 of the patients” in a much-publicized study linking childhood vaccination to autism, in what some other researchers have called “the most damaging medical hoax of the last 100 years.” Between when the study first came out and when it was disproved and retracted, there was a notable drop in youth vaccinations — which is bad for the health of our nation’s children, and the public at large.

Yet despite the severity of the problem and the great stakes at play, there is no centralized database to track these retractions — although new resources have emerged, like Retraction Watch, a blog run by two health journalists keeping tabs on the ongoing drama.

This is a ThinkProgress cross-post.

]]>
https://scienceprogress.org/2013/03/why-you-should-care-about-the-increasing-amount-of-fraud-in-scientific-research/feed/ 0
Is There Daylight Between the Two Sides of the Energy ‘Innovation’ Versus ‘Deployment’ Debate? https://scienceprogress.org/2013/03/is-there-really-daylight-between-the-two-sides-of-the-energy-innovation-versus-deployment-debate/ https://scienceprogress.org/2013/03/is-there-really-daylight-between-the-two-sides-of-the-energy-innovation-versus-deployment-debate/#comments Thu, 14 Mar 2013 13:41:46 +0000 Sean Pool http://scienceprogress.org/?p=28058 Recently there has been a resurgence of what is becoming a classic (see: tired) debate between very smart people about the tension between clean energy innovation and deployment.

This is a “debate” that has played out several times over, enough times now that I think there are a few things the clean energy community needs to acknowledge about it:

1. There is a difference between policy and communications.

2. There are disagreements about communications. However, conflating communications and policy allows disagreements about messaging to spill over into the policy discussion.

3. Ultimately, there is little functional difference between the policies actually advocated by these two “camps.”

4. In letting this happen, the community misses an opportunity to coalesce around items on the policy agenda that we do agree on.

5. The only real beneficiary of this infighting is the fossil fuel industry, which hardly needs the help.

What can be done? First, separate the communications debate from the policy debate, and try to have a real conversation about the merits of each messaging approach given the outcomes we are trying to achieve. Second, discuss the policy agenda outside the context of this conversation about messaging, to isolate the items where there is substantial agreement. Third, galvanize around the agenda items where there is significant agreement, and push for those policies based on whatever strategy can be salvaged out of the communications discussion.

It may be that there are irreconcilable differences on the communications front (which, I suspect, is the case). If so, there still should be some understanding about what policies have our support, and we can cut back on the friendly fire. Pragmatically speaking, there is no benefit to allowing differences on messaging style to interfere with coalition building around policies.

A Case Study: “Innovators” vs. “Deployers”

To illustrate my points, let’s take a look at the most recent installment of this debate. It started, as best I can tell, with Stephen Lacey’s TakePart post here. Lacey takes issue with President Obama’s recent comment that “some big technological breakthrough” is needed to shift away from the high carbon energy sources that cause climate change. This statement, he says, is as negatively influential as those made by climate deniers.

Lacey goes on to outline the two camps in the climate action world: a) the deployment advocates who believe that high penetrations of clean energy can be achieved with existing technology (let’s call them “Deployers”) and b) those who argue we cannot do anything on climate without major technological breakthroughs (let’s call them “Innovators”).

Lacey equates the communications and rhetoric from folks in this second camp to climate doubters and denialists. This prompted a fair bit of outrage from a crowd of folks who (I assume) believe they are in this second camp. Cue the slew of quips, factoids, and citing of precedent from both sides.

Below, I will use this recent incarnation of the “Deployers vs. Innovators” debate to walk through the 5 points I make above, and highlight the need for a pragmatic solution.

1. There is a difference between policy and communications.

Our governance system is designed to have rhetoric and politics intertwined. However, this does mean that, by necessity, public policies must have an accompanying messaging strategy to survive in the political world.

Think tanks are positioned to advocate for policies which are consistent with their missions. Advocacy requires that policy and communications are conflated, so that they can maximize the impact of a given policy recommendation. After all, what good is a policy without a compelling story about its need and impact?

This particular debate is mostly raging among current or former think-tankers. As such, it is not surprising that the distinction between policy and communications has gotten a little lost in the shuffle. However, policy and communications are two separate worlds, which have very different objectives and constraints.

The formal definition of Policy is “a course of action adopted and pursued by a government, ruler, political party, etc.” Communications is “the imparting or interchange of thoughts, opinions, or information by speech, writing, or signs.” The distinction that needs to be made here is that, while certainly communications is used to explain policies, the policy itself is an actual course of action. We can agree on policies, but disagree on how to communicate them.

2. There are disagreements about communications. However, conflating communications and policy allows disagreements about messaging to spill over into the policy discussion.

The hard part is that there is a genuine disagreement about communications. Below, I will lay out what the disagreement is and weigh in.

Let’s go back to Lacey’s piece. Communications in the “climate action” context that Lacey mentions is fundamentally about raising awareness about climate change so that action can be taken. The counterpoint usually is that climate communications should solely be about passing along information. As such, there is a difference on messaging because one side believes that communications should be oriented to achieve a certain outcome, and the other believes that approach often rushes conclusions. This difference may always exist, but I think there is something to be said for this particular example of equating Obama’s rhetoric with climate denial.

From a communications perspective, the way you talk about energy policy may shape how the public perceives the climate change issue. For example, Lacey notes that the President talking about the need for “some big technological breakthrough” to address climate change has the same functional communications impact as denying that climate change is a problem. This is persuasive when it comes to convincing the public to take action: there isn’t much light between a problem that isn’t real and a problem you can do nothing about.

The President saying that “some big technological breakthrough” is needed to fix the climate problem does have a very real impact on public perception. If there were an asteroid hurtling toward Earth, and the President said that “some big technological breakthrough” was needed to address the issue, most Americans could be forgiven for thinking that meant we were doomed. I am sympathetic to the idea that climate communications isn’t just about making Americans aware of the asteroid; it is about getting the President to do something about it.

Let’s address the counterpoint here. If communications ought to be just about passing along information, would the President’s statement still be the functional equivalent of climate denial? Since the functional impact of climate denial is passing along facts that are not true, answering this question is really a matter of asking: is the President right? Is some big technological breakthrough needed to address climate change?

I don’t think so. But I also did not see many people actually address this issue in the debate. Most of what I saw boiled down to, as section 3 will address, a conflation of innovation and deployment objectives. Lacey cites the NREL study showing that 80 percent of electricity could be provided by renewable sources. This seems like a case-in-point example that a big technological breakthrough is not needed to address climate change. Would there be costs? Yes. But do we need a technological breakthrough to do it…? No. This sounds contentious here, but it isn’t as adversarial as it sounds, as section 3 will explain.

If the communications shoe were on the other foot, it would be very surprising if the President talking about how we can deploy existing technologies to fight climate change caused a widespread public perception that innovation dollars are no longer needed. However, the opposite does seem to be the case, because clean energy, in the climate context, is a means to an end.

3. Ultimately, there is little functional difference between the policies actually advocated by these two “camps.”

This is where there is a lot of huffing and puffing but not much real content. The people in the first camp, the Deployers, do not generally think that no technological innovation is needed. See Lacey’s comment that “of course we want to encourage technological leaps by investing in R&D and helping bring emerging technologies to market.” However, the threat of climate change is imminent, so relying solely on innovation would be foolish.

But the people in the second camp, the Innovators, don’t think we should just rely solely on innovation! In fact, they overwhelmingly do not seem to think that deployment of current technologies is mutually exclusive with technological innovation.

From a public policy perspective, this makes the grey area between these two camps pretty expansive. Do both camps support ARPA-E? Seems like it. What about the demonstration projects funded through Title XVII and the Recovery Act? As far as I can tell, yes. There are a few buzzwords and catchphrases which are supposed to distinguish these camps, but I cannot really see where the significant difference lies. Let’s go through a few of them.

“Reform clean energy subsidies!” This one is thrown around an awful lot, which would make you think there is some disagreement. But honestly, I don’t know what it is. Let’s take the PTC for example. Neither side is enamored with the PTC in its current bulky, stop-and-go form, although I’d venture to say both sides would take it over no policy support for wind. Both sides support reforming the PTC to encourage technological progress, and shifting to a tax credit that shows more support for low capacity factor siting. Both camps seemed pretty supportive of AWEA’s proposed phase-out. I honestly could not tell you where the difference lies on this issue, except in some nuanced minutiae. I’ll get to why these differences don’t matter in section 4.

“The role of government in clean energy investment” is another alleged hotspot. The Innovators often cite the Department of Energy’s role in natural gas as a good precedent for what the government’s involvement in energy innovation should look like. But I doubt that, if really pressed, they would say that the U.S. government should abandon all deployment and push funding solely into energy innovation. Remember, that investment in natural gas took 30-plus years to pay off, and the climate clock is ticking. Deployers believe in helping create a market for clean energy through government procurement and demand policies that “pull” technology to market. There may be a dispute over the extent of these policies, but state RPSs and feed-in tariffs, as well as greening the federal government, seem to have wide support.

Yes, the Deployers want to see stronger policy signals from the government to bolster the market adoption of what we already have. But none of them would recommend stripping the federal budget of innovation dollars, or crafting clumsy policy tools in the name of stability. Unfortunately for the Deployers, they haven’t had much luck getting an actual, nuanced, stable policy, so they have had to take what they can get. The Innovators have repeatedly called for research, development, and demonstration (so-called RD&D). Seems like we are singing the same tune here.

“Carbon tax.” If it were progressive, and put some money into innovation as well as deployment, I doubt you’d hear too much fuss.

The only substantial difference seems to be on cap-and-trade and, sorry everyone, that isn’t on the table at the moment. So it hardly seems worth squabbling over.

4. In letting this happen, the community misses an opportunity to coalesce around items on the policy agenda that we do agree on.

This brings me to my fourth point. As the above indicates, there is a pretty concrete and robust set of initiatives that have broad-based agreement. While there may be nuanced differences (how much money from a carbon tax goes to innovation, how exactly the PTC is structured and tiered to promote technological progress) we can all support the big picture.

We are missing out on an opportunity to pick and organize around the policy objectives that we share. Sure, Congress is dysfunctional and the Administration is being pulled to the “center” on issues like oil and gas. That is all the more reason the clean energy and climate community needs to outline a bold, cogent, and widely shared set of priorities that the Administration hears about, no matter whom in the community it talks to.

Too often the disputes around messaging have sidetracked the productive conversation about policy progress. If we can’t agree on how we should talk about a carbon tax, can we at least agree that we need to talk about a carbon tax? If we cannot settle on whether rhetoric needs to be carefully crafted to spur action, can we at least agree that rhetoric should accurately capture the real picture? For example, if the President had said “we need to continue to promote technological development and innovation to drive down the costs of clean energy, while aggressively deploying available technologies,” would any of us really be upset? We should all seize the President’s narrow rhetoric about relying solely on innovation as an opportunity to say “Mr. President, that is an important step, but let’s remember the big picture. Here are all the things you can do.”

5. The only real beneficiary of this infighting is the fossil fuel industry, which hardly needs the help.

My final point is that the only real winner when we in the clean energy policy community fight amongst ourselves is the fossil fuel industry. This is true on both fronts. When it comes to communications, there is nothing the fossil fuel industry wants more than to have us wait and see (indefinitely). But since both camps, practically speaking, want to innovate and deploy, that should be our message. No, we do not need to wait. We need to innovate and deploy, because climate change is real.

On a policy front, the story is similar. The fossil fuel industry is the real beneficiary of disputes within the clean energy community, because it benefits from the status quo. If we become so paralyzed by artificial differences that we fail to act, or to articulate a common agenda, the U.S. will default to the way things have always been.

We cannot afford that. We must find, and articulate, our common goals. We should continue to discuss the merits of different messaging strategies, but not at the cost of pragmatic change. The stakes are too high, and the headwinds are too strong.

So innovate and deploy, because climate change is real.

Adam James is Executive Director of the Clean Energy Leadership Institute and a Research Assistant for Energy and Environmental Policy at American Progress. His work covers clean energy, finance, infrastructure, smart grid, and efficiency. A version of this article first appeared on the Energy Collective.

Recent Observed Global Warming Is ‘Amazing And Atypical’ https://scienceprogress.org/2013/03/recent-observed-global-warming-is-%e2%80%98amazing-and-atypical%e2%80%99/ https://scienceprogress.org/2013/03/recent-observed-global-warming-is-%e2%80%98amazing-and-atypical%e2%80%99/#comments Mon, 11 Mar 2013 17:18:51 +0000 Joe Romm http://scienceprogress.org/?p=28039 Dr. Joe Romm, via Climate Progress. A figure accompanying the original post shows temperature change over the past 11,300 years (in blue, via Science, 2013) plus projected warming this century on humanity’s current emissions path (in red, via recent literature).

A stable climate enabled the development of modern civilization, global agriculture, and a world that could sustain a vast population. Now, the most comprehensive “Reconstruction of Regional and Global Temperature for the Past 11,300 Years” ever done reveals just how stable the climate has been — and just how destabilizing manmade carbon pollution has been and will continue to be unless we dramatically reverse emissions trends.

Researchers at Oregon State University (OSU) and Harvard University published their findings today in the journal Science. Their funder, the National Science Foundation, has a news release:

With data from 73 ice and sediment core monitoring sites around the world, scientists have reconstructed Earth’s temperature history back to the end of the last Ice Age.

The analysis reveals that the planet today is warmer than it’s been during 70 to 80 percent of the last 11,300 years.

… during the last 5,000 years, the Earth on average cooled about 1.3 degrees Fahrenheit–until the last 100 years, when it warmed about 1.3 degrees F.

In short, thanks primarily to carbon pollution, the temperature is changing 50 times faster than it did during the time modern civilization and agriculture developed, a time when humans figured out where the climate conditions — and rivers and sea levels — were most suited for living and farming. We are headed for 7 to 11°F warming this century on our current emissions path — increasing the rate of change 5-fold yet again.
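The arithmetic behind those multipliers is simple enough to check. The sketch below redoes it using only the rounded figures quoted in this post (an illustration, not a recomputation from the study’s data):

```python
# Rough check of the multipliers quoted above, using only the rounded
# numbers in the text (not the Marcott et al. data themselves).
holocene_change_F = 1.3     # cooling over the ~5,000 years before the 20th century
holocene_years = 5000
recent_change_F = 1.3       # warming over roughly the last 100 years
recent_years = 100
projected_change_F = 7.0    # low end of the 7-11 degree F projection for this century
projected_years = 100

past_rate = holocene_change_F / holocene_years          # ~0.00026 F per year
recent_rate = recent_change_F / recent_years            # ~0.013 F per year
projected_rate = projected_change_F / projected_years   # ~0.07 F per year

print(f"Recent warming: ~{recent_rate / past_rate:.0f}x the long-term Holocene rate")
print(f"Projected warming: ~{projected_rate / recent_rate:.1f}x the recent rate")
```

Run as written, this prints roughly a 50-fold jump from the Holocene rate to the recent rate, and about another 5-fold jump to the projected rate, matching the figures in the text.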

By the second half of this century we will have some 9 billion people, a large fraction of whom will be living in places that simply can’t sustain them — either because it is too hot and/or dry, the land is no longer arable, the glacially fed rivers have dried up, or the seas have risen too much.

We could keep that warming close to 4°F — and avoid the worst consequences — but only with immediate action.

This research vindicates the work of Michael Mann and others showing that recent warming is unprecedented in the past 2000 years — the so-called Hockey Stick — and in fact extends that back to at least 4000 years ago. I should say “vindicates for the umpteenth time” (see “Yet More Studies Back Hockey Stick: Recent Global Warming Is Unprecedented In Magnitude And Speed And Cause“).

Lead author Shaun Marcott of OSU told NPR that the paleoclimate data reveal just how unprecedented our current warming is: “It’s really the rates of change here that’s amazing and atypical.” He noted to the AP, “Even in the ice age the global temperature never changed this quickly.”

And the rate of warming is what matters most, as Mann noted in an email to me:

This is an important paper. The key take home conclusion is that the rate and magnitude of recent global warmth appears unprecedented for *at least* the past 4K and the rate *at least* the past 11K. We know that there were periods in the past that were warmer than today, for example the early Cretaceous period 100 million yr ago. The real issue, from a climate change impacts point of view, is the rate of change—because that’s what challenges our adaptive capacity. And this paper suggests that the current rate has no precedent as far back as we can go w/ any confidence—11 kyr arguably, based on this study.

Katharine Hayhoe, an atmospheric scientist at Texas Tech University, told the AP:

We have, through human emissions of carbon dioxide and other heat-trapping gases, indefinitely delayed the onset of the next ice age and are now heading into an unknown future where humans control the thermostat of the planet.

Unfortunately, we have decided to change the setting on the thermostat from “Very Stable, Don’t Adjust” to “Hell and High Water.” It is the single most self-destructive act humanity has ever undertaken, but there is still time to aggressively slash emissions and aim for a setting of “Dangerous, But Probably Not Fatal.”

Joe Romm is the editor of Climate Progress, and a former physicist and Energy Department official.

Eliminate Violence Against Women and Girls? There’s An App for That https://scienceprogress.org/2013/03/eliminate-violence-against-women-and-girls-worldwide-there%e2%80%99s-an-app-for-that/ https://scienceprogress.org/2013/03/eliminate-violence-against-women-and-girls-worldwide-there%e2%80%99s-an-app-for-that/#comments Fri, 08 Mar 2013 17:41:51 +0000 Lindsay Rosenthal http://scienceprogress.org/?p=28022 Today is International Women’s Day, which falls during the 57th annual week-long session of the Commission on the Status of Women, or CSW, that is currently taking place at the U.N. headquarters in New York.

There could not be a more pressing theme than the one chosen for this year’s meeting: the elimination and prevention of all forms of violence against women and girls. Global estimates reveal that 1 billion women worldwide have experienced some form of physical or sexual violence. The problem has reached crisis proportions that we cannot afford to ignore.

Over the past few months alone, we have seen a number of tragic and highly publicized stories of gender-based violence across the globe. In October Malala Yousafzai, a young girl who insisted that she and her female peers have a right to an education, was gunned down and nearly killed by the Taliban for her activism. In December an NFL player for the Kansas City Chiefs shot and killed his girlfriend, Kasandra Perkins, before killing himself. In January protests erupted after the gang rape of a 23-year-old medical student in India, which ultimately led to her death. Last month in Papua New Guinea, a young woman was burned alive by a mob after she was accused of being a witch. On Valentine’s Day South African model Reeva Steenkamp was fatally shot by her boyfriend, Olympian Oscar Pistorius. And just last week a 15-year-old girl in the Maldives was sentenced to 100 lashes for having premarital sex.

These highly publicized acts of violence are only the tip of the iceberg: Every day in the United States, including today on International Women’s Day, more than three women will be murdered by their intimate partners. In Central America half of all women are at risk of domestic violence during their lifetime. Eastern Congo has been named the “rape capital of the world,” as 48 women are raped every hour.

This sea of statistics exposes the sheer depth and pervasiveness of gender-based violence and demonstrates the urgency of the crisis we face. But the numbers may also be so daunting as to give the impression that the violence women experience every day is inevitable. We know that it is not inevitable, and we can take steps to prevent it.

While there is no silver bullet that can create the deep-seated cultural shift in gender attitudes around the world that will be required to eliminate violence against women and girls, new technologies are bringing renewed hope to an old struggle. A major theme in the field of violence prevention in recent years has been to identify the ways that technology can be leveraged to empower individuals and communities to work toward preventing violence and to more effectively disrupt existing violence by connecting victims with needed services in their communities.

Recent technological advances in gender-based violence prevention

Governments, the private sector, and health and technology organizations around the world have joined together to bring the power of new media to bear on violence prevention and other public-health initiatives. Mobile phones have been used for everything from suicide prevention, to preventing elder abuse, to helping organize antibullying campaigns.

Here are a few examples of remarkable efforts that have focused specifically on preventing gender-based violence:

  • Last year the World Health Organization hosted a “Hackathon Against Domestic Violence,” where more than 350 web developers collaborated with each other to figure out innovative ways to raise awareness about domestic violence through developing new apps. The winning team built an anonymous cyberspace forum for victims to learn from and share their experiences without having to give up their privacy. Other winning prototypes included a web and SMS-based app to alert trusted friends and family in the case of teenage girls being taken abroad and an SMS and web-integrated hotline that provides information on gender-related violence and how to report an incident.
  • In 2011 the White House launched “Apps Against Abuse,” an initiative that challenged developers to come up with ideas for ways mobile phones could be leveraged to help young women and men take a proactive role in preventing dating violence and sexual assault. The winner of the competition was an iPhone app called “Circle of 6,” which makes it easy for a person to quickly reach their circle of supporters and let them know where they are and what they need. The app uses text messaging to contact a young woman’s chosen network, uses GPS to locate her if she needs help, and connects her to reputable domestic violence organizations if needed.
  • The Institute of Medicine and the National Research Council convened a conference in 2011 of international experts on Communications and Technology for Violence Prevention and produced a book surveying the landscape of new media for violence prevention. The book highlights goals for developing and evaluating new media tools to identify the safest and most effective ways to implement technological strategies for violence prevention. The conference was a tremendous collaborative effort between a number of private and public organizations, and the book has provided the most comprehensive resource available for technology in the field of violence prevention.

Technology is already helping to prevent violence against women

While these conferences and competitions hold great promise for leveraging technology to prevent violence against women in the future, technology deployed around the world today is already making an impact.

Real-time violence mapping tools: In Egypt, where 83 percent of women have been exposed to sexual harassment, a tool called HarassMap receives reports of sexual harassment through SMS messages and uploads them in real time to a map that shows where sexual harassment is happening in Cairo. The map helps women avoid harassment and helps local authorities identify "hotspots" where harassment is concentrated so they can intervene effectively. The project also connects women with available resources in their community. Similar programs called "Maps4Aid" and "Street Watch Palestine" are being used in India and Palestine, respectively, to track violence against women and aid interventions.
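To make the pattern concrete, here is a minimal sketch of how such a report-to-map pipeline might be wired up. It is a hypothetical illustration only; the function names, message format, and toy gazetteer are assumptions, not HarassMap’s actual code:

```python
# Hypothetical sketch of the crowd-mapping pattern described above: an
# incoming SMS report is geocoded and appended to a shared map layer in
# real time. All names and the toy gazetteer are illustrative; this is
# not HarassMap's actual implementation.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Report:
    neighborhood: str
    description: str
    lat: Optional[float]
    lon: Optional[float]
    received_at: str

# Stand-in for a real geocoding service or gazetteer lookup.
GAZETTEER = {"Downtown Cairo": (30.0444, 31.2357), "Giza": (30.0131, 31.2089)}

def handle_incoming_sms(body: str, map_layer: list) -> Report:
    """Parse an SMS of the form '<neighborhood>: <description>' and plot it."""
    neighborhood, _, description = body.partition(":")
    lat, lon = GAZETTEER.get(neighborhood.strip(), (None, None))
    report = Report(
        neighborhood=neighborhood.strip(),
        description=description.strip(),
        lat=lat,
        lon=lon,
        received_at=datetime.now(timezone.utc).isoformat(),
    )
    map_layer.append(report)  # a real system would push this to a live web map
    return report

# Example: one report arrives and is immediately visible on the shared layer.
layer = []
handle_incoming_sms("Downtown Cairo: harassment near the metro exit", layer)
print(len(layer), layer[0].neighborhood)
```

A real deployment would also aggregate reports into the "hotspots" described above and take care to protect reporters’ identities.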

In addition to helping with reporting and intervention, these programs help raise awareness and break the culture of silence around violence against women by allowing them to report attacks in a visible public forum.

Texting: Texting has proved to be a cheap and effective way for victims of intimate partner violence to reach out for help. In Ohio an SMS service called “FamilyFirst” was set up to allow victims to silently report incidents to a crisis worker or police officer without having to actually make a phone call. The program, which costs $380 to set up, processed thousands of text messages in its first year and helped to convict 18 abusers.

Public awareness campaigns and organizing tools: The power of social media to raise awareness alone should give us hope. While stories like those of Malala Yousafzai are tragic, there has never before been a moment when it was possible for her story and struggle to be tweeted to the world by a public figure with as much reach as The New York Times reporter Nicholas Kristof, who equated her fight for gender equality with “the campaigns against slavery in the 19th century and against totalitarianism in the 20th century.” There has never been a moment—until now—when Malala’s neighbors would have had access to social-media outlets that they could use to organize public vigils in her honor that would have a global witness. The ability and revolutionary potential of largely egalitarian social platforms to raise awareness and allow women’s stories to be heard around the world should not be underestimated.

Other types of public awareness campaigns include Liz Claiborne’s “Love Is Not Abuse” iPhone app that is designed to educate the public about teen dating violence. With this app, parents can sign up to receive digital simulations of harassment through text, phone, and social media in real time—as examples of what their child might experience in an abusive relationship—in order to educate them about what to look out for when their teens are dating.

More traditional targeted public-service announcements have also been used effectively in many countries. In India, for example, a public service campaign called “Bell Bajao”—or Ring the Bell—encourages neighbors who hear a woman being battered to knock on her door and ask something like “Can I borrow a cup of sugar?” as a nonaggressive way to disrupt the event, and to let the abuser know that the community has heard the violence and is watching his actions.

Other groups have mobilized to encourage girls and their communities to proactively develop ways to employ new media to empower themselves. A movement called “Take Back the Tech!”—a pun on “Take Back the Night,” a common organizing cry to raise awareness about sexual assault on college campuses—encourages 16 days of gender-based activism, from November 25 to December 10, using technology to end violence against women.

Information technology has also increased transparency and accountability for violence in contexts where women’s liberty and physical safety are at stake. The Arab Spring is one of many examples in which new media has contributed to political organizing and amplified populist voices, with women playing a key role in advocating for themselves as part of a larger liberation movement.

Opportunities and challenges in new media and violence prevention

The gender gap in the digital divide: While 9 out of 10 women report feeling safer with a mobile phone, there is a significant gap in mobile phone ownership between men and women worldwide. Women in the Middle East and Africa are 25 percent less likely than men to have a mobile phone, while women in South Asia are 37 percent less likely than men to own a mobile phone. And these gaps are even greater among the poor.

Similar gender gaps exist for other forms of connectivity such as access to the Internet. A major obstacle to executing violence prevention through technology will be the extent to which women can gain access to technological devices in hierarchical societies where they may not be economically or culturally empowered to obtain them.

Nonetheless, three-quarters of the world’s population currently has access to a mobile phone, and there are vast opportunities for helping the women and girls who do have access to mobile phones.

The power of big data: Data collected through various media tools will help experts across disciplines develop a better understanding of the factors underlying violence. The use of large datasets can help identify predictors of domestic and sexual violence and enable the development of more effective and collaborative strategies for intervention. The book published by the Institute of Medicine and the National Research Council has suggested direction for research and evaluation based on data that has been collected in recent years.

A double-edged sword: While new technologies such as those that include GPS tracking systems can increase women’s safety, they can also be used by abusers to have further control over their victims. Incidents of cyber stalking and digital abuse have been widely reported. It’s important that we consider collateral consequences in the development and implementation of prevention and intervention programs that utilize technological tools.

The United States and violence against women

Yesterday women in the United States celebrated as President Barack Obama signed the Violence Against Women Act, or VAWA, after the bill was caught up in a lengthy political battle. House Republicans eventually dropped their opposition to provisions of the bill expanding protections for Native American women, immigrant women, and the LGBT community.

But the victory was bittersweet since the political battle would not have happened in the past. The Violence Against Women Act has enjoyed bipartisan support since it was first enacted in the 1990s and the recent politicization of the law is disheartening at a time when the needs of women are so great.

This International Women’s Day we are also reminded that our government has fallen behind in leadership on violence against women and other women’s rights issues in the international community. The United States is one of the only countries that still has not ratified the Convention on the Elimination of All Forms of Discrimination Against Women, or CEDAW, an embarrassment that keeps us in the company of only five other countries: Iran, Somalia, Sudan, and the Pacific island nations of Palau and Tonga.

In recent years advocates have been successful in challenging the assumption that violence against women is inevitable and have motivated the world to see that it is in fact preventable when individuals and communities step up and intervene. New media is playing an important role in facilitating that process and the United States could be a leader in maximizing its effectiveness and realizing the potential of new media to help prevent violence. But that will only happen if we continue to demand that our government steps up to the plate for women and girls.

Lindsay Rosenthal is a Research Assistant with the Women’s Health and Rights and Health Policy teams at the Center for American Progress.

How Corporations Score Big Profits By Limiting Access To Publicly Funded Academic Research https://scienceprogress.org/2013/03/how-corporations-score-big-profits-by-limiting-access-to-publicly-funded-academic-research/ https://scienceprogress.org/2013/03/how-corporations-score-big-profits-by-limiting-access-to-publicly-funded-academic-research/#comments Sun, 03 Mar 2013 20:48:22 +0000 Andrea Peterson http://scienceprogress.org/?p=28009 Andrea Peterson via Think Progress.

Here’s how the academic publishing industry works: Academics do research (frequently supported by public funds) and submit that research to journals, often paying “$600-$2,000 to either the publisher or the academic society that owns the journal” for the privilege of publication. Then journals send the research back out to other academics to be reviewed (typically pro bono; a 2008 study estimated the worldwide worth of unpaid peer review at £1.9 billion a year), and the (often for-profit) journal publishers sell access to the published research, mostly to the academic institutions that do the majority of basic research.

The system is big business: The largest of the for-profit academic publishers, Elsevier, reportedly earned over $1 billion in profits in 2011, with a profit margin around 35 percent and 71 percent of its revenue coming from academic customers like university libraries.

But the rapid inflation of journal subscription prices (the per-subscription cost rose by 215 percent between 1986 and 2003) has left many of those universities struggling to keep up. In a statement last spring, the Harvard Faculty Council called rising costs to maintain access to scholarly works “untenable,” and the University of California San Francisco Library spends 85 percent of its collection budget on journal subscriptions, but “[d]espite cancelling the print component of more than 100 journal subscriptions in 2012 to keep up with a budget reduction, [their] costs still increased by 3 percent.”

This major disconnect between who funds and produces the research and who controls the final product has led to a flourishing Open Access movement with broad support among private and public academic institutions, focused on using technological innovations to democratize access to scholarly research and correct what they see as imbalances in the current system through reform on local and national levels. One such national reform they welcomed was the White House Office of Science and Technology Policy memorandum outlining a plan to open up public access to some federally funded research.

ThinkProgress’ coverage of that announcement drew criticism from an executive at Elsevier:

When reached for comment, Elsevier head of Corporate Relations Tom Reller agreed with the comment and confirmed that its author, Smith, is VP for Global Internal Communications for Reed Elsevier subsidiary Elsevier, but referred questions about the company’s support of the Open Access movement to its website and a recent statement of support for the White House’s proposal. Elsevier’s website says the company “will continue to identify access gaps, and work towards ensuring that everyone has access to quality scientific content anytime, anywhere.”

But their parent company’s lobbying disclosures in 2012 and members of the Open Access community suggest a very different position. When asked over email if they have seen Elsevier and many of the for-profit academic publishers actively cooperate with the Open Access movement on advancing public access to federally funded research, Heather Joseph, the Executive Director of the Scholarly Publishing & Academic Resources Coalition, or SPARC, balked at the suggestion:

Quite the opposite. SPARC and the Open Access community spent the first eight weeks of 2012 fighting The Research Works Act (H.R 3699) — a bill introduced into the House of Representatives with the sole aim of overturning the highly successful NIH Public Access Policy, and prohibiting other Federal Agencies from enacting similar policies. Elsevier and the American Association of Publishers were two of only three organizations who publicly endorsed the bill.

If this was the first time they took this tactic, I might be tempted to cut them some slack. But it was a repeat performance; in 2008, they tried the same thing with “The Fair Copyright in Research Works Act (H.R. 801)” — a bill that tried to amend U.S. copyright code to make the NIH Policy — and policies like it — illegal.

According to the U.S. Senate Lobbying Database, Elsevier’s parent company Reed Elsevier spent $1,420,000 lobbying the U.S. government in 2012. Reed Elsevier’s in-house lobbying team disclosures and those from the Podesta Group listing Reed Elsevier as a client corroborate Joseph’s comments about the company’s support for The Research Works Act; according to news reports, Elsevier withdrew its support only after a boycott from academic communities. That boycott continues today, and has attracted over 13,000 scholars and academics who object to Elsevier’s business practices.

Reed Elsevier lobbied OSTP on “[c]opyright issues related to scientific, technology, and medical publications” during the run up to the White House’s Open Access announcement and their in-house lobbying team reported working on “[i]ssues related to science, technical, medical and scholarly publications” and on “all provisions” of the Federal Research Public Access Act, or FRPAA–a proposal similar to the recently introduced Fair Access to Science and Technology Research Act, also known as FASTR, that would have required federal agencies with annual extramural research budgets of $100 million or more to provide the public with online access to research manuscripts stemming from funded research no later than six months after publication in a peer-reviewed journal.

Elsevier was one of 81 publishers to sign an Association of American Publishers, or AAP, letter opposing FRPAA, with AAP President and CEO Tom Allen calling it “little more than an attempt at intellectual eminent domain, but without fair compensation to authors and publishers.” Remember, these publishers claiming to be concerned about “fair compensation to authors” are the same ones that often charge those authors publication fees.

As Reller noted to ThinkProgress, the sum total of Reed Elsevier’s 2012 lobbying expenditures represents all lobbying done in support of its business ventures, and its disclosures list a number of bills unrelated to Open Access. Companies are not required to disclose what proportion of their total lobbying is spent on which topics. We do know that Elsevier, the corporate subsidiary involved with academic publishing, accounted for over 47 percent of Reed Elsevier’s adjusted operating profits in 2011.

While AAP released a statement in support of the White House’s Open Access memorandum, its comments praised how the plan only included guidelines for releasing research, not mandates, saying the policy’s success is dependent on “how the agencies use their flexibility to avoid negative impacts” on the current system and calling it fair “[i]n stark contrast to angry rhetoric and unreasonable legislation offered by some” — a reference to the Open Access movement. Elsevier’s similar response to the plan praised it for promoting “gold open access funded through publishing charges and flexible embargo periods for green open access” and dismissed Open Access legislative proposals, saying it would like “open-access advocates [to] withdraw their support from unnecessary and divisive open access legislation now introduced in the US at federal level.”

There’s ample room to credit the academic publishing industry’s history of serving as the shepherd of scholarly research — but technology has dramatically changed researchers’ ability to share knowledge without intermediaries. There is an ideological debate at hand, and it’s about whether the public is better served by expanding access to the research it funds or by protecting the interests of companies that have a substantial financial stake in limiting that access.

The Ten Principles of 3D Printing https://scienceprogress.org/2013/03/the-ten-principles-of-3d-printing/ https://scienceprogress.org/2013/03/the-ten-principles-of-3d-printing/#comments Fri, 01 Mar 2013 17:43:32 +0000 Hod Lipson http://scienceprogress.org/?p=27987 In Fabricated: The New World of 3D Printing, Lipson and Kurman lay out the 10 most important things about 3D printing that make it special—and the 10 reasons why it really could be the next big thing in American manufacturing. Editor’s Note: Additive manufacturing, also known as 3D printing, is a new manufacturing technology of increasing relevance across many industries and across the globe. 3D printers work in a similar way to standard inkjet printers, except that they can use materials like plastics, carbon fiber, or titanium to print 3-dimensional objects instead of 2-dimensional documents.

With prices for the technology decreasing rapidly and quality on the rise, additive manufacturing presents tremendous opportunity  for innovation in industries as diverse as aerospace, consumer goods, and medicine, and has been heralded as the technology that will save American manufacturing. See Science Progress’s primer on 3D printing here, and our analysis of the Obama administration’s latest 3D printing policy proposal here.

By Hod Lipson and Melba Kurman, authors of Fabricated: The New World of 3D Printing.

Predicting the future is a crapshoot. When we were writing this book and interviewing people about 3D printing, we discovered that a few underlying “rules” kept coming up. People from a broad array of industries, backgrounds, and levels of expertise described similar ways that 3D printing helped them get past key cost, time, and complexity barriers.

We have summarized what we learned. Here are ten principles of 3D printing we hope will help people and businesses take full advantage of 3D printing technologies.

Principle one: Manufacturing complexity is free. In traditional manufacturing, the more complicated an object’s shape, the more it costs to make. On a 3D printer, complexity costs the same as simplicity. Fabricating an ornate and complicated shape does not require more time, skill, or cost than printing a simple block. Free complexity will disrupt traditional pricing models and change how we calculate the cost of manufacturing things.

Principle two: Variety is free. A single 3D printer can make many shapes. Like a human artisan, a 3D printer can fabricate a different shape each time. Traditional manufacturing machines are much less versatile and can only make things in a limited spectrum of shapes. 3D printing removes the overhead costs associated with re-training human machinists or re-tooling factory machines. A single 3D printer needs only a different digital blueprint and a fresh batch of raw material.

Principle three: No assembly required. 3D printing forms interlocked parts. Mass manufacturing is built on the backbone of the assembly line. In modern factories, machines make identical objects that are later assembled by robots or human workers, sometimes continents away. The more parts a product contains, the longer it takes to assemble and the more expensive it becomes to make. By making objects in layers, a 3D printer could print a door and attached interlocking hinges at the same time, no assembly required. Less assembly will shorten supply chains, saving money on labor and transportation; shorter supply chains will be less polluting.

Principle four: Zero lead time. A 3D printer can print on demand when an object is needed. The capacity for on-the-spot manufacturing reduces the need for companies to stockpile physical inventory. New types of business services become possible as 3D printers enable a business to make specialty — or custom — objects on demand in response to customer orders. Zero-lead-time manufacturing could minimize the cost of long-distance shipping if printed goods are made when they are needed and near where they are needed.

Principle five: Unlimited design space. Traditional manufacturing technologies and human artisans can make only a finite repertoire of shapes. Our capacity to form shapes is limited by the tools available to us. For example, a traditional wood lathe can make only round objects. A mill can make only parts that can be accessed with a milling tool. A molding machine can make only shapes that can be poured into and then extracted from a mold. A 3D printer removes these barriers, opening up vast new design spaces. A printer can fabricate shapes that until now have been possible only in nature.

Principle six: Zero skill manufacturing. Traditional artisans train as apprentices for years to gain the skills they need. Mass production and computer-guided manufacturing machines diminish the need for skilled production. However, traditional manufacturing machines still demand a skilled expert to adjust and calibrate them. A 3D printer gets most of its guidance from a design file. To make an object of equal complexity, a 3D printer requires less operator skill than does an injection molding machine. Unskilled manufacturing opens up new business models and could offer new modes of production for people in remote environments or extreme circumstances.

Principle seven: Compact, portable manufacturing. Per volume of production space, a 3D printer has more manufacturing capacity than a traditional manufacturing machine. For example, an injection molding machine can only make objects significantly smaller than itself. In contrast, a 3D printer can fabricate objects as large as its print bed. If a 3D printer is arranged so its printing apparatus can move freely, a 3D printer can fabricate objects larger than itself. A high production capacity per square foot makes 3D printers ideal for home use or office use since they offer a small physical footprint.

Principle eight: Less waste by-product. 3D printers that work in metal create less waste by-product than do traditional metal manufacturing techniques. Machining metal is highly wasteful: an estimated 90 percent of the original metal gets ground off and ends up on the factory floor. 3D printing wastes far less metal. As printing materials improve, “net shape” manufacturing could be a greener way to make things.

Principle nine: Infinite shades of materials. Combining different raw materials into a single product is difficult using today’s manufacturing machines. Since traditional manufacturing machines carve, cut, or mold things into shape, these processes can’t easily blend together different raw materials. As multi-material 3D printing develops, we will gain the capacity to blend and mix different raw materials. New, previously inaccessible blends of raw materials offer us a much larger, mostly unexplored palette of materials with novel properties or useful types of behaviors.

Principle ten: Precise physical replication. A digital music file can be endlessly copied with no loss of audio quality. In the future, 3D printing will extend this digital precision to the world of physical objects. Scanning technology and 3D printing will together introduce high resolution shapeshifting between the physical and digital worlds. We will scan, edit, and duplicate physical objects to create exact replicas or to improve on the original.

Some of these principles already hold true today. Others will come true in the next decade or two (or three). By removing familiar, time-honored manufacturing constraints, 3D printing sets the stage for a cascade of downstream innovation. In the following chapters we explore how 3D printing technologies will change the ways we work, eat, heal, learn, create and play. Let’s begin with a visit to the world of manufacturing and design, where 3D printing technologies ease the tyranny of economies of scale.
Co-authors Hod Lipson and Melba Kurman are leading experts on 3D printing, frequently speaking and advising on this technology to industry, academia, and government. Lipson’s lab at Cornell University has pioneered interdisciplinary research in 3D printing, product design, artificial intelligence, and smart materials. Kurman is a technology analyst and business strategy consultant who writes about game-changing technologies in lucid, engaging language. Excerpted with permission from the publisher, Wiley, from Fabricated: The New World of 3D Printing by Hod Lipson and Melba Kurman. Copyright © 2013.

Suppressed South Carolina Climate Change Report Warns of Big Impacts https://scienceprogress.org/2013/02/suppressed-south-carolina-climate-change-report-warns-of-big-impacts/ https://scienceprogress.org/2013/02/suppressed-south-carolina-climate-change-report-warns-of-big-impacts/#comments Tue, 26 Feb 2013 16:43:57 +0000 Shiva Polefka http://scienceprogress.org/?p=27975 South Carolina news outlet TheState.com reported on Sunday that an official, comprehensive assessment of dramatic climate change impacts looming large in South Carolina’s future was buried and barred from release, apparently due to political pressure.

According to TheState.com, the report, completed by a working group of 18 senior state scientists under the auspices of the South Carolina Department of Natural Resources, or DNR, found that the Palmetto State faces average temperature rise of as much as 9°F over the next 70 years, bringing with it increases in wildlife disease, loss of habitat for wild game, degradation of the state’s valuable recreational and commercial fisheries, increases in “dead zones” off the state’s coast, and salt water intrusion into coastal rivers and freshwater aquifers.

The report also issued a dramatic warning that as South Carolina’s climate warms, it could face in-migration of harmful invasive species from Florida, including piranha and Asian swamp eels.

Even more alarming than piranhas and eels, however, is the possibility that South Carolina’s conservative state government may have suppressed the report—intended for public education and planning purposes—for political reasons.

Despite detailing major risks to vital state industries and natural resources, the document was never released after its completion in 2011.  TheState.com reports that it recently “obtained” a copy but that it otherwise remains unavailable to the public.  While the previous head of DNR, John Frampton, reportedly wanted to release the document for public review, he retired suddenly before the release occurred, after what he claimed was pressure to resign from an administrative appointee of Governor Nikki Haley.

According to TheState.com, DNR’s new director says the agency’s “priorities have changed” to matters including expansion of the ports of Savannah and Charleston and a new gold mine.

The developments in South Carolina resemble the woeful political meddling in its northern neighbor’s strategic planning for climate change. In 2010 a study from the State of North Carolina’s Panel on Coastal Hazards used sea level rise projections of approximately one meter by 2100—in line with the National Academy of Sciences and other coastal states including Maine, Florida, and California—to estimate that the state should prepare for inundation and increased flood risk for more than 2,000 square miles of coastal lands. In response, North Carolina’s legislature passed a bill in 2012 mandating that coastal counties ignore the best available science and instead follow a formula using “historical data” that projects sea level rise of no more than 8-12 inches by 2100.

Unfortunately, there has been no mention of whether South Carolina’s DNR is integrating the two-foot sea level rise reportedly predicted in its 2011 climate change report into the state’s new port expansion plans.

Update:  TheState.com has posted the original S.C. Department of Natural Resources climate change report to its website and published additional quotes from Frampton in a follow-up article.  “From a wildlife and natural resources standpoint, climate change is definitely going to have an impact,” it quotes Frampton as saying. “I would liked to have seen the DNR be a leader.”

Shiva Polefka is a research associate in the Ocean Program at the Center for American Progress.  Tiffany Germain, ThinkProgress War Room Senior Climate/Energy Researcher, contributed research.

Keeping Good Research from Going Bad https://scienceprogress.org/2013/02/keeping-good-research-from-going-bad/ https://scienceprogress.org/2013/02/keeping-good-research-from-going-bad/#comments Fri, 22 Feb 2013 22:25:12 +0000 Oliver Kendall http://scienceprogress.org/?p=27956 The White House Office of Science and Technology Policy released yesterday the second installment of a policy initiative to address research that, while being done for the right reasons, could be used to cause significant harm.

Dual Use Research of Concern—or DURC, as such research has come to be known—has risen quickly on the agenda of the Obama administration as life scientists have become increasingly capable of manipulating genetic material in microbes. This ability can speed the development of medicines and vaccines, but it can also be used to create particularly dangerous pathogens. Release of the draft policy—which is open for public comment for the next 60 days—marks the latest effort by the government in a delicate process of figuring out how to balance safety and security interests with concerns about stifling scientific freedom and technological progress. This new policy, which is focused on federally funded studies, could also serve as a harbinger for future government actions pertaining to this type of privately funded research.

Background

The newly proposed policy, which is now open for public comment, has its roots in the fall of 2011, when the world was surprised to learn that the editors of two major science journals (Nature and Science) were poised to publish two controversial studies. Researchers in two separate institutions had—with funding from the National Institutes of Health—created strains of potentially deadly H5N1 (bird flu) virus with a dangerously enhanced ability to spread. Other scientists had taken similar steps. In 2002, for example, researchers in New York used commercially available genetic material to create infectious polioviruses from scratch. And in 2005 researchers at the Centers for Disease Control recreated the deadly 1918 influenza virus, which caused one of the most devastating pandemics in human history.

Although federally funded, the work done in these influenza studies apparently took the government by surprise, causing the Obama administration’s health officials and scientists to confront the issue of how to respond to scientists’ growing ability to create deadly diseases with relative ease.

Policies currently in place

Some relevant protections are already in place. Since 1997 the federal government has maintained a list of “Select Agents”—biological agents and toxins declared by the Department of Health and Human Services or the Department of Agriculture to have the potential to cause a serious risk to public health and safety. The Centers for Disease Control runs the “Select Agent Program,” which regulates labs that possess, use, or transfer the agents within the United States. But the authors of the Select Agent Program did not anticipate the possibility that scientists would gain the ability to create such agents—or even more dangerous ones—from relatively benign organisms or from easily obtainable genetic components.

To address this gap after the flu-research issue arose, the government brought together experts from various health- and security-related agencies to consider how to handle risks posed by DURC, which they defined as life science research that could reasonably be anticipated to provide knowledge or technologies that could be misused to pose a significant risk to public health and safety, agriculture, the environment, or materiel or national security. In March 2012 the government released a multistep program that specifies how funding agencies should deal with potential DURC.

The first step of this program requires that federal agencies identify research—both newly proposed and ongoing—that specifically involves any of the 15 “Tier 1,” or most dangerous, agents on the Select Agent List. The second step is to identify any research involving those agents that could be expected to result in any of the following:

  • Enhancement of the harmful consequences of the agent
  • Disruption to the immunity or effectiveness of an immunization to the agent
  • Conferral of drug resistance to the agent, or any other change that makes the agent harder to detect
  • Increase in the stability or transmissibility of the agent
  • Alteration to the host range of the agent—an increase, for example, in the number of vulnerable species
  • Enhancement of the vulnerability of a potential victim population of the agent
  • Reconstitution of an agent that no longer exists in the natural world

Under this policy, any research fulfilling these criteria will lose funding unless the researchers craft an acceptable plan mitigating potential risks.

The March 2012 policy took a top-down approach, requiring federal agencies to assess for potential DURC research under their umbrella that was already underway and work with researchers to create plans for risk mitigation. The policy released Thursday applies the same philosophy from the bottom up: Instead of federal agencies bankrolling projects before assessing them for DURC, in order to receive funding the researchers and institutions must now follow the same protocol for risk identification and mitigation required of the agencies in their reviews under the March 2012 policy. If the research qualifies as DURC, risk mitigation will be written into the contract under which researchers and institutions receive federal grants.

Implications

If implemented, the draft policy will affect huge numbers of researchers and institutions, and it remains to be seen how well they will respond to the proposal. Institutions may not appreciate the increased responsibility of reviewing research proposals for DURC, and scientists may chafe at the new guidelines; no one likes to feel as though they are not trusted, and many researchers have long viewed it as their right to pursue basic knowledge without much oversight. In this view, extra layers of oversight could stifle scientific progress. Just how burdensome this policy is will likely depend on what degree and form of risk mitigation is deemed acceptable by those granting funding.

From the perspective of the Obama administration—which has felt pressure from members of Congress who worry that certain federally funded research could threaten national security—the policy as proposed is much less burdensome than it could be. The government could have required mitigation plans for research involving any select agents instead of just Tier 1 agents, or it could have even called for classification or other restrictions on communicating the results of DURC, which would have had the potential to slow the information-sharing among scientists that is so important for scientific momentum. Furthermore, by introducing the policy in draft form with a 60-day comment period, the Obama administration appears to be signaling its willingness to be flexible and open to changes. The government also plans to gather data on how well the policy is working over the next several years and—ultimately—incorporate lessons learned.

Private-sector oversight

Though this latest policy effectively addresses DURC where government funding is involved, a comprehensive approach to handling DURC in the private sector appears to be some time away, in part because it would require congressional action—a long shot, considering current gridlock. An intensive lobbying campaign could reasonably be expected. Since such a policy would address situations where the government does not provide the money—and could not therefore threaten to pull funding as an incentive for compliance or as a punishment for noncompliance—there would have to be some independent enforcement mechanism. The Select Agent Program offers one approach, where violators potentially face fines and jail time. But industry representatives would undoubtedly see such a threat as having the potential to stifle American innovation in the life sciences—and if such a federal policy is not drawn clearly, they could very well be right.

What to do in the meantime

While it will be some time before the policy on federally funded DURC is finalized—and significantly longer before any such policy is likely to be enacted for the private sector—there are immediate steps that can be taken by the government to address the biosafety and biosecurity concerns raised by DURC. The most concrete and least burdensome way to do this involves raising awareness within the life science community and working with existing institutions to encourage best practices.

Lessons can be learned from the physics community. During the Cold War, much of the research done by physicists into nuclear technology was classified because of the imminent risk such knowledge posed. No one is calling for widespread classification in the field of genetic engineering—for one thing, the tools are far simpler and more accessible than those required for nuclear bomb-making—but life scientists would do well to take some tips from the physics community, which has become accustomed to thinking about the national security implications of its work. This consciousness is still underdeveloped within the life science community, despite the growing capacity of the life sciences to wreak havoc on a large scale.

Awareness of safety and security concerns could be encouraged in many ways, for example through short online tutorials like the ones currently used to educate health care workers about patient protections under the Health Insurance Portability and Accountability Act, or HIPAA, and to reduce the risks posed by blood-borne pathogens. Such tutorials might not tell scientists anything they do not already know, but they could help keep the threat of DURC in their day-to-day consciousness in much the same way that the “See Something, Say Something” campaign implemented by the Department of Homeland Security has made people more likely to notice other potential risks.

Increasing awareness will only help so much, though, in cases where institutions have no reporting mechanism for researchers to register concerns. Institutions should have mechanisms in place where researchers can raise questions about their own or others’ work in a nonthreatening way. The goal must be to assess risks and address them without triggering an overly burdensome response that could delay the development of defenses against real biothreats.

Another topic to confront immediately within the government and the scientific community is how best to mitigate the risk when DURC is identified—something for which the new policy provides little guidance. As the government was reminded in the H5N1 influenza situation that brought this discussion to the fore, it is difficult to know where limits on scientific knowledge should be placed. The two previously mentioned journals, for example, published virtually all of the H5N1 research, leaving out a few details to make it more difficult to replicate for purposes of bioterror. If full-blown classification is not warranted, should large chunks of studies be redacted when published—and if so, wouldn’t this serve as an impediment to researchers? If America goes down that road, what authority will decide who has access to certain knowledge and who does not?

Conclusion

It’s widely appreciated that a major reason U.S. science has been so successful is that research in this country is relatively free of unwarranted restrictions. It is therefore imperative that unnecessary roadblocks are not put in the way of research—even as the life sciences make inroads into potentially worrisome terrain. Certain scientific advancements, though, do carry great potential to cause harm, which indicates a necessity for at least some minimal guidelines. Rightly or wrongly, public opinion is also a factor here, and as the intense media interest in the H5N1 research showed, some research is bound to appear to the general public as a cause for alarm. Scientists who fail to be sensitive to this fact risk triggering a crackdown on scientific freedoms by Congress or the Obama administration.

The Obama presidency has prided itself from the start on its respect for the vast importance of science—which comes as a relief, after science suffered years of neglect under the Bush administration. Moving forward, it will be incumbent upon the Obama administration—as well as on researchers and their institutions—to strike the right balance between safety and security concerns and the research freedoms that fuel scientific progress.

Oliver Kendall has worked for numerous political campaigns and organizations and studies political science at Macalester College in St. Paul, Minnesota.

Americans Ask White House For The Right To Unlock Their Cell Phones https://scienceprogress.org/2013/02/americans-ask-white-house-for-the-right-to-unlock-their-cell-phones/ https://scienceprogress.org/2013/02/americans-ask-white-house-for-the-right-to-unlock-their-cell-phones/#comments Thu, 21 Feb 2013 17:57:34 +0000 Andrea Peterson http://scienceprogress.org/?p=27949 Andrea Peterson, via Think Progress.

You probably don’t have as much control over your cell phone as you think: Thanks to a bizarre enforcement of the Digital Millennium Copyright Act that bars “circumventing digital locks“, consumers don’t have the right to unlock a phone they paid for — but a We The People petition that just passed the 100,000 signature response threshold asks the Obama administration to help fix this glaring consumer choice issue.

The petition provides a good summary of the situation:

The Librarian of Congress decided in October 2012 that unlocking of cell phones would be removed from the exceptions to the DMCA.

As of January 26, consumers will no longer be able to unlock their phones for use on a different network without carrier permission, even after their contract has expired.

Consumers will be forced to pay exorbitant roaming fees to make calls while traveling abroad. It reduces consumer choice, and decreases the resale value of devices that consumers have paid for in full.

The Librarian noted that carriers are offering more unlocked phones at present, but the great majority of phones sold are still locked.

We ask that the White House ask the Librarian of Congress to rescind this decision, and failing that, champion a bill that makes unlocking permanently legal.

As the petition notes, the heart of the issue is what rights consumers have over a product they own and whether the Librarian’s decision protects the profits of big name wireless carriers at the expense of those rights. The Librarian’s office has sided with consumers before on this issue: It granted exemptions for unlocking phones in 2006 and 2010, but following the implementation of the new decision, consumers could face up to $2,500 per unlocked phone in a civil suit and $500,000 or five years in prison in a criminal case where the unlocking is done for “commercial advantage” if carriers take the unlocker to court and win.

Despite the popularity of the petition, there is a jurisdictional dispute, as noted by Jon Healey at the Los Angeles Times: “The Library of Congress is a legislative branch agency, not one subject to presidential oversight” and “the law provides no avenue for appealing the librarian’s decisions.” And with the President’s legislative plate already filled to the brim with immigration reform and the looming sequester, it seems unlikely the administration will expend political capital on this issue — even if the situation does affect the 85 percent of Americans who own mobile phones.

The Other Aaron’s Law https://scienceprogress.org/2013/02/the-other-aaron%e2%80%99s-law/ https://scienceprogress.org/2013/02/the-other-aaron%e2%80%99s-law/#comments Mon, 18 Feb 2013 21:15:36 +0000 Andrea Peterson http://scienceprogress.org/?p=27943 Andrea Peterson, via Think Progress.

Just over a month after internet folk hero and activist Aaron Swartz ended his own life, a bipartisan group of lawmakers has introduced legislation that would make progress on a cause near and dear to his heart: Open access to publicly funded research. The Fair Access to Science and Technology Research Act (FASTR), introduced this week by Reps. Zoe Lofgren (D-CA), Mike Doyle (D-PA), and Kevin Yoder (R-KS) in the House and Senators John Cornyn (R-TX) and Ron Wyden (D-OR) in the Senate, “require[s] federal agencies with annual extramural research budgets of $100 million or more to provide the public with online access to research manuscripts stemming from funded research no later than six months after publication in a peer-reviewed journal,” building on the success of the National Institutes of Health’s (NIH) 2008 public access policy.

At the time of his death, Swartz faced a maximum sentence of decades in prison on charges related to his alleged downloading of nearly 5 million documents from the academic database JSTOR, in what many believe was an attempt to release the data. While efforts to reform the Computer Fraud and Abuse Act (CFAA), the law under which Swartz was being prosecuted, emerged quickly under the moniker “Aaron’s Law,” the introduction of FASTR is the first legislative effort since his death to address the open access movement — the effort to provide unrestricted access to peer-reviewed research online.

Here’s how academic publishing works: Research is largely done by members of university communities (frequently funded by the public) who submit their work to journals for publication (sometimes paying for the privilege). The journals then send that work back out to other academics to be reviewed and edited blind (usually pro bono), and the journals’ often for-profit publishers sell access to the published research back to university libraries.

While the largest of the for-profit academic publishers, Elsevier, made $1.1 billion in profits in 2011 with a profit margin of around 35 percent, libraries have struggled to afford rising subscription costs that drove up expenditures by a staggering 273 percent between 1986 and 2004. The Harvard Faculty Council released a statement on the crisis last year noting that the prices for online content from two major providers increased by around 145 percent over the last six years alone, saying “[m]any large journal publishers have made the scholarly communication environment fiscally unsustainable and academically restrictive.”

FASTR is not an outright solution to this broken system, but it is a substantive step in the right direction: because federal funding is the primary source of support for basic research in the U.S. (the majority of which is carried out by academic institutions), the bill would open access to a large share of new research. And there are signs that the open access movement is making dents in the academic publishing industry’s armor, like JSTOR’s Register & Read program. Neither that limited concession nor FASTR will fully bring about the world of free information Swartz envisioned, but taken together they are a sign that the world is slowly moving in the right direction.

How The Sequester’s R&D Cuts Will Hurt Science And Innovation https://scienceprogress.org/2013/02/how-the-sequester%e2%80%99s-rd-cuts-will-hurt-science-and-innovation/ https://scienceprogress.org/2013/02/how-the-sequester%e2%80%99s-rd-cuts-will-hurt-science-and-innovation/#comments Thu, 14 Feb 2013 18:26:11 +0000 Andrea Peterson http://scienceprogress.org/?p=27933 After President Obama’s State of the Union call to attain a “level of research and development not seen since the height of the Space Race,” universities are renewing their cry for a deal to avoid the so-called “sequester” in order to preserve federal research and development, or R&D, funds.

ScienceWorksForU.S., a project of the Association of American Universities, the Association of Public and Land-grant Universities, and The Science Coalition, is releasing videos from university leaders across the country about how the scheduled cuts will impact the U.S.’s long-term competitiveness, like this one from University of Kansas Chancellor Bernadette Gray-Little:

 

National investments in R&D as a percentage of discretionary public spending are down to around 9 percent today from a high of 17 percent in 1962, and the automatic cuts looming in the sequester threaten an 8.4 percent reduction to discretionary spending programs across the board. When you take into account both discretionary and non-discretionary spending, total R&D cuts from sequestration over the nine-year period will amount to $95 billion.

These cuts will hit universities especially hard because academic institutions perform a huge amount of the research that drives our economy, doing 53 percent of total basic research and 36 percent of all research funded by the U.S. government in 2009. But the $95 billion figure doesn’t reflect the true damage the cuts will do to the U.S. economy, because of the exponential impact innovation has in driving our economic growth. A report released last September by the Information Technology and Innovation Foundation estimates the sequester’s R&D cuts will reduce GDP by between $203 billion and $860 billion over nine years, and result in 200,000 job losses in 2013 alone.

While the economic impacts could be devastating, the numbers tell only part of the story: it’s almost impossible to calculate the added value of innovations fueled by federal R&D funding in the U.S., or how many of them — such as medical treatments, the internet, or cell phones — have fundamentally improved or saved the lives of millions.

Andrea Peterson is the Social Media and Online Analytics Editor at American Progress. Cross-posted at Think Progress Economy.

TIMELINE: U.S. Cybersecurity Policy in Context https://scienceprogress.org/2013/02/u-s-cybersecurity-policy-in-context/ https://scienceprogress.org/2013/02/u-s-cybersecurity-policy-in-context/#comments Wed, 13 Feb 2013 21:02:30 +0000 Andrea Peterson http://scienceprogress.org/?p=27922 President Barack Obama signed a long-rumored executive order and presidential directive on Tuesday aimed at strengthening the cybersecurity of critical infrastructure.

America’s enemies are “seeking the ability to sabotage our power grid, our financial institutions, and our air traffic control systems … and swipe our corporate secrets,” President Obama said on Tuesday night during his State of the Union address. Indeed, Secretary of Defense Leon Panetta once used the term “cyber-Pearl Harbor” to describe the looming threat we face.

These threats to both digital and physical infrastructure could not be more real. In 2007 the Department of Homeland Security demonstrated that hackers could take over a 5,000 diesel engine—the kind routinely used as backup generators in our power grid—and, using nothing but computer code, cause the machine to destroy itself. Using a similar technique, U.S. intelligence officials allegedly used a computer virus dubbed “Stuxnet” to sabotage more than 1,000 uranium enrichment centrifuges in Iran in 2010.

Unfortunately, the government’s past responses to these new and developing threats have been piecemeal and lacking in coordination. In the timeline below we outline the major policy initiatives that led us to yesterday’s executive order, and the cyber attack incidents that spurred them.

Yesterday’s actions are designed to accomplish two goals: expanding the sharing of cyber threat information between the government and critical-infrastructure companies, and establishing a framework of standards to reduce long-term cyber risks.

Specifically, the order and the directive implement a voluntary program for companies working in sectors that involve critical infrastructure, such as power grids, pipelines, or transportation operations; create new information-sharing programs under the Department of Homeland Security; clarify the role of various federal agencies in pursuing cyber resiliency; and task the National Institute of Standards and Technology with designing and implementing a framework to reduce long-term cyber risks.

This comes amidst a new and more aggressive stance by the Pentagon to weaponize cyberspace, and a similarly proactive stance evolving among private-sector actors.

In some ways, cyberspace is like the Wild West of our time—dangerous, difficult to police, and still largely unexplored. What is certain is that yesterday’s executive order will not be the end of this story. It is likely only the beginning.

Andrea Peterson is the Social Media and Analytics Editor at the Center for American Progress. Sean Pool is the Managing Editor of Science Progress. Jason Thomas contributed to the research for this timeline.

Government Audit Says The FCC Failed To Fix Network Security Holes https://scienceprogress.org/2013/02/government-audit-says-the-fcc-failed-to-fix-network-security-holes/ https://scienceprogress.org/2013/02/government-audit-says-the-fcc-failed-to-fix-network-security-holes/#comments Tue, 12 Feb 2013 13:21:17 +0000 Andrea Peterson http://scienceprogress.org/?p=27911 Andrea Peterson, via Think Progress.

Last week the Government Accountability Office (GAO) released an audit on the Federal Communications Commission’s (FCC) Enhanced Secured Network (ESN) project that questions the network security of the very agency that regulates online communications. Things are going so poorly with the project that the GAO couldn’t even release full findings to the public — instead, a separate report with limited distribution was prepared “making 26 recommendations associated with 21 findings to resolve technical information security weaknesses related to access controls and configuration management of the ESN.”

Sean Gallagher at Ars Technica explains the back story:

“In August of 2011, while in the middle of upgrading its network security monitoring, the Federal Communications Commission discovered it had already been hacked. Over the next month, the commission’s IT staff and outside contractors worked to identify the source of the breach, finding an unspecified number of PCs infected with backdoor malware.

After pulling the infected systems from the network, the FCC determined it needed to do something dramatic to fix the significant security holes in its internal networks that allowed the malware in. The organization began pulling together a $10 million “Enhanced Secured Network” project to accomplish that.”

But according to Gallagher, that $10 million plan was largely put together by Octo Consulting, and the GAO findings make it clear almost nothing went well:

“FCC’s efforts to effectively manage the ESN project were hindered by its inconsistent implementation of procedures for estimating costs, developing and maintaining an integrated schedule, managing project risks, and conducting oversight.”

The report concludes that, as a result of this mismanagement, the FCC did not implement appropriate security controls in the initial phase of the project and has not consistently implemented key security procedures for managing the program, so that the “FCC’s information remained at unnecessary risk of inadvertent or deliberate misuse, improper disclosure, or destruction” — essentially leaving the system, and thus sensitive internal FCC communications and information about the people and companies doing business with the FCC, vulnerable to the same sort of breach found in 2011 that prompted the Enhanced Secured Network project in the first place.

While the shortage of cybersecurity expertise in government is nothing new, the fact that the very agency responsible for regulating online communications was forced to resort to outside assistance to secure its networks — and just how spectacularly that outside assistance failed — is yet another wake-up call about the severity of the shortage and the real impacts it has on our government’s ability to do its job.

Announcement Raises Hopes About Cheaper-Than-Coal Solar Technology https://scienceprogress.org/2013/02/announcement-raises-hopes-about-cheaper-than-coal-solar-technology/ https://scienceprogress.org/2013/02/announcement-raises-hopes-about-cheaper-than-coal-solar-technology/#comments Mon, 11 Feb 2013 16:50:40 +0000 Sean Pool http://scienceprogress.org/?p=27900 David Roberts at Grist has reported on some interesting news that, if true, would be pretty significant: a new solar technology company claims to have designed a solar photovoltaic product capable of producing electrons as cheaply as coal can.

Before you read on, it’s important to note that there are many factors that help determine the final cost of electricity to the consumer (called the “levelized cost of electricity,” or LCOE), and that many of them are difficult to predict and vary widely from region to region. The global investment bank Lazard has for years maintained a methodology for estimating the LCOE of different energy sources based on a number of factors, including the cost of manufacturing the components and equipment, fuel costs, cost of capital, government incentives, land and water requirements, and electricity dispatch characteristics (i.e., how the technology fits into the bidding process for selling electrons to distributors on the grid).
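To make the concept concrete, here is a minimal sketch of how a simplified LCOE calculation combines a few of those inputs (capital cost, operating and fuel costs, financing via a discount rate, and annual output). The formula and all of the numbers are illustrative assumptions for a hypothetical plant, not Lazard’s actual methodology or V3Solar’s figures.

```python
def simple_lcoe(capital_cost, annual_om_cost, annual_fuel_cost,
                annual_mwh, discount_rate, lifetime_years):
    """Very simplified levelized cost of electricity, in dollars per MWh.

    LCOE = (discounted lifetime costs) / (discounted lifetime generation).
    Real-world estimates such as Lazard's also model incentives, taxes,
    output degradation, and dispatch characteristics.
    """
    discounted_costs = capital_cost  # capital is assumed to be spent up front
    discounted_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** -year
        discounted_costs += (annual_om_cost + annual_fuel_cost) * factor
        discounted_energy += annual_mwh * factor
    return discounted_costs / discounted_energy

# Illustrative, made-up inputs: a 1 MW solar array with no fuel cost.
print(simple_lcoe(capital_cost=2_000_000, annual_om_cost=20_000,
                  annual_fuel_cost=0, annual_mwh=1_750,
                  discount_rate=0.08, lifetime_years=25))
```

With these made-up inputs the sketch returns roughly $118 per megawatt-hour (about 12¢ per kilowatt-hour), which helps illustrate why small changes in capital cost or financing assumptions can swing LCOE estimates substantially.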

But given the uncertainty inherent in predicting each of these for new technologies, these estimates must be taken with a grain of salt. It is very difficult to know the true LCOE for a technology that has not yet been widely deployed. Further, the LCOE estimates released by the company, V3Solar, were not calculated by Lazard, though they were confirmed by an external technical review.

The long and the short of it is that this announcement is another reason for hope that cheap-as-dirt solar may be around the corner, but we’ll still have to wait and see. The rest of Roberts’s post is pasted below, or you can read it over at Grist:

Over time I’ve grown more and more suspicious of stories about breakthrough technologies. I always think back to those heady days of EEStor, the guys who were going to make a battery that would revolutionize grid storage and electric cars alike. “EEStor CEO says game-changing energy storage device coming by 2010”! As you may have noticed, 2010 came and went and the game remains unchanged.

All of which is to say, regarding the post to follow: caveat lector.

Still, this looks very, very cool.

CleanTechnica has an exclusive on a new solar technology that claims to be able to produce power with a levelized cost of energy, or  LCOE, of 8¢ per kilowatt-hour. That is mind-boggling, “two-thirds the price of retail electricity and over 3 times cheaper than current solar technology.” If the claim proves to be true (and a lot can happen between prototype and mass manufacturing), it could revolutionize the solar industry.

The company is called V3Solar (formerly Solarphasec) and its product, the Spin Cell, ingeniously solves two big problems facing solar photovoltaics (PV).

First, most solar panels are flat, which means they miss most of the sunlight most of the time. They only briefly face direct sunlight, unless expensive tracking systems are added. The Spin Cell is a cone:

 

[Image: V3Solar Spin Cell]

 

The conical shape catches the sun over the course of its entire arc through the sky, along every axis. It’s built-in tracking.

The second problem: Solar panels produce much more energy if sunlight is concentrated by a lens before it hits the solar cell; however, concentrating the light also creates immense amounts of heat, which means that concentrating solar panels (CPV) require expensive, specialized, heat-resistant solar cell materials.

The Spin Cell concentrates sunlight on plain old (cheap) silicon PV, but keeps it cool by spinning it.

It’s just so damn clever.

Here’s a video that explains:

The company’s technology claims have been confirmed by a technical review commissioned from independent consultant Bill Rever. As to the 8¢/kWh cost claim, the company told CleanTechnica, “We think we can go below that, but we want to stay conservative.” Hitting it, or close to it, could shake up the energy world. Here’s a chart comparing LCOE for various power sources:

[Chart: V3Solar LCOE compared with other power sources]

That is a whole new ballgame right there.

The company’s aim is to capture 3 percent of the energy market. For context, CleanTechnica notes that “all solar power installed in the U.S. to date currently accounts for about 0.5-1% of the energy market.” More than tripling the size of the U.S. solar market is … well, not short on ambition.

Most impressively, to me, the company tells CleanTechnica that it already has over 4 GW of requests for orders. There is 7 GW of installed solar in the U.S., total.

There’s lots, lots more on the technology over on CleanTechnica, if you want to dig in.

To me, the most exciting implications of the technology (again, if it proves out) are for distributed energy. Spin Cells are only a meter across and quite aesthetically appealing. You could carpet a city in these. Like this:

[Image: V3Solar power poles]

Maybe this tech or this company will peter out before reaching mass-market scale. But advances in solar technology are coming faster and faster. (Small, distributed energy technologies are inherently more prone to innovation than large, capital-intensive energy technologies.) Sooner or later, solar will be woven seamlessly into the fabric of our lives. Our built environment will harvest energy as a matter of course (from the sun, from the wind, from waste), store it effectively, and use it wisely. Power harvesting and power management will be ubiquitous; power imported from large, distant, polluting power plants over long-distance transmission lines will come to be seen as back-up, a necessary evil. And perhaps, someday, an unnecessary one.

Success of Northeast Cap-and-Trade System Shows Market-Based Climate Policy Is Well Within Reach https://scienceprogress.org/2013/02/success-of-northeast-cap-and-trade-system-shows-market-based-climate-policy-is-well-within-reach/ https://scienceprogress.org/2013/02/success-of-northeast-cap-and-trade-system-shows-market-based-climate-policy-is-well-within-reach/#comments Fri, 08 Feb 2013 15:26:51 +0000 Sean Pool http://scienceprogress.org/?p=27887 The success of a regional cap-and-trade program in reducing carbon pollution in the U.S. Northeast over the past few years has scarcely been mentioned in the debate over how to reduce carbon pollution.

Unwilling to wait for the federal government to enact sensible carbon limits and clean energy investment policies, nine U.S. states—Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New York, Rhode Island, and Vermont—joined together in 2008 to create the Regional Greenhouse Gas Initiative, or RGGI.

Since its inception, the Regional Greenhouse Gas Initiative has delivered cost-effective results for power companies and consumers. Today the program announced plans to set a new carbon-reduction goal that is 45 percent more ambitious than the one before it. This is great news for consumers and energy companies in the Northeast—and for the climate.

Business leaders in the region are already lauding the move in a press release sent to journalists from Cater Communications. “I’ve personally witnessed rapid growth among clean energy and energy efficiency businesses, and strengthening our commitment to RGGI will help create additional job opportunities while improving air quality for generations to come,” said Sarah Brown, the owner of a small marketing and communications firm in New Hampshire.

“RGGI creates a policy environment that makes clean energy investment attractive in Massachusetts,” said David Miller, the executive managing director of the Clean Energy Venture Group in Massachusetts. “When you have a program that’s helping businesses grow and creating jobs at a fast clip in this economy, the logic for strengthening the program is clear.”

Peter Arpin, the owner of a moving and storage company in Rhode Island, said, “The return-on-investment for state and local economies is enormous.”

The oft-repeated mantra of both conservatives and progressives is that “the states are the laboratories of democracy”—places where innovative policy solutions can be tested and vetted before being adopted more broadly at the federal level. The fact that these nine states—together comprising roughly 20 percent of all U.S. economic output—have been so successful at reducing carbon emissions from power plants should serve as a wake-up call to policymakers in Washington.

A recent report by the initiative is a must read for anyone who wants to understand how cap and trade is working today to reduce carbon pollution in 20 percent of the U.S. economy while fostering steady economic growth and keeping electricity rates low at the same time.

The report states that between 2008 and 2011, the states involved with the Regional Greenhouse Gas Initiative have reduced their carbon emissions by about 20 percent. During the same time period, the program put $1.1 billion back into the pockets of consumers through savings on their energy bills. It has also created 16,000 job years. (A “job year” is one job sustained for one year.) The program is expected to lead to a net economic boost of $1.6 billion for the region by the end of the decade, as consumers and businesses spend the extra cash they saved on energy bills in more productive ways, according to an independent report by the Analysis Group. The savings are coming through a combination of improvements in energy efficiency—made possible by reinvesting the proceeds from carbon-credit auctions—and rebates through a Direct Bill Assistance program to more than 84,000 low-income families.

If there is any doubt that the program is successfully reducing carbon pollution in the power sector as it promotes economic growth, figures 4 and 5 from the RGGI report, which show a marked decline in power-sector emissions side by side with steady economic growth, should dispel it.

Contrary to the commonly held misconception that cap and trade would be a “bureaucratic nightmare,” this cap-and-trade program has reduced carbon, created jobs, and promoted clean energy investments with a lean staff of fewer than 10 employees. Overall administrative costs for all stakeholders are estimated to be only 0.5 percent of the sum of auction proceeds.

Also contrary to common criticisms of cap and trade: The Regional Greenhouse Gas Initiative has not had a perceptible impact on electricity ratepayers. Estimates suggest that retail electricity rates have risen by less than 1 percent, or 43 cents per month, for the average ratepayer since 2011.

In addition to reducing carbon pollution, keeping electricity rates low, and creating jobs, the program has also generated $617 million in revenue for the participating states. While each state has leeway on how to use these funds, in aggregate they have spent 52 percent of these funds on energy efficiency investments, 11 percent on deployment initiatives for new clean energy sources, and the rest on rebates for low-income households to guard against any harmful effects of the program. A detailed accounting of how the revenues are being invested is available in the investment report.

The Regional Greenhouse Gas Initiative’s success since 2008 is demonstrable proof that sensibly limiting carbon pollution from power plants is not only doable, but it’s actually being done—right here in the United States. And it is being done so successfully and cost effectively that the states that volunteered to be part of the experiment have now announced that they will lower the carbon cap by an additional 45 percent.

As California moves to implement its own market-based carbon-pollution-control system, and the Environmental Protection Agency mulls possible carbon regulations on existing power plants, it’s time lawmakers in Congress took note of the example set by the Regional Greenhouse Gas Initiative.

Sean Pool is the Science Innovation Policy Analyst and Managing Editor of Science Progress, the Center for American Progress’s online science and technology policy journal. This article is cross-posted at the Center for American Progress.

Cybersecurity Bill Supporters Regroup As Executive Order Looms https://scienceprogress.org/2013/02/cybersecurity-bill-supporters-regroup-as-executive-order-looms/ https://scienceprogress.org/2013/02/cybersecurity-bill-supporters-regroup-as-executive-order-looms/#comments Thu, 07 Feb 2013 13:50:01 +0000 Andrea Peterson http://scienceprogress.org/?p=27879 The Hill reports Rep. Dutch Ruppersberger (D-MD), the ranking member of the House Intelligence Committee, plans to re-introduce the Cyber Intelligence Sharing and Protection Act (CISPA) with the committee’s chairman, Rep. Mike Rogers (R-MI), this year.

CISPA passed the House in 2012 despite significant organized opposition from privacy advocates, but was not considered by the Senate as it focused on its own cybersecurity proposal — one which also stalled, leading to reports the White House plans to issue a cybersecurity executive order calling for the creation of a voluntary program including minimum safety standards in critical infrastructure sectors.

CISPA proposed making information sharing between private companies and the intelligence agencies easier in order to allow collaborative responses to cyberattacks, likely at the expense of internet users’ privacy. While the bill enjoyed the support of many major companies including Facebook, Microsoft, IBM, Oracle, Symantec, AT&T and Verizon, civil liberties organizations expressed major doubts about the proposal and continue to do so. In a comment about renewed interest in CISPA to ThinkProgress today, Gregory T. Nojeim, Director of the Project on Freedom, Security & Technology at the Center for Democracy & Technology said:

“CISPA is deeply flawed. Under a broad cybersecurity umbrella, it permits companies to share user communications directly with the super secret National Security Agency and permits the NSA to use that information for non-cybersecurity reasons. This risks turning the cybersecurity program into a back door intelligence surveillance program run by a military entity with little transparency or public accountability. Members should seriously consider whether CISPA — which inflamed grassroots activists last year and was under a veto threat for these and other flaws — is the right place to start.”

The White House is expected to release a cybersecurity executive order after the State of the Union, although rumors of its imminence have been floating around since September. Nojeim noted that last year there were reasons to be optimistic about the cybersecurity executive order when rumors of it first emerged — including the White House’s threat to veto CISPA.

The executive order wouldn’t be the first foray into cybersecurity for President Obama: He signed a secret directive that redefined some cybersecurity actions previously deemed offensive as defensive in October as part of an effort to enable military personnel to be more proactive in thwarting cyberattacks. The move occurred around the same time Secretary of Defense Leon Panetta warned of an impending “cyber-Pearl Harbor.”

The threat of cyber attacks on public and private infrastructure is very real, as demonstrated by the huge jump in incidents involving critical infrastructure that required the involvement of the U.S. Industrial Control System Cyber Emergency Response Team, from 9 in 2009 to 198 in 2011.

Outside of traditionally defined critical infrastructure, other sectors have also been the target of recent high profile cybersecurity breaches, including many major newspapers and banks.

Cross-posted at ThinkProgress.

Manmade Carbon Pollution Has Already Put Us On Track For 69 Feet Of Sea Level Rise https://scienceprogress.org/2013/02/27868/ https://scienceprogress.org/2013/02/27868/#comments Mon, 04 Feb 2013 15:57:28 +0000 Joe Romm http://scienceprogress.org/?p=27868 Joe Romm, via climate progress.

The bad news is that we’re all but certain to end up with coastlines flooded by at least 20 meters (69 feet) of sea level rise.

The “good” news is that this might take 1000 to 2000 years (or longer), and the choices we make now can affect the rate of rise and whether we blow past 69 feet to beyond 200 feet.

Glaciologist Jason Box makes this point in a Climate Desk interview with Chris Mooney, “Humans Have Already Set in Motion 69 Feet of Sea Level Rise”:

So what can we do? For Box, any bit of policy helps. “The more we can cool climate, the slower Greenland’s loss will be,” he explained. Cutting greenhouse gases slows the planet’s heating, and with it, the pace of ice sheet losses.

This shouldn’t be a surprise to anyone who follows the scientific literature. Just last year the National Science Foundation (NSF) reported on paleoclimate research that examined “rock and soil cores taken in Virginia, New Zealand and the Eniwetok Atoll in the north Pacific Ocean.” Lead author Kenneth Miller of Rutgers University said:

“The natural state of the Earth with present carbon dioxide levels is one with sea levels about 70 feet higher than now.”

And that was only slightly less worrisome than a 2009 paper in Science that found the last time CO2 levels were this high, it was 5° to 10°F warmer and seas were 75 to 120 feet higher.

Now I tend to think that if the multiple, simultaneous devastating impacts we are headed toward by century’s end – from widespread Dust-Bowlification to 3 to 6 feet of sea level rise to 10°F warming (20°F in the Arctic) – aren’t enough to motivate action, then just how much we are going to screw up the planet after 2100 won’t do the trick.

But there are some people who understand the staggering immorality of handing over to future generations seas rising as fast as 6 to 12 inches a decade for centuries on end. How precisely would people adapt to that? As one thoughtful person recently said, failure to act on climate change “would betray our children and future generations.”

Here is a short video of the Box interview:

If we were truly doubly wise, Homo sapiens sapiens, as we cleverly named ourselves, the nation would join with the world in a WWII-scale effort to actually reduce the atmospheric CO2 level from its current 394 parts per million. That would not only lower the ultimate sea level rise, but would slow down the rate of change.

Tragically, we are headed for more than a doubling of CO2 concentrations from current levels. For those truly concerned about future generations, consider that on our current emissions path, CO2 levels in 2100 will hit levels last seen when the Earth was 29°F (16°C) hotter. So that not only means an ice-free planet with sea levels more than 200 feet higher than today, but a rate of sea level rise that is beyond imagining.

Clicking Online Ads More Likely To Deliver Malware Than Surfing Porn Sites, Report Finds https://scienceprogress.org/2013/02/clicking-online-ads-more-likely-to-deliver-malware-than-surfing-porn-sites-report-finds/ https://scienceprogress.org/2013/02/clicking-online-ads-more-likely-to-deliver-malware-than-surfing-porn-sites-report-finds/#comments Fri, 01 Feb 2013 21:38:06 +0000 Andrea Peterson http://scienceprogress.org/?p=27860 Andrea Peterson, via ThinkProgress.

Your online habits may be less dangerous than you think if they involve the less savory aspects of the web: According to Cisco’s annual 2013 Security Report, internet users are 182 times more likely to get malware from clicking on online ads than from visiting a porn site. It turns out the sites on the gray- and black-market edges of the web that most of us traditionally think of as dangerous aren’t the biggest threats to your online security. Instead:

“The dangers […] are often hidden in plain sight through exploit-laden online ads that are distributed to legitimate websites, or hackers targeting the user community on the common sites they use most.”

Those common sites include online shopping sites and search engines, which, according to Cisco, were 21 and 27 times more likely, respectively, to deliver malicious content than counterfeit software sites. Unsurprisingly, the Pew Internet & American Life Project reports that of the 81 percent of American adults who use the internet, some 91 percent use search engines to find information and 71 percent buy products online.

Of course, many online users (around 10 percent according to one 2012 study) are already using ad-blocking software to avoid being served possibly malicious ads. And the proportion of online resources and time devoted to racy material is up for debate, with just 4 percent of the 1 million most popular sites in 2010 revolving around sex and 13 percent of searches being for erotic content.

Beyond the eye-catching numbers about the relative safety of surfing for porn, the Cisco report identifies a number of other emerging threats — key among them the rise of Android malware exploits and the possible info-security minefield represented by the internet of things.

Android malware grew much faster than any other form of web-delivered malware, with a staggering 2,577 percent increase in malware encounters over 2012. Although only 0.5 percent of web malware encounters in 2012 were on mobile devices, 95 percent of them were on Android devices — not great news considering Android now controls a majority of the smartphone market.

When it comes to the ever-expanding internet of things, much of Cisco’s commentary was speculative – but the core argument rings true: With great connection comes great responsibility. And there will be great connection:

“Considering that less than 1 percent of things in the physical world are connected today, there remains vast potential to “connect the unconnected.” It is projected that with an Internet that already has an estimated 50 billion “things” connected to it, the number of connections will increase to 13,311,666,640,184,600 by the year 2020.”

Here’s hoping they all don’t serve malware-laced ads, or it could mean trouble.

Divest Over Global Warming? https://scienceprogress.org/2013/01/divest-over-global-warming/ https://scienceprogress.org/2013/01/divest-over-global-warming/#comments Thu, 31 Jan 2013 16:48:02 +0000 Dr. James Powell http://scienceprogress.org/?p=27825 James Lawrence Powell, in a campaign letter for the Fossil Free campaign.

A generation ago, students urged colleges to sell their stock in companies doing business in Apartheid South Africa. At least 155 colleges and universities, as well as 26 state governments, 22 counties, and 90 cities, partially or fully divested. One of the first private institutions to divest was Columbia University, whose trustees said in 1978 that they had done so “to maintain educational leadership,” which demanded “ethical and humane positions that give effective expression to our highest national ideals” (Columbia Spectator, June 8, 1978). In 1986, the University of California sold $3 billion in South Africa-related stocks, the largest public institution to do so.

In 1990, South African President de Klerk began negotiations to end Apartheid. By 1993 it had been largely dismantled and the next year universal suffrage in South Africa led to the election of Nelson Mandela. Desmond Tutu recently said that “We could not have achieved our freedom and just peace without the help of people around the world, who through the use of non-violent means, such as boycotts and divestment, encouraged their governments and other corporate actors to reverse decades-long support for the Apartheid regime.” Two of the largest American investors in South Africa at the time of the divestment movement were U.S. oil companies Mobil and Caltex (a joint venture of Chevron and Texaco.)

Apartheid was not the only cause over which academic institutions and others have divested. Some, including Harvard and Haverford, CCNY and the University of California, as well as foundations, health organizations, insurance companies, and pension funds, sold their stock in tobacco companies. As Harvard president Derek Bok explained in 1990, the university did so because it did not want “to be associated with companies [whose] products create a substantial and unjustifiable risk of harm to other human beings.”

Today, scientists and many others recognize global warming as a far greater threat than Apartheid or smoking. According to award-winning Ohio State climatologist Lonnie Thompson, “virtually all of us are now convinced that global warming poses a clear and present danger to civilization.”

Students and others, frustrated by science denial and inaction in Washington, have begun to urge colleges to sell their stock in fossil fuel companies. Here are ten objections global warming activists are apt to hear from trustees and college administrators, with my responses.

1. “Scientists disagree as to whether global warming is even real.” Among 33,700 authors of peer-reviewed scientific articles on global warming published between 1991 and November 2012, many of them scientists at American universities, only about one author in a thousand rejects human-caused global warming.

2. “Global warming is not a moral issue like Apartheid.” Apartheid was immoral because a class of better-off whites oppressed poor blacks. Global warming is immoral because the third world countries and foundering island nations that are the least responsible will suffer the most. Colleges that showed enough concern for Africans to divest over Apartheid should recognize that global warming is likely already worsening drought and famine in East Africa. Climate models project a future of increasing drought over most of Africa, southern Europe and the Middle East, most of the Americas, Australia, and Southeast Asia.

3. “Global warming is not a health issue like smoking.” A recent study in Health Affairs analyzed the health costs of six “climate change–related events” in the U.S. between 2001 and 2009. The events included ozone pollution, heat waves, hurricanes, infectious disease outbreaks, river flooding, and wildfires. The six accounted for $14 billion in lost lives and health costs (Knowlton et. al, Health Affairs, November 2011 vol. 30, 2167-2176). According to a study published in the medical journal Lancet: “Climate change is the biggest global health threat of the 21st century.” The lead author of the study said, “The impacts will be felt all around the world – and not just in some distant future but in our lifetimes and those of our children.”

4. “Colleges have no direct interest in preventing global warming.” Unlike Apartheid, global warming is already affecting colleges and students directly. The AAUP says that Hurricane Katrina caused “undoubtedly the most serious disruption of American higher education in the nation’s history.” Hurricane Sandy closed dozens of colleges and, according to CNN, affected an estimated 1.2 million students. Even if scientists are unsure exactly how much global warming contributed to Katrina and Sandy—not whether it contributed—those two storms and other recent extreme weather events offer an ominous portent of what lies ahead. Colleges will not be immune from the coming heat, wildfire, drought, megastorms, and sea level rise. Today’s college graduates and their children and grandchildren will have to live in the greenhouse world that we are knowingly creating.

5. “We do not invest or divest for social causes.” But colleges did divest over Apartheid, selling their stock in oil companies doing business in South Africa. Harvard, for example, sold Mobil, Shell, and Texaco. Thus the question is not whether a college will divest from fossil fuel companies, but when divestment is justified. By threatening human health and even the future of civilization, global warming is a worse evil than Apartheid and a far greater danger than smoking.

6. “Our sole endowment objective is to maximize investment return.” The overriding obligation of those responsible for a college endowment is to ensure that future student generations benefit to the same relative extent as the current generation. Trustees achieve this balance by adjusting how much of endowment earnings they spend each year and how much they reinvest. But global warming puts a new slant on the matter. By investing in fossil fuel companies, colleges are using their current financial resources in a way that jeopardizes the quality of life of their future alumni. By any reasoned and humane interpretation, this violates colleges’ professed commitment to intergenerational equity.

7. “Selling stock in fossil fuel companies will lower investment return and cause the college to have to make significant budget cuts.” This is the same argument that some colleges made when faced with the issue of divesting over Apartheid, yet many went ahead and found little financial effect. One academic study from 1986 found that “Historical returns since 1959 indicate that the South Africa-free portfolio, diluted with Treasury bills to bring its risk in line with the NYSE, would have outperformed the NYSE by 0.187 per cent annually.” (Financial Implications of South African Divestment, Blake R. Grossman and William F. Sharpe, Financial Analysts Journal, Vol. 42, No. 4 (Jul. – Aug., 1986), pp. 15-29.) When a college sells stock it has the exact same amount of cash as the market value of the stock at the time of sale, minus transaction costs (estimated at 0.4% in the study of South African divestment), and can reinvest that money. The immediate financial consequences are small, and phasing in divestment over several years would ameliorate the long-term consequences (a simple compounding sketch after this list illustrates the scale of these effects).

8. “Divestment is controversial and would hurt future fundraising.” Colleges did not let the fear of controversy stop them from divesting over Apartheid. Yes, donations from fossil fuel companies will decline, but gifts from donors who agree that global warming is a moral and a financial issue will rise, and there will be more of those donors. Colleges ought to do what is right, not what is expedient.

9. “Our most effective impact on climate change comes through our teaching, our research, and the careers of our alumni.” While colleges and universities have gone about their business during the second half of the twentieth century, atmospheric CO2 concentrations have risen by 40% over natural levels, enough to make a temperature rise of at least 2°C (3.6°F) during the rest of this century inevitable. In 2011, CO2 emissions rose by 3.2% to the highest level ever recorded. Business-as-usual by colleges has failed to curtail global warming and there is no reason to believe that more business-as-usual will produce a different result.

10. “Divestment won’t do any good.” It is true that divestment will have little financial impact on the fossil fuel companies. Instead, the impact will come from example and moral suasion. Divestment would have the benefit of asking colleges and their trustees, who include some of the most influential members of society, to address global warming and take a stand on an issue that directly affects colleges and their alumni. The publicity from a widespread divestment campaign would call attention to global warming and pressure fossil fuel companies to become part of the solution. In addition, divestment would provide colleges with the funds for a different type of investment. The top 500 colleges have over $400 billion in their endowments. Redirecting just 1% of that amount would free $4 billion for investment in companies that produce clean energy.
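To put the numbers in point 7 in perspective, here is a minimal sketch comparing how a divested and an undivested endowment might compound over time, assuming a one-time 0.4 percent transaction cost and the 0.187 percent annual return difference cited in the Grossman and Sharpe study; the endowment size, baseline return, and time horizon are purely illustrative assumptions, not figures from the study.

```python
def endowment_value(principal, annual_return, years, one_time_cost=0.0):
    """Compound an endowment, optionally charging a one-time transaction
    cost (as a fraction of principal) in year zero."""
    value = principal * (1 - one_time_cost)
    for _ in range(years):
        value *= 1 + annual_return
    return value

START = 100_000_000   # assumed $100 million endowment
BASELINE = 0.07       # assumed 7 percent baseline annual return
YEARS = 20

kept = endowment_value(START, BASELINE, YEARS)
divested = endowment_value(START, BASELINE + 0.00187, YEARS, one_time_cost=0.004)

print(f"Kept fossil fuel stocks: ${kept:,.0f}")
print(f"Divested portfolio:      ${divested:,.0f}")
```

Under these assumptions the one-time sale cost is quickly outweighed by compounding, which is the point the historical study makes; past return differences are, of course, no guarantee of future ones.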

Humans have already emitted enough CO2 to ensure that global warming will not end in the lifetime of any person reading this essay. As the years and decades go by and its effects become ever more dire, global warming will grow into a perennial campus issue. It is not going away. Some colleges will take the lead and divest now; others will follow eventually. The question for each college is whether, on the most important issue of this century, it will be a leader or a follower.

To end as Elizabeth Kolbert ended her Field Notes from a Catastrophe, “It may seem impossible to imagine that a technologically advanced society could choose, in essence, to destroy itself, but that is what we are now in the process of doing.”

James Lawrence Powell was Acting President of Oberlin College, President of Franklin and Marshall College, President of Reed College, President of the Franklin Institute, and President of the Los Angeles County Museum of Natural History. Presidents Reagan and George H. W. Bush appointed him to the National Science Board, where he served for twelve years. He has a PhD from MIT and DSc degrees from Berea College and Oberlin College. He is the author of nine books. His most recent print book is The Inquisition of Climate Science (Columbia University Press). Powell currently serves as Executive Director of the National Physical Science Consortium.

Terminating the Terminator: What to do About Autonomous Weapons https://scienceprogress.org/2013/01/terminating-the-terminator-what-to-do-about-autonomous-weapons/ https://scienceprogress.org/2013/01/terminating-the-terminator-what-to-do-about-autonomous-weapons/#comments Tue, 29 Jan 2013 16:41:25 +0000 Wendell Wallach http://scienceprogress.org/?p=27836 “The Terminator” is clearly science fiction, but it speaks to a deep intuition that the robotization of warfare is a slippery slope—the endpoint of which can neither be predicted nor fully controlled. Two reports released soon after the November 2012 election have propelled the issue of autonomous killing machines onto the political radar.

The first, a November 19 report from Human Rights Watch and the Harvard Law School Human Rights Clinic, calls for an international ban on killer robots. Four days later a Department of Defense Directive titled “Autonomy in Weapons Systems” was published under the signature of Deputy Defense Secretary Ashton Carter. The two documents may only be connected by the timing of their release, but the directive should nevertheless be read as an effort to quell any public concern about the dangers posed by semiautonomous and autonomous weapons systems—which are capable of functioning with little or no direct human involvement—and to block attempts to restrict the development of robotic weaponry. In the directive, the Department of Defense wants to expand the use of self-directed weapons, and it is explicitly asking us not to worry about autonomous robotic weaponry, saying that the Department of Defense will put in place adequate oversight on its own.

Hidden in the subtext is a plea for the civilian sector not to regulate the Department of Defense’s use of autonomous weapons. Military planners do not want their near-term options limited by speculative possibilities. Neither military leaders nor anyone else, however, want warfare to expand beyond the bounds of human control. The directive repeats eight times that the Department of Defense is concerned with minimizing “failures that could lead to unintended engagements or loss of control of the system.” Nevertheless, a core problem remains. Even if one trusts that the Department of Defense will establish robust command and control in deploying autonomous weaponry, there is no basis for assuming that other countries and nonstate actors will do the same. The directive does nothing to limit an arms race in autonomous weapons capable of initiating lethal force. In fact, it may actually be promoting it.

For thousands of years the machines used in warfare have been extensions of human will and intention. Bad design and flawed programming have been the primary dangers posed by much of the computerized weaponry deployed to date, but this is rapidly changing as computer systems with some degree of artificial intelligence become increasingly autonomous and complex.

The “Autonomy in Weapons Systems” directive promises that the weaponry the U.S. military deploys will be fully tested. Military necessity during the wars in Iraq and Afghanistan, however, prompted the secretary of defense, Robert Gates, to authorize the deployment of new drone systems—unmanned air vehicles, or UAVs, with very little autonomy—before they were fully tested. The unique and changing circumstances of the battlefield give rise to situations for which no weapons system can be fully tested. Even the designers and engineers who build complex systems cannot always predict how they will function in new situations with untested combinations of inputs.

Increasing autonomy will increase uncertainties as to how weaponry will perform in new situations. Personnel find it extremely difficult to coordinate their actions with “intelligent” systems whose behavior they cannot absolutely predict. David Woods and Erik Hollnagel’s book Joint Cognitive Systems: Patterns in Cognitive Systems Engineering illustrates this problem with the example of a 1999 accident in which a Global Hawk UAV went off the runway, causing a collapsed nose and $5.3 million in damage. The accident occurred because the operators misunderstood what the system was trying to do. Unfortunately, placing blame on the operators and increasing the autonomy of the system may actually make it harder to coordinate the activities of human and robotic agents.

The intent of the military planners who authored the directive is to put in place extensive controls for maintaining the safety of autonomous weaponry. But the nature of complex autonomous systems and of war is such that they will be less successful in doing so than the directive suggests.

Research on artificial intelligence over the past 50 years has arguably been a contemporary Tower of Babel. While AI continues to be a rich field of study and innovation, much of its edifice is built upon hype, speculation, and promises that cannot be fulfilled. The U.S. military and other government agencies have been the leaders in bankrolling new computer innovations and the AI tower of babble, and they have wasted countless billions of dollars in the process. Buying into hype and promises that cannot be fulfilled is wasteful. Failure to adequately assess the dangers posed by new weapons systems, however, places us all at risk.

The long-term consequences of building autonomous weapons systems may well exceed the short-term tactical and strategic advantages they provide. Yet the logic of maintaining technological superiority demands that we acquire new weapons systems before our potential adversaries—even if in doing so we become the lead driver propelling the arms race forward. There is, however, an alternative to a totally open-ended competition for superiority in autonomous weapons.

A longstanding concept in just war theory and international humanitarian law is that certain activities such as rape and the use of biological weapons are evil in and of themselves—what Roman philosophers called “mala in se.” I contend that machines picking targets and initiating lethal and nonlethal force are not just a bad idea, but also mala in se. Machines lack discrimination, empathy, and the capacity to make the proportional judgments necessary for weighing civilian casualties against achieving military objectives. Furthermore, delegating life and death decisions to machines is immoral because machines cannot be held responsible for their actions.

So let us establish an international principle that machines should not be making decisions that are harmful to humans. This principle will set parameters on what is and what is not acceptable. We can then go on to a more exacting discussion as to the situations in which robotic weapons are indeed an extension of human will and when their actions are beyond direct human control. This is something less than the absolute ban on killer robots proposed by Human Rights Watch, but it will set limits on what can be deployed.

The primary argument I have heard against this principle is the contention that future machines will have the capacity for discrimination and will be more moral in their choices and actions than human soldiers. This is all highly speculative. Systems with these capabilities may never exist. If and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood. But warfare is not the place to test speculative possibilities.

As a first step, President Barack Obama should sign an executive order declaring that a deliberate attack with lethal and nonlethal force by fully autonomous weaponry violates the Law of War. This executive order would establish that the United States holds that this principle already exists in international law. NATO would soon follow suit, leading to the prospect of an international agreement that all nations will consider computers and robots to be machines that can never make life and death decisions. A responsible human actor must always be in the loop for any offensive strike that harms a human. An executive order establishing limits on autonomous weapons will reinforce the contention that the United States places humanitarian concerns as a priority in fulfilling its defense responsibilities.

The Department of Defense directive should have declared a five-year moratorium on the deployment of autonomous weapons. A moratorium would indicate that  military planners recognize that this class of weaponry is problematic. More importantly, it would provide an opportunity to explore with our allies the issues in international humanitarian law that impinge upon the use of lethal autonomous weapons. In addition, a moratorium signals to defense contractors that they lack a ready buyer for autonomous systems they might develop. No one, however, anticipates that autonomous weapons capable of precision targeting will be available in the next five years. Furthermore, a moratorium is unlikely to reassure other countries that look to the United States as they gauge their own defense needs.

There is no way to ensure that other countries and nonstate actors will emulate standards and testing protocols similar to those outlined in the directive before they use autonomous weapons. Some country is likely to deploy crude autonomous drones or ground-based robots capable of initiating lethal force, and that will justify efforts within the U.S. defense industry to establish our superiority in this class of weaponry.

The only viable route to slow and hopefully arrest an inexorable march toward future wars that pit one country’s autonomous weapons against another’s is a principle or international treaty that puts the onus on any party that deploys such weapons. Instead of placing faith in the decisions made by a few military planners within the Pentagon about the feasibility of autonomous weapons, we need an open debate within the Obama administration and within the international community as to whether prohibitions on autonomous offensive weapons are implicit under existing international humanitarian law. A prohibition on machines making life-and-death decisions must either be made explicit and/or established and codified in a new international treaty.

The inflection point for setting limits on autonomous weaponry initiating lethal force exists now. This opportunity will disappear, however, as soon as many arms manufacturers and countries perceive short-term advantages that could accrue to them from a robot arms race.

Wendell Wallach chairs the technology and ethics study group at Yale University’s Interdisciplinary Center for Bioethics, and is co-author (with Colin Allen) of Moral Machines: Teaching Robots Right From Wrong. His proposal for an “Executive Order Establishing Limits on Autonomous Weapons Capable of Initiating Lethal Activity” has been circulating within military and executive circles for the past 10 months. Image via movies.com. Update: This article was corrected to show that Secretary Robert Gates, not Donald Rumsfeld, ordered the use of drones during the Iraq/Afghanistan wars before all testing was complete. It was also updated on 2/1/13 to correct an inaccuracy about the content of the DoD directive, which does not declare a 5-year moratorium, as previously stated.

Surpassing Outdated Law, Google Requires Warrants For Government Access To Email Content https://scienceprogress.org/2013/01/surpassing-outdated-law-google-requires-warrants-for-government-access-to-email-content/ https://scienceprogress.org/2013/01/surpassing-outdated-law-google-requires-warrants-for-government-access-to-email-content/#comments Fri, 25 Jan 2013 16:34:38 +0000 Andrea Peterson http://scienceprogress.org/?p=27827 Andrea Peterson, via Think Progress.

In a major change to how America’s largest tech companies handle online privacy, Google revealed this week that it requires warrants for users’ email content and data stored in the cloud, imposing hurdles on government access that go beyond what a 1986 electronic privacy law requires.

But even as Google’s policy is a big step forward for digital due process advocates, it doesn’t extend to a significant portion of the information Google releases to law enforcement agencies such as IP addresses used to access Google accounts, message time stamps, and to and from fields. And Google’s recently released transparency data shows that getting information on your online activities can still largely be done without a warrant.

The report shows that less than a quarter of the 8,428 government requests for U.S. user data Google received from July to December 2012 were search warrants, and that 88 percent of requests were fully or partially complied with. The U.S. led country rankings in terms of both the total number of requests made and the percentage of requests complied with.

Wired quotes Google spokesman Chris Gaither on Google’s newly outlined warrant policy, which has been in effect for an unclear amount of time: “Google requires an ECPA search warrant for contents of Gmail and other services based on the Fourth Amendment to the Constitution, which prevents unreasonable search and seizure.” Google’s interpretation is novel because under the Electronic Communications Privacy Act of 1986, or ECPA, messages over 180 days old stored in the cloud only require an administrative subpoena — rather than a warrant approved by a judge — largely due to how email technology worked in 1986: It was very unusual for data to remain on external servers because of hosting costs, leading to a belief that any data left on an external server for that long could be considered abandoned.

While our use of technology has changed dramatically since 1986, the law has not: An attempt to update the law last year stalled over the holidays. And while the law does not require a warrant to access some data, two federal appellate courts came to differing conclusions on the issue in 2010, one stating that obtaining the content of email messages stored on an email provider’s server requires a warrant, and another allowing magistrate judges discretion to require warrants from the government when requesting location information from cellphone providers — although both rulings only apply to their respective judicial districts.

Google’s public stance on warrants may signal that tech companies are no longer willing to quietly accept the lack of progress on technology policy. In many sectors it has become clear our laws have not kept up with the pace of technological innovation, yet the biggest success of the tech sector last year focused on preventing bad legislation rather than updating woefully outdated regulation.

Blue Pill or Red Pill? https://scienceprogress.org/2013/01/blue-pill-or-red-pill/ https://scienceprogress.org/2013/01/blue-pill-or-red-pill/#comments Thu, 24 Jan 2013 17:19:43 +0000 James Flory http://scienceprogress.org/?p=27803

The electronic medical record could save the clinical trial, cut health care costs, and improve the value of research.
“If there’s a blue pill and a red pill, and the blue pill is half the price of the red pill and works just as well, why not pay half price for the thing that’s going to make you well?”

With these words, President Barack Obama not only demonstrated his hip sci-fi credentials—Morpheus’s choice to Neo was either to take the blue pill and remain happy but ignorant of the truth, or the red pill, which would reveal to him a sometimes-painful reality and also launch the lucrative “Matrix” trilogy of movies—but also his desire to take a 21st-century, data-driven approach to clinical decision making and health care policy.

Among competing treatments for the same disease, which one is best? Which one is worth the money? These questions are the core of comparative effectiveness research. Half of insured patients in the United States are on chronic medications for conditions such as diabetes, hypertension, and high cholesterol. Patients, physicians, and policymakers need reliable data to know what to take, what to recommend, and what is worth paying for. Typically, however, they don’t have these data.

The Affordable Care Act, better known as Obamacare, has implemented a number of initiatives to address this problem. One of the largest is the Patient-Centered Outcomes Research Institute, or PCORI. A core mission of PCORI is to conduct comparative effectiveness research that gives patients and their health care providers the best evidence to help make more informed decisions. As promising and common sense as this mission is—because why not pay half price?—solid gold evidence to answer a patient’s question “Should I take the red pill or blue pill?” is hard to obtain.

The fundamental problem is that the gold standard for studying comparative effectiveness, the randomized controlled trial, or RCT, is too costly and disruptive to be done for every important comparative effectiveness question. At the RCT’s core is the assignment of an intervention to each subject by a “flip of a coin,” meaning that some patients receive drug A, and some patients receive drug B.

Unfortunately, an RCT is a massive enterprise. Special procedures such as using a “flip of a coin” at a central study site to assign each patient to an intervention are so different from routine clinical practice that trials must hire expert clinical investigators and take place at special study sites. Meanwhile, patients and physicians alike can be reluctant to engage in an activity so potentially disruptive to routine clinical care. Ethical oversight helps ensure that clinical care is not truly compromised, but this oversight is intensive, costly, and time-consuming too. The result? RCTs can take years and cost millions of dollars.

Given these challenges, relatively few RCTs are done. The major pharmaceutical companies are among the few institutions that can single-handedly muster the resources to implement large RCTs, and they use them to get their drugs approved, typically by comparing the drug to a placebo. Even if multiple competing drugs are available for a disease, drug companies rarely conduct comparative effectiveness studies. Why should they? For a company, the decision to conduct an RCT is not a matter of public policy. It’s a business decision. If a company does a comparative effectiveness trial, the study often uses clever design features that, unsurprisingly, stack the data to show that their drug is more effective.

Fundamental questions—such as “Does drug A or drug B have a better chance at keeping a diabetic patient from needing insulin? Does drug A or drug B prolong life more in heart failure patients?”—go unanswered because, outside of the big pharmaceutical companies, few institutions have the resources to do an RCT to answer these questions. Part of the answer may be for the Food and Drug Administration to ask for more RCTs to address comparative effectiveness questions, but we also need new methods to do comparative effectiveness research more efficiently.

The RCT is a 20th century method that worked well for acute, serious diseases such as infectious diseases, heart attacks, and pediatric cancers, where entry criteria were simple, options in clinical care were few, and results could be obtained relatively quickly. Since the middle of the 20th century, when the debt-weary post-war British National Health Service used it to determine whether streptomycin therapy was worth the cost for the treatment of tuberculosis, the RCT has served as the court that decides which promising therapies are in fact safe and effective and which are not. For complex, common, and chronic diseases such as diabetes that can require lifelong treatment, however, the RCT is a large and costly enterprise akin to moving an armada across an ocean.

President Obama’s call for a trial to compare the blue pill to the red pill would mean mustering millions of dollars and recruiting thousands of patients as research subjects to be followed for many years. Even then, the results will likely be subject to a fusillade of questions because patients who participate in an RCT are typically not like the usual patient, the protocols often limit usual care, and treatment options may have changed in the years it took to execute the trial. A more modern, streamlined approach is needed.

Just as the RCT was made possible by 20th-century advances in statistics and research technologies, 21st-century advances now present an alternative to the large, expensive, and cumbersome clinical trial. The critical change happening now is the linking of fast, user friendly, networked computers into large databases replete with medical information—the so-called electronic medical record, or EMR.

Most proposals to use the EMR as a tool for comparative effectiveness research simply use it as a large database for a traditional observational study. This possibility has received deserved attention, but it has also been appropriately criticized, because such traditional observational studies are not nearly as reliable as RCTs in distinguishing true causal effects of drugs from non-causal associations. We propose a complementary way to use the EMR that will retain some of the special advantages of RCTs at much lower cost and with fewer ethical problems. We call it the Prompted Optional Randomization Trial, or PORTS, a design impossible in the days of paper charts but easily implemented using an EMR.

Physicians who use the EMR have experienced how the system talks back to them. It can, for example, prompt a physician to reconsider or even change a medication that is linked to a documented patient allergy, interacts with another medication, or is not on formulary. These prompts sometimes result in rapid, appropriate adjustment of medications, but perhaps more often the physician finds the suggested change inappropriate and overrides the prompt with the click of a button.

The same technology can be used to introduce one of the RCT’s essential features, the “flip of a coin,” where the computer can choose whether the patient receives the red pill or the blue pill. Whenever a physician orders one of these colorful pills, the computer can make its own random choice between the drugs. The computer can then prompt the physician to consider changing his or her prescription, but only when the physician’s order and the computer’s random choice are discordant. When a physician’s order matches the randomly generated choice, the order stands.

If, for example, a physician orders the blue pill, 50 percent of the time the computer will also choose the blue pill. No prompt will be displayed, and the physician prescribes blue. If the computer chooses red instead, it displays a prompt to consider prescribing the red pill instead of the blue pill. A physician who prefers the blue pill for the particular patient dismisses the prompt with a single click and prescribes the blue pill. A physician with no preference between treatments, however, can endorse the change with a single click and prescribe the red pill.
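
To make the mechanics concrete, here is a minimal sketch of that prompt logic in Python. Everything in it is illustrative rather than part of any real EMR system: the function name, the `physician_accepts_prompt` callback, and the drug labels are hypothetical stand-ins for whatever a production order-entry system would use.

```python
import random

def order_with_ports_prompt(physician_choice, physician_accepts_prompt, drugs=("blue", "red")):
    """Sketch of the PORTS prompting flow described above (all names are illustrative)."""
    computer_choice = random.choice(drugs)        # the computer's own "flip of a coin"
    if computer_choice == physician_choice:
        return physician_choice                   # concordant: no prompt is shown
    if physician_accepts_prompt(physician_choice, computer_choice):
        return computer_choice                    # physician endorses the switch with one click
    return physician_choice                       # physician dismisses the prompt; order stands

# A physician with no preference accepts every prompt, so over many orders the
# prescriptions end up split roughly evenly between the two pills.
final_order = order_with_ports_prompt("blue", lambda current, suggested: True)
print(final_order)
```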

A PORTS study design makes sense when the red pill and the blue pill are both used interchangeably in clinical practice, but physicians truly do not know which one is safer or more effective. This design increases the probability that a patient will receive the randomly assigned treatment. The association will not be perfect, since in many cases the patient and physician will prefer a drug and appropriately ignore a prompt that conflicts with that preference. Intuitively, however, if the blue pill is in fact a little better than the red and a prompt for blue makes patients more likely to get blue, the patients who do get a prompt for blue will on average do a little better than patients who get a prompt for red.

Crucially, that difference will reflect the properties of the pills themselves, not subtle differences between the kinds of patients who choose red and those who would rather have blue. A relatively simple technique called instrumental variable analysis formalizes this intuition and makes it possible to take these data and uncover the difference in effectiveness between the red pill and the blue pill. It turns routine clinical practice into an efficient and low-cost engine of discovery that will tell Americans whether we should take the red pill or the blue pill.
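
The simulation below is a rough sketch of that instrumental-variable logic, not the authors’ actual method, and its numbers are invented for illustration: physicians are assumed to follow the prompt 70 percent of the time, an unobserved “frailty” variable confounds the naive comparison, and the blue pill is assumed to be two points better. A simple Wald-style instrumental-variable estimate, using the random prompt as the instrument, recovers the true difference even though many prompts are overridden.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Instrument: the computer's random "flip of a coin" (1 = blue prompted, 0 = red).
prompt_blue = rng.integers(0, 2, size=n)

# Unobserved patient frailty affects both which pill is received and the outcome;
# this is exactly what biases a naive observational comparison.
frailty = rng.normal(size=n)

# Treatment received: physicians are assumed to follow the prompt 70% of the time;
# otherwise the pill tracks patient health (healthier patients tend to get blue).
follows_prompt = rng.random(n) < 0.7
prefers_blue = (rng.normal(size=n) - frailty) > 0
got_blue = np.where(follows_prompt, prompt_blue, prefers_blue).astype(int)

# Assumed truth for this simulation: the blue pill improves the outcome by 2 points.
outcome = 2.0 * got_blue - 1.5 * frailty + rng.normal(size=n)

# The naive comparison of treated groups is biased upward by frailty...
naive = outcome[got_blue == 1].mean() - outcome[got_blue == 0].mean()

# ...but the Wald / instrumental-variable estimate recovers roughly 2.0, because
# the prompt is random and affects outcomes only through the pill received.
itt_effect = outcome[prompt_blue == 1].mean() - outcome[prompt_blue == 0].mean()
prompt_uptake = got_blue[prompt_blue == 1].mean() - got_blue[prompt_blue == 0].mean()
iv_estimate = itt_effect / prompt_uptake

print(f"naive difference: {naive:.2f}")
print(f"instrumental-variable estimate: {iv_estimate:.2f}")
```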

To be sure, this method has ethical challenges. Some patients will get a treatment different from what they and their doctor would have otherwise selected. Is it possible that some patients will be harmed? We would argue that it is not, because the physician can override the prompt if there is any reason to suspect one drug is worse than the other. Should patients give consent before this method is used? Would they need to give it every time, or just once when they establish care at a practice that uses this method? These are questions that need to be addressed, but they are mere shadows compared to the glare of the serious ethical concerns traditional RCTs raise.

To date, the EMR has received middling marks as a technology to reduce health care costs. The PORTS proposal is just one example of the more general but untapped promise of the EMR in medicine, a promise that could be as revolutionary as the RCT, and before that, the stethoscope.

Electronic systems, prompts, and other tools can introduce small probabilistic changes in care, changes that can yield the kind of unbiased quality improvement data that to date has been available only at the high cost of the RCT. Small, benign random variations in practice could gradually develop a far more comprehensive picture of what works and what does not.

We just have to summon the will to take the red pill and discover the innovative ways to interact with the new matrix of medical data.

James Flory is a fellow in endocrinology at Weill Cornell Medical Center. Jason Karlawish is a professor of medicine, medical ethics and health policy at the University of Pennsylvania. Image: Warner Bros / Village Roadshow Pictures.

ARPA-E is Here to Stay https://scienceprogress.org/2013/01/arpa-e-is-here-to-stay/ https://scienceprogress.org/2013/01/arpa-e-is-here-to-stay/#comments Tue, 22 Jan 2013 19:44:01 +0000 Varun Mehra http://scienceprogress.org/?p=27766 Citations and footnotes are available in the pdf version of this article.

Since its inception in 1958, the Department of Defense’s unique research arm, the Defense Advanced Research Projects Agency, or DARPA, has produced countless groundbreaking innovations that went on to be commercialized. DARPA was created in the aftermath of Russia’s launch of the first space satellite, Sputnik, with the primary goal of ensuring that the United States military maintained its technological superiority. As time progressed, inventions churned out of DARPA—from global positioning satellite systems, to the iPhone’s Siri, to the Internet—have become integral parts of our society and changed the way everything, from our daily life to the economy, operates.

In recent years a similar debate over whether the United States was losing its technological prowess in the realm of alternative-energy innovation has gained momentum. The National Academies report from 2005, “Rising Above the Gathering Storm”, highlighted the need for the United States government to stimulate high-impact, clean-energy innovation on our soil. One recommendation was to create an energy agency focused on catalyzing groundbreaking clean energy technologies towards market adoption. Thus, in 2007 Congress authorized the creation of the Advanced Research Projects Agency – Energy, or ARPA-E, with the first appropriation coming in 2009 from the American Recovery and Reinvestment Act. Housed within the Department of Energy, ARPA-E’s main goal is to invest in, develop, and commercialize “transformational energy technologies” that “disrupt the status quo”.

In its first year ARPA-E received a staggering 3,700 concept papers (relatively brief overviews of the proposed technology’s merits)—well above the 500 to 800 expected. This number underscores the need for ARPA-E in today’s advanced clean energy landscape. The first funding-opportunity announcement led ARPA-E to fund 37 projects worth a collective $151 million. ARPA-E plays an important role in the commercialization process of these capital-intensive energy innovations by bridging the potential first “valley of death”—a common term for the funding gap between a technological concept and a working prototype—for winning proposals. It is ARPA-E’s mission to find and fund the formation of advanced energy technologies, while filling a funding gap that the private sector usually finds too risky to undertake.

Due to the technical progress of ARPA-E’s early stage projects, millions in venture capital and corporate dollars have been drawn off the sidelines. For example, OPX Biotechnologies, an ARPA-E project in the Electrofuels program armed with a $6 million grant, garnered $36.5 million in private funding last year after showing progress—a substantial return on the dollar for federal investment. Similarly, FloDesign Wind Turbine, an ARPA-E project developing a completely new wind turbine inspired by jet engines, announced a fundraising round of $27 million from private investors. As ARPA-E demonstrates its ability to bridge valleys of death for early stage clean energy companies, examples like OPX Biotechnologies and FloDesign Wind Turbine should become more common.

On the surface, and in name, ARPA-E resembles its predecessor, DARPA. In terms of similarities, the agencies have comparable structural organizations and envision concepts for technology programs in a similar manner. But beneath the hood, ARPA-E and DARPA differ in the way in which technologies move along the commercialization path. DARPA was able to move its innovations from concept to reality in large part because it leveraged the procurement power of the Department of Defense.  ARPA-E, however, lacks a similarly large and guaranteed source of demand or procurement ability in the Department of Energy.  Furthermore, given how complex and established the current energy landscape is in the United States, there is not a simple answer on how ARPA-E can directly transfer its technologies to energy markets.

Using DARPA as a relevant backdrop, the goal of this article will be to explain these salient differences, and identify potential strategies for ARPA-E technologies to experience similar commercial success. As this paper will explore, the differences between ARPA-E and DARPA include the availability of first adopters of technology; how their projects identify and penetrate markets; and whether the technologies each agency funds have to compete on price. By providing a comparative analysis of DARPA and ARPA-E, this paper examines the differences between the two agencies—and will glean from this analysis a few recommendations on how ARPA-E can amplify its efforts.

 

DARPA and the power of ‘market pull’

 As time progressed after DARPA’s launch in 1958, the agency developed a number of unique attributes for a government-funded R&D agency. DARPA’s organizational structure, mission objectives, and insulation from congressional inspection allowed the agency to gain a critical role in “seeding and encouraging” new technology fields in the United States.  Unlike most other R&D funding agencies, DARPA set its sights on a technological vision and would then source and fund technologies that had the potential to reach these goals. Since its onset, DARPA has been envisioning these “white spaces” in various transformative technology areas.

When thinking about next steps for the technologies DARPA funds, the agency streamlines the R&D process from discrete steps to more of a “connected R&D” method. In general, DARPA employs four stages of innovation to ensure that technologies are on the right track.  In an analysis done by William Bonvillian from MIT’s Washington Office and Richard Van Atta from Institute for Defense Analyses, this track is seemingly broken up into the following four stages:

1) breakthrough/R&D stage
2) prototype/demonstration stage
3) incremental advances stage
4) initial market-creation stage

Technologies developed by DARPA have consistently led to both military and commercial applications. However, because many DARPA projects originate to serve Defense Department needs, DARPA can effectively rely on military services’ procurement to provide the needed market pull to ensure successful commercialization of its technologies, as the fourth bullet above indicates. In the case of the budding information technology industry, both DARPA and the National Science Foundation contributed about 30 percent of overall federal R&D in 1990. But by 2005 DARPA represented only 6 percent while the National Science Foundation contributed 35 percent in overall federal R&D for information technology. In a time span of just 15 years, we can numerically see how DARPA’s catalytic role in the industry dwindled as the industry itself matured.

On the civilian side, DARPA-funded research can be traced to numerous technology products now sold by private-sector companies. Some examples include Xerox’s Ethernet, Apple’s desktop computing and graphical user interface, and Cisco’s Internet-protocol routers. Even today’s internet giants, such as Facebook and Google, can trace some of their intellectual property back to DARPA-funded research originally designed for military application and procurement.

In the case of Sun Microsystems, an information technology, computer software, and computer hardware company recently acquired by Oracle, DARPA was instrumental in its spinout to the private sector during its early stages.  Sun Microsystems had licensed its workstation-board technologies from DARPA to begin selling them commercially.  In 1982 Sun Microsystems had raised $4.5 million in venture capital funding, but DARPA was critical in engaging with academic institutions to encourage them to purchase these workstation computers — providing that critical market pull.  Stanford, UC Berkeley, and Carnegie Mellon University actually accounted for 80 percent of orders received in Sun Microsystems’ first year, in large part thanks to DARPA’s funding to these institutions.

DARPA also created opportunities for universities and laboratories to work together, with the hope that these partnerships would lead to the scaling of these technologies.

The Strategic Computing Initiative, a 10-year DARPA project that began in 1983, is another example of linking industrial, university, and research entities to work together and form “innovation networks.”  The goal of this initiative was to bridge together advances in a number of different technology areas and apply them to technological needs of the military.  In particular, the program looked to develop advanced machine-intelligence technology by leveraging advancements made with faster chips and other recent computing advances.

The Strategic Computing Initiative began out of the desire from the military to apply advanced computing strategies to their needs. Specifically, the Army desired an autonomous land vehicle, the Navy desired an aircraft carrier battle-management system, and the Air Force desired an automated pilot associate. At the time, folks at the Strategic Computing Initiative realized that recent advances in microelectronics, computer science, and artificial intelligence should be coupled to undertake these technological requests. Though the initiative had its challenges, the main point is that the integration of milestones was a central characteristic of the program. Visionary demand-pull factors played an important role with the initiative: The military applications led to requirements for intelligent functions, the intelligent functions drove the requirements for the system architectures, and the system architectures set forth the requirements for suppliers of microelectronics.

In order to ensure the long-term success of ARPA-E funded projects, one must not overlook the issues surrounding stage 4: initial market creation. Considering the agency is only three years old, the main focus thus far for ARPA-E has been to bring technologies from stage 1 to stage 2. Overall, the energy industry has a low appetite for risk and existing firms usually push for incremental advances. Furthermore, though utilities are now investing in renewable energy technologies, they are also diversifying their energy portfolio to spread their risk. In the above examples of Sun Microsystems and the Strategic Computing Initiative, DARPA’s synergistic orchestration of both technical and market forces to develop desired applications for the military is a relevant model for the advanced energy space.

 

DARPA vs. ARPA-E: Compare and contrast

 

Similarities

ARPA-E imported many of the important qualities that DARPA exhibited.  In regards to the organizational structure, programs developed within each agency have short timeframes (three to five years); these programs, all focused on different yet sometimes related technology areas, have to generate results or else they are terminated.  Within their respective departments, Defense and Energy, both agencies are insulated from bureaucracy in that they report directly to the Office of the Secretary.  This is a significant factor in eliminating bureaucratic lag time that many other offices have to go through, and enables both ARPA-E and DARPA to operate at a more efficient pace.

The unique organizational structure allows both agencies to act relatively swiftly in making personnel and project-management decisions, accelerating the motion of technologies down the innovation pipeline. ARPA-E followed in DARPA’s footsteps in utilizing a challenge-based “right-left” research model, where the end-result goal sits on the right, and the technology pathways to get there begin on the left. Program directors contemplate technology solutions in different sectors (right), and then source cutting-edge technology research projects to achieve these results (left). This model contrasts with the “peer-review” process that most R&D agencies employ to deliver grant funds to qualified research projects, in which proposals are reviewed by experts in the particular research area. Rather than focusing on incremental advances, ARPA-E and DARPA prioritize envisioning unique “white spaces” for future technological success.

Differences

But despite their similarities, there are also significant differences between the two agencies. (See Appendix 2) The projects ARPA-E funds are entering what can be described as a “complex established legacy sector.” The energy sector is a highly complicated and diversified industry, with energy sources, regulations, and infrastructure all varying at regional, state, and municipal levels.  While DARPA technologies were conceived with the idea that there were no current technological solutions, ARPA-E is developing technologies to alter the status quo of the existing energy economy, where much of the end product is a uniform commodity (liquid fuels or delivered electricity). These are two very different missions.

Case in point: The idea of unmanned air vehicles, now being heavily used by the United States for surveillance and attacks across the Middle East and Asia, was conceptualized at DARPA because there was no current technology with the capabilities desired. On the other hand, current ARPA-E funded projects include modernizing existing grid infrastructure (the Grid-Scale Rampable Intermittent Dispatchable Storage, or GRIDS, and Green Electricity Network Integration, or GENI, programs), improving end-use energy efficiency (the Building Energy Efficiency Through Innovative Thermodevices, or BEETIT, and Agile Delivery of Electrical Power Technology, or ADEPT, programs), altering transportation methods (the Batteries for Electrical Energy Storage in Transportation, or BEEST, and Methane Opportunities for Vehicular Energy, or MOVE, programs), and changing the domestic fuel paradigm (the Plants Engineered to Replace Oil, or PETRO, and Electrofuels programs). Thus, from a conceptual standpoint, unmanned aerial vehicles represented a drastically new technological direction, whereas ARPA-E programs are focused on drastically improving the existing way in which we “generate, store, and utilize” energy.

Another obstacle ARPA-E faces, and DARPA did not, is that it works without a major first adopter. Technologies advanced through DARPA had a natural, albeit not guaranteed, customer in the Department of Defense. Since the Department of Energy isn’t a major direct purchaser or procurer of technology or equipment, ARPA-E doesn’t have a parallel source of demand for its technologies. This deprives ARPA-E technologies of a key “market-pull” effect so critical to commercialization.

The importance of market demand in pulling innovations out of development and into the market cannot be stressed enough. ARPA-E technologies must enter a crowded market and ultimately compete on price with the legacy fossil and nuclear sectors, while DARPA technologies do not suffer the same market competition. In the case of unmanned air vehicles, the primary goal of this technology wasn’t necessarily to be cost competitive, but to give the military advanced surveillance and striking capabilities beyond what was currently available. In the case of ARPA-E’s Batteries for Electrical Energy Storage in Transportation (BEEST) program, on the other hand, the goal of the program isn’t just to fund batteries with higher energy densities, but also to make sure that these batteries cost 30 percent less than today’s vehicle batteries. In order for projects in ARPA-E’s BEEST portfolio to achieve eventual consumer adoption, they must display an ability to compete on price in existing markets. This cost component makes ARPA-E’s job considerably more challenging than that of DARPA.

 

Addressing the market challenge

Despite these challenges, ARPA-E is working to smooth the path to market and to testing under real-world settings. In 2011, ARPA-E signed a memorandum of understanding with the Electric Power Research Institute and Duke Energy to provide a test-bed for many of its technologies. The Electric Power Research Institute’s main focus is to conduct research programs on electricity across a number of different energy sources. In addition, the institute works heavily with electric utilities, and its members represent over 90 percent of the electricity generated in the United States. Duke Energy is one of the largest energy companies in the country and delivers electricity across five states to roughly four million customers.

This agreement is an important next step for ARPA-E projects, catered for (but not limited to) the GRIDS, GENI, and ADEPT programs.  Duke Energy plans to study the performance of ARPA-E technologies at the company’s own test-bed facilities for renewable energy, smart grid, and energy storage. The agreement should also pave the way for ARPA-E awardees and Electric Power Research Institute members to deploy and test technologies at agreed-upon sites and facilities. Testing and demonstration partnerships that can integrate multiple ARPA-E programs are a great way for these technologies to display their potential—all while getting critical feedback.

While this one-off deal is great, it represents only a subset of the market. Utilities, the obvious bulk purchasers of energy in the private sector, tend to be conservative and have a vested interest in providing the lowest possible rate for their ratepayers. Similarly, major corporations or government agencies can’t and won’t adopt next-generation energy technologies if there isn’t a positive bottom-line return on investment. So despite the promise of ARPA-E technologies, the agency still has the additional challenge of navigating these barriers. Thus, despite the important role that ARPA-E plays in the innovation-commercialization pipeline, there is a need for additional federal policies to help foster demand for the clean energy technologies ARPA-E was designed to develop.

 

Prioritizing energy R&D: comparisons with the Manhattan Project and Apollo Program

The federal government has historically funded research to guide the innovation and development of a variety of technological and scientific advancements that meet critical public or national needs.  In defense, environmental protection, public health, and elsewhere, leaders of both parties have historically agreed that innovation and technological advancement have significant public benefits, and have invested public resources that reflect this common good.  In defense and in many other areas, the seeds that these federal investments have planted have grown to become major economic drivers in industry, cure fatal diseases in health, and iconize American ingenuity in space exploration. Without these sage investments, our country would likely not have enjoyed the economic prosperity that we have seen over the past 60 years.

As many scientists, policymakers, and economists have noted for years, our nation is at a critical juncture when it comes to the global clean-energy race. Unfortunately, annual allocation of federal R&D for energy has decreased proportionally when compared to other research areas over time. Federal R&D investments in health care and defense have grown consistently, while we have seen energy R&D fall to a fraction of overall federal R&D since 1980. (See Appendix 1)  If the United States desires to be a leader in the globalizing clean-energy landscape, a major commitment needs to be made now for funding advanced energy R&D.

Major government-led pushes for new technologies are nothing new. In the case of defense and space exploration, the federal government has promoted and directed major R&D efforts on scales far more significant than we’ve seen in energy. Specifically, the Manhattan Project was created to develop the atomic bomb in the context of World War II, while the Apollo Program was formed to send astronauts to the moon in the context of the Cold War.

Both received a level of focused public investment that we have not seen with energy, though they also had other attributes that made them different from the case of energy. For example, both the Manhattan Project and the Apollo Program were major pushes to develop and implement a relatively small number of physical machines for very specialized purposes—to end World War II and to put Americans on the moon, respectively. Furthermore, both were put on very strict timelines given that the United States was competing with other nations in both cases to develop those specialized capabilities. With the facts and effects of climate change still disputed by incumbent energy interests within Washington, the parallel imperative for funding alternative energy sources is not felt strongly in today’s polarized political climate. As a result, we haven’t seen a similar injection of funds — at such scales as the Manhattan and Apollo projects — into the renewable-energy sector.

But looking at funding amounts across various R&D initiatives isn’t a fair comparison in the case of energy.  Two major market failures have hindered widespread deployment of nascent clean energy technologies: an underinvestment in R&D and an absence of mechanisms to address environmental externalities. The former market failure relates to the aforementioned discussion of “technology-push” factors, such as increasing clean energy R&D investments; the latter market failure can be denoted as the “demand-pull” factor for clean energy technologies.

Many prominent academics, think tanks, politicians, and venture capitalists have called for appropriations to ARPA-E to significantly increase, and the benefits of having a larger and thus more influential agency are well discussed and understood.

However, the amount of funding is only part of the battle for ARPA-E. Both the Manhattan Project and the Apollo Program had concrete goals and an end user for their respective technological developments. DARPA’s projects have enjoyed success with the military as a guaranteed customer for successful technologies. These comparisons suggest that ARPA-E’s technology-push approach would be more effective if complemented with a demand-pull effect.

 

Conclusions and recommendations

In general, policymakers must realize that the combination of climate change, energy security, and economic growth provides a compelling reason to act to accelerate clean-energy innovation. Unfortunately, funding for energy R&D has dropped over time and does not compare to the massive undertakings of the likes of the Apollo Program or the Manhattan Project.

But technology-push factors in R&D can only be truly effective if they are complemented by market-pull factors. Considering ARPA-E funds capital-intensive projects that need to be incorporated into existing legacy energy markets and infrastructure in a way that DARPA does not, identifying partnerships and opportunities for follow-on funding and demonstration to advance these technologies in the short term is of utmost importance.

The above analysis gives insight on the origins of ARPA-E, and a present day comparison with DARPA. Again, ARPA-E’s primary focus thus far has been to find and fund technologies that are game changing and transformative—but that is only the beginning of the story. Because ARPA-E’s goal is to help its grantees translate technology for the first time from lab scale to market scale, there is a need for the agency to define commercialization roadmaps for the projects it funds. Though many of ARPA-E’s projects may have technological promise, coupling ARPA-E’s funding for early-stage innovation with strategic partnerships to create market demand, both in the public and private sector, is crucial to the longstanding success of its projects—and perhaps to the existence of the agency itself.

Though it was easier said than done, DARPA was able to strategically leverage its role in the Department of Defense to help prioritize its own research programs. Despite the fact that DARPA’s mission has changed throughout its history, its projects usually had narrowly defined objectives. This enabled the agency to take advantage of recent technological breakthroughs, build upon existing research, and strategically integrate technological milestones. The Strategic Computing Initiative was an example of DARPA’s role in solving unique military problems—with an eventual commercial application at the end. At the same time, having these interactions with end users allowed DARPA to display its advances to and iterate with decision makers in the military establishment.

For the first time, many ARPA-E projects are now in a transition period approaching options for manufacturing scalability and market viability for their technologies. Broadly speaking, the government can help facilitate this transition by providing beneficial standards, directly opting to procure advanced energy technologies, enacting a price on carbon, and/or setting a national clean energy standard. These broader demand-pull factors may have a more direct effect as ARPA-E-funded technologies mature down the line. Since ARPA-E is in its youth, it is timely to understand how the agency is approaching near-term ‘handoff’ mechanisms.

For these early-stage technologies, the near-term solutions include providing viable pathways to market by targeting first adopters, providing opportunities for technology demonstration, and identifying follow-on funding opportunities for potential customers as well as producers of the new energy technologies. Both the government and the private sector can help in this area. Furthermore, given that ARPA-E’s technology programs are diverse, from advanced electric vehicle batteries to innovative carbon-capture technologies, these pull-factors are unique to each of ARPA-E’s program areas.

In the energy landscape there is a clear gap in government support for additional de-risking of these advanced technologies. Specifically, moving these various technologies into demonstration and commercial pilot plant stages should be emphasized. Yet, ARPA-E itself may not be the place to do this work; the agency should stay true to its transition-stage role in the innovation pipeline. More formalized and recognized relationships with other parts of the Department of Energy, such as the Office of Energy Efficiency and Renewable Energy, and more partnerships, such as the established agreement with Duke Energy and the Electric Power Research Institute, would allow ARPA-E to continue its role in finding and funding transformative technologies.

Even though ARPA-E may be making significant technological advances, there are clearly many hurdles on the roads to commercialization. Given the number of technology areas ARPA-E is investing in, there isn’t a cookie-cutter approach to understanding how these programs will succeed beyond the laboratory. Looking toward the future while building upon experience, larger scale market-pull factors must be coupled with the technology push and translational work occurring at ARPA-E. If ARPA-E can further cement linkages and handoff mechanisms all while focusing on the high-risk, high-payoff R&D it was built for, the agency will lay a foundation on which it can build a visionary and respected reputation of its own.

Varun Mehra has a degree in Mathematics/Economics from University of California, Los Angeles, and is currently a DOE Scholar with ARPA-E’s Technology-to-Market team. Opinions expressed in this article are those of the author and do not necessarily represent the views of ARPA-E or any government agency. Image via BigStockPhoto.

Download the full pdf with endnotes here.

Duck, Rabbit, Gas Well https://scienceprogress.org/2013/01/duck-rabbit-gas-well/ https://scienceprogress.org/2013/01/duck-rabbit-gas-well/#comments Thu, 17 Jan 2013 16:22:39 +0000 Adam Briggle http://scienceprogress.org/?p=27735 Thomas Kuhn’s The Structure of Scientific Revolutions, now in its 50th year of print, offers a unique way to understand the debate over fracking. Kuhn speculates that two people might “see different things when looking at the same sorts of objects.” Jane looks at a gas pad site and sees an environmental apocalypse. Dick looks at the same site and sees a clean energy boon. Jane sees corporate colonialism. Dick sees an energy independent America.

It’s like the duck-rabbit illusion. The human mind makes the image into a picture of something. Some see a rabbit and others see a duck, but no one sees just lines or a bunch of parts. This is a basic tenet of Gestalt psychology. So too, the pad site must appear as something or other to human minds that are simultaneously perceiving and making it.

When it’s just rabbits and ducks we are comfortable with ambiguous multi-stability. But people who switch readily back and forth between ducks and rabbits are immovably either pro or anti fracking. When it comes to energy, they insist, there is one unified and stable reality. With ducks and rabbits, there are only interpretations. But with fracking, there is the truth and there are lies.

Dick sees environmentalist hysteria clouding science with fear. Jane sees corporate greed manipulating and stifling scientific truth. I think Jane has a stronger case. As I’ve argued before, we can’t create billions of dollars of vested corporate interests around a new technology and expect an objective scientific assessment of its risks.

But if we take Kuhn’s point seriously, we have to question what ‘objective science’ means. Dick and Jane are opposites in everything save this: they both think there is only one right way to see what they are looking at and that science gives us that sight. Science provides truths that correspond with a mind-independent reality. New York Governor Andrew Cuomo noted that when it comes to his state’s rules for fracking he will “Let the science dictate the conclusion.”

Science will compel the right policy action. For those inspired by Kuhn, this sounds as plausible as the tail wagging the dog. The ‘right’ action is not as simple as settling on the facts. Everything depends on perspectives, which are deeply rooted in psychological worldviews. Or to play loosely with Kuhn’s own term, it all hinges on ‘paradigms.’ Jane will see any scientific study through her paradigm and Dick through his. Paradigms precede and preconfigure perception. Jane sees ducks regardless of any science that might suggest rabbits and vice versa for Dick.

Much has been made of the lack of scientific understanding about fracking (for which the industry is partly to blame). That needs to be remedied through more monitoring, and certainly outright lies and misconduct must be dispelled.

But as more scientific studies accumulate, gridlock may not disappear. As science policy scholar Daniel Sarewitz has argued, large bodies of knowledge “can be legitimately assembled and interpreted in different ways to yield competing views of the ‘problem’ and of how society should respond.” One can always find a “set of scientifically legitimated facts” to support any values position in an environmental controversy.

This isn’t about one side having sound science and the other having junk science. Sarewitz is saying that nature is sufficiently complex to provide “an excess of objectivity,” where all sides can legitimately paint a picture about what fracking means. More science may actually make fracking gridlock worse, because uncertainty is not just a lack of scientific understanding but also a “lack of coherence among competing scientific understandings.”

We are all awaiting the statewide health science assessment in New York and the nationwide water quality assessment being conducted by the EPA.  But it may be that these studies will produce intensified controversy rather than consensus on a shared picture of reality.

Science won’t dictate any conclusions. It is all driven by deep-seated beliefs. So, what are the warring paradigms in the fracking debate?

Jane’s is a “precautionary” paradigm. It posits a transcendent nature that sets limits on what humans can do. As I’ve written about before, precautionaries live in a steady-state world where nature is being exhausted and is soon to bring human civilization down with it. Limits are the name of the game. Risks should be avoided and uncertainty should be reduced via scientific inquiry prior to action.

Dick’s is a “proactionary” paradigm. It posits humanity as co-creators of nature with possibilities yet to be realized. Proactionaries live in an open-ended world where natural and artificial capital feed off of one another. Striving is the name of the game. Risk should be generally encouraged and uncertainty should be reduced via trial-and-error fixes.

As Kuhn notes, “When paradigms enter, as they must, into a debate about paradigm choice, their role is necessarily circular. Each group uses its own paradigm to argue in that paradigm’s defense.”

The question for those inspired by this Kuhnian read of the debate is not which rules should govern fracking. That suggests some objective way (outside of any paradigm) of discerning the ‘right’ rules. The question is: Who gets to write the rules – those who see ducks or those who see rabbits? And to that question there are no easy answers.

Adam Briggle is a faculty fellow at the University of North Texas Center for the Study of Interdisciplinarity and co-author with Carl Mitcham of Ethics and Science: An Introduction (Cambridge University Press).

Why DDoS Attacks Are the Wrong Way to Honor Aaron Swartz https://scienceprogress.org/2013/01/why-ddos-attacks-are-the-wrong-way-to-honor-aaron-swartz/ https://scienceprogress.org/2013/01/why-ddos-attacks-are-the-wrong-way-to-honor-aaron-swartz/#comments Tue, 15 Jan 2013 22:20:18 +0000 Andrea Peterson http://scienceprogress.org/?p=27746

When internet folk hero Aaron Swartz’s family released a statement blaming the Massachusetts U.S. Attorney’s office and MIT for decisions that contributed to his suicide, it was almost inevitable that cyber-vigilante collective Anonymous would react. By Sunday night MIT’s website was taken down by a distributed denial of service, or DDoS, attack – a popular Anonymous tactic that works by overwhelming the host’s server with more requests than it can process. But DDoS attacks make a poor tribute to the freedom of expression and information Swartz championed.

The risk of being downed by a DDoS attack isn’t new, but its adoption as a tool for cyber disruption has raised its profile considerably in recent years. Russian hackers used DDoS attacks to take down Estonian websites in 2007, and hackers widely believed to be backed by the Iranian government leveraged malware-infected data servers to take out major U.S. banks this past winter.

It’s also a favorite of Anonymous: They let loose DDoS attacks on Paypal in 2010 for refusing to process Wikileaks donations (reportedly causing causing £3.5 million in costs), and used them to take down sites belonging to the hateful Westboro Baptist Church when they picket Sandy Hook victim’s funerals. Members of the group even started a We The People petition last week asking for DDoS attacks to be considered a “legal form of protesting,” arguing DDoS attacks are “no different than any “occupy” protest. Instead of a group of people standing outside a building to occupy the area, they are having their computer occupy a website to slow (or deny) service of that particular website for a short time.”

But occupying an area doesn’t mean that area ceases to exist for others. And while, yes, advances in technology change how we express ourselves, Anonymous’ claim that DDoS attacks are the protest of the future ignores a simple fact about the nature of the attacks: They are a tool of harassment whose major outcome is censorship.

DDoS attacks make sites inaccessible from the web, potentially silencing the owner and users of a site and causing real economic harm if e-commerce is disrupted. (The latter as evidenced by Paypal’s aforementioned troubles.)  In fact, they even share some elements with the SOPA and PIPA copyright enforcement proposals Swartz fought to oppose: While SOPA and PIPA threatened to take down entire sites for copyright infringements of individual users, a DDoS attack allows the actions of some internet users to silence others. Both fundamentally block freedom of expression.

It’s hard not to admire some of Anonymous’ goals like bringing publicity to rape cases, going after those mocking Sandy Hook families, or protecting the memory of one of the Internet’s most tragic activists. But next time they want to honor Aaron Swartz they should choose a tactic that celebrates his work to build the internet up, not try to tear the place down.

Working Toward a More Fitting Tribute to Aaron Swartz than JSTOR’s Register & Read https://scienceprogress.org/2013/01/working-toward-a-more-fitting-tribute-to-aaron-swartz-than-jstor%e2%80%99s-register-read/ https://scienceprogress.org/2013/01/working-toward-a-more-fitting-tribute-to-aaron-swartz-than-jstor%e2%80%99s-register-read/#comments Mon, 14 Jan 2013 18:48:24 +0000 Andrea Peterson http://scienceprogress.org/?p=27725 Just two days before internet folk hero Aaron Swartz took his own life, online journal archive JSTOR announced an expansion of its free-access “Register & Read” program, from 76 publishers to over 700. The move is a crack in the for-profit academic publishing stronghold’s armor, but not the paywall-demolishing revolution of the open-access movement’s dreams.

Swartz, 26, was a prominent activist in that open-access movement, advocating that academic research funded by taxpayers should be made available to the taxpayers for free online. At the time of his death, Swartz was facing over 30 years in prison for allegedly downloading nearly 5 million academic documents from JSTOR in what is thought to have been the first step in a radical plan to liberate the data. There is widespread speculation that the severity of the punishments he faced may have been a factor in his decision to take his life.

The Register & Read program allows anyone to read up to three articles every two weeks online from selected offerings in exchange for some personal information, such as occupation and institutional affiliation. JSTOR then shares that information with publishers as a thank you for offering their journals through the program. Downloads and the full online catalogue remain only available to associated library patrons and subscription purchasers. And it’s very expensive: American academic libraries spent about $1 billion on electronic access to serial subscriptions in 2008 and prices have just risen since.

And more often than not, the research those subscriptions guard was funded by taxpayers. Academic institutions performed 53 percent of total basic research and 36 percent of all research funded by the U.S. government in 2009. And in academic publishing, not only are authors rarely compensated, they’re frequently charged by publishers for the privilege of being published. Yet, despite the public investment in much of the research most publishers are for-profit—with the largest, Elsevier, making US $1.1 billion in profits in 2011, a profit margin of around 35 percent.

If this whole system seems a little unbalanced or unfair to you, you’re not alone. Discontent about the academic publishing system has led to a flourishing open-access movement in university circles, with Ivy League schools, such as Harvard University, and public institutions, such as the University of Kansas, leading the way. In his own right, Swartz promoted policies to provide unrestricted access to peer-reviewed research online through advocacy and civil disobedience. By the end of 2012 the number of open-access journals had increased by 1,133, to more than 8,461, and the number of online repositories allowing public access to university research had grown by 449, to more than 3,000 worldwide.

But instead of working with this flourishing movement, academic publishers have increased the price of online journal subscriptions to levels libraries increasingly cannot afford. In April 2012 the Harvard Faculty Council released a statement on the crisis’ impact on their library system:

“We write to communicate an untenable situation facing the Harvard Library. Many large journal publishers have made the scholarly communication environment fiscally unsustainable and academically restrictive. This situation is exacerbated by efforts of certain publishers (called “providers”) to acquire, bundle, and increase the pricing on journals.

Harvard’s annual cost for journals from these providers now approaches $3.75 million. In 2010, the comparable amount accounted for more than 20 percent of all periodical subscription costs and just under 10 percent of all collection costs for everything the Library acquires. Some journals cost as much as $40,000 per year, others in the tens of thousands. Prices for online content from two providers have increased by about 145 percent over the past six years, which far exceeds not only the consumer price index, but also the higher education and the library price indices. These journals therefore claim an ever-increasing share of our overall collection budget. Even though scholarly output continues to grow and publishing can be expensive, profit margins of 35 percent and more suggest that the prices we must pay do not solely result from an increasing supply of new articles.

The Library has never received anything close to full reimbursement for these expenditures from overhead collected by the University on grant and research funds.”

Harvard’s situation is far from unique. The University of California San Francisco Library spends 85 percent of their collection budget on journal subscriptions—yet “[d]espite cancelling the print component of more than 100 journal subscriptions in 2012 to keep up with a budget reduction, [their] costs still increased by 3 percent.” Libraries are now being put in a position where they must reduce their physical acquisitions in order to stay subscribed to electronic archives—even though that means they’re investing in limited access, not building their actual collection.

There have been attempts to resolve the crisis through legislation. The bipartisan Federal Research Public Access Act, or FRPAA, for example, would require free online public access to most publicly funded research in the United States and would strengthen the National Institutes of Health’s open-access mandate by cutting the maximum embargo between publication and public availability in half, from twelve months to six.

The legislation did not make it out of committee in the 112th Congress, but for-profit publishers have nonetheless moved to smear efforts to broaden access to academic publishing. In the spring of 2012, the president and CEO of the Association of American Publishers dismissed the public interest when he claimed: “FRPAA is little more than an attempt at intellectual eminent domain, but without fair compensation to authors and publishers.”

To be clear, that is a representative of a sector of the publishing industry that charges many authors to publish their work and charges universities for access to the finished product, all while claiming to represent the best interests of both. Meanwhile, in memory of Aaron Swartz, academic authors around the world are uploading copies of their own copyrighted papers to share online.

Given JSTOR’s nonprofit status, there is perhaps hope that the expansion of the Register & Read program is just a first step toward greater transparency and open access. And Swartz’s death may well harden the resolve of academics and online activists to press the publishing industry to find new and more effective ways of sharing publicly funded research with the world.

But for now, JSTOR’s peace offering (access to just three articles every two weeks, drawn from a limited selection of its catalogue, with reader choices tracked) shows how far the industry remains from the real conversation our society needs to have about who should be able to access the research we all pay for.

Andrea Peterson is the Social Media and Analytics Editor at American Progress.

White House Announces U.S. Government Will Not Build Death Star https://scienceprogress.org/2013/01/white-house-announces-u-s-government-will-not-build-death-star/ https://scienceprogress.org/2013/01/white-house-announces-u-s-government-will-not-build-death-star/#comments Mon, 14 Jan 2013 15:56:04 +0000 Sean Pool http://scienceprogress.org/?p=27705

"Sure, the Death Star is a giant superlaser that can blow up planets. But over 1,000,000 of our employees just like to call it 'home.'"

It is official: the United States government will not build an orbital battle station with planet-destroying capabilities, according to an official post by Paul Shawcross, Chief of the Science and Space Branch at the White House Office of Management and Budget.

Last November, Star Wars fans filed a petition asking the White House to “secure resources and funding, and begin construction of a Death Star by 2016,” citing national defense, construction, and job creation as major benefits. The Death Star is the moon-sized, planet-destroying space station that serves as the major source of drama in the conclusion of the popular film Star Wars.

The clever petitioners used the Obama White House’s “We the People” online petition tool. To encourage public engagement in policymaking, the administration has pledged to respond to any petition that secures at least 25,000 signatures. With 34,400 signatures as of this writing, the Death Star petition was entitled to a response.
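The response rule itself is simple enough to express in a few lines of Python. The sketch below is purely illustrative: only the 25,000-signature threshold and the 34,400-signature count come from the paragraph above, while the function name and data structure are hypothetical stand-ins, not anything used by the actual We the People platform.

    # Illustrative sketch of the We the People response rule described above.
    # The threshold (25,000) and the Death Star count (34,400) come from the
    # article; the names and structure here are hypothetical stand-ins.

    RESPONSE_THRESHOLD = 25_000

    def requires_response(signatures: int, threshold: int = RESPONSE_THRESHOLD) -> bool:
        """Return True if a petition has enough signatures to earn an official response."""
        return signatures >= threshold

    death_star_signatures = 34_400
    print(requires_response(death_star_signatures))  # True: the petition earned a reply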

Fiction aside, Shawcross used the opportunity not just to reiterate the Administration’s anti-planet-destroying policies, but also to do some clever advocacy for public investments in innovation, showing off some of the United States’ very real and very impressive accomplishments in science, technology, and engineering. Shawcross writes:

Look carefully (here’s how) and you’ll notice something already floating in the sky — that’s no Moon, it’s a Space Station! Yes, we already have a giant, football field-sized International Space Station in orbit around the Earth that’s helping us learn how humans can live and thrive in space for long durations.

The International Space Station is a dedicated U.S. National Lab, built and operated jointly by more than a dozen countries, and is home to countless science research experiments in astrobiology, life science, physical science, materials science, space weather, meteorology, and other fields.

Shawcross went on to discuss the two roving robot science labs the U.S. has successfully landed on Mars (another nod to the Star Wars droids R2-D2 and C-3PO), floating robot space assistants, two American spacecraft currently leaving the Solar System, and other American innovations.

“We are living in the future!” Shawcross concludes. “Enjoy it. Or better yet, help build it by pursuing a career in a science, technology, engineering or math-related field.”

Kudos to an administration official with both a sense of humor and a healthy sense of opportunism for turning a prank into an opportunity to educate the public about the significance of our public investments in science, engineering, and innovation. You can read the full, Star Wars Easter-egg-laden petition response below:

Official White House Response to: “Secure resources and funding, and begin construction of a Death Star by 2016.”

This Isn’t the Petition Response You’re Looking For

By Paul Shawcross

The Administration shares your desire for job creation and a strong national defense, but a Death Star isn’t on the horizon. Here are a few reasons:

  • The construction of the Death Star has been estimated to cost more than $850,000,000,000,000,000. We’re working hard to reduce the deficit, not expand it.
  • The Administration does not support blowing up planets.
  • Why would we spend countless taxpayer dollars on a Death Star with a fundamental flaw that can be exploited by a one-man starship?

However, look carefully (here’s how) and you’ll notice something already floating in the sky — that’s no Moon, it’s a Space Station! Yes, we already have a giant, football field-sized International Space Station in orbit around the Earth that’s helping us learn how humans can live and thrive in space for long durations. The Space Station has six astronauts — American, Russian, and Canadian — living in it right now, conducting research, learning how to live and work in space over long periods of time, routinely welcoming visiting spacecraft and repairing onboard garbage mashers, etc. We’ve also got two robot science labs — one wielding a laser — roving around Mars, looking at whether life ever existed on the Red Planet.

Keep in mind, space is no longer just government-only. Private American companies, through NASA’s Commercial Crew and Cargo Program Office (C3PO), are ferrying cargo — and soon, crew — to space for NASA, and are pursuing human missions to the Moon this decade.

Even though the United States doesn’t have anything that can do the Kessel Run in less than 12 parsecs, we’ve got two spacecraft leaving the Solar System and we’re building a probe that will fly to the exterior layers of the Sun. We are discovering hundreds of new planets in other star systems and building a much more powerful successor to the Hubble Space Telescope that will see back to the early days of the universe.

We don’t have a Death Star, but we do have floating robot assistants on the Space Station, a President who knows his way around a light saber and advanced (marshmallow) cannon, and the Defense Advanced Research Projects Agency, which is supporting research on building Luke’s arm, floating droids, and quadruped walkers.

We are living in the future! Enjoy it. Or better yet, help build it by pursuing a career in a science, technology, engineering or math-related field. The President has held the first-ever White House science fairs and Astronomy Night on the South Lawn because he knows these domains are critical to our country’s future, and to ensuring the United States continues leading the world in doing big things.

If you do pursue a career in a science, technology, engineering or math-related field, the Force will be with us! Remember, the Death Star’s power to destroy a planet, or even a whole star system, is insignificant next to the power of the Force.

Paul Shawcross is Chief of the Science and Space Branch at the White House Office of Management and Budget.

Tell us what you think about this response and We the People.

Sean Pool is the Managing Editor of Science Progress. Image and caption courtesy Death Star PR.

 

Supreme Court Allows Assault On Stem Cell Research To Die https://scienceprogress.org/2013/01/supreme-court-allows-assault-on-stem-cell-research-to-die/ https://scienceprogress.org/2013/01/supreme-court-allows-assault-on-stem-cell-research-to-die/#comments Fri, 11 Jan 2013 23:22:25 +0000 Ian Millhiser http://scienceprogress.org/?p=27700 Ian Millhiser via Think Progress.

Two years ago, Reagan-appointed Chief Judge Royce Lamberth suspended all federal funding for embryonic stem cell research in a sweeping opinion that even invalidated funding permitted under President George W. Bush’s policies. Although the Clinton, Bush, and Obama administrations all agreed that he was misreading federal law, Lamberth relied on a statute forbidding the funding of “research in which a human embryo or embryos are destroyed” to hold that federal money not only cannot fund the destruction of a new embryo, but also cannot fund research that builds on past research in which an embryo was destroyed.

Lamberth’s decision was eventually reversed by a conservative panel of the United States Court of Appeals for the District of Columbia Circuit. The appeals court held, correctly, that even though Lamberth may have offered a plausible reading of the law, longstanding Supreme Court precedent generally requires courts to defer to an agency’s reasonable reading of an ambiguous statute. As the appeals court explained, “the plaintiffs are unlikely to prevail because Dickey-Wicker is ambiguous and the NIH seems reasonably to have concluded that, although Dickey-Wicker bars funding for the destructive act of deriving an [embryonic stem cell] from an embryo, it does not prohibit funding a research project in which an [embryonic stem cell] will be used.” Yesterday, the Supreme Court announced that it would not hear the case, effectively killing this challenge to stem cell research.

This is an important victory for science, and it is just as much a victory for judicial restraint. As the near-success of the Affordable Care Act lawsuits demonstrates, conservative judges and justices are increasingly willing to substitute their policy preferences for the law, even when they must rely on legal theories that, in the words of one of the nation’s most conservative judges, have no basis “in either the text of the Constitution or Supreme Court precedent.” The requirement that judges defer to agencies in interpreting ambiguous statutes is an important check on the judiciary’s ability to impose its policy views on the nation. Agency leaders change with each presidential election; judges do not. And so the power to interpret a genuinely ambiguous statute should rest with officials whose legitimacy flows more closely from the will of the people.
