Minding the gap in research and policy

Evidence-based policy is at risk. The evidence itself isn't making it from research to recommendations fast enough. Or at least, that's what some participants were arguing at the annual EPIP (European Policy for Intellectual Property) conference in Glasgow last week. The focus was primarily on the failure of academic research to permeate IP policy. This is not unique to IP; it remains a challenge for evidence-based policy generally.

Julia Reda, CC by Tobias M. Eckrich
Opening keynote speaker Julia Reda, MEP for the German Pirate Party, started the debate by calling for more and better evidence. Recounting a number of tales of poor stats, she warned that industry lobbyists are quick to fill the evidence void.

The types of pithy data that gain currency in policy debates aren't necessarily the types produced by academic research. When data does make it into policy debates, it often loses its caveats or even its origins. [Merpel is considering inventing her own IP katistic to see how far it goes.]

"Data measurement isn't glamourous," as Jonathan Haskel, Imperial College, noted, and is less likely to get published. Stuart Graham, former Chief Economist of the USPTO and currently Georgia Tech, called for journal editors to devote more pages to data-focused articles. (e.g. a forthcoming special edition of Journal of Economics and Management Strategy.)

A challenge to translating research into policy is the misalignment of incentives. Career advancement in academia depends primarily on publications and securing funding, not impact. While UK government funders have introduced an "Impact Agenda" designed to encourage impact, it's a slow process. A concrete step is the new policy that funded research be published in open access journals (free at the point of consumption). Academics are also encouraged to use social media to disseminate research, but this isn't translating into career advancement.

Timing is also out of sync. While changes in policy may seem slow, in reality they can be very fast and come with very specific deadlines. Policy makers have to be quick in building and evidencing policy, which may leave little time to identify relevant research. Academic research, by contrast, is less constrained by specific questions and works to more flexible timelines. A fantastic statistic showing that green patents save baby seals, but published the day after the government's policy is signed off, may be as helpful to furry animals as Chanel's $1M sable fur coat.

Pamela Samuelson
Closing keynote speaker Pamela Samuelson, Berkeley, encouraged academics to write more for non-academic audiences. She recounted her great fear that she would never be taken seriously again after penning an article for WIRED on the 'Copyright Grab.' Her fears were unfounded, but it touches on a key point: within academia there is a cultural taboo associated with non-academic publishing. (Aha! That explains the slight terror I have every time I click the Blogger 'publish' button.)

Professor Samuelson encouraged IP academics to publish in venues policy makers are likely to read, to write the '2-pager' (a two-page summary) and to seek out policy makers. Funnily enough, that sounds like an oddly familiar strategy - it's exactly what lobbyists do.

Academics adopting lobbying strategies is an interesting proposition.  It could really increase the impact of research and inform policy.  Yet it raises a number of questions about independence, academic freedom and the impartiality of academic research.

So, whose responsibility is it to translate academic research into policy?  The obvious candidates are academics and policy makers.  At the moment, it is primarily lobbyists who selectively publicise statistics supporting their arguments. Yet, as Ian Hargreaves, Cardiff, quipped, "the voices of the digital many should not be drowned out by the digital self-interested few."
Reviewed by Nicola Searle on Tuesday, September 08, 2015

8 comments:

  1. The problem is that the "poor stats" are usually on both sides of the argument.

    ReplyDelete
  2. Academics do need to contribute a lot more to all debates, including IP. But they probably don't feel valued and there is probably little incentive for them to add that challenge to all their other challenges.

    ReplyDelete
  3. It would be useful to know if the author is speaking for her employer, the IPO, here.

    Here's what Justice Green had to say about the quality of the IPO's evidence submitted in defence of the private copying exception, which was introduced without compensation to rightsholders:


    "Ultimately, it is very difficult (impossible) to see how these matters referred to in the reasons can, upon any rational basis, be said to amount to 'evidence' which justified the decision.


    http://archive.is/wWwZn [256]

    The evidence submitted to support the argument that harm was de minimis turned out to be a thought experiment by Professor Hargreaves. This was clearly good enough for Professor Hargreaves and the IPO, but thought experiments fall short of the legal standard required to justify the legislative change.

    So have lessons been learned at the IPO? Ros Lynch at the Westminster Forum this week said they had, and the quality of evidence needed to be improved. This post suggests otherwise, particularly as it includes the unqualified assertion that:

    "At the moment, it is primarily lobbyists who selectively publicise statistics supporting their arguments."

    No reflection or apology appears to be forthcoming for this expensive waste of taxpayers' money. Perhaps activists like Professor Hargreaves, who wish to pass judgement on others' evidence, should first examine the quality of their own?

    ReplyDelete
  4. Historically, the government-of-the-day's approach to evidence-based policy has been to decide policy first and then look for the evidence to support it afterwards. Because real-life data rarely points in just one direction, it is usually possible to find some figures to support the current policy. In a module on "Official Statistics" at the Civil Service College some years ago, it was explained that this was something that Civil Servants would often be called on to provide. In an exercise using real official statistics, the attendees were split into two groups and asked to produce figures to support two diametrically-opposite policy statements. Ever since, when I come across official pronouncements about what "the figures show", I contemplate the other, suppressed figures that might show the contradictory view.

    ReplyDelete
  5. Ex-Examiner, I completely agree. When we are provided with experimental data, we can easily use it to support contradictory positions by being selective in what it really shows. Data which strongly supports the effect can be used to show advantages, and data which shows the effect is weak can be used to show it would not have been obvious to do. I'm sure 'real-life' data is the same.

    ReplyDelete
  6. Andrew - I am no longer an IPO employee and my IPKat posts in no way should be interpreted as representing the IPO. I am now at Goldsmiths, University of London.

    ReplyDelete
  7. Thank you Nicola - the primitive comment system does not allow me to make corrections.

    Your post focuses on the perceived shortcomings of "industry lobbyists". Was there any recognition from academics and bureaucrats present that their own evidence fell short, too?

    ReplyDelete
  7. Andrew - Yes and no. Many of the statistics referred to by Ms. Reda were government-produced or government-commissioned statistics. She also specifically warned against reliance on industry statistics. With that notable exception, most of the public discussion was related to general failings rather than specific evidence.

    As is standard, the results of individual (primarily academic) papers were critiqued by participants in parallel sessions. That is part of the purpose of such conferences.

    In private conversations, industry/academic/government statistics were equally critiqued. Economists are not known for letting a suspected failing slide...

    ReplyDelete

