Full Program - Tuesday, 20 and Wednesday, 21 January 2015


Berlin-Brandenburg Academy of Sciences, Gendarmenmarkt, Berlin, Germany

On Tuesday and Wednesday we meet at the Berlin-Brandenburg Academy of Sciences and Humanities, Gendarmenmarkt, for the Full Conference Program. The APE 2015 Conference Program has been developed by a high-level Program Committee and offers a broad perspective, ranging from research excellence, peer review, use of information, content innovation, funding and investing, business models and new types of information to enabling technologies, repositories, search engines, dissemination, access and sharing of knowledge. There will be plenty of time for discussions and for meeting friends and colleagues ...


Berlin-Brandenburg Academy of Sciences, Gendarmenmarkt, Markgrafenstr. 38, 10117 BERLIN



... and enjoy the conference dinner

The conference dinner on Tuesday night will be offered in 'Restaurant Refugium', a very fine location under the French Dome on Gendarmenmarkt.


Restaurant Refugium
Gendarmenmarkt 5
10117 BERLIN, Germany




Program: Tuesday, 20 January 2015 (Status: 10 January 2015)

Doors open for Registration (Coffee, Tea & Snacks)


Welcome and Opening

Arnoud de Kemp

Barbara Schneider-Kempf, Director General, Staatsbibliothek zu Berlin

Prof. Dr. Martin Grötschel, Incoming President of the Berlin-Brandenburg Academy of Sciences, President of the Zuse Institute Berlin (ZIB)


Message from Brussels: Towards an Open Science Vision

Dr. Celina Ramjoué, Head of Sector, Open Access to Scientific Publications and Data, European Commission, Directorate General for Communications Networks, Content & Technology (CONNECT), Brussels



Web25: The Road Ahead

Putting Data at the Heart of the Open Web Platform

Phil Archer, W3C Data Activity Lead, Ipswich


Publishing data in support of academic papers presents challenges to publishers, not least how to establish a sustainable business model. Enrichment, linkage and visualization are the key - and the Web is pretty good at that.

The World Wide Web Consortium (W3C) is an international community where Member organizations, a full-time staff, and the public work together to develop Web standards. Led by Web inventor Tim Berners-Lee and CEO Jeffrey Jaffe, W3C's mission is to lead the Web to its full potential by developing protocols and guidelines that ensure the long-term growth of the Web. The ultimate vision is One Web.



The Changing Environment for University Presses

Peter M. Berkery, Jr., Executive Director, The Association of American University Presses, New York


While university presses have faced claims of "crisis" for well over 40 years now, both the scope and velocity of the changes they currently face are in many ways unprecedented. AAUP's executive director will provide an environmental scan of the current state of university press publishing, focusing on the challenges unique to this segment of the scholarly communications ecosystem. He'll highlight the things AAUP members have started doing to meet those new challenges, from technology initiatives to new consortial activities, as well as outline what still lies ahead.



An Overview of the evolving US Policies for Public Access to Scholarly Publications and Data

Dr. H. Frederick Dylla, Executive Director and CEO, American Institute of Physics, College Park


Over the last two years there has been significant activity in the United States with the development and implementation of public access policy for both scholarly publications and data resulting from publicly funded research. In February 2013 the US Office of Science and Technology Policy (OSTP) issued a directive to federal agencies that fund research to develop, together with the stakeholders, broad-based public access plans. To help satisfy the requirement, the publishing community formed the Clearinghouse for the Open Research of the United States (CHORUS), and a coalition of university organizations initiated the Shared Access Research Ecosystem (SHARE). By late 2014, one major US research agency had released its plan, and other agencies are expected to follow suit. This talk will review the agency plans that are public to date and the community's response to them.


Buffet Lunch



Open Research or Infringement of Authors' Rights?

The Suitability of the CC-BY License for Research Publications

Moderation: Prof. Dr. Christian Sprang, Legal Counsel, German Association of Publishers and Booksellers (Börsenverein), Frankfurt am Main


Dr. Albrecht Hauff, Publisher, Georg Thieme Verlag KG, Stuttgart
Robert Kiley, Head of Digital Services & Acting Head of Library, Wellcome Library, London
Carlo Scollo Lavizzari, Advocate, Lenz & Caemmerer, Basel

Dr. Martin Schaefer, Boehmert & Boehmert, Berlin


Over the past few years, funding agencies in a variety of countries have begun imposing requirements on authors whose research they have funded: authors who publish articles must use specific forms of Creative Commons (CC) licenses. The current trend among research organizations, including governmental funders, is towards increased mandates on authors around CC-BY and variants thereof. Failure to comply with these mandates may lead to adverse consequences, such as loss of future research funds.


The CC-BY license requires attribution to the author, but otherwise allows users to reuse materials without receiving separate, express permission of the author for both commercial and non-commercial reuses, irrevocably, with no right of remuneration for the author. This includes the right to make derivative works. The "Share Alike" (SA) variant requires that those who create derivative works relicense the derivative additions on the same terms as the underlying work (e.g. CC-BY-SA). In each case, attribution to the author is required. Creative Commons offers other license options, which limit commercial reuse and bar creation of derivative works. Those more restrictive variations are disfavored by many funding agencies.


The funder mandates around CC-BY and CC-BY-SA raise a number of ethical and legal questions. Some scholars, as well as learned societies, have expressed strong concerns that the CC-BY license may not be appropriate for published works. It has been suggested that the CC-BY license facilitates and promotes commercial re-use and uses akin to plagiarism; that the license therefore amounts to an infringement of authors' moral and intellectual property rights; and that it is likely to damage the quality of education. Legal concerns may, depending on the jurisdiction, implicate moral rights, constitutional law, free speech, contract law, copyright/authors' rights and patent law.


The session addresses the main questions in this heated debate and offers inside views from supporters and opponents of CC-BY mandates.


Coffee & Tea and Networking



The Practical Side of Open Access - Building an APC Billing Infrastructure

Moderation: Dr. Ralf Schimmer, Head, Max Planck Digital Library, Max-Planck-Gesellschaft, Munich


As firm commitments to make research results openly available grow, the handling requirements for Article Processing Charges (APCs) are receiving more and more attention. To be prepared for larger scale, a robust infrastructure will be required. The first components already exist and will be discussed from the perspectives of different stakeholders.


Perspective of a System Provider: Services and Billing Start with Submission

Richard Wynne, Vice President of Sales and Marketing, Aries Systems Corporation


It's often said that Open Access is "just a new business model". In pure economic terms that might be accurate, but when it comes to publisher back-office operations OA is very different. While the subscription model requires minimal interaction between editorial/production operations and publishers' subscription management and financial systems, that is not the case for OA. With OA, the financial and commercial systems are part of the workflow, not separate from it. Funders and academic institutions face the same challenge: their payment management and control systems have been optimized for subscription management and are not equipped to process high volumes of Article Processing Charges (APCs) efficiently. In his presentation, Richard Wynne will explain how the adoption of appropriate standards (ORCID, FundRef, Ringgold, etc.) can be combined with software APIs in workflow systems such as the Editorial Manager® peer review system to facilitate the transition to efficient, high-volume APC processing.
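The identifier-driven workflow described above can be sketched in a few lines. The following is a minimal, hypothetical illustration only: the field names, the funder-ID mapping and the routing function are assumptions for the sake of the example, not part of any real submission-system API. ORCID, FundRef (Crossref Funder Registry) and Ringgold identifiers are real standards; the specific values shown are examples.

```python
# Hypothetical sketch: how persistent identifiers carried in submission
# metadata can drive automated APC (Article Processing Charge) routing.
# Field names and the billing table are illustrative, not a real API.

submission = {
    "title": "Example article",
    "author_orcid": "0000-0002-1825-0097",   # ORCID iD (ORCID's documented example)
    "funder_id": "10.13039/501100000780",    # Crossref Funder Registry (FundRef) DOI
    "institution_ringgold": "1234",          # Ringgold institutional identifier (example)
}

# Publisher-side routing table mapping funder identifiers to billing
# arrangements (e.g. a central "deposit" account vs. author invoicing).
BILLING_PROFILES = {
    "10.13039/501100000780": {"payer": "funder", "currency": "EUR"},
}

def route_apc(submission, profiles, default=None):
    """Pick a billing profile based on the submission's funder identifier."""
    return profiles.get(submission.get("funder_id"), default)

profile = route_apc(submission, BILLING_PROFILES)
print(profile)  # {'payer': 'funder', 'currency': 'EUR'}
```

Because the identifiers are machine-readable and globally unique, the lookup needs no manual matching of free-text funder or institution names, which is precisely what makes high-volume APC processing feasible.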


Perspective of a Service Provider: RightsLink for Open Access

Jake Kelleher, Senior Director of Licensing and Business Development, Copyright Clearance Center


Across all stakeholders (funders, publishers, institutions & authors), the management of APCs is often complicated and inefficient, resulting in a need for service providers to help automate the APC workflow. This presentation will provide insight into the current state of APC management, drawing on CCC's experience managing APCs on behalf of publishers during the past 10 years. We'll also provide a brief overview of CCC's RightsLink for Open Access solution and touch upon the need for continued cooperation among the stakeholders to continuously improve solutions that are scalable, data-driven and that meet the growing challenges of Open Access.


Perspective of a Publisher: Expanding the Portfolio Options

Veronika Spinka, Open Access Manager, Springer, Heidelberg


Expanding the open access portfolio options challenges traditional publishers' publication workflows. Simplifying the administrative process of APC handling for institutions and publishers is crucial for all kinds of Open Access offerings, but especially for large-scale set-ups. To arrange and establish the various "deposit models" properly in existing processes, which among other things facilitate centralized APC handling, publishers are required to improve their services.
This talk will focus on the demands and requirements publishers face when expanding their open access portfolio options.


Perspective of a Customer: Handling APCs at an Academic Institution

Dirk Pieper, Deputy Head of Bielefeld University Library


From the perspective of academic institutions, there is no doubt that an ongoing and irreversible transformation towards open access is under way on the subscription market. The German Research Foundation excluded the funding of hybrid publications from the very beginning of its "Open Access Publishing Programme", which now supports the Open Access publication funds of 32 universities in Germany and has been extended until 2020. Nevertheless, there is a current debate about the external and internal costs of handling APCs and about the transparency of publication costs within universities and research organisations, similar to the discussion in the UK. The presentation will consider the situation in Germany from a customer's point of view, with a look at several international developments and initiatives such as ESAC, and will also introduce the latest initiative from German universities and research organisations to ensure price transparency on the APC market.



Peer Review - A Publisher Value-Add? Or Essential to the Scientific Communication System?

Kent Anderson, Publisher, Science (AAAS), Washington


Publishers have become accustomed to describing peer review as a “value-add” they provide to research papers. But economic theory strongly suggests that the priority system created by third-party peer review is essential to encouraging the communication of scientific results and allowing rough approximations of the value of scientific contributions. This talk will explore the importance of publishers as arbiters of the scientific priority system, why we have become confused about our role, and why changes to the system should not be undertaken without comprehending the entire sweep of what peer review accomplishes.


The APE Lecture

The Race for Structured Knowledge -

Language Understanding for Extracting Knowledge from the Web

Prof. Dr. Hans Uszkoreit, Professor of Computational Linguistics, Saarland University at Saarbrücken, Scientific Director at the German Research Center for Artificial Intelligence (DFKI) and Head of DFKI Language Technology Lab


The work of the open knowledge communities has changed the way we access and use knowledge and information resources. The most impressive results are the Wikipedias in many languages, built by millions of unpaid authors, along with resources such as Wikitravel/Wikivoyage and Wikispecies. Now the race is on for structured knowledge. The English Wikipedia has been a wonderful source for building up new structured knowledge repositories such as Freebase, DBpedia, Yago and the Google Knowledge Graph. They have achieved a coverage of the world that was beyond the reach of sophisticated AI-oriented intellectual knowledge engineering.


In my talk I will present different approaches for extracting knowledge from the Web. I will report on recent progress but will also show which obstacles remain to be surmounted before we can extend our structured knowledge repositories. In this context I will also look at the diversity of text types, subject areas and human languages. And I will try to address the crucial question of quality and reliability.


Conference Dinner at the Restaurant 'Refugium' under the French Dome (please note: by invitation or with separate registration)




Program: Wednesday, 21 January 2015 (Status: 10 January 2015)

Doors open (Coffee, Tea & Snacks)


Wake-up Session:

How to make Money with Semantics?

Moderation: Richard Padley, CEO, Semantico, Brighton

with Prof. Dr. Krzysztof Janowicz, Drs. Jan Velterop, Christian Kohl, Laura Hassink (invited) …



Dotcoms-to-Watch: Sharing is Multiplying

Moderation: Drs. Eefke Smit, Director, Standards and Technology, STM, The Hague


With a new generation of scientists and researchers constantly online, living and breathing the paradigm of sharing information 24/7, scholarly communication is in flux more than ever before. Part of this new movement is an ever-increasing flow of new dotcoms launching new apps and services that make sharing and virtual networking between researchers easier and easier. Come and hear about the dotcoms we have selected this year for APE 2015; their founders will tell you how they will be changing the landscape of scholarly communication. This is your chance to meet a new generation of disruptors:


Olivier Acher, Sample of Science

Nicko Goncharov, Digital Science

Max Mosterd and Lukas Klement, INCEND

Deepika Bajaj, RedLink

David Sommer, KUDOS


Coffee & Tea and Networking



Discovery. About Context & Content

Moderated by Arnoud de Kemp, Co-Editor-in-Chief, ‘Information Services & Use’, Berlin


Proactive Discovery and Semantic Search

Alex D. Wade, Director, Scholarly Communication, Internet Services Research Group, Microsoft Research, Redmond, WA


Web-scale search has been around for more than twenty years, and it is now evolving beyond simple keyword-based search to support better understanding of content, large-scale mapping of the world's knowledge, and richer ways to elicit users' intent. As a result, web-scale search can now provide richer discovery than ever before, bringing the right content and answers to users when and where they are needed. This talk will cover several new approaches to academic information discovery.


Defragmentation: Optimising the Use of Existing Knowledge

Drs. Jan Velterop, Advocate and Advisor, Open Access and Open Science, Guildford


Fragmentation of scientific information is a long-standing problem. Different journals, different publishers, different platforms, different formats: they all act as encumbrances to seamless knowledge-pattern analysis, which is fast becoming an absolute necessity due to the vast and ever-growing amount of published material. The work necessary to assemble large enough bodies of literature from many disparate sources takes away much time and money from what could have been spent on actual research. Open science, particularly material published under CC-BY licences, makes it possible to tackle these issues and transition to new ways of using, and building on, vast amounts of existing published knowledge, even when actual reading of individual articles is only possible to a very limited degree. The benefits of defragmentation will be discussed, and a new initiative that brings together large amounts of articles, in a variety of standard formats, ready to be used by researchers, will be presented.


Why the Data Train Needs Semantic Rails

Prof. Dr. Krzysztof Janowicz, Co-Editor-in-Chief, ‘Semantic Web’, Geography Department, University of California, Santa Barbara


Today's discussion about data is dominated by the belief that the synthesis and analysis of data at an ever-increasing spatial, temporal, and thematic resolution will lead to new insights while, at the same time, reducing the need for strong domain theories. The sovereignty over the interpretation of these data and the insights gained is attributed to predictive analytics and statistical techniques in general, as well as to recent advances in algorithm parallelization and supercomputing. Interestingly, however, while this perspective takes the availability of data as a given, it does not answer the questions of how one would discover the required data in today's chaotic information universe, how one would understand which datasets can be meaningfully integrated, and how to communicate the results to humans and machines alike. The Semantic Web addresses exactly these questions. In this talk, I will argue that making sense of data and gaining new insights works best if inductive and deductive techniques are combined.


N. N. to be announced


Buffet Lunch



Reliable Quality and Reproducibility

Moderation: Robert C. Campbell, Senior Publisher, Wiley, Oxford


Computer-aided Design for Experimenters: Delivering Reproducibility and Collaboration in Research

Dr. Matt Cockerill, Co-Founder, Riffyn, Oakland


Current ad hoc approaches to experimental design and data capture lead to experimental noise and error. As a result, findings are difficult to interpret reliably and are frequently irreproducible.
Manufacturing and supply-chain effectiveness has been transformed in the past decades by quality-systems technologies, but such technologies have been too rigid for the rapidly changing research environment. Riffyn is changing this by providing flexible, visual, browser-based software for experimental process design and data capture. Riffyn's software can scale gracefully from academic research to industrial R&D.
Using open data standards, Riffyn will support scientific communication by allowing researchers to share detailed computer-readable experimental process descriptions alongside their experimental data, transforming the traditional Materials and Methods section into something far more useful.



How Video Publication of Laboratory Experiments will solve the Reproducibility Problem

Dr. Moshe Pritsker, CEO, Editor-in-Chief, Co-Founder, JoVE, Cambridge, MA


Experimental sciences including biology, medicine, physics and chemistry chronically suffer from low reproducibility of published studies. Recent experiments indicate that 70% to 90% of studies published in scientific peer-reviewed journals are not reproducible. Even if only partially true, these findings raise difficult questions about the future of scientific research. We believe that this phenomenon is mostly due to the traditional text format of scientific journals, which cannot provide an adequate description of modern research techniques. This creates a critical problem of knowledge transfer that severely impacts research and education. Addressing this challenge, JoVE has developed a unique video-based approach to scientific publishing to provide systematic visualized publication of experimental studies. As of January 2015, JoVE has produced nearly 4,000 videos demonstrating experiments from laboratories at top research institutions, used online by millions of scientists and students worldwide. JoVE's institutional subscribers include nearly 800 universities, colleges, and biotech and pharmaceutical companies, including such leaders as Harvard, MIT, Yale, Stanford, the Max Planck Institute and Cambridge University. Case studies conducted at a number of university labs in the USA indicate that video publication greatly facilitates the understanding and learning of experiments, enhancing productivity in research and education.


Reproducibility Challenges in Climate Science

Dr. Georg Feulner, Deputy Chair of PIK's Research Domain I - Earth System Analysis, Potsdam Institute for Climate Impact Research (PIK), Potsdam


Ensuring the reproducibility of scientific results is a key concern for any scientific discipline. In principle, the reproducibility challenges for climate science are similar to those in other fields. The demands on reproducibility in climate science, however, are particularly high because of the tremendous importance of the research results for a society facing the challenges of climate change. In practice, this societal scrutiny of climate science is exacerbated by public assaults on the results (and often also on the scientists) by individuals or organisations disagreeing with the notion of anthropogenic climate change. In my presentation I will give examples highlighting the reproducibility challenges and public debates in various fields of climate science.


Data Accessibility and Reproducibility in the Biosciences

Dr. Bernd Pulverer, Chief Editor, Head of Scientific Publications, The EMBO Journal, Heidelberg


The biosciences are witnessing a rapid growth and diversification of research data. Efficient research progress depends on the development of standards, validation processes and accessible, stable infrastructures to store and share such data.
The peer reviewed research paper remains the predominant mode of sharing and archiving validated research findings. In papers, data are published as figures, which are typically little more than illustrations to support the textual description of the research - the reader (be it human or machine) cannot extract and reanalyze the data. A number of scientific journals now publish figures in association with the underlying source data, licenced for open access and reuse. Machine-readable metadata describing biological entities and experiments will integrate data across the peer-reviewed scientific literature, biomedical databases and data repositories. This may enable new search strategies, rendering data and experiments directly discoverable in a manner that is complementary to text-based search. The number of image aberrations in the literature is significant, and the reproducibility of published work is in question. The publication of validated source data with associated reagents and methods in journals and repositories, together with enhanced editorial screening procedures, ensures transparency, reproducibility and accountability in the reporting of scientific research.


Coffee & Tea and Networking


Closing Panel: The Communication of Scientific Results

Chair: Dr. Bernd Pulverer, Chief Editor, Head of Scientific Publications, The EMBO Journal, Heidelberg


On the Panel:
Kent Anderson, Publisher, Science (AAAS), Washington
Liz Ferguson, Publishing Solutions Director, Wiley’s Global Research Division, Oxford
Dr. John R. Inglis, Co-Founder, bioRxiv, Cold Spring Harbor Laboratory
Prof. Dr. Bernhard Sabel, Editor-in-Chief, Restorative Neurology and Neuroscience, Magdeburg

Louise Page, Vice President, Publisher Relations & Business Development, HighWire Press, Inc., Stanford


Fragmented, Structured, Discoverable, Accessible and Usable Knowledge - where does this leave Academic Publishing?


In the closing panel, we will aim to integrate the strands running through this year’s meeting to synthesize a snapshot of where we are in the rapid evolution of academic communication and to discuss the role publishers, funders and institutions may play in the future.


From free text to structured data and semantic information: is the research paper an archaic form of communication, or a format optimized for human understanding that will evolve further with online technologies to remain the main mode of sharing validated research? We are currently sharing only a tiny fraction of research data - should we archive and share all data generated, as envisioned by some in the open science movement? What are the best platforms for this - preprint servers or structured databases? How does such data need to be archived to render it useful and discoverable? What quality control do we need for such data? Is there a toxic effect of publishing unreliable data?
How can individuals navigate the dramatic rise of information? Is browsing dead?
Who will bear the costs of the technological developments to enhance research communication? Can economic targets be reconciled with an open mode of data sharing?


Research Assessment: currently, research assessment relies heavily on journal-based publication, and the value of research output is often directly related to journal reputation and journal- or article-level metrics. Are there more advanced quantitative measurements of the value of published research? What are the alternatives to metrics-based evaluation? Should research output beyond the peer-reviewed literature be taken into account? Is journal-based research assessment dampening scientific progress?


Research Ethics: the role of journals as gatekeepers of reliable research, pre-publication and post-publication. What is missing in journals' editorial processes to ensure that papers are reproducible?