Making a Home for a Family of Online Journals
The Living Reviews Publishing Platform
Heinz Nixdorf Center for Information Management
in the Max Planck Society
- The Family
- The Concept
- The Infrastructure
- Where Next?
The Living Reviews Publishing Platform consists of
- the Living Reviews concept - in particular the type of publication,
i.e. solicited, peer-refereed review articles (see http://www.livingreviews.org/faq.html)
- the common technical infrastructure (production server, test server and journal hosting environment) and
- the ePublishing Toolkit, a software package providing tools to help in
publishing scientific content on the web (see https://dev.livingreviews.org/projects/epubtk)
The full platform is offered to journals which are members of the Living Reviews
consortium. But all parts except for the infrastructure and the brand can be
used independently for free.
See also The Living Reviews Publishing Platform.
The Living Reviews Family
- Currently three journals, at least two more on their way.
- Physics and Humanities, Linux and Windows, Max Planck Society and outside, Germany and abroad.
- Authors, Referees, Editorial Boards, Managing Editors, Editorial Assistants, Programmers.
What do we publish?
Solicited, peer-refereed review articles.
- The invitation process is a big part of our publishing workflow.
- Articles are big - 100+ print pages, 300+ references.
- Publishing rate is low - one per month per journal.
A new journal startup needs approval by the consortium of existing ones.
For the concept of a particular journal, see the respective journal's pages.
How do we publish?
Take advantage of the web:
- Online Only.
- Let Publications "live".
- Disseminate Metadata (OAI, ADS, RSS, RDF).
- Let Google take care of the rest.
While we publish "online only", a printable high-quality PDF version of our
publications is still deemed indispensable. The online HTML versions are
considered the authoritative versions, though, since they reflect all updates.
Taking full advantage of the medium also means that publications can be
updated, with the changes becoming visible to all online readers immediately.
To make discovery of our content easy, we disseminate our metadata via
- an OAI static repository gateway,
- RSS (i.e. make it available to feed crawlers),
- RDF (i.e. make it available to semantic web applications).
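To give a concrete idea of what such a feed entry contains, here is a minimal
Python sketch of turning one publication's metadata into an RSS item; the field
names and the placeholder URL are illustrative assumptions, not taken from the
ePubTk code.

    import xml.etree.ElementTree as ET

    # All metadata fields below are invented for illustration.
    publication = {
        "title": "An Example Review Article",
        "url": "http://www.livingreviews.org/...",   # placeholder, not a real article URL
        "abstract": "Short abstract of the review ...",
    }

    def rss_item(pub):
        """Build a single <item> element for an RSS 2.0 feed."""
        item = ET.Element("item")
        ET.SubElement(item, "title").text = pub["title"]
        ET.SubElement(item, "link").text = pub["url"]
        ET.SubElement(item, "description").text = pub["abstract"]
        return item

    channel = ET.Element("channel")
    channel.append(rss_item(publication))
    print(ET.tostring(channel, encoding="unicode"))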
For an example of how well Google works as a disseminator,
see the "Referring Site Report" and the "Search Query Report" in our
web log analysis.
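For a rough idea of how such a report comes about, the toy script below counts
referrer hosts in an Apache combined-format access log; the log path is a
placeholder, and the real reports are produced by our web log analysis tool,
not by this snippet.

    from collections import Counter
    from urllib.parse import urlparse
    import re

    # Combined log format: ... "GET /path HTTP/1.1" 200 1234 "referrer" "user agent"
    line_re = re.compile(r'"[^"]*" \d+ \S+ "(?P<ref>[^"]*)"')

    referrers = Counter()
    with open("access.log") as log:            # placeholder path
        for line in log:
            m = line_re.search(line)
            if m and m.group("ref") not in ("-", ""):
                referrers[urlparse(m.group("ref")).netloc] += 1

    print(referrers.most_common(10))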
The Infrastructure
- Production server (www.livingreviews.org)
  - Apache web server
  - Webware application server
  - PostgreSQL database server
- Development server (dev.livingreviews.org) serving subversion/trac for
  - manuscript management
  - ePubTk development
  - the Living Reviews knowledge base
- Test server
The production and the development server are hosted at the RZG in
Garching, the test server at the AEI.
More information about the technical infrastructure:
Using the Infrastructure
- Web content is maintained offline as a checkout of a subversion repository
by the managing editor.
- Journal metadata is maintained in (versioned) XML files.
- Publication metadata is kept in an RDF database.
- Reference data is kept in a relational database.
- Web/database content is created using tools from ePubTk.
Again, more information is provided on the development server
(dev.livingreviews.org).
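As a small illustration of the "journal metadata in versioned XML files" point,
the snippet below reads a hypothetical journal description with the Python
standard library; the file name and element names are assumptions, not the
actual schema used by ePubTk.

    import xml.etree.ElementTree as ET

    # "journal.xml" and the element names below are assumed for illustration only.
    journal = ET.parse("journal.xml").getroot()

    title = journal.findtext("title")
    issn = journal.findtext("issn")
    editors = [e.text for e in journal.findall("editorialboard/editor")]

    print(title, issn, editors)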
The ePublishing Toolkit
- Publication Builder: creates presentation formats from publication sources
- Reference Database: manages references of publications
- Register: manages public metadata of publications
- Journal Builder: creates journal pages
- EIMS: Editorial Information Management System
- miscellaneous: web statistics, blog, ...
Find more information on the ePubTk project pages
(https://dev.livingreviews.org/projects/epubtk).
In particular, details about the individual components are given below:
Publication Builder
- Input: Publication sources in LaTeX and BibTeX
- Output: HTML and high-quality PDF
- Wraps conversion tools of the TeX family: pdftex, TeX4ht
- HTML and PDF created from identical TeX source
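The core idea can be sketched as a thin wrapper around the standard TeX-family
tools; the Python snippet below is a deliberate simplification with an assumed
file name and none of the configuration and post-processing the real
Publication Builder performs.

    import subprocess

    def build_pdf(texfile):
        # pdflatex (pdftex) produces the high-quality printable PDF
        subprocess.run(["pdflatex", "-interaction=nonstopmode", texfile], check=True)

    def build_html(texfile):
        # htlatex is the TeX4ht driver script that produces HTML from the same source
        subprocess.run(["htlatex", texfile], check=True)

    source = "article.tex"   # placeholder file name
    build_pdf(source)
    build_html(source)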
Screenshot of a browser window displaying an article from Living Reviews in Relativity.
Screenshot of the PDF version of the same article snippet displayed in Acrobat
Reference Database
- Import format is XML
- Import filter for BibTeX
- GUI application for management
- Web frontend for search and retrieval
- Exports XHTML, XML, RDF+XML, BibTeX
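A minimal sketch of the export side: one reference record serialized as a
BibTeX entry. The record fields are invented, and the real database of course
also produces the XHTML, XML and RDF+XML exports listed above.

    # All field values are invented for illustration.
    record = {
        "key": "Example2005",
        "type": "article",
        "author": "Author, A.",
        "title": "An Example Paper",
        "journal": "Some Journal",
        "year": "2005",
    }

    def to_bibtex(rec):
        """Serialize a record dict as a BibTeX entry."""
        fields = ",\n".join("  {} = {{{}}}".format(k, v)
                            for k, v in rec.items() if k not in ("key", "type"))
        return "@{}{{{},\n{}\n}}".format(rec["type"], rec["key"], fields)

    print(to_bibtex(record))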
Reference Database Import
Screenshot of the import screen of the reference database management GUI.
Potential duplicates from a new set of references can be inspected and either
merged with existing records in the database or marked as different.
BibTeX Import Filter
Screenshot of the BibTeX filter application for the reference database. This
application allows the user to
- trigger various checks on a BibTeX database containing
references of a publication,
- edit the fields of records in the BibTeX database,
- create an XML file from the BibTeX database which is valid for importing
into the reference database.
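To make the duplicate handling a little more concrete, here is a simplified
sketch (not the ePubTk implementation) of one plausible check: incoming records
whose normalized titles collide with existing ones are reported as merge
candidates.

    import re

    def normalize(title):
        """Lowercase, drop TeX braces and punctuation, collapse whitespace."""
        return " ".join(re.sub(r"[{}\W_]+", " ", title.lower()).split())

    def potential_duplicates(new_records, existing_records):
        """Pair each incoming record with an existing record of the same normalized title."""
        by_title = {normalize(r["title"]): r for r in existing_records}
        return [(new, by_title[normalize(new["title"])])
                for new in new_records
                if normalize(new["title"]) in by_title]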
Screenshot of the web-based search interface for the reference database of
Living Reviews in Relativity. Note the context dependent suggestions for the
author input field.
Register
- RDF triple store
- GUI application for management
- Create views on data via XSLT
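The "views via XSLT" point can be illustrated with a few lines of Python using
lxml; both file names are placeholders, the actual register export and the
stylesheets belong to ePubTk.

    from lxml import etree

    data = etree.parse("register.xml")              # assumed export of the register data
    stylesheet = etree.parse("register2html.xsl")   # assumed view definition
    transform = etree.XSLT(stylesheet)

    html_view = transform(data)
    print(etree.tostring(html_view, pretty_print=True).decode())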
Register Management GUI
Screenshot of the management GUI for a journal's register data.
Journal Builder
- Integrates configuration data with page creation
- Creates views on configuration data via XSLT
EIMS
- supports/enforces the publication workflow
- stores editorial information as publication history
- compiles role based task lists
- provides reports on publication status
Due to the special kind of publications, our workflow is special, too; the
task and report screens below give an impression of it.
If interest is voiced, a test instance of EIMS can be set up for trying it out.
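For a rough idea of what "role based task lists" means, the sketch below
derives one pending task per publication from its workflow state; the states,
roles and task texts are invented for illustration and are much simpler than
the real EIMS workflow.

    # Hypothetical state -> (role, task) mapping; the real workflow is richer.
    NEXT_TASK = {
        "invited":   ("author", "submit manuscript"),
        "submitted": ("editor", "assign referees"),
        "refereed":  ("author", "revise manuscript"),
        "accepted":  ("technical editor", "prepare online version"),
    }

    def task_lists(publications):
        """Group the pending task of each publication by the role responsible for it."""
        tasks = {}
        for pub in publications:
            role, task = NEXT_TASK[pub["state"]]
            tasks.setdefault(role, []).append((pub["id"], task))
        return tasks

    print(task_lists([{"id": "example-2006-1", "state": "submitted"}]))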
Screenshot of the "Tasks" screen of EIMS.
Screenshot of a list of available reports in EIMS.
Screenshot of the "Report" screen for a workflow item (in this case a
publication) in EIMS.
Where Next?
- move software to the server
- move interfaces to the browser
- keep on growing
- venture into the MS Word world
Lessons Learned
- Keep the tool zoo small.
- Don't reinvent the wheel unless ...
- Open Source
- Find a niche.
Keep the tool zoo small.
The central tools we use at Living Reviews are subversion, trac and
ePubTk. While none of these three (except maybe trac) is particularly easy
to use, the fact that each of them is used for multiple tasks helps with
getting up the learning curve. In particular, the fact that different user
groups use the same tools helps build up a common knowledge base.
The prime example for this is subversion: managing editors use it to maintain
the web content, technical editors use it to handle the manuscripts, and
developers use it for version control of the software.
Don't reinvent the wheel unless ...
While we try to make use of existing tools (possibly using them in a novel
way), a big part of our toolset is the ePublishing Toolkit, which does
reinvent some wheels to make sure they fit.
Very diverse people need to collaborate in our project: Linux, Mac and Windows
people; scientists and non-scientists; non-collocated people; developers
and users. While this demands a lot of attention, it also makes for good
experiences: Windows people attacking command lines, Linux people looking
for /etc/hosts on Windows machines - in general, tackling steep learning
curves as a team. You better like communicating!
Open Source
Open Source simply is the way to go! It makes experimenting easy since
you don't have to buy licenses first; it's fun - and possible - to get
involved; you can learn by example and actually use what you learned.
The prime example here is trac: we liked its user interface, so we just
plucked it and put it on top of our EIMS, thus keeping the user experience
uniform across applications.
Technical staff is necessary...
... because software is never really finished, automation is always wanted,
maintenance just has to be done (server updates, migration of legacy
data). On the other hand, we found that people who want to start a journal
and people who can support the technical infrastructure for an online
journal are rarely the same. So to keep the barrier for new journals low,
we provide a hosted solution.
Plan for change.
Migration issues have been a constant companion in our project: We migrated
several GB of web content (where tidy proved very helpful, but XHTML or
clean HTML4 from the outset would have been better). We are just in the
middle of migrating the old editorial information database (and it turns
out that an API for the DB would be way better than dumping CSV files).
The easiest migration task so far has been migrating the register: since
the data was already in XML, it took only one XSL transformation.
- Put emphasis on portability/migratability of data.
- Avoid any dead-end formats.
- Put APIs on applications or, even better, store data in a format which is
easy to export right away.
Again, change would be a lot easier if standards existed.
Find a niche.
For our journals so far, the niche created by our special form of content
proved helpful. In particular, the advantage of online publishing with a
mechanism to update publications seems more clear-cut in the case of review
articles, which follow the developments of a research field.
Well, we did what everybody seems to do:
- Meet the special needs of your own project ...
- ... thereby reinventing all the wheels.
- Just to find out: nobody's interested in our open source project.
What makes a wheel a wheel?
Towards interoperable epublishing components:
- Reusability on component level
- Using standards (APIs, protocols)
- Turn the multitude of wheels into a choice
- The Repository
- OpenSearch for full-text search
- Interface to longterm archiving
- The Registry
- dissemination of metadata via OAI-PMH
- web service with simple REST interface
- The Reference Database
- search via SRU
- web service a la CiteULike or Connotea
- [your component here]
- [your standard interface here]
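As an example of what a standard interface buys, this is all a client needs in
order to harvest from an OAI-PMH registry; the base URL below is a placeholder,
not an announced Living Reviews endpoint.

    from urllib.request import urlopen
    from urllib.parse import urlencode

    BASE = "http://www.example.org/oai"   # placeholder endpoint

    # Standard OAI-PMH verbs; any conforming harvester can talk to any conforming registry.
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    with urlopen(BASE + "?" + urlencode(params)) as response:
        print(response.read()[:500])      # beginning of the XML response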
- Interoperability requires collaboration!
- So please join?
- Or tell us where to join!
Thank you for your attention!