New technology pushes the transparency of publishing, journalism and science to new levels. Through the hyperlink structure of texts it is easy to link back to the sources of a text. What used to be a long list of references at the end of a text or in footnotes has become directly accessible through weblinks. Only paywalls may still restrict fast and easy access to the original sources. For writing online, this has become a major additional feature of publishing in the last few years. Some online journals have allowed this for quite some time now, but many printed versions still stick to the read-and-be-stuck approach of publishing.
In teaching I have been an advocate of “read the original sources” as the basic source of inspiration for authors. The transparency of the thought process and of the evidence, in whatever form it is provided, should be traceable. In publishing, this transparency makes it possible to rule out the copying of thoughts or unreflected referencing.
However, checking the validity of weblinks and keeping them up to date is an additional task. With 500+ blog entries and an average of 2 weblinks per entry, this becomes a job of its own. To test 1,000 weblinks you need software or a plugin that alerts you to “broken links” (a minimal sketch of such a check follows this paragraph). The maintenance effort of a webpage therefore increases substantially as the content grows. Reorganisations of webpages sometimes make the follow-up of links quite hard. Projects that archive the web and webpages in general, such as the Internet Archive, are very important to ensure the transparency of publishing in the short, medium and long run. The archives of today look more like machine rooms than the splendid archives or libraries of the past and present.
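To illustrate what such a link check involves, here is a minimal sketch in Python using only the standard library. The file name links.txt and the prior extraction of URLs from the blog entries are assumptions on my part; a real plugin would also handle servers that reject HEAD requests and retry transient failures.

```python
# Minimal broken-link checker: a sketch, not a replacement for a full plugin.
# Assumes the URLs have already been extracted into a plain text file,
# one URL per line (e.g. exported from the blog's database).

import concurrent.futures
import urllib.error
import urllib.request


def check(url: str, timeout: float = 10.0) -> tuple[str, str]:
    """Return (url, status) where status is an HTTP code or an error label."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-checker/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return url, str(resp.status)
    except urllib.error.HTTPError as e:   # e.g. 404 or 410 -> likely broken
        return url, str(e.code)
    except OSError as e:                  # DNS failures, timeouts, refused connections
        return url, f"error: {e}"


def main(path: str = "links.txt") -> None:
    with open(path) as f:
        urls = [line.strip() for line in f if line.strip()]
    # Check links in parallel; 1,000 links then finish in minutes, not hours.
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        for url, status in pool.map(check, urls):
            if not status.startswith("2"):
                print(f"BROKEN? {status:>12}  {url}")


if __name__ == "__main__":
    main()
```

Run against a list of 1,000 links, a script like this reports only the suspicious ones, which is exactly the alerting function the plugins provide.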