Comparing the hand from an early fifteenth-century copy of Thomas of Cantimpré’s De natura rerum with the Voynich script has brought another thought – what if we’ve been using the wrong sort of program?
Perhaps what we need is not statistics and high-powered number-crunching, but the sort of program used for identikits and facial recognition.
What if we were to overlay folios from other early fifteenth-century manuscripts where the hand has some plainly similar characteristics (as with my example, Cambridge, Gonville and Caius MS 35/141)?
Would that help pierce the heavy disguise of the Voynich text?
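For what it’s worth, the overlay idea itself is simple enough to sketch. The toy below is purely illustrative – the function name, the tiny 2×2 “pages”, and the assumption that the two folios have already been aligned (registered) to the same size are all mine, not anything a palaeography tool actually does – but it shows the principle: blend two greyscale page images so that strokes the two hands share stay dark, while strokes unique to one hand fade to grey.

```python
# Toy sketch of the "overlay" idea: alpha-blend two registered
# greyscale page images (0 = black ink, 255 = blank vellum).
# Real work would need an imaging library and proper alignment first;
# everything here is a hypothetical illustration.

def overlay(page_a, page_b, alpha=0.5):
    """Blend two equal-sized greyscale images pixel by pixel.
    Strokes present in both pages stay dark in the result;
    strokes unique to one page come out mid-grey."""
    if len(page_a) != len(page_b) or len(page_a[0]) != len(page_b[0]):
        raise ValueError("pages must be registered to the same size")
    return [
        [round(alpha * a + (1 - alpha) * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(page_a, page_b)
    ]

# Two tiny 2x2 "pages" standing in for folios by two scribes.
scribe_a = [[0, 255], [0, 255]]
scribe_b = [[0, 255], [255, 0]]
blended = overlay(scribe_a, scribe_b)
# Pixels where both hands put ink stay black (0); pixels where only
# one did come out around 128, so divergences are easy to spot.
```

The point of the blend, as against side-by-side comparison, is that the eye picks out the mid-grey “disagreements” at a glance – which is roughly what identikit-style software automates.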
I suspect that I’m being naive. Palaeographers regularly work on very poorly written, heavily abbreviated and mis-spelled texts and they remain unfazed. None has yet offered an interpretation of the Voynich text, have they?
For readers interested in such things, though, there’s an excellent article online about encoding abbreviated texts in medieval English and Latin:
Alpo Honkapohja (University of Zurich), ‘Manuscript abbreviations in Latin and English: History, typologies and how to tackle them in encoding’, VARIENG, Volume 14 – Principles and Practices for the Digital Editing and Annotation of Diachronic Data.
If you missed the other post, here’s the illustration.
header picture from: ‘How I intend to beat facial recognition software’, thecanadianexplorer.com, October 15, 2012.