Have we been using the wrong software?


Comparing the hand from an early fifteenth-century copy of Thomas of Cantimpré's De natura rerum with the Voynich script has brought another thought: what if we've been using the wrong sort of program?

Perhaps what we need is not statistics and high-powered number-crunching, but the sort of program used for identikits and facial recognition.

What if we were to overlay folios from other early fifteenth-century manuscripts where the hand has some plainly similar characteristics (as with my example, Cambridge, Gonville and Caius MS 35/141)?

Would that help pierce the heavy disguise of the Voynich text?
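
As a very rough illustration (a minimal sketch only: the filenames below are hypothetical placeholders, and real facsimiles would first need careful cropping and alignment), the overlay itself is easy to program:

```python
# A minimal overlay sketch, assuming two page scans already cropped
# to the same region. The filenames are hypothetical placeholders.
from PIL import Image

# Load both hands as grayscale and force a common size.
caius = Image.open("caius_35_141_f3r.png").convert("L")
beinecke = Image.open("beinecke_408_f38v.png").convert("L").resize(caius.size)

# A 50/50 blend: strokes the two hands share reinforce one another,
# while divergent letterforms show up as ghosting.
overlay = Image.blend(caius, beinecke, alpha=0.5)
overlay.save("overlay.png")
```

The hard part, of course, is not the blending but the alignment, which is the sort of registration problem that identikit and facial-recognition programs are built around.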

I suspect that I'm being naive. Palaeographers regularly work on very poorly written, heavily abbreviated and mis-spelled texts, and they remain unfazed. Yet none of them has offered an interpretation of the Voynich text, have they?

For readers interested in such things, though, there's an excellent article online about encoding abbreviated texts in medieval English and Latin:

Alpo Honkapohja (University of Zurich), ‘Manuscript abbreviations in Latin and English: History, typologies and how to tackle them in encoding’, VARIENG, Volume 14: Principles and Practices for the Digital Editing and Annotation of Diachronic Data,

at: http://www.helsinki.fi/varieng/series/volumes/14/honkapohja/
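
To give a taste of what the article discusses (this is only a minimal sketch of the general approach, not Honkapohja's own code, and the abbreviation table is a tiny invented sample), TEI encoding records the abbreviated form and its editorial expansion side by side instead of silently expanding:

```python
# A minimal sketch of TEI-style abbreviation encoding: each abbreviated
# word is kept alongside its expansion in a <choice> element.
# The abbreviation table is an invented three-entry sample.
import xml.etree.ElementTree as ET

ABBREVIATIONS = {
    "dns": "dominus",
    "ihs": "iesus",
    "sps": "spiritus",
}

def encode_word(word: str) -> ET.Element:
    """Wrap an abbreviated word in TEI <choice> with <abbr> and <expan>."""
    choice = ET.Element("choice")
    ET.SubElement(choice, "abbr").text = word
    ET.SubElement(choice, "expan").text = ABBREVIATIONS.get(word, word)
    return choice

for w in ["dns", "sps"]:
    print(ET.tostring(encode_word(w), encoding="unicode"))
# <choice><abbr>dns</abbr><expan>dominus</expan></choice>
# <choice><abbr>sps</abbr><expan>spiritus</expan></choice>
```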

If you missed the other post, here’s the illustration.


Comparison of forms: Cambridge, Gonville and Caius College MS 35/114 f.3r (early 15thC) and Beinecke MS 408 f.38v (c.1405–1438). Beinecke snips are outlined in red.

Relates to the post:

https://voynichimagery.wordpress.com/2016/05/09/an-early-15thc-copy-of-a-13thc-text-thomas-of-cantimpre/

 

_______________

header picture from: ‘How I intend to beat facial recognition software’, thecanadianexplorer.com, October 15, 2012.


2 thoughts on “Have we been using the wrong software?”

  1. I agree, though for slightly different reasons. Current computational analyses approach the MS as if it’s written in a rather clear majuscule, “all caps”, which is what it looks like.

    However, I think certain glyphs represent chunks of a more fluent, variable minuscule writing. Depending on the way those connect to each other, and the internal structure plus diacritic of the bench ligature, words that look the same superficially still get a different reading.

    This makes statistics almost useless, since to understand such writing, with its countless variations depending on the surrounding glyphs, one would usually need to understand the language first. Kind of like how only pharmacists can read doctors’ notes 🙂

    Sometimes only a trained eye can spot which variations are meaningful in a script. Voynich being Voynich, I wouldn’t be surprised if it has fooled us once more, presenting itself as a neat, small set of glyphs while actually it isn’t.

    So this kind of software could not only be used to compare the writing to different hands, but also to filter out possible meaningful variations between similar glyphs.
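
    As a rough sketch of the filtering I mean (the glyph arrays below are invented stand-ins for segmented glyph crops; real ones would come from the scans), even plain normalized correlation separates near-duplicates from genuinely different forms:

```python
# A rough sketch: score pairs of same-sized grayscale glyph crops by
# normalized cross-correlation. The sample "glyphs" are invented arrays.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two same-sized grayscale crops."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(0)
glyph_a = rng.random((32, 32))
glyph_b = glyph_a + rng.normal(0.0, 0.05, (32, 32))  # near-duplicate
glyph_c = rng.random((32, 32))                       # unrelated form

# Scores near 1.0 suggest "the same glyph"; lower scores flag the
# variations a trained eye should look at more closely.
print(similarity(glyph_a, glyph_b))  # close to 1.0
print(similarity(glyph_a, glyph_c))  # much lower
```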

