11 Reasons why FSG Digitization is Certainly not Enough for NLP

11 Reasons is my way of saying infinitely many. But believe me, it is useful. I want to let you know that even though I use my new system, I still rely on the FSG digitization. In this post we will try to answer whether FSG digitization of the Voynich Manuscript is enough for NLP. Spoiler: it is not!

At the end of this article, you will find a voice-over of the first page that I recommend you watch.

You might ask: how do you upgrade from FSG digitization to JP transliteration? It is easy. Let us first take a look at the table below.

Table 1 – FSG digitization against JP system

As you can see, we can find our seven groups of five characters each in this table. Each column is associated with a consonant in the JP system, and each row associates a vowel with that consonant. Each cell contains three pieces of information: the Voynichese symbol in black, the old FSG equivalent of that symbol in red, and, in green, the suggested JP reading.
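To make the upgrade concrete, here is a minimal Python sketch of the idea: a lookup table keyed by FSG characters that returns a suggested JP reading. The `FSG_TO_JP` entries and the `fsg_to_jp` helper are hypothetical placeholders of mine, not the actual content of Table 1.

```python
# A minimal sketch of the upgrade path: a lookup table keyed by FSG
# characters, returning a suggested JP reading. The entries below are
# hypothetical placeholders -- the real mapping is the one in Table 1.
FSG_TO_JP = {
    "E": "ka",  # hypothetical: FSG character -> JP consonant+vowel reading
    "G": "ko",
    "D": "ta",
    "K": "to",
}

def fsg_to_jp(fsg_text: str) -> str:
    """Transliterate an FSG string character by character into JP readings."""
    return "".join(FSG_TO_JP.get(ch, ch) for ch in fsg_text)

print(fsg_to_jp("EGD"))  # -> "kakota" with the placeholder entries above
```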

So, how exactly is FSG digitization of the Voynich Manuscript not enough for NLP (Natural Language Processing)? I want you to know that answering that question in full is out of the scope of this blog post. However, since every attempt to decipher the Voynich Manuscript has fallen short, maybe we should question our knowledge.

Easier said than done…

…Or is it? Actually, I want you to know how much progress I have made studying the language since I devised this JP system. Just look at how neatly organized this table is, and how nonsensical it is to treat ‘G’ as something entirely different from ‘E’. Then there is the length of the tokens. If you consider ‘IIIE’, how much should a machine have to chew on to extract meaning from the symbols? How long would the roots of the language be? Of course, there are many types of languages.

But the least we can say is that every language has atoms, so we had better find those atoms. The italic ‘I’ symbols preceding the main symbol modify it from the start, so it makes sense to accept that the right way to display this in our system is as a vowel after the symbol’s consonant. Are you convinced? Is FSG digitization of the Voynich Manuscript enough for NLP… yet?
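To illustrate that reordering, here is a small sketch under stated assumptions: it counts the leading italic ‘I’ marks of a token, maps that count to a vowel, and displays the vowel after the main symbol’s consonant. The `CONSONANTS` and `VOWEL_BY_I_COUNT` tables and the `jp_reading` helper are hypothetical illustrations of mine, not the real assignments of the JP system.

```python
# A minimal sketch of the reordering idea: leading italic 'I' marks modify
# the main symbol, so the JP system displays them as a vowel *after* the
# symbol's consonant. The values below are hypothetical placeholders;
# Table 1 gives the actual assignments.
CONSONANTS = {"E": "k", "D": "t"}                            # hypothetical
VOWEL_BY_I_COUNT = {0: "a", 1: "i", 2: "u", 3: "e", 4: "o"}  # hypothetical

def jp_reading(token: str) -> str:
    """Turn an FSG token such as 'IIIE' into a consonant+vowel JP syllable."""
    i_count = len(token) - len(token.lstrip("I"))  # how many leading 'I' marks
    main = token[i_count:] or "?"                  # the main symbol that follows
    consonant = CONSONANTS.get(main, main.lower())
    vowel = VOWEL_BY_I_COUNT.get(i_count, "?")
    return consonant + vowel

print(jp_reading("IIIE"))  # -> "ke" with the placeholder tables above
```

With a rule like this, a long FSG token collapses into a single short syllable, which is exactly the kind of atom a machine can work with.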

FSG Digitization of the Voynich Manuscript Is Enough for NLP… Not Really!

Let us see if we are good to go deciphering the Voynich Manuscript. Actually, almost… but first we need an example of how FSG gets everything scrambled, and how it misses out on meaningful rules in the script. So, let us look at the table below.

Table 2 – FSG messy crime

This is an unprecedented way to analyze why FSG is not enough for NLP. Let us assume, as shown in Table 2, that the characters ‘ID’ have a meaning (in green). Let us also assume that the characters ‘OK’ have a meaning (in red). If we invert function and sound like FSG does, we get the scramble in the bottommost line. Now the characters are separated by three steps instead of two, the data is interleaved, and we continuously jump from data to unrelated data. Everything is lost, and not one single atom is extractable this way.
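Here is a small sketch of why that interleaving is fatal when hunting for atoms: a simple bigram count recovers ‘ID’ and ‘OK’ easily while they stay contiguous, but not once the characters are scrambled. The two example strings are illustrative placeholders, not the exact rows of Table 2.

```python
# A minimal sketch of why the scramble hurts: contiguous pairs such as
# 'ID' and 'OK' are easy to recover as bigrams, but once the characters
# are interleaved the meaningful pairs never appear next to each other.
from collections import Counter

def bigrams(text: str) -> Counter:
    """Count adjacent character pairs, the simplest kind of atom search."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

ordered = "IDOKIDOK"    # meaningful units kept together
scrambled = "IODKIODK"  # the same characters after an FSG-style inversion

print(bigrams(ordered))    # 'ID' and 'OK' dominate the counts
print(bigrams(scrambled))  # the meaningful pairs all but disappear
```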

Conclusion

Now, I want you to head directly to my playlist to listen to pages of the Voynich Manuscript read with my system. I hope you will get used to it. It will allow us to read through the material and run statistical analyses on everything.

Link here

Link to the previous blog post