I have started work on a revised translation of the Annals of the Arabic Christian writer Eutychius. My approach is to take the Italian text, my existing translation, and a fresh translation from ChatGPT 3.5, and interleave them, sentence by sentence. I’ve had to make some modifications to the somewhat crude tool that I use for the interleaving.
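For the curious, the interleaving step can be sketched like this. This is a minimal illustration in Python, not my actual tool, and it assumes the three versions have already been split into parallel lists of matching sentences (the sample sentences below are invented):

```python
def interleave(italian, mine, chatgpt):
    """Merge three parallel sentence lists into one comparison text,
    one block of three lines per sentence."""
    blocks = []
    for it, old, new in zip(italian, mine, chatgpt):
        blocks.append("\n".join([it, old, new]))
    return "\n\n".join(blocks)

# Hypothetical sample data:
italian = ["Frase uno.", "Frase due."]
mine = ["Sentence one, my version.", "Sentence two, my version."]
chatgpt = ["Sentence one, AI version.", "Sentence two, AI version."]

print(interleave(italian, mine, chatgpt))
```

The point of the three-line blocks is simply to have each sentence's versions adjacent on screen, so discrepancies jump out.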
I had rather hoped to do a whole chapter at a time, but ChatGPT 3.5 does not support more than a certain amount of text. This is annoying in a way, because fiddling with interleaving takes time away from translating.
I must say that I am glad to discover relatively few mistakes in my first translation. There are some, but it could be far worse. ChatGPT tends to produce smoother English, so often I have gone with its rendering.
On the other hand, ChatGPT has a definite tendency to paraphrase. It’s not bad, but I keep an eye on it.
I’ve done around 14 sections of chapter 1 – there are 18 chapters, or something like that – without too much trouble. But now ChatGPT is fighting me.
I started work on sections 15-17. When I interleaved, I found that the text produced by ChatGPT was around half the size of that from the Italian or my original translation. Mysteriously, it had simply truncated the text, right in the middle of the passage. It really fought me. I had to paste in each section by itself. This has not happened before, and it reflects the deep instability of AI.
Once I had done this, I started work. But I am troubled to find that the AI output “feels” different. It’s quite close to my own translation. Is it possible that it is basically just giving me my original translation back? How can I tell?
The text and my original translation have footnote numbers, embedded in brackets like this (32). Previously ChatGPT included these. Now it strips them out.
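One way to catch this automatically would be to compare the footnote markers in the source against those in the AI output. A sketch, assuming the markers always take the form of digits in round brackets like (32) (the sample sentences are invented):

```python
import re

def footnote_numbers(text):
    """Extract footnote markers of the form (32) from a text."""
    return re.findall(r"\((\d+)\)", text)

# Hypothetical example: the AI output has silently dropped the markers.
source = "The city fell (32) and the king fled to the mountains (33)."
output = "The city fell and the king fled to the mountains."

missing = set(footnote_numbers(source)) - set(footnote_numbers(output))
print(sorted(missing))  # the markers stripped from the output
```

Running something like this over each section would at least flag where the footnote numbers have to be restored by hand.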
None of this feels good. I was very happy with what it was producing originally. Now it feels like it is fighting me.
AI is not a fit technology. Any technology that gives different results when you use it at different times is not a fit technology.
These things are only tools. You need to know that your tool works, and will serve you when you have time to work. Imagine if your saw would only cut wood at certain times of the day? If the width and fineness of the saw cut varied depending on unknown factors? If your saw silently changed its depth of cut?
You would quickly get rid of it, if only out of sheer frustration.
I shall have to see how I go with this. It’s a very wonky technology. The secretiveness about how it works does not help. Nor does the fact that people want to force you to buy stuff to use it. I hate the commercial web that we have today. All the same, it does make things possible that would not have been possible before.