When saints disagree: the angry parting of St Epiphanius and St John Chrysostom

John Chrysostom started his career as a popular preacher in Antioch in the late fourth century.  Then he was translated to Constantinople, to take up the role of Patriarch.  This was a highly political role, and whoever held it was the target of intrigue and machinations.  So it was with Chrysostom; and eventually his many enemies got him deposed and exiled, and he died while in exile.

This was not the end of his story.  Once his most bitter foes had passed from the scene, it was decided that Chrysostom was actually the victim here, and he was rehabilitated.  He went on to become the most important of the Greek fathers.  His works are preserved in an enormous number of handwritten copies.

The seedy methods of the intriguers are what they always are, except for one unusual point.  Theophilus, Patriarch of Alexandria, was Chrysostom’s enemy, as every Patriarch of Alexandria was a rival of every Patriarch of Constantinople.  He arranged for a “Synod of the Oak” at which Chrysostom was to be put on trial.  Further, he invited the famous Epiphanius of Salamis to attend.

Epiphanius was by this time an old man.  He is best known today from his catalogue of heresies, the Panarion.  This is invaluable as a guide to these groups, which are often today rather obscure.  But the impression given to many readers is of a rather coarse, not very intelligent man, prone to hasty judgements.  Epiphanius had already got involved in the Origenist disputes, which were then just getting underway.  That these were really a pretext for political infighting rather than any genuine doctrinal issue seems to have completely escaped him, as it did many.

So Theophilus got Epiphanius, the heresy hunter, to come to his synod at which he proposed to frame Chrysostom.  Epiphanius came to Constantinople spoiling for a fight.  Chrysostom, wisely, refused to be provoked.  The exact chronology of events is unclear, but it seems that Epiphanius did not in the end attend the synod.  Instead he left Constantinople by ship, intending to return to Cyprus.  We might speculate that the old man had finally realised that he was merely a pawn in someone else’s quarrel, and chose to leave rather than get further involved.

Both Sozomen (H.E. 8, 15:1-7) and Socrates (H.E. 6, 14:1-4) record that a story circulated about the two saints.  Here’s Socrates, in the old NPNF translation:

Some say that when he was about to depart, he said to John, `I hope that you will not die a bishop’: to which John replied, `Expect not to arrive at your own country.’ I cannot be sure that those who reported these things to me spoke the truth; but nevertheless the event was in the case of both as prophesied above. For Epiphanius did not reach Cyprus, having died on board the ship during his voyage; and John a short time afterwards was driven from his see, as we shall show in proceeding.

And here is Sozomen:

I have been informed by several persons that John predicted that Epiphanius would die at sea, and that this latter predicted the deposition of John. For it appears that when the dispute between them was at its height, Epiphanius said to John, “I hope you will not die a bishop,” and that John replied, “I hope you will never return to your bishopric.”

Both spoke truly.  Epiphanius died at sea, and never saw Cyprus again, while Chrysostom died in exile.

Both writers express some doubts about the story.  Subsequent hagiographers play down the dispute, as Young Richard Kim has recently discussed in a fascinating article, “An Iconic Odd Couple: The Hagiographic Rehabilitation of Epiphanius and John Chrysostom”, Church History 87 (2018), 981-1002.[1]

All the same, it is an amusing picture.

[1] doi:10.1017/S0009640718002354

Why “search engine optimisation” is an evil

We all want our words to be heard.  Our carefully crafted essays to be found.  That means that they must be visible in Google.  It is, indeed, for no other reason that I have devoted a couple of days of my life to doing some work on the old Tertullian Project files.

Increasingly it is only commercial sites that a Google search returns.  If you search for some out-of-copyright text, available for nothing online, you must first scroll past half-a-dozen adverts from people offering to sell it to you, and then page through bookseller sites.  Google gains revenue if you are foolish enough to buy; but real people lose time and energy and money.

But once you start to look at the techniques needed for search engine optimisation, and the endless tweaks and nudges necessary, a conviction comes over you: that all this is evil.  For who has the time to do all this?

I’ve just seen some stuff telling me how I can improve my hits in this WordPress based blog, in respect of just one “problem”.  I’d have to install two plugins, activate them, and check whether or not they mess anything up.  Not too onerous; but impossible if you just sit at home with a text-editor.

When the WWW started, we were all equals.  We all created our HTML in a text editor like Notepad.  We all got traffic equally.   A corporation had no advantage over a man in a bedroom.  But now… not so.

The people who get the hits are not those who have something original and of value to offer.  They are those with the resources to do all the SEO tweaking necessary.  Effectively it privileges the corporation at the expense of the ordinary man or academic.  For the latter simply cannot keep up with all the effort needed.

I do not know the answer to all this.  But the web is a very different place from what it was.  We now have an effective monopoly in place, no different to the old Bell monopoly.  The events of January 2021 and the coordinated attack on Trump revealed that, for practical purposes, access to the web is controlled by a cartel – Google, Amazon, Facebook, Twitter – who can and do coordinate their control of the internet.

The answer must be the same as in the days of the old Bell monopoly.  It must be broken up.


Converting old HTML from ANSI to UTF-8 Unicode

This is a technical post, of interest to website authors who are programmers.  Read on at your peril!

The Tertullian Project website dates back to 1997, when I decided to create a few pages about Tertullian for the nascent world-wide web.  In those days unicode was hardly thought of.  If you needed to be able to include accented characters, like àéü and so forth, you had to do so using “ANSI code pages”.  You may believe that you used “plain text”; but it is not very likely.

If you have elderly HTML pages, they are most likely using ANSI.  This causes phenomenal problems if you try to use Linux command line tools like grep and sed to make global changes.  You need to convert them to Unicode first, before trying anything like that.

What was ANSI anyway?

But let’s have a history lesson.  What are we dealing with here?

In a text file, each byte is a single character.  The byte is in fact a number, from 0 to 255.  Our computers display each value as text on-screen.  In fact you don’t need 256 characters for the symbols that appear on a normal American English typewriter or keyboard.  All these can be fitted into the first 128 values, 0 to 127.  To see which value “means” which character, look up the ASCII table.

The values from 128-255 are not defined in the ASCII table.  Different nations, and even different companies, used them for different things.  On an IBM PC these “extended ASCII codes” were used to draw boxes on screen!

The different sets of values were unhelpfully known as “code pages”.  So “code page” 437 was the original IBM PC set: plain ASCII, plus those box-drawing characters.  “Code page” 1252 was “Western Europe”, and included just such accents as we need.  You can still see these “code pages” in a Windows console – just type “chcp” and it will tell you what the current code page is; “chcp 1252” will change it to 1252.  In fact Windows used 1252 fairly commonly, and that is likely to be the encoding used in your ANSI text files.  Note that nothing whatever in the file tells you what encoding the author used.  You just have to know (but see below).
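A quick way to see that the bytes themselves carry no encoding information is to decode the same byte under two different code pages.  A minimal sketch in ordinary Python 3 – just an illustration of mine, not part of the conversion process:

# The single byte 0xFC is "ü" in code page 1252, but something quite
# different in the old IBM PC code page 437.
b = bytes([0xFC])
print(b.decode("cp1252"))   # ü
print(b.decode("cp437"))    # ⁿ  (superscript n)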

So in an ANSI file, the “ü” character will be a single byte.

Then unicode came along.  The encoding of unicode that prevailed was UTF-8, because, for the values 0-127, it is identical to ASCII.  So we will ignore the other encodings.

In a UTF-8 file, letters like the “ü” character are coded as TWO bytes, and rarer characters as three or four.  This allows far more than the 256 characters that a single byte permits.  Most modern text files use UTF-8.  End of the history lesson.
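You can see the difference by asking Python to encode the same character both ways.  Again, just an illustration in standard Python 3, nothing to do with Notepad++:

text = "ü"
print(text.encode("cp1252"))   # b'\xfc'       - one byte in ANSI
print(text.encode("utf-8"))    # b'\xc3\xbc'   - two bytes in UTF-8
print("A".encode("cp1252") == "A".encode("utf-8"))   # True: plain ASCII is the same in both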

What encoding are my HTML files using?

So how do you know what the encoding is?  Curiously enough, the best way to find out on a Windows box is to download and use the Notepad++ editor.  This simply displays it at the bottom right.  There is also a menu option, “Encoding”, which will list all the possibilities, and … drumroll … allow you to alter them at a click.
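If you have hundreds of files and don’t want to open each one, a rough heuristic is to try a strict UTF-8 decode and fall back to code page 1252 when it fails.  This is only a sketch of my own, and only a guess – it cannot be certain – but it is usually right for files of this vintage:

import sys

# Guess whether a file is UTF-8 or (probably) Windows-1252 "ANSI".
# A strict UTF-8 decode fails on most 1252-encoded accented characters,
# so a clean decode is a reasonable, though not infallible, sign of UTF-8.
def guess_encoding(path):
    data = open(path, "rb").read()
    try:
        data.decode("utf-8", errors="strict")
        return "utf-8"
    except UnicodeDecodeError:
        return "cp1252 (probably)"

for name in sys.argv[1:]:
    print(name, "-", guess_encoding(name))

You pass it one or more file names on the command line.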

As I remarked earlier, the Linux command line tools like grep and sed simply won’t be serviceable.  The trouble is that these things are written by Americans who don’t really believe anywhere else exists.  Many of them don’t even support unicode.  I was quite unable to find any that understood ANSI.  I found one tool, ugrep, which could locate the ANSI characters; but it did not understand code pages, so could not display them!  After two days of futile pain, I concluded that you can’t even hope to use these tools until you get away from ANSI.

My attempts to do so produced webpages that displayed with lots of invalid characters!

How to convert multiple ANSI HTML files to UTF-8

There is a way to efficiently convert your masses of ANSI files to UTF-8, and I owe my knowledge of it to this StackExchange article.  You do it in Notepad++, by writing a script that drives the editor and just does it.  It runs very fast, it is very simple, and it works.

You install the “Python Script” plugin into Notepad++, which allows you to run a Python script inside the editor.  Then you create a script using Plugins | Python Script | New script.  Save it to the default directory – otherwise it won’t show up in the list when you need to run it.

Mine looked like this:

import os
import sys
import re
# The base directory to process
filePathSrc="d:\\roger\\website\\tertullian.old.wip"

# Get all the fully qualified file names under that directory
for root, dirs, files in os.walk(filePathSrc):

    # Loop over the files
    for fn in files:
    
      # Check last few characters of file name
      if fn[-5:] == '.html' or fn[-4:] == '.htm':
      
        # Open the file in notepad++
        notepad.open(root + "\\" + fn)
        
        # Comfort message
        console.write(root + "\\" + fn + "\r\n")
        
        # Use menu commands to convert to UTF-8
        notepad.runMenuCommand("Encoding", "Convert to UTF-8")
        
        # Do search and replace on strings
        # Charset
        editor.replace("charset=windows-1252", "charset=utf-8", re.IGNORECASE)
        editor.replace("charset=iso-8859-1", "charset=utf-8", re.IGNORECASE)
        editor.replace("charset=us-ascii", "charset=utf-8", re.IGNORECASE)
        editor.replace("charset=unicode", "charset=utf-8", re.IGNORECASE)
        editor.replace("http://www.tertullian", "https://www.tertullian", re.IGNORECASE)
        editor.replace('', '', re.IGNORECASE)

        # Save and close the file in Notepad++
        notepad.save()
        notepad.close()

Note that Python uses indentation rather than curly brackets to mark blocks, so the indentation with spaces is crucial.

Also turn on the console: Plugins | Python Script | Show Console.

Then run it via Plugins | Python Script | Scripts | your-script-name.

Of course you run it on a *copy* of your folder…

Then open some of the files in your browser and see what they look like.

And now … now … you can use the Linux command line tools if you like.  Because you’re using UTF-8 files, not ANSI, and, if they support unicode, they will find your characters.

Good luck!

Update: Further thoughts on encoding

I’ve been looking at the output.  Interestingly, this does not always work.  I’ve found files converted to UTF-8 where the text has become corrupt.  Doing it manually in Notepad++ works fine.  I am not sure why this happens.

I’ve always felt that using non-ASCII characters is risky.  It’s better to convert the unicode into HTML entities, using &uuml; rather than ü.  I’ve written a further script to do this, in much the same way as above.  The changes need to be case sensitive, of course.

I’ve now started to run a script in the base directory to add DOCTYPE and charset="utf-8" to all files that do not have them.  It’s unclear how to do the “if” test using Notepad++ and Python, so instead I have used a Bash script running in Git Bash, adapted from one sent in by a correspondent.  Here it is, in abbreviated form (a plain Python sketch of the same test follows the script):

# This section
# 1) adds a DOCTYPE declaration to all .htm files
# 2) adds a charset meta tag to all .htm files before the title tag.

# Read all the file names using a find and store in an array
files=()
find . -name "*htm" -print0 >tmpfile
while IFS= read -r -d $'\0'; do
      #echo $REPLY - the default variable from the read
      files+=("$REPLY")
done <tmpfile
rm -f tmpfile

# Get a list of files
# Loop over them
for file in "${files[@]}"; do

    # Add DOCTYPE if not present
    if ! grep -q "<!DOCTYPE" "$file"; then
        echo "$file - add doctype"
        sed -i 's|<html>|<!DOCTYPE html>\n<html>|' "$file"
    fi

    # Add charset if not present
    if ! grep -q "meta charset" "$file"; then
        echo "$file - add charset"
        sed -i 's|<title>|<meta charset="utf-8" />\n<title>|I' "$file"
    fi

done
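For what it’s worth, the same “if” test is easy enough in plain Python run from an ordinary command prompt, outside Notepad++.  Here is a rough sketch of mine along the same lines as the bash above; the path is just an example, it assumes the files are already UTF-8, and unlike the sed command it is case-sensitive:

import os

# Walk a directory tree and add a DOCTYPE and a charset meta tag to any
# .htm/.html file that lacks them.  Run it on a *copy* of the site first.
base = "d:\\roger\\website\\tertullian.old.wip"

for root, dirs, files in os.walk(base):
    for fn in files:
        if not fn.endswith((".htm", ".html")):
            continue
        path = os.path.join(root, fn)
        with open(path, encoding="utf-8") as f:
            text = f.read()
        changed = False
        if "<!DOCTYPE" not in text:
            text = text.replace("<html>", "<!DOCTYPE html>\n<html>", 1)
            changed = True
        if "meta charset" not in text:
            text = text.replace("<title>", '<meta charset="utf-8" />\n<title>', 1)
            changed = True
        if changed:
            with open(path, "w", encoding="utf-8") as f:
                f.write(text)
            print(path, "- updated")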

Find non-ASCII characters in all the files

Once you have converted to unicode, you then need to convert the non-ASCII characters into HTML entities.  This I chose to do on Windows in Git Bash.  You can find the duff characters in a file using this:

 grep --color='auto' -P -R '[^\x00-\x7F]' works/de_pudicitia.htm

That highlights the offending characters in that one file.  To get a list of all the htm files containing characters outside the ASCII range, use this incantation in the base directory; it will walk the directories (-R) and only show the file names (-l):

grep --color='auto' -P -R -n -l '[^\x00-\x7F]' | grep htm
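If grep misbehaves with unicode – on Windows, I find, it often does – the same check can be made in plain Python.  A sketch of my own, assuming the files have already been converted to UTF-8:

import os
import re

# List .htm/.html files under the current directory containing any
# character outside the ASCII range - the same test as the grep above.
non_ascii = re.compile(r"[^\x00-\x7f]")

for root, dirs, files in os.walk("."):
    for fn in files:
        if fn.endswith((".htm", ".html")):
            path = os.path.join(root, fn)
            with open(path, encoding="utf-8") as f:
                if non_ascii.search(f.read()):
                    print(path)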

Convert the non-ASCII characters into HTML entities

I used a Python script in Notepad++, and this complete list of HTML entities.  So I had line after line of:

editor.replace('Ë','&Euml;')
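If typing out hundreds of such lines seems tedious, the Python standard library already knows the named entities, so the whole substitution can be generated.  This is a sketch of an alternative approach of my own, not the script actually used on the site:

from html.entities import codepoint2name

# Replace every non-ASCII character that has a named HTML entity
# (e.g. ü becomes &uuml;); anything else becomes a numeric reference.
def to_entities(text):
    out = []
    for ch in text:
        if ord(ch) < 128:
            out.append(ch)
        elif ord(ch) in codepoint2name:
            out.append("&%s;" % codepoint2name[ord(ch)])
        else:
            out.append("&#%d;" % ord(ch))
    return "".join(out)

print(to_entities("Ëü"))   # prints &Euml;&uuml;

The replacements it produces are naturally case sensitive, since &Euml; and &euml; are different entities.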

I shall add more notes here.  They may help me next time.


Admin: Tertullian Project reload

The Tertullian Project (tertullian.org) and all the files underneath it will be temporarily offline.  I’ve made a couple of small technical changes, globally, to the HTML files, and so I am uploading the directory again from my local disk.  I’m not sure how long this will take; maybe an hour or two, probably less. My apologies for the outage.  There should be no visible change to anything at the end of it all.

The purpose of these changes is to improve the visibility of material on the site in Google. The Tertullian Project files date back to 1997, but Google Search – which did not exist when those pages were written – gives preference to pages which are optimised to work with it.

A kind correspondent suggested these changes, as long ago as 2019, and provided a bash script to implement them.  Today I see some further problems reported in the Google Search Console.  So I have updated his script, run it, and we will see whether the result is an improvement.

If not, I have retained the old directory.  Let me know if there are breakages tomorrow.

Update: Well, that didn’t go particularly well – characters with umlauts and accents tended to break.  I’ve just rolled back to the original version, and I will try again tomorrow.

Update: Well… that has probably been one of the most annoying and frustrating days of my life.  Unicode support in the Windows command line is complete rubbish.  It doesn’t matter whether you use WSL or the Bash shell; you will find that tools simply do not display unicode, or do not find it.  You can pipe a unicode ü character to grep and it will find it with a regex; you then cat a text file containing the same character and it will not.  Everything is rubbish.  Nothing works consistently for non-ASCII characters.  What I thought to do was to compile a list of files that needed changes.  Even that minor task couldn’t be done, because grep won’t work in any sensible way.

No doubt there is, in fact, a pathway through this field full of bottomless rabbit-holes.  For how else could programmers overseas work?  But if there is, it has been impossible to find it on Google.  Enough!


Josephus in Ethiopian – a dissertation

An interesting dissertation has come online here, Y. Binyam, Studies in Sefer Yosippon: The Reception of Josephus in Medieval Hebrew, Arabic, and Ethiopic Literature, Florida (2017).  The abstract reads:

In this dissertation I analyze the reception of Josephus in Ethiopia by way of the Hebrew Sefer Yosippon, its Latin sources, and its subsequent Arabic translations. I provide the first English translations and comparative analysis of selected passages from the Latin, Hebrew, Arabic, and Ethiopic texts that transmit Josephus’s Jewish War.

The first part of this project provides an introduction to four texts that play important roles in the transmission of Josephus’s Jewish War from first-century Rome to fourteenth-century Ethiopia: the fourth-century Latin De Excidio Hierosolymitano, the tenth-century Hebrew Sefer Yosippon, the twelfth-century Arabic Kitāb akhbār al-yahūd, and the fourteenth-century Ethiopic Zena Ayhud.

After discussing the critical issues related to these texts, the second part of the dissertation presents a detailed comparison of the receptions of the famous story of Maria found in Josephus’s account of the siege of Jerusalem. I pay close attention to the redactional changes made by the author of each text and note the ideological, cultural, rhetorical, and historical factors that lie behind the various editorial activities.

Ultimately my research seeks to contribute to our understanding of the way in which non-western cultures receive the historiographical traditions of the classical period. In doing so, it will highlight the uniqueness of understudied literary and historiographical traditions that flourished in the medieval period.

The Latin text is the ps.Hegesippus, which is online.

The thesis discusses the textual transmission of these four sub-Josephan texts.  Naturally this involves material known only to specialists.  Who of us knows much about the spread of texts into Ethiopic?  But I learn on p.61 that a “large number of translations were made into Ethiopic … during the thirteenth and fourteenth centuries”, and of “the ecclesiastical reforms that take place with the ascendancy of Yekuno Amlak (1270-1285), who commissions the translation of large numbers of theological and ecclesiastical works into Ge’ez.”

My thanks to the kind correspondent who drew my attention to this very worthwhile study.
