Coming into the Summer Fellowship, the aspect of my project I was most worried about was the coding. While I am fairly functional, if not precisely fluent, with most technologies, delving beneath the surface into the murky chasms of code was a daunting step to take. As I sit here reflecting on my summer, my adventures with code turned out to be both the most satisfying and the most enjoyable part of the project.

Much of the early portion of the summer was spent getting comfortable with the Python programming language. Working with my studio contact, Nikki White, allowed me to establish a strong foundation, which I then supplemented with tutorials from online sources such as LinkedIn Learning and Codecademy. Whenever I got particularly stuck while writing a script, I quickly learned to do what all great coders do: I turned to Google. Stack Overflow, a veritable online coding community, usually had the answers I was looking for. Perhaps the most important thing I've learned about coding is that no matter what you're trying to do, somebody else has tried to do the same thing at some point, failed miserably, and eventually worked out a solution (which they've then shared online). Given that we are entering the Artificial Intelligence Epoch, I also occasionally, in moments of great weakness, turned to ChatGPT for help. Intriguingly, ChatGPT is a formidable tutor: I could share scripts I had written with it, and it would kindly point out what I had done wrong and explain what I needed to fix to make the script work as I had intended. A brave new world indeed.
The major hurdle I needed to clear was the massive amount of data cleaning and data entry required to prepare my data for textual analysis; this process took up the majority of the second half of the summer. Despite the monotony, I was able to expedite some of the data collection by putting my newly acquired coding skills to good use. I wrote Python scripts to parse the XML files of the medieval texts I'd collected for the project and extract useful metadata for the index I've built to power my textual analysis queries. The parsing process wasn't without its own difficulties, however. The text files came in several different XML formats, each requiring its own script to parse, and in some cases there were large batches of texts I had to process by hand: opening the XML files, searching for metadata such as "title" and "date," and manually entering it into my spreadsheet. This was a particularly onerous task, as a large portion of the texts had titles and introductions in German. As I write this, I am finally entering the phase of the project where I can begin performing textual analysis on my dataset. This is exciting, and I hope to share an update on my progress in the video accompanying this post. Thank you for reading.
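For curious readers, here is a minimal sketch of the kind of metadata-extraction script described above. The tag names (`title`, `date`) and the flat file layout are hypothetical stand-ins; real corpora (TEI-encoded texts especially) nest their metadata inside namespaced headers, so the search paths would need adjusting for each XML format, which is exactly why several scripts were needed.

```python
import csv
import xml.etree.ElementTree as ET
from pathlib import Path

def extract_metadata(xml_path):
    """Pull the first non-empty <title> and <date> out of one XML file."""
    root = ET.parse(xml_path).getroot()
    # .iter() walks the whole tree, so nesting depth doesn't matter.
    # Note: namespaced tags (e.g. TEI's) would need the namespace prefix here.
    title = next((el.text for el in root.iter("title") if el.text), "")
    date = next((el.text for el in root.iter("date") if el.text), "")
    return {"file": xml_path.name, "title": title.strip(), "date": date.strip()}

def build_index(corpus_dir, out_csv):
    """Write one CSV row of metadata per XML file in corpus_dir."""
    rows = [extract_metadata(p) for p in sorted(Path(corpus_dir).glob("*.xml"))]
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "title", "date"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The resulting CSV can then be opened as a spreadsheet or loaded directly by the textual analysis queries, with the manually entered rows appended alongside the scripted ones.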