Intro and context
The project I’m working on is based on a previous (now defunct) project that had to be re-written. I was in the middle of creating a scrape tool to pull data from a website.
The original (which I’ll refer to as MK1) worked really well, until the site was completely redesigned. I always knew this was a risk, but continued regardless. Looking back, I could have done more to lessen the impact of unexpected changes. This post is less about the details of the project and more about future-proofing against expected changes, and creating an easier way to do so.
Anyone who has worked with an HTML parser knows that it can only work with what it is given, and if the HTML changes, so does the way the rest of the script behaves. I thought long and hard whilst rethinking the program, and focused on the 3 main objectives I wanted to achieve:
- Get data (input)
- Extract and order data (process)
- Save data (output)
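In code terms, those three stages might be sketched as separate functions wired together. This is only an illustration of the separation of concerns; the function names and placeholder bodies are mine, not the actual tool's:

```python
def get_data(url):
    """Input: fetch raw HTML for a URL (stubbed with a placeholder here;
    the real tool would make an HTTP request)."""
    return "<html>...</html>"

def process_data(html):
    """Process: extract and order the useful pieces from the raw HTML
    (stubbed as a trivial slice)."""
    return {"source": html[:6]}

def save_data(record):
    """Output: persist the processed record (stubbed as a simple return)."""
    return record

record = save_data(process_data(get_data("https://jsephler.co.uk")))
print(record)  # → {'source': '<html>'}
```

Keeping the stages separate means a redesign of the input site only forces changes in one place.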
I wanted this to be an automated, unsupervised process. There are (and will be) many test cases for when things go wrong… but I still want/need to store the breadcrumbs of “broken” data records for completeness.
Being a glass half full kinda guy, I broke MK1 down bit by bit, looking for worst-case scenarios and weaknesses.
The webpage is the input; it can’t be changed after it’s received. It’s fairly simple to programmatically grab HTML from the internet, but what if I needed multiple pages? URLs change all the time, so how do I speed up the process of changing a list of hard-coded sites in a script? What if I wanted to add a new site entirely?
Ideally, I needed a simpler way, with as little hard-coding as possible, to pull raw data and push it on to processing. If things change, the impact will be minimal. I also needed an accessible list of URLs to queue, which can be changed whenever needed.
From the HTML, I want to focus only on the useful elements. Things I need. I look for similarities in lines of text, and find many different words, phrases and numbers expressing the same things in different ways. I could dedicate an entire function to this for each group of data I want to extract, adding to an ever-growing list of if statements or switch cases, some stretching for literally hundreds of lines of code across 25 different cases (like an exaggerated MK1). What if those 25 cases suddenly change? It could mean reworking 100+ lines of code. And what if the phrases I was originally looking for also change?
I opted for files to hold these rules. They can all be read, loaded and used within a single loop, without the need to build them into the script.
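As a rough sketch of the idea (the file name, rule format and field are all illustrative, not the actual tool's), a rules file could map each phrase found on a page to a canonical value, one rule per line, and a single loop replaces the if/elif chains:

```python
# Create an illustrative rules file; in practice it would be hand-maintained.
with open("rules.txt", "w") as f:
    f.write("#rules for a 'bedrooms' field\n")
    f.write("three bedrooms=3\n")
    f.write("3 beds=3\n")

rules = {}
with open("rules.txt", "r") as rulefile:
    for line in rulefile:
        if not line.startswith("#") and line.strip():  # skip comments and blanks
            phrase, value = line.strip().split("=", 1)  # split on first "=" only
            rules[phrase] = value

# One lookup replaces an ever-growing chain of if/elif branches:
print(rules.get("3 beds"))  # → 3
```

Adding a 26th case then means adding one line to a file, not reworking the script.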
I know what the output should be and how it should be stored. But what if I wanted to add more data to the data set? And if I have fragments of data that processing has missed, should I just discard them?
Here, I decided to include a list of values inside some of the config files used in the input stage. These will correspond to database columns and can be added/changed whenever needed.
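For example, a single config line could list the destination columns, which the script splits into a list it can use when writing output. The line format and column names here are made up for illustration:

```python
line = "columns=sitename|url|price|date_added"  # illustrative config line

key, value = line.split("=", 1)   # split key from value on the first "="
columns = value.split("|")        # split the value into individual column names

print(columns)  # → ['sitename', 'url', 'price', 'date_added']
```

Adding a new database column then only requires editing the config file, not the code.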
Eventually I was able to group the problems together and create a logical solution for them all.
If you notice, there’s a lot of “daisy chaining” going on. I don’t mind this, as config files are a lot easier to manipulate than creating a database and a front end to manage it, and easier still than hard-coding the majority of the variables that are needed.
Creating a parser in Python
Essentially, the .txt configuration files will contain your own little language and syntax, used to sort and use the data appropriately.
Let’s take this simple configuration:
```
#this is a configuration file

jsephler=https://jsephler.co.uk
time50=https://time50.jsephler.co.uk
```
In the example, we have some familiar sights of a typical config file.
- “#” for block comments. We must tell the parser to ignore these.
- Empty lines (or new-line characters) to make the file more human-readable. We must tell the parser to ignore these too.
- Finally, the configurations. “jsephler=https://jsephler.co.uk”
In Python, we need to first open up the configuration file. Let’s assume that “urls.txt” is in the same directory as our script.
```python
def main():
    urllist = []  # a list for config data
    filename = "urls.txt"  # path to file

    with open(filename, "r") as urlfile:  # open file
        for line in urlfile:  # iterate through file
            if not line.startswith("#") and not line.startswith("\n"):  # ignore "#" and "newline" lines
                tmp = line.strip().split("=")  # strip line of "whitespace" and split string by "=" character
                urllist.append(tmp)  # append to list

    print(urllist)  # print list to console

if __name__ == "__main__":  # initiate "main()" first
    main()
```
If permissions allow, the script will open our file and refer to it as “urlfile”.
The loop will iterate through every line in the file, while the if statement skips any line that starts with “#” or a “\n” new-line character.
Before we store our data, we remove whitespace (strip) and separate (split) the string by the “=” character.
Only after this do we append it to our urllist array.
Output should look like this:
[['jsephler', 'https://jsephler.co.uk'], ['time50', 'https://time50.jsephler.co.uk']]
An array of arrays, where the first member ([0]) of each entry is the sitename, and the second ([1]) is the URL.
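Since each entry has exactly two members, you can also unpack them directly when looping, for example:

```python
urllist = [['jsephler', 'https://jsephler.co.uk'],
           ['time50', 'https://time50.jsephler.co.uk']]

# Unpack sitename and URL in one step per entry:
for sitename, url in urllist:
    print(sitename, "->", url)
```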
Breaking this down further, you could have a configuration like this:

```
jsephler=https://jsephler.co.uk|copy
```

After the first “=” split, you could split the second member of the array a second time using the “|” character, ending up with another 2 pieces of data. “copy” could call a function to do just that: copy!
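A rough sketch of that double split, using an illustrative line with a “copy” action in it:

```python
line = "jsephler=https://jsephler.co.uk|copy"

sitename, rest = line.strip().split("=", 1)  # first split, on "="
url, action = rest.split("|")                # second split, on "|"

print(sitename, url, action)  # → jsephler https://jsephler.co.uk copy
```

The action string could then be used to look up and call the matching function.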
Obviously, plan ahead and use the characters wisely. You do not want to use a character that could appear in a URL, as you may need to use it in the future.
By doing this, you can create a config file that’s not only simple but powerful too.
There is a Python config parser library; however, I preferred to create my own. My reasons for doing so:
- I didn’t actually know it existed until I started writing this post.
- It is fairly simple logic and I can tailor the syntax and uses.
- You could potentially save on overhead by not loading a separate module.
- It’s a lot of fun to experiment with!
For reference, here is the documentation for the standard parser library: https://docs.python.org/3/library/configparser.html