Happy New Year! Here’s a program…

Leading up to the momentous 2017/2018 New Year's ding, we couldn't decide how to choose 10 of the 50 questions in a pub quiz game we'd picked to play. I decided to make a program to do just that and randomise the questions. Here's my New Year's Python gift to you. Have a good one!

from random import randrange

def main():
    numrange = [1, 51]  # randrange's upper bound is exclusive, so this covers 1-50
    while True:
        i = input("Press Enter for 10 questions, or type 'q' to quit: ")
        if i == "q":
            break
        questions(numrange)

def questions(numr):
    picked = []  # question numbers drawn so far
    for _ in range(10):
        while True:
            tmp = randrange(numr[0], numr[1])
            if tmp not in picked:  # re-draw any duplicates
                picked.append(tmp)
                break

    for number in sorted(picked):
        print(number)

if __name__ == "__main__":
    main()
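
As an aside, the standard library can do the whole no-repeat draw in one call; here's a shorter sketch of the same idea using random.sample:

from random import sample

# draw 10 distinct question numbers from 1-50, sorted for reading out
for number in sorted(sample(range(1, 51), 10)):
    print(number)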

First look at building a configuration file parser – Python3

Intro and context

The project I'm working on is actually based on a previous (now defunct) project that had to be rewritten: a scraping tool I was building to pull data from a website.

The original (I'll refer to it as MK1) worked really well, until the site was completely redesigned. I always knew this was a risk, but continued with it regardless. Looking back, I could have done more to lessen the impact of unexpected changes. This post is less about the details of the project and more about future-proofing against expected changes, and creating an easier way to do so.

Anyone who has worked with an HTML parser knows that it can only work with what it is given, and if the HTML changes, so does the way the rest of the script behaves. I thought long and hard whilst rethinking the program, and settled on the 3 main objectives I wanted to achieve:

  1. Get data (input)
  2. Extract and order data (process)
  3. Save data (output)

I wanted this to be an automated, unsupervised process. There are (and will be) many cases where things can go wrong, but I still want/need to store the breadcrumbs of “broken” data records for completeness.

Being a cup-half-full kinda guy, I broke MK1 down bit by bit, looking for worst-case scenarios and weaknesses.


Input

The webpage is the input; it can't be changed after it's received. It's fairly simple to programmatically grab HTML from the internet, but what if I needed multiple pages? URLs change all the time, so how do I speed up the process of changing a list of hard-coded sites in a script? What if I wanted to add a new site entirely?

Ideally, I needed a simpler way, with as little hard-coding as possible, to pull raw data and push it on to the process stage. That way, if things change, the impact will be minimal. I also needed an accessible list of URLs to queue, which could be changed whenever needed.

Process

From the HTML, I want to focus only on the useful elements. Things I need. I look for similarities in lines of text and find many different words, phrases and numbers expressing the same things differently. I could dedicate an entire function to each group of data I want to extract, adding to an ever-growing list of if statements or switch cases, some stretching for literally hundreds of lines of code across 25 different cases (like an exaggerated MK1). What if those 25 cases suddenly change? It could mean 100+ lines of code need to be reworked. What if the phrases I was originally looking for also change?

I opted for files to hold these rules. They can all be read, loaded and used within a single loop, without the need to build them into the script.
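
As a rough sketch of the idea (the file name, rule format and page text here are all invented for illustration, not MK1's actual rules):

page_text = "an example line of scraped text containing a phrase"  # stands in for real HTML text

rules = []
with open("rules.txt", "r") as rulefile:  # hypothetical rules file: label=phrase per line
    for line in rulefile:
        if line[0] != "#" and not line.startswith("\n"):
            label, phrase = line.strip().split("=", 1)
            rules.append((label, phrase))

# one loop applies every rule; no ever-growing chain of if statements
for label, phrase in rules:
    if phrase in page_text:
        print(label, "matched")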

Output

I know what the output should be and how it should be stored. But what if I wanted to add more data to the data set? Maybe I have fragments of data that processing has missed. Should I just discard them?

Here, I decided to include a list of values inside some of the config files used in the input stage. These will correspond to database columns and can be added/changed whenever needed.

Eventually I was able to group the problems together and create a logical solution for them all.

If you notice, there's a lot of “daisy chaining” going on. I don't mind this, as config files are a lot easier to manipulate than creating a database and a front end to manipulate it, and easier still than hard-coding the majority of the variables that are needed.

Creating a parser in Python

Essentially, the .txt configuration files will contain your own little language and syntax in order to sort and use the data appropriately.

Let’s take this simple configuration:

urls.txt

#this is a configuration file
#syntax: sitename=url

#url1
jsephler=https://jsephler.co.uk

#url2
time50=https://time50.jsephler.co.uk

In the example, we have some familiar sights of a typical config file.

  1. “#” for comments; we must tell the parser to ignore these lines.
  2. Empty lines (or new-line characters) to make the file more human-readable; the parser must ignore these too.
  3. Finally, the configurations themselves: “jsephler=https://jsephler.co.uk”

In Python, we first need to open the configuration file. Let's assume that “urls.txt” is in the same directory as our script.

openconfig.py

def main():
    urllist = []  # a list for config data
    filename = "urls.txt"  # path to file
    with open(filename, "r") as urlfile:  # open file
        for line in urlfile:  # iterate through file
            if line[0] != "#" and not line.startswith("\n"):  # ignore comment and blank lines
                tmp = line.strip().split("=", 1)  # strip whitespace, split on the first "=" only
                urllist.append(tmp)  # append to list

    print(urllist)  # print list to console

if __name__ == "__main__":  # run "main()" when executed as a script
    main()

If permissions allow, the script will open our file and refer to it as “urlfile”.

The loop will iterate through every line in the file, while the if statement skips any line that starts with “#” or a “\n” new-line character.

Before we store our data, we strip the line of whitespace and split the string on the first “=” character.

Only then do we append it to our urllist array.

Output should look like this:

[['jsephler', 'https://jsephler.co.uk'], ['time50', 'https://time50.jsephler.co.uk']]

An array of arrays, where each member of urllist:

[0] is the sitename, and [1] is the url.
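
If you'd rather look a URL up by sitename, one small extension (not part of the original script) is to turn the list into a dictionary:

urllist = [["jsephler", "https://jsephler.co.uk"],
           ["time50", "https://time50.jsephler.co.uk"]]

urls = dict(urllist)  # sitename -> url
print(urls["jsephler"])  # https://jsephler.co.uk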

Breaking this down further, you could have a configuration like this:

jsephler=https://jsephler.co.uk|copy

After the first “=” split, you could split the second member of the array a second time using the “|” character, ending up with another 2 pieces of data. “copy” could call a function to do just that: copy!
Obviously, plan ahead and use the characters wisely. You do not want to use a character that could be included in a URL, as you may need it in the future.
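
A quick sketch of that two-stage split (the “copy” action is just a placeholder):

line = "jsephler=https://jsephler.co.uk|copy"

sitename, rest = line.strip().split("=", 1)  # split on the first "=" only
url, action = rest.split("|")  # second split on "|"

print(sitename)  # jsephler
print(url)  # https://jsephler.co.uk
print(action)  # copy - could trigger a copy() function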

By doing this, you can create a config file that’s not only simple but powerful too.

Conclusion

There is a Python config parser library; however, I preferred to create my own. My reasons for doing so:

  1. I didn’t actually know it existed until I started writing this post.
  2. It is fairly simple logic and I can tailor the syntax and uses.
  3. You could potentially save on overheads instead of loading and using a separate module.
  4. It’s a lot of fun to experiment with!

For reference, here is the documentation for the standard parser library: https://docs.python.org/3/library/configparser.html
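
For comparison, a minimal sketch of the standard library version; it expects an INI-style file, so assume a hypothetical “urls.ini” with a [urls] section holding the same sitename=url pairs:

import configparser

config = configparser.ConfigParser()
config.read("urls.ini")  # hypothetical INI file with a [urls] section

for sitename, url in config["urls"].items():
    print(sitename, url)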

Preparing text files for different OS’s

If you decide to programmatically write out text to a .txt file for viewing later, it might be handy to know these tips.

Linux/Unix systems separate each line of text with a single \n (line feed) escape, signalling the end of a line.

Classic Mac OS marks the end of a line with a single \r (carriage return) escape; modern macOS uses \n like Unix.

Microsoft programs such as Notepad read the sequence \r\n as the end of a line of text.

Obviously, if you’re creating an application destined for the host OS, you probably know this already. If you want to generate reports or logs for use on a different OS, you might find this handy to know, as I did! 👍

Lower level stuff for reference.

ASCII hex for \n (new line)

0x0a

ASCII hex for \r (carriage return)

0x0d
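
If you're writing such files from Python 3, the newline argument to open() controls this translation; a small sketch:

# with newline="\r\n", every "\n" written is translated to "\r\n" on disk,
# producing Notepad-friendly line endings regardless of the host OS
with open("report.txt", "w", newline="\r\n") as report:
    report.write("first line\n")
    report.write("second line\n")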

Producing “animated” graphs

Visualising data is an important way to convey a message. However, I am seeing more and more animated graphs built from occasionally large, complex datasets. Sometimes the patterns they produce are really impressive and cannot be conveyed by a static image or by simply looking at the data itself.

I became inspired (just a little bit) and decided to learn how to create my own. This is not a comprehensive, detailed guide, but it will certainly give you the knowledge to make your own way through this data world.

First, I needed a dataset to analyse. Armed with Python, I started brainstorming. I've always been fascinated by RNGs (random number generators), so a lottery was an obvious choice for making a small dataset; the results are widely available from multiple websites. I picked a site and set to work.

BeautifulSoup

After downloading the HTML pages, I built a tool in Python using a module called BeautifulSoup. The module takes input in the form of an HTML or XML DOM and parses it into objects. This allows you to navigate the page efficiently, to change or extract certain elements or the values within them. After some tinkering, I was able to reduce a long HTML page down to just the data I wanted, with a few lines of code.
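
As a minimal sketch of the idea (the HTML and the class name here are invented placeholders, not the real site's markup):

from bs4 import BeautifulSoup

html = "<table><tr><td class='ball'>7</td><td class='ball'>23</td></tr></table>"
soup = BeautifulSoup(html, "html.parser")  # parse the DOM into navigable objects

# pull the text out of every element matching a tag and class
for cell in soup.find_all("td", class_="ball"):
    print(cell.get_text())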

I’m going to be honest here: this part took up most of my time. The BeautifulSoup documentation was a nightmare to navigate and understand. The initial tutorial was helpful at first, but towards the end I felt like many fundamentals had been left out. There are good tutorials out there, and after reading a few you get a sense of how dynamic BeautifulSoup can be. One trick I used was to assign the results of one BeautifulSoup search to a new list and parse that list back into a new BeautifulSoup object (to do this, you'll need to use encode()).

Another tip whilst working on data: I found an invaluable way to sort it by date. If (as in this example) the data you're working with has date/time values, it might be necessary and important to sort the data by date. I know this might not be the most efficient way, but it is quite effective nonetheless: one by one, mask the date/time strings with strptime() and convert them to a Unix epoch. You can now easily sort the data, then loop and convert each epoch back to your desired date/time layout using strftime().
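
A sketch of that trick (the dd/mm/yyyy layout here is an assumption):

from datetime import datetime

dates = ["03/01/2018", "01/01/2018", "02/01/2018"]  # assumed dd/mm/yyyy layout

# mask each date string with strptime() and convert it to a Unix epoch
epochs = [datetime.strptime(d, "%d/%m/%Y").timestamp() for d in dates]
epochs.sort()  # plain numbers now, so sorting is trivial

# convert each epoch back to the desired layout with strftime()
for e in epochs:
    print(datetime.fromtimestamp(e).strftime("%Y-%m-%d"))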

Sending the data from memory to a CSV or TSV spreadsheet will seal the deal, and you're ready to start making graphs!
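
Python's csv module makes that last step a few lines; a small sketch with made-up rows and column names:

import csv

rows = [["2018-01-01", 7, 23], ["2018-01-02", 4, 42]]  # made-up draw data

with open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["date", "ball1", "ball2"])  # hypothetical column names
    writer.writerows(rows)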

MATPLOTLIB

To generate graphs, I used a Python module called MATPLOTLIB. There are many tutorials out there. Get a good idea of how you would like to frame your data and what you would like to show. When you know that, play around with MATPLOTLIB until you have it plotting test data correctly, and how you want.

Be sure to find out how to save each plot as an image and close the figure properly before generating the next graph slide. For a video file, you'll probably want to save the first image as 0000.PNG, the next as 0001.PNG, and so on, to make the frames easy to encode into a video (and to keep them in order).
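
A minimal sketch of that save-and-close loop, plotting dummy data (the filenames just follow the 0000, 0001 pattern above):

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display window needed
import matplotlib.pyplot as plt

for frame in range(3):  # dummy data: one more point per frame
    xs = list(range(frame + 2))
    ys = [x * x for x in xs]

    plt.plot(xs, ys)
    plt.savefig("{:04d}.png".format(frame))  # 0000.png, 0001.png, 0002.png
    plt.close()  # close the figure before generating the next slide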

After generating your graphs, you can easily encode the image files into a video, creating a graph that appears to move.

My first attempt and final product can be seen here. Have fun experimenting!