multi-search: tool for searching keywords on multiple websites

Intro

multi-search is a small GUI tool designed to search for a term or keyword on multiple websites.
It’s built in Python using the appJar GUI suite.
It’s intended as a tool for buyers or sellers to compare the price of an item, where comparison websites may fail for different reasons.
It can save time, as well as passively suggesting different sites to search.
Additionally, the multi-search tool doesn’t attempt to track or collect any user information.

The program is simple by design, and tries hard to be as quick and intuitive as possible.

[Screenshot: the multi-search main program]

Once the “Sites to search” are selected from the dropdown, the URL links are presented for use.

[Screenshot: multi-search results]

There’s an additional option to automatically open the links in the web browser.

How it works

It’s really simple. The program loads the site and search data from a “.txt” file.

It builds a URL string from the data and the search term provided.

It shows or opens the URLs for the user!
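As a rough sketch of steps 2 and 3 (the site names, file format and URL templates here are invented for illustration, not taken from the actual program):

```python
import urllib.parse
import webbrowser

def build_urls(sites, term):
    """Fill the search term into each site's URL template."""
    quoted = urllib.parse.quote_plus(term)  # make the term URL-safe
    return [template.format(term=quoted) for template in sites.values()]

# hypothetical site data, as it might be loaded from the ".txt" file
sites = {
    "exampleshop": "https://www.example.com/search?q={term}",
    "examplemart": "https://mart.example.org/find?query={term}",
}

for url in build_urls(sites, "usb cable"):
    print(url)               # "show" the URL...
    # webbrowser.open(url)   # ...or open it in the default browser
```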

Project State

In its current form, the program is completely usable, although still in its infancy.

Project Direction

  • Bundle the project for Windows and Linux, making it widely available

Where to download

The project is currently available at github.com


Changes:

  • 21/02/2018: Major Changes
    Added “multi-search.ini” to set user configurations
    Improved naming convention for site URLs – allowing for international support
  • 20/02/2018: Major Changes
    Added JSON support for REST APIs

SQLite3 and Python3 : Generating Statements

I’ve been working on a project that interacts with a database, and happened upon some interesting problems.

The data I want to input into the database is initially stored in a dict variable.

  • If you weren’t already aware, the order of a Python dict is not guaranteed (prior to Python 3.7), even if you use a blueprint or template. This means that a pre-prepared database statement wouldn’t necessarily align with the values of the dict, causing the data to be parsed into the wrong fields in the database.

The dict may contain some or all of the table’s fields

  • My program will acquire as many data values from input as possible to fully populate the dict. However, I have allowed this to be dynamic, in the sense that if the data is irretrievable, or doesn’t match a regex for the field, it will ignore said field and move on, leaving the dict value at its default of “None”.

I want to make correct use of the API’s escape mechanism

  • This is the most important. Not only to make fully sure that I’m not inserting “unclean and potentially harmful” data, but also to follow best practice. I’m really sure (like 99%) that the data will be clean; however, mitigating the risk further will bulletproof the commit.

I don’t want to bore you with many lines of code, so I will explain the methodology of my trials instead.

Many Loops

At first, I kind of penned down the rough idea of the function in long hand.
I started with a loop to store the key/value pairs in 2 separate lists. I had to include a couple of cases; to ignore “None” values and some keys which weren’t destined for the intended table.

For each list, I wrote a loop that builds a string that had to be properly formatted for the final statement.

INSERT INTO x (f1, f2) VALUES(v1, v2)

This worked well, but the code really was too verbose: 30~50 lines in fact. I thought about it and lessened the load.

Less loops, messy code

Trying to lessen the loops, I went with the same loop to extract the dict key/values as above.

For reference:

_ = [[], []]  # _[0] collects keys, _[1] collects values

ignore = ["ig1", "ig2"]

# "data" is the source dict; .items() yields key/value pairs
for key, value in data.items():
    if key in ignore:
        continue
    _[0].append(key)
    _[1].append(value)

I decided to use the string .join() method to create the raw statements from each list, which were then concatenated to make the final statement.

It worked and lessened the loop burden. However, I had forgotten an extremely important step: the SQLite3 escape mechanism.

Success in less words

It suddenly dawned on me that, through these statement-creating methods, I was joining raw data into the string without properly escaping it. I had also tried:

"{v0}".format(v0=data[0])

This of course gave me the same result.

The fix was fairly simple, however.
The values in the statement needed to be changed from the “data values” to “?” placeholders.
The execute method also required a second argument, a tuple, to allow for parameter substitution of the “?” placeholders. I simply converted the list to a tuple.

The code is now down from around 50 to 15 lines which I’m happy about.

ignore = ["ig1", "ig2"]  # list of dict keys to ignore.
col = []  # list of col headings
val = []  # list of col values

# loop the source dict ("data"), append key/value to lists respectively
for key, value in data.items():
    if key not in ignore:
        if value:
            col.append(key)
            val.append(value)

val = tuple(val)  # convert list to tuple

f = ", ".join(col)  # make string of fields
v = ",".join(["?"] * len(col))  # make string of "?" placeholders
statement = "INSERT INTO x (" + f + ") VALUES(" + v + ")"  # finalise statement

dbc.execute(statement, val)

This code sits right with me. It has taken a few attempts, but the final result gets the job done correctly:

INSERT INTO x (f1, f2, f3) VALUES(?,?,?)

It goes to show that even something as trivial as chucking information into a database can require some careful planning, especially when taking limitations (in this case, datatypes) and the nature of the task into account. It has taught me some important lessons.

Python: A visual look at randomly changing values

I wanted to visually see what was happening whilst randomly filling a Python list or dict with a value. For these examples, I used (mainly) identical code to produce the visuals, but changed the main loop method.

I created the first program with what I thought was the best method, but was quickly disappointed.

The First Attempt

I started by creating an empty dict of 100 key/value pairs; this will be my working space. I felt 100 was a nice round number to use, and it easily fits within the confines of a terminal.
Using a loop, I randomly choose a number between 0 – 99 and change the corresponding dict value to “x”, representing a change.
If the dict value had already been changed, it loops over until a new unchanged value is picked. The program stops when all values have been changed.

The whole process took little time to execute, so I decided to slow the program down with a sleep call to see the changes clearly.
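A minimal sketch of this first approach (the terminal drawing and sleep call from the video are omitted; the conflict counter is my own addition for illustration):

```python
import random

def fill_randomly(size=100):
    """Brute-force fill: keep picking random keys until every value is set."""
    d = {i: None for i in range(size)}  # working space of key/value pairs
    conflicts = 0
    filled = 0
    while filled < size:
        pick = random.randrange(size)  # random key between 0 and size - 1
        if d[pick] is None:
            d[pick] = "x"  # mark the value as changed
            filled += 1
        else:
            conflicts += 1  # value already changed: a wasted pick
    return d, conflicts

d, conflicts = fill_randomly()
print(conflicts, "conflicting picks")
```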

(Source: Python_dict_from_random)

In the video, the end result was a total of 415 conflicting random picks and 100 successful picks. This really surprised me as I have used this method in some programs before, but never realised how inefficient it actually was.
This approach essentially uses brute-force to change the values, without any control. I needed control.

The Second Attempt

Thinking about the problem, control was key. In fact, the key is control. With every filled dict value, the probability of picking an unchanged value goes down.
Ideally, the probability of picking an unchanged value should be (or be as close as possible to) 100% to be efficient.

I decided to create a new list based on the dict’s keys. From there, I could randomly select a value from the list, change the value of the corresponding dict key, and then remove the key from the list. I would be left with a list of dict keys which are yet to be populated. This means that every random pick will always land on an empty value. I gave it a go and it worked really well!
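A minimal sketch of the reduction approach (the terminal drawing from the video is omitted; the names are my own):

```python
import random

def fill_by_reduction(size=100):
    """Pick only from keys that are still unfilled, so no pick is wasted."""
    d = {i: None for i in range(size)}
    remaining = list(d)  # keys yet to be populated
    while remaining:
        pick = random.choice(remaining)  # always lands on an empty value
        d[pick] = "x"
        remaining.remove(pick)  # shrink the pool of candidates
    return d

d = fill_by_reduction()
```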

(Source: python_dict_from_random_reduction)

As you can see, 0 conflicts; there can’t be a conflict. The only downside might arguably be the overhead of manipulating and storing the list, especially if the dict key is a string rather than an integer. However, I am certain that the likelihood of the first example picking 100/100 keys in its first 100 loop iterations is really low.

Notes

In the examples above, I used dicts as the data medium. Obviously, with some tweaking, you could use lists. In fact, if you use the first method especially, you SHOULD use a list/index (unless the dict key is indexed by an integer).
Secondly, you will need to pre-allocate the list to the desired length in order to prevent index exceptions!

Happy New Year! Here’s a program…

Leading up to the momentous 2017/2018 New Year’s ding, we couldn’t decide how to choose 10 of the 50 questions in a Pub Quiz game that we had picked to play. I decided to make a program to do just this and randomise the questions. Here’s my New Year’s Python gift to you. Have a good one!

from random import randrange

def main():
    numrange = [1, 51]
    while True:
        i = input()
        if i == "q":
            break
        else:
            questions(numrange)
    return 1

def questions(numr):
    picks = []  # question numbers chosen so far
    for i in range(10):
        while True:
            tmp = randrange(numr[0], numr[1])
            if tmp not in picks:  # re-roll duplicates
                picks.append(tmp)
                break

    picks = sorted(picks)
    for i in picks:
        print(i)

    return 1

if __name__ == "__main__":
    main()
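As an aside (not part of the original program), the standard library’s random.sample can do the unique-pick-and-sort in one step:

```python
from random import sample

def questions_sampled(numr, count=10):
    """Pick `count` unique question numbers from the range, sorted."""
    return sorted(sample(range(numr[0], numr[1]), count))

for q in questions_sampled([1, 51]):
    print(q)
```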

First look at building a configuration file parser – Python3

Intro and context

The project that I’m working on is actually based on a previous (now defunct) project that had to be re-written. I was in the middle of creating a scrape tool to pull data from a website.

The original (which I’ll refer to as MK1) worked really well, until the site was completely re-designed. I always knew of the implications around this, but continued regardless. Looking back, I could have done more to lessen the impact of unexpected changes. This post is less about the details of the project and more about future-proofing against expected changes, and creating an easier way to do so.

Anyone who has worked with an HTML parser knows that it can only work with what it is given, and if the HTML changes, so does the way the rest of the script behaves. I thought long and hard whilst rethinking the program… I thought about the 3 main objectives I wanted to achieve.

  1. Get data (input)
  2. Extract and order data (process)
  3. Save data (output)

I wanted this to be an automated, unsupervised process. There are (and will be) many cases where things go wrong… but I still want/need to store the breadcrumbs of “broken” data records for completeness.

Being a cup half full kinda guy, I broke MK1 down bit by bit looking for worst case scenarios and weaknesses.


Input

The webpage is the input; it can’t be changed after it’s received. It’s fairly simple to programmatically grab HTML from the internet, but what if I needed multiple pages? URLs change all the time; how do I speed up the process of changing a list of hard-coded sites in a script? What if I wanted to add a new site entirely?

Ideally, I needed a simpler way, with as little hard-coding as possible, to pull raw data and push it on to processing. If things change, the impact will be minimal. I also needed an accessible list of URLs to queue, which can be changed whenever needed.

Process

From the HTML, I want to focus only on the useful elements. Things I need. I look for similarities in lines of text. I find many different words, phrases and numbers expressing the same things differently. I could dedicate an entire function to this for each group of data I want to extract, adding to an ever-growing list of if statements or switch cases, some of which may stretch for literally hundreds of lines of code across 25 different cases (like an exaggerated MK1). What if those 25 cases suddenly change? It could mean 100+ lines of code need reworking. What if the phrases that I was originally looking for also change?

I opted for files to hold these rules. They can all be read, loaded and used within a single loop statement without the need to build these in to the script.
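As a hedged sketch of that idea (the one-rule-per-line “phrase=field” syntax here is invented for illustration; the real rule files may differ):

```python
def load_rules(paths):
    """Load phrase-to-field rules from plain-text files, one "phrase=field" per line."""
    rules = {}
    for path in paths:  # one rule file per group of data to extract
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):  # skip blanks and comments
                    phrase, field = line.split("=", 1)
                    rules[phrase] = field
    return rules
```

Changing a phrase then means editing a text file, not reworking hundreds of lines of code.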

Output

I know what the output should be and how it should be stored. But what if I wanted to add more data to the data set? Maybe I have fragments of data that processing has missed? Shall I just discard it?

Here, I decided to include a list of values inside some of the config files used in the input stage. These will correspond to database columns and can be added/changed whenever needed.

Eventually I was able to group the problems together and create a logical solution for them all.

If you notice, there’s a lot of “daisy chaining” going on. I don’t mind this, as config files are a lot easier to manipulate than creating a database and a front end to manipulate it, and easier still than hard-coding the majority of the variables that are needed.

Creating a parser in python

Essentially, the txt configuration files will contain your own little language and syntax, in order to sort and use the data appropriately.

Let’s take this simple configuration:

urls.txt

#this is a configuration file
#syntax: sitename=url

#url1
jsephler=https://jsephler.co.uk

#url2
time50=https://time50.jsephler.co.uk

In the example, we have some familiar sights of a typical config file.

  1. “#” for block comments. We must tell the parser to ignore these.
  2. Empty lines (or new-line characters) to make the file more human-readable. We must tell the parser to ignore these too.
  3. Finally, the configurations themselves: “jsephler=https://jsephler.co.uk”

In Python, we first need to open the configuration file. Let’s assume that “urls.txt” is in the same directory as our script.

openconfig.py

def main():
    urllist = []  # a list for config data
    filename = "urls.txt"  # path to file
    with open(filename, "r") as urlfile:  # open file
        for line in urlfile:  # iterate through the file
            if not line.startswith(("#", "\n")):  # ignore "#" comments and blank lines
                tmp = line.strip().split("=", 1)  # strip whitespace and split on the first "="
                urllist.append(tmp)  # append to list

    print(urllist)  # print list to console

if __name__ == "__main__":  # initiate "main()" first
    main()

If permissions allow, the script will open our file and refer to it as “urlfile”.

The loop iterates through every line in the file, while the if statement skips any lines that start with “#” or a “\n” new-line character.

Before we store our data, we remove whitespace (strip) and separate (split) the string by the “=” character.

Only after this, we can append it to our urllist array.

Output should look like this:

[['jsephler', 'https://jsephler.co.uk'], ['time50', 'https://time50.jsephler.co.uk']]

An array of arrays, where for each member of urllist:

[0] is the sitename, and [1] is the url.
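As one possible follow-up (my addition, not part of the script above), the list of pairs converts neatly into a dict for lookups by sitename:

```python
urllist = [['jsephler', 'https://jsephler.co.uk'],
           ['time50', 'https://time50.jsephler.co.uk']]

urls = dict(urllist)  # list of [sitename, url] pairs -> {sitename: url}
print(urls["jsephler"])  # https://jsephler.co.uk
```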

Breaking this down further, you could have a configuration like this:

jsephler=https://jsephler.co.uk|copy

After the first “=” split, you could split the second member of the array a second time, using the “|” character, to end up with another 2 pieces of data. “copy” could call a function to do just that: copy!
Obviously, plan ahead and use the characters wisely. You do not want to use a character that could be included in a URL, as you may need it in the future.
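A sketch of that secondary split (the “copy” action is the hypothetical example from above):

```python
line = "jsephler=https://jsephler.co.uk|copy"

sitename, rest = line.strip().split("=", 1)  # first split: sitename from the rest
url, action = rest.split("|", 1)             # second split: url from the action flag

print(sitename, url, action)  # jsephler https://jsephler.co.uk copy
```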

By doing this, you can create a config file that’s not only simple but powerful too.

Conclusion

There is a Python config parser library, however I preferred to create my own. My reasons for doing so:

  1. I didn’t actually know it existed until I started writing this post.
  2. It is fairly simple logic, and I can tailor the syntax and its uses.
  3. You could potentially save on overheads by not loading and using a separate module.
  4. It’s a lot of fun to experiment with!

For reference, here is the documentation for the standard parser library: https://docs.python.org/3/library/configparser.html