Python3: Getting Weather Conditions Through API

For a project, I wanted the current outside temperature for my local area. For several reasons I decided on an external data source, but mainly because it would be more accurate than dangling a temperature sensor out of the window!

I will walk you through the steps of building your own API calls in Python3.


There are many different APIs out there in the wild, some premium and some entirely free. I stumbled upon a site which offers free limited access alongside a more powerful premium service.
Currently, free accounts have a limit of 60 calls per minute for the Current Weather API, which is well within my needs (1 call per 15 minutes), so I chose this one.

After signing up for a free account (you do not need to supply a payment method), you are able to create an API key for your app. There should be one already made by default, which I simply renamed after my current project.


Once you have an API key for your project, you might need to wait for it to activate, but it should be ready to use fairly quickly. You are now ready to start building your request!

To make the API request over HTTP, I used the powerful requests library, which makes this job more “pythonic” and easier to work with than the standard urllib.

Installing requests is easy with pip:

pip3 install requests

Now we can start building our API call. In a new .py file, add the following.

# requests docs :
import requests


def main():

    # define the GET parameters for the HTTP query
    payload = {
               'q': 'London,UK',  # q is the place name for your query
               'units': 'metric',  # units is the unit of measurement
               'APPID': 'YOUR_API_KEY_HERE',  # APPID is for your app's API key (keep secret)
    }

    # The URL for the current weather API call
    # (full docs here:
    api_url = ''

    # make the API call with the defined parameters
    query = requests.get(api_url, params=payload)

    # convert the raw JSON response to a Python dictionary with json()
    data = query.json()

    return data


if __name__ == "__main__":
    main()

In the payload dictionary, change the “APPID” value to your newly created API key.
Requests will use the supplied payload dictionary to form a complete GET query string at the end of the api_url, and will automatically apply web-safe escaping for special characters where required!
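That escaping is ordinary percent-encoding, which is much the same thing the standard library's urllib.parse.urlencode produces. A quick stdlib-only illustration of the kind of query string that gets built from the payload:

```python
from urllib.parse import urlencode

# the same payload as above, minus the API key
payload = {'q': 'London,UK', 'units': 'metric'}
query_string = urlencode(payload)
print(query_string)  # q=London%2CUK&units=metric  (the comma is escaped to %2C)
```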


If everything is successful, the data from your API call will be parsed from a raw JSON object into a Python3 dictionary object. The output will look something like this:

{
  'coord': {'lon': -0.13, 'lat': 51.51},
  'weather': [
    {'id': 741, 'main': 'Fog', 'description': 'fog', 'icon': '50n'},
    {'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'},
    {'id': 701, 'main': 'Mist', 'description': 'mist', 'icon': '50n'}
  ],
  'base': 'stations',
  'main': {'temp': -0.01, 'pressure': 997, 'humidity': 100, 'temp_min': -1, 'temp_max': 1},
  'visibility': 8000,
  'wind': {'speed': 3.6, 'deg': 300},
  'clouds': {'all': 90},
  'dt': 1548199200,
  'sys': {'type': 1, 'id': 1414, 'message': 0.0041, 'country': 'GB',
          'sunrise': 1548143498, 'sunset': 1548174807},
  'id': 2643743,
  'name': 'London',
  'cod': 200
}

As you can see, it is mainly made up of a parent dictionary containing inner lists and child dictionary objects.

You can now navigate the data in the usual Python way. For example:

data['wind']['speed']

will return the wind speed value of “3.6” (m/s) in this example.

And to retrieve the current recorded temperature, you would use the key values:

data['main']['temp']

which will return a chilly value of “-0.01” (degrees)!
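One wrinkle worth noting: 'weather' is a list of dictionaries, so an index is needed before the inner keys. A short sketch using a trimmed copy of the sample response above:

```python
# trimmed copy of the sample response
data = {
    'weather': [
        {'id': 741, 'main': 'Fog', 'description': 'fog', 'icon': '50n'},
        {'id': 500, 'main': 'Rain', 'description': 'light rain', 'icon': '10n'},
    ],
}

# index the list first, then use the inner dictionary's keys
print(data['weather'][0]['main'])         # Fog
print(data['weather'][1]['description'])  # light rain
```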


In just a few lines of Python code you have an endless supply of on-demand data at your fingertips. This is incredibly useful in many different situations, and not limited to the example seen here.

Though this code is simple, it was designed to give a working illustration of using APIs in Python. In a real project, it would be necessary to check the response status to make sure the data has been delivered correctly. Without this, a program can crash with any number of exceptions.

Further Implementation

In order to properly use this code in a working application, you may need to think about corner cases to catch exceptions and stop it from crashing in the event of an unexpected circumstance.
For example:

  • What happens if the current device loses internet connection or the URL is unreachable?
  • What happens with a bad API request?
  • What happens if the API key expires or gets blocked?

These cases are in fact quite similar, but can lead to many different errors further down the line.

In my case, I will:

  • Check the request's status code first. If this fails, I will record the information as “NULL” and skip everything else.
  • If the status is good, I will use a try/except clause to access the data through dictionary keys. If the data is somehow not there due to a bad request, I will catch the KeyError exception and record the data as “NULL” instead.
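That plan can be sketched as follows. The function name, the “NULL” sentinel handling, and the duck-typed response argument are my own choices here, not taken from the original project:

```python
def record_reading(response):
    """Turn an API response into a temperature reading, or 'NULL' on failure.

    `response` is anything with a .status_code attribute and a .json()
    method, such as the object returned by requests.get().
    """
    # Step 1: check the status code first; on failure record "NULL" and skip the rest
    if response is None or response.status_code != 200:
        return 'NULL'
    # Step 2: access the data through dictionary keys inside try/except
    try:
        return response.json()['main']['temp']
    except (KeyError, ValueError):
        # key missing from a bad response, or the body was not valid JSON
        return 'NULL'
```

A lost connection surfaces from requests.get() itself, so in a real script the call would also sit inside a try/except for requests.exceptions.RequestException that records “NULL”.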

There might be a few more cases that I haven’t mentioned, but that comes down mainly to what YOU decide to do with the data and how important it is for your application.

Questions? Have I missed something?
Comment below!

Have you been compromised?

If you ever wanted to check whether your login email/username credentials have ever been hacked or breached, you might be in luck! (or out of luck, in some cases…) There is a website that will check them against known data breaches from many major websites, as well as “pastes” from hackers who have compromised data and posted the credentials publicly.
It will also notify you about the type of data that has been leaked, which is important to know.

By now, people should be using a strong, unique password for every different account / service that they sign up for but, more often than not, this is not the case.

If you don’t follow these practices, it might be time to start thinking about it; otherwise one data breach can quickly lead to many.

BSOD fix – Ryzen with dedicated AMD GPU

Recently, I’ve been experiencing many BSODs in Windows.
I’ve had a few different errors such as “KMode_Exception_Not_Handled” and “TCPIP.sys”, which ultimately threw up Kernel Power errors in Event Viewer.

After a few searches, the errors pointed to driver issues. This started to happen soon after upgrading to the latest Windows 10 version.

Starting with the network driver, I downloaded the package from the motherboard’s site and installed it, but the BSODs carried on happening.
I then decided to reinstall both graphics drivers and chipset drivers from the AMD site.
Alas, the BSODs persisted.

Driver Meltdown

I decided to go down the “Old School” route by uninstalling the motherboard, AMD GPU and AMD Chipset drivers completely. I then used CCleaner to clear the registry and deleted the AMD folder located in C:\AMD.

Fully cleaned of old drivers, I installed all motherboard drivers, and then installed AMD Ryzen Chipset drivers BEFORE finally installing the AMD GPU drivers.

After a few reboots and some good hours of usage, the system seemed to be behaving itself! Until I turned it on the next day, and it was BSOD after BSOD.

The drivers weren’t the problem.


At this point, there wasn’t much more I could do in regards to the drivers. Clearly, there was an issue somewhere else, and I had exhausted the “easy” options. A lot of the errors seemed to point loosely to bad RAM corrupting the drivers or the filesystem.

Going back to basics, I tested the system.

  • CHKDSK on drives – no issues
  • MEMTEST86+ – 8 passes no issues
  • Windows Shell “SFC” scan – no issues
  • Windows Memory Diagnostic test – 1 pass no issues
  • Reseated the RAM
  • Reseated the GPU
  • Stress test system with 3DMark – 1 BSOD, 1 PASS
  • Disabled some devices like GPU audio output and onboard sound in case of conflict

At this point, I had a few things to think about. Overwhelmingly, most of the tests had passed.

  • Memory was good
  • Storage was good
  • Windows installation was good (apparently)

Which led me to consider these possible conclusions:

  • Bad GPU – BSOD ATI related errors, faulty hardware?
  • Bad Motherboard – BSOD memory-related errors?
  • Bad CPU – BSOD memory-related errors?
  • Bad PSU – Event Viewer Kernel Power errors?
  • Dodgy Windows update – corruption?

Whilst pondering these grim possibilities, I checked the drivers again on the motherboard’s website in the hope of a new driver release which might solve my issues. Almost a week prior, a new BIOS update had been released.


That’s when the penny dropped; the newest chipset drivers “might” not be working properly with the older motherboard firmware!
This was another completely reasonable notion that hadn’t occurred to me, since the BIOS release was recent but I had been having this issue for a couple of weeks. The morbid conclusion of a hardware failure (although not impossible) had now left my mind, and I was sure that this was the cause.

Before any BIOS update, reset back to the default configuration.
I updated the BIOS, rebooted, and meticulously went through the options to roughly restore my previous configuration. I booted back into Windows and saw no BSOD (yet).

I decided to run another 3DMark stress test, just to give the computer something to worry about. It scored one point above the last test.

A couple of restarts and hours of usage after, no sign of any issues. I re-enabled the devices that I had previously disabled and carried on to use the computer normally.

The system now seems solid, with not an error in sight yet. This is positive, and I am confident that the new BIOS has fixed the stability issues.

To conclude, the newer drivers didn’t play nice with the older firmware, and the new BIOS seems to have solved the problem. But this highlights some other concerns…

Python3: Time-critical code

Whilst thinking about how to tackle my Raspberry Pi temperature project, I found an issue that needed resolving.
In short, I want to record the temperature of my room every 15 minutes, specifically on every 15th minute of every hour, for a couple of reasons:

  1. To keep the data fair: simply rounding to the nearest quarter hour could mean a difference of up to 15 minutes should the Pi turn off and start running again.
  2. Uniformity: I didn’t want the readings to fall every 15 minutes after the program starts, for example 10:21 and then 10:36. I wanted to make a clear comparison of the temperature at exactly the same times, regardless of when I start or stop the program.

Now there are a couple of ways to start this automation, specifically in Linux:

  • cronjob: can run code at a specified time/interval
  • init.d startup: can run code once when startup commences
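For reference, a cron entry that fires on every 15th minute would look something like this (the interpreter and script paths are hypothetical):

```
*/15 * * * * /usr/bin/python3 /home/pi/record_temp.py
```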

The cronjob method would be good, but let’s think about this in some more detail.
Every 15 minutes, the system fires up the code from scratch, which means all imports take place, functions are loaded into memory, variables are assigned etc., all of which costs time (yes, I am being THAT pedantic) and would not always be accurate. It also takes extra setting up, so if someone else uses my project, it can be more complicated to get running.

Init.d will simply run the code once on startup. This allows you to manage time better, although if the code crashes (for some reason), you will have to start it manually or restart the machine.

There are pros and cons to both methods, but ultimately I wanted full control over time and, as you may have gathered, opted for init.d.

Now, the interesting bit! Getting the time right, every time.
Again, there were a couple of ways to accomplish this. I needed to offset the start time (the start of the script) so that measurements are taken at exactly every quarter hour.
Obviously, the start can fall anywhere within a 15-minute window.
I could have arbitrarily looped over the current UNIX time until it divided evenly by 900 (the number of seconds in 15 minutes), but this takes processing time, and if the loop overhead (in ms) pushed it over a second, I could potentially miss a record of data.
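As a side note, that offset can also be computed directly from UNIX time with a single modulo, with no polling loop required. A sketch only (the epoch-based boundaries align with clock quarter hours in UTC):

```python
import time

now = time.time()                    # seconds since the UNIX epoch
seconds_to_wait = 900 - (now % 900)  # 900 s = 15 min; time left to the next multiple of 900
# time.sleep(seconds_to_wait) would then wake on a 900-second boundary
```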

Ideally, I wanted to calculate the time difference and wait it out. Then I could take a record of data and process it, before calculating the difference again for the next time. Doing things this way means I have a 14-minute-59-second window to run whatever code I want before the next reading. That is a long time in terms of running code, but it obviously depends on the application.
For example, processing dynamic data can take varying amounts of time to complete.

Below is my algorithm to calculate the time needed to wait until the next quarter hour. Immediately after time.sleep() comes the time-critical code, and after that you have almost a full 15 minutes to do what you need before catching the next one.

# Calculate the time required to wait until the next quarter hour

from datetime import datetime
import time

# minute, second, millisecond to a string and split into a list
min_sec = datetime.now().strftime("%M %S .%f").split(" ")

# calculate minutes to wait for the next whole quarter hour
# explanation of m calculation: e.g. time = 12:33 : min = 33
# 33 % 15 = 3 : find out minutes since last quarter
# 15 - 3 = 12 : whole minutes to wait for next quarter
# 12 * 60 = 720 : whole seconds to wait with time.sleep()
m = (15-(int(min_sec[0])%15))*60  # convert string to int and calculate seconds

s = int(min_sec[1])  # convert string to int
ms = float(min_sec[2])  # convert ms string to float

# time to sleep until the next whole quarter hour
# calculation:
# minutes in seconds minus seconds over whole minute minus ms over whole second
# sleep for required time
time.sleep(float(m-s) - ms)

# ..
# whole quarter hour reached....
# time should be X:(00/15/30/45):00.000
# run time-sensitive code here
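Wrapped in a function, the same calculation can drive a simple scheduling loop. This is a sketch; take_reading() is a placeholder for whatever the time-sensitive code is:

```python
from datetime import datetime
import time


def seconds_until_next_quarter():
    """Seconds (as a float) to wait until the next whole quarter hour."""
    min_sec = datetime.now().strftime("%M %S .%f").split(" ")
    m = (15 - (int(min_sec[0]) % 15)) * 60   # whole seconds to the next quarter
    s = int(min_sec[1])                      # seconds past the whole minute
    ms = float(min_sec[2])                   # fraction of a second
    return float(m - s) - ms


def run_forever(take_reading):
    """Fire take_reading() at every X:00/15/30/45, indefinitely."""
    while True:
        time.sleep(seconds_until_next_quarter())
        take_reading()  # time-sensitive code runs here
```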

“The War of the Worlds” : a 21st Century project (Pt. 2)

H. G. Wells – “The War of the Worlds” Map

Map Theory

With the full list of place names successfully extracted, it was time to think about tackling the map itself.
Initially, I thought of using the Google Maps API; however, it required the longitude and latitude for each place name. To get these, I would need to look up the places with Google’s Geocoding API (or similar), but this is a premium service. It seemed that the API routes were quickly dwindling; I had to find a different way of producing a map.

Sticking with Google Maps, it offers a way of creating a custom map that can be shared on the internet. In hindsight, this was the best option for the project. Google Maps allowed me to upload a CSV spreadsheet (comma-separated values) with the columns headed Title and Place name.

Running the lists through a program to output the CSV, I capitalised the title, which was the place name, and then uploaded the data to the map.
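That conversion step can be sketched with the csv module. The input list here is a placeholder with a few names from the book; in the real project it came from the extraction stage:

```python
import csv

# placeholder input list; the real one came from the text-extraction stage
places = ['horsell common', 'woking', 'maybury hill']

with open('places.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Title', 'Place name'])     # header row for the Google Maps import
    for place in places:
        writer.writerow([place.title(), place])  # capitalised title, original place name
```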

The first time that I saw all the places as points on a map was incredible! However, I quickly realised that there were anomalies in the data points,
such as a place called “Wellington Street”, which Google decided was best located in New Zealand. Knowing instantly that NZ isn’t a place mentioned in the book, this was the start of a manual refining process… although most of the locations were correct.

I spent some time almost triangulating the exact street or place whenever I could identify one in the wrong location. After a while, the map slowly became a concentrated spread of dots from Surrey through to West London, finally becoming more sparse towards the East Coast. Unfortunately, some places don’t appear in the final map as they have since been built over, such as the “Inkerman Barracks”.


This was an interesting project that became a lot more time-consuming than I first thought.


The casual way that Wells describes or mentions places meant that not all of them could be extracted straight from the text. This often meant cross-checking against the text itself to work out exactly which place he implies.

Exact location

This was also difficult: I ideally wanted a comprehensive map, but I couldn’t quite decide whether to keep it historically accurate or keep it within the confines of the current modern map.
There were also some locations that weren’t in the right place after importing the data, which again had to be checked, moved or deleted.

Overall, I am happy with the outcome, although it was a little more tricky than I first expected. It took some manual intervention to get it correct, but it has highlighted some interesting problems.