Adding desktop shortcuts in Ubuntu 18.04

I like having my desktop filled with shortcuts to the programs I use regularly. Ubuntu 18.04 lacks a “right click > add shortcut to desktop” option. If you are missing this option too, don’t fret! There is another way to do just that.

In the Terminal, navigate to the directory that holds the application launchers:

cd /usr/share/applications
If you list the files with ls, you will see many .desktop files, each of which houses the information for executing a program. It also tells the UI where to find the icon that it should display.
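For illustration, a minimal .desktop file looks something like this (the application name and paths here are hypothetical):

```ini
[Desktop Entry]
Type=Application
Version=1.0
Name=My App
Comment=A hypothetical example application
Exec=/usr/bin/myapp
Icon=myapp
Terminal=false
Categories=Utility;
```

The Exec line is the command the launcher runs, and the Icon line is how the UI knows what to display.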

Find the program that you wish to add to the desktop (it should appear here if it was installed through apt and is a GUI application).

Copy the .desktop file to your own Desktop folder:

cp /usr/share/applications/<app>.desktop /home/<user_name>/Desktop/<app>.desktop

Next, we need to make the .desktop file executable:

chmod +x /home/<user_name>/Desktop/<app>.desktop

Looking at the desktop, you should now see your copied .desktop file.

ubuntu 18.04 desktop shortcut

Double-clicking the file will bring up a prompt warning that the program is untrusted. Click “Trust and Launch”.

ubuntu desktop 18.04 launch application

Once accepted, the application should launch as normal. Close it down, and you should see that the plain .desktop file has changed into an icon launcher, as intended!

ubuntu 18.04 desktop shortcut 2

Changing Pagefile (Virtual Memory) settings in Windows 10

The pagefile, also known as Virtual Memory, is a file that Windows keeps on the hard drive. It acts as an overflow for data that would otherwise be kept in RAM, used either because RAM is too full or because the data needs to be made persistent.

If you frequently run out of RAM, increasing the pagefile will help keep your programs running properly and could prevent crashes caused by low memory.

Some things you need to know

  • Storing data in the pagefile is not optimal, as hard drives are much slower to access than physical RAM.
  • Important: a pagefile is not recommended on SSDs, as the file can be written to and read from fairly frequently and can cause premature wear on the drive. If you’re not worried about wear, you may still set a pagefile.
    You may also move the pagefile to a mechanical drive, or set the file to a static size.
    Also to note: as SSD technology matures, the durability of flash increases, lowering potential wear.

Changing the settings

Use Cortana to search for “Advanced system settings”:

Search Cortana: advanced system settings

Clicking into Advanced system settings will bring up the “System Properties” view. In the “Advanced” tab, click “Settings…” under the “Performance” section:

System Properties: Advanced

This will bring up the “Performance Options” view. Continue to the pagefile settings by clicking “Change…” under “Virtual Memory” in the “Advanced” tab:

Performance Options: Virtual Memory

Lastly, you should be presented with the “Virtual Memory” view where you will be able to control your pagefile settings:

Virtual Memory view

You can set a static size, move the pagefile to another drive, or simply let Windows take control of the virtual memory with dynamic allocation.

Overcoming a resource famine

This may seem like another moan about how bad my AMD FX6350 is; however, it isn’t. Much. I have arrived at the point where another virtual machine wouldn’t only be handy, but critically important for various reasons.

My current usage is as follows:

  1. Windows 10 host – SSD1
  2. Ubuntu 16.04 guest – HDD 1 – 2 VCore / 3 (allocatable)
  3. Ubuntu 18.04 Server guest – HDD1 – 1 VCore / 3 (allocatable)

VirtualBox allows 3 of the 6 cores to be allocated to VMs. It seems the FX6350 isn’t a “true” 6-core processor; instead, 3 physical cores are seen as 6 logical processors.

The current configuration worked well, until I decided I wanted to test my 18.04 server (hosting my web app) against some attacks via Kali. This meant I would need a fourth VM to attack the third, pushing the CPUs to the point where the host and guests would not operate correctly.

I immediately decided to throw more power at the problem as I was lacking at least a few cores to accomplish this. I had 2 solutions:

  1. Set up a spare PC to play the role of the 18.04 server or Kali
  2. Buy a new PC altogether with enough cores to cater for the downfalls

Problem with solution 1: another PC would be running, and I do not have another monitor/mouse/keyboard/space/power socket(s) to accommodate something that could be done a lot more easily on a single machine.

Problem with solution 2: I’m not sure if my Windows 8.1 license would still apply for an upgrade to Windows 10. Also, the obvious monetary cost involved.

Wielding my credit card, I was very close to ordering a spanking new Ryzen 7 2700X bundle for a princely sum. At that moment, I thought of a lesser (cheaper) solution.

Whilst (in theory) all 3 guests were happily working with/against each other, the host would be nothing more than a host: doing nothing other than sharing resources, whilst using them. Removing the Windows 10 host led me to the obvious conclusion.

  1. Ubuntu 18.04 host – SSD 2
  2. Ubuntu 18.04 server guest – HDD1 – 1 VCore / 3 (allocatable)
  3. Kali guest – HDD1 – 2 VCore / 3 (allocatable)

Instantly, I can start to see bottlenecks here, hence the “lesser”. I could extend a guest to HDD2 if I need to. However, I have dodged a large sum of credit whilst achieving what I wanted (in theory, at least).

It seems that, for the time being, the 2013 processor popular with gamers-on-a-budget still fits its purpose (barely). Being coy about the situation, I have averted a cost, and it will keep me warm through winter (although dreadful in the current heatwave). Eventually, I will have to let it go, but until then I await DDR5 and future processors to add to the mounting upgrade-ability from this long-dead platform.

Update – April 2018

I have been quiet here on the blog; however, my brain has been working non-stop! I feel that I would like to post something here, but I haven’t had anything to “physically” bring to the e-world. So, here’s a small update on what’s been happening over the past few months.


For some odd reason as of late, I have been dead set on generating prime numbers. Where this idea came from, I’m not quite sure, but it has taken up a large portion of my time.

Originally starting with the bog-basic method of finding a prime number, I created a generator in Python. Now, this may seem boring and arbitrary to most, but I really enjoyed creating and testing this algorithm.
I spent a while researching prime numbers (as I’m by no means a mathematician), and slowly started to optimize it.
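The bog-basic method in question is trial division; a minimal sketch (not my original code) looks something like this:

```python
import math

def is_prime(n):
    """Trial division: test 2, then odd divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def primes_up_to(limit):
    """Generate all primes <= limit, one candidate at a time."""
    return [n for n in range(2, limit + 1) if is_prime(n)]
```

Checking divisors only up to √n is what keeps this usable at all, and skipping even numbers halves the work again.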

It was when numbers greater than 1,000,000 started to take longer to process that I began to explore different avenues to speed things up.
I first looked into testing these numbers on different threads. Obviously, there were a number of ways I could do this.

  1. Send a number n to its own thread for prime testing
  2. Split the testing of a number n across [x number of threads], giving each chunk of the divisor range to a separate thread. If a thread returned “not prime”, close all threads and start on the next n

As you can see, it started to get complicated. Even more so when this avenue started to dwindle, as I realized a thread is just a time-share of the same die, not a separate CPU process. So I then explored the idea of multi-processing, which would be great for option 2 above.

This became even more complicated, as memory isn’t shared between the parent and child processes. I did, however, build an algorithm that would split a number between X threads/processes, which works quite nicely.
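A rough sketch of how option 2 can work with processes, splitting the divisor range of n into chunks (the chunking scheme and worker count are my own assumptions here, not my original code):

```python
import math
from multiprocessing import Pool

def has_divisor_in_range(args):
    """Return True if n has a divisor d with start <= d < stop."""
    n, start, stop = args
    for d in range(start, stop):
        if n % d == 0:
            return True
    return False

def is_prime_parallel(n, workers=3):
    """Split the divisor range [2, sqrt(n)] across worker processes."""
    if n < 2:
        return False
    limit = math.isqrt(n) + 1
    # Roughly equal chunks of the divisor range, one per worker
    step = max(1, (limit - 2) // workers + 1)
    chunks = [(n, lo, min(lo + step, limit)) for lo in range(2, limit, step)]
    if not chunks:
        return True  # n is 2 or 3: no candidate divisors below sqrt(n)
    with Pool(workers) as pool:
        # Composite as soon as any chunk finds a divisor
        return not any(pool.map(has_divisor_in_range, chunks))
```

Note this version still checks every chunk to completion; true early cancellation when one worker finds a divisor would need extra coordination between the processes.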

Still staying with prime numbers, I was interested to see if my Python algorithm would run quicker in another language, so I re-wrote the logic in C and ran the test. It ran super quick, and I was really surprised by the performance increase!

I then explored the idea of importing my C algorithm into Python and running it, but came up against some problems. Python will grow an int to accommodate really large numbers, whereas C seems to lose interest at uint64_t.
I hit a nerve when I found this; I ultimately wanted to leave it running, just to see the biggest number I could find. Obviously, my C algorithm wouldn’t hold up to this, which I feel was a great shame.
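The difference is easy to show from the Python side: an int simply keeps growing past 64 bits, which a C uint64_t cannot do:

```python
UINT64_MAX = 2**64 - 1   # the largest value a C uint64_t can hold
big = UINT64_MAX + 2     # no overflow: Python just grows the int
print(big > UINT64_MAX)  # True
print(big.bit_length())  # 65
```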

I have for now, put this sideline to rest, until I find some more free time.


I decided that it was time to expand the internet just a little bit more.

It had been a while since I last created a web app; in fact, my last one was my final CS50 project, built in Flask. I know Flask isn’t greatly maintained anymore, and therefore isn’t used for larger projects. People seem to be moving to Django more and more, so there I went.

I have a collection of calculators, references, generators etc. that I have accumulated, and I really wanted to start putting them in a nice big box with a bow on top. I also wanted to tie it all together somehow, allowing a central location for me and others to access them from.

I also wanted to take a dive into website meta data and learn more about it.

Armed with my knowledge of Flask and trusty Bootstrap, I decided to finally take the plunge and learn Django. And boy, am I not disappointed!

Django really is fantastic, and is packed full of some really great features. After shuffling through a lot of the documentation and getting my head around Models, I have a good understanding, and it has been a relatively smooth transition.

I will probably use Django from now on.

I’m hoping that my next blog post will be a big reveal, unveiling the new site; however, there’s still plenty of work to be done!

CS50x – End result

I finished the 2017 CS50x course! Although I am yet to receive my certificate, everything has been handed in with straight 100% grades all the way through.

What has it been like? It has been tough. At times, really tough. The most frustrating bits are working on something and fiddling with a few lines of code to get the right output, occasionally for hours at a time.

Picture this: you notice the time and realise you have work the next day. You shut down the PC… and lying in bed, all you can think of is this final puzzle piece. The greatest error you can make is to think of a possible solution… sometimes THE solution… and realise you have 4½ hours till the alarm goes off. The sheer panic of forgetting your idea whilst you sleep makes me want to jump out of bed and tinker some more… but alas… exhaustion sets in.

What was your favourite problem? To be honest, I loved the theory, execution and outcome of Recover. Yes, it was one of those problems that had me lying in bed pondering, but the moment of changing something, recompiling and running it with the perfect outcome was a real eureka moment!

The Recover problem was to extract images from a “broken” SD card, teaching how data is stored on disk and the makeup and structure of a BMP image, in order to identify and build the files anew. It really was an insight into lower-level programming concepts.

What did you know before the course? Before CS50, I had taught myself PHP, dabbled with HTML, CSS and MySQL, and prodded around in C#. Looking back after taking the course, however, I realise that I had no real understanding of objects, pointers or even loops, to name a few. Books are great if you already know the fundamentals, but there aren’t many books that can rigorously teach you a concept and apply it to a real-world use.

What have you taken away from CS50? CS50 explained things in depth, with enough room to let me explore and fill in the gaps. The lectures alone will not answer every question or edge case I thought of, but they continually reminded me to refer to the available resources to make those connections and find the answers. I found myself writing little snippets for almost every new concept, and this cemented my understanding.

I am now confident when tackling a new project or problem, even though I might not have the knowledge to hand.