However, I have noticed that this also happens if I am watching a higher quality or higher frame rate video in MPV. To combat this, I always resorted to the typical (lazy) solution of just choosing the lower quality source so the situation never occurs in the first place. It recently bugged me enough, though, that I finally decided to look into whether there are any settings I can change to stop the fan from spinning up for any video source.
It literally took me 30 seconds. Seriously, 30 seconds of googling to come to my answer and fix the problem for the rest of time. MPV doesn’t use hardware acceleration by default! Whaaat! I have been using software decoding this entire time while the hardware specialized for the task was just sitting there doing nothing! - facepalm -
The reasoning behind this default is given in the man page and on their website. I am unsure whether it is sound reasoning, but I know nothing about the process of creating a media player, so what would I know.
Thankfully, enabling hardware acceleration is as simple as pressing Ctrl+h while watching a video, or setting it as the default by adding a single line to the config file:
johannes@deb:~$ echo "hwdec=auto" >> ~/.config/mpv/mpv.conf
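If you want to double-check that the hardware decoder actually gets picked up, one way I'd sanity-check it (the file name below is just a placeholder) is to run mpv with a bit of extra decoder logging and watch what the [vd] messages report; on recent builds the stats overlay (Shift+i by default) should also show which hwdec, if any, is in use:
$ mpv --hwdec=auto --msg-level=vd=v some_video.webm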
To demonstrate the effect this simple change has on MPV's resource utilization, I decided to push my laptop to the extreme by attempting to play a 4K 60fps clip on it. More specifically, the clip is encoded in VP9 at a bit rate of 18901 kb/s, which is probably the most recent codec my laptop has specialized decoding hardware for.
To make the differences even more apparent, I decided to also include the performance metrics from playing the clip in my Firefox (102esr) and Chromium (109) browsers to see how they compare. Both browsers have hardware acceleration enabled, so they should supposedly also be using it. You can think of playing the clip locally in the browser as the best-case scenario for watching the same video through a browser (i.e., on YouTube or any other video streaming website), before all the other JavaScript faff is added alongside it. Before we do, here are some details about the experiment:
Now, without further ado:
Video Player | CPU Utilization (%) | PC RAM Consumption (MB) |
---|---|---|
Firefox 102esr | 264 | 1150 |
Chromium 109 | 477 | 946 |
MPV (hwdec=off) | 222 | 962 |
MPV (hwdec=auto) | 18 | 648 |
Observation: Both Firefox and MPV with hwdec=off struggled with very frequent frame drops, while the other two options played the file without breaking a sweat.
Wow! What a difference! With a literal order of magnitude difference in resource usage, I think it is safe to say that my fan problem is no longer a problem. It is also amazing how large the gap between the most and least efficient players is: roughly 460 percentage points of CPU separate Chromium from MPV with hardware decoding.
This does add additional questions though. Are my browsers really using hardware acceleration? Is this just another Linux thing where it is not worth it for the developers to implement? These are perhaps questions I should look into next.
Regardless, this new find has really improved my life by allowing me to quickly seek through very high quality videos while keeping the room nice and quiet! :-)
Another update I made was a function to merge the directories from a "scanning session" into one. A scanning session can be defined as a period of time during which I use the scan script multiple times before renaming the (date/time-ordered) directories to something more representative of their contents. The situation can be visualised like this:
johannes@deb:~$ tree -d scans/
scans/
├── 2023-01-15T08:23:41
├── 2023-01-15T08:23:57
├── 2023-01-15T08:24:34
└── completed_scans
4 directories
This situation occurs surprisingly frequently. For example, it can happen when you are scanning too many documents at once and you need to "reload" more pages that belong to the same section. Another (unfortunately more frequent) example is when the scanner jams because it accidentally pulled in multiple sheets at once. To solve this, I decided to write a short function that takes all the scanned folders, dumps every scan into the first one (by sort order), renames the scans to fit the numbering scheme and deletes the now-empty folders.
Once there is only a single directory containing all pages of a certain section, I can simply rename it and store it in the completed_scans folder structure conveniently placed in the same root folder. With this function, I save myself from manually doing the merge afterwards, checking whether the folders even contain anything, etc. Sounds great! The function I initially wrote looked like this:
merge_folders () {
    # Default to merging today's session folders unless a delimiter is given
    MERGE_DELIM=${1-$(date +%Y-%m-%d)}
    FOLDERS=$(find . -maxdepth 1 -type d -iname "*$MERGE_DELIM*" | sort)
    # The first (oldest) session folder becomes the destination for everything else
    MAIN_FOLDER=$(echo "$FOLDERS" | awk 'NR==1')
    OTHER_FOLDERS=$(echo "$FOLDERS" | awk 'NR!=1')
    # Continue the zero-padded numbering from the last file already in the main folder
    CURR_FILE=$(ls "$MAIN_FOLDER" | tail -1 | cut -d"." -f1 | echo "$(cat -)+1" | bc -s | printf "%05.f" $(cat -))
    # Every file sitting in the remaining session folders, in sorted order
    FILE_LIST=$(find $OTHER_FOLDERS -type f | sort)
    for file in $FILE_LIST
    do
        EXT=$(basename "$file" | cut -d'.' -f2)
        mv "$file" "$MAIN_FOLDER/$CURR_FILE.$EXT"
        CURR_FILE=$(echo "$CURR_FILE+1" | bc -s | printf "%05.f" $(cat -))
    done
    # Finally, drop the now-empty session folders
    echo "$OTHER_FOLDERS" | xargs rm -r
}
With all of these variables, we can iterate over the FILE_LIST, rename and move everything into the MAIN_FOLDER, and afterwards remove all the empty folders.
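For context, this is roughly how it gets called at the end of a session (in my case through the bscan script); the delimiter argument is optional and, as the first line of the function shows, defaults to today's date:
$ merge_folders             # merge all of today's session folders
$ merge_folders 2023-01-15  # or merge the folders from a specific day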
This is perfectly functioning code, and I used it for most of the scanning I have done since my last post. However, something went wrong. If you can immediately see the problem in this code, then you are far smarter than I was when I wrote it.
Before talking about the problem, I will talk about the consequence. When I executed the command, it took noticeably longer than I expected, and I was getting a strange feeling. It turns out that executing it caused my entire "completed_scans" folder to be dumped into the currently selected main folder, undoing any organization I had already done.
Luckily, due to my confusion I held down Ctrl+C to cancel the command as soon as possible. While it did not manage to move my entire "completed_scans" folder, it had managed to displace a grand total of 600 page scans into the main folder. Ouch.
The culprit was the line defining the FILE_LIST variable. As written, it finds all the files in the directories given to it through OTHER_FOLDERS. When the OTHER_FOLDERS variable contains actual folder paths, there is no problem (yay!). But can you guess what happens if I accidentally execute $ bscan merge when there are no other folders to merge?
That's right! With the OTHER_FOLDERS variable empty, we fall back to the default behaviour of the find command: if find is given no path to search, it searches the entire current directory, subfolders included. Because of this, my entire "completed_scans" directory became part of the moving plan! In other words, the alphabetically sorted list of "completed" files was iterated through and gracefully displaced into the scanning session folder. To top it all off, if the command had run to completion, it would have also destroyed the organization I had built up by deleting the folder structure inside the completed scans directory. Phew.
At the end of the day, the effect this had in my case was pretty minor. Since the FILE_LIST was sorted, I had the great luck that "bank" is pretty high up in the alphabet and that the majority of files it did move belonged to a single folder anyway. Still, if I had not cancelled the command at all, I would have had quite a bit of additional manual labour to correct the mistake. What I should have done is keep the "completed_scans" folder entirely out of reach of any command in the script. It's strange, since I am usually pretty careful when it comes to files I would like to keep frozen, but alas.
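For reference, a minimal guard along these lines (my naming here, not part of the original function) would have stopped find from ever running with an empty path list:
# Bail out early if there is nothing to merge, so find never runs without a path:
if [ -z "$OTHER_FOLDERS" ]; then
    echo "Nothing to merge for '$MERGE_DELIM'." >&2
    return 1
fi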
I have never used any sort of testing framework for my bash scripts. I did not even know any existed. Consequently, I feel like I should start looking into one, especially for anything that touches important files on a system. Well then, I guess it's time to learn another tool! :-)
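For the curious, the framework I keep seeing recommended is bats-core. A test for the merge function might look something like this sketch (the paths and the assertion are made up purely for illustration, not taken from my setup):
#!/usr/bin/env bats
# merge_folders.bats -- hypothetical sketch of a bats-core test

setup() {
    # Work in a throwaway directory so no real scans are ever touched
    TEST_DIR=$(mktemp -d)
    cd "$TEST_DIR"
    mkdir completed_scans
}

@test "merge with no session folders leaves completed_scans alone" {
    source /path/to/bscan   # hypothetical path to the script under test
    run merge_folders 2023-01-15
    [ -d completed_scans ]  # the archive folder must still be there untouched
}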
Regardless, I still do want to keep and organize the papers I have for when/if I do need them in the future. Consequently, I have come up with an idea that the German government could never come up with… Digitalization!
To tackle this task, I wanted to create the most optimal setup for scanning thousands of pages without breaking my back. After taking a while to research the topic (a story for another time), I finally settled on buying an automatic document feeder (ADF) scanner. More specifically, the Brother ADS-4300N [1].
The main points that made me select it were the following:
I was very unsure of how good the scanner would be, as this particular model has barely any coverage on any internet forum outside the official Brother pages. Overall, the scanner sits in the middle of Brother's ADF line-up, with the "upgraded" models ditching Ethernet for Wi-Fi and adding a front touch screen (an absolute recipe for disaster!). Because of this, I thought it would be nice to give a short initial impression of the scanner after scanning about 5 full binders of paper.
The scanner was easy to get out of the box and came with everything you need to get going other than an Ethernet cable (of which I fortunately have plenty to spare). While the physical setup was easy enough, things turned out to be slightly more challenging on the software front. The included manual gives basic information on what the different lights on the scanner itself mean, but for using it from a PC or phone it promptly points you to an online setup. Although I have not tried it, I am sure that all the mainstream operating systems get an easy enough interface through the applications downloaded this way.
From the point of view of a Linux user, however, there are no further instructions beyond downloading and installing the drivers for USB and network use. You can also access some scanner setup options by connecting to the scanner directly through its HTTP frontend. The web interface first tries to get you to install a certificate to enable HTTPS, which makes sense given the features that the description page boasts. Of course, this is always a bit suspicious, as you are trusting your security to a certificate that you did not create yourself. Fortunately, you can also generate your own and import it through the admin portal. Here you also have access to set up FTP/SFTP connections and change the 3 custom buttons on the front of the machine. I have not tried this yet, but it looks quite convenient, as I could set up the custom buttons for family members so that they only have to press a button to get the result they want.
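If you want to go the self-generated route, a certificate like that can be created with a single openssl command; the hostname, key size and validity here are placeholders, and I have not verified which import formats the scanner accepts:
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout scanner-key.pem -out scanner-cert.pem \
      -subj "/CN=scanner.local"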
Where I have actually used the scanner is through the SANE interface. Having never tried SANE before in my life, I was expecting a slight difficulty bump. Luckily, the Arch wiki [2] saves the day and provides very simple instructions on the application frontends available for SANE as well as the CLI options. I started out with a GUI frontend where I could quickly test quality settings and see what works best for which content. I personally had the best GUI experience with gscan2pdf, which was intuitive to use without reading any man pages. After I was happy with the performance, I decided to check out the command line interface (CLI) to see whether scripting was an option for me. As is tradition, the Arch wiki [2] once again provides very simple instructions on the types of commands to expect through SANE.
Overall, I decided to dig deeper into the CLI option and found it to be the easiest and simplest way to take full advantage of the scanner. For reference, below is an image of the options reported by the Brother scanner directly:
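(Image aside, you can also pull up the same list straight from the terminal; the device string passed to -d is whatever scanimage -L reports on your setup.)
$ scanimage -L
$ scanimage --help -d "$BROTHER_SCANNER"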
With these options, it is easy enough to get a good archival scan by typing:
$ scanimage -d "$BROTHER_SCANNER" --format png --resolution 300 --MultifeedDetection=yes --AutoDocumentSize=yes --AutoDeskew=yes --batch "%d.png"
With all these options available, I created a simple script that sets sensible defaults depending on the scan type (archive document, image print, receipt, etc.) with the option to override them through command flags. What this means (as seen in the command above) is that I can execute bscan archive to scan the documents at 300 DPI, in colour, with the document automatically cropped and aligned, as one would expect for an archival version. The scanner works perfectly with the script, and it was a delight to see everything run so smoothly! All I can say is that I am chuffed!
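For the curious, the idea boils down to something like the sketch below. This is not the actual bscan source; the structure is simplified, and the "photo" defaults are purely illustrative:
#!/bin/bash
# bscan (sketch) -- pick per-type defaults, pass any extra flags straight to scanimage

TYPE=${1:-archive}
[ $# -gt 0 ] && shift

# Each session gets its own date/time-stamped folder, matching the layout shown earlier
OUT_DIR=$(date +%Y-%m-%dT%H:%M:%S)
mkdir -p "$OUT_DIR"

case "$TYPE" in
    archive)
        OPTS=(--resolution 300 --MultifeedDetection=yes --AutoDocumentSize=yes --AutoDeskew=yes)
        ;;
    photo)
        OPTS=(--resolution 600 --AutoDocumentSize=yes)  # illustrative defaults, not my real ones
        ;;
    *)
        echo "Unknown scan type: $TYPE" >&2
        exit 1
        ;;
esac

# Any remaining flags are passed straight through to scanimage
scanimage -d "$BROTHER_SCANNER" --format png --batch="$OUT_DIR/%05d.png" "${OPTS[@]}" "$@"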
DISCLAIMER: The image examples given in this section have been heavily compressed to keep the website small. If you would like some full size examples, feel free to contact me.
My first impression of the scanning process is generally very positive. I load up the documents I want to scan, I type my command for the specific quality settings and off it goes!
I am doing everything through an Ethernet connection (PC and scanner are both connected via cable). The time it takes for the scanner to react is almost instantaneous.
Side Rant: If I compare this to the older Brother printer that we also have here (Wi-Fi only), it is night and day. I really don't get why you would want to equip a stationary tool with Wi-Fi instead of Ethernet. Once again, I just have to ask what went wrong when they decided that the "higher end" models should only be equipped with Wi-Fi.
The one part I did not expect to be slow is the actual processing of the scans and the transfer back to the PC. While I do believe that the scanner can do 40 pages a minute (I have not tested this; perhaps I should), the processing of the pages takes quite a bit longer, so my terminal is hung up for a while after the pages go through. This is not too bad of a problem though, since removing the scanned documents and loading new pages usually takes long enough for the processing to finish.
Since this is the first ADF system I have bought for home use, I cannot really compare the quality to competitors or other models. Nevertheless, I found that the scanner does a good job with documents. Even when taking 300 DPI PNGs (my default setting for archive mode), the only way to see artifacts and individual pixels is to really zoom in! The contrast between the white paper and the colour on top (even faint colours like red or light pencil) shows up even better on my monitor.
In addition to documents, I also tried scanning old printed photos where the negatives had been thrown away (please don't throw them away :-( ). The pictures tested were printed on pretty standard glossy 4" x 6" photo paper. Initially, I was worried that the feeder would not be able to separate the pictures and that it would cause problems feeding them through. To my surprise, after entering my specific "photo scan" command, they just went through normally one by one. The results of these scans were a bit more mixed. The images with a lot of light positively surprised me with how decent they looked. On the contrary, darker images had more trouble capturing detail compared to the original. In addition, some images were not cropped properly, so the scanner bed can sometimes still be seen in the file.
The one factor I did not think about before purchasing an ADF is the maintenance required for the best quality scans (not the fault of the product itself). In hindsight, I was perhaps being a bit too naive. When I set it up, I immediately placed a dust cover over the scanner to prevent any dust from getting in. What I did not think of, however, is all the paper dust(?) that sits on the paper itself. Whether this is due to the age of the paper I fed into the scanner, or the quality of the paper in some of my binders, is not clear to me. Regardless, the scans can be affected by this "paper dust" when it settles on the scanning element. It can create very thin black lines on the scan, which is annoying on more important documents that you want kept perfect. Interestingly, the dark streak sometimes fixes itself after more paper is fed through, but I have also got used to opening up the scanner and giving it a good clean with a microfiber cloth and my Rocket air blower.
Other than that, my first impressions remain very positive. Especially coming from a Linux background, where companies usually just don't care about the operating system, the SANE support is about the best thing Brother could have done! My next task is to really put the scanner to the test. I still have boxes full of binders and photos to scan before I can make my final judgement, but I can tell this will be a project that keeps going for another while. I will probably report on those results again.
This news was the straw that broke the camel's back for me, and I felt like I had to take control of my presence on social media platforms. Combined with the news that Facebook's monthly active users had dropped for the first time in the history of the company [4], it implied to me that others were also getting fed up and looking for other solutions. From Facebook's perspective, this drop can also be a sign of what is to come, as the company will now have to act ever more desperately to regain a positive gradient, irrespective of the user experience.
With all this news, I looked at the social media websites I had on my phone and asked myself whether I really need any of them at all. This motivated me to take a closer look and really think about which companies I actually want to support with my presence. Now, I am quite a data hoarder when it comes to my own information on the internet. Thus, it is important for me to be able to save the data that is stored on their servers about me, so that I have a backup in case I need it for any future use, or even just for nostalgia's sake. After perusing my phone's application list, LinkedIn, Twitter, Snapchat, Instagram, Discord and of course Facebook were all the social media applications I had installed, and they were now neatly placed on the chopping block. Here is the outcome of what I did with each app:
Uhh, what?… So what happened with Facebook? Well… I am not too sure how they managed to mess this up like this. Honestly, I am not sure how it could have gone on for this long either. The GDPR has been in effect since 25th May 2018 [7], and I thought these laws would have forced all companies to make this process easy, under the threat of major fines for those who break them. So that we are all on the same page, let's take a look at which articles of the GDPR this is fundamentally breaking (in my opinion):
Both of these articles should, in essence, allow the data subject (me in this case) to access all the data held by the data controller. At the end of the day, it is my data that they are storing, so I should have the right to see it, right? Unfortunately, if I cannot read the data that they send me, I do not think that it is in compliance with the above-stated articles.
To prove that I really tried my absolute best to figure out what is going on here, I want to take you all on an adventure of the download attempts I had over the past week or so. Buckle up!
I started my adventure of requesting downloads from Facebook back on the 29th of July. Naive me looked at the download request page, thinking how intuitive it looked, and I liked how you could really customize what sort of data you would like to download about yourself. The download can be requested in a human-readable format, using a small HTML frontend, or as machine-readable JSON. Neat! I clicked download and moved on to doing something actually important. I woke up the next day excited to read an email that the process had completed and that I could download the data from their page. Awesome! I went ahead and bulk downloaded all the zip files containing my data. Upon first inspection, the files looked fine, with everything neatly laid out into different folders.
Suddenly, upon opening zip number 3, I was greeted with an odd-looking file that did not match the style of the others. What is this? A .zip.enc file? Never heard of it! After googling the file type, I came across the fileinfo page [8] describing it. From the information on that page, it seems to be a file type only found in the "Download Your Information" feature of Facebook, and it appears when the data was not processed properly during retrieval. In other words, "mistakenly encrypted". To give the benefit of the doubt, I decided to just shrug and request my information again, hoping this was a genuine accident. You wouldn't believe what I found: the exact same problem! After this happened again, I tried something I never thought I would do in my life. I tried to find and contact Facebook support.
Now, I am going to say this as directly as possible. Facebook support does not exist, and I am honestly not sure if it ever has. There is no phone number, no email address, no live chat. The only options a normal user gets are a not-so-useful FAQ page and the ability to give feedback through the "Help us improve Facebook" and "Something went wrong" options. Oh boy, yes. Something did go wrong, please, how can you help me? The option leads you to a form with a category, details, and you can even add a screenshot! Well, this was no help. Since it was my only option though, I started off professionally, giving the exact steps I was taking and describing the problems with the output files. I also continued requesting downloads… 7 in total, to be exact. No response to any of them up until now. Let's dig into my adventure even more!
Date | Number of Zips | Total Size (GB) |
---|---|---|
2022-08-03 | 4 | 9.270 |
2022-08-07 | 5 | 9.914 |
2022-08-08 | 4 | 8.990 |
2022-08-09 | N/A | 8.563 |
As mentioned in my timeline, on the 3rd of August I decided to also keep track of some metadata about the files downloaded from the "Information Retrieval Tool". I was surprised by how different each download was in terms of size. Some downloads were split across 5 zips while others only took 4, and each had its own unique total file size. I am uncertain whether this is because Facebook fails to retrieve my information differently every time, or because the encrypted files take up that much more space and fail differently each time. I am not sure. And can I even be certain that I am getting all of my information when it is correct?
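(For anyone wanting to do the same bookkeeping: the sizes in the table are easy enough to collect, and a quick checksum makes it obvious whether two attempts even produced the same archives. The directory name below is just a placeholder for wherever the zips land.)
$ du -ch facebook-2022-08-07/*.zip* | tail -1
$ sha256sum facebook-2022-08-07/*.zip*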
On the 9th of August, I decided to try downloading each listed category individually to see where the .zip.enc files were coming from.
While going through all the categories, I made an interesting discovery with the "Posts" category, after wondering why its zip was only 367 MB. So please take this word of warning into account: under the "What's included?" section, they mention that "Your photos" are included. They are not. Well… the photos you posted in your posts are included, but not the photos you uploaded to an album. You might be thinking "well… duh… you downloaded the category specifically named posts", and I agree with you. However, the help text specifically states "Photos you've uploaded and shared". That wording would have me believe that the photos I added to albums are also included in this download. In order to download your albums, you have to go to each one manually and click its download button. This sounds like an easy fix, but imagine if you had made thousands of albums for all kinds of events, which I know many people do. And honestly, why not include all albums as part of the download? Facebook obviously doesn't care about bandwidth or storage, as they will happily let me re-request my broken information downloads even though the tool does not work.
After going through each category, each with surprisingly little data in it, I could not find any encrypted files until I stumbled across the "Messages" category. Total size: 8.168 GB. Wow, that's a lot! It was this section that was taking up all the space! Sure enough, the .zip.enc files were to be found here. After finding this out, I wondered whether I had perhaps enabled some end-to-end encryption setting on Facebook. Alas, Facebook does not have such a setting. At least not yet: they announced two days ago that they are testing it out [9]. I wonder if they will start using this excuse in the future when their retrieval tool still doesn't work and just say "But then it must be end-to-end encrypted!".
For anyone who is interested, I decided to check up on fileinfo's [8] claim that the .zip.enc file actually contains encrypted data, and whether I could trust that assertion.
What I first wanted to see was whether the file wasn't just a regular zip after all. That would have been very strange, but it was worth a try. Attempting to open the file in xarchiver just spat out an error message. I then decided to take a closer look and see whether the starting bytes (i.e., the magic number) of the file matched anything else it could be. The zip format has its magic number stored in little-endian order as 0x04034b50. If we take a look at the .zip.enc file using some simple commands (where the Xs constitute my Facebook ID):
$ xxd facebook_XXXXXXXXXX.zip.enc | head -1
00000000: e358 f9f3 f6ba af13 5e49 e999 cae0 0625 .X......^I.....%
Guess it really is not a zip then. Because of the endianness, we should have been seeing 00000000: 504b 0304 ..... at the start, as mentioned before. The starting bytes do not match any other known file format either, and encryption algorithms don't really tend to add magic numbers to the headers of their files, so the file could either be completely random data or an encrypted/encoded blob. Finding out exactly how it was produced is not a trivial task though.
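(For comparison, this is what the first four bytes of any ordinary zip look like; the file name here is just an example.)
$ xxd -l 4 some_ordinary_archive.zip
00000000: 504b 0304                                PK..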
Just for fun, I lastly decided to check the information entropy of the file in question, i.e. basically how random the file really is. On Debian Linux there is a very handy command (ent) to test this, so I had a go. The output has been slightly truncated:
$ ent facebook_XXXXXXXXXX.zip.enc
Entropy: 8.000000 bits per byte (perfect)
Chi square distribution for 514913145 samples is 269.73; random would exceed this value 25.17% of the time.
Arithmetic Mean: 127.4979 (127.5 = random)
Monte Carlo value for Pi is 3.141541911 (0% error)
Serial Correlation coefficient is 0.000056 (uncorrelated = 0.0)
I won't go into too much detail about what all these values mean, but rather give a general explanation of what this is showing. With an entropy of 8 bits per byte, every bit of information is needed to keep a lossless version of the file, so the data is as dense as it can be. Good encryption and compression algorithms target the 8-bits-per-byte range, as no space is being wasted and finding patterns in the file becomes tremendously difficult. Of course, there will be variations within the file, so it won't always be exactly 8, but hitting it to 7 significant figures is pretty good!
The other test I want to describe a bit more is the chi-square test. In short, it checks whether the variation in the collected data (my file) follows a particular given distribution (here, a completely random sequence). The ent tool reports the percentage of the time a truly random sequence would exceed the computed value; results tending towards 0% or 100% indicate that the sequence is not random. A truly random sequence would exceed our value of 269.73 about 25.17% of the time, which (together with the remaining metrics) suggests that the file is either encrypted or pure random noise. This takes any sort of plain file encoding out of the question, as an encoding would still reveal the inherent structure of the data. That leaves us with the possibility that this file is either random junk or encrypted.
Since I am pretty sure that Facebook would not waste its bandwidth and storage facilities to host large random noise files, I think I can be pretty confident that these are just encrypted files that contain my information (Imagine if they included someone else’s information?..).
Facebook does not seem to care about the GDPR. Facebook also does not seem to want to fix the problem. The fact that requesting your own data is a worse experience on the largest social media platform than on its competitors, and even its own subsidiaries, is completely unacceptable. I was expecting to be done with this task after one day of bulk downloading everything from every service. Instead, I found yet another example of seemingly gross malpractice with no way of getting help on the issue. You might not want to delete your Facebook account yet, but do realize that forum posts about this issue have existed for at least 2 years [10], and the situation seems to have only gotten worse. In addition, the fact that photos from your albums are not included in the tool at all, and have to be downloaded manually elsewhere, is unexpected and unfair to its users. I am still unsure whether I should just ignore the encrypted message files and delete my account anyway. What is stopping me is that I do not know which messages these even are, and it is difficult to find out given the total size of over 8 GB going all the way back to 2008.
If you are also interested in seeing whether some or all of your messages are in this strange .zip.enc format, go ahead and check out the Download Your Information page on Facebook and select the date range "All time". If you also end up with .zip.enc files after a few tries, are an EU citizen and think this violates your rights, I urge you to take 15 minutes of your time to submit a complaint to your national data protection commission. You can find a full list of these here.
[1] Data from Statista showing Facebook's monthly active user count: https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/
[2] Facebook making the Facebook app more like TikTok: https://www.nytimes.com/2022/07/21/technology/facebook-app-changes-home.html
[3] Instagram reverting its TikTok-style changes: https://metro.co.uk/2022/07/29/instagram-u-turn-on-tiktok-style-changes-after-massive-backlash-17090562/
[4] Facebook MAU down for the first time in history: https://www.businessinsider.com/meta-facebook-user-numbers-shrink-first-time-ever-2022-2
[5] Nitter, an open source Twitter front-end: https://github.com/zedeus/Nitter
[6] Barninsta, an open source Instagram front-end (use at your own risk for your account): https://f-droid.org/packages/me.austinhuang.instagrabber/
[7] GDPR timeline: https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en
[8] .zip.enc file format description: https://fileinfo.com/extension/zip.enc
[9] Facebook testing end-to-end encryption: https://about.fb.com/news/2022/08/testing-end-to-end-encrypted-backups-and-more-on-messenger/
[10] Reddit post on the same issue: https://www.reddit.com/r/facebook/comments/kn5l8a/i_downloaded_an_export_of_my_data_from_facebook/
DISCLAIMER: I am by no means a web designer!!!
For many years I experimented with different custom designs and fancy layouts for this website. Iteration after iteration, I felt like I had something that worked, but it never felt like it reflected who I am that well. Among the sea of existing portfolios bombarded with large image files and catchy tag lines, the ones I was designing felt like an attempt to fit into the same crowd. Having used Jekyll for the majority of websites I have created before, I started actively searching for minimal designs and finally stumbled upon this GitHub project by riggraz. Man, was I happy when I found it. It isn't perfect, and I really was not a fan of the recursive list design on the main page, though those were simple fixes to make. Overall, the design is fast, small in size and conveys information very efficiently. I can just get stuff done with it.
For the most part, the webpage will remain static in nature. Posts will be quite far apart, and I don't expect to update the site often, i.e. the majority of edits will probably just be updating the "Current Activities" section when the information is, well… no longer current! When I have more free time and start to explore new things, I am sure I will have much more to talk about. So, if you are interested in staying updated with a nice little post every now and then, the RSS feed will be the best solution! :)