It's ya boi Shark here, and it's about time I wrote an update on my dissertation, with an extra important lesson for y'all!
Because it has been a while since my first post about it, a reminder is in order! The gist of my dissertation project is to develop a game that utilises the principles of Einstein’s relativity. I’m building my project in VC++ and OpenGL (or so I was). My project is due in 3 and a half weeks.
OKAY okay, you mentioned a crisis?
You see, it is to do with that OpenGL bit, and I'll cut to the chase: I TOTALLY underestimated the amount of work needed to develop the game with it, and the work towards it was piling up rapidly. You could sectionalise making a game with OGL into the following: creating the OpenGL context, developing a basic game engine, implementing your mechanics, building the UI and control setup, and bug testing.
Getting the game engine I wanted up and running took a while, and I hadn't even considered UI! So in light of this, last Friday I made the call to abandon OGL and turn to Unity to focus on getting a working game! Now you're probably thinking "why didn't you just use Unity to begin with? It's designed to get you into making the game ASAP!"
Well, for better or for worse, I'm the sort of person who enjoys piling work onto myself, and I wanted to convey my relentless effort. This time, it turned out to be for worse. The work I was doing in addition to what needed to be done was comparable to what some other students have as their entire dissertations! You could say I got a little too cocky, because realistically, a working OGL solution in three weeks complete with UI and gameplay was not going to happen.
My project title is literally "Relativity Simulation: Develop a physics-based game that utilises the principles of relativity, complete with GUI", without any mention of OGL, so I think it is for the best that I mothball this solution and just concentrate on meeting those criteria as best as I can.
It’s not a disappointment, though…
In fact, I could use this to my advantage!
Firstly, I now have something interesting, critical, and self-reflective to add to my writeup! I can present the OGL solution as a sort of trial, which will then support my written rationale for choosing Unity over OpenGL + my own custom engine. I also put a lot of thought into how I was structuring my game engine, which is still partly applicable to explaining how I am using Unity!
Secondly, the physics implementation itself is transferable! The work I have already done is far from wasted, and since C++ and C# syntax is similar, most of the physics stuff can be ported. At the end of the day, it's just implementing equations in code and attaching them to the game objects.
Thirdly, I now have something else to keep me occupied over the long summer break between third year and my master's year! I could never conceive of abandoning my hard work, so I'll just complete it in my own time, without the pressure of meeting deadlines!
And lastly, just look at what I’ve already been able to do thanks to my prior experience of making a space-based Unity game last year!
And now, the lesson
Creativity and ambition are great, but sometimes cold realism is more important when working to a deadline on something as academic and report-driven as this! The OGL route would have been a great technical accomplishment for me, but the Unity approach will likely earn me more marks, since it was chosen with actual academic reasoning!
I suppose this is a common caveat when you are in control of your own project: you obviously want to show off the best you can do, but sometimes it is hard to know when to stop.
I came close to disaster on this one, which could have cost me a first-class grade. But I've recognised my error in judgement, and I WILL make the most kick-ass game possible while focusing on what is needed! Also, my supervisor was always aware that Unity was my backup option, so there shouldn't be any additional problems from hereon. I've just got to keep my head down for the next two weeks as I iron out the actual game, and then spend another week writing up my results and conclusions!
You can do this, Shark!
I should also point out things might be quiet on here during that time, so bear with me y’all! I’ll leave you with some more screenshots!
Also once this is all over, I’ll be going through some of the OGL stuff on here!
It's ya boi Shark here, finally with an update after a fuck-tonne of ThinkPad-related posts. For me, this academic year has so far been an extremely positive experience after a less-than-stellar summer, with January being a particularly notable month. So I think it deserves some song and praise!
I'll start here, since I haven't really swum over to this subject around here before, and I think I should more often: my work as a student voice representative has been a highlight of this academic year! In case I haven't explained it on here before, SVR-hood is a step above my previous course representative position: I now liaise between course reps and the higher-ups in the Student Voice Team, and sit on faculty-level committees. I work more closely with the university and Students' Union than before, complete a research project in the hopes that the university uses it to implement real change, and am expected to partake in more SU events, such as attending their Annual General Meeting (AGM) and Change Week. I also get two lovely line managers and four sabb officers above me for support!
Change Week was the highlight of January, where I led a team for the week's Wednesday 12-hour lockdown. The week started with gathering student feedback on the Monday, which unfortunately I was unable to help out with due to dissertation work. The day after, SU staff went through the feedback (collected on cards at promotional stands across campuses and from an online form) and established four common themes: microwaves on campus, making USW greener, making USW students more employable, and including the Cardiff and Newport campuses in more things. I led the greener-USW team, where we created a pitch asking USW to improve recycling and water conservation on campus. We were supposed to pitch on the Thursday that week too, but the snow came! Despite that last bit, the experience was amazing, fun, and totally worth those 12 long hours!
January was a good month for assessment feedback! Excluding dissertation milestones, I had two assignments due before Christmas and one in early Jan! I got the grades for the first two last month: 87% and 70%, for the modules Game Engine Design and Artificial Intelligence for Game Developers respectively. Today I got my mark for the assignment submitted in January (the anti-aliasing one I wrote about before), and I got 80% for that! 3/3 firsts ain't bad!
Like in previous years, working my fins off for university is paying off! For future assignments, though, I will be aiming to punch above 90% in the second assessment for all three modules, because I believe I can do better!
My dissertation is proceeding smoothly, although it is hard to really comment on it since it's still far from over. Milestone 2 was due at the start of this month, which required a basic skeleton implementation.
Since my post on the subject a few months ago, I've done a lot more research into special relativity and begun the general relativity phase (although that was omitted from milestone 2, since it was far from complete and the implementation was centred around building Lorentz transformation functions and nothing more). Expect an actual update post soon!
January saw me attempting to engage a bit more with societies at university, in order to be less antisocial. I've mentioned before that I founded USW Programming Society – I'm still serving as President, and last month we laid a lot of groundwork to start giving lectures and tutorials to people interested! I've been present for about half of the lessons so far, and whilst numbers are small, the discussions have been great! I'll be looking to push advertising for the society in the coming weeks!
Another society I've been getting involved with is the Earth Society USW! It is a fantastic society, full of people who really want to campaign to make the planet a bit better! I became a member last September, and so far the highlights that have left a lasting impression include gardening in a greenhouse on campus and a victory at the recent SU Annual General Meeting, where the society passed a motion (link to the proposal) to mandate the SU to create a zero-waste shop (something I wholly support)! I volunteered to be on a sub-committee that will be looking into existing shops to support the founding of USWSU's one. I hope to get more involved with the society generally, as long as time allows!
Despite a little jab at it when I talked about SVR stuff, I still enjoyed the snow when it came right at the end of the month!
That’s it. What more can I really say about snow?
Of course, this wouldn't be a Shark post without a mention of ya boi's current obsession. Back on January 1st, I had two ThinkPads: SN-T41 and SN-R60e. As of now, the fleet consists of 12 laptops, 3 keyboards, 3 spare bins, and a laptop dock, with all but the last 2 laptops (SN-X200t and SN-T440) having been ordered in January! I'll continue to invest in the fleet as money allows, refurbishing existing machines and preserving these historical machines (by far the most influential laptop family in modern computing) for myself and anyone interested!
You can visit the ThinkPad page to see the fleet in all its glory!
Hey, it’s ya boy Shark here with some good news! Despite not feeling the greatest about it, I got 87% as my grade for my Game Engine Design coursework on Doom 3 modifications and Python bots! 😀 It was a pleasant surprise to start last week with to say the least!
Anyway, on Friday night I submitted my first Real-Time Rendering Techniques coursework! This module focuses on teaching us graphics programming in OpenGL (as a successor to last year's Computer Graphics module) and DirectX, without the aid of an engine like Unity or Unreal. This coursework was centred around implementing and comparing supersampling anti-aliasing (SSAA) and multisampling anti-aliasing (MSAA). So today I'm gonna be talking about what aliasing is, what those two techniques are, and the conclusions I drew!
Please note that there will be no code samples from my submission below, due to this being recently submitted coursework. This post assumes a basic understanding of the topic (like knowing what anti-aliasing is as a game setting), but if you don't have that, this might still be an interesting read!
The problem of aliasing
Aliasing, as defined in signal processing, is the distortion that occurs when a signal is sampled too coarsely to capture its detail. In the context of this post, graphical aliasing is the occurrence of image artifacts that produce stair-like or jagged edges around objects rendered on screen. This is a particularly big problem for computer games, since aliasing can make otherwise stunning scenes look too digitised and break the game's immersive potential.
The easiest way to recreate this effect for yourself is by viewing an image with a lower resolution than your display and stretching it out (so long as the image viewer itself doesn't anti-alias), or zooming in on it. Lines will become jagged and the quality of the image will be ruined. Unfortunately, no perfect solution exists, due to the fact that the detail in any given image has to be represented on screen by pixels (which are just really small blocks of colour). However, we can at least mitigate the issue with anti-aliasing!
The assignment called for us to implement and compare two examples of anti-aliasing: supersampling anti-aliasing (SSAA) and multisampling anti-aliasing (MSAA). The two techniques vary in age, implementation difficulty, and performance characteristics. Anti-aliasing, as the name suggests, is literally the process of diminishing aliasing.
SSAA is perhaps the oldest and most primitive anti-aliasing technique, and has long been abandoned due to its high performance requirements and the lack of a built-in implementation in frameworks such as OpenGL. Supersampling involves telling your code to render the scene (what you're looking at whilst playing the game) at a higher resolution than the display, then compressing it down to the display's resolution. The latter process, known as downsampling, works by averaging the colour of each pixel with some of its neighbours, in a ratio like X4. In an X4 scenario, the scene would be rendered four times bigger than the display's resolution and subsequently averaged in blocks of four pixels during downsampling. You've probably observed downsampling yourself when resizing an image or photo to be smaller than it originally was.
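None of my submitted coursework code appears here, but the downsampling step is generic enough to sketch. Below is an illustrative box-filter downsample over a greyscale buffer in C++, assuming the scale factor applies per axis (so a factor of 4 means 16x the pixels); the function name and layout are mine, not the coursework's:

```cpp
#include <cstddef>
#include <vector>

// Downsample a greyscale image rendered at `factor` times the target
// resolution on each axis, by averaging every factor-by-factor block
// of high-res pixels into one output pixel (the "averaging with
// neighbours" step described above).
std::vector<float> downsample(const std::vector<float>& hi,
                              std::size_t hiWidth, std::size_t hiHeight,
                              std::size_t factor) {
    std::size_t loWidth = hiWidth / factor;
    std::size_t loHeight = hiHeight / factor;
    std::vector<float> lo(loWidth * loHeight, 0.0f);
    for (std::size_t y = 0; y < loHeight; ++y) {
        for (std::size_t x = 0; x < loWidth; ++x) {
            float sum = 0.0f;
            // Accumulate the block of high-res pixels behind this one.
            for (std::size_t dy = 0; dy < factor; ++dy)
                for (std::size_t dx = 0; dx < factor; ++dx)
                    sum += hi[(y * factor + dy) * hiWidth + (x * factor + dx)];
            lo[y * loWidth + x] = sum / float(factor * factor);
        }
    }
    return lo;
}
```

The averaging is what smooths the jagged edges: a pixel that sat half-on, half-off an object's edge at high resolution ends up as a blend of the two colours.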
MSAA is a newer technique that, whilst still not the best technique available, is relatively popular. Unlike SSAA, MSAA does not require the program to render the scene at a higher-than-display resolution. Instead, the algorithm samples each pixel at two or more points within it during rasterisation, concentrating the extra work along the edges of geometry.
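Again, nothing here is from the coursework; it's just the core idea in toy form: test whether the geometry covers several points inside one pixel, then blend by the fraction covered, rather than shading a whole higher-resolution image. A half-plane stands in for a triangle edge, and the 4-point pattern is a made-up one:

```cpp
#include <array>

struct Vec2 { float x, y; };

// Toy "geometry": a half-plane edge. A point is covered when y < x.
bool inside(Vec2 p) { return p.y < p.x; }

// Estimate how much of the pixel whose lower-left corner is (px, py)
// is covered, by testing a fixed pattern of 4 sample points inside it.
// The resolved pixel colour would blend foreground and background by
// this fraction, which is the multisampling idea in miniature.
float coverage4(float px, float py) {
    const std::array<Vec2, 4> pattern = {{
        {0.25f, 0.25f}, {0.75f, 0.25f}, {0.25f, 0.75f}, {0.75f, 0.75f}
    }};
    int hits = 0;
    for (const Vec2& s : pattern)
        if (inside({px + s.x, py + s.y})) ++hits;
    return hits / 4.0f;
}
```

Pixels far from the edge get a coverage of 0 or 1 (no extra shading work matters there); only edge pixels land in between, which is why MSAA is so much cheaper than supersampling the entire frame.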
I produced some side-by-side screenshots to show the differences in performance and effect before and after enabling the respective techniques. The testbed was a university lab machine powered by an NVIDIA GeForce GTX 980 Ti. The scenario consisted of extremes, with a low resolution of 640×480 and an AA-on sample size of X16. The resolution was originally supposed to be 1920×1080, but attempting to use X16 SSAA at that size caused lots of issues relating to memory.
The SSAA scene is full of spheres, and the MSAA scene is full of pyramids. Ideally the scenes should have been the same, but the coursework demanded different scenes, with one being FBO-powered and the other on the default framebuffer (or a similar combination/configuration). In terms of vertex count, though, both were similar (there were a lot more pyramids in the MSAA scene to compensate for the spheres' larger meshes).
The combination of rendering higher than the display's resolution and downsampling the result was not very performant, as we see here. At a 640×480 resolution (307,200 pixels), X16 SSAA results in the scene being rendered at 10240×7680 (78,643,200 pixels)! The framerate dropped from being capped at 60FPS (due to v-sync, whoops) to ~17.5!
MSAA fares far better, with the FPS bound only by v-sync (I really should have turned that off). The anti-aliasing effect looks about as good as SSAA's as well, so win-win?
And that's the conclusion: a summary of why MSAA is considered better! My cohort was, however, slightly divided on which one actually looked better. Personally, I found that at X16 their effectiveness looks about the same, but at X2, SSAA might look slightly better. The resources you save by using MSAA will compensate for this by allowing you to sample at a much higher rate anyway, however!
I’ve postponed the planned post on the most unusual ThinkPads for the time being, pending posts about my new purchases! I’ll be showcasing my machines (and keyboards) in pairs, with the schedule being the following:
Ultrabooks: SN-X41 & SN-X220
Modern: SN-L430 & SN-T430
Classic: SN-T41 & SN-T42
Keyboards: SN-KBC Compact & SN-KBB Tablet 2 Bluetooth
So, ya boi Shark here finished a really fun coursework last Friday!
The coursework was for one of my favourite modules – Artificial Intelligence for Game Developers. Since term started, we have looked at search algorithms, decision trees, artificial neural networks, fuzzy logic, finite automata, evolutionary computing, clustering, and self-organising maps. We were also briefed on deep learning, although that's for later in the academic year. Now, the coursework asked us to select appropriate (and reasonable, considering time restrictions) techniques from that pool for NPC navigation and artificial life simulation.
Please read on!
My NPC navigation implementation was your standard A* search stuff. We were asked to come up with a solution to get from A to B on a map made up of different terrain types. The types – grass, marsh (slows you down), road (speeds you up), river, and mountain (the last two block movement) – were given to us. The A* algorithm is widely known for pathfinding and graph traversal, mainly because it's essentially an extension of the popular Dijkstra's algorithm with heuristic (educated-guess) guidance. To show informed choice, however, I implemented alternative algorithms to show the difference in operation:
Breadth-first search (BFS)
Only suitable for unweighted graphs
Imagine the graph as a tree; the simple concept of looking at all leaves on the same branch level before going deeper
Depth-first search (DFS)
Only suitable for unweighted graphs
The concept of going down to the lowest branch first, then recursively going back up to explore leaves along the way
Dijkstra's algorithm
Designed for weighted graphs, but will act like BFS if all weights are equal
In theory, implementing A* is relatively straightforward – implement Dijkstra's, then tack on some heuristic cost calculations. The solution is VC++ 2017 based, with the result rendered in the console (there are no marks for aesthetics).
During development, I realised the core algorithm of the program (Dijkstra's, without the heuristic) was spitting out weird results that were only resolved by adding the heuristic. Due to time constraints, I was not able to properly debug the issue before the deadline (but I did write up and document the issue).
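To illustrate the "Dijkstra's plus a heuristic" structure, here is a generic grid sketch in C++ (not my submission): tile weights stand in for the terrain types, and the heuristic is plain Manhattan distance.

```cpp
#include <climits>
#include <cstdlib>
#include <queue>
#include <vector>

// grid[y][x] is the cost of stepping onto a tile (e.g. 1 = grass,
// 3 = marsh), and -1 marks blocking tiles (river, mountain).
// Returns the cheapest travel cost from start to goal, or -1 if
// the goal is unreachable.
struct Node { int f, g, x, y; };                 // f = g + heuristic
struct ByF {
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

int manhattan(int x, int y, int gx, int gy) {
    return std::abs(gx - x) + std::abs(gy - y);  // the "educated guess"
}

int aStar(const std::vector<std::vector<int>>& grid,
          int sx, int sy, int gx, int gy) {
    const int h = (int)grid.size(), w = (int)grid[0].size();
    std::vector<std::vector<int>> best(h, std::vector<int>(w, INT_MAX));
    std::priority_queue<Node, std::vector<Node>, ByF> open;
    best[sy][sx] = 0;
    open.push({manhattan(sx, sy, gx, gy), 0, sx, sy});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.g > best[n.y][n.x]) continue;      // stale queue entry
        if (n.x == gx && n.y == gy) return n.g;  // cheapest arrival
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (grid[ny][nx] < 0) continue;      // blocked terrain
            int g2 = n.g + grid[ny][nx];
            if (g2 < best[ny][nx]) {             // found a cheaper route
                best[ny][nx] = g2;
                open.push({g2 + manhattan(nx, ny, gx, gy), g2, nx, ny});
            }
        }
    }
    return -1;                                   // goal unreachable
}
```

Dropping the `manhattan` term from the pushed priority turns this straight back into Dijkstra's, which is exactly why the "tack on a heuristic" description fits.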
Artificial life simulation
This is where the assignment got REALLY interesting! For this second part, we were asked to come up with a life simulation game that shows agents (entities) interacting dynamically. I developed a fungi-ant simulation (not really accurate to real life, but fine for this assignment) that shows how both populations change and affect each other over time. My plan of attack was to power the agents with finite state machines, and specifically to develop the ants with a genetic algorithm to facilitate the dynamic development of their population. I opted to do this part in Unity (2018.2).
My three agents are fungi, worker ants, and queen ants. The fungi simply replicate themselves (stylised as one of four visually-different variants) via sporing, whilst also growing in size over time. The worker ants collect fungi via random selection and short-range sight, and return them to the queen ant for her consumption. The queen ant, once well fed, will summon a worker selected via fitness-proportionate selection to mate with.
A finite-state machine (FSM), or finite automaton, is a way of modelling computation where a machine operates in exactly one state at a time. Such a system is useful for basic AI, since it works by changing the state of an agent based on a specified and strict set of variables. Each entity has its own FSM tailored to a generalised version of how it acts.
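As a sketch (with made-up state and sense names, not the project's actual ones, and in C++ rather than the Unity C#), a worker ant's FSM boils down to something like:

```cpp
// One state at a time; transitions driven by a small, strict set of
// observations about the world. All names here are illustrative.
enum class AntState { Wandering, CollectingFungus, ReturningToQueen, Mating };

struct AntSenses {
    bool seesFungus;      // a fungus is within short-range sight
    bool carryingFungus;  // currently hauling food
    bool queenCallsMate;  // the queen's pheromone summons this ant
};

AntState nextState(AntState current, const AntSenses& s) {
    // The queen's pheromone pre-empts everything else.
    if (s.queenCallsMate && current != AntState::Mating) return AntState::Mating;
    switch (current) {
        case AntState::Wandering:
            return s.seesFungus ? AntState::CollectingFungus : AntState::Wandering;
        case AntState::CollectingFungus:
            return s.carryingFungus ? AntState::ReturningToQueen : AntState::CollectingFungus;
        case AntState::ReturningToQueen:
            // Food delivered? Go back to foraging.
            return s.carryingFungus ? AntState::ReturningToQueen : AntState::Wandering;
        case AntState::Mating:
            return AntState::Wandering;
    }
    return current;
}
```

Each tick, the agent performs whatever behaviour its current state dictates, then runs one transition like this; the fungi and queen each have their own, smaller machines.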
The ant population is influenced by a genetic algorithm designed to refine the performance of the ants over time as new generations are born. The process is triggered by the queen ant, who summons a selected mate via a pheromone when she is in the mating state. The mate will return to the queen, and some DNA splicing will begin!
Representation of DNA
Obviously, the DNA is generalised; otherwise it would take forever to specify realistic genes and visualise them. So DNA is simply a collection of three variables acting as the ant's "genes": speed, sight, and hunger saturation. All three are floats, and are used throughout the simulation for actions such as moving about or detecting things around the individual.
Fitness and mate selection
Fitness is a value we use to summarise the overall "quality" of an ant's DNA. It is simply defined as f = (speed + sight + hunger) / 3. We can use this to guide the queen's selection of a mate so it's not a totally pseudorandom selection. I opted for roulette wheel selection. It operates by summing the current ant population's fitness (the sum of all ant fitness, or Σf), then summing again, stopping when a random threshold is reached. The loop index at which the summing stops selects the mate. In code, this looks like:
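A minimal stand-in sketch of the roulette wheel in C++ (the real project loops over ant objects in Unity C#; here I'm assuming the fitness values have been collected into a flat array):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Fitness-proportionate (roulette wheel) selection: sum the population's
// fitness once to get the wheel's size, draw a random threshold, then
// sum again and stop at the ant whose slice the threshold lands in.
// Fitter ants own bigger slices, so they are picked more often.
std::size_t rouletteSelect(const std::vector<float>& fitness, std::mt19937& rng) {
    float total = 0.0f;                       // first pass: Σf
    for (float f : fitness) total += f;
    std::uniform_real_distribution<float> dist(0.0f, total);
    float threshold = dist(rng);
    float running = 0.0f;                     // second pass: stop at threshold
    for (std::size_t i = 0; i < fitness.size(); ++i) {
        running += fitness[i];
        if (running > threshold) return i;
    }
    return fitness.size() - 1;                // guard against float rounding
}
```

An ant with fitness f is chosen with probability f/Σf, which keeps selection pressure on the better genes without ever ruling the weaker ants out entirely.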
When an attracted worker arrives at the queen, DNA splicing occurs, creating new DNA with a 50/50 chance of each of the three genes coming from the queen or the worker. A random mutation may also occur when this happens, which, for better or for worse, will add some variance to the population. This looks like:
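And a sketch of the splice itself: the 50/50 gene pick plus an occasional mutation nudge (the 5% rate and the nudge range are placeholder values, not the project's actual numbers):

```cpp
#include <random>

// The three "genes" from the post.
struct DNA { float speed, sight, hunger; };

// Each gene is taken 50/50 from the queen or the worker; occasionally a
// mutation nudges the inherited gene up or down a little.
DNA splice(const DNA& queen, const DNA& worker, std::mt19937& rng) {
    std::bernoulli_distribution coin(0.5);           // 50/50 gene pick
    std::bernoulli_distribution mutates(0.05);       // assumed mutation rate
    std::uniform_real_distribution<float> nudge(-0.1f, 0.1f);
    auto pick = [&](float q, float w) {
        float gene = coin(rng) ? q : w;
        if (mutates(rng)) gene += nudge(rng);        // for better or worse!
        return gene;
    };
    return {pick(queen.speed, worker.speed),
            pick(queen.sight, worker.sight),
            pick(queen.hunger, worker.hunger)};
}
```

Over generations, the roulette wheel keeps favouring the fitter splices, and the mutations keep feeding in new variance for it to act on.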
In conclusion, then…
…this has been a ballin’ coursework! The second part in particular was a joy to implement and I hope to continue on with it in the future. Maybe there will be a follow up post in a few months’ time? 😀
Oh, here’s a parting screenshot:
The artificial life simulation, as it was, would not have been possible without these free assets:
Ya boy Shark here about to spit mad equations at you!
Okay, maybe not like that. But now that the first milestone of my dissertation is past, I am feeling the need to talk about it a bit. This will be the first of a series of posts about what I’m doing.
An overview of the project
My dissertation title is “Relativity Simulation: Develop a physics-based game that utilises the principles of relativity, complete with GUI”.
The opportunity to study relativity in my third-year project was something I could not turn down or substitute with another choice, despite the depth of the challenge in front of me. The impetus for the choice stems from the previous year's team-based workshop assignment, which saw me developing the realistic gravitational physics behind our team's game. Other aspects of the game aside, it ended in a profound experience for me when I saw our own little Newtonian universe come to life. Now inspired to take on more advanced physics programming, so that one day I can do it again more realistically and better, this project presents me with the opportunity to learn the physics I need through a hands-on project!
The first step of this journey was interpreting and narrowing down exactly what I want to do within the confines of the given title, since there is only so much I can do given the time constraints of the predetermined milestones and the time it will take me to digest everything I learn (something I hope these sorts of blog posts will help me with). My project aim is to create an educational, interactable simulation that demonstrates how relativity works and how everything changes when you alter relativistic (and, by extension, applicable Newtonian) constants such as the speed of light, Planck's constant, or the universal gravitational constant, which you need to do to complete game levels. Basically, you skew how all game objects interact because you're altering how the virtual 'universe' works. I'm hoping this can evolve into a fun game where you don't interact with the objects themselves to complete the levels, and the player gets a visual idea of what relativity is without being bombarded with straight-up equations.
Milestone one saw me set out the aim and objectives, write my literature review and background research, and design the solution I'm working from (which will be a Windows OpenGL project). Right now, I do not have my game design specified, since I am still in the process of researching relativity and deciding what aspects I am gonna use to build a game out of. Milestone two, due in early February, will ultimately have it, though. However, thanks to the research I have done, I have indeed found an aspect of special relativity that I can potentially use as a game mechanic. So please, read on for part one in this journey to understand relativity!
But first, a bit of context
Actually, first we need a disclaimer: my physics education goes up to A-level, so please forgive any inaccuracies should there be any, since I do not study physics as part of my course. When I do study physics, it is in my spare time – I'm trying my best!
Anyway. Relativity is perhaps one of the most important developments in 20th-century physics, and today is a prime example of successfully-observed theoretical physics. It encompasses two related theories proposed by Albert Einstein that were built upon the results and findings of other physicists such as Albert Michelson, Hendrik Lorentz, and Henri Poincaré. The two theories are special relativity (1905) and general relativity (1916). As stated before, special relativity (STR) will be the focus of this post, since it's within it that I found my potential 'playing cards' for this project. (Don't worry about general relativity for now; I will be posting about it closer to the next milestone, once I've finished my research into it.)
So, STR! It describes the formerly-separate concepts of 3-dimensional space and 1-dimensional time as a single 4-dimensional spacetime continuum, replaces Newtonian mechanics' Galilean transformations with Lorentz transformations (in layman's terms: a method of examining different perspectives of time, size, and position in space), and states that the speed of light is an absolute constant.
Focusing specifically on the latter two tenets of STR, fixing the speed of light means only time, mass, and length change in calculations from now on. Hence we have consequences that you might have heard of, namely time dilation (events perceived at different times by observers at different velocities), relativistic mass (an object's mass increases with velocity), and length contraction (an object's length decreases with velocity). Each one is possible thanks to having a fixed reference (the speed of light as a constant) to calculate a velocity-to-light-speed ratio with. This ratio is part of the Lorentz factor, which is key to this idea of what I can make a game out of!
Beware, maths and formulae ahead!
The Lorentz factor
The Lorentz factor is pretty much the key to calculating the most well-known and visually-representable special relativistic effects. The factor arises from derivations of the Lorentz transformation, and allows us to measure how time, mass, and length are affected by time dilation, relativistic mass, and length contraction respectively. The base factor, in the form I'll be using throughout this post, is √(1 − v²/c²), where v is the object's velocity and c is the speed of light. (Strictly speaking, the Lorentz factor γ is the reciprocal of that square root, but the 0-to-1 form is the one I'll be dividing and multiplying by below.)
As aforementioned in the context, the factor relies on a ratio comparing the velocity against the speed of light, so that we know how the time, relativistic mass, and length of an object change when said object moves. The factor returns a value between 0 and 1, where 1 shows an absolute lack of velocity and 0 would mean the velocity is the same as the speed of light. We can then divide or multiply specific properties of an object by the factor to calculate their relativistic values. Below are three applications of this:
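Written out as a short C++ sketch (the function names are mine, and I'm using the 0-to-1 square-root form of the factor described above, so we divide for time and mass and multiply for length):

```cpp
#include <cmath>

const double c = 299792458.0;                        // speed of light, m/s

// The 0-to-1 factor: sqrt(1 - v^2/c^2). Returns 1 at rest, 0 at v = c.
double lorentzFactor(double v) {
    return std::sqrt(1.0 - (v * v) / (c * c));
}

// Time dilation: a moving object's clock reads more elapsed seconds.
double observerTime(double properTime, double v) {
    return properTime / lorentzFactor(v);
}

// Relativistic mass: effective mass grows with velocity.
double relativisticMass(double restMass, double v) {
    return restMass / lorentzFactor(v);
}

// Length contraction: length shrinks along the direction of motion.
double contractedLength(double restLength, double v) {
    return restLength * lorentzFactor(v);
}
```

Plugging in the worked examples that follow: `observerTime(46800, 0.25 * c)` comes out at ~48,335 seconds, `relativisticMass(50, 0.25 * c)` at ~51.64 kg, and `contractedLength(1.5, 0.5 * c)` at ~1.299 m.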
If you're paying attention, you might notice straight away that in the first two applications, if the velocity is the same as the speed of light (299,792,458 metres per second), the factor (as aforementioned, 0) would yield an error like "Math ERROR" on a scientific calculator or "#DIV/0!" in Excel. This is normal (duh, you can't divide by 0!), but there will be some additional relativistic explanations later for each specific case.
Starting with time dilation, I'll be trying to explain these applications in a way that can be more easily digested than what you might find on Wikipedia (for example).
So, observer time is the time measured for an object that takes into account the relativistic effect of moving at extreme velocities (as opposed to "proper time", which is time measured without any relativity taken into account). Suppose we have a stationary Shark named Wrex and an in-motion Shark named Princess travelling at one-quarter the speed of light (0.25c, or 74,948,114.5 m/s). Let the proper time (from an independent clock) measure the time as 1PM (or 46,800 seconds from midnight).
So we can see that at high velocities, Princess' clock is no longer synchronised with Wrex's observed time or the independent clock that provides us the proper time reference. The extra ~1,535 seconds (~25.58 minutes) is something no human can presently experience, since we do not have any sort of vehicle that can propel us to the speeds required. If we COULD reach the speed Princess is travelling at, we would age slower, since it would take us ~48,335 seconds to experience the same events a stationary observer does over 46,800 seconds. But let it be clear that we do indeed 'experience' time dilation daily whenever we are in some sort of motion, although we ourselves cannot notice it. To put this into perspective: the fastest thing the average human can experience, a commercial jet aircraft, would register an observer time of 46800.0000000157 seconds, assuming velocity is the average jet cruising speed of 885 kilometres per hour or 245.833 m/s (source) and proper time is provided by the same clock used in the Wrex/Princess example. This is a difference only an atomic clock could register.
Relativistic mass (kilograms) is a measurement of "effective" mass that takes into account the increase in inertial mass at high velocities, with inertial mass essentially being the parameter of mass that specifies its resistance to changes in motion. Using the Lorentz factor, we can measure and prove that at higher velocities, the overall mass will increase. So now suppose we have a Shark named Benedict who has a "rest" mass of 50 kilograms and is travelling at one-quarter light-speed (0.25c, or 74,948,114.5 m/s).
So Benedict's mass increased by 1.6397779494322 kilograms at 0.25c! One consequence of relativistic mass is that an object with a rest mass of more than 0 cannot travel at the speed of light – as an object approaches c, the object's energy and momentum increase without bound. You might have heard about this if you've ever looked into the challenges of deep-space travel within realistic and liveable time-frames.
Another interesting note is that we can calculate the relativistic mass of an object from its energy value, something possible thanks to Einstein's famous equation stating that mass and energy are equivalent: E = mc², which rearranges to m = E/c².
Length contraction is the phenomenon of an object's length shortening in the direction of motion. Once again, we can use the Lorentz factor to calculate this, BUT the formula is set out differently than in the last two uses, since we are calculating a contraction of the value in question rather than an increase. So suppose we have a Shark named Louise, whose at-rest (actual) length is 1.5 metres, travelling at half light-speed (0.5c, or 149,896,229 m/s).
So Louise became ~20cm shorter at half light-speed! There is not much more to say about length contraction, other than that it is also known as Lorentz-FitzGerald contraction, since it was postulated by George FitzGerald in 1889 and Hendrik Lorentz in 1892.
So, can we conclude this already?!
Yes, we can!
I think all three examples presented today can be invaluable to my game design, since what I need are mechanics that are both innovative (there are very few relativity-based games out there) and representable. The latter is going to be a challenge, since in order to visualise any of these effects I need to scale the universe and its constants down to manageable levels. So constants such as the speed of light might be hundreds or even thousands of times smaller than in actuality, so that the velocity-to-light-speed ratios of on-screen objects become significant and changes to the effects are noticeable on screen.
Anyway, that’s all folks! 😀 Hopefully this might be interesting for someone!
Shark here for another SHARKTASTIC blog reboot, but this time it's all grown up and serious. Okay, maybe not quite. I mean, I'm still the same old workaholic, but I'm wanting to have a little tidy-up here. This blog has been on hiatus since August, and a number of events have happened since then that required my undivided attention. Like, I'm now 21! However, I'm back (hopefully)! 😀
So just prior to this post, I made almost all my ‘my life’ type posts private. Upon reflection, I believe some are a bit too personal for them to be readily out there. Plus they were quite cliche-y, TBH. All project posts are still public, although some of those projects are now terminated (see below). Going forward, I’m hoping to post more about projects, reviews, rants, and ramblings, but I will occasionally post stuff about life under the sarcastic and self-reflecting persona of Shark!
Now, let’s get into some Khalidonian updates! Spoiler: “You don’t get spoilers ;)”
In regards to my work, I’ve pretty much dropped all projects for the foreseeable future to free up more me-time and allocate some of it towards self-studying. My Tholian Simulator game has been shelved indefinitely due to complexity and lack of time. Most of the planning phase was completed, so it is just a matter of dedicating time (when I have it) later on down the road. Path to 2265 Chapter 3 is postponed until next year, since I’ve not had a lot of time to write creative stuff and I want to review the rest beforehand (re-reading it now, it feels like a 14-year-old wrote it and there are so many typos). Everything else should be assumed to be cancelled. This freed time is partially channelled into University work; my notable works so far are my artificial life simulation coursework and my dissertation, which will both soon have their own posts since they’re both super cool!
Now in regards to my life, oh boy. This summer has been… interesting. As I tried to allude to, I’m not gonna try to post cliche stuff about being heartbroken, parental separation, and shit. I’ll just quickly summarise in saying that I approached each problem asking myself “What can I learn from this?”, and I believe I have the answers I need. I’m genuinely fine though, so no need to worry!
And yeah, that’s it really. I’ve joined a few societies at University, have good friends, and I’m now a student voice representative for my Faculty, a role I’m really enjoying! I’ve got some cool assignments and a dissertation this year that will add immensely to my portfolio, and I still have some good project ideas for when the time is right!
As a wise Shark once said, “when life gives you lemons, put them in the fridge”.
It kind of took me by surprise. It has been about a decade (from what I remember) since it last snowed well in Wales – long enough to forget what snow even feels like. So, when I woke up to see that my street was now completely blanketed after a day or so of snowing, I couldn’t contain the urge to just walk and have some fun in it!
The result: one of the best days I have had in a while! I have previously talked about my love of the simple pleasure of being outside and walking, but combine that with this soft and pretty snow and I just didn’t want to not be in it! Throughout the day, I made four different voyages into the snow, totalling a combined 20,500 steps (as per Google Fit). My legs are pretty tired to say the least; fighting the fairly strong wind and the resistance of large piles of snow did take a bit of a toll, but every moment was worth it!
There are a few notable examples of fun or interesting things I did in the day, including spots of photography, banter with friends, etc. But the one that left the biggest and most personal impression was a moment when I was walking alone past World of Groggs on my last trip of the day, at around 2050 hours. Imagine this usually busy street now blanketed in vibrant-white snow and silent of human contamination, where the force of the gentle wind becomes audible and the snow creaks as you tread. For some random reason, I looked back while walking through this setting and realised how devoid of human life the street now was. At first, I got this deep feeling that I was truly alone – like if I were to scream, no one would hear me. But when I turned back around, I realised how beautiful and calm my surroundings were. I could now enjoy them to the point where all scares and cares went straight out of me, and I walked home with a smile!
I think that moment will stick with me for a while. I hope to be out early tomorrow for a spot of photography. Hopefully my main camera’s battery won’t die early in the day again!