Anti-aliasing

Uni

Hey, it’s ya boy Shark here with some good news! Despite not feeling the greatest about it, I got 87% as my grade for my Game Engine Design coursework on Doom 3 modifications and Python bots! 😀 It was a pleasant surprise to start last week with, to say the least!

Anyway, on Friday night, I submitted my first Real-Time Rendering Techniques coursework! This module focuses on teaching us graphics programming in OpenGL (as a successor to last year’s Computer Graphics module) and DirectX without the aid of an engine like Unity or Unreal. This coursework was centred around implementing and comparing supersampling anti-aliasing (SSAA) and multisampling anti-aliasing (MSAA). So today I’m gonna be talking about what aliasing is, what those two things are, and the conclusions I drew!

Please note that I won’t be posting any of my actual submission below due to this being recently submitted coursework – any snippets are generic illustrations only. This post assumes a basic understanding of the topic (like knowing what anti-aliasing is as a game setting or something), but if you don’t have that, this might still be an interesting read!

The problem of aliasing

Aliasing, broadly defined, is the distortion that occurs when a signal is sampled too coarsely to capture its detail. In the context of this post, graphical aliasing is the occurrence of image artifacts that produce stair-like or jagged edges around objects rendered on screen. This is a particularly big problem for computer games since aliasing can make otherwise stunning scenes look too digitised and undermine the game’s immersive potential.

The easiest way to recreate this effect for yourself is to take an image with a lower resolution than your computer’s display and stretch it out (so long as the image viewer itself doesn’t anti-alias) or zoom in on it. Lines become jagged and whatever quality the image had is ruined. Unfortunately, no perfect solution exists because the detail of any given image will always be represented on screen by pixels (which are just really small blocks of colour). However, we can at least mitigate the issue with anti-aliasing!

Two techniques…

The assignment called for us to implement and compare two examples of anti-aliasing: supersampling anti-aliasing (SSAA) and multisampling anti-aliasing (MSAA). The two techniques differ in age, implementation difficulty, and performance characteristics. Anti-aliasing, as the name suggests, is simply the process of diminishing aliasing.

SSAA

SSAA is perhaps the oldest and most primitive anti-aliasing technique, and it has long been abandoned due to its high performance requirements and the lack of a built-in implementation in frameworks such as OpenGL. Supersampling involves telling your code to render the scene (what you’re looking at whilst playing the game) at a higher resolution than the display and then scaling it back down to the display’s resolution. The latter process, known as downsampling, works by averaging the colour of each pixel with some of its neighbours according to a ratio like x4. In an x4 scenario, the scene would be rendered four times bigger than the display’s resolution setting and then averaged in blocks of four pixels during downsampling. You’ve probably observed downsampling yourself when resizing an image or photo to be smaller than it originally was.
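Since none of my actual submission is going up here, here’s a rough, generic sketch of what that downsampling step boils down to (plain C++, averaging 2×2 blocks for an x4 ratio – the types and names are made up purely for illustration):

#include <vector>

struct Pixel { float r, g, b; };

// Downsample a supersampled image by averaging 2x2 blocks (an x4 ratio).
// 'big' is (2 * outWidth) x (2 * outHeight) pixels, stored row-major.
std::vector<Pixel> Downsample(const std::vector<Pixel>& big, int outWidth, int outHeight)
{
    std::vector<Pixel> out(outWidth * outHeight);
    const int bigWidth = outWidth * 2;

    for (int y = 0; y < outHeight; ++y) {
        for (int x = 0; x < outWidth; ++x) {
            Pixel sum = {0.0f, 0.0f, 0.0f};
            // Average the 2x2 block of supersampled pixels
            for (int dy = 0; dy < 2; ++dy) {
                for (int dx = 0; dx < 2; ++dx) {
                    const Pixel& p = big[(y * 2 + dy) * bigWidth + (x * 2 + dx)];
                    sum.r += p.r; sum.g += p.g; sum.b += p.b;
                }
            }
            out[y * outWidth + x] = {sum.r / 4.0f, sum.g / 4.0f, sum.b / 4.0f};
        }
    }
    return out;
}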

MSAA

MSAA is a newer technique that, whilst still not the best available, is relatively popular. Unlike SSAA, MSAA does not require the program to render the scene at a higher-than-display resolution. Instead, the GPU takes two or more sample points within each pixel – the expensive shading still only happens once per pixel, and the extra samples mostly matter along polygon edges where aliasing shows – and averages them at runtime, so the cost stays much closer to that of normal rendering.
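Again, nothing from my coursework here, but for context this is roughly all it takes to request MSAA on the default framebuffer when using GLFW with OpenGL (x4 in this sketch):

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_SAMPLES, 4);   // ask for 4 samples per pixel
    GLFWwindow* window = glfwCreateWindow(1280, 720, "MSAA sketch", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    glEnable(GL_MULTISAMPLE);          // usually on by default, but be explicit

    // ...render the scene as normal; geometry edges come out smoothed...

    glfwTerminate();
    return 0;
}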

The results

I produced some side-by-side screenshots to show the differences in performance and effect before and after enabling the respective techniques. The testbed was a university lab machine powered by an NVIDIA GeForce GTX 980 Ti. The scenario consisted of extremes, with a low resolution of 640×480 and an AA-on sample size of x16. The resolution was originally supposed to be 1920×1080, but attempting to use x16 SSAA at that size caused all sorts of memory-related issues.

The SSAA scene is full of spheres, and the MSAA scene is full of pyramids. Ideally the scenes should have been the same, but the coursework demanded different scenes, with one being FBO-powered and the other on the default framebuffer (or a similar combination/configuration). In terms of vertex count though, both were similar (there were a lot more pyramids in the MSAA scene to compensate for the spheres’ much larger meshes).

Supersampling anti-aliasing (SSAA) example: off versus x16

The combination of rendering higher than the display’s resolution and downsampling the result was not very performant, as we can see here. The 640×480 resolution (307,200 pixels) results in the scene being rendered at 10240×7680 (78,643,200 pixels)! The framerate dropped from its v-sync cap of 60FPS (whoops) to ~17.5!

Multisampling anti-aliasing (MSAA) example: off versus x16

MSAA fares far better, with the FPS bound only by v-sync (I really should have turned that off). The anti-aliasing effect looks about as good as SSAA’s as well, so win-win?

And that’s the conclusion: a summary of why MSAA is considered the better option! My cohort was, however, slightly divided on which one actually looked better. Personally, I found that at x16 their effectiveness looks about the same, but at x2 SSAA might look slightly better. The resources you save by using MSAA compensate for this by allowing you to sample at a much higher rate anyway, however!


Other blog updates

Whilst I’m at it, time for some blog updates!

I’ve postponed the planned post on the most unusual ThinkPads for the time being, pending posts about my new purchases! I’ll be showcasing my machines (and keyboards) in pairs, with the schedule being the following:

  • 18th January: Ultrabooks: SN-X41 & SN-X220
  • 25th January: Modern: SN-L430 & SN-T430
  • 1st February: Classic: SN-T41 & SN-T42
  • 8th February: Keyboards: SN-KBC Compact & SN-KBB Tablet 2 Bluetooth
  • 15th February: Uniques: SN-A30p & SN-R60e

And that’s all for today! Shark out!


Shark’s Aims for 2019

Life

‘Ello, hope the festive season is treating y’all well!

It’s ya boi Shark here at a time of year where some people look to reflect on what’s happened during the last year and wonder what is to come. I wanted to do something similar and make a blog post out of it, but I don’t want the effort to end up as some new year’s resolution list.

So, without further ado, 2019 is gonna be about…

1) Aiming to ensure my work comes first

1.1) At university

During the semester just past, I felt like two out of the three assignments or milestones we had went well. My dissertation’s progress is ahead of my initial plan, and my AI assignment’s (the ant sex one I wrote about two weeks ago) final product was exactly what I wanted. But my final Game Engine Design module assignment was below my expectations and I feel I could have done much better. The main problem was a rushed report about the product, which was to adapt and extend a Python map conversion tool and AI bot (Pybot) for Doom 3.

Through the assignments there was a great deal of stress, something that accumulated and affected that last assignment of the term. I believe it came down to two things: my own natural stress when concentrating against limited time, and what seemed like near-constant requests for help during that critical period. Usually my own stress is manageable, but I felt the additional pressure got to me during the last stretch towards submission. Dealing with it is complicated by my desire to help (I’m usually more than happy to lend a fin to classmates who really need help) conflicting with my growing view that it was starting to take the piss, and this isn’t an easy situation to deal with since it involves people who I care about. So, where do I draw the line between helping and making sure I get my work done? I don’t know yet. But I know I have to reserve my right to focus on my own assignments when I need to. I’m thinking this might be an interesting topic to revisit eventually, especially if I take a deeper look at the cause, because I believe it is always within my best and fair interest to look at myself when asking “what’s going wrong?” so I’m not just finding third-party excuses.

But anyway. With the second of three terms upon me, I must ensure that I’m continually giving it my best as I continue my voyage through this curious year and time of my life. Since my enrolment back in 2015, these last three years have been years of growth and near-continuous achievement in programming-related modules. Only one of my programming grades was not a first, and even then it was a second. I must ensure I capture this with my grades this year and not settle for (a) second. This is perhaps the most important vow unto myself for 2019.

1.2) At home

Now. Ensuring I work towards my degree is one thing, but my personal projects are another beast entirely. As with a lot of professions, a good degree on its own is not necessarily a golden ticket to the chocolate factory. You need a portfolio. A portfolio full of spicy and juicy work. Whilst mine is pretty okay for now, I want to ensure that I keep on adding towards it throughout 2019.

Thankfully, past assignments and the continuous stream of new knowledge through lectures provide a lot of inspiration for what I can do for my portfolio. Almost every feasible idea I have is documented, so it is just a matter of finding the time to follow through on them. In my blog reboot post, I mentioned how I’ve dropped a lot of projects I was already doing to allow myself time to complete university work – I believe I need to begin working some of them back into my schedule for next term!

2) Aiming to manage my costs

ThinkPads are all the rage in my life right now! But going into the new year with this rage, I must ensure it is a managed passion that does not become a killer burden to my wallet. It’s good to have passions and hobbies, but not if it’s gonna bite me in the ass in a few months’ time and I have to borrow money for shit again…

I already have some sensible acquisitions planned for when I get my next student finance payment in little over a week’s time. Among them is the IBM ThinkPad X41 swivel “lap-tablet” from 2004 and a new Lenovo ThinkPad USB keyboard that could allow me to have the ThinkPad experience (keyboard and TrackPoint nipple mouse) on any computer! I also plan to retain some spare cash for when I get opportunities to buy cheap ThinkPads at places like CeX, Cash Generators, or car boot sales so that I can do them up as small personal projects!

I know I must not allow this to get out of hand by recognising the line at which I need to stop. As we speak, I am developing a budget to give myself an idea of this limit.

3) Aiming to sort my health out

Since 2015, I have lost about 13 kilograms (~2 stone to you imperialists). Whilst that’s better than nothing, I still have a long way to go until I have a BMI I’m happy with. And I would like to accomplish my weight loss goals at a faster rate too.

I’m already beginning to tackle the issue with a morning and night exercise routine made up of things like sit-ups and squats, and I am phasing some crappy foods out of my diet too. As a vegetarian, it’s about time I actually cook veg regularly instead of defaulting to bloody pizza all the time, and so forth. Also, signing up to a gym isn’t a bad idea either, but I’m worried it will eat up too much of my day. I’ll have to cost it and work something out.

4) Aiming to ‘fend off life’s shit

Before I continue, I’d like to assure you there isn’t a depressing story following this. I don’t fancy posting such personal things here anyway, so I’ll spare the details. Back in my blog reboot post, I alluded to how things went this summer (I purposely omitted details then too). Some of the events you could consider mentally taxing if you heard about them, and believe me, it was the sort of stuff that could upset me. But as I said then, I approached the issues head-on to find the answers I needed to move forward with.

So, as I move into this new year, I’d like to reiterate the importance to myself (and perhaps to others too) of approaching issues constructively and rationally. I shouldn’t bury them like I have in the not-so-distant past for they’ll simply come back worse. It’s also never a bad thing to make sure I get the help I need if I need it, even if that’s just talking to someone I trust about it.

5) Aiming to enjoy the diminishing time I have left at university!

Whilst I don’t want to distract myself from my work, I’m wanting to ensure I enjoy every good and memorable moment I get, considering I’m in the fourth of five years of my degree. Whether it’s with friends, societies, or even just a good coursework! After the fifth year, that’s it! No more university unless I end up as a research student, but even then, almost all my friends would have moved on. Throughout the last term, the realisation that 60-70% of my coursemates will not be here next year has crossed my mind quite a bit, since they’re only doing the bachelor’s degree and not the integrated masters (MComp) like myself.

It’s gonna be weird and quiet as hell.

6) Aiming to make this blog great!

This will be the last post of 2018, however next year should see further continued commitment to this blog! My current goal is three posts per month, something roughly in line with what I have been doing here since the reboot. Current topics on my mind are coursework (AI and realtime rendering/graphics), my dissertation, ThinkPads, and personal stuff if interesting or relevant. I’m hoping to explore other things to talk about too, including electronics/hobbyist projects, my thoughts on tech news, Star Trek commentary, my experiences within the role of Student Voice Representative @ USWSU, and reviews of tech books and manuals I’m reading!

But I guess we shall see! The next post is due next Wednesday, which will see me taking a look at some unusual but desirable ThinkPads that I’m looking to buy at some point! Following that, there will be updates on the ThinkPad acquisitions I do make and an insight into my OpenGL realtime rendering coursework!

…and that’s it!

Hope this has been somewhat of an interesting and insightful read! It’s been helpful for myself to actually sit down, consider, and write about what I want to do in 2019. Hopefully I can make good on them and write all about the success (or otherwise) this time next year!

Shark out!

ThinkPads, Linux, and philosophy I guess

Tech, ThinkPads

So for my last assignment at university, ya boy Shark was required to play around with a lecturer-designed map format converter and a Python bot for Doom 3. On Linux. This required me, an almost devout Windows user who actually liked Windows Me and Vista, to get up close and personal with things like Emacs and general Linux quirks.

Let this be clear. I’m not opposed to Linux at all, but I was somewhat ignorant of what using it properly was like. The experience of using Linux at university was also tainted by preconfigured nonsense delivered on Apple iMac 5Ks: terrible and awkward-to-hold mice, and an out-of-the-box mismatch between the system’s and the keyboard’s layout that cannot be changed without root access. All of this only served to make adapting to Linux paradigms (especially shortcut key strokes) all the more infuriating.

First, a little bit of Linux philosophy

After that bad experience, I still wanted to learn more Linux and be able to actually use it properly in a controlled environment tailored to me. But first, why Linux? Well, I consider Linux to stand for three things that can be appealing from a computer science student’s standpoint: freedom of distribution, freedom of choice, and freedom to shoot yourself in the foot by f***ing around with things that would normally be shielded from you on something like Windows.

Firstly, distribution and choice. They go hand in hand because there are just SO many flavours to choose from – all of which are free to use! You have the main families such as RPM and Debian. Then inside them, you have things like Fedora or Ubuntu. And then those things themselves have variants such as Kubuntu. And then on top of that, you can literally ‘reform’ the operating system’s feel to how you want with desktop environments such as GNOME, MATE, LXDE, Xfce, etc.

Secondly, shooting yourself in the foot. Outside the aid of the desktop environment, Linux will not hold your hand. As opensource.com puts it, “it assumes you know what you are doing when you type a command and it proceeds to execute that command without asking if you really want to.” If you’re not experienced enough, you’re probably going to have a bad day at some point when making modifications to your system – especially if you “sudo” everything. You could say this is a double-edged sword, and it’s up to you to decide if the pros outweigh the cons.

Enter the ThinkPad(s)!

If you’re involved with computing, you might also see the appeal in what Linux stands for and could potentially offer! So eagerly wanting a platform to use (since I don’t want to use my one good gaming laptop for this), I turned to a certain old laptop I had lying around.

SN-T41 being ‘attacked’ by Tipper

I bought this badboy (an IBM ThinkPad T41) off my friend for £15 just two months earlier with the intention of doing it up into a ’90s era (probably Windows 98) PC gaming laptop. I put off the little project due to lack of time after long days working at university, so it never materialised. Now that I have the time but priorities have changed, I believe it is perfect for dipping my feet into Linux whilst I await funds (my student loan) to invest in superior hardware.

I originally planned to put the latest version of Linux Mint on it but quickly realised there would be issues with its old Intel Pentium M processor – it doesn’t properly advertise a memory management feature called PAE (Physical Address Extension) that most modern Linux distros insist on. So with more realistic expectations, I dialled it back and looked for some older and lightweight distros. I eventually settled on Lubuntu 12.04 (a lightweight flavour of Ubuntu using the LXDE desktop environment) to give myself a nice compromise between software availability (it’s post-2010 at least) and resource lightness. Not bad for a 2003 machine, and it’s actually fairly decent to live with! I doubt putting the era-equivalent (to the version of Lubuntu) Windows 8 would have been as successful. To be honest, I doubt it would have even installed!

But despite all this, I wanted to experience Linux with MORE SPEED and up-to-date distros. So after being given a little bit of money off my nan for buying myself something nice for Christmas, I invested in a slightly more modern relative of the T41 as a nice treat for myself and something with real value to my self-education!

SN-R60e chilling with The Sharkster (creative, eh?)

Here’s the Lenovo ThinkPad R60e from 2006! Costing just £50, it is in near-perfect shape with only the battery being kaput! The R60e is rocking an Intel Core Duo T2300 that has PAE, so the latest Linux distros work with it! I opted for Xubuntu 18, another lightweight Ubuntu flavour that uses the Xfce desktop environment. Xubuntu holds up well on the machine despite some slowdowns that become apparent when multitasking beyond three windows. I do hope to partially remedy this with a CPU upgrade to an Intel Core 2 Duo T7400. The T7400 is a generation newer than the T2300, and it’s clocked higher at 2.16GHz compared to 1.66GHz. I am also considering trying Xubuntu 16 to see if it’s a bit faster. Anyway, the CPU is ordered and should be with me just after the new year!

Xubuntu 18 on SN-R60e

Hold up, why ThinkPads? And why two?

Go brew yourself a nice cup of earl grey and open a pack of Bourbon biscuits, and prepare yourself for a long-winded explanation and justification!

So, it all started with a circa 2000 Pentium III-equipped IBM ThinkPad that I had growing up. A T21 that, as the model number implies, is two generations older than my T41. It was THE sturdiest and most durable laptop I ever owned, a legacy thanks to it predating trends such as the god-awful netbooks from around the late ’00s and the modern ultrabook (which aren’t bad, but too thin for my comfort). I finally managed to break it somehow early into my high school years (along with my sanity and once-trim figure). Now fast-forward to the 22nd September 2018. When my friend showed me his disused T41, I saw the best qualities of the earlier model in it and thought “you can make a collection out of this!” A few weeks later, I made an offer to buy it in the poor shape it was in and give it some love! Which I did! A good clean and thorough but mindful plastic polishing later, we got SN-T41 in all its glory!

Some more context. Anyone who knows me well knows that I have a habit of collecting stuff; 4-colour Bic pens, Logitech LCD keyboards, and PlayStation controllers are among the most complete examples of my collections! Each collection was bound by a common feature such as the style of pen, the features of the keyboards, or lineage (for both the keyboards and controllers). ThinkPads have both desirable features and a lineage of interesting history bound by a common form. It just had to be collected, for my enjoyment and my ability to use them in learning!

So, you find them collectable? What exactly makes a ThinkPad a ThinkPad?

Since their 1992 inception, ThinkPads have managed to retain their best qualities despite changes in trends and industry design throughout their 26-year existence. These qualities include the keyboards, mouse options, and other design cues/features!

The modern (left, T470) and classic (right, ThinkPad 25) upper deck layout (image from Ars Technica)

The keyboard

ThinkPad keyboards are so good! They’re a joy to type on thanks to a satisfying press, and the keys travel the sort of distance I like for fast and error-minimised typing. Maybe not so much of a problem with modern laptops, but earlier ThinkPads had the benefit of a properly balanced spacebar that pressed consistently no matter what part of the bar you hit – an inconsistency that plagued other laptops of earlier decades.

The TrackPoint mouse

ThinkPads are recognisable by the little red nipple in the centre of their keyboards, which is officially known as the TrackPoint pointing stick. The left and right buttons, and sometimes a centre middle button (which lets the TrackPoint scroll around), sit just below the spacebar. The TrackPoint is designed to be easily accessed by your index and middle fingers on the home row (ASDFGHJKL on a QWERTY keyboard), and the mouse buttons are designed to be used with your thumbs. Both placements are meant to minimise the need to move your hands away from the keyboard, or at least to minimise wrist rotation, improving ergonomics on the laptop’s constrained real estate. For users that prefer touchpads though, they are available on most post-2000 model variants ALONG WITH the aforementioned TrackPoint.

The ThinkLight

A large majority of the older ThinkPads have a togglable light on top of the screen that was designed to illuminate the keyboard before backlit keyboards were popular.

SN-R60e’s orange ThinkLight

The light is usually either a white LED (on the top-tier T-series) or orange LED (on lesser tiers such as the R-series). Honestly, it’s up to you if you consider this a gimmick but I’m actually using it right now as I write this in the relative darkness!

The chassis

As aforementioned, ThinkPads are sturdy little machines. All IBM and older Lenovo models sported a THICC as fuck case. The boxy design was originally modelled in 1990 by designer Richard Sapper, based on the Japanese ‘Bento’ lunchbox. He wanted the outside to look unimposing so that the inside could be surprising. In effect, the true nature of the powerful machine is concealed behind something the random passer-by might just ignore as a primitive box. Obviously this was conceived when laptops were still a rarity in public, but the design was carried on for some time even after it became obvious to any passer-by that a ThinkPad is a laptop, closed or not.

ThinkPads are also out of this world! Their strength and robustness have resulted in them being on every NASA space shuttle flight from 1995 till retirement, and the IBM A31p and the Lenovo T61p are presently the most used laptops on the International Space Station (image from Lenovo Blog)

This philosophy remained in place until around 2012, when the box chassis was morphed into a sturdy yet ultra-lightweight frame instead. This is unsurprising since today’s industry demands ever smaller designs, so Lenovo has taken it upon themselves to make sure that, despite this, the ThinkPad remains among the most rigid laptops in production.

So, what’s next init?

In terms of Linux, I’m gonna be spending a lot of my Christmas break building up experience by using it daily. I’m already partially using Linux daily on my R60e, and this blog post was entirely written whilst using it! My main laptop has only really been used for YouTube videos and stuff. As I learn more, a transition will take place until eventually my main laptop is only used for gaming. As for the ThinkPads, there will simply be more of them! 😀

I’ll be sure to post some updates as I gain more experience, learn things I wanna share, or highlight a new ThinkPad I’ve fallen in love with!

For now though, remember: “A candle isn’t a candle without wax, just like a Shark isn’t a Shark without its Lenovo ThinkPad X1 Carbon running Kali Linux 2018”.

“What?”
– Your brain after reading that last line


Algorithms and Ant Sex

Assignment: Y3 Artificial Intelligence

So, ya boi Shark here finished a really fun coursework last Friday!

The coursework was for one of my favourite modules – Artificial Intelligence for Game Developers. Since term started, we have looked at search algorithms, decision trees, artificial neural networks, fuzzy logic, finite automata, evolutionary computing, clustering, and self-organising maps. We also touched on deep learning, although that’s for later in the academic year. Now, the coursework asked us to select appropriate (and reasonable, considering time restrictions) techniques from that pool for NPC navigation and artificial life simulation.

Please read on!

NPC navigation

My NPC navigation implementation was your standard A* search stuff. We were asked to come up with a solution to get from A to B on a map made up of different terrain types. The types – grass, marsh (slows you down), road (speeds you up), river, and mountain (both block) – were given to us. The A* algorithm is widely known for pathfinding and graph traversal, mainly because it’s basically a great extension of the popular Dijkstra’s algorithm with heuristic (educated guessing) guidance. To demonstrate informed choice, however, I did implement alternative algorithms to show the difference in operation:

  • Breadth-first search (BFS)
    • Only suitable for unweighted graphs
    • Imagining the graph as a tree: the simple concept of looking at all leaves on the same branch level before going deeper
  • Depth-first search (DFS)
    • Only suitable for unweighted graphs
    • Concept of going to the lowest branch first, then recursively going back up to explore leaves on the way
  • Dijkstra’s algorithm
    • Designed for weighted graphs, but will act like BFS if all weights are equal

In theory, implementing A* is relatively straightforward – implement Dijkstra’s, then tack on some heuristic cost calculations. The solution is VC++ 2017 based, with the result being rendered in the console (there are no marks for aesthetics).

My heuristic – Manhattan distance
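The submitted solution has terrain weights and a few other bits I won’t reproduce here, but stripped to its core the idea looks roughly like this generic grid sketch (uniform move costs and made-up names, purely for illustration):

#include <queue>
#include <vector>
#include <cmath>
#include <cstdlib>

struct Node { int x, y; float g, f; };    // g = cost so far, f = g + heuristic

// Manhattan distance heuristic: grid moves are horizontal/vertical only.
float Heuristic(int x, int y, int goalX, int goalY)
{
    return static_cast<float>(std::abs(x - goalX) + std::abs(y - goalY));
}

// Returns the cheapest path cost from (startX,startY) to (goalX,goalY) on a
// width*height grid where blocked[y*width + x] marks impassable tiles,
// or -1 if no path exists. Uniform move cost of 1 for simplicity.
float AStar(const std::vector<bool>& blocked, int width, int height,
            int startX, int startY, int goalX, int goalY)
{
    auto cmp = [](const Node& a, const Node& b) { return a.f > b.f; };
    std::priority_queue<Node, std::vector<Node>, decltype(cmp)> open(cmp);
    std::vector<float> best(width * height, 1e30f);

    open.push({startX, startY, 0.0f, Heuristic(startX, startY, goalX, goalY)});
    best[startY * width + startX] = 0.0f;

    const int dx[] = {1, -1, 0, 0};
    const int dy[] = {0, 0, 1, -1};

    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == goalX && n.y == goalY) return n.g;

        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            if (blocked[ny * width + nx]) continue;

            float g = n.g + 1.0f;                       // uniform terrain cost
            if (g < best[ny * width + nx]) {
                best[ny * width + nx] = g;
                open.push({nx, ny, g, g + Heuristic(nx, ny, goalX, goalY)});
            }
        }
    }
    return -1.0f;                                       // no path found
}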

During development, I realised the core algorithm (the Dijkstra’s without heuristic) of the program was spitting out weird results, which were only resolved by adding the heuristic. Due to time constraints, I was not able to properly debug the issue before the deadline (but I did write up and document the issues).

Artificial life simulation

This is where this assignment got REALLY interesting! For this second part, we were asked to come up with a life simulation game that shows agents (entities) interacting dynamically. I developed a fungi-ant simulation (not really accurate to real life, but fine for this assignment) that shows how both populations change and affect each other over time. My approach was to power the agents with finite state machines and, specifically, to evolve the ants with a genetic algorithm to facilitate the dynamic development of their population. I opted to do this part in Unity (2018.2).

My three agents are fungi, worker ants, and queen ants. The fungi simply replicate themselves (stylised as one of four visually-different variants) via sporing, whilst also growing in size over time. The worker ants collect fungi via random selection and short-range sight and return them to the queen ant for the queen’s consumption. The queen ant, once well fed, will summon a worker – selected via fitness proportionate selection – to mate with.

This is what it looks like whilst running!

Finite-state machine

A finite-state machine (FSM), or finite automaton, is a model of computation where a machine operates in exactly one state at a time. Such a system is useful for basic AI since it changes an agent’s state based on a specified and strict set of variables. Each entity has its own FSM tailored to a generalised version of how it acts.

One of my state diagrams (for worker ants)
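The actual agents are Unity scripts, but boiled right down a worker ant’s FSM is just an enum and a switch over it – something like this C++-flavoured sketch (the states and flags are simplified stand-ins for the real ones):

enum class WorkerState { Searching, CollectingFungus, ReturningToQueen, Mating };

struct WorkerAnt {
    WorkerState state = WorkerState::Searching;
    bool seesFungus = false, hasFungus = false, atQueen = false, summoned = false;
};

// Called every tick: the ant only ever acts according to its single current
// state, and transitions happen only when the relevant condition is met.
void UpdateWorker(WorkerAnt& ant)
{
    switch (ant.state) {
    case WorkerState::Searching:
        // wander randomly...
        if (ant.summoned)        ant.state = WorkerState::Mating;
        else if (ant.seesFungus) ant.state = WorkerState::CollectingFungus;
        break;
    case WorkerState::CollectingFungus:
        // move towards the fungus...
        if (ant.hasFungus) ant.state = WorkerState::ReturningToQueen;
        break;
    case WorkerState::ReturningToQueen:
        // carry it back to the queen...
        if (ant.atQueen) ant.state = WorkerState::Searching;
        break;
    case WorkerState::Mating:
        // head to the queen to splice DNA...
        if (ant.atQueen) ant.state = WorkerState::Searching;
        break;
    }
}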

Genetic algorithm

The ant population is influenced by a genetic algorithm designed to refine the performance of the ants over time as new generations are born. The process is triggered by the queen ant, who summons a selected mate via a pheromone when she is in her mating state. The mate returns to the queen and some DNA splicing begins!

Representation of DNA

Obviously, the DNA is generalised. Otherwise it would take forever to specify realistic genes and visualise them. So DNA is simply a collection of three variables acting as the ant’s “genes”: speed, sight, and hunger saturation. All three are floats and are used throughout the simulation for actions such as moving about or detecting things around the individual.

Fitness and mate selection

Fitness is a value we use to summarise the overall “quality” of an ant’s DNA. It is simply defined as f = (speed + sight + hunger) / 3. We can use this to guide the queen’s selection of a mate so it’s not a totally pseudorandom selection. I opted for roulette wheel selection. It works by first summing the fitness of the whole ant population (Σf), then picking a random threshold between 0 and that sum and summing again – this time stopping when the running total passes the threshold. The ant the loop stops on is the selected mate. In code, the idea looks something like this (a C++-flavoured sketch rather than my actual Unity C#; the Ant struct and names are made up for illustration):
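#include <vector>
#include <random>

struct Ant { float speed, sight, hunger; };

float Fitness(const Ant& a) { return (a.speed + a.sight + a.hunger) / 3.0f; }

// Roulette wheel (fitness proportionate) selection: fitter ants occupy a
// bigger slice of the "wheel" and are therefore more likely to be picked.
const Ant* SelectMate(const std::vector<Ant>& ants, std::mt19937& rng)
{
    if (ants.empty()) return nullptr;

    // First pass: total fitness of the population (Σf)
    float total = 0.0f;
    for (const Ant& a : ants) total += Fitness(a);

    // Random threshold somewhere between 0 and Σf
    std::uniform_real_distribution<float> dist(0.0f, total);
    float threshold = dist(rng);

    // Second pass: stop when the running total passes the threshold
    float running = 0.0f;
    for (const Ant& a : ants) {
        running += Fitness(a);
        if (running >= threshold) return &a;
    }
    return &ants.back(); // numerical fallback
}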

Splicing DNA

When an attracted worker arrives at the queen, DNA splicing occurs, creating new DNA with a 50/50 chance splice of each of the three genes from the queen and the worker. A random mutation may also occur when this happens, which for better or for worse will add some variance to the population. This looks something like the following (again a C++-flavoured sketch continuing from the one above – the real version also has a bool guard, explained in the note below):
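// Sketch of the splice: each gene has a 50/50 chance of coming from the queen
// or the worker, with a small chance of mutation on top. mutationRate mirrors
// the value mentioned in the note below (0.05f); the mutation size is illustrative.
Ant SpliceDNA(const Ant& queen, const Ant& worker, std::mt19937& rng)
{
    std::uniform_real_distribution<float> chance(0.0f, 1.0f);
    std::uniform_real_distribution<float> nudge(-0.1f, 0.1f);
    const float mutationRate = 0.05f;

    auto pick = [&](float queenGene, float workerGene) {
        float gene = (chance(rng) < 0.5f) ? queenGene : workerGene; // 50/50 splice
        if (chance(rng) < mutationRate) gene += nudge(rng);         // rare mutation
        return gene;
    };

    return Ant{ pick(queen.speed,  worker.speed),
                pick(queen.sight,  worker.sight),
                pick(queen.hunger, worker.hunger) };
}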

Note: the bool happened was added to quickfix a bug where multiple ants were being spawned before the queen’s FSM switched from the mating state to its idle/feeding state.
Extra context: mutationRate is ridiculously low (defined as 0.05f)

In conclusion, then…

…this has been a ballin’ coursework! The second part in particular was a joy to implement and I hope to continue on with it in the future. Maybe there will be a follow up post in a few months’ time? 😀

Oh, here’s a parting screenshot:

Hastily-assembled – you can tell I made it because there’s an obligatory lens flare

Acknowledgements

The artificial life simulation, as it was, would not have been possible without these free assets:

Dissertation and Lorentz factor

Assignment: Y3 Dissertation, Uni

Ya boy Shark here about to spit mad equations at you!

Okay, maybe not like that. But now that the first milestone of my dissertation is past, I am feeling the need to talk about it a bit. This will be the first of a series of posts about what I’m doing.

An overview of the project

My dissertation title is “Relativity Simulation: Develop a physics-based game that utilises the principles of relativity, complete with GUI”.

The opportunity to study relativity for my third-year project was something I could not turn down or substitute with another choice, despite the depth of challenge in front of me. The impetus for the choice stems from my previous year’s team-based workshop assignment, which saw me developing the realistic gravitation physics behind our team’s game. With other aspects of the game put aside, the end result was a profound experience for me when I saw our own little Newtonian universe come to life. Now inspired to take on more advanced physics programming so that one day I can do it again more realistically and better, I saw this project as the opportunity to learn the physics I need through a hands-on project!

The first step of this journey was interpreting and narrowing down exactly what I want to do within the confines of the given title, since there is only so much I can do given the time constraints of the predetermined milestones and the time it will take me to digest everything I learn (something I hope these sorts of blog posts will help with). My project aim is to create an educational and interactive simulation that demonstrates how relativity works and how everything changes when you alter relativistic (and, by extension, applicable Newtonian) constants such as the speed of light, Planck’s constant, or the universal gravitational constant – which is what you need to do to complete game levels. Basically, you skew how all game objects interact because you’re altering how the virtual ‘universe’ works. I’m hoping this can evolve into a fun game where you don’t interact with the objects themselves to complete the levels, and the player gets a visual idea of what relativity is without being bombarded with straight-up equations, etc.

Milestone one saw me set out the aim and objectives, write my literature review and background research, and design the solution I’m working from (which will be a Windows OpenGL project). Right now, I do not have my game design specified since I am still in the process of researching relativity and deciding what aspects I am gonna use to build a game out of. Milestone two, which is due in early February, will ultimately include it though. However, thanks to the research I have done, I have indeed found an aspect of special relativity that I can potentially use as a game mechanic. So please, read on for part one in this journey to understand relativity!

But first, a bit of context

Actually, first we need a disclaimer: I possess up to A-level physics education, so please forgive any inaccuracies should there be any since I do not study physics as a part of my course. When I do study physics it is in my spare time – I’m trying my best!

Anyway. Relativity is perhaps one of the most important developments in physics of the 20th century, and today it is a prime example of successfully-observed theoretical physics. It encompasses two related theories proposed by Albert Einstein that were built upon the results and findings of other physicists such as Albert Michelson, Hendrik Lorentz, and Henri Poincaré. The two theories are special relativity (1905) and general relativity (1916). As stated before, special relativity (STR) will be the focus of this post since it’s within it that I found my potential ‘playing cards’ for this project. (Don’t worry about general relativity for now, since I will be posting about it closer to the next milestone once I’ve finished my research into it.)

So, STR! It describes the formerly-separate concepts of 3-dimensional space and 1-dimensional time as a 4-dimensional spacetime continuum, replaces the Newtonian Galilean transformations with Lorentz transformations (in layman’s terms: a method of examining different perspectives of time, size, and position in space), and states that the speed of light is an absolute constant.

Focusing specifically on the latter two tenets of STR, fixing the speed of light means only time, mass, and length change in calculations from now on. Hence we have consequences that you might have heard of, namely time dilation (events perceived at different times by observers at different velocities), relativistic mass (an object’s mass increases with velocity), and length contraction (an object’s length decreases with velocity). Each one is possible thanks to having a fixed reference (the speed of light as a constant) to calculate a velocity/light-speed ratio with. This ratio is a part of the Lorentz factor, which is key to this idea of what I can make a game out of!

Beware, maths and formulae ahead!

The Lorentz factor

The Lorentz factor is pretty much the key to calculating the best-known and most visually-representable special relativistic effects. The factor arises from derivations of the Lorentz transformation that allow us to measure how time, mass, and length are affected by time dilation, relativistic mass, and length contraction respectively. The base factor is expressed as:
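\sqrt{1 - \frac{v^2}{c^2}}

where v is the object's velocity and c is the speed of light. (Strictly speaking, textbooks define the Lorentz factor as the reciprocal of this, \gamma = 1 / \sqrt{1 - v^2/c^2}; I'm sticking with the 0-to-1 ratio form here because it's the bit that actually gets plugged into the calculations below.)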

As aforementioned in the context, the factor relies on a ratio comparing the velocity against the speed of light so that we know how the time, relativistic mass, and length of an object change when said object moves. The factor should return a value between 0 and 1, where 1 shows an absolute lack of velocity and 0 would mean the velocity is the same as the speed of light. We can then divide or multiply an object’s properties by the factor to calculate relativistic values. Below are three applications of this:
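Using t_0, m_0, and L_0 for the at-rest (proper) values and t, m, and L for what is measured at velocity v, the standard forms are:

t = \frac{t_0}{\sqrt{1 - v^2/c^2}} \quad \text{(time dilation)}

m = \frac{m_0}{\sqrt{1 - v^2/c^2}} \quad \text{(relativistic mass)}

L = L_0 \sqrt{1 - v^2/c^2} \quad \text{(length contraction)}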

If you look at the first two straight away, you might notice that if the velocity is the same as the speed of light (299,792,458 metres per second), the result of the factor (as aforementioned, it would be 0) would yield an error like “Math ERROR” on a scientific calculator or “#DIV/0!” in Excel. This is normal (duh, you can’t divide by 0!), but there will be some additional relativistic explanations later for each specific case.

Time dilation

Starting with time dilation, I’ll be trying to explain these applications in a way that can be somewhat more easily digested than what you might find on Wikipedia (for example).

So, observer time is the time measured for an object in motion, taking into account the relativistic effect of moving at extreme velocities (as opposed to “proper time”, which is time measured without any relativity taken into account). Suppose we have a stationary Shark named Wrex and an in-motion Shark named Princess travelling at one-quarter the speed of light (0.25c or 74,948,114.5 m/s). Let the proper time (from an independent clock) measure the time as 1PM (or 46,800 seconds from midnight).

My wonderful illustration
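For reference, the same working in equation form:

t = \frac{46{,}800}{\sqrt{1 - 0.25^2}} = \frac{46{,}800}{\sqrt{0.9375}} \approx 48{,}335 \text{ seconds}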

So we can see that at high velocities, Princess’ clock is no longer synchronised with Wrex’s observed time or the independent clock that provides us the proper time reference. The extra ~1,535 seconds or ~25.58 minutes is something no human can presently experience since we do not have any sort of vehicle that can propel us to the sorts of speeds required to experience it. If we COULD reach the speed Princess is travelling at, the subject would age slower since it would take them ~48,335 seconds to experience the same events a stationary observer does over 46,800 seconds. But let it be clear we do indeed ‘experience’ time dilation daily when we are in some sort of motion, although we ourselves cannot notice it. To put this into perspective: the fastest thing the average human could experience, a commercial jet aircraft, would register an observer time of 46800.0000000157 seconds, assuming the velocity is the average jet speed of 885 kilometres per hour or 245.833 m/s (source) and proper time is provided by the same clock used in the Wrex/Princess example. This is a difference only an atomic clock could register.

Relativistic mass

Relativistic mass (kilograms) is the measurement of “effective” mass that takes into account the increase in an object’s inertial mass at high velocities, with inertial mass essentially being the parameter that specifies the object’s resistance to changes in motion. Using the Lorentz factor, we can measure and prove that at higher velocities, the overall mass will increase. So now suppose we have a Shark named Benedict who has a “rest” mass of 50 kilograms and is travelling at one-quarter light-speed (0.25c or 74,948,114.5 m/s).

Another sick illustration (note: the final answer had an error, so I replaced it digitally)
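And again, the same working in equation form:

m = \frac{50}{\sqrt{1 - 0.25^2}} \approx 51.6398 \text{ kg}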

So Benedict’s mass increased by 1.6397779494322 kilograms at 0.25c! One consequence of relativistic mass is that an object with a rest mass greater than zero cannot travel at the speed of light – as an object approaches c, the object’s energy and momentum increase without bound. It is possible that you might have heard about this if you’ve ever looked into the challenges of deep-space travel within realistic and liveable time-frames.

Another interesting note is that we can calculate the relativistic mass of an object using its energy value, something possible thanks to Einstein’s famous equation that states mass and energy are equivalent:
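E = mc^2 \quad \Rightarrow \quad m = \frac{E}{c^2}

So if you know an object's total energy, dividing it by c² gives you its relativistic mass.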

Length contraction

Length contraction is the phenomenon of an object’s length being shortened in the direction of motion. Once again, we can use the Lorentz factor to calculate this, BUT the formula is set out differently than in the last two uses, since we are calculating the contraction of the value in question and not the increase. So suppose we have a Shark named Louise whose at-rest (actual) length is 1.5 metres and who is travelling at half light-speed (0.5c or 149,896,229 m/s).

The final sharktastic illustration!
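And the equation form one last time:

L = 1.5 \times \sqrt{1 - 0.5^2} = 1.5 \times \sqrt{0.75} \approx 1.299 \text{ metres}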

So Louise became ~20cm shorter at half-light speed, but there is not much more to say about length contraction other than it is also known as Lorentz-FitzGerald contraction since it was postulated by George FitzGerald in 1889 and Hendrik Lorentz in 1892.

So, can we conclude this already?!

Yes, we can!

I think all three examples presented today can be invaluable in my game design, since what I need are mechanics that can be incorporated into the game that are both innovative (since there are very few relativity-based games out there) and representable. The latter is going to be a challenge, since in order to visualise any of these I need to scale the universe and its constants down to manageable levels. So constants such as the speed of light might be hundreds or even thousands of times smaller than in actuality, making the in-game velocity/light-speed ratios large enough for changes to the effects to be noticeable on screen.

Anyway, that’s all folks! 😀 Hopefully this might be interesting for someone!



Notes

  • All shark-based diagrams are free to use provided they are referenced to this blog
  • All equation images are free to use without reference!

SharktallicA: The Blog Reboot… Again?

Life, Uni

Well, hello there!

Shark here for another SHARKTASTIC blog reboot, but this time it’s all grown up and serious. Okay. Maybe not quite. I mean, I’m still the same old workaholic but I’m wanting to have a little tidy up here. This blog has been on hiatus since August, and a number of events have happened since then that have required my undivided attention. Like I’m now 21! However, I’m back (hopefully)! 😀

So just prior to this post, I made almost all my ‘my life’ type posts private. Upon reflection, I believe some are a bit too personal for them to be readily out there. Plus they were quite cliche-y, TBH. All project posts are still public, although some of those projects are now terminated (see below). Going forward, I’m hoping to post more about projects, reviews, rants, and ramblings, but I will occasionally post stuff about life under the sarcastic and self-reflecting persona of Shark!

Now, let’s get into some Khalidonian updates!  Spoiler: “You don’t get spoilers ;)”

In regards to my work, I’ve pretty much dropped all projects for the foreseeable future to free up more me-time and allocate some of it towards self-studying. My Tholian Simulator game has been shelved indefinitely due to complexity and lack of time. Most of the planning phase was completed, so it is just a matter of dedicating time (when I have it) later on down the road. Path to 2265 Chapter 3 is postponed until next year since I’ve not had a lot of time to write creative stuff and I’m wanting to review the rest beforehand (since re-reading it, it feels like a 14 year old wrote it and there are so many typos). Everything else should be assumed to be cancelled. This freed time is partially channelled into University work; my notable works so far are my artificial life simulation coursework and my dissertation, which will both soon have their own posts since they’re both super cool!

Now in regards to my life, oh boy. This summer has been… interesting. As I tried to allude to, I’m not gonna post cliche stuff about being heartbroken, parental separation, and shit. I’ll just quickly summarise by saying that I approached each problem asking myself “What can I learn from this?”, and I believe I have the answers I need. I’m genuinely fine though, so no need to worry!

And yeah, that’s it really. I’ve joined a few societies at University, have good friends, and I’m now a student voice representative for my Faculty school – and I’m really enjoying the role! I’ve got some cool assignments and a dissertation this year that will add immensely to my portfolio, and I still have some good project ideas for when the time is right!

As a wise Shark once said, “when life gives you lemons, put them in the fridge”.

Wait, what?

Alright. Fair enough. Shark out!