The programs I talk about can be downloaded from the downloads page. If your computer isn't supported, email me and I'll try to make it work for you.
The Mandelbrot set is a fractal pattern made by taking a bunch of points on the complex plane and iterating \(z_{n+1} = z_{n}^2 + c\). If the point diverges to infinity, it isn't in the Mandelbrot set. If the point stays bounded, it is in the Mandelbrot set. When you do that for a grid of points whose real and imaginary parts are between about \(-1.5\) and \(1.5\), you get the iconic shape. After this, you can look at increasingly small sections of it that almost always have something interesting going on.
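As a worked example, take \(c = -1\): starting from \(z_0 = 0\),
\[ z_1 = 0^2 - 1 = -1,\quad z_2 = (-1)^2 - 1 = 0,\quad z_3 = -1,\ \ldots \]
so the sequence just bounces between \(0\) and \(-1\) forever, stays bounded, and \(-1\) is in the set. For \(c = 1\),
\[ z_1 = 1,\quad z_2 = 2,\quad z_3 = 5,\quad z_4 = 26,\ \ldots \]
which runs off to infinity, so \(1\) is not in the set.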
In this section, I will write a minimal version in Python that works and outputs an image. It will end with this:
To find a value at a point, you take its real and imaginary (\(x\) and \(y\)) coordinates. These get added to a complex number \(Z\), which then gets squared. Then, you check if the value is diverging. I'm not smart enough to know a good way to do this, so I just check if it goes above a given value. If it hasn't been too many cycles and the value of \(Z\) isn't diverging, you add the values of the point again, square it, check, and repeat.
def bounded(c_real, c_imag, loops, top):
    '''this takes in a point called C and checks if it goes above [top] within [loops] iterations'''
    z_real = 0
    z_imag = 0
    for i in range(loops):
        # add the point to Z
        temp_real = z_real + c_real
        temp_imag = z_imag + c_imag
        # this is just squaring a complex number
        z_real = (temp_real * temp_real) - (temp_imag * temp_imag)
        z_imag = 2 * temp_real * temp_imag
        # check if the absolute value of the number goes above top
        if (z_real * z_real) + (z_imag * z_imag) > top * top:
            # if it does, stop execution and say how long it took
            return i
    # if the point stayed bounded the whole time, return 0
    return 0
This code is nothing too exciting. The only reason that it is in a function is so that code later on can use it over and over.
Using those equations to square \(Z\) isn't necessary; you could import cmath and have Python do it for you. Writing it out by hand is just a way to show that complex numbers are nothing scary.
This section is just going to have some basic variables that you can mess around with to change which bit of the Mandelbrot set you want to see. Then, it will loop over the points to cover the region you have defined.
import numpy as np

# the central point for the image
real = 0
imag = 0
# how far along each part you want to see
real_radius = 1.5
imag_radius = 1.5
# change this to vary how big you want the final image to be
zoom = 70

# these use the parameters above to calculate the lowest and highest points on each axis
low_real = real - real_radius
high_real = real + real_radius
low_imag = imag - imag_radius
high_imag = imag + imag_radius

# this calculates how many pixels the image will have
width = int(2 * real_radius * zoom)
height = int(2 * imag_radius * zoom)

# makes an evenly spaced series of points along the real axis between the points
reals = np.linspace(low_real, high_real, width)
# goes over each of them
for real in reals:
    imags = np.linspace(low_imag, high_imag, height)
    for imag in imags:
        # checks if the current point is in the mandelbrot set and then prints the result
        output = bounded(real, imag, 100, 1000)
The only reason for numpy is that it makes linspace easy to use; if you want to make the program run slightly faster, you might be able to rewrite linspace yourself.
Be careful when increasing zoom!! The time taken to run this program is (roughly) proportional to the square of the zoom value.
For this, I will be using the .ppm format because it is really easy to use and you don't need anything else.
In short, there is a line to say that the colours are saved as text, then a line to say the width and height of the image, a line to say the maximum brightness of a pixel, and then the RGB values of each pixel.
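As a tiny example, this is a complete ASCII .ppm file for a two-pixel image, one white pixel followed by one black pixel:

P3
2 1
255
255 255 255 0 0 0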
...
height = int(2 * imag_radius * zoom)

print("P3")
print(f"{width} {height}")
print("255")

...

        output = bounded(real, imag, 100, 1000)
        print(f"{output} {output} {output}")
Running this code will output a bunch of numbers. Open up notepad (or any other text editor), paste those values in, and save the file as "image.ppm"; it should then open as an image (I've only tried this on a Mac so far). It will be rotated \(90^\circ\) from what it should be, but you can fix this by changing the order of the for loops.
To change what the rendering looks like, you can change the values at the top of the file. If you want to zoom in 10 times on the centre, divide the radii by 10 and multiply the zoom value by 10. To change the maximum number of iterations, change the third value in the call to bounded. To change the value that \(Z\) has to exceed before the point is counted as diverging, change the fourth value.
The full code is available on GitHub here.
This section is gonna assume a bit more technical knowledge.
After playing around with my Python implementation, I was getting bored of it running so slowly. I tried implementing multiprocessing, and this did result in an increase in speed, but not enough to be what I was looking for.
When re-implementing everything in Rust, I had some requirements:
1. Keep all of the functionality of the Python version
2. Have a graphical interface for controlling and viewing the render
3. Run faster than the Python version
Of these, number 2 would be the hardest to implement. 1 and 3 could basically be achieved by translating the code I had from Python to Rust line by line: functionality would be maintained, and by virtue of Rust being compiled and Python not, it would almost definitely run faster.
My biggest problem would be the GUI, because I didn't know any Rust GUI libraries, and because I didn't know that much Rust at the time, I wasn't really in a good position to learn one.
Because I already had a functional Python frontend, I could just send data from Rust to Python and not have to worry about anything. To do this, I decided to use UNIX sockets, because they were the first thing I came across and the standard networking libraries in Python and Rust worked with them on macOS.
Once I had the renderer working in Rust and giving the same output as Python, I began sending the information from Rust to Python. I struggled with this because it was something I was very unfamiliar with, I was teaching myself everything, and I was still getting to grips with Rust.
The main challenge with this system was that to have everything work correctly, I had to start the Rust program, and then the Python program. If I wanted to change what I was looking at, I would have to shut down both of them, change the values and then restart both of them in order. Because of how the sockets worked, if I started them in any other order, the code wouldn't work.
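Roughly, the Rust side of a setup like this can look something like the sketch below, using the standard library's UnixListener. The socket path and the one-byte-per-pixel framing are just placeholders, but it also shows why the start order mattered: the listening side has to bind the socket before the other side tries to connect.

use std::io::Write;
use std::os::unix::net::UnixListener;

fn main() -> std::io::Result<()> {
    // bind first: if the frontend tries to connect before this line runs,
    // its connection attempt just fails, hence the strict start order
    let listener = UnixListener::bind("/tmp/mandelbrot.sock")?;
    let (mut stream, _addr) = listener.accept()?;

    // pretend this is one rendered row of greyscale pixel values
    let row: Vec<u8> = vec![0, 64, 128, 255];
    stream.write_all(&row)?;
    Ok(())
}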
Eventually, I managed to make a version that could send the data from Rust to Python and display it. Later, when I wanted to use buttons in the GUI, meaning sending data back from Python to Rust, this system became too complex, so with my newfound confidence in Rust, I decided to learn egui, a Rust library for graphical interfaces.
At the end of this step of the project, I had managed to achieve all of my goals. The only reason I took more steps was that I was still interested in the project and I wanted to add more functionality.
I chose to use egui because it is pure Rust, so any computer that I can compile Rust on will run my code with minimal complexity. With something like ImGui, I wasn't as sure.
Using egui means that data doesn't need to be transmitted between Rust and Python, so it should run faster. I think this was the case, but I don't have the original Python code to test against. Once I could see an image, I wanted to be able to change the view quickly without having to close the window. For this, I added buttons to control x, y, window size, zoom and more.
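The structure of an egui app looks very roughly like the sketch below. The field name and buttons are just for illustration, and the exact run_native signature varies between egui versions, but the pattern of redrawing the whole UI every frame and reacting to clicked() is the important bit.

use eframe::egui;

struct Viewer {
    zoom: f64,
}

impl eframe::App for Viewer {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.label(format!("zoom: {}", self.zoom));
            // buttons like these are how the x, y and zoom controls work
            if ui.button("Zoom in").clicked() {
                self.zoom *= 2.0;
            }
            if ui.button("Zoom out").clicked() {
                self.zoom /= 2.0;
            }
        });
    }
}

fn main() -> eframe::Result<()> {
    eframe::run_native(
        "Mandelbrot viewer",
        eframe::NativeOptions::default(),
        Box::new(|_cc| Ok(Box::new(Viewer { zoom: 1.0 }))),
    )
}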
This was mostly the end of the evolution of the UI; from here it was mostly functional upgrades. The main changes were adding more buttons to control more things, almost limitless depth, and speed improvements.
When the renderer was zoomed in too far, black lines would appear on the screen. As this was pushed further, the lines would increase in density until the display was all black, and then the program would crash.
I'm gonna do my best to explain why this is, but it gets a bit complex.
If you want more detail on this, look at IEEE 754.
The way a floating point number stores a number is kinda similar to scientific notation, something like \(a\times 10^b\) (computers use base 2, but base 10 is easier to think about). In memory, a floating point number has a certain number of bits to store \(b\), the exponent, and a certain number of bits to store \(a\), the mantissa. This allows a floating point number to store very large and very small numbers.
This means that something like \(1.0\) has plenty of accuracy to spare, and something like \(0.00000001\) also has a lot of accuracy. This is just \(a=1\), with \(b\) changing. The issue comes when something like \(1.00000001\) needs to be represented.
\(1.0\) can be represented as \(1\times 10^0\), and \(0.00000001 = 1\times 10^{-8}\). Both of these only need a single digit in the mantissa. When \(1.00000001\) is stored, this is \(100000001\times 10^{-8}\), which obviously needs a lot more digits in the mantissa.
In most cases, this isn't an issue. However, the interesting parts of the Mandelbrot set sit at coordinates around 1 and 2 in magnitude, so zoomed-in views need numbers that take up a lot of space in the mantissa.
When a floating point number cannot be represented exactly, for example because the mantissa would have to be too big, it gets rounded to the nearest number that can be represented. This is why the black lines show up: some of the columns have a real component that cannot be represented.
Notably, the bars here are only vertical. This is because this was taken around \(\text{real}=-2, \text{imaginary}=0\). When the real component is small and the imaginary component is a lot larger, these lines are more horizontal.
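Here is a quick way to see the problem: at a deep enough zoom, the step between neighbouring pixel columns is far smaller than the gap between representable f64 values near \(-2\), so the columns collapse onto the same number. The centre and pixel step below are just made-up values for illustration.

fn main() {
    // near 2.0 in magnitude, neighbouring f64 values are on the order of 1e-16 apart,
    // so a pixel step of 1e-20 is far too small to survive the addition
    let centre: f64 = -2.0;
    let pixel_step: f64 = 1e-20;

    let column_a = centre + 3.0 * pixel_step;
    let column_b = centre + 4.0 * pixel_step;

    // both columns round back to exactly -2.0, so they render identically
    assert_eq!(column_a, centre);
    assert_eq!(column_a, column_b);
    println!("adjacent columns are indistinguishable at this zoom");
}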
To remedy this, floating point numbers with arbitrary precision were used. In code, this was done with the rug crate.
The first effect of this is that the code became SLOW. f64s have native hardware support on most CPUs, so they are fairly fast to do calculations with. Floats in rug do not, meaning that they are very slow.
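A minimal sketch of what using rug looks like, reusing the hypothetical pixel step from above: with 256 bits of mantissa instead of the 53 an f64 has, the tiny offset survives.

use rug::Float;

fn main() {
    // 256 bits of mantissa instead of the 53 bits in an f64
    let prec = 256;
    let centre = Float::with_val(prec, -2.0);
    let step = Float::with_val(prec, 1e-20);

    // unlike the f64 version, the tiny offset survives the addition
    let column = Float::with_val(prec, &centre + &step);
    assert!(column != centre);
}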
In addition to this, every time a calculation is done, extra effort must be taken to avoid allocating a new variable, which takes a long time. All of the speed increases so far have been from reducing the number of allocations, and in some circumstances this has been significant.
The most obvious way to do this was to use mul_sub_mul_mut, a built-in feature in rug that multiplies and subtracts some numbers in one call. If it was worth including in the library, it was probably faster than doing the operations separately, and later runs would verify this. Using it took some changes in the code, mostly algebraic manipulation.
One surprising way to make the rendering faster was to use my own function to square complex numbers. I'm not sure why, as I haven't read the source code for it, but the MPC library has a squaring procedure that takes a long time to run. To remedy this, I implemented the squaring algorithm myself. This resulted in a significant speed increase.
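To give an idea of what this looks like, below is a sketch of one iteration step, \(z = z^2 + c\), written with rug so that no new Floats are allocated, and with mul_sub_mul_mut doing the real part in one fused call. This is my reading of that method (self = self × mul − a × b) and not the exact code from the renderer.

use rug::{Assign, Float};

// One iteration step: z = z*z + c, written to avoid allocating new Floats.
// `old_re` is a scratch variable that the caller keeps around between calls.
fn step(z_re: &mut Float, z_im: &mut Float, c_re: &Float, c_im: &Float, old_re: &mut Float) {
    // keep a copy of the old real part, because z_re is overwritten first
    old_re.assign(&*z_re);

    // real part: re*re - im*im + c_re, with the multiply-subtract fused
    z_re.mul_sub_mul_mut(&*old_re, &*z_im, &*z_im);
    *z_re += c_re;

    // imaginary part: 2 * old_re * im + c_im
    *z_im *= &*old_re;
    *z_im *= 2;
    *z_im += c_im;
}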
While I previously assumed that if a function was already in the library it was probably well optimised, I now know that is not necessarily true. I used flamegraph to profile my code, and this allowed me to see that calls to this function were taking up the majority of the time for an image. I have since retried this method, but I am chasing smaller and smaller performance gains, so I have decided to take the route of adding more features to the code.
Now that I can show what the Mandelbrot set looks like at a particular slice, the obvious thing is to make one of those zooms that inspired me. Doing this is partially an excuse to carry on a project that I have enjoyed working on, but it also allows me to show other people what I have been working on.
Because I knew that I wanted to work on this functionality eventually, I split the code from the egui section into two files: one for the frontend, which handles the user controls and displays what they are looking at, and one for the renderer, which handles all of the maths.
This separation made it easy to move to a continuation of the project, but in a new direction. All I had to do was copy mandelbrot.rs to a new folder, and all of the functions I was used to were ready to go. After a while of development, it became time consuming to make changes to mandelbrot.rs in one project and then have to make the same changes in the other. To get around this, I moved mandelbrot.rs to its own repository, so I only have one copy of it to change. This gave me a better understanding of Rust's system for managing this kind of thing.
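One way Cargo supports sharing a crate like this between projects is a git dependency in Cargo.toml; the URL below is just a placeholder for wherever the shared renderer crate lives.

[dependencies]
# placeholder URL for the shared renderer crate
mandelbrot = { git = "https://github.com/<user>/mandelbrot" }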
To make it easier to make these videos without having to recompile the code every time I wanted to zoom in on a different point, I made it so the program reads parameters from a file. With this, only one program needs to be compiled, and the file can be changed with a fairly human-readable interface. To make this even easier, I added a button to the GUI to output the relevant data to generate a zoom.
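A sketch of what reading a parameter file like this can look like is below. The key = value format and the file name are assumptions for illustration; the real file the program reads may look different.

use std::collections::HashMap;
use std::fs;

// Reads lines like `real = -0.743643` into a map of key -> value strings.
// The format here is an assumption for illustration, not the program's actual file.
fn read_params(path: &str) -> HashMap<String, String> {
    fs::read_to_string(path)
        .expect("could not read parameter file")
        .lines()
        .filter_map(|line| line.split_once('='))
        .map(|(key, value)| (key.trim().to_string(), value.trim().to_string()))
        .collect()
}

fn main() {
    // "zoom.txt" is a placeholder name
    let params = read_params("zoom.txt");
    println!("{:?}", params.get("real"));
}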
Editing this file still required some understanding of what was going on, making it less user friendly. To fix this, I wrote a little utility script to generate the file for the user. This takes one extra step of complexity out of the chain.
The least user-friendly part of this is using ffmpeg to make the final video. This could be automated, but it would take away a lot of control over the frame rate, quality and final size of the video. The settings I default to are ffmpeg -framerate 30 -i %04d.png -c:v libx264 -pix_fmt yuv420p out.mp4, and this gives a fairly good video.
There is still more to write. Last update on 31/08/2023.