To digitally represent colors, one most often uses the RGB color model. By additively mixing the three primary light colors red, green and blue in varying intensities, one can represent a wide range of perceivable colors. The human eye contains three types of cone cells, each most sensitive to one of those colors, nearly all screens use pixels consisting of red, green and blue subpixels, and most image formats store their image data in the RGB color model.
However, there are other color models with different strengths. Cycling through the colors of the rainbow, for example, is a lot easier in the HSL (or HSV) color model, as it is controlled by the hue alone.
Rainbowify uses the HSL color model to rainbowify a given image. To do so, the image is first converted into a grayscale image (averaging all three color channels). Each pixel’s brightness is then interpreted as its hue, with its saturation and lightness set to the maximum. As a final touch, the hue gets offset by a pixel-position dependent amount to create the overall appearance of a rainbow. Source code is listed below and can also be downloaded.
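The following is a minimal Python sketch of the described procedure, not the original source; it assumes Pillow is installed, an input file named input.png, and a hue offset driven simply by the pixel’s coordinates (note that maximal lightness in HSL would yield pure white, so the sketch uses mid lightness to keep the hues visible).

import colorsys
from PIL import Image

img = Image.open("input.png").convert("RGB")
out = Image.new("RGB", img.size)
width, height = img.size
for x in range(width):
    for y in range(height):
        r, g, b = img.getpixel((x, y))
        gray = (r + g + b) / 3.0 / 255.0                       # average the channels
        hue = (gray + float(x + y) / (width + height)) % 1.0   # position-dependent offset
        rr, gg, bb = colorsys.hls_to_rgb(hue, 0.5, 1.0)        # full saturation, mid lightness
        out.putpixel((x, y), (int(rr * 255), int(gg * 255), int(bb * 255)))
out.save("rainbow.png")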
A recent PCG golfing question, When do I get my sandwich?, asked for a mapping between seven input strings (sandwich names) and the seven days of the week (indexed by number).
The first answer was made by a user named i cri everytim and utilized a string of characters which uniquely appear at the same position in all seven input strings, enklact, to perform the mapping in Python 2, requiring 29 bytes. After their answer, many answers appeared using the same magic string in different languages to reduce the number of bytes needed. Yet nobody reduced the byte count in Python.
Trying to solve the problem on my own, my first attempt used only the input strings’ last decimal digit to perform the mapping, though this approach did not save on bytes (read my PCG answer for more on this 30-byte solution).
After a few more hours of working on the problem, however, I managed to bring down the byte count by one whole byte.
I did so by using a simple brute-force algorithm to check for Python expressions which can be used to perform the sought-after mapping. To do so, I use Python 2’s backticks (`...`) to turn the found expression into a string — str(...) is three whole bytes longer — and index that string with the input strings’ lengths. It sure is not very readable, but it only takes 28 bytes — and that is all that matters.
lambda S:`6793**164`[len(S)]
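For context, backticks in Python 2 are shorthand for repr; the two expressions below are equivalent (repr of a long appends a trailing L, which is harmless here since only early characters get indexed, while str would drop it at the cost of three more bytes):

`6793**164`      # Python 2 backtick shorthand for repr
repr(6793**164)  # equivalent, but three bytes longer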
After finding the 28-byte function which uses a 9-byte expression (6793**164), I attempted to find an even shorter expression. And even though I have not yet found one, I did write a more general brute-force Python program (source code shown below; it can also be downloaded) than the one I linked to in my PCG answer.
Brute-forcing takes exponentially more time the more digits there are to check, so my brute-forcer still requires the user to decide for themselves which expressions should be tried. Three parameters define the search: a regex pattern the expression’s string should contain, an offset that pattern should ideally be located at and a target length. If an expression is found which takes as many or fewer bytes than the target length, an exclamation point is printed; a sketch of this search loop is shown below. Though this program did not prove useful in this case, there may come another challenge where an arithmetic expression golfer could come in handy.
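A simplified sketch of such a search loop, not the program itself; the pattern, the offset reporting and the search bounds below are placeholder assumptions.

import re

pattern = re.compile("0123456")  # hypothetical digit pattern to look for
target_length = 9                # maximum expression length in bytes

for base in range(2, 100):
    for exponent in range(2, 200):
        expr = "%d**%d" % (base, exponent)
        if len(expr) > target_length:
            break
        match = pattern.search(str(base ** exponent))
        if match:
            print("! %s (pattern at offset %d)" % (expr, match.start()))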
My program may not have found shorter expressions, but it did find some impressive ones (the +... at the end refers to an additional offset added to the string index, which — unsurprisingly — takes additional bytes):
2**2**24+800415
2**2**27+5226528
2**7**9+11719750
7954<<85
I also considered using division to generate long strings of digits which may match; the only problem is that Python’s floating-point numbers only have limited precision and thus do not produce long enough strings. Again using exponentiation (**) and bit shifting (<<), I could not come up with a working expression that takes fewer bytes.
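For instance, a double’s repr carries only about seventeen significant decimal digits:

print(repr(1.0 / 7))  # 0.14285714285714285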
While browsing StackExchange PCG [1] questions and answers, I came across a challenge regarding drawing the Swiss flag. In particular, I was interested in benzene’s answer, in which they showcased a brainfuck dialect capable of creating two-dimensional 24-bit color images. In this post I present this dialect with slight changes of my own, as well as an interpreter I wrote in Python 2.7 (source code is listed below and can also be downloaded).
Urban Müller’s original brainfuck (my vanilla brainfuck post can be found here) works similarly to a Turing machine, in that its memory consists of a theoretically infinite tape of individual cells which can be modified. What allows brainfuck X (or braindraw, as benzene called their dialect) to create color images is that, instead of a one-dimensional tape, a three-dimensional tape is used. This tape extends infinitely in two spatial dimensions and has three color planes. Each cell’s value is limited to a byte (an integer value from 0 to 255), which results in a 24-bit color depth.
Adding to brainfuck’s eight commands (+-<>[].,), there are two characters to move up and down the tape (^v) and one character to move forward in the color dimension (*): starting on the red color plane, continuing with the green and ending with the blue, after which the planes cycle back to red. benzene’s original language design, which I altered slightly, had three characters (rgb) to directly select a color plane. Whilst this version is supported by my interpreter (the flag --colorletters is necessary for that functionality), I find my color star more brainfucky — directly calling color planes by their name seems nearly readable. brainfuck’s vanilla eight characters still work in the same way, so brainfuck X can execute any vanilla brainfuck program [2]. Also, there still is plaintext output — the tape’s image is a program’s secondary output. A sketch of how the extended commands could act on the tape is shown below.
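The following Python lines are a minimal sketch, not an excerpt from my interpreter, of how the movement commands could act on the three-dimensional tape; loops, I/O and the program counter are omitted.

from collections import defaultdict

tape = defaultdict(int)  # maps (x, y, plane) to a byte value
x = y = 0                # position on the two spatial axes
plane = 0                # 0 = red, 1 = green, 2 = blue

def step(command):
    global x, y, plane
    if command == ">": x += 1
    elif command == "<": x -= 1
    elif command == "v": y += 1
    elif command == "^": y -= 1
    elif command == "*": plane = (plane + 1) % 3  # red -> green -> blue -> red
    elif command == "+": tape[(x, y, plane)] = (tape[(x, y, plane)] + 1) % 256
    elif command == "-": tape[(x, y, plane)] = (tape[(x, y, plane)] - 1) % 256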
Having executed the final brainfuck instruction, the interpreter prints out the tape to the terminal using ANSI escape codes. Because of these, the color depth is truncated in the terminal view, as only 216 colors are supported. [3] For the full 24-bit color depth output, I use the highly inefficient Portable Pixmap format (.ppm) as the output image file format. To open .ppm files, I recommend using the GNU Image Manipulation Program; specifying the output file name is done via the --output flag.
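A sketch of how such a plain-text (P3) pixmap could be written, assuming a tape dictionary as in the sketch above; the actual interpreter’s implementation may differ.

def write_ppm(filename, width, height, tape):
    with open(filename, "w") as f:
        f.write("P3\n%d %d\n255\n" % (width, height))  # plain PPM header
        for y in range(height):
            for x in range(width):
                r, g, b = (tape[(x, y, p)] for p in range(3))
                f.write("%d %d %d\n" % (r, g, b))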
The Swiss flag image above was generated by benzene’s braindraw code (see their StackExchange answer linked to above); the resulting .ppm file was then scaled and converted using GIMP. Interpreter command: python brainfuckx.py swiss.bfx -l -o swiss.ppm
Usage
Being written in pure Python, the interpreter is completely controlled via the command line. The basic usage is python brainfuck-x.py <source code file>; by using certain flags the functionality can be altered.
--input <input string>, -i <input string> specifies brainfuck’s input and is given as a byte stream (string).
--simplify, -s outputs the source code’s simplified version; the source code with all unnecessary characters removed.
--colorstar selects the color star color plane change model which is the default.
--colorletters, -l selects the color letter color plane change model.
--silent stops the interpreter from outputting warnings, info messages and the final tape.
--maxcycles <cycles>, -m <cycles> defines the maximum number of cycles the brainfuck program can run; the default is one million.
--watch, -w allows the user to watch the program’s execution.
--watchdelay <delay> defines the time in seconds the interpreter sleeps between each watch frame.
--watchskip <N> tells the interpreter to only show every 𝑁th cycle of the execution.
--output <output file name>, -o <output file name> saves the final tape as a .ppm image file.
A classic quine is a program which outputs its own source code. At first, such a program’s existence seems weird, if not impossible, as it has to be so self-referential that it knows everything about itself, including how to know about itself. However, writing quines is possible, if not [1] trivial.
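For illustration, a classic one-line Python quine (not one of this post’s programs): the string holds the program’s skeleton and is formatted into itself via %r.

s='s=%r;print(s%%s)';print(s%s)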
A cyclic quine, then, is a program which outputs source code differing from its own, yet that source code outputs the original source code when run (the cycle length could also be greater than one). So when running source codes Ψ and Φ, they output source codes Φ and Ψ.
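A minimal illustration of such a pair in Python (again, not this post’s programs): saved as two separate files, each program prints the other’s source, the two differing only in the value of n.

n=0;s='n=%d;s=%r;print(s%%((n+1)%%2,s))';print(s%((n+1)%2,s))
n=1;s='n=%d;s=%r;print(s%%((n+1)%%2,s))';print(s%((n+1)%2,s))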
Therefore, when one saves such a pair of programs as q0.py and q1.py, one can create each source code from the other (the following bash commands [2] will not change the files’ contents).
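Presumably the commands were along these lines (the original listing is not reproduced here); each regenerates one file from the other, so running them leaves both files unchanged.

python q0.py > q1.py
python q1.py > q0.py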
I wrote my first ever Mandelbrot set renderer back in 2015, using Python to slowly create fractal images. Over a year later, I revisited the project with a Java version which — due to its code actually being compiled — ran much faster, yet had the same clunky interface: a rectangle the user had to draw and a key they had to press to change the view to the selected region. In this post, over half a year later, I present my newest Mandelbrot set fractal renderer (download the .jar), written in Java, which both runs fast and allows a much more intuitive and immersive walk through the complex plane by utilizing mouse dragging and scrolling. The still time-demanding task of rendering fractals — even in compiled languages — is split up into a low-quality preview rendering, a normal-quality display rendering and a high-quality 4K (UHD-1 at 3840 × 2160 pixels to keep a 16:9 image ratio) rendering, all running in separate threads.
Rainbow spiral
The color schemes were also updated: apart from the usual black-and-white look, there are multiple rainbow color schemes which rely on the HSB color space, zebra color schemes which use the iteration count modulo some constant to define the color, and a prime color scheme which tests whether the number of iterations taken is prime.
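A sketch, in Python rather than the renderer’s Java, of how the described schemes could map an iteration count to a color; the constants and scheme names are illustrative assumptions (HSB is the same model as HSV).

import colorsys

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def color(iterations, max_iterations, scheme="rainbow"):
    if scheme == "rainbow":  # hue driven by the iteration count
        return colorsys.hsv_to_rgb(float(iterations) / max_iterations, 1, 1)
    if scheme == "zebra":    # iteration count modulo a constant
        return (1, 1, 1) if iterations % 2 else (0, 0, 0)
    if scheme == "prime":    # prime iteration counts light up
        return (0, 1, 0) if is_prime(iterations) else (0, 0, 0)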
Zebra spiral
Apart from the mouse and keyboard control, there is also a menu bar (implemented using Java’s JMenuBar) which allows for more conventional user input through a proper GUI.
Controls
Left mouse dragging: pan view,
Left mouse double click: set cursor’s complex number to image center,
Mouse scrolling: zoom view,
Mouse scrolling +CTRL: pan view,
‘p’: render high definition fractal,
‘r’: reset view to default,
‘w’, ‘s’: zoom frame,
Arrow keys: pan view,
Arrow keys +CTRL: zoom view,
Menu bar
“Fractal”: extra info about current fractal rendering,
“Color Scheme”: change color scheme and maximum iteration depth,
“HD”: controls for high definition rendering,
“Extra”: help and about.
Blue spiral
A bit more on how the three threads are implemented: whenever the user changes the current view, the main program thread renders a low-quality preview and immediately draws it to the screen. In the background, the normal-quality thread (its pixel dimensions match the frame’s pixel dimensions) is told to start working. Once this medium-quality rendering is finished, it is preferred to the low-quality rendering and gets drawn on the screen. If the user likes a particular frame, they can initiate a high-quality rendering (4K UHD-1, 3840 × 2160 pixels) either by pressing ‘p’ or selecting “HD ❯ Render current frame”. This high-quality rendering obviously takes some time and a lot of processing power, so this thread is throttled by default to allow the user to further explore the fractal. Throttling can be disabled through the menu option “HD ❯ Fast rendering”. There is also the option to tell the program to exit upon having finished the last queued high-definition rendering (“HD ❯ Quit when done”). The high-definition renderings are saved as .png files and named with their four defining constants: Zim and Zre define the image’s complex center, Zom defines the complex length above the image’s center and Clr defines the maximum number of iterations.
Another blue spiral
Just to illustrate how resource-intensive fractal rendering really is: a 4K fractal at 3840 × 2160 pixels with an iteration depth of 256 would in the worst-case scenario (no complex numbers actually escape) require 3840 · 2160 · 256 ≈ 2.1 · 10⁹ iterations, each costing several double multiplications. If you had a super-optimized CPU which could do one double multiplication per clock tick (which current CPUs definitely cannot) and ran at 4.00 GHz, it would still take that massively overpowered machine on the order of seconds. [1] Larger images and higher maximum iterations only increase the generated overhead. The program’s source code is listed below and can also be downloaded (.java), though the compiled .jar can also be downloaded.
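The worked arithmetic behind that estimate, assuming roughly three double multiplications per z ↦ z² + c iteration (that constant is my assumption, not a measured figure):

multiplications = 3840 * 2160 * 256 * 3  # worst case: every pixel iterates fully
seconds = multiplications / 4.0e9        # one multiplication per tick at 4.00 GHz
print(seconds)                           # roughly 1.6 seconds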
Green self-similarity
Unrelated to algorithmically generating fractal renderings, I recently found a weed which seemed to be related to the Mandelbrot set and makes nature’s intertwined relationship with fractals blatantly obvious. I call it the Mandel Weed.
Most images nowadays are represented using pixels. They are square, often relatively small and numerous, come in different colors and thereby do a good job of being the fundamental building block of images. But one can imagine more coarse-grained and differently shaped pixels. An interesting fact is that in most monospace fonts, two characters placed right next to each other (for example ‘$$’) occupy roughly a square area. So simple ASCII characters can indeed be used to approximately describe any ordinary image. Asciify does exactly this: it takes in an image and some optional parameters and maps the pixels’ intensity onto a character set. Both the large and small default character sets are taken from a post by Paul Bourke.
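A minimal Python sketch of this idea, not the original asciify.py; it assumes Pillow, an input file named img.png and a small illustrative character ramp.

from PIL import Image

CHARSET = " .:-=+*#%@"  # dark-to-light ramp (illustrative placeholder)

img = Image.open("img.png").convert("L").resize((40, 40))  # grayscale thumbnail
width, height = img.size
for y in range(height):
    row = ""
    for x in range(width):
        intensity = img.getpixel((x, y)) / 255.0
        row += 2 * CHARSET[int(intensity * (len(CHARSET) - 1))]  # two characters per pixel are roughly square
    print(row)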
In conjunction with asciify.py, I wrote index.py, which asciifies a bunch of images and renders them in html form; it also creates an index. All images asciified for this post can be viewed through this index [1].
Converting an image to its asciified form works best when there is a lot of contrast in the image. Because of this, some pre-processing of the image may be required for best results (all images shown were only cropped or rotated). The built-in color functionality also only knows of 8 colors, so bright and distinct colors look best, as they differentiate most clearly from one another. The asciified image’s size also plays a role: the larger it is, the better the characters blend together and appear to be one image.
Asciify is operated on a command prompt: python asciify.py img.png. To parse arguments, the built-in Python module argparse is used. The images are opened and read using the Python Imaging Library module PIL, which needs to be installed for this program to work. Optional arguments include --size N, where the maximum size can be specified; --invert and --smallcharset, which can sometimes increase the asciified image’s visual appeal; and --html, which will output an html file to be viewed in a browser. To see the program’s full potential, simply run python asciify.py --help [2]. Source code for both asciify.py and index.py can be downloaded; the former is also listed below.
The two examples above use the color mode, though certain images also work in default black and white mode, such as this spider I photographed.
Today is the first day of July in the year 2017. On this day there is a point in time which can be represented as 1.7.2017, 17:17:17. To celebrate this (symbolically speaking) 17-heavy day, I created a list of 17 integer sequences which all contain the number 17. All sequences were generated using a Python program; the source code can be viewed below or downloaded. Because the following list is formatted using LaTeX, the program’s plaintext output can also be downloaded.
Prime numbers 𝑛.
Odd positive integers 𝑛 for which 𝑛 + 1 and 𝑛 - 1 have an equal number of Goldbach sums (representations as a sum of two primes).
Positive integers 𝑛 which are part of a Pythagorean triple excluding 0: 𝑛² + 𝑎² = 𝑏² or 𝑎² + 𝑏² = 𝑛² with positive integers 𝑎, 𝑏.
Positive integers 𝑛 where is prime
Positive integers 𝑛 with distance 1 to a perfect square.
Positive integers 𝑛 where the number of perfect squares (including 0) less than 𝑛 is prime.
Prime numbers 𝑛 where exactly one of 𝑛 - 2 and 𝑛 + 2 is prime.
Positive integers 𝑛 whose three-dimensional vector’s floored length is prime.
Positive integers 𝑛 which are the sum of a perfect square and a perfect cube (excluding 0).
Positive integers 𝑛 whose decimal digit sum is the cube of a prime.
Positive integers 𝑛 for which is a perfect square.
Prime numbers 𝑛 for which is prime.
Positive integers 𝑛 where is a substring of 𝑛.
Positive integers 𝑛 whose decimal reverse is prime.
Positive integers 𝑛 who are a decimal substring of .
Positive integers 𝑛 whose binary expansion has a prime number of 1s.
Positive integers 𝑛 whose 7-segment representation uses a prime number of segments.
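As an example of how such sequences can be generated, a small Python sketch for the last one, not the original program; the standard segment counts per decimal digit are hard-coded.

SEGMENTS = {"0": 6, "1": 2, "2": 5, "3": 5, "4": 4,
            "5": 5, "6": 6, "7": 3, "8": 7, "9": 6}

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print([n for n in range(1, 100)
       if is_prime(sum(SEGMENTS[d] for d in str(n)))])
# contains 17, whose representation uses 2 + 3 = 5 segments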
Today it is June the 28th, which means that it is τ day! The irrational and transcendental constant τ is what defines the ratio of a circle’s circumference to its radius, which obviously makes it an important constant. To celebrate this day, I created a C program which calculates τ by randomly creating 9-dimensional points inside the 9-dimensional hypercube and testing if they are inside the 9-dimensional hypersphere with its center located at [1].
Today’s τ time is 3:18:53, as τ = 6.2831853… and the date already supplies the leading 6 and 28. As one does not know whether the time is specified as ante or post meridiem, there are actually two perfectly acceptable τ times.
The formula used for calculating τ is derived from a 9-dimensional hypersphere’s hypervolume formula, V = (2 · τ⁴ / 945) · r⁹ (see this Wikipedia article).
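The original program is written in C; the following Python sketch illustrates the same Monte Carlo approach, assuming points drawn uniformly from [-1, 1]⁹ and the hypersphere centered at the origin. Solving the hypervolume formula above for τ gives τ = (945 · 2⁹ · f / 2)^(1/4), where f is the fraction of points landing inside.

import random

def estimate_tau(samples=10 ** 6):
    inside = 0
    for _ in range(samples):
        point = [random.uniform(-1, 1) for _ in range(9)]
        if sum(c * c for c in point) <= 1:  # inside the unit 9-sphere?
            inside += 1
    fraction = float(inside) / samples      # approximates 2 * tau**4 / (945 * 2**9)
    return (945 * 2 ** 9 * fraction / 2) ** 0.25

print(estimate_tau())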
The constant gets calculated to a value close to the real τ = 6.283185…, yielding only a small percent error.
Thereby, this C program’s approximation is not too far off. [2] The source code is listed below and can also be downloaded here. Instructions on how to compile it using GCC can be seen below or in the source code.