Practical Component

aim of this project

The project aims to create a procedural 3D tool in SideFX Houdini for the creation of single-image random dot autostereograms. The product will consist of two procedural generators, which will be combined into a single tool to create the final design. The product will be tested in several stages. Firstly, I will continually review and improve the procedural network throughout the creation process. Secondly, I will liaise with industry professionals to ensure the product meets professional industry standards and could be used in VFX productions. To identify applications of the product, I will also conduct a survey and an experiment (see the “Become part of the project” and “Eye strain experiment” pages to find out more).
This page presents several (but not all) iterations and describes the main issues and their solutions.

sidefx Houdini and procedural tools

SideFX Houdini is an industry-standard VFX software; it derives from PRISM, a computer graphics application developed by technical artists Kim Davidson and Greg Hermanovic in 1985. Almost a decade later, what started as a simple procedural graphics application turned into the “Academy Award of Merit” winning software, Houdini. (SideFX, n.d.a) Since 1996, Houdini has been used by the feature film industry to create mesmerising graphics. Houdini is written in C++, which allows it to run on different operating systems, such as Windows, Linux, and OSX, giving it the flexibility to be used across the industry. (SideFX, n.d.a)

tools creation

trial #1

For the first attempt, I decided to use a somewhat simpler piece of software, Adobe After Effects. The reason for this choice was that After Effects comes with a built-in displacement effect, which would be very helpful for creating stereograms. I also wanted to check my hypothesis and experiment in 2D software before moving on to Houdini.

Hypothesis: autostereograms can be created by shifting the pixels of the pattern image by a value between 0 and 10, depending on the amount of white and black in the disparity map
Test:
1. Create a repeatable pattern using “Fractal Noise”
2. Duplicate the pattern
3. Shift pixels, based on the disparity map’s white value, using “Displacement Map”
Outcome: a simple autostereogram with minor errors
Conclusion: Autostereograms can be created in After Effects with the use of just two modules, a pattern and a disparity map (see the sketch below)
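To make the hypothesis concrete, the sketch below reproduces the same pixel-shift logic outside After Effects. It is only a minimal illustration: the NumPy arrays stand in for the pattern and disparity layers, and the function name and toy inputs are my own placeholders, not part of the tool.

```python
import numpy as np

def shift_by_disparity(pattern, disparity, max_shift=10):
    """Shift each pixel of the tiling pattern horizontally by 0..max_shift px,
    scaled by the disparity map's white value (0 = black, 1 = white)."""
    h, w = pattern.shape[:2]
    out = np.empty_like(pattern)
    for y in range(h):
        for x in range(w):
            shift = int(round(disparity[y, x] * max_shift))
            out[y, x] = pattern[y, (x - shift) % w]   # sample the pattern at a shifted column
    return out

# Toy inputs: a random-dot pattern and a bright rectangle on the disparity map.
rng = np.random.default_rng(0)
pattern = rng.random((120, 160))
disparity = np.zeros((120, 160))
disparity[40:80, 60:100] = 1.0
stereogram = shift_by_disparity(pattern, disparity)
```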

Errors

1. A shift in pixels can be noticed when the pattern is too big
2. The disparity map creates two shapes: one extruded and one cut into the image
3. The image is difficult to view if the horizontal pattern column is too narrow

Solutions

1. A small repeatable pattern must be used
2. Solution yet to be explored
3. Pattern columns must be adjusted to the distance between a person’s eyes, usually about 6 cm

trial #2

The aim of this attempt was to remove the 2nd (unwanted) shape. The solution is to repeat the disparity map on each pattern tile, starting from the tile where the disparity shape should appear and finishing on the last tile on the right. It is important to note that the pixels on the disparity shape must be moved to the right.
The attempt was successful and the result can be seen on the right.
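As a rough illustration of this fix (not the actual After Effects setup), the sketch below applies the same right-shift to the shape’s pixels in every pattern tile from a chosen tile to the last one; the array layout, tile size and shift amount are assumptions made purely for the example.

```python
import numpy as np

def propagate_shape(row, shape_mask, tile_w, first_tile, shift):
    """row: one horizontal strip of repeated pattern tiles (H x tiles*tile_w).
    shape_mask: boolean H x tile_w mask marking the disparity shape.
    From first_tile to the last tile, the pixels under the mask are moved
    shift px to the right, so only one (raised) shape appears."""
    tiles = row.shape[1] // tile_w
    out = row.copy()
    for t in range(first_tile, tiles):
        tile = out[:, t * tile_w:(t + 1) * tile_w]    # view into the current tile
        shifted = np.roll(tile, shift, axis=1)        # the tile moved to the right
        tile[shape_mask] = shifted[shape_mask]        # replace only the shape's pixels
    return out

# Example: six 40x40 tiles, with a square shape appearing from the third tile on.
rng = np.random.default_rng(1)
tile = rng.random((40, 40))
row = np.tile(tile, (1, 6))
mask = np.zeros((40, 40), dtype=bool)
mask[12:28, 12:28] = True
fixed = propagate_shape(row, mask, tile_w=40, first_tile=2, shift=3)
```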

trial #3

Once I understood the principles behind autostereograms, I could slowly start creating slightly more complex images. I am focusing on the shapes (the disparity map), as the pattern is currently the easiest element to make and change.

trial #4

I did not expect creating stereograms to be this challenging. The next step was to create a disparity map in SideFX Houdini. I placed several placeholders in the scene and rendered the Z-depth pass. However, there were some issues that I had to fix before bringing the image into After Effects. One of them was the lack of blurriness around the edges of the geometries (the geometries did not blend naturally with the background). This was fixed quite easily in the settings, once I knew where to look.
The result is satisfying and can now be tested on more complex geometries.

trial #5

To understand how wallpaper stereograms work, and therefore better understand random dot autostereograms, which use the same principles, I made several tests in Houdini (again with simple placeholders). The results were satisfying and, more importantly, confirmed my hypothesis.
Conclusion: Objects placed at smaller intervals appear to be closer to the observer (camera)

trial #6

Creating a wallpaper stereogram was not a difficult task; it was just simple maths. It was a matter of adjusting several settings and ensuring the correct intervals and object widths. When looking with diverged eyes, the ducks, placed at larger intervals, seem to be farther away in space than the diamond, which is repeated at smaller intervals, and the bells seem to be the closest because the distance between them is the shortest. Another noticeable difference is the number of icons: when looking at the image normally, there are 6 icons of each type, but when the eyes diverge, an additional 7th icon appears in the image.
Conclusion: The distance between two objects that should appear at zero depth (in the middle, neither closer nor farther) must be the same as the distance between the two pupils, which is on average 60 mm.
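A quick numeric illustration of this rule follows; the icon names match the test above, but the millimetre values are invented purely to show the ordering, not measured from the image.

```python
EYE_DISTANCE_MM = 60  # average distance between pupils (the zero-depth interval)

# Illustrative repeat intervals only; the real test used different values.
icons = {"bells": 50, "diamonds": 55, "ducks": 65}

for name, interval in sorted(icons.items(), key=lambda kv: kv[1]):
    if interval < EYE_DISTANCE_MM:
        where = "in front of the image plane"
    elif interval > EYE_DISTANCE_MM:
        where = "behind the image plane"
    else:
        where = "on the image plane"
    print(f"{name}: repeated every {interval} mm -> appears {where}")
```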

trial #7

All the stereograms I have made so far have been very simple, and my goal is to create abstract and complex random-dot stereograms. However, the aforementioned “Displacement Map” effect in After Effects performs its own calculations to fake depth, and those calculations prevent me from using it for the stereograms. Paradoxically, it is too complex. So I had to go back to the start and create a random-dot stereogram by hand, to understand the method even better and see how I could translate it into Houdini's language. The image consists of 6 columns with a repeating character pattern; each column consists of 15 characters (including spaces and punctuation). Each pattern repetition works the same way as the icons in wallpaper stereograms. In the second column from the left, in lines 5 to 9, the pattern has been changed, and the changed pattern has then been duplicated and pasted into the columns to the right. For example, in the 5th line of the first column, the repeated pattern says “go healthy can ”, but in the 2nd column the letter “y” has been deleted and another character (in this case a space) has been added after “ can ”. From the 2nd column onwards, the pattern now says “go health can ”, without the “y” and with an additional space after “can”. Therefore, the word “can” has been moved one character to the left, which makes it appear as if it were in front of the rest of the words in the image.
Conclusion: Starting from the first changed column, I must apply the same change to all of the following columns. Random-dot stereograms are just a more complex version of wallpaper stereograms.
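The same column logic can be reproduced in a few lines. The sketch below rebuilds just one line of the hand-made image: the first column keeps the original pattern, while every column to its right uses the altered one, so “can” lands one character earlier.

```python
base = "go healthy can "      # the 15-character pattern used in every column
changed = "go health can  "   # "y" removed, an extra space added after "can"

plain_line = base * 6                  # a line without any change
shifted_line = base + changed * 5      # the change starts in the 2nd column

print(plain_line)
print(shifted_line)
```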

trial #8

After several trials in Houdini I got stuck, so I decided to go back to After Effects and check my theories again. This was a success, and all my observations and assumptions proved to be true. The outcome is visible on the right.
The downside of After Effects is that the effect must be created manually, which is time-consuming and does not allow for the representation of complex 3D scenes.
One success here is that the star is large and in the middle, as opposed to the previous tests, where the shape appeared only on one side.

trial #9

Going back to Houdini. The first attempt was to create a simple stereogram that would take the white value data from an image and transfer this information into a point shift along the X axis. This way, one can upload any disparity map (a black-and-white image) and the tool creates the stereogram automatically.
Problem: Since I am applying white values (of different lightness) to each point on the grid, I cannot apply the colourful pattern to the same points. Therefore, the stereogram is not complete: the shift works, but it is unviewable without the pattern.
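Outside Houdini, the core of this attempt looks roughly like the sketch below: sample the disparity map under every grid point and push that point along X by the sampled white value. The array layout, the orientation of the grid and the scale factor are all assumptions made for the example.

```python
import numpy as np

def shift_points_by_map(points, disparity, shift_scale=0.1):
    """points: (N, 3) grid positions with x and z in the 0..1 range.
    disparity: 2D array of white values in 0..1 (the black-and-white map).
    Returns a copy of the points shifted along X by the sampled white value."""
    h, w = disparity.shape
    out = points.copy()
    for i, (x, y, z) in enumerate(points):
        px = min(int(x * w), w - 1)    # column of the map under this point
        py = min(int(z * h), h - 1)    # row of the map (the grid lies in the XZ plane)
        out[i, 0] += disparity[py, px] * shift_scale
    return out
```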

trial #10

After several hours of searching and learning new tools, I found a solution to the problem described in Trial #9. Instead of using one grid, I would use two grids with the same size and number of points. I would then take the white value from the 1st grid to move the points, and the point colours from the 2nd grid.
Therefore, the white value (0 to 255) would become the position along the X axis, and the pattern colour would become the colour of the points. As simple as it sounds in theory, it was very difficult to recreate, as I could not find the proper node or VEXpression.
Problems: The points on each grid are ordered and numbered differently; the colour of the second grid shows instead of that of the first; how do I take attributes from the first grid and apply them to the second?
Solutions: To order the points, I can use the “Sort” node and sort the points along the Z axis; the grid that I want to see in the final version must be plugged into the 1st input; and there is a VEXpression that allows attributes to be transferred between inputs/geometries.
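Ignoring the Houdini-specific nodes, the two-grid trick can be pictured as in the sketch below: both point lists are first sorted into the same order (the role the “Sort” node plays in the network), then point i of the pattern grid keeps its own colour and takes its X shift from point i of the disparity grid. The function and variable names are mine, not the tool's.

```python
import numpy as np

def combine_grids(pattern_points, pattern_colors,
                  disparity_points, disparity_white, shift_scale=0.1):
    """pattern_points / disparity_points: (N, 3) positions of two identical grids.
    pattern_colors: (N, 3) RGB values of the pattern grid.
    disparity_white: (N,) white value per point of the disparity grid."""
    # Sort both grids the same way (here by Z, then X) so that indices line up.
    p_order = np.lexsort((pattern_points[:, 0], pattern_points[:, 2]))
    d_order = np.lexsort((disparity_points[:, 0], disparity_points[:, 2]))

    out_points = pattern_points[p_order].copy()
    out_colors = pattern_colors[p_order]
    out_points[:, 0] += disparity_white[d_order] * shift_scale  # white value drives X
    return out_points, out_colors
```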

trial #11

The network works, but it is very simple. My aim was to create stereograms based on 3D scenes in Houdini, and at the moment they are created from disparity maps, which requires separate software or exporting from and importing back into Houdini. So I spent the next few hours finding a way to transfer the Y-axis position data of one grid into X-axis position data on another grid. This required two separate scenes and several modifications to the VEXpression.
I am using the same grid, with the same number of points and the same point order, in both networks. The first network takes the points and projects them onto a 3D geometry, e.g. a sphere, using the “Ray” node. So now we have a sphere made of points and a flat surface of points around it. The 2nd network uses the Y-axis position of each of those projected points and moves the corresponding point on the second grid by the same value, but this time along the X axis (to the left or right). This way we can transfer the 3D objects and their positions in 3D space into a “flat” stereogram.
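In plain terms (and leaving the “Ray” projection itself to the node), the transfer boils down to the sketch below; as before, the point order is assumed to match between the two grids, and the names are placeholders rather than the actual network's.

```python
import numpy as np

def height_to_shift(pattern_points, projected_points, shift_scale=0.1):
    """projected_points: the first grid after projection onto the 3D scene.
    pattern_points: the untouched second grid with the same point order.
    The height (Y) gained by each projected point becomes a sideways (X)
    shift of the matching pattern point."""
    out = pattern_points.copy()
    out[:, 0] += projected_points[:, 1] * shift_scale
    return out
```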
This might sound simple (or chaotic), but the network and the process of finding this solution were extremely challenging. The VEXpressions were long and unclear, and I definitely used too many nodes.
In this version, the code and the node network have already been simplified; they are much more elegant and easier to read.
The next step is to create stereograms with bigger objects. At the moment, objects can only be positioned on the left.

trial #12

This test fixes several small things in the network. First, the network must be able to use bigger objects that cover the full width of the stereogram. Second, I want to be able to adjust the depth of the stereogram. And finally, all relevant parameters must be linked so that I can adjust the distance between the pupils (the column width) with a single click.
I have managed to do all of the above. In the new version, one can create bigger objects and see them properly in the stereogram. The depth of the stereogram is adjusted with the “Shift scale” slider, which is a float value and therefore supports fractions. Moreover, all parameters have been linked, so the column width can now be adjusted by changing one value in the grid options.
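As a toy illustration of that linking (outside Houdini, with invented names and formulas), every width-dependent value below is derived from a single column-width number, so changing it in one place updates the whole setup:

```python
def stereogram_settings(column_width_mm=60.0, columns=8, shift_scale=0.5):
    """Derive every width-dependent value from one column-width parameter."""
    return {
        "column_width_mm": column_width_mm,                    # ~ distance between pupils
        "image_width_mm": column_width_mm * columns,           # total stereogram width
        "max_shift_mm": column_width_mm * 0.1 * shift_scale,   # illustrative depth range
    }

# Changing the viewer's pupil distance updates everything in one call.
print(stereogram_settings(column_width_mm=62.0))
```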
Everything seems to work very well and I am very proud of the results; however, there are still several visible issues:
1. The objects seem to be slightly distorted on the right; this is probably caused by wrong vectors or other parameters in the “Ray” node.
2. If the object is above the grid, it is shown inside-out. This may again be caused by the “Ray” node, as it projects the points onto the closest faces when it should project them onto the farthest ones. This might be fixed by swapping the positions of the geometry and the point grid.

Conclusion: So far the network works and supports disparity maps and 3D scenes created in Houdini. Its width can be adjusted for different viewers and their pupil distance. The network is procedural and the user can easily change several parameters without breaking the network. There are several things that can be improved and I will be working on them soon.