Here are the updated NFT markers. The stingray may need its marker level reduced, but we will be field testing to make the final tweaks. The marker photos have been shot with the viewer's angle and position taken into consideration, so they should offer quicker initialisation.
Here is a nice treatment of words in space, with animations that form them into paragraphs. This treatment works well because it ties in with our overall style of “the wind”.
ARToolKit 6 is said to be released shortly. Although the developers have not given a specific date, they have promised to have it shipped by fall (at the latest, quarter 1 of next year). This means that we may be able to upgrade our app to utilise the new features of ARToolKit 6.
The most notable feature upgrade in ARToolKit 6 is the improved tracking and NFT recognition, so hopefully this is something we are able to use.
It also has planar tracking for finding flat surfaces dynamically in the environment around the user.
Here is a screenshot of the dynamic rope script with the 360 AR view.
The anchor box is a placeholder and will be swapped for a knob-like model that can rotate left or right to “reel” the rope, bringing the content closer to the user or pushing it further away. This allows for an interactive zoom and spatial organisation mechanic.
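To sketch how the reel mechanic could map knob rotation to rope length, here is a small illustrative function. The name `reel`, the metres-per-turn ratio, and the clamp range are all my assumptions for the example, not the shipped implementation:

```python
def reel(current_length, rotation_degrees,
         metres_per_turn=0.5, min_length=0.5, max_length=5.0):
    """Rotating the knob right (positive degrees) reels the rope in,
    pulling content closer; rotating left lengthens it, pushing content
    away. The result is clamped to a usable range."""
    delta = (rotation_degrees / 360.0) * metres_per_turn
    new_length = current_length - delta
    return max(min_length, min(max_length, new_length))
```

The clamp keeps content from being reeled through the user or pushed out of comfortable viewing range.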
The “nice to have” for this element would be tying it into a wind-direction API so that the ropes get pushed in the direction of the wind.
When developing for mobile applications it's important to be aware of how intensive your code is and to optimise it at every point possible. One of the optimisations I have been looking into is dynamically loading NFT markers so as not to create a huge wait when the application first loads.
From testing I have deduced that each NFT marker added to the scene adds around 1.5–2.5 seconds of load time. This may not seem like much, but when you consider that we are planning on having 3 NFTs for each writer, and that we may end up with more than 3 writers in total, those numbers quickly add up.
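The arithmetic above can be sketched as a quick estimate (the helper name and defaults are illustrative, using the measured 1.5–2.5 s per marker):

```python
def estimated_load_time(writers, markers_per_writer=3,
                        seconds_per_marker=(1.5, 2.5)):
    """Rough best/worst-case load-time range if every NFT marker
    is loaded up front at application start."""
    total_markers = writers * markers_per_writer
    return (total_markers * seconds_per_marker[0],
            total_markers * seconds_per_marker[1])
```

For example, 4 writers with 3 markers each means 12 markers, or roughly 18–30 seconds of load time if nothing is deferred.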
Unfortunately ARToolKit 5 doesn't allow for dynamic NFT loading, so after some testing I have come up with a clever way to get around the problem. Instead of loading a new scene each time the player switches from MapView to ARView (or vice versa), I am instancing an ARController for each writer and only stopping and starting it when the user needs that writer's specific NFTs. To put it simply, this allows the software to load only the NFTs relevant to the writer in the user's location. It also allows for very quick transitions between MapView and ARView by utilising camera culling masks.
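A minimal sketch of this one-controller-per-writer pattern, using stand-in classes rather than ARToolKit's real Unity API (`WriterARController` and `ARControllerManager` are hypothetical names for illustration):

```python
class WriterARController:
    """Stand-in for an ARController holding one writer's NFT markers.
    Markers are loaded once at construction; start/stop only toggles
    tracking, avoiding a fresh NFT load on every view switch."""
    def __init__(self, writer, nft_markers):
        self.writer = writer
        self.nft_markers = nft_markers
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


class ARControllerManager:
    """Keeps one controller per writer and runs only the controller
    for the writer at the user's current location."""
    def __init__(self, controllers):
        self.controllers = {c.writer: c for c in controllers}
        self.active = None

    def activate(self, writer):
        if self.active is not None:
            self.active.stop()
        self.active = self.controllers[writer]
        self.active.start()
```

Because stopping a controller keeps its markers in memory, switching back to a writer's ARView is near-instant compared to reloading the NFT data.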
This top photo demonstrates a marker before cleanup. You can see that the track points are chaotic and, although they follow the text, have a lot of “noise”. To fix this I decided to digitally paint out the noise and focus only on the text. This way I would “help” the software realise what was important to track.
Here is my first attempt at that, using only one line. It worked well, but I realised that one line of text didn't offer enough tracking data to get a clean track, so I went up to 3 lines. It's important to note that I chose the bottom 3 lines – the three that would be closest to the user. This worked very well and is definitely usable with some further refinement. Those refinements are:
- changing the angle of the initialising photo to better fit the point of view of a human looking at the poem
- reducing the number of words (width) to something that can fit into the FOV (field of view) of a cellphone camera when turned horizontal
The image above shows the marker halfway through the cleanup process.
A few technical tests have been done to understand the appropriate number of track points to extract when converting an image into a natural feature set.
ARToolKit allows you to specify a few different parameters when customising the output result. The first parameter is DPI. It's important to note that the higher the DPI, the larger the feature set and the slower it is to load. It should also be noted that mobile cameras have a maximum effective resolution (DPI), so it is redundant to go over this. With all of these factors, and after testing, the best DPI was 150.
The second parameter that can be defined is the initialisation threshold. This has a range of 0–4: 0 means that only a few points have to be detected for the NFT to load and track, while 4 means a high number of points must be detected. I have found that a value of 1 works best here, as it gives users a little bit of leeway when initiating the scene.
The third parameter is the number of track points. This also ranges from 0–4, and the best option varies with the image. As a general rule of thumb, if an image has a lot of “noise” then a lower number of track points is recommended; however, if an image is clean or has been digitally created then a setting of 3 is recommended. For the best results with “real-world” NFTs I have found that digitally cleaning up the image and then extracting at level 3 works best – see the NFT clean-up blog post for more info.
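The three parameter choices above can be summarised in one small helper. This is only a sketch of the decision logic: the function name is made up, and the exact “lower” track-point level of 1 for noisy images is my assumption (the post only says lower than 3):

```python
def nft_extraction_settings(image_is_noisy, camera_max_dpi=150):
    """Settings that tested best: DPI capped at the camera's effective
    maximum (going higher is redundant and slows loading),
    initialisation threshold 1 (range 0-4) for some user leeway, and
    track-point level 3 (range 0-4) for clean or digitally cleaned
    images, lower for noisy ones."""
    return {
        "dpi": min(150, camera_max_dpi),
        "initialisation_threshold": 1,
        "track_points": 1 if image_is_noisy else 3,
    }
```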
A simple but important setting: I have limited the camera to sampling the image at a maximum of 25 frames per second. This is to reduce jitter.
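The sampling cap works like a simple rate limiter: frames arriving faster than the limit are skipped. A sketch of that idea (the `FrameSampler` class is an illustrative name, not ARToolKit's API):

```python
import time

class FrameSampler:
    """Caps how often incoming camera frames are sampled (e.g. 25 fps)
    so the tracker isn't updated faster than the cap, reducing jitter."""
    def __init__(self, max_fps=25):
        self.min_interval = 1.0 / max_fps
        self.last_sample = -float("inf")

    def should_sample(self, now=None):
        """Return True if enough time has passed to sample this frame."""
        now = time.monotonic() if now is None else now
        if now - self.last_sample >= self.min_interval:
            self.last_sample = now
            return True
        return False
```

At 25 fps the minimum interval is 40 ms, so any frame arriving sooner than 40 ms after the last sampled one is simply ignored.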