This is a flow chart of our menu screen and how it will link up.
This will be a list of each Writer, with the following information for each:
- Featured Poem (based on their works)
- User Generated Poems
The user-generated poems will form a library of poems created near each writer's location on the Waterfront, sorted into that corresponding writer's field in the library.
This will contain options for the user to adjust. Right now all that entails is user models (a choice of a Pear, Car or Globe).
This will be a field in which a user can write their own poem. At present this is just a blank sheet in which they can type anything they want, but eventually we could present frameworks based on poem structures. These poems will be sorted based on their location data.
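Sorting a poem into the right writer's field by its location data could work as a nearest-sculpture lookup. The sketch below is a minimal Python illustration, assuming each sculpture has a known (lat, lon); the coordinates and writer set here are hypothetical placeholders, not real survey data.

```python
import math

# Hypothetical waterfront coordinates for each writer's sculpture (illustrative only).
WRITER_LOCATIONS = {
    "Katherine Mansfield": (-41.2889, 174.7844),
    "Vincent O'Sullivan": (-41.2901, 174.7871),
    "Bill Manhire": (-41.2915, 174.7890),
}

def distance_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) points."""
    lat1, lon1 = a
    lat2, lon2 = b
    # Equirectangular approximation - accurate enough over a few hundred metres.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371000

def assign_writer(poem_location):
    """Sort a user-generated poem into the field of the nearest writer."""
    return min(WRITER_LOCATIONS, key=lambda w: distance_m(poem_location, WRITER_LOCATIONS[w]))
```

A poem written a few metres from a sculpture would land in that writer's library; ties and poems written far from any sculpture would need a policy decision (e.g. a maximum radius) later.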
These tests were based around the creation of an overlay which specifies to the user what to focus the camera on to generate the NFT content. The overlay disappears once the NFT has been tracked.
Once tracking succeeds, a small square is programmed to appear; now that we know it works, this will be replaced with interactive content. The stingray NFT was successful except for the sizing of the NFT on the UI, which will be amended.
The text NFT, however, was unsuccessful, although its overlay's sizing was better than the stingray's.
Below is the closest we could get to matching the Text NFT to the marker.
For this site visit we aimed to test the functionality of the geolocation tracking, as well as live-testing the natural feature tracking (NFT) at each of the three Writers Walk sculptures (except Bill Manhire's).
At Vincent O’Sullivan’s sculpture we tested the NFT, with a small cube appearing at the bottom of the NFT marker to show it was working. The tracking took a while to initialise, however, so we took another hi-res image of the stingray to make a better NFT marker image for tracking.
We then proceeded to Bill Manhire’s sculpture to test the location tracking. This worked without any problems (the 3D object spun faster the closer we were to the sculpture).
Our next stop was Katherine Mansfield, and the geolocation tracking worked well with this sculpture too. The pop-up that says "augmented view" is what the user will tap to initiate the AR views (360 and NFT).
The NFT AR also worked once initialised, but it still needs work, so as before we have retaken some NFT images to revisit and refine this tracking. Overall, this site visit yielded some successful testing and has given us information to build on from this point onwards.
ARToolKit 6 is said to be released shortly. Although the developers have not given a specific date, they have promised to ship it by fall (at the latest, Q1 of next year). This means we may be able to upgrade our app to utilise the new features of ARToolKit 6.
The most notable feature upgrades in ARToolKit 6 are to tracking and NFT recognition, so hopefully these are something we are able to use. It also has planar tracking for dynamically finding flat surfaces in the environment around the user.
Here is a screenshot of the dynamic rope script with the 360 AR view.
The anchor box is a placeholder and will be swapped for a knob-like model that can rotate left or right to "reel" the rope, bringing the content closer to the user or pushing it further away, allowing for an interactive zoom and spatial organisation mechanic.
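The reel mechanic boils down to mapping knob rotation onto content distance, clamped between a near and far limit. A minimal Python sketch of that mapping follows; the reel ratio and distance limits are made-up values, and the real version would live in the Unity rope script rather than code like this.

```python
# Sketch of the "reel" mechanic, assuming the knob reports a rotation delta
# in degrees and content distance is clamped between near and far limits.
MIN_DIST = 0.5            # metres - closest the content may come to the user (assumed)
MAX_DIST = 5.0            # metres - furthest the rope lets the content drift (assumed)
METRES_PER_DEGREE = 0.01  # hypothetical reel ratio: knob degrees -> metres of rope

def reel(current_distance, knob_delta_degrees):
    """Rotate right (positive delta) to reel content in, left to let it out."""
    new_distance = current_distance - knob_delta_degrees * METRES_PER_DEGREE
    # Clamp so content never collides with the user or drifts out of reach.
    return max(MIN_DIST, min(MAX_DIST, new_distance))
```

Clamping is the important design choice here: without it, an enthusiastic user could reel the content through their own camera or lose it in the distance.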
The "nice to have" for this element would be tying it into a wind-direction API so that the ropes get pushed in the direction of the wind.
I was out for a walk on Friday the 16th and decided to test the application at the Katherine Mansfield sculpture. The location services worked well and mapped my position accurately (screenshot below).
The initialisation of an NFT on the sculpture itself worked, but it took too long and I needed to adjust my viewing angle to make it work. Once it did, though, the tracking was great and worked seamlessly.
For our next test we should work on the initialisation of the AR.
When developing mobile applications it's important to be aware of how intensive your code is and to optimise it at every point possible. One of the optimisations I have been looking into is dynamically loading NFT markers, so as not to create a huge wait when the application first loads.
From testing I have deduced that each NFT marker added to the scene adds around 1.5–2.5 seconds of load time. This may not seem like much, but considering that we plan to have 3 NFTs for each writer, and that we may end up with more than 3 writers in total, those numbers quickly add up.
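To make the arithmetic concrete, here is the startup wait if every marker were loaded upfront, using the measured 1.5–2.5 s per marker and 3 markers per writer; the writer counts are just illustrative.

```python
# Rough upfront-load estimate if every NFT marker were loaded at startup,
# using the measured 1.5-2.5 seconds per marker.
SECONDS_PER_MARKER = (1.5, 2.5)  # (best case, worst case) from testing
MARKERS_PER_WRITER = 3

def startup_wait(num_writers):
    """Return the (best, worst) case startup wait in seconds."""
    total_markers = num_writers * MARKERS_PER_WRITER
    return (total_markers * SECONDS_PER_MARKER[0],
            total_markers * SECONDS_PER_MARKER[1])
```

With 3 writers that is already 13.5–22.5 seconds before the app is usable, and 5 writers pushes it to 22.5–37.5 seconds, which is the wait the dynamic-loading workaround below the measurement is meant to avoid.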
Unfortunately ARToolKit 5 doesn't allow for dynamic NFT loading, so after some testing I have come up with a clever way to get around the problem. Instead of loading a new scene each time the player switches from MapView to ARView (or vice versa), I am instancing an ARController for each writer and only stopping and starting it when the user needs that writer's specific NFTs. Put simply, this allows the software to load only the NFTs relevant to the writer at the user's location. It also allows very quick transitions between MapView and ARView by utilising camera culling masks.
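The real implementation is a Unity ARController component, but the control flow can be sketched language-agnostically. The Python below is a simplified illustration of the idea, with hypothetical class and method names: one controller per writer, and only the controller for the writer at the user's location is ever running.

```python
# Sketch of the per-writer controller workaround (illustrative names only;
# the real code instances a Unity ARController per writer).
class WriterARController:
    def __init__(self, writer, markers):
        self.writer = writer
        self.markers = markers   # the NFT marker names belonging to this writer
        self.running = False

    def start(self):
        self.running = True      # would begin tracking this writer's NFTs

    def stop(self):
        self.running = False     # frees the camera for another controller

class ARSession:
    def __init__(self, controllers):
        self.controllers = {c.writer: c for c in controllers}
        self.active = None

    def enter_ar_view(self, writer):
        """Run only the controller for the writer at the user's location."""
        if self.active is not None and self.active is not self.controllers[writer]:
            self.active.stop()
        self.active = self.controllers[writer]
        self.active.start()

    def enter_map_view(self):
        """Pause tracking while the user is on the map."""
        if self.active is not None:
            self.active.stop()
```

Because controllers are only stopped and started, never destroyed and re-created, switching views is cheap; in the Unity version the camera culling masks then hide or show each view's content without a scene load.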
Today Jeff and I caught up with Seb in Evans Bay to discuss the CMS in relation to the app we are developing. We mainly wanted to gauge whether our Unity game engine could talk to his server, which he thinks it can. We plan to get a test of this working by next week.
We also may not be using XML files, as they do not work well with his SilverStripe CMS. The file type we may be using instead is called ssh.
This top photo demonstrates an uncleaned marker. You can see that the track points are chaotic and, although they follow the text, have a lot of "noise". To fix this I decided to digitally paint out the noise and focus only on the text. This way I would "help" the software realise what was important to track.
Here is my first attempt at that, using only one line. It worked well, but I realised that a single line of text didn't offer enough tracking data for a clean track, so I went up to 3 lines. Importantly, I chose the bottom 3 lines – the three that would be closest to the user. This worked very well and is definitely usable with some further refinement. Those refinements are:
- changing the angle of the initialising photo to better fit the point of view of a person looking at the poem
- reducing the number of words (width) to something that fits into the FOV (field of view) of a phone camera when turned horizontal
The top image shows the halfway point of the cleanup process.
A simple but important setting: I have limited the camera to sampling the image at a maximum of 25 frames per second, to reduce jitter.
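In our case this is a single camera setting in the toolkit, but the underlying idea is ordinary frame throttling: only process a frame when at least 1/25 s has passed since the last one you processed. A minimal sketch of that logic, with an assumed timestamp-based interface:

```python
# Sketch of capping camera sampling at 25 fps: a frame is processed only if
# at least 1/25 s has elapsed since the last processed frame.
MAX_FPS = 25
MIN_INTERVAL = 1.0 / MAX_FPS  # 0.04 seconds between processed frames

class FrameThrottle:
    def __init__(self):
        self.last_time = None

    def should_process(self, now):
        """'now' is the current frame's timestamp in seconds."""
        if self.last_time is None or now - self.last_time >= MIN_INTERVAL:
            self.last_time = now
            return True
        return False   # skip this frame; the camera is running faster than 25 fps
```

Dropping the extra frames means the tracker sees a steadier, lower-rate stream, which is where the reduction in jitter (and in CPU load) comes from.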