Map Concept 1

13 DEC

[Feature image]

Based on the feature image above, we will create a concept map of Wellington's Waterfront for the Literary Atlas. It will be stylised like that image, using the same colours and bold black outlines.

The following typefaces have been selected:

[Image: selected typefaces]


16 DEC

Here is the rough mock-up of the map in Illustrator. More work will be done on this tomorrow.

[Image: Illustrator mock-up]

  • Icons for specific points of interest will be made (right now these are small burgundy circles)
  • A swatch for the buildings will be created with finer lines (the current blinds pattern is a placeholder)
  • I want to fill all the water with text (as an abstract design choice)
  • Names of parks will have to be sourced
  • Find out all the boat names around the waterfront
  • User-relevant points of interest?


18 DEC

Here is the mock-up that has been completed thus far:

[Image: mock-up in progress]

I have made a building pattern swatch; however, I need to adjust the legibility of the text within the buildings. I like the style of the water with its changing text sizes, though I think I should make the overall text size smaller. I am really happy with the colours of the map.

I am conscious that the map appears too busy. I might complete the basis of the map in its entirety, then develop the style. We need this map to serve its purpose as a functional map as well as having a literary style.

Still to do:

  • Icons for specific points of interest will be made (right now these are small burgundy circles)
  • Find out all the boat names around the waterfront
  • User-relevant points of interest?


19 DEC

Final mock-up completed! The full map can be seen below.

[Image: map mock-up]

Text will need to be added next to further develop this map. The building swatches were also adjusted to make the building text more legible, and the outline of the buildings was thickened to make them stand out.

From this point on:

  • Text needs to be added
  • Icons needed for landmarks
  • Bridges need to be added

All things considered, the map is really coming together.

Unable to Dynamically Add Markers (Limitation of ARToolKit 5)

When developing for mobile applications, it's important to be aware of how intensive your code is and to optimise it at every point possible. One of the optimisations I have been looking into is dynamically loading NFT markers, so as not to create a huge wait when the application first loads.

From testing I have deduced that each NFT marker added to the scene adds around 1.5–2.5 seconds of load time. This may not seem like much, but when you consider that we are planning on having 3 NFTs for each writer, and that we may end up having more than 3 writers in total, those numbers quickly add up: 4 writers would mean 12 markers, or roughly 18–30 seconds of load time.

Unfortunately ARToolKit 5 doesn't allow for dynamic NFT loading, so after some testing I have come up with a clever way to get around the problem. Instead of loading a new scene each time the player switches from MapView to ARView (or vice versa), I am instancing an ARController for each writer and only stopping and starting it when the user needs that writer's specific NFTs. Put simply, this allows the software to only load the NFTs relevant to the writer in the user's location. It also allows for very quick transitions between MapView and ARView by utilising camera culling masks.
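Below is a minimal sketch of how this can look in Unity. Everything here, the class, the layer names, and the rig layout, is my own illustration rather than the project's actual code: one pre-built GameObject per writer, each holding an ARController configured with just that writer's NFT markers.

```
using UnityEngine;

// Hypothetical sketch, not the actual Literary Atlas code. One pre-built rig
// per writer, each holding its own ARController configured with only that
// writer's NFT markers. Activating a rig starts its controller (loading just
// those NFTs); deactivating it stops tracking again.
public class WriterARManager : MonoBehaviour
{
    [SerializeField] private GameObject[] writerARRigs; // one per writer, assigned in the Inspector
    [SerializeField] private Camera sharedCamera;       // renders both the MapView and ARView layers

    private int activeWriter = -1;

    // Called when location services place the user near a given writer.
    public void ActivateWriter(int writerIndex)
    {
        if (activeWriter == writerIndex) return;
        if (activeWriter >= 0)
            writerARRigs[activeWriter].SetActive(false); // stop the old writer's tracking
        writerARRigs[writerIndex].SetActive(true);       // start tracking this writer's NFTs
        activeWriter = writerIndex;
    }

    // Switching views is only a culling-mask change on the shared camera, so no
    // scene load is needed. "ARView" and "MapView" are assumed layer names.
    public void ShowARView()  { sharedCamera.cullingMask = LayerMask.GetMask("ARView"); }
    public void ShowMapView() { sharedCamera.cullingMask = LayerMask.GetMask("MapView"); }
}
```

Since each rig only loads its own markers when activated, the startup hit is limited to the writer actually being viewed rather than every NFT in the app.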

Precedent – Pokémon GO

What is Pokémon GO?

Travel between the real world and the virtual world of Pokémon with Pokémon GO for iPhone and Android devices! With Pokémon GO, you’ll discover Pokémon in a whole new world—your own! Pokémon GO uses real location information to encourage players to search far and wide in the real world to discover Pokémon.

The Pokémon video game series has used real-world locations such as the Hokkaido and Kanto regions of Japan, New York, and Paris as inspiration for the fantasy settings in which its games take place. Now the real world is the setting!

The Pokémon video game series has always valued open and social experiences, such as connecting with other players to enjoy trading and battling Pokémon. Pokémon GO’s gameplay experience goes beyond what appears on screen, as players explore their neighbourhoods, communities, and the world they live in to discover Pokémon alongside friends and other players.

Pokémon GO is developed by Niantic, Inc. Originally founded by Google Earth co-creator John Hanke as a start-up within Google, Niantic is known for creating Ingress, the augmented reality mobile game that utilizes GPS technology to fuel a sci-fi story encompassing the entire world. Ingress currently has 12 million downloads worldwide.

Source: https://pkmngowiki.com/wiki/Main_Page

How it works from our perspective

Way-finding

[Photo: Pokémon GO map view]

Way-finding works by using the phone's location services to determine the user's position in space. A radar is then used to activate points of interest around the user, such as PokéStops and Pokémon encounters. The player is represented by an avatar on the map itself.
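As a rough illustration of how the same radar mechanic could work in our own Unity build, here is a minimal sketch. The coordinates and the 50 m activation radius are invented placeholders, not values from Pokémon GO or from our design:

```
using System.Collections;
using UnityEngine;

// Hypothetical sketch: poll the phone's location services and flag a point
// of interest when the user walks within range of it.
public class POIRadar : MonoBehaviour
{
    private const float ActivationRadiusMetres = 50f; // assumed trigger distance

    // Placeholder point of interest (roughly the Wellington waterfront)
    private const float PoiLat = -41.2889f;
    private const float PoiLon = 174.7772f;

    IEnumerator Start()
    {
        if (!Input.location.isEnabledByUser)
            yield break; // the user has location services disabled

        Input.location.Start();
        while (Input.location.status == LocationServiceStatus.Initializing)
            yield return new WaitForSeconds(1f);
    }

    void Update()
    {
        if (Input.location.status != LocationServiceStatus.Running) return;

        LocationInfo here = Input.location.lastData;
        if (DistanceMetres(here.latitude, here.longitude, PoiLat, PoiLon) < ActivationRadiusMetres)
            Debug.Log("POI in range - activate it on the map");
    }

    // Haversine great-circle distance between two lat/lon points, in metres
    static float DistanceMetres(float lat1, float lon1, float lat2, float lon2)
    {
        const float R = 6371000f; // Earth radius in metres
        float dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        float dLon = (lon2 - lon1) * Mathf.Deg2Rad;
        float a = Mathf.Sin(dLat / 2f) * Mathf.Sin(dLat / 2f) +
                  Mathf.Cos(lat1 * Mathf.Deg2Rad) * Mathf.Cos(lat2 * Mathf.Deg2Rad) *
                  Mathf.Sin(dLon / 2f) * Mathf.Sin(dLon / 2f);
        return R * 2f * Mathf.Atan2(Mathf.Sqrt(a), Mathf.Sqrt(1f - a));
    }
}
```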

AR View

[Photo: Pokémon GO AR view]

In my opinion Pokémon GO is not true AR. It uses the camera view with a UI overlay that is positioned using the phone's gyro/accelerometer. This creates the illusion of a virtual object in physical reality, but the illusion can be debunked by moving the phone around in space: the virtual objects retain the same distance from the phone and shift position in space, whereas true AR objects would hold a "fixed" position in space.

Another thing to understand is that not many people use this view, as it complicates catching Pokémon. Many just use the default 3D view, which keeps the Pokémon in view at all times.
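A minimal sketch of that gyro-overlay technique, as I understand it (this is my reading of the mechanic, not Niantic's code): attach something like this to the camera rendering the virtual object over the camera feed, and the object appears locked to a direction but not to a place in the world.

```
using UnityEngine;

// Hypothetical sketch of a gyro-positioned overlay: the camera rotates with
// the device, so a virtual object placed in front of it appears anchored to
// a direction. Because it has no real-world position, walking around it
// breaks the illusion, as noted above.
public class GyroCamera : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true; // the gyroscope is off by default
    }

    void Update()
    {
        // Remap the gyro's right-handed attitude into Unity's left-handed space
        Quaternion attitude = Input.gyro.attitude;
        transform.rotation = Quaternion.Euler(90f, 0f, 0f) *
                             new Quaternion(attitude.x, attitude.y, -attitude.z, -attitude.w);
    }
}
```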

What we will use

This precedent will be used as a basis and inspiration for way-finding and for the AR instance that uses the phone's gyro sensor.

However, for our map we are leaning towards a 2D style, as opposed to the 3D style of Pokémon GO. We also need to be careful not to make the visuals too much like Pokémon GO, as it is a well-known application, but we can use it to inform the mechanics of our way-finding, as these are already well understood.

Precedent – WallaMe

What is it?

WallaMe is a free iOS and Android app that allows users to hide and share messages in the real world using augmented reality.

Users can take a picture of a surface around them and write, draw and add stickers and photos on them. Once the message (called Wall) is completed, it will be geolocalized and will remain visible through WallaMe’s AR viewer by everyone passing by. A Wall can also be made private, thus becoming visible only to specific people.

[Screenshot: WallaMe feed]

All the Walls created worldwide can be seen in a feed similar to those of social networks like Facebook and Instagram, and can be liked, commented on, and shared outside the app.

WallaMe is mostly used to create digital graffiti and for proximity messaging.

Source: https://en.wikipedia.org/wiki/WallaMe


How it works

WallaMe allows you to create your own markers based on photos and map AR instances to them.

Content can be in the form of:

  • Images
  • Doodles (drawn using the app)
  • Text

An example video can be found at the source below.

Source: http://sites.gsu.edu/cetl/2016/07/28/cool-tools-wallame/


What we will use

The ability to have user-created content is exciting, as users could leave their own messages and marks on the overall experience.

Other interactions, such as the ability to doodle or place content in space, could be very powerful.

One limitation is that the app can only see one AR instance at a time, which is chosen by the user.


Developer Meeting

Today Jeff and I caught up with Seb, in Evans Bay, to discuss the CMS in relation to the app we are developing. We mainly wanted to gauge whether our Unity game engine could talk to his server, which he thinks it can. We plan to get a test of this working by next week.

We also may not be using XML files, as they do not work well with his SilverStripe CMS. The file type we may be using is called ssh.
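For next week's test, something as small as the sketch below should tell us whether Unity can reach the server at all. The URL is an invented placeholder; the real endpoint and response format will come from Seb:

```
using System.Collections;
using UnityEngine;

// Minimal Unity-to-CMS connectivity check. The endpoint is hypothetical.
public class CMSConnectionTest : MonoBehaviour
{
    private const string TestUrl = "http://example.com/api/content"; // placeholder endpoint

    IEnumerator Start()
    {
        WWW request = new WWW(TestUrl); // Unity 5-era web request
        yield return request;           // wait for the response

        if (!string.IsNullOrEmpty(request.error))
            Debug.LogError("CMS unreachable: " + request.error);
        else
            Debug.Log("CMS responded: " + request.text);
    }
}
```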


NFT Field Test 1

Jeff and I went for a walk to test out his fixed NFT. To create the fix, he adjusted the feature points that the software tracks so that they cover only the text (and not the texture of the stone).

To test this, we developed a simple scene in which we could check the tracking and see if it worked on the actual typographic sculpture.
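The test scene logic itself is tiny. As a hedged sketch of the kind of event receiver involved (in the ARToolKit Unity plugin, a tracked object broadcasts marker-found/lost messages to a designated event receiver; our actual scene setup may differ):

```
using UnityEngine;

// Hypothetical event receiver for the tracking test scene: logs when the
// NFT marker is acquired and lost so we can watch tracking stability.
public class TrackingTestLogger : MonoBehaviour
{
    void OnMarkerFound(ARMarker marker)
    {
        Debug.Log("NFT marker found: " + marker.Tag);
    }

    void OnMarkerLost(ARMarker marker)
    {
        Debug.Log("NFT marker lost: " + marker.Tag);
    }
}
```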

The result, seen below, worked well and tracked nicely using three lines of text on the sculpture.

[Photo: tracking test on the sculpture]

We also had a look for other natural features we could track. Among the points of interest we found were hidden QR codes, which we later scanned and discovered were markers for orienteering on the waterfront.

https://wwfcourse.wordpress.com/

[Photo: QR code marker]

Furthermore, we will need to check the typographic sculptures in different conditions if possible. Below is the James K. Baxter sculpture at high tide; this would not be ideal for NFT tracking at all.

[Photo: James K. Baxter sculpture at high tide]

NFT Marker Clean Up

[Screenshot: uncleaned marker with noisy track points]

The photo above shows an uncleaned marker. You can see that the track points are chaotic: although they follow the text, they have a lot of "noise". To fix this I decided to digitally paint out the noise and focus only on the text. This way I would "help" the software realise what was important to track.

[Screenshots: cleaned-up markers]

Here is my first attempt at that, using only one line. It worked well, but I realised that one line of text didn't offer enough tracking data to get a clean track, so I went up to three lines. It is important to note that I chose the bottom three lines, the three that would be closest to the user. This worked very well and is definitely usable with some further refinement. Those refinements are:

  • changing the angle of the initialising photo to better match the point of view of a person looking at the poem
  • reducing the number of words (the width) to something that fits into the FOV (field of view) of a phone camera turned horizontal

The images below show the clean-up process halfway through.

[Images: clean-up in progress]

Creating Feature Sets – What's Best?

A few technical tests have been done to understand the appropriate number of track points to extract when converting an image into a natural feature set.

ARToolKit allows you to specify a few different parameters when customising the output. The first parameter is DPI. It's important to note that the higher the DPI, the larger the feature set and the slower it is to load. It should also be noted that mobile cameras have a maximum resolution (DPI), so it is redundant to go over this. With all of these factors, and after testing, the best DPI was 150.

The second parameter that can be defined is the initialisation threshold. This has a range of 0–4: 0 means that only a few points have to be detected for the NFT to load and track, while 4 means that a high number of points must be detected. I have found that a value of 1 works best, as it allows the user a little bit of give in getting the scene initiated.

The third parameter is the number of track points. This also ranges from 0 to 4, and the best option varies with the image. As a general rule of thumb, if an image has a lot of "noise" then a lower number of track points is recommended; however, if an image is clean or has been digitally created, then a setting of 3 is recommended. For the best results with "real-world" NFTs, I have found that digitally cleaning up the image and then extracting at level 3 works best (see the NFT Marker Clean Up post above for more info).
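For reference, these parameters are set when the feature set is generated with ARToolKit 5's genTexData utility. A rough example invocation is below; the flag names are from memory, so check them against the tool's own usage output:

```
# Generate the NFT feature-set files from a cleaned-up marker image.
# -dpi, -level (track points) and -leveli (initialisation threshold) are
# assumed flag names; run genTexData without arguments to confirm them.
genTexData cleaned_marker.jpg -dpi=150 -level=3 -leveli=1
```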