Prototyping Drawing Interfaces

Multi-Input Software Prototype





Skills:
Processing
Android 
This working prototype explores a gestural radial menu, accessed by the user’s non-dominant hand, within the context of a drawing application. It was inspired (partially) by the irritation of constantly switching tools in Paper by 53 and Procreate.


Currently Implemented Gestures:
  • Change Tools (Marker, Pen, Eraser)
  • Change Brush Color
  • 3-Finger Rotate
  • 3-Finger Drag

Tangible multi-input devices are becoming more and more commonplace. Microsoft and Apple are pushing sophisticated new products built around multi-input: the Apple Pencil and the Surface Pen + Dial. However, many of the applications that creatives actually use don’t take full advantage of the hardware’s capabilities, especially on the iPad.

Having a stylus affords many new (and I would argue better) interaction paradigms for drawing applications. Yet in the two most popular drawing apps on the App Store, we see the same ‘hunt and peck’ style interfaces that have remained much the same since the apps launched. These moments of switching tools or changing colors interrupt the creative process and overlay the screen with cumbersome app chrome.

     
Procreate (left) and Paper by 53 (right)

I knew that one area going almost completely underutilized in drawing apps was the user’s non-dominant hand. I had seen a lot of rich interaction design in VR that emphasized the use of both hands for really interesting creative applications. For example, this fantastic work in Gravity Sketch by nickpbaker.

There were some interfaces that attempted to take advantage of it, such as Procreate’s small bar on the left, but nothing that allowed you to rely on the regular interface less.

This is a little confusing when you consider that when people draw naturally, they use their left hand for various activities like holding the page or keeping their place. This begins even at a young age, as shown below.


Building the Interface
These images are from Bill Buxton’s recent work at Microsoft Research, and seeing his paper draw conclusions similar to my own observations of people using iPads with the Apple Pencil was really exciting.

From here I began some initial explorations based on these intuitive gestures that people already make on physical paper. These consisted of sketched mockups as well as sketching through code in Processing.



Once I had decided on two interactions to focus on as a proof of concept, I began building them into a more finished prototype that would help communicate the idea to others.

Because I wanted to build such a non-standard interface, I had to draw essentially all the graphics using Processing’s simple built-in elements. This required some interesting geometric gymnastics to get the whole thing working smoothly.

By using Processing to draw the interface itself, I had much more control over the final output. Below I outline my process for implementing the final tool that is manipulated by the non-dominant hand.

The first stage was to build a ‘tool’ class that could render itself and contain all of its dependent elements. Let’s take a closer look.
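A minimal sketch of what that class might look like, using Processing’s PVector for the geometry (the names and constants here are illustrative, not the prototype’s exact code):

    // Radial tool controlled by the non-dominant hand.
    class Tool {
      PVector center = new PVector();   // centroid of the three touches
      float radius = 0;                 // size of the fitted circle
      boolean active = false;           // true while three fingers are down
      int segments = 3;                 // Marker, Pen, Eraser
      int selected = 0;                 // currently highlighted segment

      // Fit the tool to the current three finger positions.
      void update(PVector a, PVector b, PVector c) {
        center.set((a.x + b.x + c.x) / 3, (a.y + b.y + c.y) / 3);
        radius = (center.dist(a) + center.dist(b) + center.dist(c)) / 3;
        active = true;
      }

      // Draw the circle plus one arc segment per tool.
      void render() {
        if (!active) return;
        noFill();
        stroke(40);
        strokeWeight(2);
        ellipse(center.x, center.y, radius * 2, radius * 2);
        for (int i = 0; i < segments; i++) {
          float a0 = TWO_PI * i / segments;
          float a1 = TWO_PI * (i + 1) / segments;
          strokeWeight(i == selected ? 6 : 2);
          arc(center.x, center.y, radius * 2.4, radius * 2.4, a0, a1);
        }
        strokeWeight(1);
      }
    }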

Drawing a circle from the finger points! 
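The class above approximates the circle from the centroid; another option is the exact circumcircle through the three touch points. A sketch of the math, assuming the fingers aren’t collinear:

    // Center of the circle passing through touch points a, b, and c.
    PVector circumcenter(PVector a, PVector b, PVector c) {
      float d = 2 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
      if (abs(d) < 0.0001) return null;   // nearly collinear: no stable circle
      float aa = a.x * a.x + a.y * a.y;
      float bb = b.x * b.x + b.y * b.y;
      float cc = c.x * c.x + c.y * c.y;
      float ux = (aa * (b.y - c.y) + bb * (c.y - a.y) + cc * (a.y - b.y)) / d;
      float uy = (aa * (c.x - b.x) + bb * (a.x - c.x) + cc * (b.x - a.x)) / d;
      return new PVector(ux, uy);
    }

The radius is then just the distance from that center to any one of the fingers.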
Detecting when the tool is active
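Processing’s Android mode exposes the current pointers through a touches array, which makes this check fairly direct. A sketch, with a spread threshold that would need tuning per device:

    // The tool is active while exactly three fingers are down and they
    // are bunched closely enough to plausibly belong to one hand.
    boolean toolActive() {
      if (touches.length != 3) return false;
      PVector a = new PVector(touches[0].x, touches[0].y);
      PVector b = new PVector(touches[1].x, touches[1].y);
      PVector c = new PVector(touches[2].x, touches[2].y);
      float spread = max(a.dist(b), b.dist(c), a.dist(c));
      return spread < 400;   // pixels; tune for screen size and density
    }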



Building the Drawing Application

While the purpose of this project was to demonstrate, through a proof-of-concept interface, that drawing applications could be enhanced by non-dominant-hand control, I also wanted to play with the inner workings of drawing apps like Paper.

A fundamental limitation of these devices is noise from the touch digitizer. This means we can’t simply draw user strokes exactly as they are input, or the result may look distorted or jagged.

With apps like Paper, a great deal of time and attention went into how brush strokes are drawn onscreen, and I wanted to see how that granular level of control could be achieved. I researched the techniques used to create these apps and implemented my own version in Processing to be used in conjunction with my new interface.

One thing that is key to understanding how these strokes are rendered is the geometry underlying them. You can see here that each stroke is actually rendered as a series of triangles.
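In Processing this maps naturally onto a TRIANGLE_STRIP, where every stroke sample contributes one vertex on each edge of the stroke. A minimal sketch, assuming the edge points have already been computed:

    // Render a stroke from its precomputed left and right edge points.
    void renderStroke(ArrayList<PVector> left, ArrayList<PVector> right) {
      noStroke();
      fill(20);
      beginShape(TRIANGLE_STRIP);
      for (int i = 0; i < left.size(); i++) {
        vertex(left.get(i).x, left.get(i).y);
        vertex(right.get(i).x, right.get(i).y);
      }
      endShape();
    }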

Here you can see how I construct these triangles. The tip of your pencil moves along the path indicated by the arrows. From there, we construct the additional points A, B, C, D based on the perpendicular vectors.
Because we have built this flexibility into rendering our stroke with geometric triangles, we can now easily scale the magnitude of the vectors to A, B, C, D so the stroke’s width varies with the speed of the pen.
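A sketch of that construction: the perpendicular of the pen’s direction of travel gives the edge points, and the offset scales with pen speed. The constants, and the choice to make faster strokes thinner, are illustrative rather than the prototype’s exact values:

    // Compute the two edge points for the newest pen sample, with the
    // stroke's half-width driven by how fast the pen is moving.
    void addEdgePoints(PVector prev, PVector cur,
                       ArrayList<PVector> left, ArrayList<PVector> right) {
      PVector dir = PVector.sub(cur, prev);
      if (dir.mag() < 0.001) return;            // pen hasn't really moved
      float speed = dir.mag();                  // pixels per frame as a proxy
      PVector perp = new PVector(-dir.y, dir.x);
      perp.normalize();
      float w = constrain(20 - speed * 0.5, 2, 20);
      left.add(PVector.add(cur, PVector.mult(perp, w)));
      right.add(PVector.sub(cur, PVector.mult(perp, w)));
    }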

Finally, by applying a quadratic bezier curve to the points before constructing the triangles, we can create an even silkier, smoother stroke.
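A sketch of one common midpoint scheme, where each raw sample becomes the control point of a quadratic bezier running between the midpoints of its neighbors:

    // Resample the raw input through quadratic beziers.
    ArrayList<PVector> smoothPoints(ArrayList<PVector> raw, int steps) {
      ArrayList<PVector> out = new ArrayList<PVector>();
      for (int i = 1; i < raw.size() - 1; i++) {
        PVector p0 = PVector.lerp(raw.get(i - 1), raw.get(i), 0.5);
        PVector p1 = raw.get(i);                  // control point
        PVector p2 = PVector.lerp(raw.get(i), raw.get(i + 1), 0.5);
        for (int s = 0; s < steps; s++) {
          float t = s / (float) steps;
          float u = 1 - t;
          out.add(new PVector(u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
                              u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y));
        }
      }
      return out;
    }

The resampled points then feed into the same triangle construction as before, just with many more, evenly spaced samples.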
A Final Note on Performance
How can we get this app to run at a consistent, smooth 60 FPS? This really matters for latency-sensitive work like reading touch data for drawing. First, we have to use the P3D renderer for improved performance. There is some sacrifice in anti-aliasing quality, but on high-resolution devices it matters little.

Even though we are only drawing 2D elements, Processing’s P2D renderer has an issue tessellating strokes on Android that makes it massively slower.

Additionally, we draw all graphics to an offscreen graphics buffer, so that no geometry is rasterized twice.
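Put together, the skeleton of that setup might look like this (assuming the buffer is created with the same P3D renderer as the sketch itself):

    PGraphics canvas;   // persistent surface for finished stroke geometry

    void setup() {
      fullScreen(P3D);  // avoids P2D's slow stroke tessellation on Android
      canvas = createGraphics(width, height, P3D);
    }

    void draw() {
      canvas.beginDraw();
      // New stroke triangles are rendered into the buffer exactly once.
      canvas.endDraw();
      image(canvas, 0, 0);   // blit the accumulated strokes each frame
      // The radial tool and other chrome are drawn on top, outside the buffer.
    }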


Mark