Thanks for stopping by! Here you can find my latest submissions for Apple’s annual WWDC competition.

Just use your iPad to try out the playgrounds. I’d love to hear your thoughts! 🙂


WWDC 2020 Swift Student Challenge Submission: VUIPlaygroundBook

In my playground I present the concept of controlling and using UI elements on iPadOS and iOS completely hands-free, just by slightly moving, tilting, and nodding the head.

In doing so, I focused on two major points. First, the concept is meant to work on already existing interfaces without having to rebuild much of their structure. Second, it is designed to work with the well-established elements of the UIKit framework, whose navigation logic is already intuitively familiar to many users (such as UISlider, UISwitch, UIPickerView, etc.). The goal is to create accessibility without creating a completely new and unfamiliar user environment.

To make a UIKit element operable, it only needs to be implemented as a VUI element (a subclass) that defines how head movement is translated for that specific element. The UserInterface element becomes a VisualUserInterface (VUI) element. This architecture allows me to make more and more elements operable later on. It is even possible to switch between views, including within a navigation controller, without a single tap or swipe.
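The playground's actual types aren't reproduced here, but the idea can be sketched roughly like this. The gesture enum, the protocol name, and the step size are my own illustrative assumptions, not the submission's real API:

```swift
import UIKit

// Hypothetical gestures detected from head movement (illustrative only).
enum HeadGesture {
    case tiltLeft, tiltRight, nod
}

// Hypothetical protocol: each VUI element decides how a head gesture
// translates into its own familiar UIKit interaction.
protocol VUIElement {
    func apply(_ gesture: HeadGesture)
}

// Example: a slider that nudges its value on head tilts and confirms on a nod.
class VUISlider: UISlider, VUIElement {
    func apply(_ gesture: HeadGesture) {
        let step = 0.05 * (maximumValue - minimumValue)
        switch gesture {
        case .tiltLeft:  setValue(value - step, animated: true)
        case .tiltRight: setValue(value + step, animated: true)
        case .nod:       sendActions(for: .valueChanged)
        }
    }
}
```

Because the head-gesture handling lives in the subclass, the rest of the view hierarchy can stay exactly as it is in an existing UIKit app.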

So far I have integrated six elements: VUISwitch, VUIPickerView, VUISegmentedControl, VUISlider, VUIButton, and VUIHorizontalCollectionViewController. My playground shows two example scenes that use them.

WWDC 2019 Scholarship Submission: MLDrawingBook

The MLDrawingBook analyzes drawings and recognizes them in real time using Core ML 2. It teaches children how to draw animals and helps them improve step by step.

ML is short for machine learning, and this book is all about it! The algorithm analyzes your drawings and recognizes them in real time, within seconds. Exciting, isn’t it? I kicked off with about 250 drawings I recorded on the iPad. Then I grew the training data by flipping each image into four orientations (image augmentation). In the end, I trained the model for this colorful drawing book on over 1000 images.
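The augmentation code isn't part of this description, but the flipping step could look roughly like the sketch below. The helper name is mine, and I'm assuming mirrored UIImage orientations were used to produce the extra variants:

```swift
import UIKit

// Hypothetical sketch: reuse one drawing's pixels in four orientations
// (original, horizontally mirrored, upside down, vertically mirrored)
// to roughly quadruple the training set.
func flippedVariants(of drawing: UIImage) -> [UIImage] {
    guard let cgImage = drawing.cgImage else { return [drawing] }
    let orientations: [UIImage.Orientation] = [.up, .upMirrored, .down, .downMirrored]
    return orientations.map {
        UIImage(cgImage: cgImage, scale: drawing.scale, orientation: $0)
    }
}
```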

The most important element of my project is Core ML 2. I created, trained, and refined my ML model using the MLImageClassifierBuilder and the hand-drawn images of different animals. In the playground, I use this ML model (AnimalDraw.mlmodelc) together with the Vision framework to run a classification request (VNCoreMLRequest, VNImageRequestHandler) on a drawing.
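The request setup described above follows the standard Vision + Core ML pattern. A minimal sketch could look like this; the error handling, the print statement, and the way the drawing is passed in are my own simplifications:

```swift
import UIKit
import CoreML
import Vision

// Minimal sketch: classify a drawing with the compiled AnimalDraw model.
func classify(_ drawing: UIImage) throws {
    guard let modelURL = Bundle.main.url(forResource: "AnimalDraw", withExtension: "mlmodelc"),
          let cgImage = drawing.cgImage else { return }

    let model = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))

    // The completion handler receives the classification observations.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Looks like a \(best.identifier) (\(Int(best.confidence * 100)) %)")
    }

    // The image request handler runs the request on the drawing.
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```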


Give feedback

Get Swift Playgrounds App for iPad