Program 2018


It has been a difficult task to create this program: we received 39 proposals, and time only allows for a maximum of 9 slots. To our regret, many promising proposals had to be left out, but we are certain that we have put together an amazing and very diverse program!

Time Content
08:00 – 09:00 Registration + Coffee (open until 09:15)
09:00 – 09:15 Welcome
09:15 – 10:00 Carina Haupt
Rocket Science and Software Engineering

Carina works at the German Aerospace Center (DLR), where she leads a software engineering team. It is her mission to improve software quality at the DLR. But why is this necessary? After all, we went to the moon with less computing power than that of a modern smartphone! While there are a lot of success stories, there are also stories where software failures resulted in mission failures.

This talk gives an overview of some of these unfortunate programming errors, their consequences, and how modern development tries to prevent history from repeating itself.

10:00 – 10:40 Tim Head
From Exploring Data Interactively to Creating Reproducible Pipelines

Have you ever built a report based on some data? Worried that it wouldn't work any more when you had to re-run it six months later? Annoyed that you have to email someone to get the latest version of a plot for your slide deck?

In this interactive talk we will build a reproducible pipeline based on Jupyter notebooks and open data. I will introduce you to the Python data ecosystem, highlighting tools for analysing data, creating visualisations and sharing those with your team and the public. We will start with a question and, following the path of a typical data analysis project, interactively explore the data, find our answers and then create a robust pipeline that allows us to re-run this analysis automatically. Finally, I will show how easy it is to share what we created with others.

10:40 – 11:10 Coffee break
11:10 – 11:40 Raphael Das Gupta
Comprehensions – Origin, History, Use

Along the lines of Guido's excellent "The History of Python" blog posts, we'll look into where the idea for Python's (list) comprehensions came from and how it evolved into these related concepts in Python:

  • list comprehensions
  • set comprehensions
  • dict comprehensions
  • generator expressions
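As a brief illustration (my own sketch, not taken from the talk), all four forms can express the same "squares of even numbers" idea:

```python
numbers = range(10)

squares_list = [n * n for n in numbers if n % 2 == 0]     # list comprehension
squares_set = {n * n for n in numbers if n % 2 == 0}      # set comprehension
squares_dict = {n: n * n for n in numbers if n % 2 == 0}  # dict comprehension
squares_gen = (n * n for n in numbers if n % 2 == 0)      # generator expression

print(squares_list)         # [0, 4, 16, 36, 64]
print(sorted(squares_set))  # [0, 4, 16, 36, 64]
print(squares_dict[4])      # 16
print(next(squares_gen))    # 0 -- evaluated lazily, one item at a time
```

The first three build their container eagerly; the generator expression produces values on demand, which matters for large or infinite inputs.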

11:40 – 12:10 Gabriel Krummenacher
Leveraging Neural Networks and Python to Forecast Train Delays in the Swiss Railway Network

In this talk I will show how we developed a neural network model in Python to forecast train delays in real-time. Based on the history of delays in the surrounding network we can predict the future expected delay at different points in the network.

Building on the excellent machine learning and deep learning tech stack of Python (Keras, TensorFlow and Pandas) I will show how to implement and train a sequence prediction model and work with time series data.

12:10 – 12:40 Iacopo Spalletti
Real Time Django

Since the introduction of Channels, the real-time web has become much easier to work with in Django. It is now possible to build real-time applications with much less effort spent managing the idiosyncrasies of async programming, and a lot of batteries are included. Starting with a brief introduction to Channels, we will see how to build a real-time application, on both the Django and the frontend side, and how easy it is to start experimenting with it.

12:40 – 14:00 Lunch
14:00 – 14:30 Sarah Mühlemann
SpyPi – An Attempt to Get Students Into Data Security

Technology has become a fundamental part of our daily life and a major component of the education system. Students are encouraged to interact with technology and make use of it. However, in the majority of cases the importance of data security is not discussed, although it is important that young people especially get a feeling for the power of modern technology and the dangers that come with it.

SpyPi is an attempt to approach these topics in class. The interactive hacking station is the result of my high school graduation work and is based on the Raspberry Pi and Python. It enables a role reversal between the user and the black-hat hacker/data collector, which helps students gain a new perspective on their own behaviour with digital information. SpyPi's interactivity avoids flooding people with jargon-heavy information and lets SpyPi meet the user at eye level. Several applications are included to point out various dangers we face on a daily basis.

14:30 – 15:00 Amit Kumar
Let's Talk About the GIL!

There are a lot of misconceptions among Python programmers regarding the Global Interpreter Lock (GIL). Most think it is the worst part of Python. I will demonstrate how it actually works and how we can leverage multiple CPU cores with multithreading for I/O-bound and CPU-bound tasks. I will also compare different implementations of Python and the presence or absence of the GIL in each, to answer questions like why we can't just remove it from CPython and solve all our problems, or why Jython performs better at multithreading for CPU-bound tasks.
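The core distinction can be sketched in a few lines (my own example, not from the talk): in CPython, blocking I/O releases the GIL, so threads waiting on I/O overlap, while pure-Python computation holds it, so CPU-bound threads take turns on one core.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(seconds):
    # time.sleep (like socket and file I/O) releases the GIL while waiting,
    # so several threads can wait concurrently.
    time.sleep(seconds)
    return seconds

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(io_task, [0.1] * 4))
elapsed = time.perf_counter() - start
# Four 0.1 s waits overlap: wall time is roughly 0.1 s, not 0.4 s.

def cpu_task(n):
    # Pure-Python arithmetic holds the GIL, so threads running this
    # serialize onto one core: the results are correct, but there is
    # no parallel speedup in CPython.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    totals = list(pool.map(cpu_task, [10_000] * 4))
```

For CPU-bound work, `concurrent.futures.ProcessPoolExecutor` sidesteps the GIL by using separate interpreter processes, each with its own lock.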

15:00 – 15:45 Coffee break
15:45 – 16:15 Josef Spillner
Serverless Computing: FaaSter, Better, Cheaper and More Pythonic

Function-as-a-Service (FaaS) is the logical code-level implementation of the microservices concept, in which each function or method is separately instantiated, measured, accounted for and billed. As a programming and deployment model, it has become popular for discrete event processing. Several public commercial services offer FaaS hosting, but almost always in silos with arbitrary limits, incompatible tooling for each provider, and no convenient way to share functions.

Snake Functions (Snafu) addresses these constraints. It is a novel free-software tool to fetch, execute, test and host functions implemented in Python and, with a slight performance overhead, in other languages too.
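In most FaaS platforms a function is just a plain callable that receives an event payload and returns a serialisable result, while the platform handles instantiation, metering and billing around each call. A minimal, hypothetical handler in this common style (not Snafu's specific API) could look like:

```python
import json

def handler(event):
    """Hypothetical FaaS entry point: takes an event dict,
    returns a JSON-serialisable response. Everything around this
    call (scaling, metering, billing) is the platform's job."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# The platform invokes the handler once per event; locally we can
# simply call it to test the function in isolation.
response = handler({"name": "PyCon"})
```

This per-event, stateless shape is what makes functions easy to test and host independently of any one provider.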

16:15 – 16:45 Peter Hoffmann
12 Factor Apps for Data-Science with Python

Heroku distilled their principles for building modern cloud applications that maximise developer productivity and application maintainability in the Twelve-Factor App manifesto. These principles have influenced many of our design decisions at Blue Yonder.
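One of those principles, "store config in the environment", can be sketched as follows (the variable names are illustrative, not Blue Yonder's actual settings):

```python
import os

# Twelve-factor principle III: configuration lives in environment
# variables, not in code, so the same build artifact can run unchanged
# in development, staging and production.
os.environ.setdefault("APP_DATABASE_URL", "postgres://localhost/dev")
os.environ.setdefault("APP_LOG_LEVEL", "INFO")

config = {
    "database_url": os.environ["APP_DATABASE_URL"],
    "log_level": os.environ["APP_LOG_LEVEL"],
}
```

With `setdefault`, the values act only as local development fallbacks; a deployment environment overrides them simply by exporting the variables before the process starts.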

While our data scientists care about machine learning models and statistics, we want to free them from technicalities like the maintenance of network equipment, operating-system updates or even hardware failures. To save our data scientists from these tasks, we have invested in a data science platform.

This talk will give an insight into how we use Apache Mesos, Devpi, Graylog and Prometheus/Grafana to provide a developer-friendly environment for data scientists to build their own distributed applications in Python without having to care about servers or scaling.

16:45 – 17:00 Closing
17:00 – 20:00 Social Event / Apéro


Right after the conference we'll have a small aperitif sponsored by 89grad. There will be soft drinks, water, beer and snacks free of charge for all conference attendees.