Posters

15 years of PyCon: insight into Python's language and community via seminar abstracts

Tanya Schlusser, Hailey Hoyat
Sunday 10 a.m.–1 p.m.

Since its inaugural conference in 2003, the Python community has hosted over 1200 talks, in over 30 countries, with more than 20,000 attendees in the U.S. alone. That’s a lot of Python -- and a lot of extractable data. By examining the categories and topics of seminar abstracts over the years, we can observe how the language has developed and how the Python community's interests have shifted. These insights are invaluable for understanding how both the community and the language have grown, where they are headed, and how people may support the continued growth of Python for years to come. This year, for the 15th anniversary of PyCon, we will highlight some of the many diverse presentations from previous conferences, piquing not only your nostalgia but also your appreciation for the impact this event has on what is shaping up to truly be a hundred-year language.

A Python-Friendly Computer Keyboard

Erin Allard
Sunday 10 a.m.–1 p.m.

If we could create a computer keyboard specifically for Python programmers, what would the layout of the letters and symbols be? The QWERTY keyboard we use today was developed in 1878 to avoid jamming typewriters' metal arms when typing quickly. But some of the most commonly used letters and symbols aren't easy for our fingers to get to on our modern, two-dimensional computer keyboards! Computer programmers know this problem well: We have to press the `SHIFT` key every time we need a curly bracket, a parenthesis or a colon. And we use lots of these punctuation marks, creating a lot of extra keystrokes. By running character analysis on the source code of Python's 30 most widely used libraries—presumably good examples of high-quality, Pythonic code—we can discover character frequencies that will help us assemble a new keyboard layout that lets Python programmers make more efficient keystrokes. A Python-friendly keyboard would have the most common letters and symbols on the home row, the moderately used letters and symbols on the top row, and infrequently used letters and symbols on the bottom row. And with keyboard re-mapping, we can actually implement such a keyboard!
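A minimal sketch of that character analysis (the directory path is illustrative; point it at the source checkout of any library you want to measure):

```python
from collections import Counter
from pathlib import Path

counts = Counter()
# Walk every .py file in a library's source tree and tally characters.
for path in Path("requests/").rglob("*.py"):
    counts.update(path.read_text(encoding="utf-8", errors="ignore"))

# The most frequent non-whitespace characters are home-row candidates.
ranked = [(c, n) for c, n in counts.most_common() if not c.isspace()]
print(ranked[:15])
```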

AREPL: real-time evaluation of Python

Caleb Collins-Parks
Sunday 10 a.m.–1 p.m.

[AREPL](https://github.com/almenon/AREPL) is a scratchpad in which debugging, running, and writing your code can all be done at the same time. Whenever you stop typing, your code is evaluated and the output is shown. But not just the output - your local variables are saved for inspection as well. Update the code, and the values seamlessly change. Even if you make a mistake, you will still get the state of your variables right before the error, along with the error and stack trace.

**Other Features:**

* Available as a [VSCode Extension](https://github.com/almenon/AREPL-vscode)
* Human-readable display of certain types, like dates
* Automatic restart for GUI development
* Ability to 'save' a section so it only runs once

**Links:** [https://github.com/almenon/AREPL](https://github.com/almenon/AREPL) | [https://tinyurl.com/areplVid1](https://www.youtube.com/watch?v=GxryBUukTyM)

BAMnostic: an OS-agnostic port of genomic sequence analysis

Marcus Sherman
Sunday 10 a.m.–1 p.m.

As genome sequencing and testing gets [cheaper](https://www.genome.gov/27565109/the-cost-of-sequencing-a-human-genome/) and becomes more [mainstream](http://bgr.com/2017/11/27/23andme-dna-test-price-drop-amazon-cyber-monday/), the amount of data being generated is staggering. Much like in other scientific fields, **Python** has become one of the predominant programming languages used to process such data. What most people do not know is that the majority of genome analytics boils down to clever string comparison and matching algorithms. The caveat is that a single file can be **300 GB** or more even in its [compressed binary encoded format](https://samtools.github.io/hts-specs/SAMv1.pdf). A high-throughput sequencing library ([htslib](https://github.com/samtools/htslib)) was developed to establish a standard encoding and compression schema that enables researchers to have random access to these large files. As it stands, htslib is the industry standard in the realm of genomics. One of the most popular Python libraries for handling genomic data ([PySAM](http://pysam.readthedocs.io/en/latest/)) is essentially a wrapper for htslib. As widely used as both htslib and PySAM are, a large contingent of users (both end users and developers) are excluded simply because htslib and PySAM do not support [***Windows***](https://github.com/pysam-developers/pysam/issues/575) environments outside of contrived builds and dependencies that many end users would not be willing to set up. To overcome this issue, pure Python ports of the random access, unpacking, and decoding components of htslib were developed as a lightweight toolkit called **BAMnostic**. BAMnostic was developed as a drop-in alternative for the majority of PySAM's workload when working in a Windows environment or on projects that require an OS-agnostic approach. As a drop-in, it retains the same interface as PySAM for each of its supported functions. This interface also provides a means of simple extensibility for machine learning and statistical analysis through libraries such as [TensorFlow](https://www.tensorflow.org/api_docs/python/), [scikit-learn](http://scikit-learn.org/stable/), and [statsmodels](http://www.statsmodels.org/stable/index.html). Additionally, it makes piping desired data into data visualization libraries, such as [Plotly](https://plot.ly/), a simple task. Lastly, as pure Python, it can be easily embedded into a socketed [Flask](http://flask.pocoo.org/) web server or used as a standalone application.
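Usage looks roughly like this (a sketch based on the BAMnostic README, which ships a small example BAM for smoke-testing; check the docs for exact names):

```python
import bamnostic as bs

# Open the bundled example file exactly as you would with PySAM's
# AlignmentFile, then iterate over reads.
bam = bs.AlignmentFile(bs.example_bam, 'rb')
first_read = next(bam)
print(first_read)
```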

Building Reproducible Machine Learning Models with Python and Docker

Jeff Espenschied
Sunday 10 a.m.–1 p.m.

A 2016 study by the journal _Nature_ found that 90% of working scientists surveyed think there is a "slight" or "significant" crisis in experimental reproducibility. For the working data scientist, the processes and models used in an analysis must be reusable, whether to reproduce results or to apply them to new data. We ran into these same problems while building an API that lets developers easily incorporate machine learning algorithms into their software. We were able to leverage Docker, Python, Flask, and Amazon S3 to enable reuse of the exact model generated in the initial analysis. This poster will show how those pieces fit together and how you could create a similar system for your own analyses.
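A hypothetical sketch of the glue involved (bucket, key, and route names are invented; the real system pins each model artifact to the version produced by the original analysis):

```python
import pickle

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)

# Fetch the pickled model exactly as it was serialized during analysis.
s3 = boto3.client("s3")
body = s3.get_object(Bucket="ml-models", Key="churn/v1.pkl")["Body"].read()
model = pickle.loads(body)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    return jsonify(prediction=model.predict([features]).tolist())
```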

Building your own Messenger Chatbot using Python

Akilesh Lakshminarayanan
Sunday 10 a.m.–1 p.m.

While designing and developing your own functional chatbot might seem like a herculean task, Python makes it extremely easy to add intelligent conversational functionality, and with pretty good accuracy! In addition, once you are familiar with the Facebook Messenger API (which also has a wide range of wonderful conversational interface elements), it's actually not that hard to get your chatbot into production on Messenger. My poster, using a Messenger bot I have programmed myself as an example, will take you through the whole workflow of setting up your own chatbot. We will start by looking at the important features of the Messenger API, such as automatically sending texts, quick replies, and images. Python libraries such as _spaCy_ and _NLTK_ make it very intuitive to add functionality to your bot. I will then explain how you can use NLTK for text classification, and spaCy language models for entity recognition and part-of-speech tagging. These Python libraries will enable us to add natural-language conversational ability to the chatbot. To get your bot up and running on Messenger, you need to deploy it on a cloud server. I will go through the steps involved in getting your app up and running on one such cloud service, _Heroku_. Following this, we will integrate Messenger with the application deployed on Heroku, for which we need to set up webhooks (after I tell you what webhooks are!) and authorise the app. Finally, I will talk about how to get your bot into production, for which you will need to complete some safety formalities (such as setting a privacy policy) as per Facebook's rules and regulations. We will then discuss how different lines of business can leverage chatbots, and what the potential advantages and disadvantages of chatbots are. By the end, you will be equipped with all the tools necessary to design your own chatbot for your product and get it up and running on Messenger for your Facebook product page!
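For instance, the webhook verification handshake Messenger performs against your server (sketched here with Flask; the verify token is whatever you set in the Facebook app dashboard) fits in a few lines:

```python
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "my-secret-token"  # must match the token set in the dashboard

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook sends hub.mode, hub.verify_token, and hub.challenge;
    # echoing the challenge back proves we own the endpoint.
    if (request.args.get("hub.mode") == "subscribe"
            and request.args.get("hub.verify_token") == VERIFY_TOKEN):
        return request.args.get("hub.challenge")
    return "Verification failed", 403
```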

Building your own weather app using NOAA open data and Jupyter notebooks

Filipe Fernandes
Sunday 10 a.m.–1 p.m.

The National Weather Service (NWS) estimates that its open data support a [1.5 billion dollar industry](http://informationdiet.com/blog/read/how-did-weather-data-get-opened). However, if you are a Python enthusiast and love open data, you don't need that industry to get your very own customized weather app ;-) We will walk through all the steps to create a fully-featured GIS interactive map (mobile friendly too!). Thanks to Open Geospatial Consortium (OGC) standards and NOAA's open data policies, it is quite easy to set up a data discovery system based on location, time, and variable of interest. In [this example](https://ocefpaf.github.io/python_hurricane_gis_map/irma.html) we'll explore the National Hurricane Center (NHC) predictions for Hurricane Irma and fetch all the open data we can find along its path.
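As a rough sketch of the mapping step (folium is an assumption here, one of several Python libraries that produce such interactive maps; the OGC data discovery and fetching steps live in the full notebook):

```python
import folium

# An interactive, mobile-friendly map centered near Irma's Atlantic track.
m = folium.Map(location=[25.0, -71.0], zoom_start=5)
folium.Marker(
    [25.0, -71.0],
    popup="Example NHC forecast point",
).add_to(m)
m.save("irma_map.html")  # open in any browser
```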

Build secure and reliable continuous delivery deployment for Python microservices

Natalie Serebryakova
Sunday 10 a.m.–1 p.m.

In agile development, software ships very fast, and building it securely often gets deprioritized: there is no time for a “traditional secure process” where every stage of the software development cycle has a security checklist. This poster proposes processes that support, or could support, secure software development. Continuous Delivery is a software development discipline where you build your Python microservices in such a way that they can be released to production at any time. Microservice security relies on automating the Continuous Delivery deployment process, and making deployments secure and reliable before they land in production should be a goal for every software developer. With Continuous Delivery and security automation, a software developer doesn't have to be a security expert in everything to work within a microservices architecture. The poster will contain suggestions and code snippets covering:

1. Challenges when using microservices and Continuous Deployment
2. A secure Continuous Delivery microservices production pipeline:
   - access control settings
   - securely deploying an API gateway
   - centralized security and configuration policies
   - secure source code management using GitHub
   - security policies tailored for microservice workflows
3. Conclusion

Cart Pole AI Controller

SRIVIGNESSH PACHAM SRI SRINIVASAN
Sunday 10 a.m.–1 p.m.

The cart pole balancing problem is one of the standard classical control problems. Building an AI reinforcement learning agent to balance a pole connected by one joint to the top of a moving cart is a challenging problem. This live demo showcases a working adaptive actor-critic game controller, which has applications in designing future AI game controllers.
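For context, the classic environment loop (shown with OpenAI Gym, which is an assumption; the demo's actor-critic policy would replace the random action below):

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # actor-critic chooses here instead
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```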

Converting unstructured web data into sequenced STEM educational games

Itay Livni, Michael Wehar
Sunday 10 a.m.–1 p.m.

Vocabulary, repetition, and examples are fundamental to human learning. These fundamental tools help humans teach each other, communicate, and innovate. Yet, vocabulary building and reading comprehension games specifically geared toward science, technology, engineering, and math (STEM) disciplines are lacking. They are expensive to make, and the content has a short life span. First, a topic and grade level must be chosen by the game designer. Second, an age-appropriate curriculum must be developed. Third, the content must be researched and edited. And finally, the content needs to be transformed into a game by game developer(s). This process must be repeated for each topic. However, it can be automated using natural language processing (NLP), the digitization of primary-source information, and vibrant open source ecosystems. Automating this process enables educators to create STEM educational games with just four user inputs: (1) term, (2) topic, (3) grade level, and (4) game type. The corresponding output is a set of sequenced games that can be adjusted for the reading comprehension levels of particular students. The process to build content for the games is built on open source packages such as Beautiful Soup, pandas, textacy, gensim, scikit-learn, and networkx. Client-side work is done in JavaScript and is served by Flask.
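As a toy illustration of the content-building step (not the authors' exact pipeline), scikit-learn alone can surface candidate vocabulary terms for a topic:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Two stand-in documents scraped for the topic "photosynthesis".
docs = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Chlorophyll absorbs light; glucose and oxygen are the products.",
]
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)
scores = tfidf.sum(axis=0).A1
terms = vec.get_feature_names()
# Highest-weighted terms become candidate game vocabulary.
print(sorted(zip(scores, terms), reverse=True)[:5])
```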

Curriculum for project-based data science classes and their building blocks

Nadia Udler, Adomous Wright, Eli Udler
Sunday 10 a.m.–1 p.m.

Machine learning, artificial intelligence, and data science are interdisciplinary subjects and are therefore difficult to teach. We suggest educational building blocks that help students understand machine learning software (such as scikit-learn) and create their own artificial intelligence algorithms. These building blocks are theoretical components of global optimization algorithms (derived in [1][1]; more information on these building blocks is given below) as well as some other numerical procedures (such as automatic differentiation; see, for example, the Python implementations [Autograd](https://github.com/HIPS/autograd) and [AlgoPy](https://pythonhosted.org/algopy/)). Based on these building blocks, programming projects are created that demonstrate certain steps in machine learning algorithms and show how these steps appear in popular modern machine learning methods. These projects become the foundation of project-based data science courses such as Data Analysis and Decision Making using Python, Python for Financial Applications, and Operations Research Models using Python.

**Building blocks.** In [Kapl and Prop][1], a theoretical approach to the design of global optimization methods based on potential theory was introduced. This approach extends the theory of gradient-based optimization to algorithmically defined functions (or black-box functions), where an analytical representation of the function is not available or is too complicated to work with (say, too hard to compute derivatives). Such situations are very common in real-world applications. Based on this theory, a parsimonious set of building blocks for optimization algorithms was obtained. A hypothetical algorithm in which all these building blocks are present in their full form is given in [1][1]. It is shown that by varying the parameters of the building blocks we obtain a whole universe of optimization methods, some of which we recognize as well-known heuristic techniques such as [CMA-ES](https://en.wikipedia.org/wiki/CMA-ES), [Shor's r-algorithm](https://link.springer.com/article/10.1023%2FA%3A1008739111712), and the [Nelder–Mead algorithm](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method). The main building blocks are defined as linear algebra operations: the space dilation operator (the space transformation is based on this operator), the Householder transformation as a special case of the space dilation operator, and a memory accumulation module (which accumulates information from previous iterations of the algorithm). Other numerical procedures that the algorithms are built upon are automatic differentiation and [natural gradient evolution strategies](https://pdfs.semanticscholar.org/eb2d/7fb3105cd98646943b5dccf799d2bb8b09ed.pdf).

**Examples of the projects:**

- The Householder transformation in the Nelder–Mead algorithm (the `fmin` function in SciPy)
- The space dilation operator in Shor's r-algorithm and in CMA-ES
- Combining automatic differentiation with gradient-based algorithms for optimization of algorithmically defined functions
- Natural gradient evolution strategies
- Coordinate transformation, the coordinate descent algorithm, and separable functions; invariance of the coordinate descent method with respect to scaling and rotation of the search space
- The memory accumulation module; comparison of memory accumulation in Shor's algorithm and in genetic algorithms
- Multi-objective optimization vs. constrained optimization

[1]: http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=4004&option_lang=eng "Investigation of search methods for optimization that use potential theory"
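The first example project, for instance, can start from SciPy's Nelder–Mead implementation:

```python
from scipy.optimize import fmin

# Nelder-Mead minimizes a black-box function without derivatives.
objective = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x_min = fmin(objective, x0=[0.0, 0.0])
print(x_min)  # approximately [1, -2]
```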

Django-Herald: A Django Messaging Library

Robert Roskam, Jared Proffitt
Sunday 10 a.m.–1 p.m.

A Django messaging library that features:

- A class-based declaration and registry approach, like the Django admin
- Support for multiple transmission methods (email, SMS, Slack, etc.) per message
- Browser-based previewing of messages
- A history of message sending attempts, with the ability to view these messages
- Disabling notifications per user
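Declaring a notification looks roughly like this (a sketch adapted from the project README; exact names may differ between versions):

```python
# notifications.py
from herald import registry
from herald.base import EmailNotification

class WelcomeEmail(EmailNotification):
    template_name = 'welcome_email'  # rendered from your templates directory
    subject = 'Welcome!'

    def __init__(self, user):
        self.context = {'user': user}   # template context
        self.to_emails = [user.email]   # recipients

registry.register(WelcomeEmail)  # registry pattern, like the Django admin

# later, e.g. in a view:
# WelcomeEmail(request.user).send()
```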

Exploring generative models with Python

Javier Jorge
Sunday 10 a.m.–1 p.m.

Did you know that neural networks learn an inner representation? This can be useful for understanding data and disentangling relationships between features. Over this manifold, the last layers of the network can perform the classification. But what if we change the target of these networks? Instead of trying to classify correctly, we could try to reconstruct the input. Based on this idea, there are different approaches using neural networks that are known as generative models: models that can provide new instances by learning a manifold of the original data. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are two well-known techniques for doing this. Regarding this learnt manifold, if the network can reconstruct the input after projecting it into this subspace, what is in between two projected samples? Is it useful, or just noise? We have explored this manifold empirically, and we have found that some paths between projected samples provide unseen instances that are combinations of the inputs, e.g. digits that are rotated or shrunk, style combinations, etc. This could be useful for understanding the underlying distribution or for providing new instances when data is scarce. To carry out these experiments we have relied on Python; in particular, we have used TensorFlow extensively. This poster will describe the Python tools that one can use to explore and reproduce these experiments, as well as the datasets we have used.
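The walk between two projected samples is simple to express; a minimal NumPy sketch (the decoder and the latent dimension are placeholders for the trained VAE/GAN):

```python
import numpy as np

def interpolate(z1, z2, steps=10):
    """Linearly interpolate between two latent codes."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z1 + a * z2 for a in alphas])

# z1 and z2 would come from the encoder (VAE) or sampled noise (GAN);
# passing each row through the decoder yields the in-between images.
z1, z2 = np.random.randn(2, 128)  # hypothetical 128-dim latent space
path = interpolate(z1, z2)
print(path.shape)  # (10, 128)
```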

Fighting Documentation Drift

Terrence Reilly
Sunday 10 a.m.–1 p.m.

One of the main arguments against documentation is that it frequently falls out of sync with the code it is supposed to describe. However, the solution isn't to document less; the solution is to automate the process of checking our documentation. This is the idea behind [darglint](https://github.com/terrencepreilly/darglint), a docstring argument linter. *Darglint* can identify certain types of documentation drift within docstrings, such as

- missing/extraneous parameters,
- missing/extraneous return or yield statements,
- missing/extraneous exception descriptions,

as well as a variety of stylistic errors which can make parsing the docstring difficult. This poster will describe and demonstrate *darglint*, measure its effectiveness against open-source projects, and explore other possibilities for fighting documentation drift.
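For instance, in a Google-style docstring like this made-up example, *darglint* would flag the undocumented `codec` parameter:

```python
def encode(text, codec):
    """Encode text for transport.

    Args:
        text: The payload to encode.

    Returns:
        The encoded bytes.
    """
    return text.encode(codec)  # `codec` is missing from the Args section
```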

Hack your Kinect!

Kay Kollmann
Sunday 10 a.m.–1 p.m.

Has your Kinect, too, been collecting dust behind your console? Time to get it out and put it to good use again! With a little Python and OpenCV magic, you can easily breathe new life into this super innovative gaming device from a few years ago. My poster will show you what your trusty Kinect is capable of and what you can do with the built-in hardware – which does not limit you to playing games, but opens up other, even more interesting possibilities of interaction, especially when combined with a projector. Not a Kinect owner yet? No problem, my poster has you covered! I will let you know what equipment you need and how to get it (good news: you do not need to also get the console it came bundled with!). As space is limited on a poster, I will likely not be able to provide a full working example script, but I will make sure to include code snippets and links to resources to help you with your own future Python + Kinect projects.
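To give a flavor of the code involved (a sketch assuming the libfreenect Python bindings and OpenCV are installed):

```python
import cv2
import freenect
import numpy as np

# Grab one depth frame from the Kinect via libfreenect's sync interface.
depth, _ = freenect.sync_get_depth()

# Squeeze the 11-bit depth range into an 8-bit image for display.
img = (depth >> 3).astype(np.uint8)
cv2.imshow("Kinect depth", img)
cv2.waitKey(0)
```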

How a Python Application Can Support Everything Else

Laurie Barth
Sunday 10 a.m.–1 p.m.

This poster will walk through the full replacement of a legacy suite of applications. The final system had a series of projects that all plugged into a Python project that versioned every document for storage. Python was the beginning and end state of every piece of data, whether it passed through JavaScript manipulation, PHP, or KNIME. Python was the backbone, and it had to perform over very large data sets that were expected to grow quickly. This poster will present a visual representation of all the systems and how this project used Python to solve the problems in question.

I contributed! But what now? Data analytics for understanding your open source community

Bee
Sunday 10 a.m.–1 p.m.

For any community, especially in open source, newcomer onboarding is important, but retaining those newcomers is critical. For the past two years, we at the Fedora Community Operations team have been working on understanding community health and improving contributor retention rates in our community. In this poster session, I would like to share some of the findings from our work and from similar research done by other open source projects, and how we can use and apply these findings. The poster will discuss some community-oriented metrics we use at Fedora to understand community health, such as contributor engagement metrics and contributor onboarding and retention rates. I will then share some insights derived from finding common patterns in the contribution activity of long-term active volunteers: Are they involved in many different areas of their projects? What are some good measures to predict how long a volunteer will stay? What is the magic element X that makes people stay? The poster will also discuss similar research done by other open source communities in this area and how these insights can be applied to improve contributor retention rates and overall community health at both the individual and the community level.

Improving command line experience for managing databases with mssql-cli and mssql-scripter

Alan Yu, Ronald Quan
Sunday 10 a.m.–1 p.m.

Since the announcement that SQL Server 2017 supports Linux and Docker, there has been a need for modern, cross-platform CLI tools that give DBAs and developers the choice to use SQL Server anywhere. While rethinking our tools strategy, our team chose to collaborate with the open source community to create two Python-based tools: mssql-cli and mssql-scripter.

- **mssql-cli** is an interactive T-SQL query tool with features such as auto-completion, syntax highlighting, and pretty formatting. To create this tool, we collaborated with the [dbcli community](https://github.com/dbcli), which also maintains CLI tools such as pgcli and mycli.
- **mssql-scripter** is a scripting tool for SQL Server databases that can easily generate CREATE and INSERT T-SQL scripts for database objects, similar to pg_dump and mysqldump.

To learn more, please visit our GitHub repos for [mssql-cli](https://github.com/dbcli/mssql-cli) and [mssql-scripter](https://github.com/microsoft/mssql-scripter).

Instrumenting Python

Elaine Arbaugh, James Lim
Sunday 10 a.m.–1 p.m.

This poster presents [Affirm](https://www.affirm.com/)'s approach to instrumenting Python. We place great importance on our metrics infrastructure because metrics help engineers find and fix bugs quickly, confidently iterate on features, and make data-driven decisions. We start by discussing how we emit metrics in our Python code by extending the Python logger. We consider how we ensure this process is reliable and how we make sure it will not impact application performance. We also present our metrics pipeline--how we use open source software for metrics collection ([Riemann](http://riemann.io/)), storage ([Elasticsearch](https://www.elastic.co/products/elasticsearch)), visualization ([Grafana](https://grafana.com/)), and alerting ([Cabot](https://github.com/Affirm/cabot)). We explain the reasons we believe the tools we chose are reliable and scalable and the checks we have built to ensure our pipeline is working. As an example, we present the metrics we collect for [celery](http://www.celeryproject.org/) tasks and examples of the insights these metrics have given us.
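A toy illustration of the idea (not Affirm's actual code): a custom handler lets ordinary log calls carry metric payloads that a shipper would forward to Riemann.

```python
import logging

class MetricsHandler(logging.Handler):
    """Forward records that carry a `metric` payload to a collector."""

    def emit(self, record):
        metric = getattr(record, "metric", None)
        if metric is not None:
            # In production this would ship to Riemann; print for brevity.
            print("metric:", record.name, metric)

logger = logging.getLogger("app")
logger.addHandler(MetricsHandler())
logger.warning("task done", extra={"metric": {"celery.task.runtime_s": 1.3}})
```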

Lessons Learned from Civic Hacking

Carrie Maxwell
Sunday 10 a.m.–1 p.m.

Budgets, housing, and hurricanes: what do all of these things have in common? Civic hacking. These were local problems that were tackled by a village of coding warriors. There's a lot of civic hacking happening in the state of Texas, from giving citizens exposure to city budgeting with the award-winning Budget Party, to tackling Section 8 housing, to the hurricane app effort covered by Forbes.

Managing Machine Learning Experiments

Seb Arnold
Sunday 10 a.m.–1 p.m.

Managing experimental results in machine learning can be a daunting task. Researchers and practitioners often try a variety of algorithms, hyper-parameters, and pre-processing techniques, each resulting in different outcomes. Tracking and analyzing each of these outcomes is a burden, further amplified when dealing with multiple collaborators and several compute nodes. In this presentation I will share my experience of managing over 1,200 experimental results, run in parallel on 8 compute nodes with 5 collaborators over a time span of 6 months. I will focus on the usage of the [randopt](https://seba-1511.github.io/randopt/) package for experiment management and visualization. Specifically, I will introduce the typical randopt workflow, which consists of experiment creation, hyper-parameter selection, and results visualization. Randopt is an [open-source](https://github.com/seba-1511/randopt) library for experiment management. It is written in pure Python, is dependency-free, and is available on [PyPI](https://pypi.python.org/pypi/randopt). Interactive, web-based experiment reports are generated via the built-in command line utility, and a programmatic API is also available. It is compatible with all Python packages, including PyTorch, TensorFlow, scikit-learn, and numpy/scipy. While randopt was developed with machine learning in mind, its agnosticism with respect to the nature of the experiments makes it suitable for general-purpose scientific experiment management.
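The basic workflow looks like this (a sketch following the randopt README; sampler and attribute names are from memory and may differ slightly):

```python
import randopt as ro

# Define an experiment with a sampling strategy per hyper-parameter.
e = ro.Experiment('quadratic', {'alpha': ro.Gaussian(mean=0.0, std=1.0)})

for _ in range(100):
    e.sample('alpha')          # draw a candidate value
    result = e.alpha ** 2      # run the "experiment"
    e.add_result(result)       # persist result + hyper-parameters as JSON
```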

MFP: Making music with Python

Bill Gribble
Sunday 10 a.m.–1 p.m.

MFP (Music For Programmers) is an application that lets users make music by creating a dataflow diagram called a "patch" (really a program) that generates or processes sound. MFP is strongly inspired by graphical patching languages such as Max/MSP and Pure Data, but is a completely new implementation written in Python with C extensions. Using MFP, you can quickly start making sound, processing the inputs and outputs of other audio programs, and interfacing with MIDI devices and control surfaces by drawing diagrams. But you also have access to Python data and libraries from within patches, which makes it possible to write programs that have nothing to do with audio, or to bring in Python's power to handle files, strings or whatever, or to write extensions to MFP itself.

Open Source Metrics at Twitter: A USF Capstone Project

Remy DeCausemaker
Sunday 10 a.m.–1 p.m.

Through a Capstone Program partnership between the Open Source Program at Twitter, the University of San Francisco Computer Science Department, and members of the Linux Foundation's CHAOSS Community, undergraduate students researched and developed numerous proof-of-concept tools to gather and visualize metrics that assess open source community health.

Practical Sphinx

Carol Willing
Sunday 10 a.m.–1 p.m.

Each member of your project team uses something different to document their work --- reStructuredText, Markdown, and Jupyter Notebooks. How do you combine these into useful documentation for your project's users? Sphinx and friends to the rescue! Learn how to integrate documentation into your everyday development workflow, apply best practices, and use modern development tools and services, like Travis CI and Read the Docs, to create engaging and up-to-date documentation which users and contributors will love.
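One possible `conf.py` arrangement (circa Sphinx 1.x, assuming the nbsphinx and recommonmark packages are installed; newer Sphinx versions register Markdown support differently):

```python
# conf.py
extensions = [
    'sphinx.ext.autodoc',
    'nbsphinx',  # renders Jupyter Notebooks as pages
]
source_parsers = {
    '.md': 'recommonmark.parser.CommonMarkParser',  # Markdown support
}
source_suffix = ['.rst', '.md', '.ipynb']
```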

PyBites Code Challenges

Bob Belderbos, Julian Sequeira
Sunday 10 a.m.–1 p.m.

Code challenges work. We hosted [47 Python challenges last year](https://pybit.es/pages/challenges.html) and we have received amazing feedback. Not only do we get [amazing PR submissions](https://pybit.es/guest-telegram-python-chatbot.html), we hear of people stretching beyond what they thought themselves capable of. As Sean Connery said: “There is nothing like a challenge to bring out the best in man,” and we're in the business of bringing this to the Python world. We just launched our [Code Challenges Platform](https://codechalleng.es), and more than 500 GitHub users jumped on it in the first 2 weeks. We ran a live coding workshop at [Alicante University](https://pybit.es/alicante-pychallengeday.html), where we learned about the typical hurdles people need to overcome to start leveraging the power of Python. We think it would be nice to further support this initiative with an attractive and data-driven poster about our project.

Python and Windows C++ desktop app: how we made them the best friends

Tomas Danek, Lukas Kucera
Sunday 10 a.m.–1 p.m.

Have you ever dreamed of live peeking and hacking inside a big C++ desktop application? Controlling it with simple and elegant Python code? That's what we do in our test automation team at Avast Software. When we were thinking about test automation for software which protects hundreds of millions of users worldwide, we wanted our tests to be as stable as possible. That's why we targeted the internals of the application. Boost.Python allows us to easily export the internal C++ interfaces of our application so they can be controlled with Python. This approach lets us automatically create a binary that is importable by Python like a standard Python module and natively calls the C++ code of the application. It provides us with the best of both worlds:

* Python allows easy and fast development of tests
* Invoking the code of our C++ application directly reduces test stability problems
* We are directly at the core; no extra layers/testing frameworks needed
* No need to beg for some extra code in the application: we just use it as it is
* We can script our Python tests, as well as interactively control the application

In this poster, we show the basic principles of this architecture and present guiding steps for those who would like to start leveraging convenient Python in the cruel C++ world.

Python Developers Survey 2017: Findings

Dmitry Filippov
Sunday 10 a.m.–1 p.m.

Want to know about the latest trends in the Python community and see the big picture of how things have changed over the last few years? Interested in the results of the latest official Python Developers Survey 2017, which was supported by the Python Software Foundation and gathered responses from more than 10,000 Python developers? Come learn about the most popular types of Python development; trending frameworks, libraries, and tools; additional languages being used by Python developers; Python version usage statistics; and many other insights from the world of Python. All derived from actual research: the Python Developers Survey 2017, which collected responses from over 10,000 Python developers and was organized in partnership between the Python Software Foundation and JetBrains.

Python for Passwords: Diceware

Megan Speir
Sunday 10 a.m.–1 p.m.

A Beginner's Guide to Crypto Magic ⚡️ Welcome to Professor Speir's crypto class for first years. Passwords are an integral part of our lives, both in the Wizarding World of Technology™️ and for everyday consumers of the internet. In this lesson, we will learn a fun and safe methodology for creating passwords for humans that also contain a great deal of entropy. Topics include:

- Why are passwords so difficult?
- Password best practices.
- How can we create good passwords with Python?
  - Diceware: A History
  - Diceware: The PyPI Package
- It all comes down to being random: how `random.SystemRandom` works.

I will have a demo application built where people can test generating such a password, taking the experience from the poster to the attendee.
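A minimal sketch of the demo's core (the six-word list is a toy stand-in for a real Diceware wordlist):

```python
import random

# SystemRandom draws from os.urandom: unpredictable, unseedable,
# and therefore suitable for passwords.
rng = random.SystemRandom()

def diceware(wordlist, n_words=6):
    """Pick n words uniformly at random, Diceware-style."""
    return " ".join(rng.choice(wordlist) for _ in range(n_words))

words = ["correct", "horse", "battery", "staple", "python", "poster"]
print(diceware(words))
```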

Reproducible environments for reproducible results

Austin Macdonald, Bihan Zhang
Sunday 10 a.m.–1 p.m.

Trustworthy results require reproducibility, which begins with an environment that won’t change under your feet. A truly stable environment requires a guarantee of dependency immutability and the ability to roll back if problems arise. [Pulp](https://pulpproject.org/) is open source, written in Python, and can be used to fetch, upload, organize, and distribute software packages. With Pulp you can host and manage multiple PyPI-like instances, called repositories, which are versioned and contain any packages you choose. These repositories can range from a mirror to a carefully curated [known good set](https://packaging.python.org/glossary/#term-known-good-set-kgs), and they can be promoted through your software development life cycle. You can fetch packages from external repositories, upload private packages, and create a pull-through cache of the Python Package Index, all while keeping fine control from development to production. Each Pulp repository can act as a package index that plays nice with pip. Pulp's plugin architecture enables users to manage RPM packages, Debian packages, Docker containers, and ISOs all in one place. View our source code on [GitHub](https://github.com/pulp/pulp), or read more about us in our [documentation](https://docs.pulpproject.org/en/3.0/nightly/). Come by and talk Python workflows, learn how Pulp can help you, and tell us your use cases.

Running the #1 Brazilian Telegram Bot on a Raspberry Pi using Python

Gabriel Ferreira
Sunday 10 a.m.–1 p.m.

Telegram Messenger is an instant messaging app with a well-documented chatbot API and a great number of users in multiple countries. Out of a need to save some time, I learned to develop chatbots to automate tasks. A server is needed to run them 24/7, so questions arise:

- How do I develop them?
- Where do I run them?
- How much will it cost?

In this poster I'll show how I'm running a few Telegram chatbots as cheaply as possible, making it possible to achieve their goals without spending lots of money.
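A minimal bot in this style (sketched with python-telegram-bot's 2018-era API, where handlers receive `(bot, update)`; the token placeholder comes from @BotFather):

```python
from telegram.ext import CommandHandler, Updater

def start(bot, update):
    update.message.reply_text("Hello from a Raspberry Pi!")

updater = Updater("BOT-TOKEN")  # placeholder token
updater.dispatcher.add_handler(CommandHandler("start", start))
updater.start_polling()  # polling: no inbound ports exposed on the Pi
updater.idle()
```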

Spfy: analyzing E.coli genomes as graph data

Kevin K Le
Sunday 10 a.m.–1 p.m.

Whole genome sequencing isn't only for humans; it also plays a key role in our understanding of bacteria. Spfy uses WGS data to predict traditional lab results for E.coli genomes. The results are then stored in a linked graph for identifying and comparing different subtypes. The entire platform is packaged as a web app, with the analysis modules written in Python (plus a bit of R), and is part of a larger open-source initiative by the National Microbiology Lab of Canada.

Supervised and Unsupervised Machine Learning of Electroluminescent Images of Photovoltaic Modules

Ahmad Maroof Karimi, Justin Fada, Roger French
Sunday 10 a.m.–1 p.m.

Electroluminescence (EL) is a process in which a material emits light when an electric current is passed through it. In this method, electricity is passed through photovoltaic (PV) modules, and the EL light emitted from the solar cells is captured by an infrared-sensitive camera. EL images are useful for characterizing the electrical properties of PV modules based on the intensity of light in the images. The goal of the project is to build an automated pipeline for supervised classification and unsupervised clustering of EL images. The motivation behind EL image processing is to study the effect of degradation in electrical properties based on physical appearance captured by the images. To study PV module degradation, EL images of crystalline silicon PV solar panels were captured under multiple test conditions at periodic intervals. Damp heat and thermal cycling cause corrosion and cracks, respectively, which can be seen in an EL image as regions of dark areas. Crack orientation and corrosion thickness are correlated with resistive losses, which cause EL images to have lower light intensity in affected areas. This work is part of our US Dept. of Energy SunShot project, “MLEET”. To enable in-place analytics we store all datasets and results from different sources in Hadoop with an HBase NoSQL database, and we integrate it with Python using the happybase module. For feature extraction and machine learning from these EL images, we use scipy, sklearn, and opencv.
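The HBase integration amounts to a few happybase calls (a sketch; the table, row-key, and column names here are invented for illustration):

```python
import happybase

connection = happybase.Connection('hadoop-master')
table = connection.table('el_images')

# Fetch the extracted features for one module image.
row = table.row(b'module-42|2017-06-01')
print(row.get(b'features:crack_area'))
```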