Tutorials

<center><h2><b>Tutorial Registration is now open!</b></h2></center> <center> <a href="/2018/registration/register/" class="btn">Register Now!</a> </center>

API-Driven Django

Philip James
Thursday 9 a.m.–12:20 p.m. in Room Three

As the web continues to evolve, the demand for data-driven backends matched with rich frontend experiences grows every day. Django comes with a robust templating system and rendering engine, but more and more web applications using Django focus on its API capabilities. What if we could have the best of both worlds? What if we could use Django and django-rest-framework to write views that let us prototype quickly using the Django templating system, and have those same views return API responses to rich clients? In this tutorial, we’ll build a sample data collection and display web application, taking advantage of the ways Django and django-rest-framework work together. The end result will be a web application you could adapt for many kinds of data collection needs, and you’ll come away knowing how to get a rich API and a frontend prototype out of just one Django view.

A Python-flavored Introduction to Containers And Kubernetes

Ruben Orduz
Thursday 9 a.m.–12:20 p.m. in Room Five

Containers have more or less taken over the world of application deployment, from web APIs to mobile endpoints. They have become the currency, the "table stakes", the de facto application deployment unit. Their rise to the fore has brought about a whole host of use cases that weren't practical or accessible in the world of "classic" infrastructure and virtualization paradigms. Containers have also made application deployment closer and more accessible to developers. But as use cases and deployment styles multiplied and adoption grew exponentially, a new set of problems began to surface: how do you manage the ever-growing number of containers in a deployment? How do you make sure containers have the right resources, are deployed to the right machine, and run with the correct parameters? How do you scale in and out without disruption? How do you make sure, in a fleet of X containers, that they’re all running and in a healthy state? Enter Kubernetes, initially developed internally by Google to replace their own complex container orchestration and management framework. It had to meet all the stringent standards and mind-boggling scale that Google operates at, but from the get-go an effort was made to make the learning curve and developer experience as approachable as possible. At a certain point the creators made the case to Google to release Kubernetes to the open source community -- a crucial decision that has helped “k8s” (as it’s commonly known) reach rock-star levels of fame and mindshare, not just in the FOSS community but across industries and businesses, from small operations to gigantic multinational corporations with thousands of deployments.

Beyond Django Basics

Shauna Gordon-McKeon
Thursday 1:20 p.m.–4:40 p.m. in Room Three

Finished with the official Django getting started guide, and not sure what to do next? This tutorial has you covered. We'll extend the blog built in the official guide, using a variety of slightly more advanced Django features. Topics to be covered include: extending the in-built user model, using the in-built login system, enhancing forms, using view mixins and overriding view methods, and changing up your database backend. With each extension, we'll talk about not just how to use these features but also *why* you'd want to use them. We'll conclude by talking about other Django features you may want to learn about as you grow more proficient with this versatile framework.

Build-a-GitHub-Bot Workshop

Mariatta Wijaya
Thursday 9 a.m.–12:20 p.m. in Room Seven

GitHub provides a great platform for collaborating. You can take it to the next level by creating custom GitHub bots. By delegating some of the chores to a bot, you get to spend more time developing your project and collaborating with others. Learn how to automate your workflow by building a personal GitHub assistant for your own project. We'll use libraries called `gidgethub` and `aiohttp` to write a GitHub bot that does the following:

- Greet the person who created an issue in your project.
- Say thanks when a pull request has been closed.
- Apply a label to issues or pull requests.
- Give a thumbs-up reaction to comments **you** made (becoming your own personal cheer squad).

The best part is, you get to do all of the above using Python 3.6! F-strings included!

Build a Search Engine with Python + Elasticsearch

Julie Qiu
Wednesday 9 a.m.–12:20 p.m. in Room Four

One of the most common actions that we take when visiting any website is search. A common service that powers search for many sites is Elasticsearch - but what makes it so powerful? What can you do with Elasticsearch that you can’t with a regular database? This tutorial starts with an introduction to Elasticsearch architecture, including what makes it great for search and not so great for other use cases. We will then build an application together with a search engine powered by Elasticsearch. We will also discuss how to optimize search queries and scale as the volume of data increases.

Code Your Heart Out: Beginning Python for Human People with Feelings

Melanie Crutchfield
Wednesday 9 a.m.–12:20 p.m. in Room Two

This tutorial is for people who are __brand new to Python__. It's for people with curiosity to feed, anxiety to overcome, and worlds to change. It's for people named Edna. (And others not named Edna.) During this tutorial you'll be encouraged to __bring your whole self to learning__. We'll start with the very basics of Python, keeping your fingers on the keyboard to gain as much practice as possible. Between strings, functions, and other fun Python-y things, we'll discuss learning deeply, __nourishing our brains__, and boosting happiness with science. No prior experience required; come just as you are. __This is about being a whole person__. It's about learning Python, because Python is really cool. It's also about staying afloat. Being productive. Focusing. It's about finding joy in the error codes. Come play. It'll be awesome.

Complexity Science

Allen Downey
Thursday 9 a.m.–12:20 p.m. in Room Four

Complexity Science is an approach to modeling systems using tools from discrete mathematics and computer science, including networks, cellular automata, and agent-based models.  It has applications in many areas of natural and social science. Python is a particularly good language for exploring and implementing models of complex systems.  In this tutorial, we present material from the draft second edition of *Think Complexity*, and from a class we teach at Olin College.  We will work with random networks using NetworkX, with cellular automata using NumPy, and we will implement simple agent-based models.
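As a taste of the cellular automata mentioned above, here is a minimal sketch of an elementary cellular automaton (Rule 30) using plain Python lists; the tutorial itself implements these with NumPy, so this is only an illustration of the idea:

```python
# Rule 30 elementary cellular automaton with plain Python lists.
# (Illustrative sketch; the tutorial uses NumPy for this.)

RULE = 30  # Wolfram code: maps each 3-cell neighborhood to a new state


def step(cells):
    """Apply one Rule 30 update with fixed zero boundaries."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]


def evolve(width, steps):
    """Start from a single live cell in the middle and evolve."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history


if __name__ == "__main__":
    for row in evolve(31, 15):
        print("".join("#" if c else "." for c in row))
```

Despite the trivial update rule, the output quickly develops the chaotic triangular patterns that make Rule 30 a classic example in complexity science.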

Docker for Data Science

Aly Sivji, Joe Jasinski, tathagata dasgupta (t)
Thursday 1:20 p.m.–4:40 p.m. in Room Five

Jupyter notebooks simplify the process of developing and sharing Data Science projects across groups and organizations. However, when we want to deploy our work into production, we need to extract the model from the notebook and package it up with the required artifacts (data, dependencies, configurations, etc) to ensure it works in other environments. Containerization technologies such as Docker can be used to streamline this workflow. This hands-on tutorial presents Docker in the context of Reproducible Data Science - from idea to application deployment. You will get a thorough introduction to the world of containers; learn how to incorporate Docker into various Data Science projects; and walk through the process of building a Machine Learning model in Jupyter and deploying it as a containerized Flask REST API.

Down the rabbit hole. A 101 on reproducible workflows with Python

Tania Sanchez Monroy
Wednesday 9 a.m.–12:20 p.m. in Room Seven

There has been a massive interest in reproducible research / data analysis pipelines over the last few years. But... how can I ensure that what I produce as a Python user is reproducible? In this tutorial we'll be taking you on a journey down the rabbit hole of reproducibility. We'll be taking a step-by-step approach to reproducible scientific development in Python. This means you get a crash course on version control, execution environments, testing, and continuous integration, and a guide on how to integrate all of these in your software projects. By the end of the course we hope you will have the necessary tools to make your Python workflows reproducible, whether you're starting a brand-new project or getting one ready to share with the world.

Exploratory Data Visualization with Vega, Vega-Lite, and Altair

Jake VanderPlas
Thursday 1:20 p.m.–4:40 p.m. in Room Six

Exploring a new dataset visually can provide quick intuition into the relationships within the data. There are a few well-developed visualization packages in Python, but they often have very imperative APIs that force the user to focus on the mechanics of the visualization – tick locations, axis limits, legends, etc. – rather than the salient relationships within the data. This tutorial will introduce data visualization with [Altair](http://altair-viz.github.io), a package designed for exploratory visualization in Python that features a declarative API, allowing data scientists to focus more on the data than the incidental details. Altair is based on the [Vega](https://vega.github.io/) and [Vega-Lite](https://vega.github.io/vega-lite/) visualization grammars, and thus automatically incorporates best practices drawn from recent research in effective data visualization. The tutorial will provide an introduction to the Altair package and its API, but more importantly will dive into the core concepts of effective data visualization that can be applied using any visualization package or tool.

Faster Python Programs - Measure, don't Guess

Mike Müller
Wednesday 1:20 p.m.–4:40 p.m. in Room Six

Optimization can often help make Python programs faster or use less memory. Developing a strategy, establishing solid measuring and visualization techniques, and knowing about algorithmic basics and data structures are the foundation of a successful optimization. The tutorial will cover these topics, and examples will give you hands-on experience in how to approach optimization efficiently. Python is a great language, but it can be slow compared to other languages for certain types of tasks. If applied appropriately, optimization may reduce program runtime or memory consumption considerably, but this often comes at a price. Optimization can be time-consuming, and the optimized program may be more complicated. This, in turn, means more maintenance effort. How do you find out if it is worthwhile to optimize your program? Where should you start? This tutorial will help you to answer these questions. You will learn how to find an optimization strategy based on quantitative and objective criteria. You will experience that one's gut feeling about what to optimize is often wrong. The solution to this problem is: "Measure, Measure, and Measure!" You will learn how to measure program run times as well as profile CPU and memory use. There are great tools available, and you will learn how to use some of them. Measuring is not easy because, by definition, as soon as you start to measure, you influence your system. Keeping this impact as small as possible is important, so we will cover different measuring techniques. Furthermore, we will look at algorithmic improvements. You will see that the right data structure for the job can make a big difference. Finally, you will learn about different caching techniques.

## Software Requirements

You will need Python 2.7 or 3.6 installed on your laptop. Python 2.6 or 3.4/3.5 should also work. Python 3.x is strongly preferred. If released, we will use Python 3.7.

### Jupyter Notebook

I will use a Jupyter Notebook for the tutorial because it makes a very good teaching tool. You are welcome to use the setup you prefer, i.e. editor, IDE, or REPL. If you would also like to use a Jupyter Notebook, I recommend `conda` for easy installation. Similarly to `virtualenv`, `conda` allows creating isolated environments, but also allows binary installs for all platforms. There are two ways to install Jupyter via `conda`:

1. Use [Miniconda][1]. This is a small install, and once you have installed it you can use the `conda` command to create an environment: `conda create -n pycon2018 python=3.6`. Now you can change into this environment: `source activate pycon2018`. The prompt should change to `(pycon2018)`. Now you can install Jupyter: `conda install jupyter`.
2. Install [Anaconda][2], and you are ready to go if you don't mind installing lots of packages from the scientific field.

### Working with `conda` environments

After creating a new environment, the system might still work with some stale settings. Even when the command `which` tells you that you are using an executable from your environment, this might actually not be the case. If you see strange behavior using a command-line tool in your environment, run `hash -r` and try again.

### Tools

You can install these with `pip` (or, in the active `conda` environment, with `conda`):

* [SnakeViz][3]
* [line_profiler][4]
* [Pympler][6]
* [memory_profiler][7]

[1]: https://conda.io/miniconda.html
[2]: http://continuum.io/downloads
[3]: http://jiffyclub.github.io/snakeviz/
[4]: https://pypi.python.org/pypi/line_profiler/
[6]: https://pypi.python.org/pypi/Pympler
[7]: https://pypi.python.org/pypi/memory_profiler
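As a small taste of the "measure, don't guess" idea, here is a sketch (not from the tutorial materials) using the standard library's `timeit` module to compare two ways of building a string:

```python
import timeit


def concat(n):
    """Build a string by repeated += (can be quadratic)."""
    s = ""
    for i in range(n):
        s += str(i)
    return s


def join(n):
    """Build the same string with str.join (single pass)."""
    return "".join(str(i) for i in range(n))


if __name__ == "__main__":
    # Measure each variant instead of guessing which is faster.
    for fn in (concat, join):
        t = timeit.timeit(lambda: fn(10_000), number=20)
        print(f"{fn.__name__:>8}: {t:.3f} s")
```

Both functions produce identical output, so only measurement can tell you whether the difference matters for your workload.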

Foundations of Numerical Computing in Python

Scott Sanderson
Wednesday 9 a.m.–12:20 p.m. in Room Six

Python is one of the world's most popular programming languages for numerical computing. In areas of application like physical simulation, signal processing, predictive analytics, and more, engineers and data scientists increasingly use Python as their primary tool for working with large-scale numerical data. Despite this diversity of application domains, almost all numerical programming in Python builds upon a small foundation of libraries. In particular, the `numpy.ndarray` is the core data structure for the entire PyData ecosystem, and the `numpy` library provides many of the foundational algorithms used to power more domain-specific libraries. The goal of this tutorial is to provide an introduction to numpy -- how it works, how it's used, and what problems it aims to solve. In particular, we will focus on building up students' mental model of how numpy works and how **idiomatic** usage of numpy allows us to implement algorithms much more efficiently than is possible in pure Python.

Getting Started with Blockchains and Cryptocurrencies in Python

Amirali Sanatinia
Wednesday 1:20 p.m.–4:40 p.m. in Room Five

Blockchains and cryptocurrencies are getting more popular every day. The rise and wide adoption of cryptocurrencies such as Bitcoin has attracted a lot of attention, from developers to bankers. However, many people are still not very comfortable with the ideas and concepts behind the blockchain, or with the workings of cryptocurrencies such as Bitcoin, and this stops them from entering and exploring the blockchain and cryptocurrency world. In this tutorial, we first explore the cryptographic ideas behind cryptocurrencies, including hashing and public/private key cryptography. This will be followed by the basics of a simplified blockchain. We cover mining, incentives, payment records, ownership, etc. Then we delve into working and playing with a private Bitcoin network, by implementing simple programs in Python to create public/private keys, accounts, and transactions. We further look into services that provide exchange rate data on cryptocurrencies and analyze the data.
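To give a flavor of the "simplified blockchain" idea, here is a hedged sketch (not the tutorial's code) of hash-chained blocks using only `hashlib` and `json` from the standard library:

```python
import hashlib
import json


def block_hash(block):
    """SHA-256 over a canonical JSON encoding of the block."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()


def make_block(prev_hash, payments):
    return {"prev_hash": prev_hash, "payments": payments}


def chain_is_valid(chain):
    """Each block must reference the hash of the block before it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )


# Build a tiny two-block chain (names and amounts are made up).
genesis = make_block("0" * 64, [])
b1 = make_block(block_hash(genesis),
                [{"from": "alice", "to": "bob", "amount": 5}])
chain = [genesis, b1]
```

Because every block commits to the hash of its predecessor, editing any earlier payment record invalidates every later link, which is the core tamper-evidence property the tutorial builds on (real blockchains add mining, signatures, and consensus on top).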

Going Serverless with OpenFaaS, Kubernetes, and Python

Michael Herman
Wednesday 9 a.m.–12:20 p.m. in Room Eight

OpenFaaS (Functions as a Service) is a framework for building serverless, event-driven functions with Docker and Kubernetes. In this tutorial, you'll learn how to build and deploy a full-stack application that uses Flask (client-facing app) along with OpenFaaS to handle background processes.

Intermediate testing with Django: Outside-in TDD and Mocking effectively

Harry Percival
Thursday 1:20 p.m.–4:40 p.m. in Room One

Once developers have got the hang of the basics of testing, problems of applying it in the real world soon start to manifest themselves, and common questions come up:

- What order should I write my tests and code in to avoid wasting time on blind alleys?
- If I'm using Mocks in my tests to avoid external dependencies, how do I avoid getting stuck with unwieldy, unreadable tests that don't actually tell me when things have gone wrong?
- Unit tests vs integration tests vs functional tests: which should I use when, and what are the trade-offs?

In this tutorial we'll work through an example of using an existing Django codebase, adding a new feature, and experimenting with different testing techniques along the way to illustrate the pros and cons of each:

- bottom-up vs outside-in development
- double-loop TDD
- using Mocks to isolate application layers from each other
- "listen to your tests": learning to use ugly or convoluted tests as a signal for improving design

Some familiarity with Django is desirable, although skills learned in other web frameworks are transferable. By the end, you'll be able to go back to your own projects with practical experience, and a new way of thinking about how to optimise your tests for your own circumstances.
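To illustrate the mocking idea in isolation (a framework-free sketch, not tied to the tutorial's Django codebase; the `PaymentGateway` here is hypothetical), the standard library's `unittest.mock` lets you replace an external dependency and assert on the collaboration instead of doing real I/O:

```python
from unittest import mock


class PaymentGateway:
    """A hypothetical application boundary to an external service."""
    def charge(self, amount):
        raise NotImplementedError("talks to an external service")


def checkout(gateway, amount):
    """Application logic under test: charge, then report the result."""
    receipt = gateway.charge(amount)
    return f"charged {amount}, receipt {receipt}"


# In a test, swap in a Mock constrained to the real interface (spec=),
# so a typo like gateway.chrage(...) fails instead of silently passing.
gateway = mock.Mock(spec=PaymentGateway)
gateway.charge.return_value = "r-123"

result = checkout(gateway, 42)
gateway.charge.assert_called_once_with(42)
```

The `spec=` argument is one antidote to the "unwieldy tests that don't tell me when things go wrong" problem the abstract mentions: the mock rejects calls the real object could never receive.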

Introduction to Digital Signal Processing

Allen Downey
Wednesday 1:20 p.m.–4:40 p.m. in Room Four

Spectral analysis is an important and useful technique in many areas of science and engineering, and the Fast Fourier Transform is one of the most important algorithms, but the fundamental ideas of signal processing are not as widely known as they should be. Fortunately, Python provides an accessible and enjoyable way to get started.  In this tutorial, I present material from my book, *Think DSP*, and from a class I teach at Olin College.  We will work with audio signals, including music and other recorded sounds, and visualize their spectrums and spectrograms.  We will synthesize simple sounds and learn about harmonic structure, chirps, filtering, and convolution.
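As a tiny, self-contained illustration of spectral analysis (not from *Think DSP*, which builds far richer tools), a naive discrete Fourier transform written with `cmath` can already locate the frequency of a synthesized sine wave:

```python
import cmath
import math


def dft(samples):
    """Naive O(N^2) DFT; real code would use numpy.fft or scipy."""
    n = len(samples)
    return [
        sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
            for t in range(n))
        for k in range(n)
    ]


# Synthesize one second of a 5 Hz sine sampled at 64 Hz.
rate, freq = 64, 5
signal = [math.sin(2 * math.pi * freq * t / rate) for t in range(rate)]

# The magnitude spectrum peaks at the bin matching the sine's frequency.
spectrum = [abs(c) for c in dft(signal)]
peak = max(range(1, rate // 2), key=spectrum.__getitem__)
```

Here `peak` comes out as 5, recovering the frequency we put in; the Fast Fourier Transform computes the same result in O(N log N).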

Introduction to Python for Data Science

Skipper Seabold
Thursday 9 a.m.–12:20 p.m. in Room Eight

This tutorial introduces users to Python for data science. From data cleaning to model building, we will work through a series of short examples together using some real-world health inspection data. Attendees will have their hands on the keyboard, using the Python standard library and pandas to clean data and scikit-learn to build some models.

Introduction to TDD with Django

Harry Percival
Thursday 9 a.m.–12:20 p.m. in Room One

Over the past few years, automated software testing has moved from being a niche interest to being the default assumption. This tutorial is an introduction to Test-Driven Development (TDD) for the world of web development in Python using the Django framework. The tutorial is suitable for people who are new to either testing, or Django, or both, although some basic working knowledge of Python syntax (or programming in another language) is assumed. Learn about:

- Unit testing and Functional testing
- the Selenium browser automation tool
- Python's unittest standard library module
- Django models, views and templates
- testing front-end and back-end code
- refactoring, using tests
- the unit-test/code cycle, or Red-Green-Refactor, TDD workflow
- and the Testing Goat, Python's unofficial mascot for testing!

Come prepared! You'll need a Python 3.6 virtualenv with Django and Selenium installed. Detailed instructions are provided [here](https://www.obeythetestinggoat.com/book/pre-requisite-installations.html).
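The Red-Green-Refactor cycle mentioned above can be sketched in miniature with the standard library's `unittest` module alone (a made-up example, independent of the tutorial's Django project):

```python
import unittest


def slugify(title):
    """Code written *after* the test below, to make it pass (Green)."""
    return title.lower().replace(" ", "-")


class SlugifyTest(unittest.TestCase):
    # Written first and watched fail (Red), then made to pass (Green);
    # with the test as a safety net, slugify can now be refactored.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Obey the Testing Goat"),
                         "obey-the-testing-goat")


if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

The tutorial applies this same loop at two levels: an outer functional test driving a browser via Selenium, and inner unit tests like the one above.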

Intro to Spatial Analysis and Maps with Python

Christy Heaton
Thursday 9 a.m.–12:20 p.m. in Room Six

In this tutorial, we will introduce Python as a spatial problem solving and data visualization tool. To demonstrate the power of Python for spatial analysis, we will solve a spatial problem and make a beautiful map of our results. Along the way, we will discuss considerations when dealing with spatial data and the wide range of Python tools available for spatial analysis.

Lights Camera Action! Scrape, explore, and model to predict Oscar winners & box office hits

Deborah Hanus, Patricia Hanus, Sebastian Hanus, Veronica Hanus
Wednesday 9 a.m.–12:20 p.m. in Room Three

Using Jupyter notebooks, HTTP requests, BeautifulSoup, NumPy, pandas, scikit-learn, and matplotlib, you’ll predict whether a movie is likely to [win an Oscar](http://oscarpredictor.github.io/) or be a box office hit. We’ll step through the creation of an effective dataset: asking a question your data can answer, writing a web scraper, and answering those questions using nothing but Python libraries and data from the Internet.
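The scraping step boils down to pulling structured fields out of HTML. A hedged, standard-library-only sketch (the tutorial uses requests and BeautifulSoup; the HTML snippet and class names below are made up):

```python
from html.parser import HTMLParser

# A fake fragment standing in for a fetched movie-listing page.
PAGE = """
<table>
  <tr><td class="title">Moonlight</td><td>2016</td></tr>
  <tr><td class="title">The Shape of Water</td><td>2017</td></tr>
</table>
"""


class TitleScraper(HTMLParser):
    """Collect the text of every <td class="title"> cell."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        self.in_title = tag == "td" and ("class", "title") in attrs

    def handle_endtag(self, tag):
        self.in_title = False

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())


scraper = TitleScraper()
scraper.feed(PAGE)
```

BeautifulSoup collapses this whole class into roughly `soup.select("td.title")`, which is why the tutorial reaches for it, but the underlying idea is the same.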

Making Art with Python

Emily Xie
Wednesday 9 a.m.–12:20 p.m. in Room Five

In this workshop, we’ll learn how to make visual art using Processing.py, the Python mode for a powerful visual language library called Processing. This tutorial walks through Processing.py from the ground up: from initial setup & foundational concepts, to the library's core functions, as well as its more advanced features. Topics covered include the coordinate system, shape primitives, lines, stroke, fill, color, mapping, events, and transforms. At the end, we'll break out of the tutorial format and give free rein for attendees to create, tinker, and experiment freely with the framework. You’ll walk away with an art piece of your own original design, as well as a newfound appreciation for Python as a medium for creative expression.

Network Analysis Made Simple: Part I

Eric Ma, Mridul Seth
Thursday 9 a.m.–12:20 p.m. in Room Two

Have you ever wondered about how those data scientists at Facebook and LinkedIn make friend recommendations? Or how epidemiologists track down patient zero in an outbreak? If so, then this tutorial is for you. In this tutorial, which is Part I of a two-part series, we will use a variety of datasets to help you understand the fundamentals of network thinking, with a particular focus on constructing, summarizing, and visualizing complex networks. With this tutorial, you will be well equipped to explore advanced topics (dynamics on graphs, evolving graphs, and network propagation methods) in Part II.
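The friend-recommendation idea can be sketched in a few lines of plain Python using the "common neighbors" heuristic (the tutorial itself works with NetworkX and real datasets; the names below are made up):

```python
# A toy social network as an adjacency dict of friend sets.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol", "dave"},
    "carol": {"alice", "bob"},
    "dave": {"bob"},
}


def recommend(graph, person):
    """Rank non-friends by how many friends they share with person."""
    candidates = set(graph) - graph[person] - {person}
    scored = {other: len(graph[person] & graph[other])
              for other in candidates}
    # Keep only candidates with at least one mutual friend.
    return sorted((o for o in scored if scored[o]),
                  key=lambda o: -scored[o])
```

Here `recommend(friends, "alice")` suggests `"dave"`, since they share the mutual friend `"bob"`; NetworkX packages this and much stronger link-prediction methods behind a graph API.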

Network Analysis Made Simple: Part II

Mridul Seth, Eric Ma
Thursday 1:20 p.m.–4:40 p.m. in Room Two

Daenerys or Jon Snow? Diffusion of news through Twitter? JFK, ORD, or ATL: do these codes look familiar? In this tutorial we build on the fundamentals of the Part I tutorial and look at various applications of network analysis to real-world datasets, like the US Airport Dataset and the Game of Thrones character co-occurrence network, and make a foray into diffusion processes on networks.

Parallel Data Analysis with Dask

Tom Augspurger, James Crist, Martin Durant
Wednesday 1:20 p.m.–4:40 p.m. in Room Three

The libraries that power data analysis in Python are essentially limited to a single CPU core and to datasets that fit in RAM. Attendees will see how dask can parallelize their workflows, while still writing what looks like normal Python, NumPy, or pandas code. Dask is a parallel computing framework, with a focus on analytical computing. We'll start with `dask.delayed`, which helps parallelize your existing Python code. We’ll demonstrate `dask.delayed` on a small example, introducing the concepts at the heart of dask like the *task graph* and the *schedulers* that execute tasks. We’ll compare this approach to the simpler, but less flexible, parallelization methods available in the standard library like `concurrent.futures`. Attendees will see the high-level collections dask provides for writing regular Python, NumPy, or pandas code that is then executed in parallel on datasets that may be larger than memory. These high-level collections provide a familiar API, but the execution model is very different. We'll discuss concepts like the GIL, serialization, and other headaches that come up with parallel programming. We’ll use dask’s various schedulers to illustrate the differences between multi-threaded, multi-process, and distributed computing. Dask includes a distributed scheduler for executing task graphs on a cluster of machines. We’ll provide each person access to their own cluster.
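For reference, here is the standard-library side of that comparison: a sketch of the classic `inc`/`add` example run with `concurrent.futures` (where `dask.delayed` would instead build a lazy task graph and hand it to a scheduler):

```python
from concurrent.futures import ThreadPoolExecutor


def inc(x):
    return x + 1


def add(x, y):
    return x + y


# concurrent.futures executes eagerly: submit each task, then block on
# .result(). dask.delayed would record the same structure as a task
# graph and let a scheduler decide how and where to run it.
with ThreadPoolExecutor(max_workers=2) as pool:
    a = pool.submit(inc, 1)   # the two inc calls can run in parallel
    b = pool.submit(inc, 2)
    total = add(a.result(), b.result())
```

The eager model works for simple fan-out/fan-in shapes like this, but the explicit task graph is what lets dask scale the same code to larger-than-memory collections and clusters.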

Practical API Security

Adam Englander
Wednesday 1:20 p.m.–4:40 p.m. in Room Eight

With the dominance of Mobile Apps, Single Page Apps for the Web, and Micro-Services, we are all building more APIs than ever before. Like many other developers, I had struggled with finding the right mix of security and simplicity for securing APIs. Some standards from the IETF have made it possible to accomplish both. Let me show you how to utilize existing libraries to lock down your API without writing a ton of code. In this tutorial, you will learn how to write a secure API with future-proof security utilizing JOSE. JOSE is a collection of complementary standards: JWT, JWE, JWS, JWA, and JWK. JOSE is used by OAuth, OpenID, and others to secure communications between APIs and consumers. Now you can use it to secure your API.
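To make the JWS idea concrete, here is a hand-rolled, conceptual sketch of an HS256-signed token using only the standard library. This is for illustration of the mechanism only; as the tutorial advises, production code should use an existing vetted JOSE library rather than rolling its own:

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> bytes:
    """Base64url without padding, as JOSE specifies."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")


def sign(claims: dict, key: bytes) -> bytes:
    """Produce header.payload.signature, JWS compact style."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(sig)


def verify(token: bytes, key: bytes) -> bool:
    """Recompute the HMAC and compare in constant time."""
    signing_input, _, sig = token.rpartition(b".")
    expected = b64url(hmac.new(key, signing_input,
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)


token = sign({"sub": "api-client-1"}, b"secret-key")
```

Anyone holding the key can verify the token was not tampered with; what the libraries add on top (algorithm allow-lists, expiry claims, key management via JWK) is exactly the hard part this sketch omits.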

Python Epiphanies

Stuart Williams
Wednesday 1:20 p.m.–4:40 p.m. in Room Two

This tutorial is for those who've been using Python for a while and would consider themselves at an intermediate level but are looking to get to the next level. We'll explore core language features, look a bit under the hood, and learn not to be too afraid of bytecode, monkey patching, decorators, and metaclasses. In many ways Python is very similar to other programming languages. However, in a few subtle ways it is quite different, and many software developers new to Python, after their initial successes, hit a plateau and have difficulty getting past it. Others don't hit or perceive a plateau, but still find some of Python's features a little mysterious or confusing. This tutorial will help deconstruct some common incorrect assumptions about Python. If in your use of Python you sometimes feel like an outsider, like you're missing the inside jokes, like you have most of the puzzle pieces but they don't quite fit together yet, or like there are parts of Python you just don't get, this may be a good tutorial for you.
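Looking under the hood is less scary than it sounds; for instance, the standard library's `dis` module will happily show you the bytecode behind any function (a small sketch in the spirit of the tutorial, not its actual material):

```python
import dis


def add(a, b):
    return a + b


# Inspect the bytecode instructions the interpreter actually runs.
ops = [instr.opname for instr in dis.get_instructions(add)]
print(ops)  # argument loads, the add operation, and a return
```

Each opname corresponds to one step of the interpreter loop; seeing, say, the `LOAD_FAST` instructions that fetch `a` and `b` demystifies a lot about how Python executes your code (the exact instruction set varies between Python versions).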

Python functions as a first class topic

Luciano Ramalho
Thursday 1:20 p.m.–4:40 p.m. in Room Seven

This tutorial is a wide and deep exploration of Python functions, the most important abstraction tool in the language. We will precisely define and practice the concepts of first class functions, higher-order functions, and closures. We will apply these ideas in practical exercises and use them to simplify some classic design patterns. With that solid foundation, we cover Python's `@decorator` feature, and the `functools` and `operator` packages which support functional programming idioms. We will also see how Python's flexible parameter declaration and argument handling functionality lets us create APIs that are a joy to use and able to evolve while remaining backward compatible.
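The core concepts named above fit in one small sketch (an illustrative example, not the tutorial's own exercises): a decorator is just a higher-order function that takes a function and returns a closure.

```python
import functools


def logged(func):
    """Higher-order function: takes a function, returns a closure."""
    @functools.wraps(func)  # preserve func's name and docstring
    def wrapper(*args, **kwargs):
        wrapper.calls += 1  # state carried alongside the closure
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper


@logged
def greet(name):
    return f"Hello, {name}!"
```

Because functions are first-class objects, `@logged` is nothing more than `greet = logged(greet)`; `functools.wraps` keeps the wrapped function's identity intact, which matters as soon as decorated APIs need to stay introspectable.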

Statistics and probability: your first steps on the road to data science

Chalmer Lowe
Thursday 1:20 p.m.–4:40 p.m. in Room Four

An introduction to statistics and probability geared toward enabling attendees to understand the capabilities and limitations of statistics and probability and to help them implement calculations in their projects. Where possible/feasible, attendees will build their own tools to help them grasp the underlying concepts. In addition, attendees will be introduced to the pre-built tools in world-class Python and data science libraries to help them capitalize on the efficiencies and utility that those libraries offer.
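As a taste of the build-your-own-tools approach, here is a hedged sketch (the data and the dice example are made up, not the tutorial's) using only the standard library's `statistics` and `random` modules:

```python
import random
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)     # 5.0
stdev = statistics.pstdev(data)  # population standard deviation: 2.0

# Estimating a probability by simulation: chance of at least one six
# in four rolls of a die (exact answer: 1 - (5/6)**4 ~ 0.5177).
random.seed(0)  # fixed seed so the run is reproducible
trials = 100_000
hits = sum(
    any(random.randint(1, 6) == 6 for _ in range(4))
    for _ in range(trials)
)
estimate = hits / trials
```

Comparing the simulated `estimate` against the closed-form answer is a useful habit: it validates both your understanding of the probability and your implementation of the simulation.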

Using List Comprehensions and Generator Expressions For Data Processing

Trey Hunner
Wednesday 1:20 p.m.–4:40 p.m. in Room One

Creating one list out of another list is a very common thing to do in Python, so common that Python includes a special construct just for this purpose: list comprehensions. We'll get hands-on experience using list comprehensions, set comprehensions, and dictionary comprehensions during this tutorial. We'll also learn how and when we can slightly tweak our comprehensions to turn them into more performant generator expressions. We will learn some tricks for figuring out which of our "for" loops can be rewritten as comprehensions and which cannot. We will focus heavily on **code readability and code clarity** and we'll discuss when comprehensions help readability and when they hurt. All new skills will be acquired through practice. We'll work through many exercises both individually and as a group. All students will also receive a cheat sheet which can be used for guidance during future comprehension-writing journeys. A laptop with Python installed is required for this workshop.
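The constructs above can be previewed in a few lines (a quick sketch, independent of the workshop's exercises):

```python
# List comprehension: builds the whole list in memory at once.
squares = [n * n for n in range(10) if n % 2 == 0]

# The same logic as a generator expression: lazy, one item at a time,
# which matters when the input is large (or infinite).
lazy_squares = (n * n for n in range(10) if n % 2 == 0)

# Set and dictionary comprehensions share the same shape.
unique_lengths = {len(word) for word in ["list", "set", "dict"]}
lengths = {word: len(word) for word in ["list", "set", "dict"]}
```

Note that swapping `[...]` for `(...)` is the whole syntactic difference between the eager and lazy versions; deciding which to use, and whether a comprehension is clearer than the loop it replaces, is the judgment this tutorial practices.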

Using pandas for Better (and Worse) Data Science

Kevin Markham
Thursday 1:20 p.m.–4:40 p.m. in Room Eight

The pandas library is a powerful tool for multiple phases of the data science workflow, including data cleaning, visualization, and exploratory data analysis. However, proper data science requires careful coding, and pandas will not stop you from creating misleading plots, drawing incorrect conclusions, ignoring relevant data, including misleading data, or executing incorrect calculations. In this tutorial, you'll perform a variety of data science tasks on a handful of real-world datasets using pandas. With each task, you'll learn how to avoid either a pandas pitfall or a data science pitfall. By the end of the tutorial, you'll be more confident that you're using pandas for good rather than evil! Participants should have a working knowledge of pandas and an interest in data science, but are not required to have any experience with the data science workflow. Datasets will be provided by the instructor.

Web Applications, A to Z

Moshe Zadka
Wednesday 9 a.m.–12:20 p.m. in Room One

Modern web applications have gotten complicated -- backend logic, front-end logic, storage and deployment options abound. This tutorial will take a tour of all the pieces that go into making a web application, and show how they all fit together -- using specific choices, specific examples, and a lot of hands-on programming, to give participants a chance to actually write a web application: all the parts. We will use some external third-party services, but care is taken to fit into their free tiers.

Workflow Engines Up and Running

Ian Zelikman, Austin Hacker
Wednesday 1:20 p.m.–4:40 p.m. in Room Seven

Join us for an introductory, hands-on tutorial on Python-based workflow engines. You will get to create, run, and monitor a real-time workflow job with two popular Python-based workflow engines: Airflow and Luigi. This tutorial is aimed at developers who run long-lived batch jobs -- whether with one of the engines we cover or with a different tool -- and who would like more in-depth, hands-on experience with these tools.