Posters
What Broke When Our AI Agents Got Autonomy, and How We Secured Them with Dual LLMs
Today’s AI agents fetch untrusted data, call APIs, and modify systems, which increases the consequences of treating all inputs as trustworthy.
This poster presents the Dual LLM pattern, an architectural approach that separates the handling of untrusted data from privileged reasoning. Instead of relying on a single model for both execution and validation, the system uses a quarantined LLM to process untrusted inputs under strict constraints, while a privileged LLM performs higher-level reasoning without direct exposure to unsafe content.
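A minimal sketch of how such an orchestrator can keep the two models apart, with stubbed model calls standing in for real API clients (all names here are illustrative, not the poster's actual code):

```python
# Minimal sketch of the Dual LLM pattern with stubbed model calls.
# The two `call_*_llm` functions are placeholders for real API clients.

def call_quarantined_llm(untrusted_text: str) -> str:
    """Summarizes untrusted input; has NO tool access or privileges."""
    return f"summary({len(untrusted_text)} chars)"  # stub

def call_privileged_llm(prompt: str) -> str:
    """Plans actions and may call tools; never sees raw untrusted text."""
    return f"plan based on: {prompt}"  # stub

def handle_email(raw_email: str) -> str:
    variables: dict[str, str] = {}
    # 1. Quarantined model processes the untrusted content in isolation.
    variables["$EMAIL_SUMMARY"] = call_quarantined_llm(raw_email)
    # 2. Privileged model reasons over a symbolic reference only.
    plan = call_privileged_llm("Draft a reply to the email in $EMAIL_SUMMARY")
    assert raw_email not in plan  # untrusted text never reaches the planner
    # 3. The controller substitutes the variable outside any LLM context.
    return plan.replace("$EMAIL_SUMMARY", variables["$EMAIL_SUMMARY"])
```

The key property is that the privileged model only ever sees the symbolic variable name; substitution of the quarantined output happens in plain Python, outside any prompt.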
Using real-world examples, such as processing emails and web content containing hidden adversarial prompts, the poster shows how dual-LLM systems can safely summarize untrusted data while limiting the risk of prompt injection. It also highlights the trade-offs of this approach, including added latency, orchestration complexity, and cases where over-constraining models can reduce usefulness.
The goal of this poster is to share practical lessons from building agentic systems in Python and to spark discussion around when dual LLM patterns make sense, where they fall short, and how they can be applied responsibly and safely in real systems built with tools like FastAPI, LangChain, or similar frameworks.
Vocals vs. Vapor: Can Python Catch the Fake?
The music industry is increasingly threatened by AI-generated vocal deepfakes that can mimic any artist's voice with alarming accuracy. An AI-generated track featuring fake Drake and The Weeknd vocals hit 600,000 streams before removal, one incident among thousands. Tools like RVC and GPT-SoVITS now produce convincing vocal clones in hours, leaving musicians vulnerable to unauthorized AI performances and streaming platforms without reliable detection methods. Our poster demonstrates how Python's scientific computing ecosystem can help distinguish authentic performances from AI-generated imitations.
By using Python libraries like Librosa and PyTorch, we analyze audio at a level beyond human perception, detecting subtle synthesis artifacts that reveal synthetic origins.
Imagine streaming platforms automatically flagging unauthorized AI covers, or artists having tools to verify their voice hasn't been cloned. Our approach uses spectral analysis to capture vocal characteristics - from vibrato patterns to formant transitions to micro-variations in breath sounds - and siamese networks to learn each artist's unique vocal fingerprint, identifying patterns that distinguish real performances from convincing fakes.
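In the actual pipeline these features are extracted with librosa; as a stdlib-only illustration of one such feature, here is a sketch that estimates vibrato rate from a pitch contour (a synthetic contour stands in for a pitch tracker's output, and the defaults are assumptions):

```python
import math

def vibrato_rate(pitch_track, frame_rate, fmin=3.0, fmax=9.0):
    """Estimate vibrato frequency (Hz) from a pitch contour by finding the
    autocorrelation peak within a plausible vibrato range (fmin..fmax Hz).
    In a real pipeline the contour would come from a pitch tracker such as
    librosa's; here we operate on any list of per-frame pitch values."""
    n = len(pitch_track)
    mean = sum(pitch_track) / n
    x = [p - mean for p in pitch_track]          # remove the carrier pitch
    lo = max(1, int(frame_rate / fmax))          # shortest lag to consider
    hi = min(n // 2, int(frame_rate / fmin))     # longest lag to consider
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, hi + 1):
        # normalized autocorrelation at this lag
        val = sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)
        if val > best_val:
            best_lag, best_val = lag, val
    return frame_rate / best_lag
```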
The poster will bridge audio signal processing with practical content verification, providing an interactive demonstration of how machine learning can protect artist identity in an era of generative AI.
Exposing Greenwashing: Satellite ML for Carbon Credit Verification
The global carbon market is projected to reach $1 trillion by 2030, yet it remains plagued by a fundamental credibility crisis - studies show 84% of offset projects fail to deliver promised environmental benefits (that's potentially $840 billion in phantom climate action). Traditional verification relies on infrequent site visits and self-reported data, creating blind spots where fraud thrives undetected. This poster presents a Python-based framework for independently auditing carbon credit projects using satellite imagery, drone data and machine learning, exposing risks that conventional methods miss.
Our pipeline processes Landsat-8 and Sentinel-2 imagery using rasterio and geemap, calculating vegetation indices (NDVI/SAVI) to monitor forest health across project boundaries. Using scikit-learn's Isolation Forest, we detect anomalous claims without needing labeled fraud examples - the algorithm identifies carbon stock reports that deviate from satellite-observed patterns, flagging outliers for investigation rather than requiring us to define 'fraud' in advance. Time-series analysis with the BFAST algorithm identifies sudden deforestation events and leakage patterns - where "protected" forests simply shift destruction to neighboring areas.
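The core comparison behind the over-crediting check can be sketched without the geospatial stack (the real pipeline uses rasterio and scikit-learn's Isolation Forest; the 2x threshold and field layout here are illustrative):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from band reflectances."""
    return (nir - red) / (nir + red)

def flag_overcrediting(reported_gain_pct, ndvi_series, threshold=2.0):
    """Flag a project when claimed sequestration growth exceeds the
    satellite-observed NDVI change by more than `threshold`x.
    (The real pipeline uses Isolation Forest over many such features;
    this is only the core ratio check.)"""
    observed_gain_pct = 100 * (ndvi_series[-1] - ndvi_series[0]) / abs(ndvi_series[0])
    if observed_gain_pct <= 0:
        # any claimed gain over shrinking vegetation is suspicious
        return reported_gain_pct > 0
    return reported_gain_pct / observed_gain_pct > threshold
```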
Applied to certified REDD+ projects, the framework identifies over-crediting risks where claimed sequestration exceeds observed vegetation changes by 2-3x. It detects leakage signatures where deforestation increases in buffer zones despite reported project success. By comparing reported stocks against satellite-derived indices while monitoring adjacent areas, the pipeline addresses key integrity challenges.
Visitors can explore an interactive demo comparing claimed vs. observed forest cover for real carbon projects, see anomaly detection results on live satellite feeds, and access the full pipeline for their own investigations. With open data and open-source tools, anyone - from journalists to researchers to climate advocates and data scientists - can independently verify what carbon projects actually deliver.
Where the Sidewalk Ends: Data-Driven Urban Safety
Pedestrian fatalities are on the rise across the US, yet available data remains difficult to access and understand. I built MOSEY, a Streamlit app that visualizes car crashes alongside Walk Score to better understand street safety where I live. This poster shows how Python tools can transform civic engagement -- and what I learned as a personal project grew into collective action.
The motivation for this project came from seeing pedestrian fatalities close to home. As a concerned citizen and a data scientist, I wanted to better understand pedestrian and cyclist safety in my city north of Boston. The Massachusetts Department of Transportation (MassDOT) Crash Portal provides extensive dashboards and statewide statistics, but it’s not intuitive to use. Concurrently, I thought about how to define walkability. Walk Score is often used, but it is based on whether errands can be completed on foot, not on safety. I wanted to visualize both together.
Using MassDOT data and the Walk Score API, I built MOSEY (Move On Safely EverYone) with Streamlit, geopandas, and folium. Users enter an address to see nearby car crashes from the last 10 years along with the Walk Score, for a fuller picture of walkability.
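MOSEY does this filtering with geopandas, but the underlying idea is a simple great-circle distance check; a stdlib sketch (record layout and radius are illustrative):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def crashes_near(address_latlon, crashes, radius_m=500):
    """Return crash records within `radius_m` of the geocoded address."""
    lat, lon = address_latlon
    return [c for c in crashes
            if haversine_m(lat, lon, c["lat"], c["lon"]) <= radius_m]
```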
I joined local organizations that advocate for pedestrian and cyclist safety and I used my analysis to push for a proposed bike path. We recently conducted walk audits to understand the condition of sidewalks and streets, and we’ll synthesize this data to advance safer streets legislation. What started as a personal project has led me to engage with my community and I hope it inspires others to do the same.
Evaluating Engineering Coherence of Interpreted Soil Units in Complex Geology Using Python
Python is becoming an essential tool in the Architecture, Engineering, and Construction industry. Beyond improving efficiency and automating tasks, Python’s data analysis toolkit empowers geotechnical engineers to make evidence-based decisions that enhance the safety and reliability of infrastructure. Geology and soil are closely interconnected, forming a fundamental aspect of geotechnical engineering – a branch of civil engineering focused on the behavior of earth materials in infrastructure projects. While multiple types of soil can exist within a single geological unit, these soils may differ significantly in engineering properties, which can impact the design and safety of structures.
This project analyzes real-world data from the Washington State Department of Transportation (WSDOT) in Seattle to investigate the relationship between the engineering properties of soil and geology. The data consist of field boring logs and lab tests conducted on soil samples; correlations are used to obtain engineering soil properties for use in design. Seattle’s subsurface environment is particularly complex, shaped by varied geological formations and a history of ground-related challenges.
Using Python for data analysis and visualization, this project explores how key engineering features correspond to interpreted geologic units. The analyses include pair plots, correlation matrices, and principal component analysis (PCA). The visualizations reveal patterns and validate the geologic interpretations. This project demonstrates how Python allows for rapid exploration of real-world datasets to evaluate uncertainties and assist engineering judgement by simplifying complex geology. This improves efficiency in design and helps engineers focus on analyses rather than classification tasks. PCA also assists in feature reduction across soil types for use in machine learning tasks.
From Chaos to Clarity: Automating history clean-up and mono-repo migration with git-filter-repo
As software projects grow, managing many small repositories can become complex and messy. This is a problem faced by many engineering teams. A mono-repo can be a solution, as it simplifies dependency management, CI/CD pipelines, and collaborative development.
But migrating multiple old repositories into a single mono-repo is far from trivial. In this poster, we share our experience migrating multiple old repositories into a single mono-repo using git-filter-repo, a modern Git tool written in Python. It allows writing custom Python scripts to rewrite commit histories, rename files, and clean up unwanted data. During the migration, we used Python callbacks to remove large binaries, obsolete files, and legacy folders. Instead of just deleting them, each removed file was replaced by a small text file containing its original name, folder path, new location, and SHA-256 checksum. This kept the history traceable while keeping the repository lightweight.
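The tombstone files described above can be sketched as a small helper; the git-filter-repo callback wiring that would invoke it during the rewrite is omitted, and the field names are illustrative:

```python
import hashlib

def tombstone(original_path: str, new_location: str, data: bytes) -> bytes:
    """Build the small text file that replaces a removed binary during a
    history rewrite: it records the original name/path, where the file
    now lives, and a SHA-256 checksum so the history stays traceable."""
    digest = hashlib.sha256(data).hexdigest()
    lines = [
        f"original-path: {original_path}",
        f"moved-to: {new_location}",
        f"sha256: {digest}",
    ]
    return ("\n".join(lines) + "\n").encode()
```

In a git-filter-repo run, a Python callback would swap each large blob's contents for `tombstone(...)` output instead of deleting the file outright.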
Privacy-Preserving Natural Language Querying via Schema-Driven Agent Pipelines
Users from non-technical backgrounds often need to explore data without directly interacting with query languages or execution engines. This project presents a privacy-preserving approach to natural language querying, where user questions are translated into structured queries without ever exposing underlying data to external models. Instead of operating on raw data, the system relies entirely on schema metadata - table names, column descriptions, relationships, and business context to guide query generation. By design, this schema-restricted approach treats privacy as a first-class constraint rather than an afterthought.
The system is built around an LLM-powered, agent-driven pipeline that breaks natural language query generation into well-defined stages, including intent interpretation, schema-aware table and column selection, contextual grounding using validated examples, and structured query synthesis. The architecture separates query generation from query execution, allowing the same pipeline to support different analytical backends like Spark and PySpark. For moderately complex analytical questions, the end-to-end workflow completes in approximately 30–50 seconds.
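A toy sketch of the stage separation, with deterministic stand-ins for the LLM-powered stages (the schema, question, and keyword heuristic are all illustrative); note that only schema metadata is ever consulted:

```python
SCHEMA = {  # schema metadata only -- no row data ever leaves the system
    "orders": {"columns": {"order_id": "unique id",
                           "amount": "order total in USD",
                           "region": "sales region name"}},
    "customers": {"columns": {"customer_id": "unique id",
                              "name": "customer name"}},
}

def select_tables(question: str, schema: dict) -> list[str]:
    """Table-selection stage stand-in: pick tables whose column
    descriptions overlap the question. The real stage is LLM-powered;
    this keyword match only shows the schema-only contract."""
    words = set(question.lower().split())
    return [t for t, meta in schema.items()
            if words & set(" ".join(meta["columns"].values()).lower().split())]

def synthesize_query(question: str, tables: list[str]) -> str:
    """Query-synthesis stage stand-in: emit a skeleton for the backend."""
    return f"SELECT * FROM {', '.join(tables)}  -- grounded in schema only"
```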
This poster presents the full pipeline architecture, explains how schema-only reasoning enables privacy-preserving query generation, and demonstrates example prompts and generated queries using synthetic datasets. The poster focuses on architectural design, particularly how privacy constraints influence pipeline structure and agent responsibilities.
Cost & Energy of Everyday LLM Workloads: A Visual Field Guide for Python Developers
Generative AI is everywhere, but the real cost and energy impact of everyday LLM calls is usually buried in dashboards and invoices. This poster turns those hidden numbers into clear visuals that Python developers can explore at a glance.
Using small, reproducible Python scripts, we profile common workloads, chat completions, document summarization, classification, RAG queries, and batch embedding jobs, across different model sizes and configurations. For each scenario, we generate waterfall charts that break total latency into network, tokenization, inference, and post‑processing, plus bar plots that compare cost per request and per 1,000 tokens, along with simple indicators of CPU vs GPU energy use.
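The per-stage timing behind the waterfall charts can be sketched with the stdlib alone; the model call is stubbed here and the stage names are illustrative:

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def stage(name: str):
    """Record wall-clock time of one pipeline stage for the waterfall chart."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - t0

# Example: profile a (stubbed) request; real code would call an LLM client.
with stage("tokenization"):
    tokens = "some document text".split()
with stage("inference"):
    time.sleep(0.01)  # stand-in for the model call
with stage("post-processing"):
    result = " ".join(tokens).upper()

total = sum(timings.values())
shares = {k: v / total for k, v in timings.items()}  # waterfall segments
```

Feeding `timings` rows into pandas and matplotlib then yields the stacked latency charts.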
All measurements are implemented with familiar Python tools such as pandas, matplotlib, and popular LLM client libraries, and the code will be available in an open repository so attendees can plug in their own endpoints and regenerate the figures. The poster is designed for students, educators, and practitioners who already use LLMs but want crisp, visual intuition for “what this call really costs,” leaving them with practical heuristics and ready‑to‑run notebooks for their own projects.
Building a Self-Correcting Protein Analysis Agent with LangGraph
Scientific automation often faces a critical "robustness gap": traditional linear scripts are brittle and crash silently when encountering the messiness of real-world biological data. This poster presents a Self-Correcting Protein Analysis Agent that bridges this gap by replacing fragile pipelines with a resilient, cyclic agentic architecture. Rather than simply executing a sequence of steps, this system acts as an autonomous research assistant capable of reasoning, validating, and repairing its work.
We focus on the architectural shift from "happy path" automation to State-Aware Cyclic Graphs. Attendees will visualize how the agent manages complex context and employs a "self-healing" loop: when validation checks fail (e.g., detecting geometric errors in a molecule), the agent doesn't crash. Instead, it autonomously diagnoses the issue, selects a refinement strategy (like adjusting chemical bonds), and re-evaluates the result, mimicking the iterative trial-and-error process of a human scientist.
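Stripped of the LangGraph wiring, the self-healing cycle reduces to a validate-diagnose-refine loop; this plain-Python sketch (with a toy bond-length check, not the poster's actual chemistry) shows the control flow:

```python
def self_healing_run(structure, validate, refiners, max_cycles=5):
    """Validate -> diagnose -> refine -> re-validate loop, the plain-Python
    shape of the cyclic graph described above. `validate` returns a list of
    issue names; `refiners` maps an issue name to a repair function."""
    for _ in range(max_cycles):
        issues = validate(structure)
        if not issues:
            return structure, True
        fix = refiners.get(issues[0])
        if fix is None:          # no strategy known: surface it, don't crash
            return structure, False
        structure = fix(structure)
    return structure, False      # budget exhausted without passing validation

def validate(s):                 # toy check: bond length must be <= 1.6
    return ["bond_too_long"] if s["bond"] > 1.6 else []

refiners = {"bond_too_long": lambda s: {**s, "bond": s["bond"] * 0.9}}
```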
Finally, we demonstrate how the agent closes the research loop by autonomously authoring comprehensive PDF reports. By combining structured data extraction with synthesis capabilities, the agent produces defensible, publication-ready artifacts. This poster offers a blueprint for developers looking to build robust agentic systems that can interact with the physical world, recover from errors, and deliver tangible scientific results.
Speak Python with Devices
This poster introduces how Python can be used to control and communicate with hardware devices. While Python is commonly associated with machine learning, data analysis and web development, it is also a practical tool for interacting with the physical world. Through diagrams, code snippets, and simple examples, this poster demonstrates the basic concepts behind using Python to talk to devices.
The poster focuses on lowering the barrier for Python developers who are curious about hardware but do not know where to start. By presenting approachable examples, it shows how Python can be applied to real-world scenarios such as IoT projects and infrastructure automation. Attendees are encouraged to stop by, ask questions, and discuss ideas for applying Python to their own “touchable” projects.
Building Your Own Quantum Cloud with Python
Quantum computing is transitioning to cloud services, yet the infrastructure layer remains a "black box" of proprietary software. This poster introduces an open-source framework designed to simplify quantum cloud infrastructure using standard Python practices.
This poster is tailored for Python developers interested in quantum computing but who may feel intimidated by the physics. We demonstrate that building a quantum platform is primarily a software engineering challenge where standard practices apply:
- Web Application Design: Utilizing FastAPI on AWS Lambda to manage asynchronous job lifecycles, making the infrastructure accessible to any web developer.
- Schema-Driven Interfaces: Using OpenAPI and gRPC for consistent code generation and reliable microservice communication.
- Standardized Representation: Adopting OpenQASM 3 as the hardware-agnostic interface, treating quantum circuits as structured data.
- Proven Integration: The framework is already managed on multiple real-world quantum computers and cloud providers.
You do not need a background in quantum mechanics to contribute. By applying idiomatic Python and modern web standards, software developers can build the robust infrastructure required for the next generation of computing.
Everything I Googled During My First PyTorch Project
PyTorch is famous for being "Pythonic," but the first training run can still feel like a collection of incantations: DataLoader, nn.Module, loss, backward(), optimizer, eval(). You copy a tutorial, hit run, and something happens. But do you understand why?
This poster walks through the smallest end-to-end PyTorch project that actually learns, connecting each step to the underlying concept. Starting with a small dataset (images or tabular, the same template works for both), it traces how batches flow through a model, how a single loss value becomes gradients for thousands of parameters, and how the optimizer updates those parameters in the right direction.
Along the way, it highlights the concepts that cause the most confusion for newcomers (such as myself!): tensor shapes and why they break, logits vs. probabilities and when to use which, why gradients accumulate if you forget to zero them, and what model.train() and model.eval() actually change under the hood.
The poster will include an annotated training loop map showing forward → loss → backward → step with the corresponding PyTorch calls, plus a tensor-shape trace so you always know what dimensions to expect. It also covers two sanity checks every beginner should run: overfitting a single batch on purpose, and verifying that parameters actually change after optimizer.step(). A troubleshooting section addresses common first-week errors, from device and dtype mismatches to wrong loss/output pairings to the classic forgotten torch.no_grad() during evaluation.
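The forward → loss → backward → step cycle can be written out by hand for a one-parameter model, which makes the mapping to the PyTorch calls concrete (a sketch of the underlying arithmetic, not the poster's actual code):

```python
# The numeric idea behind forward -> loss -> backward -> step, written out
# for a one-parameter model y = w * x (no PyTorch required). In PyTorch,
# loss.backward() computes dloss/dw, optimizer.step() applies the update,
# and optimizer.zero_grad() resets the accumulator.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x
w, lr = 0.0, 0.05

for epoch in range(200):
    grad = 0.0                      # zero_grad(): gradients accumulate otherwise
    for x, y in data:
        pred = w * x                # forward pass
        # MSE loss is (pred - y)^2; its derivative wrt w accumulates per sample
        grad += 2 * (pred - y) * x  # backward(): dloss/dw
    w -= lr * grad / len(data)      # optimizer.step(): move against the gradient
```

After training, `w` has converged to 2.0, mirroring what the full PyTorch loop does across thousands of parameters at once.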
Attendees will walk away with a reusable starter template they can adapt to their own projects, and a mental model for debugging when code runs but doesn't learn. No ML background assumed, just Python and curiosity.
Memor: Reproducible Structured Memory for LLMs
We present Memor, a Python library for structuring, managing, and transferring conversation histories across different large language models through an object-oriented, Pythonic interface. Visitors learn how the library abstracts LLM interactions into intuitive Session objects that encapsulate sequences of message exchanges, including not only the content but also critical metadata such as decoding temperature, token counts, and model-specific parameters, enabling comprehensive and reproducible logs of interactions.
The poster demonstrates the technical architecture, including the session management system, message filtering capabilities, and cross-model transfer mechanisms, while showcasing practical examples that illustrate how users can seamlessly migrate conversation context between different LLMs. An example of this could be starting the conversation with a Retrieval Augmented Generation (RAG) model to gather relevant information and then switching to a reasoning-specialized model to solve problems based on the retrieved context.
Through code demonstrations, we show how Memor's structured data format enables users to select, filter, and share specific portions of past conversations. These features facilitate granular control over which messages and context are preserved or transferred. The presentation discusses practical applications from multi-model workflows where different LLMs are leveraged for their specialized capabilities to research applications requiring reproducible LLM experiments with complete parameter tracking, and collaborative scenarios where conversation histories need to be shared or archived in a scientific report for reproducibility.
We demonstrate Memor’s intuitive API, its flexibility across LLM providers and message formats, and its extensibility for custom prompt templates. We also show how Memor simplifies conversation management in increasingly complex LLM applications by standardizing context handling across models. This positions Memor as both a practical tool for developers and a framework for improving research reproducibility in NLP.
From Models to Messages: A Visual Guide to (De)Serialization Patterns in SQLAlchemy ORM
Building APIs and data-driven applications with SQLAlchemy often requires translating complex ORM models into JSON-friendly, portable formats. SQLAlchemy enables developers to create, access, and manipulate SQL models through Python classes, simplifying database interaction. However, Python objects are not always the most efficient format for transmitting data over a network or sharing information between separate applications. This is where data serialization and de-serialization become essential — transforming Python objects into formats suitable for communication, storage, or API exchange. Effective (de)serialization is key to building robust, maintainable, and high-performance data workflows.
This poster provides a visual guide to popular Python (de)serialization frameworks — including Marshmallow-SQLAlchemy, ColanderAlchemy, SQLAthanor, ModelSerializer, and SerializerMixin — highlighting how each integrates with SQLAlchemy ORM models, handles relationships, and manages schema configuration. Through side-by-side examples, it illustrates design patterns that simplify model-to-schema transformations, enhance API consistency, and reduce boilerplate code.
This poster will clearly illustrate to the audience: - Visual comparisons of how major serializers handle nested relationships, validation, and schema generation. - Trade-offs between declarative and dynamic schema approaches. - The strengths and weaknesses of each serializer through direct comparison. - Actionable recommendations on how to choose the right serializer for their project — whether optimizing for flexibility, simplicity, or performance. - Strategies to extend SQLAlchemy models for modern API frameworks like FastAPI and Flask.
Whether you’re building REST APIs, microservices, or internal data tools, this poster will offer a practical, visual reference for choosing the right serialization approach — demystifying how Python’s SQLAlchemy ORM ecosystem turns models into messages.
Python in Rocket Science: Visualisation of Supersonic Combustion
How much Python is used in creating the next generation rockets? In this poster, I will demonstrate this using an example of supersonic combustion. Supersonic combustion occurs rapidly, requiring high-speed cameras to capture the details of the combustion physics. Each take produces large datasets that need to be analysed to visualise this complex physics. In this poster, I will discuss how Python is used to perform this analysis. I will show analysed videos/images of supersonic combustion using hydrogen as a fuel conducted in a wind tunnel. These images will show combustion data in terms of density gradients in the flow, known as 'Schlieren.' I will show how large datasets from wind tunnel experiments can be carefully analysed using Python to obtain quantitative information on the combustion physics. The audience will be able to see that such complex problems can be tackled with Python. I will conclude with how we are using Python in educating the next generation of rocket scientists!
Build Complex-Valued Neural Networks with Python
Over the past few years, deep learning models have been increasingly applied to domains where data is naturally represented with complex numbers, such as MRI data and radar signals. In such scenarios, a real-valued neural network fails to fully exploit the correlation between the real and imaginary parts.
This poster presents the use of complex-valued convolutional neural networks (CV-CNNs) for a radar signal processing case study, aided by packages such as torch-complex and complexPyTorch. The poster will include: (1) illustrative visuals showing how complex convolutions operate and how they preserve phase relationships, along with side-by-side comparisons highlighting the superior performance of complex-valued networks over real-valued networks on complex signal data; (2) illustrations of workflows for generating training datasets with practical radar effects such as array imperfections, inhomogeneous clutter, and low-SNR scenarios; (3) illustrative figures showing how to build complex-valued neural networks using tools from complexPyTorch. This section emphasizes the extensibility of the Python ecosystem, showing how packages such as torch-complex and complexPyTorch can be adapted for a variety of scientific applications.
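The key building block is that a complex convolution decomposes into four real convolutions, which is the standard construction used by complex-valued layer packages; a stdlib sketch for the 1-D case:

```python
def conv1d(x, k):
    """'Valid' 1-D real convolution (correlation form, as in deep learning)."""
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def complex_conv1d(xr, xi, kr, ki):
    """Complex convolution built from four real convolutions:
    (xr + i*xi) * (kr + i*ki) -> real: xr*kr - xi*ki, imag: xr*ki + xi*kr.
    Real-valued frameworks implement complex layers exactly this way,
    which is why the phase relationships are preserved."""
    rr, ii = conv1d(xr, kr), conv1d(xi, ki)
    ri, ir = conv1d(xr, ki), conv1d(xi, kr)
    real = [a - b for a, b in zip(rr, ii)]
    imag = [a + b for a, b in zip(ri, ir)]
    return real, imag
```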
Whether you’re coming from a deep learning, signal processing, or applied research background, this poster will provide practical insights and tools for working with complex-valued data pipelines in Python, and showcase how PyTorch can be extended to support emerging complex-valued deep learning applications.
Flet: Python GUI for cross-platform apps
Flet is an open-source application framework for Python to build responsive cross-platform applications from a single codebase.
The poster will introduce the Flet framework to visitors.
The poster will include:
- A visualization of how the same Python program can be packaged and deployed to iOS, Android, macOS, Windows, Linux and web.
- Imperative vs declarative approaches - both possible with Flet.
- A visualization of the extensibility model and connection to the Flutter/Dart ecosystem.
- The cloud of available binary packages (wheels) for iOS and Android (NumPy, Pydantic, Pandas and others).
- End-to-end UI integration testing of user apps.
- Demonstration of real apps in the App Store and Google Play on physical devices.
How Python Powers Modern AI: Comparing PyTorch and JAX Execution Stacks
Python remains the dominant language for machine learning, yet modern AI systems rely heavily on compilation and runtime optimization to achieve performance. Two ecosystems — PyTorch and JAX — approach this challenge in fundamentally different ways, while sharing a common goal: preserving Python productivity while delivering high-performance execution.
In this poster, we will compare the execution stacks of PyTorch and JAX from a Python developer’s perspective. Starting from simple Python model code, we’ll trace how each framework captures computation, represents it internally, and executes it efficiently on accelerators.
We’ll introduce key concepts such as graph capture, intermediate representations, and runtime execution — using PyTorch 2.x and the JAX/OpenXLA ecosystem as concrete examples. Rather than diving into low-level compiler theory, the focus will be on building accurate mental models: how control flow is handled, how compilation boundaries differ, and how these design choices affect performance, debuggability, and developer experience.
Building and testing GPU code in open-source projects: lessons from XGBoost
NVIDIA GPUs are becoming a key tool for accelerated computing, with many Python libraries in ML and HPC eager to harness their power. Open-source projects face a major challenge: with contributions from around the world, how can we ensure that the specialized code for GPUs remains functional and reliable?
The XGBoost project (Python ML library) tackled this challenge head-on. In this poster, we will share our experience setting up a CI pipeline for building and testing native code targeting NVIDIA GPUs. Additionally, we will offer practical recommendations for other open-source developers looking to adopt GPUs in their projects. Special focus will be given to common constraints in open-source development, such as budget, developer time, development velocity, and domain knowledge. We will also provide helpful pointers for packaging the native code as Python wheels and publishing them to PyPI and Conda-forge.
Breaking the Speed Limit: Fast Statistical Models with Python 3.14, Numba, and JAX
Data scientists and domain experts often face a dilemma: we understand the models, and we use Python, but we aren't C++ or Rust engineers. We need code that is quick to write, easy to work with, and still fast enough to run on large, real‑world datasets. How do we choose the right tool without getting lost in low‑level details?
With the new free‑threaded build and experimental JIT in Python 3.14, combined with tools like Numba and JAX, we finally have a realistic way to push back against the "Python is slow" stereotype. In this talk, we'll use two concrete workloads to illustrate this modern stack: complex iterative loops (k‑means) and massive data parallelism (permutation test). The focus is on computational patterns rather than statistical theory.
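As a baseline for those comparisons, the permutation test is embarrassingly parallel; this plain-Python reference version is the shape of the loop the poster accelerates (illustrative, not the benchmark code):

```python
import random

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test for a difference in means. Each
    iteration is independent, which is exactly why this workload maps
    well onto free-threaded Python, Numba, and JAX vectorization."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled samples
        diff = (sum(pooled[:len(a)]) / len(a)
                - sum(pooled[len(a):]) / len(b))
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm  # two-sided p-value estimate
```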
We'll compare plain NumPy, Python 3.14 (with free-threaded or JIT configurations), Numba, and JAX across varying data scales, highlighting the trade-offs in runtime, memory, debuggability, and developer experience. Along the way, we'll also demonstrate how AI coding tools can serve as a copilot, helping to translate clear mathematical code into high-performance kernels without requiring deep compiler expertise.
Beyond “LGTM”: Building a Python Bot That Teaches Teams How to Actually Talk to Each Other
Ever noticed how code review bots are great at yelling “Your code sucks!” but terrible at noticing when your team communication breaks down, or at helping open source contributors grow? I built a Python GitHub bot that does something different: it watches how developers collaborate during pull requests. Not just “does this code work” but “are we being good teammates about it?”
The bot analyzes three things most tools ignore:
- PR descriptions: Did you link the issue? Explain the “why”? Or just write “fixed stuff” and call it a day?
- Review responses: When someone suggests changes, do you acknowledge them like a human or just push commits silently into the void?
- Change tracking: That reviewer asked you to refactor something two days ago—did you actually do it, or are you hoping they forgot?
The poster walks through the Python architecture—webhook listeners, LLM-powered communication analysis, conversation graph tracking, and diff parsing that connects requested changes to actual commits. I’ll show how combining pattern matching with language processing models can give you a bot that understands both code AND people.
You'll leave with confidence for building tools that make your collaborative contributions better at the soft skills of software development. Plus I'll compare it to existing tools like CodeRabbit and GitHub Copilot to show where the collaboration gap lives.
Bring your war stories about terrible PR etiquette. Let's make code review culture less painful, one bot comment at a time.
Simulating LLM Agents in pytest: Path to Reliable AI Agents and Systems
LLM-based agents and systems don't fail like normal software; they regress silently. Tool-chain changes, prompt drift, routing issues, and context limits all creep in while the agent "still works" … until a real user hits an edge case and everything spirals into disaster.
This poster introduces a Python-first pattern for making agent behavior reproducible, testable, and optimizable by treating agent runs as pytest simulations: multi-turn “episodes” driven by scenario fixtures, seeded user/environment simulators, and strict execution budgets. Each episode produces a structured trace (messages, tool calls, intermediate state, timings) that can be asserted with deterministic checks (e.g., schema correctness, tool-call limits, policy constraints) and scored with lightweight rubrics (e.g., goal completion, instruction adherence, tool efficiency). The result is an agent test suite that behaves like a CI gate: small enough to run on every PR, and realistic enough to catch the failures that unit tests miss.
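The deterministic checks can be sketched as plain assertions over a structured trace; the field names and budget values here are illustrative, not the poster's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Structured trace of one simulated agent run (illustrative fields)."""
    messages: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)
    completed_goal: bool = False

def check_budgets(ep: Episode, max_tool_calls=5, max_turns=12):
    """Deterministic CI-gate checks over a trace -- the kind of assertions
    a pytest episode fixture would run after driving the agent."""
    assert len(ep.tool_calls) <= max_tool_calls, "tool-call budget exceeded"
    assert len(ep.messages) <= max_turns, "turn budget exceeded"
    assert ep.completed_goal, "agent did not reach the scenario goal"
```

In a real suite, each scenario fixture would build an `Episode` from the agent's recorded run and `check_budgets` would be one of several assertions alongside schema and policy checks.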
Attendees will leave with a practical blueprint and patterns to apply, and a broader sense of how lightweight testing and harnesses, both within CI/CD and beyond it, can make AI applications in Python more reliable.
epymorph: open-source software for the rapid exploration of spatial disease models and forecasting powered by Python
Efforts to model and forecast infectious disease transmission are critical both to the science of understanding pathogens and in planning public health responses to improve health outcomes and save lives. The barriers to developing these models are high — due to required scientific expertise, the cost to develop, calibrate, and run bespoke software solutions, and myriad combinations of potential critical factors. This has kept modeling out of reach for many local governments where so many public health decisions are made.
The NIH-funded EpiMoRPH project envisions a platform to lower these barriers and place trustworthy predictions within reach. epymorph is the computational core of this developing platform. Our key value: by reducing the time from concept to results, we maximize experimentation and discovery. epymorph's modular design enables rapid construction of spatial epidemiological models while maintaining flexibility. Data-wrangling utilities for common sources let you focus on the parts of your model that count. And built-in features for fitting model parameters to observational data (e.g., disease surveillance data) and producing forecasts with uncertainty quantification bring state-of-the-art mathematical methods right to your virtual environment. See how we are leveraging Python's object-oriented paradigm and mature scientific computing ecosystem to tackle some of the hardest problems in public health.
https://github.com/NAU-CCL/epymorph
From Workstations to Production: How to Implement and Secure Python Applications (On-Premises and in the Cloud)
Context
In recent years, Python, along with many of its most powerful libraries, has become a true standard in the tech world. It's hard to imagine any medium-sized or large organization or company not using Python: it's practically impossible. Everyone is using Python in one way or another, but how are organizations and companies implementing and running Python code? Are they doing so securely? Or are they simply allowing analysts, programmers, cloud engineers, and data scientists to run code from their personal computers (workstations) that retrieves and changes real production data and resources without any security control or governance? Or coding and executing Lambda functions in the cloud without any kind of control or security policy?
Challenge
Some organizations and companies have already deployed security controls and infrastructure to run their Python applications securely and efficiently according to their needs and context, but if that's not my case, where do I begin? How do I do it? What things should I consider? What would be the best strategy in my specific case? It seems there's plenty of information and documentation on how to use Python for almost any purpose, but very little about how to take the code from a workstation and deploy it to a real production environment.
What will you learn?
How do I move Python code from workstations to production environments? This beautiful and useful poster will show you a roadmap with the most widely adopted strategies for deploying and securing Python applications in production. Turn a challenge into an opportunity.
Who is this for?
This poster is for anyone who develops or implements Python code and wants to learn how to securely deploy Python applications to production.
Data Telling Stories: Understanding the Community Profile with PyLadies Fortaleza - BR
This poster presents a data-driven initiative by PyLadies Fortaleza to understand the demographic, professional, and educational profile of its local Python community. Through a carefully designed survey and subsequent data analysis using Python's robust libraries (e.g., pandas, matplotlib, seaborn), we've collected and visualized key insights into who our community members are, their primary interests, challenges, and aspirations within the tech landscape. The poster will showcase methodologies for survey design, data collection, and essential exploratory data analysis (EDA) techniques. Key findings will be highlighted, revealing trends in gender representation, educational backgrounds, professional roles, preferred Python applications, and common barriers to entry or advancement in tech. Our goal is to demonstrate how community-led data initiatives can provide valuable insights, inform strategic planning for events and outreach, and foster a more inclusive and supportive environment. This work exemplifies how data science skills can be applied directly to community building, making it relevant for anyone interested in Python communities, diversity in tech, or practical data analysis.
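As a minimal illustration of the kind of EDA the poster describes (using synthetic responses as a stand-in, since the actual survey data is not shown here):

```python
import pandas as pd

# Synthetic stand-in for survey responses; columns are hypothetical.
responses = pd.DataFrame({
    "role": ["student", "data scientist", "web dev", "student", "data scientist"],
    "years_python": [1, 4, 6, 2, 3],
    "barrier": ["time", "mentorship", "time", "mentorship", "time"],
})

# Typical first EDA steps: distribution of roles, most common barrier,
# and average experience per professional role.
role_counts = responses["role"].value_counts()
top_barrier = responses["barrier"].value_counts().idxmax()
avg_experience = responses.groupby("role")["years_python"].mean()

print(role_counts.to_dict())
print(top_barrier)
```

From summaries like these, a bar chart with matplotlib or seaborn is one line away, which is how community surveys turn into the visual findings on the poster.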