Making Jython Faster and Better

Type:
Talk
Audience level:
Intermediate
Category:
Python Internals
March 10th 11:45 a.m. – 12:15 p.m.

Description

As a dynamic language, Python is difficult to optimize. In addition, its dynamic features currently make using Python code from Java too complex. However, Java 7 adds the invokedynamic bytecode and corresponding library support, making it possible to finally address these problems in Jython. This talk will describe work in progress to make Jython faster and better, with improved Java integration.

Abstract

Jython demonstrates that it is quite possible to fit Python’s dynamic features on the Java Virtual Machine (JVM) to provide for seamless integration, while also taking advantage of the breadth of the Java platform. My favorite aspect of this blending is certainly the amazing java.util.concurrent package.

However, from a performance perspective, it’s an awkward fit. In certain cases, the JVM is able to aggressively inline Python code paths through its JIT, but in general it cannot, for a variety of technical reasons. In addition, while Jython can conveniently call Java code, and supports callbacks, it is not at all convenient right now to go the opposite way and call Python from Java. This mismatch is perhaps best seen in Jython’s current lack of support for Java annotations.

Consider how you, as a human translator, might attempt to optimize Python code for the JVM, or to fix the Java integration issues. Your plan of attack is simple: translate idiomatic Python to similarly idiomatic (and highly JIT-able) Java. Python developers signal their intent through a variety of mechanisms. They use builtin names like True/False or range/xrange which, by common convention, no one would seriously expect to see rebound. They rarely monkey patch namespaces (action at a distance), although import time can get quite interesting. And importing packages from the java.* namespace is an unambiguous signal of Java integration when running on Jython. The challenge is in supporting the dynamic functionality, however. Rewriting truly dynamic code into statically typed code by hand is simply the wrong approach: it is non-trivial, error prone, and certainly not fun.
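To make the hand-translation idea concrete, here is a minimal sketch (not actual Jython output; the class and method names are hypothetical) of how an idiomatic Python xrange loop maps onto an equally idiomatic, JIT-friendly Java loop, assuming the xrange builtin has not been rebound:

```java
// The Python loop
//
//     total = 0
//     for i in xrange(1000):
//         total += i
//
// translates naturally to a counted Java loop that the JIT
// can unroll and optimize, provided the xrange builtin still
// means what we assume it means.
public class XrangeTranslation {
    static long sumRange(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {  // for i in xrange(n)
            total += i;                // total += i
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumRange(1000));  // prints 499500
    }
}
```

The interesting part is not the translation itself but what happens when the assumption breaks, which is where invokedynamic comes in below.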

Enter the invokedynamic bytecode and the java.lang.invoke package, introduced with Java 7. Together they enable a wide range of optimizations while preserving the correctness of Python code, with its full range of dynamic features, no matter how crazy. There are some obvious wins, such as linking a call site (the point in the code where a given function is invoked) directly to a MethodHandle for the target function in that namespace, with all parameters properly permuted so that it becomes a straight call through the Java calling convention. If the namespace changes, simply relink.
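A minimal sketch of this relinking idea using java.lang.invoke directly (the class and function names here are hypothetical, and real Jython call sites are considerably more involved):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;

// A MutableCallSite starts out linked to one target function;
// when the namespace changes (here, "greet" is rebound), we
// simply relink the site to a handle for the new binding.
public class RelinkDemo {
    static String hello(String name) { return "Hello, " + name; }
    static String howdy(String name) { return "Howdy, " + name; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType type = MethodType.methodType(String.class, String.class);

        // Initially link the call site to hello().
        MutableCallSite site = new MutableCallSite(
            lookup.findStatic(RelinkDemo.class, "hello", type));
        MethodHandle invoker = site.dynamicInvoker();

        System.out.println((String) invoker.invokeExact("Jython"));  // Hello, Jython

        // Namespace change: relink the call site to the new binding.
        site.setTarget(lookup.findStatic(RelinkDemo.class, "howdy", type));
        System.out.println((String) invoker.invokeExact("Jython"));  // Howdy, Jython
    }
}
```

All callers that went through the site pick up the new target without any change at the call sites themselves, which is exactly the behavior a Python namespace rebinding requires.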

But there are also opportunities to use static analysis. For example, iterating over an xrange looks like a Java for loop, and can be optimistically compiled as such. If the builtin is rebound, the controlling SwitchPoint is invalidated and a continuation is set up to re-execute the unoptimized code in an interpreter (actually running Python bytecode). Other static analysis opportunities include controlling the construction of frames for functions, using decorators and function annotations to describe gradual typing (especially useful for Java integration), and so forth. This talk will cover a variety of these translations, demonstrate how we support both the fast and slow paths, and describe some of the current performance benchmarks. I will also describe the pitfalls: obvious optimizations frequently result in bad performance due to the number of moving parts.
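The optimistic-compilation-with-fallback pattern can be sketched with SwitchPoint.guardWithTest (again, the names are illustrative, and the slow path here merely stands in for interpreted dispatch):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

// The fast path assumes the xrange builtin is untouched;
// invalidating the SwitchPoint atomically flips every call site
// guarded by it over to the slow (generic) path.
public class SwitchPointDemo {
    static long fastSum(int n) {   // optimistically compiled loop
        long total = 0;
        for (int i = 0; i < n; i++) total += i;
        return total;
    }

    static long slowSum(int n) {   // generic fallback path
        long total = 0;
        for (int i = 0; i < n; i++) total += i;  // imagine interpreted dispatch here
        return total;
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType type = MethodType.methodType(long.class, int.class);
        MethodHandle fast = lookup.findStatic(SwitchPointDemo.class, "fastSum", type);
        MethodHandle slow = lookup.findStatic(SwitchPointDemo.class, "slowSum", type);

        SwitchPoint builtinsUnchanged = new SwitchPoint();
        MethodHandle guarded = builtinsUnchanged.guardWithTest(fast, slow);

        System.out.println((long) guarded.invokeExact(1000));  // takes the fast path

        // Simulate rebinding the builtin: invalidate the guard.
        SwitchPoint.invalidateAll(new SwitchPoint[] { builtinsUnchanged });
        System.out.println((long) guarded.invokeExact(1000));  // now takes the slow path
    }
}
```

The appeal of SwitchPoint is that the guard costs essentially nothing on the fast path until an invalidation actually happens, which matches the expectation that builtins are almost never rebound.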

In addition, this talk will cover the state of Jython 2.6+. In particular, I will describe forthcoming changes to the Jython API (for embedding into Java). These include limited backward-incompatible changes to ThreadState and PySystemState, to improve performance, clean up APIs, and fix issues in the garbage collection of ClassLoader objects.