Thanks for the comments, Thomas and Pramod.

Apologies for the delayed response on this thread.

> I believe the thread implementation still adds some value over a container-local approach. It is more of a "thread local" equivalent, which is more efficient than a container-local implementation. Also, the number of worker threads is configurable; setting the value to 1 lets the user opt out of this (although I do not see a reason why one would). There is always the overhead of a serialize/deserialize cycle even for a container-local approach, and there is the additional possibility of container-local not being honoured by the Resource Manager depending on the state of the resources.
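To make the worker-thread idea concrete, here is a rough sketch of the pattern: a pool of single-thread executors, each owning its own interpreter instance, so interpreter calls stay thread-affine (JEP requires an interpreter to be used from the thread that created it). This is illustrative only, not the operator's actual code; `FakeInterpreter` stands in for a real `jep.Jep` instance, and all class/method names here are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: each worker is a single-thread executor paired with its own
// interpreter, so every evaluation for that interpreter happens on one thread.
public class InterpreterWorkerPool {

  // Placeholder for a real jep.Jep interpreter; eval() here just does dummy work.
  static class FakeInterpreter {
    Object eval(String expr) {
      return expr.length();
    }
  }

  private final List<ExecutorService> workers = new ArrayList<>();
  private final List<FakeInterpreter> interpreters = new ArrayList<>();
  private int next = 0; // round-robin index; submit() is called from the operator thread only

  // numWorkers is the configurable worker-thread count discussed above.
  public InterpreterWorkerPool(int numWorkers) {
    for (int i = 0; i < numWorkers; i++) {
      workers.add(Executors.newSingleThreadExecutor());
      interpreters.add(new FakeInterpreter());
    }
  }

  // Dispatch an expression to the next worker; the evaluation runs on that
  // worker's own thread, against that worker's own interpreter.
  public Future<Object> submit(String expr) {
    int idx = next++ % workers.size();
    FakeInterpreter interp = interpreters.get(idx);
    return workers.get(idx).submit(() -> interp.eval(expr));
  }

  public void shutdown() {
    workers.forEach(ExecutorService::shutdown);
  }
}
```

With `numWorkers = 1` this degenerates to a single interpreter thread, which is the "opt out" case mentioned above.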

> Regarding the configurable key to ensure all tuples in a window are processed, I am adding a switch that lets the user choose (with javadoc that clearly points out the issues of not waiting for the tuples to be completely processed). There are pros and cons either way, and letting the user decide might be a better approach. The reason I mention cons for waiting for the tuples to complete (apart from the reason Thomas mentioned) is that if one of the commands the user wrote is erroneous, all subsequent calls to that interpreter thread can fail. An example use case: tuple A sets some value for variable x, and tuple B, which comes next, makes use of variable x. The expression for tuple B is syntactically valid; it just depends on variable x. Now if variable x is not in memory because tuple A is a straggler, processing tuple B results in an erroneous interpreter state. The operator might then stall indefinitely, as end window will be blocked forever, ultimately resulting in the operator being killed. This happens because the erroneous command corrupted the state of the interpreter itself, and of course it can happen to all of the threads in the interpreter worker pool. Perhaps an improvement to the current implementation is to detect interpreters stalled for more than x windows and rebuild the interpreter thread when such a situation is detected.

> Thanks for the IdleTimeoutHandler tip, as this helped me ensure that the stragglers are drained out irrespective of a new tuple coming in for processing. In the previous iteration, the stragglers could only be drained when a new tuple came in for processing, since the delayed-responses queue could only be checked when there was some activity on the main thread.
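The draining described above is essentially this (assuming the engine invokes an idle-time callback when no tuples arrive, in the spirit of Apex's idle-time handler contract; `DelayedResponse` and the `emitted` queue are illustrative stand-ins, not real operator fields):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: drain straggler results from the delayed-responses queue whenever
// the engine reports idle time, instead of only inside process(tuple).
public class StragglerDrain {

  static class DelayedResponse {
    final Object result;
    DelayedResponse(Object result) {
      this.result = result;
    }
  }

  final Queue<DelayedResponse> delayedResponses = new ArrayDeque<>();
  final Queue<Object> emitted = new ArrayDeque<>(); // stands in for the output port

  // Called by the engine when there is no input activity; previously this
  // drain only ran when a new tuple arrived on the main thread.
  public void handleIdleTime() {
    DelayedResponse response;
    while ((response = delayedResponses.poll()) != null) {
      emitted.add(response.result); // emit the straggler's result downstream
    }
  }
}
```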

> Thanks for raising the point about virtual environments: this is a point I missed mentioning in the design description below. There is no support for virtual environments yet in JEP, hence the current limitation. However, the workaround is simple. As part of the application configuration, we need to provide the JAVA_LIBRARY_PATH which contains the path to the JEP dynamic libraries. If there are multiple python installs (and hence multiple JEP libraries to choose from for each of the apex applications being deployed), setting the right path for the operator JVM will result in picking the corresponding python interpreter version. This also essentially means that we cannot have a thread-local deployment configuration of two python operators that belong to different python versions in the same JVM. The Docker approach ticket should be the right fix for the virtual environments issue? <> (though it still might not solve the thread-local configuration deployment.)
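For illustration, the workaround amounts to something like the following at launch time (the path is hypothetical; the actual location depends on where JEP was built for the desired python install):

```shell
# Point the operator JVM at the JEP native libraries built against the
# desired python install (example path only; adjust to your environment).
export JAVA_LIBRARY_PATH=/usr/local/lib/python3.6/site-packages/jep

# Equivalently, the same path could be passed to the container JVM as
# -Djava.library.path=/usr/local/lib/python3.6/site-packages/jep
```

Since this is a per-JVM setting, all python operators deployed into the same JVM necessarily share one JEP library, which is why mixing python versions thread-locally in one JVM does not work.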

> On 21 Dec 2017, at 11:01 am, Pramod Immaneni <[EMAIL PROTECTED]> wrote:
> On Wed, Dec 20, 2017 at 3:34 PM, Thomas Weise <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>> It is exciting to see this move forward, the ability to use Python opens
>> many new possibilities.
>> Regarding use of worker threads, this is a pattern that we are using
>> elsewhere (for example in the Kafka input operator). When the operator