Yesterday I was pretty confident that IronPython would help me gain more performance out of my persistence layer. The opposite turned out to be true, however, and the reason is not the Python code itself, but rather the IronPython runtime.
I ran some tests using the performance analyzer in Visual Studio and discovered that a lot of time was wasted on starting the Python engine, even when I precompiled the script and reused it every time I needed it for loading and saving objects. Precompiling the script into an instance of CompiledCode even caused my code to initialize the engine twice: once to precompile the script and once each time the script was executed. Not exactly optimal use of memory and CPU power if you ask me.
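The compile-once, execute-many pattern I was aiming for can be sketched in plain Python, using CPython's built-in compile/exec as a stand-in for IronPython's CompiledCode (the script and names here are made up for illustration):

```python
import types

# Hypothetical persistence script, compiled once at startup.
SCRIPT = """
def load(record):
    # Turn a raw record (a dict) into a simple object.
    return types.SimpleNamespace(**record)
"""

# Compile once; the resulting code object is reused for every call,
# analogous to IronPython's CompiledCode.
compiled = compile(SCRIPT, "<persistence>", "exec")

# One shared namespace plays the role of a single, reused engine scope.
scope = {"types": types}
exec(compiled, scope)

# Execute many times without recompiling or spinning up a new engine.
obj = scope["load"]({"id": 1, "name": "test"})
```

The trouble I ran into was that the IronPython hosting path did not let me keep things this cheap: the engine itself was initialized again per execution, and that startup cost dominated.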
The difference is huge, by the way: 1000 separate single-object queries against the database took 2 seconds with my old reflection-based ObjectPersistenceHelper and 13 seconds with the dynamic version before I started optimizing it. After optimization the DynamicObjectPersistenceHelper still needed 5 seconds for the same operation. I didn't test loading 1000 objects in one batch, because the results of the one-object-per-batch test were convincing enough for me to throw out the dynamic persistence helper.
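The shape of that benchmark was roughly the following; this is a minimal Python sketch (class and field names are hypothetical, and the timings will differ from the C# numbers above), comparing direct field assignment against a reflection-style helper over 1000 iterations:

```python
import time

class Customer:
    def __init__(self):
        self.id = 0
        self.name = ""

def load_direct(row):
    # Field access is fixed at write time.
    c = Customer()
    c.id = row["id"]
    c.name = row["name"]
    return c

def load_reflective(row):
    # Mimics a reflection-based persistence helper: field names
    # are looked up at runtime instead of being written out.
    c = Customer()
    for field, value in row.items():
        setattr(c, field, value)
    return c

row = {"id": 42, "name": "test"}

start = time.perf_counter()
direct = [load_direct(row) for _ in range(1000)]
t_direct = time.perf_counter() - start

start = time.perf_counter()
reflective = [load_reflective(row) for _ in range(1000)]
t_reflective = time.perf_counter() - start

# Both paths must produce identical objects; only the speed differs.
assert all(d.id == r.id and d.name == r.name
           for d, r in zip(direct, reflective))
```

The point of a test like this is that both helpers are functionally interchangeable, so the per-object overhead is the only thing being measured.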
I haven’t dropped the idea completely. My second plan was to build the whole framework in IronPython and use it from C#. There is a problem with this idea, though: Python code is dynamic, so classes aren’t static types; they are objects created at runtime and can’t be referenced directly from C#. Unless, of course, I hook into the objects and use reflection again. Not really a good idea if you ask me.
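What I mean by classes being objects: in Python a class is itself a runtime object (an instance of `type`), created when the `class` statement executes, so there is no static type for the C# compiler to bind against. A quick illustration (the class names are made up):

```python
# The class statement runs at runtime and produces an object.
class Order:
    pass

# The class itself is an instance of type, i.e. just another object.
assert isinstance(Order, type)

# An equivalent class can even be built dynamically on the fly,
# which is exactly what a C# compiler cannot bind to ahead of time.
DynamicOrder = type("DynamicOrder", (object,), {})
assert isinstance(DynamicOrder(), DynamicOrder)
```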
My conclusion from this experiment is that IronPython can be a good fit for ASP.NET and other scenarios where objects don’t have to be reused from other assemblies, but I wouldn’t advise using IronPython in a scenario like mine, with a data access layer.