Industrie Toulouse

A couple of quick reads I've undertaken this morning are Ars Digita: An Alternate Perspective and chapter 2 of the Zope 3 Programmer Tutorial. Two disparate reads, to be sure, but both quite interesting.

Michael Yoon's story on Ars Digita is a more balanced tale than some of the other accounts of the company's rise and fall. There was a time when they were seen as one of the most direct competitors to Zope - both the company and the platform. Now, along with other companies that we thought of as competitors, or as larger-scale versions of Zope on the professional services side, they're all but gone. Zope Corporation still exists. However, as Jeremy Hylton points out:

The story of ACS4 should be a cautionary tale for Zope3, although I think it's possible to manage the Zope3 transition better.

Zope 3 is a major rewrite of Zope, but it's not expected that everyone will drop their Zope 2 work and move to Zope 3 immediately. The Zope 3 Road Map outlines the following plan:
  • Zope X3 first. The X stands for experimental. It has no support for migration from Zope 2. Zope 2 will continue development for some time; Zope 2.7.0 alpha 1 was recently released. And there has been talk inside Zope Corp of a Zope 2 variation that incorporates some Zope 3 features, expected to be released later this year.
  • Zope 3 (no X) later. This one will include support for Zope 2 products and content, probably through a conversion utility.
My understanding of ACS4 was that it did not include any backwards compatibility with the ACS3 line, and also that the ACS3 line was deprecated before ACS4 was ready. The current Zope 2 and 3 roadmap seems to be avoiding that route with the Zope X3 versus Zope 3 plan. It's known up front that an incompatible version will come first, one that's not encumbered by history, which will give developers time to work with the new system and learn it under production-quality conditions. I know I'm looking forward to such a time, as I really like the concepts of Zope 3 but still have no time to learn my way through the development/milestone releases.

It's also my understanding that not long after ACS4, Ars Digita decided to rewrite the whole thing in Java. I can confidently say that there are no such plans for Zope 3, although it should become easier to run it under Jython, provided that (a) Jython catches up to Python 2.2.3 or Python 2.3 soon; and (b) the custom C parts of Zope 3 are converted to Java classes. That scenario is apparently better than the one in Zope 2, which is never expected to be runnable under Jython due to Extension Classes (which are no longer necessary under Python 2.2, thanks to the ability to subclass C types in Python; this has never been a concern for Jython anyway, since it can subclass Java classes directly). My primary point, however, is that while Zope 3 is a big step up from Zope 2, it's not expected that there should ever be a need to make such a big step again. And Zope 2 will continue to be maintained, since Zope Corporation and so many other companies (including my own) have large Zope 2 systems in place now.
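As a quick illustration of that Python 2.2 change (the class here is made up for the example, it's not anything from Zope), built-in C types can now be subclassed directly:

    # Python 2.2 and later allow subclassing C-implemented built-in types
    # directly - roughly the capability ExtensionClass existed to provide.
    class TrackedDict(dict):
        """A dict subclass that remembers which keys have been changed."""

        def __init__(self, *args, **kw):
            dict.__init__(self, *args, **kw)
            self.dirty = []

        def __setitem__(self, key, value):
            dict.__setitem__(self, key, value)
            if key not in self.dirty:
                self.dirty.append(key)

    d = TrackedDict(a=1)
    d['b'] = 2
    assert d.dirty == ['b']

Before 2.2, that first class statement alone would fail, which is why Zope 2 had to carry its own C-level machinery.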

That being said, one might ask "Why Zope 3 then?" My answer is that Zope 2 is old. There's a lot of direct heritage, good and bad, from 1996's "Bobo", which is still in Zope as ZPublisher. Principia was written in 1997 as a full Bobo application and as a framework for Zope Corporation (known as Digital Creations at the time) to combine disparate Bobo applications, drawing on the different patterns and lessons learned from each at the time. Bobo remained Open Source, while Principia was closed. Very quickly, programming capabilities were added to Principia's DTML and it became a full through-the-web development platform. It was Open Sourced (in late 1998?) and renamed Zope at version 1.9. 1.10 followed soon after.

Then came Zope 2.0. Zope 2.0 brought ZODB 3 (the persistent object system), ZClasses (a through-the-web development system that some love, some hate, and some just avoid altogether), multithreaded server support (ZServer), and various other changes. It was a vast improvement over Zope 1.10, but it still had a lot of the old code (some of which even pre-dates Principia!). Zope 2.3 was, in my opinion, the first really usable Zope 2 release. Among some nice user interface improvements, it featured the inclusion of Python Scripts in the core distribution. Up to that time, all server-side scripting (for both display and processing purposes) was done using the DTML tag language. Ugh. Then, as Zope 2 continued to progress, Page Templates came into the picture (yay!), as did core session tracking, etc. This leaves us with a pretty nice system.

But there are still problems. For example, WebDAV, FTP, and XML-RPC support are all done differently; there's no uniform way to add new protocol support to the system. The overall Zope 2 architecture suffers from the heavy and wide inheritance tree problem, and I would wager that few Zope developers really know what they're subclassing and what they may accidentally override - I still get surprised at times (which is why I now tend to use my own micro-framework that behaves predictably).

It wasn't until the CMF really took shape that we saw the benefits of a more component-based architecture. The CMF used a lot of collaborating service components which combined to give a flexible content management experience. It was a lot easier to write a content object class for the CMF than it was to write something similar for regular Zope, because the developer could focus more on what the content object needed to do and less on what Zope wanted it to do. Menus and actions could be generated dynamically based on the content type of the object - not its class. This allowed configuration of a different workflow for a Press Release than for a regular Document, even though they might both be instances of the same class. But the CMF is still an awkward fit on top of Zope 2 - CMF development and management is very different from regular Zope management. Still, there were a lot of good ideas in it - we could see firsthand the power of collaborating objects and the sort of flexibility that provided. Ultimately, it was decided that Zope needed a real component architecture in order to address some of the "evolutionary shortcomings of Zope 2." And those are the nuts and bolts of it - Zope 2 is good, but there are some well known issues that make it difficult to evolve.
Delivering a component architecture should make that evolution easier, since component architectures are designed with replacement in mind - focus less on what a particular object is, and more on whether it can get the job done. Putting that kind of loose coupling in place allows new core parts to be dropped in almost as easily as new business/content objects. There are parts of Zope 2 that live up to this idea now (from new ZODB storage systems to pluggable security managers to replaceable session management components), and I expect that by the time Zope 3.0 (not X3) is done, the distance between the two systems will have shrunk considerably.
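To make the "focus on whether it can get the job done" idea concrete, here's a minimal sketch using the zope.interface package. The IStorage interface and both implementations are my own made-up examples, not anything from the Zope 3 codebase, and the declaration API shown is the current decorator spelling rather than the original class-advice style:

    from zope.interface import Interface, implementer

    class IStorage(Interface):
        """Anything that can persist and retrieve an object by key."""

        def store(key, obj):
            """Store obj under key."""

        def load(key):
            """Return the object stored under key."""

    @implementer(IStorage)
    class MemoryStorage:
        """Trivial in-memory implementation, handy for tests."""

        def __init__(self):
            self._data = {}

        def store(self, key, obj):
            self._data[key] = obj

        def load(self, key):
            return self._data[key]

    @implementer(IStorage)
    class ShelfStorage:
        """A drop-in replacement backed by the stdlib shelve module."""

        def __init__(self, path):
            import shelve
            self._shelf = shelve.open(path)

        def store(self, key, obj):
            self._shelf[key] = obj

        def load(self, key):
            return self._shelf[key]

    def publish(document, key, storage):
        # The caller only cares that storage provides IStorage; whether the
        # concrete class is memory-, shelve-, or ZODB-backed is irrelevant,
        # so any implementation can be dropped in without touching this code.
        assert IStorage.providedBy(storage)
        storage.store(key, document)

Swapping one storage for another is then a one-line change where the component is created (or, in Zope 3 itself, a registration) - exactly the kind of replaceability described above.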

Anyways, to describe (again) a Zope 3 benefit that can be brought back to Zope 2: looking at the slides for the Zope 3 developer tutorial, chapter 2, there is a good bit of documentation of how Zope 3 uses Schema to generate editing forms for objects. I covered some of this recently, including how I'm applying similar patterns to my current pile of Zope 2 based projects with decent success. It's nice having a base framework in place so that when schema changes do occur, I can add them to the system with a couple of lines of code that describe the new element, and I don't have to deal with the display and validation manually. In general - at least for the applications we've been doing lately - it's a great way to write an application. More time can be spent focusing on the business logic, and less time is spent worrying about the user interface; most of the user interface can be built out of the business rules that map data between the application and the storage system. Which leads to another benefit - it's also nice to know that, as part of the validation process, the data that reaches the lower layers of the system (the ones approaching the storage/model layers) has all been verified and converted to the right data format, which is a real help when writing abstract data manipulation statements that write to an RDBMS or LDAP server.
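To give a flavor of that schema-driven pattern, here's a rough sketch against the zope.schema package; the IPressRelease schema and the validate_form helper are made-up illustrations, not the actual form machinery in Zope 3 (or in my Zope 2 framework):

    from zope.interface import Interface
    from zope.schema import Int, Text, TextLine, getFieldsInOrder

    class IPressRelease(Interface):
        """A made-up content schema; each field carries its own title,
        requiredness, and validation constraints."""

        title = TextLine(title=u"Title", required=True)
        body = Text(title=u"Body", required=False)
        priority = Int(title=u"Priority", min=1, max=5, default=3)

    def validate_form(schema, form):
        """Convert raw form strings field by field and validate them.
        Returns (data, errors); data holds only the values that passed."""
        data, errors = {}, {}
        for name, field in getFieldsInOrder(schema):
            raw = form.get(name)
            try:
                if raw is None or raw == u"":
                    value = field.default
                else:
                    value = field.fromUnicode(raw)
                field.validate(value)
                data[name] = value
            except Exception as e:
                errors[name] = str(e)
        return data, errors

    # Adding a new element to the model is then just one more line in the
    # schema; the same field list can drive both the edit form rendering
    # and this validation/conversion step.
    data, errors = validate_form(IPressRelease,
                                 {"title": u"New Release", "priority": u"4"})

By the time the values land in data, they've already been converted to the right Python types, which is what makes the storage-facing code further down so much simpler to write.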