Ars Technica has started a new Mac.Ars column. Ars has run many excellent Mac-related articles in the past, from technical documents about the PowerPC G4 and 970 processors to nicely critical reviews of Mac OS X. All of their Mac writing up to this point has been well rounded, giving in to neither the dogmas of the emphatically pro-Apple nor those of the emphatically anti-Apple. They've covered Mac OS X from DP 2 (I still have that release somewhere) to Jaguar.
Dana Milbank writes up a nice little article about all of the question dodging, related to "the infamous 16 words" about Iraq's supposed nuclear dealings in Africa. My favorite bit is this reminder of Bush's campaign for the presidency:
"My job will be to usher in the responsibility era, a culture that will stand in stark contrast to the last few decades, which has clearly said to America: 'If it feels good, do it, and if you've got a problem, blame somebody else,'" Bush often said on the campaign trail in 2000.

It was basically Bush and Company running on an "I'm not Clinton" platform. The conservatives embraced it and pushed it on: "here's a life-loving, hard-working (hah!) good ol' boy, here to set things right!"
["Responsibility: A Capital Minuet", Dana Milbank, Washington Post, 29 July 2003]
But politics is politics, and Bush is a puppet of those around him. And his crack team of puppeteers is floundering. They're turning back into the type of double - even triple - talking administration that they claim to hate. Yet people have died, and people are still dying, for these lies.
I do think that the media is blowing it out of proportion. As always, they put far too much focus on the wrong thing while avoiding the bigger picture. Looking at the bigger picture would mean they and their audience would have to think, and that is a terrible inconvenience to place on anybody, or so it would seem. In the months leading up to the attacks on Iraq, many voices questioned the need for war and the evidence presented, including Colin Powell's "damning pictures" of supposedly lethal chemical weapon sites.
Now here it is, months later, and none of their stories are proving to be true, with the exception of "Saddam Hussein is a bad man." But even if one were to go on the human rights violations, Iraq - while bad - was not the worst offender. So, with that story not holding much water on its own, what of the others? The weapons of mass destruction that the United States was positive existed - and in such great number - have still not been found. But didn't Colin Powell show pictures of them? Has no one thought of going to the sites those pictures targeted? Because those weapons were there. We were so certain of it. Certain enough that we had to get the United Nations team out of there and blow shit up to look for ourselves, convinced that with free rein of the country (more or less) and lots of extra humans on the ground, we'd find them for sure. And then there's this:
For example, the declaration fails to account for or explain Iraq's efforts to get uranium from abroad, its manufacture of specific fuel for ballistic missiles it claims not to have, and the gaps previously identified by the United Nations in Iraq's accounting for more than two tons of the raw materials needed to produce thousands of gallons of anthrax and other biological weapons. (Added emphasis is mine.)
["Why We Know Iraq is Lying", Dr. Condoleezza Rice, 23 Jan 2003]
This is why the Republican and White House spin teams have nothing when they say "it's just 16 words, they never should have been there, it wasn't really lying because we blamed the British, yada yada yada." From this statement alone, we know that someone lied. And that someone is on our side. There were many of us who didn't believe the evidence we were presented as cause for war. I never felt that we were given a justifiable reason before the war started. And now we have proof - damning proof - of a specific instance where the evidence presented was known to be inaccurate (at best).
The larger picture is that this whole goddamn war was predicated on lies. Nothing yet has convinced me that it was required. There are worse governments in place than Hussein's (and as bad as his was, imagine life in those other places). We have one guy jumping up and down and saying "I've got big bombs that I'm not supposed to have and I can hit you with them!" who is all but ignored, while we contribute to the deaths of roughly 7,000 Iraqi civilians in order to look for a nuclear program that many knew not to be there in the first place. There are bigger hotbeds for terrorism than Iraq, including new ones popping up along the Afghan border, but little is done. There's just a lot going on. And while the troops are away, their benefits here at home are being cut. Support the troops? Why not support the troops by making it so many of their families don't have to live on food stamps! Bill Maher said it best: "We say they're our heroes, but we pay them like chumps."
The bullshit factor of this administration is off the charts. Yet - they continue to get away with a LOT. Why is that?
Oh yeah - don't even try looking at the bullshit that is the federal budget.
Zope 2.7.0b1 was released recently and I'm just now getting my first quick look at it. I'm thinking this will be the most important Zope 2 release since the 2.3.x family. Zope 2.3 brought a lot of usability enhancements - from the ability to use Python Scripts instead of DTML for server-side logic to many API and user interface improvements. It was the first Zope 2 release that I was comfortable using. There have been some nice additions to Zope since then and the 2.6 family has been rather good to us. But Zope has long suffered from another usability standpoint - system administration.
For most of Zope 2's life, you would download Zope, untar it, and run a script called wo_pcgi.py (commonly pronounced as "whoa piggy!"), which would build Zope without PCGI support. Most (but not all) configuration would generally be done in a shell script that set up environment variables and command line switches to get Zope up and running. Few people knew about the environment variables (the ENVIRON.txt documentation file was a fairly recent addition to the tree - and nobody looks at documentation anyways). Nor did they know about some of the flexible Zope options, such as the ability to separate the SOFTWARE_HOME (the base Zope software package) and INSTANCE_HOME (an instance with custom configuration and additional components). Separating out the INSTANCE_HOME makes it possible to manage a running site cleanly apart from the core Zope package - including easily upgrading to a newer version by changing the software home setting (say, from Zope 2.6.1 to 2.6.2) without having to copy site-specific settings from one Zope tree to another. Yet to this day, I still see sites running out of directories named
One of the primary goals of Zope 2.7 was to fix this problem. And given my experiences thus far, it seems like the time and effort that has been put into this has been well placed.
The first difference is that Zope can now be put together with the traditional configure; make; make install process. By passing a --prefix setting to configure, one can set up where to install Zope - immediately making the instance home/software home separation easier and more obvious.
At the end of the make install process came the following note, based on my prefix settings:

    Zope binaries installed successfully.
    Now run '/home/example/lib/zope/2.7.0b1/bin/mkzopeinstance.py'

Running that script with '--help' presented a list of options for making a new INSTANCE_HOME. Run without options, the script asks questions directly. Basically, all that's needed is where to install the instance home; an initial username/password to access Zope through the web; and a ZEO host/port (if running with ZEO). Note that you can still configure and run Zope in-place if desired (instead of running make install, you can run make instance). Prior to this script, making an INSTANCE_HOME required manual work and knowledge of how to set up the Zope start/stop scripts to run in this setting.
As stated earlier, configuring a Zope instance used to mean setting environment variables and command line switches in a start script (usually a shell script that wrapped a call to z2.py). Compare this to Apache: one benefit the Apache web server has had for a long time is its configuration file. I'm a developer, not a deployer, and rarely ever touch Apache. But I can usually stumble my way through its configuration file, mostly because it's fairly well documented in-line and filled with meaningful examples. Zope 2.7 finally brings a similar feature to Zope by using ZConfig. ZConfig is a system geared towards writing more expressive configuration files than simple .INI files while still producing something that's editable by administrators and deployers. When mkzopeinstance or make instance is run, a default zope.conf file is generated. This file is much easier to deal with than environment variables and other scattered configuration options. Most of the documentation that was in ENVIRON.txt is here, commented out, with defaults clearly explained. This is much nicer than what was available in the past because it clues users in to what options they really have in the same place that they're setting them.
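To give a flavor of the format (this fragment is from memory and simplified, so treat the directive names as illustrative rather than exact), a generated zope.conf reads something like an Apache configuration: plain key/value directives plus nested sections, with commented-out defaults alongside:

```
# Where this instance lives
instancehome $INSTANCE

# The main HTTP server section
<http-server>
  address 8080
</http-server>

# Event logging: level plus one or more log handlers
<eventlog>
  level info
  <logfile>
    path $INSTANCE/log/event.log
  </logfile>
</eventlog>
```

The nesting is what makes ZConfig more expressive than .INI files while staying editable by hand.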
Zope 2.7 also comes with a much nicer start/stop system in the form of zopectl. It provides all the basic start/stop functions for controlling the Zope instance, as well as some other helpful commands. I've been using something similar for the past couple of years, and it's nice to see it fully fleshed out and shipping as part of the system.
While the new configuration, installation, and execution capabilities are excellent, it's worth noting a couple of other nice features:
Zope 2.7 has been through some interesting changes in its scope, and has taken its time to show up, but I think the wait will have been worth it. Hopefully the beta cycle will be quick - but solid.
Scar Scar / Feed Circuit
Sometimes our brains will just think in reverse of the code and tools we use, and there seems to be nothing we can do about it. Jarno Virtanen writes that he can never remember the order of arguments for pickle.dump(). I've had the same problem with the Unix ln utility. Whenever I do ln -s, I get the remaining arguments in the wrong order. I became convinced that it was written into the command that whatever you do the first time is incorrect, no matter what order you put the arguments in, because for YEARS I never did it right the first time. Fortunately, I (think I) now know how to use it. I think.
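For what it's worth, Python's os.symlink takes its arguments in the same order as ln -s: the existing target first, the new link name second. A quick sanity check (the paths here are throwaway temp files):

```python
import os
import tempfile

# Same argument order as `ln -s TARGET LINKNAME`:
# the thing being pointed at comes first, the new link second.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "real.txt")
open(target, "w").close()

link = os.path.join(workdir, "alias.txt")
os.symlink(target, link)   # target first, link name second

print(os.path.islink(link))            # → True
print(os.readlink(link) == target)     # → True
```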
Jarno also notes a difference between the cPickle and pickle modules. pickle (the original Python-based module) has far less documentation than its optimized C-based sibling:
    Python 2.2.3 (#1, Jul  3 2003, 12:12:07)
    [GCC 2.95.3 20010315 (release) [FreeBSD]] on freebsd4
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pickle
    >>> help(pickle.dump)
    Help on function dump in module pickle:

    dump(object, file, bin=0)

    >>> import cPickle
    >>> help(cPickle.dump)
    Help on built-in function dump:

    dump(...)
        dump(object, file, [binary]) -- Write an object in pickle format to
        the given file.

        If the optional argument, binary, is provided and is true, then the
        pickle will be written in binary format, which is more space and
        computationally efficient.
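The mnemonic that finally stuck for me (shown here in a modern Python, where the file argument is a binary stream): dump takes the object first and the file second, and load only takes the file.

```python
import io
import pickle

data = {"spam": 1, "eggs": [2, 3]}

buf = io.BytesIO()
pickle.dump(data, buf)   # object first, file second
buf.seek(0)

restored = pickle.load(buf)   # load only takes the file
print(restored == data)       # → True
```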
Ed Heil's post about Python's (apparent to him at the time) lack of a strip() method on strings brought, of course, comments that this method does in fact exist. Heil responds somewhat cheerfully with:
which turns out not to be true in the CURRENT python. That's what you get for criticizing Python: anything you say about it they say "But Guido just fixed that in the latest release! Get with the program!" Moving target, man! Moving target.

Except that's not the case in this case either. It's been there since the oft-forgotten Python 1.6 and its far better known sibling, Python 2.0, both from early autumn 2000.
Curiously enough, I still see brand spanking new code use the string module functions over methods (or, more often now, I see mixtures of the two). Curious, that.
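The method forms are the ones that have survived; the old string module functions were thin wrappers around them, were deprecated for years, and are gone entirely in Python 3. A quick illustration, assuming a modern interpreter:

```python
s = "  spam  "

# String methods live on the object itself and chain naturally...
print(s.strip())           # prints: spam
print(s.strip().upper())   # prints: SPAM

# ...whereas the old module style read as string.strip(s),
# string.upper(s), etc. (removed in Python 3).
```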
I was really impressed with Ruby for a while, but there's something I noticed about Python that I really, really like: OS abstraction. Ruby can run on many platforms, but I remember there being a lot of Unix shell-isms in there (Ruby took a lot of design inspiration from Perl, for better or worse), including the built-in `cmd` construct for executing shell commands. I remember reading somewhere that Matz (the designer of Ruby) really wants to tone this aspect down. But it makes me appreciate Python's os.path module which, when used properly, can help ensure software that runs not only on Unix and Windows, but on classic Mac OS as well - no matter what each OS may use to separate path components.
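A small sketch of the idea: build paths from components and let the platform decide the separator.

```python
import os.path

# os.path.join picks the right separator for the host OS, so the same
# code works on Unix ('/'), Windows ('\\'), and, in its day,
# classic Mac OS (':').
path = os.path.join("media", "covers", "merzbow.jpg")
print(path)

# Splitting is symmetric, so round-trips are safe:
head, tail = os.path.split(path)
print(tail)   # prints: merzbow.jpg
```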
Actually, just having modularity built into the language, with a single module-loading keyword (well, sort of a pair that do the same thing) - import foo and from foo import blah - removes all of the weirdness of dual statements like require, or Perl's use and require, or PHP's include and require. Modules are wonderful. The Modula-2 influence on Python from the beginning has been great for the language, and the inclusion of packages (I can't remember if that was a 1.4 or 1.5 feature) as part of the system has made it even better.
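Both spellings go through the same import machinery and bind names to the same module object, which is part of why the system feels so uniform:

```python
# Two spellings, one mechanism: both names end up bound to the
# very same module object.
import os.path
from os import path

print(os.path is path)   # → True

# `from ... import` can also pull in individual names:
from os.path import join
print(join is os.path.join)   # → True
```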
distutils has made it supremely easy to package software and install it into a system, giving Python some excellent deployment strategies. So yeah, there are still plenty of weird dualities in Python. But there are weird dualities in every language. That's all I'm saying.
A couple of quick reads I've undertaken this morning are Ars Digita: An Alternate Perspective and chapter 2 of the Zope 3 Programmer Tutorial. Two disparate reads, to be sure, but both quite interesting.
Michael Yoon's story on Ars Digita is a more balanced tale than some of the other accounts of the rise and fall of this company. There was a time when they were seen as one of the most direct competitors to Zope - both the company and the platform. Now, along with other companies that we thought were competitors or larger-scale versions of Zope on the professional services side, they're all but gone. Zope Corporation still exists. However, as Jeremy Hylton points out:
The story of ACS4 should be a cautionary tale for Zope3, although I think it's possible to manage the Zope3 transition better.

Zope 3 is a major rewrite of Zope, but it's not expected that everyone will drop Zope 2 and move to Zope 3 immediately. The Zope 3 Road Map outlines the following plan:
It's also my understanding that not long after ACS4, Ars Digita decided to rewrite the whole thing in Java. I can confidently say that there are no such plans for Zope 3, although it should be easier to run it under Jython, given that (a) Jython catches up to Python 2.2.3 or Python 2.3 soon; and (b) the custom C parts for Zope 3 are converted to Java classes. The latter scenario is apparently better than the one in Zope 2, which is never expected to be runnable under Jython due to Extension Classes (which are no longer necessary under Python 2.2 thanks to the ability to subclass C types in Python; this has never been a concern for Jython anyways, since it can subclass Java classes directly). My primary point, however, is that while Zope 3 is a big step up from Zope 2, it's not expected that there should ever be a need to make such a big step again. And Zope 2 will continue to be maintained, since Zope Corporation and so many other companies (including my own) have large Zope 2 systems in place now.
That being said, one might ask "Why Zope 3, then?" My answer is that Zope 2 is old. There's a lot of direct heritage, good and bad, from 1996's "Bobo", which is still in Zope as ZPublisher. Principia was written in 1997 as a full Bobo application and as a framework for Zope Corporation (known as Digital Creations at the time) to combine disparate Bobo applications, drawing on the different patterns and lessons of each. Bobo remained Open Source, while Principia was closed. Very quickly, programming capabilities were added to Principia's DTML and it became a full through-the-web development platform. It was Open Sourced (in late 1998?) and renamed Zope at version 1.9. 1.10 followed soon after. Then came Zope 2.0, which brought ZODB 3 (the persistent object system), ZClasses (a through-the-web development system that some love, some hate, and some just avoid altogether), multithreaded server support (ZServer), and various other changes. It was a vast improvement over Zope 1.10, but it still had a lot of the old code (some of which even pre-dates Principia!). Zope 2.3 was, in my opinion, the first really usable Zope 2 release. Among some nice user interface improvements, it featured the inclusion of Python Scripts into the core distribution. Up to that time, all server side scripting (for both display and processing purposes) was done using the DTML tag language. Ugh. Then, as Zope 2 continued to progress, Page Templates came into the picture (yay!), as did core session tracking, etc. This leaves us with a pretty nice system. But there are still problems. For example, WebDAV, FTP, and XML-RPC support are all done differently. There's no uniform way to add new protocol support to the system.
The overall Zope 2 architecture suffers from the heavy-and-wide inheritance tree problem, and I would wager that few Zope developers really know what they're subclassing and what they may accidentally override - I still get surprised at times (which is why I now keep my own micro-framework that I can use predictably). It wasn't until the CMF really took shape that we saw the benefits of a more component based architecture. The CMF used a lot of collaborating service components which combined to give a flexible content management experience. It was a lot easier to write a content object class for the CMF than to write something similar for regular Zope, because the developer could focus more on what the content object needed to do and less on what Zope wanted it to do. Menus and actions could be generated dynamically based on the content type of the class - not the class itself. This allowed configuring a different workflow for a Press Release than for a regular Document, even though they might both be instances of the same class. But the CMF is still an awkward fit onto Zope 2 - CMF development and management is very different from regular Zope management. Still, there were a lot of good ideas in it - we could see firsthand the power of collaborating objects and the sort of flexibility it provided. Ultimately, it was decided that Zope needed a real component architecture in order to address some of the "evolutionary shortcomings of Zope 2." And those are the nuts and bolts of it - Zope 2 is good, but there are some well known issues that make it difficult to evolve. Delivering a component architecture should yield better evolution, since component architectures are designed with replacement in mind - focus less on what a particular object is, and more on whether it can get the job done. Putting that kind of loose coupling in place allows new core parts to be dropped in almost as easily as new business/content objects.
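To make the contrast concrete, here is a toy sketch (the names are mine, not Zope's or the CMF's) of the collaborating-object style: the content class stays small, and behavior like workflow is a pluggable collaborator looked up by content type rather than baked into the class hierarchy.

```python
# Toy sketch of collaborating objects (hypothetical names, not CMF API).
class SimpleWorkflow:
    states = ["private", "published"]
    def initial_state(self):
        return "private"

class PressWorkflow:
    states = ["draft", "pending", "published"]
    def initial_state(self):
        return "draft"

# One content class; which workflow applies depends on the declared
# content type, not on the Python class of the object.
WORKFLOWS = {"Document": SimpleWorkflow(), "Press Release": PressWorkflow()}

class Document:
    def __init__(self, content_type):
        self.content_type = content_type
        self.state = WORKFLOWS[content_type].initial_state()

doc = Document("Document")
pr = Document("Press Release")
print(doc.state, pr.state)   # → private draft
```

Swapping in a new workflow means registering a new collaborator, not re-rooting an inheritance tree.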
There are parts of Zope 2 that live up to this idea now (from new ZODB storage systems to pluggable security managers to replaceable session management components), and I expect that by the time Zope 3.0 (not X3) is done, the distance between the two systems will have shrunk considerably.
Anyways, to describe a Zope 3 benefit (again) that can be ascribed to Zope 2, looking at the slides for the Zope 3 developer tutorial, chapter 2, there is a good bit of documentation of how Zope 3 uses Schema to generate editing forms for objects. I covered some of this recently, including how I'm applying similar patterns to my current pile of Zope 2 based projects with decent success. It's nice having a base framework in place so that when schema changes do occur I can add them to the system with a couple of lines of code that describe the new element and don't have to deal with the display and validation manually. In general - at least for the applications we've been doing lately - it's a great way to write an application. More time can be spent focusing on the business logic, and less time is spent worrying about the user interface. Most of the user interface can be built out of the business rules that map data between the application and the storage system. Which leads to another benefit - it's also nice to know that as part of the validation process, the data that reaches the further down parts of the system (the ones approaching storage/model layers) has all been verified and converted to the right data format, which is beneficial when writing abstract data manipulation statements that write to an RDBMS or LDAP server.
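As a rough illustration of the schema-to-form pattern (this is a toy of my own, not zope.schema's actual API): describe the fields once, and derive both the rendered form and the validation from that single description.

```python
# Toy schema-driven form generation (hypothetical API, for illustration).
class TextField:
    def __init__(self, name, title, required=True):
        self.name, self.title, self.required = name, title, required

    def validate(self, value):
        if self.required and not value:
            raise ValueError("%s is required" % self.name)
        return value.strip()

PERSON_SCHEMA = [
    TextField("cn", "Common Name"),
    TextField("mail", "Email"),
    TextField("description", "Description", required=False),
]

def render_form(schema):
    # The HTML falls out of the description; no hand-written forms.
    return "\n".join('<label>%s</label><input name="%s">' % (f.title, f.name)
                     for f in schema)

def validate(schema, data):
    # Everything below this layer sees clean, converted values.
    return dict((f.name, f.validate(data.get(f.name, ""))) for f in schema)

html = render_form(PERSON_SCHEMA)
clean = validate(PERSON_SCHEMA, {"cn": " Jane ", "mail": "jane@example.org"})
print(clean["cn"])   # → Jane
```

Adding an attribute means adding one field to the schema; the form and validation follow automatically.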
To Live is finally out on DVD! This is one of the best movies I have ever seen. If not the best. A beautiful and painfully real portrait of life in mid 20th century China seen through the life of a relatively simple man and his family - without the sensationalism of the similar sounding (in concept) Forrest Gump. To Live is beyond anything I've seen.
A little too simplistic definition (07 Jul 2003)

Which means, it is the most efficient way to maintain code. All kinds of documentation is right within the code. There are tools to get the documentation into different formats. Javadoc can do the same thing? Sorry folks, no. Look at how Python is an engineer friendly language, by actually supporting a __doc__ attribute for every module, class and function. This is much better than fooling around with slash-star-star ... star-slash.

There's more in the post, going into the joys of the new (Python 2.2 or later) help(obj) function built into the Python interpreter, which basically generates something like a man page based on the data in the object passed in (which may be an object instance, a class, a module, etc) (dir, help, and pprint).
Ah, pprint. This is a terrific Python module that "pretty prints" Python data structures. It comes in very handy when looking at large tuples (especially nested tuples), lists, and dictionaries. A module function that I've been using lately is pformat(object), along with Zope's zLOG logging module, to log what data is being sent to LDAP as part of the project I'm currently working on:
    modlist = mod_entry_generator(old, rec, nodelete, mod_only)
    if __debug__:  ## Avoided when Python is run with the -O option
        zLOG.LOG("LDAP.Gateway", zLOG.BLATHER,
                 "Modifying DN: %s" % dn,
                 "%s\n" % pprint.pformat(modlist))
    c.modify_s(dn, modlist)

The modlist is a tuple of tuples in the form of (OP, ATTR, VALUE). Since the modlist can grow quite long, or the VALUE items themselves can be quite long lists (for multi-value attributes), I get a (relatively) readable log entry:
    ((2, 'mail', ['firstname.lastname@example.org']),
     (2, 'mailAlternateAddress',
      ['email@example.com',
       'firstname.lastname@example.org',
       'email@example.com']))
There are a couple of small notes about the above code. First is the use of the if __debug__: statement. When Python is run with the -O flag, __debug__ is set to false, and it's advisable to deploy Python programs with -O. This also removes assert statements from the compiled bytecodes. -OO does the regular optimizations and removes docstrings from the bytecodes as well.
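A minimal demonstration of the mechanism: both the assert and the __debug__ block disappear from the bytecode when the interpreter runs with -O.

```python
def modlist_sane(modlist):
    # Cheap invariant check; compiled away entirely under -O.
    assert all(len(entry) == 3 for entry in modlist), "malformed mod entry"
    return True

if __debug__:
    # This whole block is skipped when running `python -O script.py`.
    print("debug checks are active")

print(modlist_sane([(2, 'mail', ['a@example.org'])]))   # → True
```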
Second is the use of zLOG.BLATHER. BLATHER is a -100 value in the zLOG level system. By default, anything less than zLOG.INFO (0) is not written to the logs. Using zLOG.BLATHER, zLOG.DEBUG (-200), and zLOG.TRACE (-300) are advisable ways of monitoring Zope operations and data during development that can be disabled at deployment time. The new logging system in Python 2.3 has a similar but simpler set of levels. All of these can be used to your advantage as a developer to track data not only at development time, but after deployment: if something funny is happening, ratchet down the log level to show all of your trace/debug/etc messages. It's better than peppering code with print statements.
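The same pattern in Python 2.3's logging module looks something like this: messages below the configured threshold are simply dropped, so chatty trace output costs almost nothing at deployment time.

```python
import logging

logging.basicConfig(level=logging.INFO)   # deployment-ish threshold
log = logging.getLogger("ldap.gateway")

log.debug("modlist: %r", [(2, "mail", ["a@example.org"])])  # suppressed
log.info("modified DN uid=jdoe,ou=people")                  # emitted

# Ratchet the level down during development to see everything:
log.setLevel(logging.DEBUG)
print(log.isEnabledFor(logging.DEBUG))   # → True
```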
An interesting weblog post that came across my screen this morning is Is it High Time to Get Rid of Classes?. Personally - I don't think it is. At least, not necessarily. But it is high time to go away from deep, wide, and messy class hierarchies into shallow ones, with much of the generally inherited behavior replaced by collaborating objects.
I've harped on this before, especially when comparing the ZWiki code bases from Zope 2 to Zope 3. That post compares the single-class-with-heavy-hierarchy design of ZWiki for Zope 2 (a design pattern that most of Zope 2 (at least older Zope 2 code) is also guilty of) with the collaborating object design of ZWiki for Zope 3 (the component model based Zope).
I've been applying such patterns to my own Zope 2 projects, the most recent being an LDAP application. The results have been generally successful. I have an application that can largely be built out of descriptions: descriptions of web form/validation policies, which build a strong user interface without my having to write HTML forms myself, and descriptions of how to map LDAP data into simple Python structures and back, so that the higher-up application components never have to think about generating an LDAP mod list.
In these situations, it's still a fair amount of work getting the core components written and working. But the design allows new attributes to be added later by simply updating the description components. Unless there's major new business logic involved with the new attributes, that's all that is needed: the user interface gets rebuilt, and the LDAP communication code becomes aware of a new attribute it needs to watch for on an object class.
The application is not without an inheritance hierarchy, but inside the application the hierarchy is fairly shallow, usually two deep at most. The common abstract base classes used inside the application plug themselves into the Zope 2 hierarchy while ensuring that their subclasses have to worry little about said hierarchy. As such, I'm getting a nice set of reusable patterns and my own little framework going here to help build new applications fairly rapidly, without the weaknesses that were there previously (e.g. weak, unvalidated forms).
Over the past couple of weeks, I've been wondering whether those Sound Soother / White Noise Generators would be a good thing to add to my life. Or whether even a CD playing alarm clock type thing (like the item linked) would be good to have next to the bed. I've been working on a harsh noise album for The ELW, which has involved a lot of late creative nights. It's also involved some interesting sleep sessions as I'd nod off on the hardwood floor during one of the tracks (most vary between 10-23 minutes) to intense dreams - usually to be jarred awake by the start of the third track (a jarring rhythmic beginning in comparison to the more usual long waves of large noise). I wonder how well that would really work to try to sleep to. A well programmed selection of The ELW or Merzbow could be an interesting experiment. Seeing one of those Sharper Image "sound soothers" in use on tonight's episode of Sex and the City got me thinking about it again. Particularly after I remembered a hilarious News Radio episode where the character Dave was given one of those boxes to calm down and became overly mellow and addicted to it.
On Merzbow, I've been starting to aggressively expand my collection, finally. Freshly arrived is Ikebukuro Dada (review). On the way are Batztoutai With Material Gadgets, a double CD from the excellent RRR catalog. I had this CD many years ago when I was first getting into the collage/concrete structure and had started The ELW project, but I think at the time I was unprepared for the intensity of Merzbow (even older works like Batztoutai), so as I slowly sold off my CD collection to live, this one was one of those early losses. I'm looking forward to having it again. Also on its way in that order is Frog+, another double CD (and one that might fit the sleeper-cd profile I described above).
Another order includes 1930, a Merzbow release on John Zorn's Tzadik label, which I hope to go back to for Keiji Haino, Ikue Mori, and Yuka Honda works. Which makes it seem that all that is good in my life right now seems to be coming from Japan, or from Japan by way of New York.
Speaking of Japan, in that order which includes Merzbow's 1930 is Petty Booka's US release, Let's Talk Dirty in Hawaiian. Petty Booka are my find of the week (during which I found plenty of other good new Japanese artists such as Ex-Girl and Hi Posi). They're two ukulele-playing girls from Japan who play everything from Polynesian music to country western and bluegrass. And they do it extremely well. I hope they release their country western work in the US soon - it shames (as usual) what usually passes for country these days, and can instead be played alongside Patsy Cline, Hank Williams (and Hank III's country works), Gillian Welch, etc.
As a result of all this, I've decided I need to get down to the liquor store soon and see if they have any Japanese Whisk(e)y. So much else good is coming from Japan into my life these days, and I do love the Whisk(e)y. How can I go wrong?