Welcome to the blog of Greg Pavlik, software technologist and frustrated adventurer. Currently, I am working on technologies related to Cloud Computing and Cloud Platform as a Service capabilities.
Thursday, December 14, 2006
Happy Holiday Season
I will shortly have an on-topic post for the blog dealing with all the controversy around agile programming, but in the meantime, I wanted to wish all my friends around the world a very happy holiday season. I don't know where to start with the list of seasonal holidays that have already passed or are coming (Christmas, Hanukkah, Diwali, Ramadan, Chinese New Year.... off the top of my head). To make it simple, I hope everyone had a prosperous year in 2006 and I look forward to seeing you in 2007!
Monday, November 20, 2006
Icons
I wanted to make a small note on the passing of several icons of culture. First, Ed Bradley, who always distinguished himself as a gentleman throughout his life as a reporter. The passing of Bradley and earlier of Peter Jennings seems to mark the end of the era of broadcast television and of more civil and urbane news reporting. I'll miss both the figureheads and the genre as it was. Bradley also hosted an NPR program Jazz at Lincoln Center, worth checking out if the archives are available online.
Second, Milton Friedman died at 94 this past week. There are few economists who can lay claim to the level of influence and intellectual energy of Milton Friedman. In the twentieth century, the other great influencer was John Maynard Keynes. Friedman may not have been right about everything, but he managed to direct the course of history in his lifetime.
On a slightly less somber note, I had a chance Saturday night to see the newest James Bond film, Casino Royale. In the US, at least, Bond reruns seem always to have been on television, with the Connery and Moore eras dominant. The old James Bond has been killed off. Casino Royale is a complete re-invention of the character, carried off well by Daniel Craig. The new Bond is darker, more reckless and the film itself has a more developed realism. The Craig Bond is aggressively targeted at a female audience with a perhaps ironically dulled carnal appetite. The film is the best in years, perhaps ever, with the primary drawback being the relentless brand advertising.
Monday, November 13, 2006
Open Source Java
Sun announced that the core Java platforms will be made available under GPL. In general, I think this is a useful turn, but at this point I wonder how many people really have a keen desire to hack Java platform code. The main benefit is likely to be in areas like performance and memory management, where ISVs have a vested interest in tuning the platform.
It's strange to see how GPL has moved from being the license of open source purists to a preferred vendor license.
Monday, October 30, 2006
Blue Moon
Once in a great while, something utterly simple comes along that changes the way you work and do business. I'm wondering if Zotero may be such a thing on a massive scale?
SOA Suite Release
The latest release of Oracle's SOA suite is available with ESB functionality, policy management and governance, and process management tools and engine. Check it out. Note the article is from an Indian software development site: Indian IT news coverage is getting really strong!
Wednesday, October 25, 2006
Unbreakable Linux
There's a flood of commentary coming out on Oracle's move to offer Linux support, most of it speculative. I prefer the simple explanation: it's good business to give customers what they want at an attractive price... One thing is for sure: this is going to be a major, major shot in the arm for Linux.
Monday, October 23, 2006
SOA at Oracle Open World
Open World is this week and it is truly huge this year. To keep up with the latest and greatest on middleware, you should check out Thomas Kurian's keynote tomorrow. For a high level overview of where we see things moving forward in the SOA space, there are a series of customer-oriented white papers worth checking out.
Thursday, October 19, 2006
The Road Less Taken
After years of labor, OSGi has started to gain critical mass. In fact, OSGi is popping up all over the place: Eclipse, Spring, SCA discussions, etc. It seems like almost everyone is interested in leveraging OSGi. I won't go into the details of OSGi, except to say that there are some genuinely useful things that can be done with it for building product software: some of the most interesting capabilities have to do with dynamic loading and unloading. Sun has spearheaded an alternative model in JSR 277. It is interesting to look at the reaction of the OSGi community to 277. Will this be the dividing line that marks the transition to a new generation of middleware technology?
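To give a feel for why dynamic loading and unloading matters, here's a toy plain-Java sketch of the lifecycle idea: modules that can be installed, started and stopped while the host process keeps running. To be clear, this is not the OSGi API (which lives in org.osgi.framework and does far more, including classloader isolation and versioning); the names here are all made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy module registry modeled loosely on the bundle lifecycle idea:
// install, start and stop modules at runtime without restarting the host.
interface Module {
    void start();
    void stop();
}

class ModuleRegistry {
    private final Map<String, Module> installed = new LinkedHashMap<>();
    private final Map<String, Boolean> active = new LinkedHashMap<>();

    public void install(String name, Module m) {
        installed.put(name, m);
        active.put(name, false);
    }

    public void start(String name) {
        installed.get(name).start();
        active.put(name, true);
    }

    public void stop(String name) {
        installed.get(name).stop();
        active.put(name, false);
    }

    public boolean isActive(String name) {
        return Boolean.TRUE.equals(active.get(name));
    }
}
```

The real value in OSGi is that this lifecycle is enforced by the framework with proper class loading semantics, which is exactly what makes it attractive for long-running middleware.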
Wednesday, October 18, 2006
Kid Safe Internet
There's been a rash of local stories of kids stumbling on inappropriate content on the Internet over the last few months, so I decided to pick up some filtering software. The package I chose was BumperCar 2 for the Mac. As far as I can tell, BumperCar is essentially a customization of Safari.
The software provides the standard white list/black list facilities. The white lists are useful for young kids and the black lists are useful in a way I hadn't thought of initially: you can filter out specific domains associated with an otherwise useful site. For example, BumperCar blacklists Google images by default. Google searches are automatically kid safe on BumperCar, which is imperfect but useful. Lastly, BumperCar will filter on both the content coming in and going out. Incoming content appears to be checked before rendering, which is a nice way to catch things that might slip through.
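The white list/black list interaction described above is simple enough to sketch. This is a minimal illustration of the idea, not BumperCar's actual logic, and the domain names are examples I made up: black list entries win, so a single sub-domain of an otherwise useful site can be blocked on its own.

```java
import java.util.Set;

// Minimal white list/black list domain filter.
// Black list wins; if a white list is present, only listed domains pass.
class DomainFilter {
    private final Set<String> whiteList;
    private final Set<String> blackList;

    DomainFilter(Set<String> whiteList, Set<String> blackList) {
        this.whiteList = whiteList;
        this.blackList = blackList;
    }

    boolean allow(String domain) {
        if (blackList.contains(domain)) return false;   // explicit block always wins
        if (!whiteList.isEmpty()) return whiteList.contains(domain);
        return true;                                    // no white list: default allow
    }
}
```

With a white list of example.org and a black list of images.example.org, the main site passes while its image host is filtered, which is exactly the pattern of blocking Google Images while allowing Google search.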
I've only looked at the filters briefly, but they appear to check for sexual content and violence. In this day and age, it would probably be best to check for extremist material of all sorts as well.
First impressions: a good product.
Wednesday, October 04, 2006
Oracle JPA and Spring
I've found the Spring framework to be a very useful addition to the J2EE platform. I've been working with early releases of Spring 2.0 over the last year and it's a genuine point release for sure. I've long been a fan of the TopLink toolkit (long before coming to Oracle, I might add), so it is even cooler to see that the core of TopLink and Spring will be packaged together. A new standard for Java development?
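What makes the Spring-plus-persistence combination appealing is the programming model: domain logic lives in POJOs and the persistence dependency is injected, so the container can supply a TopLink-backed implementation at deployment time. Here's a plain-Java sketch of that shape; every name in it is illustrative (in a real Spring application, the wiring would come from Spring configuration and the repository would delegate to a JPA EntityManager rather than a map).

```java
import java.util.HashSet;
import java.util.Set;

// The persistence contract the service depends on; a container could
// supply a TopLink/JPA-backed implementation without touching the service.
interface OrderRepository {
    void save(String orderId);
    boolean exists(String orderId);
}

// Stand-in implementation so the example is self-contained.
class InMemoryOrderRepository implements OrderRepository {
    private final Set<String> store = new HashSet<>();
    public void save(String orderId) { store.add(orderId); }
    public boolean exists(String orderId) { return store.contains(orderId); }
}

class OrderService {
    private final OrderRepository repository;

    // Constructor injection: in a Spring context this wiring comes from
    // configuration, not from code inside the service itself.
    OrderService(OrderRepository repository) { this.repository = repository; }

    void placeOrder(String orderId) { repository.save(orderId); }
    boolean isPlaced(String orderId) { return repository.exists(orderId); }
}
```

The point is that nothing in OrderService knows or cares which persistence engine sits behind the interface, which is precisely what makes swapping in TopLink attractive.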
Human, All Too Human
I spent last weekend in Paris. Only on my return did I realize this was my first time in France as a tourist, rather than a business visitor. My wife and I had the opportunity to spend some sustained time in the Louvre, which is a wonderful repository of human achievement.
I stopped halfway through the Greek sculptures to ask myself how much we've improved as a species in the last several thousand years. Clearly, we've seen advancements in mathematics, including algebra and the (recent) development of the fundamental theorem of calculus. As a result we've seen significant advances in the sciences, especially in conjunction with the adoption of inductive reasoning. Along similar lines, we've seen progress in medicine, which has extended our lifespan and often (though not always) our quality of life.
In contrast to the ancient world, we've virtually eliminated human slavery, which seems to me to be our most important social advance, since it implies at least the idea of basic and universal human dignity. At the same time, we keep finding reasons to kill each other en masse, as the horrors of the twentieth century remind us. Those retrograde instincts of hate, dogmatism, and fanaticism to which we all may be susceptible to some degree threaten to pull us back to barbarism and to make our scientific advances tools of evil.
In some ways, a uniform measure of progress may be impossible to define, but it is clear that there is a basic human drive toward humanism, greatness and beauty. The Louvre is a fine place to rediscover that fact.
Travesty
I've lived within proximity of Amish communities most of my life, more specifically near the community in Pennsylvania that just suffered a tremendous loss. These people live a meaningful and peaceful life. I cannot imagine in my worst nightmares what would motivate someone to try to harm their children. My heart goes out to them.
Wednesday, September 27, 2006
WWW2007: Call for Papers
I'm a big fan of the www* conferences, which bring together folks from almost every middleware company, the large ecommerce companies like Google and Amazon, and end users every year. I've participated in the conference as a presenter or program committee member for several years running and I'm pleased to be on the Web services track program committee again this year. If you have some good ideas, interesting experiences or novel research, you should consider submitting a paper.
Oh, and the setting for this year's conference is Banff.
********************************************************************
CALL FOR PAPERS
Sixteenth International World Wide Web Conference
Web Services Track
Banff, Alberta, Canada
http://www2007.org
May 8-12, 2007
********************************************************************
The Web Services track of WWW2007 seeks original papers describing research in all areas of Web Services. Topics include, but are not limited to:
* Service contract and metadata
* Orchestration, choreography and composition of services
* Large scale XML data integration
* Dependability
* Security and privacy
* Tools and technologies for Web Services development, deployment and management
* Software methodologies for Service-Oriented Systems
* The impact of Web Services on enterprise systems
* Web Services performance
* Architectural styles for Web Services computing
* Application of Web Services technologies in areas including e-commerce, e-science and grid computing
* Impact of formal methods on Web Services
IMPORTANT DATES
Refereed Paper submissions due: November 20, 2006 (HARD deadline; no extensions)
Acceptance Notification: January 29, 2007
Conference dates: Tuesday-Saturday, May 8-12, 2007
Submissions should present original reports of substantive new work and can be up to 10 pages in length. Papers should properly place the work within the field, cite related work, and clearly indicate the innovative aspects of the work and its contribution to the field. We will not accept any paper which, at the time of submission, is under review for or has already been published or accepted for publication in a journal or another conference. In addition to regular papers, we also solicit submissions of position papers articulating high-level architectural visions, describing challenging future directions, or critiquing current design wisdom. Queries regarding WWW2007 Web Services track submissions can be sent to Paul.Watson@ncl.ac.uk or Jim@Webber.name.
All papers will be peer-reviewed by at least three reviewers from an International Program Committee. Accepted papers will appear in the conference proceedings published by the Association for Computing Machinery (ACM), and will also be accessible to the general public via the conference Web site. Authors will be required to sign a copyright transfer form. Detailed formatting and submission requirements are available at http://www2007.org/.
Authors of top-ranked papers from the overall conference will be invited to submit enhanced versions of their papers for publication in a special issue of the ACM Transactions on the Web.
TRACK CHAIRS
* Paul Watson, Newcastle University (UK)
* Jim Webber, Thoughtworks (Australia)
PROGRAM CHAIRS
* Peter Patel-Schneider, Bell Labs Research (USA)
* Prashant Shenoy, University of Massachusetts (USA)
TRACK PC
* Boualem Benatallah, University NSW, Australia
* Sanjay Chaudhary, DA-IICT, India
* Thomas Erl, SOA Systems, USA
* Alan Fekete, University of Sydney, Australia
* Jinpeng Huai, Beihang University, China
* Hiro Kishimoto, Fujitsu, Japan
* Frank Leymann, University of Stuttgart, Germany
* Mark Little, JBoss, UK
* Jimmy Nilson, JNSK, Sweden
* Dare Obasanjo, Microsoft, USA
* Savas Parastatidis, Microsoft, USA
* Greg Pavlik, Oracle Corporation, USA
* Denis Sosnoski, Sosnoski Software Solutions, New Zealand
* Tony Storey, IBM, UK
* Japjit Tulsi, Google, USA
* William Vambenepe, Hewlett-Packard, USA
* Steve Vinoski, IONA Technologies, USA
* Stuart Wheater, Arjuna Technologies, UK
* Michal Zaremba, Digital Enterprise Research Institute, Ireland
Sunday, September 24, 2006
Engineering at its best
I have a more than passing interest in mechanical watches. Part of it has to do with a fascination with time itself, but the main motivator is a deep appreciation for the engineering involved in the design of a high quality movement. The essential elements of watch design include economy of space, efficiency of operation and constraints on implementation techniques -- all factors that software engineers should be forced to take into account. Unfortunately this is often not the case, as software developers are able to get away with things that would cause immediate breakdown in mechanical systems. And even incremental improvements in the practice are fraught with serious regressions. Lately, I've thought about what has been most interesting in software development practices in the last 10 years in a very critical light: XP provided several advances in terms of what we know from job enrichment theory and process control in operations management, but also insists on using craftsmanship as the driving metaphor to explain away the need to industrialize the practice -- all the while, industrialization continues to occur as, for example, IDEs are normalized. If only the patterns folks had not fixated on building architects...
In any case, here's a fascinating, detailed look at a movement from a true Manufacture. An interesting thought experiment: If you are a software person, do you think you would be able to provide a reliable movement design? If so, why? If not, why not?
Monday, August 21, 2006
History of Beauty
I had the opportunity this Sunday to read most (not quite all) of Umberto Eco's History of Beauty. It's a survey of the kinds of things that, according to Eco, men admire but do not need to possess. Here was a fascinating opportunity to look at how time, history, ideas and culture define standards and perception. The book is not so ambitious, or at least not so good. In practice, the book is a survey of art and its relationship to social philosophy in the West. Far too much time is spent on some subjects (e.g., the 19th century decadents, though much of the youth culture of today seems to be a shallow echo of the aesthetes). The book partially disappoints for focusing exclusively on the West, though a world survey would require many volumes. The 20th century treatment is abysmal, which is quite puzzling given Eco's fascinating novel with graphics, The Mysterious Flame of Queen Loana.
As a survey, the book is readable and probably worth taking the time to page through. I found it stilted at times and thin in many places, with echoes of important insights here and there. The book's graphics are often stunning and it's probably worth the price for the painting prints. I can imagine the text being used as a companion to a series of university-level lectures, which may have been the author's intent. The US release, however, is marketed as a stand-alone work of art criticism. Overall, I fear this book will be quickly relegated to the coffee table by most readers (at least those without young children).
Friday, July 28, 2006
ICSOC 2006 Workshop: CFP
This year I am a PC member for ICSOC 2006. Last year I presented at the workshop in the Netherlands with Jon Maron on SOA application design; this year, I'm on the workshop program committee (and the main conference's as well, if I remember all of this correctly!). The first call for papers follows...
-------------------------------------------------------------------
C A L L F O R P A P E R S
2nd INT. WORKSHOP ON ENGINEERING SERVICE ORIENTED APPLICATIONS:
DESIGN AND COMPOSITION (WESOA'06)
In conjunction with the 4th Int. Conference on Service Oriented
Computing (ICSOC 2006) http://www.icsoc.org
Chicago, USA, December 4th, 2006
WESOA Workshop Website
http://fresco-www.informatik.uni-hamburg.de/wesoa06/
Abstract Submission Due: September 8th, 2006
-------------------------------------------------------------------
Wednesday, July 26, 2006
Big SCA Update
The SCA working groups have been hard at work updating a baseline set of specifications for SOAs. Today, the official Open SOA web site has been launched. I encourage you to visit it and provide feedback.
A few points to note:
1) It's great to see that a bunch of new partners have joined the effort, including RedHat (I keep running into this Mark Little guy), Sun Microsystems, Tibco, Progress/Sonic, CapeClear, Software AG and others. This represents a real consolidation of the integration space around SCA as the standard basis for describing SOA components and their interactions.
2) The focus has really moved firmly to SOA as the design center. There has been significant attention paid to BPEL and managed policies. To my way of thinking, BPEL support is a key bellwether for credibility in the SOA space, since most organizations are moving in this direction to leverage service functionality in more sophisticated business processes. Second, managed policies are a key part of a global strategy for SOAs, so this is an important step in improving customer comfort in the Web services management space.
3) The updated Assembly spec is simpler and that translates to "simply better".
4) Oracle's SOA Suite will leverage SCA as the basic description unit of the integration technologies in Fusion middleware, as Thomas Kurian pointed out at JavaOne this year. With the momentum that our applications and middleware businesses are gathering, this is going to be a fantastic showcase of what we're doing. I've had a lot of fun working on the service fabric. It's also built using Spring, which has been a blast to use. More on that subject....
5) Last, and from a Java programmer's perspective, some very interesting news: there is now a Spring integration that allows Spring-based applications to tie in directly to an SCA-based SOA environment. As Spring becomes a de facto standard in many organizations for building J2EE applications, we're opening the door to transparent SCA-based integration for these investments. Plus now there's a practical open source story for Java developers to get on board with SCA without worrying about new learning curves or lots of new constructs. With Spring, it can be just POJOs: turtles all the way down. I had a lot of folks ask me directly about Java programming and SCA. Spring is a great answer.
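To make the "just POJOs" point concrete, here's a sketch of the shape of the programming model. The @Service and @Reference annotations below are toy stand-ins I've defined locally so the example is self-contained -- the real annotations are specified in SCA's Java client and implementation model -- but the idea is the same: an ordinary class is marked as a component, and its dependencies are wired in by the container rather than by application code.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Toy stand-ins for SCA-style component annotations, defined locally
// for illustration; real SCA annotations come from the SCA Java spec.
@Retention(RetentionPolicy.RUNTIME) @interface Service {}
@Retention(RetentionPolicy.RUNTIME) @interface Reference {}

interface Greeter {
    String greet(String name);
}

@Service
class GreetingComponent implements Greeter {
    @Reference
    Greeter upstream; // wired by the container, not by application code

    public String greet(String name) {
        // Delegate to the wired reference if one was injected.
        return upstream == null ? "hello, " + name : upstream.greet(name);
    }
}
```

Nothing in the class depends on an SCA runtime; the assembly metadata tells the container how to wire the reference, which is exactly what lets Spring-managed POJOs participate in an SCA composite.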
So why is this important? At least two reasons.
1) It means that customers can expect to see some structure and standardization around how SOA components are built. For example, SCA will describe the packaging and metadata around a BPEL process, which will move beyond process standardization to deployment standardization. It also means that there will be a normal model for understanding services and their interactions. The same metadata can describe relationships between BPEL process and ESB functionality, for example.
2) This will start to cut down on proprietary aspects of SOA infrastructure that have led to interoperability nightmares. For example, the use and definition of Web services policies will be more clearly constrained. With WS-Policy, we have grammar, but with SCA, we'll have real usage models that vendors can work together to define across product sets.
Good stuff.
Wednesday, July 19, 2006
Wake Up Call
Brazil will soon be running transportation on a biofuel basis. A single act of will by a bold American president could change our destiny.
Thursday, July 06, 2006
Trouble with Aspects
There's an interesting article worth reading arguing that transparency around caching is problematic. It was recently posted by Manik Surtani, the lead for JBoss clustering. The interesting thing is that not too long ago, JBoss was trumpeting "transparent middleware" based on AOP as the wave of the future. With caching, you are of course stuck with a number of semantic implications that the developer needs to be aware of. But the situation is much worse for something like transactions, where the semantics should be unambiguous and touch deeply on the application logic itself. Transactions and the question of transparency were very much at issue at one point as well. I guess a little experience went a long way since then. I hope.
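One of the simplest semantic leaks is worth showing directly. The sketch below is my own minimal illustration, not anything from the article: a naive "transparent" cache hands the same mutable object back to every caller, so one caller's mutation silently changes what everyone else sees -- the kind of thing a developer using a supposedly transparent cache absolutely has to know about.

```java
import java.util.HashMap;
import java.util.Map;

// A naive cache that returns shared mutable values. The aliasing below is
// exactly the kind of semantic implication that "transparency" hides.
class NaiveCache {
    private final Map<String, StringBuilder> entries = new HashMap<>();

    StringBuilder get(String key) {
        // computeIfAbsent stands in for a "transparent" load from a backing store.
        return entries.computeIfAbsent(key, k -> new StringBuilder("fresh"));
    }
}
```

A caller that appends to the returned value corrupts the cached entry for all subsequent readers. Fixing this means defensive copies, immutable values, or documented sharing rules -- none of which is transparent, which is the article's point.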
Monday, July 03, 2006
The Middle
The Nobel Prize-winning chemist Roald Hoffmann wrote an illuminating essay on moderation, which he delivered this morning on NPR. You can follow the link to read it, but it's even better if you listen to it being read by Hoffmann.
There's something about ideology -- any ideology -- that can take people teleologically to extremes. It seems to me that is primarily because ideology rests on abstraction and theory, which can deviate sharply from reality. Moreover, ideology tells us exactly how things should be, or rather, must be, for the true believer. We know that humans are rationalizing animals, so sometimes ideology can be an excuse for releasing our basest instincts. But often it just blinds people to the world around them and leads them to a path of callous and cruel behaviors.
We just don't live in a world of simple absolutes, or at least there are very few. Anything that focuses on a perfect world is a problem: whether that be a perfect past that conservatives fantasize existed or a perfect future that progressives believe they can create for us. Don't get me wrong: we can and should strive for improvements in our world, but -- and this is the crucial point -- not at the expense of the human reality around us. And that, to me, sounds a lot like the middle too.
Tuesday, June 27, 2006
iMacs for Slackers and Slobs?
Technically, the new Intel Macs are the best thing that Apple has done in years. With Parallels, they are just fantastic: there are plenty of Windows apps I need to use, and this nicely solves that problem. I daresay that even Microsoft must be happy with this turn of events: they now pick up Windows license revenue from the small but growing Mac segment. At some point, I'd like to upgrade my wife's machine, though it probably won't be this year. But, why, oh, why is Apple running the worst advertising campaign in this century? I have no idea what on earth they were thinking: the Mac guy looks like more of a loser than the Windows guy. At least the Windows guy looks like he's got a job.
On the other hand, I may just be getting old.
Tuesday, June 20, 2006
iTunes flaw
As much as I appreciate what Apple has done with the iPod (by which I mean the complete package including the iPod itself, the iTunes client, and the iTunes Web service), there's a terrible limitation in the whole infrastructure. As a part of the fat client, I lose the ability to pass a URL around. And that just kills what I presume is a tremendous viral sales opportunity.
For example, imagine that I want to tell a friend over IM to buy Wynton Marsalis Septet's The Marciac Suite. I can provide you a URL to find out about it, or a URL to buy it from Amazon.com. But I can't send you a URL that opens it in iTunes. Not only that, if I google for Marciac Suite, I won't be directed to iTunes. The not-so-subtle point is that by abandoning the Web paradigm, iTunes loses out on the scalability of the Web. It's actually an inconvenience for me to use iTunes these days, despite the fact that I like the service and I own two iPods.
Am I missing something here?
PS. the real point of this blog entry is that you should buy the Marciac Suite. I don't care where: it is sublime.
Thursday, June 15, 2006
The Best of Times, The Worst of Times
Way back in 2001, I worked with Russ Gold (founder/maintainer of HttpUnit and otherwise great guy) on the HP (formerly Bluestone) Application Server 8.0, a J2EE 1.3 server we collectively built in less than a year. That project was one of my best professional experiences and also a time where I got to know many other first-rate software folks, some of whom I continue to collaborate with to this day. Bruce Kratz is now at PrincetonSoftech, along with Al Smith, Pete Petersen, and Jay Hiremath. I worked very closely with Jon Maron and Mark Little, and after we split from HP, we kept together to write a book. Jon and I still work together and Mark is at Red Hat. In looking for some info on agile programming that I need for a forthcoming blog entry, I stumbled on this article from Google cache profiling some of the work Russ and I did on HPAS. I had fun reading it and thought I'd save it from the bit bucket for a few more years...
[edit: after the initial post, I found an article Russ and I had written for the HP middleware newsletter. That article is included at the end of the post as well.]
XP's 12-practice methodology may play an important role for mobile developers struggling to maintain footing in a landscape evolving at warp speed. It's already working for HP Middleware teams.
By Amy Cowen
Picture of Two Coders at a Computer
Agile or extreme?
While "extreme programming" remains popular in today's coding circles, a newer catchall phrase frequently appears on the scene - agile programming. In documenting their foray into XP, Gold and Pavlik use agile programming as an umbrella of sorts for a group of lightweight software development methodologies which includes XP.
"There was the recognition that there are a number of other methodologies which share XP's general approach of adapt rather than anticipate," explains Gold, addressing a growing shift from "extreme" to "agile" programming.
"Then too, at least part of the shift was probably Microsoft's fault," Gold continues. "By adopting the term XP for their new OS, they muddied the mindscape somewhat. Of course, now that Microsoft has adopted the slogan regarding agile businesses..."
Others attribute the shift from "extreme" to "agile" to be one of image. "Agile" conjures up speed and flexibility - but not risk - a change which could prove less intimidating for managers deciding whether or not to give an alternative lightweight methodology a spin.
Extreme Programming... Just the phrase raises images associated with sports from the X Games, sports that hover on the edge of mainstream legitimacy - dangerous, radical, and rule-breaking. Indeed, XP's name imbues it with a heady aura of defiance, post-modernity, and the spirit of daredevils and renegades.
Reviewing the tenets of XP, it becomes clear that XP is extreme in its level of difference from other traditional software methodologies. The irony of XP, however, is that it breaks the rules first and foremost by initiating a new set of rules.
And playing by the rules is key in XP.
This is not free-for-all programming where anything goes, where radical features are dreamed up and coded by pizza chugging hacksters who thought of a cool tweak while skateboarding through the office cubicles on a midnight break.
XP does encourage turning things up a notch. But it does so by offering a set of rules designed to ensure successful communication between team members, encourage a three musketeers mentality, rev up the production process, produce clearer and more straightforward and stable code, streamline costs, and, ultimately, enable teams to deliver solid products to happy clients.
For developers working in the ever-changing mobile application space, an "extreme" approach that churns out products at light speed compared to traditional development cycles might give developers an edge and put apps on the market in time to meet consumer demand - in time to nourish a growing user base and win critical market share.
XP on the scene
XP was developed by Kent Beck and first documented in his 1999 book Extreme Programming Explained. Under Beck's guidance, the first XP implementation happened on the now legendary - in XP circles - Chrysler Comprehensive Compensation - or C3 - development team.
Since its automotive beginnings, XP has infiltrated the ranks in all venues of software development, with success stories cropping up at companies large and small.
Foundations to build on
Beck bases the XP methodology on four core values - communication, simplicity, feedback, and courage. These values underwrite the 12 guiding XP practices (see sidebar below) and provide touchstones visible throughout the development cycle.
Traditional development often involves teams of coders holed up in individual cubbies, each working on pieces of a larger project. When finished, the pieces are assembled, and beta testing begins to see what works and what doesn't.
Assuming each coder has her own way of writing the code, her own way of approaching problems, and minimal communication between team members, it is easy to see how problems are likely to appear when the work of such individual contributors gets glued together.
There's not much holding the "whole" together.
For an XP team, however, the "whole" is being considered throughout.
The 12 XP Practices: The Planning Process, Small Releases, Metaphor, Simple Design, Testing, Refactoring, Pair Programming, Collective Ownership, Continuous Integration, 40-Hour Week, On-Site Customer, Coding Standard
From collective ownership of the code to the emphasis on the simplest design and a commitment to refactoring, the 12 practices that form the foundation of the XP methodology formalize the development process, bringing high levels of discipline, communication, and team spirit to an arena typically ruled by solo code jockeys.
HP goes XP
For Bruce Kratz, HP Middleware Lab Director, XP has found firm footing with his teams. Kratz chalks the HP Middleware team's adoption of XP methodologies up to rapidly changing needs.
"In our industry, requirements change every 90 - 120 days," explains Kratz. "XP allows us to release product in short intervals with a high level of confidence that it is a quality product."
The Middleware Division's first XP project was conducted by the HP Application Server (HPAS) team when working on Version 8 of the server. Following the successful use of XP for HPAS, both the Rich Media Technology team and the Mobile Infrastructure team are working with XP methodologies, says Kratz.
Greg Pavlik and Russell Gold, architects in the Middleware Division, worked on rebuilding EJB-related code for HP Application Server 8 and have documented their team's use of XP. While they didn't employ all 12 of XP's practices on the project, both found working with XP a positive and successful experience.
According to Pavlik, XP was the right approach for the project because "we needed a process that was highly resilient to change and evolution in the code base."
"[HP has] been in the application server business" since its acquisition of Bluestone, continues Pavlik. "But the code base today is completely different than what we started with, so the rate of change is phenomenal. Some of the software has been rewritten from scratch, but the EJB 1.1 server had been battle-hardened in some of the container logic. We wanted to preserve the foundations that we knew were solid, but didn't want them to become an albatross either. You have to have a process that supports constant and dynamic refactoring to move forward with new features and architecture changes without turning a project into spaghetti code."
The realities of ongoing and near-constant change are ones developers constantly battle.
XP helps offset the problems - and costs - associated with such change, making it a solid strategy for the HPAS team.
"You'll notice that the first XP book by Kent Beck was subtitled Embrace Change, and in one sense that's what XP is really about - flattening the cost of change away from an exponential growth curve," continues Pavlik.
Simple is as simple does
Simplicity may be a surprising tenet to see in XP discussions, but on an XP project, "simple" is the result of hard, calculated, and relentless work. Striving towards the simplest design, Beck says the XP coach has to ask, "what is the simplest thing that could possibly work?"
The simplest solution is often one that turns a blind eye to the future, and this is important in XP because XP discourages the integration of hooks for future expansion and development. Instead, on an XP project, the goal is to write a program that solves the client's immediate needs.
"Extreme Programming (XP) is a high-discipline, low-ceremony approach to development, meaning that it does not produce a lot of formal documentation or rely on a lot of formal reviews, but does insist that its practitioners be consistent in following the essential practices."
- Pavlik and Gold
"Simplicity is not easy," says Beck. "It is the hardest thing in the world to not look toward the things you'll need to implement tomorrow and next week and next month. But compulsively thinking ahead is listening to the fear of the exponential cost of change curve."
According to Beck, the "right" design is one that:
1. Runs all the tests.
2. Has no duplicated logic.
3. States every intention important to the programmers.
4. Has the fewest possible classes and methods.
Skeptics might view this focus on immediate needs - rather than on the "big picture" of the application over time - as a potentially costly approach: the code isn't necessarily extensible in ways that make it easily upgradable as needs evolve, which could mean having to start over at the beginning when new requirements take center stage.
However, for those working with XP, the benefits to the XP focus on simplicity outweigh the risks.
"I feel over-engineering for the future, in anticipation of features that are not yet needed, winds up costing more," says Kratz.
Show and tell
Key to accelerating the development cycle and time-to-market is XP's focus on frequent releases and ongoing integration. Small 'nuts-and-bolts' releases happen throughout the development. The customer can see the product, evaluate the current feature set, and consider new features he wants to incorporate or add.
The scope of the project evolves and changes with each micro-release.
In part, the theory is that the ultimate product reflects what he wants rather than what he "thought" he wanted at the outset - leading to happier customers and ensuring the coding throughout the project was appropriately targeted at the desired features.
The lack of a fully determined development specification - with features decided on an "as we go" basis - may sound shortsighted, and costly, to XP skeptics.
Developers who have worked on projects in which the customer wasn't sophisticated enough to know up front what he needed, what was possible, and what would work best, may find the process of shedding reams of documentation, outlines, feature specs, and flowcharts intimidating if not downright foolish. For these developers - bitten by the clueless client in the past - XP may seem to set up a never-ending - rather than faster - project.
But for those who have gone the XP route, such fears seem misplaced.
A traditional development cycle can be so long - and so bogged down in paperwork - that by the end of the project the scope of work is no longer fully adequate to meet current client or market needs.
As Kratz notes, "No matter if we release frequently or [in] longer cycles, customer feedback may launch us in a new direction. By releasing frequently, we have the opportunity to get this feedback sooner and incorporate it into the next release."
"The Application Server Market product space is constantly evolving," Kratz adds. "So we expect to be in a constant development cycle anyway. XP allows us to keep up with technology and measure our success against customers and the market."
Back-seat driver
One of the most talked-about hallmarks of XP is the practice of pair programming. Programmers team up and work together when writing all code. One partner sits at the computer and does the typing. The other member of the pair watches everything that is typed in - looking for problems, mistakes, and potential pitfalls, and helping to talk through the code as it is being written.
While it may sound costly to devote two programmers to the same task, the XP theory is that working in a pair produces cleaner and better code, thus reducing time spent later debugging and cleaning up.
Working with someone else can be a big adjustment since programmers traditionally tend to work solo - disappearing for hours or even days to bang out some code to solve a certain problem.
"There's an important watchdog role to play when you're not the coder in the pair, but it's not what people are accustomed to. The temptation for me [when working solo] is always there to disappear for a few days and come back with a subsystem coded, missing tests, etc. Pairing can help add discipline."
- Greg Pavlik
Working in a pair means having someone else constantly watching and evaluating the code. The pairs are also constantly in communication. Some XP advocates go so far as to say that pairs should be talking at least every 45 seconds or so.
Despite the adjustment, pair programming seems to be a hit among the teams that have used it.
Having worked in an XP programming pair, Gold prefers the duo arrangement to working alone.
"I like working in a pair whenever possible," he says, "since I find it focuses me better and allows someone to catch me before I go down a path that in hindsight was foolish."
Pavlik is more noncommittal about whether he prefers to code solo or in a pair, but he readily admits that pair programming produces "much better code and ensures better practices." The caveat for Pavlik is that pair programming is slower than working alone, so not feasible for all projects.
For both Pavlik and Gold, identifying the hardest part of adjusting to pair programming is easy - "Not writing code all the time," says Pavlik.
Gold concurs, noting that the hardest aspect of working in a pair is "staying engaged when not doing the typing."
This is not necessarily surprising, since for a programmer used to having free rein at the keyboard, sitting back and watching can make the fingers itch. Setting aside years of tackling problems in one's own way to watch, instead, as another coder works can be difficult.
But careful watching - in addition to solid communication with the person in the driver's seat - makes pair programming work.
On the flip side, the person driving the code has to also adjust the way he works in order to remain in communication with the watcher.
"An experienced pair programmer helps his partner by thinking out loud while typing and explaining what he is doing at every point," explains Gold.
Clearly, pair programming dramatically alters how a developer approaches writing code regardless of whether or not the coder is sitting in front of the keyboard or watching from a hands-off position.
Pairs are dynamic, too, changing frequently as members of the team group and re-group to work together on aspects of the project.
Every pair will be different. Some pairs consist of two strong coders. Sometimes a pair will consist of a more experienced coder and a newer team member. Pairs might even be made up of designers and architects. Nevertheless, XP theorists maintain there are strengths in all kinds of teams - even a team of two new programmers can produce stronger code than if they were working independently.
Speaking from a managerial perspective, Kratz says he has been surprised at how well pair programming has worked within the Middleware teams.
"It's probably not for everyone," he acknowledges, "but from my seat, things are thought out better, younger team members are becoming stronger quicker, and things are getting done. Hey, they may even be having fun!"
Goodbye paper trail
An interesting offshoot of XP is a reduction in documentation requirements. Team members are in constant communication with each other - and in frequent communication with the customer - so there is no reason to spend valuable time producing documentation when there is coding that can be done instead.
In XP, the focus is on producing code - not producing paper that talks about the code that will someday be produced. The time saved leaves coders fresh to tackle the project and works with the other XP practices to push the project towards early completion.
For those with either a packrat or a cover-our-tail mentality, putting aside the documentation process can feel both risky and maybe even leave the process feeling naked.
But XP veterans have found the reduced documentation to be a liberating aspect of the process.
"Too many design docs mean death for a project," says Pavlik. "When people talk about project failure and 'analysis paralysis,' they're talking about things like a giant static model, reams of documentation, and no working code or functionality."
With a smile, Gold agrees that it wasn't hard to ditch documentation.
"In my experience, most development projects don't actually produce a lot of usable documentation during development anyway," he notes. "Most design documentation built at the start of the process winds up obsolete by the end and is not maintained."
XP recognizes this and emphasizes writing code that is clear enough to serve as its own documentation. XP advocates add that the increased emphasis on person-to-person communication helps replace the need for a paper trail.
"The biggest hurdle is teaching developers to refactor working code," says Gold.
Refactor, refactor, refactor
A central XP practice is relentless refactoring. From the outside, the process of refactoring can seem a bit vague - coders review their code to ensure that they've taken the most straightforward, most streamlined, and simplest approach. All excess is trimmed or tweaked, keeping the code facile, lean, and clean.
In their own account of their experience using XP, Gold and Pavlik write:
"It is very common to develop software which works and fulfills all existing requirements - but which cannot be easily extended to handle new requirements. The most common reaction to encountering such a situation is to sigh, roll up one's sleeves, and rewrite the offending code, taking into consideration the new needs. But doing so discards all of the work that went into debugging the original code."
XP's emphasis on refactoring counters this approach, leaving code in such a streamlined state "that new features can be added easily without changing the behavior of the working code at all," explain Gold and Pavlik.
According to Pavlik, "Refactoring is really the core practice."
Explaining the important role refactoring plays, Gold says, "The essence is to recognize that there are a large number of well understood incremental transforms which you can make to software to improve its organization and design. By applying them, one after another, and verifying that you have not made a mistake by rerunning your unit tests, you can safely restructure your software to minimize coupling, increase cohesion, shrink your methods until they can be easily understood, and give your methods, classes, and variables names which clearly convey their meaning."
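Gold's description can be made concrete with a small sketch (hypothetical code, not the HPAS container): two incremental transforms - extract method and rename - are applied to working code, and the same unit test is rerun after each step to confirm that behavior is unchanged.

```java
// A minimal sketch of test-guarded refactoring (hypothetical example).
// Each transform is small, and the same assertions are rerun after every
// step to verify that the restructuring did not change behavior.
public class RefactorSketch {
    // Before: one method doing two jobs, with an unclear name.
    static int calc(int[] xs) {
        int t = 0;
        for (int x : xs) t += x;
        return t / xs.length;
    }

    // After two incremental transforms (extract method, then rename),
    // the intent is clearer but the behavior is identical.
    static int sum(int[] xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    static int average(int[] xs) {
        return sum(xs) / xs.length;
    }

    public static void main(String[] args) {
        int[] data = {2, 4, 6};
        // The "unit test", rerun after each transform: old and new code
        // must agree, and both must produce the known-good answer.
        if (calc(data) != average(data) || average(data) != 4)
            throw new AssertionError("refactoring changed behavior");
        System.out.println("ok");
    }
}
```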
"The most important aspect is to change the system without affecting the behavior," adds Pavlik. "If you are changing the behavior and the structure of system elements, you are not refactoring."
Producing leaner, more malleable code, refactoring addresses what otherwise might prove a shortcoming in the XP approach - XP's refusal to code for the future. By insisting the final code is solid, pared down so that only the mandatory essence remains, XP leaves code in a state where future work can begin with minimal rebuilding.
Indeed, refactoring "is the mechanism by which you are able to affect the cost of change curve," says Pavlik.
(For more information on refactoring, both Gold and Pavlik encourage looking at Refactoring by Martin Fowler. "It is one of those books which every serious developer should own," says Gold.)
"Testing, testing"
Another important XP practice is the use of "tests" as a QA tool. This may not sound unusual. But in XP, the tests are written prior to the commencement of writing the code. The tests are then deployed throughout to ensure quality control and to highlight bugs that might crop up.
While at first glance it seems convoluted to create tests before what's being built has been concretely defined (keeping in mind that XP takes an "add as you go" approach to determining scope), extreme coders find the process of creating the tests clarifying.
According to Pavlik and Gold, "testing was one of our clear wins on this project. We got into the habit of writing unit tests for the software, using a combination of ant, JUnit, and HttpUnit to run the tests in suites. We adopted a rule that nobody could commit changes to the baseline unless all tests passed. As a result, the EJB container code was almost never broken by something one of the EJB developers did, and we had a high degree of confidence that something that worked at one time continued to work through the 8.0 release."
"Creating the tests first helps you understand what you are attempting to build," Kratz adds.
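The test-first habit looks roughly like this in miniature (a hypothetical example; plain asserts stand in for the JUnit the team actually used): the test exists before the implementation and pins down the expected behavior, and the simplest code that passes it is written second.

```java
// Test-first in miniature (hypothetical example; plain asserts stand in
// for JUnit). The test is written before the code it exercises.
public class TestFirstSketch {
    // Step 1: the test comes first and defines the expected behavior.
    static void testParsePort() {
        if (parsePort("host:8080") != 8080) throw new AssertionError();
        if (parsePort("host") != 80) throw new AssertionError(); // default port
    }

    // Step 2: the simplest implementation that makes the test pass.
    static int parsePort(String hostSpec) {
        int colon = hostSpec.indexOf(':');
        return colon < 0 ? 80 : Integer.parseInt(hostSpec.substring(colon + 1));
    }

    public static void main(String[] args) {
        testParsePort(); // rerun before every commit; commits blocked on failure
        System.out.println("all tests pass");
    }
}
```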
A winning combination
For HP's Middleware Division, XP's 12 practices have already added up to measurable success.
However, while XP's tenets have held up well so far, Kratz remains realistic, admitting that XP isn't always the right approach.
"Smart teams will know when to follow the XP approach and when to resort to traditional development methodologies. My thought is that 'one size does not fit all.'"
Gold, too, remains pragmatic.
"XP is not magic. It is a set of 12 well-defined practices which have been found to work together well."
And the second article....
Experiences with Agile Programming Models on HP-AS 8.0
Background and Definitions
Among the constant challenges in the development of software is the problem of how to handle change, especially in system requirements. Most traditional models of development have noted that it is much cheaper to fix a problem during requirements or design than to correct it once the code has already been written. As a result, these models have tended to focus on trying to be as thorough as possible in making sure that the requirements are complete and correct, and that the design addresses all major considerations which might arise before moving on to implementation.
Unfortunately, in most systems of interest to J2EE developers the problems simply cannot be known completely until the system is built and tried. And of course, once this has happened, changing the system is difficult, costly, and error-prone.
An alternative approach for dealing with change is to build a system around what is known for certain initially, and to make the system adaptable throughout the project lifecycle using techniques that reduce the costs and risks associated with modifying working code. The use of such techniques is known as Agile Programming.
What is Agile Programming?
In the last three years or so there has been a lot of discussion among software vendors about agile software development processes. In particular, there is a lot of excitement associated with the practices that Kent Beck popularized in his book Extreme Programming Explained: Embrace Change. Other "agile" processes that are gaining adherents in the developer community include systems of practices described in SCRUM, Feature Driven Software Development, and Pragmatic Programming. Each of these systems shares some common themes: close collaboration between developers and customers, recognition that changing requirements are a reality of software, and that development practices must play to the strengths of how developers work best.
We chose to use practices from Extreme Programming, probably the best known of the Agile Programming processes, to develop HP Application Server EJB container.
What is Extreme Programming?
Extreme Programming (XP) is a high-discipline, low-ceremony approach to development, meaning that it does not produce a lot of formal documentation or rely on a lot of formal reviews, but does insist that its practitioners be consistent in following the essential practices, including:
Onsite Customer: The customer makes all functionality decisions and is always available.
The Planning Game: Customers get detailed continuous control over what work is done.
System Metaphor: The overall design is described in terms of shared concepts.
Simple Design: The design addresses current needs, not possible future additions.
Collective Ownership: All code is the responsibility of multiple developers.
Forty-hour Week: Developers may not work overtime two weeks in a row.
Test-Driven Design: Functionality is validated via automated tests written before coding.
Pair Programming: Code is written in pairs, thus giving continuous reviews.
Continuous Integration: Code is pushed to the baseline on a daily basis.
Refactor Mercilessly: Poorly structured code is cleaned up aggressively.
Small Releases: Customers are given updated working code to use frequently.
Coding Conventions: The developers agree on a common coding style and follow it.
These practices have been shown to work together to produce consistent, reliable and flexible software without running a development team into the ground. Of course, XP is deeper than just a set of practices. It declares that programming should be a humane discipline that doesn't drive talent out of the trade.
And of course, as part of its "embrace change" attitude, XP insists that each team constantly analyze those practices it is following and change them when appropriate. It is more important to emphasize the values of listening, testing, coding, and designing than to adhere to someone else's checklist or rules.
EJB Development
Both of us worked on the EJB container in the latest release of HP Application Server. The project was a major undertaking. Within a six month time frame, the team had the responsibility of rearchitecting and rewriting most of the EJB-related code base. We needed to fit the container into a service framework on which the application server was being built, interact with other newly defined services, support a number of new features like hot redeployment and pluggable transport protocols (ensuring that all J2EE Compatibility Test Suite tests ran at 100%), and bring the container in line with the EJB 2.0 specification, public final draft two.
While we had experimented with XP techniques like automated unit testing in the past, we wanted to learn more about XP and what it had to offer through experience. Another great benefit was that Russ had joined the team at this point, and brought with him experience with XP. We'll try to summarize some of what we did, some of what we didn't do, and look at what the implications were for our project.
XP Programming Techniques We Used
Frequent Testing
Testing was one of our clear wins on this project. We got into the habit of writing unit tests for the software, using a combination of ant, JUnit, and HttpUnit to run the tests in suites. We adopted a rule that nobody could commit changes to the baseline unless all tests passed. As a result, the EJB container code was almost never broken by something one of the EJB developers did, and we had a high degree of confidence that something that worked at one time continued to work through the 8.0 release.
One of the most useful rules the team developed was that bug reports should be accompanied by automated JUnit tests. The test code showed the problem in a repeatable way and the tests were added into the developer suite to ensure that regression didn't occur.
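A bug-report test of the kind described above might look like the following minimal sketch. The class, method, and bug are invented for illustration (the real suites used JUnit's TestCase; a plain main() is used here to keep the sketch self-contained):

```java
// Hypothetical regression test illustrating the "bug report comes with a test"
// rule. All names here are invented for the sketch; in practice this would be
// a JUnit TestCase added to the developer suite.
public class JndiNameParserRegressionTest {

    // The (hypothetical) code under test: an earlier version failed to trim
    // whitespace from names read out of a deployment descriptor.
    static String normalizeJndiName(String raw) {
        return raw == null ? null : raw.trim();
    }

    public static void main(String[] args) {
        // Reproduction from the bug report: a descriptor value padded
        // with whitespace by an XML formatter.
        String fromDescriptor = "  ejb/AccountHome \n";
        String normalized = normalizeJndiName(fromDescriptor);
        if (!"ejb/AccountHome".equals(normalized)) {
            throw new AssertionError("regression: got '" + normalized + "'");
        }
        System.out.println("PASS");
    }
}
```

Once checked in, the test runs with every build, so the original failure can never silently reappear.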
Continuous Integration
We encouraged our developers to check in their code every day, while keeping the baseline working (all regression tests running at 100%). This was tricky since we automatically absorbed all changes from other groups working on other portions of the application server every day as well. At times the upstream tests would miss something that would affect us further downstream. Since we couldn't always wait for the upstream changes to be fixed, this caused us to stumble a few times when we checked in code that broke the same tests. But for the most part, we were able to keep the baseline clean throughout the cycle, and this really helped us maintain the quality of the software and protect against regression even during aggressive overhauls of the container.
Another important benefit we saw from frequent integration was that it helped everyone understand how both refactoring and new development were affecting the evolution of the system. Since change was incremental, it was comprehensible. If we had all waited until some integration phase to bring pieces together within the system, everything that a developer wasn't actively working on would have become like a black box.
Refactor Mercilessly
It is very common to develop software which works and fulfills all existing requirements - but which cannot be easily extended to handle new requirements. The most common reaction to encountering such a situation is to sigh, roll up one's sleeves, and rewrite the offending code, taking into consideration the new needs. But doing so discards all of the work that went into debugging the original code; it's also time consuming, and often leads to code no better structured than the version which was replaced. Refactoring addresses this, providing a way to restructure the original code so that new features can be added easily without changing the behavior of the working code at all. This is one of the most important practices of XP; in some ways, it is the essence of XP. There are two things about refactoring that are critical to understand. First of all, it doesn't work without tests, period. If you don't have the tests, you will almost certainly break the software. Secondly, refactoring can be divided into small and well understood steps, catalogued in such places as Martin Fowler's Refactoring: Improving the Design of Existing Code and on his website, http://www.refactoring.com. It is also greatly simplified by support in automated tools -- something that's now creeping into Java IDEs.
There were a number of times during the development of the EJB container when we started with code left over from the Total-e-Server implementation, and refactored it to allow the addition of new features. We didn't do a lot of it, but what we did was invaluable. For example, we were able to refactor a legacy class from over 2500 lines to 300 lines, gradually, over the course of the release. This made it possible for us to more readily support hot redeployment of EJBs within the container.
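The kind of incremental restructuring described above can be sketched with a single "extract method" step. The class and method names below are invented; the point is that behavior is preserved while an inline condition becomes a small, named, independently testable step:

```java
// Illustrative "extract method" refactoring of the kind used to shrink the
// legacy container class. Names are invented for this sketch.
public class PassivationManager {

    // After refactoring: the original monolithic passivate() had the
    // eligibility check inlined; extracting it leaves behavior identical
    // but makes the policy readable and testable on its own.
    public int passivate(long[] lastAccessMillis, long now, long timeoutMillis) {
        int passivated = 0;
        for (long lastAccess : lastAccessMillis) {
            if (isEligible(lastAccess, now, timeoutMillis)) {
                passivated++; // in a real container: write the bean to backing store
            }
        }
        return passivated;
    }

    // Extracted helper: one small, named step instead of an inline condition.
    static boolean isEligible(long lastAccess, long now, long timeout) {
        return now - lastAccess >= timeout;
    }
}
```

Applied a few hundred times over a release, and verified by rerunning the unit tests after each step, transforms like this are how a 2500-line class shrinks to 300 lines without ever breaking the build.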
Simple Design
Trusting in our ability to refactor as needed, we tried hard to avoid common expensive practices such as "adding hooks" for future growth. This allowed us to avoid unnecessary code, and we did not see any place where it hurt us. It also made the code much easier to understand.
User Stories
XP also promotes the creation of user stories, which are lightweight use cases for pieces of functionality. User stories were something we used consistently, but also something we struggled with doing correctly. The user stories we developed were very much like functional requirements for the system. As a result many of the stories were driven by developers rather than the customer proxy. We did, however, use these as a low-investment strategy for planning. Our group manager played the role of a tracker and tried to make sure the stories within a cycle were consistent with the velocity measurements he had taken for past cycles. This was a great benefit for tracking and planning with agility.
XP Programming Techniques We Didn't Use Consistently
Collective Ownership
With XP, assignments are distributed so that developers don't specialize. In practice, we assigned developers to specific functional areas and for the most part did not overlap, fearing that without specialization we would not be able to develop the necessary expertise in each area. The downside, of course, is that there are many parts of the code known by only one developer and never reviewed or even looked at by the others. It is too soon to tell how this will affect future maintenance.
Pair Programming
Pair programming is a concept that many people find hard to digest. For one thing, it's simply very different from what we're used to. It also seems counter-intuitive that a process designed around rapid software deliveries would tie two people to one computer. It turns out we were wrong.
We found that when we paired on user stories, different areas of experience of different programmers really combined to give the pair a deeper understanding of the problem space. As a general rule, pairing led to better code. We extended this outside of our project and attempted to always pair when we worked with other teams interfacing to our container. This helped both sides to get a better understanding of the critical glue code between subsystems.
Another critical point about pairing: it encourages discipline. If one person is behind and tempted to cheat on test coverage, for example, a partner keeps him honest; working alone, an individual can get away with it. Another benefit of pairing is that it tends to keep people focused. You can't waste time on nonproductive web surfing when someone is working with you.
When push came to shove, we needed to deliver and we fell back on the more conventional techniques that we knew. We had to rely on code reviews that were often stressful and grinding to audit each other, and there were times when class design suffered. Pair programming may seem unnatural, but it has some real benefits that make it worth considering as a basic practice for your team.
Test-Driven Design
For the most part, the team didn't do "test first" programming. By the time we started seriously writing automated tests, most of the code was already in place, and needed updating rather than writing from scratch. In addition, despite reading the XP literature, we did not realize just how powerful this technique can be. Some of us did have the opportunity to write functionality that was brand-new for EJB 2.0, and we did use test-first in those parts. The result seemed to be fewer false starts and simpler designs than in areas built without such tests.
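A test-first sketch of the shape we used for the brand-new EJB 2.0 functionality might look like the following. The dispatcher and its test are invented for illustration; only the naming convention it checks (EJB 2.0 home business methods are implemented as ejbHome&lt;METHOD&gt;) comes from the specification:

```java
// Test-first sketch: the test below is written before the class it exercises;
// the implementation is then the minimum needed to make it pass.
// All names are invented for illustration.
public class HomeMethodDispatcherTest {
    public static void main(String[] args) {
        // Step 1: state the desired behavior as a test.
        HomeMethodDispatcher d = new HomeMethodDispatcher();
        if (!d.isHomeMethod("ejbHomeGetTotal")) throw new AssertionError();
        if (d.isHomeMethod("ejbCreate")) throw new AssertionError();
        System.out.println("PASS");
    }
}

// Step 2: the simplest implementation that satisfies the test.
// EJB 2.0 home business methods follow the ejbHome<METHOD> naming pattern.
class HomeMethodDispatcher {
    boolean isHomeMethod(String beanMethodName) {
        return beanMethodName.startsWith("ejbHome");
    }
}
```

Because the test exists first, the "false starts" mentioned above surface immediately as failing assertions rather than as design dead ends discovered weeks later.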
Architecture vs. Metaphor
XP puts forward the notion of a metaphor for a system as a substitute for architecture. We didn't attempt this: we spent time working out a macro-architecture, talking it through face-to-face, and adjusting it as necessary as we progressed. The metaphor idea didn't seem to make sense for something like a framework and the approach we took seemed to work well. So this was an aspect of XP that we didn't utilize, but also didn't feel that we were missing anything.
Limits of Tests
It also became quickly clear that there are limits to what the unit tests provide. Tests aren't a substitute for really digging into the system. Constant and liberal changes can result in subtle changes of untested assumptions. It's easy to say that the problem is there aren't enough tests which, while true, isn't an answer. After all, there will never be enough tests to check every assumption about the system. Software is complicated, and middleware particularly so: it combines the challenges of concurrent, distributed systems programming with a need for an extensive understanding of low-level concepts in security, distributed transaction processing, high availability system engineering, etc.
Performance
One of the questions we had was how XP would affect performance. Refactoring typically leads to more indirection and we specifically avoided writing "optimized" code. Instead, we waited until after all features were complete to systematically profile the system under load with real end-user applications. This paid off hugely: the container was considerably faster than previous iterations and the unit tests helped to ensure that the optimizations we introduced did not break the system.
The most important bottleneck in middleware for distributed systems is resource contention, and we were able to isolate and eliminate the contention issues we identified. The result was not only a faster system, but also a more scalable system. We had great success with this approach, but would hesitate to recommend it without unit tests in place.
Conclusion
Over the course of the release, there were times when even we doubted our ability to deliver on all of the requirements in the time required. It's our strong conclusion that the XP techniques we used were fundamental to pulling the software together into a well-received, sophisticated application server. We found that the basic practices combine and play off each other to help make better software and that we missed out on some key benefits from practices we weren't able to implement. We've worked on projects where the code regresses over time and we're happy to report that the code base for the EJB container took a different path: it got better and better as the release progressed. There are unique challenges to every project: some teams are very large, some are very small, some distributed geographically.
We would encourage you to experiment with XP and other Agile Programming techniques and adapt them to your environment. They pay off in a big way: well-written software that meets real requirements in short time frames.
More information regarding Extreme Programming is available at:
http://www.extremeprogramming.com;
and for more information regarding Agile Programming, check out:
http://www.agilealliance.org.
ABOUT THE AUTHORS:
This article was contributed by Russell Gold and Greg Pavlik, architects with Hewlett-Packard's Middleware Division. Russell has been instrumental in applying the eXtreme programming methodologies to the Middleware development process. Russell is also the original author and maintainer of HttpUnit, an open source library for automating tests of web sites. Greg was the lead architect on the HP Application Server's EJB 2.0 and 1.1 implementations and has been a member of the EJB expert group.
[edit: after the initial post, I found an article Russ and I had written for the HP middleware newsletter. That article is included at the end of the post as well.]
XP's 12-practice methodology may play an important role for mobile developers struggling to maintain footing in a landscape evolving at warp speed. It's already working for HP Middleware teams.
By Amy Cowen
Agile or extreme?
While "extreme programming" remains popular in today's coding circles, a newer catchall phrase frequently appears on the scene - agile programming. In documenting their foray into XP, Gold and Pavlik use agile programming as an umbrella of sorts for a group of lightweight software development methodologies which includes XP.
"There was the recognition that there are a number of other methodologies which share XP's general approach of adapt rather than anticipate," explains Gold, addressing a growing shift from "extreme" to "agile" programming.
"Then too, at least part of the shift was probably Microsoft's fault," Gold continues. "By adopting the term XP for their new OS, they muddied the mindscape somewhat. Of course, now that Microsoft has adopted the slogan regarding agile businesses..."
Others attribute the shift from "extreme" to "agile" to image: "agile" conjures up speed and flexibility - but not risk - a change which could prove less intimidating for managers deciding whether or not to give an alternative lightweight methodology a spin.
Extreme Programming... Just the phrase raises images associated with sports from the X Games, sports that hover on the edge of mainstream legitimacy - dangerous, radical, and rule-breaking. Indeed, XP's name imbues it with a heady aura of defiance, post-modernity, and the spirit of daredevils and renegades.
Reviewing the tenets of XP, it becomes clear that XP is extreme in its level of difference from other traditional software methodologies. The irony of XP, however, is that it breaks the rules first and foremost by initiating a new set of rules.
And playing by the rules is key in XP.
This is not free-for-all programming where anything goes, where radical features are dreamed up and coded by pizza chugging hacksters who thought of a cool tweak while skateboarding through the office cubicles on a midnight break.
XP does encourage turning things up a notch. But it does so by offering a set of rules designed to ensure successful communication between team members, encourage a three musketeers mentality, rev up the production process, produce clearer and more straightforward and stable code, streamline costs, and, ultimately, enable teams to deliver solid products to happy clients.
For developers working in the ever-changing mobile application space, an "extreme" approach that churns out products at light speed compared to traditional development cycles might give developers an edge and put apps on the market in time to meet consumer demand - in time to nourish a growing user base and win critical marketshare.
XP on the scene
XP was developed by Kent Beck in the mid-1990s and first documented in his book Extreme Programming Explained. Under Beck's guidance the first XP implementation happened on the now legendary - in XP circles - Chrysler Comprehensive Compensation - or C3 - development team.
Since its automotive beginnings, XP has infiltrated the ranks in all venues of software development, with success stories cropping up at companies large and small.
Foundations to build on
Beck bases the XP methodology on four core values - communication, simplicity, feedback, and courage. These values underwrite the 12 guiding XP practices (see sidebar below) and provide touchstones visible throughout the development cycle.
Traditional development often involves teams of coders holed up in individual cubbies, each working on pieces of a larger project. When finished, the pieces are assembled, and beta testing begins to see what works and what doesn't.
Assuming each coder has her own way of writing the code, her own way of approaching problems, and minimal communication between team members, it is easy to see how problems are likely to appear when the work of such individual contributors gets glued together.
There's not much holding the "whole" together.
For an XP team, however, the "whole" is being considered throughout.
The 12 XP Practices: The Planning Process, Small Releases, Metaphor, Simple Design, Testing, Refactoring, Pair Programming, Collective Ownership, Continuous Integration, 40-Hour Week, On-Site Customer, Coding Standard
From collective ownership of the code to the emphasis on the simplest design and a commitment to refactoring, the 12 practices that form the foundation of the XP methodology formalize the development process, bringing high levels of discipline, communication, and team spirit to an arena typically ruled by solo code jockeys.
HP goes XP
For Bruce Kratz, HP Middleware Lab Director, XP has found firm footing with his teams. Kratz chalks the HP Middleware team's adoption of XP methodologies up to rapidly changing needs.
"In our industry, requirements change every 90 - 120 days," explains Kratz. "XP allows us to release product in short intervals with a high level of confidence that it is a quality product."
The Middleware Division's first XP project was conducted by the HP Application Server (HPAS) team when working on Version 8 of the server. Following the successful use of XP for HPAS, both the Rich Media Technology team and the Mobile Infrastructure team are working with XP methodologies, says Kratz.
Greg Pavlik and Russell Gold, architects in the Middleware Division, worked on rebuilding EJB-related code for HP Application Server 8 and have documented their team's use of XP. While they didn't employ all 12 of XP's practices on the project, both found working with XP a positive and successful experience.
According to Pavlik, XP was the right approach for the project because "we needed a process that was highly resilient to change and evolution in the code base."
"[HP has] been in the application server business" since its acquisition of Bluestone, continues Pavlik. "But the code base today is completely different than what we started with, so the rate of change is phenomenal. Some of the software has been rewritten from scratch, but the EJB 1.1 server had been battle-hardened in some of the container logic. We wanted to preserve the foundations that we knew were solid, but didn't want them to become an albatross either. You have to have a process that supports constant and dynamic refactoring to move forward with new features and architecture changes without turning a project into spaghetti code."
The realities of ongoing and near-constant change are ones developers constantly battle.
XP helps offset the problems - and costs - associated with such change, making it a solid strategy for the HPAS team.
"You'll notice that the first XP book by Kent Beck was subtitled Embrace Change, and in one sense that's what XP is really about - flattening the cost of change away from an exponential growth curve," continues Pavlik.
Simple is as simple does
Simplicity may be a surprising tenet to see in XP discussions, but on an XP project, "simple" is the result of hard, calculated, and relentless work. Striving towards the simplest design, Beck says the XP coach has to ask, "what is the simplest thing that could possibly work?"
The simplest solution is often one that turns a blind eye to the future, and this is important in XP because XP discourages the integration of hooks for future expansion and development. Instead, on an XP project, the goal is to write a program that solves the client's immediate needs.
"Extreme Programming (XP) is a high-discipline, low-ceremony approach to development, meaning that it does not produce a lot of formal documentation or rely on a lot of formal reviews, but does insist that its practitioners be consistent in following the essential practices."
- Pavlik and Gold
"Simplicity is not easy," says Beck. "It is the hardest thing in the world to not look toward the things you'll need to implement tomorrow and next week and next month. But compulsively thinking ahead is listening to the fear of the exponential cost of change curve."
According to Beck, the "right" design is one that:
1. Runs all the tests.
2. Has no duplicated logic.
3. States every intention important to the programmers.
4. Has the fewest possible classes and methods.
Skeptics might view this focus on immediate needs - rather than on the "big picture" of the application over time - as a potentially costly approach, since the code isn't necessarily extensible in ways that make it easily upgradable as needs evolve - a reality which could mean having to start over when new requirements take center stage.
However, for those working with XP, the benefits of XP's focus on simplicity outweigh the risks.
"I feel over-engineering for the future, in anticipation of features that are not yet needed, winds up costing more," says Kratz.
Show and tell
Key to accelerating the development cycle and time-to-market is XP's focus on frequent releases and ongoing integration. Small 'nuts-and-bolts' releases happen throughout the development. The customer can see the product, evaluate the current feature set, and consider new features he wants to incorporate or add.
The scope of the project evolves and changes with each micro-release.
In part, the theory is that the final product reflects what the customer wants rather than what he "thought" he wanted at the outset - leading to happier customers and ensuring the coding throughout the project was appropriately targeted at the desired features.
The lack of a fully-determined development specification - and the determination of features on an "as we go" basis - may sound shortsighted, and costly, to XP skeptics.
Developers who have worked on projects in which the customer wasn't sophisticated enough to know up front what he needed, what was possible, and what would work best, may find the process of shedding reams of documentation, outlines, feature specs, and flowcharts intimidating if not downright foolish. For these developers - bitten by the clueless client in the past - XP may seem to set up a never-ending - rather than faster - project.
But for those who have gone the XP route, such fears seem misplaced.
A traditional development cycle can be so long - and so bogged down in paperwork - that by the end of the project the scope of work is no longer fully adequate to meet current client or market needs.
As Kratz notes, "No matter if we release frequently or [in] longer cycles, customer feedback may launch us in a new direction. By releasing frequently, we have the opportunity to get this feedback sooner and incorporate it into the next release."
"The Application Server Market product space is constantly evolving," Kratz adds. "So we expect to be in a constant development cycle anyway. XP allows us to keep up with technology and measure our success against customers and the market."
Back-seat driver
One of the most talked-about hallmarks of XP is the practice of pair programming. Programmers team up and work together when writing all code. One partner sits at the computer and does the typing. The other member of the pair watches everything that is typed in - looking for problems, mistakes, and potential pitfalls, and helping to talk through the code as it is being written.
While it may sound costly to devote two programmers to the same task, the XP theory is that working in a pair produces cleaner and better code, thus reducing time spent later debugging and cleaning up.
Working with someone else can be a big adjustment since programmers traditionally tend to work solo - disappearing for hours or even days to bang out some code to solve a certain problem.
"There's an important watchdog role to play when you're not the coder in the pair, but it's not what people are accustomed to. The temptation for me [when working solo] is always there to disappear for a few days and come back with a subsystem coded, missing tests, etc. Pairing can help add discipline."
- Greg Pavlik
Working in a pair means having someone else constantly watching and evaluating the code. The pairs are also constantly in communication. Some XP advocates go so far as to say that pairs should be talking at least every 45 seconds or so.
Despite the adjustment, pair programming seems to be a hit among the teams that have used it.
Having worked in an XP programming pair, Gold prefers the duo arrangement to working alone.
"I like working in a pair whenever possible," he says, "since I find it focuses me better and allows someone to catch me before I go down a path that in hindsight was foolish."
Pavlik is more noncommittal about whether he prefers to code solo or in a pair, but he readily admits that pair programming produces "much better code and ensures better practices." The caveat for Pavlik is that pair programming is slower than working alone, so not feasible for all projects.
For both Pavlik and Gold, identifying the hardest part of adjusting to pair programming is easy - "Not writing code all the time," says Pavlik.
Gold concurs, noting that the hardest aspect of working in a pair is "staying engaged when not doing the typing."
This is not necessarily surprising, since for a programmer used to having free rein at the keyboard, sitting back and watching can make the fingers itch. Harnessing years of being able to tackle problems in one's own way and watching, instead, as another coder works can be difficult.
But careful watching - in addition to solid communication with the person in the driver's seat - makes pair programming work.
On the flip side, the person driving the code has to also adjust the way he works in order to remain in communication with the watcher.
"An experienced pair programmer helps his partner by thinking out loud while typing and explaining what he is doing at every point," explains Russell.
Clearly, pair programming dramatically alters how a developer approaches writing code, whether the coder is sitting in front of the keyboard or watching from a hands-off position.
Pairs are dynamic, too, changing frequently as members of the team group and re-group to work together on aspects of the project.
Every pair will be different. Some pairs consist of two strong coders. Sometimes a pair will consist of a more experienced coder and a newer team member. Pairs might even be made up of designers and architects. Nevertheless, XP theorists maintain there are strengths in all kinds of teams - even a team of two new programmers can produce stronger code than if they were working independently.
Speaking from a managerial perspective, Kratz says he has been surprised at how well pair programming has worked within the Middleware teams.
"It's probably not for everyone," he acknowledges, "but from my seat, things are thought out better, younger team members are becoming stronger quicker, and things are getting done. Hey, they may even be having fun!"
Goodbye paper trail
An interesting offshoot of XP is a reduction in documentation requirements. Team members are in constant communication with each other - and in frequent communication with the customer - so there is no reason to spend valuable time producing documentation when there is coding that can be done instead.
In XP, the focus is on producing code - not producing paper that talks about the code that will someday be produced. The time saved leaves coders fresh to tackle the project and works with the other XP practices to push the project towards early completion.
For those with either a packrat or a cover-our-tail mentality, putting aside the documentation process can feel risky and may even leave the process feeling naked.
But XP veterans have found the reduced documentation to be a liberating aspect of the process.
"Too many design docs mean death for a project," says Pavlik. "When people talk about project failure and 'analysis paralysis,' they're talking about things like a giant static model, reams of documentation, and no working code or functionality."
With a smile, Russell agrees that it wasn't hard to ditch documentation.
"In my experience, most development projects don't actually produce a lot of usable documentation during development anyway," he notes. "Most design documentation built at the start of the process winds up obsolete by the end and is not maintained."
XP recognizes this and emphasizes writing code that is clear enough to serve as its own documentation. XP advocates add that the increased emphasis on person-to-person communication helps replace the need for a paper trail.
"The biggest hurdle is teaching developers to refactor working code," says Russell.
Refactor, refactor, refactor
A central XP practice is relentless refactoring. From the outside, the process of refactoring can seem a bit vague - coders review their code to ensure that they've taken the most straightforward, most streamlined, and simplest approach. All excess is trimmed or tweaked, keeping the code facile, lean, and clean.
In their own account of their experience using XP, Gold and Pavlik write:
"It is very common to develop software which works and fulfills all existing requirements - but which cannot be easily extended to handle new requirements. The most common reaction to encountering such a situation is to sigh, roll up one's sleeves, and rewrite the offending code, taking into consideration the new needs. But doing so discards all of the work that went into debugging the original code."
XP's emphasis on refactoring counters this approach, leaving code in such a streamlined state "that new features can be added easily without changing the behavior of the working code at all," explain Gold and Pavlik.
According to Pavlik, "Refactoring is really the core practice."
Explaining the important role refactoring plays, Gold says, "The essence is to recognize that there are a large number of well understood incremental transforms which you can make to software to improve its organization and design. By applying them, one after another, and verifying that you have not made a mistake by rerunning your unit tests, you can safely restructure your software to minimize coupling, increase cohesion, shrink your methods until they can be easily understood, and give your methods, classes, and variables names which clearly convey their meaning."
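Gold's "apply a transform, rerun the tests" loop can be sketched by checking an old and a refactored implementation against the same assertions. The example below (all names invented) applies one catalog transform, consolidating a duplicated conditional, and verifies that behavior is preserved:

```java
// Sketch of incremental refactoring verified by tests: the "before" and
// "after" versions of a method are run against identical cases to confirm
// the transform changed structure, not behavior. Names are invented.
public class TransformCheck {

    // Before: duplicated logic spread across two branches.
    static boolean oldIsLocalCall(String protocol, String host) {
        if (protocol.equals("rmi") && host.equals("localhost")) return true;
        if (protocol.equals("iiop") && host.equals("localhost")) return true;
        return false;
    }

    // After "consolidate conditional expression": same behavior, no duplication.
    static boolean newIsLocalCall(String protocol, String host) {
        boolean knownProtocol = protocol.equals("rmi") || protocol.equals("iiop");
        return knownProtocol && host.equals("localhost");
    }

    public static void main(String[] args) {
        String[][] cases = {
            {"rmi", "localhost"}, {"iiop", "remote"}, {"http", "localhost"}
        };
        for (String[] c : cases) {
            if (oldIsLocalCall(c[0], c[1]) != newIsLocalCall(c[0], c[1])) {
                throw new AssertionError("behavior changed for " + c[0] + "://" + c[1]);
            }
        }
        System.out.println("PASS");
    }
}
```

In practice the existing unit suite plays the role of the main() method here: it is rerun after every small transform, so a mistake is caught the moment it is made.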
Boot camp for extreme developers
Join other extreme coders in the mobile space in Helsinki, Finland, April 9-11 for 60 hours of non-stop coding and development. As you squeeze nearly 2 weeks of work into a few days, you can enable your mobile e-service app with hp mobile technologies.
Sleeping bags, saunas, and plenty of food will be available. So bring your app, your imagination and mobile services vision, and get extreme!
Visit the HP Bazaar Site for more information or to register.
"The most important aspect is to change the system without affecting the behavior," adds Pavlik. "If you are changing the behavior and the structure of system elements, you are not refactoring."
Producing leaner, more malleable code, refactoring addresses what otherwise might prove a shortcoming in the XP approach - XP's refusal to code for the future. By insisting the final code is solid, pared down so that only the mandatory essence remains, XP leaves code in a state where future work can begin with minimal rebuilding.
Indeed, refactoring "is the mechanism by which you are able to affect the cost of change curve," says Pavlik.
(For more information on refactoring, both Gold and Pavlik encourage looking at Refactoring by Martin Fowler. "It is one of those books which every serious developer should own," says Gold.)
"Testing, testing"
Another important XP practice is the use of "tests" as a QA tool. This may not sound unusual. But in XP, the tests are written prior to the commencement of writing the code. The tests are then deployed throughout to ensure quality control and to highlight bugs that might crop up.
While at first glance it seems convoluted to create tests before what's being built has been concretely defined - keeping in mind that XP takes an "add as you go" approach to determining scope - extreme coders find the process of creating the tests clarifying.
According to Pavlik and Gold, "testing was one of our clear wins on this project. We got into the habit of writing unit tests for the software, using a combination of ant, JUnit, and HttpUnit to run the tests in suites. We adopted a rule that nobody could commit changes to the baseline unless all tests passed. As a result, the EJB container code was almost never broken by something one of the EJB developers did, and we had a high degree of confidence that something that worked at one time continued to work through the 8.0 release."
"Creating the tests first helps you understand what you are attempting to build," Kratz adds.
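As a concrete illustration of the test-first cycle, the sketch below writes a failing test before the class it exercises exists. The `Money` class and its methods are hypothetical names for illustration, not HP-AS container code, and a plain `main`-method harness stands in for JUnit so the example is self-contained.

```java
// A minimal test-first sketch: the test is written before Money exists,
// then just enough code is written to make it pass.
// Money is a hypothetical example class, not HP-AS container code.
public class MoneyTest {
    // Step 1: the test pins down the desired behavior up front.
    static void testAddKeepsCurrencyAndSumsAmount() {
        Money a = new Money(10, "USD");
        Money b = new Money(32, "USD");
        Money sum = a.add(b);
        if (sum.amount() != 42 || !sum.currency().equals("USD"))
            throw new AssertionError("add() does not behave as specified");
    }

    public static void main(String[] args) {
        testAddKeepsCurrencyAndSumsAmount();
        System.out.println("all tests pass");
    }
}

// Step 2: the simplest implementation that satisfies the test.
class Money {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money add(Money other) {
        return new Money(amount + other.amount, currency);
    }

    int amount() { return amount; }
    String currency() { return currency; }
}
```

In practice a team would use JUnit's assertion and suite machinery instead of a hand-rolled `main`, but the rhythm is the same: red, green, then refactor.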
A winning combination
For HP's Middleware Division, XP's 12 practices have already added up to measurable success.
However, while XP's tenets have held up well so far, Kratz remains realistic, admitting that XP isn't always the right approach.
"Smart teams will know when to follow the XP approach and when to resort to traditional development methodologies. My thought is that 'one size does not fit all.'"
Gold, too, remains pragmatic.
"XP is not magic. It is a set of 12 well-defined practices which have been found to work together well."
And the second article....
Experiences with Agile Programming Models on HP-AS 8.0
Background and Definitions
Among the constant challenges in the development of software is the problem of how to handle change, especially in system requirements. Most traditional models of development have noted that it is much cheaper to fix a problem during requirements or design than to correct it once the code has already been written. As a result, these models have tended to focus on trying to be as thorough as possible in making sure that the requirements are complete and correct, and that the design addresses all major considerations which might arise before moving on to implementation.
Unfortunately, in most systems of interest to J2EE developers the problems simply cannot be known completely until the system is built and tried. And of course, once this has happened, changing the system is difficult, costly, and error-prone.
An alternative approach for dealing with change is to build a system around what is known for certain initially, and to make the system adaptable throughout the project lifecycle using techniques that reduce the costs and risks associated with modifying working code. The use of such techniques is known as Agile Programming.
What is Agile Programming?
In the last three years or so there has been a lot of discussion among software vendors about agile software development processes. In particular, there is a lot of excitement associated with the practices that Kent Beck popularized in his book Extreme Programming Explained: Embrace Change. Other "agile" processes that are gaining adherents in the developer community include systems of practices described in SCRUM, Feature Driven Software Development, and Pragmatic Programming. Each of these systems shares some common themes: close collaboration between developers and customers, recognition that changing requirements are a reality of software, and that development practices must play to the strengths of how developers work best.
We chose to use practices from Extreme Programming, probably the best known of the Agile Programming processes, to develop HP Application Server EJB container.
What is Extreme Programming?
Extreme Programming (XP) is a high-discipline, low-ceremony approach to development, meaning that it does not produce a lot of formal documentation or rely on a lot of formal reviews, but does insist that its practitioners be consistent in following the essential practices, including:
Onsite Customer The customer makes all functionality decisions and is always available
The Planning Game Customers get detailed continuous control over what work is done
System Metaphor The overall design is described in terms of shared concepts
Simple Design The design addresses current needs, not possible future additions
Collective Ownership All code is the responsibility of multiple developers
Forty-hour Week Developers may not work overtime two weeks in a row
Test-Driven Design Functionality is validated via automated tests written before coding
Pair Programming Code is written in pairs, thus giving continuous reviews
Continuous Integration Code is pushed to the baseline on a daily basis
Refactor Mercilessly Poorly structured code is cleaned up aggressively
Small Releases Customers are given updated working code to use frequently
Coding Conventions The developers agree on a common coding style and follow it
These practices have been shown to work together to produce consistent, reliable and flexible software without running a development team into the ground. Of course, XP is deeper than just a set of practices. It declares that programming should be a humane discipline that doesn't drive talent out of the trade.
And of course, as part of its "embrace change" attitude, XP insists that each team constantly analyze those practices it is following and change them when appropriate. It is more important to emphasize the values of listening, testing, coding, and designing than to adhere to someone else's checklist or rules.
EJB Development
Both of us worked on the EJB container in the latest release of HP Application Server. The project was a major undertaking. Within a six month time frame, the team had the responsibility of rearchitecting and rewriting most of the EJB-related code base. We needed to fit the container into a service framework on which the application server was being built, interact with other newly defined services, support a number of new features like hot redeployment and pluggable transport protocols (ensuring that all J2EE Compatibility Test Suite tests ran at 100%), and bring the container in line with the EJB 2.0 specification, public final draft two.
While we had experimented with XP techniques like automated unit testing in the past, we wanted to learn more about XP and what it had to offer through experience. Another great benefit was that Russ had joined the team at this point, and brought with him experience with XP. We'll try to summarize some of what we did, some of what we didn't do, and look at what the implications were for our project.
XP Programming Techniques We Used
Frequent Testing
Testing was one of our clear wins on this project. We got into the habit of writing unit tests for the software, using a combination of ant, JUnit, and HttpUnit to run the tests in suites. We adopted a rule that nobody could commit changes to the baseline unless all tests passed. As a result, the EJB container code was almost never broken by something one of the EJB developers did, and we had a high degree of confidence that something that worked at one time continued to work through the 8.0 release.
One of the most useful rules the team developed was that bug reports should be accompanied by automated JUnit tests. The test code showed the problem in a repeatable way and the tests were added into the developer suite to ensure that regression didn't occur.
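A regression test of this kind might look like the sketch below. The bug number, class, and whitespace-handling scenario are all hypothetical stand-ins (not actual HP-AS code), and a plain `main` harness stands in for JUnit; the point is that the test reproduces the reported failure and then stays in the suite.

```java
// Hypothetical sketch of a bug report accompanied by a regression test.
// The test reproduces the reported failure; once the fix is in, the
// test remains in the developer suite so the bug cannot silently return.
class JndiNameParser {
    // The fix: trim surrounding whitespace before splitting the name.
    static String firstComponent(String name) {
        String trimmed = name.trim();
        int slash = trimmed.indexOf('/');
        return slash < 0 ? trimmed : trimmed.substring(0, slash);
    }
}

public class Bug1234RegressionTest {
    public static void main(String[] args) {
        // Reported: leading whitespace broke name resolution.
        if (!JndiNameParser.firstComponent("  ejb/Account").equals("ejb"))
            throw new AssertionError("bug 1234 regressed");
        System.out.println("bug 1234 regression test passed");
    }
}
```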
Continuous Integration
We encouraged our developers to check in their code every day, while keeping the baseline working (all regression tests running at 100%). This was tricky since we automatically absorbed all changes from other groups working on other portions of the application server every day as well. At times the upstream tests would miss something that would affect us further downstream. Since we couldn't always wait for the upstream changes to be fixed, this caused us to stumble a few times when we checked in code that broke the same tests. But for the most part, we were able to keep the baseline clean throughout the cycle, and this really helped us maintain the quality of the software and protect against regression even during aggressive overhauls of the container.
Another important benefit we saw from frequent integration was that it helped everyone understand how both refactoring and new development were affecting the evolution of the system. Since change was incremental, it was comprehensible. If we had all waited until some integration phase to bring pieces together within the system, everything that a developer wasn't actively working on would have become like a black box.
Refactor Mercilessly
It is very common to develop software which works and fulfills all existing requirements - but which cannot be easily extended to handle new requirements. The most common reaction to encountering such a situation is to sigh, roll up one's sleeves, and rewrite the offending code, taking into consideration the new needs. But doing so discards all of the work that went into debugging the original code; it's also time consuming, and often leads to code no better structured than the version which was replaced. Refactoring addresses this, providing a way to restructure the original code so that new features can be added easily without changing the behavior of the working code at all. This is one of the most important practices of XP; in some ways, it is the essence of XP. There are two things about refactoring that are critical to understand. First of all, it doesn't work without tests, period. If you don't have the tests, you will almost certainly break the software. Secondly, refactoring can be divided into small and well understood steps, catalogued in such places as Martin Fowler's Refactoring: Improving the Design of Existing Code and on his website, http://www.refactoring.com. It is also greatly simplified by support in automated tools -- something that's now creeping into Java IDEs.
There were a number of times during the development of the EJB container when we started with code left over from the Total-e-Server implementation, and refactored it to allow the addition of new features. We didn't do a lot of it, but what we did was invaluable. For example, we were able to refactor a legacy class from over 2500 lines to 300 lines, gradually, over the course of the release. This made it possible for us to more readily support hot redeployment of EJBs within the container.
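To make the mechanics concrete, here is a small sketch of one well-understood transform, Extract Method, applied to a hypothetical class (the `Invoice` example is illustrative, not HP-AS code). Observable behavior is unchanged, which is exactly what the regression tests verify between steps.

```java
// Hypothetical before/after sketch of a single refactoring step
// (Extract Method). Behavior is identical; only structure changes.
class Invoice {
    private final double subtotal;
    private final double taxRate;

    Invoice(double subtotal, double taxRate) {
        this.subtotal = subtotal;
        this.taxRate = taxRate;
    }

    // Before: one method mixes calculation and formatting.
    String renderBefore() {
        double tax = subtotal * taxRate;
        double total = subtotal + tax;
        return "Total: " + total;
    }

    // After: the calculation is extracted into a well-named method,
    // which can now be tested and reused on its own.
    String renderAfter() {
        return "Total: " + total();
    }

    double total() {
        return subtotal + subtotal * taxRate;
    }
}
```

Run the unit tests after each such step; a long series of these small, verified moves is how a 2500-line class shrinks safely.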
Simple Design
Trusting in our ability to refactor as needed, we tried hard to avoid common expensive practices such as "adding hooks" for future growth. This allowed us to avoid unnecessary code, and we did not see any place where it hurt us. It also made the code much easier to understand.
User Stories
XP also promotes the creation of user stories, which are lightweight use cases for pieces of functionality. User stories were something we used consistently, but also something we struggled with doing correctly. The user stories we developed were very much like functional requirements for the system. As a result many of the stories were driven by developers rather than the customer proxy. We did, however, use these as a low-investment strategy for planning. Our group manager played the role of a tracker and tried to make sure the stories within a cycle were consistent with the velocity measurements he had taken for past cycles. This was a great benefit for tracking and planning with agility.
XP Programming Techniques We Didn't Use Consistently
Collective Ownership
With XP, assignments are distributed so that developers don't specialize. In practice, we assigned developers to specific functional areas and for the most part did not overlap, fearing that without specialization we would not be able to develop the necessary expertise in each area. The downside, of course, is that there are many parts of the code known by only one developer and never reviewed or even looked at by the others. It is too soon to tell how this will affect future maintenance.
Pair Programming
Pair programming is a concept that seems alien, and we've found it is an idea people find hard to digest. For one thing, it's simply very different from what we're used to. It also seems counter-intuitive that a process designed around rapid software delivery would tie two people to one computer. It turns out we were wrong.
We found that when we paired on user stories, different areas of experience of different programmers really combined to give the pair a deeper understanding of the problem space. As a general rule, pairing led to better code. We extended this outside of our project and attempted to always pair when we worked with other teams interfacing to our container. This helped both sides to get a better understanding of the critical glue code between subsystems.
Another critical point about pairing: it encourages discipline. If one person is behind and tempted to cheat on test coverage, for example, an individual working alone can get away with it; a pair is far less likely to let it slide. Another benefit of pairing is that it tends to keep people focused. You can't waste time on nonproductive web surfing when someone is working with you.
When push came to shove, we needed to deliver and we fell back on the more conventional techniques that we knew. We had to rely on code reviews that were often stressful and grinding to audit each other, and there were times when class design suffered. Pair programming may seem unnatural, but it has some real benefits that make it worth considering as a basic practice for your team.
Test-Driven Design
For the most part, the team didn't do "test first" programming. By the time we started seriously writing automated tests, most of the code was already in place, and needed updating rather than writing from scratch. In addition, despite reading the XP literature, we did not realize just how powerful this technique can be. Some of us did have the opportunity to write functionality that was brand-new for EJB 2.0, and we did use test-first in those parts. The result seemed to be fewer false starts and simpler designs than in areas built without such tests.
Architecture vs. Metaphor
XP puts forward the notion of a metaphor for a system as a substitute for architecture. We didn't attempt this: we spent time working out a macro-architecture, talking it through face-to-face, and adjusting it as necessary as we progressed. The metaphor idea didn't seem to make sense for something like a framework and the approach we took seemed to work well. So this was an aspect of XP that we didn't utilize, but also didn't feel that we were missing anything.
Limits of Tests
It also became quickly clear that there are limits to what the unit tests provide. Tests aren't a substitute for really digging into the system. Constant and liberal changes can result in subtle changes of untested assumptions. It's easy to say that the problem is there aren't enough tests which, while true, isn't an answer. After all, there will never be enough tests to check every assumption about the system. Software is complicated, and middleware particularly so: it combines the challenges of concurrent, distributed systems programming with a need for an extensive understanding of low-level concepts in security, distributed transaction processing, high availability system engineering, etc.
Performance
One of the questions we had was how XP would affect performance. Refactoring typically leads to more indirection and we specifically avoided writing "optimized" code. Instead, we waited until after all features were complete to systematically profile the system under load with real end-user applications. This paid off hugely: the container was considerably faster than previous iterations and the unit tests helped to ensure that the optimizations we introduced did not break the system.
The most important bottleneck in middleware for distributed systems is resource contention, and we were able to isolate and eliminate the contention issues we identified. The result was not only a faster system, but also a more scalable system. We had great success with this approach, but would hesitate to recommend it without unit tests in place.
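As an illustration of the kind of contention fix involved (the class and scenario here are hypothetical, not HP-AS code), a typical transformation narrows a lock's scope so that expensive work happens outside the synchronized region:

```java
// Hypothetical sketch of reducing lock contention by shrinking the
// synchronized region: the expensive computation moves outside the
// lock, which afterward guards only the shared map access.
import java.util.HashMap;
import java.util.Map;

class DeploymentCache {
    private final Map<String, String> cache = new HashMap<>();

    // Before: the whole method is synchronized, so all callers
    // serialize even while doing work that touches no shared state.
    synchronized String lookupContended(String key) {
        String value = cache.get(key);
        if (value == null) {
            value = expensiveCompute(key); // holds the lock needlessly
            cache.put(key, value);
        }
        return value;
    }

    // After: compute outside the lock; only map access is guarded.
    // (This accepts an occasional duplicate computation in exchange
    // for much shorter critical sections.)
    String lookupScalable(String key) {
        synchronized (cache) {
            String cached = cache.get(key);
            if (cached != null) return cached;
        }
        String value = expensiveCompute(key);
        synchronized (cache) {
            cache.putIfAbsent(key, value);
            return cache.get(key);
        }
    }

    private String expensiveCompute(String key) {
        return key.toUpperCase(); // stands in for real work
    }
}
```

Profiling under realistic load is what tells you which locks actually matter; the unit tests then confirm the narrowed lock didn't change behavior.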
Conclusion
Over the course of the release, there were times when even we doubted our ability to deliver on all of the requirements in the time required. It's our strong conclusion that the XP techniques we used were fundamental to pulling the software together into a well-received, sophisticated application server. We found that the basic practices combine and play off each other to help make better software, and that we missed out on some key benefits from practices we weren't able to implement. We've worked on projects where the code regresses over time, and we're happy to report that the code base for the EJB container took a different path: it got better and better as the release progressed. There are unique challenges to every project: some teams are very large, some are very small, some are distributed geographically.
We would encourage you to experiment with XP and other Agile Programming techniques and adapt them to your environment. They pay off in a big way: well written software that meets real requirements in short time frames.
More information regarding Extreme Programming is available at:
http://www.extremeprogramming.com;
and for more information regarding Agile Programming, check out:
http://www.agilealliance.org.
ABOUT THE AUTHORS:
This article was contributed by Russell Gold and Greg Pavlik, architects with Hewlett-Packard's Middleware Division. Russell has been instrumental in applying the eXtreme programming methodologies to the Middleware development process. Russell is also the original author and maintainer of HttpUnit, an open source library for automating tests of web sites. Greg was the lead architect on the HP Application Server's EJB 2.0 and 1.1 implementations and has been a member of the EJB expert group.
Monday, June 12, 2006
Mothra Attacks
The sleepy little town of Shamong, New Jersey, where I live, is currently under occupation by an army of gypsy moths. My own property is heavily wooded, but I have to admit I hadn't noticed any real activity: until I started looking up on the drive into the office. It's somewhat jarring to see whole sections of forest denuded of leaves!
Perspectives on Diversity
I recently had the opportunity to participate in a workshop on diversity for managers. The interesting thing about the exercise was that the focus of the workshop was not on issues of overt discrimination, but how a person's own identity influences and shapes their spheres of inclusion: in discussions, in peer groups and by implication in large organizations. I think it goes without saying that overt discrimination is a problem that needs to be dealt with, but it is also one that any healthy organization is committed to addressing quickly and effectively in the modern and globalizing business environment.
The way we project our identity into work environments unconsciously, however, is a much more interesting issue. On the one hand, our received identities are a tremendous source of personal strength and a way to build bridges with others. On the other hand, they can be a wall. Simple example: a group of Indian and British guys sitting around talking about cricket may be a wall to, say, a Canadian baseball fan. Then again, it's also a chance to build a bridge.
But what I found most interesting of all was reflecting on my own self-identification and how it's shifted over the years. When I was growing up, in a lower-middle-to-working class, predominantly white and Catholic neighborhood, I picked up many of the dominant biases of my environment. In general, these were parochial: in retrospect, it was an environment that suffered from a fair amount of myopia about the broader world. My way of looking at the world was also one that focused exclusively on defending legitimacy claims about its own interests without showing the same consideration for others. After all, when you're right, you're right.
When I went off to be educated at an Ivy League school, my impulse was to seek philosophical rationalization for my own biases. Of course, I didn't think these were biases at all: as a kid, I thought I knew how the world worked (though of course I didn't) and my mission was to prove that this received way of looking at the world was correct. In many ways, I have come to see that this is quite common, and it applies to all kinds of philosophical biases: people often tend to search for reasons to prove they are correct in what they think they know, rather than seeking a balanced perspective. In my case, I read widely, but in retrospect, with an aim to narrowing rather than opening the mind. Looking for justification rather than understanding, and systematizing against all the wrong things. I managed to make an ass of myself more than once and I can only imagine what sensible friends thought at the time.
Fast forward through the years (now we're on the order of decades!) and I look at my own identity as being very different than it was as an older child/younger man. I tend to err on the side of ambiguity when it comes to philosophy, religion, comparative judgements across cultures. My friends are literally from all over the world, from all kinds of religions, races, and ethnicities. And I find that even my own cultural identification is progressively more difficult to pin down, even in simple, basic ways: I can't imagine not eating Indian food, listening to jazz, trying to struggle with Japanese, coming to terms with the sheer ancientness and depth of Chinese civilization, planning trips to Africa, spending time in Europe. And there is something else very important that I've learned through life. If there's one thing you can count on about a stereotype, it's that the first thing real people do is prove it wrong.
The 20th century persons I tend to admire now are people like Martin Luther King or Gandhi -- though both were in some sense religious figures, while I struggle with the idea of faith regularly, I've been most attracted to their emphasis on both nonviolence and bridging extraordinary cultural divisions. Unlike the titanic political figures of the 20th century, neither changed the world by brute force; instead, they were closer to what seems to me to be the ultimate expression of the ethic that the Jesus we know about from religious history taught and lived. As humans, we like our heroes to be god-like, seemingly perfect. In fact, there is no such person that has existed. But some people do manage to rise above our imperfect condition and change the world in ways that deserve to be admired.
I've often thought about how my own process of identification and outlook has changed -- what I regard as a maturation process -- and I'm not sure how to explain it with confidence. I like to think that this is a result of reflection and inner-directed growth, but it could also simply be the influences of an environment that I'm now a part of -- and one that I love. My day to day world is both multicultural in the American sense and really quite international. One of the defining facts of the technology industry is that it is global and it will only continue to become more so.
There is a practical point to all of this: it's impossible to do global business without a perspective that builds on mutual respect for people of all kinds of backgrounds. I would argue that is also true even in the American context, but that's in some way yet another blog entry. The essential question is how to ensure a broad understanding and acceptance of this fact. The cliche answer that it's a matter of education, I think, falls short. Education can just as easily reinforce biases, and enforced education may increase barriers to acceptance all the more. On this I don't have a clear answer, but it is undoubtedly among the most important issues of this young century.
Wednesday, June 07, 2006
Open Source Tivoli?
In an interesting move, Hyperic has open sourced their management infrastructure. Perhaps it's not too surprising to see that they have Bob Bickel, who built and sold off the JBoss company to RedHat, as a company advisor. I wonder if this is an area where open source makes sense, but it's surely going to get them some attention.
Tuesday, June 06, 2006
Historical Blog Entries
I've gotten several emails asking how to get at content from my now-discontinued Oracle blog. You can find the last entry here; just scroll back to get to previous entries.
The only way to find a particular entry, though, seems to be Google. I can help if that proves difficult.
Monday, June 05, 2006
International Conference on Service Oriented Computing 2006
ICSOC is back in the United States this year in Chicago starting on December 4th. I am pleased to have an opportunity to serve on the program committee this year. The Call for Papers may be found here.
Tuesday, May 23, 2006
Is SOA an Architectural Style?
The term architectural style moved out of the research literature and into the lexicon of practitioners after the Fielding thesis positioned Representational State Transfer (or REST) as the architectural style of the Web. I've always been hesitant to adopt this line of thinking because REST struck me as more of a philosophy -- and a not-quite-accurate description of how the Web works in practice. Nowadays it's popular, at least on blogs, to call SOA an architectural style, sometimes coupled with a statement to the effect: Yeah, and we've been doing this for years with [insert your favorite middleware framework here]. I think this misses the point somewhat radically as I'll explain below.
The second related tendency I've seen is the claim that Web services are an instance of this architectural style. I don't think either claim is right, but I'll dispense with the latter first: Web services can be used without reference or respect to any stable model. The beauty of Web services is their use to support SOA, but they can certainly be used for straightforward client server modeling as well. I happen to prefer vanilla HTTP for this kind of interaction, with POX payloads where required (it's not just for AJAX), but Web services will work, especially since there is reasonable tooling available to make this pain free for developers.
The problem with the claim that SOA is just an architectural style is that it reduces SOA to an abstract model, when what most people are trying to convey is a new (or at least evolutionary) approach to organizing the data-center. This approach involves, among other things, business-contract driven services, centralized and managed policy enforcement, leveraging ubiquitous Web protocols, process-oriented modeling, etc. Only one of those listed items is technology oriented, but it has clear business benefits as well.
SOA is the tag we wound up with, but perhaps it's not so accurate. If I had to come up with something of my own, it would probably be Rationalized Business-Oriented Data Center Organization Amenable to Process Orchestration For Agility and Value. Somehow I don't think RBODCOATPOFAV is going to catch on like wildfire. (But you never know, so if it does, you read it here first!)
But what I want to suggest is that what we're really talking about is a way to better organize software assets to achieve business goals. And this is not what people mean by an architectural style at all. Now you might say this is a chicken-and-egg problem: don't we need to understand the architectural style that supports this before we can talk about serving business needs effectively? In my opinion, the answer is no. I think we will be doing ourselves a tremendous disservice if the SOA tech stack is not driven from the business needs down. The tendency to develop bottom-up technologies with the idea that the interests and insights of middleware software folks will solve the needs of business is flawed. We've had a generation of distributed object middleware that suggests as much. And in fact it wasn't even that good for techie solutions.
I'm very interested in qualifying this problem, but I think it will take some time to reach a broad consensus among all interested parties. In the meantime, you'll see terms like SOA 2.0 emerging precisely because people are trying to find a way to force the discourse away from the discontinuities that exist today. Long-time associates know I have a morbid interest in semiotics: to me, this is an attempt to align an understanding between sign, signifier and signified that does not have stability today. And the quest for clear meaning is a good thing.
Monday, May 22, 2006
Sunday, May 21, 2006
Oracle SOA Fabric
If you didn't have a chance to watch Thomas Kurian's keynote at JavaOne, I would encourage you to do so: it was far and away the most compelling and coherent keynote presentation this year. The three major focal areas were J2EE 1.5, SOA, and Web 2.0 support in our product line and in our donations to open source projects. Most of what Thomas talked about is functionality that is available now, so it's a pretty exciting time to be here.
Of course, I am focused on the SOA products, so it's nice to see the Fabric infrastructure get prominent billing. The really cool thing about Fabric is that it's a best-of-breed combination of support for service infrastructure, policy management, business activity monitoring, identity-based security and event-driven architecture that is designed as a fully integrated, common platform: all the essential building blocks of the next-generation IT infrastructure. There's not really any other platform offering that can compare. I have a good feeling it's shortly going to be the standard SOA infrastructure for many enterprises.
Tuesday, May 16, 2006
SOA Programming at JavaOne
I'm in the valley for some internal business this week, but I will be talking at the 11am PST JavaOne session tomorrow on SCA and the emerging SOA programming model with Rob High from IBM and my friend Ed Cobb from BEA. Apologies for the late update, but I only recently nailed down the logistics. Stop by afterward to catch up.
Monday, May 08, 2006
Now This Is Cool
I started working in systems programming as a middleware guy writing the low-level plumbing in queuing systems aimed at high-volume telco applications. I moved on to CORBA (and also started to work with the open source TAO ORB and ACE framework), mostly in C++, with a bit of Java for management infrastructure. After that I worked on building J2EE servers -- everything from the Web-tier load balancer to EJBs. After that, it was portal, Web services, mobile infrastructure, more transactions, integration frameworks, etc. Almost all the work was the innards. Like every profession, software makes careers a matter of specialization to some degree. For a while I resisted this, but it's darn hard to be an effective generalist.
While I've always enjoyed the middleware work, I often struggled with the question of how best to make it usable for non-specialist developers -- people who need to implement business logic and Web-enable their applications. One of the first times I had to build a real Web application was to show off the work we'd done on one of the first EJB 1.1 containers back in 1999: we had a B2C application (and framework) that we wanted to "port" to use EJB. That's when it really struck home that J2EE didn't really solve the problem for the application developer. Even simple problems like correlating the lifecycle of EJBs to servlets were a nightmare. So I started to write lightweight frameworks to tie things together to make it possible for our end users to use J2EE technologies together -- but I always felt that was the perspective the J2EE umbrella specification should have taken from the beginning.
Now we have layers on top of J2EE that make many aspects of the application server more of an SPI: the application server is necessary and important for reliability and consistency of applications, but J2EE is less and less the programming model for end users. The folks that pushed this the furthest and, in my opinion, did the best job were the consultants that developed the Spring framework. At first, I was skeptical of some of what they were doing (treating JTA as a dependable interface without knowing the innards of the application server was a primary concern), but now that the framework has gotten traction, most of the application servers have actually adapted to Spring -- OC4J, for example, is independently certified to interact with Spring correctly. That the "big guys" are bending over backward to accommodate a framework that replaces a big section of J2EE as an API is something I wouldn't have guessed would happen back in, say, 2001. At the end of the day, though, it's a great development that really seems to benefit everyone involved.
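The shift Spring drove can be illustrated without the framework at all, since the core inversion-of-control idea is just ordinary constructor injection: business logic depends on plain interfaces instead of JNDI lookups or EJB APIs. The names below (PaymentGateway, OrderService) are invented for illustration -- in real Spring the wiring would come from the container's configuration rather than a hand-written main.

```java
// Business logic depends on a plain interface, not on container machinery.
interface PaymentGateway {
    boolean charge(String account, long cents);
}

// The service is an ordinary class ("POJO"); a container like Spring would
// supply the gateway from configuration rather than requiring a lookup.
class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) { // constructor injection
        this.gateway = gateway;
    }

    String placeOrder(String account, long cents) {
        return gateway.charge(account, cents) ? "CONFIRMED" : "DECLINED";
    }
}

public class InjectionSketch {
    public static void main(String[] args) {
        // In a unit test -- or a different deployment -- the dependency is
        // swapped freely, with no application server in sight:
        PaymentGateway alwaysApprove = (account, cents) -> true;
        OrderService service = new OrderService(alwaysApprove);
        System.out.println(service.placeOrder("acct-42", 1999)); // prints CONFIRMED
    }
}
```

The point is that the class stays testable and portable; the container becomes an implementation detail behind the interfaces, which is exactly the "J2EE as SPI" shift described above.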
I've just started to poke around with the new Spring Web Flow framework, which seems to be the right solution at the right time. Managing scopes in Web apps can be a real pain, and if this does for state management what Spring does for application logic, score another one for Interface21 -- and for Java in general.
Nice job.
Sunday, May 07, 2006
Oracle SOA suite review
Good coverage and analysis of Oracle's SOA suite, with coverage of BPEL, Web services management, and BAM support: http://it.sys-con.com/read/204725.htm
Thursday, May 04, 2006
Pura Vida
I just returned from a family vacation in Costa Rica. Well, actually, I just returned from a business trip in Ireland, but I didn't have time to collect my thoughts on Costa Rica before that. We spent time in two areas: in the vicinity of the fishing town of Quepos, outside of the Manuel Antonio national park, and in the area outside of La Fortuna near the Arenal volcano. The former is lowland, coastal, semi-tropical forest land; the latter is referred to as cloud forest, as it sits at a higher elevation and is somewhat wetter during the dry season.
Costa Rica has 5% of the world's biodiversity, but what is amazing is the sheer density of the biodiversity in the remaining primary and secondary forests. I was surprised by the amount of land that has been converted to agricultural use: it is extensive. American real estate speculators are omnipresent, so Costa Ricans (referred to as Ticos) are going to have a rough time protecting what natural resources are left. Given the power of the dollar, it does not bode well in my opinion.
The country itself has first-world sanitation, education, and health care. There is petty crime, but violent crime seems minimal (outside San Jose). Many of the Ticos I talked with indicated they saw no reason to leave Costa Rica for the US except to visit, as "Life is too good here." The city of La Fortuna, for example, was much nicer than lower-income areas in virtually any American city, with substantially lower costs and a more interesting environment to boot. I was impressed by the pride and intensity of passion that the Ticos have for their ecological system. The guides we had in Quepos -- in particular Vanessa at Iguana Tours -- were infectious in their enthusiasm and had a deep knowledge of wildlife and the supporting ecosystem. Even our taxi drivers carried binoculars and wildlife identification guides. One of them swerved off the road and started to point out birds by their scientific names, providing us with high-powered binoculars to observe them ourselves.
Of the two areas I visited, the cloud forest was my favorite, largely because of the staggering diversity of life from the ground floor of the forests to the top of the canopy. We saw everything from armies of ants that literally wore a visible path on the forest floor to three species of monkeys to sloths to poison vipers. Definitely not a vacation to waste lying on the beach. I recommend it as a destination for anyone who wants an experience much like stepping into an episode of the Jeff Corwin Experience.
Wednesday, May 03, 2006
Business Intelligence on the move
A while ago, I predicted that the next generation of significant IT technology would center around business intelligence functions and that BI and BAM products were the launching pad for this development. That is, of course, just the start. Others are now seeing the connection between search and BI that I noted then. In my opinion, Oracle is in a very strong position in this area today, controlling a stack that includes integrated BI and BAM functions across the data tier, middleware and applications stacks. I'm pleased to see that Oracle is moving forward aggressively in both BI and BAM as a core of our Fusion platform and even more interested to see how well that movement is being received by end users. While this is just one of the key differentiators for Oracle's end-to-end story, I predict it will become increasingly important over the next two to five years.
Monday, April 10, 2006
Back to the future
Yes, the blog has been slow this month. I have several entries that haven't made it past draft mode: just running out of time. I'll try to get some meaningful updates over the next few weeks. Some of what's going on:
Tons of product initiatives are underway right now, including our SOA platform release and forward looking work on the SOA front. In my view, Oracle's platform is second to none in the market right now.
Toward the end of May, I am slated to speak on the Service Component Architecture in Shanghai at ICSE 2006. Topic will be "Next Generation Integration Platforms". I will post the presentation to the blog around that time.
Lastly, I am planning to do an executive program at Wharton this year. I'm very excited about this opportunity to develop new skills and friendships. I'm hoping this will also lead to a nice mix of business oriented and technical posts over time.
Friday, March 03, 2006
WWW2006 Participation
Having seen some very interesting papers in the Web Services and XML Program Committee, this promises to be (another) interesting WWW* conference. I am not sure if I will be able to attend this year as May is looking to be a very busy month. In any case, from the conference chairs:
WWW Conference 2006
Tuesday 23 May - Friday 26 May 2006
Edinburgh International Conference Centre, Edinburgh
Registrations for the 15th International World Wide Web Conference have been going very well. A number of the tutorial sessions are now fully booked, and interest in the available accommodation in this busy capital city means some hotels are also booked. If you are intending to come to the conference, we recommend you register as soon as possible.
For more information please click here: WWW2006.org.
As well as a really strong refereed paper programme, there is a very strong invited speaker programme as well as workshops, tutorials and developer sessions. Speakers from a wide range of backgrounds will be present and include:
Tim Berners-Lee - Director of World Wide Web Consortium
David Brown - Chairman of Motorola UK
Mary Ann Davidson - Chief Security Officer, Oracle
Tony Hey - Corporate VP for Technical Computing, Microsoft
David Belanger - Chief Scientist, AT&T Labs
Tim Faircliff - General Manager of digital media business, Reuters
Thursday, March 02, 2006
Dave Ingham Blog Discovered
My friend Dave Ingham has moved from Arjuna to Microsoft. They seem to be siphoning up a bunch of folks from Newcastle to Redmond these days. We acquired Arjuna at Bluestone (before being swallowed in turn by HP), and I don't think at the time we realized what a stellar group of people we were getting. My expectations were that we would leverage their expertise in transactions, but as Dave and Stuart Wheater proved, there was a much broader skill set amongst the team. Dave took over our messaging software and delivered the unfortunately named HPMS (the name was almost as bad as our servlet engine, HPIS) in record time. Looking around and seeing the percentage of the folks from Arjuna now driving companies and technologies, I'm continually amazed.
Dave, best of luck. You can follow his blog from here: http://www.daveingham.com
Wednesday, February 08, 2006
Monday, January 16, 2006
Saturday, January 14, 2006
End of an Era
Back in the early 90s when I was still considering a career tangentially related to materials science, I applied to work at a "dream job". At the time, I was hooked on fly fishing and passionate about the inter-mountain West. So it was natural to apply for a job at the RL Winston Rod company, which produced the best "plastic" rods (aka graphite) made as well as some of the finest split cane rods available under the tutelage of master craftsman Glenn Brackett. (I always wanted a Glenn Brackett rod but was never up to the full price. I did live in the Rockies for several years though.)
I wondered as the focus on fly fishing shifted to fast rods and saltwater if the classic rods of Winston would endure. The old IM6 rod (of which I've owned several) with its characteristic soft-tip was hard to beat as a trout fishing instrument. I was pleased to see the saltwater oriented BL5 step up to throwing big flies without feeling like a steel pole.
Many people saw Glenn openly complain about the possibility of outsourcing Winston's low end rods from Twin Bridges to China. Now I'm shocked to hear that the cane rod builders have quit Winston. I presume they will set up an independent shop (Tom Morgan also is building his own line of rods, though the prices are too high for the average fly fisher). If anyone has details on where Glenn and company wind up, drop me a note. I still want mine someday.
Truly the end of an era. With the old guard gone, I doubt we'll see rods with the same touch and feel and quality of the classic Winston in the future; certainly those looking for a rod built under the direction of Mr. Brackett will have to look elsewhere now. Nothing stays the same, but the passing of the Winston rod company I knew still saddens me.
Saturday, January 07, 2006
Google, Amazon and Ebay: One of These is Not like the Others
When people talk about the commerce segment of the Internet, there are three names that are almost always dropped: Google, Amazon and eBay. I'm not sure why Yahoo! doesn't make it into the mix every time, because it's perhaps the most versatile and resilient of the major players in the space. But let's stick with the first three I mentioned to make a point: Google and Amazon have found ways to co-opt the Web to drive their businesses. Specifically, they both offer a model of radical federation by which their services become desirable as ubiquitous building blocks for content providers. This was driven home for me by AdSense. Mostly because I wanted to play around with AdSense, I added it to my blog (Blogger makes this trivial). The first day, I noticed I made thirty cents on click-throughs on ads. Now I'm not about to retire off of AdSense, but it serves to illustrate why people stick those Google ads in their blogs: they want a small piece of a very large revenue stream. And if their traffic really picks up, that small piece can be significant, even to a business: it's damned hard to sell advertisers on a small Web site. And Google gets driven through the stratosphere by the effects of all this attention. Very nice.
There's something similar happening with Amazon: it's trivial now to add a click-through link to a product from any Web site. At least I think it's trivial: I'm going to try to experiment with a link to buy my last book. Again, mostly I want to experiment with the Amazon model, but, hey, I can always use a way to help fund my Starbucks habit. And of course, while this might help me a little, if lots and lots of Web content providers do the same, it's immensely helpful to Amazon. Neat way to grow the business.
All of this brings me to my point: eBay is different. They haven't yet figured out how to achieve radical federation of their services. The thing that bothers me is that I don't understand why. It could start with auctions: there's definitely a market for niche auctions that require tighter administration than eBay can provide. And there are lots of things that eBay won't do for liability reasons. Fine, but that shouldn't stop them from trying for a piece of the action. A couple of good examples:
Wine auctions. Wine sales are huge, and Internet wine sales are growing precipitously. There are already wine auction sites. But eBay is missing out on the action. Not good.
Firearms. Whatever you think about guns, it's clear that many Americans love their guns, and to collectors -- and some hunters -- trading guns is practically a lifestyle. More to the point: it's not cheap. Case in point: a single British shotgun can run into six figures. Don't believe me? Check into the costs of a fully engraved Purdey or Holland & Holland. All that trading means an indirect revenue opportunity for an eBay. As every software ISV knows, indirect revenue is one key to a high-margin business.
And it's not just auctions. eBay could drive the next wave of growth on PayPal as the currency of the Web. Or Skype, which remains for now the technical leader in the Internet telephony race. Either could be massively federated as a service that gets sucked into a critical mass of Web sites. The key may not be to drive those technologies into the eBay auction user base; perhaps it's to drive the auction user base into one of those technologies as the next driver.
Don't get me wrong: I love eBay. I think it's a fantastic company with a great service. I use it to find all kinds of things, from fishing rods to clothing. But I'm frustrated as hell that they haven't picked up on the fact that radical federation is the key to driving the next wave of growth in the Internet commerce segment. That's what it means to make a "platform play" in the software-as-a-service space: Amazon and Google are moving there aggressively, and it's a powerful thing indeed.
William Henry's blog discovered (and a good idea to boot)
I just discovered (via Eric's blog) that William Henry is blogging. From the looks of things, he's been doing it for a bit -- I'm just slow. William is a great guy, smart and extremely pragmatic. He managed the relationship between HP and IONA when HP was OEMing ORBIX for the application server and was always a pleasure to work with.
The thread on using RSS to provide information about Web services is a good one -- something I've also been interested in for some time. I'd add that it's an interesting alternative to WSIL for lightweight exchange of service metadata.
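To make the idea concrete, here's a rough sketch of what consuming such a feed might look like: each item in a service directory feed advertises one service, with the link pointing at its WSDL. The feed layout, element names, and URLs here are my own assumptions for illustration, not any published convention.

```python
# Hypothetical sketch: discovering Web services from an RSS 2.0 feed,
# where each <item>'s <link> points at a service's WSDL.
import xml.etree.ElementTree as ET

# A made-up example feed; in practice this would be fetched over HTTP.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Service Directory</title>
    <item>
      <title>Order Service</title>
      <link>http://example.com/services/order?wsdl</link>
      <description>Accepts purchase orders</description>
    </item>
    <item>
      <title>Inventory Service</title>
      <link>http://example.com/services/inventory?wsdl</link>
      <description>Stock level queries</description>
    </item>
  </channel>
</rss>"""

def discover_services(rss_text):
    """Return (title, wsdl_url) pairs for each item in the feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

services = discover_services(SAMPLE_FEED)
for name, url in services:
    print(name, "->", url)
```

The appeal over WSIL is exactly this kind of simplicity: any feed reader or a few lines of XML parsing can consume the directory, and subscribers pick up new or changed services the same way they pick up new blog posts.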
King Estate Oregon Pinot Gris 2004
Everything I like in an Oregon Pinot Gris and very little of what I despise in your run-of-the-mill Pinot Grigio. This tricky grape is normally my least favorite varietal, typified by sugar-water blandness and often yeasty overtones. Yet the Oregon wineries just keep producing outstanding Pinot Gris. The King Estate wine is crisp with citrus overtones that would complement almost any dish but not overpower subtle foods like shellfish. In fact, I believe I had this wine before with Eric Newcomer, Mark Little and Kevin Connor (what, the always opinionated Kevin has no blog?) at Yabbies Coastal Kitchen at one of last year's WS-CAF F2F meetings. Eric and I split a shellfish basket and the wine was a great pairing. Recommended, from a confirmed Pinot Grigio hater. And another reason I like Oregon so much.
Thursday, January 05, 2006
SOA and the JCP
Some brief thoughts on SOA standards, which do not necessarily reflect Oracle's corporate position. I just read an interview arguing that SOA standards should be developed through the JCP. I just don't get that at all. I don't necessarily have an issue with the JCP being run by a private company rather than a standards body per se. That's not the problem. The problem is that the JCP doesn't work for SOA by design.
SOA is about integration. That means heterogeneous technologies by definition. I don't know how many people have read the JSPA, but it provides the governing rules that make it virtually impossible to do anything that is not part of a Java-compatible implementation. And that's exactly what you don't want in a SOA.
So how to develop SOA standards? There doesn't seem to be a perfect approach, but some combination of open source collaboration and inter-company specification collaboration seems like a good start. Once there's some open implementation experience, it makes sense to bring the specification of the heterogeneous part to an open standards body to ratify it, clear up any open IP questions, and provide a basis for commercial implementations. Are there better approaches? Does the JCP have a role in one of those contexts? The truth is I don't know. On the first question, the answer may well be yes, and I'd like to hear more ideas. One thing seems clear: right now the JCP won't be the place where SOA standards are developed from the get-go.
Cross posted from my other blog.