Thursday, August 23, 2007

The Next New Thing – that’s the problem!

I am confident I have found a Holy Grail: the root cause of most project failures and the main ingredient in the mess we have, and continue to have, around Enterprise Applications and Integration. Scarily, I also think new projects, and even SOA, are likely to suffer from the same root cause and will NOT be the panacea everyone hoped for. We are seeing evidence of that now. I see no big fix on the horizon either; the types of problems we have in IT today will be the same problems we’ll have 10 years from now! Am I sticking my neck out here?

When I look back over the 30 years since I started in IT, there has always been the “next new thing” in technology. That fact is not disputed, but the penny dropped for me last year when I considered what happens when you add one other major ingredient – PEOPLE.

Is it not true that most people working in technology, and especially in some form of development, only want to be working on the next “new thing”? Is it not also true that these same people get bored with the “new thing” really quickly, often before it even gets “old”?

Smart people in technology get bored at around the 18-month mark and want to be working on the next “new thing”. This is my Holy Grail, and I am not sure it’s ever going to be fixable. At least acknowledging this fact should allow us to prepare for it!

18 months is not long enough for most projects, not even close, and if the smart people who started them are not around for the completion, testing, roll-out and continued evolution of the applications, it is no wonder expectations are rarely met.

You can certainly do no wrong by breaking projects down into smaller pieces where feasible. This will help, and that’s certainly a promise of SOA, if done right. For quick wins, look to smaller projects and products that can consume these “new” services from existing applications!

This Holy Grail is good for companies like mine, which help the tens of thousands of companies with hundreds of thousands of applications that need to be enhanced to do more than what was originally delivered. The fact that OpenSpan can take 25-year-old and 1-week-old applications/technologies and tie them together to deliver what users need today goes a long way towards allowing the “Next New Thing” projects to finally exceed original expectations.

Thursday, August 16, 2007

SOA Failures

My thought that SOA is for the rich man, not the poor man, is panning out: I found a link to a Gartner article (thanks to Tekrati) confirming as much. This is a scary (but realistic) outlook, and it’s why companies should be looking at complementary alternatives whilst on the path to SOA Nirvana!

"Gartner predicts that by 2010, less than 25 percent of large companies will have the sufficient technical and organisational skills necessary to deliver enterprise wide SOA"....

Why SOA deployments fail

SOA is the right road; it's just a very long road, and also not the only road. Bear that in mind when you are trying to solve business problems, Right Now!

Wednesday, August 15, 2007

The Last Mile of SOA... What's that?

Have a web service you want to write, or someone else's you'd like to consume? Say an address validation web service, a trouble-ticket logging web service, a web service that monitors and records information into your analytical systems, or even a simple web service that creates a shipping record in one of the main shipping companies' systems.

All very well and good, but how the heck do you consume a web service when it's likely going to involve some heavy-lifting development effort to get it into your existing applications (assuming, too, that you own those systems)?
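To make that "heavy lifting" concrete, here is a rough sketch (in Python, for brevity) of what hand-coding a call to an address validation web service looked like. The endpoint URL, namespace and field names are invented for illustration; real services each have their own WSDL-defined contract, which is exactly where the development effort piles up.

```python
# Hand-building a SOAP request for a hypothetical ValidateAddress service
# and POSTing it over HTTP. Every name below (endpoint, namespace, fields)
# is an illustrative assumption, not a real service.
import urllib.request
from xml.sax.saxutils import escape

SOAP_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ValidateAddress xmlns="http://example.com/address">
      <Street>{street}</Street>
      <City>{city}</City>
      <Zip>{zip}</Zip>
    </ValidateAddress>
  </soap:Body>
</soap:Envelope>"""

def build_validation_request(street: str, city: str, zip_code: str) -> str:
    """Build the SOAP envelope, XML-escaping the user-supplied values."""
    return SOAP_TEMPLATE.format(
        street=escape(street), city=escape(city), zip=escape(zip_code)
    )

def validate_address(street, city, zip_code,
                     endpoint="http://example.com/soap"):
    """POST the envelope to the service and return the raw XML response.
    This performs a network call, so it needs a live endpoint."""
    body = build_validation_request(street, city, zip_code).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/address/ValidateAddress"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

And that is only the plumbing: parsing the response, mapping it into your existing application's screens, and handling errors is where the real integration project begins.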

What we call the last mile of SOA really means integrating web services RIGHT NOW. Since OpenSpan inserts itself into running desktop applications, it can intercept what a user does, what data is where, and even what the application does with that data. Armed with all that information (and no coding), OpenSpan allows even complex web services to be integrated with that information, based upon any event or trigger from the user that you define. Until legacy systems go away (never), this approach is one of the most immediate and agile ways to integrate web services with legacy systems. Real-time desktop application integration has come of age.

As you will read (see news), our new partnership with Aspect Software enables their customers to interact with the Aspect Web Services, Right Now, without ripping out the back end, which would normally take years and a large development effort.

The last mile of SOA may seem like a strange term to describe some of what we do, but it's as good as any... and it works.

Monday, August 13, 2007

Fix what we already have - AGAIN

AirTran cancelled a flight for my family and me as we were getting ready to go on vacation because, at the last minute, the first officer was a no-show. The plane was there, but no first officer. Fair enough, this stuff happens.

Rebooking 200 people onto other flights was bad enough, although they did at least use technology to do that. The slow and sad part involved manual integration – 2 men and 2 phones: 1 person at the gate reading out bag tag numbers to someone on the tarmac, at approximately 5 minutes per passenger.

So, in the end, AirTran couldn't transfer people to new flights fast enough because they overlooked this manual process, which actually would not be so difficult to automate.

So, before you sweat the big back-end integration projects, look to sort out the little stuff first! You'll be surprised at how much customer / end user / business satisfaction goes up.

Friday, August 3, 2007

Fix what we already have - FIRST

Why is there still no integration here? It’s 2007 for blimey sake!

Can we please stop all the hype around the next ‘fad’ in tech and get what we already have WORKING! A real-world example from last weekend:

I went into an Apple store to DROP $3800 on an Apple Pro Notebook and Final Cut for my son, who's a budding director. My first ever Apple purchase experience:

1. they did not have the machine in the store (fair enough)
2. they then tell me to go check out the other local stores (I told THEM to check for me)
3. they had to phone around the other stores (yes, phone)
4. they pulled up the apple site to order one (which showed a 3 week delivery)
5. they told me I could PHONE the store every day to see if any “came in” (by chance)
6. they told me they only get deliveries mon-fri and this was saturday
7. they had no clue when they would get the next MacBook Pro delivery (NO CLUE)
8. I told my son to call the store the very next morning anyway (even though it was Sunday and there were supposedly no deliveries – I was skeptical)
9. Miraculously they had one in stock and we picked it up within the hour

20 very nice Apple sales people in the store and we, the consumer, in 2007 had to PHONE in to check every day!

This is a pretty easy problem to fix in 2007, but my guess is the guys in charge of the software for this enterprise might be just a little too focused on their next EA strategy (hype) to be ready by the year 2020. By which time, I predict, THIS particular problem still won't be fixed anyway.

Thursday, August 2, 2007

Virtualization – now Web 2.0 and Legacy Fat Clients have a lot in common!

If you think you understand all of the terms surrounding Virtualization, check out and see if you learn anything new!

I am going to use the term “Virtualization” loosely in this post, though, since there are many things I love about “Virtualization”. I think it could actually be one of the biggest evolutionary changes in computing since the advent of the GUI.

If you are old enough to remember, the earliest deployment of applications to end users was never an issue. Plug a dumb terminal (green screen) into the mainframe and a user was up and running. If the application changed on the mainframe, the user saw it immediately. If you added more processors/gateways/disks to the mainframe, it could easily serve up more and more users. Maybe that’s simplifying all of this, but if you throw enough “smarts” around virtualization, you get most of this back. No? We all know technology comes around in waves, so I really love this one because I am old enough to remember!

Virtualization is not so new, BUT now we have unbelievably cost-effective CPU/memory power to really kick it up a big notch overnight. Tape (rack) enough virtualization pieces together and you get all of the ease-of-deployment benefits the mainframe had.

I also read somewhere that today’s enterprise servers only utilize about 10% of their capacity. Funny – barring a very pushy enterprise salesman, enterprises wouldn’t buy more processor hardware until they had utilized 60-80% of the capacity they already had. Virtualization closes that gap for today’s servers by easily allowing a single processor to be more fully utilized (another “what comes around” moment).
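To put that 10% figure into perspective, here's a back-of-envelope sketch. The 70% target utilization and the 100-server fleet are my own illustrative assumptions, not figures from any study:

```python
import math

# Rough consolidation math: if each legacy server sits at ~10% busy,
# how many can one virtualization host absorb before it hits a sensible
# ceiling? Percentages are integers to keep the arithmetic exact.

def consolidation_ratio(current_pct: int, target_pct: int) -> int:
    """Servers running at current_pct busy that one host can absorb as
    VMs before the host reaches target_pct busy."""
    return target_pct // current_pct

ratio = consolidation_ratio(10, 70)    # 7 lightly-loaded servers per host
hosts_needed = math.ceil(100 / ratio)  # a 100-server fleet shrinks to 15 hosts
```

Even with these made-up numbers, the shape of the argument is clear: an order-of-magnitude reduction in physical boxes is on the table.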

There are many things to write and love about virtualization, but one close to my own heart, which you might not have considered, is the fat-client (G)UI. The fat client has a bad rap because, whilst there is no doubt fat-client GUIs are very enterprise-user friendly and powerful (rich), the management around their deployment sucks. However, virtualization makes the deployment problem go away, quite literally overnight.

People originally thought the zero-footprint browser approach would solve the fat-client deployment issue. However, whilst the browser wasn’t fat, it lacked the UI richness of the fat clients needed to satisfy enterprise users. Years of trying to add rich UI to the browser through ActiveX (I shudder here), scripts (lots), DHTML, Ajax, Flash, etc. have left us in quite a mess. Where is the business logic now? Who supports what, and where? Perhaps fat clients are not so bad after all, if we have solved the deployment issue?

Someone has to say this: Web 2.0 (I use that term loosely to describe any asynchronous, event-driven, web-like UI) is really a fat client in disguise – disguised by the fact that the web browser is now so fat it allows for real-time interactions with a server application written to support code that runs inside the client as well as on the server. I’m not beating up on Web 2.0, just making sure we all agree it’s not really new, for the enterprise user at least.

The way I see Web 2.0 going, we are going to see new 4GL-like visual tools allowing us to build self-deploying rich applications for enterprise users, and that’s cool. However, I for one think that, thanks to virtualization, existing legacy fat-client applications will be around for a lot longer than people ever imagined, and that’s cool too. Unless you have a real business case (the money and time) for the risky approach of ripping out and replacing your legacy applications, I wouldn’t bother just yet. I would concentrate first on making any rich legacy application (old or new) you use today much smarter. There are tools out there that enable this (like OpenSpan), and the business benefits are huge. This lets you focus on your EA strategy (including SOA) over the long term whilst, at the same time, providing real, agile business benefits to your users around what they already use.

And remember, once an application is delivered to a business user (through its UI), it must play nicely with all of the other applications found there. Otherwise, you’ll find out quite quickly that your users won’t be happy when your new Web 2.0 applications turn out to be just as silo’d (un-integrated) as the other applications they have.