
Web2.0 and deferred integration costs

Web2.0 is the sum of:

  • Dead cash chasing investment, and
  • Cheap broadband, and
  • Documented URL query strings, and
  • Working JavaScript (I know!), and
  • CSS awareness, and
  • Data served as XML, and
  • Web apps that do one thing well

The last one is interesting. There's a large collective bet right now that the integration costs of specialised web applications won't be an issue a few years out. Or at least they'll be less of an issue than building software that tries to do everything (aka "Groupware"). Today the Web application landscape doesn't look a whole lot different from departmental IT circa the mid to late Nineties - lots of little things waiting to be hooked together. It turns out that was, and still is, hard work.
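To make that concrete: most of the integration being bet on amounts to calling a documented query string, getting XML back, and gluing the responses together. A minimal sketch in Python (standard library only), where the endpoints, parameters and element names are made up to stand in for two "do one thing well" services:

    # A minimal mashup sketch: two hypothetical "do one thing well" services,
    # each exposing a documented query string and returning XML. The endpoints,
    # parameters and element names below are invented for illustration.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    def fetch_xml(base_url, **params):
        """GET base_url?key=value&... and parse the XML response."""
        with urlopen(base_url + "?" + urlencode(params)) as resp:
            return ET.parse(resp).getroot()

    # Service 1: geocode a place name (hypothetical endpoint and schema).
    point = fetch_xml("http://geo.example.com/lookup", q="Dublin").find("point")
    lat, lon = point.get("lat"), point.get("lon")

    # Service 2: photos tagged near that point (also hypothetical).
    photos = fetch_xml("http://photos.example.com/search", lat=lat, lon=lon)

    # The "integration" is little more than gluing the two responses together.
    for photo in photos.findall("photo"):
        print(photo.get("title"), photo.get("url"))

The plumbing is the trivial part; the bet is that it stays trivial once there are hundreds of these services, and their data models, to wire together.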

If integrating web-based systems turns out to be a non-problem, I think we might be close to the position where we can say there is no aspect of software design that cannot be deferred until later, except the first test. It also has massive implications for the EAI sector.

Some people call web integration "mashups", which goes to show that no matter what, computer types are terrible at giving things names*. A mashup was what I used to do on the weekends.


* Along with POX (Plain Old XML), which seems to be a fashionable acronym with recovering WizzyDeathStar types. I suppose SmallPOX is what runs on Series 60 phones.


April 25, 2006 08:11 AM

Comments

Jeff Atwood
(April 24, 2006 11:38 PM #)

Maybe it's too obvious, but one of the bullet points should probably be "Moore's Law". There's a tipping point for extensive client-side use of JavaScript..

http://www.codinghorror.com/blog/archives/000509.html

.. and for the server-side use of interpreted languages as well:

http://www.codinghorror.com/blog/archives/000573.html

I remember very clearly how long it took to sort a DHTML list via JavaScript on a company-wide standard ~700 MHz P3 back in 2002!

Fredrik
(April 25, 2006 01:52 AM #)

"Maybe it's too obvious"

Well, I'm reading this on a 700 MHz machine that has no problems running modern "web 2.0" sites, and I've been building high-performance production systems in Python since 1995, and have a pretty good understanding of Python and performance, and my spontaneous reaction to both the posts you linked to was "WTF?". It doesn't really sound like you've been *using* the technologies you write about...

Jeff Atwood
(April 25, 2006 08:33 AM #)

"I'm reading this on a 700 MHz machine that has no problems running modern 'web 2.0' sites"

That's a year 2000 vintage PC. Why? I hope your time is worth more than that!

I know from personal experience (mostly in gaming) that one man's "unplayably slow" is another man's "blindingly fast", so I guess YMMV. But most people would consider a 700MHz machine, and peripherals of similar vintage (slow hard drive, slow memory bus, etcetera), terribly slow by modern standards.

"I've been building high-performance production systems in Python since 1995, and have a pretty good understanding of Python and performance"

I didn't say you couldn't build Python apps that performed well. You certainly could. But you had to think about what you were doing. It took skill. We have so much performance on the client and servers now that you don't have to think about it. Interpreted code just runs fast, period.

I guess it's the same reason nobody writes code in assembly any more. Eventually C++ will be like assembly; it'll get pushed up to the very top of the pyramid for those extreme .01% cases. These cases do exist. They're a little.. er.. unusual though. Here's one from Michael Abrash:

http://www.google.com/search?hl=en&q=%22Optimizing+Pixomatic+for+x86+Processors%22

In this three-part article, I discuss the process of optimizing Pixomatic, an x86 3D software rasterizer for Windows and Linux written by Mike Sartain and myself for RAD Game Tools (http://www.radgametools.com/). Pixomatic was perhaps the greatest performance challenge I've ever encountered, certainly right up there with Quake. When we started on Pixomatic, we weren't even sure we'd be able to get DirectX 6 (DX6) features and performance, the minimum for a viable rasterizer. (DirectX is a set of low-level Windows multimedia APIs that provide access to graphics and audio cards.) I'm pleased to report that we succeeded. On a 3-GHz Pentium 4, Pixomatic can run Unreal Tournament 2004 at 640×480, with bilinear filtering enabled. On slower processors, performance is of course lower, but by rendering at 320×240 and stretching up to 640×480, then drawing the heads-up display (HUD) at full resolution, Unreal Tournament 2004 runs adequately well, even on a 733-MHz Pentium III.

Of course, *why* we would want this amazing software renderer, considering that good 3D cards can be had for fifty bucks.. well, I guess that's another WTF.

Scott Mark
(April 25, 2006 12:26 PM #)

I actually like the term mashup and find it a lot more refreshing than many of the other recent namings - BPEL, SaaS, SOA, etc. For one thing it's not an acronym, and I think it's quite descriptive as far as intent goes. It has an I'm-just-trying-to-get-things-done sort of tone that works for me.

But I imagine your mashups were more fun - how about some posts on that? ;-)

John McClean
(April 25, 2006 03:22 PM #)

The integration of a multitude of software services may be what keeps Western hands busy in the future, lest our productivity gains drive us down the path of the American farmers.