Point of origin
July 31, 2007
"For the non-geeks, this means that the feed URI will start with https:, it'll be a secure channel. This just has to happen, because otherwise there's a potential gold mine for a smart bad guy.
What the smart bad guy does is figure out how to (temporarily, locally) hack the DNS, say in a few key Manhattan offices, during trading hours. He sets up a fake sun.com and puts a fake news release in the feed claiming that we're the subject of a major SEC investigation, having first shorted a few million shares. Ouch!"
I doubt https is the way to go for financial statements.
First of all, the SSL/TLS technologies behind https links are, shall we say, web- and scale-unfriendly. What they do is encrypt the "channel" between two parties, so all data travelling over it is hidden from prying eyes. To do that, the two computers must negotiate a secured connection between their IP addresses and maintain it for the length of the session. This is antithetical to how anyone is designing big web systems, which is all about not caring which computer served the data, because you can't care, because the TCP/IP-based web is not a broadcast medium. When you get down to brass tacks, the collective wisdom on scaling websites is about us not having to care that www.amazon.com is backed by a zillion anonymous servers. Generally, the reason the Web doesn't fall over every day, as predicted it would by now, is a froth of caches and content networks. It's not https connections to origin servers.
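Another way to put it: TLS protects bytes in transit, not bytes at rest. A minimal sketch of the distinction (the entry payloads here are made up):

```python
import hashlib

# TLS secures the *channel*; once bytes leave it, nothing vouches for them.
# An aggregator that fetched this entry over https can re-serve anything.
original = b'<entry><title>Q3 results</title></entry>'
tampered = b'<entry><title>SEC investigation!</title></entry>'

# A downstream client can hash what it received, but with no signed digest
# travelling with the data, there is nothing to compare against.
print(hashlib.sha256(original).hexdigest() == hashlib.sha256(tampered).hexdigest())  # False
```

The https channel between the publisher and the first fetcher says nothing about any hop after that.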
Second, let's look at how this financial feed information is going to get picked up and syndicated around the Web. If my computer connects to Sun's via SSL to pick their financials, fair enough, our point to point connection is secured. But who's to say I'm not going to put that data into my "planet money" feed aggregator so that it can be picked up by downstream clients?
Atom RFC4287, which Tim and I worked on, is explicitly designed to allow entries to propagate through other feeds in this way, because relaying and recategorising entries is how people want web syndication to work. There are features in Atom that allow aggregators to identify entries and state their originating source, but they're not secure and are easily subject to spoofing.
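To see how easily those provenance features are spoofed, consider a sketch of a forged entry (the identifiers here are invented for illustration). The atom:source element simply asserts an origin; nothing in RFC 4287 backs the assertion cryptographically:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

# A hypothetical forged entry: atom:source claims the entry originated
# at sun.com, but any aggregator can emit this and it parses just fine.
entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Sun under SEC investigation"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "tag:attacker.example,2007:fake-1"
source = ET.SubElement(entry, f"{{{ATOM}}}source")
ET.SubElement(source, f"{{{ATOM}}}id").text = "tag:sun.com,2007:news"
ET.SubElement(source, f"{{{ATOM}}}title").text = "Sun News Feed"

xml = ET.tostring(entry, encoding="unicode")
print("tag:sun.com" in xml)  # True: the spoofed provenance survives intact
```

A client that trusts atom:source or atom:id for provenance is trusting whoever last serialised the feed.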
One could say there will be restrictions attached to the redistribution of financial statements via Atom; but that would make accessing the feed so much less useful than current mechanisms that there'd be no point.
So, fuggedabout https for serving up quarterlies.
All of this is why you need to sign the data, which Tim mentions next:
"My hope is that if we and a few others start using signatures, the people who write clients will start checking them. This is the Internet, and we're playing with real money and shooting live ammunition; gotta be careful."
That would be great, except no-one is required to do so,
"Atom Processors MUST NOT reject an Atom Document containing such a signature because they are not capable of verifying it; they MUST continue processing and MAY inform the user of their failure to validate the signature." (RFC 4287)
and generally, signing XML is complicated. But you can opt in, and libraries exist, though I suspect aggregator chains will result in altered signed data while leaving the signature as-is, a kind of syndication entropy that will take a few years to clean out. No matter: you don't want to use anything other than Atom RFC4287 if you want to syndicate and sign data.
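That syndication-entropy worry can be sketched in a few lines. Here HMAC stands in for full XML-DSig (which signs a canonicalised form, and is considerably more involved); the key and entry bytes are made up. The point is what happens when an aggregator re-serialises signed bytes but forwards the old signature:

```python
import hashlib
import hmac

# Hypothetical publisher key and entry; HMAC is a stand-in for XML-DSig.
key = b"publisher-secret"
entry = b'<entry><title>Q3 results</title></entry>'
sig = hmac.new(key, entry, hashlib.sha256).hexdigest()

# An aggregator "helpfully" pretty-prints the entry on relay, but passes
# the original signature along unchanged. The bytes no longer match.
reformatted = b'<entry>\n  <title>Q3 results</title>\n</entry>'
relayed_sig = sig  # stale

valid = hmac.compare_digest(
    relayed_sig, hmac.new(key, reformatted, hashlib.sha256).hexdigest())
print(valid)  # False: altered signed data, same signature
```

This is exactly why XML-DSig depends on canonicalisation, and why I expect signatures to keep breaking quietly as entries pass through aggregator chains that reflow markup.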