Andre Durand

Discovering life, one mistake at a time.

Archive for the ‘Musings’ category

Survival of the Marketingist

November 13, 2001 By: Andre Category: Musings

This morning I pondered why so many inferior products, services and technologies end up succeeding, and why so many other clearly superior products, services and technologies ultimately fail. While this is certainly not the rule, neither is it the exception.

My sense is that when it comes to technology, ‘good enough’ is often good enough, but when it comes to the marketing of technology, ‘good enough’ isn’t and you’d better plan to be the fittest, or you’ll end up the deadest.

I guess it just makes good ol’ common sense; after all, if people don’t know about it, what good is it? As my mentor Bob Grayson used to say, “…there’s nothing worse than doing well that which never should have been done in the first place.” And to this thought, I’ll add, “there’s nothing more useless than doing anything and not telling anyone about it.”

Alice, Molasses, Black Holes and Rainbows…

November 13, 2001 By: Andre Category: Musings

Breadth of scope. That’s what I’m talking about. It takes an awful lot of it to successfully bring to life technology projects which strive to change the world.

Rainbows, Molasses and Alice in Wonderland

November 13, 2001 By: Andre Category: Musings

I woke up this morning to find an email containing a draft of a new book, written by Marshall Rose, which will be published by O’Reilly. The book, a technical work written for developers and engineers, focuses on a new application-level framework protocol called BEEP. BEEP tackles a big and current problem for many protocol designers by taking the best practices of application protocol design and condensing them into a single framework which is inherently extensible, and promises to reduce the inefficiency of designers who consistently reinvent the wheel.

Somewhere between the 2nd chapter and a stroll to refill my coffee mug, I got this fleeting sense of falling deeper and deeper into Alice in Wonderland. It’s as if my feet were stuck in molasses while at the same time I was chasing a rainbow. Metaphorically speaking of course, I didn’t make the coffee…

I’ve actually never read a book quite this technical before, and I don’t know that after this one I will ever read one again. Somewhere in the crosshairs of coincidence, between my pursuit to understand the technologies that will help me pursue project iamme and some very well written commentary on the ‘why’ behind BEEP, I find myself sinking deeper and deeper into technology, and moving farther and farther from my starting point. It’s like turning the resolution up on a fractal image; there’s just no end to it.

Well, I’ve decided this morning that I will print and read this book start to finish. For someone I gather to be a very technical person, Marshall has a pleasant and witty writing style which makes for easy reading, relatively speaking. While I was never a developer or the real brains behind it (hats off to Bryan Field-Elliot), I did develop a similar project several years ago (MindWire) with likely as much vigor but only a fraction of the perspective or knowledge. I just hope that when I’m done, I’m able to find my way out of the maze and back into the real world, where money talks, and only on occasion does the best implementation win.

My thoughts…

November 11, 2001 By: Eric J. Bowersox Category: Musings

Posted at your request!

I was doing some thinking about this yesterday, so let me tell you what I’m thinking, and see if you feel any of it is useful.

First of all, it’s my opinion that most, if not all, of the user’s personal information would need to be stored on the client side. Server-based storage solutions are too open to fraud and abuse for some people’s tastes.

However, if all information is stored on the client side, then that offers little guarantee as to the integrity of the information therein; client-side information must always be presumed to be “untrusted” unless it can be verified in some manner. PKI offers some hope of doing this, but a sufficiently sophisticated attacker could still spoof the legitimate client-side software and return bogus information.

That given, what I’m thinking of is a piece of client-side software that would be like “identd on steroids.” Identd is a service originating on UNIX that provides remote systems the ability to find a user identifier associated with a TCP connection. (The identd protocol is specified in RFC 1413.) This would be a program that managed the user’s entire collection of personal information, and would be explicitly queried by server-side code to return pieces of personal information. The program would have explicit instructions recorded somewhere that would say, for instance, “you may give out information X as needed, you should ask me before giving out Y, and you may not give out Z at all.” This could be further modified by which entity is doing the asking, and with further qualifications such as “the entity may store this piece of information persistently” or “the entity may NOT store this piece of information persistently, but must erase it after this transaction.”
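A minimal sketch of what that agent’s disclosure rules might look like, in Python. The class, policy names, and requester strings here are all illustrative, not from any existing package; a real agent would sit behind a network protocol rather than a method call:

```python
from enum import Enum

class Policy(Enum):
    ALLOW = "allow"   # "you may give out X as needed"
    ASK = "ask"       # "you should ask me before giving out Y"
    DENY = "deny"     # "you may not give out Z at all"

class InfoAgent:
    """Toy model of the 'identd on steroids' agent: it holds the user's
    personal information and answers queries according to explicit,
    per-attribute (and per-requester) disclosure rules."""

    def __init__(self, info, default_policy=Policy.DENY):
        self.info = info              # attribute name -> value
        self.rules = {}               # attribute name -> Policy
        self.overrides = {}           # (requester, attribute) -> Policy
        self.default_policy = default_policy

    def set_rule(self, attribute, policy, requester=None):
        # A rule can apply globally, or be overridden for one requester.
        if requester is None:
            self.rules[attribute] = policy
        else:
            self.overrides[(requester, attribute)] = policy

    def query(self, requester, attribute, ask_user=lambda a, r: False):
        policy = self.overrides.get(
            (requester, attribute),
            self.rules.get(attribute, self.default_policy))
        if policy is Policy.ALLOW:
            return self.info.get(attribute)
        if policy is Policy.ASK and ask_user(attribute, requester):
            return self.info.get(attribute)
        return None                   # denied, or the user said no

agent = InfoAgent({"email": "user@example.com", "ssn": "000-00-0000"})
agent.set_rule("email", Policy.ALLOW)
agent.set_rule("ssn", Policy.DENY)
print(agent.query("shop.example", "email"))   # -> user@example.com
print(agent.query("shop.example", "ssn"))     # -> None
```

The "may / may not store persistently" qualifications would be a second field returned alongside the value, rather than a policy on whether to answer at all.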

Of course, as the RFC’s author explicitly states in the “Security Considerations” section, “The information returned by this protocol is at most as trustworthy as the host providing it OR the organization operating the host.” Using some form of PKI, though, it may be possible to ensure with a certain degree of confidence that the client-side program making the responses is what it says it is. A sufficiently-determined attacker, though, could get around these safeguards. (This will be true no matter how much time one spends on coming up with safeguards. Part of the effort, therefore, must be focused on coming up with contingency plans for what happens if something is compromised.)

Were I writing a piece of client-side code like this, I would concentrate on two implementations for starters: a Windows executable and a Java executable. The Windows executable is necessary to create an efficient implementation on the vast majority of systems in use, and will provide a valuable base of code to use in creating other implementations later on; the Java executable will be available as a stopgap on other systems, and as the “executable of last resort” on systems that are not supported by native code, but do support a JVM. Other implementations, as for non-PC devices, could be based on one of these two. Naturally, the persistent repository of personal information that this program will use MUST be encrypted, as it would provide a tempting target for other malware on the same system. The encrypted repository should be protected with a key stored externally to the system…as a password in the mind of the user, for instance, or as biometric information, or as a hardware token, or as any combination thereof. (What can be done about recovery if the user forgets the password? Would recovery even be worthwhile to design in, given that it represents another potential avenue of compromise?)
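As a sketch of the “key stored externally…as a password in the mind of the user” idea, here is how the password-to-key derivation and a tamper-detection seal might look, using only the Python standard library. The actual cipher step is omitted (the stdlib has no AES); what’s shown is the key stretching and an integrity tag, with all parameters illustrative:

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # Stretch the user's password into a 32-byte repository key. The salt is
    # stored alongside the repository; the password itself never touches disk.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def seal(key: bytes, repository: bytes) -> bytes:
    # Append a 32-byte HMAC tag so tampering by other software on the same
    # system is detectable when the repository is next opened.
    return repository + hmac.new(key, repository, "sha256").digest()

def open_sealed(key: bytes, sealed: bytes) -> bytes:
    repository, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, repository, "sha256").digest()):
        raise ValueError("repository tampered with, or wrong password")
    return repository

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
sealed = seal(key, b'{"email": "user@example.com"}')
assert open_sealed(key, sealed) == b'{"email": "user@example.com"}'
```

Note that a wrong password and a tampered repository are indistinguishable here, which is exactly the recovery dilemma raised above: there is nothing to “reset” against.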

I don’t know if I’m getting too deep here for your preliminary analysis, or even if this is useful at all, but you asked me to tell you what I think, and here’s a bunch of thought…
To which you replied:

First of all, I think that the protocol and software must support an ability for the user to host (on the client) his identity. Yes, I think it should be encrypted, but it also should be done in such a way that it could be stored on a smartcard for instance.

As far as trust is concerned, there need to be mechanisms for providing verification/certification/authentication of the identity. This will need to be thought through a bit, but should not break the peer-to-peer, store-at-the-fringes aspect of the concept.

Server-side storage of Identity information, otherwise referred to as Trusted Host storage, should be at the option of the Identity holder. They should be able to select one or more ‘hosts’ of their identity or any part of it.

And my response:

Well, if the info store is on a smartcard, that solves the problem of malware being able to get at it. Assuming the user doesn’t just leave the smartcard in the slot. 🙂

And the smartcard can be used in conjunction with biometrics and/or passwords for the user to authenticate himself to the software. Remember, authentication is based on one of three factors–something you know, something you have, or something you are–and should ideally be based on two or more of these factors.

Authenticating IDs should be very very simple…perhaps one server (or more than one) someplace that can look at a “universal identifier,” and just return a yes-or-no as to whether this identity was properly issued. It would not know who any particular identity referred to, but it would be able to say “this one is valid, and that one is not.” I’ve got some further thoughts about the construction of these universal identifiers, but that may be getting too detailed for this discussion. Further certification would probably be done through public/private keypairs that were shared among some combination of the user, the service, and that central validity checker.
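One hypothetical way such a yes-or-no validity service could work is for the issuer to embed a keyed tag in each identifier: the identifier carries no personal data, but the service (holding the issuing secret) can confirm it was properly issued. A sketch, with all names and sizes illustrative:

```python
import hashlib
import hmac
import os

ISSUER_SECRET = os.urandom(32)   # held only by the validity-checking service

def issue_identifier() -> str:
    # A "universal identifier": 16 random bytes plus an 8-byte tag proving
    # proper issuance. It says nothing about who the identity refers to.
    raw = os.urandom(16)
    tag = hmac.new(ISSUER_SECRET, raw, hashlib.sha256).digest()[:8]
    return (raw + tag).hex()

def is_valid(identifier: str) -> bool:
    # The yes-or-no check: "this one is valid, and that one is not."
    data = bytes.fromhex(identifier)
    raw, tag = data[:16], data[16:]
    expected = hmac.new(ISSUER_SECRET, raw, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

uid = issue_identifier()
print(is_valid(uid))          # True
print(is_valid("00" * 24))    # False
```

The further certification mentioned above (keypairs shared among user, service, and checker) would layer on top of this; the tag only answers “properly issued?”, not “who?”.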

And yes, I am not precluding people wishing to store personal info on the server side. Some people are more trusting than others. In fact, that assumption was implicit in my discussion of giving another entity personal information with the stipulation “you may store this” or “you may NOT store this, erase it when we’re done here.”

A Case for SIPML

November 11, 2001 By: Andre Category: Musings

Session Initiation Protocol (SIP) has become the de facto standard for controlling the way in which two peers initiate communications. Originating in the mobile phone and wireless operator industry, the protocol has recently found its way into the Windows XP OS, and promises to play an important role in the highly strategic area of peer-to-peer communications. Unfortunately, or perhaps fortunately for this XML fanatic, the SIP protocol is not based upon XML, and therefore lacks the corresponding flexibility and extensibility offered by XML-based meta-protocols. This essay explores the opportunity associated with creating a SIP sister protocol based upon XML.

A Case for SIPML

November 11, 2001 By: Andre Category: Musings

by Andre Durand, November 11, 2001

Session Initiation Protocol, otherwise referred to as SIP, is a protocol specifically designed to facilitate the initiation of sessions (“conversations”) between two devices or applications. In its simplest form, the SIP protocol defines the text and format of information exchanged between two devices or applications just prior to their beginning a communication. Think of this as ‘header’ information: an exchange of capabilities, a negotiation of lowest-common-denominator capabilities, addressing information, and so on.

While SIP is a peer-to-peer protocol, in practical use additional server infrastructure supports the discovery and registration of SIP clients, distinguished by software components providing Registrar, Location and Proxy capabilities.

The SIP protocol, and the corresponding infrastructure designed to support SIP communications, has become the de facto standard for the cellular telephony industry, and has recently expanded from there to provide functionality to peer-based IP applications through the inclusion of a SIP stack in the new Windows XP desktop.

In conversations with individuals responsible for the design of SIP, a simple rationalization explained why SIP was not based upon XML: “XML was not a standard when SIP was created, or we would have used it” was the answer I received.

Investigating the competitive relationship between SIP, with its corresponding distributed client/server infrastructure, and Jabber, a system for real-time routing of XML, we came to the conclusion that SIP was the superior protocol for session initiation, but lacked the open-standards flexibility and extensibility offered by XML.

In piecing together the various elements of a system which I believe would serve as the foundation for a P2P operating system, I think it would be very interesting to take the fundamentals of SIP and recreate them in XML, in effect creating a new protocol: call it SIPML.
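As a sketch of what such an XML-based initiation message might look like, here is a hypothetical SIPML invite built with Python’s standard XML tooling. Every element and attribute name here (session, from, to, capabilities, media) is invented for illustration; no SIPML specification exists:

```python
import xml.etree.ElementTree as ET

# Hypothetical SIPML invite: the 'header' exchange SIP performs in plain
# text, recast as an extensible XML document.
invite = ET.Element("session", attrib={"type": "invite", "id": "a1b2c3"})
ET.SubElement(invite, "from").text = "sip:alice@example.com"
ET.SubElement(invite, "to").text = "sip:bob@example.com"

# Capability exchange: the callee would intersect this list with its own
# to negotiate the lowest common denominator.
caps = ET.SubElement(invite, "capabilities")
for codec in ("audio/pcmu", "audio/gsm", "text/plain"):
    ET.SubElement(caps, "media").text = codec

print(ET.tostring(invite, encoding="unicode"))
```

The point of the exercise is that any XML-aware middleware could route, store, or extend this message without knowing anything about session initiation, which is precisely what plain-text SIP gives up.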

The advantages and implications of a revised and updated session initiation protocol that leveraged XML would be immediate and obvious, founded in increased flexibility and an ability to extend conversations beyond simple communication initiation, using open standards and a well-known framework.

SIP, through SIP extensions, today allows communications between SIP peers to extend beyond simple conversation initiation, but it does so in a completely proprietary way, and therefore cannot take advantage of the growing body of middleware designed to manipulate, store and route XML.

The commercial opportunities and ramifications of SIPML are likely much larger than what is contemplated in this paper. In fact, I’d contend that some smart programmer and a few influential individuals could go a long way toward creating a stand-alone business on the concept; however, for purposes of this discussion, SIPML would be only one leg of a stool designed to provide the foundation for a P2P operating system, as outlined in prior essays.

A more detailed analysis providing a larger framework of the basic components contemplated in a P2P operating system will be discussed in coming essays.

Digital Identity Management – Objectives

November 09, 2001 By: Andre Category: Musings

I continue in my fascination with the social and business ramifications and opportunities surrounding the concept of digital identity. As I wrote before, much of this thinking has festered at the fringes of my creative boundaries for several months, only now taking shape into thoughts which might some day find their way into a working project. This morning I thought it time to outline some of these thoughts in a rough format, which I’ve enclosed in this capsule.

Digital Identity Management – Objectives

November 09, 2001 By: Andre Category: Musings

I wanted to take a moment and outline some rough thoughts on what might constitute the perfect digital identity management system. Here is what I came up with.

We (defined as people, devices, applications or services) are each responsible for the actions, commands and communications which correlate to our digital or online identity.

We have the right and ability to control all aspects of our digital or online identity.

We agree to provide accurate and current information for our online identity.

Systems, applications and devices can query our online identity, and make copies where appropriate.

Our online identity may be stored in more than one location at the same time, for backup, archival, caching or redundancy purposes.

Our online identity allows for multiple mechanisms of verification and validation, which serve to bind our online identity with our physical one.

Methods of validation may include algorithms such as public key, biometrics or an unbroken chain of ‘trusted’ relationships.

Our online identity, its storage, format and use must provide maximum safeguards ensuring accuracy and legitimate use, with built-in protections which deter it from being stolen, copied, spoofed, or deliberately or accidentally used by an unauthorized person or system.

Each individual, person or device is allowed to have as many aliases to their identity as they wish, provided these aliases all correspond to only one valid online identity, which can be independently verified through a multitude of means.

We can control who and what has access to our online identity, and what elements of information they are allowed to see and use.

The system needs to be architecturally resilient, reliable, redundant, secure, private and flexible.

Our online identity should be directory server independent, but allow us to copy certain aspects of our online identity to directory servers when applicable, such as NDS, ADS, LDAP, Passport.
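A few of the objectives above (many aliases resolving to one identity, and control over who sees which elements) can be sketched as a data structure. Everything here is illustrative; a real system would distribute the registry and enforce visibility cryptographically rather than in application code:

```python
class Identity:
    """Toy identity record: one canonical identity, many aliases, and
    per-viewer control over which attributes are visible."""

    def __init__(self, canonical_id):
        self.canonical_id = canonical_id
        self.aliases = set()
        self.attributes = {}     # attribute name -> value
        self.visibility = {}     # attribute name -> set of allowed viewers

    def add_alias(self, alias, registry):
        # Every alias maps back to exactly one valid online identity.
        registry[alias] = self.canonical_id
        self.aliases.add(alias)

    def set_attribute(self, name, value, visible_to=("*",)):
        self.attributes[name] = value
        self.visibility[name] = set(visible_to)   # "*" means anyone

    def view(self, viewer):
        # Return only the elements this viewer is allowed to see and use.
        return {n: v for n, v in self.attributes.items()
                if "*" in self.visibility[n] or viewer in self.visibility[n]}

registry = {}
me = Identity("id-0001")
me.add_alias("andre@example.com", registry)
me.add_alias("adurand", registry)
me.set_attribute("name", "Andre", visible_to=("*",))
me.set_attribute("phone", "555-0100", visible_to=("family",))
print(registry["adurand"])    # -> id-0001
print(me.view("stranger"))    # -> {'name': 'Andre'}
```

Storing copies of this record in multiple locations (backup, caching, directory servers) is then a replication problem layered on top, not a change to the record itself.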

Peer-to-Peer: Role in Identity Management

November 02, 2001 By: Andre Category: Musings

We’ve heard a lot about P2P lately, and the ramifications of a network where computational power is completely distributed seem like a fascinating prospect. However, how many services do we see that are truly P2P outside of file sharing? This short essay speaks to the potential of P2P as the foundation for identity management, probably one of the most important components of our future access to a growing number of web services.

P2P and Identity Management

November 02, 2001 By: Andre Category: Musings

by Andre Durand

Identity management, and the discussion of Microsoft’s plans to become the default trusted host of one’s identity (MS Passport) in their new .Net web services strategy, is certainly a hot topic amongst the service providers and carriers who today manage gateway services to the Internet and see losing this key strategic component as unacceptable. Having identified a motivation for major ‘gateway’ service providers (ISPs) to utilize alternative means of identity management, while focusing on a different yet related topic having to do with presence, I proposed the concept of leveraging extensions to LDAP to capture the opportunity to distribute web services identity management and presence.

At a high level, this concept makes a lot of business sense, after all, LDAP today manages Intranet (internal network) access to applications and services, why not extend the protocol to accommodate external web services (Internet services), and provide a migration path for existing service providers that provide SMTP, Web and other services through LDAP?

Now I’d like to take that concept one step further, extending distributed identity management to an extreme scenario: peer identity management. The motivational case for this is simple: the person I trust the most is myself, so why should I not have the ability to host and manage my own identity? In this scenario, trusted hosting of identity is not the default model but a backup model, where identity is first managed by the most granular component of the network, a dedicated or even dynamically discovered node or “peer,” and secondarily managed upstream by a trusted host of my selection.

Proponents of a centralized approach to identity management (i.e., Microsoft) would argue that highly specialized service providers can do a better job hosting identity, and I agree that operationally and logistically this is true. But the question is not whether they “can” do a better job, or even whether they are more convenient, which is no doubt the foundation for any Microsoft assumption of dominance in this emerging opportunity, but a question of choice. Correctly structured, a system and protocol for peer-based identity management would not limit the opportunity for an individual to select a trusted host, or even multiple trusted hosts.

I propose, therefore, that any identity management system or protocol should first allow me to create and host my own identity. Other peers or web services would then have the ability to discover my identity by first querying my node, and secondarily querying my trusted identity host. Starting with the distribution of identity management, one could construct an entire web services model, based first on the principles of maximum distribution (P2P) and secondarily relying upon trusted hosts of services or data.
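The lookup order proposed above can be sketched in a few lines: ask the peer’s own node first, and only fall back to the trusted host(s) the identity holder selected. The host name and the query functions below are hypothetical stand-ins for real network transports:

```python
def discover_identity(subject, query_peer, trusted_hosts, query_host):
    """Return the subject's identity record, preferring the peer itself."""
    record = query_peer(subject)        # 1. the peer hosts its own identity
    if record is not None:
        return record
    for host in trusted_hosts:          # 2. backup: user-selected trusted hosts
        record = query_host(host, subject)
        if record is not None:
            return record
    return None                         # not found anywhere

# Toy in-memory stores standing in for network queries; the peer is "offline".
peer_store = {}
host_store = {"idhost.example": {"alice": {"name": "Alice"}}}

rec = discover_identity(
    "alice",
    query_peer=lambda s: peer_store.get(s),
    trusted_hosts=["idhost.example"],
    query_host=lambda h, s: host_store.get(h, {}).get(s),
)
print(rec)   # -> {'name': 'Alice'}
```

Note the design choice: the trusted host is a fallback in the *read* path, which is what keeps hosting optional without changing the protocol for anyone who does choose a host.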

See Also:

Ping Identity Bill of Rights and Principles

Ping Digital Identity