This book is pre-release and is an evolving work-in-progress. It is published here for the purposes of gaining feedback and providing early value to those who have an interest in resource oriented computing.
Please send any comments or feedback to: email@example.com
© 2018 Tony Butterfield.
All rights reserved.
Feel free to skip this chapter if you are not interested in how resource oriented computing started life and evolved into what it is today. Nothing in this chapter is a prerequisite for understanding the chapters that follow.
Back in 2000, I'd just wrapped up with a start-up called Component Group. We had been working on commercialising a platform called CDAIS1, which grew out of our experience working for large utility companies that were struggling to make object-oriented development succeed in large teams of mixed-experience developers. The premise was to constrain developers to work within components that communicated via a uniform interface using only XML2. Unfortunately, bad timing in the market — the Dot-com bubble3 burst — brought a premature end to that.
At that time I found work contracting for Hewlett Packard Labs in Filton, Bristol. There I interviewed for a role as a developer in a team of researchers who, according to the only information I could obtain from the recruitment agent, were working with XML and Java. During the interview, I realised this was a perfect match. The team had been researching an approach to building software using components that communicated with XML. Their focus was slightly different from that at Component Group: they were looking to build the backend software for XML messaging ecosystems that grew out of industry consortium groups. Many industry groups had established open interoperability schemas using XML at that time, for example for things like procurement and supply chain management.
The research team consisted of Peter Rodgers as project lead, along with Russell Perry and Royston Sellman. All three of them had extensive experience in industrial research and in working with standards bodies; however, they lacked commercial software development experience. That was what I, and they, hoped I could bring to the team. When I joined, they had a second-generation prototype of what was then called Dexter. Dexter was an XML processing engine that could be configured to run applications. The focus was on using XML and XML processing technologies to create web interfaces without programming as such: XML configuration determined how pre-built components were connected and what configuration those components had.
The killer demo for Dexter at the time was a digital jukebox which streamed mp3 to clients and let them browse and select tracks via an HTML web interface. The demo showed how a third-party library (to stream mp3) could be wrapped in an XML interface, and how XML could act as the data source, the representation for data being processed, and the format for serving the data.
Dexter, at the time, had an internal architecture modelled on CPUs. Instructions came from an XML document, were matched up to components, and finally had their inputs pre-fetched via an abstracted data layer that could access either external data via URLs or internally stored register values. The process was quite inefficient because each step was asynchronously decoupled using a separate queue and processing thread.
More to come soon, once the "Museum of NetKernel" is re-opened. (The museum is a mythical vault containing runnable versions of every NetKernel that has ever existed.)