Port's Pot
Sunday, 11 May 2008
Let's see... did he say turn gas on then strike match or strike match first then the gas?
Mood:  bright
Now Playing: Fry Daddy Bisque - Chef washes socks in the bouillabaisse (much egg many faces)
Topic: Ultrasounds

If you ever want to work semantically on the internet, you first have to be able to interoperate with data that sits between your own processing needs and some other processing demand.

If you can do that, congratulations. That was the first step. You now need to make that interoperable data portable, and then you have the wheels for functionality.

Functionality is what we call any active processing a software body (an "application") can perform on a body of data treated as content. If everything the application does is contained in that one body of functionality, deals with exclusive content, and never reaches outside the application except to send the user elsewhere to do something else, the application is a monolith, typical of traditional "object" oriented architecture.

Our quest here is to describe what is appearing in the industry, spotting the attributes that let us track whether and where the technology may go in the seen and unseen landscape.

Interoperability becomes necessary whenever two or more applications bring their functionality to a common body of data.

Interoperation is fundamental to the distribution of functionality.

Interoperability is the core argument in the controversy surrounding the standardization of Microsoft's OOXML Office document XML formats (a specification of more than 6,000 pages) as opposed to the simpler XML standard, ODF (measured in hundreds of pages).

Microsoft apparently does not want other applications working intimately with its document and programming files, so it drags its feet on meeting openness standards.

We do not see Microsoft putting energy into interoperation, because interoperation raises the probability that an outside application interacting with Microsoft's own data domain will diffuse the user's view of the market and dilute client lock-in.

Interoperation fundamentally demands the data be in a universal form so the content can be differentiated from non-content data.

That interoperability lets two adjacent processes operate on the data between them, which provides a foundation for moving data from one processing node to another.

The basic terminology then describes the movement of data from one processing node to another: an interoperation between a pair of connected nodes can be carried further by letting one of the nodes perform a similar interoperative exchange with another adjacent node. This is a sophisticated form of data portability called transactioning, and it is fundamental to various mainframe operating systems.
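
To make the idea concrete, here is a minimal sketch in Python, with made-up node names and record fields, of data in a universal form being handed from one processing node to an adjacent node, each node adding its own handling history:

import json
from datetime import datetime, timezone

def make_record(payload):
    # Data in a universal, self-describing form (a plain JSON-able dict)
    # plus a history of which nodes have handled it.
    return {"payload": payload, "history": []}

def process_and_hand_off(record, node_name, work):
    # One interoperation step: the node processes the content locally,
    # notes the event, and the record is ready to pass to the next node.
    record["payload"] = work(record["payload"])
    record["history"].append({
        "node": node_name,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return record

record = make_record({"order_id": 42, "status": "new"})
record = process_and_hand_off(record, "order-entry", lambda p: {**p, "status": "accepted"})
record = process_and_hand_off(record, "billing", lambda p: {**p, "status": "invoiced"})
print(json.dumps(record, indent=2))

The chain can be extended to any adjacent node that understands the same record shape, which is the point of keeping the data in a universal form.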

Transactioning begins when one node performs a process that triggers transport of data, enabling a collaborative exchange of meaning and ownership with another node. Just like humans run their workflows.

This "event processing" and "transactioning" are fundamental activities necessary to provide software functionality to a mass of data.

"Event processing" is most commonly pictured in manufacturing process automation.

"Transactioning" is mostly thought of as something performed in financial systems automation.

But without event processing AND transactioning, neither kind of automation is possible.

If you intend to work in that direction, you first need to "tag" each significant word and image of content (XML markup is applied around the text or image to tell the machine the "name" and other properties of that particular text or image), giving the machine a way to apply industry-specific meaning to bodies of content.
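
As a purely illustrative sketch (the vocabulary and element names below are invented, not any industry standard), tagging can be pictured as wrapping significant words in elements that name their meaning:

import xml.etree.ElementTree as ET

# Hypothetical mini-vocabulary: significant words mapped to machine-readable names.
VOCABULARY = {"acme": "company", "invoice": "document-type", "paris": "place"}

def tag_content(text):
    # Wrap each recognized word in an element naming its meaning;
    # unrecognized words stay as plain character data.
    root = ET.Element("content")
    last = None
    for word in text.split():
        key = word.strip(".,!?").lower()
        if key in VOCABULARY:
            last = ET.SubElement(root, VOCABULARY[key])
            last.text = word
            last.tail = " "
        elif last is None:
            root.text = (root.text or "") + word + " "
        else:
            last.tail = (last.tail or "") + word + " "
    return ET.tostring(root, encoding="unicode")

print(tag_content("Acme mailed the invoice from Paris yesterday."))
# <content><company>Acme</company> mailed the <document-type>invoice</document-type>
# from <place>Paris</place> yesterday. </content>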

That's what Yahoo announced back in March 2008.

"Yahoo’s support for semantic web standards like RDF and microformats is exactly the incentive websites need to adopt them. Instead of semantic silos scattered across the Web (think Twine), Yahoo will be pulling all the semantic information together when available, as a search engine should. Until now, there were few applications that demanded properly structured data from third parties. That changes today."

This is a baby step to bring the available data throughout the "known" world to a machine "knowable" state. It's the "seamless" nature of a virtualized mass of data that makes discovering unknown relationships embedded within the data mass a matter of course and very valuable.

Bringing data into that state allows different applications on various platforms to work with the same data across all its uses. That's interoperability.

Interoperability allows for virtualization of the interface between two interoperating machines.

The next baby step to perform is data portability. Yahoo announced that step the other day.

Where to next?

We are advancing toward a point where any desired amount of processing may be done within the objects performing the event processing and transactioning, local to the data's storage and protection, allowing for secure, granular functionality at any data point.

This takes tremendous burdens off centralized infrastructure and makes the distributed processing devices more valuable to the overall ecology.

Thus, I commend this article to your reading for your edification toward the future conversations we'll be having.

I know this stuff is pretty heavy for a snookywookums to read but you gotta grow up fast in this world, bucky.

http://blogs.zdnet.com/service-oriented/?p=1102

May 11th, 2008

Is anyone ready to process a trillion events per day?

Posted by Joe McKendrick @ 12:04 pm

A typical company deals with millions, if not billions, of events in a single day, and most, if not all, of these events are still handled manually, by people.

"he value of complex event processing, overall, can be summarized as improving situation awareness,” Schulte said. “Simply put, that is just knowing what is going on, so you can figure out what to do.” The benefits of complex event processing, Schulte said, include better decision quality, faster response times, reduced information glut, and reduced costs.

(more at URL)
-------------

I don't think much more needs to be said to help you grasp how pervasive the need is for event-handling technology, and for transactioning technology to track the events handled. The potential for innovation in real-time metrics, quantification and qualification of business processes, and the impact on business design, is enormous.

And that's just the first step, executable without touching the legacy code or the workflows. It is the simple virtual fitting of a sensing and control feedback system to the existing electronic interfaces, making data available to the machine in some form (any form, because the known content has been semanticized and search efforts have further extended that knowledge).

At this point in the architecture no control actuation is available without further outfitting, but this simple process alone can put a corporation's or community's electronic world in a semantically knowable state worth fortunes.

That's the first half of learning to do the automation walk.

And that is achieved simply by outfitting the various data points with local processing resources that are aware of event triggering and capable of transaction processing and audit history. That is a foundation for true trusted-but-verified human interaction, measured at the point of contact with the human... not buried in a remote server farm.
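
A toy sketch of what such an outfitted data point could look like (the class, the event name and the state shape here are hypothetical, not taken from the article):

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LocalNode:
    # A data point with local processing: it reacts to events,
    # applies the change as a transaction, and keeps its own audit history.
    name: str
    state: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def on_event(self, event_type, data, apply):
        before = dict(self.state)
        self.state = apply(self.state, data)
        self.audit_log.append({
            "node": self.name,
            "event": event_type,
            "at": datetime.now(timezone.utc).isoformat(),
            "before": before,
            "after": dict(self.state),
        })

dock = LocalNode("warehouse-dock-3")
dock.on_event("pallet_scanned", {"sku": "A-100", "qty": 12},
              lambda state, d: {**state, d["sku"]: state.get(d["sku"], 0) + d["qty"]})
print(dock.state, dock.audit_log[-1]["event"])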

Where possible, measure closest to the point of use and provide service local to your event. If you have the capability to process in that way, you can press on. Without that, you will stumble in the effort to execute distributed processes in a deterministic control state.

Once (and not before) that control-feedback framework is available and applied to the various data points in a business, functionality may then be applied at each data locale to perform the activities needed. That processing has the opportunity to provide true parallel processing across many distributed clients and to revolutionize workflow for a productivity leap similar to the OLE/COM age. I believe the impact will be deployed more quickly and adopted more quickly, at a much larger scale.

In its most fundamental incarnation, the processing nodes allow metric systems to be built for proper critique and assessment of human-actuated business processes. That capability alone is of huge value to a business immediately, even if automation of those work practices is years away.
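
And once those audit histories exist, the first metrics are almost free. For example, continuing the hypothetical audit-log shape sketched above:

from collections import Counter

def events_per_node(audit_logs):
    # Count handled events by node across many data points:
    # a first, human-readable metric on a human-actuated process.
    return Counter(entry["node"] for log in audit_logs for entry in log)

logs = [[{"node": "warehouse-dock-3", "event": "pallet_scanned"}],
        [{"node": "front-desk", "event": "visitor_signed_in"},
         {"node": "front-desk", "event": "visitor_signed_out"}]]
print(events_per_node(logs))  # Counter({'front-desk': 2, 'warehouse-dock-3': 1})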

But the immense value lies in learning to create software services that enhance, elevate or replace the human task while applying processing resources local to the person executing the transaction, on both the "consumer" side and the "supplier" side.

Another phrase from the above article: "Most of these events are not captured or automated in enterprises"

This means enabling all these new data points opens up dimensions of stratified associations and relationships. In many ways these formerly unseen assets will provide useful feedback, viable new methods, and potentially new business aptitudes and directions for the automating business.

So why do we need to do this? “We have to record events using event objects so computers can receive them and do computations on those events,”

Precisely. They do those things much more reliably and accurately.


Posted by Portuno Diamo at 9:31 PM EDT
Updated: Monday, 12 May 2008 12:14 PM EDT
