Port's Pot
Thursday, 15 May 2008
What's prior art when you have a Da Vinci?
Mood:  a-ok
Now Playing: Flaming Furries - Skunks get too close to basement furnace and become frantic stink bombs (awful scenes)
Topic: Growth Charts

Did you ever get through with a conversation and say "gee, I wish I had said..."? Well, the Yahoo posting forum format doesn't allow for much follow up because, when you do follow up, those who don't want you to read can easily bury the comments with their own inane postings.

So, if you can't stomach reading the Yahoo VCSY board (believe me, more sympathetic I could not be), I thought you might want to see the MVC discussion from Yahoo put in a more encapsulated view.

Therefore, I'll chop and paste my words of relevant revelation to the heathen who rage too much.

PLUS, there are things I wanted to say after I had already posted there, so these posts distilled into this post will also have additional verbiage by me and some editorial corrections (also by me - it's all about me, me, me.).

Concerned about authenticity and purity? Read the Yahoo posts associated with the timestamps to see just what is new and what is old.

So, (just imagine you've dropped into the middle of a conversation).

begin thread: http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=4155&mid=4155&tof=5&rt=1&frt=1&off=1

pick up @ 14-May-08 02:37 pm

portuno: Can ANYBODY tell us what is "gibberish" about the claims in 744?
http://www.google.com/patents?vid=USPAT6826744

sw_mail: Yes, 744 is a framework that implements the MVC design pattern, first described by Trygve Reenskaug in 1979. Yet, 744 is dated 2004. The facts hurt. http://en.wikipedia.org/wiki/Model-view-controller

computerguy: I'm a professional Java developer who does a lot of work with MVC in Swing GUIs. I had no idea the pattern was that old. Thanks for the link!

portuno: Notice the "professional Java developer" makes my case all by himself with the words: "does a lot of work with MVC in Swing GUIs".

We're talking about much more than a GUI model controller, "computerguy". We're talking about extending what facility and wonders MVC can accomplish with interfaces into the actual fabric and material of the elements being used to construct the applications.

Read the next post, then see if you can understand how you shot down whoever sent sw_mail that link before the guy even got out the door.

portuno: MVC

LOL

This is going to be good.

"In MVC, the Model represents the information (the data) of the application and the business rules used to manipulate the data, the View corresponds to elements of the user interface such as text, checkbox items, and so forth, and the Controller manages details involving the communication to the model of user actions such as keystrokes and mouse movements."

If MVC describes any prior art, it is prior art already accounted for in the Siteflash patent in the discussion about content management as prior art.

MVC
A) Model - information (the data AND the business rules - in other words the workflow)
B) View - elements of the user interface
C) Controller - activity manager between the model and the user.

Thus, an interface control mechanism.

SiteFlash
A) Content - information (the data)
B) Format (the form in which the data is presented)
C) functionality (the workflows applied to the data within the format)

Thus, an integrated application.
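
(me: let me add something here for the code-curious - a minimal sketch of the three MVC roles in plain Java, a toy counter of my own invention, nothing drawn from 744 or the wiki article:)

// Model: the data plus the business rule used to manipulate it.
class CounterModel {
    private int count;
    void increment() { count++; }        // business rule
    int getCount()   { return count; }   // the data
}

// View: the user-interface element; here it just prints.
class CounterView {
    void render(int count) { System.out.println("Count: " + count); }
}

// Controller: relays a user action to the model, then refreshes the view.
class CounterController {
    private final CounterModel model = new CounterModel();
    private final CounterView view = new CounterView();
    void onClick() {                      // a user action, e.g. a button press
        model.increment();                // tell the model
        view.render(model.getCount());    // update the view
    }
}

public class Main {
    public static void main(String[] args) {
        new CounterController().onClick(); // prints "Count: 1"
    }
}

(me again: notice the Controller only shuttles between the user and the model. The functional program itself is wherever you left it - which is exactly the point made below.)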

Very telling that the list of MVC implementations is so large, yet the patent examiner never made any reference to any of those listed. Also very telling: none of those many implementations is capable of creating anything as integrated in all three facets of software construction.

If you haven't noticed, MVC forgets about the "functional program". MVC occupies the space between the modeled program behaviour and the user as an interface controller... not as a functionality integrator.

Siteflash treats the program as simply another component to be integrated with content and format. MVC assumes you already have the program crammed together with the content (and we know that doesn't work in "arbitrary" ways - precisely why all those implementations listed are proprietary programming languages and not capable of handling all components in an arbitrary fashion).

portuno: This: "In MVC, the Model represents the information (the data) of the application and the business rules used to manipulate the data, the View corresponds to elements of the user interface such as text, checkbox items, and so forth, and the Controller manages details involving the communication to the model of user actions such as keystrokes and mouse movements."

Is THAT what Microsoft lawyers are going to hang the company's future on?

So, please explain how this achieves an arbitrated framework for ANY content, ANY format and ANY functionality of an application.

You're doing what even Microsoft did in challenging this patent - you're using easily seen abstracts to blow smoke over the deeper values in virtualization and the arbitrating quality that virtualization has on the components of an application construction.

If MVC were an architecture like 744, there would BE no individual proprietary languages as listed in the wiki article. The need for the incremental refinements offered by those languages would have been swallowed up in one MVC framework capable of presenting all languages as one arbitrated field of commands.

And, if the patent examiner missed that one, you'll have to explain how Microsoft failed to adequately challenge the patent when they tried the first time.

Basically what you've got in MVC is a pattern generator. What you have in 744 is an ecology creator for full development of applications during the entire life cycle as contained in the application. The development ecology IS the application... and you can't get that out of MVC.

Nice try but it's most likely Microsoft is going to be shoveling dirt into the oncoming tide with that stance.

But, we're glad somebody out there is sending you clowns some script material. You've been sucking your own incompetence before somebody came to your rescue.

So, your turn. Demonstrate, please, precisely how and where MVC supersedes what 744 claims to do.

sw_mail: And how does the .net framework violate 744?

portuno: Did I step on your toes?

end thread.

I guess I stepped on sw_mail's fingers. Or else he's trying to find more script from some more relevant engineer.

Here's another conversation.
begin thread @ http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=4186&mid=4186&tof=8&frt=1

pick up at 14-May-08 03:35 pm

sw_mail: And how does the .net framework violate 744?

(me: this appears to be a big question for sw_mail aka computerguy aka vcsy_is_pest).

portuno: Read this: http://www.microsoft-match.com/content/developer/net_35_sp1_changes_your_expression.html
and we'll talk later after you catch your breath.

portuno: BUT of course you really need to know software architecture and understand what you're reading both in a patent and in technical specifications.

That was just a wikipedia article about MVC but it tells the story in a basic way for laymen. Once you do some exploration, you realize why all the traditional procedural languages never advanced into the web-application arena SiteFlash and MLE/Emily occupy.

No wonder it looks "obvious". It looks almost like what they do when they build a GUI. Except, Siteflash isn't a GUI nor a GUI builder alone. SiteFlash is a developmental ecology for applications. It's a creating framework for building other creating frameworks from which operating systems and applications and, yes, even more MVC fashioned languages can be built.

The idea is that you can plug in any content you have, in any format ("any" is what the patent means when you read "arbitrary" - just say "any" when you read "arbitrary" and you'll begin to see the scope). It's a virtualization platform for any content and format. In other words, a Microsoft "Expression" but more. ONE OF THE THINGS Siteflash can do is act as a content/format manager for any website you want to connect to any legacy equipment you have. BUT WAIT! Content managers are OK, but that's been done before. In fact, that's ALL there is to that "designer" discipline.

Microsoft missed a great opportunity to combine their Visual Studio's development platform with their Expression designer platform. Imagine being able to be a designer AND a developer AT THE SAME FREAKING TIME.

They didn't, but SiteFlash can. So we now have the concept of a stylist who can build functionality. Before, with the kind of programming environments Java Joe has at his hands, anyone who wants to build an application has to have someone build the programming code and someone completely different build the content into a formatted web page. If the two of you work well together, you can build some pretty fantastic web services - of course, Microsoft is going to have to perfect their interoperability capabilities over the internet. They can't even demonstrate much of it within their own proprietary ranks, much less do that kind of thing on the web.

SiteFlash offers, at one level, precisely what any web designer would like to be able to do... without a programmer. Content plus format is old hat. That's what MVC represents: automating the display of content. Call it Automatic Television. That's what Microsoft's future is going to be, because they can't combine the kind of virtualization architecture that would allow them to combine content and format AND functionality.

AND THAT is only the beginning.

I do appreciate the gift sw_mail gave all us VCSY longs since it's the only piece of "prior art" any skeptic has pointed to that somewhat looks like SiteFlash.

But, MVC really doesn't even look like SiteFlash once you actually read the material. And it certainly doesn't do what SiteFlash can do. Those who are passing MVC around as "prior art" are doing an intellectual disservice to those they pass that idea to, or they really do not know what needs to be done in the software construction arena to meet next-generation needs.

So, now that we know where the "prior art" is, perhaps we can have some real developers (and I mean REAL developers) study the information on the vaunted MVC methodology, then come back and give it to the 744 patent with both barrels.

Come on, guys and gals. ONE of you must have the hot sauce to argue the situation. I mean, you're all betting your careers that Microsoft doesn't need something like SiteFlash.

And, if you're open-source, you're staring at something that could strip your third-party business to the carcass very quickly.

end thread.

There's much more but I'll wait a bit to post since you're going to need time for your little eyeballs to absorb it all.


Posted by Portuno Diamo at 9:17 AM EDT
Updated: Friday, 16 May 2008 3:26 PM EDT
Wednesday, 14 May 2008
When you feel lonely and you're feeling like only you...
Mood:  hug me
Now Playing: Happy Trapease - Skunk family caught in basement billiard bungle (adult smells)
Topic: The Squirts

I know it's like sticking your face in the diaper pail, snookywookums, but you should do a bit of reading on the Yahoo VCSY board for giggles.

You can read, can't you?

One of the "lerkers" (rhymes with "jerkers") decided to teach old daddy portuno about "architecture" and pulled out something somebody had scripted out for him.

LOL Ho, what a belly laugh that was. Have you ever heard of "MVC", the Model-View-Controller concept? No? Well, apparently other "professional developers" haven't either.

MVC is what much of the traditional software crowd can thank for allowing them to be able to push buttons and tweak textboxes in their "applications".

Some numbnuts think it shows prior art to invalidate the Siteflash patent. It's just the other way around; the Siteflash patent shows just how limited and primitive MVC is. Unfortunately for those VCSY haters, patent 6826744 - which they all hate with a passion, much more than the company, no doubt - supersedes the MVC concept in so many ways, it's difficult to know where to start the description.

But, I will attempt to more fully describe why MVC is not what 744 is or was or ever hopes to be. 744 can build MVCs. MVCs cannot build 744 derivatives.

Now, I know reading the Yahoo VCSY board is not pleasant. There are hooligans and know-nothings "lerking" there who are tasked with making life unpleasant for anyone who shows the slightest interest in VCSY.

So, I'm making things a bit easier for you here. This is a post that has all the thread URLs: http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=4221&mid=4221&tof=1&frt=1
Discussion about MVC 14-May-08 04:51 pm by Portuno_Diamo

and these are the individual threads in case you prefer navigating on your own.

http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=4155&mid=4155&tof=5&frt=1
"An arbitrary object framework" for XML. 14-May-08 01:06 pm

http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=4186&mid=4186&tof=8&frt=1
I'll bet there's some furious script writing... 14-May-08 03:29 pm

http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=4205&mid=4205&tof=6&frt=1
So, ultimately, Siteflash supersedes MVC... 35 minutes ago

Not to worry if the messages aren't there. I can reconstruct the scene whenever. I'll probably do just that in a follow up post here just so we all know the score as the VCSY v MSFT case Markman Hearing marches near.


Posted by Portuno Diamo at 5:59 PM EDT
Updated: Wednesday, 14 May 2008 6:56 PM EDT
Tuesday, 13 May 2008
Where the antelopes play is full of $#!@.
Mood:  amorous
Now Playing: Life in a Bottle - Genie gets corked while antique hunting with elderly aunt (such vulgar language you never heard)
Topic: Prenatal Visits

Some legalese:

IN THE UNITED STATES DISTRICT COURT
FOR THE EASTERN DISTRICT OF TEXAS
MARSHALL DIVISION
VERTICAL COMPUTER
SYSTEMS, INC.,
Plaintiff,
vs.
MICROSOFT CORPORATION,
Defendant.

Civil Action No.
2-07-CV-144 (DF-CE)
THE PARTIES’ JOINT MOTION FOR MODIFICATION
OF THE AMENDED DOCKET CONTROL ORDER
Plaintiff Vertical Computer Systems, Inc. (“Vertical”) and Defendant Microsoft Corporation (“Microsoft”) jointly request a modification of the Court's Amended Docket Control Order to provide for the parties to submit technical tutorials to the Court in advance of the Claim Construction hearing.

The Amended Docket Control Order currently provides for a Pre-hearing Conference and technical tutorial, if necessary, on July 9, 2008, and a Claim Construction hearing on July 10, 2008.

The parties respectfully request that the Court vacate the technical tutorial and pre-hearing conference and set a deadline for submitting technical tutorials on CD-ROM by July 2, 2008, so that the Court will have an opportunity to consider the tutorials in advance of the Claim Construction hearing on July 10.

Accordingly, Plaintiff Vertical and Defendant Microsoft respectfully request that this Court modify the Amended Docket Control Order as outlined above.

and more legalese:

IN THE UNITED STATES DISTRICT COURT
FOR THE EASTERN DISTRICT OF TEXAS
MARSHALL DIVISION
VERTICAL COMPUTER
SYSTEMS, INC.,
Plaintiff,
vs.
MICROSOFT CORPORATION,
Defendant.

Civil Action No.
2-07-CV-144 (DF-CE)
ORDER GRANTING PARTIES’ JOINT MOTION FOR
MODIFICATION OF THE AMENDED DOCKET CONTROL ORDER
The Court, having considered the Parties’ Joint Motion for Modification of the Amended Docket Control Order and finding good cause supporting it, finds the Motion should be granted.

IT IS THEREFORE ORDERED, ADJUDGED, AND DECREED that the
Parties’ Joint Motion for Modification of the Amended Docket Control Order is hereby GRANTED in its entirety, and that the Pre-hearing Conference and technical tutorial currently scheduled for July 9, 2008, shall be vacated and replaced with the following modification of the Amended Docket Control Order:

Action: Deadline to submit technical tutorials to the Court
Rule Date: Wednesday, July 2, 2008

and opinion:

Now that's an interesting situation. No doubt the SiteFlash patent, and all that can be derived from it, can be complex and difficult to understand even for people skilled in the art of software. It's a different way of doing things in software - a different architecture - and, as such, much of the novel value made available by this method of creating software will escape comprehension on the first or even second pass.

In this particular architecture, what may at first appear in theory to be massively complex interaction soon appears as simplified, abstracted enhancement of existing tools as well as of the Siteflash and MLE tools.

A tutorial will better prepare the court to hear the words in the patent claims from a much different perspective.

Now, that's from VCSY. What Microsoft will be doing is trying to demonstrate that the parts and pieces of Siteflash have all been done before. It's the tactic most first- and second-time readers of the patents (both of them share this quality) try to use to prove them invalid based on prior art and obviousness.

I don't disagree there are examples of programming and software development prior art within the patent. But, as a bicycle is a collection of screws and bolts and wheels... all of which have been done before, the assembled "bicycle" invention is a much more useful machine than a boxed set of all these parts.

The traditional parts in patents 6826744 and 7076521 are assembled in such a way to build an integrated software system that does things much better, more wholly and more efficiently than any collections of any prior art.

So, the tutorial on VCSY's side will be much more productive, as it will describe the way Siteflash uses all that currently existing software to build ecologies which, in turn, are used to build software frameworks in which businesses may function.

Try that with FrontPage - the "prior art" avowed by Microsoft to invalidate what Siteflash does. LOL

I would love to watch just these two tutorials. What a treasure trove of material with which to confound and befuddle the likes of dlkjla (a throwback poster on the Yahoo VCSY board - I thought I might help him build a fanbase here as he appears to be enamored of attention.)

PS - Yes. I know it's petty. I'm a petty little man. Who else would have a "new baby" freebie site on Lycos dedicated to chronicling surmisings about the current industry state and Microsoft and their nemesis VCSY?

If you want to see the original source of the above posts (like you're making an anthropology exhibit for the science fair and all) you can look here, but I should warn you: put a q-tip soaked in pine tar in each ear and push firmly. That's the only thing that can inoculate you from the torment of actually reading through the garbage you wade through to get those freegan goodies.


Posted by Portuno Diamo at 12:27 AM EDT
Updated: Tuesday, 13 May 2008 1:13 AM EDT
Sunday, 11 May 2008
Let's see... did he say turn gas on then strike match or strike match first then the gas?
Mood:  bright
Now Playing: Fry Daddy Bisque - Chef washes socks in the bouillabaisse (much egg many faces)
Topic: Ultrasounds

If you ever want to work semantically on the internet, you have to be able to, first, interoperate with data situated between your own process desires and some other processing demand.

If you can do that, congratulations. That was the first step. You now need to make the interoperated data perform in a data-portable way, and then you have the wheels for functionality.

Functionality is what we call any active processing a software body (an "application") can perform upon a body of data being considered as content. If everything the application does is contained in that one body of functionality, dealing with exclusive content, and it doesn't go outside the present application except to direct the user elsewhere to do something else, the application is a monolith - indicative of traditional "object" oriented architecture.

Our quest here is to describe what is appearing in the industry, spotting those attributes that will allow us to track whether and where technology may go in the seen and unseen landscape.

Interoperability is necessary when discussing the functionality of two or more applications as relates to a common body of data.

Interoperation is fundamental to the distribution of functionality.

The entire issue of interoperability is the core argument in the controversy surrounding the standardization of Microsoft's OOXML Office document XML formats (>6000 pages) as opposed to the simpler XML standard in ODF (measured in hundreds of pages).

We see that Microsoft apparently does not want other applications working intimately with their document and programming files. They therefore drag their feet on meeting openness standards.

We do not see Microsoft putting energy into interoperation, because interoperation opens up the probability that outside applications interacting with Microsoft's own data domain will only bring diffusion of the user's market view and dilution of client lock-in.

Interoperation fundamentally demands the data be in a universal form so the content can be differentiated from non-content data.

Having that interoperability provides for two adjacent methods to operate on the data between them, thus providing a foundation for a movement of data from one processing node to another.

The basic terminology would then describe the movement of data from one processing node to another (thus an interoperation between a pair of connected nodes may be transported further by allowing one of the nodes to perform a similar interoperative process with another adjacent node.) This is a sophisticated form of data portability called transactioning and is fundamental to various mainframe operating systems.

Transactioning begins with one node performing a process which triggers transport of data to facilitate a collaborative exchange of meaning and ownership to another node. Just like humans do their workflows.

This "event processing" and "transactioning" are fundamental activities necessary to provide software functionality to a mass of data.

Most "event processing" is commonly pictured in manufacturing process automation.

Most "transactioning" is thought to be performed in financial systems automation.

But, without event processing AND transactioning, neither kind of system automation is possible.
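
Here's a minimal sketch of those two activities working together, in Java. The node names, the "order#42" event and the audit format are my own inventions for illustration, not anything from a real mainframe system:

import java.util.ArrayDeque;
import java.util.Queue;

class Node {
    final String name;
    final Queue<String> inbox = new ArrayDeque<>();   // data this node owns
    Node(String name) { this.name = name; }

    // Event processing: something happened; decide what to do about it.
    void onEvent(String event, Node next) {
        String result = name + " processed " + event;
        transact(result, next);   // transactioning: hand the result onward
    }

    // Transactioning: transport data (and ownership) to an adjacent node,
    // and keep an audit record of the handoff.
    void transact(String data, Node next) {
        next.inbox.add(data);
        System.out.println("audit: " + name + " -> " + next.name + ": " + data);
    }
}

public class Pipeline {
    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B");
        a.onEvent("order#42", b);   // hypothetical event
    }
}

Chain node B to a node C the same way and you have the pair-by-pair transport described above.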

If you intend to work in that direction, you first need to "tag" (XML markup is applied around the text or image to instruct the machine as to the "name" and other properties of that particular text or image) each significant word and image of content to provide the machine with a way to apply industry specific meaning to bodies of content.
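
Here's a minimal sketch of that tagging step in Java, using the standard org.w3c.dom API. The "product" and "category" names are a toy vocabulary of my own, not any industry standard:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TagSketch {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        // Untagged, "Acme X100" is just characters to the machine.
        // Tagged, the machine knows its "name" and other properties.
        Element product = doc.createElement("product");   // hypothetical tag
        product.setAttribute("category", "camera");       // industry-specific meaning
        product.setTextContent("Acme X100");
        doc.appendChild(product);
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(System.out));
        // prints: <?xml version="1.0" ...?><product category="camera">Acme X100</product>
    }
}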

That kind of tagging, applied at web scale, is what Yahoo announced back in March 2008.

"Yahoo’s support for semantic web standards like RDF and microformats is exactly the incentive websites need to adopt them. Instead of semantic silos scattered across the Web (think Twine), Yahoo will be pulling all the semantic information together when available, as a search engine should. Until now, there were few applications that demanded properly structured data from third parties. That changes today."

This is a baby step to bring the available data throughout the "known" world to a machine "knowable" state. It's the "seamless" nature of a virtualized mass of data that makes discovering unknown relationships embedded within the data mass a matter of course and very valuable.

Bringing data into that state allows different applications from various platforms to work with the same data amongst all uses. That's interoperability.

Interoperability allows for virtualization of the interface between two interoperating machines.

The next baby step to perform is data portability. Yahoo announced that step the other day.

Where to next?

We are advancing toward a point where any desired amount of processing may be done within the objects doing the event processing + transactioning, local to the data storage and protection, allowing for secure, granular functionality at any data point.

This removes tremendous burdens off the centralized infrastructures and makes the distributed processing devices more valuable to the overall ecology.

Thus, I commend this article to your reading for your edification toward the future conversations we'll be having.

I know this stuff is pretty heavy for a snookywookums to read but you gotta grow up fast in this world, bucky.

http://blogs.zdnet.com/service-oriented/?p=1102

May 11th, 2008

Is anyone ready to process a trillion events per day?

Posted by Joe McKendrick @ 12:04 pm

A typical company deals with millions, if not billions, of events in a single day, and most, if not all, of these events are still handled manually, by people.

"he value of complex event processing, overall, can be summarized as improving situation awareness,” Schulte said. “Simply put, that is just knowing what is going on, so you can figure out what to do.” The benefits of complex event processing, Schulte said, include better decision quality, faster response times, reduced information glut, and reduced costs.

(more at URL)
-------------

I don't think much more than that needs to be said to help you grasp how pervasive the need is for event handling technology, and for transactionary technology to track the events handled. The potential for innovation in the ability to provide real-time metrics, quantization and qualification of business processes, and the impact on business design, is enormous.

And that's just the first step executable without touching the legacy code or the work flows. It's the simple virtual fitting of a sensing and control feedback system to the existing electronic interfaces making data available in some form (any form because the known content has been semanticized and the search efforts have further extended knowledge) to the machine.

No control actuation is available without further outfitting, at this point in describing the architecture, but this simple process already described can put a corporation's or community's electronic world in a semantically knowable state worth fortunes.

That's the first half necessary in learning to do the automation walk.

And that is achieved by simply outfitting the various data points with local processing resources aware of event triggering and capable of transaction processing and audit history. That is a foundation for true trusted-but-verified human interaction, measured at the point of contact with the human... not buried in a remote server farm.

Where possible, measure closest to the point of use and provide service local to your event. If you have the capability to process in that way, you can press on. Without that, you will stumble in the effort to execute distributed processes in a deterministic control state.

Once (and not before) that control feedback framework is available and applied to the various data points in a business, functional activities may then be applied to each data locale to perform the activities needed. That processing has the opportunity to provide true parallel processing across many distributed clients and revolutionize workflow process for a productivity leap similar to the OLE/COM age. I believe the impact will be more quickly deployed and adopted, at a much larger scale.

At the most fundamental incarnation, the processing nodes allow for the building of metric systems to allow for proper critique and assessment of human-actuated business processes. That capability alone is of huge value in business immediately, even if the automation of these business work practices is years away.

But the immense value lies in learning to create software services to enhance, elevate or replace the human task while applying processing resources local to the person executing the transaction, both on the "consumer" side and the "supplier" side.

Another phrase from the above article: "Most of these events are not captured or automated in enterprises"

This means enabling all these new data points opens up dimensions of stratified association and relationships. In many ways these formerly unseen assets will provide useful feedback, viable new methods, and potentially new business aptitudes and direction for the automating business process.

So why do we need to do this? “We have to record events using event objects so computers can receive them and do computations on those events,”

Precisely. They do those things much more reliably and accurately.


Posted by Portuno Diamo at 9:31 PM EDT
Updated: Monday, 12 May 2008 12:14 PM EDT
SOMEbody's been eating MY porridge...
Mood:  celebratory
Now Playing: Mama Come Home - Abused mother of six walks into ambush (short documentary)
Topic: Announcements

Happy Mother's Day to all you mothers out there. As some of you little mothers know, the concept of "mother" can be abstracted to a technological sense. At the core of "motherhood" is the ability to communicate with the ones you should be friendly with.

Therefore, and to wit:

First there was interoperability...

(Horn toot: If you have been following along, you'll know we were discussing the core concepts of interoperability and virtualization as far back as 2005-2006 - we've been talking about services since 2000.)

(Uhhh... pardon.)

...commonly defined as having a file system that is agnostic, and therefore useful, to outside applications.

These files could be worked on by any user with authorized access. Not only could that data be worked on in that particular file commonly amongst a group of users, one could conceivably take an instance of the data in the file and move that instance to another file to work in conjunction with other data instances and files...

But wait! I'm getting ahead of the industry. First you must have interoperability. Then, before you get to functionality, you must have portability.

So, welcome to all who are joining in making their data common (Yahoo, MySpace, FaceBook and now Google, so far) and providing a non-intrusive identification system for trusted commerce (???? that requires functionality - are you guys really ready for that?).

(I wonder if a not-so-trusted commercial entity can be reformed into a trusted commercial entity? Probation? Remediation? Time out?)

Baby steps before you get to the semantic web. If you don't do these, you're motionless.

-data interoperability
-data portability
-data functionality
-data control (governance)

 http://webworkerdaily.com/2008/04/25/data-portability/

Data Portability and the File System

April 25th, 2008 (3:00pm) Imran Ali 3 Comments

With an increasing dependence on distributed software and web-based applications, the portability of personal and corporate data is becoming an increasingly important issue for all users, but more so for web workers in particular.

Open Data philosophies have begun to coalesce around essays such as the speculative Data Bill Of Rights and the emerging Data Portability movement, but web-based services that support portability are still quite rare and invariably the exception to the rule.

Services such as Flickr, del.icio.us and Gmail do allow data extraction of sorts; indeed Gmail’s support for IMAP was apparently motivated by the desire for data portability and enabling users to freely import and export messages. Conversely, Microsoft announced that it would end offline Outlook support for Hotmail, effectively imprisoning users’ messages inside Microsoft services, without even a paid-for option for IMAP or POP access.

Technicalities aside - portability is really about ethics and ownership. In a marketplace where users are directly contributing assets to the success of a service, we need to be able to assert ownership over those contributions and demand mechanisms to support that ownership.

(more at URL)

This demonstrates the substrata of developers and builders who have been using the newly emerging web tools in testing and developmental systems.

And to think this kind of development could have been moving forward as far back as 2001 IF the software market were a friendly place to assert Intellectual Property and demand it be respected... just as Microsoft demands.

So I'm putting this here so you will be able to begin absorbing the nomenclature necessary to describe and understand what many people will call web 3.0. Interoperability (the big discussion amongst VCSY longs in 2005-2006) and now portability (described in VCSY's XML enabler whitepaper and patent teachings) are only now becoming words familiar to the mainstream.

But, to those who've been discussing these issues since 2001 and before, we're now at the place dividing "developmental" or "untested" technology from a mature, ready-for-common-consumption technology base - a mixture of ideas, realities, software and workers.

In my opinion, what the URL posted above describes is a shift from a technology base worrying about a future reality to one realizing its potential.

This difference between the traditional megalithic software community (those who know how to build operating systems, valued above those who do not) and the granular componentization community (the nubbies) is what marks the terminus for Microsoft relevance in the future web world.

It's not just about being able to engage in the common use of data as opposed to isolated islands of automation as carved out by the COM/CORBA kingdom.

That common use of data is a first step. Tagging for semantic content is a first grip. Yahoo stated their value very well when they announced they would be tagging their content (that includes all emails in the Yahoo system past and present and future). There are serious privacy issues being walked up on very quickly as the technology is beginning to roll out of the factories and cottage cheese industry for a race to the money pot.

And, one would say, apparently much of this work has been going on in secret, as the industry had not been speaking of these new "buzzwords" until only a few months ago - some days after Microsoft announced they were acquiring Yahoo.

Yahoo stood up some important technologies very quickly. Now, others are standing up very quickly. One has to assume they have had the ability to work this way for quite some time and they've been holding back (all of them) until a particular time when they would all begin staking their marketshare claim and begin farming.

Looks like a land rush or a gold rush.

I wonder who's holding the first nuglets?


Posted by Portuno Diamo at 1:08 PM EDT
Updated: Sunday, 11 May 2008 2:14 PM EDT
Thursday, 8 May 2008
What does the piper pay for mice?
Mood:  caffeinated
Now Playing: Making Rain in the MUD - Weather reporter gets hit with cold front and drizzles (adult intent)
Topic: Memories

Conspiracy? Did portuno hear the word "conspiracy"???

Want another "conspiracy"?

OK. How about this one?

SavaJe was a Java-based smart phone platform for distributed applications.

http://en.wikipedia.org/wiki/SavaJe

which was introduced in 2001. It had many things going for it... had.
http://www.linuxdevices.com/news/NS8885915946.html

One of its backers was Ken Ross, the namesake of Ross Systems, which had its legacy trashed by the actions of CEO Patrick Tinley and the corporation chinadotcom. That's right, the same Ross Systems that tried to take NOW Solutions away from VCSY... and lost. The same chinadotcom that bought Ross Systems and continued the fight for NOW Solutions... and lost.

During those years from 2001, SavaJe looked poised to take the stage as a key mobile platform. Ken Ross must have been pleased even though his namesake company was being ground into the dirt.

Then, suddenly and without warning, SavaJe closed down October 25, 2006.

Sun bought SavaJe on April 12, 2007, and announced JavaFX a month later, on May 9, 2007.

...and then Sun released JavaFX "for real" on May 8, 2008... expected to be delivered "this fall".

Sun is in the same posture as Microsoft. Promise and delay. Promise and delay.

Silverlight is supposed to be Microsoft's answer to Adobe AIR eventually. So far, it's barely a competitor to Adobe Flash.

JavaFX is supposed to be Sun's answer to Adobe AIR eventually. So far, it doesn't exist.

What happened to Ken Ross' dream of having THE distributed platform for mobile computing? Well, when VCSY built the distributed extension of Apollo-smart for Apollo industries using MLE/Emily, the field got crowded and the intellectual property issues came into play. I would say the reality of a superior platform kicking SavaJe's Java-based distributed kernels down the stairs became a stare-down.

Patent 7076521. This is the intellectual property root VCSY used for the Apollo smart card platform. Read the patent and then study JavaFX. The idea was to take the Java language, developed originally for smart cards in 1995, and build the language out as the extensible platform. That never quite happened.

Although Java has many uses and capabilities, it also has problems and is ever on the verge of sucking industry dirt. It's why the industry hasn't been able to scale mobile applications to cover other areas of communications.

Sun is trying to stay in the game, but is behind the timeline compared to Adobe.

Interesting, isn't it, that the scenario for the VCSY IP back in 2005 predicted Sun and Microsoft would split in their SOA efforts at some point, when Sun realized Microsoft couldn't fulfill its promises for a solid SOA framework.

That happened in April 2006, amid turmoil about Vista's future - and Sun would go looking to IBM for a better way.

Maybe Sun found a better way that summer and made atonement for the Java transgressions through the years by sucking up what was left of SavaJe after the company went teats up 90 days after the 7076521 patent was granted.

Conspiracy? It doesn't take much more than just misguided attitudes and manipulated motives to make a series of events look like a conspiracy. But, events in train and coincidental with other supporting activities do demonstrate some sort of ... some sort of... some... uhhh, do we have a word in the English language for "covetousness"?


Posted by Portuno Diamo at 1:15 PM EDT
Updated: Thursday, 8 May 2008 1:47 PM EDT
Monday, 5 May 2008
Poking the Gophers - Parting the Hares.
Mood:  accident prone
Now Playing: Caught in the Haybailer - Farmer Brown gets suspenders stuck in the farm machinery (improvisational dance)
Topic: Prenatal Visits

I've been getting very tired of hearing myself talk. It's good to hear your opinions shared. This is well worth the read so you can see where we are now that Microsoft stands denuded.

http://www.microsoft-watch.com/content/podcasts/why_didnt_microsoft_yell_yahoo.html

Enjoy... we'll talk later. Let's have a little quiet time and let the baby sleep. Little snookywookums has had a busy few weeks.

This is a period when history is being made.


Posted by Portuno Diamo at 11:22 PM EDT
Updated: Monday, 5 May 2008 11:29 PM EDT
Thursday, 1 May 2008
STOP THE ARCANITY!
Mood:  party time!
Now Playing: Peddling For Posterity - Exercise nut steps on walnuts (ongoing blab)
Topic: Announcements

Discussions take space and they eventually get lost. I am putting this discussion here since it's a convenient storage spot and we may want to attach views from the industry to many of the various points being made.

I'll try to maintain this conversation continuity if it continues. We'll also be looking at the whole issue of distributed computing as opposed to the one-computer-per-person paradigm and how the future will be shaped by this architectural arcanity.

If you want to see the whole ball of wax, go here: http://www.microsoft-watch.com/content/web_services_browser/saas_sasses_windows.html

The following is contained in the comments section of Joe Wilcox's article:

P. Douglas :

One thing I would like to know: does the author prefer using web apps over comparable desktop apps? E.g. does the author prefer using a web email app over a desktop email client? Doesn't he realize that most people who use Windows Live Writer, prefer using the desktop client over in-browser editors? Doesn't he realize that most people prefer using MS Office over Google Apps by a gigantic margin? The author needs to compare connected desktop apps (vs. regular desktop apps) to browser apps, to gauge the viability of the former. There is no indication that connected desktop apps are going to fade over time, as they can be far richer, and more versatile than the browser. In fact, these types of apps appear to be growing in popularity.

 

Besides, who wants to go back to the horrible days of thin client computing? In those days, users were totally at the mercy of sys admins. They did not have the empowerment that fat PCs brought. I just don't understand why pundits keep pushing for the re-emergence of thin client computing, when it is fat PCs which democratized computing, and allowed them to write the very criticisms about the PC they are now doing.

 

Posted by P. Douglas | April 30, 2008 3:50 PM

 

portuno :

"I just don't understand why pundits keep pushing for the re-emergence of thin client compputing, when it is fat PCs which democratized computing, and allowed them to write the very criticisms about the PC they are now doing."

 

Because business and consumerism sees the move toward offloading the computing burdens from the client to other resources as a smart move. That's why.

 

Pundits are only reporting what the trends tell them is happening.

 

Posted by portuno | April 30, 2008 3:59 PM

 

P. Douglas :

"Because business and consumerism sees the move toward offloading the computing burdens from the client to other resources as a smart move. That's why."

 

Why is this a smart move? If the PC can provide apps with far richer interfaces that have more versatile utilities, how is the move to be absolutely dependent on computing resources in the cloud (and an Internet connection) better? It is one thing to augment desktop apps with services to enable users to get the best of both (the desktop and Internet) worlds, it is another thing to forgo all the advantages of the PC, and take several steps back to cloud computing of old (the mainframe). Quite frankly, if we kept on pursuing cloud computing from the 70s, there would be no consumer market for computing, and the few who would 'enjoy' it, would probably be confined to manipulating text data on green screen monitors.

 

"Pundits are only reporting what the trends tell them is happening."

 

Pundits are ignoring the trends towards connected desktop applications (away from regular desktop apps) which is proving to be more appealing than regular desktop apps and browser based apps.

 

Posted by P. Douglas | April 30, 2008 4:21 PM

 

portuno :

"Why is this a smart move?"

 

"If the PC can provide apps with far richer interfaces that have more versatile utilities, how is the move to be absolutely dependent on computing resources in the cloud (and an Internet connection) better?"

 

The PC can't provide apps with richer processing. The interfaces SHOULD be on the client, but, the processing resources needed to address any particular problem does not need to be on the client.

 

The kind of processing that can be done on a client doesn't need the entire library of functions available on the client.

 

If your hardware could bring in processing capabilities as they became necessary, the infrastructural footprint would be much smaller.

 

The amount of juggling the kernel would have to do to keep all things computational ready for just that moment when you might want to fold a protein or run an explosives simulation, would be reduced to the things the user really wants and uses.

 

An OS like Vista carries far too much burden in terms of memory used and processing speeds needed. THAT is the problem and THAT is why Vista will become the poster child for dividing up content and format and putting that on the client with whatever functionality is appropriate for local computing.

 

This isn't your grandfather's thin client.

 

"It is one thing to augment desktop apps with services to enable users to get the best of both (the desktop and Internet) worlds, it is another thing to forgo all the advantages of the PC, and take several steps back to cloud computing of old (the mainframe)."

 

Why does everyone always expect the extremes whenever they confront the oncoming wave of a disruption event? What is being made available is the proper delegation of processing power and resource burden.

 

You rightly care about a fast user interface experience. But, you assume the local client is always the best place to do the processing of the content that your UI is formatting.

 

The amount of processing necessary to accomplish building or providing the content that will be displayed by your formatting resources can be small or large. It is better to balance your checkbook on your client. It is better to fold a protein on a server, then pass the necessary interface data and you get to see how the protein folding is done in only a few megabytes... instead of terabytes.

 

"Quite frankly, if we kept on pursuing cloud computing from the 70s, there would be no consumer market for computing, and the few who would 'enjoy' it, would probably be confined to manipulating text data on green screen monitors."

 

We couldn't continue mainframing from that time because there was not a ubiquitous transport able to pass the kind of interface data needed outside of the corporate infrastructure.

 

Local PCs gave small businesses the ability to get the computing power in their mainframe sessions locally. And, until Windows, we had exactly that thin client experience on the "PC".

 

Windows gave us an improved "experience" but at the cost of a race in keeping hardware current with a kind of planned obsolescence schedule.

 

We are STILL chasing the "experience": on computers that can do everything else, formatting content well is STILL being chased - it's why "Glass" is the key improvement in Vista, is it not? It's why the "ribbon" is an "enhancement" and not just another effort to pack more functionality into an application interface...

 

THE INTERFACE. Not the computing. The interface; a particular amount of content formatted and displayed. Functionality is what the computer actually does when you press that pretty button or sweep over that pretty video.

 

Mainframes that are thirty years old connected to a beautiful modern interface can make modern thin client stations sing... and THAT is what everyone has missed in this entire equation.

 

Web platforming allows a modernization of legacy hardware AND legacy software without having to touch the client. When you understand how that happens, you will quickly see precisely what the pundits are seeing. That's why I said: 'Pundits are only reporting what the trends tell them is happening.'

 

"Pundits are ignoring the trends towards connected desktop applications (away from regular desktop apps) which is proving to be more appealing than regular desktop apps and browser based apps."

 

Do you know WHY "Pundits are ignoring the trends towards connected desktop applications"? Because there aren't any you can get to across the internet! At least until very recently.

 

If you're on your corporate intranet, fine. But, tell me please, just how many "connected desktop applications" are there? Microsoft certainly has few, and THAT's even on their own network protocols.

 

THAT is what's ridiculous.

 

XML allows applications to connect. Microsoft invented SOAP to do it (and SOAP is an RPC system using XML as the conduit) and they can't do that very well. Only on the most stable and private networks.
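
(me: for the curious, a minimal sketch of that XML conduit built with the SAAJ API (javax.xml.soap, bundled with Java SE through version 8). The "GetQuote" operation and its namespace are hypothetical - my own toy example, not any real service:)

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPMessage;

public class SoapSketch {
    public static void main(String[] args) throws Exception {
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPBody body = msg.getSOAPBody();
        // The body carries the remote procedure call as plain XML elements.
        body.addBodyElement(new QName("http://example.com/quotes", "GetQuote"))
            .addChildElement("symbol").addTextNode("VCSY");
        msg.writeTo(System.out);   // the XML that actually crosses the wire
    }
}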

 

DO IT ON THE INTERNET and the world might respect MSFT.

 

The result of Microsoft not being on the internet is their own operating system is being forced into islandhood and the rest of the industry takes the internet as their territory.

 

It's an architectural thing and there's no getting around those. It's the same thing you get when you build a highway interchange. It's set in concrete and that's the way the cars are going to have to go, so get used to it.

 

Lamenting the death of a dinosaur is always unbecoming. IBM did it when The Mainframe met the end of its limits in throughput and reach. The PC applied what the mainframe could do on the desk.

 

Now, you need a desktop with literally the computing power of many not-so-old mainframes to send email, shop for shoes, and write letters to granny. Whose idea of proper usage is this? Those who want a megalith to prop up their monopoly.

 

The world wants different.

 

Since there are broadband leaps being carved out in the telecommunications industry, the server can do much more with what we all really want to do than a costly stranded processor unable to reach out and touch even those of its own kind much less the rest of the world's applications.

 

The mentality is technological bunkerism and is what happens in the later stages of disruption. It took years for this to play out on IBM.

 

It's taken only six months to play out on Microsoft and it's only just begun. We haven't even reached the tipping point and we can see the effect accelerating from week to week.

 

It's due to the nature of the media through which the change is happening. With PC's the adoption period was years. With internet services and applications, the adoption period is extremely fast.

 

Posted by portuno | April 30, 2008 11:55 PM

 

P. Douglas :

"The kind of processing that can be done on a client doesn't need the entire library of functions available on the client.

 

If your hardware could bring in processing capabilities as they became necessary, the infrastructural footprint would be much smaller.

 

The amount of juggling the kernel would have to do to keep all things computational ready for just that moment when you might want to fold a protein or run an explosives simulation, would be reduced to the things the user really wants and uses."

 

How then do you expect to work offline? I have nothing against augmenting local processing with cloud processing, but part of the appeal of the client is being able to do substantial work offline during no-connection or imperfect / limited network / Internet connection scenarios. Believe me, for most people, limited network / Internet connection scenarios occur all the time. Also, the software + software services architecture minimizes bandwidth demands, allowing more applications and more people to benefit from an Internet connection at a particular node. In other words, the above architecture is much more efficient than a dumb terminal architecture, or the one that you are advocating. This means that e.g. in a scenario where you have a movie being downloaded to your Xbox 360, several podcasts automatically being downloaded to your iTunes or Zune client software, your TV schedule being updated in Media Center, you're using a browser, etc., and the above being multiplied for several users and several devices at a particular Internet connection, the software + software services architecture is seen to be far better and more practical than a dumb terminal architecture.

 

Posted by P. Douglas | May 1, 2008 8:04 AM

 

portuno :

@ P. Douglas,

"How then do you expect to work offline?"

 

Offline work can be done by a kernel dedicated to the kind of work needed at the time. In other words, instead of a megalith kernel (Vista's is 200MB+) running all functions, you place a kernel (an agent can be ~400KB) optimized for the specific kind of work to be done. This kernel can be very small (because it won't be doing ALL processing - only the processing necessary for the tasks selected - and it can be only one of multiple kernels interconnected for state determinism), with the resources available online or offline (downloaded when the task is selected).

 

The big "bugaboo" during the AJAX development efforts in 2005 and 2006 was "how do you work offline"? The agent method places an operational kernel on the client which is a mirror (if necessary) of the processing capability on the remote server. When the system is "online", the kernel cooperates with the server for tasking and processing. When the client is "offline", the local agent does the work, then synchs up the local state with the server when online returns.

 

No online-offline bugaboo. Just a proper architecture. That's what was needed, and AJAX doesn't provide that processing capability. All AJAX was originally intended to do was reduce the latency between client button push, server response and client update.
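
(me: a toy sketch of the agent idea in Java - do the work locally while offline, queue the results, replay them to the server when the connection returns. The task names and "server" are stand-ins of my own, not any product's API:)

import java.util.ArrayDeque;
import java.util.Queue;

class Agent {
    private final Queue<String> pending = new ArrayDeque<>();  // local state
    private boolean online = false;

    // The small local kernel handles the task either way.
    void doWork(String task) {
        String result = "done:" + task;
        if (online) send(result);
        else pending.add(result);        // offline: hold the state locally
    }

    // When the connection returns, synch the held state up to the server.
    void setOnline(boolean up) {
        online = up;
        while (up && !pending.isEmpty()) send(pending.poll());
    }

    private void send(String r) { System.out.println("-> server: " + r); }
}

public class FitClient {
    public static void main(String[] args) {
        Agent a = new Agent();
        a.doWork("edit-doc");   // hypothetical task, done while offline
        a.setOnline(true);      // online returns; local state synchs up
    }
}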

 

"...part of the appeal of the client is being able to do substantial work offline during no connection or imperfect / limited network / Internet connection scenarios."

 

Correct. And you don't need a megalithic operating system to do that. What you DO need is an architecture that's fitted to take care of both kinds of processing with the most efficient resources AT THE TIME. Not packaged and lugged around waiting for the moment.

 

"...limited network / Internet connection scenarios occur all the time."

 

Agreed. So the traditional solution is to load everything that may ever be used on the client? Why don't we use that on-line time to pre-process what can be done and load the client with post processing that is most likely for that task set?

 

"Also, the software + software services architecture minimizes bandwidth demands allowing more applications and more people to benefit from an Internet connection at a particular node. In other words, the above architecture is much more efficient than a dumb terminal architecture, or the one that you are advocating."

 

"More efficient" at the cost of much larger demands on local computing resources. Much larger demands on memory (both storage and runtime). Much larger demands on processor speed (the chip has to run the background load of the OS plus any additional support apps running to care for the larger processing load you've accepted).

 

You will find there will be no "dumb terminals" in the new age. A mix of resources is what the next age requires and a prejudice against a system that was limited by communications constraints 20 years ago doesn't address the problems brought forward by crammed clients.

 

"This means that e.g. in a scenario where you have a movie being downloaded to your Xbox 360, several podcasts automatically being downloaded to your iTunes or Zune client software, your TV schedule being updated in Media Center, your using a browser, etc., and the above being multiplied for several users and several devices at a particular Internet connection, the software + software services architecture is seen to be far better and more practical than a dumb terminal architecture."

 

At a much higher cost in hardware, software, maintenance and governance.

 

Companies are not going to accept your argument when a fit client method is available. The fat client days are spelled out by economics and usefulness.

 

Because applications can't interoperate (Microsoft's own Office XML format defies interoperation for Microsoft - how is the rest of the world supposed to interoperate?) they are limited in what pre-processing, parallel processing or component processing can be done. The only model most users have any experience with is the fat client model... and the inefficiencies of that model are precisely what all the complaining is about today.

 

Instead of trying to justify that out-moded model, the industry is accepting a proper mix of capabilities and Microsoft has to face the fact (along with Apple and Linux) that a very large part of their user base can get along just fine with a much more efficient, effective and economical model - being either thin client or fit client.

 

It's a done deal and the fat client people chose to argue the issues far too late because the megaliths that advocate fat client to maintain their monopolies and legacies no longer have a compelling story.

 

The remote resources and offloaded burdens tell a much more desirable story.


People listen.


Posted by portuno | May 1, 2008 12:07 PM


P. Douglas:

"Offline work can be done by a kernel dedicated to the kind of work needed at the time. In other words, instead of a megalith kernel (Vistas is 200MB+) running all functions, you place a kernel (an agent can be ~400KB) optimized for the specific kind of work to tbe done. This kernel can be very small (because it won't be doing ALL processing - only the processing necessary for the tasks selected - it can be only one of multiple kernels interconnected for state determinism) and the resources available online or offline (downloaded when the task is selected)."


I don't quite understand what you are saying. Are you saying computers should come with multiple, small, dedicated Operating Systems (OSs)? What do you do then when a user wants to run an application that uses a range of resources spanning the services provided by these multiple OSs? Do you understand the headache this will cause developers? Instead of having to deal with a single coherent set of APIs, they will have to deal with multiple overlapping APIs. Also, it seems to me that if an application spans multiple OSs, there will be significant latency issues. E.g. if OS A is servicing 3 applications, and one of the applications (App 2) is also being serviced by OS B, App 2 will have to wait until OS A is finished servicing the requests made by the 2 other applications. What you are suggesting would result in unnecessary complexity, and would wind up being more resource intensive overall than a general purpose OS - like the kinds you find in Windows, Mac, and Linux.


"The agent method places an operational kernel on the client which is a mirror (if necessary) of the processing capability on the remote server. When the system is "online", the kernel cooperates with the server for tasking and processing. When the client is "offline", the local agent does the work, then synchs up the local state with the server when online returns."


The software + services architecture is better because of the reasons I indicated above, because a user can reliably do his work on the client (i.e. he is not at the mercy of an Internet connection), and because data can be synched up just like in your model.


""More efficient" at the cost of much larger demands on local computing resources. Much larger demands on memory (both storage and runtime). Much larger demands on processor speed (the chip has to run the background load of the OS plus any additional support apps running to care for the larger processing load you've accepted)."


Local computing resources are cheap enough, and are far more dependable than the bandwidth your architecture requires.


"At a much higher cost in hardware, software, maintenance and governance.


Companies are not going to accept your argument when a fit client method is available. The fat client's days are numbered by economics and usefulness."


Thin client advocates have been saying this for decades. The market has replied that the empowerment and versatility advantages of the PC outweigh whatever maintenance savings there are in thin client solutions. In other words, it is a user's overall productivity that matters (given the resources he has), and users are overall much more productive and satisfied with PCs than they are with thin clients.


Posted by P. Douglas | May 1, 2008 2:27 PM


portuno:

P. Douglas:

"I don't quite understand what you are saying. Are you saying computers should come with multiple, small, dedicated Operating Systems (OSs)? "


What do you think Windows 7 will be? More of the same aggregated functionality in a shrinkwrapped package? Would you not make the OS an assembly of interoperable components that could be distributed and deployed when and where needed, freeing the user's machine to use its hardware resources for the user experience rather than as a hot box holding every DLL ever made?


"unnecessary complexity"????


Explain to me how a single OS instance running many threads is less complex than multiple OS functions running their own single threads and passing results and state to downstream (or upstream if you need recursion) processes.


What I've just described is a fundamental structure in higher-end operating systems for mainframes. IBM is replacing thousands of servers with only 33 mainframes. What do you think is going on inside those mainframes? And why can't that kind of process work just as well in a single client, or a client connected to a server, or a client connected to many servers AND many clients, fashioned into an ad hoc supercomputer for the period needed?
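(In code rather than words, the multi-kernel idea is just composition. A toy TypeScript sketch, with invented stage names; real kernels would pass richer state, but the shape is the same.)

    // Toy version of the multi-kernel structure: small single-purpose stages,
    // each passing results and state downstream, instead of one monolithic
    // instance juggling every thread itself. Stage names are hypothetical.
    type Kernel<In, Out> = (input: In) => Out;

    // Compose two kernels so the first's output flows downstream to the second.
    function pipe<A, B, C>(first: Kernel<A, B>, second: Kernel<B, C>): Kernel<A, C> {
      return (input: A) => second(first(input));
    }

    const parse: Kernel<string, string[]> = (raw) => raw.split(/\s+/);
    const summarize: Kernel<string[], number> = (words) => words.length;

    const pipeline = pipe(parse, summarize);
    console.log(pipeline("state flows downstream through small kernels")); // 6

Each stage stays small and single-purpose; the complexity lives in the wiring, which is explicit instead of buried in one scheduler.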


"Thin client advocates have been saying this for decades."


The most dangerous thing is to say "this road has been this way for years" while driving into the future with your eyes closed.


If your position were correct, we would never be having this conversation. But we ARE having this conversation, because the industry is moving forward and upward, leaving behind those who say "...advocates have been saying this for decades...".


Yada Yada Yada


Posted by portuno | May 1, 2008 3:48 PM



Posted by Portuno Diamo at 3:53 PM EDT
Updated: Thursday, 1 May 2008 4:04 PM EDT
Wednesday, 30 April 2008
And before that, I was a buggy whip snapper...
Mood:  suave
Now Playing: Lost Horizons - Out of work writers make up new stories for their curriculum vittles (tragic humor)
Topic: Growth Charts

I like reading Joe Wilcox's column at Microsoft-Watch because I enjoy watching trends and reversals. Joe is an honest-hearted guy who is willing to say so when he sees he's been driving on the service road instead of the highway.

So, this article here, "SaaS Sasses Windows," is a turning point I've been anticipating for a long time. Now that the revelation is in, perhaps we all might be able to have some fun dissecting this pickled frog labelled "Microsoft technology" in detail.

But FIRST, snookywookums, you need to do a little homework. I know you're just a baby, but the sooner you start learning about the new world, the less disrupted your old world will be.

Joe noted this article in his blog:

I suggest you sharpen your pencil, stick it in your ear all the way until it pokes out the other side (just to let in a little fresh air) and READ.
In the meantime, I'll start stacking in some reading material.
We'll have this place looking like a real life cultist reading room before you know it.

Posted by Portuno Diamo at 3:18 PM EDT
Updated: Wednesday, 30 April 2008 3:32 PM EDT
Tuesday, 29 April 2008
Slap It and Give It a Name.
Mood:  don't ask
Now Playing: Wah Wah Wah - Gnashing of teeth amid hedge fund brokers (juvenile crime)
Topic: The Squirts

See how fresh this one is? Only 8 seconds old. Still got some slime on it. Yech.

http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=3639&mid=3640&tof=1&frt=1

Re: Here's what vcsy.ob bagholders are saying about portapotty

portuno_diamo    8 seconds ago

Smells like money.

Why isn't Microsoft able to do what the rest of the industry is able to do? Why is Microsoft the only company VCSY is suing for infringing 744? Why is Microsoft the only company out of the giants unable to build anything useful on the web?

Why is Microsoft still billing Silverlight 2.0 as a video player when it's supposed to be a graphical user interface (GUI) platform for web applications, while AIR engineers are "marching on"?

I submit it is because Microsoft is afraid to transgress 521 out in the open. If so, that means the internet work inside Microsoft that might look like 744 is also not being done. Thus we see delays in Dynamics and Live. And Silverlight remains a video player.

And all Ozzie can deliver is an RSS pipeline feeding XML from server to client. But, as posters on a technology forum said when they saw FeedSync: too bad Microsoft doesn't have a processing agent that can work those RSS feeds.

And is EVERYTHING a feed on the semantic web? It sounds like Microsoft (perhaps from the 8 years of Ballmer-enforced discipline) treats every "conversation" as one-directional. First there was SilkRoute, a uni-directional feed. Then there's SmartClient, which is clunky with COM. Then there's Silverlight, nowhere near what a real web application platform should be. Starved features tiptoeing around a ragged edge while Adobe addresses and produces.

Microsoft has the most experience of all in building software. Why can't they?


Posted by Portuno Diamo at 11:08 PM EDT
Updated: Wednesday, 30 April 2008 3:33 PM EDT
