VCSY - A Laughing Place #2
Saturday, 26 May 2007
When you hear the whistle blow, open your eyes.
Mood:  accident prone
Now Playing: 'Run Aground' Ship's engineer climbs to top deck to find all hands have abandoned ship without him. (Mystery / Educational)
Topic: Calamity

SOME people say they know a lot more than the rest of us. Always beware of anyone who says "It's always been like this. It will always be like this." Those are the kind of people who will steer you right into one of those moveable icebergs.

Where X86 Architecture Hits the Wall


April 17, 2007 11:50AM 

X86 chipmakers face a challenge that IBM and Sun do not -- namely, zero control over software and hardware. An x86 CPU and its surrounding architecture must be ready to run system software coded for the least capable platform and every peripheral on the market.

...the weaknesses of the x86 approach to superscalar operation are starting to show. Professional workstation and server buyers who look to x86 systems to replace RISC machines have high expectations that include true parallel operation. In science and technology, creative professions and software development, to name a few, high-end client systems should be able to parallelize their way through heavy-lifting tasks while leaving enough power for real-time foreground interaction.

Likewise, buyers at the high end expect to be able to mix compute-intensive and I/O-intensive server applications, along with multiple virtual machines without sacrificing smooth and balanced operation of all tasks. When these buyers double the number of server CPUs, they expect a server's total performance to rise on a near-linear scale.

If RISC users came to PCs with those expectations, they'd walk away disappointed. While modern x86 server and workstation CPUs are outfitted for parallelization at the core level, PCs' intra-CPU communication, processor support components, memory, peripherals, the host operating system, the VMM (virtual machine monitor), the guest operating system, device drivers, and applications spin a web of interdependencies that, at times, requires that execution or I/O follow a specific path, even if sticking to that path calls for cyclically standing still. The result: You buy more high-end x86 systems than you should have to.

More at URL

What most people do not understand (particularly people who work in an industry, where career attachment to old ways is blinding) is that what used to be is no more. What will be is not yet readily visible, and, as with all disrupted technologies, the adoption spikes catch manufacturers, vendors and users off guard.

Large corporations with preset agendas and large user masses with preset expectations get swept aside if they cannot adapt rapidly to change.

With Microsoft, the entire software industry is changing around them while they maintain a stoic "wait and see" profile. That sort of management inertia is a common element in the failed business models of the past. Such a model works right up to the tipping point of the disruptive wave; after that point, the company can't unload product or assets fast enough to keep the entire mass from rapidly becoming obsolete and worthless.

UPDATE

For more detail, and one of those cutsie air fresheners to hang off your mirror, go here: 

http://vcsy.blogspot.com/2007/05/emails-from-edge-tales-from-crypto.html 

References for Your Edification:

RISC compared with CISC = http://cse.stanford.edu/class/sophomore-college/projects-00/risc/risccisc/

RISC: Reduced Instruction Set Computer

CISC: Complex Instruction Set Computer  

From above URL:

"The Overall RISC Advantage
Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically. In 1977, 1MB of DRAM cost about $5,000. By 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has also become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal."

 

Another Update

Intel: Software needs to heed Moore's Law
http://news.zdnet.com/2100-3513_22-6186765.html
 

By Ina Fried, CNET News.com
Published on ZDNet News: May 25, 2007, 12:39 PM PT

SAN FRANCISCO--After years of delivering faster and faster chips that can easily boost the performance of most desktop software, Intel says the free ride is over.

Already, chipmakers like Intel and Advanced Micro Devices are delivering processors that have multiple brains, or cores, rather than single brains that run ever faster. The challenge is that [1] most of today's software isn't built to handle that kind of advance.

"The software has to also start following Moore's law," Intel fellow Shekhar Borkar said, referring to the notion that chips offer roughly double the performance every 18 months to two years. "Software has to double the amount of parallelism that it can support every two years."
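Borkar's point about doubling usable parallelism is easy to see in code. The sketch below is hypothetical (not from the article): it splits a CPU-bound job into independent chunks with Python's standard `concurrent.futures`, the minimal restructuring a program needs before extra cores help at all.

```python
# Splitting a CPU-bound job into independent chunks so a pool of
# worker processes (roughly one per core) can run them side by side.
from concurrent.futures import ProcessPoolExecutor

def heavy_task(n):
    # Stand-in for a compute-heavy kernel.
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    # Each input becomes an independent unit of work; more cores
    # simply mean more units in flight at once.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(heavy_task, inputs))

if __name__ == "__main__":
    print(run_parallel([100_000] * 8))
```

Nothing here is Intel- or Microsoft-specific; it only illustrates that the parallelism must be expressed in the program before the hardware can exploit it.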

Things are better on the server side, where machines are handling multiple simultaneous workloads. [2] Desktop applications can learn some from the way supercomputers and servers have handled things, but another principle, Amdahl's Law, holds that there is only so much parallelism that programs can incorporate before they hit some inherently serial task.
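Amdahl's Law, mentioned above, fits in a few lines. This sketch (my own illustration, with made-up numbers) shows why the serial fraction, not the core count, dominates:

```python
# Amdahl's Law: with serial fraction s and N cores, speedup is
# 1 / (s + (1 - s) / N); as N grows, speedup is capped at 1 / s.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A program that is 10% serial gains only 6.4x from 16 cores,
# and can never exceed 10x no matter how many cores are added.
print(amdahl_speedup(0.10, 16))
```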

Speaking to a small group of reporters on Friday, Borkar said that there are other options. [3] Applications can handle multiple distinct tasks, and systems can run multiple applications. Programs and systems can also both speculate on what tasks a user might want and use processor performance that way. But what won't work is for the industry to just keep going with business as usual. 

[4]  Microsoft has recently been sounding a similar warning. At last week's Windows Hardware Engineering Conference in Los Angeles, Chief Research and Strategy Officer Craig Mundie tried to spur the industry to start addressing the issue.

[5] "We do now face the challenge of figuring out how to move, I'll say, the whole programming ecosystem of personal computing up to a new level where they can reliably construct large-scale applications that are distributed, highly concurrent, and able to utilize all this computing power," Mundie said in an interview there. "That is probably the single most disruptive thing that we will have done in the last 20 or 30 years."

Earlier this week, Microsoft's Ty Carlson said that [6] the next version of Windows will have to be "fundamentally different" to handle the amount of processing cores that will become standard on PCs. Vista, he said, is designed to handle multiple threads, but not the 16 or more that chips will soon be able to handle. [7] And the applications world is even further behind.

[8] "In 10 to 15 years' time we're going to have incredible computing power," Carlson said. "The challenge will be bringing that ecosystem up that knows how to write programs."

But Intel's Borkar said that [9] Microsoft and other large software makers have known this shift is coming and have not moved fast enough.

[10] "They talk; they talk a lot, but they are not doing much about it," he said in an interview following his discussion. [11] "It's a big company (Microsoft) and so there is inertia."

He said that [12] companies need to quickly adjust to the fact they are not going to get the same kind of performance improvements they are used to without retooling the way they do things.

"This is a physical limit," he said, referring to the fact that core chip speed is not increasing.

Despite the concern, Borkar said he is confident that the industry can rise to the challenge. Competition, for one, will spur innovation.

[13] "For every software (company) that doesn't buy this, there is another that will look at it as an opportunity," Borkar said.

[14] He pointed to some areas where software has seen progress, such as in gaming. He also identified other areas that might be fruitful. [15] In particular, specific tasks could have their own optimized languages. Networking tasks, for example, could be handled by specific optimized networking code.
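Borkar's idea of task-specific languages can be made concrete with a toy example (entirely my own, not anything Intel or VCSY has published): a two-operation "little language" whose programs are plain data, which is what lets a specialized runtime inspect, optimize, or reschedule them.

```python
# A toy task-specific "little language" for string processing.
# Programs are lists of (op, arg) pairs -- data, not code -- so a
# specialized interpreter or optimizer can analyze them before running.
def run(program, text):
    for op, arg in program:
        if op == "replace":
            old, new = arg
            text = text.replace(old, new)
        elif op == "repeat":
            text = text * arg
        else:
            raise ValueError(f"unknown op: {op!r}")
    return text

prog = [("replace", ("a", "o")), ("repeat", 2)]
print(run(prog, "parallel"))  # both ops applied in order
```

A networking-specific language, as Borkar suggests, would follow the same pattern with operations matched to packets and protocols instead of strings.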

Intel has also been releasing more of its own software tools aimed at harnessing multicore performance. Another of [16] Intel's efforts is to work with universities to change the way programming is taught to focus more on parallelism; that way the next generation of developers will have such techniques in the forefront of their minds.

[17] "You start with the universities," Borkar said. "Us old dogs, you cannot teach us new tricks."

 

My take:

[1] Was Microsoft promising Intel things Microsoft cannot now deliver? Look at Intel articles over the past year. The plans Intel made as detailed back in June 2006, when they sold off their cell phone division, were probably based on hopes Vista would be a barnburner. As it is, Vista promises to be the horse (or the bull) locked in the barn afire.

[2] Notice the complaint about parallelism. In other words, Intel is saying 'Hey, we can build the architectures, but, if the software producer (Microsoft is no doubt implied) doesn't provide the kind of virtualization and command/data pipeline management needed to make use of the architecture, don't blame us.' Longhorn and the associated XML/Web-based operating capabilities were intended to allow a processing platform to reach out to the inter-connected outside (via intranet or internet) for interoperable resources (other processor sets able to handle parts of the workload; here is where interoperation is no longer a 'play nice' file-content matter but a 'play along' operational block on the clock [aka deterministic interoperation]). With the cutting of things like WinFS and the elemental technologies that enable applications like WinFS to operate from Longhorn/Vista, the processor is doomed to remain alone... unable to reach out beyond its own proprietary bus structure for help.

[3] The 'business as usual' comment can arguably be pinned on Microsoft's various rewrites of their operating system from 2004, when advanced capabilities (those that would have allowed x86 processors to reach outside their local processing structure to outside processing capabilities in a virtualized form) were cut from Longhorn (aka Vista) and Microsoft returned to the traditional programming and operating systems they had before their vaunted XML/dynamic-languages efforts, which apparently failed or were failed. I don't think Intel can be blamed for not knowing how to architect chips. Chip design, engineering and manufacturing per se are not the problem, I think. Management CAN be blamed for taking the word of a software company that has already demonstrated it can't deliver on its promises or projects. That alone will ultimately prove to be the lead weight around the neck of the chip makers as they swim upstream.

[4] I'll bet they have. Either they can pin the blame on somebody else for preventing them from developing the kind of software resources that can virtualize, arbitrate and manage processing chip resources, or they will have to take the blame for strangling Intel's future.

[5] Well, those of us who've been watching what VCSY claims their patents can do (they claim such by describing the architectural structures in their patents), and any nitwit with an eye for architectural processes, can see what kind of "...large-scale applications that are distributed, highly concurrent, and able to utilize all this computing power" may be theorized (and, one would then reasonably say, deployed) by the VCSY intellectual properties in conjunction with other virtualized, arbitrated, managed and governed technologies such as IBM is able to field.

[6] Good, because Mister Gates and Mister Ballmer have already said the next operating system they make will be different... promise. Uhhh... Mister Microsoft, I hate to break it to you, but wasn't WinFS/Longhorn/Yukon supposed to be the basis for just that software revolution? Are you telling Intel to just hang on, you'll be there eventually? Should they mothball their processing facilities while your campus roller-blades around to some sort of viable option beyond Vista and the Windows XP ME 2 aka Vista/Longhorn? Not trying to be ugly, but this is getting to be a political and marketing farce, and Microsoft appears to be counting on the rest of the technological and investing public to simply not see what VCSY has. No wonder shutting down discussion about VCSY would be such an important goal to certain people possibly representing some of these companies on boards such as RagingBull, where most of this speculative information (based on easily googled fact) can be found.

[7] Well, now, just who is to blame for that when Microsoft is late fielding development tools for dynamic virtualized and arbitrated applications? Hmmm?

[8] In 10 to 15 years, Mister Carlson, Intel will be a gaming chip manufacturer and the business world will be running on IBM power architectures and cell processors. The x86 line will be a distant expensive memory if things continue at this rate.

[9] Well well well... after how many paragraphs do we FINALLY get up enough nerve to name names and point the finger? Well done, Intel. You've finally bought a clue. How much will it have cost by the time you figure out the secret phrase?

[10] Talk and no action is not cheap. Not cheap at all, is it?

[11] Microsoft certainly doesn't seem to have much inertia when they want to get into advertising to find some way out of the business application quandary they seem to have gotten their Office and Operating System lines into. Money in the right management hands tends to have a fascinating lubricative effect. In the hands of incompetent and intransigent management, money empowers inertia.

[12] Retool? Microsoft has had virtualization and arbitration technology on their shelves since 2004. What retooling are we talking about here? Change the engineering teams or change management? Let's have some avocados here, folks, so we can go about making guacamole.

[13] I believe you are correct, sir.

[14] Your homework, dears, is to figure out for yourself what the nice man is saying here to the world at large. I have said for some time I believe Microsoft is burying the technology they should have fielded in Vista in XBox where it can't be dug out easily (they think).

[15] A core element in VCSY technology is the Very High Level Language Emily, which is targeted precisely at providing higher-level languages for specific verticals; 'verticals' may mean work-tasks at very granular levels just as it may mean verticals in industry disciplines and businesses.

[16] Just what would you teach in those universities? How to program like Jeff Davison and Aubrey McAuley and Luis Valdetaro? Would you mind giving them a bit of credit in your textbooks? Hmmm?

[17] That's right, Mister Borkar. In humane societies, old dogs who are put in a position where they can no longer look after themselves are put up for adoption by someone else or euthanized. Which will it be for Intel's management and staff if they can't convince these mean nasty software companies to step up to their responsibilities and make or buy some healthy dogfood for a change?

As Amdahl's Law points out, all the preprocessing and parallel plumbing in the world won't help you when the software you're running can't arbitrate virtualized operations within a serial stream and can't distribute virtualized and arbitrated functions. 

What do you think, children? Am I being too harsh on Intel? No harsher than analysts will be once the story gets out. We currently have fewer than 700 people reading this bilge I put out. What will happen when the number is in the thousands and these readers take these writings to experts and those experts agree? What do you think will happen? Better to take care of your own problems than to preach to someone else's family about how they should take care of their wayward relatives.

Perhaps the dawning is happening at Intel, over a year after Microsoft themselves recognized the problems they would have once long-delayed Longhorn started using the architectures for real (not just in mock-ups and simulations).

PS - Speaking of putting "credit" where credit is due, I would like to take this opportunity to acknowledge I probably never would have taken Intel to task for mismanaging expectations and realities in the software they have grown so dependent on were it not for the writings of a poster on Raging Bull VCSY who happens to have worked for Intel at "times in the past". I don't know if that past is "years" ago, "months" ago or mere "days", as someone like a consultant can work for somebody one day and "not" the next. Makes no matter to me. Wherever he may have gained his knowledge, we all owe him a debt of gratitude for opening this doorway on our view of the software/hardware dependencies that govern these two huge industries.

yers truly - portuno 


Posted by Portuno Diamo at 10:28 PM EDT
Updated: Sunday, 27 May 2007 1:25 AM EDT
Comments (2)

Saturday, 26 May 2007 - 5:38 AM EDT

Name: "baveman"

The Road to Abilene
In his book, The Abilene Paradox, Professor Jerry Harvey introduces a model that aptly describes what happened in this proposed consolidation effort. Harvey tells the story of a visit to his in-laws in Coleman, Texas. One day while they were enjoying a game of dominos and cold lemonade on the shady porch of their house, someone says, "Let's go to Abilene and have lunch at the cafeteria." They piled into an old Buick without air conditioning for the 53-mile drive to Abilene, with the temperature reaching 104 degrees. The food at the cafeteria was predictably bad. As they returned hot and sweaty (and with indigestion) from their journey, they realized that no one really had wanted to go to Abilene in the first place. Each person thought the other wanted to go, and they just went along with the group's decision.

Harvey says that many groups find themselves going to places (Abilene) that no one really wants to go (Abilene). He identifies a "Groupthink" in organizations when they fail to "manage agreement." Groups that fail to manage agreement develop "false consensus." People in organizations agree to act believing that everyone else is in agreement. No one knows about the lack of true consensus because no one is willing to ask hard questions about why they are going to Abilene in the first place. When they return, everyone says they didn't want to go and wonders how they got there.

Saturday, 26 May 2007 - 1:55 PM EDT

Name: ajaxamine
Home Page: http://ajaxamine.tripod.com

Bave - I would think corporations with singular dictatorial heads fare worst at real consensus development. That's probably the malady most common in corporate demise.
