shodZ

Tuesday, August 31, 2004

von Neumann

Any change in something by a couple of orders of magnitude is not just a change in speed, but a change in kind.

XP for Embedded Systems

Planning

"Do only the planning you need for the next horizon—At any given level of detail, only plan to the next horizon —that is, the next release, the end of the next iteration," Kent Beck, Extreme Programming Explained.
XPers and agile proponents believe that project requirements are the most fluid part of any project. Gathering requirements is considered a regular and periodic activity that's done throughout the project.
Traditional process models, on the other hand, gather and freeze requirements at the beginning of a project. Afterwards, engineers treat changes to requirements as unhealthy disruptions and indications of incomplete work in previous phases. In contrast, the XPer assumes it's impractical to discern a complete and accurate set of requirements in an isolated first phase of a project. (Many studies of software projects strongly back up this claim; two are mentioned at the end of this article.) What is needed, then, is a process that allows the complete and accurate set of requirements to emerge as the project proceeds.
This brings up an important feature of XP and all other agile methodologies—their iterative nature. Iteration is one of the most powerful features of agile development; even if you adopt nothing else, anyone who writes software can put it to use immediately. Software process models founded on an iterative or evolutionary approach are now everywhere, including all of the agile methodologies, the Rational Unified Process, and the Microsoft Solutions Framework.
The iterative principle is simple. The project is broken up into small, meaningful subprojects where all the activities are performed: planning, design, code, and test. This gets working, bug-free software into the hands of the customer as early as possible so that the clarity of vision that can only come from watching the code run is available soon. The operative word here is feedback. Feedback with actual working code that represents a subset of the system is obtained early and regularly on an XP project.
The customer should be encouraged to use this feedback to refine and verify his vision and thus the requirements. The developers also become an important source of insight as they begin to see the system take shape. This technique exploits the fact that software is soft. That is, software is relatively easy to change (when written properly).
However, in the embedded world it's common to develop the hardware along with the software. Remember, XP evolved in the context of personal computer applications where the hardware and associated device drivers are a given. So with great caution we must discern the difference between a hardware requirement and a software requirement.
Hardware requirements are very difficult to change at a later date. For example, if the platform requires sound, and you place a buzzer on the board, the customer can't change his mind to something more sophisticated such as modulating tones because the equipment for this is not on the board. So the customer must think this requirement through ahead of time. Once the customer has chosen the hardware capabilities, XP dictates that the particular requirements for the sounds themselves should be put off until we can hear something and the customer can sit with us to get the sounds just right.
Another edict of XP maintains that you should prioritize iterations in order of criticality, that is, develop the most critical items first. Quite often in embedded systems development, this results in a lot of "bottom up" coding. For various good reasons, you almost always have to develop some of the device drivers first. Here are some of those reasons:
  • The real-time requirements of the system depend almost completely on the performance of these drivers, and the team is most worried about meeting these requirements.
  • Never-used-before hardware that's present in the system (such as EEPROMs or LCD displays) needs to be verified so that the hardware design can be blessed for production.
  • The rigorous testing required by XP will require some input to and output from the system.

Designing

"We will continually refine the design of the system, starting from a very simple beginning. We will remove any flexibility that doesn't prove useful," Kent Beck, Extreme Programming Explained.
In the design phases, XP strives for a just-in-time approach that stresses simplicity and clarity. Further, XP maintains that we should design and code for today and not tomorrow. Kent Beck correctly observes that this is one of the hardest principles for programmers to learn. When faced with writing a device driver, a lot of programmers try to write a driver that can be used by anyone who ever uses this device. This lofty goal is expensive in terms of up-front development cost, code size, and probably code performance. This investment only pays off if the driver is used on another project in a different way at a later time.
The embedded systems developer should instead opt for clear code with obvious hooks for generalizing later. For example, I recently wrote a serial peripheral interface port driver and opted not to use interrupts. I noted clearly in the code where I disabled interrupts for this device and stubbed out an interrupt service routine function with a note that, if interrupts are ever needed, this function needs to be written.
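To make that concrete, here's a minimal sketch of the approach, assuming a hypothetical memory-mapped SPI peripheral; the register addresses and bit masks are invented for illustration, so substitute your part's actual map:

    // Polled SPI driver: the simplest thing that could possibly work today.
    // SPI_CTRL, SPI_STATUS, SPI_DATA, and the bit masks below are
    // hypothetical; adjust them to your processor's register map.
    #include <stdint.h>

    static volatile uint8_t* const SPI_CTRL   = (volatile uint8_t*)0x4000;
    static volatile uint8_t* const SPI_STATUS = (volatile uint8_t*)0x4001;
    static volatile uint8_t* const SPI_DATA   = (volatile uint8_t*)0x4002;
    static const uint8_t SPI_INT_ENABLE = 0x80;   // interrupt-enable bit
    static const uint8_t SPI_TX_READY   = 0x01;   // transmit-buffer-empty flag

    void spiInit(void)
    {
        // Interrupts deliberately disabled: this driver polls.
        // If interrupt-driven operation is ever needed, set SPI_INT_ENABLE
        // here and flesh out spiIsr() below.
        *SPI_CTRL = (uint8_t)(*SPI_CTRL & ~SPI_INT_ENABLE);
    }

    void spiSend(uint8_t byte)
    {
        while ((*SPI_STATUS & SPI_TX_READY) == 0)
            ;                       // busy-wait until the transmitter is free
        *SPI_DATA = byte;
    }

    void spiIsr(void)
    {
        // Stub: not used today. Write this if interrupts become necessary.
    }

The stub costs almost nothing today, but it documents exactly where the generalization hook lives.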
Well-designed embedded firmware has a clear demarcation between application-layer code and hardware-specific code. For example, the application should call an abstracted function to turn on an LED; it need not, and should not, know how this is actually accomplished. However, this technique is ripe for abuse by those who tend toward overdesign. Great diligence is needed by managers and developers to keep this type of layering simple and easy to understand.
The first mistake that's made is to put in several layers where only one is needed. For example, an over-designer might put in a display manager that manages the LCD (using a driver) and the LEDs (using another driver). Now the application is two steps removed from control of the LEDs. This additional complexity provides almost no clarity and burdens the system with extra code. Layering, partitioning, encapsulating, and abstracting are all vital to high-quality code, but it's easy to get too much of a good thing.
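As a concrete illustration of that single, thin layer, here's a minimal sketch; the GPIO register address and the function names are hypothetical:

    // led.h -- all the application ever sees.
    void ledOn(int which);
    void ledOff(int which);

    // led.cpp -- the one hardware-specific layer. PORT_LED is a hypothetical
    // memory-mapped GPIO register; adjust for your board.
    #include <stdint.h>

    static volatile uint8_t* const PORT_LED = (volatile uint8_t*)0x4010;

    void ledOn(int which)  { *PORT_LED = (uint8_t)(*PORT_LED |  (1u << which)); }
    void ledOff(int which) { *PORT_LED = (uint8_t)(*PORT_LED & ~(1u << which)); }

    // Application code: one step from the hardware, with no "display
    // manager" in between.
    void signalFault(void) { ledOn(0); }

One layer, one job: the application can't touch PORT_LED, and the board layout can change without touching the application.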
Besides, requirements aren't the only aspect of the system that's expected to evolve as the XP project progresses. The code design will evolve as well. Evolving from a simple design to something more complex is much easier and natural than evolving from an overly complex system to something simpler. Further, in the former case it is much more likely that the final design will be closer to optimal. The phrase that XP says should drive the designer is "What's the simplest thing that could possibly work?"
Finally, I'm amazed at how often an operating system is thrown into an embedded system without engineers first performing an accurate cost-versus-benefit analysis. XP maintains that any additional code in the system has a cost and burden associated with it that's probably greater than you think. An operating system is generally a lot of additional code. Also, many operating systems are feature rich and flexible. Complexity, size, and cost are always greater with this type of product. Make sure your system needs such an operating system. Engineers opt for the powerful operating system thinking that they're covered for any eventuality. The reality often is they're burdened with a monster whose documentation rivals the New York City white pages.
In sum, include the simplest operating system you can. If a simple foreground/background design will do, by all means use that instead.
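For reference, here's a foreground/background design in miniature; the ISR hookup and the task functions are hypothetical placeholders:

    // Foreground/background sketch: a timer ISR (the foreground) sets a
    // flag; an endless background loop does all the real work. No
    // operating system required.
    static volatile int tickElapsed = 0;

    void pollKeypad(void)        { /* scan keys here */ }
    void updateDisplay(void)     { /* refresh LCD here */ }
    void serviceSerialPort(void) { /* drain UART here */ }

    void timerIsr(void)             // foreground: keep it short
    {
        tickElapsed = 1;
    }

    int main(void)
    {
        for (;;)                    // background: the whole "scheduler"
        {
            if (tickElapsed)
            {
                tickElapsed = 0;
                pollKeypad();       // periodic work, once per tick
                updateDisplay();
            }
            serviceSerialPort();    // run-to-completion, every pass
        }
    }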

Coding

"We will carefully craft a solution for today's problem today, and trust that we will be able to solve tomorrow's problem tomorrow," Kent Beck, Extreme Programming Explained.
XP dictates that code development begin as soon as possible. The embedded developer, however, must often wait for hardware. Most teams in this situation opt just to write code and compile it. I think this is a grave mistake. Compilers routinely come with simulators for the processors they support, and these can provide a great avenue for an early, productive start to coding. Even if the simulator is $1,000 extra, get it. I've never known this investment to fail to pay off. Simulators usually provide a way to get data into and out of the simulated environment so you can test with real data. They also provide ways to measure execution time so you can check performance constraints.
If a simulator isn't available, get an evaluation board. Virtually all embedded processors have evaluation boards available for sale. Even if the hardware folks are telling you that you'll only have to wait two weeks for working hardware (HA!), buy it. The feedback that comes from testing code is the best feedback there is. Why wait? Remember, untested code is practically worthless.
Even after the real hardware is available, a smart strategy is to maintain a dual-platform approach where the code can always be run on a known-good platform (simulator or evaluation board) as well as on the real hardware. This can be invaluable in isolating hardware problems from software problems.
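One common way to keep both platforms alive is conditional compilation. In this sketch, TARGET_HW is a hypothetical build flag and the UART register address is invented:

    #include <stdint.h>
    #include <stdio.h>

    #ifdef TARGET_HW
    // Real hardware: write to a (hypothetical) memory-mapped UART.
    static volatile uint8_t* const UART_TX = (volatile uint8_t*)0x4020;
    void debugPutChar(char c) { *UART_TX = (uint8_t)c; }
    #else
    // Simulator or host build: the console stands in for the UART.
    void debugPutChar(char c) { putchar(c); }
    #endif

    // The application never knows which platform it's on -- invaluable
    // when separating hardware bugs from software bugs.
    void printBanner(void)
    {
        const char* msg = "BUILD OK\r\n";
        while (*msg)
            debugPutChar(*msg++);
    }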
One of XP's more controversial requirements is pair programming. I find the main critics of pair programming usually haven't tried it with an open mind. The world of embedded systems programming, however, presents interesting challenges. A lot of embedded projects only need one or two programmers. In these cases, it's hard to require pair programming for all code development. Still, you can make good compromises that let you take advantage of pair programming. For example, in the case of a single developer, pair programming with another developer on another project can be a regularly scheduled activity. This same developer can then help other developers by pair programming with them. Strict XPers might balk at this compromise, but, as stated earlier, this is a controversial topic. All in all, we've found that productivity gains from pair programming are significant and can't be ignored.
Another XP cornerstone for coding is to integrate often. This works as well in embedded systems development as anywhere else. Most programmers underestimate how long integration takes—often by a lot. By integrating often the programmers understand sooner and get constant reminders of how time consuming this activity can be. Then they can use this lesson to more accurately schedule the next subproject.
Finally, XP prescribes that test code be written first—before the code it will test. Though this is a noble and worthwhile goal, noncompliance with this principle is high. Like pair programming, meeting this goal requires a lot of discipline in an engineering organization.
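As a tiny illustration of the discipline, here's a sketch in which the test exists before the function it verifies; checksum() and its expected values are hypothetical:

    #include <assert.h>
    #include <stdint.h>

    uint8_t checksum(const uint8_t* data, int len);   // doesn't exist yet

    // The test is written first; it pins down the interface and the
    // expected answers before any implementation exists.
    void testChecksum(void)
    {
        const uint8_t msg[] = { 0x01, 0x02, 0x03 };
        assert(checksum(msg, 3) == 0x06);   // simple additive checksum
        assert(checksum(msg, 0) == 0x00);   // empty-buffer edge case
    }

    // The simplest implementation that passes:
    uint8_t checksum(const uint8_t* data, int len)
    {
        uint8_t sum = 0;
        for (int i = 0; i < len; ++i)
            sum = (uint8_t)(sum + data[i]);
        return sum;
    }

    int main(void) { testChecksum(); return 0; }
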
Testing

"We will write tests before we code, minute by minute. We will preserve these tests forever, and run them all together frequently. We will also derive tests from the customer's perspective," Kent Beck, Extreme Programming Explained.
The XP mantra here is strict. All code is verified through automated unit testing that's kept up to date all the time. Further, all the unit tests must run at a 100% pass rate all the time.
The embedded systems arena presents special challenges when trying to abide by this standard. XP was developed in a field where display devices and hard disks are a given; such components are extremely useful, if not essential, to automated unit testing. The hard disk is used to store scripts and test results. The display is useful as a unit test interface to report problems and results. The fact is, most embedded applications don't have a hard disk and many don't have any display device at all.
Some of the test scripts can and should coexist with the code. However, another drawback of embedded systems is the limited memory for code space. It's simply not possible in many cases to have elaborate test scenarios stored with the code image. What to do?
We've come up with what we believe is a simple and elegant solution to this problem. Almost every embedded system has (or can have) an RS-232 serial port. This is by far the most common interface to embedded targets. A serial port is also stable, simple, and cheap. Further, every PC comes with at least one serial port. Many PC-based programs for serial ports provide a way to write scripts where canned messages are sent out the serial port and responses are gathered. You can compare these responses to the expected responses and discern and display pass/fail results. You can also transfer binary files. With this test platform, a lot of the sophistication and size of the test code can be offloaded to the PC in the form of easy-to-read test scripts.
We prefer that human-readable (ASCII) messages be passed back and forth over the serial port. For example, "START MEMORY TEST" could be sent to the target and a reply such as "MEMORY TEST: FAILED AT ADDRESS 12345" could be sent back. For complicated algorithmic systems (for example, image processing), you could send a raw binary file to the device for processing and have the results sent back for storage, display, and automatic comparison with a known correct answer.
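Here's a minimal sketch of the target-side half of such a harness. serialGets(), serialPuts(), and runMemoryTest() are hypothetical names you'd wire to your own UART driver and test code; the stubs below just let the sketch run on a PC, in the dual-platform spirit described earlier:

    #include <stdio.h>
    #include <string.h>

    // Host-test stubs: on the real target these would wrap the UART
    // driver and the actual memory test. All three are hypothetical.
    void serialGets(char* buf, int len)
    {
        if (fgets(buf, len, stdin))
            buf[strcspn(buf, "\r\n")] = '\0';   // strip the line ending
    }
    void serialPuts(const char* s) { puts(s); }
    int runMemoryTest(unsigned long* failAddr) { *failAddr = 0; return 0; }

    void testCommandLoop(void)
    {
        char cmd[64] = "";
        char reply[64];
        for (;;)
        {
            serialGets(cmd, sizeof cmd);
            if (strcmp(cmd, "START MEMORY TEST") == 0)
            {
                unsigned long addr;
                if (runMemoryTest(&addr) == 0)
                    serialPuts("MEMORY TEST: PASSED");
                else
                {
                    sprintf(reply, "MEMORY TEST: FAILED AT ADDRESS %lu", addr);
                    serialPuts(reply);
                }
            }
            else
                serialPuts("ERROR: UNKNOWN COMMAND");
        }
    }

    int main(void) { testCommandLoop(); return 0; }

A PC-side script then just sends each canned command and diffs the ASCII replies against the expected answers.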
When we implemented this technique, we found it very powerful and believe it satisfies both the XP requirements and the spirit of those requirements. On all future designs, we're recommending a dedicated serial port just for this purpose. Note that a simple 3-pin header plus some inexpensive silicon is all that's required.
Another good way to unit test is to use a simulator. I mentioned this previously as a way to get a quick start writing and testing code. Simulators are made to run on platforms such as Windows and almost always provide simulated I/O. Rigorous unit testing is greatly facilitated since the host machine has the necessary peripherals (hard disk, display) built in. However, you'll not be able to test with the peripheral hardware (such as serial EEPROMs) that will be present on the final system.

XP is A-OK

We have found that both the XP techniques and the principles underlying those techniques are sound. To fit properly into the embedded systems domain, however, XP requires some tweaking. You'll find that it's a tenet of XP to expect to tailor the process for the peculiarities of each environment. Such tweaking is relatively straightforward, so developers can use XP on embedded systems and still achieve XP's overall goals.

XP in a nutshell

The cool people
Various luminaries created The Agile Manifesto in response to the flaws of big up-front design. Their seductive arguments contain much wisdom, gleaned from numerous failed projects.
Observing that requirements change throughout a project, the agile crowd embraces change and welcomes modifications at every phase of the project. The agile crowd believes great programmers make the best team members. Well, duh. But what happens to the less-than-great developers? It's statistically impossible for everyone to be above average. Managers can't ignore the fact that some team members just won't be as awesome as the agile crowd demands.

The agile crowd disses schedules. "Great software takes time; it'll be done when it's done," they say. Though there's much truth in this, real business pressures demand schedules. Sometimes it's geometry that governs deliveries; a narrow window happens only every couple of years in which to launch a spacecraft to Mars. Miss it, and you physically can't deliver the mission for years to come. Less dramatically but just as important is ensuring adequate cash flow: get the product into the market so the company can pay its bills.

The most visible agile method today is eXtreme Programming (XP). Software literature abounds with stories of XP; some 20 books promote the idea. Originated by Kent Beck, enhanced by dozens of others, XP sometimes seems to be taking the software world by storm.

XP's philosophy is that everything changes all of the time. People, tools, requirements, features, and the code are all in a constant state of flux. Instead of trying to immobilize the world, to institute a halt while we generate a project that meets today's specs, XPers embrace and even try to provoke change. Beck believes that software projects work best when guided by many, many small course corrections rather than just a few big ones. As he puts it, "The problem isn't change, per se, because change is going to happen; the problem, rather, is the inability to cope with change when it comes." That's a laudable concept. However, the implementation leaves, in my opinion, much to be desired.

Traditional software engineering attempts to delay coding until the requirements are nailed down. In XP the code is everything. Jump in and start coding today. To quote advocate Ron Jeffries, "Get a few people together and spend a few minutes sketching out the design. Ten minutes is ideal—half an hour should be the most time you spend to do this." Then start coding.

Needless to say, that's a radical notion, one that frankly terrifies me. Embedded systems usually don't have a Windows Update feature. They've got to be right; in some cases errors can lead to death. Ten minutes of design is not the path to carefully analyzed software.

XP takes a few software engineering ideas which work, and, as they say, turns the dial up to 10. If code inspections are good (and they are), then inspect constantly. In fact, programmers work in pairs, each pair sharing a single machine; one types while the other audits.

If tests are good (and they are), then let the tests define functionality. Developers create tests in parallel with their code; no function is done until it passes all of the tests.

If customer interaction is good (and it is), then in XP you may not develop any code unless a customer lives with you, spending 40 hours a week with the team. The on-site customer compensates for the lack of a spec; developers constantly lob questions at this (presumably savant-like) team member.

If bad code should be trashed (and it should), then all code is "refactored" (rewritten) whenever it can be improved, and by any team member, since everyone is responsible for all of the code.

XP is a fascinating and different approach to the problem of developing software. I'm entranced with their test-first, test constantly, and don't-move-on-till-the-tests-pass philosophy. If only most of us practiced such aggressive checking! Refactoring is also a great idea, though I'd argue that we should attempt to write great code from the outset, which minimizes the number of refactorings needed.

What the UML Is and Isn't

  • The Unified Modeling Language (UML) is merely a standard diagramming notation: boxes, bubbles, lines, and text. The Object Management Group (OMG) UML specification calls it "a standard way to write a system's blueprints."
  • Yet in the context of software development and the UML, acquiring the skill to read and write UML notation is often incorrectly equated with skill in object-oriented analysis and design (OOA/D).
  • Models are abstractions of something. They ignore certain details or summarize. Consider the most important point: the UML does not define any models. The UML is not a method; it is just raw diagramming notation.

Monday, August 30, 2004

Defining an SOA

This is a nice definition of SOA; it actually comes from one of the pioneers in the field (Big Blue).
SOA is the concept that Web Services implement; it specifies that an application can be made up of a set of independent, yet cooperating, subsystems or services.

Sunday, August 29, 2004

Requirements that have shaped and are shaping Web Services

  • Suitability both for distributed operation within an application and for the use of generic services across applications
  • Suitability for exchanges within an organization and between organizations, requiring cross-platform support
  • Concordance with existing internet infrastructure as much as possible
  • Ability to scale
  • Solid Internationalization
  • Tolerance of Failure
  • Strong support in general software development and business workflow management tools
  • Suitability for the most trivial request/response scenarios as well as for the most sophisticated orchestration, transaction, and security concerns where necessary

OOP Concepts

  • Inheritance -- a class derives data and behavior from its parent class
  • Polymorphism -- the same call behaves differently depending on the actual type of the object (see the sketch after this list)
  • Encapsulation -- a property of objects: data and the methods required to act on it are bundled together
  • Abstraction -- exposing only the essential interface while hiding the implementation details
  • Dynamic and static binding -- late and early binding, respectively
  • Persistence -- the ability of an object to live beyond the scope that created it
  • Concurrency -- concurrent execution
  • Reflection -- the ability of a program to introspect itself
  • Object composition -- I still have to check this one out myself!!!
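Here's a toy C++ sketch, with all classes invented for illustration, showing several of these concepts at once:

    #include <iostream>

    class Shape {                         // encapsulation: data + methods together
    public:
        virtual ~Shape() {}
        virtual double area() const = 0;  // abstraction: interface only
    };

    class Circle : public Shape {         // inheritance: Circle is-a Shape
    public:
        explicit Circle(double r) : radius(r) {}
        double area() const { return 3.14159265 * radius * radius; }
    private:
        double radius;                    // hidden state
    };

    class Square : public Shape {
    public:
        explicit Square(double s) : side(s) {}
        double area() const { return side * side; }
    private:
        double side;
    };

    int main()
    {
        Circle c(1.0);
        Square s(2.0);
        Shape* shapes[] = { &c, &s };     // a heterogeneous collection of Shapes
        for (int i = 0; i < 2; ++i)
            // polymorphism + dynamic (late) binding: each call resolves to
            // the actual object's area() at run time
            std::cout << shapes[i]->area() << '\n';
        return 0;
    }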

UDDI

  • UDDI stands for Universal Description, Discovery and Integration.
  • UDDI consists of white pages, where one can find addresses, contacts, and other identifiers; yellow pages, where one can find industrial categorizations based on standard taxonomies; and green pages, where one can find the requisite technical information.
  • UDDI stores all requisite information via four information types -- business information, service information, binding information, and information about specific services.
  • Business information -- stored through the businessEntity element, which also includes support for the yellow pages.
  • Service information -- the green-page information is stored via the businessService and bindingTemplate elements.
  • It has two different APIs: one for the inquirer and one for the publisher.
  • How UDDI copes with failure: a four-step process (see the sketch after this list).
  • 1. Prepare the program for the web service, caching the required binding to use at run time. 2. Invoke the service using the cached bindingTemplate data. 3. If the call fails, use the binding key value and the getBindingTemplate API call to get a fresh copy of the binding template. 4. Compare the new information with the old; if it differs, retry the failed call. If the retry succeeds, replace the cached data with the new copy.
  • Remember that the UDDI registry also has to provide mechanisms for identity and authorization.
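To make that four-step recovery process concrete, here's a hedged C++ sketch of the control flow. BindingTemplate, invokeService(), and fetchBindingTemplate() are hypothetical stand-ins, not a real UDDI toolkit API; a real client would issue the registry's inquiry calls where noted:

    #include <string>

    // A loose mirror of the registry's binding data; hypothetical.
    struct BindingTemplate {
        std::string bindingKey;
        std::string accessPoint;    // where the service is invoked
    };

    // Toy stand-ins so the sketch compiles; a real client would make
    // SOAP calls here (e.g., the getBindingTemplate inquiry above).
    bool invokeService(const BindingTemplate&) { return false; }
    BindingTemplate fetchBindingTemplate(const std::string& key)
    {
        BindingTemplate fresh;
        fresh.bindingKey = key;
        return fresh;
    }

    bool callWithRecovery(BindingTemplate& cached)
    {
        // Steps 1-2: the binding was cached at prepare time; invoke with it.
        if (invokeService(cached))
            return true;

        // Step 3: the call failed, so fetch a fresh copy by binding key.
        BindingTemplate fresh = fetchBindingTemplate(cached.bindingKey);

        // Step 4: if the registry's copy differs, retry; on success,
        // replace the cached data with the new copy.
        if (fresh.accessPoint != cached.accessPoint && invokeService(fresh))
        {
            cached = fresh;
            return true;
        }
        return false;
    }

    int main()
    {
        BindingTemplate cached;
        cached.bindingKey  = "uuid:1234";               // hypothetical key
        cached.accessPoint = "http://example.com/svc";  // hypothetical endpoint
        return callWithRecovery(cached) ? 0 : 1;
    }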

Principles of SOA

  1. Dynamic services replace static components -- WSDL provides a way of describing the services one is offering, thus providing a kind of semantic interoperability (in the technical sense, at least).
  2. Service exposure and reflection replace traditional system integration -- As some surveys have rightly pointed out, 60% of the money today is spent on system integration. With SOA, the fine granularity of Web Services will save enterprises millions of dollars.
  3. Coding for broad applicability supersedes coding for reuse -- Instead of providing interfaces only for Java compatibility to speed up applications, services will now have multiple interfaces, with the primary being SOAP-based bindings, to ensure compatibility with all providers and consumers.
  4. Ad hoc upgrades supplant disruptive upgrades -- Thanks to the dynamism of both service descriptions and their invocations (WSIF), real-time upgrades are something in the near future.
  5. Scalability handled bottom-up instead of top-down -- This, I think, is by far the most important advantage. Remember that in the RISC vs. CISC case, CISC won. These guys should have realized this a lot earlier.
  6. Platform dependence gives way to platform irrelevance -- Using the three cornerstones spawned via XML (WSDL, SOAP, and UDDI), we have finally reached a stage where we can talk about platform independence.
  7. The federation model of software replaces the dictatorship model -- Given the high degree of loose coupling possible, it will no longer be possible to enjoy a monopoly, because of the complexity involved.

Saturday, August 28, 2004

Paper n Ink, Mainframes, and Microsoft

Paperless Offices
End of the mainframe era
Do these ring any bells??
Yes, the doom of the Redmond company was predicted too, but well, haha. As in the previous cases, not all wisdom turns out to be the way you think it will.

Role-Playing CIOs

The Technology Leader
The Business Leader
The Strategist and Mentor
The Corporate Influencer