Foreign Exchange (Forex) is the arena where one nation's currency is exchanged for that of another. The foreign exchange market is the largest financial market in the world, with the equivalent of over 1.5 trillion USD changing hands daily, more than three times the aggregate of the US equity and Treasury markets combined. Unlike other financial markets, the Forex market has no physical location and no central exchange. It operates through a global network of banks, corporations, and individuals trading one currency for another. The lack of a physical exchange enables the Forex market to operate on a 24-hour basis, moving from one time zone to another across all the major financial centers.
On the foreign exchange market you can trade major and exotic currency pairs and crosses quickly and easily, from your home or your office, through our software platform, Delta Trading.
First launched in 2001, Delta Trading® is continuously being enhanced by Delta Stock’s own developers who work to add further features and improvements to the trading platform to ensure Delta Trading® remains the premier online trading software.
We offer both individual and institutional customers instant "click and deal" trades on live, dealable quotes.
FX trading is margin-based: you can open positions as large as 200 times your initial deposit. You can also earn interest on a strong currency position even when the market is not moving.
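For illustration: at 200:1 leverage, a 1,000 USD margin deposit can control a position of up to 200,000 USD (1,000 × 200). A move of just 0.5% in the pair then equals 1,000 USD, i.e. 100% of the deposit, so the leverage magnifies losses exactly as much as gains.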
At Delta Stock we don't charge our customers commission on any Forex trade, regardless of size.
We strive to be as helpful to our customers as possible, which is why we are constantly improving and enriching our services. Our customers can execute trades directly from streaming prices through our platform, which is fast, reliable, stable, easy to use, and secure, and offers powerful functionality. Even in the most demanding trading environments, orders are executed and confirmed within seconds.

Real-time tables and real-time interactive charts are both flexible and customizable; a transparency feature lets customers work with other applications while still monitoring their trading activity. The platform is proprietary software created in-house by Delta Stock's information technology department, so we are uniquely able to keep developing it to meet the evolving needs of our customers. All trading activity is tracked on screen in real time, including current open positions, real-time profit and loss, margin availability, account balances, and full transaction history.

Our friendly and knowledgeable staff is available 24 hours a day to assist customers with any questions. Customers can trade currencies via our online dealing room, or by telephone in English, around the clock on business days, and can chat with our dealers at any time.
The Map function allows Forex traders to switch between important information and workspaces that can be customized to individual trading techniques. Build a workspace for a specific chart or currency pair, and then switch back and forth with just one mouse move.
Types of Orders:
Market Order - Tells the dealing desk that you want to enter the market at its current price.
Stop Order - Tells the dealing desk that when the market moves to a certain price, you want this order to be filled. Stop orders can be used in two ways: 1) to get out of a losing position; 2) to enter the market along with the trend.
Limit Order - Likewise tells the dealing desk to fill the order when the market moves to a certain price. Limit orders can also be used in two ways: 1) to lock in the profits of an open position; 2) to enter the market against the prevailing trend.
OCO Order - Stands for "One Cancels the Other" and combines a stop and a limit order. When either the stop or the limit order is filled, the other half of the OCO order is canceled. This order is mostly used to protect open positions and to prevent unwanted positions from being opened.
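To make the OCO mechanics concrete, here is a minimal sketch in Python (illustrative only, not Delta Trading code; the prices and levels are made up):

    # Minimal sketch of OCO ("One Cancels the Other") semantics for a long
    # position: a stop below the market and a take-profit limit above it.
    # Whichever level trades first is filled; the other side is canceled.
    def run_oco(prices, stop_price, limit_price):
        for price in prices:
            if price <= stop_price:
                return ("stop filled", price)    # limit side canceled
            if price >= limit_price:
                return ("limit filled", price)   # stop side canceled
        return ("no fill", None)

    # Hypothetical: long EUR/USD, stop at 1.1950, take-profit at 1.2100.
    print(run_oco([1.2010, 1.1980, 1.2105], 1.1950, 1.2100))
    # -> ('limit filled', 1.2105)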
You can access Delta Trading wherever you go, whenever you want, through:
the electronic platform, installed on any computer;
the Web-based version - Delta Trading Web;
the system for trading via a cellular phone - Delta Trading WAP.
Through Delta Trading you can place market, limit, stop and OCO (one cancels the other) orders. We also provide you with the unique option of placing conditional orders linked to other conditional orders. This way you can build “logical trees” and follow complex strategies without being in front of the monitor every minute of the day.
We recommend that all prospective customers register for the free demo version of Delta Trading before they start trading with real money.
Thursday, March 8, 2007
Chain Message (Pesan Berantai)
Jakarta residents are advised to limit outdoor activities at night, starting Wednesday (7 March 2007) and for the next six days, because heavy rain and strong winds accompanied by lightning are expected at night (see "Badai Jacobs"). This was announced by the Head of the Public Relations and Protocol Bureau of the DKI Jakarta Provincial Government, who added that BMG forecasts light to moderate rain in the mornings and afternoons. Meanwhile, Sudin Pertamanan (the Parks Sub-department) has been monitoring trees at risk of falling, especially along the main protocol roads, and residents should take care to avoid falling trees and wind-blown dust.
Web Services for Remote Portals (WSRP)
Abstract
Web Services for Remote Portals (WSRP) are visual, user-facing, web-services-centric components that plug and play with portals or other intermediary web applications that aggregate content or applications from different sources. They are designed to let businesses provide content or applications in a form that requires no manual, content- or application-specific adaptation by the consuming intermediary applications. Because Web Services for Remote Portals include presentation, service providers determine how their content and applications are visualized for end users and to what degree adaptation, transcoding, translation, etc. may be allowed.
WSRP services can be published into public or corporate service directories (UDDI), where they can easily be found by intermediary applications that want to display their content. Web application deployment vendors can wrap and adapt their middleware for use in WSRP-compliant services. Vendors of intermediary applications can enable their products to consume Web Services for Remote Portals. Using WSRP, portals can easily integrate content and applications from many internal and external providers: the portal administrator simply picks the desired services from a list and integrates them; no programmers are required to tie new content and applications into the portal.
To accomplish these goals, the WSRP standard defines a web services interface description using WSDL, all the semantics and behavior that web services and consuming applications must comply with in order to be pluggable, and the meta-information that has to be provided when publishing WSRP services into UDDI directories. The standard allows WSRP services to be implemented in very different ways, be it as a Java/J2EE-based web service, a web service implemented on Microsoft's .NET platform, or a portlet published as a WSRP service by a portal. It also enables generic adapter code to plug any WSRP service into an intermediary application, rather than requiring service-specific proxy code.
WSRP services are WSIA component services built on standard technologies, including SOAP, UDDI, and WSDL. WSRP adds several context elements, including the user profile, information about the client device, the locale, and the desired markup language, which are passed to the service in SOAP requests. A set of operations and contracts is defined to enable WSRP plug and play.
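To make the context passing concrete, here is a rough Python sketch of the kind of data a consumer sends with a getMarkup call. The field names below are simplified stand-ins; the normative types and operation signatures come from the WSRP WSDL, and a real consumer would serialize these structures into SOAP requests.

    # Simplified model of WSRP request context (illustrative, not the spec).
    from dataclasses import dataclass, field

    @dataclass
    class UserContext:
        user_id: str
        profile: dict = field(default_factory=dict)   # e.g. name, preferences

    @dataclass
    class MarkupParams:
        locales: list                 # e.g. ["en-US"]
        mime_types: list              # markup types the consumer accepts
        mode: str = "view"            # portlet mode
        window_state: str = "normal"

    def get_markup(portlet_handle, user, params):
        """Stand-in for the SOAP getMarkup call: returns markup fragment."""
        return "<div>markup for %s (%s, %s)</div>" % (
            portlet_handle, params.locales[0], params.mode)

    html = get_markup("weather-portlet",
                      UserContext("jdoe", {"locale": "en-US"}),
                      MarkupParams(locales=["en-US"], mime_types=["text/html"]))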
http://www-106.ibm.com/developerworks/webservices/library/ws-wsrp/?open&l=101,t=gr
Managing PowerBuilder Source
Making Remote Development Work
How does your organization manage its PowerBuilder source? Your application has several PBLs, which contain several hundred objects. Do you use the native PowerBuilder check-in/check-out? Do you employ a "real" revision control system, like Microsoft's Visual SourceSafe? These traditional techniques do not work for our organization, because we do a lot of off-site development. Developers may be on the road, writing code on the plane or at the client site. A couple of us routinely work from home, where we do not have remote access to the office network. Therefore we apply a rather Neanderthal code-sharing model: each developer has a complete set of the PBLs, and at regular intervals we merge the source and redistribute the merged PBLs to all the developers.

The merge is a critical process. It's extremely frustrating to devote hours to implementing a new feature, or fixing a difficult bug, and then have that work lost because the merge was done wrong. This article describes the policies and procedures we employ, as well as some technology we've developed, that help us manage our source and minimize merge errors.

Comments showing who changed what when

When you are merging changes to objects, you need to know what changed. If I messed with the ItemChanged! script, and my colleague fixed a bug in RetrieveStart!, the chances of merging those changes correctly are much higher if we've documented them. Comments are essential.

Why not use a differencing utility? That's not a bad idea, and we'll revisit it below. Comments have several advantages, however. They tell you more than "this was changed", the key datum provided by a differencing utility: a good comment explains the nature of the change, e.g. this bug was fixed or that business rule was altered. It should also tell you who made the change, and when. That way, when something breaks, you can inspect the script to see what changed since last time, and you know who to ask when you have questions about the change. And, mostly, comments (properly done) are easier to use than a differencing utility when it's time to merge code.

Our standards require documenting all changes in two places: in the script itself and in a dedicated "documentation" event. All comments have the programmer's ID and the date. Long changes have an "end" comment to show where a given change stops:

    // Hoyt 03/15/2001 Allow users to edit
    // only if they have the ADMIN role
    if f_user_has_role( "admin" ) then
        ...
    end if
    // End: Hoyt 03/15/2001

The ID might be the programmer's name or initials. It should be a unique string, however, that doesn't match a common code construct, so you can search an object for an ID without picking up PowerBuilder code. My colleague has to use his name, for example, because his initials "ARM" collide with myriad "parm" references. The date is constrained to the formats "MM/DD/YYYY" or "M/D/YYYY", so a search for the date can be automated, as discussed below.

Every window and UserObject has a documentation event declared in the object's lowest-level base class, so all the descendants have one. Each script change must be mentioned in the documentation event; the minimum is a description of who changed which script, and when. This provides a guide for the build manager who has to merge code: he goes to the documentation event to see what scripts changed, then goes to those scripts to find the specific changes.
Often we put in the changed code as well, so we can inspect just the documentation event to get an idea of how the object has changed over time:

    /* documentation
    ...
    Hoyt 03/15/2001 ue_constructor! Added admin role functionality
    // Hoyt 03/15/2001 Allow users to edit
    // only if they have the ADMIN role
    if f_user_has_role( "admin" ) then
        ...
    end if
    // End: Hoyt 03/15/2001
    */

Comments are always added to the bottom of the documentation event, so they appear in chronological order. For functions, general comments are added at the bottom of the function header, with more specific comments in the body of the function. You cannot declare an event for a menu, so our menus have a documentation() function that serves the same purpose.

With DataWindow objects (DWOs), we create a dedicated label with the text "Comment":

Figure 1: DataWindow object comment in a dedicated label

Comment labels are always yellow, to make them stand out. For grids, the comment labels are located in the summary band below the first column, as in Figure 1. The summary band is ordinarily hidden (height = 0), and the programmer pulls down the summary band to expose the comment when needed. For a free-form DWO, the label can be anywhere. The visible expression makes the comment invisible at run time:

Figure 2: A comment label's expression tab

A great place to put the comment is the font.escapement attribute, as shown in Figure 2. Set the escapement to zero, which has no effect on the label, and write the comment within slash-asterisk pairs. Double-click the escapement to open the Modify Expression dialog, where you can write your comment (see Figure 3):

Figure 3: Comments in the escapement attribute

These DWO comments can be invaluable for purposes other than tracking what has changed. Use them to describe the business rule that drives the DWO's complex SQL. Note the importance of some visual aspect, like the overall DWO width, so another programmer leaves that aspect in place. Figure 3 shows a good example: the next programmer is advised to keep a certain column updateable; that sort of detail is easily lost, and could break your application!

Applying these methods, you can put a comment in every object. Some programmers write their documentation event comments as they go, so they don't forget. Others wait until they are done modifying a given object, then search it for their ID in the Library Painter browser, noting which scripts were modified today, so every script gets mentioned in the documentation event. (That's why you should pick a different ID, if your name happens to be "String" :). The comments are valuable in themselves, and are essential for tracking changes when it's time to merge code. If you want to promote commenting, apply the policy that undocumented changes will not be merged! That's probably too draconian, but you can always point out in a friendly way that changes are much more likely to make it into the build if they are properly documented. No one wants their code lost!
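Because the ID and date formats are fixed, the search really can be automated. Here is a rough sketch of the idea in Python, operating on exported source files (the directory name, file extensions, and ID are assumptions for illustration; the real FMO utility described below is built in PowerBuilder):

    # Sketch: find objects carrying an ID comment dated on or after a cutoff.
    import os, re
    from datetime import datetime

    PROGRAMMER_ID = "Hoyt"
    CUTOFF = datetime(2001, 3, 1)
    # Matches "Hoyt 03/15/2001" or "Hoyt 3/15/2001"
    pattern = re.compile(re.escape(PROGRAMMER_ID) + r"\s+(\d{1,2}/\d{1,2}/\d{4})")

    def changed_since(path, cutoff):
        """True if the file contains an ID comment dated on or after cutoff."""
        with open(path, encoding="latin-1", errors="replace") as f:
            text = f.read()
        return any(datetime.strptime(d, "%m/%d/%Y") >= cutoff
                   for d in pattern.findall(text))

    for name in os.listdir("exports"):                  # hypothetical export dir
        if name.endswith((".sru", ".srw", ".srd")):
            if changed_since(os.path.join("exports", name), CUTOFF):
                print(name, "was changed by", PROGRAMMER_ID)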
Automated identification of modified objects

It's common for each PowerBuilder programmer to have a dedicated "development PBL", which contains all the objects the programmer changes. When object X is going to be changed, the programmer moves X to the development PBL and modifies it there. When it's time to merge the code, the programmers all send their development PBLs to the build manager.

This approach has some problems. First, it doesn't do anything to promote commenting: the programmer can make all kinds of uncommented changes, and the build manager has to figure out what changed. Second, it's a pain: you want to edit this object, but first you've got to move it to your development PBL. Third, and more importantly, it is way too easy for the programmer to omit changed objects, which means those changes are lost the next time the programmer copies the merged PBLs, to everyone's frustration. This is especially true with DataWindows, which the programmer might have modified by right-clicking a DataWindow control and selecting "Modify DataWindow" from the popup menu. Since the programmer never touches that DWO in the Library Painter, the chances are good that the DWO doesn't get moved. PB7 makes it easier to find modified objects, because you can sort objects by modification date in the Library Painter. It is still too easy to lose modifications, though.

In my organization, we don't use the "development PBL" approach. To make it as easy as possible to get changed objects to the build manager, we created the Find Modified Objects (FMO) utility. The utility eliminates the need to manually accumulate changed objects in a development PBL, because it automates the process of finding and gathering those objects. Developers modify objects "in place", i.e. in the application PBLs where the objects reside. Then they run FMO, which identifies all the modified objects and copies them into a new PBL for distribution to the build manager. See Figure 4.

Figure 4: The Find Modified Objects utility

The FMO window is built into our application, so every programmer gets the latest version with each distribution of the application PBLs. Here are the most important FMO features, moving from top to bottom of the window:

· The Find After date-time
FMO ignores all objects that have not been modified after the specified date-time. Ordinarily, the date-time would be set to a time just after the last merge, so all subsequent changes are identified. The merged PBLs are always completely regenerated before they are redistributed, so there's a well-defined last-merged date-time.
To the right of the Find After EditMasks are a CheckBox and a couple of RadioButtons. If the "Use modified date-time" CheckBox is checked, then the object's modification date is sufficient to deem the object "officially modified"; no other criteria are applied. In particular, it is not necessary to find the user's ID (as discussed below). The RadioButtons allow you to apply this criterion to all objects or to just DataWindows. If the "For DataWindow Objects only" RadioButton is selected, for example, then FMO will require the ID string for all objects except DWOs. This is an acknowledgement of the fact that programmers are less likely to comment DWOs. If the CheckBox is not checked, then objects will only be considered "officially modified" if they were changed after the Find After date-time and they contain a comment with the programmer's ID and a date on or after the Find After date-time. In other words, if "Hoyt 03/15/2001" is not found, the object is not collected.
The most "conservative" method is to simply look at the modified date for all objects; this guarantees that everything you changed will get shipped to the build manager. Nothing will ever be lost! It places more burden on the build manager, however, because there will be more items, and there is no guarantee that the objects contain any comments identifying what changed.
Some objects might not have any real changes at all, but were inadvertently recompiled when the programmer inspected an object, then saved it on the way out from force of habit. Leaving the CheckBox unchecked means that only items with the required ID comments will be identified as changed, which will identify the smallest number of modified objects. This is conservative in the other direction: FMO will only find changes where the programmer has put in the proper comment, i.e. those objects where the programmer really intended for the changes to be saved.

· ID
This is where I put "Hoyt", because that's how I identify myself in my comments. Someone else might put their initials. FMO will look for this ID string if the user has not decided to use just the modified date for all objects. There's an option to look for just the date, so "Hoyt" can be omitted from the comments, but this is discouraged.

· PBLs
This section allows the user to specify where her PB.INI file resides. FMO examines the PB.INI to identify which PBLs comprise the application. If you have a development PBL, specifying it as the "Exclude PBL" tells FMO to avoid examining the objects therein, on the theory that you intend to ship those in any event. The "Previous PBLs" edit lets you specify the subdirectory that contains the previous release of the PBLs, so FMO can (optionally) do differences.

· Export parameters
FMO applies ORCA to import the modified objects into a new PBL. That process fails occasionally, especially with routines that contain ORCA code (for some reason), so FMO exports to disk any scripts that fail to import. The programmer can manually import those objects using the Library Painter import facility. The export parameters control how the export is done, and affect some aspects of the successful imports as well. For example, here's a Library Painter comment after running FMO with the "Prepend PBL" and "Append ID" items checked:

    pwr_core.pbl – For identifying a control's backcolor (Allen) {Hoyt}

The object's "home PBL" is placed at the beginning of the comment; this makes it easy for the build manager to redistribute the object to the PBL where it belongs. The ID is placed at the end of the comment, inside curly braces. It tells everyone who messed with this object last! The example also illustrates our practice of including the original author's name in parentheses, to build pride of authorship, and so subsequent programmers can heap karma (good or bad) on the originator. This group is also where "Do differences" is turned on.

· Import PBLs
Here you identify the PBL into which modified objects will be imported. By clicking the [ID and Date] button, the PBL name is set to the programmer's ID plus today's MMDD, e.g. "0315" on March 15. As stated, all these controls remember their state, so the programmer only has to set them up the first time. A semi-facetious user-interface invention is the auto-button, indicated by the automobile icon: auto-buttons automatically "click themselves" if their CheckBox is checked. Thus, every time I run FMO, the two auto-buttons (1) delete all extant export files, and (2) set the name of the import PBL.

Clicking [Find Modified Objects] puts FMO to work. It takes three or four minutes to run, with our 50-megabyte, 1800+ object application, on a 500 MHz notebook. It sequentially searches all the objects modified after the Find After date-time, looking for a comment with the ID followed by a date on or after the Find After date.
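In rough Python, FMO's selection rule amounts to the following (a sketch over exported source files, reusing the changed_since() helper from the earlier sketch; the real utility works on PBLs through ORCA):

    # Sketch of FMO's "officially modified" rule: saved after the cutoff,
    # and, unless "use modified date-time" is chosen, also carrying an ID
    # comment dated on or after the cutoff.
    import os
    from datetime import datetime

    def find_modified(export_dir, cutoff, use_mtime_only=False):
        hits = []
        for name in os.listdir(export_dir):
            path = os.path.join(export_dir, name)
            if datetime.fromtimestamp(os.path.getmtime(path)) <= cutoff:
                continue                  # untouched since the last merge
            if use_mtime_only or changed_since(path, cutoff):
                hits.append(name)
        return hits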
The primary FMO output is the import PBL, newly populated with the set of modified objects, which will go to the build manager. FMO also shows the modified objects in its DataWindow, so you can print it. Optionally, FMO creates differences, formatted as one large HTML file. FMO places a pair of curly braces after each block of changes, so you can quickly move through the HTML by searching for the curly braces. See Figure 5.

Figure 5: Differences generated by Find Modified Objects

The differences output shows removed text in green, added text in blue. You can also use FMO to fire up Ken Howe's excellent PBDelta differencing utility (see www.pbdr.com): double-click a row in the FMO DataWindow, and FMO passes PBDelta the names of the source files to compare. The HTML differences are a good way to get an overview of changes, but the amount of information can be overwhelming. PBDelta requires more exploration to find all the changes, but they are presented more intelligibly.

Identifying collisions

A manual merge is only required when more than one programmer has modified an object. The build manager uses another utility to identify these collisions. Confusingly, it is called Find Objects (no Modified!). If you're wondering why two utilities are needed: the Find Objects utility's facility for finding duplicate objects predates FMO, and it hasn't yet migrated to FMO.

The Find Objects utility has the primary purpose of discovering which PBL contains a given object (see Figure 6). It is handy because you can specify just the fragment of a function name that you remember, e.g. "dw*attribute", and it will instantly present you with all the objects of a given kind that match the fragment. It is "instant" because the master list of objects is preloaded and the matching objects are "found" via a filter.

Figure 6: The Find Objects utility after a [Search]

To apply Find Objects, I copy an application object to one of the distribution PBLs created with FMO, then "open" that application and add the other distribution PBLs. The idea is to create a "dummy application" that contains only the modified objects from all the developers. Then I run Find Objects, refresh the object list, and click [Find Duplicates]. Find Objects identifies all objects that appear more than once. If the objects are different, then Find Objects examines them to identify which version is larger and/or newer, on the theory that the larger object (or, if there's a tie, the newer object) is the one that should be kept. The modified date-time and size are placed in the Find Objects DataWindow, and the "this version is largest/newest" object is displayed in uppercase, so Find Objects' opinion is visible after the list is printed. In Figure 7, DBD's version of f_is_table_column() is larger than mine, so his is the Find Objects favorite shown in uppercase.

Figure 7: The Find Objects DataWindow after running [Find Duplicates]
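The duplicate-detection logic is simple enough to sketch in Python (each distribution PBL modeled as a directory of exported objects; the directory names are hypothetical):

    # Sketch of [Find Duplicates]: group objects by name across all the
    # developers' distribution directories; flag names occurring more than
    # once and guess the keeper: largest file, ties broken by newest mtime.
    import os

    def find_duplicates(dist_dirs):
        seen = {}                          # object name -> list of paths
        for d in dist_dirs:
            for name in os.listdir(d):
                seen.setdefault(name, []).append(os.path.join(d, name))
        for name, paths in sorted(seen.items()):
            if len(paths) > 1:
                keeper = max(paths, key=lambda p: (os.path.getsize(p),
                                                   os.path.getmtime(p)))
                print("collision:", name, "-> keep", keeper)

    find_duplicates(["hoyt0315", "dbd0315"])   # hypothetical distribution dirs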
Doing and verifying the merge

If there are collisions, i.e. if Find Objects identifies duplicate objects (that differ) among the distribution PBLs, then the merge has to be done by hand, using the information in the documentation event. I start by adding all the distribution PBLs to the top of the application's library list. This gives me the ability to modify objects in any of the distribution PBLs.

A problem arises with objects that make function calls specifying themselves as an argument. For example, our DataWindow base class UserObject is called "uo_dw", and that object makes several calls to functions that take a uo_dw as an argument. That's fine until you have a uo_dw in another PBL that's higher up in the library path. In that case, PB will give you a compile error when you try to save the lower-in-the-path uo_dw: "Bad argument list for function: f_whatever", where f_whatever() takes a uo_dw as an argument. It's as though PowerBuilder thinks the higher-up uo_dw is the "real" one, and the lower-down uo_dw is a bogus fraud. The compile error prevents you from saving the merged object! You get around this by modifying the uo_dw that's highest in the library path. Find Objects' opinion about the newer/larger objects can be helpful with this. Occasionally, you figure out that the lower-in-the-path version is the one you should keep, and you have to change the order in the library path.

Doing the merge is usually straightforward. Even for the most widely used shared objects, like a UserObject of global functions, it's unusual for more than one developer to modify the same script. If only one person has modified a script, I'll cut and paste the whole script, instead of extracting just the segment that was changed. That picks up any changes that were not properly commented, and is easier too. This is a great occasion for a quick code review, by the way. If there's a script-level collision, and I'm not confident that the comments are adequate, then I'll run PBDelta to look for changes in detail. The FMO window remembers the last list of modified objects, so I don't need to re-run FMO each time. Run the application, open the FMO window, and double-click the object at issue to examine it in PBDelta: it's easier than running PBDelta directly, IMHO.

As I do the merge, I change the Library Painter comment of one of the duplicate objects to something like "NOT MOVED" or "MERGED WITH DBD CHANGES", so I don't accidentally keep the wrong version! This can be helpful later, in the unlikely event that a programmer comes to complain that some changes weren't merged properly. "According to the comment, I merged your stuff into mine, must have missed something... Whoops, there's no comment in 'documentation' about this change!" When all the developers' distribution PBLs have been merged, I run Find Objects' [Find Duplicates] again, to make sure that the duplicated items have something like "NOT MOVED" in all but one of the duplicate versions.

Next I use the Library Painter to copy all the objects in the distribution PBLs to a "master" PBL with a name like "new0315.pbl" (if it is March 15th). Later we can look in just that one PBL to see what changed in this build. I carefully do not copy the "NOT MOVED" (etc.) objects. Finally, I copy the objects in new0315.pbl to the application PBLs, guided by the PBL name that appears at the beginning of each Library Painter comment. This is why FMO has that "Prepend PBL" option. Someday FMO will have a "Distribute objects to PBLs" button that does the copy based on the prepended PBL name, but so far it's a manual process. If there are more than a couple dozen objects to move, I'll make a backup of new0315.pbl, then move the objects instead of copying them, so they are out of the way during the rest of the process. I'll copy or move all the objects for a given application PBL at once, which makes it easier to make sure I've selected the right objects.

Next comes another verification step: the Find Objects utility has a [Verify Merge] facility that confirms that a PBL's objects arrived intact in the application PBLs. I select each of the developer distribution PBLs and click [Verify Merge], and Find Objects confirms that each object in the developer PBL is identical to the object in the application PBL. That won't be the case for those duplicate objects that were merged into another object, but those instances are obvious because they have those uppercase "ignore me" comments. When [Verify Merge] finds a discrepancy, it is usually because I've screwed up the process of manually copying the modified objects to their place in the application PBLs. The main danger is simply failing to copy a given object. [Verify Merge] catches such bonehead mistakes.
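The check itself is just a content comparison. Sketched in Python over exported objects (directory names are made up):

    # Sketch of [Verify Merge]: confirm every object a developer shipped
    # arrived byte-for-byte identical in the application PBLs (modeled
    # here as export directories).
    import filecmp, os

    def verify_merge(dev_dir, app_dir):
        for name in sorted(os.listdir(dev_dir)):
            target = os.path.join(app_dir, name)
            if not os.path.exists(target):
                print("MISSING:", name)     # the classic bonehead mistake
            elif not filecmp.cmp(os.path.join(dev_dir, name), target,
                                 shallow=False):
                print("DIFFERS:", name)     # expected only for hand-merged objects

    verify_merge("hoyt0315", "app_pbls")    # hypothetical directories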
And still one more check: I remove the distribution PBLs from the library path of my application, then run [Find Duplicates] one more time, hoping not to find any! This step confirms that I didn't copy any object to the wrong PBL, leaving the new version and the old version both resident in the application.

That done, I do a build, run the application a bit to kind-of confirm that it works, and send mail to all the developers telling them that there is a new release of the "real" PBLs for them to move to their computers. Usually I ask that the coders all send their changes by end of business Friday, and I do the merge process early the following Monday, so programmers have an e-mail and fresh PBLs when they arrive at work. It gives the development team a great excuse for not working over the weekend :).

Given a team of four or five developers, all following the rules, and mostly working in separate parts of the application so there aren't that many collisions, the merge process takes anywhere from 15 minutes to an hour. It would take a lot longer, and be much more error-prone, without the commenting conventions and the utilities.

A future release of FMO will provide for automatic generation of release notes. Given something like the following in the documentation event:

    Hoyt 03/15/2001 FixedBug 612 You can add facilities without a GPF
    DBD 03/16/2001 NewFeature [Delete User] now prompts before deleting any records

FMO will search for the magic strings and accumulate a report of fixes and enhancements that will go to the developers and QA team with each new distribution of the code.
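That search is easy to prototype. Here's a rough Python sketch of the planned feature (the magic strings follow the convention above; everything else is an assumption for illustration):

    # Sketch: scan documentation-event text for "FixedBug" / "NewFeature"
    # entries and accumulate a release-notes report.
    import re

    NOTE = re.compile(r"(\w+)\s+(\d{1,2}/\d{1,2}/\d{4})\s+(FixedBug|NewFeature)\s+(.+)")

    def release_notes(doc_event_text):
        fixes, features = [], []
        for who, date, kind, rest in NOTE.findall(doc_event_text):
            line = "%s %s: %s" % (date, who, rest)
            (fixes if kind == "FixedBug" else features).append(line)
        return fixes, features

    fixes, features = release_notes(
        "Hoyt 03/15/2001 FixedBug 612 You can add facilities without a GPF\n"
        "DBD 03/16/2001 NewFeature [Delete User] now prompts before deleting any records\n")
    print("Fixed:", fixes)
    print("New:", features)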
Keeping and Using the PBL History

We keep all the PBLs as zip files on a network drive; the separate distribution PBLs and the merged "new" PBL are all retained against future need. This has come in handy when, for example, a bug in the bug tracker has been identified as fixed, but there is no corresponding comment in the relevant object's documentation event, and we still have the bug! Given the date of the alleged fix, we can go to the developer's distribution PBL and find the code that was lost. Finding the right PBL is straightforward because the distribution PBLs are named using the developer's ID and the date's MMDD, e.g. "hoyt0315.pbl".

The lost code problem can occur when a developer fails to get the new release of PBLs, perhaps because he's on a road trip and cannot refresh his PBLs from afar. Subsequently, he submits changes based on the superseded version, and interim changes by another programmer are lost. Retaining the PBLs with the modified objects makes it comparatively easy to find and overcome this kind of error.

The build manager should be able to catch such lost code errors in advance, however, by examining the documentation event of the respective objects. If the programmer's version omits any items in the "real" object's documentation event, then the submitted object is obviously based on an obsolete version, and a careful merge is required. The 'documentation' inspection is the obvious course if the coder mentions that she was unable to update her PBLs after the last redistribution.

The zipped PBLs are also essential when a new bug is introduced, if an examination of the comments and the code doesn't give you a clue. It's easy to drop back to earlier versions until the bug goes away -- just successively unzip and run -- and then PBDelta is hauled out to identify suspect code. Perhaps it's low-tech, compared to SourceSafe, but it works for us.

Summary

The process of merging changes to your PowerBuilder application can be much smoother if you make effective use of documentation and automation. Carefully comment your changes, so the build manager can accurately identify what you've changed. Someone looking at your script should be able to tell exactly what was added, and (if it's not too difficult) what was there before you mucked with the code, in case you've introduced a bug! Including your name and the date makes it easier to follow up when there are problems or questions. A universal documentation event is a handy place to put comments identifying which scripts have changed, so the next programmer knows where to look to find out how this object has mutated over time. It's also great for those object-level business rule discussions that don't really belong in any specific script. If you identify the "home PBL" in each object's Library Painter comment, then it's easier to accurately return the objects to their home PBLs after the merge process. Putting your name in that comment as an "I touched this last" clue can be helpful too.

The Find Modified Objects window and the Find Objects utility show how parts of the merge process can be automated. The actual merge remains manual, but the automated facilities practically guarantee that no modifications are lost. FMO can be very conservative, grabbing everything that has changed since the specified date-time, or you can stipulate that FMO has to find the programmer's ID and a corresponding date. Find Objects deals effectively with duplicate objects and catches merge errors. Computers are good at mind-numbing tasks like searching for ID strings, identifying duplicate objects, and confirming that modified objects actually arrived in their home PBLs. Automate such tasks and you'll have fewer build errors, and less frustration among your developers and users.