Midas Concept Simplified

Title: Midas Concept Simplified
Question: What is MIDAS?
Answer:
Hi All,
I read this article somewhere. It gives an overview of what MIDAS is all about.
================================================================================
Inprise MIDAS Overview

To build modern distributed applications successfully, developers need a reliable, easy-to-use, and highly scalable data access solution. Many tools provide such a solution; one of the most advanced is Inprise's MIDAS.

MIDAS is essentially a data communication technology. The two parts of the MIDAS style of communication are the provider and the client dataset. These two components exchange information with each other in what is called a data packet. At a very low level the data packet is an array of bytes, but logically it contains a table of data with sophisticated support for data types, including BLOBs and nested tables.

Client Dataset

The client dataset can obtain a data packet from a provider, store it in an internal memory cache, and make it available for modification. All changes to the data are maintained in an internal log. The original data is accessible through the client dataset's Data property; the change log is represented by the Delta property. At any time the data owned by the client dataset can be stored in an external file; all changes are stored along with the original data. In addition to allowing changes to the cached data, the client dataset is well equipped for advanced sorting, filtering, and searching operations. The client dataset knows nothing about the real source of the data it owns: once it has received an array of bytes from a provider, you can do whatever you want with that data.

Provider

The process of building the data packet is managed by the provider component. A developer has full control over how data is prepared and packaged.
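As a concrete illustration of the provider/client-dataset pair described above, the two components can be wired together in a few lines of Delphi. This is a minimal sketch, not from the article; the component names (qryCustomers, provCustomers, cdsCustomers) are assumptions.

```delphi
// Minimal sketch: a TDataSetProvider exposes a query's result set as a
// data packet, and a TClientDataSet fetches and caches it.
// Component names are illustrative, not from the original article.
procedure TForm1.LoadCustomers;
begin
  provCustomers.DataSet := qryCustomers;     // provider packages this query's rows
  cdsCustomers.ProviderName := 'provCustomers';
  cdsCustomers.Open;                         // requests a data packet and caches it
  // Original rows live in cdsCustomers.Data;
  // pending edits accumulate in cdsCustomers.Delta.
end;
```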
It is possible to build the data packet from the result of an SQL query, or to prepare it manually from any source of information imaginable. Regardless of how the data is obtained, the final result is always a unified data table encoded as an array of bytes. The variety of field types known to the client dataset makes it possible to provide extremely rich data: strings, integers, floating-point values, date-time fields, BLOBs, and more. There is also a special kind of field, called a dataset field. With this field a programmer can store an entire dataset as one field value of the result set, effectively describing a master/detail relationship. Using dataset fields, a programmer can go so far as to incorporate an entire database schema into a single array of bytes for manipulation by the client dataset.

The data packet also carries other information related to the data. Two important parts of this information are constraints, for both row- and field-level validation, and custom properties. A constraint is a simple SQL expression to evaluate plus a corresponding error message. When a client application updates the data contained in a client dataset, constraints passed to it from the provider are automatically enforced. This happens on the client (not the server) when data entry takes place. If one of these constraints is violated, the update is rejected and an exception with the corresponding error message is raised. The programmer can add custom properties to a data packet when the provider packages it. Each property has its own name, a value, and a lifetime flag; the flag defines whether the custom property becomes part of the dataset's Delta data packet.

Working with a Data Packet

As soon as the data packet is placed in the internal cache of the client dataset, you can manipulate its content.
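The custom properties mentioned above are attached in the provider's OnGetDataSetProperties event. The sketch below uses illustrative component names; each property is a [Name, Value, IncludeInDelta] triple, where the third element is the lifetime flag.

```delphi
// Sketch: attaching custom properties to the outgoing data packet.
// The provider hands back a variant array of [Name, Value, IncludeInDelta]
// triples; names and values here are illustrative.
procedure TForm1.provCustomersGetDataSetProperties(Sender: TObject;
  DataSet: TDataSet; out Properties: OleVariant);
begin
  Properties := VarArrayOf([
    VarArrayOf(['PackagedAt', Now, True]),    // True: survives into Delta
    VarArrayOf(['Source', 'CUSTOMERS', False])
  ]);
end;
```

On the client side the value can be read back with, e.g., cdsCustomers.GetOptionalParam('PackagedAt').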
You can navigate between records, locate records you are interested in, and filter subsets of records using advanced filtering operators such as LIKE, GETDATE, TRIM, LOWER, MONTH, and many others. You may also sort records in the client dataset in any order, without regard to their initial sequence. The client dataset is not aware of the real source of these records and treats all data columns equally. Of course you can modify (insert/update/delete) rows of the data as well. The client dataset keeps track of all changes made, and maintains in-memory indices for you to provide different orderings of the data packet's records.

The client dataset also supports the Document/View architecture, so you can have different views on the same data packet, sharing its data between several datasets. This operation is called cloning. You may have as many clones of the data packet as you want.

A common requirement in mobile computing is to make multiple changes to the data packet over a continuous period of time. It is very helpful to be able to store the data packet to a temporary file and load it back when necessary, without losing the changes made to the data. The client dataset provides this capability. This model of computing is called the briefcase model.

Applying Updates

Once the data in the data packet has been successfully modified, all changes may be applied to their original source. To accomplish this, you take the change log maintained by the client dataset and pass it to the provider. If the source of the data packet is a database, the provider is intelligent enough to send the updates to the corresponding tables. You have total control over the update process and, if necessary, you may add your own business logic or override it completely. When a record from the change log can't be applied to the original source, the provider writes it to an error log along with an error message and the current data values from the original source.
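Applying updates as described above is a single call on the client dataset. A sketch, again with an assumed component name; the argument is the error tolerance the provider uses when deciding to commit or roll back.

```delphi
// Sketch: pushing the change log (Delta) back to the provider.
// ApplyUpdates' argument is the number of errors tolerated:
//   0  = all-or-nothing (any failure rolls everything back)
//   -1 = apply as many records as possible
// The function returns the number of records that failed.
if cdsCustomers.ChangeCount > 0 then
  if cdsCustomers.ApplyUpdates(0) = 0 then
    ShowMessage('All changes resolved')
  else
    ShowMessage('Some records failed; handle them in OnReconcileError');
```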
When the change log is processed, the provider either commits all successful changes to the database or rolls them back. Commit or rollback depends on the number of errors that occurred and on your directions as to how many errors are allowed. The provider then sends all problems logged in the error log back to the client dataset.

Reconciling

The client dataset checks the error log received from the provider and compares it with its own change log. While iterating through the change log, it tries to locate each record in the error log. If the current record of the change log is not found there, the client dataset merges the record into the original data packet and removes it from the change log. For each problematic record a special event handler is fired. The developer has full access to the new and old field values of the record in the change log, their current values in the database (when available), and the error returned from the provider. A pre-built error reconciliation dialog may be used to handle update errors, alongside your own code.

Data Packet Delivery

We now know that three different kinds of data packets may flow between the provider and the client dataset. The following table summarizes every possible data flow.

No  From            To              Purpose
1.  Provider        Client dataset  Data is packaged in the data packet and placed in the client dataset's internal cache.
2.  Client dataset  Provider        The content of the client dataset's change log is sent back to the provider to be resolved with the original source of information.
3.  Provider        Client dataset  The error log produced while the provider resolves changes is sent back to the client dataset, to reconcile errors with the client dataset's change log.

There are two possible architectures for MIDAS-based applications. The first consists of a monolithic application that hosts both the provider and the client dataset components.
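The "special event handler" and pre-built reconciliation dialog mentioned above come together in the client dataset's OnReconcileError event. A sketch: HandleReconcileError is supplied by the reconcile-error dialog unit (RecError) that Delphi can add to a project; the component name is illustrative.

```delphi
// Sketch: handling update errors with the pre-built reconcile dialog.
// Requires the Reconcile Error Dialog unit (RecError) in the project.
procedure TForm1.cdsCustomersReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  // The dialog shows the record's old, new, and current values and lets
  // the user skip, correct, cancel, merge, or refresh the failed record.
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;
```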
The second consists of multiple applications, each containing either providers or client datasets. In the first case you have direct programmatic access to all properties of both the provider and the client dataset components: it is no big deal to get the data packet in one place and put it in another. The second architecture is only a little more challenging if you rely on standard protocols with remote procedure call support, such as IIOP (CORBA) or RPC (DCOM, DCE), to deliver data from one application to another. Of course, you always have the option of using plain TCP/IP to implement the information exchange between your applications. Because all data packets are represented as arrays of bytes, you can easily send them across the wire using any of the above technologies.

Summary

Inprise MIDAS provides a high-performance mechanism for communicating database information. Two MIDAS components, the client dataset and the provider, exchange data packets with each other. The provider is responsible for building the data packet and for applying data packet updates to the original source of data. The client dataset enables manipulation of the data packet's content. Different techniques may be used to deliver data packets from the provider to the client dataset and from the client dataset back to the provider; these will be considered later.
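The briefcase model described earlier is a good example of how far the byte-array representation goes: the whole packet, change log included, can round-trip through a file. A sketch with an assumed file name:

```delphi
// Sketch: the "briefcase" model. Saving writes the cached data together
// with the change log; loading restores both, so edits made offline
// survive between sessions. File name is illustrative.
procedure TForm1.SaveBriefcase;
begin
  cdsCustomers.SaveToFile('customers.cds', dfBinary);  // dfXML for an XML packet
end;

procedure TForm1.LoadBriefcase;
begin
  cdsCustomers.LoadFromFile('customers.cds');
  // Later, once a connection to the provider is available again:
  // cdsCustomers.ApplyUpdates(0);
end;
```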