When data is replicated over a decentralized Peer-to-Peer (P2P) network, transactions broadcasting updates from different peers run simultaneously, so a destination peer's replica may be updated concurrently, which frequently causes transaction and data conflicts. Moreover, during data migration, connectivity interruptions and network overload corrupt running transactions, so destination peers can end up with duplicated, improper, or missing data, leaving replicas inconsistent. Several methodological approaches have been combined to solve these problems: the audit log technique to capture the changes made to the data; the algorithmic method to design and analyse the new algorithms; and the statistical method to analyse their performance and to design models predicting execution time from other parameters. A Graphical User Interface prototype has been developed in C# to implement these new algorithms, yielding a database synchronizer-mediator. A series of experiments showed that the new algorithms were effective. Thus, the hypothesis that "the execution time of replication and reconciliation transactions totally depends on independent factors" has been confirmed.
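To make the audit-log idea concrete: the abstract describes capturing local changes in a log and lazily propagating them to other replicas, while guarding against the duplicated or corrupted deliveries that interrupted transfers can cause. The paper's prototype was written in C#; the following is only a minimal Python sketch of that general technique, not the authors' actual algorithm, and all names (`LogEntry`, `AuditedReplica`, `apply_remote`) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    seq: int        # monotonically increasing sequence number per peer
    peer: str       # identifier of the originating peer
    op: str         # "upsert" or "delete"
    key: str
    value: object = None

class AuditedReplica:
    """A replica that records every local change in an audit log
    and can lazily apply the logs received from remote peers."""

    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.data = {}
        self.log = []          # audit log of local changes
        self._seq = 0
        self._applied = set()  # (peer, seq) pairs already applied

    def _record(self, op, key, value=None):
        self._seq += 1
        self.log.append(LogEntry(self._seq, self.peer_id, op, key, value))

    def upsert(self, key, value):
        self.data[key] = value
        self._record("upsert", key, value)

    def delete(self, key):
        self.data.pop(key, None)
        self._record("delete", key)

    def apply_remote(self, entries):
        """Replay a remote peer's log entries in order, skipping
        duplicates so that a retried transfer stays idempotent."""
        for e in entries:
            eid = (e.peer, e.seq)
            if eid in self._applied:
                continue  # duplicated delivery: ignore
            if e.op == "upsert":
                self.data[e.key] = e.value
            elif e.op == "delete":
                self.data.pop(e.key, None)
            self._applied.add(eid)
```

Replaying the log instead of copying rows means a destination replica can tolerate a resent batch after a connectivity interruption: already-seen `(peer, seq)` pairs are simply skipped, so duplicated data never appears.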

How to Cite
KITUTA EZÉCHIEL, SHRI KANT, RUCHI AGARWAL, Katembo. Mediation of Lazy Update Propagation in a Replicated Database over a Decentralized P2P Architecture. Global Journal of Computer Science and Technology, [S.l.], dec. 2019. ISSN 0975-4172. Available at: <https://computerresearch.org/index.php/computer/article/view/1891>. Date accessed: 17 jan. 2020.