2017-03-11 05:19:28 by Uchenna
To gain more performance and make full use of the available cores, work should execute in parallel, which should yield faster execution, better utilization of resources, and higher throughput overall. This would be ideal if it always held. In practice, parallel execution introduces problems that would not exist if everything ran serially. Processes running in parallel still need to access shared resources, communicate with each other, and protect invariants, and this becomes troublesome when two or more of them try to read or modify the same resource at the same time; uncoordinated concurrent access rarely ends well. This is not a new problem in computer science, and many approaches have been proposed to address it.
This paper focuses on contention over in-memory data caused by transactions. Database concurrency control protocols (2PL and OCC) have long addressed this problem. Under 2PL (two-phase locking), a transaction that needs a record held by another transaction waits for it, typically by spinning on a lock, while under OCC (optimistic concurrency control), the transaction aborts and retries, hoping the record will be available the next time. In both cases, conflicting transactions end up executing serially, which defeats the purpose of parallel execution.
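To make the contrast concrete, here is a minimal Go sketch of my own (not code from the paper) showing the two styles applied to a single counter record: the 2PL-style path waits on a lock that is held until the update commits, while the OCC-style path reads optimistically, validates with a compare-and-swap, and aborts and retries on conflict. The names add2PL and addOCC are purely illustrative.

```go
// Illustrative sketch: 2PL-style waiting vs. OCC-style abort-and-retry
// on a single contended counter. Not the paper's implementation.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// 2PL style: the lock is acquired before the update and held until "commit";
// a conflicting transaction simply waits for the lock.
type lockedRecord struct {
	mu  sync.Mutex
	val int64
}

func add2PL(r *lockedRecord, delta int64) {
	r.mu.Lock() // blocks until the conflicting transaction releases it
	r.val += delta
	r.mu.Unlock()
}

// OCC style: read optimistically, then validate-and-install at commit time;
// if another transaction committed first, abort and retry.
func addOCC(v *atomic.Int64, delta int64) {
	for {
		old := v.Load() // optimistic read
		if v.CompareAndSwap(old, old+delta) {
			return // validation succeeded, commit
		}
		// validation failed: abort and retry
	}
}

func main() {
	var locked lockedRecord
	var opt atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				add2PL(&locked, 1)
				addOCC(&opt, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Println("2PL:", locked.val, "OCC:", opt.Load()) // both 8000
}
```

Either way, updates to the contended record are applied one at a time, which is exactly the serialization the paper is trying to avoid.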
Unfortunately, this problem is unavoidable in the real world. For instance, consider an iPhone 6 listed on eBay, to be sold by bid or best offer, with many watchers (people bidding or waiting to bid at the last minute). As the auction nears its end time, many people submit their offers, and it can easily happen that several arrive at the same instant. A conventional multicore database executes these conflicting updates serially and may not process the true highest offer before time runs out, recording the wrong amount. The item then ends up being sold not to the highest bidder but to whoever submitted the last offer the database managed to apply before time ran out.
This paper also introduces a new concurrency control technique, phase reconciliation, which can execute some highly conflicting workloads efficiently in parallel while still guaranteeing serializability, and presents Doppel, an in-memory database built on phase reconciliation. Phase reconciliation dynamically shifts contended data between split and joined phases, and transactions execute either in a joined phase or in a split phase. In a joined phase there are no per-core values; database records are accessed directly and transactions are coordinated using OCC. In a split phase, to reduce contention, contended data is split across cores and each core applies its operations to its own per-core value. After the split phase, a short reconciliation phase merges the per-core values back into the global store; when the reconciliation phase ends, transactions that were blocked during the split phase resume.
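A rough Go sketch of the split/reconcile idea for a commutative increment, written by me as an illustration rather than taken from Doppel: during a split phase each worker updates only its own per-core slot without any coordination, and a reconciliation step merges those slots into the single global value before blocked readers resume.

```go
// Illustrative sketch of split-phase per-core updates followed by
// reconciliation into a global value. Not Doppel's actual code.
package main

import (
	"fmt"
	"sync"
)

const numWorkers = 4

// splitCounter keeps one slot per worker ("per-core value") for use during
// a split phase, plus the single reconciled global value.
type splitCounter struct {
	perWorker [numWorkers]int64 // updated without coordination in the split phase
	global    int64             // authoritative value seen in the joined phase
}

// splitAdd needs no lock because each worker only touches its own slot.
func (c *splitCounter) splitAdd(worker int, delta int64) {
	c.perWorker[worker] += delta
}

// reconcile merges per-worker deltas into the global value; after this,
// transactions that needed to read the counter can resume.
func (c *splitCounter) reconcile() {
	for i := range c.perWorker {
		c.global += c.perWorker[i]
		c.perWorker[i] = 0
	}
}

func main() {
	var c splitCounter
	var wg sync.WaitGroup
	// Split phase: each worker applies many commutative increments to its slot.
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				c.splitAdd(w, 1)
			}
		}(w)
	}
	wg.Wait()
	// Reconciliation phase: merge per-worker values into the global store.
	c.reconcile()
	fmt.Println("reconciled value:", c.global) // 4000
}
```

The key point is that this only works for operations that commute (add, max, and so on): because the order of the per-core updates does not matter, merging them at the end still yields a serializable outcome.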
Prior work has addressed related issues in transactional memory, database concurrency control, and distributed consistency, and phase reconciliation draws inspiration from several of these efforts and adopts some of their ideas in its design. In this review I will examine the Phase Reconciliation for Contended In-Memory Transactions paper, and I will also discuss the techniques Dora uses for main-memory database concurrency control, the design of Sync-Phase transactional memory, and how RedBlue consistency preserves consistency in distributed systems. I will look into the design and implementation of phase reconciliation in the context of Doppel, as well as these three other techniques.