sql - When do transactions alone not protect data integrity?


I've learned a decent bit about database integrity, and I know I should be using transactions if I "require multiple statements to be performed as a unit to keep the data in a consistent state" (see "Database development mistakes made by application developers", point 16 of the chosen answer).

Wikipedia uses this example:

  1. Debit $100 to the groceries expense account
  2. Credit $100 to the checking account

If I try to credit a non-existent account ID, and I'm using constraints properly, an exception is thrown which I can catch and roll back. And if there's a power outage, these two changes are guaranteed to be atomic.
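This behavior can be sketched in Python with sqlite3 (the table and account names are made up for illustration; the same idea applies to MySQL from PHP):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES ('groceries', 0), ('checking', 500)")
conn.commit()

def transfer(conn, amount, from_acct, to_acct):
    """Debit one account and credit another as a single atomic unit."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, from_acct))
            cur = conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, to_acct))
            if cur.rowcount == 0:  # e.g. crediting a non-existent account
                raise ValueError("no such account: " + to_acct)
    except ValueError:
        pass  # the context manager has already rolled back both statements

transfer(conn, 100, 'checking', 'groceries')        # succeeds
transfer(conn, 100, 'checking', 'no-such-account')  # rolls back: nothing is debited
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

The second call debits `checking` and then fails to credit anything, so the rollback undoes the debit as well; only the first transfer survives.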


However, if I understand it properly, transactions won't help me in cases like this (example in PHP and MySQL):

  1. MySQL: start a transaction
  2. MySQL: select some data from a table
  3. PHP: compute a state from the selected data
    • PHP: if the state is valid, insert some data
    • PHP: otherwise, don't insert anything
  4. MySQL: commit the transaction

This won't work, because all the queries can execute atomically without failing (it's PHP that decides there's an error, not a SQL constraint).

Secondly, as I tested, transactions are committed synchronously, but they can be started concurrently. If I start a transaction and add a 10-second delay, I can start that slow script and then start and commit another transaction in the meantime, demonstrating concurrent transactions. The two instances can select the same data before seeing each other's modifications. Only the modifications themselves are guaranteed to be atomic.
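The select-then-write race above can be reproduced in miniature with two connections to a shared SQLite database (table and column names are made up). Each "client" reads the balance, computes a deposit in application code, and writes the result back; the second write silently overwrites the first, a classic lost update:

```python
import sqlite3

db = "file::memory:?cache=shared"
client_a = sqlite3.connect(db, uri=True)
client_b = sqlite3.connect(db, uri=True)

client_a.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
client_a.execute("INSERT INTO account VALUES (1, 100)")
client_a.commit()

# Both clients read the same starting balance before either has written.
bal_a = client_a.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
bal_b = client_b.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]

# Each computes a deposit in application code and writes the result back.
client_a.execute("UPDATE account SET balance = ? WHERE id = 1", (bal_a + 50,))
client_a.commit()
client_b.execute("UPDATE account SET balance = ? WHERE id = 1", (bal_b + 50,))
client_b.commit()

final = client_a.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
print(final)  # 150, not 200: client A's deposit was lost
```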


So what can I do? I suppose locking the whole table would work, but is that good practice? Some conditions can be described in SQL in a single statement, but more complex ones can't.

This is a good question. It shows you've been thinking about this a bit.

The problem you are describing exists because the database is not aware of your data dependencies. To the database, your code selects some data and writes some data; it doesn't know that the data you write is based on the data you selected. In general, you need to tell the database about these data dependencies, and this is done differently in each database.

You mentioned MySQL. InnoDB has support for SELECT ... FOR UPDATE. The issue with this is that it locks the resource so that other queries cannot access it (depending on the transaction isolation level). This would make the second transaction in your example unable to execute until the first one commits, as long as both lock the same resources. Which resources get locked is up to the database.

Let's look at an example. To lock the rows, first create a transaction and query the database with something like:

SELECT * FROM tableA WHERE value > 50 FOR UPDATE

This SELECT will lock these rows so that incompatible locks on them are blocked. You can then do your processing in PHP. Once you're ready, you can insert rows into the other table:

INSERT INTO tableB VALUES ('some value')

At this point, before you commit, all of these rows are locked and none of them are available to other clients. Thus, throughout the whole transaction, no other client will be able to read any of the rows you've touched unless they read uncommitted. To make this work in your example, you need to make sure the SELECT statements in both transactions use SELECT ... FOR UPDATE.
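SQLite has no SELECT ... FOR UPDATE, but the same pessimistic idea can be sketched with BEGIN IMMEDIATE, which takes the write lock up front so a second writer blocks (or times out) until the first transaction commits. This is an analogue of the MySQL pattern, not the same statement; the table name is made up:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)            # autocommit mode
other = sqlite3.connect(path, isolation_level=None, timeout=0.1)

writer.execute("CREATE TABLE tableA (value INTEGER)")
writer.execute("INSERT INTO tableA VALUES (60), (70)")

writer.execute("BEGIN IMMEDIATE")  # acquire the write lock before reading
rows = writer.execute("SELECT * FROM tableA WHERE value > 50").fetchall()
# ... application-side processing happens here, under the lock ...

try:
    other.execute("BEGIN IMMEDIATE")  # second writer cannot get the lock
    locked_out = False
except sqlite3.OperationalError:      # "database is locked" after the timeout
    locked_out = True

writer.execute("COMMIT")  # releases the lock; 'other' could proceed now
print(locked_out)
```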

The other way is to tell the database on the UPDATE statement. When you issue the UPDATE, you specify what you think the data should currently be. If the database updates the rows, you can be sure nothing else has changed the data. If it doesn't update the expected number of rows, you know something else has changed the data, and you should handle that as an exception. This is optimistic concurrency: you guess that no one will update the data before you change it, and afterwards you check whether that guess held.

The query would look something like:

SELECT value FROM table WHERE id = '1'

Then, later:

UPDATE table SET value = 'new value' WHERE id = '1' AND value = 'old value'

Other databases give you other options built on these two basic ideas. For example, in the optimistic model, you can verify a timestamp (or autoincrement) column instead of the actual values.
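The optimistic pattern can be sketched in Python with sqlite3: make the UPDATE conditional on the value being unchanged, then check the affected row count to detect whether another client got there first (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO items VALUES ('1', 'old value')")
conn.commit()

def optimistic_update(conn, item_id, old, new):
    """Return True if we won the race, False if the row had already changed."""
    cur = conn.execute(
        "UPDATE items SET value = ? WHERE id = ? AND value = ?",
        (new, item_id, old))
    conn.commit()
    return cur.rowcount == 1

print(optimistic_update(conn, '1', 'old value', 'new value'))    # True: data was as expected
print(optimistic_update(conn, '1', 'old value', 'other value'))  # False: the row had changed
```

A zero row count is the signal to re-read the data and retry (or report a conflict), rather than silently overwriting someone else's write.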

