So I've realised PHP is potentially running multiple requests simultaneously. Logs from last night seem to show that two requests came in and were processed in parallel; each triggered an import of data from another server, and each attempted to insert a record into the database. One request failed when it tried to insert a record the other thread had just inserted (the imported data comes with its own PKs; I'm not using auto-incrementing IDs):

SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '865020' for key 'PRIMARY' ...
- Have I diagnosed the issue correctly?
- How should I address this?
The following is the relevant code. I've stripped out the irrelevant parts (the logging, and the creation of entities other than patient data), so the following should include all the relevant snippets. Requests hit the import() method, which calls importOne() for each record to import, essentially. Note the save method in importOne(); that's an Eloquent method (I'm using Laravel and Eloquent) that generates the SQL to insert or update the record as appropriate.
public function import() {
    $now = Carbon::now();

    // Get the data from the other server, in the time range from the last import to this one
    $calls = $this->getCalls($this->getLastImport(), $now);

    // For each call to import, insert it into the DB (or update it if it exists)
    foreach ($calls as $call) {
        $this->importOne($call);
    }

    // Update the last import time so the next import uses the correct range
    $this->setLastImport($now);
}

private function importOne($call) {
    // Get the existing patient for the call, or create a new one
    $patient = Patient::where('id', '=', $call['patientId'])->first();
    $isNewPatient = $patient === null;
    if ($isNewPatient) {
        $patient = new Patient(array('id' => $call['patientId']));
    }

    // Set the fields
    $patient->given_name = $call['patientGivenName'];
    $patient->family_name = $call['patientFamilyName'];

    // Save; inserts or updates as appropriate
    $patient->save();
}
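One way to remove the duplicate-key failure entirely is to make each insert idempotent at the database level, so that two parallel imports of the same record both succeed (the second simply becomes an update). A minimal sketch, using an in-memory SQLite database via PDO so it runs standalone: SQLite's ON CONFLICT ... DO UPDATE plays the same role as MySQL's INSERT ... ON DUPLICATE KEY UPDATE, and the table/column names here are invented for the demo, not taken from the real schema.

```php
<?php
// Sketch: make the insert itself idempotent so two parallel imports of the
// same record can't raise a duplicate-key error. Demonstrated with an
// in-memory SQLite DB (SQLite >= 3.24 supports ON CONFLICT ... DO UPDATE;
// the MySQL equivalent is INSERT ... ON DUPLICATE KEY UPDATE).
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE patients (id INTEGER PRIMARY KEY, given_name TEXT, family_name TEXT)');

function importOnePatient(PDO $db, array $call): void {
    $stmt = $db->prepare(
        'INSERT INTO patients (id, given_name, family_name)
         VALUES (:id, :given, :family)
         ON CONFLICT(id) DO UPDATE SET
             given_name = excluded.given_name,
             family_name = excluded.family_name'
    );
    $stmt->execute([
        ':id'     => $call['patientId'],
        ':given'  => $call['patientGivenName'],
        ':family' => $call['patientFamilyName'],
    ]);
}

// Two "parallel" imports of the same patient: the second becomes an update,
// not a duplicate-key failure.
importOnePatient($db, ['patientId' => 865020, 'patientGivenName' => 'Ann', 'patientFamilyName' => 'Smith']);
importOnePatient($db, ['patientId' => 865020, 'patientGivenName' => 'Ann', 'patientFamilyName' => 'Smyth']);

$row = $db->query('SELECT COUNT(*) AS n, MAX(family_name) AS fam FROM patients')->fetch(PDO::FETCH_ASSOC);
echo $row['n'] . ' ' . $row['fam'] . PHP_EOL;
```

This doesn't stop both requests doing the import work and hitting the other server twice, but it does eliminate the exception without any locking.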
I'd guess the solution would require a mutex around the entire import block, and if a request couldn't acquire the mutex, it would just move on with the rest of the request. Thoughts?
Edit: To note, this isn't a critical failure. The exception is caught and logged, and the request is responded to as usual. The import succeeds on the other request, and that request is also responded to as usual. Users are none the wiser; they don't know about the import, and it isn't the main focus of the incoming request. Really, I could leave it running as is, and aside from the occasional exception, nothing bad happens. But if there's a fix that would prevent extra work being done, or multiple requests being sent to the other server unnecessarily, it would be worth pursuing.
Edit 2: Okay, I've taken a swing at implementing a locking mechanism with flock(). Thoughts? Will the following work? And how would I unit test this addition?
public function import() {
    try {
        $fp = fopen('/tmp/lock.txt', 'w+');
        if (flock($fp, LOCK_EX)) {
            $now = Carbon::now();
            $calls = $this->getCalls($this->getLastImport(), $now);
            foreach ($calls as $call) {
                $this->importOne($call);
            }
            $this->setLastImport($now);
            flock($fp, LOCK_UN);
            // Log success.
        } else {
            // Could not acquire the file lock. Log this.
        }
        fclose($fp);
    } catch (Exception $ex) {
        // Log failure.
    }
}
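One caveat with the code above: a plain LOCK_EX *blocks*, so the second request would wait for the first to finish and then run the import again anyway, rather than skipping it. Adding LOCK_NB makes flock() return false immediately when the lock is already held, which matches the "couldn't get the lock, move on" behaviour. A runnable sketch (the file path is arbitrary; two handles to the same file are used to stand in for two processes, since flock() treats separate handles as separate lockers):

```php
<?php
// Sketch: LOCK_EX | LOCK_NB returns false immediately if the lock is held,
// instead of blocking until the other holder finishes.
$path = sys_get_temp_dir() . '/import-lock-demo.txt';

$first = fopen($path, 'w+');
$acquiredFirst = flock($first, LOCK_EX | LOCK_NB);   // free, so this succeeds

$second = fopen($path, 'w+');
$acquiredSecond = flock($second, LOCK_EX | LOCK_NB); // already held: fails fast

flock($first, LOCK_UN);                              // release the first lock
$acquiredThird = flock($second, LOCK_EX | LOCK_NB);  // now succeeds

flock($second, LOCK_UN);
fclose($first);
fclose($second);
unlink($path);

var_dump($acquiredFirst, $acquiredSecond, $acquiredThird);
```

For unit testing, one approach is to extract the lock/unlock pair behind a small interface (or protected methods) so a test double can force the "lock held" and "lock free" branches without touching the filesystem; that seam is an assumption about how you'd restructure the class, not something Laravel provides out of the box.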
Edit 3: Thoughts on the following alternate implementation of the lock:
public function import() {
    try {
        if ($this->lock()) {
            $now = Carbon::now();
            $calls = $this->getCalls($this->getLastImport(), $now);
            foreach ($calls as $call) {
                $this->importOne($call);
            }
            $this->setLastImport($now);
            $this->unlock();
            // Log success.
        } else {
            // Could not acquire the DB lock. Log this.
        }
    } catch (Exception $ex) {
        // Log failure.
    }
}

/**
 * Acquire the DB lock; returns true if successful.
 *
 * @return boolean
 */
public function lock() {
    return DB::select("SELECT GET_LOCK('lock_name', 1) AS result")[0]->result === 1;
}

/**
 * Release the DB lock; returns true if successful.
 *
 * @return boolean
 */
public function unlock() {
    return DB::select("SELECT RELEASE_LOCK('lock_name') AS result")[0]->result === 1;
}
It doesn't seem like you're having a race condition in the usual sense, because the ID is coming from the import file, and if the import algorithm were working correctly, each thread would have its own shard of the work to do and should never conflict with the others. It seems two threads are receiving a request to create the same patient and are conflicting with each other because of a bad algorithm.
Make sure each spawned thread gets a new row from the import file, and repeat on failure.
If you can't do that, and you want to stick with a mutex, then a file lock doesn't seem like a nice solution, since you'd be solving within the application a conflict that is occurring in the database. A DB lock should be a lot faster too, and is overall a more decent solution.
Request a database lock, like this:

$db->exec('LOCK TABLES table1 WRITE, table2 WRITE');
And you can expect an SQL error when you write to a locked table, so surround $patient->save() with a try/catch.
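Catching the conflict rather than preventing it can be sketched in isolation. The demo below provokes the same class of error as in the question (SQLSTATE 23000, a constraint violation) using an in-memory SQLite database via PDO; in the real code, the try/catch would wrap $patient->save() instead.

```php
<?php
// Sketch: tolerate the duplicate-key error instead of preventing it.
// (In-memory SQLite via PDO; constraint violations surface as PDOException
// with SQLSTATE 23000, the same error class as in the question.)
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE patients (id INTEGER PRIMARY KEY)');
$db->exec('INSERT INTO patients (id) VALUES (865020)');

$caught = false;
try {
    // A second insert of the same PK, as if another request got there first
    $db->exec('INSERT INTO patients (id) VALUES (865020)');
} catch (PDOException $e) {
    $caught = true; // log and carry on: the record already exists
}
var_dump($caught);
```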
An even better solution is to use a conditional atomic query: a DB query that has the condition within it. Use a query like this:
INSERT INTO targettable (field1)
SELECT field1 FROM mytable
WHERE field1 NOT IN (SELECT field1 FROM targettable)
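The conditional atomic insert above can be seen end to end in a small sketch. It runs the same statement twice against an in-memory SQLite database via PDO (the table and column names are the placeholder ones from the query, not real schema): the first run copies the missing rows, and the second finds nothing left to insert, so repeated or concurrent imports do no duplicate work.

```php
<?php
// Sketch of the conditional atomic insert: the existence check is part of
// the INSERT itself, so running it twice (or concurrently) can't insert
// the same row twice. (In-memory SQLite via PDO; placeholder names.)
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE mytable (field1 INTEGER)');
$db->exec('CREATE TABLE targettable (field1 INTEGER PRIMARY KEY)');
$db->exec('INSERT INTO mytable (field1) VALUES (865020), (865021)');

$sql = 'INSERT INTO targettable (field1)
        SELECT field1 FROM mytable
        WHERE field1 NOT IN (SELECT field1 FROM targettable)';

$inserted1 = $db->exec($sql); // first run: copies both rows
$inserted2 = $db->exec($sql); // second run: nothing left to copy

echo $inserted1 . ' ' . $inserted2 . PHP_EOL;
```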