Commit Graph

4 Commits

Author SHA1 Message Date
Maik Penz
0677423d14 iteration risk note
The claim

> instead of loading the whole result into memory at once

does not tell the whole truth.

With large iterations there is a real risk of the process being killed due to memory allocation. The cause is that the database client buffers the result set outside of PHP's memory accounting, so the allocation is not visible from 'within' the process.

It is not certain that this occurs with all database clients, but at this point it seems rather likely.

This is only a proposal for discussion, as I am not certain how best to add this information, or whether to add it at all (was it obvious before?). Personally, I was confused by the existing description and did not notice the memory implication until further investigation with `top` showed the process running at 3+ GB of memory while PHP reported a real peak usage of 400 MB. A sketch illustrating the behaviour follows this entry.
2014-02-17 10:33:10 +01:00
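
The hidden-buffering issue described in the commit above can be illustrated with a minimal sketch. This is not part of the commit; it assumes Doctrine ORM 2.x with DBAL 2.x and the pdo_mysql driver, and the entity class `MyProject\Model\User`, the `$em` bootstrap, and the choice to disable buffering are all hypothetical.

```php
<?php
// Sketch only: assumes Doctrine ORM 2.x, DBAL 2.x and the pdo_mysql driver.
// The entity class MyProject\Model\User and the $em bootstrap are hypothetical.

/** @var \Doctrine\ORM\EntityManager $em */

// With the default *buffered* mode, the MySQL client library copies the
// ENTIRE result set into memory outside of PHP's accounting: `top` can show
// several GB while memory_get_peak_usage() reports far less. Turning
// buffering off shifts that cost to keeping a server-side cursor open.
$pdo = $em->getConnection()->getWrappedConnection(); // a PDO instance for pdo_mysql in DBAL 2.x
$pdo->setAttribute(\PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$query = $em->createQuery('SELECT u FROM MyProject\Model\User u');

// iterate() hydrates one row at a time, so PHP's own peak memory stays low.
foreach ($query->iterate() as $row) {
    $user = $row[0];
    // ... process $user ...
    $em->detach($user); // drop the hydrated entity again
}
```

Whether the unbuffered mode is the right trade-off depends on the workload: it avoids the client-side copy, but the result set must be consumed before other queries can run on the same connection.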
goatherd
a485e791bb fix foreach coding style 2014-01-14 23:44:38 +01:00
Albert Casademont
4c90d0cedc Update docs/en/reference/batch-processing.rst
If you have only $batchSize - 1 rows (among other cases), the entities are never flushed; you need a final flush outside the loop.

In the bulk insert, you also need a final flush if the number of entities inserted is not a multiple of $batchSize; see the sketch after this entry.
2013-02-08 16:32:03 +01:00
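
A minimal sketch of the bulk-insert pattern this commit refers to, showing why the final flush outside the loop matters. The entity class `MyProject\Model\User`, its `setName()` method, and the `$em` bootstrap are assumptions, not taken from the commit.

```php
<?php
// Sketch of the bulk-insert batching pattern; MyProject\Model\User and the
// $em bootstrap are assumed.
$batchSize = 20;

for ($i = 1; $i <= 9999; ++$i) {
    $user = new \MyProject\Model\User();
    $user->setName('user' . $i);
    $em->persist($user);

    if (($i % $batchSize) === 0) {
        $em->flush();
        $em->clear(); // detach all entities to keep memory usage flat
    }
}

// 9999 is not a multiple of 20, so 19 users are still only persisted here;
// without this final flush outside the loop they would never be inserted.
$em->flush();
$em->clear();
```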
Benjamin Eberlei
cbcc693e36 Add 'docs/' from commit '8fcf2d45019bf38a1df728353a1e417343c69cfb'
git-subtree-dir: docs
git-subtree-mainline: 271bd37ad3
git-subtree-split: 8fcf2d4501
2013-01-24 00:02:03 +01:00