Debugging django.db.utils.DatabaseError: out of shared memory


I've written a script to load objects into a Django database using the Django ORM. The underlying database is Postgres.

After running happily for a while, the script fails with this error:

    django.db.utils.DatabaseError: out of shared memory
    HINT:  You might need to increase max_locks_per_transaction.

I'm guessing the problem is with my script's efficiency, rather than with the database settings.

The script iterates over a CSV file and creates a database object for every row. Typically there are a couple of thousand objects to create. I've read some background material on database efficiency in Django, so I can rule out the usual mistakes: I'm not iterating over a queryset, and I'm not using __in queries or OFFSET.

But I do have quite a lot of indexes on the fields in the database, and I guess that each time I create and save an object, Django has to update all of those indexes. I have six indexes on the StoreItem fields, for example.

    for item in csv_rows:
        s, created = StoreItem.objects.get_or_create(
            display_url=item['display_url'],
            retailer_img_url=item['retailer_img_url'],
            store=store_obj)
        s.name = item['name']
        s.description = item['description']
        s.affiliate = item['affiliate']
        # ... more stuff
        s.save()
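Not necessarily the cause of the error, but since the script's efficiency is in question: get_or_create takes a defaults argument (part of the standard ORM API), so newly created rows can be populated in a single INSERT instead of an INSERT followed by an UPDATE. A sketch using the same field names as above:

    for item in csv_rows:
        # Non-lookup fields go in defaults, so they are set at creation time.
        s, created = StoreItem.objects.get_or_create(
            display_url=item['display_url'],
            retailer_img_url=item['retailer_img_url'],
            store=store_obj,
            defaults={
                'name': item['name'],
                'description': item['description'],
                'affiliate': item['affiliate'],
                # ... more stuff
            })
        if not created:
            # An existing row still needs the update path.
            s.name = item['name']
            s.description = item['description']
            s.affiliate = item['affiliate']
            s.save()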

Two questions:

  1. Is it possible that updating the database indexes could cause this error?
  2. If so, how can I debug it? (See the sketch after this list.)
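For question 2, one way to see whether the lock table really is filling up is to watch pg_locks (a standard PostgreSQL system view) while the script runs. A minimal sketch using the Django connection:

    from django.db import connection

    def lock_counts():
        # Each row in pg_locks is one entry in the shared lock table
        # that the error message refers to.
        cursor = connection.cursor()
        cursor.execute(
            "SELECT locktype, mode, count(*) FROM pg_locks "
            "GROUP BY locktype, mode ORDER BY count(*) DESC")
        return cursor.fetchall()

Calling this every few hundred rows of the CSV loop would show whether the counts grow steadily, which would suggest locks are being accumulated in one long transaction instead of being released.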

I have had a quick Google and there are a couple of resources at:

  1. PostgreSQL: out of shared memory?
  2. http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html

This quote is taken from http://www.postgresql.org/docs/9.1/static/runtime-config-locks.html:

max_locks_per_transaction (integer)

The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have clients that touch many different tables in a single transaction. This parameter can only be set at server start.

Increasing this parameter might cause PostgreSQL to request more System V shared memory than your operating system's default configuration allows. See Section 17.4.1 for information on how to adjust those parameters, if necessary.

When running a standby server, you must set this parameter to the same or higher value than on the master server. Otherwise, queries will not be allowed in the standby server.

So, it looks like you should check the default value of max_locks_per_transaction and make sure it is set to a sensible value.
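The current value can be read from any session with SHOW; a sketch via the Django connection:

    from django.db import connection

    cursor = connection.cursor()
    cursor.execute("SHOW max_locks_per_transaction")
    print(cursor.fetchone()[0])   # '64' on a default install

Raising it means editing max_locks_per_transaction in postgresql.conf and restarting the server, since the docs quoted above say it can only be set at server start.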

Likewise, if you are using a default install of Postgres, you should check the other defaults too, as in some instances they can be set too low.
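The pg_settings view makes it easy to review several parameters at once, alongside their built-in defaults (boot_val). A sketch, with a hypothetical short list of parameters to inspect:

    from django.db import connection

    # Hypothetical selection of parameters worth checking on a default install.
    params = ('max_locks_per_transaction', 'max_connections',
              'max_prepared_transactions', 'shared_buffers')

    cursor = connection.cursor()
    cursor.execute(
        "SELECT name, setting, boot_val, unit FROM pg_settings "
        "WHERE name IN %s", [params])
    for name, setting, boot_val, unit in cursor.fetchall():
        print(name, setting, boot_val, unit or '')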

