[Neo4j] Automating transactions
rick.bullotta at burningskysoftware.com
Fri Aug 20 16:24:13 CEST 2010
I recommend a hybrid approach of # of operations + time limit. Otherwise,
in periods of low activity, you run the risk of a reasonable # of
transactions being discarded on a system failure. We have chosen to use
both rules for "flushing" writes - after "n" writes or after "m" units of
time, whichever comes first.
In our case, we ended up queuing writes internally and having worker
thread(s) handle this process in the background.
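A minimal sketch of that hybrid policy (class and method names here are my own, not Neo4j API): a background worker drains an internal write queue and flushes a batch either when it reaches "n" queued writes or when "m" milliseconds have elapsed, whichever comes first.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the hybrid flush policy: batch by count OR by time.
class WriteFlusher implements Runnable {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final int maxBatch;        // "n": flush after this many writes
    private final long maxWaitMillis;  // "m": flush after this much time
    volatile int flushes = 0;          // completed flushes, for inspection

    WriteFlusher(int maxBatch, long maxWaitMillis) {
        this.maxBatch = maxBatch;
        this.maxWaitMillis = maxWaitMillis;
    }

    // Producers enqueue writes here instead of touching the graph directly.
    void submit(Runnable write) { queue.add(write); }

    public void run() {
        List<Runnable> batch = new ArrayList<>();
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (!Thread.currentThread().isInterrupted()) {
            try {
                long wait = Math.max(1, deadline - System.currentTimeMillis());
                Runnable w = queue.poll(wait, TimeUnit.MILLISECONDS);
                if (w != null) batch.add(w);
                boolean timeUp = System.currentTimeMillis() >= deadline;
                if (batch.size() >= maxBatch || (timeUp && !batch.isEmpty())) {
                    flush(batch);      // "n" writes or "m" ms reached
                    deadline = System.currentTimeMillis() + maxWaitMillis;
                } else if (timeUp) {
                    deadline = System.currentTimeMillis() + maxWaitMillis;
                }
            } catch (InterruptedException e) {
                break;                 // worker asked to shut down
            }
        }
        if (!batch.isEmpty()) flush(batch);  // drain what's left on exit
    }

    private void flush(List<Runnable> batch) {
        // With embedded Neo4j, this is where a single transaction would wrap
        // the whole batch: begin a tx, apply each write, then commit it.
        for (Runnable w : batch) w.run();
        batch.clear();
        flushes++;
    }
}
```

The time limit bounds how long a write can sit unflushed during quiet periods, which is exactly the failure window a count-only policy leaves open.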
From: user-bounces at lists.neo4j.org [mailto:user-bounces at lists.neo4j.org] On
Behalf Of Paul A. Jackson
Sent: Friday, August 20, 2010 8:47 AM
To: User at lists.neo4j.org
Subject: [Neo4j] Automating transactions
I am interested in encapsulating the business of managing transactions
inside a generic graph API. I assume I will have some maximum count: after
that many write operations, the API will commit the transaction and start a
new one. I have a few questions around this.
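The commit-after-N-writes idea can be sketched like this (all names are hypothetical; the begin/commit callbacks stand in for the real Neo4j transaction calls):

```java
// Hypothetical sketch: a wrapper that counts write operations and commits
// the underlying transaction after maxWrites, then opens a fresh one.
class TransactionBatcher {
    private final int maxWrites;   // commit after this many writes
    private final Runnable begin;  // stand-in for opening a transaction
    private final Runnable commit; // stand-in for committing a transaction
    private int writesInTx = 0;
    private int commits = 0;

    TransactionBatcher(int maxWrites, Runnable begin, Runnable commit) {
        this.maxWrites = maxWrites;
        this.begin = begin;
        this.commit = commit;
        begin.run();               // open the first transaction up front
    }

    // The graph API calls this once per write (node, relationship, property).
    void onWrite() {
        writesInTx++;
        if (writesInTx >= maxWrites) {
            commit.run();
            commits++;
            writesInTx = 0;
            begin.run();           // start the next transaction
        }
    }

    int commitCount() { return commits; }
}
```

Callers never see transaction boundaries; they just perform writes, and the wrapper decides when a commit happens.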
1) Can I ignore reads? If I write a few nodes within a transaction,
can I then read indefinitely, or will the fact that I have an open
transaction cause Neo4j to consume more memory until the transaction is
committed?
2) Is there any guideline for the relative amounts of memory various
operations take? (Writing a node, writing an edge, writing a property, and
so on?) Should I bump my counter once for each of these?
3) Since the API will operate in a multi-user environment, is a
per-user count a bad idea? Should I maintain a user count and a global
count and adjust the user limit based upon the number of concurrent users?
Or should I monitor available free memory instead of, or in addition to,
maintaining this counter? Any other suggestions?
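One way to read question 3 is as dividing a global write budget among active users; a trivial sketch of that idea, with hypothetical names:

```java
// Hypothetical sketch: derive a per-user write limit from a global budget
// and the number of concurrent users, never dropping below one write.
class PerUserLimits {
    private final int globalLimit;   // total writes allowed per flush cycle

    PerUserLimits(int globalLimit) { this.globalLimit = globalLimit; }

    // Each user gets an equal share of the global budget, at least 1.
    int limitFor(int concurrentUsers) {
        return Math.max(1, globalLimit / Math.max(1, concurrentUsers));
    }
}
```

The limit would be recomputed whenever a user session starts or ends, so heavy concurrency tightens each user's share automatically.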
Thanks in advance!