I know parts of this question have been asked before, but I have some related questions.
I'm trying to execute
mysqldump -u uname -ppassword --add-drop-database --databases databaseName | gzip > fileName
I'm potentially dumping a very large (200GB?) db. Is that in itself a dumb thing to do? I then want to send the zipped file over the network for storage, delete the local dump, and purge a couple of tables.
Anyway, I was using subprocess like this, because there doesn't seem to be a way to run the entire original command through subprocess without it treating | as just another argument to mysqldump (it gets parsed as a table name):
from subprocess import Popen, PIPE

f = open(FILENAME, 'wb')
args = ['mysqldump', '-u', 'UNAME', '-pPASSWORD', '--add-drop-database', '--databases', 'DB']
p1 = Popen(args, stdout=PIPE)                  # mysqldump writes the dump to a pipe
p2 = Popen('gzip', stdin=p1.stdout, stdout=f)  # gzip reads the pipe and writes to the file
p2.communicate()
but then I read that communicate() buffers all the data in memory, which wouldn't work for me. Is this true?
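From what I can tell, communicate() should only buffer streams that were opened with PIPE, and here p2's stdout goes straight to the file, so maybe it's fine. In case it isn't, is something like this the right way to keep everything streaming? (Untested sketch, same placeholder names as above.)

from subprocess import Popen, PIPE

with open(FILENAME, 'wb') as f:
    args = ['mysqldump', '-u', 'UNAME', '-pPASSWORD',
            '--add-drop-database', '--databases', 'DB']
    p1 = Popen(args, stdout=PIPE)
    p2 = Popen('gzip', stdin=p1.stdout, stdout=f)
    p1.stdout.close()  # so mysqldump gets SIGPIPE if gzip exits early
    p2.wait()          # block until gzip finishes; no data passes through Python
    p1.wait()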
What I ended up doing for now is:
import subprocess
import gzip

f = open(filename, 'wb')
subprocess.call(args, stdout=f)   # dump the whole database to disk, uncompressed
f.close()

f = open(filename, 'rb')
zipFilename = filename + '.gz'
f2 = gzip.open(zipFilename, 'wb')
f2.writelines(f)                  # re-read the dump and compress it line by line
f2.close()
f.close()
Of course this takes a million years, and I hate it.
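Part of the problem might be that writelines() iterates over the dump line by line; would copying in big binary chunks be noticeably faster? Something like this is what I have in mind (untested sketch; the 1 MiB chunk size is a guess):

import gzip
import shutil

# copy the raw dump into the gzip stream in large binary chunks
with open(filename, 'rb') as src, gzip.open(filename + '.gz', 'wb') as dst:
    shutil.copyfileobj(src, dst, 1024 * 1024)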
My Questions:
1. Can I use my first approach on a very large db?
2. Could I possibly pipe the output of mysqldump to a socket and fire it across the network, saving it when it arrives, rather than writing a zipped file locally and then sending it? (There's a rough sketch of what I mean below.)
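For question 2, this is roughly what I'm picturing; HOST and PORT are placeholders, and none of this is tested:

import socket
from subprocess import Popen, PIPE

args = ['mysqldump', '-u', 'UNAME', '-pPASSWORD',
        '--add-drop-database', '--databases', 'DB']
p1 = Popen(args, stdout=PIPE)
p2 = Popen('gzip', stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let gzip be the only reader of mysqldump's output

# stream the compressed dump straight to the remote end
sock = socket.create_connection((HOST, PORT))
while True:
    chunk = p2.stdout.read(65536)
    if not chunk:
        break
    sock.sendall(chunk)
sock.close()
p2.wait()
p1.wait()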
Thanks!