Addict
Joined: Aug 2004
Posts: 469
"Generated in 0.018 seconds in which 0.002 seconds were spent on a total of 9 queries."

When both numbers are fairly low, everything is pretty much fine. But what does it mean when the first number rises dramatically while the second one stays normal?

"Generated in 9.149 seconds in which 0.032 seconds were spent on a total of 13 queries."

What would cause only the first part to skyrocket?
Addict
Joined: Aug 2004
Posts: 469
Hmmm, just got a "Generated in 20.513 seconds in which 0.017 seconds were spent on a total of 9 queries."
Funny thing is that I just doubled the RAM on my server about 10 mins ago. Running on 2 gigs now after being on just 1 GB for the past year...
Joined: Jun 2006
Posts: 16,366 Likes: 126
I get this all the time, but my server is bogged down, so that would explain it...
Pooh-Bah
Joined: Jul 2006
Posts: 2,143
So, the database returned the answer in 0.017 seconds, but it then took about 20 seconds to process the answer and display the page.
Sounds like a machine with a ton of RAM but slow someplace else. CPU slow(ish)? Drives slow(ish)?
To be honest, I've never looked at the way Rick wrote this cache stuff to see if the board needs to complete a cache creation/update operation before the page loads. If it does, and your disks are slow, that might do it.
Maybe the query that was run returned quickly, but returned a huge array of data and the CPU was too slow to operate on it in under 20 seconds?
Either could be true. Rick would have a better idea how to look, and which to update next.
Addict
Joined: Aug 2004
Posts: 469
Right, so if the first part is what's delaying the output, then I might be doing the wrong thing by concentrating on fine-tuning "my.cnf".
After upgrading to 2 GB I just bumped the "key_buffer" to 256M (and the "table_cache" to 512). Not sure what I need exactly, but I have around 700 users online simultaneously at peak time and a rather large database to deal with.
Funny thing is that the CPU shouldn't really be overworked, as it's a dual Xeon and in the stats it looks like plain sailing all the way. The drive isn't SCSI or even SATA, so maybe that's the issue. But some of these delays come when the number of people online is around 150, not 700. Very strange...
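For illustration only, here's a rough sketch of the kind of [mysqld] block being talked about. These numbers are assumptions for a 2 GB, MyISAM-only box, not values confirmed anywhere in this thread, so sanity-check them against your own load:

# my.cnf excerpt (hypothetical values for a 2 GB MyISAM-only server)
[mysqld]
key_buffer = 256M          # MyISAM index cache, as mentioned above
table_cache = 512          # open table handles
tmp_table_size = 64M       # lets more temporary tables stay in memory instead of on the slow disk
sort_buffer_size = 2M      # allocated per connection, so keep it modest with ~700 users online
query_cache_size = 32M     # assumption: the query cache is available in this MySQL version

The per-connection buffers are the ones to watch; multiplying a big sort_buffer_size by 700 connections is how a 2 GB box ends up swapping.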
Former Developer
Joined: Jun 2006
Posts: 9,242 Likes: 1
As long as the 2nd number, the time spent on queries, is low, then your MySQL server is running and handling the queries fine. Do you have SSH access to the server? If it's a Unix box, then when you notice the slow times you can run 'top' on the server and get an idea of what's going on at the server level. Maybe a high load average at the time, something of that nature.
I assume you're still running 6.5? I'd keep an eye on whether the slow times appear on any particular pages. From the looks of the debug line, it just appears that for some reason the server has a lot going on at that particular time.
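If it helps, a quick way to grab a snapshot when a slow page happens (plain Linux commands over SSH, nothing specific to the board software):

uptime                    # load averages for the last 1, 5 and 15 minutes
top -b -n 1 | head -20    # batch mode: dump the busiest processes to the terminal once
vmstat 5 5                # five samples; high si/so means swapping, high wa means waiting on disk

A load average well above the number of CPUs, or constant swapping, would line up with the page time ballooning while the query time stays tiny.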
Addict
Joined: Aug 2004
Posts: 469
Thanks guys. On a side note, if I'm having time-out issues when backing up the DB via the CP, which my.cnf setting can I alter to make sure that the backup runs smoothly?
Addict
Joined: Jun 2006
Posts: 464 Likes: 1
You also might want to look at php.ini, in the section governing how many seconds a script is allowed to run. That's where timeout errors generally come from.

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

max_execution_time = 240    ; Maximum execution time of each script, in seconds
max_input_time = 240        ; Maximum amount of time each script may spend parsing request data
memory_limit = 16M          ; Maximum amount of memory a script may consume (16MB)

These are normally set at something like 60 seconds.
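One way to double-check which values are actually in effect (an assumption here: the command-line PHP reads the same php.ini as the web server, which is not always the case, and mod_php usually needs an Apache restart to pick up changes):

php -i | grep -E 'max_execution_time|max_input_time|memory_limit'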
Last edited by Mors; 11/02/2006 7:12 PM.
Happy Customer !!!
Addict
Joined: Aug 2004
Posts: 469
Also, if I decide to take a short-cut and just use the "my-huge.cnf" settings, is there anything there that I really should NOT have enabled (regardless of mem issues)?
Addict
Joined: Jun 2006
Posts: 464 Likes: 1
No, it should be fine. What version of MySQL?
Happy Customer !!!
Addict
Joined: Aug 2004
Posts: 469
Version 4.0.22.
In the MySQL process list in WHM I see the following:
| 5 | eximstats | localhost | eximstats | Sleep | 485 |
Could this be slowing things down?
Last edited by Conrad; 11/02/2006 7:19 PM.
Addict
Joined: Jun 2006
Posts: 464 Likes: 1
Don't think so... it looks good to me. For the timeout issue, check php.ini and raise the amount of time a script is allowed to run, and you should be OK as far as I can tell.
Happy Customer !!!
Addict
Joined: Aug 2004
Posts: 469
Right, if the posts table is over 500 megs, do you think I should raise the "memory_limit" variable?
Addict
Joined: Jun 2006
Posts: 464 Likes: 1
I would use the settings defaulted in "my-huge.cnf". I'm not so sure about version 4 of MySQL, but if you allocate too much memory for paging, indexing, etc., it can have an adverse effect. We are working with standard (MyISAM) table types here, not InnoDB.
I have always had good luck with the default settings in version 4. I run MySQL 5 now.
Happy Customer !!!
Pooh-Bah
Joined: Jul 2006
Posts: 2,143
Not just top; also look at some of the other utilities, such as iostat. If that drive is just a plain old IDE drive, then your problem could be right there: rebuilding cache, rewriting temp tables, etc. Any or all of that could be an issue. If so, then giving MySQL more memory so that it uses on-disk temp tables less often would help to a degree, but it wouldn't be the fix-all.
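A rough sketch of how to watch the disk while a slow page is being generated (iostat comes with the sysstat package; the exact columns vary by version):

iostat -x 5    # extended per-device stats every 5 seconds; a device sitting near 100% utilisation is your bottleneck
vmstat 5       # the 'wa' column is CPU time spent waiting on I/O

If the disk is pegged while the CPU sits idle, that matches the symptom of a fast query time but a slow page time.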
Addict
Joined: Aug 2004
Posts: 469
Many thanks guys, it's all starting to click together.
Just a quick question about backing up large tables (the posts one is over 500 megs) via the CP. If I had trouble with the process timing out, would changing the "memory_limit = 16M " variable help?
Former Developer
Joined: Jun 2006
Posts: 9,242 Likes: 1
If you have direct access to the server, I'd do a command line backup. It's much quicker and you bypass PHP and the webserver completely.
mysqldump databasename -u username -p > database.dump
Replace databasename with the name of the database
Replace username with the database username
It will prompt you for the database password.
Command line is always going to be the quickest and most reliable means of doing a backup.
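A variation worth knowing, assuming gzip is installed on the server: compressing on the fly keeps a big dump much smaller on disk and makes the later download quicker.

mysqldump databasename -u username -p | gzip > database.dump.gz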
Addict
Joined: Aug 2004
Posts: 469
OK, seems like the best way forward. Just a few quick questions for someone who will be doing this for the first time...:
- how long will the dump take compared to a CP backup?
- where will the dump be saved?
- will all the tables be saved separately?
- how will I know if the dump has been successful and hasn't given me a partial backup?
- will the WinSCP3 program allow me to use the command line prompt, or am I going to have to use a different program?
***If so, which one is the simplest? Or maybe I can run the command from the WHM?
Joined: Jun 2006
Posts: 16,366 Likes: 126
Lightning vs. a tortoise... The dump will be saved in the directory that you run the command from. It will save everything in one file. It should error out if there is an error. I'd do it from an SSH session.
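A couple of quick sanity checks on the finished dump (standard shell commands; the filename is whatever you put after the '>'):

ls -lh database.dump    # size on disk
head -5 database.dump   # should start with mysqldump's comment header and the first CREATE TABLE
tail -5 database.dump   # should end cleanly on a complete statement, not cut off mid-INSERT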
Carpal Tunnel
Joined: Jun 2006
Posts: 3,839 Likes: 1
It is also very easy to restore from the command line as well. Of course, not everyone has SSH, but if you do, take advantage of it.
Addict
Joined: Aug 2004
Posts: 469
Just tried making the backup and it seems to have gone through successfully... at least at first glance.
(Q1) It took under half a minute to get this done (as opposed to half a day via the CP), is that normal for a dump that "weighs" about 500 megs?
Also, in the Threads CP under database tools I see the following summary for my board:
Totals 958672 (rows) 532.88 MB (data) 111.07 MB (indexes)
(Q2) But the total size of the dump is 483MB when checking out the file's properties using WinSCP. Is the backup truly complete?
(Q3) On a side note, am I doing the right thing by downloading the dump in binary? Or should this be done in ASCII? (I dragged the file onto my desktop using WinSCP and it started a binary transfer)
Joined: Jun 2006
Posts: 16,366 Likes: 126
Yes, it's normal that an SQL dump from the command line goes incredibly fast, as it doesn't get processed through Apache or any PHP scripts.
The backup should be complete; I'm not sure why the two sizes would differ, however... I've never had an SQL dump through the command line come out incomplete.
Generally, binary mode is for graphics and the like; ASCII mode is generally for text.
Addict
Joined: Aug 2004
Posts: 469
Is the ASCII thing definite, or maybe I should do both just in case? Still wondering about the size difference between the dump file and the data shown in the Admin CP... Is there a way to dump all tables separately?
Pooh-Bah
Joined: Jul 2006
Posts: 2,143
Download it in ASCII.
A database dump is not going to be the same size as the database.
The database is a file or files that the database server creates, in its own format, to store the data.
A data dump is most certainly not that.
Open the dump and read it: the first few lines describe a table, and the lines after that are composed of INSERT statements.
The SQL dump file instructs MySQL on how to create the table and insert the data; it is not an actual data file that the database can use. It is nothing more than a file that the database would reconstruct from, not operate from.
Also remember that indices take space, and a dump has no indices. Another thing to think about: when you delete something from the database, the database doesn't shrink. It does not give up that space. If you delete the equivalent of 100 megs of data out of the database, it still doesn't shrink. Think about it: if you delete something that's in the middle of the table, why would you want MySQL to rewrite everything behind it in the table just to regain that space?
That's what optimize does. Even if you do optimize your database, the SQL dump will not be the same size as the actual MySQL data table. They are apples and oranges.
Can you dump a table individually? Yes, read the documentation on mysql.com for the syntax.
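To make that concrete, the inside of a dump looks roughly like this. The table and column names below are invented purely for illustration, not taken from the board's real schema:

-- MySQL dump (comment header with server and dump version info)
CREATE TABLE example_posts (
  post_id INT NOT NULL AUTO_INCREMENT,
  post_body TEXT,
  PRIMARY KEY (post_id)
);
INSERT INTO example_posts VALUES (1,'first post'),(2,'second post');

And the optimize mentioned above is just a SQL statement you can run yourself on a quiet night, e.g. OPTIMIZE TABLE example_posts;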
Addict
Joined: Aug 2004
Posts: 469
Thanks man.
OK, since the db/dump size is not going to be identical then I will assume that the server is giving me a complete backup.
If I dump the DB as instructed earlier in this thread (into a single file) then that's all I need to restore my database should anything bad happen, right? It'll restore all those tables from that single file?
What's the deal with indices? If the dump doesn't contain them, then what happens when I restore a DB using a dump? Do I get everything but the indexes? Could that pose a problem? Does the Admin CP backup copy the indices if you do a backup that way?
Addict
Joined: Aug 2004
Posts: 469
Oh, and I still can't believe that what used to take 10+ hours now takes under 30 seconds. It just seems that having command line access to the server makes an incredible difference. 10 hours or 30 secs... it's just unreal.
Carpal Tunnel
Joined: Jun 2006
Posts: 3,839 Likes: 1
that's what my wife says as well
Joined: Jun 2006
Posts: 16,366 Likes: 126
Quote: "If I dump the DB as instructed earlier in this thread (into a single file) then that's all I need to restore my database should anything bad happen, right? It'll restore all those tables from that single file?"

You are correct. You would use the following syntax to insert the tables into the database:

mysql -h localhost -uusername -ppassword database_name < dump_name.dump

And yes, I love using SSH to do all of my work on a server; it just saves time being able to edit files, create and upload dumps, etc.
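And if the dump was gzipped as in the earlier sketch, the restore can read straight from the compressed file without unpacking it first (assumes gzip is on the server):

gunzip < database.dump.gz | mysql -h localhost -u username -p database_name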
Pooh-Bah
Joined: Jul 2006
Posts: 2,143
command line access makes many tasks a heck of a lot easier. It also makes it easier to make a fatal mistake. Do take care when you log in via command line.
The indices are created when the table is created, and populated as the data is inserted, just like when the database is functioning normally.
Joined: Jun 2006
Posts: 16,366 Likes: 126
lol yeah, be careful with the "rm" command ("rm -rf" more specifically)...
Addict
Joined: Aug 2004
Posts: 469
I take it that's the Linux equivalent of "format c:"? What do I have to add to the "mysqldump databasename -u username -p > database.dump" command line to make the dump go into the directory right above the web root?
Joined: Jun 2006
Posts: 16,366 Likes: 126
It will create the dump wherever you issue the command... Additionally, you should be able to do something like: "mysqldump databasename -u username -p > /path/to/database.dump". As for rm, it's quite simply "remove": -r is recursive, meaning it also removes everything below the given path (directories and their contents), -f means force (don't ask for confirmation), and / is your filesystem root. So "rm -rf /" is saying "remove all files and directories under /, without asking my permission".
Addict
Joined: Aug 2004
Posts: 469
Thanks man.
I tried to open the MySQL dump in Notepad just to see what's inside, but it was too large (almost 500 megs).
Is there a different, freeware notepad-type program that would come in handy here?
Addict
Joined: Aug 2004
Posts: 469
Just looking through the MySQL manual and I have a few questions about some settings.
1. Do I need to use "--set-charset" if my board runs on a specific charset and/or if I allow special characters in users' display names?
2. -q
Supposed to be useful when dumping large tables. Will I get exactly the same dump file regardless of whether I include the -q option? (I just want to make two dumps and compare them to check that everything went through correctly.)
3. -v (verbose)
Will this churn out more info as the dump is being made? Anyone found this useful?
Joined: Jun 2006
Posts: 16,366 Likes: 126
Use WordPad to open it up. I've never had to use any of the extended options from mysqldump, especially verbose, lol... As for the charset, I'd imagine your dump is created with the charset of what is already in the DB. I get perfect dumps on the sites I maintain with the command posted above, for what it's worth.
Addict
Joined: Aug 2004
Posts: 469
It doesn't want to load in WordPad either. Any other program I can use, or am I screwed? About the MySQL dump: I looked through the manual but I'm still not sure how I can make the dump put each table in a separate file. Can someone please give me the correct command line for that?
Pooh-Bah
Joined: Dec 2003
Posts: 1,796
I've opened huge tables like that in UltraEdit, though I think the limiting factor is the amount of RAM on your computer, not so much the program. A PC with 512 MB of RAM is going to choke on any file larger than maybe 100 MB or so, since that RAM is already heavily in use for everything else.
Carpal Tunnel
Joined: Jun 2006
Posts: 3,839 Likes: 1
UltraEdit is a very good option. Also useful for searching files.
Addict
Joined: Aug 2004
Posts: 469
Yeah, it doesn't look likely that I'll be able to view a 500 meg file with only 1 GB of RAM on my home system. Going back to the MySQL dump: does anyone know how to make the dump put each table in a separate file? I looked through the MySQL manual but it's still not clear to me how to do this.
Pooh-Bah
Joined: Jul 2006
Posts: 2,143
Check the documentation on mysql.com for the exact table-by-table syntax, but offhand I'd say:
mysqldump --opt -u username -p databasename tablename > /path/to/file
Then once it's done, since you have command line access, try this:
head filename
tail filename
You can use "more" or "less" as well.
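For the one-file-per-table question, here's one rough approach rather than an official recipe: list the tables with the mysql client, then loop over them with mysqldump. The database name, username and password below are placeholders, and it assumes a bash-style shell; putting the password on the command line is ugly, but mysqldump inside a loop can't prompt for it each time.

for t in $(mysql -N -u username -p'password' -e 'SHOW TABLES' databasename); do
    mysqldump --opt -u username -p'password' databasename "$t" > "$t.dump"
done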