Mysql Misc2
SELECT * FROM cc_member_network_updates cmnu WHERE cmnu.iMemberId IN($members_connected_to_twitter_string) 
AND cmnu.type='twitter' AND typeId > 0 GROUP BY cmnu.iMemberId ORDER BY iShareId DESC
use mg;

update activity ac, emailContactStatus ecs 
set ecs.status = 'emailSent' 
where ac.status in ('ready','populated','inprocess','outlook') 
and ac.activityID = ecs.activityID 
and ecs.status in ('addressed','addressedOptedOut','emailQueued');

update activity set status = 'active' where status in ('ready','populated','inprocess','outlook');

truncate table sfSyncEvent;
truncate table sfUploadEvent;
truncate table sfUploadEventContent;

drop table if exists emailQueue;

GRANT USAGE ON *.* TO 'rewriter'@'192.168.30.201' IDENTIFIED BY PASSWORD '*2A368DB86A8250024CB1D4FE2358FBCAF1E6216E';
GRANT SELECT ON `mg`.`customerDomain` TO 'rewriter'@'192.168.30.201';
GRANT SELECT, INSERT, UPDATE ON `mg`.`daemonLog` TO 'rewriter'@'192.168.30.201';
GRANT SELECT, INSERT, UPDATE ON `mg`.`daemonPrefs` TO 'rewriter'@'192.168.30.201';
GRANT SELECT, INSERT, UPDATE ON `mg`.`pageLog` TO 'rewriter'@'192.168.30.201';
GRANT SELECT ON `mg`.`urlMapping` TO 'rewriter'@'192.168.30.201';

Procedure after copying Production DB to staging:
Become root (sudo) and go to the MySQL data directory (/var/lib/mysql/staging1, for example)
Extract tarball: tar xf mysql_backup_2007-12-03.tar
Change owner and group if necessary: chown -R mysql.mysql ../staging1
Delete replication files plus all db3-related files: rm master.info *relay* *err *pid
Start MySQL with additional options: /usr/bin/mysqld_safe --defaults-file=/etc/my.stg1.cnf --init-file=/etc/my.init_staging.sql
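
A quick post-start sanity check, as a sketch (assumes you connect via the port or socket defined in /etc/my.stg1.cnf):

    mysql> SHOW DATABASES;        -- the restored databases should be listed
    mysql> SHOW SLAVE STATUS;     -- should be empty, since master.info and the relay logs were removed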

Push to production procedure
Production Push Procedures
Preparation For DB Push (this should be done before the push)
Dump system data out of the Staging 2 database. This creates dumpOfSystemTableDataFromStaging2.sql.
Dump system data out of DB3 for rollback purposes. This creates PrePushProductionSystemDump.sql.
Review database release script to estimate push time.
Join change control meeting.
Production DB Push Procedure
Stop the daemon (by Zaid or Steve).
Log in to DB1 and make sure all threads have finished except the DBA's threads, the replicant thread, mysql_agent, and the rewriter thread.
Stop the DB3 replicant.
Run the DB update script with a command like: mysql -u <UserName> -p -h 192.168.30.61 mg < dbUpdateScript.sql
Run the DB system table refresh script: mysql -u <UserName> -p -h 192.168.30.61 mg < dumpOfSystemTableDataFromStaging2.sql
If any smartgroup condition changes in the DB push script, run the smartGroupfix.php script after the code push.
Run any other scripts under the experimental directory that are requested to run after the code push.
Dump a copy of the mg DB schema and system table data, and diff them against the Staging 2 schema (see the sketch after this list).
Start the DB3 replicant as the last step of the whole change control window, after the code push, QA validation, etc.
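
A sketch of the schema dump and diff step above (the Staging 2 host is a placeholder; --no-data dumps the schema only):
    mysqldump -u <UserName> -p -h 192.168.30.61 --no-data mg > prodSchema.sql
    mysqldump -u <UserName> -p -h <Staging2Host> --no-data mg > stg2Schema.sql
    diff prodSchema.sql stg2Schema.sql
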
Production DB Push Rollback Procedure
Run the DB update reverse script: mysql -h 192.168.30.61 -u <UserName> -p mg < DBReverseUpdate.sql
Push back the original system table data: mysql -h 192.168.30.61 -u <UserName> -p mg < PrePushProductionSystemDataDump.sql
If any smartgroup condition changed, smartGroupfix.php needs to be run after the code push.
Run any other scripts under the experimental directory that are requested to run for rollback.
Dump a copy of the mg DB schema and a copy of the system table data from production DB2.
Start the DB3 replicant after the rollback has been QA'd and validated.

Master-Master Replication Setup

Master-Master replication:

1. master 1/slave 2:    192.168.16.4
   master 2/slave 1:    192.168.16.5

2. On master 1 my.cnf

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    log-bin
    binlog-do-db=<database name>          # input the database which should be replicated
    binlog-ignore-db=mysql                # input the database that should be ignored for replication
    binlog-ignore-db=test

    server-id=1

3.  On master 1, create a replication slave account in mysql.

        mysql> grant replication slave on *.* to 'replication'@'192.168.16.5' identified by 'slave';

    and restart the mysql master1. 

4. Now edit my.cnf on Slave1 or Master2 :

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    server-id=2

    master-host = 192.168.16.4
    master-user = replication
    master-password = slave
    master-port = 3306

5. Restart mysql slave 1 and start slave threads.

6. On master 1, run "show master status;" to make sure the master is up and logging, and note the binlog file and position.

    The above scenario is for master-slave.
    Now we will create a slave-master scenario for the same systems, and it will work as master-master.

7. On Master2/Slave 1, edit my.cnf and add the master entries into it:

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    server-id=2

    master-host = 192.168.16.4
    master-user = replication
    master-password = slave
    master-port = 3306

    log-bin                             #information for becoming master added
    binlog-do-db=adam

8. Create a replication slave account on master2 for master1:

    mysql> grant replication slave on *.* to 'replication'@'192.168.16.4' identified by 'slave2';

9. Edit my.cnf on master1 with information about its master:

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    log-bin
    binlog-do-db=adam
    binlog-ignore-db=mysql
    binlog-ignore-db=test

    server-id=1

    #information for becoming slave.
    master-host = 192.168.16.5
    master-user = replication
    master-password = slave2
    master-port = 3306

10. Restart both mysql master1 and master2, and start the slaves. A verification sketch follows.
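
    To verify that replication is flowing in both directions, a sketch to run on each server:

        mysql> SHOW SLAVE STATUS\G    -- Slave_IO_Running and Slave_SQL_Running should both be 'Yes'
        mysql> SHOW MASTER STATUS;    -- the binlog file/position being served to the peer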

Replication Setup

Replication setup:
    (note: this method won't shut down mysqld. It will block tables, however)

pre-1:
    - try to make the master and slave run the same MySQL version;
        * if that can't be achieved, make the slave's version higher than the master's;
    - make 'mysql' the owner of all files under the mysql datadir;
    - if MySQL > 5.1.5, add the option "--binlog-format=row";
        (current version: 5.0.24)

1. Set up unique IDs to servers in my.cnf
    - on master:    server-id=1            (done)
    - on slave:        server-id=?            (> 1; to be determined)

2. Set up a 'replication slave' account in the master:
    - CREATE USER 'slave_user'@'slave_host' IDENTIFIED BY 'slave_passwd';
      GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'slave_host';

    - no other privileges should be given to this account;
      ( see below for security concern)

3. Make sure that binary logging is enabled on the master:
    - if not, stop the master, enable logging, and restart the master;      (enabled)

4. Acquire a global read lock:
    - Issue  'FLUSH TABLES WITH READ LOCK;';
    - Leave running the client from which you issue the FLUSH TABLES statement so that the read lock remains in effect. 

    [For InnoDB tables, note that FLUSH TABLES WITH READ LOCK also blocks COMMIT operations. 
    When you have acquired a global read lock, you can start a filesystem snapshot of your InnoDB tables. 
    Internally (inside the InnoDB storage engine) the snapshot won't be consistent (because the InnoDB caches are not flushed), 
    but this is not a cause for concern, because InnoDB resolves this at startup and delivers a consistent result. 
    This means that InnoDB can perform crash recovery when started on this snapshot, without corruption. 
    However, there is no way to stop the MySQL server while ensuring a consistent snapshot of your InnoDB tables.]

5. Make a backup of all databases to be replicated in the master:
    - If you want to keep the master running while backing up, do either:
        mysqldump -u ... -p --all-databases --master-data=2 > dump_file.sql            (text copy)

        * this will make the dump include a comment with a 'CHANGE MASTER' statement that
          indicates the replication coordinates

    or something like:
        shell> tar zcf /tmp/backup.tar.gz /var/lib/mysql                            (binary copy)

        mysql> SHOW MASTER STATUS;
            (write down the output. The information is needed in later stage for this method)

        [You may not want to replicate the mysql database if the slave server has a different set of user accounts from those 
        that exist on the master. In this case, you should exclude it from the archive. You also need not include any log files 
        in the archive, or the master.info or relay-log.info files.    ]

    or:
         If you are using InnoDB tables, ideally you should use the InnoDB Hot Backup tool, which takes a consistent snapshot 
         without acquiring any locks on the master server, and records the log name and offset corresponding to the snapshot 
         to be later used on the slave. 

        Without the Hot Backup tool, the quickest way to take a binary snapshot of InnoDB tables is to shut down the master server 
        and copy the InnoDB data files, log files, and table format files (.frm files). To record the current log file name and 
        offset, issue the following statements before you shut down the server:

            mysql> FLUSH TABLES WITH READ LOCK;
            mysql> SHOW MASTER STATUS;

        Then record the log name and the offset from the output of SHOW MASTER STATUS as was shown earlier. After recording the log 
        name and the offset, shut down the server without unlocking the tables to make sure that the server goes down with the 
        snapshot corresponding to the current log file and offset.

        If the master has been running previously without binary logging enabled, the log name and position values displayed by 
        SHOW MASTER STATUS or mysqldump --master-data will be empty. In that case, the values that you need to use later when specifying 
        the slave's log file and position are the empty string ('') and 4.

6. Release the global read lock:
    - issue 'UNLOCK TABLES';

7. Copy the dump file to the slave and load it:

        mysql -u ... -p < dump_file.sql

    OR 
        untar it if it is a tar ball

8. Tell the slave what master to connect to and the positions in the master's binary log:
    - get the 'CHANGE MASTER' info near the beginning of dump_file.sql (use 'head', 'less', vi, etc.)
            or the written info for tar ball method;

    - connect to the slave and issue:

        CHANGE MASTER TO MASTER_HOST = 'master_host_name',
                        MASTER_USER = 'slave_user',
                        MASTER_PASSWORD = 'slave_passwd',
                        MASTER_LOG_FILE = 'master_log_file',
                        MASTER_LOG_POS = master_log_pos;

        * MASTER_HOST = 'master_host_name' must be an IP address even if the master and slave are on the same host.
            'localhost' won't work since the socket file is not supported for replication.

        * MASTER_PORT will be added if it is not the default (like in staging)

9. On the slave, issue SQL statement: START SLAVE;

10. (The slave will create a master.info file in its datadir and record the values from the CHANGE MASTER statement.
        The info in master.info gets updated during the replication process.)

        * Make sure that master.info is readable by 'mysql' only (the default). Otherwise, the slave's login and password
            information is at risk.
        * In addition, make sure that this account has the 'REPLICATION SLAVE' privilege only, not even 'select', so that if
            this account is leaked, no data can be retrieved from the db.

************************  done  *************************************

Note: For fine-grained control, we can tell the master which DBs to replicate, and tell the slave which DBs
        it will replicate:

    In master my.cnf:
        binlog-do-db=db_name_2
        binlog-ignore-db=db_name_3

    In slave my.cnf:
        replicate-do-db=db_name_2
        replicate-ignore-db=db_name_3

    * if no options are specified, all databases will be replicated

*************************************************************************

Troubleshooting if replication fails:

1. make sure on master:
    - server-id;        
    - bin-log enabled;    

    issue 'SHOW VARIABLES' or 'SHOW MASTER STATUS';

2. See if replication has started on the slave:
    - check the slave's error log for msgs that indicate a failure to connect to the master;
    - or on the slave, 'SHOW PROCESSLIST':
        * there should be two threads related to replication (I/O and SQL),
            with both threads having a 'Command' field value of 'Connect'
        * the two threads can be distinguished by the 'State' field;
    - or on the slave, 'SHOW SLAVE STATUS';
        * Slave_IO_Running and Slave_SQL_Running should be 'Yes';

3. See if the master is handling replication:
    - on master, 'SHOW PROCESSLIST':
        * the master thread that handles the slave's I/O thread has a 'Command' value of 'Binlog Dump'
    - or on master, 'SHOW SLAVE HOSTS':
        * shows all connected slaves

4. Check:
    - slave connection to master (manually from slave using mysql)
    - master.info
    - whether 'start slave' was issued

5. If it still fails, 'show slave status' on the slave:
    - check the error info in the output (see the sketch below)
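
    A minimal check sequence on the slave (a sketch):
        mysql> SHOW SLAVE STATUS\G
        -- inspect Slave_IO_Running, Slave_SQL_Running, Last_Errno, Last_Error,
        -- and Seconds_Behind_Master to see where replication stopped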

=======================================================================

Set up a replicant (B) from another replicant (A)

1. On slave A, 'stop slave' so that no data will be fetched from the master; (no need to shut down mysql on A)

2. Tarball all files under 'mysql', except .pid, .err, and .sock files;

    a. Make sure the ib_logfile(s) are the same size on A and B. The file size is specified in my.cnf. Change it if necessary.
    b. If the data files are too big, sftp them separately instead of tarballing them with the others.

3. Shut down mysql in B. Copy all these files to slave B's mysql directory. Overwrite.

4. master.info:     
    It contains the replicant's login name and password. Change them if slave B's are different.
    Other information, like the master IP, master mysql port, the master binlog being read, and other parameters, can stay intact.

5. Compare the prefix of slave B's .err filename with slave A's .err filename.
    If the first parts of the names are different, like "84383-ids.genius-network.com.err" on A and "108242-repldb.genius-network.com.err" on B, change the relay-binlog filenames corresponding to A.

    For example:
        In A, it is "84383-ids-relay-bin.000499". Change it to "108242-repldb-relay-bin.000499" in B.

6. Also change the name of relay-bin.index file, as in step 5.
    For example: 
        In B, it is changed to "108242-repldb-relay-bin.index";

7. Edit "108242-repldb-relay-bin.index" file:
    It contains the names of all relay-binlog files. Change the names from A to B, as in step 5 and step 6.

8. Edit "relay-log.info" file:
    It contains the latest relay-binlog filename. Change it from A to B, as in step 5 and step 6.

9. Done. A minimal restart check follows.
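
    A minimal restart check on B, as a sketch:
        mysql> START SLAVE;
        mysql> SHOW SLAVE STATUS\G    -- both replication threads should report 'Yes'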

Starting mysql with basedir and datadir option:

./bin/mysqld_safe --defaults-file=/etc/my.stg1.cnf --basedir=/usr/local/mysql-5.0.56 --datadir=/var/lib/mysql/staging1

Determine if something exists in the database (when you don't need an exact count):

select count(*) from dual where exists ( select 1 from emailContactStatus where status = 'addressedOptedOut');

This returns 1 record in 0.93 sec, better than
select count(*) from emailContactStatus where status = 'addressedOptedOut';

which returns 1 row (the exact count) in 57.71 sec.
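
An equivalent early-exit form (a sketch; like EXISTS, the server can stop at the first matching row):

select 1 from emailContactStatus where status = 'addressedOptedOut' limit 1;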

The server creates several status files. Some of these are located in the data directory by default, but not all.

The default PID filename is host_name.pid in the data directory. The name and location may be changed with the --pid-file=file_name option.

The default socket file is /tmp/mysql.sock. A different filename can be specified by starting the server with the --socket option. If you change the location, client programs also need to be started with the same --socket option so that they know where the socket file is located.

If you have indexes on varchars, check whether you are using column prefixes for the indexes; if not, try that (i.e. have an index on name(10) instead of name).
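
A sketch of a prefix index, assuming a hypothetical table t with a varchar column name:

ALTER TABLE t ADD INDEX idx_name_prefix (name(10));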

Analyze your SELECT queries to see which tables are used most often and increase the key buffer size to hold at least the indexes from those tables in memory (the 80/20 rule might be applicable here).
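
A sketch (the 256M figure is an illustrative assumption, not a tuned value):

SET GLOBAL key_buffer_size = 268435456;    -- 256M; or set key_buffer_size=256M under [mysqld] in my.cnf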

If you have been doing lots of deletes, you should get some benefit from OPTIMIZE TABLE or from running "myisamchk -f" (in other words, rebuilding your indexes).
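
For example, using a table from these notes (note that OPTIMIZE TABLE locks a MyISAM table while it runs):

OPTIMIZE TABLE emailContactStatus;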

Additionally, have a look around the documentation for named key buffer pools, and allocate the most-used indices to a separate pool so they're always in-memory, even when the default pool is filling up.
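
A minimal sketch of a named key cache (the cache name, size, and table are assumptions):

SET GLOBAL hot_cache.key_buffer_size = 134217728;   -- 128M
CACHE INDEX emailContactStatus IN hot_cache;
LOAD INDEX INTO CACHE emailContactStatus;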

Keep in mind, indices will still be faster than a full table scan, even if they're read from disk; so it's no disaster to leave some of the lesser-used ones out of memory.

  1. Have an environment that is exactly the same as (or at least representative of) the production environment, so that you can do a dry push / release and get an accurate indication of how long a real release will take.
  2. Back up all databases (at least the production database and the development database). There is always a chance that developers might accidentally drop the development database or delete all entries from tables (unit tests). We have since set up unit tests to use a different username and password, and/or even a different server / port. Staging databases can be created from a copy of the production database.
  • Use cache (memcache)
  • Monitor and graph everything. Use Nagios and Cacti or something better.
  • Capacity planning. Know the limit of your hardware and how much of it you are using.
  • Use a CDN. Put static files on a CDN.
  • Distribute your data to geographically diverse data centers to handle power failures.
  • Consider using virtualized storage/clustered file systems from the start. It allows you to massively parallelize IO access while being able to add disk as needed without any reorganization needed.
  • Develop tools that work in a production environment. Can't simulate everything in test environment. (Having a logger)
  • Horizontal partitioning
  • Move hottest queries / tables into separate clusters.
  • With a primary and a slave there's a single point of failure, because it's hard to promote a slave to a master; they went to a multi-master solution. (Feedburner)
  • Profile your code
  • Know your DB workload, Cacti really helps with this.
  • 'EXPLAIN' all of your queries. Helps keep crushing queries out of the system. (See the sketch after this list.)
  • End up with a design that is as minimal as possible. Simplicity is the key if you really want to build large distributed systems. (Amazon)
  • To scale you have to partition, so you are left with choosing either high consistency or high availability for a particular system. You must find the right overlap of availability and consistency. (Amazon)
  • Choose a specific approach based on the needs of the service. (Amazon)
  • You must change your mentality to build really scalable systems. Approach chaos in a probabilistic sense that things will work well. In traditional systems we present a perfect world where nothing goes down and then we build complex algorithms (agreement technologies) on this perfect world. Instead, take it for granted stuff fails, that's reality, embrace it. For example, go more with a fast reboot and fast recover approach. With a decent spread of data and services you might get close to 100%. Create self-healing, self-organizing lights out operations. (Amazon)
  • Create a shared nothing infrastructure. Infrastructure can become a shared resource for development and deployment with the same downsides as shared resources in your logic and data tiers. It can cause locking and blocking and dead lock. A service oriented architecture allows the creation of a parallel and isolated development process that scales feature development to match your growth. (Amazon)
  • Open up your system with APIs and you'll create an ecosystem around your application. (Amazon)
  • The only way to manage a large distributed system is to keep things as simple as possible. Keep things simple by making sure there are no hidden requirements and hidden dependencies in the design. Cut technology to the minimum you need to solve the problem you have. It doesn't help the company to create artificial and unneeded layers of complexity. (Amazon)
  • Organizing around services gives agility. You can do things in parallel because the output is a service. This allows fast time to market. Create an infrastructure that allows services to be built very fast. (Amazon)
  • There's bound to be problems with anything that produces hype before real implementation. (Amazon)
  • Build your own infrastructure for performance, reliability, and cost control reasons. By building it yourself you never have to say you went down because it was company X's fault. Your software may not be more reliable than others, but you can fix, debug, and deploy much more quickly than when working with a 3rd party. (Amazon)
  • Use measurement and objective debate to separate the good from the bad. I've been to several presentations by ex-Amazoners and this is the aspect of Amazon that strikes me as uniquely different and interesting from other companies. Their deep seated ethic is to expose real customers to a choice and see which one works best and to make decisions based on those tests. (Amazon)
  • People's side projects, the ones they follow because they are interested, are often the ones where you get the most value and innovation. Never underestimate the power of wandering where you are most interested. (Amazon)
  • Involve everyone in making dog food. Go out into the warehouse and pack books during the Christmas rush. That's teamwork. (Amazon)
  • Create a staging site where you can run thorough tests before releasing into the wild. (Amazon)
  • A robust, clustered, replicated, distributed file system is perfect for read-only data used by the web servers. (Amazon)
  • Have a way to rollback if an update doesn't work. Write the tools if necessary. (Amazon)
  • Look for three things in interviews: enthusiasm, creativity, competence. The single biggest predictor of success at Amazon.com was enthusiasm. (Amazon)
  • Hire a Bob. Someone who knows their stuff, has incredible debugging skills and system knowledge, and most importantly, has the stones to tackle the worst high pressure problems imaginable by just leaping in. (Amazon)
  • Innovation can only come from the bottom. Those closest to the problem are in the best position to solve it. Any organization that depends on innovation must embrace chaos. Loyalty and obedience are not your tools. (Amazon)
  • Creativity must flow from everywhere. (Amazon)
  • Everyone must be able to experiment, learn, and iterate. Position, obedience, and tradition should hold no power. For innovation to flourish, measurement must rule. (Amazon)
  • Embrace innovation. (Amazon)
  • Don't pay for performance. Give good perks and high pay, but keep it flat. Recognize exceptional work in other ways. Merit pay sounds good but is almost impossible to do fairly in large organizations. Use non-monetary awards. It's a way of saying thank you, somebody cared. (Amazon)
  • Get big fast. The big guys like Barnes and Noble are on your tail. Amazon wasn't even the first, second, or even third book store on the web, but their vision and drive won out in the end. (Amazon)
  • Prohibit direct database access by clients. This means you can make your service scale and be more reliable without involving your clients. This is much like Google's ability to independently distribute improvements in their stack to the benefit of all applications. (Amazon)
  • Create a single unified service-access mechanism. This allows for the easy aggregation of services, decentralized request routing, distributed request tracking, and other advanced infrastructure techniques. (Amazon)
  • Making Amazon.com available through a Web services interface to any developer in the world free of charge has also been a major success because it has driven so much innovation that they couldn't have thought of or built on their own. (Amazon)
  • Developers themselves know best which tools make them most productive and which tools are right for the job. (Amazon)
  • Don't impose too many constraints on engineers. Provide incentives for some things, such as integration with the monitoring system and other infrastructure tools. But for the rest, allow teams to function as independently as possible. (Amazon)
  • Developers are like artists; they produce their best work if they have the freedom to do so, but they need good tools. Have many support tools that are of a self-help nature. Support an environment around the service development that never gets in the way of the development itself. (Amazon)
  • You build it, you run it. This brings developers into contact with the day-to-day operation of their software. It also brings them into day-to-day contact with the customer. This customer feedback loop is essential for improving the quality of the service. (Amazon)
  • Developers should spend some time with customer service every two years. There they'll actually listen to customer service calls, answer customer service e-mails, and really understand the impact of the kinds of things they do as technologists. (Amazon)
  • Use a "voice of the customer," which is a realistic story from a customer about some specific part of your site's experience. This helps managers and engineers connect with the fact that we build these technologies for real people. Customer service statistics are an early indicator if you are doing something wrong, or what the real pain points are for your customers. (Amazon)
  • Infrastructure for Amazon, like for Google, is a huge competitive advantage. They can build very complex applications out of primitive services that are by themselves relatively simple. They can scale their operation independently, maintain unparalleled system availability, and introduce new services quickly without the need for massive reconfiguration. (Amazon)
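
  A minimal EXPLAIN sketch for the bullet above (the query is one from earlier in these notes):
      mysql> EXPLAIN SELECT count(*) FROM emailContactStatus WHERE status = 'addressedOptedOut'\G
      -- check the 'type' and 'key' columns: type 'ALL' with key NULL means a full table scan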

http://dev.mysql.com (Reference Manuals)
http://www.mysql.com/certification (all things, including status of your certification)
http://www.mysql.com/certification/studyguides (Study Guide Errata)
http://www.mysql.com/certification/candguide (price)
http://www.vue.com/mysql (page 6 to page 12)
http://www.mysql.com/certification/candidate (special offers, information on pre-release of new certifications)
http://www.mysql.com/certification/results (confirm that your certificate is valid)
http://www.mysql.com/training
http://superb-west.dl.sourceforge.net/sourceforge/souptonuts/README_mysql.txt

MongoDB
